Dynamic KPIs, Saving Filters as Groups, Updates to Root Cause Analysis and More – Explorazor Product Updates January 2023

We’re rapidly developing Explorazor to help Brand Managers conduct fast and efficient data exploration. Having already launched seamless root cause analysis, conditional formatting, dual and triple-axis charts in November’s release, we have made some other improvements this time around.

If you are yet to be acquainted with Explorazor, it is a CPG and pharma-specific data exploration tool laying the groundwork for skilled professionals to focus on solving real market problems instead of grappling with unstandardized data and slow laptops all the time. 

The Explorazor proposal for Brand Managers is simple: work on a harmonized dataset, accessible to all, that facilitates instant data pivot extraction (via simple querying) and root cause analysis (via simple clicks), saving time and effort while accelerating hypothesis testing. We also model and engineer your data for you.

Let’s look at January’s updates:

  1. Dynamic KPI creation

Users can get custom KPIs created as per their requirements, which are dynamically calculated for every query.

Let’s take an example of ‘Rate of Sales’ as a dynamically created KPI:

Simply insert ‘Rate of Sales’ as a keyword in your query as shown below:

The above image is an example of a dynamically created keyword. Users can get custom KPIs such as Rate of Sales, Market Share, etc. created as per their requirements.

Because the KPI is dynamic, it stays relevant to every query: its values are recalculated and updated in real time based on the query you run. For example, a KPI like 'Market Share' can be computed dynamically across brands, geographies, or distribution channels. You can then use the resulting table to build visuals for presentations and/or perform root cause analysis on it.

  2. Filter Grouping

The rationale is simple: managers use a particular set of filters frequently, and typing in this set of filters repeatedly for every query is undesirable.

Filter Grouping, as you've probably figured out by now, allows you to save a group of filters under a common header and apply that group with ease in the future. Simply recall the header the next time you want to use that set of filters.

  3. Updates to Root Cause Analysis

Explore the root cause analysis/drill-down in detail in the linked blog. 

We have introduced more interactive elements to root cause analysis this time. To show important metrics for a data field, directly click on that field to display all its corresponding values in the left panel. 

This will be better understood with an example:

For any field you click, the numbers in the left panel change dynamically to reflect metrics for your area of interest.

An additional convenience here is the ability to sort the information on an ascending or descending basis. 

Some Other Updates

  1. Recently used keywords are now prompted as suggestions as you type, for quick access. It's a Google-like feature, and for a search interface, there's no reason not to have it work that way
  2. Min & Max query support is live
  3. There's now an option to edit live data connection settings
  4. Updates to conditional formatting

That's it for this time, and we'll be back with more updates next month. Our goal remains the same: to help Brand Managers in CPG and Pharma focus solely on data exploration, and create real impact through it, with the ultimate objective of improving brand and company revenue. 


Explorazor is a product of vPhrase Analytics.

Take an Interactive Product Tour of Explorazor Today!

CPG Jargon Buster Master Article

Hello, and welcome to the knowledge hub that is the CPG Jargon Buster Master Article!

Here you will find direct links to many relevant jargon/concepts in the CPG Industry. Each term is explained in brief below, with a link to the detailed blog at the end of it. 

We keep adding more jargon as we write about them, so be sure to bookmark this page and keep learning! We’re also creating a FANTASTIC CPG-specific product for optimal and super-easy data exploration – you might want to check Explorazor out!

So far, we have covered:

  1. ACV

ACV stands for All Commodity Volume. It is used in the calculation of %ACV; note that the term 'ACV' is often used interchangeably with %ACV, so one needs to be mindful of that.

ACV is nothing but the total monetary sales of a store. Assessing the ACV of a retailer helps suppliers know which outlet presents the best sales potential based on its business health. 

Learn how to calculate ACV using Nielsen data and how ACV relates to %ACV 

Read more: What is ACV in CPG?


  2. %ACV 

A more comprehensive blog than the ACV blog above, %ACV, or %ACV Distribution, helps managers understand the quality of their distribution networks. You might wonder why a product is not selling well in a region despite being apparently well-distributed there. A deep analysis of metrics such as %ACV will help you resolve that. 

Read the blog to understand how to calculate %ACV, and the 5 points to consider when performing the calculations:

Read more: What is %ACV?


  3. Velocity

Velocity is another metric to study distribution. It captures the rate at which products move off the store shelves once they are placed there.

Managers can take charge of sales by utilizing velocity fully, and understanding the two major velocity measures – Sales per Point of Distribution (SPPD) and Sales per Million. Refer to the blog to learn what these measures are, with examples to help. As Sales per Million is a complex concept we’ve also explained it separately in another blog:

Read more: ALL About Velocity / Sales Rate in CPG


  4. Average Items Carried

This is the average number of items that a retailer carries, be it of a segment, brand, category, etc. For example, suppose that Brand X has 5 products/items under its name. Average Items Carried is measured from a retailer's perspective: a retailer could be carrying 2, 2.5, or 4 products of Brand X, on average.

AIC is one of the 2 components of Total Distribution Points (TDP), the other being %ACV Distribution. The blog explains the relationship between AIC and %ACV with respect to TDP (Total Distribution Points), using examples to simplify. 

Learn why AIC and %ACV are called the width and depth in distribution, and how to calculate AIC in Excel:

Read more: What is ‘Average Items Carried’ and How Does it relate to %ACV?


  5. Total Distribution Points – Basics

Total Distribution Points, or Total Points of Distribution, is again a distribution measure, considering both %ACV and Average items Carried to produce a TDP score that helps Brand Managers understand things like product distribution and store health, and base their future strategies accordingly. 

There’s also a method for managers to know whether their brand is being represented in a fair manner on the retailer’s shelf, using TDP. Learn how to calculate TDP and the special case of TDP if %ACV is 95 or above:

Read More: Basics of Total Distribution Points (TDP) in CPG


  6. Sales per Million

How do you compare two markets where one is many times larger than the other? Does a manager simply say “It’s a smaller market, thus sales are less” and be done with it? Shouldn’t s/he investigate if the products in the smaller market are moving as fast as they are in the larger market? 

Sales per million helps compare across markets, while controlling for distribution. It accounts for the varying Market ACVs and stabilizes them, so managers can find how each product is doing in each market, regardless of market size.

Learn how to calculate Sales per Million with a cross-market comparison example following it:

Read More: Sales per Million 


  7. Panel Data Measures

Nielsen and IRI provide the numbers for these 4 measures, and even those who do not use Nielsen/IRI need to have an understanding of household-level analysis using these 4 measures.

Here are the one-line introductions:

  1. Household Penetration

How many households are buying my product?

  2. Buying Rate

How much is each household buying?

Purchase Frequency and Purchase Size are sub-components of Buying Rate.

  3. Purchase Frequency (Trips per Buyer)

(For each household) How often do they buy my product? 

  4. Purchase Size (Sales per Trip)

(For each household) How much do they buy at one time?

These 4 measures in table format can be used by managers to understand the consumer dynamics that drive the total sales for their product.

Understand these 4 measures in detail, and how they relate to sales:

Read More: Panel Data Measures


  8. Market Basket Analysis

Market Basket Analysis (MBA) is a powerful data mining technique used in the CPG industry to analyze customer purchase behavior and identify relationships between products.

Learn how Market Basket Analysis can help you gain valuable insights into consumer behavior in the CPG industry.

Read more on: Market Basket Analysis


  9. Point of Sale

The consumer packaged goods (CPG) industry is a highly competitive market, and companies need to make informed decisions to stay ahead.

One tool that CPG companies use to make data-driven decisions is Point of Sale (POS) data.

Learn how CPG and Pharma companies optimize their performance using Point of Sale data.


  10. Customer Segmentation

Customer segmentation is a technique that helps you divide your audience into distinct groups based on their characteristics, behavior, or preferences.

By doing so, enterprises can tailor their strategies to each segment's specific needs, improving their chances of success.

Read more on: Customer Segmentation


  11. Price Elasticity of Demand

Price elasticity of demand is calculated by dividing the percentage change in the quantity demanded of a product by the percentage change in the price of that product. 

The resulting number is a measure of how sensitive the quantity of the product demanded is to changes in its price. 

The formula for calculating Price Elasticity of Demand is:

Price Elasticity of Demand = (% Change in Quantity Demanded) / (% Change in Price)

Check out our blog on how CPG companies make decisions on the basis of Price Elasticity.
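The formula above can be sketched in a few lines of Python (the function name and sample numbers are ours, for illustration only):

```python
def price_elasticity(pct_change_qty, pct_change_price):
    """Price Elasticity of Demand = (% change in quantity demanded) / (% change in price)."""
    return pct_change_qty / pct_change_price

# Hypothetical case: a 10% price rise cuts quantity demanded by 15%.
ped = price_elasticity(-15.0, 10.0)
print(ped)  # -1.5, i.e. demand is price-elastic (|PED| > 1)
```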

Take an Interactive Product Tour of Explorazor Today!

Panel Data Measures – Household Penetration, Buying Rate, Purchase Frequency and Purchase Size

Let’s continue with our CPG Jargon Buster Series. Having already covered ACV, %ACV, Velocity, Sales per Million, Average Items Carried and the basics of TDP, we shall now look at the 4 Panel Data measures mentioned in the title, namely Household Penetration, Buying Rate, Purchase Frequency, and Purchase Size.

Nielsen and IRI provide the numbers for these 4 measures, and even those who do not use Nielsen/IRI need to have an understanding of household-level analysis using these 4 measures.

Here are the one-line introductions:

  1. Household Penetration

How many households are buying my product?

  2. Buying Rate

How much is each household buying?

Purchase Frequency and Purchase Size are sub-components of Buying Rate.

  3. Purchase Frequency (Trips per Buyer)

(For each household) How often do they buy my product? 

  4. Purchase Size (Sales per Trip)

(For each household) How much do they buy at one time?

These 4 measures in table format can be used by managers to understand the consumer dynamics that drive the total sales for their product. Some terminologies used for this approach are: Sales Driver Analysis, Key Measures Report, Market Summary, and Purchase Summary.

Let’s understand our 4 measures in detail:

  1. Penetration

The percentage of households purchasing your products through a retailer or any channel is penetration. Take an example of Market Y containing 100 households. If 48 households buy Product X at least once during the year, the penetration of Product X in Market Y was 48%.

For a specific 

  1. Product
  2. Brand, or
  3. Category

Nielsen calls it Item Penetration. 

For 

  1. Channels
  2. Retailers

Nielsen uses the term Shopper Penetration.

You can easily derive your sales through this formula:

Sales = (Total number of households x Penetration) x Buying Rate

Let's move on to Buying Rate.

2. Buying Rate

We explained it above as 'How much is each household buying?'. Nielsen terms it 'Item Sales per Item Buyer'. Buying Rate refers to the average amount of a product purchased over an entire year (or any time period, though it's usually a year) by one buying household; non-buying households are excluded.

For example, if the annual Buying Rate of Product X is Rs. 50, every household that purchased Product X spent an average of Rs. 50 on it during the year.

We saw the formula above, where 

Sales = (Total number of households x Penetration) x Buying Rate

Now, Buying Rate itself is dependent on 2 things:

Buying Rate = Purchase Frequency x Purchase Size 

3. Purchase Frequency (Trips per Buyer)

Nielsen calls it ‘Item Trips per Item Buyer’. How frequently your product is purchased by an average buying household over a year is purchase frequency – straightforward.

Example: Suppose the annual purchase frequency for Product X is 3.8. This means that Product X was purchased by every buying household, on average, 3.8 times over the course of the year.

4. Purchase Size (Sales per Trip)

Purchase size is the average amount a buying household purchases in a single trip. Note that the condition of 'a single trip' is essential. 

Nielsen calls it ‘Item Sales per Item Trip’.

For example, if the annual purchase size of Product X is 1.3, every household that purchased Product X bought, on average, 1.3 units per trip during the year. 

The Calculation Part

Assume the following set of variables:

In the last year (52 weeks) a store had 2,00,000 shoppers, out of whom 20,000 purchased your products.

Each of these 20,000 purchasers bought your products 5 times over the course of the year, and on every purchase they spent Rs. 30 in total for two 3L packets.

So now, Penetration = 20000 / 200000 = 10%

Purchase Frequency = 5

Purchase Size (Rupees) = Rs. 30 (the total spend per trip)

Purchase Size (Units) = 2

Purchase Size (Litres)  = 2 x  3 = 6 litres

Buying Rate (Rupees) = 5 x 30(rs) = Rs. 150

Total Sales (Rupees) = 20,000 x 150(rs) = Rs. 30,00,000

Your total sales amount to Rs. 30 Lacs.
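The arithmetic above can be replayed in a short Python sketch. The variable names are ours, and Rs. 30 is read as the total spend per trip, which is what the Buying Rate and Total Sales figures above imply:

```python
total_shoppers = 200_000   # store shoppers over the last 52 weeks
buyers = 20_000            # shoppers who bought your products
purchase_frequency = 5     # trips per buying household per year
spend_per_trip = 30        # Rs. spent per trip (Purchase Size in rupees)
units_per_trip = 2         # two 3L packets per trip

penetration = buyers / total_shoppers                     # 0.10, i.e. 10%
buying_rate = purchase_frequency * spend_per_trip         # Rs. 150 per buying household
# Sales = (Total number of households x Penetration) x Buying Rate
total_sales = total_shoppers * penetration * buying_rate  # Rs. 30,00,000

print(penetration, buying_rate, total_sales)
```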

Hope you were able to grasp all the concepts, and do check out Explorazor, our data exploration tool custom-made for CPG. Until next time!

Take an Interactive Product Tour of Explorazor Today!

Sales Per Million

Sales per million is the great equalizer. It is used to measure how fast your products are moving off the shelves in stores where they are in distribution, while controlling for distribution.

What this means is, suppose there are two markets where one is bigger than the other. Now how do you know if the smaller market sells at the same rate as the bigger market? Is the smaller market selling less because of market size, or is consumer demand weak in that area? Or, on the contrary, do products move faster in the smaller market? 

Sales per million takes into account the varying Total ACVs of different markets and stabilizes them in the denominator in its formula. Let’s look at the formula and then an example:

HOW TO CALCULATE SALES PER MILLION

Sales per Million is calculated as:

Sales ÷ [%ACV Distribution x (Market's ACV ÷ 10,00,000)]

‘Sales ÷ %ACV Distribution’ is the formula for ‘Sales per Point of Distribution (SPPD)’ which is used for checking velocity within a single market, or a single retailer. 

Also, ‘Sales’ here can be expressed in terms of units as well as in terms of rupees/dollars.

Market ACV has to be taken in the denominator to account for the size difference in ACV. Market ACVs are very large numbers, so we denote them in millions.

EXAMPLE – SALES PER MILLION

With the theory cleared, let’s understand the concept in practicality through an example:

Let's suppose that the Mumbai market is 3x larger than Pune. The numbers below bear this out:

Observe that Pune's Market ACV is significantly lower than Mumbai's. 

Now, let’s calculate Sales per Million using information from the above table:

For Product 1, Mumbai –

Sales = 65,000

%ACV Distribution = 80

Market ACV Size = 120 million

Sales per Million
= 65,000 ÷ [(80/100) x (120 million ÷ 1 million)]
= 65,000 ÷ [0.80 x 120]
= 677

The calculation is similar for the remaining products.
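The same calculation, as a minimal Python sketch (the function name is our own):

```python
def sales_per_million(sales, pct_acv_distribution, market_acv_millions):
    """Sales per Million = Sales / [(%ACV/100) x (Market ACV / 1 million)]."""
    return sales / ((pct_acv_distribution / 100) * market_acv_millions)

# Product 1 in Mumbai, using the numbers above:
print(round(sales_per_million(65_000, 80, 120)))  # 677
```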

Pune’s sales velocity compared to Mumbai

  • For Product 1, is essentially the same 
  • For Product 2, has some discrepancy, but not too much
  • For Products 3 and 4, is very low

What’s the benefit here?

With the stakes equalized, we note that Product 3 and Product 4 are actually not doing well in Pune, and that cannot be attributed to Pune being a smaller market. The actual reason may lie in a weaker consumer demand, or lack of a suitable strategy for the city, or any other reason. 

It was the Sales per Million calculation that helped us identify that Pune needs more attention if products are to do well there. 

Note that one can use Sales per Million instead of SPPD (Sales per Point of Distribution) for single market/retailer calculation as well. While SPPD is easier to perform, managers who prefer uniformity in calculations do opt for Sales per million as against SPPD.
Refer to the blog on velocity for more detail on SPPD and Sales per million. Also invest 10 minutes each day to learn about ACV, %ACV, Average Items Carried, and the basics of TDP.

Take an Interactive Product Tour of Explorazor Today!

ALL About Velocity / Sales Rate in CPG

Blog snapshot:

What is velocity, and why is it important to focus on this measure? After learning about ACV and %ACV in our CPG Jargon Buster Series, let’s have a look at what velocity is, its relation to sales and distribution, how to calculate it, and what the two major velocity measures are:

WHAT IS VELOCITY?

While distribution tells you how well your product is distributed in the market, or how widely available it is, velocity tells you how well it sells once it is on the shelf. Velocity, not distribution, is the measure to look at when judging which product is the best-selling or most preferred by consumers.

VELOCITY’S RELATION TO SALES AND DISTRIBUTION

When velocity and distribution are combined, one arrives at retail sales. Thus, 

Sales = Velocity x Distribution.

CALCULATING VELOCITY

The formula to calculate velocity is derived as:

Velocity = Sales ÷ Distribution.

TAKING CHARGE OF SALES THROUGH VELOCITY

It is generally considered that distribution is in the hands of the distributor, and the manufacturer can always follow up with the distributor for better product availability across geographical areas. However, if the product is not moving off the shelf, meaning that velocity is low, then the manufacturer has greater control over being able to change that. 

Let's understand this through an example, for greater clarity. Suppose 2 products, A and B, are sold equally in a market of 100 stores. Product A has low distribution but high velocity, while Product B is the opposite. 

The table is as follows:

Market of 100 stores

Product      Sales (units)   Distribution (stores)   Velocity (units/store)
Product A    6,000           60                      100
Product B    6,000           100                     60

We see that although distribution for Product A is not very impressive, its velocity, the speed at which the product sells in those stores, lets it equal the sales of Product B, which, despite being present in all 100 stores, only manages to sell as much as Product A.

In the case of Product B, the manufacturer must take a closer look at pricing and promotional strategies. Why are people not choosing the product even when it's available to them in the outlet? Are competitors outdoing me in those areas, or is their product quality better, or better suited to the audience I am trying to capture? Questions like these need to be raised and answered as soon as possible.

Tools like Explorazor and its root-cause analysis function can help a lot here.

TWO MAJOR VELOCITY MEASURES:

The example we described above is one of 'Sales per Store'. This, however, should not be used in real-world scenarios, as store sizes differ, which biases velocity estimates.

When looking at sales for a single retailer or within a single market, we go with the first velocity measure – Sales Per Point of Distribution, or SPPD.

  1. SPPD = Sales ÷  %ACV Distribution

SPPD is great for understanding where the root cause of a problem lies – is it in the distribution, or the velocity? Let’s understand this further with an example:

Mumbai Market

Product      Sales (in Rupees)   %ACV Distribution   SPPD (Velocity)
Product 1    65,000              80                  812
Product 2    95,000              75                  1,267
Product 3    70,000              15                  4,667
Product 4    80,000              20                  4,000

Above is an item-level report for an individual market. We see that Products 1 and 2, although impressively distributed, have poor velocity. The opposite holds true for Products 3 and 4: %ACV is poor, while velocity is great. 

Note that SPPD works only within one market, be it at the retailer, channel, market, or national level. When comparing across markets, SPPD doesn't work. Also note that at 100% (or near-100%) distribution, velocity and sales will be almost the same, so managers can overlook velocity in favour of focusing on sales alone.
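The SPPD column in the table above can be reproduced with a small Python sketch (function and variable names are ours):

```python
def sppd(sales, pct_acv_distribution):
    """Sales per Point of Distribution = Sales / %ACV Distribution."""
    return sales / pct_acv_distribution

# Mumbai item-level report from the table above: (sales in Rs., %ACV distribution)
mumbai = {
    "Product 1": (65_000, 80),
    "Product 2": (95_000, 75),
    "Product 3": (70_000, 15),
    "Product 4": (80_000, 20),
}
for product, (sales, dist) in mumbai.items():
    print(product, round(sppd(sales, dist)))
```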

  2. Sales per Million 

In a cross-market comparison, certain markets are naturally bigger than others. In other words, the ACV of a Large Market, call it Market L, is bigger than the ACV of a smaller market, Market S.  

This is where Sales per million comes in, because it accounts for the ACV of each individual market in the denominator. 

Sales per Million is calculated as:

Sales ÷ [%ACV Distribution x (Market's ACV ÷ 10,00,000)]

Note that ‘Sales ÷ %ACV Distribution’ is nothing but the formula for SPPD. Market ACV, as explained above, has to be taken in the denominator to account for the size difference in ACV.

Regarding the ‘in millions’, Market ACVs are large numbers, and we simply ease our calculations by denoting them in millions.

Let’s compare Mumbai, a bigger market, to Pune, which is 3 times smaller:

Mumbai vs Pune market comparison with respect to Sales per Million

Clearly, Pune's numbers are lower than Mumbai's because of the size discrepancy. In comes Sales per Million to level that out.

Example of how we calculated Sales per Million (in the below table) using information from the above table:

For Product 1, Mumbai –

Sales = 65,000

%ACV Distribution = 80

Market ACV Size = 120 million

Sales per Million
= 65,000 ÷ [(80/100) x (120 million ÷ 1 million)]
= 65,000 ÷ [0.80 x 120]
= 677

Similarly for all.

Mumbai vs Pune Velocity comparison in perspective of Sales per Million

Notice that Pune’s sales compared to Mumbai

  • For Product 1, is almost equal
  • For Product 2, not far off
  • For Products 3 and 4, is miserably low

Without the Sales per Million calculation, Pune as a whole would have been swept under the rug under the guise of ‘It’s a small city, hence our products don’t do well there’. But conducting the above analysis clearly demonstrates that Products 3 and 4 need a lot of attention if they are to sell in Pune. 

Some Notes: 

  1. Sales per Million can be used within a single market as well, if you want to keep your velocity measures uniform throughout. SPPD is easier to use than Sales per Million, hence many prefer it for single-market work
  2. Velocity is uber-important. Hope we didn’t fail to convey that!

Take an Interactive Product Tour of Explorazor today!

What is ‘Average Items Carried’ and How Does it relate to %ACV?

Welcome to another blog in the CPG Jargon Buster Series. Today we’ll be gaining clarity on what ‘Average Items Carried’ is and how it is related to %ACV. We’ll also learn how to calculate it in Excel, in case the measure is not present directly in your database. 

WHAT IS AVERAGE ITEMS CARRIED/SELLING?

As the name suggests, it is the average number of items that a retailer carries, whether of a brand, category, segment, etc. A brand may carry 7 items or SKUs under its name, and on average, a retailer may carry 2, or 3.5, or 5.8 items of that brand. 

There are two ways to arrive at this number: either it is readily available in your Nielsen data as 'Average Items Carried' (or in your IRI data as 'Average Items Selling'), or we can calculate it in Excel, as we will see later in this blog. 

AIC is one of the 2 components of Total Distribution Points (TDP), with the other being %ACV Distribution.

HOW IS AVERAGE ITEMS CARRIED RELATED TO %ACV?

Just like %ACV, Average Items Carried is related to the quality of your distribution efforts. While %ACV tells you about the breadth of your distribution efforts, AIC/AIS focuses on the depth of your distribution efforts. 

Consider this simple example of how perspectives can shift based on whether you are looking at %ACV or AIC. Suppose there's a category containing 3 brands, with brand distribution as follows:

Brand    %ACV Distribution
A        95
B        92
C        90

We observe that Brand A had the best %ACV Distribution. However, this conclusion takes no account of the Average Items Carried within each brand. 

Let’s look at the AIC:

Brand    Average Items Carried
A        10.5
B        12.5
C        13.5

Here we see that Brand C has the largest number of items carried by outlets/retailers. 

Simply looking at %ACV without considering AIC is not where you want to be as a Brand Manager looking to uncover new growth avenues. To reinforce what was mentioned earlier, %ACV and AIC are two components of TDP, and optimal data analysis assigns importance to both. 

EXAMPLE – HOW WIDTH AND DEPTH MATTER IN DECISION-MAKING

Assume that only two Brands, LG and Samsung, are present in a market. 

LG offers 4 items/SKUs and is present in 60 stores. 

Samsung offers 8 items/SKUs and is present in 70 stores.

In table format with additional information:

# of items in stores

Item      LG (present in 60 stores)   Samsung (present in 70 stores)
Item 1    60                          30
Item 2    65                          35
Item 3    70                          30
Item 4    55                          30
Item 5    -                           35
Item 6    -                           35
Item 7    -                           45
Item 8    -                           40
Total     250                         280

Now, for LG:

Average number of items of LG in stores:

= 250 / 60

= 4.17 items

LG’s efficiency rate:

= Average number of items of LG in stores / Total items that LG offers

= 4.17 / 4

= 1.04

Similarly for Samsung:

Average number of items of Samsung in stores:

= 280 / 70

= 4 items

Samsung’s efficiency rate:

= Average number of items of Samsung in stores / Total items that Samsung offers

= 4 / 8

= 0.50

Conclusion: While Samsung had greater distribution width by being present in more stores than LG (70 to 60) and more items listed to be sold (280 to 250), LG had greater distribution depth as is evidenced by its higher efficiency rate. This means that while Samsung is more widely distributed in the market, it is not as successful as LG when it comes to securing distribution depth. 
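The width-vs-depth arithmetic above condenses into a short Python sketch ('depth_metrics' is our own name; the efficiency rate is the ratio used in the example):

```python
def depth_metrics(total_items_in_stores, store_count, items_offered):
    """Return (Average Items Carried, efficiency rate) for a brand."""
    avg_items_carried = total_items_in_stores / store_count
    efficiency_rate = avg_items_carried / items_offered
    return avg_items_carried, efficiency_rate

lg_aic, lg_eff = depth_metrics(250, 60, 4)    # LG: 250 item-facings across 60 stores, 4 SKUs
sam_aic, sam_eff = depth_metrics(280, 70, 8)  # Samsung: 280 item-facings across 70 stores, 8 SKUs

print(round(lg_aic, 2), round(lg_eff, 2))    # 4.17 1.04
print(round(sam_aic, 2), round(sam_eff, 2))  # 4.0 0.5
```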

HOW TO CALCULATE ‘AVERAGE ITEMS CARRIED’ IN EXCEL

Very straightforward: you will have a %ACV of, say, a Brand, and the %ACV of all the items (or SKUs) within that brand. Now,

  1. Add up the %ACVs of all the items/SKUs
  2. Divide by the %ACV of the Brand

Cooking up an example:

Item           %ACV Distribution
Total Brand    90
Item 1         80
Item 2         45
Item 3         25
Item 4         30

Adding up all the items: 

80 + 45 + 25 + 30 = 180

Dividing by 90, we get the AIC as 2.0. We infer that retailers, on average, carry/sell 2.0 of the 4 items offered by the Brand.
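The two-step Excel calculation maps directly to a couple of lines of Python (a sketch, with our own function name):

```python
def average_items_carried(item_pct_acvs, brand_pct_acv):
    """AIC = (sum of item-level %ACVs) / (brand %ACV)."""
    return sum(item_pct_acvs) / brand_pct_acv

# Example from the table above: four items under a brand with 90 %ACV.
print(average_items_carried([80, 45, 25, 30], 90))  # 2.0
```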

Hope you found this blog helpful, and do not forget to refer to our CPG Jargon Buster Master Article for knowledge on the various CPG concepts. We're building a product centred around Managers in CPG and Pharma companies, so if you're interested in this niche, you're most welcome to explore Explorazor!

Take an Interactive Product Tour of Explorazor Today!

Basics of Total Distribution Points (TDP) in CPG

We previously discussed ACV and %ACV as part of our CPG Jargon Buster Series. Let’s focus on TDP, Total Distribution Points, also sometimes known as TPD or Total Points of Distribution.

TDP numbers reflect the overall health of your brand from a distribution perspective. The higher the TDP numbers rise, the better your brand’s overall health.

CALCULATING TDP

TDP is closely related to %ACV – for distribution width, and Average Items Carried – for distribution depth. In fact, Nielsen states the method for calculating TDP as follows: 

“You can find it by calculating the number of retailers your products are in (breadth) and the number of products you’re selling in those stores (depth).”

TDP is generally part of your Nielsen database, so you will have it ready at your table. However, if you have to calculate it yourself, here’s how it goes: 

Suppose a Brand has 5 items/SKUs in its portfolio. TDP would be applicable at the item level of the Brand, meaning the 5 items/SKUs, and is calculated simply as the sum of the %ACV distribution of all these items. It is not necessary that these items be part of a brand; they can be clubbed under a category, segment, or any other similar product aggregations as well.

Example:

Item             %ACV Distribution
Total Brand A    80
Item A           50
Item B           60
Item C           65
Item D           75

TDP = 250 (50 + 60 + 65 + 75).

The maximum TDP score one can achieve here is 400, where %ACV distribution for all items is 100. One cannot set a partition and say that a certain TDP score and above is good, and below it is concerning. It all depends on the unique set of circumstances that surround your company, brand, category, etc.

Note: Avoid double-counting by excluding Total Brand from the calculation. 
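As a sketch, the calculation (including the note about excluding the Total Brand row) looks like this in Python (names are our own):

```python
def tdp(item_pct_acvs):
    """Total Distribution Points = sum of item-level %ACV distributions.

    Pass only the item rows; including the Total Brand row would double-count.
    """
    return sum(item_pct_acvs)

items = [50, 60, 65, 75]  # Items A-D from the table above (Total Brand A excluded)
print(tdp(items))         # 250, out of a maximum of 100 * len(items) = 400
```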

IF %ACV DISTRIBUTION IS 95% OR ABOVE

If we were to calculate the TDP of Brands with %ACV Distribution of 95% or more, the TDP score and the Average Items Carried would be almost the same, provided that we shift the decimal point two places to the left. 

Look at the table below:

Item             %ACV Distribution
Total Brand X    98
Item 1           66
Item 2           64
Item 3           78
Item 4           82
Item 5           80

TDP = 370

Average Items Carried = 370 ÷ 98 = 3.77 

Notice that if we move the decimal point two places to the left in the TDP, we arrive at 3.70, very close to the 3.77 items of Brand X carried by a retailer on average.

A MASTER MEASURE – TDP

By allowing data analysts to look at both how widely the product items are being distributed and how well they are performing once they are in the store, TDP provides a solid base for managers to base their next strategies and objectives on. 

TDP further helps Managers in CPG understand Volume vs Brand Distribution. The item-level scrutiny ensures that managers know when their product is off the retailer’s shelf, as would be reflected in the total volume reduction.

TDP also lets managers know whether their brand is being represented fairly on the retailer's shelf. The method is to compare your TDP share relative to competition against your in-store volume share; the volume share should not be higher than the TDP share.

Finally, TDP also allows you to gain intel on whether the product category has expanded, and find ways to bypass competition in securing shelf space to increase velocity.

Take an Interactive Product Tour of Explorazor

What is ACV in CPG?

ACV stands for All Commodity Volume. Bear in mind that ACV is often used synonymously with %ACV, but it is actually not interchangeable. ACV is used in the calculation of %ACV.

WHAT IS ALL COMMODITY VOLUME?

The definition of ACV is simple: It is the total monetary sales of a store. To explain further, ACV includes everything that a retailer sells in his outlet – across products, across categories. 

Thus, ACV, All Commodity Volume, is not based on the physical size of an outlet. Rather, the total business of that outlet is the yardstick of ACV.

With that clear, let us understand, 

WHY SHOULD SUPPLIERS CARE ABOUT ACV?

A CPG manager might ask, “Shouldn’t I be concerned about the sales of my product, and how much of my product the store sells, instead of assessing everything that the store sells?”

A fair question, and there is an answer. Assessing the ACV of a retailer helps suppliers:

  1. Know which outlet’s business health is the best
  2. Know which outlet has the maximum growth and sales potential, based on its business-health trend

Essentially, ACV tells you which outlet to care most about with respect to distribution/presence, so you can optimize your distribution efforts and optimize sales.

A couple of points about the ACV calculations done by Nielsen/IRI:

The data collection done by Nielsen and IRI around ACV excludes some departments like pharmacy, gasoline, and lottery because all stores do not contain these departments. This is the reason behind the variance in numbers in annual reports vs Nielsen/IRI data. Your data supplier may be able to provide you with a list of all the included, or excluded, departments during ACV calculation. This will help you reconcile the numbers.

The second point is that ACV figures are usually updated annually. 

HOW DOES ACV RELATE TO %ACV?

%ACV is nothing but ‘ACV weighted distribution’. (Read all about %ACV)

ACV is used mostly as an input to calculate %ACV. Let’s see how ACV lets us calculate %ACV:

A market contains 3 stores. As we mentioned in the %ACV blog linked above, we consider only the stores where our product has scanned, and not those stores where our product is not moving off the shelf. 

Store      ACV (millions)   Did Product Y scan/sell here?
Store 1    20               Yes
Store 2    40               No
Store 3    80               Yes

Total Market ACV: 

20 + 40 + 80 = 140 Million

Distribution for Product Y can be measured in two ways: Weighted (%ACV) and Unweighted.

Unweighted distribution for Product Y is the % of stores in which the product is selling/scanned. That is 2 stores out of 3, so we arrive at a distribution of 67%.

Weighted distribution for Product Y (%ACV) =  Total ACV of stores where Product Y is sold/scanned divided by Total ACV across all stores.

(20 + 80) divided by 140.

Converted into a percentage, that is 71.43%.
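The two calculations above can be sketched as follows, using the three-store figures from the table:

```python
# Sketch: weighted (%ACV) vs. unweighted distribution for Product Y.
stores = {  # store -> (ACV in millions, did Product Y scan here?)
    "Store 1": (20, True),
    "Store 2": (40, False),
    "Store 3": (80, True),
}

total_acv = sum(acv for acv, _ in stores.values())                  # 140
scanned_acv = sum(acv for acv, scanned in stores.values() if scanned)
scanned_count = sum(1 for _, scanned in stores.values() if scanned)

unweighted = scanned_count * 100 / len(stores)  # % of stores selling
weighted = scanned_acv * 100 / total_acv        # %ACV

print(round(unweighted))   # 67
print(round(weighted, 2))  # 71.43
```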

The aim is to get the product placed in high-ACV stores. A retail chain may have fewer outlets than others, yet a higher ACV. Only a calculation that uses ACV will identify the right retail chain or outlet where one’s product/brand must necessarily scan. Once managers understand the importance of ACV, they begin to use it in other measures, such as velocity and promotion measures, as well.

Take an Interactive Product tour of Explorazor!

What is %ACV?

In this blog, we’ll understand precisely what %ACV distribution is, and why you as a manager should be paying maximum attention to it. The contents of the blog are as follows:

  • What is %ACV
  • Why managers should care about it
  • How to calculate %ACV, and
  • Some points to consider when using it in your data analysis

The total sales made is the bottom line of all your efforts, but for a commodity to sell, it needs to be present in stores, and the right ones at that. Distribution, therefore, is widely considered the most important sales driver, and %ACV helps you get your distribution right.

WHY SHOULD MANAGERS CARE ABOUT %ACV?

Here are some reasons: 

%ACV helps managers understand the quality of their distribution networks, so they are not deceived into feeling cozy because their products are seemingly well-distributed, when, in fact, they might be well-distributed only at the surface level. %ACV can answer why certain products are not selling in an area despite widespread distribution in that area.  

On the other end of things, CPG managers get to know which retailers are the fastest at moving products off their shelves, and categorize them as such. Managers can then focus on specifically targeting these stores and ensuring distribution’s on point there. Knowing which stores are the best performing also provides a blueprint which can be referred to and possibly replicated.

If managers care about their distribution goals, and what’s really going on at the store-level, they should care about %ACV. 

WHAT IS %ACV DISTRIBUTION?

%ACV Distribution, commonly known simply as %ACV, is short for ‘percentage of All Commodity Volume’. It is a metric that can be understood as the ‘percentage of stores selling, where each store is weighted by its size’. This figure is then compared with the sales of other (rival) retailers, territory-wise.

Now, size here means the total annual sales of the store, called All Commodity Volume (ACV). This means that the larger the store you are present in, by (ACV) size, the more weight is assigned to it. 

However,

IT’S ALL ABOUT SCANNING

Being present in a large store means nothing if your product is not getting scanned. Your brand may have a dedicated shelf or shelf tag in a store, but if

  • The product’s out of stock, or
  • Is in stock, but is not moving out (customers aren’t purchasing it)

it won’t be captured under %ACV distribution in the Nielsen and the IRI data.   

%ACV distribution helps managers understand the quality of their distribution networks. The golden word to gauge quality is ‘scanning’. 

HOW IS %ACV CALCULATED?

The formula to get Retailer %ACV is this:

(ACV of that retailer / ACV of all the retailers) * 100

The City of Mumbai has 3 retailers (oh, the oversimplification) selling your brand. 

Assume the details are as such:

                  No. of stores   ACV (Rupees)   % of stores   %ACV
Retail Store 1    50              80 Mn          50%           40
Retail Store 2    30              100 Mn         30%           50
Retail Store 3    20              20 Mn          20%           10
Total             100             200 Mn         100%          100

Now, if your brand is present in Retail Store 1 and Retail Store 2, then the distribution by % of stores is 80%, but the distribution by %ACV is 90%.

How we arrived at 90% for %ACV is thus:

[(ACV of Store 1 + ACV of Store 2) divided by Total ACV]

(80 + 100) divided by 200 = 0.90, i.e. 90%.

The entire column of ‘%ACV’ is similarly calculated.

Similarly, if your brand is present in Store 1 and Store 3, then the distribution by % of stores would be 70%, but the distribution by %ACV would only be 50%. 

Studying these two scenarios through the lens of %ACV distribution exposes the classic ‘I am present in many stores, therefore I should be selling more’ mistake a manager may make. Retail Store 2 is clearly the most valuable, even with fewer outlets (30) than Retail Store 1 (50).
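Both scenarios can be reproduced with a short sketch, using the ACV figures from the table (in millions of rupees):

```python
# Sketch: %ACV distribution for a brand present in a subset of retailers,
# using the Mumbai example figures.
retailer_acv = {"Retail Store 1": 80, "Retail Store 2": 100, "Retail Store 3": 20}
total_acv = sum(retailer_acv.values())  # 200

def pct_acv(present_in):
    """%ACV distribution for a brand present in the given retailers."""
    return sum(retailer_acv[r] for r in present_in) * 100 / total_acv

print(pct_acv(["Retail Store 1", "Retail Store 2"]))  # 90.0
print(pct_acv(["Retail Store 1", "Retail Store 3"]))  # 50.0
```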

To recap, %ACV is meant to categorize, or value, stores based on their ACV size (the total annual sales of a store) and target the largest stores.

SOME POINTS TO CONSIDER WHEN PERFORMING %ACV DISTRIBUTION 

When using %ACV distribution in your data analysis, keep in mind the following points:

  1. Scanning = Quality of distribution

An actual product scan is what counts, and it’s all that counts. Nielsen does not consider your product distributed while it sits on a store shelf without moving, and you should follow the same reasoning. Retail authorization means nothing; our focus is on the quality of distribution.

  2. Can’t add distribution up

%ACV distribution is non-additive, meaning that if one UPC (Universal Product Code) has 20% distribution and another has 25%, you can’t just add up and conclude that total distribution is 45%. Neither markets, nor products, nor periods can be added. If you do, that would be incorrect, not to mention you may end up with a distribution of more than 100%.

Use the periods, markets, and products available in your database for analysis, without adding them up.
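A small sketch (with hypothetical store and UPC figures) makes the non-additivity concrete: summing two UPCs' %ACV double-counts any store carrying both, while the correct brand figure comes from the union of the stores involved.

```python
# Sketch: why %ACV is non-additive (hypothetical figures).
store_acv = {"Store A": 20, "Store B": 40, "Store C": 80}
total_acv = sum(store_acv.values())  # 140

upc_1_stores = {"Store A"}
upc_2_stores = {"Store A", "Store C"}  # Store A carries both UPCs

def pct_acv(stores):
    return sum(store_acv[s] for s in stores) * 100 / total_acv

naive_sum = pct_acv(upc_1_stores) + pct_acv(upc_2_stores)  # double-counts Store A
brand_dist = pct_acv(upc_1_stores | upc_2_stores)          # correct: union of stores

print(round(naive_sum, 1))   # 85.7 -- overstated
print(round(brand_dist, 1))  # 71.4 -- actual brand distribution
```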

  3. Don’t go weekly for non-perishable items

For non-perishable items, you might want to look at longer distribution periods such as 12 weeks for slow-moving products, or 4 weeks for relatively faster-moving products. Conducting 1-week analysis for slow-moving products, for example, will lead to grossly incorrect conclusions, because these products have longer purchase cycles and do not get scanned on a weekly basis. As such, you might be finding faults with your distribution infrastructure when there are none.

Of course, as you widen the territory of analysis on a weekly basis, you will see units being sold, but micro-analysis at the retailer level or for a specific item is not possible in this manner.

  4. Be careful with 52-week analysis as well

Longer-period distribution numbers are often extrapolated from smaller data chunks. Now, if the current distribution is fluctuating, i.e. moving up or down rather than staying stable, while the small data chunk is relatively stable, the extrapolation will not reflect the current fluctuation. The extrapolation may take the average or the maximum of the week(s) within the smaller chunk and produce a year-long picture on that basis.

  5. Individual item distribution vs Total brand distribution

Total brand distribution will always be higher than individual item distribution, since every store will hold your brand, but not every store will hold every product variety you produce. A discrepancy is to be expected, except for super-seller products, which every store wants to keep.

Until next time!


Explorazor, the data exploration tool for Brand Managers, is a product of vPhrase Analytics.

Take an Interactive Product tour of Explorazor!

What is Search Based Analytics?

In today’s data-driven world, organizations face a constant challenge: extracting meaningful insights from vast amounts of data. The sheer volume and complexity of data can often be overwhelming, making it difficult to uncover the hidden gems of information that can drive business success. 

Fortunately, Search-Based Analytics has emerged as a powerful solution to navigate this data deluge. Combining the familiarity of search functionality with advanced analytical capabilities, Search-Based Analytics empowers businesses to efficiently explore and analyze data, uncover valuable patterns, and make informed decisions. 

In this blog, we will embark on a journey to demystify Search-Based Analytics, exploring its definition, significance, and the transformative impact it has on organizations in the pursuit of data-driven excellence.

Evolution of Data Analysis

Traditional data analysis methods, such as manual spreadsheets and basic statistical tools, have been the go-to approach for extracting insights from data for decades. 

While these methods have served their purpose, they struggle to cope with the exponential growth of data in today’s digital age. As the volume, variety, and velocity of data continue to increase, traditional methods often fall short in providing timely and comprehensive analysis. 

Manual data processing is time-consuming, error-prone, and limited in its ability to handle large datasets. Moreover, the complexity of data structures and the presence of unstructured data further compound the challenges faced by traditional analysis techniques. 

Recognizing these limitations, organizations are turning to advanced techniques like Search-Based Analytics to overcome these hurdles and unlock the full potential of their data assets.

What is Search-Based Analytics?

To truly grasp the power of Search-Based Analytics, it is essential to understand its core principles and concepts. At its core, Search-Based Analytics leverages the familiar search functionality to explore and analyze data. By applying search algorithms and natural language processing (NLP), this approach enables users to interact with data in a more intuitive and efficient manner. 

Unlike traditional analytics methods that rely on predefined queries or structured data models, Search-Based Analytics allows users to pose ad-hoc queries using natural language, enabling a more flexible and exploratory analysis process. 

This dynamic nature sets it apart from traditional analytics, where rigid queries and predefined parameters often limit the scope and depth of analysis. Key components of Search-Based Analytics include data crawling and indexing, where relevant data sources are discovered and organized, and search algorithms that facilitate efficient retrieval of information. 

Additionally, NLP techniques play a crucial role in understanding user queries and interpreting unstructured data, enabling enhanced search capabilities. Understanding these core principles and the underlying technology will lay a solid foundation for exploring the benefits and applications of Search-Based Analytics.

How does Search-Based Analytics work?

To unleash the full potential of Search-Based Analytics, a crucial step is the crawling and indexing of data. This process involves systematically collecting data from various sources, such as databases, documents, websites, and other repositories, and organizing it in a structured manner. 

Through automated crawling techniques, Search-Based Analytics scans and retrieves relevant data based on predefined criteria, ensuring that a comprehensive and diverse dataset is available for analysis. Once the data is collected, it is indexed to create a searchable index that facilitates efficient retrieval of information. 

This indexing process involves creating metadata, such as keywords, tags, and summaries, which enable quick and accurate search results. By effectively crawling and indexing data, Search-Based Analytics sets the stage for the subsequent analysis and exploration of information.

The next step in the process involves applying search algorithms to analyze the indexed data. These algorithms are designed to process user queries and retrieve relevant information from the indexed dataset. By leveraging various ranking and relevance algorithms, Search-Based Analytics identifies patterns, correlations, and trends within the data. 

The algorithms consider factors such as keyword matches, proximity, and statistical significance to deliver meaningful and accurate search results. This ensures that users can swiftly navigate through large datasets and obtain valuable insights in a timely manner.
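As a purely illustrative sketch (not Explorazor's actual implementation), the indexing and retrieval steps described above might look like this: build an inverted index from keywords to documents, then rank results by the number of matching query keywords.

```python
# Sketch: keyword indexing and ranked retrieval (illustrative only).
from collections import defaultdict

documents = {
    1: "brand sales by region for Q1",
    2: "market share of brand X in pharma",
    3: "regional distribution and sales velocity",
}

# Indexing: build an inverted index from keyword -> document ids
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(query):
    """Rank documents by the number of matching query keywords."""
    scores = defaultdict(int)
    for word in query.lower().split():
        for doc_id in index.get(word, ()):
            scores[doc_id] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("brand sales"))  # doc 1 first: it matches both keywords
```

A production system would add stemming, relevance weighting (e.g. TF-IDF), and NLP-based query understanding on top of this skeleton.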

Furthermore, Search-Based Analytics incorporates the power of natural language processing (NLP) to enhance the search experience. NLP allows users to interact with the data using natural language queries, making the analysis process more intuitive and user-friendly. By understanding the context, semantics, and intent behind user queries, NLP enables advanced search capabilities, such as sentiment analysis, entity recognition, and concept extraction. These features provide deeper insights and enable users to explore data in a more conversational and exploratory manner.

Understanding the inner workings of Search-Based Analytics, from data crawling and indexing to search algorithms and NLP, sheds light on the mechanisms that enable efficient and effective data exploration and analysis. By harnessing these processes, organizations can unlock the hidden value within their data, uncover actionable insights, and make informed decisions that drive success in an increasingly data-driven world.

Real-Life Case Study of Search-Driven Analytics

Let us delve into real-world case studies that demonstrate the tangible impact of Search-Based Analytics on businesses and organizations.

In this case study, we will see how Explorazor, a Search-Based Analytics tool, helped Danone save 23 days of manual effort per month and cut the time spent on Excel-based analysis by 60%.

The Insights team members at Danone faced challenges in managing data across multiple siloed data sources. The team was burdened with ad-hoc queries from the business users from the Brand & Marketing teams, which took days to gather data and find insights. 

The existing BI tool (Power BI) was unable to handle the dynamic nature of the data and meet the insights needs of the Brand & Marketing managers.

Explorazor’s team built an automated data pipeline combining 34 datasets across Market, Sales, Consumer, and event data. All these data sources were harmonized, standardized, and uploaded on Explorazor so that they could talk to each other.

Danone was able to perform ad-hoc, search-based analysis on a single source of truth.

For more information regarding Explorazor, visit our website.

In conclusion, Search-Based Analytics has emerged as a powerful tool in the data-driven era, revolutionizing the way organizations explore, analyze, and derive insights from vast amounts of data. 

With its ability to navigate through complex datasets, provide real-time insights, and empower users with intuitive search interfaces, Search-Based Analytics has become an invaluable asset for businesses seeking to make informed decisions and gain a competitive edge.

Explorazor is a product of vPhrase Analytics, which also owns Phrazor.

Take an Interactive Product Tour of Explorazor!