Analytics teams already have the business domain knowledge, an analytic mindset, and the ability to prepare and access data sets.
Here’s how a global retailer equipped their analytics team with AI to find answers hidden in large, complex and disconnected data sets, massively reducing time to value.
Many companies have a team of analysts who are well placed to drive business insight. Yet to ensure the success of an analytics project, data science expertise is required.
AI analytics can help your existing teams build this capability in a no-code environment. The goal here was to empower the existing analytics team with robust and intuitive data-science capabilities.
1. Inadequate tools
• Requires extensive coding skills
• Involves significant manual effort to connect disparate datasets
2. Limited expertise
• Identifying and validating high-ROI use cases
• Steep learning curve for effective project execution
3. Inefficient approach
• Lack of involvement of the right experts at the right moment
• Delay in time-to-value
Working in partnership with McKinsey & Co, the retailer had already identified a number of high-ROI business problems it wanted to solve. Each use case was then evaluated using SparkBeyond’s methodology, and the strongest candidate was selected to launch AI analytics across the business.
Given the substantial effects of Covid-19 on customer behaviour, the retailer was unable to explore the complex use case (pictured below) without the support of the platform.
With so many potential drivers that would influence the size of a customer basket, the traditional top-down approach to analytics would take too long and leave too many questions unanswered.
Analytic outputs: Insights or predictive model?
The company wanted to uncover answers and insights that they could take action on, rather than build a predictive ML model.
The structure and size of the client’s data meant that performing any analytics on this use case would normally require someone to undertake time-consuming and difficult SQL table joins.
The data featured “wide” datasets with many columns, spanning thousands of products across thousands of stores. Add the depth of the data, tens of gigabytes (hundreds of millions of rows), and the joins themselves become computationally demanding.
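To make the data-prep burden concrete, here is a minimal sketch of the kind of join-then-aggregate work described above, written in pandas rather than SQL for brevity. The table and column names (transactions, products, stores, basket_id, and so on) are hypothetical stand-ins, not the retailer’s actual schema; in practice the transactions table would run to hundreds of millions of rows and each dimension table would carry many more columns.

```python
import pandas as pd

# Tiny, hypothetical stand-ins for the retailer's datasets. At real scale
# the transactions table has hundreds of millions of rows, and the product
# and store tables are "wide", with many descriptive columns each.
transactions = pd.DataFrame({
    "basket_id": [1, 1, 2],
    "product_id": ["p1", "p2", "p1"],
    "store_id": ["s1", "s1", "s2"],
    "qty": [2, 1, 3],
})
products = pd.DataFrame({
    "product_id": ["p1", "p2"],
    "category": ["dairy", "bakery"],
})
stores = pd.DataFrame({
    "store_id": ["s1", "s2"],
    "region": ["north", "south"],
})

# Each new analysis question needs its own chain of joins like this one;
# at scale, every join multiplies the memory and compute cost.
enriched = (
    transactions
    .merge(products, on="product_id", how="left")
    .merge(stores, on="store_id", how="left")
)

# A basket-size measure built on the enriched table.
basket_size = enriched.groupby("basket_id")["qty"].sum()
```

The point of the sketch is not the joins themselves but their repetition: every candidate driver of basket size requires a fresh join-and-aggregate pass over the full data, which is what made the traditional approach slow here.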
Armed with these successful results, the retailer built three workstreams off the back of the pilot project:
• Go deeper on the category-focused analysis
• Go wider on the basket analysis: leveraging external datasets, evaluating pricing elasticity, geospatial and more
• Validate and begin new use cases
To reduce operational burden while enabling flexible scaling, the retailer used SparkBeyond’s managed service package, whereby SparkBeyond hosts the platform and prediction server.
“I’ve always had the belief that this insight was true, but I’ve never been able to prove it. Now I can work with our merchandise team to take immediate action on this.”
“Although we have great insights here, it’s clear that we can dig deeper into this. This is impressive, scarily so.”
Apply key dataset transformations through no/low-code workflows to clean, prep, and scope your datasets as needed for analysis