
Predictive Modelling for B2B Churn: Building Your Retention Arsenal

Vikram Aditya
January 8, 2024


SaaS businesses lose around 10% of their revenue every year to unmanaged customer attrition, and the figure can be significantly higher for startups. Organisations that invest heavily in subscriptions and tools need to see consistent business value from that investment, and they can churn at any point, citing challenges such as a poor onboarding experience, inconsistent customer support, overly complex functionality, or poor communication.

Different businesses face different challenges that lead to churn, but the intent to predict and prevent it is ubiquitous. SaaS businesses invest heavily in predicting churn through data monitoring and analysis, which has paved the way for the widespread adoption of AI in churn prediction.

With a good enough dataset, a team of data analysts, and a prediction model, a business can predict churn with high accuracy and deploy retention strategies to lower it.

What is a Churn Prediction Model?

Churn prediction models take baseline ML algorithms (Logistic Regression, Decision Tree) and combine them with data like customer demographics, product interactions, and user transactions to predict churn. These algorithms uncover hidden patterns embedded in the data and extrapolate them to identify churn indicators.

Logistic Regression Churn Prediction Model

Here’s the basic wireframe of a logistic regression based prediction model. Each input is assigned a weight (its importance), and a bias term is added before the result is converted into a churn probability, which makes the predictions more accurate.
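As a minimal sketch (not the exact model pictured above), here is how such a logistic regression churn model could be put together with scikit-learn. The dataset, file name, and feature names are hypothetical placeholders.

```python
# Minimal sketch of a logistic regression churn model (scikit-learn).
# "customers.csv" and the feature names are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("customers.csv")          # hypothetical export of customer data
features = ["logins_last_30d", "support_tickets", "seats_used", "tenure_months"]
X, y = df[features], df["churned"]         # churned: 1 if the account left, else 0

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The learned weights (importance of each input) and the bias (intercept):
print(dict(zip(features, model.coef_[0])), model.intercept_[0])

# Churn probability for each held-out account:
print(model.predict_proba(X_test)[:, 1])
```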

While algorithms like logistic regression are better suited to surfacing actionable signals from data, decision trees lean more towards answering critical questions like “what makes a customer churn?”.

Basic Decision Tree

Here’s a basic example of how a decision tree works; decision trees in real churn prediction models have hundreds, if not thousands, of nodes.
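For a hedged sketch of the same idea, here is a shallow decision tree trained on the hypothetical data from the previous snippet, with its learned splits printed out so they read like answers to “what makes a customer churn?”.

```python
# Sketch: a shallow decision tree on the hypothetical customers.csv data.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("customers.csv")
features = ["logins_last_30d", "support_tickets", "seats_used", "tenure_months"]

tree = DecisionTreeClassifier(max_depth=3, class_weight="balanced", random_state=42)
tree.fit(df[features], df["churned"])

# Print the learned splits as human-readable if/else rules.
print(export_text(tree, feature_names=features))
```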

Looking at customer churn only after it happens isn’t the best course of action when clearly better alternatives are available. Churn prediction models give vendors a head start by flagging customer attrition long before it happens and offer an opportunity to prevent it.

Some Less Intuitive Benefits of Using Churn Prediction Models

Churn prediction and prevention are not the only advantages of deploying prediction models. There are some less apparent benefits associated with their use:


1. Better Engagement Strategies: Prediction models surface the behaviours and warning signs that precede churn. Customer success and sales teams can then leverage this data to fine-tune their engagement strategies and address user concerns more effectively, minimising the risk of churn. For further insights on this topic, you can refer to our comprehensive blog on how SaaS businesses can act on these indicators to prevent churn.

2. Custom Offers for Better Conversions: These models surface the behavioural insights that drive conversions, which can be used to customise upsell and cross-sell offers.

3. Better Budget Forecasting: Predicting churn offers a more holistic view of future revenue, which helps with budget planning and forecasting.

4. Enhanced Customer Experience: When customer success and sales teams start focusing on the areas that cause dissatisfaction and targeting customers with new and better strategies, customers naturally feel more appreciated and valued.

Building a Churn Prediction Model

Here’s a quick run-through of the steps involved in building a churn prediction model. The first step is identifying the data to be used in the model, usually done by data scientists or analysts. The next step is pipelining and processing that data to make it suitable for building and training ML models.

A data scientist then picks the ML algorithm that works best with this data, and once its accuracy has been verified, the model is handed over to data analysts, who prepare and feed data into it.

There are two ways you can approach building a prediction model.

The first relies on data scientists, engineers, and analysts to build the prediction model: preprocessing data using data mining techniques like cluster analysis, regression analysis, and statistical classification; choosing prediction models like logistic regression, decision trees, or K-means based on the data type; and formulating the training set distribution for the chosen ML model to ensure high prediction accuracy. (You can read more about the technical aspects of predictive modelling here.)

Or you can use what’s called automated machine learning, or AutoML, to create prediction models by automating numerous steps within the process. Unlike the first approach, this one doesn't rely on data scientists and engineers to preprocess the data, pick the prediction models, and train them.

We believe that the latter is more accessible for B2B SaaS brands and offers more flexibility when compared to the traditional approach.

Do you actually need one to build your retention arsenal?

Predicting customer churn isn’t as straightforward as acting on visible signals like NPS ratings and customer reviews. Sure, these indicators are a step in the right direction, but they’re barely enough on their own.

As a B2B vendor, it’s important for you to know:

  • Where did the lead come from? (sales-led motion or product-led growth)
  • How is the customer using your product?
  • What are their objectives and challenges?
  • What is the perceived value of your product?

There are instances where customers who have had no vocal complaints about your product or service for years suddenly decide to churn for reasons like:

  • Change in leadership
  • Merger with a separate entity
  • Denied feature request
  • Longer than expected time to value
  • Shift in perceived value of the product

You could have stopped this from happening. There were enough preemptive signals buried in your data; you just weren’t looking.

Predicting such instances requires pooling data from multiple avenues (CRM, sales software, marketing tools), since the signal can come from any step of the customer journey. It also involves filtering, processing, and analysing this data to surface new patterns (which may not be perceptible to humans) and dependencies between variables like NPS scores, lead source, product interactions, and customer churn.

You also need to think about scale here: we are talking about data for thousands of customers, where each customer has at least a hundred interactions with your product. That’s why you need a prediction model that can comfortably work with high volumes of data and is trained to look for correlations between dependent and independent variables to predict churn.
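To make the idea of mapping dependencies concrete, here is a minimal sketch, assuming a flat export with hypothetical column names, of how you might eyeball simple linear correlations between candidate signals and churn. A real prediction model goes much further, but the principle is the same.

```python
# Rough sketch: surfacing linear correlations between candidate signals
# and churn across a large customer table. File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("customer_signals.csv")    # one row per account, thousands of rows

signals = ["nps_score", "logins_last_30d", "tickets_open", "days_since_last_login"]
correlations = df[signals + ["churned"]].corr()["churned"].drop("churned")

# Sort by absolute strength; a trained model also captures non-linear and
# interaction effects that this simple view misses.
print(correlations.abs().sort_values(ascending=False))
```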


Why Is Automation a Better Alternative?

Churn prediction requires a good mix of business intelligence and technical expertise. While the traditional approach focuses immensely on the technical aspects of predictive modelling, it barely considers the significance of business intelligence.

Data scientists often fail to deliver strategic business outcomes because they are somewhat disconnected from business KPIs. According to Anaconda's State of Data Science report, 28% of data scientists believe that their operations fall under a data science centre of excellence, while 22% believe that their services belong to the R&D department.

Regardless of the scenario, these individuals often work in departments with limited exposure to typical business functions such as sales, marketing, and customer success. This means that data scientists have distinct goals and beliefs, and this misalignment in objectives sets the project up for failure.

The ultimate goal of a prediction model is to proactively prevent churn by offering actionable insights, and those best positioned to act on these insights are business stakeholders. However, achieving this is incredibly challenging when the goals aren't aligned from the outset.

Moreover, developing a functional predictive model through traditional methods typically takes anywhere from 4 to 9 months, which means that by the time the model is ready, the data used to create and train it already differs from the data it will be making predictions on.

An important distinction is that while prediction models crafted by data scientists focus on a single attribute (variable), those generated using Automated Machine Learning (AutoML) encompass multiple attributes. AutoML enhances the prediction model by including the most valuable attributes to improve its predictive capabilities.

What Steps can be Automated in Predictive Modelling?

Predictive modelling involves manual tasks like data cleaning, pipelining, feature engineering, and training the ML algorithm. Not all of these tasks favour automation, but the ones that demand domain expertise, and therefore keep data analysts and business professionals from building these models themselves, can be automated.

The following processes can be automated using AutoML.

Data Aggregation

To build an accurate prediction model, it's essential to collect data from all vital data points, including CRM software, sales enablement tools, and data analytics platforms, because data from multiple sources reveals different facets of the customer journey. For instance, data from sales enablement tools sheds light on initial interactions with sales personnel, while data from CRM software and analytics tools uncovers client interactions with the product and any issues they may encounter. However, manually connecting multiple data points and aggregating data into a sink can be challenging. With AutoML, you just select the data points and the rest is taken care of.
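For contrast, here is roughly what the manual version of that aggregation looks like; the file names, event types, and columns are hypothetical, and an AutoML platform would hide this step entirely.

```python
# Sketch: manually aggregating exports from a CRM, a sales tool, and a
# product-analytics platform into one table keyed by account.
import pandas as pd

crm     = pd.read_csv("crm_accounts.csv")       # plan, ARR, renewal date, ...
sales   = pd.read_csv("sales_touchpoints.csv")  # demos, emails, deal stage, ...
product = pd.read_csv("product_events.csv")     # logins, feature usage, ...

# Roll raw product events up to one row per account.
usage = (product.groupby("account_id")
                .agg(logins=("event_type", lambda e: (e == "login").sum()),
                     features_used=("feature", "nunique"))
                .reset_index())

# Join everything on the shared account identifier.
dataset = (crm.merge(sales, on="account_id", how="left")
              .merge(usage, on="account_id", how="left"))
print(dataset.head())
```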

Data Preprocessing

Typically performed by data engineers and analysts, data preprocessing refers to cleaning, formatting, and structuring the data in a way that makes it suitable for ML algorithms. This step requires a specific skill set and experience with data handling and processing, and therefore acts as a barrier to entry for business professionals. To lower that barrier, most predictive analytics platforms offer a reusable template that automatically structures the data in a set way.
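As an illustration, one way to express a reusable preprocessing template is a scikit-learn pipeline along these lines; the column names are hypothetical.

```python
# Sketch of a reusable preprocessing step: impute gaps, scale numeric
# columns, one-hot encode categoricals. Column names are hypothetical.
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numeric = ["logins_last_30d", "support_tickets", "tenure_months"]
categorical = ["plan_tier", "lead_source"]

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("encode", OneHotEncoder(handle_unknown="ignore"))]), categorical),
])

# preprocess.fit_transform(raw_df) yields a model-ready feature matrix.
```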

Algorithm Selection

You can't simply choose any prediction algorithm and expect it to perform optimally with your data. Different algorithms align with specific goals and data types, so it's important for business stakeholders to have a well-defined objective. Churn prediction algorithms vary in their objectives: some aim to identify churn indicators in the data, while others focus on answering vital questions such as 'What makes a customer likely to churn in the future?'. There are many prediction algorithms to choose from, like the Naive Bayes classifier, K-Nearest Neighbours, linear regression, decision trees, and K-means clustering. AutoML platforms can analyse your data and select the most suitable algorithm for your specific objective.
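Under the hood, that selection resembles something like the following sketch, which cross-validates a handful of candidate classifiers on the hypothetical dataset from the earlier snippets.

```python
# Sketch: comparing candidate algorithms with cross-validation, roughly
# what an AutoML platform automates. Data and columns are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("customers.csv")           # same hypothetical export as before
X = df[["logins_last_30d", "support_tickets", "seats_used", "tenure_months"]]
y = df["churned"]

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=5),
    "naive_bayes": GaussianNB(),
    "k_nearest_neighbours": KNeighborsClassifier(),
    "random_forest": RandomForestClassifier(n_estimators=200),
}

for name, clf in candidates.items():
    score = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean ROC AUC = {score:.3f}")
```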

Model Selection

Once you pick the right prediction algorithm, it is introduced to different training set distributions to figure out the optimal parameter configurations. You can’t just dump all your data into the algorithm and expect it to work. You have to consider different combinations of data to help the model map dependencies between the variables, and the model with the best performance across parameters and dataset combinations is the one that gets selected. This is a complicated step in the process, but it can be automated to a great extent provided a data analyst works synergistically with the automation tool.
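A rough sketch of that search, assuming the hypothetical data from the earlier snippets, could use scikit-learn's grid search to try parameter configurations across cross-validation folds and keep the best-performing model.

```python
# Sketch: searching parameter configurations and keeping the best model.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

df = pd.read_csv("customers.csv")           # hypothetical, as before
X = df[["logins_last_30d", "support_tickets", "seats_used", "tenure_months"]]
y = df["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [5, 10, None],
    "min_samples_leaf": [1, 5, 20],
}

search = GridSearchCV(RandomForestClassifier(random_state=42),
                      param_grid, cv=5, scoring="roc_auc")
search.fit(X_train, y_train)

print(search.best_params_, search.best_score_)
best_model = search.best_estimator_         # the configuration that ships
```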

Model Deployment and Monitoring

Your job isn't done once the model is deployed. Churn prediction is an ongoing, dynamic process because customer behaviour evolves over time. The causes of this change can range from shifts in leadership or objectives to a simple update you've made to your product. The key is to continuously monitor the prediction model and adapt it to shifting customer behaviour and market conditions. For example, if your CSM notes that some clients cancelled their subscriptions after updating payment information, it suggests a potential correlation between churn and payment detail changes, and this valuable insight should be integrated into the model. This is the most labour-intensive step in the process, and automating it can reduce human resource deployment significantly.
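A minimal sketch of such a monitoring check, assuming hypothetical snapshots of the training data and recent production data plus an illustrative threshold, might look like this.

```python
# Sketch: flag features whose live distribution has drifted away from the
# training data. File names, columns, and the 0.5 threshold are illustrative.
import pandas as pd

train_df = pd.read_csv("training_snapshot.csv")   # features at training time
live_df  = pd.read_csv("recent_accounts.csv")     # same features, recent weeks

def mean_shift(train: pd.Series, live: pd.Series) -> float:
    """Shift of the live mean, measured in training standard deviations."""
    std = train.std()
    return abs(live.mean() - train.mean()) / std if std else float("inf")

monitored = ["logins_last_30d", "days_since_payment_update", "support_tickets"]
drifted = [col for col in monitored if mean_shift(train_df[col], live_df[col]) > 0.5]

if drifted:
    print(f"Drift detected in {drifted}: review features and schedule retraining.")
```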

How to Get the Most Out of Your Data for Predictive Modelling

When your goal is to make the most out of a predictive model, it is imperative that you get the most out of your data.

But how does one get the most out of their data?

The short answer? Through feature engineering.

The long answer? You see, prediction algorithms analyse data statistically to uncover patterns in it, but for that to happen, the data needs to be processed in a way that brings out the features most relevant to the dependent variable being predicted.

This is what the practice of feature engineering solves. Unlike the processes discussed earlier, feature engineering cannot be automated, because it requires years of hands-on experience with managing and processing data.

Machine learning algorithms use sample data to learn a solution to a problem and feature engineering asks “What is the best possible representation of data to learn this solution?”.

Tomasz Malisiewicz, Research Scientist Manager at Meta Reality Labs, describes feature engineering as “manually designing what the input x’s should be”, or, to put it plainly, converting your data into things the algorithm can understand.

Good features bring out the inherent structure of the data, and most ML algorithms can recognise good structure naturally, which means that having good features gives you more flexibility in the complexity of your prediction models: you can pick simpler, less optimal models and still get good results. Moreover, simpler models are easier to work with and require less maintenance effort.
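As a small illustration, here is what hand-crafted feature engineering over a raw product-event log might look like; the event schema, column names, and derived features are hypothetical.

```python
# Sketch: turning a raw event log into per-account features a simple
# model can learn from. Schema and feature choices are hypothetical.
import pandas as pd

events = pd.read_csv("product_events.csv", parse_dates=["timestamp"])
now = events["timestamp"].max()

# One row of engineered features per account.
features = (events.groupby("account_id")
            .agg(days_since_last_event=("timestamp", lambda ts: (now - ts.max()).days),
                 distinct_users=("user_id", "nunique"),
                 events_last_30d=("timestamp",
                                  lambda ts: (ts > now - pd.Timedelta(days=30)).sum()))
            .reset_index())

# Ratios and trends often separate healthy accounts from at-risk ones.
features["events_per_user"] = features["events_last_30d"] / features["distinct_users"]
print(features.head())
```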

A perfect example of this comes from the KDD Cup, a machine learning competition held for attendees of the ACM Special Interest Group on Knowledge Discovery and Data Mining conferences, where a group of students and academics from National Taiwan University won the competition by predicting student test performance using their prediction model. The team attributed their success to feature engineering.

You can read more about their project here.

Feature engineering is perhaps the most intriguing aspect of predictive modelling; however, its intricacies lie outside the scope of this article.

You can learn more about feature engineering, feature extraction, feature selection, and feature construction here.

Challenges to Predictive Modelling and Ways to Address Them

The advent of low-cost business analytics tools has certainly made predictive modelling more accessible, allowing for faster deployment of prediction models through automation. Nevertheless, using a predictive modelling platform presents its own array of challenges. Below, we delve into the most common challenges in predictive modelling and explore strategies to overcome them.

  1. Incompleteness of Data: Churn prediction models are only as good as the data that goes into them; a deficiency in data translates directly into a deficiency in the model's prediction capabilities. Modern businesses have multiple silos of data spread across the tools and subscription-based software they use to streamline operations and optimise workflows. Integrating these data points and importing data into a single sink is a significant challenge. That's where Crunch shines as a unified analytics platform that runs through all your tools and software to collect relevant data based on your queries.

  2. Skill Barrier in Building Teams: Building a prediction model requires a team, and with the increasing reliance on predictive analytics platforms for creating and deploying models, the prerequisites for being part of the predictive modelling team are now higher than ever. As discussed earlier, automation offers a possible solution to this.

  3. Muted Adoption: There used to be a time when predictive analytics platforms were too complicated for business professionals, but times have changed and these platforms are now more business-user friendly than ever. And yet, according to Boris Evelson, VP and principal analyst at Forrester, “...no more than 20% of enterprise decision-makers who could be using business intelligence (BI) applications hands on are doing so.” The only real solution to this is running more awareness campaigns and educating businesses accordingly.

  4. Obsoleteness: The accuracy of prediction models tends to decline over time, since the variance between the training set distribution and the prediction set increases as customer behaviour shifts. The best way to counter this is to frequently monitor the model and adjust it to accommodate new user behaviour.

  5. Budget Constraints: The primary objective of building a churn prediction model is to minimise risk, and risk management is often a severely neglected vertical across businesses in terms of capital allocation. Therefore, securing a workable budget to build, train, and deploy a prediction model can be challenging. In light of this perennial challenge, our suggestion of using automation seems optimal.

In any case, efficient predictive modelling comes down to the mastery of user data analytics. The better you are at clustering and analysing relevant user data based on your objectives, the greater your chances of success.

Crunch offers a smart solution by assisting businesses in leveraging the potential of user data analytics to gain a competitive edge.

Join our waitlist here to know more.



