Larry, the Digital Analyst®
Larry, the Digital Analyst® is Complexica’s Artificial Intelligence engine and it powers the Decision Cloud® software platform of award-winning applications. Larry was developed to unlock the inherent value in large data sets by transforming raw data into optimised recommendations, so that staff across multiple business functions can make better & faster decisions.
The primary task of Larry, the Digital Analyst® is to recommend optimised decisions by centralising and managing the interaction with internal data warehouses and external data sets, and automating the associated data loading, handling, and analytical processes. Larry consists of various smart algorithms (such as Bayes nets, artificial neural networks, rough sets, classifier systems, support-vector machines, decision trees, genetic algorithms, etc.) that are used for various data mining problems, such as:
Anomaly detection (outlier/change/deviation detection) for identifying unusual data records that might be interesting or contain data errors that require further investigation. Anomaly detection is of key importance in the process of cleaning (and understanding) data
Association rule learning (dependency modelling) for finding relationships between variables. For example, a supermarket might gather data on customer purchasing habits, and use association rule learning to determine which products are frequently bought together. This is sometimes referred to as market basket analysis
Clustering for discovering groups and structures in data that are "similar." The data is viewed as points in a multi-dimensional space, and points that are “close” in this space are assigned to the same cluster. For example, an organisation may group their customers into micro-clusters for personalised promotional campaigns
Classification for classifying new data to existing categories. For example, to classify new credit card transactions into either the "legitimate" or "fraudulent" category
Regression for finding mathematical functions that model the data with the least error. In other words, regression is a statistical process for estimating the relationships among variables (i.e. the relationship between a dependent variable and one or more independent variables). The discovered relationships aid in understanding how values of the dependent variables change when any of the independent variables are varied. A common application of regression is in the area of price elasticity
Summarization for providing a more compact representation of the data, including visualisation and report generation. For example, after clustering is carried out, the clusters themselves are summarised (e.g. by generating the centroid of the cluster and the average distance from the centroid of points in the cluster) and these cluster summaries become the summary of the entire dataset
Simulation is used to evaluate a variety of scenarios, thus addressing many important what-if questions. Simulation allows for the analysis of a complex model and understanding of how individual elements interact and affect the simulated environment
Predictive modelling is used to assess the likelihood of future outcomes. It goes beyond knowing what has happened in the past (and why) by providing the best assessment of what might happen in the future. It can be used, for example, to better understand changing market dynamics, or identifying the next best action for each customer
Optimisation is used for recommending the best possible decision (whether for promotions, production, resource allocation, etc.) by identifying the key variables, business rules and problem-specific constraints, and taking into account many potential objectives (e.g. inventory levels, DIFOT)
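As a simple illustration of the regression technique described above (and its common application to price elasticity), the sketch below fits a log-log model to invented price/volume data; the fitted slope is the estimated elasticity. All figures are hypothetical and do not reflect any actual Larry implementation.

```python
import math

# Hypothetical weekly (price, units_sold) observations for one product.
observations = [(2.00, 500), (2.20, 430), (2.50, 360), (2.80, 300), (3.00, 270)]

# Fit log(units) = a + b*log(price) by ordinary least squares;
# the slope b is the estimated price elasticity of demand.
xs = [math.log(p) for p, _ in observations]
ys = [math.log(q) for _, q in observations]
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

print(f"estimated elasticity: {b:.2f}")  # negative: demand falls as price rises
```

An elasticity more negative than -1 (as here) indicates elastic demand, where a price cut lifts volume more than proportionally.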
Complexica's AI engine, Larry, the Digital Analyst® – which encompasses all these processes, algorithms, and models – was named the 2018 Australian Innovation of the Year, the 2019 Australian Software Innovation of the Year, and the 2020 Digital Transformation of the Year. Larry, the Digital Analyst® powers Complexica's modularised software platform called Decision Cloud®.
Larry, the Digital Analyst® can be applied to a variety of business functions and application areas – such as increasing the sales effectiveness of call centres and in-field sales reps, maximising customer loyalty, capturing revenue and margin opportunities through optimised promotional planning, influencing customer buying behaviour and preferences, understanding price elasticity, optimising production plans and schedules, predicting demand, and improving trade spend outcomes – among many others. The way Larry "works" within these application areas is different, because the problem and outcomes are different. However, even within the same application area, Larry will still "work" differently from one customer implementation to another, because the process of training and tuning the algorithms involves using different training datasets and customer-specific parameters (e.g. guardrails, objectives, business rules, constraints, etc.). Hence, even though Larry is the "same" software product, it will "work" differently from one customer to the next, and no two deployments will ever be exactly the same.
Larry, the Digital Analyst® comprises many algorithmic techniques and models that can be applied to specific business functions, such as demand forecasting, price elasticity modelling, promotional planning, omni-channel ordering, and production planning & scheduling, among others. In addition to Larry, Complexica has developed a range of software modules called Decision Cloud® to enable end-users to access and utilise Larry's algorithms within a variety of workflows (e.g. trade spend optimisation). For every new deployment where Complexica applies Larry's algorithms to a specific business function, these algorithms are trained on customer-specific data in order to enable proper use. At a high level, the learning aspects of Larry can be summarised by the following methods:
- Training & deploying models
- Self-learning feedback loops
Training & deploying models
The process of configuring Larry's algorithms to a particular problem includes several steps:
- Learning the problem domain (relevant prior knowledge and goals)
- Creating a target data set (data selection), including both internal and external datasets
- Data cleaning and pre-processing
- Data reduction and transformation (finding useful features, dimensionality/variable reduction, invariant representation)
- Choosing functions of data mining (summarisation, classification, regression, association, clustering)
- Choosing the data mining algorithm(s)
- Searching for patterns and trends of interest
- Pattern evaluation and knowledge presentation (visualisation, transformation, removing redundant patterns, etc.)
- Establishing the key variables to be used in the models for the relevant use-case
- Choosing or combining Larry's algorithmic techniques for a specific business problem
- Training the algorithmic techniques on customer data (as well as potentially external data)
- Configuring Larry's parameters for the customer-specific use-case
- Evaluating the model(s) performance against key metrics
The final step is deploying the trained model(s) into the customer-specific AWS instance and enabling users to access them through the relevant Complexica Decision Cloud® software module (e.g. Promotional Campaign Manager or Demand Planner).
Self-Learning Feedback Loops
Cognitive computing involves self-learning systems that use Artificial Intelligence, data mining/machine learning, visualization, and natural language processing to mimic the way the human brain works. The overall goal of cognitive analytics is to create automated IT systems that are capable of “learning from” and “adapting to” changes in the environment.
To understand the “self-learning” mechanism within Larry, the Digital Analyst®, consider the step within the modelling stage described as “Configuring Larry's parameters for the customer-specific use-case,” which works in close alignment with the next step, “Evaluating the model(s) performance against key metrics”. Once live, Larry continues to evaluate each model's performance against actual results (e.g. did the promotional plan recommended by Larry actually generate the predicted uplift in sales? If not, to what extent did it fall short?). The detected difference is the “error rate,” which acts as a trigger (when it exceeds an acceptable threshold) for Larry's adaptive algorithms to automatically adjust/self-tune the initially configured parameters to "learn" and improve future performance.
The ability of Larry, the Digital Analyst® to learn and improve is directly tied to the variables that were initially considered in the underlying models. As long as the number of variables doesn't change within these models, then the process of re-training and learning is automated. However, sometimes the models may require extensions to the original number of variables to factor in new channels, product categories, competitors, or even force majeure events that alter economic and market conditions (such as the recent COVID pandemic). These additional variables require retraining of the original models so that Larry's self-learning process can retain integrity and continue to generate meaningful predictions and recommendations.
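The feedback loop described above can be sketched in a few lines. This is a simplified illustration only — the threshold value, parameter name, and update rule below are invented, not Complexica's actual adaptive algorithm:

```python
# Illustrative sketch of a self-tuning feedback loop: compare predicted
# uplift with actuals, and re-tune a model parameter only when the error
# rate exceeds an acceptable threshold. All names/values are hypothetical.

ERROR_THRESHOLD = 0.10  # hypothetical 10% acceptable error rate

def error_rate(predicted, actual):
    return abs(predicted - actual) / actual

def feedback_step(model_params, predicted_uplift, actual_uplift, learning_rate=0.5):
    err = error_rate(predicted_uplift, actual_uplift)
    if err > ERROR_THRESHOLD:
        # Nudge the uplift multiplier toward the value implied by actuals.
        correction = actual_uplift / predicted_uplift
        model_params["uplift_multiplier"] *= (1 - learning_rate) + learning_rate * correction
    return model_params, err

params = {"uplift_multiplier": 1.20}
params, err = feedback_step(params, predicted_uplift=150.0, actual_uplift=120.0)
# err = 0.25 > threshold, so the multiplier is pulled down toward actuals
```

Note how a small error (below the threshold) leaves the configured parameters untouched, so the model is only re-tuned when performance genuinely drifts.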
Yes, Larry, the Digital Analyst® can generate decision recommendations based upon the automatic analysis of the data carried out by its underlying algorithms, rather than just providing insights or analytic reports. Hence, Larry is "forward-focused" on what the next best decision should be, rather than "rearward focused" on what happened in the past and why.
As a simple example, it's possible to define two classes of entities within Larry: Customers and Products, where customers have preferences for certain products and these preferences must be extracted from the data. The utility matrix gives (for each customer-product pair) a value that represents what is known about the degree of preference of that customer for that product (the matrix is often sparse, as most entries are unknown). Larry, the Digital Analyst® then examines the properties of products recommended to customers: For example, customer A “likes” products with characteristics X, so that Larry would recommend products with characteristic X, and analyse the similarity measures between customers and/or products (the products that are recommended to a customer are those preferred by “similar” customers, which is often referred to as collaborative filtering).
No customer data is ever shared, re-used, or re-sold in any way. Each Complexica customer has their own AWS instance where Larry, the Digital Analyst® is trained and configured using that customer's data plus any relevant external data. Customer data is also covered under the provisions of the Confidentiality Clause of Complexica's standard terms & conditions.
This is a commonly asked question and the answer is no, because each instance of Larry, the Digital Analyst® (along with the relevant Decision Cloud® module) is configured to the specific business rules, constraints, and objectives of each individual customer. These configurations will never be exactly the same. Furthermore, the algorithmic techniques within Larry, the Digital Analyst® are trained on customer-specific data, which also differs from one organisation to the next. For these reasons and others, two competitors using their own versions of Larry, the Digital Analyst® will never get the same result, especially since these two competitors will also differ from one another in their products, brands, priorities, strategies, initiatives, and more, putting further distance between one installation of Larry and another.
Information security is critical to maintaining the reputation and brand of any organisation in today's world, as well as its ongoing success and viability. Complexica's core security principle is to ensure the confidentiality, integrity, and availability of the data entrusted to us by our customers and business partners. To this end, Complexica has achieved ISO27001 certification, recognising its commitment to providing customers with the highest level of information security management. Following an extensive audit process, the certification was issued by TQCSI International, an accredited, third-party certification body providing auditing and certification of international management system standards with offices in more than 30 countries. For more information, please visit our information security policy page.
In an ideal world, each customer would provide Complexica with a clean and complete data set, which is well documented and highly structured. In reality, however, this is rarely (if ever) the case. Our experience with data is that most organisations suffer from missing, incomplete, and inaccurate data (i.e. "dirty data") which needs to be evaluated during the scoping phase and addressed during the software deployment phase. Because today's data comes from increasingly disparate sources and in an ever-growing variety of forms, it usually needs to be prepared to become usable. During the data cleaning process, corrupt or inaccurate records are corrected or removed from a record set, table, or database, and incomplete, incorrect, or irrelevant parts of the data are replaced, modified, or deleted. Unsurprisingly, this process (often referred to as “data cleansing”) can be demanding, but pays dividends during the software go-live phase and during usage thereafter.
Click here to learn more about data as part of the problem-to-decision pyramid.
One of the most common issues is dealing with missing data. Incompleteness is almost impossible to fix with any data cleansing methodology, because facts cannot be inferred that were not captured when the data was initially recorded. However, there are some techniques that can assist in mitigating this issue, like turning to other available datasets (internal or external), and looking for existing fields that could be used as a proxy to estimate the missing data. Another method (often used in promotional planning) is to use agent-based simulations to augment the existing, incomplete dataset.
There is also an entire category of business and government problems for which the data simply doesn’t exist. A few examples include:
Predicting demand for a brand-new product where there’s no historical sales to reference, and no equivalent product in the marketplace that can be used as a proxy (and yet a decision needs to be made on how many units to produce)
Deciding whether to implement a new advertising strategy or product pack size, even though that type of strategy or pack size has never been tried before
Understanding the performance of new product designs that have never been created nor tested before
Evaluating new government policies for which there is no historical data or precedent, for example, devising evacuation strategies for major cities that have never been evacuated before
For such problems, data scientists turn to simulation-based techniques to gain insight into the process being simulated and into the effects of alternative conditions and courses of action.
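As a minimal illustration of such simulation-based techniques, the sketch below runs a Monte Carlo simulation of demand for a brand-new product with no sales history. The triangular distribution parameters stand in for hypothetical expert estimates (pessimistic, most likely, optimistic) and are entirely invented:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def simulate_demand(n_runs=10_000, low=1_000, mode=5_000, high=12_000):
    """Draw demand scenarios from a triangular distribution and summarise them."""
    runs = [random.triangular(low, high, mode) for _ in range(n_runs)]
    runs.sort()
    return {
        "mean": sum(runs) / n_runs,
        "p10": runs[int(0.10 * n_runs)],   # pessimistic planning level
        "p90": runs[int(0.90 * n_runs)],   # optimistic planning level
    }

result = simulate_demand()
# A production quantity can then be chosen against these percentiles,
# trading off stockout risk against excess inventory.
```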
Complexica has a well-defined engagement model that progresses through the following stages:
This initial exercise is conducted at no cost with the intent of developing a common understanding of the business problem as well as an organisation's people, processes, and technology as they relate to the current state and desired future state. The availability and quality of data is also explored during the workshop, with the output being a fixed-price proposal for scoping the relevant opportunity.
During the scoping stage, Complexica analyses the current-state data, processes, systems, and practices within the relevant business unit(s) and workflow(s) of an organisation. The goal of the scoping engagement is to define the scope of the applicable software, along with the high-level business and technical requirements, proposed interfaces, as well as all deployment milestones, cost estimates, and business benefits. Each scoping is a fixed-price, fixed-time engagement that delivers:
- A fixed-price software project plan, which includes configuration recommendations and a suggested approach for achieving the most advantageous Return on Investment (ROI) and Time to Value (TTV)
- A detailed Statement of Work (SoW), which contains the specification for the software being deployed, including data inputs and outputs, integrations, use-cases, constraints, business rules, optimisation objectives, among others
Software Deployment Project
The software deployment project is broken up into a number of milestones that culminate in Business Verification Testing (BVT), User Acceptance Testing (UAT), and ultimately, the go-live process. The initial milestone covers any further detailed scoping, specification, and design work, while subsequent milestones deliver partial configurations of the software for customer validation before heading into BVT and UAT. During each software project, Complexica’s science team also trains the relevant algorithmic techniques using customer data (along with any relevant external data) before testing and tuning the models. These software projects are capital expenditures from a budgetary point of view, and their overall cost and duration depend on the scope of the software modules being deployed.
Software as a Service (SaaS)
Once live, end-users can access the deployed software through a web interface (or native app) that is deployed into a dedicated, customer-specific AWS instance. Ongoing access and support for the software is covered by a monthly SaaS fee (Software as a Service, or “Online Service Fee”) which includes:
- AWS hosting, backup, and DR
- AWS computation load (as some prediction and optimisation problems are more computationally "expensive" than others)
- Maintenance and support
- Monitoring the performance of the underlying model(s)
- Future product upgrades
The monthly SaaS fee is an ongoing operational expense
Complexica does not custom-make software applications for each customer, as this approach would be prohibitive from a cost, time, and risk perspective. Based on almost a decade of research and development, Larry, the Digital Analyst® and the Decision Cloud® platform are software products with pre-existing algorithms, models, workflows, user interfaces, functions & features, integrations, and security controls. Hence, from this perspective, Complexica's software applications can be categorised as enterprise-grade, commercial-off-the-shelf (COTS) products.
That said, our experience is that most COTS products are unable to handle many customer-specific requirements such as complex business rules set in time-changing data environments, non-linear relationships, and non-standard business processes and workflows, among others. For this reason, the Decision Cloud® platform allows for a "customer-specific layer" to be added on top of the COTS product which caters for unique requirements without compromising the base product (or the ability to upgrade to future versions and releases). Hence, Complexica's Decision Cloud® platform combines the best of both worlds by leveraging the cost and time efficiencies of a COTS product, and the flexibility of being able to handle unique requirements within the customer-specific layer.
There are many different AI-driven approaches for optimising sales activities, with each approach varying significantly in terms of what problems it can solve and the types of use cases it can handle. Broadly speaking, however, these approaches can be grouped into two categories:
1. “Mass AI”: In this category, there are products with highly scalable “model-ready” algorithmic approaches that can be deployed “at the press of a button.” The use-cases revolve around well-defined areas such as lead scoring, lead prioritisation, propensity to close, and pipeline coaching, but cannot extend into more complex areas like pricing optimisation, predicting churn, automated generation of Next Best Conversations (NBCs) or Next Best Actions (NBAs), and more.
2. “Targeted AI”: In this category, organisations have developed a set of algorithmic techniques for specific industries and complex areas (like pricing, churn, and more), as well as the know-how to apply and combine these algorithms for various use cases. Each time these algorithms are applied, they need to be trained and tuned on customer data in order to properly address each area and optimise performance.
Complexica’s Customer Opportunity Profiler (COP) sits within the second category, enabling optimised recommendations that are targeted to specific industries, processes, and use cases (as opposed to the generic nature of recommendations from AI approaches in the first category). As an example of this granularity, Complexica's Customer Opportunity Profiler (COP) can generate detailed, sales-rep-specific Next Best Conversations (NBCs) that are tuned to the exact nature of a business, its customer base, product range, pricing structure, competitive position in the marketplace, and more.
Examples of NBCs across different industries include:
- What product to sell?
- What promotional bundles to offer?
- Which customers are likely to churn?
- Which customers to visit this week?
- What product/categories to focus on?
- What products to reprice for which customers?
Complexica's approach of using Targeted AI is able to generate the most value through:
- Improving revenue and margin outcomes for field sales through the automated identification of sales opportunities within each territory
- Predicting and preventing customer churn
- Increasing customer share-of-wallet through personalised cross-selling & up-selling recommendations
- Reducing lost sales through product substitution recommendations
Learn more about COP here
Many CPG/FMCG companies look for software solutions to digitalise their workflows and optimise their promotions. There are two main components to that process, and (despite many vendors’ claims) they are not the same, and are rarely executed by the same software:
- Trade Promotion Management (TPM) systems
- Trade Promotion Optimization (TPO) systems
TPM is usually the starting point for companies wanting to replace the home-grown spreadsheets and automate the labor-intensive processes for the following activities:
- data loading and handling
- customer planning
- promotion execution and monitoring
- account settlement
- monitoring and managing deductions
- tracking and validating claims
TPM systems are used by a broad range of users (e.g. merchandisers, category teams, trade marketing, strategy, and customer finance) for the transactional workflows associated with planning, managing, and monitoring promotional activities in a more streamlined manner.
TPM systems are transactional at their core and include basic analytics and a workflow for collating, reconciling, and visualising the past performance results.
Trade Promotion Optimisation (TPO) systems are sophisticated, capable of advanced analytics that enable users to automate the exploration of a large number of what-if scenarios and run optimisation algorithms across a promotional program, whilst considering various constraints and objectives (e.g. trading terms constraints, market share objectives, promotional guardrails, supplier constraints, category growth objectives, etc.).
TPO solutions extract data (both transactional data and Promotional Slotting Boards) from the TPM system, perform their optimisation workflow, and then feed back into the TPM optimised plans that deliver the right mix of promotions to drive category growth and sustainable share. It is important to note that because this process involves considerable science and complexity, no two deployments are the same: the algorithmic techniques need to be trained on each client’s data, and the various constraints and objectives used by the optimiser (e.g. trading terms constraints, market share objectives, etc.) need to be configured to each client’s needs.
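The kind of constrained selection a TPO optimiser performs can be sketched as follows. This is a deliberately simplified greedy heuristic over invented promotions, a budget, and one guardrail; a real optimiser would search the full plan space with far richer constraints:

```python
# Illustrative sketch: choose promotions to maximise predicted uplift subject
# to a trade-spend budget and a per-category guardrail. All figures invented.

promotions = [
    {"id": "P1", "category": "snacks", "cost": 10_000, "uplift": 30_000},
    {"id": "P2", "category": "snacks", "cost": 8_000,  "uplift": 20_000},
    {"id": "P3", "category": "drinks", "cost": 12_000, "uplift": 28_000},
    {"id": "P4", "category": "drinks", "cost": 5_000,  "uplift": 9_000},
]

BUDGET = 25_000
MAX_PER_CATEGORY = 1  # guardrail: at most one promotion per category

def plan(promos):
    chosen, spend, per_cat = [], 0, {}
    # Greedy heuristic: best uplift-per-dollar first. (A real optimiser would
    # use exact or metaheuristic search over the whole promotional program.)
    for p in sorted(promos, key=lambda p: p["uplift"] / p["cost"], reverse=True):
        if spend + p["cost"] > BUDGET:
            continue  # budget constraint
        if per_cat.get(p["category"], 0) >= MAX_PER_CATEGORY:
            continue  # category guardrail
        chosen.append(p["id"])
        spend += p["cost"]
        per_cat[p["category"]] = per_cat.get(p["category"], 0) + 1
    return chosen, spend

chosen, spend = plan(promotions)
```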
Learn more about TPM/TPO and how Complexica can help maximise trade promotion ROI.
When planning for future promotions, one of the manufacturer’s goals is to maximize the “gain” from competitor products (so that consumers switch from a competing product to their own) and minimize the “loss” from their own product range. If an increase in promotional sales comes at the cost of another one of the manufacturer’s products, this is called cannibalization, which means that consumers have switched from one of the manufacturer’s products they regularly buy to the one on promotion.
Through data analysis, we could create a cannibalization matrix, which is a table that outlines the expected cannibalization effect of certain products when placed on promotion. However, the practical challenges of constructing such a table are significant. Without applying any domain knowledge or business rules, it might be necessary to look up each individual product and calculate its cannibalizing effect on every other product in our range. For most manufacturers, which may sell hundreds or even thousands of products, this would result in a table with tens or even hundreds of thousands of values. While it’s possible to calculate this automatically, there’s no easy way to validate these values without going through them line by line.
A better approach is through the application of human knowledge along with various Artificial Intelligence methods; for example, by implementing human rules based on well-founded assumptions, such as cannibalization occurring across individual categories, we could capture 80% of the expected cannibalization effect with just 20% of the modeling effort, and then complement the cannibalization matrix by additional modifications that result from a deeper analysis of the data using AI algorithms.
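The category-based rule described above can be sketched as follows: instead of computing a cannibalization value for every product pair in the range, only within-category pairs are considered, shrinking the matrix dramatically before any deeper AI-driven refinement. Products and categories below are invented:

```python
# Illustrative sketch of the 80/20 rule: assume cannibalization occurs only
# within a product's own category, so the candidate matrix is restricted to
# within-category pairs rather than every product pair in the range.

products = {
    "cola_2L":     "soft_drinks",
    "cola_600ml":  "soft_drinks",
    "lemonade_2L": "soft_drinks",
    "chips_175g":  "snacks",
}

def cannibalization_pairs(catalogue):
    """Return only (promoted, cannibalized) pairs within the same category."""
    return [(a, b)
            for a, cat_a in catalogue.items()
            for b, cat_b in catalogue.items()
            if a != b and cat_a == cat_b]

pairs = cannibalization_pairs(products)
# Full matrix: 4*3 = 12 pairs; category-restricted: 6 pairs — and the
# reduction grows rapidly as the range runs into the thousands of products.
```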
To learn more about promotional planning and pricing, watch this supplementary video to Chapter 3 of The Rise of Artificial Intelligence, a recently published book authored by the Complexica team.
Depending on the industry, a supply chain could be as straightforward as a few retail shops, warehouses, and trucks, or as complex as a sprawling network of mine sites and processing plants connected by rail and sea transport. From a higher perspective, however, all these supply chains are linked in one way or another through an intricate web of interactions.
Each of these interactions has its own challenges and complexities. Let's take the mining supply chain as an example to illustrate these challenges and complexities. Planning a mine site requires consideration of what grade of ore is required at what point in time, coupled with truck and digger availability, workforce rosters, maintenance schedules, and more—which is just the first step in the process—followed by the scheduling of trains that will transport the ore from various mine sites to the port, where the coordination of stackers and reclaimers happens to ensure that each ship is loaded on time.
After that, there is more, as the ore arrives in another country and is heated by coal (which arrived at the furnace through a similarly complex supply chain) to become steel, which in turn is molded into the car’s frame and doors and hood—components that represent just a handful of the 30,000 parts that make up the average car, each of which has their own supply chain from raw materials to finished part. On top of this, the automaker needs to predict consumer demand for its cars across different countries—a difficult problem in itself—all while the cars are being assembled and placed on ships for transport to those markets.
At every step, there is complexity, and the more steps we consider together, the more complex the problem becomes. This inherent complexity makes supply chain problems particularly well suited for the application of Artificial Intelligence. At the highest level, these are the core components of a supply chain operation: predicting demand, planning and scheduling production (whether it be the production of iron ore from a mine or the assembly of cars in a factory), and then organising logistics and distribution. Each component represents a complex business problem in itself, and together, an almost impossible challenge for humans to solve. This is where leveraging Artificial Intelligence can optimise decision-making across a variety of supply business functions, be it improving the accuracy of demand forecasts, or optimising inventory levels, or creating schedules that are optimised for various constraints and objectives.
Contact us to request a soft copy of Chapter 10 of the book The Rise of Artificial Intelligence discussing supply chain optimisation.
Real-world problems are usually composed of smaller sub-problems (i.e. components) that interact with each other, and most organizations realise that these components are related and affect one another; what’s most desirable is a solution for the overall problem that takes all components into account. For example, scheduling production lines (e.g. maximizing the efficiency or minimizing the cost) is directly related to inventory costs, transportation costs, and delivery-in-full-on-time (DIFOT) to customers, among other business metrics, and shouldn’t be considered in isolation. Moreover, optimizing one component may negatively impact another. For these reasons, organizations can unlock more value through “globally optimized” solutions that consider all the components together, simultaneously, rather than just a single component.
Many modern organizations are interested in “globally optimal solutions”—solutions that allow their organization to reap the greatest possible benefit—rather than solutions to individual components of their operation (which they have to “assemble”).
Despite the obvious observation that multi-component problems should be solved as a single problem to come up with the best overall solution, the vast majority of modern organizations don’t do this. Why? Because of how overwhelmingly complex the problem becomes when all components are assembled together, requiring not only sophisticated AI algorithms, but significant process re-engineering and change management within each component of the operation. Hence, for these reasons and others, the design and creation of such algorithms, as well as putting them into software applications that become embedded into business processes and workflows, is beyond the capability of most organizations, despite the significant benefits that global optimization can unlock.
For more information on some of the issues relating to optimisation, please refer to this chapter of the recently published book authored by the Complexica team
Most demand forecasting applications are based on standard statistical models. Some use other techniques such as event-based simulations, neural networks, agent-based modelling, fuzzy logic, etc. Any single technique is sub-optimal by definition, performing better or worse on different instances of a problem.
Complexica’s Demand Planner relies on an ensemble model – a combination of statistical techniques and adaptive AI algorithms. Statistical techniques provide “sensors” to identify new conditions, whereas AI algorithms provide the learning that allows prediction models to automatically adjust to the new conditions. Over time, the prediction model learns from different environments and can “anticipate” changes. This enables continuous improvement in forecasting accuracy.
In addition, each model is fed with both internal data (historical sales, customer forecasts, promotional and pricing information for each product, etc) as well as relevant external data to improve accuracy.
As a result, the system can provide a number of tangible benefits, including:
- Improved forecast accuracy
- Reduction in finished goods inventory
- Reduction in stockouts, leading to a corresponding increase in customer fill rates
- Less time and effort for inventory planning and replenishment, with some tasks being reduced from a few days to a few hours
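One simple way to combine base models with weights that adapt to recent accuracy is inverse-error weighting, where each model's contribution shrinks as its recent error grows. The sketch below is illustrative only, with invented models and figures; it is not Complexica's actual ensemble method:

```python
# Illustrative ensemble sketch: combine several base forecasts using weights
# proportional to the inverse of each model's recent error, so the most
# accurate model contributes the most.

def inverse_error_weights(recent_errors):
    """Normalised weights proportional to 1/error (errors must be positive)."""
    inv = [1.0 / e for e in recent_errors]
    total = sum(inv)
    return [w / total for w in inv]

def ensemble_forecast(model_forecasts, recent_errors):
    weights = inverse_error_weights(recent_errors)
    return sum(w * f for w, f in zip(weights, model_forecasts))

# Three hypothetical base models (e.g. moving average, exponential smoothing,
# a neural net) predict next month's demand; their recent mean absolute
# errors determine how much each contributes to the combined forecast.
forecasts = [1_050.0, 980.0, 1_120.0]
errors = [50.0, 25.0, 100.0]  # the second model has been most accurate lately

combined = ensemble_forecast(forecasts, errors)
```

Re-computing the weights after each forecasting cycle is what lets the ensemble adapt as conditions change: a model that starts failing in a new environment is automatically down-weighted.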
Contact us to request a soft copy of Chapter 10 of the book The Rise of Artificial Intelligence discussing supply chain optimization.
Learn more about Complexica's Demand Planner here