Automation and Machine Learning
The client was seeking innovative data-driven solutions to identify clients’ trade interests through predictive models and a machine learning feedback loop. In particular, the client sought to:
- Assess and prioritize opportunities for business development
- Maximize the deal values by anticipating the end-clients’ needs more efficiently
- Provide customized offers to strengthen end-client relationships
- We combined internal data (e.g., transaction history, inventory lists, client holdings) with market data (e.g., interest rates, currency fluctuations, market volatility)
- We applied predictive analytic techniques to identify clients most likely to be interested in trading specific products in the bank’s portfolio
- We built a user feedback loop to continually update the process to identify better opportunities
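The propensity-scoring step above can be sketched roughly as follows. This is an illustrative Python sketch, not the client's actual model: the feature names, data, and model choice (logistic regression) are all hypothetical stand-ins.

```python
# Hypothetical sketch of client-product propensity scoring.
# Features and labels are synthetic; real inputs would combine
# internal data (transactions, holdings) and market data (rates, FX).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# One row per client-product pair (synthetic):
# [past_trade_count, holdings_overlap, rate_move, fx_volatility]
X = rng.normal(size=(500, 4))
# Synthetic label: whether the client traded the product next period
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Rank client-product pairs by predicted interest; the top of the
# list would feed the business-development prioritization
scores = model.predict_proba(X)[:, 1]
ranked = np.argsort(scores)[::-1]
```

In practice, the feedback loop would retrain this model as users accept or reject the suggested opportunities, so the rankings improve over time.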
Fulcrum effectively prioritized end-clients and provided a matching engine that allowed the client to stay competitive in the market.
The client was seeking to transform a labor-intensive pricing process into a seamless, sustainable, and transparent operation. Key challenges included:
- Lack of standardization in reporting across policy types (e.g., auto, GL), which made it difficult to analyze account-level performance at a glance
- Manual data pulling and copying/pasting during data preparation created a high risk of human error
- Difficulty tracking changes and updates to quotes
- We mapped out user stories to better understand various user needs and experiences
- We developed a platform-flexible front-end pricing tool that accounted for user needs (e.g., metrics, reports, etc.) utilizing real-time data ingestion
- We developed automated scripts for consistent data pulls and to reduce labor hours
- We built a job management framework to provide pricing governance and monitoring
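A job-management wrapper of the kind described can be sketched minimally as below. The job name, the logging setup, and the stand-in fetch function are hypothetical; a production framework would also handle scheduling, retries, and persistent audit logs.

```python
# Minimal sketch of a job-management wrapper for automated data pulls,
# recording status for governance and monitoring. Names are hypothetical.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pricing-jobs")

def run_job(name, task):
    """Run one data-pull task and record its outcome."""
    started = datetime.now(timezone.utc)
    try:
        result = task()
        status = "success"
    except Exception as exc:
        result, status = None, f"failed: {exc}"
    log.info("%s | %s | started %s", name, status, started.isoformat())
    return {"job": name, "status": status, "result": result}

# Example: a stand-in for a real database pull
record = run_job("daily_policy_pull",
                 lambda: [{"policy": "AUTO-001", "premium": 1200}])
```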
Fulcrum’s development of a reliable and sustainable process resulted in an 80% reduction of manual labor hours, increased pricing transparency, and yielded faster quote turnaround.
In reaction to regulatory requirements to retain and classify payment records, the client needed help creating a cash transaction classification system for regulatory reporting and compliance, looking specifically to solve the following challenges:
- Migrating away from an Excel-based process in which classification rules were applied manually
- Implementing scripts to allow for processing much larger volumes of data
- Improving operational scalability, rather than creating rules in small batches
- Improving repeatability, decreasing processing time, and reducing error
We developed a text mining and reporting engine in R that runs efficiently against the client’s full data warehouse:
- We automated code in R to be applied against the data warehouse for fast and accurate classification
- We built a front-end UI to allow business users to create and modify rules to refine business logic and improve resulting classification rate
- We created reporting on summary statistics and classification rate, including the impact of the addition of new rules
- We created a next generation text mining process to prioritize the new candidate classification rules
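The core rules-engine idea can be illustrated with a short sketch. This is shown in Python for illustration, though the client engine described above was built in R; the rules and transaction descriptions are invented.

```python
# Illustrative rule-based transaction classification with a
# classification-rate summary. Rules and text are hypothetical.
import re

# Each rule maps a label to a pattern; business users would add and
# refine rules like these through the front-end UI
rules = [
    ("WIRE_TRANSFER", re.compile(r"\bwire\b", re.IGNORECASE)),
    ("PAYROLL", re.compile(r"\b(payroll|salary)\b", re.IGNORECASE)),
]

def classify(description):
    """Return the label of the first matching rule, else UNCLASSIFIED."""
    for label, pattern in rules:
        if pattern.search(description):
            return label
    return "UNCLASSIFIED"

transactions = ["Outgoing wire to ACME Corp", "Monthly payroll run", "Misc fee"]
labels = [classify(t) for t in transactions]

# Classification rate: share of transactions matched by some rule,
# the statistic reported to track the impact of new rules
rate = sum(l != "UNCLASSIFIED" for l in labels) / len(labels)
```

Reporting the rate before and after adding a candidate rule shows its marginal impact, which is the basis for prioritizing new rules.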
Fulcrum created a scalable and sustainable solution to comply with regulations, which provided significant time-savings from manual labor, increased accuracy of classification, and introduced a mechanism to easily create and integrate new rules.
The client was seeking to apply predictive models to improve its rating structure, moving away from subjective rating. Key challenges included:
- Highly subjective rating process based on individual underwriter's experience, without data-driven guidelines
- Lack of experience and resources evaluating external data sources
Furthermore, the client wanted to explore options to incorporate machine learning into the rating structure.
- We identified key pricing drivers through LASSO regression
- We developed a statistical model (using GLM) that was implemented for predictive ratings scoring
- We used advanced techniques (including GBM, Random Forest, etc.) to improve model performance
- We created benchmarking models with SVM and NN to validate model performance (i.e., how closely the chosen model performs relative to an unconstrained benchmark)
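The LASSO driver-identification step can be sketched as follows. The feature names and data are synthetic stand-ins for the client's rating variables, and the regularization strength is an arbitrary illustrative choice.

```python
# Hedged sketch of identifying pricing drivers with LASSO regression.
# Features and data are synthetic, not the client's actual inputs.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
features = ["vehicle_age", "territory_risk", "prior_claims",
            "noise_1", "noise_2"]
X = rng.normal(size=(300, 5))
# Only the first three features actually drive the synthetic target
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.8 * X[:, 2] \
    + rng.normal(scale=0.1, size=300)

# L1 penalty shrinks irrelevant coefficients toward zero, so the
# surviving features are the candidate pricing drivers
lasso = Lasso(alpha=0.1).fit(X, y)
drivers = [f for f, c in zip(features, lasso.coef_) if abs(c) > 1e-3]
```

The drivers surfaced this way would then feed the GLM used for the production rating score, with GBM and Random Forest models serving as performance benchmarks.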
Fulcrum’s data-driven rating structure standardized the underwriting process and increased pricing transparency.
The client was seeking to streamline their multi-step and labor-heavy scoring process of a pricing model to improve efficiency. Key challenges included:
- Labor intensive monthly/quarterly updates that involved manually updating hard-coded parameters
- A multi-step scoring process that was prone to errors and difficult to troubleshoot
- Required underwriters and actuaries to perform manual steps that were tedious and time consuming
- Model inputs and performance were not monitored for anomalies and/or population shifts
- We parameterized and centralized configuration files to reduce turnaround time and errors
- We built QA checks within scripts to provide quick diagnosis for unexpected results
- We streamlined the feedback process and standardized the input template to minimize room for error
- We developed a model tracking report that refreshed with each update to track population shifts and performance changes
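The parameterization-plus-QA idea above can be sketched briefly. The parameter names, valid-state list, and scoring formula here are hypothetical; in practice the configuration would live in a versioned file rather than an in-code dict.

```python
# Sketch of centralized configuration with built-in QA checks:
# hard-coded parameters move into one config, and inputs are
# validated before scoring. All names and values are hypothetical.
CONFIG = {
    "trend_factor": 1.03,
    "base_rate": 250.0,
    "valid_states": {"NY", "NJ", "CT"},
}

def qa_check(record):
    """Return a list of QA failures for one scoring input."""
    issues = []
    if record["state"] not in CONFIG["valid_states"]:
        issues.append(f"unexpected state: {record['state']}")
    if not (0 < record["exposure"] < 1e6):
        issues.append("exposure out of range")
    return issues

def score(record):
    """Score a record, failing fast with a diagnosable QA message."""
    issues = qa_check(record)
    if issues:
        raise ValueError(f"QA failed: {issues}")
    return CONFIG["base_rate"] * CONFIG["trend_factor"] * record["exposure"]

premium = score({"state": "NY", "exposure": 2.0})
```

With parameters centralized this way, a monthly update changes one file instead of many hard-coded values, and QA failures point directly at the offending input.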
Fulcrum developed an automated model scoring and monitoring process that enhanced efficiency, improved decision making, and increased confidence in results.
The client sought to retain an at-risk, multi-million-dollar portfolio. To find a solution, the client needed help analyzing insurance claims data to identify risk drivers and prevent losses. In particular, the client wanted to use its massive volume of unstructured adjusters’ notes to bring richer, more detailed information to the risk consulting team, aiming to prevent losses and reduce costs, and to identify additional risk drivers supporting its ongoing pricing strategy.
The key challenge was the inability to process massive amounts of unstructured data (i.e., adjusters’ notes), which limited opportunities to quickly generate insights.
- We performed Natural Language Processing (NLP) on their unstructured adjusters’ notes data including Term Frequency-Inverse Document Frequency (TF-IDF) and topic modeling (LDA)
- We enriched the input data to the pricing models through text mining that led to increased prediction accuracy
- We developed an interactive risk analytics tool to deliver loss and cost insights
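The TF-IDF and LDA steps above can be illustrated with a minimal sketch. The adjuster-note texts here are invented, and the two-topic setting is chosen only to fit this toy corpus.

```python
# Minimal sketch of the NLP pipeline: TF-IDF term weights and LDA
# topic modeling on adjuster-note-like text. The notes are invented.
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

notes = [
    "water damage to kitchen floor after pipe burst",
    "rear end collision minor bumper damage",
    "pipe leak caused water damage in basement",
    "vehicle collision at intersection airbag deployed",
]

# TF-IDF: sparse term weights usable as pricing-model features
tfidf = TfidfVectorizer().fit_transform(notes)

# LDA operates on raw term counts; two topics for this toy corpus
counts = CountVectorizer().fit_transform(notes)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(counts)  # per-note topic mixture, rows sum to 1
```

Per-note topic mixtures like `topics` are the kind of enriched feature that, joined onto the structured claims data, improved the pricing models' prediction accuracy.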
Fulcrum helped the client enhance coverage insights and pricing inputs through the introduction of Natural Language Processing on unstructured data.