Managing Analytic Apps: Our Approach To Evaluating Efficacy

August 6, 2018


Many investment banks have expended significant time and resources in data science to generate insights, build applications, and develop platforms. As discussed in our first article of the Summer 2018 Series (“Challenges Facing Investment Banks in the Adoption of Data Science”, May 17, 2018), it is easy to overlook the fact that a strong process is needed not only to design and develop apps, but also to curate, renovate, and maintain them.

Common pitfalls

As with the measurement of any interactive analytic tool, evaluating performance and usage in relation to future business objectives is crucial for success. Individual tools within a powerful suite of apps are often measured against their own unique business objectives, but it is virtually impossible to evaluate user experience across multiple apps without analyzing traffic and identifying patterns through a common lens. In fact, one of the most common obstacles in application maintenance is capturing data in a way that allows metrics to be compared. Because apps are often created independently of one another, measurement across them is rarely simple. Consistent qualifications for engagement are imperative to building a holistic view of the user landscape, and should be built into the design process from the very start whenever possible. A limited understanding of data limitations, such as errors and assumptions within a suite or a single app, is another common challenge; these limitations need to be addressed in the measurement design as well.
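
To make that common lens concrete, the sketch below shows one way a shared engagement-event record could be defined so that every app in the suite logs the same shape of data. It is a minimal illustration in Python; the field names and event types (app_id, event_type, "launch", and so on) are assumptions for this example rather than a prescribed standard.

    # A minimal sketch of a shared engagement-event record that every app in a
    # suite could emit, so usage can be compared through one common lens.
    # Field names and event types are illustrative assumptions.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import Dict, Optional

    @dataclass
    class EngagementEvent:
        app_id: str            # which app in the suite produced the event
        user_id: str           # one consistent user identifier across all apps
        event_type: str        # e.g. "launch", "filter_applied", "export"
        occurred_at: datetime  # timestamp in UTC
        session_id: Optional[str] = None  # ties events from one visit together
        properties: Dict[str, str] = field(default_factory=dict)  # app-specific detail

    # Every app logs the same record shape, for example:
    event = EngagementEvent(
        app_id="market_screener",
        user_id="u-1042",
        event_type="launch",
        occurred_at=datetime.now(timezone.utc),
    )

With a single shared record shape, the definitions of engagement (what counts as a launch, a session, an active user) can be agreed once and applied uniformly, rather than reconstructed app by app.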

Finding cohesion in metrics

A suite of apps requires an enormous amount of development and upkeep effort. As such, it is important to understand which apps are worth continued maintenance and investment. Producing actionable insights from app evaluation requires not only a consistent method for gathering and analyzing usage data, but also remediation procedures that correct for any data limitations.
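
As a hedged illustration of such remediation, the sketch below assumes the shared events have been collected into a pandas DataFrame with the columns used earlier and applies a few simple corrections; the 100-event threshold for flagging sparse instrumentation is a placeholder, not a recommendation.

    import pandas as pd

    def remediate_events(events: pd.DataFrame) -> pd.DataFrame:
        """Apply basic corrections before comparing apps on equal footing.

        Assumes columns: app_id, user_id, event_type, occurred_at, session_id.
        """
        cleaned = events.drop_duplicates()                           # double-logged events
        cleaned = cleaned.dropna(subset=["occurred_at", "user_id"])  # unusable records

        # Flag apps whose instrumentation looks sparse (placeholder threshold),
        # so their metrics are read with caution rather than at face value.
        events_per_app = cleaned.groupby("app_id")["event_type"].size()
        sparse_apps = events_per_app[events_per_app < 100].index
        return cleaned.assign(
            sparse_instrumentation=cleaned["app_id"].isin(sparse_apps)
        )

With limitations corrected for, or at least flagged, attention can turn to what to measure consistently across the suite.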

When measuring engagement, a good set of core questions for creating common ground across a variety of apps includes the following (a sketch of how a few of these translate into queries follows the list):

  • Are users launching multiple apps at a time?
  • What types of visitors are making use of each app?
  • Are apps used on a regular basis or only on a certain trigger?
  • Is use of an app following or leading to use of other apps or content?
  • Which apps are most popular?
  • Which apps are being underutilized?
  • Which apps lead to new accounts or account growth?
  • Which apps are used by active customers versus lapsed or inactive customers or prospects?
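
The sketch below picks up a few of these questions and translates them into simple pandas queries over the shared event log assumed earlier; it is one illustrative approach, not a definitive measurement framework.

    import pandas as pd

    # events: one row per engagement event, with the shared columns assumed earlier
    # (app_id, user_id, event_type, occurred_at, session_id).

    def app_popularity(events: pd.DataFrame) -> pd.Series:
        """Which apps are most popular, and which are underutilized? Unique users per app."""
        return events.groupby("app_id")["user_id"].nunique().sort_values(ascending=False)

    def apps_per_session(events: pd.DataFrame) -> pd.Series:
        """Are users launching multiple apps at a time? Distinct apps opened per session."""
        launches = events[events["event_type"] == "launch"]
        return launches.groupby("session_id")["app_id"].nunique()

    def usage_cadence(events: pd.DataFrame) -> pd.Series:
        """Regular use or trigger-driven? Distinct active days per user and app."""
        day = pd.to_datetime(events["occurred_at"]).dt.date
        return events.assign(day=day).groupby(["app_id", "user_id"])["day"].nunique()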

Harvesting the traffic data and evaluating it in conjunction with other usage metrics, such as engagement and trigger events, will surface basic patterns. More granular data, such as campaign click-through rates or the types of filters applied in a search, become more insightful when overlaid on these baseline metrics. Additionally, evaluating usage patterns through the lens of customer segmentation can yield useful suggestions for personalizing the user experience and deepening engagement.
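
For the segmentation overlay described above, a minimal sketch might join the same event log to a table of customer segments; the column names and segment labels ("active", "lapsed", "prospect") are assumptions for illustration.

    import pandas as pd

    def engagement_by_segment(events: pd.DataFrame, segments: pd.DataFrame) -> pd.DataFrame:
        """Overlay usage with customer segmentation.

        Assumes `segments` has one row per user_id with a customer_segment label,
        e.g. "active", "lapsed", or "prospect".
        """
        merged = events.merge(segments, on="user_id", how="left")
        return (
            merged.groupby(["app_id", "customer_segment"])
                  .agg(events=("event_type", "size"), unique_users=("user_id", "nunique"))
                  .reset_index()
        )

Comparing these segment-level baselines app by app is one way to see where personalization could deepen engagement for a particular group.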

Follow Fulcrum Analytics on LinkedIn. For any questions, comments, or inquiries, contact us here.