
How To Use Data In A Crisis

August 2020
Greg Selker

Business intelligence expert John Parkinson on how leaders today need to prepare for the new environment

Stanton Chase recently convened a group of thought leaders to discuss how new data practices are reshaping the corporate landscape and lighting the way forward for those who innovate wisely. This month, we followed up with business intelligence expert John Parkinson, the former CTO of TransUnion and CapGemini and a leading figure in the field of data innovation, to delve deeper into the question of how companies and business leaders can better use technology to survive and thrive in today’s environment.

As a partner and founder of Parkwood Advisors, Parkinson provides intelligent data solutions for clients like Amazon and the World Economic Forum. Throughout our interview, led by Greg Selker of Stanton Chase, Parkinson shared his insights about the novel ways that data is being used to respond to crises — and the common mistakes and false assumptions he has seen companies make as they strive to adapt. We are delighted to share an excerpt from this timely conversation.

Greg Selker: When you look at the state of the world today, what are the data-oriented questions that you believe businesses aren’t thinking about but should? How should companies seeking to adapt to this new environment think about data?

John Parkinson: When you’re in a situation you have never encountered, you don’t know what’s important, so there is a tendency to chase things that look interesting, whether or not they turn out to be important. Today, that tendency is exacerbated because we have lots of data. Our customers expect definitive answers that we can’t give them. We can’t give them the “right” answer because there isn’t one. Trend analyses and predictive analytics really require some sense of the causal structure you’re looking at: “if this, then that.” We’ve stopped trying to offer this type of causal prediction. Instead, what we’re saying to clients is, “Tell me what you want the outcome to be, and we’ll tell you what variables to watch to see whether your desired outcome is going to happen.”

GS: How do you guard against potentially cherry-picking data or ignoring data that you might uncover that doesn’t resonate with the outcome you’re ideally trying to deliver?

JP: We try and run as many scenarios as we can. If I have a model that I think is predictive of the outcome that a customer wants, I’ll run a sensitivity analysis. I’ll run 100 million scenarios until I figure out what factors change the answer because those tend to be the things that you should be paying attention to.
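The scenario-sweeping approach Parkinson describes can be sketched in a few lines. The model, input names, and run count below are purely illustrative, not his actual methodology: sample many random scenarios, flip each input to its opposite extreme, and count how often the model's answer changes. The inputs that flip the answer most often are the ones worth watching.

```python
import random

def toy_model(x):
    # Hypothetical predictive model over three illustrative inputs,
    # each scored between 0 and 1. The weights are invented.
    return x["a"] * 0.6 + x["b"] * 0.3 + x["c"] * 0.1 > 0.5

def sensitivity_analysis(model, names, n_runs=50_000):
    """Randomly sample scenarios; for each, push one input to its
    opposite extreme and count how often the model's answer changes."""
    flips = {n: 0 for n in names}
    for _ in range(n_runs):
        scenario = {n: random.random() for n in names}
        baseline = model(scenario)
        for n in names:
            varied = {**scenario, n: 1.0 - scenario[n]}
            if model(varied) != baseline:
                flips[n] += 1
    return flips  # inputs that flip the answer most deserve the most attention
```

In this toy case the heavily weighted input flips the answer most often, which is exactly the signal Parkinson suggests clients should watch.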

GS: Can you give an example?

JP: We recently had a client ask us, “As we look at coming out of our current crisis, should we keep our current office footprint or continue to have the majority of our employees work virtually?” They have 40,000 employees today and, currently, about 1,000 of them are working from company-owned real estate. Right now, most of their office space is empty. That’s expensive overhead. So, the client wanted to know if they should downsize or relocate, and they also needed to consider how those types of changes to the office footprint would impact the number of employees likely to return.

To explore a set of questions like this, we build models. We mine all the proprietary and public data sources we can and take hundreds of factors into consideration. It gets complicated pretty quickly. For instance, if we know that most of their employees drove to work prior to the pandemic, that means that most employees will not be coming into contact with anyone during their commute, so we can make that a low-weighted factor. However, when we look at the office itself, many of the cubicles are only three or four feet apart, which makes social distancing difficult, so that factor may be more significant to the outcome. The model also considers factors like the age demographic of the workforce. We can make assumptions about the propensity of people that want to work close to other people if they’re 25 versus if they’re 50. We also look at the emotional response of their workforce in terms of COVID-19. How safe will people feel? We look at and weigh hundreds of factors.

Once the model is built, we run lots of iterations to see what changes. It’s cheap to run; we did about 1 million runs in this particular example. I think it cost $25.

GS: I’m struck by that. Ten years ago, running that kind of model would have taken days or weeks rather than a day — and at what expense?

JP: Right. Cheap computing has made a much bigger difference than most people realize. Ten years ago, when I was at TransUnion, running models like this could take weeks. I think our record was a model that took six weeks to run; it ran on 8,000 Intel cores in a data center that we owned. That’s about $2.5 million worth of hardware, plus a whole bunch of people to look after it. Today, all I do is load it up in a bunch of containers on AWS and just press the go button.

GS: Are businesses today taking advantage of computing power and strategic computational models to guide their response to COVID-19? If not, what do they need to do to take better advantage of the technology that we have accessible to us today?

JP: Every model you build is a simplification of reality, and your model’s usefulness is as much defined by what you throw away as what you keep. So although the cost of computation is very low, there is a significant cost to collecting, curating, and using the data.

Historically, businesses tended to collect and use only the data that they thought was relevant to their operations. They did not generally build these kinds of strategic, analytic models. There are a couple of reasons for this: one is that the talent component of this does not come for nothing — the talent to do this type of data modeling is relatively rare. The other reason is, over the years, there has been a tension between investing in a small group of talented data workers and the democratization of data and analytics, which would equip every employee with the tools to ask more intelligent questions and get better answers.

Unfortunately, we have not been able to give the average knowledge-worker sufficiently intuitive and easy-to-use tools that make asking good questions easier. It turns out that most people don’t want to ask better questions. They want to be told better answers.

GS: Even though we have more data at our disposal and greater capabilities for quickly analyzing that data, it still seems to come down to talent. There is a relatively small group of highly skilled people who truly understand how to pose the right questions, curate the data, and construct useful models.

JP: Yes. I think that’s one factor. The second issue is return on investment. A question that we always ask customers who come to us with a complex business-operations problem is: “Are you prepared to make the potentially significant investments in your overall operational processes that may follow from what you learn in this exercise?” I’ll give you an example. We had a customer, a big retailer, who told us they had so much data that if they knew who you were when you walked in the door, they could predict with more than 90% accuracy what you were going to buy. However, their physical supply chain could not respond fast enough to the fact that they knew what a customer was going to buy. Their CMO said to me at the time, “We can change our inventory in about four months. You can change your mind in four minutes.”

GS: Even if a business today acquires the right talent to take advantage of the cheap and speedy computational processes that are available, there is an additional barrier to success: Companies need to be willing to listen to the data, even if it means making big investments that cut across multiple areas of a company’s operations and infrastructure.

JP: Yes, exactly. So, one answer to this is robotic process automation (RPA). What do we do in first-world economies when we don’t have enough talent to solve a problem? We turn it into software. RPA doesn’t necessarily ask the best questions, but it asks better questions most of the time.

GS: Can you give an example of that?

JP: Think of customer service. A company needs to decide whether a given RMA (return merchandise authorization) request should be accepted. To answer this, an RPA system can look at dozens of data points in about 100 milliseconds. No human can do that even if they know the rules. Even if they have a checklist to go through, a human is going to take five minutes. With an RPA system, the customer gets an instant answer, and a rationale, and in general their satisfaction scores are higher.
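The RMA screening Parkinson describes can be sketched as a small rules engine. The rule names, thresholds, and fields below are hypothetical, invented for illustration; a production system would evaluate dozens of data points per request. The point is that each answer comes back instantly with a rationale attached.

```python
from dataclasses import dataclass

@dataclass
class RmaRequest:
    days_since_purchase: int
    item_value: float
    customer_return_rate: float   # fraction of this customer's past orders returned
    reported_defective: bool

def screen_rma(req: RmaRequest) -> tuple[bool, str]:
    """Return (accept, rationale) so the customer gets an instant,
    explained decision instead of a five-minute manual review."""
    if req.days_since_purchase > 90:
        return False, "Outside the 90-day return window"
    if req.reported_defective:
        return True, "Defective item: accepted within the return window"
    if req.customer_return_rate > 0.5 and req.item_value > 500:
        return False, "High-value item with unusually high return history"
    return True, "Within return policy"
```

Evaluating rules like these over dozens of data points takes milliseconds, which is what lets the system answer while the customer is still on the page.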

We [then] circle back to the question that was originally asked: “Are these employees going to come back to the office?” In this case, part of the model output might be, “If you no longer need them, why would they come back?” If you’re going to invest in a higher level of automation in your business processes, one of the outcomes is that you will likely need fewer people, but the people you do need will have to be more talented. So, as I said, we’re not giving people answers; we’re telling them what to pay attention to so they can make decisions themselves.

GS: So, if you’re willing to take action based on what the analysis of the data is showing you, then regardless of what the new reality looks like you’ll be in a better position to thrive in that new reality?

JP: That’s right. You focus on trying to figure out what you have to be able to do no matter what the outcome is. Then on top of that, you put in what we call “watch points” that make you better at guessing what will happen. Then you need to invest in the ability to be agile in response.

GS: I would love to hear your perspective on COVID-19 as a data intelligence expert. What are some common misconceptions or false assumptions about the pandemic?

JP: It is amazing that we had a fully sequenced genome for a novel virus within about two weeks of getting samples. We’ve never been able to do that before. That gave the pharmaceutical industry data to work with to figure out what kind of response, such as a vaccine, could be produced. Instead of trial and error, computational chemistry allows us to look at the structure of the virus’s surface and figure out attachment points that could block the virus’s action.

However, this tells us nothing important about the impact of the virus on humanity as a whole. We do not have a good model for community spread, how the infection actually works, or the range of impacts on humans. We still don’t understand why some people get sick [and] some people don’t, and it will be years before systems biology catches up with the reality of what we are experiencing, because we still have fundamental gaps in our understanding of how systems biology works.

People have asked us a lot, “Will things go back to normal after the pandemic is over?” and the first thing we say is, “Why do you think the pandemic will ever be over?” Because, today, we do not know. We are very light on therapeutics; we don’t have a vaccine; we have evidence that things are going in a good direction, but we don’t know. Unfortunately, there’s a kind of belief in the population that science is better than it actually is. Which is not to say that it’s not wonderful — it’s amazing, but it’s not everything. We have a declining percentage of people with real scientific understanding in the general population, and that causes problems. We can’t turn everybody into a scientist, but we can at least manage the relevant data in a better way.

GS: Given all of this uncertainty, what are the best practices that businesses should engage in today to stay agile? Is it to move very quickly into putting RPA systems and infrastructure in place, or to move quickly to make certain you have the right talent? What are your recommendations?

JP: First, companies should pay attention to where automation is relatively straightforward and reliable. The challenge with automated systems is that when they break, the problem is much worse than it would have been without them, because automation has already absorbed all the routine cases and only the hard ones are left. This means that as companies increase automation, they also need their incident response capability to become significantly more sophisticated. And that means companies should be looking at how to curate talent: not only acquire it but nurture, encourage, and invest in it, because you’re going to need it. It doesn’t matter that you’ve shed some percentage of your current workforce. The ones you need to keep, you really need to invest in.

Second, companies should focus on understanding their customers. What we say is, “You should not be trying to influence customers. You should be trying to understand them so that the interaction between you is natural, not forced.” We have enough data to begin to do that now, although almost nobody does it very well.

Finally, if data matters to you, you better make sure it stays safe and correct; therefore, you should invest in some capability to curate the data that matters to you. When we talk to executives, they just don’t understand what virtually free computation cycles let you do. It means you can try out a lot of things in the abstract to support your decision-making process. You don’t have to be driven by a single decision set anymore.

This interview excerpt has been lightly edited for length and clarity. For a full transcript of the interview, click here.

About the Author:

Greg Selker is the North American Sector Leader Software at Stanton Chase International, and a Director in the firm’s Baltimore Office.

 
