Competitive Intelligence Spotlight #3: Robin Riviere, VP of Competitive Intelligence & Price to Win at SAIC

Robin Riviere, vice president of Competitive Intelligence and Price to Win with Science Applications International Corp. (SAIC), recently spoke to ArchIntel to discuss the strategic decisions, goals, and outcomes of competitive intelligence, as well as how to manage success in the “fog” of the field. 

“I think the interesting work is when you pull all of this together. When you have all these observations and string them together in a way that reveals a theme. That is the real value of competitive intelligence.”

ArchIntel: How does a company’s level of competitive intelligence define its strategic decisions and actions?

“It defines the approach that you take in regard to how your company views its competitors. At the lowest level, we look at how a competitor would fit a customer’s set of requirements. We have to view them through the lens of who makes decisions at the capture level rather than who makes the decisions at the highest level.

For example, we compete against our top five competitors in just about every market that we’re in, but depending upon the market, those competitors exhibit different behaviors. We have to examine the competitor in the context of the market we’re competing in, and then we need to determine how urgently our competitor is working to secure the contract.

You have to dissect the competitor as deeply as you drill down into the competitive intelligence in each market. As you go further down, it’s necessary to parse the corporate, business unit, and organizational layers of that company, because your competitors do things differently depending upon the customer and what objectives they are trying to reach.”

ArchIntel: What is a competitive intelligence analyst’s approach and set of goals?

“Within tactical competitive intelligence, I like to weave the competitive intelligence into the price-to-win. I want to make sure that the competitive intelligence team also serves as an input into the price-to-win process. Then I want to identify the questions whose answers we need, because those questions determine where someone needs to go, in terms of research, to get the answers.

I always start with the customer. Since we’re competing against our competitors for the buyers’ dollars, I want to understand the strategic intent of the customer, because that will determine the approach that all of us take, and it’s our job to determine how a competitor is going to interpret the same data that we have to interpret. First and foremost, we want to understand the customer.

Let’s assume that we’ve examined, to the best of our ability, the customer. In my mind, the best insights come from the people that are on the ground in the customer space. Who is there? Who do the people on the ground see walking around because they have existing contracts or they’ve managed to get customer meetings? 

The first option is to solicit the internal teams who have access to the customer to verify whether your competitors have spoken with the customer. Second, we look at the publicly available data and determine, based on spending, who has a presence within that space. Next, we figure out what capabilities the nature of the scope requires, and who has capabilities adjacent to the customer.

After that, we figure out who has those capabilities and ask whether they are relevant to this customer, and then we pivot to financial data. People certainly look at LinkedIn and company websites to verify relationships. We try to scour open sources. In many cases, that’s all you have.

In my mind, that’s the legwork. I think the interesting work is when you pull all of this together. When you have all these observations and string them together in a way that reveals a theme. That is the real value of competitive intelligence.

The data is an input to competitive intelligence; it is not competitive intelligence. The goal is to objectively examine the data and let it tell you the story. If you have a predetermined outcome, you’ll always be able to find data that tells you the story you want it to tell, and that is a challenge.

I think jumping to conclusions is very tempting because it’s convenient. It shortcuts the process, but the real objective is to produce an artifact whose inferred conclusions result in actionable takeaways. Our job is to support the capture teams with our third-party view of the competition in the context of the opportunity that lies in front of our team.”

ArchIntel: How do you double-check, confirm and fill gaps within competitive intelligence data?

“You never have complete information. You don’t even know if you have the right information. You have data, but you don’t know whether what you have is factual. We try to limit what we can reasonably claim to be fact. If you can’t verify something factually, you can’t accept it as fact. If you do note it, you have to at least indicate that it’s speculative.

You’re never going to be able to fully account for gaps. You have to acknowledge that they exist, then make decisions on the basis of that acknowledgement. There’s fog, and what you have is just a couple of pieces of a puzzle, some of which may be connected, with the rest hidden in that fog.

Based on that methodology, you present recommendations for how to proceed with an admittedly vague and ambiguous picture. First, you must accept that you’re operating in a world of guesses. You’re still trying to find the pieces of information that can help build a foundation.

In many ways, competitive intelligence is very similar to cybersecurity. In network security monitoring, the software uses artificial intelligence to indicate what kinds of threats are likely to manifest themselves and where. Those algorithms are trained on the basis of prior observed outcomes.

Competitive intelligence is particularly similar when you examine awards and patterns. We do a fair bit of that examination when we have data that we get from debriefs and Government Accountability Office (GAO) protests. When we place a bid and lose, we know who’s won, their scores and price. 

When we bid and win, most of the time we won’t know anything more than what our own score and price were. The GAO filing generally has information that tells us more about the protester’s disposition at proposal submission or at award.

For opportunities that we didn’t bid on but that were still protested, we can get information on at least two companies: the awardee whose award was protested and the protesting company. We accumulate this data because it tells us a number of different things.

First and foremost, it tells us the differential between the winning proposal and the protesting proposal from both a non-cost and a cost perspective. Next, it tells us about the customer: it enables us to validate whether customers are awarding to more or fewer “blue” proposals, whether bidders are capable of achieving blue ratings, and what makes a proposal blue or not, because that is often described in the debriefs and particularly in protest documents.

If we can accumulate a history of observations around customers and our competitors, we can build and train machine learning algorithms to examine new inputs for yet-to-be-awarded acquisitions. We can then compare the attributes associated with those unawarded contracts to the observed outcomes of awards that have already taken place to find similarities and patterns.

Those attributes and their similarities will be what the algorithm uses to determine whether an acquisition is likely to be awarded to the incumbent or to a new company. We’ve tried to use data science to the greatest extent possible. The limiting factor will always be data, because you have to create the data sets. You cannot just go out and buy a data set that can produce the kind of outputs that are possible; you have to build them organically.”
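
Riviere doesn’t describe SAIC’s implementation, but a minimal sketch of the approach he outlines could look like the following, assuming a hand-built history of award observations. The feature names, fabricated values, and choice of logistic regression are illustrative assumptions, not details from the interview.

```python
# Illustrative sketch only: train a classifier on a hand-built award history
# (e.g., assembled from debriefs and GAO protest documents) to estimate whether
# a pending acquisition is likely to go to the incumbent. All values and
# feature names are fabricated assumptions, not SAIC's actual pipeline.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

history = pd.DataFrame({
    "incumbent_bid":   [1, 1, 0, 1, 1, 0, 1, 1, 0, 1],    # incumbent submitted a bid
    "price_delta_pct": [-5.0, 3.2, 0.0, -12.1, 8.4, 1.5, -2.0, 6.7, -9.3, 0.8],  # winner vs. next bid
    "tech_score_blue": [1, 0, 1, 1, 0, 0, 1, 1, 0, 1],    # "blue" (outstanding) technical rating
    "recompete":       [1, 1, 0, 1, 1, 0, 1, 0, 1, 1],    # recompete vs. new work
    "incumbent_won":   [1, 0, 0, 1, 1, 0, 1, 0, 0, 1],    # observed outcome
})

X = history.drop(columns="incumbent_won")
y = history["incumbent_won"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Score a yet-to-be-awarded acquisition by the same attributes.
pending = pd.DataFrame([{"incumbent_bid": 1, "price_delta_pct": -4.0,
                         "tech_score_blue": 1, "recompete": 1}])
print("P(incumbent wins):", model.predict_proba(pending)[0, 1])
```

In practice the limiting factor is exactly what Riviere names: each row has to be built organically from observed awards before any model is worth training.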

ArchIntel: How do you utilize competitive intelligence to remain marketable in a bidding battle?

“You know the price that someone bid when you lost and they won. You don’t know the prices of the other companies that came up short, but generally you know the market rate for their people. You also know the differences in what each company’s indirect cost structure looks like and how much profit has typically been built into these bids.

The solicitation allows for a unique staffing approach, which is always the “X factor.” You can be off by 30 percent on a labor-rate basis, but that’s still going to be far less influential on the total price than if I misestimated by one full-time equivalent (FTE) the staffing that a competitor is going to bid.

The impact of one FTE dwarfs whether you’re going to pay in the basement or at the peak for salaries, especially if the level of effort was consistent between your bid and a competitor’s. Depending upon what the customer gives you, your ability to predict price is more or less variable. You know those things after an award.
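
A toy calculation with fabricated numbers makes the sensitivity Riviere describes concrete: on a five-person effort, a 30 percent error on one labor category’s rate moves the total price far less than missing one FTE in a competitor’s staffing estimate. The hours and rates below are assumptions for illustration only.

```python
# Toy sensitivity check with fabricated numbers: compare the price impact of a
# 30% rate error on one labor category vs. missing one FTE outright.
HOURS = 1880                      # assumed productive hours per FTE-year
rates = [120, 95, 95, 80, 80]     # assumed fully burdened $/hr, 5-FTE team

base = sum(r * HOURS for r in rates)

rate_miss = base + 0.30 * rates[0] * HOURS   # one category's rate off by 30%
fte_miss = base + rates[1] * HOURS           # one whole FTE missed

print(f"base price:    ${base:,.0f}")
print(f"30% rate miss: +{(rate_miss - base) / base:.1%}")   # ~+7.7%
print(f"one-FTE miss:  +{(fte_miss - base) / base:.1%}")    # ~+20.2%
```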

If you bid on a contract and lose, you know what the winning price is, who won it and what their non-cost score was as well. Depending upon the request for proposal (RFP) and how that RFP is structured, you can back into all of the variables that made up their price within a reasonable range of error.

Let’s say you lost, but the winning bid was awarded at a higher price than yours. You have to validate whether the winning bid was technically superior to yours because, in a best-value trade-off, it could theoretically be awarded at a higher price. In almost every case we see where we’ve been beaten by a higher price, it’s because we had a lower technical score.”
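
Riviere’s point about backing into a winner’s price variables can be sketched as simple algebra: given the announced award price and the level of effort implied by the RFP, you can infer a blended labor rate and, under an assumed fee, a cost basis. Every figure below is hypothetical.

```python
# Hypothetical back-solve: infer the winner's blended fully burdened rate from
# the announced price and RFP-implied staffing, then strip an assumed fee.
award_price = 24_500_000          # announced winning price (fabricated)
ftes, years, hours = 22, 5, 1880  # staffing implied by the RFP (assumption)
fee_pct = 0.07                    # assumed profit/fee

total_hours = ftes * years * hours
blended_rate = award_price / total_hours    # implied $/hr across the team
cost_basis = award_price / (1 + fee_pct)    # direct-plus-indirect cost bound

print(f"implied blended rate: ${blended_rate:.2f}/hr")
print(f"cost basis at {fee_pct:.0%} fee: ${cost_basis:,.0f}")
```

The real exercise layers in labor categories, wrap rates, and other direct costs, but the arithmetic is the same: each known output constrains the competitor’s unknowns to a reasonable range of error.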

ArchIntel: How do you define success within competitive intelligence?

“This applies not just to competitive intelligence, but to price-to-win. There are those who say, ‘Hey, winning is success.’ However, you don’t always win; in the case of new business pursuits you lose most of the time, and many companies lose more often than they win.

Sometimes, the quality of your competitive intelligence wasn’t the problem. What I choose to use as an additional gauge for success is the degree to which we can influence the outcomes.

If you’re privy to how the capture is developing and how it converts into a proposal, you can compare those decisions with what you had previously recommended. If you can see a strong relationship between your recommendations and the decisions that drove the outcome, that is a great sign that you’ve achieved credibility.

The quality of your product has been deemed to be influential enough that it was the basis for the decision that went into a proposal. Irrespective of whether you win or lose, that still counts. You were influential because if no one believes you, no one is going to incorporate your work into their decision making.”