
Scaling Research Projects: One Expert vs. Multiple Experts via Aggregators

Mar 18, 2026 6 minutes read

Whenever a complex problem and a looming deadline are involved, it can seem that consulting just one expert may not be enough. Yet introducing more opinions can often merely add more noise rather than crystallize things.

Research teams face this same dilemma on every project.

So, what’s the solution? The answer is a classic "it depends."

There is a broad spectrum of factors that should affect the number of experts you consult, and in this article, we’ll take a look at the most important ones.

Read on to learn more about what academic research suggests regarding combining expert input, the cost-benefit threshold, project types, and the practical implications.

Tapping into academia

You could just take our word for it, but there’s an academic body of research that goes back decades with a surprisingly consistent takeaway: averaging across multiple forecasters almost always beats relying on a single source.

One important study is Winkler and Clemen’s work in Decision Analysis, which found that adding experts to your pool tends to increase accuracy – though, naturally, there comes a point of diminishing returns as you add more input.

The gains tend to be greater from adding experts than from adding assessment methods. This makes intuitive sense: different professionals bring different information to the table, while the same person, even using different methods, is still tapping the same underlying knowledge base.

But the most compelling evidence comes from the IARPA forecasting tournament that ran from 2011 to 2015.

This project, operated by Good Judgment, spanned four years and covered 500 questions and more than a million individual forecasts, all to test what actually makes forecasting work.

Suggested reading: Cost-Effectiveness Analysis: Expert Networks vs. In-House Research Teams

The headline finding: teams were 23% more accurate than individuals working alone. And the best forecasters, dubbed ‘superforecasters’ (the top 2% of participants), were 35-72% more accurate than competing research teams. The real shocker, however, was that amateur forecasters working without access to classified data were about a third more accurate than seasoned intelligence analysts who had it.

Let’s sit with that for a moment. The implication is that how we combine and process information matters more than what information we have access to. Diverse reasoning paths, structured independence, and systematic belief updating seem to beat informational advantage.

Genre and colleagues confirmed something similar in their 2010 analysis of the European Central Bank's Survey of Professional Forecasters.

They analyzed GDP growth, unemployment, and inflation forecasts, and identified that simply averaging expert predictions can establish a remarkably robust benchmark, with only a few sophisticated combination methods able to consistently perform better.

The wisdom of crowds, as it turns out, is surprisingly hard to improve upon. Sometimes the most effective thing you can do is simply gather a few genuinely independent perspectives and average them.
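The averaging effect is easy to see in a toy simulation. The sketch below is hypothetical, not drawn from any of the studies above: it assumes three experts whose errors are independent and unbiased, in which case averaging their estimates shrinks the typical error by roughly a factor of the square root of the panel size.

```python
import random

random.seed(0)
truth = 0.50   # hypothetical "true" value being forecast
noise = 0.15   # assumed spread of each expert's independent error

def forecast():
    # one expert's estimate: the truth plus independent noise
    return truth + random.gauss(0, noise)

trials = 10_000
solo_err = avg_err = 0.0
for _ in range(trials):
    panel = [forecast() for _ in range(3)]
    solo_err += abs(panel[0] - truth)       # relying on one expert
    avg_err += abs(sum(panel) / 3 - truth)  # simple average of three

print(f"single expert, mean abs error: {solo_err / trials:.3f}")
print(f"3-expert average, mean abs error: {avg_err / trials:.3f}")
```

The average wins not because any one forecaster improves, but because independent errors partially cancel – which is also why the benefit disappears once experts converge on the same reasoning.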

When more voices clarify rather than confuse

Stable, well-understood markets that have rigid regulatory frameworks are, well, predictable. Experts will generally tend to converge here.

Think about it this way:

  • If you ask three experts about approval timelines in the U.S. pharmaceutical industry, you’ll essentially get three similar answers because they operate within the constraints of the same FDA playbook.
  • The second and third opinions confirm the first but don't add much new information. You've basically paid for the same insight three times.

But once uncertainty enters the equation, the math changes radically. In volatile environments like emerging markets, frontier tech, or geographies with rapid regulatory changes, independent experts are likely to draw on genuinely different information sources and analytical frameworks.

Simply put, a private equity analyst who has spent a decade in the Southeast Asian markets will see patterns that a Boston consultant might miss entirely.

The IARPA study confirmed the suspicion that many practitioners have had for a long time – the key isn’t the number of opinions, but rather their diversity. Groups of forecasters outperformed single individuals on a regular basis, but only under circumstances that involved team members maintaining genuine independence of thought.

Suggested reading: How to Choose Experts Wisely: Vetting Criteria for High-Stakes Calls

The moment experts started to converge on the same reasoning, the benefits of aggregation pretty much vanished. About 70% of the superforecasters mentioned above maintained their elite status from year to year, suggesting that good forecasting is a genuine skill, but one that still benefits from structured collaboration with other skilled forecasters.

More isn't always better. Different is better.

The cost-benefit threshold

Here's where theory meets budget constraints, and things become more practical.

Following the rule of diminishing returns, the first additional expert typically yields the greatest increase in accuracy. The second can still surface blind spots. A third specialist may add value on genuinely complex problems.

Beyond that, though? The returns flatten out pretty quickly.
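That flattening follows directly from the statistics of averaging: if expert errors are independent with spread σ, the error of an n-expert average shrinks like σ/√n, so each added expert buys less than the one before. A quick sketch (the σ = 0.15 noise level is an arbitrary assumption):

```python
# Expected error of an n-expert average under independent noise: sigma / sqrt(n).
sigma = 0.15  # assumed per-expert error spread
errors = [sigma / n ** 0.5 for n in range(1, 7)]

# Marginal improvement contributed by each added expert
gains = [errors[i - 1] - errors[i] for i in range(1, len(errors))]
for n, g in enumerate(gains, start=2):
    print(f"expert #{n} cuts expected error by {g:.3f}")
```

Under these assumptions, the second expert delivers roughly twice the improvement of the third, and by the fifth or sixth the marginal gain is a rounding error – consistent with the three-to-four-opinion threshold discussed below.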

There are, of course, cases where consulting four or five specialists can be warranted, mainly for really high-stakes decisions where the cost of a mistake would significantly outweigh the cost of additional perspectives.

But it’s safe to say that for most commercial due diligence and market analysis, you’ll reach optimal investment at around three or four opinions, sometimes even sooner.

There’s one practical cost that academic research doesn’t cover: the friction of coordination. Factor in the time, money, and effort needed to process five or six sometimes-contradictory insights and reconcile the disparities among those specialists.


Project type matters

Triangulating your findings isn’t essential in all types of research. Some general patterns tend to hold, although there are always exceptions:

  • Mergers & Acquisitions due diligence often benefits from multiple perspectives, simply due to the sheer complexity of such projects. You need someone to validate the data, someone to assess management quality, along with company culture, and someone else to provide context for the regulatory and competitive environment.

All of these are genuinely different knowledge domains, so a diverse set of opinions doesn’t just end up being redundant. A CFO, an industry operator, and a regulatory specialist will more than likely provide you with genuinely different perspectives.

  • Market sizing and product validation usually work well with two to three cross-functional voices.

Typically, you’d want a person who has a solid understanding of the customer segment, a person who understands positioning, maybe someone with supply chain or distribution expertise. These should be enough to triangulate without drowning in perspectives.

  • Technical or niche Research & Development questions, counterintuitively, often work best with a single expert.

If you need to understand the viability of a specific semiconductor manufacturing process, finding the one person who has actually worked on that technology at scale probably beats averaging across three people with adjacent-but-not-quite-right experience. Basically, expertise in the exact thing beats triangulation from the general vicinity.

  • Crisis or fast-moving scenarios benefit from parallel rather than sequential engagement: consult multiple experts simultaneously, not one after another. Speed matters, and you're looking for convergent signals that can inform immediate decisions.

If three people independently point to the same conclusion under time pressure, that's worth something.

What this means in practice

The practical takeaway isn't complicated, although applying it well takes judgment.

For straightforward questions in stable domains, one well-qualified expert is probably sufficient. Don't overcomplicate the process; in most cases, extra opinions only add expense.

For specialized questions that sit in genuine uncertainty, consider working with two to four perspectives, with the emphasis on genuine diversity of viewpoint. The key point is diversity of thought, not the number of consultants.

The IARPA research suggests that getting the composition of the team right matters more than the raw numbers. The superforecasters who regularly outperform their peers are actively open-minded, willing to update their beliefs, and skilled at synthesizing diverse viewpoints.

The same logic applies when you’re putting together a team of experts for triangulation.

If you're working through an aggregator platform, such as Expert Network Calls (ENC), for example:

  1. Use the parallel scheduling capability strategically. Don't just fill slots – think about which combinations of expertise will provide the most useful triangulation for your specific question. That's where the real value is.
  2. And maintain realistic expectations. Multiple experts reduce variance and catch blind spots, but they don't eliminate uncertainty. They can't. The goal isn't certainty – it's informed judgment.

The research suggests you can achieve meaningfully better outcomes through thoughtful expert aggregation, but "meaningfully better" still leaves plenty of room for being wrong. It's about improving your odds, not guaranteeing success.

The choice between single-expert and multi-expert approaches is about matching your research architecture to the actual uncertainty profile of the question you're trying to answer. Sometimes that means one deeply informed conversation. Sometimes it means parallel triangulation across multiple perspectives.

The research is fairly clear that getting this choice right matters more than most practitioners assume. Which, depending on how you look at it, might be the most useful insight of all.
