Modernizing Defense Through Crowdsourcing

By Harrison Durland

To support U.S. competitiveness, policymakers need to make strategic decisions for defense modernization, including where to invest attention and resources. Whether identifying threats or opportunities, leaders benefit from accurate forecasts about the capabilities and limitations of emerging technologies and industries.

Crowdsourced forecasting, or CSF, has received increased attention and advocacy in recent years, thanks to the convenience afforded by widespread internet access combined with strong experimental performance on questions about politics and international affairs.

Although further experimentation and evaluation are needed, such forecasting could prove to be a valuable tool for defense modernization planning, technological assessments and related activities in the Defense Department and industry.

Crowdsourced forecasting refers to a variety of systems for eliciting quantitative forecasts from groups of people about defined outcomes. There are two major types of such systems: "prediction markets," where participants use real or play money to trade outcome "contracts" at prices that translate into the market's probability estimates, and "prediction polls," where participants state their estimates directly and these forecasts are aggregated into simple averages or into weighted averages, with weights usually based on past performance.
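To illustrate the prediction-poll approach, the sketch below shows one simple way such aggregation could work: a plain average alongside an average weighted by each forecaster's past accuracy, measured here with Brier scores (the squared error between a probability forecast and the 0/1 outcome). The function names, sample data and inverse-Brier weighting scheme are illustrative assumptions, not the method of any particular platform.

```python
# Illustrative sketch of prediction-poll aggregation. Assumes lower
# Brier scores indicate better past accuracy, so weights use the inverse.

def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome.

    This is how each entry in past_scores below would be computed,
    averaged over a forecaster's resolved questions.
    """
    return (forecast - outcome) ** 2

def aggregate(forecasts: dict[str, float],
              past_scores: dict[str, float]) -> tuple[float, float]:
    """Return (simple average, accuracy-weighted average) of forecasts."""
    simple = sum(forecasts.values()) / len(forecasts)
    # Weight each forecaster by inverse mean Brier score (lower is better);
    # the small constant avoids division by zero for a perfect record.
    weights = {name: 1.0 / (past_scores[name] + 1e-6) for name in forecasts}
    total = sum(weights.values())
    weighted = sum(weights[n] * p for n, p in forecasts.items()) / total
    return simple, weighted

# Example: three forecasters estimate the probability of a defined outcome.
forecasts = {"alice": 0.70, "bob": 0.55, "carol": 0.80}
past_scores = {"alice": 0.10, "bob": 0.25, "carol": 0.15}  # mean Brier scores
print(aggregate(forecasts, past_scores))  # weighted average leans toward alice
```

Real platforms layer further refinements on this core idea, such as discounting stale forecasts or adjusting the aggregate toward extremes, but the underlying logic of rewarding demonstrated accuracy is the same.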

Some active examples of platforms using prediction polls include Good Judgment Open, INFER Public, Metaculus and the U.K.'s Cosmic Bazaar. Examples of real-money prediction markets include the Iowa Electronic Markets and Kalshi.

The Delphi method, developed by the RAND Corp., involves cycles of surveying select experts for their estimates and reasoning. It has some similarities to crowdsourced forecasting but typically uses a narrower "crowd" and puts less emphasis on evaluating and rewarding individual accuracy.

Researchers have analyzed crowdsourcing for decades, and some early platforms such as the Iowa Electronic Markets have been effective at forecasting elections. However, one of the most influential recent studies was led by the University of Pennsylvania's Philip Tetlock for the Intelligence Advanced Research Projects Activity's Aggregative Contingent Estimation competition in the 2010s. Tetlock's project, which birthed Good Judgment Open and was the focus of the book "Superforecasting," found that teams of skilled volunteer forecasters outperformed intelligence community analysts by sizable margins on a variety of...
