
What Makes An Easy Question Easy? (DAGGRE.org)

For most of last year I have had the privilege of working with the DAGGRE Team on the Intelligence Advanced Research Projects Activity's (IARPA's) Aggregative Contingent Estimation (ACE) Project.  While all the real scientists have been busy exploring research questions involving Bayesian networks and combinatorial markets, this old soldier has been focusing on more mundane things like "What makes an easy question easy?"
(Note:  If you have not had a chance to check out the DAGGRE.org site and its mind-numbingly cool companion blog, Future Perfect, you should.  Three reasons:  First, it is pretty interesting research that could impact the future of the intel community.  Second, you can actually participate in it.  Third (and maybe most important to many of the readers of this blog), the DAGGRE team has gone the extra mile to make sure your personal data, etc., are secure while participating (check out the FAQ page for all the details).)
As I explained in this post, having some way to evaluate and even rank intelligence requirements according to difficulty is important.  Analysts are supposed to be accurate, but if you aren't also evaluating the difficulty of the underlying question, two equally accurate analysts could be miles apart in terms of overall quality.  It would be kind of like saying a little leaguer who hits .400 is as good as a major leaguer hitting .400.

While that distinction is easy to see in baseball, it is much more difficult in intelligence analysis.  Questions come in all shapes and sizes and vary in an enormous number of ways.  There is also a psychological, subjective aspect to it:  Questions that seem tough to some analysts may seem very easy to others.  On its face, it appears difficult if not impossible to come up with a system that can reliably evaluate and categorize questions by difficulty level.

Which is why I want to try.

And I may be making some progress.  I think I have figured out how to spot an "easy" question.  DAGGRE, you see, is a predictive market.  This means that people assign probabilities to the outcomes of questions.  Imagine, for example, I asked if Sarkozy would still be the president of France on 1 JUN 2012 (He is running for re-election in April and May).  Now imagine that you thought the odds of Sarkozy's re-election were 80%.  You could establish your position in the market at that "price" and others would be free to do the same (The Iowa Electronic Markets do this for the US election, by the way).

The market would reward people that were right on 1 JUN and would heavily reward those that were right when lots of others were wrong.  Studies have shown that, on average, these types of markets are pretty good at making these kinds of estimates.
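This reward pattern can be sketched with the logarithmic market scoring rule, one common mechanism for markets of this kind (whether DAGGRE uses exactly this rule is an assumption here; the function name and example probabilities are illustrative):

```python
import math

def lmsr_payout(p_before: float, p_after: float, outcome: bool) -> float:
    """Score a trader who moves the market probability from p_before
    to p_after under the logarithmic market scoring rule (LMSR).
    Positive means the move improved the market's estimate of the
    outcome that actually occurred."""
    p = p_after if outcome else 1.0 - p_after
    q = p_before if outcome else 1.0 - p_before
    return math.log(p) - math.log(q)

# A trader who raises Sarkozy's odds from 50% to 80% profits if
# he wins -- and profits more than one who merely nudged the market
# from 75% to 80%.  Being right when others are wrong pays best.
bold = lmsr_payout(0.50, 0.80, outcome=True)
timid = lmsr_payout(0.75, 0.80, outcome=True)
```

Note the asymmetry: the same trade scores negatively if the event does not occur, so confident positions are rewarded only when they turn out to be correct.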

Now, imagine I asked you to estimate the chances that Sarkozy would still be president of France at 1700 tomorrow.  Sarkozy is not sick (at least I hope he is not) and there are no direct, immediate threats to his presidency.  There is no reason to expect that he would not still be president tomorrow.  Consequently, successfully predicting that he will still be in office tomorrow is no sign of great analytic ability.  The question is too easy.

Generalizing this pattern, I think it is worth exploring the idea that "easy" questions are those that start and end their run on a predictive market close to either 0% or 100% probability, do not vary much during the course of that run, and, finally, resolve in accordance with their probabilities (i.e., they happen if close to 100% and don't happen if close to 0%; see the picture that accompanies this post for an idea of what such patterns might look like).  Furthermore, I think that these kinds of questions will see much less trading activity than other ("not easy") questions.
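As a rough sketch, that after-the-fact test could be run against a closed question's price history like this (the cutoff values are hypothetical placeholders, not anything the project has settled on):

```python
def looks_easy(prices, resolved_yes, edge=0.10, max_swing=0.15):
    """Check a closed question's price history against the 'easy'
    pattern: starts and ends near 0% or 100%, never strays far
    during its run, and resolves the way the market expected.

    prices       -- market probabilities over the question's run (0..1)
    resolved_yes -- True if the event actually happened
    edge         -- how close to 0 or 1 the start/end must be
    max_swing    -- largest allowed price range during the run
    """
    start, end = prices[0], prices[-1]
    # Both endpoints near the same extreme (both high or both low)
    near_extreme = min(start, end) >= 1 - edge or max(start, end) <= edge
    # Little movement over the whole run
    stable = max(prices) - min(prices) <= max_swing
    # Resolution agrees with the market's final estimate
    agrees = resolved_yes if end > 0.5 else not resolved_yes
    return near_extreme and stable and agrees
```

For example, a question that sat at 97-99% and resolved "yes" would match, while one that climbed from 50% to 95% would not, no matter how it resolved.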

Of course, the problem with this definition is that it only identifies easy questions after the fact, after the question has been resolved.  My hope, however, is that by examining the set of questions we already know are easy (at least under this definition), we might be able to see other patterns that will allow us to identify easy questions when they are asked rather than only after they are answered.

Our (I say "our" because I am working on this with one of our superb grad students, Brian Manning) first attempt to get at these patterns will be a simple one -- question length.  We hypothesize that, on average, questions that match the "easy" pattern I described above will be shorter than other questions.  When you think about it, it makes some sense.  After all, "What time is it?" seems like an easier question to answer than "What time is it in Nigeria?" 
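Testing the length hypothesis amounts to comparing word counts across the two groups of questions; a minimal sketch (the example questions below are made up for illustration):

```python
def word_count(question: str) -> int:
    """Crude question length: number of whitespace-separated words."""
    return len(question.split())

def mean_length(questions) -> float:
    """Average word count across a group of questions."""
    return sum(word_count(q) for q in questions) / len(questions)

# Hypothetical members of each group
easy = ["Will Sarkozy be president of France tomorrow?"]
other = ["Will Sarkozy still be president of France on 1 June 2012, "
         "following the April and May elections?"]

# The hypothesis predicts mean_length(easy) < mean_length(other)
```

A real test would, of course, use the full set of resolved DAGGRE questions and a significance test rather than a bare comparison of means.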

Brian has found some research that says that, subjectively, people don't perceive longer questions as necessarily more difficult.  The difference, of course, is that we have a definition of "easy" that is based on objective criteria.  Still, I think it best to start with the easiest possible measurement and then go from there.  Not sure where I will end up or if this will be a dead end, but I will keep you posted...
