
Thinking In Parallel (Part 2 - The Mercyhurst Model)

Part 1 -- Introduction

While a number of tweaks and modifications to the cycle have been proposed over the years, very few professionals or academics have recommended wholesale abandonment of this vision of the intelligence process.

This is odd.  

Other fields routinely modify and improve their processes in order to remain competitive and productive.  The US Army, for example, has gone through several major revisions of its combat doctrine over the last 30 years, from the Active Defense Doctrine of the 1970s to the AirLand Battle Doctrine of the 1980s and 90s to Network Centric Operations in the early part of the 21st century.  The model of the intelligence process, the Intelligence Cycle, has nevertheless remained largely the same throughout this period despite the criticisms leveled against it.  The best answers to the questions "What is the intelligence process?" and "What should the intelligence process be?" thus remain open theoretical questions, ripe for examination.

There are common themes, however, that emerge from this discussion of process.  These themes dictate, in my mind, that a complete understanding of the intelligence process must always include both an understanding of intelligence's relationship to operations and the decisionmaker and an understanding of how intelligence products are created.  Likewise, I believe that the process of creating intelligence is best visualized as a parallel rather than a sequential process.  I call this the "Mercyhurst Model" and believe it is a better way to do intelligence.  More importantly, I think I have the evidence to back that statement up.

The first of the common themes referenced above is that the center of the process should be an interactive relationship between operations, the decisionmaker and the intelligence unit.  It is very clear that the intelligence process cannot be viewed in a vacuum.  If it is correct to talk about an "intelligence process" on one side of the coin, it is equally important for intelligence professionals to realize that there is an operational process on the other side, just as large if not larger and equally important if not more so, along with a decisionmaking process that encompasses both.

The operational and intelligence processes overlap in significant ways, particularly with respect to the purpose and the goals of the individual or organization they support.  The intelligence professional is, however, focused externally and attempts to answer questions such as “What is the enemy up to?” and “What are the threats and opportunities in my environment?”  The decisionmaking side of the coin is more focused on questions such as “How will we organize ourselves to take advantage of the opportunity or to mitigate the threat?” and “How do we optimize the use of our own resources to accomplish our objectives?”  In many ways, the fundamental intelligence question is “What are they likely to do?” and the decisionmaker’s question is “What are we going to do?”  The image below suggests this relationship graphically.



The second theme is that it should be from this shared vision of the organization’s purpose and goals that intelligence requirements “emerge”.  With few exceptions, there does not seem to be much concern among the various authors who have written about the intelligence process about where requirements come from.  While most acknowledge that they generally come from the decisionmakers or operators who have questions or need estimates to help them make decisions, it also seems to be appropriate for intelligence professionals to raise issues or provide information that was not specifically requested when relevant to the goals and purpose of the organization.  In short, there seems to be room for both “I need this” coming from a decisionmaker and for “I thought you would want to know this” coming from the intelligence professional as long as it is relevant to the organization’s goals and purposes.

Theoretically, at least, the shared vision of the goals and purpose of the organization should drive decisionmaker feedback as well.  The theoretical possibility of feedback, however, stands in contrast to the common perception of reality, at least within the US national security community, that feedback is ad hoc at best.  There, the intelligence professionals preparing the intelligence are oftentimes so distant from the decisionmakers they are supporting that feedback is a rare occurrence and, if it comes at all, typically arrives only when there has been a flaw in the analysis or products.  As the former Deputy Director of National Intelligence for Analysis, Thomas Fingar (among others), has noted, "There are only two possibilities: policy success and intelligence failure," suggesting that "bad" intelligence is often a convenient whipping boy for poor decisions while "good" intelligence rarely gets credit for eventual decisionmaker successes.

It is questionable whether this perception of reality applies throughout the intelligence discipline or even within the broader national security community.  Particularly at the tactical level, where the intelligence professional often shares the same foxhole, as it were, with the decisionmaker, it becomes obvious relatively quickly how accurate and how useful the intelligence provided is to the operators.  While most intelligence professionals subscribe to the poor-feedback theory, most also have a story or two about giving analysis to decisionmakers that made a real difference, a difference willingly acknowledged by those decisionmakers.  The key to this kind of feedback seems less related to the issue, or to intelligence writ large, and more related to how closely tied the intelligence and decisionmaking functions are.  The more distance between the two, unsurprisingly, the less feedback there is likely to be.

The third theme is that from the requirement also emerges a mental model in the mind of the intelligence professional regarding the kinds of information he or she needs in order to address the requirement.  This model, whether implicit or explicit, emerges as the intelligence professional thinks about how best to answer the question and is constructed from previous knowledge and the professional's understanding of the question.

This mental model typically contains at least two kinds of information: information already known and information that needs to be gathered.  Analysts rarely start with a completely blank slate.  In fact, Philip Tetlock has demonstrated that a relatively high level of general knowledge about the world significantly improves forecasting accuracy across any domain of knowledge, even highly specialized ones.  (Counterintuitively, he also offers good evidence that high degrees of specialized knowledge, even within the domain under investigation, do not add significantly to forecasting accuracy.)

The mental model is more than just an outline, however.  It is where biases and mental shortcuts are most likely to impact the analysis.  It is where divergent thinking strategies are most likely to benefit and where their opposites, convergent thinking strategies such as grouping, prioritizing and filtering, need to be most carefully applied.  One of the true benefits of this model over the traditional Intelligence Cycle is that it explicitly includes humans in the loop - both what they do well and what they don't.

Almost as soon as the requirement gains enough form to be answerable, however, and even if it continues to be modified as a result of an exchange or series of exchanges between the decisionmakers and the intelligence professionals, four processes, operating in parallel, start to take hold: The modeling process we just discussed, collection (in a broad sense) of additional relevant information, analysis of that information with the requirement in mind and early ideas about production (i.e. how the final product will look, feel and be disseminated in order to facilitate communicating the results to the decisionmaker).

The notional graphic below visualizes the relationship between these four factors over the life of an intelligence product.  Such a product might have a short suspense (or due date) as in the case of a crisis or a lengthier timeline, as in the case of most strategic reports, but the fundamental relationship between the four functions will remain the same.  All four begin almost immediately but, through the course of the project, the amount of time spent focused on each function will change, with each function dominating the overall process at some point.  The key, however, is that these four major functions operate in parallel rather than in sequence, with each factor informing and influencing the other three at any given point in the process.



A good example of how these four functions interrelate is your own internal dialogue when someone asks you a question.  Understanding the question clearly comes first, followed almost immediately by a usually unconscious realization of what it would take to answer the question, along with a basic understanding of the form that answer needs to take.  You might recall information from memory, but you also realize that there are certain facts you might need to check before you answer.  If the question is more than a simple fact-based question, you would probably have to do at least some analysis before framing the answer in a form that would most effectively communicate your thoughts to the person asking.  You would likely speak differently to a child than to an adult, for example, and, if the question pertained to a sport, you would likely answer differently when speaking with a rabid fan than with a foreigner who knew nothing about that particular sport.

This model of the process concludes then where it started, back with the relationship between the decisionmaker, the intelligence professional and the goals and purposes of the organization.  The question here is not requirements, however, but feedback.  The intelligence products the intelligence unit produced were, ultimately, either useful or not.  The feedback that results from the execution of the intelligence process will impact, in many ways, the types of requirements put to the intelligence unit in the future, the methods and processes the unit will use to address those requirements and the way in which the decisionmaker will view future products.

This model envisions the intelligence process as one where everything, to one degree or another, is happening at once.  It starts with the primacy of the relationship between the intelligence professionals and the decisionmakers those professionals support.  It broadens and redefines the few generally agreed-upon functions of the intelligence cycle and sees them as operating in parallel, with each taking precedence in more or less predictable ways throughout the process.  It also explicitly adds the creation and refinement of the intelligence unit's mental model of the requirement as an essential part of the process.  This combined approach captures the best of the old and new ways of thinking about the process of intelligence.  Does it, however, test well against the reality of intelligence as it is performed on real-world intelligence problems?

Part Three - Testing The Mercyhurst Model Against The Real World
