Structured Analytic Techniques

Essay Instructions:

Which of the 12 U.S. Government (2009) Structured Analytic Techniques do you think best enable analysts to reduce cognitive and perceptual biases? Pick one diagnostic, one contrarian, and one imaginative thinking technique and describe why you chose those three. In the article, The future of the intelligence analysis task by Hare and Coghill (2016), they describe how technology and artificial intelligence will impact the intel analysis function. Do you think the skills necessary to be an effective intelligence analyst in the future will be fundamentally different than those used by analysts just before 9/11?
This is for a discussion, not a paper; it does not need a proper introduction or conclusion.
1. Introduction
Information technology will have a significant impact on the intelligence analysis workflow, skills, and organization in the next couple of decades. In future, instead of ingesting information themselves, analysts will use a range of information tools to add value to data. Future analysts will need less knowledge of subject matter, and more general reasoning skills. The future task will involve more creativity, and less focus on detail, than today.
Analysis is a broad church. Analysts are found in finance and business, intelligence, operational research, hard science, engineering, statistics, economics and beyond. What their roles have in common are the processing of data from one form to another, making sense of noisy or obscure concepts, working out what is true and what isn't, and communicating conclusions to other people who might find them helpful.
The business of analysis is, at least in most domains, still one in which human cognition forms a vital part of the processing chain. Human cognitive tasks involve a range of things: sensing and interpreting of information, cross-reference and comparison with other information, and inference. Moreover, in the professional environment, analytical advice must be communicated to at least one other human, and inter-human communication is manifestly one of the most complex tasks we have to navigate.
But humans are also tool-builders. When tasks become well-understood, theorized-about, repeated and perfected, we engineer ways of accomplishing them with less physical and cognitive strain. The nineteenth century saw vast numbers of complex physical tasks effectively subordinated to machines. But it was only in the twentieth century that it became possible to conceive of machinery that could begin to unburden humans of their cognitive load. Collectively known as 'artificial intelligence', cognition machines have begun to inveigle their way into our lives, not insidiously, but by making those lives easier. This is as true in professional analysis as it is for any other cognitive domain.[ 1]
This article asks what the future holds for analysis, using the specific role of an intelligence analyst as a case study. We, the authors, have had many years' experience in the world of government intelligence analysis. There, the distinction between analysts and decision-makers is firmly demarcated, with intelligence analysts empowered to reach useful conclusions on behalf of decision-makers, from whom they are deliberately sequestered. This separation of function provides an interesting perspective on the interaction between analysis and decision-making.
The intelligence analyst faces a number of challenges, from which broader lessons can be drawn. First, they must tailor their work to be timely and relevant to the decision-maker, often in an environment in which the consequences of inaccuracy can be grave. Second, they are required to address analytical problems with a very wide range of characteristics. Intelligence problems include analysis of highly-measurable phenomena, such as soil sample characteristics or seismic activity that generate hard data of a kind that is amenable to statistical analysis. They also encompass poorly-understood, complex concepts such as population sentiment that produce little in the way of structured diagnostic information. Some intelligence disciplines – such as the interpretation of audio signals or satellite imagery – involve heavy use of sensory capacities. Others – like the analysis of political motives – almost entirely involve abstract cognition. In other words, almost any category of problem has an analogue in intelligence, and as such it represents an instructive example for considering the future of the analyst.
This paper will necessarily be somewhat speculative due to the rapid advance of technology and its impact on analytical tasks. However, it is possible to foresee some of these impacts, provided we are systematic in our attempts to understand two things: the fundamental nature of the 'analysis' task, and the ways in which technology is affecting and will continue to affect the drivers that determine how that task is performed.
The first part of this paper will look at a simple production model of intelligence analysis, comprising inputs, process, and outputs, exploring what technology will do to each of these elements in the next couple of decades. The second part will draw these trends together to ask what they mean for the role of the analyst, and for the analytical organization.
2. Technology and the analytical task
A fundamentalist approach to defining analysis starts with its goal, which is to improve decision-making by passively reducing uncertainty.[ 2] In this sense, we are all analysts almost all the time, using a kaleidoscope of sense-data to navigate our world, test our beliefs about its behaviour and exploit our observations. But analysts in the workplace face a different sort of challenge: the decisions they need to inform are typically those of other people, whose objectives may be unclear, and whose interests are likely to be less well-illuminated and more distant in time and space. They must also often account for an adversarial environment, where rival organizations may actively seek to mask or confuse the truth.
Organizations have adapted to this problem in ways that bear interesting comparison to the solutions arrived at by natural selection. Organizations of sufficient size, who routinely deal with sufficiently complex or voluminous problems, tend to divide the tasks associated with agency into structurally-distinct roles.[ 3] 'Decision-makers' are the (usually) relatively-senior staff with elevated responsibility to whom it falls to 'make the call' about how the organization's resources are best deployed. They are often privy to various sources of insight in support of this task, one of which may be the deliberated-over product of in-house 'analysts'; however, they may also consult business-relevant 'intelligence' from other sources, scientific advice, knowledge of internal politics, and whatever they read on the way to work. Crucially, decision-makers are charged with objectives – which for the customers of intelligence analysts are ultimately to support the government's aims – and their job is to deploy whatever resources they have to achieve them. This is where intelligence analysts fit in: their job, briefly, is to tell political and military decision-makers what the impact will be of their doing one thing rather than another.
If we consider an analyst, an analytical team or an analytical organization as a 'black box' (see Figure 1), we see information going in, something happening inside and then some other information being pushed out the other end. In order to add value, the information that comes out needs to be worth more than the cost of the information going in, and the machine's running costs (i.e. paying and housing the analysts involved).
Graph: Figure 1. The analytical organization.
Within intelligence, the 'inputs' encompass almost anything of relevance. This comprises secret intelligence, but non-secret sources of information as well. It also includes the personal or organizational knowledge assets of the analysts involved. The 'process' is whatever occurs to transform (low value) inputs into (higher value) outputs: filtering, sorting, storing, retrieving, the generation and testing of hypotheses, forecasting and so on. The 'outputs', if they are to be valuable, should consist of information that is likely to change the beliefs of intelligence customers, so reducing the risk of decision-error. At present, this output may still take the anachronistic form of 'intelligence reports', but a five-minute conversation could also constitute an output.
If we are to understand how this machine – a key component of which is almost always a human analyst – will change over time, a good way to start is to look at how technology and other factors will influence the answers to the following three questions: How are the demands of customers, which condition the outputs, going to change? How are the information inputs going to change? And how is intelligence processing going to change?
2.1. How will intelligence outputs change?
Intelligence outputs are whatever intelligence organizations deliver to customers to make their beliefs more accurate and therefore their decisions less risky. There are two things we need to think about: the content of the intelligence output, and its delivery method.
Assuming, perhaps optimistically, that intelligence analysis organizations gear their outputs to the requirements of customers, and that these requirements are in turn influenced by the decisions that need to be made, we need to look at what kinds of decisions intelligence analysis customers need to make, and how these decisions will be different in future.
Prima facie, it is a bit of a puzzle as to why intelligence analysts cover the things they do (such as the political and military dispositions of other states, or the intent of organized criminals) and not other things (such as natural hazards, or domestic unemployment) that have similar effects or which might require similar skills to understand. The reason is almost certainly a game-theoretical one: the need for secrecy that arises when there are multiple agents with divergent interests whose actions affect one another. Broadly, intelligence analysis is valuable when we're playing against someone. So looking ahead, is the number of inimical actors, or their influence, likely to change in the coming decades?
It is commonly believed that the world is becoming an 'ever more dangerous place'. DCDC's Global Strategic Trends, which underpins many of the UK MOD's planning assumptions, says:
Out to 2040, there are few convincing reasons to suggest that the world will become more peaceful. Pressure on resources, climate change, population increases and the changing distribution of power are likely to result in increased instability and likelihood of armed conflict.[ 4]
But the statistics paint a long-term picture that is surprisingly rosy: violence, as measured by a range of figures including deaths from armed conflict, is currently at a historical global low.[ 5] Analysis of the types of conflicts entered into by the UK (and others) suggests that in terms of scale, symmetry, frequency, and so on, the trend over the last few decades has been dominated by randomness.[ 6] In battles between intuitive political forecasting and statistics, the statistics usually win.[ 7] But even if the more pessimistic view is correct, it is unlikely that the core problem-set faced by defence and security decision-makers will change significantly in the next few decades, at least in comparison to the speed of change that we expect to see in information technology (see sections 2.2 and 2.3 below).
This means that the kinds of problems faced by intelligence analysis customers – problems of state security, appropriate use of the military, coercion and deterrence – are unlikely to change significantly over the next few decades. The intelligence customer of 2050 would be able to sit down with his counterpart of 1950 and share interests, problems and stories. The gulf in experience is likely instead to be in the volume of intelligence and the way it is received.
Optimal information searching is a complex problem. By definition, you don't know what you don't know; therefore, deciding where to look for information, and when to stop looking, is a convoluted trade-off of benefits, costs and risks. In an environment when information is scarce but easily-obtained, the optimal solution is to consume it all; for an information provider, if you don't have much of it, you should deliver it all. But where information is abundant, the optimal solution is a lot more intricate, and involves a dynamic alternation between scanning and hunting. This is why information search technologies have had to develop alongside the dramatic fall in the cost of information storage. We now navigate the information environment using a set of dynamic technologies, including crowdsourcing (recommendations from Facebook or Twitter), tagging (of YouTube videos), and narrowly-intelligent search tools (like Google).
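As a rough illustration of this trade-off (our sketch, not the authors' model), the snippet below implements a marginal-value stopping rule: a searcher keeps consuming sources, ranked by expected gain, while the next source's expected gain still exceeds the cost of reading it. The gain figures and cost are invented, and the rule is only a caricature of optimal search behaviour.

```python
# Hedged sketch of a 'stop searching' rule under diminishing returns.
# The expected-gain curve and per-source cost are invented numbers.
def search(expected_gains, cost_per_source):
    consumed, total_gain = [], 0.0
    for rank, gain in enumerate(expected_gains):  # sources ranked best-first
        if gain <= cost_per_source:               # marginal value no longer pays
            break
        consumed.append(rank)
        total_gain += gain
    return consumed, total_gain

# Expected gain falls off quickly once the best sources have been read.
gains = [5.0, 3.0, 1.5, 0.8, 0.4, 0.2]
print(search(gains, cost_per_source=1.0))  # ([0, 1, 2], 9.5): stop after three
```

When information is scarce, every source clears the cost bar and the searcher consumes it all, which matches the article's observation; abundance is what makes the stopping decision non-trivial.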
For intelligence analysis, the most important element of this shift in information-consumption patterns is that all of us are now used to searching for information when we need it, and being alerted to information that we might find interesting. The still-current analytical model of requesting, analysing, drafting, editing, and finally publishing already looks extremely old-fashioned. The reasons for its endurance in the UK are many, and may include cultural isolation, security concerns, a failure to invest in modern interactive information technology and the compartmentalization of analysis to demarcate it from decision-making and policy formation. The analytical production line is optimized for a highly edited and version-controlled publication regime. But in many other domains the appetite, and the technological possibilities, for ever greater collaboration and customization are already breaking down the barrier between information creator and information user. It is sensible to expect that intelligence analysis will eventually follow.
The dynamic, customer-driven model of information provision – which is significantly more efficient, if it can be done properly – requires a major shift in the current model of intelligence production. At present, analysts produce bespoke assessments to request, in response to developments, or in line with production battle-rhythms. But this analyst-to-customer-pipeline is too low in bandwidth and insufficiently agile either to cope with what customers expect, or to exploit what technology offers. A customer-driven information model instead requires that analysts effectively curate a knowledge-base that can be interrogated by customers, and it requires a set of tools to enable customers to do that. The result is an information consumption paradigm that more closely approximates optimal search behaviour in an information-rich environment.
This model for the delivery of intelligence production has two important implications: it means that analysts and customers will no longer interact through the direct transfer of information from one to the other (they are both interacting with the information base), and it automates some of the drudgery associated with the retrieval and distribution of stored information that used to be a core part of the analytical role. We will look at what this leaves – or opens up – for the role of the human analysts in section 3 below.
2.2. How will intelligence inputs change?
Data comes from everywhere. As the cost of information storage has fallen (see Figure 2), and our lives have become increasingly sensor-rich and connected, data is routinely collected about everything every individual does. Governments are deliberately and openly publishing huge collections of data to invite academic and entrepreneurial innovation, while secret intelligence collection technologies increase the available classified data all the time. As new data collection systems have come online, data volume has risen, along (in most cases) with its quality. More information, incorporating more inherent structure, is available to the analyst than ever before. Cisco report that by 2017 the global IP traffic will pass the zettabyte threshold (1000 exabytes).[ 8] Any of this information, if it could be used effectively by analysts, could potentially inform intelligence questions of any kind.
Graph: Figure 2. Information storage costs have fallen exponentially (RAM costs, US$/MB). Source: John McCallum.
But data availability already massively outstrips the capacity of humans to assimilate it, and of corporations and governments collectively to analyse using traditional methods. In response, there has been an explosion of tools and techniques for manipulation of vast data sets, while many-to-many models of mass collaboration (e.g. Wikipedia, Facebook, TripAdvisor) provide new mechanisms for creating, organizing and sharing knowledge. Data is changing not only in volume, but also in connectedness and depth. In Too Big to Know,[ 9] David Weinberger suggests that:
The final product of networked science is not the knowledge embodied in self-standing publications. Indeed, the final product of science is now neither final nor a product. It is the network itself – the seamless connection of scientists, data, methodologies, hypotheses, theories, facts, speculations, instruments, readings, ambitions, controversies, schools of thought, textbooks, faculties, collaborators, and disagreements that used to struggle to print a relative handful of articles in a relative handful of journals.
It is no longer possible – or necessary – to collate, package and store all of the information that analysts need in one place. Closed, authoritative, complete, in-house data repositories will soon look as outdated as entirely physically stored information repositories (e.g. paper media libraries). The information that analysts need is increasingly transient, in streams, with a mixture of machine and human indices tying it together.
What this means is that future human analysts will not be able to interact directly with all the relevant data and information that, in previous eras, they might be expected to know and to reel off as needed. Instead, interaction with data will increasingly be via tools that 'do things' to the data before presenting it (in whatever form) to analysts. These tools will perform a number of functions such as filtering, categorizing, abstracting and visualizing. They will enable analysts to navigate the information-space dynamically as the picture is refined. This also means that analysts will no longer perform these tasks (for example, of collation or indexing) themselves, relying instead on tools that do this automatically. The effect will be to distance analysts from the data, enabling them to focus on its import and significance from a loftier perspective.
2.3. How will the intelligence process change?
'Processing' in this context is a catch-all term for anything that an analyst, or an analytical organization, does with information in order to produce higher-value information. There are lots of ways that intelligence processing adds value. Traditionally, intelligence organizations could be valuable simply by filtering, storing and enabling retrieval of information and knowledge. But the analyst's sine qua non – the cognitive heart of the analytical process – is the carrying-out of inference tasks. These involve going beyond the data by using it to test hypotheses that are of relevance to customers' decisions. Sometimes labelled the 'so what' factor, inference is where information is transformed into beliefs about the world that are (if the process is working) more accurate than before, and to which are ascribed evidentially-justified levels of uncertainty.
Inference involves two key elements, which are the identification of hypotheses or scenarios of interest, and the testing of those hypotheses using information, allowing probabilities to be assigned to them. These are very distinct tasks. Hypothesis generation, broadly, involves taking data and using it to identify possible truths that are not necessarily contained within the finite dataset. Hypothesis testing, on the other hand, involves starting with a hypothesis, and subjecting it to the data. The compatibility of a hypothesis with the data will, when compared with that of other hypotheses, determine its probability in the light of the available information.
Importantly, the output of hypothesis testing is in some sense 'contained' within the information one has: the hypotheses themselves and the data. The output of hypothesis generation, on the other hand, is not straightforwardly 'contained' in the data: it involves going beyond the data to identify propositions that might help explain that data.
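The testing half of this division is the part that lends itself to mechanical treatment. As a rough illustration (ours, not the authors'), the sketch below applies Bayes' rule to two invented hypotheses given a single piece of evidence; the hypothesis names, prior and likelihood values are made up for the example, not drawn from any real assessment.

```python
# Minimal Bayesian hypothesis-testing sketch. All numbers are invented.

def update(priors, likelihoods):
    """Return posterior P(H | E) for each hypothesis H via Bayes' rule."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    evidence = sum(joint.values())            # P(E), the normalizing constant
    return {h: p / evidence for h, p in joint.items()}

# Two rival hypotheses about an adversary, starting from a uniform prior.
priors = {"invasion_planned": 0.5, "exercise_only": 0.5}

# P(observed troop movements | H): hypothetical diagnostic values.
likelihoods = {"invasion_planned": 0.8, "exercise_only": 0.3}

print(update(priors, likelihoods))
# {'invasion_planned': ~0.727, 'exercise_only': ~0.273}
```

Note that the machine never proposes "invasion_planned" or "exercise_only"; the hypothesis space itself is an input, which is exactly the generation task the article reserves for humans.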
In humans, these analytical tasks are conducted in parallel rather than serially, and are each associated with very distinct analytical styles. Hypothesis generation is optimized by collaborative, creative, open, exploratory methods, while hypothesis testing is optimized by serial, critical-thinking, closed, data-driven methods. The distinction between these two tasks, and the way that they are best performed, is (not coincidentally) mirrored in the 'dual process' architecture of the brain's cognitive machinery.
In previous epochs, technologies for external information storage and retrieval systems were unwieldy and unreliable, and information-processing technology was laborious and expensive to use. Until the proliferation of microprocessors and effectively-free data storage, the human brain was the cheapest, fastest and most versatile instrument available to perform many of these tasks. But this is no longer the case.
The idea of intelligent machinery is perhaps thousands of years old.[10] But the modern artificial intelligence programme arguably only began in the Enlightenment, and particularly with David Hume, whose Enquiry Concerning Human Understanding (1748) was the first systematic attempt to understand what cognition actually involves, in what we would now think of as engineering terms.
It was not until the twentieth century that technology emerged that was capable of performing cognitive tasks. It was not until the twenty-first century that artificial intelligence began to be commonplace enough to influence our daily lives in the form of search engines, face and voice recognition, satellite navigation, digital assistants, robotic vacuum cleaners and countless other tangible ways. This use of 'artificial intelligence' may be widespread, but it does not take homogenous form. Demonstrable progress in artificial intelligence has largely proceeded through the development of many different artificial intelligences that are narrowly-focused on solving specific tasks, such as playing chess, indexing web pages, creating recipes, plotting routes and so on. These intelligences remain unable to step outside their domain: you can't play chess against a satnav. The Holy Grail for artificial intelligence research is the development of an artificial general intelligence: a machine powerful enough to learn, solve problems, and make inferences about the world unbounded by domain, in much the same way that humans appear able to.[11]
In contrast to the algorithmically-driven approaches to developing narrow artificial intelligences – where the challenge is to work out, for example, what procedures describe 'being good at chess' or 'finding a face in an image' – there is still some fairly-fundamental uncertainty about what kind of approach is needed for an artificial general intelligence (AGI); indeed, the term itself was only coined in 1997. One avenue aims to get to an AGI from the bottom up, bolting together lots of narrow AI modules to create a passable general intelligence. Another takes a 'top down' route: to attempt to develop a learning architecture of sufficient power and generality that it can just be dropped into the world and left to get on with it. A third route is brain emulation: to piggyback on evolution's hard work by building a machine that is essentially a copy of the human brain. At present, none of these is emerging as the clear programmatic winner, and most observers think that the development of an AGI is still several decades away, although most agree it would be the most important development in human history.[12]
The two fundamental tasks of inference that we discussed above – hypothesis testing and hypothesis generation – neatly align to the distinction between narrow and general artificial intelligence. Hypothesis testing is algorithmic. Provided you have specified your hypotheses in a sufficiently-accurate way, that you have enough memory and processing power, and that there is enough structure in the data, hypothesis testing is just number-crunching. Hypothesis generation, however, is not (yet) algorithmic, because we still do not understand the processes and mechanisms necessary to perform the task. In other words, we have not yet specified in procedural terms what a thing needs to do to be able to generate genuinely-novel hypotheses, and we don't really understand why human brains are capable of it.
What this means in practice is that machines are getting demonstrably and rapidly better at using data to test theories, but are not yet able to generate those theories in the first place. To translate this into everyday life, we can rely on machines to find the cheapest flight if we know we want a flight, calculate when we need to leave for the airport assuming that's what we want to do, alert us to likely traffic problems if we are driving there, and to find the best route from our house if getting to the airport is important. But we can't yet ask how to advance our company's market penetration in Europe, and therefore whether flying to Stuttgart for a meeting with a marketing firm is really worth it. There is too much complexity and too little measurability in most real-world systems for these kinds of unbounded problems to be represented in artificial problem-solving architectures.
In the near future at least, humans will still be needed to generate hypotheses or, to phrase it another way, find potential answers to questions. In an intelligence context this means identifying potential answers to questions like what a country is planning to do next, how a group's tactics will evolve, or how a leader will respond to sanctions. But analysts will increasingly rely on machines to test those hypotheses: for example, to ask how likely a country is to be planning an amphibious invasion, a group is to be moving towards the use of IEDs, or a leader is to be ousted.
To be clear though, in the near future machines will not do this unaided. Analysts will not type 'will President Jones be ousted in a coup?' into a terminal and be given a probability, in the way perhaps envisioned in 1960s Hollywood sci-fi films. Instead, analysts will utilize combinations of tools to build models and interrogate intelligence data, in much the way that we might use TripAdvisor, a weather app, and a journey planner to decide where to meet for lunch in Central London, and whether to take an umbrella with us.
In this technology scenario, analysts will be doing a number of things: working through and helping unpick new or poorly-defined problems with customers, selecting or building appropriate models for new types of problem, attempting to explain new or anomalous data, identifying potentially-diagnostic data sources, and essentially doing the messy 'real world' stuff that machines won't be able to. When problems become sufficiently well-defined, however, the analyst will create structured representations of them using appropriate tools, and hand off to the machines. In a sense, the analyst will be there to assist the computer with things it can't do, rather than vice versa, although the relationship is really one of mutual support. This picture of the analyst's role has significant implications for the workflow, toolkit and skills of the future analyst.
3. The future role of the analyst
Analysis is a fundamentally cognitive activity. Artificial intelligence is the term we give to tools that are designed to perform cognitive tasks, and over the next couple of decades these tools will become increasingly effective, easier to use and fuelled by boundless data. In future, the analyst's role will fit around the machinery in the same way that a fighter pilot or car mechanic's role is moulded today. By 2035, the analyst of 2015 will seem positively impoverished, and the idea of an analyst relying on their brain to do everything will seem extraordinary – like the fact that people used to remember phone numbers. What will these changes mean for the workflow and tasking of the analyst?
3.1. The analytical workflow
The traditional (and still most prevalent) model of intelligence analysis places the analyst at the centre of the information flow (see Figure 3). Analysts 'receive' information (via various bespoke feeds), hunt for missing information, decide what is important and what is not, and collate and manage their own (and the organization's) knowledge base using a combination of their own memory and external storage.
Graph: Figure 3. Traditional analysis organization.
The scenario we describe in the previous section implies, however, a very different model. Analysts are no longer triaging information. Instead, automatic searching, indexing, categorizing and structuring processes are continually ingesting information (or indices of it) into the organizational knowledge-base. At the lowest levels, this structuring will be on a syntactical level, but in the next two decades we envisage considerable advances in the sophistication of semantic data-structuring: the automatic identification of people, places, sentiments, movements, associations, and so on. All of this will happen before human analysts go anywhere near the raw data itself (see Figure 4).
Graph: Figure 4. Future analysis organization.
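As a hedged illustration of what the lowest tier of this semantic structuring might look like, the sketch below uses spaCy, an off-the-shelf NLP library, to pull people and places out of an invented report fragment. The model name is spaCy's standard small English pipeline, the sentence is made up, and a production system would of course be far more elaborate.

```python
# Sketch of automatic semantic structuring: named-entity extraction.
# Assumes spaCy is installed and the small English model downloaded
# (python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")

report = ("President Jones met military commanders in Stuttgart "
          "on Tuesday to discuss naval exercises.")

doc = nlp(report)
for ent in doc.ents:
    print(ent.text, ent.label_)   # e.g. Jones PERSON, Stuttgart GPE, Tuesday DATE
```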
Instead of triaging all the information 'as it comes in' and responding primarily to this information flow (a hopeless endeavour given the volumes involved), the future analyst's interaction with information will be a more natural (and efficient) one, driven by both requirement and salience. These flow in opposite directions: requirement is what the customer wants to know about the world, and salience is what the customer ought to know about the world. One is driven by the customer, the other by the world, and the information organization is (or ought to be) positioned in the middle.
It's possible to speculate that most of the time, human analysts won't need to be involved in this process at all. The traditional low-level value-adding activities (filtering, categorizing, storing and retrieval) will be done at the machine level, with customers interacting with it via search and alert tools. These will be considerably more sophisticated than today's in their capacity to interpret the intent of the user, and will deliver information that is much more structured, visual and interactive. So when will analysts be involved? In the next couple of decades, humans will need to be involved when the relevant or salient questions relate to systems that are either too complex or not measurable enough for machines to cope with, or where the customer's decision-problem is novel or poorly-understood.
'Complexity' is defined in a number of ways, but in this context it refers to systems which are non-linear, subtle, intricate and best described qualitatively rather than quantitatively. Because of their highly-parallelized, holistic, imaginative neural architecture, humans will (until the development of an AGI) have the edge over machines in their capacity to characterize systems beyond a certain level of complexity. In the intelligence space, such systems might include the behaviour of whimsical leaders, population dynamics, political brinkmanship, and a host of other phenomena for which the data are relatively scarce compared to the dynamics of the problem. Many forecasting problems will fall into this category. Complexity is not, however, a binary quality, but a sliding scale, and as processing power and data grow, the space of systems for which human understanding surpasses that of machines will shrink concomitantly.
'Measurability' refers to the extent to which the most important characteristics of a system are capable of being compiled into a structured dataset, of the kind that machine inference depends upon. There is an overlap with complexity, but problems can be simple and easy for humans to understand, yet hard to capture quantitatively. Many 'human' phenomena fall into this category: how charismatic a leader is, how angry a mob, or how unjust a verdict. These are important kinds of phenomena for intelligence analysts, and we can expect humans to be required to help characterize them for some time to come.
Finally, humans will be needed where there is novelty. In circumstances where the customer faces a new kind of decision-problem or technological opportunity, the analyst will be needed to help bound the space of possibility, to work with the customer to define terms, to understand what outcomes are important and to identify potentially useful information streams. Over the last couple of years, the kinds of questions that could plausibly have fallen into this category might include the aims, drivers and growth trajectory of the Islamic State, the impact of financial sanctions on Russia's decision-making or Iran's willingness to compromise on its nuclear programme.
In summary, we expect the purview of analysts to move away from questions of 'situational awareness' – the compilation, processing and repackaging of data – and towards questions about the complex, the hard to measure and the novel. But what will analysts actually do with these problems, and how will they add value? In the future scenario we have described above, the role of the analyst is to help mediate between the customer and the information: to help assist the customer in defining their scenarios of interest, to help design analytical products (in whatever form) that best suit the requirement and decision time-frame of the customer and, perhaps most importantly, to be expert in the use of analytical tools that can be set to work on the data to help test hypotheses of interest and forecast decision-relevant outcomes. This leads us to the question of what these tools will be able to do.
3.2. The analytical toolkit
The successful construction of all machinery depends on the perfection of the tools employed; and whoever is a master in the arts of tool-making possesses the key to the construction of all machines... The contrivance and construction of tools must therefore ever stand at the head of the industrial arts.[13]
The brain is often the bottleneck in systems which employ machines and humans. This will be particularly true for the future intelligence analyst in an information-abundant environment. To maximize the analyst's effectiveness, the tools available must have the net effect of shrinking the information feed to a size that can be reasonably ingested by the analyst to perform the tasks outlined above. A back-of-the-envelope calculation suggests that the pol-mil intelligence analyst of 1995, covering a single second-tier country, would have to read only around 20,000 words a day to read everything published about it: secret intelligence, academic material and news, the last of which might well (on a good day) have consisted of nothing. This could be comfortably performed in a couple of hours, leaving the rest of the day for liaising with customers or producing written reports. Our estimate for the number of words an analyst of 2015 would need to read everything published (including local press, social media and so on) is closer to 200,000 a day – around three times more than would be possible assuming no time to do anything else.
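A quick arithmetic check of this estimate, using an assumed reading speed and working day (our figures, not the paper's):

```python
# Back-of-the-envelope check of the reading-load estimate.
# Reading speed and day length are assumptions made for illustration.
READING_SPEED_WPM = 170          # assumed words/minute for dense material
WORKING_DAY_MIN = 8 * 60         # assumed 8-hour working day

capacity = READING_SPEED_WPM * WORKING_DAY_MIN   # ~81,600 words/day

load_1995 = 20_000
load_2015 = 200_000

print(load_1995 / READING_SPEED_WPM)  # ~118 min: "a couple of hours"
print(load_2015 / capacity)           # ~2.5x capacity: "around three times more"
```

Under these assumptions the 1995 load fits comfortably into a morning, while the 2015 load overshoots a full working day by roughly the factor the authors cite.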
Analytical tools will therefore need to filter information to identify and present the most highly-informative items, either responsively (when the analyst is searching) or actively (when salient items appear in the feed). The search tools we have now are fairly dumb. They are based on word-, phrase- or synonym-matching, and rely on crowdsourcing to help discern relevance. Future information search tools will be more intelligent, and interpret the analyst's intent. Semantic search tools like Wolfram Alpha or IBM's Watson will increase in sophistication and, over the next couple of decades, we can expect to see them responding to requests for information in (perhaps a stilted form of) natural language. These tools will be used not just by analysts, but also by customers to interrogate data and to customize information feeds for their own needs.
Efficient data presentation is about much more than just finding the right information. Effective data tools will present information to analysts in a way that minimizes their cognitive load; given the immense sophistication of our visual cortices, this primarily means visualizing information where possible, through mapping, timelining, charting and other methods. But analysis is of course about far more than just finding and visualizing information. As we touched on above, inference tools designed to construct and test hypotheses will also be a key part of the toolkit. These will enable analysts to build 'models' of the world, run simulations, test forecasts and keep watch on developments to spot emerging indicators of scenarios of interest. At present, these are at a very low level of technological readiness, and exist only in very data-rich areas (such as physical sensing, medical diagnosis, cybercrime and so on). But in future we can expect to see tools that can form probabilistic judgements based on messy, incomplete data, with assistance as necessary from human analysts concerning complex or hard-to-categorize problems.
Finally, we can expect greater sophistication in the tools available to turn information into 'glossy product': structured reports, briefings, infographics and so on. Instead of expending cognitive effort creating these rather artificial communication tools, analysts will be able to generate, with one click, skeleton structures for such products, to customizable lengths, which will automatically identify what appear to be the most important or diagnostic elements of the information-base in relation to the subject matter.
3.3. The skills of the future analyst
The picture we have sketched above is of a significant shift in the role of the analyst in the next couple of decades. Instead of being the repository for the data (a task to which humans will be manifestly inadequate) the analyst will become a 'curator' – or perhaps 'librarian' – for decision-relevant information. Their role, as today, is to enable customers to receive the information they need, when they need it. But instead of the analyst producing this information themselves, they will use a sophisticated array of information tools to find, ask questions about, package and deliver it from wherever it happens to be, to whoever needs it. This will require a different set of skills compared to those used by the analyst of today.
We have envisaged a workflow in which analytical graft (collating, reading, remembering, etc.) is largely outsourced to machinery. Analysts will instead be relied on for their ability to think about and model systems, and help customers add structure to their messy and poorly-understood problems. If this is correct, the future analyst will need to use their creative faculties – imagining, hypothesizing, disentangling, analysing, playing, communicating – significantly more, and their critical thinking faculties somewhat less.
This scenario also implies that analysts will need to do more collaboration: with each other and with customers. There will be less need to ingest and assimilate information, and more of a need to understand customer requirements so appropriate responses can be designed, and to work towards making those requirements sufficiently formal that they can be represented as structured hypotheses for machine analysis.
The analyst will also need to collaborate with their electronic colleague. They will need to understand how to interact with data tools, what the most effective methods are, and why. This does not mean that analysts will have to be software engineers or data wizards, any more than a photographer needs to understand optical physics, or a pilot needs to know how to fix a jet engine. But analysts will need to accommodate the limitations of machines, which primarily means being more explicit and formal in their approach to methodology than is necessary today. At present, hypothesis formulation and testing happens largely unseen inside analysts' heads. In future, analysts will need to be able to externalize their network of assumptions and beliefs in order for machines to interpret them as statements about observable data. This suggests that tomorrow's cadre of analysts will need more explicit training or experience in method, reasoning, logic, and inference: a basket of skills that enable robust analytical structures to be spun from the tangled wool of messy problems, and represented within the literal mind of a machine.
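As an illustration of what such externalization might look like, the sketch below renders an informal analyst hypothesis as structured statements about observables that a machine could then update mechanically. The schema, indicator names, and every probability are invented for the example.

```python
# Hedged sketch: an analyst's hypothesis externalized as machine-readable
# statements about observable data. All names and values are illustrative.
from dataclasses import dataclass

@dataclass
class Indicator:
    observable: str      # something a collection system could actually report
    p_if_true: float     # P(observed | hypothesis true)
    p_if_false: float    # P(observed | hypothesis false)

hypothesis = {
    "claim": "Group X is shifting tactics towards IED use",
    "prior": 0.2,
    "indicators": [
        Indicator("procurement of detonator components", 0.7, 0.1),
        Indicator("online bomb-making chatter",          0.6, 0.2),
    ],
}

# Once externalized, the machine can update belief as indicators fire.
odds = hypothesis["prior"] / (1 - hypothesis["prior"])
for ind in hypothesis["indicators"]:          # assume both were observed
    odds *= ind.p_if_true / ind.p_if_false    # multiply by likelihood ratio
print(round(odds / (1 + odds), 2))            # ~0.84 under these invented numbers
```

The point of the exercise is the schema, not the numbers: each belief that previously lived inside the analyst's head is now a statement a machine can evaluate against incoming data.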
What of subject-matter knowledge? If this scenario is correct, the kind of subject-matter knowledge required will be different to today. Analysts won't need to know 'facts' as such – vast, well-structured datasets will do that – but they will need to be able to generate theories about the ways systems behave. These skills are more akin to the kind of expertise possessed by operational researchers, engineers, scientists and economists. The emphasis will not be on what an analyst knows but on how they think. Experience in breadth – the ability to draw abstractions and generalities about the characteristics of systems – will be more valuable, and experience in depth less so; that's what the database is for.
These kinds of general reasoning skills have been studied in detail over the last few years by the Good Judgment Project, an IARPA-sponsored tournament to find the best forecasters. The high-performing forecasters exhibited a range of analytical behaviours that probably correspond to the kinds of skills described above: the ability to envisage multiple scenarios, to think in terms of reference-classes and base rates, to be comfortable with uncertainty and attend to only the most relevant items of information, rather than all of it. Importantly for the analytical organization, the GJP has demonstrated that these skills are not innate but can be learned.
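Forecasting tournaments of this kind score accuracy with the Brier score, the mean squared difference between probability forecasts and what actually happened. A minimal sketch, with invented forecasts:

```python
# Brier score sketch: the standard accuracy measure in forecasting
# tournaments. Forecasts and outcomes below are invented.
def brier(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

forecasts = [0.9, 0.2, 0.7]   # an analyst's probabilities for three events
outcomes  = [1,   0,   0]     # what actually happened
print(brier(forecasts, outcomes))   # 0.18; lower is better, 0 is perfect
```

A scoring rule like this rewards exactly the behaviours listed above: calibrated comfort with uncertainty rather than confident point predictions.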
Analysts will also need to understand information design, visual and verbal. Part of their role will be to understand and to characterize the requirements of customers, and to use their toolkit to build outputs that meet them. There will be less requirement for formal briefing skills – standing up in front of a room and reading a script – although we foresee an increase in communication and collaboration.
How should analysts of today prepare for this future, which (if we are right) will come in their career lifetimes? The biggest change in the workflow that we foresee will be in the centrality of information tools to the analytical endeavour. Analysts will need to become adept at using these tools, thinking about why they are supposed to work, and understanding their limitations. The analyst who secretly hopes that IT will quietly 'go away' – and there are a few – is in for a severe disappointment. Analysts will need to be less subject-matter aware and more information aware. This entails building an understanding of the theory of inference, probability and uncertainty, data analytics and hypothesis-testing, visualization and information management.
3.4. The future of the analytical organization
This paper is about the future of the analytical task, but in covering this we have inevitably touched on the business model, skills-base and technological requirements of the future analytical organization. The picture we have sketched out above means a number of things for intelligence analysis organizations, some of which represent significant changes from the current way of doing business.
The viability of the future intelligence analysis organization will be founded on its ability to deliver information to decision-makers who need it. Our vision involves significantly more, and more effective, use of information technology than today, when computers are still primarily used to compose and distribute information rather than to perform cognitive tasks. But what is to stop it simply being a room full of servers and a good search engine? The answer lies in the analysts: the people who add value to information by knowing how to exploit it to reduce uncertainty, and are generally more expert at using tools than their customers. This is a well-trodden path: the widespread availability of ingredients and recipes has not eliminated chefs, professional photographers have not been swept away by easy-to-use cameras, and social media has not reduced demand for investigative journalism.
As these examples suggest, the analysis organization will no longer be able to add value simply by having information: secret intelligence feeds or analysts who 'know their stuff'. Customers will be able to find these things out for themselves. Instead, the recruitment and development of analysts will be crucial in giving the organization the edge.
The future analytical organization will be more networked, more collaborative and less hierarchical. Hierarchy serves a useful purpose when information-flows are constricted. But when everyone has access to all the information all the time, it merely creates organizational friction. Analysts will not necessarily have defined subject-areas as they do today, but – because they can, and because it's a more-effective use of knowledge assets – will be able to work on many problem areas, possibly in self-organizing teams, in a way that best suits their skills and preferences. Information organizations such as Google and Facebook already use this kind of model, and one day intelligence analysis organizations will follow.
Notes on contributors
Nick Hare has worked in various roles across the Ministry of Defence, the Cabinet Office and the intelligence community for 15 years, most recently as the Head of the Defence Intelligence Futures and Analytical Methods Team within the MOD, where he was responsible for professionalizing intelligence analysis within government. He founded Aleph Insights, a decision-making and analysis consultancy, in 2014.
Peter Coghill has worked for BAE Systems Applied Intelligence, assisting military and other government customers by designing bespoke analytical systems and delivering operational support. He has also designed electronic testing and manufacturing systems in industry, and has worked as a systems analyst in DSTL and Defence Intelligence. Peter's interest is in the intersection between information, analysis and information technology. He joined Aleph Insights in 2015.
Footnotes
1 Carl Frey and Michael Osborne, 'The Future of Employment: How Susceptible Are Jobs to Computerisation?', OMS Working Paper, Oxford Martin Programme on the Impacts of Future Technology, 16 August 2013 <http://www.futuretech.ox.ac.uk/future-employment-how-susceptible-are-jobs-computerisation-oms-working-paper-dr-carl-benedikt-frey-m>
2 'Passive' reduction of uncertainty involves processing information. 'Active' reduction of uncertainty involves making changes to the world to make it more predictable.
3 Jay R. Galbraith, 'Organization Design: An Information Processing View', Interfaces 4/3 (1974) pp.28–36 <http://www.jstor.org.ezproxy2.apus.edu/stable/25059090>
4 United Kingdom Ministry of Defence, Development, Concepts and Doctrine Centre, Global Strategic Trends – Out to 2040, 4th ed., January 2010, p.14 <https://www.gov.uk/government/uploads/system/uploads/attachment%5fdata/file/33717/GST4%5fv9%5fFeb10.pdf>
5 Steven Pinker, The Better Angels of Our Nature: Why Violence Has Declined (NY: Viking 2011).
6 See John Medhurst, 'Still Agile? Back to the Future with Agile Forces' <http://ismor.cds.cranfield.ac.uk/30th-symposium-2013/still-agile-back-to-the-future-5-years-on../@@download/paper/30ismor%5fmedhurst%5fpaper.pdf>
7 Philip E. Tetlock, Expert Political Judgment: How Good Is It? How Can We Know? (Princeton, NJ: Princeton UP 2005).
8 Cisco Systems, 'The Zettabyte Era: Trends and Analysis', May 2015 <http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/VNI%5fHyperconnectivity%5fWP.pdf>
9 David Weinberger, Too Big to Know: Rethinking Knowledge Now That the Facts Aren't the Facts, Experts Are Everywhere, and the Smartest Person in the Room Is the Room (NY: Basic 2011).
10 'She found Hephaestus running back and forth to his bellows, sweating with toil, as he fashioned twenty triple-legged tables to stand round the walls of his great hall. He had fitted their legs with golden wheels, so they might take themselves to the gods' assembly if he wished, and roll home again, a wondrous sight' (The Iliad, p.xviii).
11 Or perhaps our cognitive domain is bounded too, by our evolutionary heritage? How would we know if it were not?
12 Tim Urban, 'The AI Revolution: Road to Superintelligence' <http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html>
13 Charles Babbage, The Exposition of 1851: Views of the Industry, the Science, and the Government of England (London: John Murray 1851) p.173.
By Nick Hare and Peter Coghill
Reference:
Hare, N., & Coghill, P. (2016). The future of the intelligence analysis task. Intelligence & National Security, 31(6), 858–870. https://doi-org(dot)ezproxy2(dot)apus(dot)edu/10.1080/02684527.2015.1115238

Essay Sample Content Preview:

Structured Analytic Techniques
Student’s Name
Institutional Affiliation
Structured Analytic Techniques
Analysts utilize various techniques, such as contrarian, diagnostic, and imaginative thinking, to minimize perceptual and cognitive biases. Using different methods enhances the quality of analysis while preventing personal preferences from interfering with the research.
Diagnostic Techniques
Diagnostic techniques are effective at making intelligence gaps transparent. The key assumptions check among the diagnostic methods is most applicable during the early stages of analysis, when analysts can still review and recheck the assumptions underpinning their work (Weinberger, 2011); even so, analysts ought to validate their beliefs throughout. Notably, the Analysis of Competing Hypotheses (ACH) is ideal for enabling analysts to minimize perceptual and cognitive biases. The technique improves analytical outcomes when large volumes of data must be weighed, and it helps address shortcomings that could otherwise produce inconsistent results. The method is efficient because it requires the full body of evidence to be considered against every hypothesis, rather than filtered through the preconceived logic that might otherwise dominate the standard analysis.
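As a rough sketch of the mechanics of ACH (our illustration, not taken from the tradecraft primer), the snippet below scores invented evidence items against two hypothetical hypotheses and favours the one with the fewest inconsistencies, reflecting the technique's emphasis on disconfirmation. All hypothesis names, evidence items, and scores are made up.

```python
# Minimal Analysis of Competing Hypotheses (ACH) sketch.
# Columns correspond to the evidence list; all values are invented.
evidence = ["troop movements", "no logistics build-up", "leave not cancelled"]

scores = {                    # +1 consistent, 0 neutral, -1 inconsistent
    "H1: attack planned":  [+1, -1, -1],
    "H2: training cycle":  [+1, +1,  0],
}

# ACH focuses on disconfirmation: count the inconsistencies per hypothesis.
inconsistency = {h: sum(1 for s in col if s < 0) for h, col in scores.items()}
best = min(inconsistency, key=inconsistency.get)

print(inconsistency)   # {'H1: attack planned': 2, 'H2: training cycle': 0}
print(best)            # H2 is least inconsistent with the evidence
```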
Contrarian
The technique is split into Team A/Team B analysis and the devil's advocate. The former is more effective since it helps address any bias arising from a single viewpoint (Weinberger, 2011). By contrast, the devil's advocate only works to faci...