Saturday, February 28, 2015

The Right Answer?

It is strange how serendipity occasionally intervenes to link multiple lines of thinking, and the conversations that accompany them, until one's thinking on a subject crystallizes. In June 2014, Harvard Business Review published an article titled "Why Smart People Struggle With Strategy." The piece begins thus:
Strategy is often seen as something really smart people do — those head-of-the-class folks with top-notch academic credentials. But just because these are the folks attracted to strategy doesn't mean they will naturally excel at it. The problem with smart people is that they are used to seeking and finding the right answer; unfortunately, in strategy there is no single right answer to find. Strategy requires making choices about an uncertain future. It is not possible, no matter how much of the ocean you boil, to discover the one right answer. There isn't one. In fact, even after the fact, there is no way to determine that one’s strategy choice was “right,” because there is no way to judge the relative quality of any path against all the paths not actually chosen. There are no double-blind experiments in strategy.
When this crossed my digital desk this week, I was reminded of a slew of recent articles on the Service Academies (here, here, and here) and a great discussion that ensued on The Constant Strategist over the questions raised in the first of the linked articles. Here, two questions matter: What was the crux of the original issue, and where have I landed on the overarching questions?

The initial online debate in this exploration centered on the curriculum most appropriate to the education of military officers. What should be the emphasis in a liberal education intended to develop them deliberately? There are two--one might call them adversarial--camps in this debate centered on the relative importance of the sciences and the humanities. I have always found myself standing athwart the apparent chasm between these two positions. As a military analyst with too many graduate degrees in math, I have enormous sympathy for the technical side of this debate (perhaps selfishly, since another position would invalidate much of my education and professional life). But I began my life as a student of English Literature and I spent a formative interlude as a graduate student in military history and strategy, and I know the technocrat's approach to conflict and strategic planning is problematic. But since it is hard to ask that everyone know everything, what is the right answer?

In the end, I think appropriate diversity is the answer, at every unit of analysis from the individual to the population. The trick is to ensure both expertise in the population (i.e., someone somewhere has spent a life studying the topics of interest) and familiarity in individuals (i.e., we have all studied enough of the other that we can speak a common language and seek useful metaphors). That means we should encourage and incentivize both expertise and exposure in a variety of disciplines--math, statistics, physics, chemistry, engineering, history, anthropology, theology, literature, etc. But there's a catch.

In our world of military operations research analysts, so well trained in seeking optimality, the idea of external familiarity really matters, and this is why I'm writing. There is a distressing and problematic bias in the technical fields toward the existence of a "correct" answer, not unlike the assertion in the Harvard Business Review piece. It's how we're trained. In our education, there exists a provably true answer (at least within the constraints of our axiomatic systems) to most of the textbook questions we answer as we learn our trade. This is, in fact, part of the reason for my own shift once upon a time between literature and math as a chosen field of study. Certainty has a certain comfort and leads to fewer arguments between student and teacher.

I do NOT want to encourage anyone to not study the sciences, operations research, or (my own love) mathematics. And I do NOT want to encourage avoidance of the less technical disciplines. Rather, what I want to encourage is an appreciation of contingency in the application of the technical disciplines and a rigor in the application of the non-technical disciplines, especially in the context of militarily relevant questions.

Why? Fundamentally because our models and computational tools are by definition rife with assumptions. What if one or more of those assumptions are wrong? What if we forget some minor idea that turns out to be critical? What if our axioms don't work? (I'm looking at you, Economics.) What if optimality itself is a chimera?

For us, reading history of various kinds and actively considering the question of how our forebears (analytic and otherwise) erred is perhaps a useful remedy. The problem is not that one is smart or not. The problem is how and what one studies and with what intent.

The proverb says, "Iron sharpeneth iron." A suggested circular addendum to this wisdom is that the humanities temper the steel of the sciences while the sciences sharpen the analytic edge of the humanities.

Saturday, February 14, 2015

Seeking Truth

The last few posts I've penned for this forum (here and here) have danced around the edges--and occasionally jumped up and down on--the notion that we humans are flawed, cognitively compromised, and subject to some intrinsic constraints on our ability to see, understand, communicate, and act on the truth. Though this is not a new soapbox, I hadn't realized that this notion had taken over my writing and become as strident as it had. Then a good friend asked a simple question, and I found myself wrestling with the consequences of the human cognitive silliness on which I've been recently focused and what it means for truth in general and, perfectly apropos of this forum, truth in our analytic profession.

So, what poser did my wise friend propose? He offered three alternative positions based on the existence of truth and our ability to know it:
  1. There is a truth and we can grow to understand it.
  2. There is a truth and we cannot understand it.
  3. There is no truth for us to understand.
(Technically, I suppose there is a fourth possibility--that there is no truth and we can grow to understand it--but this isn't a particularly useful alternative to consider. As a mathematician and pedant by training and inclination, though, it is difficult to not at least acknowledge this.) 

The question is then where I fall on this list of possibilities. It's an important question, if for no other reason than where we sit is where we stand, and it becomes difficult, even hypocritical, to conscientiously pursue an analytic profession if we believe either the second or third position is the case. Strangely, though, I found this a harder question to answer than perhaps I should have, but here is where I landed:

At least with respect to the human physical and social universes with which we contend, there is an objective truth that is in some sense knowable and we, finite and flawed as we are, can discover these truths via observation, experimentation, and analysis.

In retrospect, my position on this question should have been obvious. I've been making statements that human cognition is biased and flawed, averring that this is a truth, and I believe it to be one. We can observe any number of truths in the way humans and the universe we occupy behave. I find on reflection, though, that there is a limit to this idea. Specifically, we can probably never know with precision the underlying mechanisms that produce the truths we observe. We may know that cognitive biases exist and we may be able to describe their tendencies, but (speaking charitably) we are unlikely to ever have an incontrovertible cause-and-effect model to allow us to interact with and influence these tendencies in a push-button way.

So, the trouble I have with truth is that we apply truth value to the explanatory models we create. Since these models are artificial creations and not the systems themselves, they must, by definition, fail to represent the system perfectly. Newtonian theories of gravity based on mass give way to relativistic theories of gravity based on energy. In some ways one is better than the other, but neither is true in a deep sense. Our models are never true in the larger sense. They may constitute the best available model. They may be "true enough" or "right in all the ways that matter." But both of these conditions are mutable and context-dependent. In a sense, I find myself intellectually drawn to the notion that truth in the contexts that matter to us professionally is an inductive question and not a deductive one.
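The gravity example can be made concrete with a little arithmetic. Here is a rough, illustrative sketch (the GPS scenario and the constants are my own addition, not part of the discussion above): Newtonian gravity predicts that a satellite clock and a ground clock tick at the same rate, while relativity predicts a small net drift that satellite navigation must correct for. For plotting a cannonball's arc, Newton is "true enough"; for navigation-grade timing, it is not — the "better" model depends entirely on the context of use.

```python
# Illustrative sketch only: comparing what two models of gravity predict
# for the clock of a GPS-like satellite relative to a clock on the ground.
# Newtonian gravity predicts zero drift; relativity predicts a net drift.
# Constants are standard published approximations, assumed for illustration.

GM = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
C2 = 299_792_458.0 ** 2    # speed of light squared, m^2/s^2
R_GROUND = 6.371e6         # mean Earth radius, m
R_ORBIT = 2.6571e7         # approximate GPS orbital radius, m
SECONDS_PER_DAY = 86_400

def relativistic_drift_microseconds_per_day() -> float:
    """Net clock drift of the satellite vs. the ground, first-order terms."""
    # General-relativistic term: the satellite sits higher in the
    # gravitational potential well, so its clock runs faster.
    gravitational = (GM / R_GROUND - GM / R_ORBIT) / C2
    # Special-relativistic term: orbital motion (circular orbit assumed)
    # makes the satellite clock run slower.
    velocity = (GM / R_ORBIT) / (2 * C2)
    return (gravitational - velocity) * SECONDS_PER_DAY * 1e6

newtonian_drift = 0.0  # Newtonian gravity: clocks everywhere tick alike
relativistic_drift = relativistic_drift_microseconds_per_day()
print(f"Newtonian prediction:    {newtonian_drift:.1f} microseconds/day")
print(f"Relativistic prediction: {relativistic_drift:.1f} microseconds/day")
```

A drift of a few tens of microseconds per day sounds negligible, but light travels roughly 300 meters per microsecond, so the "wrong" model's error compounds into kilometers of position error within a day — a small numerical way of seeing that no model is simply true, only adequate or inadequate for a purpose.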

In the end, I'm actually encouraged by this reflection, though the conclusion that models are and must be inherently flawed results in some serious consternation for this mathematician (soothed only by the clarity with which mathematicians state and evaluate our axiomatic models). I understand better what I'm seeking. I understand better the limitations involved. And, at the risk of beating a dead horse, I am more convinced of the need to put our ideas out in the world. This reflection might never have taken place if not for Admiral Stavridis and his injunction to read, think, and write.