Saturday, September 18, 2010
Frameworks for Understanding
Thank you, Mooch, for the opportunity to learn and share. Though I have long been an amateur thinker, the job title I have held for the past two years, "management analyst," asserts that I am now a professional thinker. Consequently, it is in my best professional interest to learn from my peers and to make myself available to them, to the extent I am able. For now, I will not detail my background, because my current thinking is that I want the ideas I present to stand on their own. As you will see, they require critique, maturing, and possibly expansion.
So, friends, without further introduction, here is one of my thoughts on "analysis." Analysis, as I define it, is seeking the truth of a particular matter with the intent to understand it in order to make useful decisions. To think productively, we need to understand the frameworks we use. One framework alone will not do, for reasons that seem intuitively obvious to me (let me know if you disagree), and omitting any one of them puts daylight between our analysis and reality/truth. Please let me know what other frameworks I have overlooked. I will grant that they overlap, but I consider each a primary driver for how things work.
Frameworks:
Deterministic (because some things happen because another event caused them)
Random (because some things happen on a randomly distributed basis)
Chaotic (because some things happen on a non-random, non-periodic, unpredictable basis)
Deliberate (because of free will)
Bias based (because we are human)
My goal is to increase in knowledge, understanding, and (someday) wisdom.
Ogre
Friday, September 17, 2010
Lies, damn lies, and ...
“There is always a well-known solution to every human problem—neat, plausible, and wrong.” H.L. Mencken
On the subject of analytic and scientific truth…I haven’t entirely sussed out what this means to analysis, to analysts, and to me, but it leaves me questioning an awful lot of the things I’ve seen, read, and done. If nothing else, it leaves me with an even greater skepticism than I had when I woke up this morning.
I've been reading a little book titled Wrong: Why Experts Keep Failing Us--and How to Know When Not to Trust Them, by David H. Freedman. In this case, "experts" refers to "scientists, finance wizards, doctors, relationship gurus, celebrity CEOs, high-powered consultants, health officials, and more"—in other words, pretty much everyone who offers advice or conclusions—and the book is all about the many and varied ways they (and we) get it…well…wrong most of the time. According to Freedman, we live in a world of "punctuated wrongness," a world where, according to one expert (the irony here is intentional on my part and acknowledged on Freedman's), "The facts suggest that for many, if not the majority, of fields, the majority of published studies are likely to be wrong…[probably] the vast majority." This is a pretty stunning claim. In fact, if I think about this issue as a mathematician—the area of emphasis for most of my formal training and publication—I'm simply staggered by it. But my field is a little special, I suppose, since "truth" (within the axioms) is pretty easy to spot. We may be the only discipline wherein one can lay legitimate claim to proving anything, since ours is probably the only completely deductive intellectual endeavor. (That still doesn't mean we have any greater access to Truth, though.) In other fields of inquiry, the fundamental process is inductive—observe, hypothesize, observe, adjust, observe, adjust, and so on—and claims to proof are problematic in the extreme—which doesn't stop anyone and everyone from using the phrase "studies show" as if they're quoting from the Book of Heaven. But I also have a fair bit of training in statistics—both on the theory side and in applications—and one of Freedman's explorations of "wrongness" really hit home.
Why do we use statistical methods in our research? Basically, we want to account for the fact that the world—as we observe it—is stochastic (although whether it is fundamentally stochastic might be an interesting debate) and to ensure that the measurements we make and the inferences we derive from them are not (likely to be) statistical flukes. So, when we claim that some observation is "statistically significant" (not to be confused with a claim that something is "true"—a mistake we see far too often, even in our professional crowd), we mean there is some known probability—the level of significance—that we'll make a (Type I) mistake in our conclusion by observing a statistical fluke. For example, a level of significance of .05 indicates (kinda sorta) a 5% chance—assuming the effect we're testing for does not actually exist—that chance alone would hand us results this extreme, and that our inferences/conclusions/recommendations would be "wrong." 1 in 20? Not so bad. How do we make the leap from there to "the majority of published studies are…wrong"?
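To make that concrete, here is a minimal simulation sketch (mine, not Freedman's; the z-test setup and all parameter values are assumptions for illustration). We run many studies in a world where the effect truly does not exist, test each at the .05 level, and watch roughly 5% of them come up "significant" anyway.

```python
import math
import random

random.seed(42)

Z_CRIT = 1.96       # two-sided critical value for a .05 significance level
N = 30              # observations per simulated study (assumed)
TRIALS = 100_000    # number of simulated studies (assumed)

false_positives = 0
for _ in range(TRIALS):
    # Draw from a standard normal: the true mean is 0, so the effect
    # being "tested" does not exist. Any significant result is a fluke.
    sample = [random.gauss(0.0, 1.0) for _ in range(N)]
    sample_mean = sum(sample) / N
    z = sample_mean * math.sqrt(N)   # z-statistic with known sigma = 1
    if abs(z) > Z_CRIT:
        false_positives += 1         # a Type I error

print(f"Fluke rate: {false_positives / TRIALS:.3f}")  # prints ~0.050
```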
As an exercise for the student, suppose 20 teams of researchers are all studying some novel hypothesis/theory and that this theory is "actually" false. Well, (very roughly speaking) we can expect 19 of these teams to come up with the correct ("true negative") conclusion while the 20th experiences a "data fluke" and concludes the mistaken theory is correct (a "false positive"). With me so far? Good. The problem is that this makes for a wonderful theoretical construct and ignores the confounding effects of reality—real researchers with real staff doing real research at real universities/companies/laboratories and submitting results to real journals for actual publication. Freedman cites estimates from another set of experts (again with the irony!) indicating that "positive" studies confirming a theory are (on the order of) 10 times more likely to be submitted and accepted for publication than negative studies. So, we don't get 19 published studies claiming "NO!" and one study crying "YES!" We see 2 negative studies and 1 positive study (using "squint at the blackboard" math)…and two out of three ain't bad. (Isn't that a line from a song by Meat Loaf? I think it's right before "Paradise by the Dashboard Light" on Bat Out of Hell. Anyway…) The other 17 studies go in a drawer, go in the trash, or are simply rejected. Cool, huh? Still…we don't have anything like a majority of published studies coming out in the category of "wrong." In the immortal words of Ron Popeil, "Wait! There's more!"
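If you want to fiddle with the arithmetic yourself, here is a back-of-the-envelope sketch in Python (mine, not Freedman's). The 20 teams, the .05 fluke rate, and the 10-to-1 publication edge come from the paragraph above; the exact publication rates are assumptions.

```python
TEAMS = 20
ALPHA = 0.05              # chance each team is fooled by a data fluke
POS_PUB_RATE = 1.0        # assume positive results always get published
NEG_PUB_RATE = 0.1        # negatives assumed 10x less likely to appear

false_positives = TEAMS * ALPHA            # 1 team cries "YES!"
true_negatives = TEAMS - false_positives   # 19 teams conclude "NO!"

published_wrong = false_positives * POS_PUB_RATE   # 1 positive study
published_right = true_negatives * NEG_PUB_RATE    # ~2 negative studies
unpublished = true_negatives - published_right     # ~17 go in a drawer

print(f"Published: {published_wrong:.0f} wrong vs. {published_right:.0f} right")
print(f"Left in the drawer: {unpublished:.0f}")
```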
Statistical flukes and "publication bias" aren't the only pernicious little worms of wrongness working their way into the heart of science. "Significance" doesn't tell us anything about study design, measurement methods, data or meta/proxy-data used, the biases of the researchers, or a brazillion other factors that bear on the outcome of an experiment—and ALL of these affect the results of a study. Each of these is a long discussion in itself, but it suffices to say "experts agree" (irony alert) that they are all alive and well in most research fields. So, suppose some proportion of studies have their results "pushed" in the direction of positive findings—after all, positive studies are more likely to get published and to result in renewed grants, professional accolades, and adoring looks from doe-eyed freshman girls (because chicks dig statistics)—and suppose that proportion is in the neighborhood of an additional 20%. Accepting all these (not entirely made up) numbers, we now have 5 false positives from the original 20 studies. If all five of the "positive" studies and the expected proportion (one tenth) of the remaining "negative" studies get published, we expect to see 7 total studies published, of which 5 come to the wrong conclusion. 5 of 7! Holy Crappy Conclusions, Batman! (Don't go reaching for that bottle of Vioxx to treat the sudden pain in your head, now.)
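Extending the same sketch with that extra 20% (again, my illustration of the paragraph's numbers, not anything from Freedman's book):

```python
TEAMS = 20
FLUKES = 1                         # false positives from chance alone
PUSHED = round(0.20 * TEAMS)       # 4 more studies "pushed" positive
NEG_PUB_RATE = 0.1                 # negatives still 10x less publishable

false_positives = FLUKES + PUSHED              # 5 wrong "YES!" studies
honest_negatives = TEAMS - false_positives     # 15 correct "NO!" studies

published_wrong = false_positives                          # all 5 appear
published_right = round(honest_negatives * NEG_PUB_RATE)   # ~2 appear

total = published_wrong + published_right
print(f"{published_wrong} of {total} published studies are wrong")  # 5 of 7
```

Notice that the lever doing most of the damage here is the publication filter, not the fluke rate.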
Freedman, following all of this, goes on to warn that we should not hold science as a method, or scientists themselves, in low regard because of these issues. They are, in fact, our most trustworthy experts (as opposed to diet gurus, self-help goobers, television investment wankers, and other such random wieners). They're the very best we have. Scientists are at the top of the heap, but "that doesn't mean we shouldn't have a good understanding of how modest a compliment it may be to say so."
CUMBAYA! It’s no wonder we poor humans muddle through life and screw up on such a grand scale so often! I need a drink, and recent studies show that drinking one glass of red wine each day may have certain health benefits…