19 September 2013


Sometimes, truth be told, one is a day late and a dollar short. For the past while, I've been reading up on new and revised stat stuff. And I've rediscovered the angst of dysalgebria. Don't bother running to your on-line dictionary, it's not there. But, as it turns out, I can't claim to have coined it. Dang. Here's a cite for the word:
Dysalgebria -- students with average to above average IQ can master calculations but can not master algebra (Nolting, 2000).

Oddly, perhaps, I fiddled with how to spell a word for the condition; my first inclination was dysalgia, using only the root. That one, apart from some vague site references, appears to be untaken.

Anyway, the condition I'm discussing isn't exactly what Nolting defines, but rather the Einstein (by legend, in any case) syndrome: needing the assistance of math types to resolve the ideas into math formulae.

So, to continue the saga. I've been mulling over writing up a piece on how it is that so many are math-phobic, even math-disdaining, when I read my dead-trees Times this morning and find an op-ed piece nearly on point. Dang. The author's point is not quite what I want to prattle on about, but there are enough meaty quotes to impel the musing.
As a mathematician, I can attest that my field is really about ideas above anything else.

And that's the key, it seems to me. This past Friday, Maher had both Matt Taibbi and Bill Nye: Taibbi on the panel, and Nye as the fifth chair later. Taibbi made a remark, tied to a recent piece in "Rolling Stone" apparently, that student loans were the next catalyst for collapse. Which led to a short digression on education and the cost of same. In the course of that digression, Nye averred (paraphrasing) that we should incentivize the math nimble to make shit as real engineers, rather than wasting themselves in banksterism. Knock me over with a feather. I know of Nye, but have never seen any of his TV bits. He was quite animated. As I've asserted more than once: policy trumps data, and what Nye was pointing at was the perverse incentive that the Western economies have made for the math nimble. One might argue that capitalism has become so productive that we've not much need for further scientists and engineers, so the financial services industries are merely soaking up excess supply. Even so, it's still perverse.

Both mathematicians and educators have been fretting over how to teach math since at least Sputnik (1957). I was a guinea pig for a couple of those experiments: SMSG (Some Math, Some Garbage, as we students called it) and TutorTexts. Neither made it to 1980.

Despite what most people suppose, many profound mathematical ideas don't require advanced skills to appreciate. One can develop a fairly good understanding of the power and elegance of calculus, say, without actually being able to use it to solve scientific or engineering problems.

Or as one recent text put it: you don't need to be able to derive these equations, just understand what they mean.

On a similar note, Nick Carr took issue, in a recent post, with the notion of teaching "thinking" over teaching "facts". As it happens, my self image has long been: mostly cpu, not so much memory.
Why bother to make the effort to cram stuff into your own long-term memory when there's such a capacious store of external, or "transactive," memory to draw on? A kid can google the facts she needs, plug them into those well-honed "critical thinking skills," and - voila! - brilliance ensues.

That sounds good, but it's wrong. The idea that thinking and knowing can be separated is a fallacy...

On the whole, Carr's holistic notion (he never uses those words) is correct. One needs a body of facts in order to reason. The issue is when fact accumulation should taper and reasoning begin. A larger issue is whether education should be about "critical thinking", and what results when it is. The answer here is: yes, it should be. The reasoning is simple: how many plumbers, or any trade workers with at best a high school diploma, discredit the canards from the Right Wingnuts? Or, as all studies have shown, the better educated, the more likely to be a Left Wingnut? Or, more specifically, the better educated, the more socialized. Obama's difficulty with Syria has come as much from the Left (which doesn't buy the story) as from the Right (which only supports military adventures of Right Wing administrations). If you want a society of facile drones, then what's now called vocational education is the limit of what you want to provide. Education is really about how to spot the holes. Training is about how to turn the dials. Fleshy robots. Is that Batty around the corner?

Back to math-y parts.
In schools, as I've heard several teachers lament, the opportunity to immerse students in interesting mathematical ideas is usually jettisoned to make more time for testing and arithmetic drills.

Here, I only partly agree. The symptom of dysalgebria is easy to understand: when the text moves from sentences to formula derivation (even one line of chicken scratches, as my Pappy used to say), the brain seems to lock up. The idea gets lost in the notation. The irony, if one is positively disposed, is that real mathematicians find the concision comforting. It's easy to spot a second- or third-year graduate school math-y textbook (for a full year course) on the shelves: it'll be the thinnest.

How does this happen? The answer seems concise enough: algebra is typically taught in the 10th or 11th grade of high school, and never much used, per se, again. Trig and calculus often follow, but the rules of algebraic manipulation aren't consistently exercised. It's as if Goode had learned the "Hammerklavier" at 15, never practiced it again, yet expected to be able to perform it well (every few years) for the rest of his life. Not. And there's the fact that it's all connected, so a stat derivation (or, horror of horrors, a proof) makes use of some set theory (or, worse, a bit of integration through a trig function) you last saw as a freshman. And so on.

So what math ideas can be appreciated without calculation or formulas?

The op-ed discusses some avenues, but my take is: watch The Science Channel. The formulas exist in order to prove that the universe, or some part of it, really exists the way the ideas say it does. That's how the Higgs Boson was found: the formulas said it had to be there. Similarly for much of the RM and stat, believe it or don't. Codd was a mathematician by training, and the ad hoc-ness of IMS is what led him to define the RM. On the stat side, it's a much longer-term story. Stat started out as a poor cousin in math departments; MIT still doesn't have a stat department, while Harvard does (founded by Mosteller, 1957). Moreover, many of the ideas of stat come from math, and many of those are utterly abstract, not tied to things. Unlike physics, which uses maths in a more concrete form. Well, mostly.

Consider the lowly stepchild, ordinary least squares regression. Economics has been living off it for decades. These days, it's at most a chapter. Yes, there are re-edited textbooks from the 70's still in print, but Statistical Learning is the hot topic, and OLS gets a few pages. Oh, the insult. Now, the justification for minimizing squared error to fit some (linear) model (equation) to a bunch of data points can be stated simply: there's not much choice. You've got a bunch of data points to characterize, and the most logical way to do that (in two dimensions) is to draw a straight line through the points. Where to draw that line? In the middle, of course. How to calculate the middle? Well, find the line which keeps most of the points mostly near the line. Since some points will be above the line and some below, taking squares ensures that all points are treated fairly; the negative differences don't cancel out the positive ones, or vice-versa. The best line will be the one with the smallest amount of overall difference betwixt the line and the bunch of points. That's it. And, oddly enough, much of stat boils down to minimizing squared differences in practice.
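
For the notation-phobic, the "middle line" story above can be made concrete in a few lines of Python. This is a minimal sketch, not any particular textbook's presentation; the data points are made up purely for illustration.

```python
# Fit the line y = a + b*x that minimizes the sum of squared
# vertical distances from the points to the line (ordinary
# least squares, in two dimensions).

def ols_fit(xs, ys):
    """Closed-form OLS estimates for intercept a and slope b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by the variance of x.
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

def sum_squared_error(xs, ys, a, b):
    """Overall squared difference betwixt the line and the points."""
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

# Made-up data points, roughly along a line.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

a, b = ols_fit(xs, ys)
best = sum_squared_error(xs, ys, a, b)
# Nudging the fitted line must do worse: squared error is
# minimized at (a, b), the line "in the middle" of the points.
nudged = sum_squared_error(xs, ys, a + 0.1, b)
print(a, b, best < nudged)
```

Note that squaring does the "treated fairly" work: residuals above and below the line both contribute positively, so they can't cancel.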

But that's just ad hoc futzing, isn't it? Yes, it is. Is that sufficient backup for pricing CDOs? Well, no; but it's not as if the fancy algorithms were much help either. Viewed as an analytical problem, where the bunch of points is merely hypothetical, what would you do? Sounds like "best" means optimal, for some definition of optimal. Well, that sounds like an optimization problem. And optimization problems are what differential calculus handles (way back in either late high school or early college). From there, one specifies the functional form, calculates the derivative(s), sets same to zero, and solves; and voilà: the maximum likelihood estimators, which conveniently devolve to OLS calculations under convenient assumptions about actual bunches of data points. Phew!!
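
The specify-differentiate-solve routine just described is short enough to write out; this is the standard derivation, nothing beyond freshman calculus:

```latex
% Minimize the sum of squared errors for the line y = a + b x:
S(a, b) = \sum_{i=1}^{n} (y_i - a - b x_i)^2

% Take both partial derivatives and set them to zero:
\frac{\partial S}{\partial a} = -2 \sum_{i} (y_i - a - b x_i) = 0
\qquad
\frac{\partial S}{\partial b} = -2 \sum_{i} x_i (y_i - a - b x_i) = 0

% Solving the two equations gives the familiar OLS estimators:
\hat{b} = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2},
\qquad
\hat{a} = \bar{y} - \hat{b}\,\bar{x}
```

Under the convenient assumption of normally distributed errors, maximizing the likelihood minimizes this same sum of squares, which is the "devolve to OLS" step.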

Back to the quote. The focus shouldn't be to make math-iness interesting sans math, but to devise a method of pedagogy which improves the uptake of the skills needed not only to follow algebraic proofs, but to create them. Not so easy, or we wouldn't still be worried about it. Remember Bobby Fischer? When I was a young-un, chess playing, and Fischer particularly, was a badge of intelligence. How could a normal person see so many moves ahead? Turns out, the psycho folks figured out the answer, and Nick Carr is more right than might be imagined. Really good chess players don't "think ahead"; rather, they've memorized/internalized so many winning games that they recognize the pattern of a game as it emerges, and play accordingly. Many general patterns/strategies are even named; those are just the openings. All those facts (memory), not so much logic (cycles). Activities which appear to be reasoning and math-y aren't always so.

When I was a freshman, I had calculus; I'd missed out in high school, where it was just beginning to be offered. The college had adopted a New Math-ish text, which used some au courant notation. The professor, on the other hand, was old and old school, and he insisted on using classic (not that we knew what classic meant, of course) notation in lecture. Same with the TAs. Since understanding math is largely about subvocalizing notation into comprehensible English sentences (that again!!), it was all a puzzlement. Any of you in the teaching/training biz: don't ever do that. It's naughty.

This would be incomplete without a mention of Khan Academy. I've watched a few of the videos, and they looked oddly familiar. Then I recalled Sunrise Semester. Memory was faulty, in that I assumed it was an educational TV (what preceded PBS) show; but Springfield didn't have an educational channel, and the Hartford channel was UHF and not easily gotten. Hmmm. Well, as the Wiki piece tells us, it was CBS (and ran for rather longer than one might expect), and Hartford/CBS was easily received. Old wine, new bottles.

If I had the answer, after at least 50 years of professionals' failure, I'd be a rich man.

Since I've let this piece simmer on a back burner for some days, I just saw this piece that claims dyslexia is ameliorated by an e-reader/phone type of display. My immediate reaction: "bollocks". And that is because I've, in the past, tortured myself with the Great Books. For those unfamiliar: the Great Books were/are printed much as bibles tend to be, with two narrow columns of text on a standard-ish vertical page. I get headaches trying to read them. I guess I'm not dyslexic. On the whole, I don't buy this "cause" of dyslexia. Have I mentioned that the 80-column line zealots, in these days of ever wider screens, make me want to throttle them?
There is a controversy over what role visual attention problems play in dyslexia, [Lorie Humphrey, an assistant professor of neuropsychology at the University of California, Los Angeles] says. Many experts say the real problem is a difficulty in linking individual and groups of letters with the sounds they correspond to. The e-reader method wouldn't help much with that problem, Humphrey suspects.

Note the allusion to failure to subvocalize. That's the key to both dyslexia and dysalgebria.
