Axiom Tutoring

Advanced Math, Stats, Logic, Physics, CompSci, and test-prep tutoring

Professional tutoring service offering individualized, private instruction for high school and college students. Well-qualified and experienced professional with an advanced degree from Columbia University. Advanced/college mathematics, logic, philosophy, test prep, microeconomics, computer science, and more.

An Object Lesson in Philosophy and Physics

When computing the moment of inertia, there is a small but important step that most people pass over without even noticing it. It requires a little familiarity with Calculus, but if you don’t have that, don’t worry: I’ll keep the mathy symbols very brief, so some skimming should still get the point across.

If we take dI (a little bit of inertia) to be d(mr^2) (from the formula for rotational inertia), then students are taught that this is r^2 dm. This implicitly treats r^2 as a constant but not m—in some sense, treating m, the mass, as the “thing that’s really there,” and r, the radius, as defined only relative to m. Why not use the product rule for derivatives and treat both quantities as if they have “equal reality,” and therefore give them the same mathematical treatment?

The answer is that the mass really is more real than the radius. The mass really is right there, and the radius is our measurement of a distance from some rotational axis. The distance is also real—it is whatever it is, but it only exists relative to the mass element and the rotational axis.
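To make the r^2 dm bookkeeping concrete, here’s a quick numerical sketch in Python (the rod, its mass, and its length are my own made-up choices): chop a uniform rod rotating about one end into many little mass elements, sum r^2 dm over them, and compare with the known result ML^2/3.

```python
# Numerically sum dI = r^2 dm for a uniform thin rod rotating about one end.
# Each element carries an equal bit of mass dm, sitting at its own radius r.

M = 2.0      # total mass (made-up value)
L = 3.0      # rod length (made-up value)
N = 100_000  # number of mass elements

dm = M / N
I = 0.0
for i in range(N):
    r = (i + 0.5) * (L / N)  # midpoint of the i-th element
    I += r * r * dm          # r^2 rides along with dm, just as in r^2 dm

exact = M * L**2 / 3         # the textbook result for this rod
print(I, exact)              # the sum converges to the exact value
```

The point is visible in the loop: each little dm has its own fixed r, so r^2 is a constant for that element. We never differentiate r itself, because the radius is only defined relative to the mass element that’s “really there.”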

I find that entirely fine and satisfactory, but it does highlight something subtle and worrying for other contexts. We have to have these human, intuitive ideas about what the physical world is like in order to appropriately model physics with Math. What if we were in some much more strange and unintuitive setting, like subatomic physics or the physics of things at high velocity, where our intuitions about which things are “more real” aren’t going to be up to the task of informing the choice of mathematical model?

If our intuitions just can’t help us to choose among the competing mathematical models, does that mean that our biology has limited our capacity to understand nature?

Computing as Mere Rewriting

I've been studying functional programming lately, and it's really changed my whole conception of what computing is. The standard concept of computing is: Info is stored in memory.

(To elaborate on that: you pull that info out of memory when you want it, apply some function to it, and put the result somewhere in memory again. So, for instance, x = 1+(2*3) is thought of as 1, 2, and 3 being info somewhere in memory. You apply * to 2 and 3, and get 6. Then you apply + to 1 and 6, and get 7. Then you store that 7 in an address in memory which we reference as x.)

But functional computing doesn't have memory! All of functional programming can be thought of as *rewriting*. The expression 1+(2*3) is just regarded as something we can rewrite as 1+6, and then rewrite that as 7.

But computers need to be able to interact, right? If you click here, the computer does this. If you click there the computer does something else. How do you get this behavior with just rewriting? It turns out it's possible! You just need a conditional function. We could write it as

`cond click1 doBrowser doGame`

where the computer is told that if you find the pattern `cond click1 ...` then do the first thing. That way `cond click1 doBrowser doGame` gets rewritten as `doBrowser` and the browser function runs. To get the other behavior you write

`cond click2 doBrowser doGame`

and the computer has another rule which says, if you find the pattern `cond click2 ...` it'll rewrite to the second thing, in this case, it rewrites to `doGame` and now you're playing a computer game.

And in general, everything computable can be done with rewriting rules! I think that's wild. You can even re-create the behavior of having memory, by using this memory-less structure!
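To make the idea concrete, here's a toy rewriting engine in Python (my own sketch, not any real language's evaluator). Expressions are nested tuples, and "running" one just means rewriting it until only a value remains; the names `click1`, `doBrowser`, and `doGame` come from the hypothetical example above.

```python
# A toy rewriting engine: computation as nothing but rewriting.

def rewrite(expr):
    if isinstance(expr, (int, str)):
        return expr                              # nothing left to rewrite
    op, *args = expr
    if op == "+":
        return rewrite(args[0]) + rewrite(args[1])
    if op == "*":
        return rewrite(args[0]) * rewrite(args[1])
    if op == "cond":
        test, first, second = args
        # the pattern (cond click1 ...) rewrites to the first thing;
        # anything else rewrites to the second thing
        return rewrite(first) if rewrite(test) == "click1" else rewrite(second)
    raise ValueError(f"no rewrite rule for {op!r}")

print(rewrite(("+", 1, ("*", 2, 3))))                      # 7
print(rewrite(("cond", "click1", "doBrowser", "doGame")))  # doBrowser
print(rewrite(("cond", "click2", "doBrowser", "doGame")))  # doGame
```

Notice there's no mutable store anywhere: `1+(2*3)` becomes `1+6` becomes `7` purely by matching patterns and replacing them.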

Without =

All I want to say here is how interesting I think it is to consider:

For thousands of years people did Mathematics without an equal-sign! The “=” symbol, if memory of a history of Math text serves me correctly, made its first appearance due to … Fibonacci? Perhaps I’m getting my mental wires crossed with the fact that he imported Arabic numerals. Maybe it was Recorde? If only there were a way to determine its source without leaving my computer … Well, I guess we’ll never know.

But before the 13th century much of mathematics had no symbol for equality; people merely used the word “equals” or just a blank space where an equality symbol might go. In some ways we might do well to return to that, at least for introductory students, because they often fail to realize that the equal-sign means the same thing the English word “equals” means: that the thing on the left is the same as the thing on the right. (That’s not entirely true, actually; equality can also be used for the definition of a function, which isn’t precisely, logically, the same thing.)

However, the ancient Egyptians had a symbol for it similar to what we now use for proportionality. Certain later Greek speakers had their own symbol, although it did not persist through history. Ancient Indians used pha, a contraction of a word in their language. Today’s equal-sign, “=”, derives from a depiction of two parallel lines, a choice that is apt because parallelness behaves very similarly to equality. Some of the most useful features of equality are:

  1. Reflexivity: Every line is parallel to itself just as every object is equal to itself.

  2. Symmetry: If line m is parallel to line n then line n is also parallel to line m. A similar statement holds for equality.

  3. Transitivity: If m is parallel to n, and n is parallel to p, then m must be parallel to p. Again, the same sort of thing is true for equality.

Any relation satisfying the three properties above is called an equivalence relation. Equality and parallelness are both equivalence relations, and polygon similarity is another. It is worth noting, though, that equality is stronger than a mere equivalence relation. The distinguishing feature of equality, which separates it from all other equivalence relations, is substitutability: if two quantities are equal, all the same sentences must be true of them. If 11 is prime then, since 11 = 9+2, we must also be justified in saying 9+2 is prime. Substitutability does not hold for parallel lines. Line m may be parallel to n but a distance of 10 units away from it. If substitutability held, we could substitute n for m and infer that line n is 10 units away from itself. Since that’s ridiculous, we see that parallelness does not permit substitution.
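Here’s a tiny sanity check of the three properties in Python, modeling each line by its slope (a simplification that ignores vertical lines; the line names and slope values are made up). Two lines count as parallel exactly when their slopes are equal:

```python
# Check reflexivity, symmetry, and transitivity for "is parallel to",
# with each (non-vertical) line modeled by its slope.

lines = {"m": 2.0, "n": 2.0, "p": 2.0, "q": -1.0}

def parallel(a, b):
    return lines[a] == lines[b]

names = list(lines)

reflexive = all(parallel(a, a) for a in names)
symmetric = all(parallel(b, a)
                for a in names for b in names if parallel(a, b))
transitive = all(parallel(a, c)
                 for a in names for b in names for c in names
                 if parallel(a, b) and parallel(b, c))

print(reflexive, symmetric, transitive)  # True True True
```

Substitutability is exactly what this model lacks: m and n have the same slope, but nothing else about them (intercept, distance from other lines) need agree.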

But the use of a symbol for equality was not practiced even by startlingly modern European mathematicians such as Fermat. In some way I find this inspiring—Math really just is what we could express in many more words in regular English. Math is not categorically different reasoning from other kinds of reasoning, the symbols merely compact a very large amount of information and abstraction.

Incentivizing good makes it bad?

I overheard two liberals talking about Nancy Pelosi; one said that she was corrupt because her husband is a wealthy businessman, and the other agreed immediately. I thought that was strange, since we wouldn’t say that about any non-politician. If your neighbor or cousin married a wealthy businessperson, would that make her corrupt? In general, I get the impression that the reality just doesn’t matter—one way or another, someone will always find grounds to accuse Pelosi.

I noticed conservatives doing something similar to Christine Blasey Ford who is accusing Supreme Court nominee Kavanaugh of sexual assault. It’s almost like a script, how the far-right will instantly and without any actual evidence, claim that an accusation against a conservative is done from profit motive. Since there doesn’t seem to be any actual evidence that Ford has a profit motive, I hear them point to potential book sales as the source of the money.

The phenomenon generalizes pretty widely. Doctors supposedly only cure people from profit motive; this is why we should distrust what they say about vaccines. Car companies profit from sales, so we should assume the claim that their cars are safe is a lie. A politician wants to protect or foster an industry, and it is at least conceivable that somehow the politician is being paid to hold this position; therefore the politician is only acting from money motive. Even in our private lives, people can be quick to assume that a friend or relative does something out of self-interest, based on no more evidence than that self-interest was possible.

We really need to think about how our judgements and behaviors set up carrots and sticks for society. If personal gain, whether real or merely possible, is grounds for undermining a person’s character, that has the effect of not rewarding good people. As soon as you reward people for being good, the reward becomes a possible self-interested motive. Two things follow from setting up this inversion.

  1. We don’t encourage people to be good.

  2. We don’t give good people the extra resources to do more good!

Why is Mitch McConnell so successful in spite of his obvious corruption? He gets plenty of attacks in the media, but so does Pelosi, and they are nowhere near each other on the corruption scale. The function of how much damage your career takes … doesn’t depend on the variable of how corrupt you actually are! Everybody always gets attacked no matter what. So why not just be as corrupt as you can—you get the same amount of criticism but lots more reward. We don’t give fuel to good people in order to crowd out the McConnells.

It reminds me of something that seems completely irrelevant but I think is a nice analogy. How do you isolate the yeast used in bread and beer? It lives on most wheat crops, but so do a lot of other microscopic bugs. You want the particular kind of yeast that is good for you, Saccharomyces cerevisiae for instance, to grow and everything else to die out. The key is to identify some differentiating property and use it to separate the two out—or rather, to foster one and starve the other. That’s why you can mash wheat into a gruel with some water and let it sit until you see some bubbles from fermentation. That means the good yeast has eaten and grown a little, and is crowding out the bad. Feed it some more wheat and the new offspring eats and reproduces some more, crowds out the bad some more. Do it again and again and again, and you get a culture of only the good yeast. Now bake bread or brew beer with it.

I assume the analogy is by now obvious. By tearing down anyone who achieves any level of wealth or happiness in the process of doing good for society, we’re not feeding the good. And so in part, we’re to blame for why the good guys don’t win. We can’t just engage in a constantly negative project where we only attack people in power. We need a constructive project where good people get rewarded with power. Think of the alternative … or rather, you don’t have to because we’re living it.

What Determines Who's Good at Math?

As a Math tutor I’ve met a lot of people struggling through Math, from lots of different backgrounds: young, old, men, women, various racial groups and nationalities, rich, poor. Some teachers can boast the same, although usually they have a pretty homogeneous age and income group—but one thing I get that even teachers don’t is a lot of one-on-one interaction. I get to know my students, and even spend a lot of time in their homes if that’s where we hold meetings.

I wonder if that’s why I have a particular opinion about which people are good at Math. Nearly everyone thinks it’s about the brain you’re born with and I very much disbelieve that. Sure, there are correlational studies about parents and children, twin studies. We obviously don’t know enough about the brain to be able to say what the brain physically does to solve Math problems. But the statistics—or rather, the statisticians—or rather, the psychologists who pretend to be part-time statisticians—seem to tell us that intelligence is measurable and heritable. Maybe everyone’s belief in innate intelligence is right?

There are a couple of points here, the first of which is the surprising nature of what biologists mean by “heritable”. For instance, the number of fingers on your hand turns out not to be heritable! Heritability is measured as the ratio of the variation due to lineage to the variation in the whole population. The number of fingers on your hand has essentially no variation due to lineage, so the numerator of the fraction is zero—so the fraction is zero!

The definition is the best quantitative measure of what we intuitively think of as heritable: If people tend not to vary a lot in height within a family even though they vary a lot throughout the human global population, that seems to indicate that height is heritable. Makes sense, although it is fundamentally correlational. This measure does not demonstrate causation, and if families all tend to eat alike and diet is a stronger determinant of height than genes … well, you’ve got a lot of bias in your data.
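The variance-ratio definition can be sketched numerically. In this made-up Python example, each person’s trait is a family baseline plus a little individual noise, so nearly all the variation tracks lineage and the ratio comes out near 1 (every number here is invented purely to show the computation):

```python
# Toy heritability computation: the ratio of variation due to lineage
# to variation in the whole population.

import random
random.seed(0)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

family_baselines = [160, 170, 180, 190]   # trait differs a lot across families
population, lineage = [], []
for baseline in family_baselines:
    for _ in range(50):                   # 50 members per family
        population.append(baseline + random.gauss(0, 2))  # small within-family noise
        lineage.append(baseline)          # the part of the trait explained by family

heritability = variance(lineage) / variance(population)
print(round(heritability, 3))             # near 1: families differ far more than individuals
```

And this is where the caveat bites: the “family baseline” in the sketch could just as well be shared diet or shared schooling as shared genes. The ratio cannot tell the difference.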

Establishing causation usually requires being able to manipulate a variable while holding other variables constant. Since we don’t know enough about genetics, we can’t do that, and so the best we can do to study heritability right now is fundamentally correlational. And this isn’t just abstract nit-picking. I have serious reservations about the extent to which environmental effects have been adequately factored out of intelligence studies. If intelligent people are more common in some families, but those families have expensive prenatal care, lots of time with parents, living in a culture that values education, I think by the time a person is able to read, write, and take an intelligence test, they’ve already had a lot of biases in their data.

I also just don’t think that “intelligence” has been adequately defined in a measurable way. A reasonable definition is “the ability to solve problems in order to achieve goals.” However, that requires a measure both of the difficulty of the problem and of the “capacity” to solve it. Factoring out laziness and valuable alternate goals makes the second particularly hard to measure.

So I don’t buy the genetics answer, not entirely. It can’t be nothing in the equation, but I don’t believe it’s everything either. I also don’t think it’s sheer effort. I’ve seen lots of students work hard and come out the other end not grasping the concepts well. I fully, deeply, whole-heartedly, even aggressively reject the idea that some people have Math brains and others don’t. Knowledge and the universe are not so neatly compartmentalized, and the reasoning you exercise while doing Math is fundamentally the same skill that you use while reasoning about language, strategy, organization, projecting the future, understanding the past, studying law. If you navigate the world at all you make use of the same geometric, arithmetic, and logical intuitions that mathematicians use—mathematicians just use them with an extremely large and continuous chain of iterations.

The one common factor that I’ve seen almost without exception among people who do well at Math is: Interest. If you think Math is meaningful, interesting, beautiful, worthy of study intrinsically and not for the sake of anything else—then, absent any physical, psychological, emotional, or economic impediments to your study, I think you have a near-certain probability of succeeding in studying Mathematics. Every student I’ve seen make wild progress in their studies believed in the amazing ability of Mathematics to say something deep about reality. Every student I’ve seen disappointed with the ineffectiveness of their own studies just could not manage to give a sincere shit about the subject. Of course most people fall in-between, both on the accomplishment axis and the interest axis.

Most people think ability causes interest. I doubt that is more than a rarity. I think interest causes ability.

Sleeping on It

I’ve found the concept of dual spaces in Linear Algebra just the dizzying height of abstraction. Already the very object of study, the “vector space”, is an abstraction that most people find confusing on first meeting it. It’s been years since I first encountered such an abstraction so by now I don’t bat an eye at it. However, we then discuss linear functions between vector spaces … ok, I’m pretty good with the idea, I’ve had a lot of practice with this sort of stuff. Then we talk about linear functionals on vector spaces, a particular kind of linear function. Again, I’m good with that. Then we talk about the dual space, the space of all functionals … whoa, starting to get wobbly. A vector space that you construct out of functions of vector spaces?

… Alright, that was kind of a blow, but I think I can handle it. Let’s start moving slowly, though, because this is weird territory. Next we get the dual transformation of any given linear function, and now it feels to me that all hell has broken loose. A transformation of a space of transformations of a space to a space? Oh dear god, what are we doing?
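For what it’s worth, the dual transformation gets less dizzying for me when made concrete in finite dimensions. Here’s a minimal Python sketch (the matrix and numbers are my own inventions): a functional on R^2 is just “dot with a fixed row,” and the dual of T sends phi to phi ∘ T, so (T* phi)(v) = phi(T v) by construction.

```python
# The dual transformation, concretely, in R^2.

def apply_matrix(T, v):
    return [sum(T[i][j] * v[j] for j in range(len(v))) for i in range(len(T))]

def functional(row):
    # a linear functional: eats a vector, returns a number
    return lambda v: sum(r * x for r, x in zip(row, v))

def dual(T):
    # T* takes a functional phi on the codomain to the functional phi ∘ T
    return lambda phi: (lambda v: phi(apply_matrix(T, v)))

T = [[1, 2],
     [3, 4]]
phi = functional([5, 7])      # phi(w) = 5*w[0] + 7*w[1]
v = [1, 1]

lhs = dual(T)(phi)(v)         # (T* phi)(v)
rhs = phi(apply_matrix(T, v)) # phi(T v)
print(lhs, rhs)               # 64 64
```

Nothing deep is being computed; the point is just that each layer of the tower—“a function of a function of a space”—is an ordinary finite object you can write down and evaluate.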

So that bothered me about six months ago. Since then I’ve gotten distracted by certain programming topics, and now that I’ve figured out a lot of what I wanted from that subject, I’ve been returning to the Linear Algebra. I dreaded coming back to the dual transformation. But as I went carefully through it again, taking careful notes and trying to solve problems before reading the solutions in the textbook, I found a remarkable new level of comfort with everything. Sleeping on the subject helped a great deal.

In fact it’s a somewhat surprising commonplace how a subject that confounds you one day can seem simple the next morning. I can’t tell you the number of times I’ve woken up with a solution to last night’s problem. Sleeping on a problem is a tried-and-true proof technique for me, at this point. As I think von Neumann said, “In Mathematics there is no understanding, there is only getting used to.”

The End

I read a lot of history, and lately that’s been a lot of prehistory—you know, stuff that happened before writing, since academics officially regard history as the analysis of texts. What I’m reading now isn’t exactly that; it’s The Knowledge, a literal manual for rebuilding the world after a future collapse. In a way it points in the opposite direction from what I’ve been studying: off into the future. But just like every post-apocalyptic story, part of the fun, and the point, is to see us regress to our primitive state. The past and the future intersect.

At least on the face of it, the book is a literal manual for how to rebuild civilization after some great collapse, be it meteor strike, nuclear war, or epidemic. But I don’t think the author believes in an impending collapse, and he isn’t a so-called “Prepper” who prepares for the end times; at least, he seems to think it’s not very likely in the near term. How likely should we think a global catastrophe is?

Here’s a very not-data-based answer that is correspondingly not very precise. It depends on the time frame and the size of the class of events we call “global catastrophe”. If we say 200 years is the time frame and catastrophe can include most of the world (but not, say, Africa and Australia) being in a nuclear shoot-out, or the avian flu depopulating the world by 10%, then … I’d say the odds of such an event are “good”. Especially given that global warming seems inevitable and disastrous, I think we can lump that in as a slow-burning global catastrophe, making some degree of catastrophe pretty near certain. But if we define the catastrophe as something that happens suddenly one day in the next few decades, and leaves no corner of the Earth unaffected (and definitely destroys New York, because it’s not a proper global disaster if you don’t get a half-sunk Statue of Liberty) … I’d put the odds near zero.

Back to the point, although the book on its face is about The Fall and rebuilding, I think that’s largely just a fun and interesting conduit to re-conceive of modern technology. It gives an excuse to remember that the Serbian army besieged the city of Gorazde. The people inside built little turbines and placed them in a river, fixed to a bridge, to generate electricity. It reminds us that, if the survivors of The Fall did not take care to collect the seeds of domesticated crops, they would quickly lose in the competition with weeds and die out—and humanity would have to spend an extra several thousand years re-domesticating new or old species. It is a reminder of how much technology was necessary to build the modern world. So much of it is so deep in the background that we need it threatened or removed before we can see it again.


I've lately been learning to program in OCaml.  Having some familiarity with Haskell, I'll say that, as of now, OCaml is hardly distinguishable from Haskell to me.  Perhaps I'll change my tune after a few weeks of playing with it.  For now, though, they both seem like functional languages that incorporate some object-oriented functionality (OCaml much more explicitly so), they both depend strongly on recursion and pattern-matching, and both are often seen as educational playthings rather than industrial-strength languages. 

Well, there's that stereotypical grumbling over nothing that all techy people do.  On the bright side, using OCaml is fun, and I hope to exercise and tighten my familiarity with Recursion Theory through its use. 

I'm also simultaneously getting through Hopcroft's Automata book, and that's more confusing.  The exact nature of the physical systems that he (they?) model with graphs is not transparent.  Perhaps I'm being a little too fastidious in trying to match things up. 

I Can't Fight the Homunculi-Headed Robot

I don't know how to make heads or tails of this famous thought experiment from the Philosophy of Mind:  Suppose you see a person walking down the street who accidentally hits his shin against the edge of a bench.  He collapses, gripping his shin, wincing, groaning.  You would reasonably infer he's in pain. 

But in the commotion, his face hits the pavement and lo and behold, his face-plate comes off!  He wasn't a human at all!  Behind the face-plate are tiny men pushing buttons and reading panels to control the behavior of the robot they inhabit.  It seems the belief that this person was in pain was a mis-attribution of pain. 

But that's not the problem, otherwise this would be a fairly plain (if sci-fi cheesy) example of reasonably making a mistake.  Rather, Ned Block produced this example as a response to functionalism.  Functionalism was a response to behaviorism('s shortcomings).  And behaviorism was a response to ... modernity, I guess.  So let's do some speedy History of Philosophy and Science.

In the beginning people were dumb and believed in spooks.  They weren't actually dumb, but many (all?) early cultures believed that certain inanimate objects had spirits.  Most attributed spirits to the sun and moon; some were more liberal and believed rocks and other mundane things had spirits.  With the march of the progress of Science, the set of things which most people ascribed spirituality to contracted.  Angels fell out of favor, the devil was an increasingly unsatisfying answer, even god no longer existed literally in, or just beyond, the sky. 

But exactly which things are spooks?  Clearly literal spooks, spooky ghosts, are spooks.  What about angels?  Probably.  Miracles?  I guess.  The human mind?  Well, wait a minute.  As far as we know it's not physical, at least not in a very simple and obvious way.  The idea of a square, which I may contemplate right now, is not located anywhere--not the idea of it, even if instances of squares exist in places. 

But Science abhors a non-physical thing we believe in.  The behaviorists in Psychology therefore believed they were advancing the march of Science by trying to identify the mind with behaviors of people--physical things that could be observed by scientists, measured, tested, and so on.  Pain for them was a tendency to wince, groan, seek relief, and so on.  Pain was identical to a set of behaviors. 

This obviously wouldn't do, though.  One can behave that way without pain, and one can have pain without behaving that way.  Enter the functionalists who would like to persist in identifying the mental with something physical.  Rather than say that pain was identical with behavior, they claimed that pain and other mental states were identical with a function. 

An analogy from one of the greatest American philosophers, David Lewis, was that of a lock.  Common sense tells us that the lock will unlock when the numbers on its pad are correctly aligned.  This is the fairly superficial understanding of what the lock is.  We may later learn through close observation and scientific discovery that in fact, the alignment of numbers coincides with the alignment of certain gaps in metal plates inside of the lock, such that when they are aligned the bolt in the lock can pass through, and unlock the lock.  The lock is defined by its function, and the unlocking mechanism is that which plays the causal role of causing the lock to unlock.  It turns out that this phrase superficially denotes the alignment of numbers on the pad, and more deeply denotes the alignment of gaps in metal plates. 

So by analogy, our superficial understanding of pain is the mental experience of it.  However, pain just is that which plays the causal role whereby crushed or torn flesh causes a certain set of behaviors like wincing.  We know it by the mental experience, but since certain physiological or neurological functions coincide with this and cause the experience of pain, then these must be the same thing.  To keep things easy, let's imagine that humans had a single pain sensation caused by something called C-fibers.  A person is in pain, we imagine, if and only if that person's C-fibers are firing.  Since this plays the right causal role, with inputs of some kind of physical damage, and outputs of pain-like behavior, it is identified with the mental experience of pain.

But the homunculi-headed robot is having none of this.  It takes the same causal inputs, the same behavioral outputs, and therefore is supposed to be in pain when it bangs its shin.  And yet, that's patently absurd.  This seems to be a counter-example to functionalism.  It's functioning the way a pained being would and yet it doesn't have pain. 

I don't know what to say to this.  Functionalism sure sounded good and right until this little monster popped up.  My theory is that perhaps, after we discovered that human pain is due to C-fibers, then the C-fibers themselves become part of the functioning that we refer to when we talk about the function of pain.  If in the history of Science we had discovered that in fact we were all actually homunculi-headed robots, and our visceral experience of pain were coincident with the tiny men pushing certain buttons, then this would in fact be the same thing as being in pain. 

Well, that's my best shot.

Politics is hard y'all

There are some things that we do rightly take to be basic principles of decency.  These things, like treating good people well, we don't have to justify further.  Some things have to be fundamental, otherwise we would face an infinite regress of justifications.  But it seems to me politics is rarely a place where truly fundamental principles are at play.  It's too derivative of more basic moral beliefs, and of tough empirical questions like what makes the economy better and how to measure it. 

Yet increasingly it seems that a political disagreement entails a deep hatred.  It is so easy to anger either side and short-circuit any productive conversation that all of politics now seems like screaming and taking advantage of momentary power. 

I think the main thing I advocate for in politics now is that politics is hard.  We can disagree and still be good people.  Our deepest held values are located somewhere else, like in how we choose to treat neighbors and friends daily, not here in politics.  The salient judgment about people in a conversation isn't whether they're a liberal or a conservative, but whether they're sincere.  If a racist wants to have a sincere conversation, where she's truly open to introspecting, offering her honest reasons, and changing her mind with adequate evidence--I'll have that conversation gladly.  The same for a communist, a theist, and anyone else. 

Sincerity matters more than nearly anything else.

The classics

Today I'm getting back to reading SICP (Structure and Interpretation of Computer Programs) and Halliday and Resnick's classic Physics text.  These are sort of like The Odyssey for CompSci and Physics in a sense, not necessarily because they are widely beloved--although I get the sense that they are at least appreciated by most academics in the relevant field.  They're classics in the sense that almost everyone in the relevant field has read them.

I like reading classics because

  1. Something must have made them a classic.  Especially the old ones, which became popular before the textbook industry turned into a big business with lots of corruption in university departments, gain a lot of prestige on this account.  Stewart's Calculus is fine, I guess, but it's not nearly as respected as it is widely used--and I suspect that some amount of kickbacks is responsible. 
  2. They get everyone on the same page.  In order to meaningfully say that you know programming, it suffices to read a book (and do the exercises, and similar projects) like SICP.  Other books also suffice, like Knuth's monster, but it's nice that we're all on basically the same page about SICP.

I'd like to develop a running list of classics.  I think I'll go do that now and make a page about it.

Free speech

I'm tutoring someone in Philosophy, focusing on free speech.  It's interesting to think about how you would defend an idea so absolutely fundamental to Western liberal society.  Former Supreme Court Justice Holmes put the problem as a cute paradox, which I will paraphrase as

If you know that you’re right about something important, while someone else contradicts you, and you know that you have the power to silence them, then it is natural to do just that. Put the other way around, if you do not silence someone’s speech, then you doubt either your correctness, importance, or power.
— Justice Oliver Wendell Holmes, but not quite

My gut tells me there's an analogy to freedom of religion.  These two freedoms are both interesting in that they're not regulations of citizens but rather regulations of the government.  We forbid the government from forbidding religions or speech.  It's actually historically interesting that, in fact, all of the Bill of Rights in America are of this form. 

But I have in mind a deeper equivalence between freedom of speech and freedom of religion.  It seems to me these freedoms keep people from warring with each other through the government.  Even if one religion is wrong, and the vast majority of citizens agree that it's wrong, we still refuse to persecute followers of that religion.  Even if the religion is harmful to its followers and others, we refuse to regulate followers' speech and beliefs.  One reason for this is that, in societies where people try to control the religions of others, we have always seen brutality and abuse come from people who believe in the righteousness of their beliefs.  It may be fine for a while if you're the one who gets to be brutal--maybe it's not fine, but those people think it is anyway, and would not be convinced by sympathy for the oppressed.  But I think they should be very concerned for the day, which seems inevitably to come, when they are vulnerable to another group with the same oppressive tactics. 

I think the same sort of idea is true for freedom of expression.  You might want to regulate Nazi hate speech, but by accepting that the group in power gets to dictate acceptable speech acts, you should worry for the day when people hold power and enforce a set of beliefs you don't like.  I think actually the analogy extends to a lot of other topics, like gerrymandering:  You might love it or at least tolerate it when your own party benefits, but we should all oppose it on principle, always, for fear of what happens when the other side can use the same principle. 

Oddly, I always thought I disagreed with Hobbes, and yet here I am thinking that one good reason for a lot of our rights and freedoms, and constraints on government is a very Hobbesian idea by which we all lay down our arms against each other simultaneously. 

Current projects: Bayesian Statistics, Measure Theory, Automata

I've picked up a few Stats and Physics books that I just can't read, and it's because the Math is hard--hard in a weird way, to me anyway.  Knowing quite a good bit of Math, it's strange to me that I'd pick up a book in another field and not understand the Math in it.  In the first chapter of an Econometrics book, a casual appeal to a matrix form of Taylor's Theorem?  Matrix operations defined by the Dirac delta function, when that never comes up as a significant topic in a Linear Algebra book? 

Well I think I need to just push harder on the Math.  I have fairly short patience tracing through the less mathematically rigorous material that is standard in these other subjects, used to build up intuition.  So I figure, more Math is always better and at the end of the tunnel I have a better chance of circling back to the other topics. 

So these days I've been working on Measure Theory and Probability Theory, in order to get me to that great Bayesian mountain: Jaynes' Probability Theory.  With Math muscles that big I'm sure most Econometric books should be no problem.  I should also pick up some advanced Linear Algebra books or maybe something on the Math of Physics, and eventually be ready to learn relativity.