Alarming Development - Mea Culpa

Continuing the “hard work” theme but also returning to the question of intelligence and computing:

It is an easy trap to fall into. Programming requires a certain kind of analytical intelligence. Being more intelligent in that way increases your programming ability exponentially. It is emotionally satisfying to think of yourself as a different species from the average programmer. Programming becomes a demonstration of your superior intellect. Surely such powers shouldn’t be wasted on mundane chores, but instead applied to timeless works of brilliant inspiration, to be admired by the common programmer only through protective eyewear. What a load of self-indulgent adolescent crap. In programming as in the rest of life, attitude trumps intelligence. I had to learn that the hard way.

That experience taught me a lot about what really matters in programming. It is not about solving puzzles and being the brightest kid in the class. It is about realizing that the complexity of software dwarfs even the most brilliant human; that cleverness cannot win. The only weapons we have are simplicity and convention.

This hits a lot of themes I like.

“Convention” is the heart of Confucianism. The idea is that people have to work together (even if the “people” in question are just you now and you six months from now), and the only way for us to work together is by establishing channels of convention through which expression is possible. The Romantic ideal is to have pure expression apart from convention, but this is absurd. Convention is what makes expression possible.

“Intelligence” is the idea that enough a priori thinking will let us solve all our problems in one swoop, without doing the hard work of physically learning. To the contrary, though, Confucius says in Analects 15.31:

吾嘗終日不食,終夜不寢,以思,無益,不如學也。

Once, I spent the whole day without eating and the whole night without sleeping, because I was thinking. Unlike learning, this profited me nothing.

This isn’t to say that there’s no use for intelligence or thinking. The point is that unless we accompany our thinking with action, there’s no way to ground the process and ensure that it’s productive.

Why did growth slow after 1973?

TheMoneyIllusion - Why did growth slow after 1973?

Here’s what I think happened. There were a few underlying technological developments in the late 19th century that dramatically affected living standards in the 1920-70 period, when they were widely adopted in advanced economies. I would certainly include electric power and the internal combustion engine. I also think indoor plumbing is underrated. Imagine having to rely on outhouses in cold climates. And also recall the health advantage of safe drinking water. And I suppose modern chemistry should be included—something I know little about. Many key products were first invented in the late 1800s or early 1900s (electric lights, home appliances, cars, airplanes, etc) and were widely adopted by about 1973. No matter how rich people get, they really don’t need 10 washing machines. One will usually do the job. So as consumer demand became saturated for many of these products, we had to push the technological frontier in different directions. And that has proved surprisingly difficult to do.

Philosophy Bites has a podcast up now with David Chalmers about the “singularity.” The singularity is dumb for a number of reasons, but here are just two of them.

First, as mentioned above, the rate of progress is slowing, not increasing. It’s slowing because the easy problems get solved quickly, but eventually you get stuck and all that’s left are the hard problems. The poster child for this is rocket science. We went from no satellites to landing on the moon in such a short period of time because we had already done the hard part (developing a powerful enough rocket fuel) and just needed to work out the engineering kinks of directing the rocket thrust without blowing up along the way. Since the Saturn V there have been no real innovations in rocket fuel, and as a result we’re arguably worse off today in launch capacity than we were in 1969.

Airplanes are a similar story: the Wright brothers were the first people to hook a light enough engine to a good enough airfoil. It took about 60 years to refine that design as far as it could be refined. Now, all the innovation is in the field of “in-seat entertainment systems.”

Computer clock speeds are already flat, and we have no reason to believe that they’ll ever go up again. Even if they did, the speed of light divided by 1 centimeter is 30 GHz: a signal can’t cross a centimeter-wide chip more than 30 billion times a second. We’ll never get a chip that cycles faster than the speed of light allows.
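
A quick back-of-the-envelope check of that figure in Python:

c = 3e8            # speed of light, in meters per second
chip_width = 0.01  # a chip about 1 centimeter across, in meters
print(c / chip_width)  # 3e10: light crosses the chip at most
                       # 30 billion times per second, i.e. 30 GHz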

Technology only seems to go quickly in the early days of seeking an asymptote. Before long you run up against the limits of the medium.

The second reason why the singularity is dumb (and one that ought to occur to a philosopher) is that the idea of a computer having “intelligence” and therefore being able to build a better computer to succeed it is absurd. Even if there were an AI with a high IQ, building a chip is a matter of empirical scientific engineering, not a priori speculation. Without doing experiments, you can’t hope to make better chips. A computer in a box could puzzle its puzzler all day long without coming up with any breakthroughs in fundamental science, and that’s what we need if the state of the art is to advance.

The point might be scaled back a bit, so that the claim is not that an AI will be able to build a better chip using a priori reasoning, but that the computer will be able to lay out circuit components better than conventional methods could. Which would be a good point, if we hadn’t already started doing that back in the 1970s. Ever since Intel’s second chip series, they’ve been designing their chips in CAD, since paper blueprints for laying out all the little bits had grown too bulky. In other words, computers are already doing what singularity junkies hope can someday be done. So, far from being prophets of the future, singularity enthusiasts are blind to the past!

In fact, if we go with one definition of the singularity, “the point after which it is impossible to predict the future trajectory of technology,” we’ve been at the singularity already for millennia. The whole point of a new technology is that you don’t know where it will go. The problem of induction can only be fudged in cases where we’ve seen the same thing play out several times, and the point of a new technology is that it’s new. As such, the business of predicting the future of technology from its past is philosophically muddled in a rather ridiculous way. Hume’s problem of induction was never a problem of predicting whether apples will fall from trees or the Moon will orbit the Earth. (Hume considered those things as certain as anything can ever be in this life.) The problem is with things we haven’t seen before.

There are more problems with the singularity, but that’s enough ranting for now.

NS_Howl_P

merlin:

Malloc!
Malloc zone malloc!
Szone malloc should clear
tiny malloc from free list!

Firefox, in a vivid homage to the verse of Allen Ginsberg.

“Howl” is the best thing by the Beatniks, ever.

One Div Zero - Getting to the Bottom of Nothing At All

This is an interesting article about programming that basically argues that we can think of not coming back from a function call as a special type that is a subtype of all other types. That is, if a function f promises to return a string, we should really think of this as f promising either to return a string, or to get stuck calculating until the end of time, or to crash altogether.

Conclusion

I somehow don’t think anybody will be running into the office saying “I just read an article on the bottom type, so now I know how to solve our biggest engineering challenge.” But the bottom type is still interesting. It’s a kind of hole in your static type system that follows inevitably from the Turing Halting Problem. It says that a function can’t promise to compute a string, it can only promise to not compute something that isn’t a string. It might compute nothing at all. And that in turn leads to the conclusion that in a Turing complete language static types don’t classify values (as much as we pretend they do) they classify expressions.
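
To see the idea in code, here is a minimal Python sketch using typing.NoReturn to play the role of the bottom type (the functions are my own illustration, not from the article):

from typing import NoReturn

def fail(msg: str) -> NoReturn:
    # Never produces a value; it always raises. NoReturn acts as a
    # bottom type, compatible with every other type.
    raise ValueError(msg)

def first_word(s: str) -> str:
    # A checker like mypy accepts both branches: str on success and
    # the bottom type on failure, since bottom is a subtype of str
    # (and of everything else).
    if s:
        return s.split()[0]
    return fail("empty string")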

Nothingness is so beautiful because it seeps in through even the strongest of iron doors.

Mac OS X Automation - Services

I mentioned earlier that the main effect of OS X Snow Leopard is to brighten the screen and free some HD space, but actually, so far my favorite feature has been the revamped Services menu. Services were, apparently, very useful back in the NeXTSTEP days, but until now they’ve always just been a cluttered menu in OS X. With Snow Leopard, you can not only control which items appear in your Services menu, you can also very easily build your own services using Automator and then give each service a keyboard shortcut in System Preferences. I’ve already adapted some of my existing Python scripts so that, for example, I can highlight any text, press control-option-u, and have that text turned into a Markdown-rendered PDF that gets uploaded to my website, all in one go. It’s very handy.
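
For the curious, the script behind such a service looks roughly like this sketch; Automator’s Run Shell Script action hands the selected text to it on standard input. The third-party markdown package is an assumption here, and the PDF and upload steps are elided because they depend on local tools:

#!/usr/bin/env python
# Sketch of a Services-style filter: the selected text arrives on stdin.
import sys
import markdown  # third-party package, assumed installed

text = sys.stdin.read()
html = markdown.markdown(text)  # Markdown source -> HTML
sys.stdout.write(html)
# ...rendering the HTML to a PDF and uploading it are left out here...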

How To Design A Good API and Why it Matters

Josh Bloch - How To Design A Good API and Why it Matters

Guido van Rossum, creator of Python, says, “Watch it at least once a year.”

This talk screams “Chinese philosophy” to me: everything from exemplar models, to making right the names (正名), to learning (學) over thinking (思), to affecting the tips by taking care of the root. He even wrote up his findings as a series of aphorisms! I can’t tell whether that’s because they were right or because I read them into places where they don’t belong, out of an over-application of the principle of charity to the originals in my interpretations.

The one point where the parallel seems to break down is that I can’t, off the top of my head, think of an analogue to hiding the implementation details of an API. I can see, though, why one would want to keep an API small, since one has to be trustworthy (信, 誠, etc.) to keep the API working the same way in the future. I guess the reason for this difference is that in the social world, you shouldn’t try to refactor the underpinnings of a process, since things are almost never neatly modular. In computers, however, conceptual modularity is a fiction that can be maintained with a not-unreasonable amount of diligence (though it does take some work; that’s why they say “Goto Considered Harmful,” etc.).

It may seem interesting that Guido, the creator of such a dynamic and plainspoken language as Python, should recommend a talk by one of the architects of Java, a static and verbose language. But the principles of good API design are more or less universal. What’s interesting about Java is that, because it is suited to use by large “enterprise”-type teams rather than by an individual programmer or a small group, it more concretely embodies the social dynamics of Chinese philosophy. In Python, one ought to be a good citizen: use normal variable names, don’t monkeypatch, stay out of other people’s private class and module members, etc. But in Java these things are not just good ideas but absolutely vital practices. A large number of unrelated individuals (who are, to be honest, probably not the best programmers) attempting to work together on an enterprise application would be doomed from the start without the social straitjacket of classes, interfaces, public and private, “design patterns,” and so on to keep the masses in line behind the sage at the lead of development.
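
To make that contrast concrete, here is a toy sketch of Python’s convention-only privacy (the Account class is my own illustration, not from the talk):

class Account(object):
    def __init__(self, balance):
        # The leading underscore is pure convention: Python will not
        # stop anyone from touching _balance directly, the way Java's
        # private keyword would.
        self._balance = balance

    def deposit(self, amount):
        self._balance += amount

acct = Account(100)
acct._balance = 10**6  # perfectly legal; merely a breach of etiquette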

A Map Doesn't Help You in the Dark

I’ve seen some quite negative reactions to Doug Bowman’s post, which insinuated he was ungrateful for his position at Google. Most of this seemed to hinge on the phrase “I can’t operate in an environment like that.” But I think this phrase was widely misinterpreted. It doesn’t mean “I don’t like working in this environment.” Rather, it means “You are forcing me to deliver an inferior result based on a flawed belief.”

That belief is that data can’t lie.

The frustration for someone in Doug’s position is that he is hired as an expert in his field, eager to share his experiences. The problem, though, is the people in Doug’s position are often hired under the incorrect assumption that a designer has amassed information in their career, not experiences. That assumption leads to a second flawed assumption: that all decisions will be based on hard facts.

Theocacao: Measuring the Design Process

copy_paste.py

Copying and pasting in Python on Mac OS X.

#! /usr/bin/env python

"""copy_paste: a module with two function, pbcopy and pbpaste. 
Relies on AppKit and Foundation frameworks from PyObjC."""

#On my computer, these are in 
#/System/Library/Frameworks/Python.framework
#/Versions/Current/Extras/lib/python/PyObjC
import Foundation, AppKit

def pbcopy(s):
    "Copy string argument to clipboard"
    newStr = Foundation.NSString.stringWithString_(s).nsstring()
    newData = newStr.dataUsingEncoding_(Foundation.NSUTF8StringEncoding)
    board = AppKit.NSPasteboard.generalPasteboard()
    board.declareTypes_owner_([AppKit.NSStringPboardType], None)
    board.setData_forType_(newData, AppKit.NSStringPboardType)

def pbpaste():
    "Returns contents of clipboard"
    board = AppKit.NSPasteboard.generalPasteboard()
    content = board.stringForType_(AppKit.NSStringPboardType)
    return content
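
Used from the interpreter, it looks something like this:

>>> import copy_paste
>>> copy_paste.pbcopy(u"Hello, pasteboard!")
>>> copy_paste.pbpaste()
u'Hello, pasteboard!'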

An alternative method for those without Foundation and AppKit, using the older Carbon APIs:

import Carbon.Scrap

def pbcopy(arg):
    "Copy string argument to clipboard"
    Carbon.Scrap.ClearCurrentScrap()
    scrap = Carbon.Scrap.GetCurrentScrap()
    scrap.PutScrapFlavor('TEXT', 0, arg)

def pbpaste():
    "Returns contents of clipboard"
    scrap = Carbon.Scrap.GetCurrentScrap()
    try:
        return scrap.GetScrapFlavorData('TEXT')
    except:  # the scrap has no 'TEXT' flavor
        return ''

Also, a note to OS X command-line Python users: put export LC_CTYPE=en_US.utf-8 in your .bash_profile and you’ll be able to print non-ASCII characters in Terminal instead of getting an annoying UnicodeError.

“Fermat's last Python script”

fermat.py

from itertools import count

def fermat(n):
    """Yields triples (x, y, z) with x**n + y**n == z**n.
    Warning! Untested with n > 2."""
    for x in count(1):
        for y in range(1, x + 1):
            for z in range(1, x**n + y**n + 1):
                if x**n + y**n == z**n:
                    yield x, y, z
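
Sure enough, the first solution it finds for n = 2 is a Pythagorean triple (since y only runs up to x, it arrives as (4, 3, 5) rather than (3, 4, 5)):

>>> from fermat import fermat
>>> next(fermat(2))
(4, 3, 5)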