The Atlantic - Mind vs. Machine

The Atlantic - Mind vs. Machine

Interesting article about the Loebner Prize, a Turing Test competition. The author tried his best to be named “the Most Human Human” by increasing his interactivity: it’s not about saying one or two interesting things, it’s about creating a thread of conversation that’s logical and reactive to the interlocutor.

For instance, Richard Wallace, the three-time Most Human Computer winner, recounts an “AI urban legend” in which

a famous natural language researcher was embarrassed … when it became apparent to his audience of Texas bankers that the robot was consistently responding to the next question he was about to ask … [His] demonstration of natural language understanding … was in reality nothing but a simple script.

The moral of the story: no demonstration is ever sufficient. Only interaction will do. In the 1997 contest, one judge gets taken for a ride by Catherine, waxing political and really engaging in the topical conversation “she” has been programmed to lead about the Clintons and Whitewater. In fact, everything is going swimmingly until the very end, when the judge signs off:

We so often think of intelligence, of AI, in terms of sophistication, or complexity of behavior. But in so many cases, it’s impossible to say much with certainty about the program itself, because any number of different pieces of software—of wildly varying levels of “intelligence”—could have produced that behavior.

No, I think sophistication, complexity of behavior, is not it at all. For instance, you can’t judge the intelligence of an orator by the eloquence of his prepared remarks; you must wait until the Q&A and see how he fields questions. The computation theorist Hava Siegelmann once described intelligence as “a kind of sensitivity to things.” These Turing Test programs that hold forth may produce interesting output, but they’re rigid and inflexible. They are, in other words, insensitive—occasionally fascinating talkers that cannot listen.
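The rigidity is easy to see in miniature. Here’s a hypothetical sketch in the spirit of the 1997 “Catherine” bot (the script lines and function names are mine, not from the actual program): the bot walks a fixed script on its pet topic, and nothing in its logic ever reads what the judge typed.

```python
# A minimal sketch of a scripted Turing Test bot (hypothetical).
# It "holds forth" on its programmed topic; the input is never consulted.
SCRIPT = [
    "Have you been following the Whitewater hearings?",
    "I think the Clintons are getting a raw deal, frankly.",
    "The press coverage has been completely one-sided.",
]

def scripted_reply(turn: int, user_input: str) -> str:
    """Return the next canned line; user_input is ignored entirely."""
    return SCRIPT[turn % len(SCRIPT)]

# No matter what the judge types, the "conversation" is identical:
assert scripted_reply(0, "Hello!") == scripted_reply(0, "What is 2+2?")
```

That last assertion is the whole point: an insensitive talker produces the same output regardless of its interlocutor, which is exactly what gives the game away once real interaction starts.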

I’m working on revising an old paper of mine about Daoism and computers, and one concept I’m trying to suss out is “appropriateness.” Reason lets us do what’s appropriate for the moment by taking everything into consideration, not just what a narrow algorithm specifies.

Robot Artist

Cat and Girl - ROBOT ARTIST in “Artificial Intelligence”

Cat and Girl apply the second Cartesian test.

Yes, it’s an old favorite hobbyhorse on this site. Remember kids: the Turing test is a simplified version of something that Descartes said better three centuries beforehand! To be intelligent requires not only the power of speech but also the power of creativity:

although such machines might execute many things with equal or perhaps greater perfection than any of us, they would, without doubt, fail in certain others from which it could be discovered that they did not act from knowledge, but solely from the disposition of their organs: for while reason is an universal instrument that is alike available on every occasion, these organs, on the contrary, need a particular arrangement for each particular action; whence it must be morally impossible that there should exist in any machine a diversity of organs sufficient to enable it to act in all the occurrences of life, in the way in which our reason enables us to act.

René Descartes - Discourse on the Method, Part 5

Source: catandgirl.com

Antonio E. Porreca - Do waterfalls play chess? and other stories

Antonio E. Porreca - Do waterfalls play chess? and other stories

From the better late than never files, here’s something I should have blogged about back when it came out.

Scott Aaronson wrote a paper called “Why Philosophers Should Care About Computational Complexity” and posted about it on his blog.

To make up for the delay, I’m offering a bonus link to Antonio E. Porreca’s commentary on the same.

As for me, I’m still waiting to see someone offer a good definition of “computation.”

On Archiving Everything

Jonathan Gray - On Archiving Everything: Borges, Calvino, Google

In a sense Google’s approach to meaning is uncannily like that of the later Wittgenstein: don’t look for deeper structures underlying the way we make sense of things, pay attention to the surface, to what people do and how they interact with language, with words, sentences, and signs. Don’t derive an arbitrary ontology or an abstract rule from particular cases: watch what people do, how they behave, and iterate accordingly. The success of their algorithms is predicated on the recognition that meaning is not something fixed which can be analysed and understood apart from what people do. Statistical modelling based on actual user behaviour will win out over attempting to second guess what they want with static schema. In Google’s total archive, the company don’t just retain every book, every page, every sentence, but every interaction with every item: every click, pause, foray, allusion, babble, farrago and yawn. For our cacophonies are Google’s gold.

There can be no doubt that Google’s use of statistical techniques has helped it advance far beyond earlier attempts at “artificial intelligence,” since it can use its data supply to automate the process of learning, instead of relying on experts to mold the data to perfection.

However, it should be pointed out that whatever Google is doing, it is obviously inferior to whatever it is the brain is doing. Google has to ingest every book ever written and still produces terrible, ungrammatical translations from Chinese to English; a normal human brain needs nothing like that much data to do better. A normal human brain doesn’t need to process thousands of training messages to tell spam from ham. However it is that the brain works, it learns far more, and far more quickly, than Google does from the same data set.
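The data hunger of statistical learning is easy to demonstrate with a toy spam filter. This is a generic naive-Bayes-style sketch (my own illustration, not Google’s actual pipeline): the scores only become informative once the word counts are backed by many training messages, which is precisely the contrast with the brain.

```python
from collections import Counter

# Toy word-count spam scorer (an illustration, not any real system).
# With only a handful of training messages, most words are unseen and
# the scores say little -- hence the need for thousands of examples.
def train(messages):
    spam, ham = Counter(), Counter()
    for text, is_spam in messages:
        (spam if is_spam else ham).update(text.lower().split())
    return spam, ham

def spam_score(text, spam, ham):
    score = 0.0
    for w in text.lower().split():
        # +1 smoothing so unseen words pull the score toward neutral
        score += (spam[w] + 1) / (ham[w] + 1)
    return score / max(len(text.split()), 1)

spam_counts, ham_counts = train([
    ("buy cheap pills now", True),
    ("cheap loans act now", True),
    ("lunch at noon tomorrow", False),
])
# "cheap" appears twice in spam and never in ham, so it raises the score.
```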

Source: jonathangray.org

What Business is Wall Street in?

The best analogy for traders? They are hackers. Just as hackers search for and exploit operating system and application shortcomings, traders do the same thing. A hacker wants to jump in front of your shopping cart and grab your credit card and then sell it. A high frequency trader wants to jump in front of your trade and then sell that stock to you. A hacker will tell you that they are serving a purpose by identifying the weak links in your system. A trader will tell you they deserve the pennies they are making on the trade because they provide liquidity to the market.

Mark Cuban - What Business is Wall Street in? (via Bill Mill)

Source: blogmaverick.com

The Joy of Ambiguous Boundaries

Iwata Asks - The Joy of Ambiguous Boundaries

Let’s see, it’s a link where Shigesato Itoi (creator of EarthBound) talks to Shigeru Miyamoto and Satoru Iwata about epistemology. Yeah, I guess I’ll blog that.

Itoi: Ambiguous boundaries are the key. You can say that about anything, like virtual worlds, that draw upon the power of our imagination. Put an unusual way, I think that ambiguous world is like the Otherworld. We live in this world that we think is the real one, so if something from over there enters here, this world becomes unstable. That is at times frightening, at times thrilling.

Iwata: Ahh, the boundaries break down. We can relax only when there are proper boundary lines.

Itoi: Right, right. People long ago always existed in an unstable world. In other words, to people in the time of the Tale of Genji, there really were ghosts.

Iwata: Uh-huh.

Itoi: But maybe that’s not something to talk about here.

Iwata: No, it’s interesting. (laughs)

Itoi: Takaaki Yoshimoto says that, a long time ago, things were comprehended far less clearly, and that many things were not clearly delineated. With regard to questions like whether gods or ghosts existed, we can never relate to people in the past as long as we try to apply modern thinking. Rather, they must have existed in a more ambiguous frame of mind, he said.

Itoi: To pull in yet another area of thought, I think one of the reasons that such works are increasing each year is that academic fields in modern science that treat of epistemology are still young and not yet fully established. Everyone has an almost physical reluctance to plunge down that path. It’s interesting to think that people are trying so hard to recapture that fading ambiguity.

Iwata: That fogginess that was a matter of course in the past exists at the boundary between the real world and the one within the screen.

Itoi: Yes. You can sense that, which is peculiarly pleasing. For example, the boundaries between fiction and nonfiction and fantasy and documentary were once all vague.

Iwata: Hmm, that may be exactly what we are trying to give people now. I showed you this briefly before, but the AR Games software in the Nintendo 3DS system itself is interesting for the way it mixes reality and virtual space.

You heard it here first: Nintendo 3DS, now you’re playing with ghosts!

Source: iwataasks.nintendo.com

Directed Edge - Google Spam Heresy: The AdSense Paradox

Directed Edge - Google Spam Heresy: The AdSense Paradox

Being something of a search weenie, as my eyelids were feeling heavy today I found myself mulling over the problem, “How would one detect Google spamming?”

The answer turns out to be surprisingly easy. Who has an incentive to spam Google? People living from advertising. Who owns the largest online display ad network? Google.

So, here’s the heresy: the spamminess of a web site is directly proportional to its ad click-through.

Think about it — in a typical internet search, a navigation path terminating at that page is the best result. If they click on an ad, it probably means you missed serving up the right page in the first place. As a corollary, the pages best optimized to pull you in via a search term and send you back out via a related ad are among the worst results.
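The heresy reduces to a one-line ranking signal. A hypothetical sketch (the function names and the threshold are mine, not Directed Edge’s): score each page by the fraction of its search-driven visits that end in an ad click, and treat a high fraction as a spam signal.

```python
# Hypothetical sketch of the heuristic: a page whose search visitors
# frequently bounce out via ads probably wasn't the answer they wanted.
def ad_clickthrough_rate(search_visits: int, ad_clicks: int) -> float:
    """Fraction of search-driven visits that end in an ad click."""
    return ad_clicks / search_visits if search_visits else 0.0

def looks_spammy(search_visits: int, ad_clicks: int,
                 threshold: float = 0.3) -> bool:
    # threshold is an illustrative number, not an empirical one
    return ad_clickthrough_rate(search_visits, ad_clicks) > threshold

# A content farm: 1000 search visits, 450 of them leave via an ad.
assert looks_spammy(1000, 450)
# A genuine answer page: visitors stay, or leave without clicking ads.
assert not looks_spammy(1000, 20)
```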

I’ve said for years that Google exists on the razor’s edge of a paradox: if their results are good, then web surfers will ignore the ads on the side and just click on the top search result. If their search results are good, then the top companies in each field will already be the top search results and won’t need to advertise. That means that the ones who do advertise will be the crappy companies in the field, so surfers will be further trained to ignore the ads. The result is that Google’s natural settling point is with results that are crappy enough to get people to look at the ads but not so crappy that users switch to Bing or something.

I don’t see how Google gets around this paradox. I kind of wonder if search isn’t one of those areas of natural monopoly that ought to be tightly regulated or dispersed into open source or something. Mickey Kaus recently predicted that Google will get itself sucked into some political controversy sooner or later, and I tend to agree that it’s inevitable. What result should you get when you search for “health care bill cost”? “Obama birth place”? That the answers to these queries are generated algorithmically doesn’t mean they’re neutral! There’s an old hacker koan:

In the days when the Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.

“What are you doing?”, asked Minsky.

“I am training a randomly wired neural net to play Tic-tac-toe”, Sussman replied.

“Why is the net wired randomly?”, asked Minsky.

“I do not want it to have any preconceptions of how to play”, Sussman said.

Minsky then shut his eyes.

“Why do you close your eyes?” Sussman asked his teacher.

“So that the room will be empty.”

At that moment, Sussman was enlightened.

The algorithm always has a bias.

The myth of scale is seductive

The myth of scale is seductive because it is easier to spread technology than to effect extensive change in social attitudes and human capacity. In other words, it is much less painful to purchase a hundred thousand PCs than to provide a real education for a hundred thousand children; it is easier to run a text-messaging health hotline than to convince people to boil water before ingesting it; it is easier to write an app that helps people find out where they can buy medicine than it is to persuade them that medicine is good for their health.

Kentaro Toyama (Boston Review) - Can Technology End Poverty? (Spoiler Alert: The answer is no. I particularly like the explanation that technology is multiplicative not additive, so for a society trending negative, technology just makes the slide faster. Via llimllib)
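The multiplicative-not-additive point can be stated as plain arithmetic. In this toy model (my framing, not a formula from Toyama’s essay), the outcome is social capacity times a technology amplifier: amplifying a positive trend helps, amplifying a negative one makes the slide faster, whereas a naive additive model predicts technology always helps.

```python
def additive(capacity: float, tech: float) -> float:
    return capacity + tech          # naive model: tech always helps

def multiplicative(capacity: float, tech: float) -> float:
    return capacity * tech          # tech amplifies whatever is there

# A society trending positive (+2) vs. one trending negative (-2),
# each given the same technology boost (a factor of 3):
assert multiplicative(2, 3) == 6    # good trend, made better
assert multiplicative(-2, 3) == -6  # bad trend, slide made faster
assert additive(-2, 3) == 1         # additive model wrongly predicts a net gain
```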

Source: bostonreview.net

Evolution of the Chess Computer

Tom Gauld - Chess Computers

And here I specially stayed to show that, were there such machines exactly resembling the organs and outward form of an ape or any other irrational animal, we could have no means of knowing that they were in any respect of a different nature from these animals; but if there were machines bearing the image of our bodies, and capable of imitating our actions as far as it is morally possible, there would still remain two most certain tests whereby to know that they were not therefore really men.

Of these the first is that they could never use words or other signs arranged in such a manner as is competent to us in order to declare our thoughts to others: for we may easily conceive a machine to be so constructed that it emits vocables, and even that it emits some correspondent to the action upon it of external objects which cause a change in its organs; for example, if touched in a particular place it may demand what we wish to say to it; if in another it may cry out that it is hurt, and such like; but not that it should arrange them variously so as appositely to reply to what is said in its presence, as men of the lowest grade of intellect can do.

The second test is, that although such machines might execute many things with equal or perhaps greater perfection than any of us, they would, without doubt, fail in certain others from which it could be discovered that they did not act from knowledge, but solely from the disposition of their organs: for while reason is an universal instrument that is alike available on every occasion, these organs, on the contrary, need a particular arrangement for each particular action; whence it must be morally impossible that there should exist in any machine a diversity of organs sufficient to enable it to act in all the occurrences of life, in the way in which our reason enables us to act.

— René Descartes, Discourse on Method, Part V

A third test? (Incidentally, I think the “Turing Test” should really be called the First Cartesian Test. This passage holds up today much better than Turing’s non-mathematical thoughts do.)

Source: Flickr / tomgauld