I am or AI am?

Dada Kind, modified 8 Years ago at 7/17/15 12:31 AM
Created 8 Years ago at 7/17/15 12:29 AM

I am or AI am?

Posts: 633 Join Date: 11/15/13 Recent Posts

Edward Frenkel, mathematics professor at Berkeley, at the Aspen Ideas Festival discussing the plausibility of AI and transhumanism

Discussions around AI and consciousness tend to intersect with the meditation community frequently, so I think this might be appreciated here.
The Poster Formerly Known As RyanJ, modified 8 Years ago at 7/17/15 12:53 AM
Created 8 Years ago at 7/17/15 12:51 AM

RE: I am or AI am?

Posts: 85 Join Date: 6/19/15 Recent Posts
AI is always a fun topic. This is where David Chapman excels, for the simple fact that he was a researcher at MIT's AI laboratory. Here are some excerpts that may be relevant, tied in with Integral theory, which David (and I) have some reservations about as a Theory of Everything. It's more of a page link for others, as it's something I plan to investigate in detail at a later time. Although I've only studied artificial neural networks and backpropagation lightly, the whole time I was thinking, "This thing is basically the chain rule from calculus 1 feeding back into itself with a few tricks here and there; how can superintelligent AI ever evolve from anything so flat and linear?"
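To make that chain-rule point concrete, here's a minimal sketch (the toy network and numbers are my own, not from the talk or from Chapman's post): a one-neuron "network" where every gradient is just the chain rule applied link by link, checked against a numerical derivative.

```python
import math

# Toy network: y_hat = w2 * tanh(w1 * x), squared-error loss.
# Backprop here is nothing but the calculus-1 chain rule, applied
# once per link in the composition.

def forward(w1, w2, x):
    h = math.tanh(w1 * x)      # hidden activation
    return h, w2 * h           # (hidden value, prediction)

def grads(w1, w2, x, y):
    h, y_hat = forward(w1, w2, x)
    dloss = 2.0 * (y_hat - y)           # dL/dy_hat
    dw2 = dloss * h                     # dL/dw2 = dL/dy_hat * dy_hat/dw2
    dh = dloss * w2                     # dL/dh, chained back one step
    dw1 = dh * (1.0 - h * h) * x        # chain through tanh'(u) = 1 - tanh(u)^2
    return dw1, dw2

# Sanity check: the hand-chained gradient matches a central difference.
w1, w2, x, y = 0.5, -1.2, 0.8, 0.3
eps = 1e-6

def loss(a, b):
    return (forward(a, b, x)[1] - y) ** 2

num_dw1 = (loss(w1 + eps, w2) - loss(w1 - eps, w2)) / (2 * eps)
ana_dw1, ana_dw2 = grads(w1, w2, x, y)
assert abs(num_dw1 - ana_dw1) < 1e-5
```

Whether stacking many more of these chained steps can ever add up to intelligence is, of course, exactly the question being raised above.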

http://meaningness.com/metablog/ken-wilber-boomeritis-artificial-intelligence

"The interesting part of AI research is the attempt to create minds, people, selves. Besides the fun of playing Dr. Frankenstein, AI calls orange’s bluff.

Orange says that rationality is what is essential to being human. If that’s right, we ought to be able to program rationality into a computer, and thereby create something that is also essentially human—an intelligent self—although it would not be of our species.

This project seemed to be going very well up until about 1980, when progress ground to a halt. Perhaps it was a temporary lull? Ironically, by 1985, hype about AI in the press reached its all-time peak. Human-level intelligence was supposed to be just around the corner. Huge amounts of money poured into the field. For those of us on the inside, the contrast between image and reality was getting embarrassing. What had gone wrong?

An annoying philosopher named Hubert Dreyfus had been arguing for years that AI was impossible. He wrote a book about this called What Computers Can’t Do. We had all read it, and it was silly. He claimed that a dead German philosopher named Martin Heidegger proved that AI couldn’t work. Heidegger is famous as being the most obscure, voluminous, and anti-intellectual philosopher of all time.

I found a more sensible diagnosis. Rationality requires reasoning about the effects of actions. This turned out to be surprisingly difficult, and came to be called the “frame problem”. In 1985, I proved a series of mathematical theorems that showed that the frame problem was probably inherently unsolvable.

This was a jarring result. Rational action requires a solution to the frame problem; but rationality (a mathematical proof) appeared to show that no solution was possible.

Orange had turned against itself, and cut off the tree-limb it was standing on. Still, as we hurtled to the ground, we figured that we’d somehow find a way out. There had to be a solution, because of course we do all act rationally.

At this point, Phil Agre came back from a gig in California with a shocking announcement:

Dreyfus was right.

What?? Had Phil gone over to the Dark Side?

But with the announcement, he brought the secret key: a pre-publication draft of Dreyfus’ next book, Being-in-the-World, which for the first time made Heidegger’s magnum opus, Being and Time, comprehensible.

Being and Time demolishes the whole orange framework. Human being is not a matter of calculation. People are not isolated individuals, living in a world of dead material objects, strategizing to manipulate them to achieve utilitarian goals. We are always already embedded in a web of connections with living nature and with other people. Our actions are called forth spontaneously by the situation we find ourselves in—not rationally planned in advance.

If you have a green worldview, you’re thinking “duh, everyone knows all that—we don’t need a dead German philosopher to tell us.” But it is only because of Heidegger that you can be green. More than anyone else, he invented that worldview.

Being-in-the-World showed us why the frame problem was insoluble. But it also provided an alternative understanding of activity. Most of the time, you simply see what to do. Action is driven by perception, not plans.

Now, seeing is something us AI guys knew something about. Computer vision research had been about identifying manufactured objects in a scene. But could it be redirected into seeing what to do?

Yes, it could. In a feverish few months, Agre and I developed a completely new, non-orange approach to AI. We found that bypassing the frame problem eliminated a host of other intractable technical difficulties that had bedeviled the field.

In 1987, we wrote a computer program called Pengi that illustrated some of what we had learned from Dreyfus, Heidegger, and the Continental philosophical tradition. Pengi participated in a life-world. It did not have to mentally represent and reason about its circumstances, because it was embedded in them, causally coupled with them in a purposive dance. Its skill came from spontaneous improvisation, not strategic planning. Its apparently intelligent activity derived from interactive dynamics that—continually involving both its self and others—were neither subjective nor objective.

Pengi was a triumph: it could do things that the old paradigm clearly couldn’t, and (although quite crude) seemed to point to a wide-open new paradigm for further research. AI was unstuck again! And, in fact, Pengi was highly influential for a few years.
David Chapman, Vision, Instruction, and Action

Although arguably non-orange, Pengi was hardly green. Particularly, it was in no sense social. The next program I wrote, Sonja, illustrated certain aspects of what it might mean for an AI to be socially embedded. I will have more to say about this elsewhere when I explain participation, the nebulosity of the self/other boundary, and the fact that meaningness is neither subjective nor objective. This work is arguably “yellow,” in offering orange-language explanations for green facts of existence.

There was another problem. Pengi’s job was to play a particular video game. Its ability to do that had to be meticulously programmed in by hand. We found that programming more complicated abilities was difficult (although there seemed to be no obstacle in principle). Also, although perhaps ant brains come wired up by evolution to do everything they ever can, people are flexible and adaptable. We pick up new capabilities in new circumstances.

The way forward seemed to be machine learning, an existing technical field. Working with Leslie Kaelbling, I tried to find ways an AI could develop skills with experience. The more I thought about this, though, the harder it seemed. “Machine learning” is a fancy word for “statistics,” and statistics take an awful lot of data to reach any conclusions. People frequently learn all they need from a single event, because we understand what is going on.

In 1992, I concluded that, although AI is probably possible in principle, no one has any clue where to start. So I lost interest and went off to do other things.

In Boomeritis, the anti-hero—who may be me—says:

    I know, the computer part sounds far out, but that’s only because you don’t know what’s actually happening in AI. I’m telling you, it’s moving faster than you can imagine. (p. 306)

The reality, though, is that AI is moving slower than you can imagine. There’s been no noticeable progress in the past twenty years. And a few pages later “I” explain why:"
The Poster Formerly Known As RyanJ, modified 8 Years ago at 7/17/15 1:37 AM
Created 8 Years ago at 7/17/15 1:35 AM

RE: I am or AI am?

Posts: 85 Join Date: 6/19/15 Recent Posts
Having skimmed the video (I hadn't seen him before, but I already really like the guy), it seems to be a criticism of the same thing David Chapman criticizes. And while my education in this topic is definitely not up to par, it basically points to the historical origins of the Cartesian self in the European Enlightenment, this sort of clockwork, mechanical, Newtonian system (the orange framework in Chapman's case). Many explorations of AI are a continuation of this assumption of us as these sorts of robots, one so ingrained into culture that it basically exists invisibly in the very narrative structure of language. But these words are pointless; after all, it's just chemicals firing and fingers typing! I am a robot. Pew pew pew!

Dada Kind, modified 8 Years ago at 7/17/15 4:44 PM
Created 8 Years ago at 7/17/15 4:44 PM

RE: I am or AI am?

Posts: 633 Join Date: 11/15/13 Recent Posts
Word