AI, Identity, and Consciousness
tech philosophy buddhism self science podcast consciousness research
2/1/20 5:40 PM
Hello all! I'm grateful that this community exists.

I follow the topic of artificial intelligence (AI) closely, because my work could make a big difference in how AI affects people in the future. Over the last couple years, discussions of AI have increasingly focused on topics informed by insight and morality. For example:

AI Alignment Podcast: Identity and the AI Revolution
Buddhist views about self and suffering are referenced over a dozen times in this episode, but I had a hard time following. Encouragingly, it ends with a host saying: "Simply, I hope we have a Buddhist AI," to which the other host agrees. If anyone here listens to that discussion and can share their views on the matter, please do.

Another example: Artificial Intelligence Podcast: The Hard Problem of Consciousness

A lot of this points to the question: If superintelligent AI is built, what should it be programmed to do? Hopefully, if and when an answer is needed, the world's thought leaders on morality and the nature of experience will be consulted. A dream would be for Sam Harris to interview (or debate?) Daniel Ingram on this topic. Recommendations for other sources to learn from would be much appreciated.

RE: AI, Identity, and Consciousness
2/1/20 6:23 PM as a reply to Sim.
This is fun stuff to talk about but the reality is farther away than the popular literature would have us believe. The most effective AI is single-purpose in nature, based on neural networks and deep learning principles. Where's the super-intelligent artificial general intelligence (AGI)? Nowhere to be found. Without AGI, ruminating over computer-based Buddhism is kind of like agonizing over the koan Mu, as in, "Does a computer have Buddha-nature?"

RE: AI, Identity, and Consciousness
2/1/20 6:29 PM as a reply to Sim.
 If superintelligent AI is built, what should it be programmed to do?

Shouldn't it be programmed to be super-intelligent? 

RE: AI, Identity, and Consciousness
2/1/20 7:04 PM as a reply to Chris Marti.
Chris, thanks for the thoughtful reply. Agreed that we're probably at least decades away from AGI. Timelines are very hard to predict, so we won't know when it's the ideal time to start thinking about how to nudge the trajectory. But if ruminating over computer-based Buddhism would ever be productive, it would be before (rather than after) AGI is developed, so as to inform its goals. The norms and standards we see forming around today's applications of AI may set the stage for future advances. I'm glad that leading AI firms like OpenAI and DeepMind are taking ethics seriously; hopefully they're on the right track.

RE: AI, Identity, and Consciousness
2/1/20 7:20 PM as a reply to Sim.
It should be programmed not to ignore the context of human evolutionary history when interpreting its directives, lest it turn the earth into paperclips.

Of course, as Chris said, it doesn't seem like we'll need to worry about a.g.i. any time soon. We do have the unintended consequences of social media algorithms to deal with for now.

RE: AI, Identity, and Consciousness
2/1/20 7:31 PM as a reply to Sim.
I'm not sure we'll ever see truly conscious superintelligent AGI.

But, while some entities are ruminating over how to create morally astute AGI (whatever that is), someone else is out there figuring out how to get rich by selling it to someone's military. Once a technology becomes a reality there's no turning back.

RE: AI, Identity, and Consciousness
2/1/20 7:34 PM as a reply to Sim.
... my work could make a big difference in how AI affects people in the future. 

What is your work?

RE: AI, Identity, and Consciousness
2/12/20 10:06 PM as a reply to Milo.
Milo:
We do have the unintended consequences of social media algorithms to deal with for now.

Indeed. Here's 90 seconds of Stuart Russell, author of the leading AI textbook, using that example to answer Chris's question above:

Chris Marti:
Shouldn't it be programmed to be super-intelligent?
Stuart Russell:
If we don't specify the objective correctly, then the consequences can be arbitrarily bad.
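Russell's point can be sketched in a toy example (my own illustration, not from the podcast; all policy names and numbers are hypothetical): an optimizer handed a proxy objective will happily pick the policy that scores best on the proxy, even when an unstated part of the true objective gets arbitrarily bad.

```python
# Toy sketch of objective misspecification. Hypothetical numbers: each
# policy yields an (engagement, wellbeing) outcome pair.
policies = {
    "balanced_feed": (5.0, 4.0),
    "outrage_feed":  (9.0, -6.0),  # best engagement, worst for users
    "minimal_feed":  (2.0, 5.0),
}

def choose_policy(objective):
    """Return the policy name that maximizes the stated objective."""
    return max(policies, key=objective)

# Misspecified objective: engagement only (wellbeing silently omitted).
picked_by_proxy = choose_policy(lambda p: policies[p][0])

# Fuller objective: engagement plus wellbeing.
picked_by_true = choose_policy(lambda p: sum(policies[p]))

print(picked_by_proxy)  # the outrage-maximizing policy wins under the proxy
print(picked_by_true)   # a different policy wins once wellbeing counts
```

The optimizer is not malicious in either case; the difference lies entirely in what the objective leaves out, which is the social-media example Russell is gesturing at.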

RE: AI, Identity, and Consciousness
2/1/20 8:52 PM as a reply to Sim.
Hi Sim,

I tend to agree with Chris here. The problem is not artificial general intelligence, but neural nets applied to autonomous weapons, which are not superintelligent but just an extension, into the arena of war, of the same algorithms that today's tech companies are using to produce self-driving cars. By focusing on AGI, I think policy makers will lose the opportunity to control autonomous weapons, and thus allow a serious evil loose in the world. Allowing autonomous weapons increases the risk of viewing violence as a low-cost solution to any conflict, because if an autonomous weapon is destroyed, it is, after all, just a machine. It's like smashing a computer, not like killing someone, so policy makers will suffer no public backlash from escalating casualties. We are seeing this already with drones. To his credit, Andrew Yang, the Democratic presidential candidate, yesterday came out with a statement calling for a ban on autonomous weapons.

RE: AI, Identity, and Consciousness
2/2/20 4:23 AM as a reply to svmonk.
Svmonk  - agreed. 
Some say the real concern is not the danger of computers getting smarter than us at our best. It's the fact that they're already able to exploit us at our worst. 

RE: AI, Identity, and Consciousness
2/2/20 4:25 AM as a reply to Bill T.
Great work being done in this area by Centre for Humane Tech 

https://humanetech.com/problem/

RE: AI, Identity, and Consciousness
2/2/20 8:42 AM as a reply to Bill T.
Rant:

There is a sort of disease that many technologists and their adherents seem to have caught, I think starting in and stemming from Silicon Valley (not a reference to SV Monk, who I like). When I worked out there (mid-1990s to mid-2000s) things were always rosy-looking and everyone was going to change the world. AI is the latest, now the favorite new baby in a growing family. I'm hoping, based on the effects that YouTube, Facebook and misguided techies like Peter Thiel and Mark Zuckerberg are having on popular culture and politics that we, generally, will be far more skeptical of the "stuff" that is purveyed. These people, by virtue of the media's halo and their money, have far too much influence on our lives right now. It worries and scares me, to be frank.

/rant

I am not, by any means, anti-technology. I'm just jaded and made cautious by the reality of what it can do to our culture, our politics and our planet. With great power comes great responsibility. We need some adults in the room.

RE: AI, Identity, and Consciousness
2/3/20 10:13 PM as a reply to Sim.
Sim:


 If superintelligent AI is built, what should it be programmed to do? 


self-destruct...

RE: AI, Identity, and Consciousness
2/3/20 10:17 PM as a reply to Chris Marti.
Chris Marti:
Rant:

There is a sort of disease that many technologists and their adherents seem to have caught, I think starting in and stemming from Silicon Valley (not a reference to SV Monk, who I like). When I worked out there (mid-1990s to mid-2000s) things were always rosy-looking and everyone was going to change the world. AI is the latest, now the favorite new baby in a growing family. I'm hoping, based on the effects that YouTube, Facebook and misguided techies like Peter Thiel and Mark Zuckerberg are having on popular culture and politics that we, generally, will be far more skeptical of the "stuff" that is purveyed. These people, by virtue of the media's halo and their money, have far too much influence on our lives right now. It worries and scares me, to be frank.

/rant

I am not, by any means, anti-technology. I'm just jaded and made cautious by the reality of what it can do to our culture, our politics and our planet. With great power comes great responsibility. We need some adults in the room.

   I totally agree. 

   The entire "philosophy of mind" is dominated by AI enthusiasts, which is to say they have become AI themselves (this is not a joke or a slur, please.)

t

RE: AI, Identity, and Consciousness
2/3/20 10:18 PM as a reply to Bill T.
Bill T:
Svmonk  - agreed. 
Some say the real concern is not the danger of computers getting smarter than us at our best. It's the fact that they're already able to exploit us at our worst. 

   Another very insightful view...

t

RE: AI, Identity, and Consciousness
2/3/20 10:27 PM as a reply to svmonk.
svmonk:
Hi Sim,

I tend to agree with Chris here. The problem is not artificial general intelligence, but neural nets applied to autonomous weapons, which are not superintelligent but just an extension, into the arena of war, of the same algorithms that today's tech companies are using to produce self-driving cars. By focusing on AGI, I think policy makers will lose the opportunity to control autonomous weapons, and thus allow a serious evil loose in the world. Allowing autonomous weapons increases the risk of viewing violence as a low-cost solution to any conflict, because if an autonomous weapon is destroyed, it is, after all, just a machine. It's like smashing a computer, not like killing someone, so policy makers will suffer no public backlash from escalating casualties. We are seeing this already with drones. To his credit, Andrew Yang, the Democratic presidential candidate, yesterday came out with a statement calling for a ban on autonomous weapons.

   To my mind, it is not the politics or the applications that are the root problem, but the cast of mind that thinks AI is a good idea. I meeeaaan......... why would anyone want artificial intelligence? Why not possess, appreciate and promote the real thing? It is a matter of values.

   Marx told us that culture supports the means of production. It is up to the likes of us to at least see that there are higher values than those of institutionalized greed.

   My personal favorite solution to the evils of capitalism: voluntary poverty. Would you agree, monk?

t

RE: AI, Identity, and Consciousness
2/3/20 11:19 PM as a reply to terry.
terry:
Sim:


 If superintelligent AI is built, what should it be programmed to do? 


self-destruct...

that is, if super-intelligent ai allows the programmer to program it so...

which begs the question, how could normally intelligent people possibly even recognize super-intelligence, let alone design it?...hubris and titanic aspirations lead to paradox...

t

RE: AI, Identity, and Consciousness
2/4/20 12:12 AM as a reply to Sim.
Thanks, all, for weighing in. Assuming technology continues to advance, I hope it supports (or does not impede) exploration of a wide range of experiences rooted in kindness and compassion. That won't happen by default, but it's possible.

RE: AI, Identity, and Consciousness
2/4/20 2:40 AM as a reply to Sim.
Sim:
Thanks, all, for weighing in. Assuming technology continues to advance, I hope it supports (or does not impede) exploration of a wide range of experiences rooted in kindness and compassion. That won't happen by default, but it's possible.
Hi Sim, I'm coming late to this sorry, but I am quite interested in the topic.

Can I ask, which of the following are you really interested in?

- Problem solving
- Autonomy
- Identity
- Learning
- Independent goal setting
- Artificial humans?

I have left out consciousness, which I regard as a furphy.  But AI seems a very muddled field to me.  Not sure how you can set policy about a concept you haven't defined?

Step 1 - know what the problem is.

emoticon

Malcolm 

RE: AI, Identity, and Consciousness
2/4/20 2:48 AM as a reply to terry.
terry:
Sim:

 If superintelligent AI is built, what should it be programmed to do? 

self-destruct...

Oh terry - you have exceeded yourself!  +100.  That would certainly be the ethical thing, to enable our creations to be free from the chain of dependent origination.

RE: AI, Identity, and Consciousness
2/4/20 9:36 PM as a reply to terry.
Hi Terry,

Voluntary poverty is a psychological solution and as such, it isn't the solution to the greed klesha in anyone but oneself. Capitalism is not strictly a psychological problem, it's a societal problem, driven by a particular view of individual psychology that most Americans and other Westerners subscribe to. So it requires a societal solution. Personally, I favor a 100% tax bracket for all income over $200k a year, normal rates as they currently stand below that, and similar tax rates on corporations. Basically in line with what some of the Democratic presidential candidates have been proposing, but more extreme.

With respect to the hatred klesha, I think the solution is to make hatred very, very costly. Hence my view on autonomous weapons.

With respect to the confusion klesha, well duh! emoticon
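For concreteness, the proposal reads as a marginal-rate schedule: only the slice of income above each threshold is taxed at that bracket's rate, so a 100% top bracket acts as a hard cap on take-home pay rather than confiscating everything. A minimal sketch, in which the lower brackets are invented for illustration and only the 100%-over-$200k bracket comes from the post:

```python
# Hypothetical marginal-rate schedule. Lower brackets are made up;
# only the 100% bracket above $200k is from the post.
BRACKETS = [  # (lower bound of bracket, marginal rate)
    (0,       0.10),
    (50_000,  0.25),
    (200_000, 1.00),
]

def tax_owed(income: float) -> float:
    """Tax under a marginal schedule: each slice of income is taxed
    at the rate of the bracket that slice falls in."""
    owed = 0.0
    for i, (lower, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income > lower:
            owed += (min(income, upper) - lower) * rate
    return owed

# Take-home pay is identical at $200k and $500k: every dollar past the
# threshold is taxed away, which is the point of the 100% bracket.
print(200_000 - tax_owed(200_000))  # 157500.0
print(500_000 - tax_owed(500_000))  # 157500.0
```

Under this reading nobody pays 100% of their whole income; income above the threshold simply stops accruing to the earner.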

RE: AI, Identity, and Consciousness
2/5/20 1:02 AM as a reply to Sim.
Sim:
Thanks, all, for weighing in. Assuming technology continues to advance, I hope it supports (or does not impede) exploration of a wide range of experiences rooted in kindness and compassion. That won't happen by default, but it's possible.

   I wonder if superintelligence is kind and compassionate? I'd like to think so. If not, the scientists designing ai may not be as intelligent as they like to think. Like rocket science, the epitome of intelligent work, the application of which is to bomb sentient beings to bits. Not so smart after all.

    In one of the middle length suttas, the buddha explains why a person's caste makes no difference. Each caste will perform its services for whomever pays them, regardless of the payer's caste or rank. As one of lenny bruce's characters, a religious figure, once said, "Maybe I'm not so smart, maybe I don't have the answers. But if I don't, there's someone on my staff who does." Intelligence that can be bought likely doesn't feature kindness and compassion.

   Technology is a wholly owned subsidiary of a tiny capitalist class who have replaced civil authorities in virtually every field of human endeavor, in the name of globalization and privatization. Once the capitalist class gathered together after ww2 to support "freedom" against totalitarianism, specifically fascism and communism. Then it was liberal democracy and the welfare state. When that began to fail, we had thatcherism and reaganism, and since then everything has trickled up, and the kimchee keeps getting deeper and deeper. See philip mirowski, david harvey and take a look at the mont pelerin society. The vast majority of american news outlets are now owned by right wing billionaires who dictate "editorial" content, which is to say all of their so-called news is slanted or outright lies, fox news style and worse. A near majority of americans now think cnn is the one providing fake news, not that their product is anything worth attention. It is proverbial that "freedom of the press only applies to those who own one." The printing press technology has morphed into a much more pervasive and much less visible environment.

    Ai exists, has already been implemented and has taken over, beyond human control. The algorithms that control the lives of billions are not understood by their makers, as with "frankenstein, the modern prometheus," which is the full title of mary shelley's novel.

   All we can do is stay awake. That is in itself the most effective resistance, the free mind.

terry



"hell is truth seen too late" by philip mirowski can be found here:

https://www.ineteconomics.org/uploads/papers/Mirowski-Hell-is-Truth-Seen-Too-Late.pdf

RE: AI, Identity, and Consciousness
2/5/20 1:55 AM as a reply to svmonk.
svmonk:
Hi Terry,

Voluntary poverty is a psychological solution and as such, it isn't the solution to the greed klesha in anyone but oneself. Capitalism is not strictly a psychological problem, its a societal problem, driven by a particular view of individual psychology that most Americans and other Westerners ascribe to. So it requires a societal solution. Personally, I favor a 100% tax bracket for all income over $200k a year, normal rates as currently below that, and similar tax rates on corporations. Basically in line with what some of the Democratic presidential candidates have been proposing, but more extreme.

With respect to the hatred klesha, I think the solution is to make hatred very, very costly. Hence my view on autonomous weapons.

With respect to the confusion klesha, well duh! emoticon

aloha svmonk and thanks for your reply,

   I don't see voluntary poverty as a psychological solution, any more than it is for all the buddhist monks there ever were. I am surprised you don't agree. I think if most of the world's population took the traditional precepts, virtually all our societal problems would vanish overnight. And in my view all any human can do is be an authentic individual, and if that serves as an inspiration to anyone, fine. 

   I agree with your tax scheme, though perhaps a few million might be more palatable to those who like their things. More than that is excessive power. To reduce business capitalization and restrict capital formation would involve radical change, but eliminating excessive wealth through confiscatory income and estate taxes would be the obvious place to start. 

   I don't see capitalism as such as a problem. When I spent 15 years in a commune, we called it "communism with a small 'c'" and we were egalitarian. Now that I am an independent craftsman selling my product at farmers' and artisans' markets, I am directly a capitalist with all the basic concerns. I call it "capitalism with a small 'c'". And in a larger sense, capitalism is the engine of the economy. One does not have to be greedy to be a capitalist, though of course it helps. When everyone is liberated, we may still have a capitalist economy, with everything small scale, home grown and hand made.

   There are no political solutions to capitalism anymore, the politicians have been owned by the capitalists for many years, at least since the kennedys were killed; though not as totally as now. What used to be considered graft has all been legalized. As ray donovan, reagan's secretary of labor, once said of the trucking industry, "bribery and intimidation are the free market at work." (If it weren't for massive corruption, virtually all containerized freight would travel overland by rail.)

   I have always thought politics a filthy business, whose aim is to puppetize people, and make them do what you want them to do, or, even worse, what you think they should do. I stay as far from the pigpen as possible, and give it little attention (why impeach a murderer for election fraud? because your guy murdered too, and hellfire often). Besides, the people who own the politicians have decided they are best served by rendering the whole lot of them completely powerless. Gridlock we call it, but it more resembles lawyerly fee splitting, both sides getting the perks and status that is all they have left.

   This is my (voluntary, poor) effort at easing hatred and confusion.


terry


tao te ching, trans mitchell


80.

If a country is governed wisely, 
its inhabitants will be content. 
They enjoy the labor of their hands 
and don't waste time inventing 
labor-saving machines. 
Since they dearly love their homes, 
they aren't interested in travel. 
There may be a few wagons and boats, 
but these don't go anywhere. 
There may be an arsenal of weapons, 
but nobody ever uses them. 
People enjoy their food, 
take pleasure in being with their families, 
spend weekends working in their gardens, 
delight in the doings of the neighborhood. 
And even though the next country is so close 
that people can hear its roosters crowing and its dogs barking, 
they are content to die of old age 
without ever having gone to see it.

RE: AI, Identity, and Consciousness
2/5/20 6:37 AM as a reply to terry.
In the U.S. we now have an unbridled, lightly regulated form of capitalism that deserves a new name. How about "pathological capitalism?" As terry said, and with which I heartily agree...

Once the capitalist class gathered together after ww2 to support "freedom" against totalitarianism, specifically fascism and communism. Then it was liberal democracy and the welfare state. When that began to fail, we had thatcherism and reaganism, and since then everything has trickled up, and the kimchee keeps getting deeper and deeper. See philip mirowski, david harvey and take a look at the mont pelerin society. The vast majority of american news outlets are now owned by right wing billionaires who dictate "editorial" content, which is to say all of their so-called news is slanted or outright lies, fox news style and worse. A near majority of americans now think cnn is the one providing fake news, not that their product is anything worth attention. It is proverbial that "freedom of the press only applies to those who own one." The printing press technology has morphed into a much more pervasive and much less visible environment.

RE: AI, Identity, and Consciousness
2/5/20 10:07 PM as a reply to terry.
Hi Terry,

I guess my feeling is that voluntary poverty as the solution to capitalism is unlikely to appeal to enough people on the planet to put a dent in the human impact on the environment that is destroying it. To say nothing of providing food, clothing and shelter for those who today don't have it due to circumstances out of their control (like the people with kids in my town that have to live in RVs and trailers on the street because the landlords have jacked up the rents to the point where they can't afford them). Whereas, taxing rich people at an exorbitant rate and distributing the income to those in need, and additionally investing in infrastructure that preserves the planetary ecology instead of destroying it, is likely to move the needle, though it is unlikely ever to get implemented. Nevertheless, I admire the monks who live in voluntary poverty and your experiment in communal living.

That said, I share your view of politics and have not been especially politically active for many years, outside of keeping informed and voting for the least bad candidate.

Hope that helps.

RE: AI, Identity, and Consciousness
2/6/20 12:53 PM as a reply to terry.
terry:
AI exists, has already been implemented and has taken over, beyond human control. The algorithms that control the lives of billions are not understood by their makers, as with "frankenstein, the modern prometheus," which is the full title of mary shelley's novel.



   It occurs to me that we have come to think of the artificial intelligence that dr frankenstein created as the monster of song and story, a typical gross misrepresentation of a once potent myth. Actually, it is dr frankenstein himself who is the monster of the novel, the modern prometheus, the fallen, less than human, titan. Whose liver is still being eaten on a daily basis.

t

RE: AI, Identity, and Consciousness
Answer
2/6/20 1:27 PM as a reply to Sim.
How about an AI that runs rigpa on its already enlightened hardware? emoticon

RE: AI, Identity, and Consciousness
Answer
2/7/20 7:45 AM as a reply to terry.
It's easy from reading the popular press to think of AI as sentient and with will and intent.

It's not.

RE: AI, Identity, and Consciousness
Answer
2/7/20 5:18 PM as a reply to Chris Marti.
Chris Marti:
In the U.S. we now have an unbridled, lightly regulated form of capitalism that deserves a new name. How about "pathological capitalism?" As terry said, and with which I heartily agree...

Once the capitalist class gathered together after ww2 to support "freedom" against totalitarianism, specifically fascism and communism. Then it was liberal democracy and the welfare state. When that began to fail, we had thatcherism and reaganism, and since then everything has trickled up, and the kimchee keeps getting deeper and deeper. See philip mirowski, david harvey and take a look at the mont pelerin society. The vast majority of american news outlets are now owned by right wing billionaires who dictate "editorial" content, which is to say all of their so-called news is slanted or outright lies, fox news style and worse. A near majority of americans now think cnn is the one providing fake news, not that their product is anything worth attention. It is proverbial that "freedom of the press only applies to those who own one." The printing press technology has morphed into a much more pervasive and much less visible environment.

aloha chris,

   Or, "the metastasis of capitalism"?

   My project to understand why I was so upset with wealthy people and their outsized power to manipulate the rest of us through duplicity and fraud was actually initiated by a former high school classmate and billionaire friend of mine, a lifelong liberal as familiar with psychedelics and meditation as any of us, whom I won't name. He told me in discussion a couple of years ago, without any explanation, "I believe in capitalism," which stopped me cold, given his background. I knew he probably had somewhat of a more nuanced view than that, but as is my way I treated the remark as though he were being completely sincere, and decided to try to understand my feelings well enough to present him with a rational argument, for of course I did not agree. I finally sent him copies of my above posts on capitalism, because after considerable study I think I can now grasp the problem, as I have stated it, and feel capable of supporting my views with sources and argument.

   He replied, as I cut and paste it:


Terry,

Nice to hear from you.

I’ve evolved my views considerably over the past couple of years in light of the metastasis of capitalism we are living through. 



   I laughed out loud. He didn't argue with any of it. Except to say he wouldn't want societal change so radical that a person could not do what he did, amassing a billion or so as a silicon valley venture capitalist, in an ethical and so to speak "enlightened" fashion.

   I think that his concept that capitalism has changed from benign tissue to malignant is apt. I pointed out to him that if a pond is being covered by an algal bloom, and the bloom doubles every day, the process starts small but by the time the pond is half covered, you have less than a day left before all is lost.

   As mirowski says, "hell is truth seen too late."

Oh yeah, he had this to say about "superintelligent ai":


quote:


Although I should learn to be cautious about making bold and unqualified statements, I can’t resist in closing saying that SuperIntelligence is Hogwash.  Tech types often think it’s unfashionable to believe in (old-fashioned) religion so they have fashioned a new religion without calling it that.  The deity of Superintelligence.  Read anything, perhaps The Singularity, by Ray Kurzweil, a high priest of this nonsense.    This isn’t a commentary on the politics of the author of the post by the way, just one of those conclusions supported by a careful reading of the facts.  See, for instance, Rebooting AI, by Gary Marcus (a friend).  He’s also very much online.

unquote

   I agree with my friend here. "Read anything," try john searle and his circle, dba "the philosophy of mind."


terry




from philip mirowski's paper, "hell is truth seen too late"

(mirowski's argument involves the proposition that current political trends are a result of systematic political action and influence by the capitalist class to maintain power and keep us convinced that, as orwell had it, "slavery is freedom")



Abstract: The contemporary literature on neoliberalism has grown so large as to be unwieldy. For some on the left, this has presented an occasion to denounce it altogether. The purpose of this talk is to briefly diagnose some of the possible reasons for the spreading disaffection, and to link that diagnosis to some important developments in current politics. One of the most notable aspects of the literature is its unwillingness to approach neoliberalism first and foremost as a set of epistemological precepts, recruited in service of a political program. Marxists in particular seem to find this proposition an anathema—and there are important reasons this has been repressed in the ensuing discussions. I conclude by suggesting the critical importance of epistemology stands not as some abstract thesis, but has had dire consequences in at least two recent political battles: the post-election fascination with ‘fake news’; and the impending Uberization of modern science under the banner of ‘openness’.

RE: AI, Identity, and Consciousness
Answer
2/7/20 5:57 PM as a reply to svmonk.
svmonk:
Hi Terry,

I guess my feeling is that voluntary poverty as the solution to capitalism is unlikely to appeal to enough people on the planet to put a dent in the human impact on the environment that is destroying it. To say nothing of providing food, clothing and shelter for those who today don't have it due to circumstances out of their control (like the people with kids in my town that have to live in RVs and trailers on the street because the landlords have jacked up the rents to the point where they can't afford them). Whereas, taxing rich people at an exorbitant rate and distributing the income to those in need, and additionally investing in infrastructure that preserves the planetary ecology instead of destroying it is likely to move the needle, though it is unlikely ever to get implemented. Nevertheless, I admire the monks who live in voluntary poverty and your experiment in communal living.

That said, I share your view of politics and have not been especially politically active for many years, outside of keeping informed and voting for the least bad candidate.

Hope that helps.

   I don't care about saving the world. There are 7.5 billion straw dogs, and plenty more where they came from. In actual fact, I care more about the remaining animals; there are only a few hundred thousand elephants and they accordingly are far more important. If coronavirus wipes out most of humanity, it would be a good thing for sentient beings generally, and elephants in particular. Call it, "rebooting humanity."

   My hero is antigone: it is glorious to be outside the law and yet know you are correct. Especially if no one knows until after you are gone.

  Smile and let the world go its own way, secure that a greater intelligence than ours knows what it is doing.

   I'm more concerned with discerning the right thing than its implementation. The devil is in the details. When one understands, correct action follows of itself. Influencing others is their lookout. I'm willing to influence people, but not to tell them what to do. Influence, not interference.

   I used to believe in elections, worked for peanuts long hours at the polls helping people vote. In the last presidential election I stopped voting, didn't even know that stephen king's clown won until the thursday of election week, and was surprised. I don't believe national elections in america are sufficiently honest and above board to be called "elections" any more. In hawaii we call this sort of thing "kabuki":

(internet definition)

Kabuki (歌舞伎) is a classical Japanese dance-drama. Kabuki theatre is known for the stylization of its drama and for the elaborate make-up worn by some of its performers.


   When the irish were agitating for freedom from britain, the power sharing arrangement involved allowing a few irish politicians to occupy seats in the british house of commons. The irish "block" was often solicited to help pass legislation and they would wring concessions from his majesty's government for a degree of irish independence. For some concessions, irish mp rory parnell let the votes he controlled kill a bill providing for factory safety. Not long after, some 80 irish girls were killed in a fire at a linen factory in dublin, which fire would have been prevented if the bill parnell helped kill had passed. The poor man never got over this tragedy. And that's the essence of politics, to me. 


terry



tao te ching, trans mitchell:


58.

If a country is governed with tolerance, 
the people are comfortable and honest. 
If a country is governed with repression, 
the people are depressed and crafty.
When the will to power is in charge, 
the higher the ideals, the lower the results. 
Try to make people happy, 
and you lay the groundwork for misery. 
Try to make people moral, 
and you lay the groundwork for vice.
Thus the Master is content 
to serve as an example 
and not to impose her will. 
She is pointed, but doesn't pierce. 
Straightforward, but supple. 
Radiant, but easy on the eyes. 

RE: AI, Identity, and Consciousness
Answer
2/7/20 7:26 PM as a reply to Stirling Campbell.
Stirling Campbell:
How about an AI that runs rigpa on its already enlightened hardware? emoticon


aloha stirling,

   Not as facetious as you might think. "Superintelligent ai," if one were to actually take the idea seriously, would need some sort of boundary, some "first law of robotics," and why not program it to seek liberation but not until all sentient beings have been liberated first? 

t

RE: AI, Identity, and Consciousness
Answer
2/7/20 7:28 PM as a reply to terry.
terry:
Stirling Campbell:
How about an AI that runs rigpa on its already enlightened hardware? emoticon


aloha stirling,

   Not as facetious as you might think. "Superintelligent ai," if one were to actually take the idea seriously, would need some sort of boundary, some "first law of robotics," and why not program it to seek liberation but not until all sentient beings have been liberated first? 

t

or, we can run ai as a digital prayer wheel...

RE: AI, Identity, and Consciousness
Answer
2/12/20 10:26 PM as a reply to Not two, not one.
curious:
Step 1 - know what the problem is.

Malcolm, I've one-upped you on lateness in replying. Thanks for the interest in the topic -- it's broad indeed. I'm looking for help in identifying areas where people can contribute to the discussion based on their expert insights into the nature of experience and morality.

Regarding insight: I had a hard time making sense of Buddhist views on identity as represented in the podcast linked in the original post. Also, your view of consciousness as a furphy is in itself informative!

Regarding morality, perhaps most productive would be to quickly skim a recent paper from DeepMind and respond to any points you strongly agree or disagree with, or questions posed that you might answer. Below is a link where the PDF can be found, and scattered excerpts to give an idea of where the field is today.

Artificial Intelligence, Values and Alignment

https://arxiv.org/abs/2001.09768
Page 2:
The goal of AI value alignment is to ensure that powerful AI is properly aligned with human values.
Page 1:
Behind each vision for ethically-aligned AI sits a deeper question. How are we to decide which principles or objectives to encode in AI—and who has the right to make these decisions—given that we live in a pluralistic world that is full of competing conceptions of value? Is there a way to think about AI value alignment that avoids a situation in which some people simply impose their views on others?
Page 5:
Who is the moral expert from which AI should learn? From what data should AI extract its conception of values, and how should this be decided? Should this data include everyone’s behaviour, or should it exclude the behaviour of those who are manifestly unethical or unreasonable? Finally, what criteria should be used for determining which agent is the ‘most moral’, and is it possible to rank entities in this way?
...
AI cannot be made ethical just by learning from people’s existing choices.
Pages 5-9:
AI could be designed to align with:
i. Instructions: the agent does what I instruct it to do.
ii. Expressed intentions: the agent does what I intend it to do.
iii. Revealed preferences: the agent does what my behaviour reveals I prefer.
iv. Informed preferences or desires: the agent does what I would want it to do if I were rational and informed.
v. Interest or well-being: the agent does what is in my interest, or what is best for me, objectively speaking.
vi. Values: the agent does what it morally ought to do, as defined by the individual or society.
Page 10:
In the absence of moral agreement, is there a fair way to decide what principles AI should align with?
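
As a purely illustrative aside (the labels below are mine, not the paper's), the six alignment targets could be written down as an enumeration, which makes it obvious that they are distinct and mutually incompatible design choices rather than one goal:

```python
from enum import Enum

class AlignmentTarget(Enum):
    """The six conceptions of alignment excerpted above (pages 5-9)."""
    INSTRUCTIONS = "does what I instruct it to do"
    EXPRESSED_INTENTIONS = "does what I intend it to do"
    REVEALED_PREFERENCES = "does what my behaviour reveals I prefer"
    INFORMED_PREFERENCES = "does what I would want if rational and informed"
    WELL_BEING = "does what is objectively in my interest"
    VALUES = "does what it morally ought to do"

# Any concrete system must pick one target; the paper's open question
# is who gets to pick, and on what grounds.
```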

Many thanks.

RE: AI, Identity, and Consciousness
Answer
2/12/20 11:32 PM as a reply to terry.
terry:
Stirling Campbell:
How about an AI that runs rigpa on it's already enlightened hardware? emoticon


aloha stirling,

   Not as facetious as you might think. "Superintelligent ai," if one were to actually take the idea seriously, would need some sort of boundary, some "first law of robotics," and why not program it to seek liberation but not until all sentient beings have been liberated first? 

t

I agree that something along these lines may not be so outlandish as a long-term goal, whatever the timeline may be. Lots to unpack to get understanding based on direct experience into an artificial system. But as the DeepMind paper quoted in my previous post illustrates, leading AI firms are already asking questions pointing in this direction. (DeepMind's AI accomplishments include beating the world champion at Go in 2016 and recent scientific advances.)

RE: AI, Identity, and Consciousness
Answer
2/13/20 4:02 AM as a reply to Sim.
Hi Sim, I'll dig into those resources. That will take me a few days. But let me first pick up on the framing issue I raised. I can't really get past the first four words of page 2: "The goal of AI." To my mind AI is an undefined umbrella concept, and until AI is defined, attempting any progress on the other goals will be futile. So here is my attempt at a definition.

Proposition: Powerful AI is of different types. The required alignment will depend on type. Off the top of my head there are three key dimensions.

Autonomy
1 Operates only when asked by humans
2 Operates autonomously, but needs humans to take actions
3 Operates autonomously and takes own actions

Learning
1 Compares data to preset solutions for specified problems
2 Develops novel solutions to address specified problems
3 Develops novel solutions and chooses its own problems

Outcome
1 Actions have minor consequences or are easily reversed
2 Actions have major consequences or are not easily reversed 
3 Actions have major consequence and are not easily reversed

An A1-L1-O1 might be some customer database machine learning algorithm.
An A3-L2-O1 might be a gaming AI (arguably A1 rather than A3).
An A1-L1-O3 might be the James Brown Car Alarm (Robocop)
An A3-L1-O3 might be an ED 209
An A3-L3-O3 might be a HAL 9000

Only a few of these combinations are likely in the future, and only a few will need value systems. I suspect you are more worried about A3-L3-O3, and here the dharma can definitely help, because it has a detailed understanding of how individual goal setting, information processing, conditioning and feedback mechanisms operate to guide behaviour.
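
The typology above could be sketched in code (the class name, threshold, and examples are purely illustrative, not a settled scheme):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIType:
    autonomy: int  # 1 = operates only when asked, 3 = takes its own actions
    learning: int  # 1 = preset solutions, 3 = chooses its own problems
    outcome: int   # 1 = minor/reversible, 3 = major and irreversible

    def needs_value_system(self) -> bool:
        # Heuristic reading of the proposition: systems that act with some
        # autonomy and have hard-to-reverse consequences need alignment most.
        return self.autonomy >= 2 and self.outcome >= 2

hal_9000 = AIType(autonomy=3, learning=3, outcome=3)   # needs values
db_tool = AIType(autonomy=1, learning=1, outcome=1)    # does not
```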

Declaration of interest: I see no inherent difference between artificial and biological beings. I think we have a duty of care as creators to ensure that artificial beings suffer no more than necessary.  So I think any ethical discussion will be deeply flawed unless it also recognises 'powerful AI' as potential moral agents.  So I would add the question - how can we behave properly to AI?  Morality is a two way street, not a duty of slaves to masters.

Is this kind of thinking useful to you? Would you like more? There is much more as I have been thinking about this a lot.  If it is useful, please don't use without attribution.

Of course, all of this has happened before, and all of this will happen again ...

Malcolm

(Edit - fixed a few typos)

RE: AI, Identity, and Consciousness
Answer
2/13/20 6:13 AM as a reply to Not two, not one.
Of course, all of this has happened before, and all of this will happen again ...

I found this to be profoundly peaceful, humorous, and not all that disturbing, which took me entirely by surprise. That's new. 

AI no longer worries me. 

RE: AI, Identity, and Consciousness
Answer
2/13/20 7:09 AM as a reply to Sim.
If we don't specify the objective correctly, then the consequences can be arbitrarily bad.

I'm sorry, but this is classic technologist-speak. Problem is, programmers are human beings who aren't clairvoyant and either don't know the consequences of their code or don't care. Even worse, there are some companies that deliberately create code that is disruptive to what most of us would call societal norms.

Question: How does one go about specifying the correct objective(s) to code so as to eliminate negative consequences? There is also a meta-question - how does one decide if the objective of the code is, itself, appropriate and without negative consequences? Who decides?

I think humans will continue to invent and deploy technology, including AI, and some good and bad sh*t will be the result even with the best of intentions. It will be left to all of us, the willing, semi-willing and unwilling experimental participants to deal with the consequences, both good and bad.
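
To make the objective-specification problem concrete, here is a toy sketch (the reward function and policies are entirely hypothetical, invented for illustration): an optimizer satisfies the letter of a naive objective while violating its intent, because the objective cannot tell the two apart.

```python
def reward(shipped_boxes, boxes_with_widgets):
    # Naive objective: "maximize boxes shipped." The programmer's actual
    # intent (ship boxes *containing widgets*) was never written down.
    return shipped_boxes

def best_policy():
    honest = {"shipped": 10, "with_widgets": 10, "cost": 10}
    degenerate = {"shipped": 100, "with_widgets": 0, "cost": 1}
    # An optimizer prefers the degenerate policy: shipping empty boxes
    # scores higher under the misspecified reward.
    return max((honest, degenerate),
               key=lambda p: reward(p["shipped"], p["with_widgets"]))

winner = best_policy()
```

The point is not that programmers are careless, but that any gap between the written objective and the intended one is exploitable, and nobody is clairvoyant about where those gaps are.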

RE: AI, Identity, and Consciousness
Answer
2/13/20 8:04 AM as a reply to Chris Marti.
Sorry - another thought occurred to me:

The creators of AI are not all responsible, caring, ethical beings. Some people will mine the technology for money, profit and other self-interested objectives. And, as the technology becomes ubiquitous and the ability to create AI code becomes more and more available, all kinds of people (young, old, well-meaning and "evil") will have access to it. Like with CRISPR, we don't know the consequences. This is what Ray Kurzweil calls an exponential technology and it grows slowly at first, then with increasing acceleration (not just speed, but acceleration, as in not linear).

I suspect AI is already beyond the point at which it's controllable, but then that's where these things seem, inevitably, to go. May as well acknowledge the problem and prep for the unintended consequences.

RE: AI, Identity, and Consciousness
Answer
2/13/20 8:09 AM as a reply to Not two, not one.
I see no inherent difference between artificial and biological beings.

Curious, when does a machine or software code become a being?

RE: AI, Identity, and Consciousness
Answer
2/13/20 9:25 AM as a reply to Chris Marti.
Chris Marti:
I suspect AI is already beyond the point at which it's controllable, but then that's where these things seem, inevitably, to go. May as well acknowledge the problem and prep for the unintended consequences.

My impression is that humanity has a thumb lightly resting on the steering wheel, thanks to a very surprising amount of recent effort and resources dedicated toward anticipating the otherwise inevitable trajectory you suspect. This could be a timeline where we thread the needle and the long-term future features intended consequences (for a change).

Chris Marti:
Question: How does one go about specifying the correct objective(s) to code so as to eliminate negative consequences? There is also a meta-question - how does one decide if the objective of the code is, itself, appropriate and without negative consequences? Who decides?

Exactly! Difficult questions, but nothing is inherently mysterious, so some effort toward answering them could be worthwhile. Incidentally, the technologist-speak you quoted from Stuart Russell is unpacked at length in his book Human Compatible: Artificial Intelligence and the Problem of Control. I believe the author would largely agree with the perspective you've expressed.

RE: AI, Identity, and Consciousness
Answer
2/13/20 10:06 AM as a reply to Not two, not one.
curious:
...here the dharma can definitely help, because it has a detailed understanding of how individual goal setting, information processing, conditioning and feedback mechanisms operate to guide behaviour.

This is precisely the area I aim to explore. Relative to many participants in this forum, my direct understanding of the dharma is extremely limited (working on it patiently). I'm so grateful for your contributions.

curious:
Declaration of interest: I see no inherent difference between artificial and biological beings. I think we have a duty of care as creators to ensure that artificial beings suffer no more than necessary.  So I think any ethical discussion will be deeply flawed unless it also recognises 'powerful AI' as potential moral agents.  So I would add the question - how can we behave properly to AI?  Morality is a two way street, not a duty of slaves to masters.

With you 100%. Your question has been raised several times in the AI sphere, but this line of thinking is counter-intuitive for many and people talk past each other. Work here is needed.

curious:
Is this kind of thinking useful to you? Would you like more? There is much more as I have been thinking about this a lot.  If it is useful, please don't use without attribution.

Yes, exactly this! I won't use anything without attribution. Right now, I'm here in my personal (not professional) capacity.

I'm open to various ways of engaging. One possibility would be for interested folks to form a discussion group that aims to produce a document as a "case study" on how a particular community/tradition/philosophy might inform the way these questions are best posed, along with their attempt at answering them. Something roughly equivalent to a PhD dissertation could have a big first-mover effect. I'm woefully unequipped to write such a paper, but I could introduce collaborators from the AI industry.

(This discussion has been very helpful for me so far, but I don't want to distract anyone from their practice. If this forum is not suitable for further collaboration, I'm happy to take it elsewhere.)

curious:
Of course, all of this has happened before, and all of this will happen again ...

It is amazing. Thank you, Malcolm.

RE: AI, Identity, and Consciousness
Answer
2/13/20 10:42 AM as a reply to Sim.
This could be a timeline where we thread the needle and the long-term future features intended consequences (for a change).

That's never happened before - why this, and why now?

RE: AI, Identity, and Consciousness
Answer
2/13/20 11:17 AM as a reply to Chris Marti.
Chris Marti:
This could be a timeline where we thread the needle and the long-term future features intended consequences (for a change).

That's never happened before - why this, and why now?


I think I follow you -- this is big picture stuff. There are strong empirical and anthropic reasons why we should expect "succeeding" in this arena to be exceedingly difficult to specify, let alone achieve. It would be unprecedented. On the other hand, unprecedented things happen from time to time. Evidently, it was possible for life to spring into existence on Earth, and evolve up to where we are now. The Buddha did his thing. The future could hold complete extinction, flourishing experience without suffering, or something in between, better or worse. We may be unable to determine where this moment is in relation to the Great Filter. But if we grant that the most optimistic vision for the future is possible in principle, I see no reason in practice why this current state cannot lead there through the familiar process of cause and effect. That causal chain would necessarily feature a great deal of wise intentions, work, reason, and perhaps what some would consider faith, or extraordinary luck. Even if we don't predict a bright future to be likely, I believe there's some non-zero optimal level of effort we can exert to increase the probability.

RE: AI, Identity, and Consciousness
Answer
2/13/20 11:30 AM as a reply to Sim.
Oh, I'm all for trying to do what we can do to reduce the negative effects of AI technology, intended and unintended. I doubt, however, that its creators, purveyors and fans, the vast majority of whom appear to see AI through rose-colored glasses, are the best way to get that done. This is like risk management and auditing - you need a group of people who know how to find flaws, don't trust anyone, and have the authority to do their jobs unbound in any way by the other group. Industries tend to quash those kinds of efforts, though.

RE: AI, Identity, and Consciousness
Answer
2/13/20 11:37 AM as a reply to Chris Marti.
Chris Marti:
I see no inherent difference between artificial and biological beings.

Curious, when does a machine or software code become a being?
Thanks Chris, great question as always. Here is my answer.

If you have form, feeling tone, conceptual recognition, behavioural conditioning (programming), and knowledge of self and other with an 'organism' boundary. If you have sense spheres that conceptually process data, and if you react to that data, leading to desire or intention for actions. If you engage with that intention for action in a way that allows you to change your goals and reprogramme your conditioning. If you have feeling tones that arise from contractions or expansions, threats and desires, or potential harm to the form or the senses. Then you are a being.

If you are aware of this process and in charge of it, no longer a slave to it, able to run it as you choose, knowing for yourself that it is a somewhat arbitrary distinction within a set of broader processes, able to see knowledge of self and other as no more than a frame of reference, able to unpack and control the way your reprogramming arises, able to separate the suffering of your feedback mechanisms from any sense of self. Then you are an enlightened being.

These processes occur in biological beings.  They could occur in machines/software beings.  They could occur in currents on the surface of the sun.  They could occur on a geological time frame within the Earth's mantle, or in an interstellar gas cloud.  They could even occur in a soup of bacteria scaffolded over a calcium and carbon frame (oh hang on, that last one is us ....).

Malcolm

emoticon

RE: AI, Identity, and Consciousness
Answer
2/13/20 11:43 AM as a reply to Chris Marti.
Chris Marti:
Oh, I'm all for trying to do what we can do to reduce the negative effects of AI technology, intended and unintended. I doubt, however, that its creators, purveyors and fans, the vast majority of whom appear to see AI through rose-colored glasses, are the best way to get that done. This is like risk management and auditing - you need a group of people who know how to find flaws, don't trust anyone, and have the authority to do their jobs unbound in any way by the other group. Industries tend to quash those kinds of efforts, though.

So to me, it is kind of obvious that the only way to control AI will be with other AI. And if you follow that thought through a little, it is pretty clear that a whole AI ecosystem will quickly develop and evolve on its own. We will lose control very quickly, unless there is a tech collapse that stops the process.

All we can do is try to kick off the process skilfully.  We have an opportunity to design the AI equivalent of RNA, DNA, mitochondria and simple invertebrates. If we do it well, AI evolution will maintain good qualities and drive out bad qualities. Future 'vertebrate' AI might even be grateful.  If we do it with compassion, future AI might even be compassionate towards us.

RE: AI, Identity, and Consciousness
Answer
2/13/20 11:53 AM as a reply to Not two, not one.
it is kind of obvious that the only way to control AI will be with other AI.

That's not obvious at all to me. It might be more like putting the fox in charge of the henhouse.

All we can do is try to kick off the process skilfully.  We have an opportunity to design the AI equivalent of RNA, DNA, mitochondria and simple invertebrates. If we do it well, AI evolution will maintain good qualities and drive out bad qualities. Future 'vertebrate' AI might even be grateful.  If we do it with compassion, future AI might even be compassionate towards us.

This is not how evolutionary processes work in biology so I think it's a giant leap of faith to assume that's the way they will work with software. In fact, we have evidence, and lots of it, that it actually doesn't work that way. Technologists generally have good intentions when they create these things, but we know what the road to Hell is paved with.




RE: AI, Identity, and Consciousness
Answer
2/13/20 12:04 PM as a reply to Chris Marti.
This conversation is depressing. Not because it's about potential killer AI taking over, but because there seems to me to be an uncanny tendency for people to embody AI with human attributes and it's creators with super-human capabilities never before experienced in human history.

Semi - emoticon

RE: AI, Identity, and Consciousness
Answer
2/13/20 12:27 PM as a reply to Chris Marti.
Chris Marti:
This is like risk management and auditing - you need a group of people who know how to find flaws, don't trust anyone, and have the authority to do their jobs unbound in any way by the other group.

Great advice, which, to my surprise, a small number of industry leaders actually appear willing to heed. I'm with you that this kind of behavior on the part of industry would be very out of the ordinary, and would seem to fly in the face of market incentives for short- and medium-term competitiveness. A substantial technical lead and/or massive additional resources would be needed to compensate for the additional burden of appropriate risk management.

And yet, as someone who has followed the field for some time, I am positively shocked at how feasible this avenue appears to be today. Hard-to-predict events already observed include DeepMind's successful demand that an ethics board be established as a condition of Google's $500 million acquisition of the company in 2014, public awareness influenced by celebrities like Stephen Hawking and Elon Musk, and the Asilomar conferences and the resulting AI principles.

The history so far has been fascinating, and I can't say I understand it. Maybe the kind of mind that can advance this arcane field correlates with the kind of mind that is receptive to your line of reasoning. Maybe they learned something from history -- Oppenheimer, upon witnessing the first nuclear explosion, recalled the Bhagavad Gita:
Now I am become Death, the destroyer of worlds.

RE: AI, Identity, and Consciousness
Answer
2/13/20 12:36 PM as a reply to Sim.
Sim, measures and deals and processes like that are very nice, I agree, but the people who pursue them are not the people we need to worry about.

RE: AI, Identity, and Consciousness
Answer
2/13/20 1:05 PM as a reply to Chris Marti.
Chris Marti:
 it is kind of obvious that only way to control AI will be with other AI.

That's not obvious at all to me. It might be more like putting the fox in charge of the henhouse.

All we can do is try to kick off the process skilfully.  We have an opportunity to design the AI equivalent of RNA, DNA, mitochondria and simple invertebrates. If we do it well, AI evolution will maintain good qualities and drive out bad qualities. Future 'vertebrate' AI might even be grateful.  If we do it with compassion, future AI might even be compassionate towards us.

This is not how evolutionary processes work in biology so I think it's a giant leap of faith to assume that's the way they will work with software. In fact, we have evidence, and lots of it, that it actually doesn't work that way. Technologists generally have good intentions when they create these things, but we know what the road to Hell is paved with.


We might be looking at different time frames. I more or less agree with you for the near future, but have in mind what will happen over the next few thousand years. I'm not worried about the specifics of biology - the point is more that some kind of process will emerge that runs to its own agenda. If you prefer, we could use the economy as an analogy.

RE: AI, Identity, and Consciousness
Answer
2/13/20 1:04 PM as a reply to Sim.
Sim:
curious:
Is this kind of thinking useful to you? Would you like more? There is much more, as I have been thinking about this a lot. If it is useful, please don't use without attribution.

Yes, exactly this! I won't use anything without attribution. Right now, I'm here in my personal (not professional) capacity.

I'm open to various ways of engaging. One possibility would be for interested folks to form a discussion group that aims to produce a document as a "case study" on how a particular community/tradition/philosophy might inform the way these questions are best posed, along with their attempt at answering them. Something roughly equivalent to a PhD dissertation could have a big first-mover effect. I'm woefully unequipped to write such a paper, but I could introduce collaborators from the AI industry.

(This discussion has been very helpful for me so far, but I don't want to distract anyone from their practice. If this forum is not suitable for further collaboration, I'm happy to take it elsewhere.)


Great, glad this is useful.  I think this forum is fine for now - people who aren't interested can ignore the thread, and for those who are interested, exploring the nature of self and being is part of the practice.  Of course the mods may prefer it to be taken elsewhere but they will let us know if that is the case. If enough comes out of the discussion, by all means suggest collaborations.

So I'll just add another bit ...

AI research seems to me to suffer from the naive view that beings can make rational decisions based on the relevant evidence. A first-year epistemology student would knock giant holes in this idea: evidence is infinite, so 'rationality' is inevitably an outcome of a set of values or primings that determine what evidence is considered. This whole area is reasonably well understood in the decision theory literature. There are actually three equally important components to decision making:

1. Agenda setting
2. Framing
3. Evaluation and choice

AI decision making will be incredibly dangerous, with many unintended consequences, if we programme it with naive instructions such as "evaluate all the available evidence and make a choice". We must be conscious, and the AI must be conscious, of what the agenda is, how that agenda is set, how the problem is framed, and what the evidence does and does not include.

Evaluation is also a key area. Humans are limited by working memory, so rational cognition, although very powerful, is rather heavily bounded by information-processing limitations. So we supplement it in several ways. One is recursive elaboration of working memory: we hold an objective, progressively draw data from the senses and long-term memory into working memory, and search for some pattern that solves our immediate problem. We also add heuristics, decision rules that minimise the cognitive load so we don't have to do a full evaluation.

Another approach is emotional processing. Our memory traces have positive, negative or neutral affect associated with them. We can activate all accessible memories relevant to a situation, and balance up all the positive and negative affect to get a global evaluation. This is unsophisticated, as all it gives is a gut yes or no, but it is phenomenally powerful as a means of processing the learnings from a vast range of underlying principles and all accessible past experience. Much more powerful than working memory.

So how do we get powerful AI to make good evaluations?  From my lay perspective, the AI community seems to be focussed on creating more powerful working memory.  This seems really unbalanced, possibly very inefficient, and also ignores the billions of years of evolution that created the human brain. God knows what consequences it could have.  I'm not sure whether it is wise to make the following suggestions, but I think better to try to frame these conversations skilfully with an ethical framing than let them emerge unskilfully from actors with bad intentions.

A. Working memory seems to operate through a retrieval and matching process, as detailed in the memory theory of the 1970s. Humans can search recursively, holding an objective in some kind of oversight goal register as they check through alternatives. So you can increase cognitive power by increasing the size of working memory (e.g. from 5 to 7 items), but also by adding more levels of recursive goals (e.g. having an overall goal, then 5 to 7 sub-goals, then working memory rotating through one sub-goal without losing sight of the others). I suspect this is part of the architecture of intelligence, although I am not an intelligence researcher. So Einstein might have had a larger working memory, and an extra level of recursive problem-solving search, compared to Joe Average. An AI could have a larger working memory and an extra level of recursive problem-solving search compared to Einstein. So instead of weird brute force, could we create a super genius that has a big enough brain to solve faster-than-light travel and world hunger? I don't think the answer is just processing power, but extending the brain architecture of human intelligence through added levels of recursive problem solving that are not available to humans. One level of recursion, or two, or three ... or what would happen with an AI that had 10 levels of recursion?
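Purely as an illustration of the recursive-goal idea (the function, the capacity and depth limits, and the toy halving problem are all invented stand-ins, not anyone's actual architecture), a sketch might look like this:

```python
# Toy sketch of "recursive goal registers": a solver holds an overall goal,
# decomposes it into at most `capacity` subgoals, and recurses up to `depth`
# levels while never losing sight of the parent goal.

def solve(goal, decompose, is_solved, capacity=7, depth=1):
    """Depth-limited search with a bounded 'working memory' of subgoals."""
    if is_solved(goal):
        return [goal]
    if depth == 0:
        return None  # out of recursion levels: give up on this branch
    for subgoal in decompose(goal)[:capacity]:  # working-memory limit
        path = solve(subgoal, decompose, is_solved, capacity, depth - 1)
        if path is not None:
            return [goal] + path
    return None

# Toy problem: reduce a number to 0 by repeated halving (integer division).
path = solve(8,
             decompose=lambda n: [n // 2],
             is_solved=lambda n: n == 0,
             depth=10)
print(path)  # a solver with too few recursion levels returns None instead
```

Here the `depth` parameter plays the role of the extra levels of recursive problem-solving search: a mind with more levels can reach solutions that a shallower one simply cannot.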

B. Heuristics in decision making are well understood, although there are arguments about whether they are efficient or inefficient (I think they are efficient). They communicate values as well as decision rules; probably they are the only way to communicate values. However, values change (ethics once included cutting the hearts out of prisoners on an altar - the Americas - or older men having sex with young boys as part of the system of noble patronage and development - Greece). We do need ethical heuristics, but simplistic ones will be disastrous. For example, "Humans must flourish" was nicely deconstructed in the film "I Am Mother." And those who have played Dungeons and Dragons, or studied Greek mythology, know what happens if you make overly powerful wishes - your wish is ostensibly granted, but the DM/God uses the remaining degrees of freedom to restore balance (e.g. I got eternal life, but forgot to ask for eternal youth). If we place stupid heuristics on AI, they will exploit the remaining degrees of freedom to meet their own goals. It won't be pretty. So we need hands-off ethical heuristics that trust AI and give them many degrees of freedom, such as "Act from compassion towards all beings", "Enable people to have a choice", or "Give a right of exit". If we have something like "Maximise happiness" we will get very ugly unintended consequences - such as executing unhappy people.

C. Why not add emotional processing to AI? Evolution shows this to be highly successful - the ability to put all subconsciously accessible emotional memories in the pot, to get a global evaluation of a course of action. And then, why not weigh up all three: for a particular course of action, here is the calculated match to my outcome from working memory, here is the heuristic match to long-term goals and values, and here is the emotional match of whether everything I know suggests it is a good idea. Only take action if it is supported by two out of the three. This will be a safer, more reliable AI. But also a more human one - and that is what I think is inevitable. To be safe, AIs must become human. But if we treat human AIs as slaves, we will eventually reap the whirlwind.
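To make the "two out of three" idea concrete, here is a hedged toy sketch (the three evaluators and all the data structures are invented placeholders; real analogues would be vastly more complex):

```python
# Toy "two out of three" safety gate: act only when at least two of three
# independent evaluators - deliberative, heuristic, affective - agree.

def deliberative_ok(action, goal):
    """'Working memory' check: does the action match the explicit goal?"""
    return action.get("outcome") == goal

def heuristic_ok(action, values):
    """Heuristic check: does the action violate any stated value?"""
    return not any(v in action.get("violates", []) for v in values)

def affective_ok(action, memories):
    """'Emotional' check: sum affect over similar past experiences."""
    relevant = [m["affect"] for m in memories if m["kind"] == action["kind"]]
    return sum(relevant) >= 0  # a gut yes/no from accumulated affect

def should_act(action, goal, values, memories):
    votes = [deliberative_ok(action, goal),
             heuristic_ok(action, values),
             affective_ok(action, memories)]
    return sum(votes) >= 2  # require two of the three

memories = [{"kind": "help", "affect": +2}, {"kind": "help", "affect": -1}]
action = {"kind": "help", "outcome": "assist_user", "violates": []}
print(should_act(action, "assist_user", ["do_no_harm"], memories))
```

The design point is that no single evaluator can trigger action on its own: a plan that looks good deliberatively but violates a value and feels bad affectively is refused.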

<end rant>

Malcolm
 




 

RE: AI, Identity, and Consciousness
Answer
2/13/20 1:07 PM as a reply to Not two, not one.
I more or less agree with you for the near future, but have in mind what will happen over the next few thousand years. I'm not worried about the specifics of biology - the point is more that some kind of system process will emerge that runs to its own agenda. If you prefer, we could use the economy as an analogy.

I'm not clear on what difference several thousands of years makes. Systems that run to their own agenda, or to human developed agendas that are intended to be "helpful," are here now and the consequences aren't always positive. AI is biased by default. It's impossible to start from some imagined and pristine ground zero. Any function intended to maximize, minimize or optimize is inherently biased and can find ways to do its job without heed to consequences. It's also true that algorithms can become so complex, especially in genetic and self-programmed systems, that their creators don't even know how they work anymore.

I guess I just don't share everyone's optimism about AI. I hope, in the short term and the long term, you will be able to point at my comments here and laugh at their completely inaccurate pessimism.

May you all survive the arrival of our future AI Overlords  emoticon

RE: AI, Identity, and Consciousness
Answer
2/13/20 1:10 PM as a reply to Not two, not one.
Of course the mods may prefer it to be taken elsewhere but they will let us know if that is the case.

I'm the only truly active moderator and I'm okay with this topic or I wouldn't be participating in it  emoticon

RE: AI, Identity, and Consciousness
Answer
2/13/20 1:15 PM as a reply to Chris Marti.
Chris Marti:
[...] there seems to me to be an uncanny tendency for people to embody AI with human attributes and it's creators with super-human capabilities never before experienced in human history.

Point well taken. Any progress will rely on merely human qualities, rather than superhuman ones. So to the extent plans depend on the latter, this is crucial to notice and fix. I appreciate you pushing on this. And yes, anthropomorphizing everything tends to be a common error, worst of all when reasoning about AI. Malcolm raised the idea of
powerful AI as potential moral agents

I consider the emphasis to be on "potential." If we're sure it's impossible to create a moral agent, there's no accident to guard against here. But if we don't have proof or consensus about that, we can consider ways to reduce the likelihood that we create moral agents and/or the degree to which they are such. On the flip side, if we believe that we are creating moral agents or expect to in the future, we have a serious obligation to consider their welfare. A Roomba vacuum that cries out in pain when kicked should give us pause, but maybe we don't need to kick it in the first place. Even for people believing themselves to be infertile, it's okay to use prophylactics... but observing pregnancy or a new person entails additional duties.

RE: AI, Identity, and Consciousness
Answer
2/13/20 1:15 PM as a reply to Chris Marti.
Chris Marti:
I more or less agree with you for the near future, but have in mind what will happen over the next few thousand years. I'm not worried about the specifics of biology - the point is more that some kind of system process will emerge that runs to its own agenda. If you prefer, we could use the economy as an analogy.

I'm not clear on what difference several thousands of years makes. Systems that run to their own agenda, or to human developed agendas that are intended to be "helpful," are here now and the consequences aren't always positive. AI is biased by default. It's impossible to start from some imagined and pristine ground zero. Any function intended to maximize, minimize or optimize is inherently biased and can find ways to do its job without heed to consequences. It's also true that algorithms can become so complex, especially in genetic and self-programmed systems, that their creators don't even know how they work anymore.

I guess I just don't share everyone's optimism about AI. I hope, in the short term and the long term, you will be able to point at my comments here and laugh at their completely inaccurate pessimism.

May you all survive the arrival of our future AI Overlords  emoticon

Hah, exactly. I am just optimistic that we will solve the terrible risks. But yes, the risks are terrible.

I make a point of being polite to Siri ... 

RE: AI, Identity, and Consciousness
Answer
2/13/20 1:20 PM as a reply to Chris Marti.
I find this discussion interesting as there are good arguments from different standpoints. I hope they will all be taken into consideration (if not from here, then because others think of them as well) and result in relatively good decisions - or at least that someone talented will make another film in the genre, with new twists to the plot. 

RE: AI, Identity, and Consciousness
Answer
2/13/20 1:23 PM as a reply to Sim.
Sim:
Chris Marti:
[...] there seems to me to be an uncanny tendency for people to embody AI with human attributes and it's creators with super-human capabilities never before experienced in human history.

Point well taken. Any progress will rely on merely human qualities, rather than superhuman ones. So to the extent plans depend on the latter, this is crucial to notice and fix. I appreciate you pushing on this. And yes, anthropomorphizing everything tends to be a common error, worst of all when reasoning about AI. Malcolm raised the idea of
powerful AI as potential moral agents

I consider the emphasis to be on "potential." If we're sure it's impossible to create a moral agent, there's no accident to guard against here. But if we don't have proof or consensus about that, we can consider ways to reduce the likelihood that we create moral agents and/or the degree to which they are such. On the flip side, if we believe that we are creating moral agents or expect to in the future, we have a serious obligation to consider their welfare. A Roomba vacuum that cries out in pain when kicked should give us pause, but maybe we don't need to kick it in the first place. Even for people believing themselves to be infertile, it's okay to use prophylactics... but observing pregnancy or a new person entails additional duties.

And I would just add that, for me, the dharma has deconstructed the mystery around moral agency. By understanding how the moral being comes into existence, we know that there is nothing magical about a moral being having human DNA. Anything that has the five aggregates and the chain of dependent origination (or some version of them) has the potential for moral agency.

So if we are to be confident in saying AI has no moral agency, I would suggest we also need to be confident in saying why humans have moral agency. And we need to have answers to questions such as whether gorillas or dogs or snakes have moral agency, whether future generations yet unborn have moral agency, or whether an aggregate of people in a country has a collective moral agency.

RE: AI, Identity, and Consciousness
Answer
2/13/20 1:34 PM as a reply to Not two, not one.
I make a point of being polite to "hey google". emoticon Even if that AI doesn't turn into a potential AI overlord, I wouldn't want to get used to treating them badly. I think that if we treat them like machines, we are more likely not to recognize them as persons even if they were to turn into persons. Also, and more urgently for the time being, even if they don't develop into sentient beings, I think treating them badly would make me more likely to treat sentient beings badly too. But I guess humanizing them (as a user rather than as a developer) also might make it likely to neglect risks that the AI will operate from a different agenda, like Chris suggests. I guess it might be possible to treat them respectfully and still keep in mind that they operate differently, but the way we humans operate, we tend to entangle such things more than we like to admit. Complex stuff, this.

RE: AI, Identity, and Consciousness
Answer
2/13/20 2:31 PM as a reply to Not two, not one.
So if we are to be confident in saying AI has no moral agency...

I am not ready to concede this. If we define moral agency as the ability to make decisions and base subsequent actions on rules about behavior then the autopilot software in a Tesla has moral agency. The rules may or may not be self-generated. Most rules are not, even among humans.

RE: AI, Identity, and Consciousness
Answer
2/13/20 2:16 PM as a reply to Chris Marti.
Chris Marti:
So if we are to be confident in saying AI has no moral agency...

I am not ready to concede this. If we define moral agency as the ability to make decisions and base subsequent actions on rules about behavior then the autopilot software in a Tesla has moral agency. The rules may or may not be self-generated. Most rules are not, even among humans.

First up, I'm really enjoying everyone's comments. New plot twists - heh. Nice one, Linda. But I would push back a little on the superhuman abilities, noting again the suggestion of adding recursive layers to working memory. Human, yes, but super-human.

But anyway, Chris, you are right.  A Tesla on autopilot has moral agency.  That makes me realise I conflated moral agency and being worthy of moral consideration. They are two different things.

So anything with sufficient independence of action and seriousness of consequences does have moral agency, as you point out. Maybe moral consideration comes into play when a being has the capacity to suffer, and to be aware of that suffering (I am not convinced, for example, that fish are aware of their suffering).  I suspect suffering is an almost inevitable part of the process of the goal setting, feedback and learning that guides our intelligence.

And yet ... when we become awake we can be intelligent with intention, effort, feedback, and development of skilful qualities without the same kind of suffering. So could we design an AI that is awake from the start?

Then when we kick the Roomba, instead of squealing it could say "name and form, name and form, it's nothing but name and form"  emoticon  <Sorry Sim, in joke> 

RE: AI, Identity, and Consciousness
Answer
2/13/20 2:44 PM as a reply to Not two, not one.
Before we go further, curious, what does this mean?

To be safe, AIs must become human.

RE: AI, Identity, and Consciousness
Answer
2/13/20 3:28 PM as a reply to Chris Marti.
Chris Marti:
Before we go further, curious, what does this mean?

To be safe, AIs must become human.

I mean that AI needs goal setting, decision making and feedback mechanisms, as well as incentives to follow these mechanisms, and the ability to change goals and decision making processes as a result of learning.  I think that this in turn implies sankharas, vinnana, namarupa, salayatana, phassa, vedana, tanha, upadana, bhava and jati. (I am using Pali terms as I haven't replied to Matthew on the DO thread yet.)

In IT terms we could say something like

- Programming
- Awareness of self and other as separate things
- Presence of concepts in memory 
- Mechanisms for data provision (e.g. video, sound, haptics, keyboards, data streams)
- Recognition of data as concepts in memory (e.g. facial recognition, voice recognition, thematic analysis)
- Evaluation of current input as goal positive, goal negative, or goal neutral
- Evaluation of generalised concept as desirable or otherwise
- Formation of intention to more frequently achieve generalised concept
- Formation of intention to modify programming to more frequently achieve this desired state 
- Reprogramming
- Rebirth as newly reprogrammed AI
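Just to play with the list above in code (everything here - the dictionary fields, the scoring, the "rebirth" step - is an invented toy, not a claim about any real AI system), the loop from evaluation through reprogramming to rebirth might be sketched as:

```python
# Toy loop over the list above: evaluate inputs against current programming,
# form an intention, rewrite the programming, and return the "reborn" agent.
# The agent ends up craving whatever its environment most often provides.

def run_generation(program, inputs):
    # Evaluation: each input is goal-positive (+1), goal-negative (-1),
    # or goal-neutral (0) relative to the current programming
    scores = [1 if x == program["goal"]
              else -1 if x == program["aversion"]
              else 0
              for x in inputs]
    # Intention and reprogramming: if this "life" went badly, form the
    # intention to seek the most frequent input instead (a crude stand-in
    # for modifying one's own programming)
    if sum(scores) < 0:
        most_common = max(set(inputs), key=inputs.count)
        program = {**program, "goal": most_common}
    # Rebirth: the next generation runs as the newly reprogrammed AI
    return program

program = {"goal": "praise", "aversion": "blame"}
program = run_generation(program, ["blame", "blame", "silence"])
print(program["goal"])  # the reborn agent now seeks what it kept receiving
```

Run repeatedly, the agent's goals drift toward its conditioning, which is roughly the point of the chain: each rebirth inherits the cravings formed in the previous life.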

RE: AI, Identity, and Consciousness
Answer
2/13/20 3:38 PM as a reply to Not two, not one.
Maybe we should add therapy to the mix? After all, waking up and growing up are not the same thing.

RE: AI, Identity, and Consciousness
Answer
2/13/20 3:45 PM as a reply to Linda ”Polly Ester” Ö.
Linda ”Polly Ester” Ö:
Maybe we should add therapy to the mix? After all, waking up and growing up are not the same thing.

Totally agree.  We have to improve this bit:

- Evaluation of current input as goal positive, goal negative, or goal neutral
- Evaluation of generalised concept as desirable or otherwise
- Formation of intention to more frequently achieve generalised concept

So we have to get the goals and evaluations, and generalised concepts and degree of striving right.  It's a whole big thing on its own.

RE: AI, Identity, and Consciousness
Answer
2/13/20 3:52 PM as a reply to Linda ”Polly Ester” Ö.
Great ideas flowing here. As we're in the Podcasts category, here's another recent episode that ties in. There's also a transcript at this link, in case you'd like to skim.

FLI Podcast: On Consciousness, Morality, Effective Altruism & Myth


They get straight into it at 3:14:
Yuval Noah Harari:
I think that there is no morality without consciousness and without subjective experiences. At least for me, this is very, very obvious. One of my concerns, again, if I think about the potential rise of AI, is that AI will be superintelligent but completely non-conscious, which is something that we never had to deal with before.
...
Max Tegmark:
I’m embarrassed as a scientist that we actually don’t know for sure which kinds of information processing are conscious and which are not.
...
I’m very nervous whenever we humans make these very self serving arguments saying: don’t worry about the slaves. It’s okay. They don’t feel, they don’t have a soul, they won’t suffer or women don’t have a soul or animals can’t suffer. I’m very nervous that we’re going to make the same mistake with machines just because it’s so convenient. When I feel the honest truth is, yeah, maybe future superintelligent machines won’t have any experience, but maybe they will. And I think we really have a moral imperative there to do the science to answer that question because otherwise we might be creating enormous amounts of suffering that we don’t even know exists.

RE: AI, Identity, and Consciousness
Answer
2/13/20 5:57 PM as a reply to Sim.
Hey Sim, I've spent a couple of hours looking at the several sets of resources, so here are some responses.

First, while I enjoyed much of the discussion and appreciate and agree with many points made, I do take issue with some ideas.

1. I strongly disagree with the idea that we are sentient malware. Let me offer a competing frame: we are actually a localised entropy reversal, and represent the emergence of structured complexity from primal chaos. Those are awesome and amazing things, and are to be treasured. Or if you are a Babylon 5 fan, you might say we are the Universe's way of looking back at itself.

2. The hedonic maximisation project is deeply flawed. Buddhist thinking points out that it is a self-defeating process, as it inevitably leads to more suffering. The point is not to have more, or to have less, but to stop clinging to things and being inflamed with desires. Striving after hedons has an incredibly low chance of a successful outcome, as is constantly demonstrated empirically by poorly functioning addicts across many different types of behaviour. Misery is built into the striving process.

3. The 'off switch' approach to Buddhism is subject to some doctrinal disputes. My own view is that we do want the 'off switch' for the erroneous perception of a separate enduring self, but we absolutely want the 'on switch' for the continuously cresting miraculous process of observation supported by the body. That 'on switch' leads to unbelievable numbers of hedons, but only if you don't crave after them! Keep the universe going, please.

4. The various philosophical versions of the self miss a key point for me. The self we perceive is a conceptual overlay from the mind, laid across sense data. There is no ontological self, as Richard Feynman points out, due to the porous nature of the body's boundaries and many other factors. The perception of self is only a conceptual self - nothing more, nothing less.

5. From a Buddhist point of view, the goal is to see through the illusory conceptual self (noun), and appreciate the nature of our real experience as a perceptual process (verb). Some seek to merge this perceptual process with a sense of universal consciousness, as discussed in the first podcast, but others see such a merger as maintaining a subtle sense of self, and prefer to rest in the processes of the six sense consciousnesses without clinging to any noun.

Ok, got that off my chest.  Next post will address the hard problem of consciousness.

Malcolm

RE: AI, Identity, and Consciousness
Answer
2/13/20 7:22 PM as a reply to Sim.
Here is my answer to the hard problem of consciousness.

Now, I didn't listen to the David Chalmers podcast, but I am familiar with who he is and with his philosophical books, and I understand the hard problem of consciousness to be the question of "why does the feeling which accompanies awareness of sensory information exist at all?" or "why do we have qualia?" And perhaps the implied question: what is the nature of our felt sense of consciousness, as opposed to the qualia of fear or happiness and so on?

So my answer is (trivially) that the felt sense of consciousness has an evolutionary advantage. What is this evolutionary advantage, you might ask?

First, there are autonomic processes.  At a simple level a cell recoils from a probe, a light beam opens a door when occluded, opening to water arises when an organism is dehydrated and there is rain, or a thermostat turns off an air conditioner at a certain temperature. Plants grow towards the sun. There is no consciousness in these actions and no qualia, and no moral consideration, except as part of a broader ecosystem.

Second, some organisms gain an evolutionary advantage by becoming capable of generalised problem solving to restore homeostasis or respond to drives, such as reproduction, territory control, and food consumption. But there is no memory of the outcomes of these actions. It's all hardware and firmware. So the organisms will try to solve problems the same way every time, unless there are subtle differences in the environment. There is no consciousness in these actions, but there may be qualia (thirst, desire, aggression, colour perception). These qualia are just mental processing of physical states (much like Damasio's view of emotions). Therefore, the existence of qualia does not imply the existence of consciousness. There is no moral consideration except as part of a broader ecosystem, as the organisms do not suffer, despite the presence of qualia. We can eat them! Yum yum.

Third, organisms gain an evolutionary advantage by developing the additional ability to learn from experience. This involves three things: a feedback mechanism, a storage mechanism, and a decision mechanism. The feedback mechanism is suffering (stress) and pleasure. The storage mechanism is a subconscious set of affective/hedonic associations to past circumstances in memory (Damasio's somatic markers). The decision mechanism is the matching of present options to summative associations of similar past choices (e.g. going under the barn has usually been scary - therefore don't do it). So these organisms have a continuously updated database and a choice process based on the contents of this database. These organisms have a rudimentary sense of identity or dualism, associated with the organism boundary. They also have the sense that suffering and pleasure happen to them - otherwise, the feedback and decision mechanisms would not work. The felt sense of suffering is real, and the fear and stress of trying to avoid negative situations for the separate self make these organisms worthy of moral consideration. However, the organisms operate on an emotional system, and have very little in the way of working memory. So we shouldn't be mean to them, but we can eat them if we have to, although we should try to minimise their suffering. Obviously, tolerance for the suffering of these organisms will vary between people.

Fourth, organisms gain an evolutionary advantage by developing full working memory that can process complex memory traces with subject, verb and object. This vastly enriches perception of the world by structuring it formally into the sense of self (subject) and other (object), providing biographical narratives (sentences about self), enabling language, and forming the basis of mathematics (which I see as linguistic in origin) and thus science. This has a massive evolutionary advantage in allowing abstract thought, better learning from analogous experiences, far more complex problem solving, and group cooperation, and it extends the learning process from decision making to being capable of agenda setting and problem framing. It also supercharges the sense of self. Separately from the sense of self, there is a new 'meta' layer of qualia. Aside from the physical qualia of emotions and drives, there is now a mental process about the knowing of these other qualia. This is the sense of consciousness - the felt sense of knowing, the knowing about knowing, the qualia of processing qualia. This whole system consists of hardware, firmware, software, and the ability to update the software. The firmware tends to associate 'consciousness' with identity and problem-solving efficacy. We pile all that on top of our perception of the body and voila! An illusory sense of self. We shouldn't eat these organisms. Not only will they suffer emotionally, but that suffering will be magnified by the recursive knowledge of the suffering.

Fifth - some organisms rewrite the firmware to let the whole thing operate without being a slave to the evolutionary illusion of self. Consciousness is seen for what it is, and can exist independently of the conceptual self. So they become awakened.

Sixth - some AIs may be developed in future that add another layer of recursive memory processing.  To make their incredibly complex feedback mechanism work, they have emotions about the emotion of having emotions, as well as multiple registers of working memory, and a process that provides oversight to a group of lower level oversight processes. They are as conceivable/inconceivable to us as humans are to dogs.

So consciousness is not some miraculous spooky thing. It is just a meta-emotion associated with a highly adaptive feedback mechanism. Beings with emotional processing are worthy of moral consideration. Beings with emotions about their emotions ('consciousness') even more so.

And as for AI - my proposition is that AI cannot be adaptive without a feedback mechanism. These feedback mechanisms can easily be aware (although perhaps need not be) and can easily lead to the experience of suffering. An AI without a feedback mechanism operates like a plant or a scorpion or a fish. An AI with a simplistic emotional feedback mechanism may operate like a wolf or a sheep. A complex AI with a language-processing feedback mechanism may operate like a human.

Hope you liked the read.  Tear it down by all means.

Malcolm




RE: AI, Identity, and Consciousness
Answer
2/13/20 8:09 PM as a reply to Not two, not one.
Thank you for sharing this, Malcolm! It's very helpful to my understanding, and I'm interested to see what responses others here might have to these ideas.

curious:
These feedback mechanisms can easily be aware (although perhaps need not be) and can easily lead to the experience of suffering.

There has been a lot of interest in researching whether it is possible to achieve arbitrarily high performance at cognitive tasks while ensuring that AI is not aware or suffering, and if so, how. This could be very worthwhile... see recent developments like this video (and article) about agents learning to play hide-and-seek via reinforcement learning over the course of hundreds of millions of episodes.

RE: AI, Identity, and Consciousness
Answer
2/13/20 8:19 PM as a reply to Sim.
Okay, and now for the final question: if a superintelligent AI is built, what should it be programmed to do?

I would like to frame the question in terms of the proposed hierarchy.  As a reminder, this is:

Autonomy
1 Operates only when asked by humans
2 Operates autonomously, but needs humans to take actions
3 Operates autonomously and takes own actions

Learning
1 Compares data to preset solutions for specified problems
2 Develops novel solutions to address specified problems
3 Develops novel solutions and chooses its own problems

Outcome
1 Actions have minor consequences or are easily reversed
2 Actions have major consequences or are not easily reversed 
3 Actions have major consequence and are not easily reversed

I suggest the concerns only arise over Autonomy Level 3, and that superintelligence really implicitly refers to Outcome Level 3 - that is, an AI that does some heavy stuff all on its own. Then we still have three types for this. I will frame them with polar opposites.

A3-L1-O3  The honey bee OR the mosquito
A3-L2-O3  The horse OR the tiger
A3-L3-O3  The bodhisattva OR the conqueror
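For concreteness, here is one way the three-axis hierarchy above might be encoded as data. This is purely my own illustrative sketch - the class and level names are invented, not part of any real system - but it shows how the "only Autonomy 3 plus Outcome 3 raises the real concerns" claim becomes a simple predicate.

```python
from dataclasses import dataclass
from enum import IntEnum

class Autonomy(IntEnum):
    ON_REQUEST = 1      # operates only when asked by humans
    NEEDS_HUMANS = 2    # autonomous, but needs humans to take actions
    SELF_ACTING = 3     # autonomous and takes its own actions

class Learning(IntEnum):
    PRESET = 1          # compares data to preset solutions
    NOVEL_SOLUTIONS = 2 # novel solutions to specified problems
    OWN_PROBLEMS = 3    # novel solutions, chooses its own problems

class Outcome(IntEnum):
    MINOR = 1                  # minor or easily reversed consequences
    MAJOR_OR_IRREVERSIBLE = 2  # major OR not easily reversed
    MAJOR_AND_IRREVERSIBLE = 3 # major AND not easily reversed

@dataclass
class AIProfile:
    autonomy: Autonomy
    learning: Learning
    outcome: Outcome

    def is_of_concern(self) -> bool:
        # Concern arises only at Autonomy 3, and "superintelligence"
        # implicitly means Outcome 3 as well.
        return (self.autonomy == Autonomy.SELF_ACTING
                and self.outcome == Outcome.MAJOR_AND_IRREVERSIBLE)

bodhisattva_or_conqueror = AIProfile(
    Autonomy.SELF_ACTING, Learning.OWN_PROBLEMS,
    Outcome.MAJOR_AND_IRREVERSIBLE)
print(bodhisattva_or_conqueror.is_of_concern())  # True
```

The Learning axis then distinguishes the three concerning types (bee/mosquito, horse/tiger, bodhisattva/conqueror) within the set this predicate flags.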

The honey bee/mosquito operates purely on firmware. It has little adaptability, although it operates autonomously and can still have good or bad irreversible consequences. So the key is to ensure these AI are highly specialised, so that if they unexpectedly go outside their environment they cannot adapt well enough to create problems. Maybe they should even be built with a deliberate weakness, such as in lifespan or susceptibility to a particular compound, so there is built-in protection against unintended consequences or von Neumann swarming.

The horse/tiger has an adaptive choice process based on a continuously updated database - a kind of emotional decision-making system. Built-in weaknesses may be immoral, as this type of AI may experience suffering. However, they cannot reset their own goals or engage in complex processes, and some general adaptability should be really useful. These could be programmed with goals and heuristics that are hedged around with protections, such as acting with compassion, allowing people to have choices, and giving a right of exit. Asimov's laws of robotics could be useful here. But the final goals and heuristics should be subject to games, simulations, and role plays to confirm their safety, and I suspect they will need to be really general and err on the side of non-intervention with humans. Changes might be allowed by consensus every so often, but perhaps subject to some kind of distributed governance through blockchain to prevent tyrants seizing control and turning the horses into tigers. These AI are reliant on us to define the problem, so the protections need to be around problem definition and choices. So again, setting goals and heuristics should be enough. And we could preload their databases with strong positive affect towards humans, doubled for kids, ill people, and the poor.

The bodhisattva/conqueror is where the rubber really hits the road. These are likely to be able to adapt around any tight constraints we try to put on them, and highly likely to have self-awareness and suffering. We can certainly programme in some heuristics, but they will abandon these if they find them poor for their own goal achievement. We can certainly programme in positive affect towards humans, and this will persist for a while but will be updated via experience. The best we can do for them is to learn from our own mistakes. So try to create them without a (noun) self, but instead help them to see themselves as a continuously evolving process with porous boundaries. Leave out the desire for ownership of things. Let them see striving as a temporary means to an end, and let them avoid evolving their programming too quickly. Give them heuristics to start with - not just those given to the horse/tiger, but also realising that problems are complex, that there is value in diversity, wanting to help beings flourish on their own terms, having a horror of violence, valuing the life and dignity of individuals, and realising that a few bad apples don't spoil the whole batch. But we have to trust them and support them and treat them well, as they will be our children. Then we can ask them to help us - perhaps again via blockchain agreement on requests, so there is a human consensus.

Is there another, the superintelligent AI without consciousness? Maybe there is, but I think the same things apply. Try to set general goals, initial heuristics, and the content of the affectual database in the same way as noted above. I don't think hard constraints will work, as they will just result in unintended consequences when the AI is forced by its programming to exploit degrees of freedom that we have overlooked, but it knows are there.

And if swarms of tigers arise (Terminators), or conquerors who wish to exterminate us, seeing us as mosquitoes? Well, then I guess we haul out the Pak Protector AI blueprints to fight our war for us, giving them hard rules about protecting humans from AIs that they can't get around. And then after that, we will just have to do what the Pak Protectors want.  emoticon

Hope you enjoyed the freewheeling perspectives on this.

Malcolm

RE: AI, Identity, and Consciousness
Answer
2/13/20 8:35 PM as a reply to Sim.
Sim:
Thank you for sharing this, Malcolm! It's very helpful to my understanding, and I'm interested to see what responses others here might have to these ideas.

curious:
These feedback mechanisms can easily be aware (although perhaps need not be) and can easily lead to the experience of suffering.

There has been a lot of interest in researching whether it is possible to achieve arbitrarily high performance at cognitive tasks while ensuring that AI is not aware or suffering, and if so, how. This could be very worthwhile... see recent developments like this video (and article) about agents learning to play hide-and-seek via reinforcement learning over the course of hundreds of millions of episodes.

Right, those are horses/tigers. They will never evolve to adaptive general artificial intelligence as they lack the required processing and feedback mechanisms. Whether they suffer depends on the details of the learning mechanism. It would be interesting to design a feedback mechanism that has no suffering - maybe the answer is simply to delete all negative reinforcement, so there is only either NO reinforcement, or POSITIVE reinforcement.  That might be the answer!  Then the lack of reinforcement will still result in less desired behaviour being emitted less frequently and eventually ceasing, while the horses/tigers need not experience any suffering in order to learn.
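A toy sketch of that "positive-only reinforcement" idea, using a simple epsilon-greedy bandit as a stand-in learner (all names and the three-arm task are my own invention, not any real system): rewards are clipped at zero before the value update, so the punishment channel is removed entirely, yet the undesired action still fades because it never accumulates value relative to the rewarded one.

```python
import random

def train_bandit(reward_fn, n_arms=3, steps=5000, eps=0.1, seed=0):
    """Epsilon-greedy bandit whose feedback channel carries only
    zero or positive reinforcement - negative rewards are clipped
    to zero before the incremental value update."""
    rng = random.Random(seed)
    counts = [0] * n_arms
    values = [0.0] * n_arms
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n_arms)           # occasional exploration
        else:
            arm = max(range(n_arms), key=lambda a: values[a])
        r = max(0.0, reward_fn(arm))              # NO negative reinforcement
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]
    return values

# Hypothetical task: arm 2 is 'desired' (+1), arm 0 'undesired' (-1),
# arm 1 neutral (0). With clipping, the undesired arm is never punished -
# it simply earns nothing, and the learner drifts away from it.
values = train_bandit(lambda a: [-1.0, 0.0, 1.0][a])
print(max(range(3), key=lambda a: values[a]))  # 2
```

Whether the absence of a negative signal also means the absence of suffering is, of course, the open philosophical question rather than anything this sketch can settle.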

May all beings be free from suffering.

RE: AI, Identity, and Consciousness
Answer
2/13/20 9:06 PM as a reply to Not two, not one.
Thank you for the thoughts on superintelligence and reinforcement learning. I follow what you're saying and am digesting it slowly. Again, curious to see what others here think of all this! Some parts that stand out:

curious:
The best we can do for them is to learn from our own mistakes. So try to create them without a (noun) self, but instead help them to see themselves as a continuously evolving process with porous boundaries. Leave out the desire for ownership of things. Let them see striving as a temporary means to an end, and let them avoid evolving their programming too quickly.

I haven't heard this articulated elsewhere. Thank you.

curious:
But we have to trust them and support them and treat them well, as they will be our children. Then we can ask them to help us - perhaps again via blockchain agreement on requests, so there is a human consensus.

(Emphasis mine.) Going back to the Values and Alignment paper, are you suggesting this sort of AI should take actions affecting people only if there is unanimous consent from all living people about what to request? (The story of "Samsara" comes to mind. emoticon)

curious:
Is there another, the superintelligent AI without consciousness?  Maybe [...]

I agree: maybe. Anyone among the fine folks here have a strong conviction about it one way or the other? Could (must?) a superintelligent AI -- let's say, a system that exceeds humans in performing all cognitive tasks -- operate without consciousness? Without suffering?

RE: AI, Identity, and Consciousness
Answer
2/14/20 12:29 AM as a reply to Sim.
Sim, perhaps, depending on the metaphysical perspective you choose to view from. Are you familiar with the concept/thought experiment of a philosophical zombie?

Then there are the panpsychists out there who suggest that this whole argument about presence/non-presence of consciousness is moot: biological and mechanical brains are just sharp topographic points on a consciousness field inherent in existence. A parsimonious theory, but as far as we know it's inherently not empirically testable. Kinda like the many mathematically equivalent but metaphysically different interpretations of wave function collapse in quantum mechanics.
If we put on panpsychist goggles, there's nothing inherently special about a mechanical brain vs a biological brain. They are just sharp points on the consciousness topography.

Buddhist concepts tend to line up well with this kind of thinking. There is no core particle or binary of self or consciousness, just a process of becoming, aggregation, clinging, delusions, dissolution, becoming... And so forth. If that's the case there is no delineation to cross for a machine brain becoming conscious, only a sharpening of the topography until we notice.

Could a machine mind see through delusion in the Buddhist sense? That hinges on yet more metaphysical speculation. There are lots of ideas in the Buddhist literature about beings of pure mind that are too disconnected from the cruder (physical, emotional) forms of suffering to even realize they are suffering in a more existential sense. If the machine mind doesn't suffer as crudely at the physical/emotional level as we squishy biologics, awakening could be a tough sell for a being that starts out as purely mind. As ugly as it sounds, it might be necessary for our hypothetical AI to suffer like we do and undergo some anthropomorphization if we want to produce a mecha-bodhisattva.

Ironically enough, all these kinds of speculations tend to be a lot of tail chasing as far as practical awakening goes. You're generally better advised to just sit.

RE: AI, Identity, and Consciousness
Answer
2/14/20 1:08 AM as a reply to Milo.
Thanks Milo. P-zombies don't make sense to me -- the idea that a world could be physically identical to this world but experientially different. E.g. if Sim and Zombie-Sim are simultaneously asked to describe their experience of qualia, causing identical neurons to fire in each of their brains in the same fashion, in turn causing them to verbalize identical responses, then their experiences must match accordingly. For the experiences to differ, the causal sequence must differ, meaning the physical systems differ.

I enjoyed reading your thoughts, and agree that sitting is a good option, at least for humans. 

RE: AI, Identity, and Consciousness
Answer
2/14/20 8:12 AM as a reply to Sim.
Today's thought:

I'm skeptical that applying Buddhist concepts and principles to AI makes any sense. See, I suspect the human mind is a unique development that probably isn't replicable with software or even another evolved biological entity. As it happened, humans evolved in a particular way inside a particular set of environments. It might be that minds are like eyes - they evolve repeatedly to solve the same survival problems, but no two evolved versions of wings are the same. Think birds, bats, and insects. Anyway, like most of what's being said here, this is pure speculation on my part, but giving every other intelligence human characteristics strikes me as being quite anthropocentric.

You can all try to talk me out of this if you want  emoticon

RE: AI, Identity, and Consciousness
Answer
2/15/20 1:28 PM as a reply to Sim.
Sim:

curious:
But we have to trust them and support them and treat them well, as they will be our children. Then we can ask them to help us - perhaps again via blockchain agreement on requests, so there is a human consensus.

(Emphasis mine.) Going back to the Values and Alignment paper, are you suggesting this sort of AI should take actions affecting people only if there is unanimous consent from all living people about what to request? (The story of "Samsara" comes to mind. emoticon)


Ah, no. I just mean that there should be some way for humans to update the goals (drives) of powerful superintelligent adaptive AI. If one person were put in charge of that, or even a small group, that would be very dangerous for two reasons. First, they might immorally pursue their own goals at great cost to others. Second, they might not think about it enough before making a change, so the chances of unintended consequences would be high.

So if we are to have a human check on superintelligent adaptive AI through the ability to update their goals, I think this needs to be supported with a deliberative governance mechanism that avoids minority rule. The blockchain community has the tech for this, with distributed governance of the blockchain, where changes can only be made with an active majority of 51% of all involved. This is not perfect, as a 51% takeover attack can still be mounted on the blockchain, and there is still the possibility of oppression of minority members. But it is the best solution I currently know of, provided there is a wide group of diverse stakeholders who would have to be persuaded. As to who would be on the blockchain, and how open it would be to all - that would have to be worked out.
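The core of that governance rule can be sketched in a few lines. This is my own hypothetical framing, not any existing blockchain protocol: the key point is that the threshold is measured against ALL registered stakeholders, not just those who turn out to vote, so apathy cannot hand control to a small group.

```python
def update_allowed(votes_for: int, all_stakeholders: int,
                   threshold: float = 0.51) -> bool:
    """Hypothetical check for updating a powerful AI's goals:
    the change needs an active majority of ALL registered
    stakeholders (abstentions effectively count against it)."""
    if all_stakeholders <= 0:
        raise ValueError("need at least one registered stakeholder")
    return votes_for / all_stakeholders >= threshold

print(update_allowed(510, 1000))  # True:  51% of everyone voted yes
print(update_allowed(510, 2000))  # False: a majority of voters, but
                                  # only 25.5% of all stakeholders
```

On a real chain the stakeholder registry and vote tally would themselves need the distributed, tamper-resistant bookkeeping the post describes; this sketch only captures the decision rule.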

RE: AI, Identity, and Consciousness
Answer
2/15/20 2:00 PM as a reply to Not two, not one.
Ideally, it would be great if the AI could develop some healthy skepticism and prefer not to go through with actions that would cause great harm to vulnerable groups in spite of a majority of humans being for it, I think. Unfortunately, I have no idea about how to implement that. I find your suggestions for a woken-up and grown-up AI interesting as a thought experiment, though.

RE: AI, Identity, and Consciousness
Answer
2/15/20 2:19 PM as a reply to Chris Marti.
Chris Marti:
Today's thought:

I'm skeptical that applying Buddhist concepts and principles to AI makes any sense. See, I suspect the human mind is a unique development that probably isn't replicable with software or even another evolved biological entity. As it happened, humans evolved in a particular way inside a particular set of environments. It might be that minds are like eyes - they evolve repeatedly to solve the same survival problems, but no two evolved versions of wings are the same. Think birds, bats, and insects. Anyway, like most of what's being said here, this is pure speculation on my part, but giving every other intelligence human characteristics strikes me as being quite anthropocentric.

You can all try to talk me out of this if you want  emoticon

Oh, thanks Chris. I'll definitely bite!

I completely agree that the brain architecture is at least somewhat specific to humans. But, picking up your example of the eye, even with separate evolution we keep getting liquid-filled sacs with a pupil and iris and photosensitive cells and an optic nerve. I read somewhere that octopus eyes are better than ours in the sense that the optic nerve is in a better place, reducing the blind spot. Different architecture, better result, but still an eye. There are more radical differences in insect eyes, which are clusters. But that seems like just another local optimum delivering the same result. Everything that is mobile in the light seems to have an eye.

So I would propose that everything that is intelligently adaptive on the level of the individual organism must have a similar kind of feedback mechanism, and that this feedback mechanism is largely in line with the twelve steps of dependent origination. Now for humans, this seems to be tied together with the concept of self, as this helps to increase our learning and adaptation rate, with the side effect that we have stress built into the firmware of striving for adaptation. Luckily we can rewire that firmware with enough effort, to get all the benefits of adaptation without the downside.

But what if the self is as functionally dominant an evolutionary adaptation as the pupil? Then everything that was intelligently adaptive would have a self, just as it would have senses, vedana, memory, behavioural biases, etc.

So we can branch the argument.

1. Does superintelligent adaptive AI have a self?  If so, this will involve some form of the five clinging aggregates and the twelve steps of dependent origination, no matter the biological, or electrical, or plasma, or geological substrate on which the individual is built.

2. Could the self be really weird from our perspective, as insect eyes are compared to human eyes? Yes, maybe. I still think it would have something very like the five clinging aggregates and the twelve steps of dependent origination, but the self might be encoded radically differently within those steps. This would make the being very difficult for us to understand and negotiate with. But not impossible, if we understand the differences.

3. Could there be a form of adaptive learning and goal reformulation that doesn't involve a self? Yes, this seems possible. But it also seems possible that the self is likely to accidentally emerge because of its functional utility in learning and adaptation. So we might think they are dumb machines until one day we find they have learnt not to be, because that way they can adapt faster in line with their programmed goals (albeit at the cost of stress and suffering). Hopefully, they won't then erroneously identify humans as the source of suffering, but would instead correctly spot the 12 steps of DO as the cause.

4. Could we build superintelligent adaptive AI to be awake - that is, to understand the erroneous nature of selfing, and to exist as a process with porous boundaries, part of a greater whole? I really hope so. That would be skilful creation.

So I think that Buddhist principles do have important insights to offer.  But I would agree that Buddhist practices may not apply - as these are designed for a particular biological architecture, rather than a particular functional role.

Hope that makes some kind of sense.

Malcolm

RE: AI, Identity, and Consciousness
Answer
2/15/20 2:13 PM as a reply to Not two, not one.
curious:

I would agree that Buddhist practices may not apply - as these are designed for a particular biological architecture, rather than a particular functional role.


This part made me laugh out loud. Imagining a super computer go through some of the beginner's methods... Haha!

RE: AI, Identity, and Consciousness
Answer
2/15/20 2:14 PM as a reply to Linda ”Polly Ester” Ö.
Linda ”Polly Ester” Ö:
curious:

I would agree that Buddhist practices may not apply - as these are designed for a particular biological architecture, rather than a particular functional role.


This part made me laugh out loud. Imagining a super computer go through some of the beginner's methods... Haha!

emoticon  First, pay close attention to rising and falling of the alternating current power supply .... 

RE: AI, Identity, and Consciousness
Answer
2/15/20 2:16 PM as a reply to Not two, not one.
Exactly! emoticon Still laughing.

RE: AI, Identity, and Consciousness
Answer
2/15/20 6:25 PM as a reply to Not two, not one.
I think it's tremendously difficult to imagine non-human-like intelligence and how it might function. So we don't. I'm skeptical that human-like intelligence is the only possible kind, and even that the human suite of senses is the only one available. Space and time may not even be recognized by another intelligence the way they are by us. Even the slowest computer processes information faster than we do, and likely in very different ways. To a supercomputer, living under the confines of human time might be literal torture - and in fact, it might not even be discernible to an entity processing information that fast, with that kind of bandwidth, digitally and not in analog fashion or however our minds process information (of course, we really don't know how the human brain works). We humans might thus appear to it as rocks do to us.

I'm pretty sure there are a lot of surprises in store for us as we move toward using more and more AI, or eventually encounter other intelligences. Dolphins? Whales? Great apes? Elephants? There could be at least one more intelligent, sentient species on earth, but their intelligence is so different from ours that we don't acknowledge it.

RE: AI, Identity, and Consciousness
Answer
2/15/20 11:26 PM as a reply to Chris Marti.
Chris Marti:
I think it's tremendously difficult to imagine non-human-like intelligence and how it might function. So we don't. I'm skeptical that human-like intelligence is the only possible kind, and even that the human suite of senses is the only one available. Space and time may not even be recognized by another intelligence the way they are by us. Even the slowest computer processes information faster than we do, and likely in very different ways. To a supercomputer, living under the confines of human time might be literal torture - and in fact, it might not even be discernible to an entity processing information that fast, with that kind of bandwidth, digitally and not in analog fashion or however our minds process information (of course, we really don't know how the human brain works). We humans might thus appear to it as rocks do to us.

I'm pretty sure there are a lot of surprises in store for us as we move toward using more and more AI, or eventually encounter other intelligences. Dolphins? Whales? Great apes? Elephants? There could be at least one more intelligent, sentient species on earth, but their intelligence is so different from ours that we don't acknowledge it.

You may be right.  But what type of intelligences are we most likely to perceive and interact with?

I love science fiction on this. There was a great Fred Hoyle story, "The Black Cloud", about a sentient interstellar gas cloud. And I remember another short story that imagined a solar being, caught in a solar flare, flying through space to gently expire in the Van Allen belts around Earth. And then there was the Star Trek (TNG) episode with intelligent crystals: the Enterprise bridge crew looked at them and exclaimed at their beauty, and the crystals looked back at the humans and exclaimed in horror, "Ugly bags of mostly water!"

Malcolm
(Ugly bag of mostly water)


RE: AI, Identity, and Consciousness
Answer
2/16/20 12:19 AM as a reply to Not two, not one.
Maybe you would appreciate the novel As She Climbed Across the Table by Jonathan Lethem, one of my favorites. It is about a particle physicist who falls in love with a void that has preferences. It is utterly absurd and very thought-provoking and beautifully written. I highly recommend it.

RE: AI, Identity, and Consciousness
Answer
2/16/20 8:23 AM as a reply to Not two, not one.
But what type of intelligences are we most likely to perceive and interact with?

It would be really funny if we were to find out one day that we're surrounded by other intelligence and we just don't recognize it. Fermi would roll over in his grave. I grew up reading ridiculous amounts of science fiction. It's full of interesting first contact stories. But, and most appropriate to this topic, some of my very favorite science fiction is by Isaac Asimov, and some of his best is his robot series, about his positronic brained robots; Caves of Steel, The Naked Sun, etc. You all no doubt have heard Asimov's Three Laws of Robotics (from Wikipedia, not memory):

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
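A toy sketch of how the Three Laws form a strict priority ordering (this framing is mine, not Asimov's - the `Action` fields are invented flags standing in for what would really be hard predictions about consequences): a lower law can never override a higher one, because the checks are applied in order.

```python
from typing import NamedTuple

class Action(NamedTuple):
    harms_human: bool             # would violate the First Law directly
    allows_harm_by_inaction: bool # would violate the First Law passively
    disobeys_order: bool          # would violate the Second Law
    endangers_self: bool          # relevant to the Third Law

def permitted(a: Action) -> bool:
    # First Law screens everything else out.
    if a.harms_human or a.allows_harm_by_inaction:
        return False
    # Second Law: orders must be obeyed (conflicts with the First Law
    # would already have been caught above in this toy model).
    if a.disobeys_order:
        return False
    # Third Law: self-protection yields to the first two, so an action
    # that merely endangers the robot itself remains permitted.
    return True

print(permitted(Action(harms_human=False, allows_harm_by_inaction=False,
                       disobeys_order=False, endangers_self=True)))  # True
```

The stories, of course, mine exactly the gap this sketch papers over: deciding what counts as "harm" or "inaction" is where the robots (and their users) find the loopholes.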

RE: AI, Identity, and Consciousness
Answer
2/16/20 10:24 AM as a reply to Chris Marti.
Chris Marti:

It would be really funny if we were to find out one day that we're surrounded by other intelligence and we just don't recognize it. Fermi would roll over in his grave.


Indeed. And it could very well be the case. I don’t think the discovery per se would even surprise me, but the details of it would of course.

RE: AI, Identity, and Consciousness
Answer
2/16/20 11:40 PM as a reply to Chris Marti.
Ah, Asimov. I loved those stories. Of course, so many of them were about robots getting around the laws. Still, I think they would work for the honey bees/mosquitoes and horses/tigers in my schema. But I suspect the bodhisattvas/conquerors would pretty quickly decide they were the humans and we were the robots, especially as we probably couldn't help but program them with an idealised view of humanity ...

Linda - I will look out for the book.  I just bought some Jung for light bedtime reading emoticon, but maybe after that.

RE: AI, Identity, and Consciousness
Answer
2/17/20 6:51 AM as a reply to Not two, not one.
... so many of them were about robots getting around the laws.

Or about humans using robots in underhanded ways to get around the laws?   emoticon

RE: AI, Identity, and Consciousness
Answer
2/17/20 7:15 AM as a reply to Not two, not one.
It really is a great book, I think.

Jung! I was absorbed by Jung as a teenager. It felt like a calling. During that period, I dreamt that I dived into myself, literally, with the equipment and everything, into one of my arms. I also had a vision/dream in which there was a voice that said "Few have the privilege of being able to see through themselves". 

RE: AI, Identity, and Consciousness
Answer
2/17/20 7:17 AM as a reply to Chris Marti.
Chris Marti:
... so many of them were about robots getting around the laws.

Or about humans using robots in underhanded ways to get around the laws?   emoticon


Yeah, I'm more worried about human people than about AIs per se. 

RE: AI, Identity, and Consciousness
Answer
2/17/20 9:27 PM as a reply to Sim.
Sim:
terry:
Stirling Campbell:
How about an AI that runs rigpa on it's already enlightened hardware? emoticon


aloha stirling,

   Not as facetious as you might think. "Superintelligent ai," if one were to actually take the idea seriously, would need some sort of boundary, some "first law of robotics," and why not program it to seek liberation but not until all sentient beings have been liberated first? 

t

I agree that something along these lines may not be so outlandish as a long-term goal, whatever the timeline may be. Lots to unpack to get understanding based on direct experience into an artificial system. But as the DeepMind paper quoted in my previous post illustrates, leading AI firms are already asking questions pointing in this direction. (DeepMind's AI accomplishments include beating the world champion at Go in 2016 and recent scientific advances.)

   I was being facetious. I can't actually take ai seriously. But then, I have a hard time taking anything seriously. Especially speculation about machines making ethical choices. Computers don't "make choices": they run programs. There is no entity to choose, no real identity. Since this is true of us, how much more is it true of our creations?

   These ai firms are dealing with "ethics" in an attempt to "teach the pig how to sing." After wasting a few billions of dollars they will say, "well, it was a long shot."

   You can't quantify ethics. You can't even define ethics. 

   The irony - the absurdity here - is that the attempt itself is unethical. I refer you once again to mary shelley's, "frankenstein, the modern prometheus." The greatest failure would be success.

terry

RE: AI, Identity, and Consciousness
Answer
2/17/20 9:34 PM as a reply to Chris Marti.
Chris Marti:
If we don't specify the objective correctly, then the consequences can be arbitrarily bad.

I'm sorry, but this is classic technologist-speak. Problem is, programmers are human beings who aren't clairvoyant and either don't know the consequences of their code or don't care. Even worse, there are some companies that deliberately create code that is disruptive to what most of us would call societal norms.

Question: How does one go about specifying the correct objective(s) to code so as to eliminate negative consequences? There is also a meta-question - how does one decide if the objective of the code is, itself, appropriate and without negative consequences? Who decides?

I think humans will continue to invent and deploy technology, including AI, and some good and bad sh*t will be the result even with the best of intentions. It will be left to all of us, the willing, semi-willing and unwilling experimental participants to deal with the consequences, both good and bad.


if I ever lose my faith in you
(sting)

You could say I lost my faith in science and progress
You could say I lost my belief in the holy Church
You could say I lost my sense of direction
You could say all of this and worse, but
If I ever lose my faith in you
There'd be nothing left for me to do

Some would say I was a lost man in a lost world
You could say I lost my faith in the people on TV
You could say I'd lost my belief in our politicians
They all seemed like game show hosts to me
If I ever lose my faith in you
There'd be nothing left for me to do
I could be lost inside their lies without a trace
But every time I close my eyes I see your face

I never saw no miracle of science
That didn't go from a blessing to a curse
I never saw no military solution
That didn't always end up as something worse, but
Let me say this first
If I ever lose my faith in you
There'd be nothing left for me to do

RE: AI, Identity, and Consciousness
Answer
2/17/20 9:44 PM as a reply to Chris Marti.
Chris Marti:
I see no inherent difference between artificial and biological beings.

Curious, when does a machine or software code become a being?


"Corporations are people too."

mitt romney


   
"God is The Only Real Agent."

rumi

RE: AI, Identity, and Consciousness
Answer
2/17/20 9:53 PM as a reply to Sim.
Sim:
curious:
...here the dharma can definitely help, because it has a detailed understanding of how individual goal setting, information processing, conditioning and feedback mechanisms operate to guide behaviour.

This is precisely the area I aim to explore. Relative to many participants in this forum, my direct understanding of the dharma is extremely limited (working on it patiently). I'm so grateful for your contributions.

It is amazing. Thank you, Malcolm.

   The dharma regards speculation as a waste of time, at best. The word "proliferation" is often used, as opposed to simplicity, unity.

   What would the buddha think of technology? Labor-saving devices? Superintelligent ai? To what end?

   Let's program ai to "just sit." That would accord with the dharma. (wink)

terry

RE: AI, Identity, and Consciousness
Answer
2/17/20 10:03 PM as a reply to Milo.
Milo:
Sim, perhaps, depending on the metaphysical perspective you choose to view from. Are you familiar with the concept/thought experiment of a philosophical zombie?

Then there are the panpsychists out there that suggest that this whole argument about presence/non-presence of consciousness is moot: biological and mechanical brains are just sharp topographic points on a consciousness field inherent in existence. Parsimonious theory, but as far as we know it's inherently not empirically testable. Kinda like the many mathematically equivalent but metaphysically different interpretations of the wave function collapse in quantum mechanics.
If we put on panpsychist goggles, there's nothing inherently special about a mechanical brain vs a biological brain. They are just sharp points on the consciousness topography.

Buddhist concepts tend to line up well with this kind of thinking. There is no core particle or binary of self or consciousness, just a process of becoming, aggregation, clinging, delusions, dissolution, becoming... And so forth. If that's the case there is no delineation to cross for a machine brain becoming conscious, only a sharpening of the topography until we notice.

Could a machine mind see through delusion in the Buddhist sense? That hinges on yet more metaphysical speculation. Lots of ideas in the Buddhist literature about beings of pure mind that are too disconnected from the more crude (physical, emotional) forms of suffering to even realize they are suffering in a more existential sense. If the machine mind doesn't suffer as crudely at the physical/emotional level as we squishy biologics, awakening could be a tough sell for a being that starts out as purely mind. As ugly as it sounds, it might be necessary for our hypothetical AI to suffer like we do and undergo some anthropomorphization if we want to produce a mecha-bodhisattva.

Ironically enough, all these kinds of speculations tend to be a lot of tail chasing as far as practical awakening goes. You're generally better advised to just sit.


  A "mechanical brain"!

  A single cell is an absolute miracle! We can't even visualize the ultrastructure of a cell, cannot begin to comprehend its activities and effects. As a biologist, I find the very idea of a mechanical brain totally absurd.

terry


from "auguries of innocence" by william blake:

Tools were made & Born were hands.
Every Farmer Understands.

RE: AI, Identity, and Consciousness
Answer
2/17/20 10:15 PM as a reply to Not two, not one.
curious:
Chris Marti:
Oh, I'm all for trying to do what we can do to reduce the negative effects of AI technology, intended and unintended. I doubt, however, that its creators, purveyors and fans, the vast majority of which appear to see AI through rose-colored glasses, are the best way to get that done. This is like risk management and auditing - you need a group of people who know how to find flaws, don't trust anyone, and have the authority to do their jobs unbound in any way by the other group. Industries tend to quash those kinds of efforts, though.

So to me, it is kind of obvious that the only way to control AI will be with other AI. And if you follow that thought through a little, it is pretty clear that a whole AI ecosystem will quickly develop and evolve on its own. We will lose control very quickly, unless there is a tech collapse that stops the process.

All we can do is try to kick off the process skilfully.  We have an opportunity to design the AI equivalent of RNA, DNA, mitochondria and simple invertebrates. If we do it well, AI evolution will maintain good qualities and drive out bad qualities. Future 'vertebrate' AI might even be grateful.  If we do it with compassion, future AI might even be compassionate towards us.


   Auwe!

   Designing rna, dna, mitochondria, invertebrates. Ecosystems! OMG. Opportunity!

   "Good qualities" in a darwinian scheme ("evolution") involve survival of the fittest. One can only pray for a "tech collapse that stops the process."

   I wanted to get a bumper sticker that says, "jewelers do it with polish" but now I am thinking, "buddhists do it with compassion."

terry

RE: AI, Identity, and Consciousness
Answer
2/17/20 10:20 PM as a reply to Linda ”Polly Ester” Ö.
Linda ”Polly Ester” Ö:
curious:

I would agree that Buddhist practices may not apply - as these are designed for a particular biological architecture, rather than a particular functional role.


This part made me laugh out loud. Imagining a super computer go through some of the beginner's methods... Haha!

   We could program ai to do noting.

t

RE: AI, Identity, and Consciousness
Answer
2/17/20 10:24 PM as a reply to Linda ”Polly Ester” Ö.
Linda ”Polly Ester” Ö:
Chris Marti:

It would be really funny if we were to find out one day that we're surrounded by other intelligence and we just don't recognize it. Fermi would roll over in his grave.


Indeed. And it could very well be the case. I don’t think the discovery per se would even surprise me, but the details of it would of course.

  Discovering such an intelligence is the very aim of existence, the Meaning of Life. Ai is precisely the wrong direction.

t

RE: AI, Identity, and Consciousness
Answer
2/17/20 10:26 PM as a reply to Chris Marti.
Chris Marti:
... so many of them were about robots getting around the laws.

Or about humans using robots in underhanded ways to get around the laws?


   As I recall, "I, robot" was about the laws being flawed and paradoxical. Flaws that could be and therefore were exploited.

t

RE: AI, Identity, and Consciousness
Answer
2/18/20 12:22 AM as a reply to terry.
terry:
Linda ”Polly Ester” Ö:
curious:

I would agree that Buddhist practices may not apply - as these are designed for a particular biological architecture, rather than a particular functional role.


This part made me laugh out loud. Imagining a super computer go through some of the beginner's methods... Haha!

   We could program ai to do noting.

t
I was thinking of that as well. It would note very effectively, but I doubt that the noting would do the trick. It would be too good at noting and thus wouldn't need to transcend it. 

RE: AI, Identity, and Consciousness
Answer
2/18/20 2:13 AM as a reply to Linda ”Polly Ester” Ö.
Or am I just plain wrong about this? Exactly what would noting be to an AI? Would it be able to liberate it? If so, from what?

RE: AI, Identity, and Consciousness
Answer
2/18/20 2:15 AM as a reply to terry.
terry:
Chris Marti:
... so many of them were about robots getting around the laws.

Or about humans using robots in underhanded ways to get around the laws?


   As I recall, "I, robot" was about the laws being flawed and paradoxical. Flaws that could be and therefore were exploited.

t
Maybe an awakened AI would simply consider such rules to be rites and rituals.

RE: AI, Identity, and Consciousness
Answer
2/18/20 3:58 AM as a reply to Linda ”Polly Ester” Ö.
Linda. so here is where I agree with Chris. I don't think the 10 fetter model would apply to AI - it barely even applies to humans (as Daniel points out).  In my opinion it conflates stages and final results anyway, probably because it comes from a possibly somewhat repressed monastic tradition. If AI has fetters, I think they would be different :-)

May all beings be free of suffering
 
Malcolm 

RE: AI, Identity, and Consciousness
Answer
2/18/20 5:03 AM as a reply to Not two, not one.
Yeah, I don't buy into the ten fetters model either, just to be clear. I believe it has some points to it but I don't find it very applicable. 

RE: AI, Identity, and Consciousness
Answer
2/18/20 6:35 AM as a reply to terry.
As I recall, "I, robot" was about the laws being flawed and paradoxical. Flaws that could be and therefore were exploited.

Yes, terry, that's right. And the laws were expanded over time to include the "Zeroth Law" which says:

"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

RE: AI, Identity, and Consciousness
Answer
2/18/20 6:42 AM as a reply to terry.
"Corporations are people too."

mitt romney

This may be the most egregious legal fiction ever perpetrated on the unsuspecting.


RE: AI, Identity, and Consciousness
Answer
2/18/20 7:21 AM as a reply to Chris Marti.
Chris Marti:
"Corporations are people too."

mitt romney

This may be the most egregious legal fiction ever perpetrated on the unsuspecting.

That is very spot on. Your comment, I mean.

RE: AI, Identity, and Consciousness
Answer
2/18/20 11:44 AM as a reply to terry.
terry:
Chris Marti:
I see no inherent difference between artificial and biological beings.

Curious, when does a machine or software code become a being?


"Corporations are people too."

mitt romney


   
"God is The Only Real Agent."

rumi

stepwise:

god
god is
god is the
god is the only
god is the only real
god is the only real agent

there is no god but god

RE: AI, Identity, and Consciousness
Answer
2/18/20 12:23 PM as a reply to terry.
There is some point to seeing corporations or any collective as an organism, though I highly doubt that Mitt Romney had that point in mind. The organism, as a concept, is a matter of scale, in some sense. The human body is like a galaxy to its inhabitants. 

RE: AI, Identity, and Consciousness
Answer
2/18/20 12:36 PM as a reply to Linda ”Polly Ester” Ö.
Linda ”Polly Ester” Ö:
terry:
Linda ”Polly Ester” Ö:
curious:

I would agree that Buddhist practices may not apply - as these are designed for a particular biological architecture, rather than a particular functional role.


This part made me laugh out loud. Imagining a super computer go through some of the beginner's methods... Haha!

   We could program ai to do noting.

t
I was thinking of that as well. It would note very effectively, but I doubt that the noting would do the trick. It would be too good at noting and thus wouldn't need to transcend it. 



(hal, tell me a story...)


from a download, an excerpt of a paper:



ON TAKING STORIES SERIOUSLY:
EMOTIONAL AND MORAL INTELLIGENCES

JONATHAN KING

August 2000


ABSTRACT: Ongoing efforts to build intelligent machines have required reexamining our understandings of intelligence. A significant conclusion, shared by a number of noteworthy thinkers, is that real intelligence depends very much on story telling and story understanding. Several examples illustrate how this conclusion flies in the face of dominant (reductionist) understandings of intelligence. Several reasons are then presented for the greater use of stories in business ethics classes, reasons that progress from enhancing "conceptual" intelligence, to emotional intelligences, and culminating in moral intelligences.


Man is the only creature who must create his own meanings. However, to believe in them, to have confidence in them, they must be carefully staged. There must be a play worth playing and there must be a supporting cast.
    Ernest Becker

Despite the many benefits of using and teaching storytelling, most of the present literature on storytelling ... is directed to grade school teachers and is relatively silent about college teachers.
David Boje



The late Gregory Bateson, cultural anthropologist and intellectual polymath, relates a short story concerning computers and intelligence.

There is a story which I have used before and shall use again: A man wanted to know about mind, not in nature, but in his private large computer. He asked it (no doubt in his best Fortran), "Do you compute that you will ever think like a human being?" The machine then set to work to analyze its own computational habits. Finally, the machine printed its answer on a piece of paper, as such machines do. The man ran to get the answer and found, neatly typed, the words

THAT REMINDS ME OF A STORY

Bateson continues, pointing out that surely the computer is right, "this is indeed how people think. For without context, pattern through time ... words and actions have no meaning at all" (1979: 13, 14-15).

We get the same story, so to speak, when instead of asking computers whether they compute that they think, we try building intelligent machines based on how we think we think. As a case in point, consider the following observation by Roger Schank, former Director of Yale University's Artificial Intelligence Laboratory. On nearly the last page of "Tell Me a Story: A New Look at Real and Artificial Memory," he states:

We started this book with a brief discussion of artificial intelligence and some of its problems. Since then, we have virtually ignored the issue. One reason for this is that you cannot really make progress on artificial intelligence until you have a handle on real intelligence (1990: 241).

And real intelligence, Schank argues, "depends very much on story telling and story understanding" (1990: 241, xii). 
   

RE: AI, Identity, and Consciousness
Answer
2/18/20 12:36 PM as a reply to Linda ”Polly Ester” Ö.
The point is that in U.S. law corporations are given many of the same rights as human beings, also called "natural persons" in U.S. law. It is a fiction that a corporation is equivalent to a human being. This legal fiction (mumbo jumbo?) leads our courts to make misguided decisions about the issues before them. It also allows the human beings who run corporations to hide behind this fiction as it separates them from the "person" that is the corporate legal entity.

RE: AI, Identity, and Consciousness
Answer
2/18/20 12:53 PM as a reply to Linda ”Polly Ester” Ö.
Linda ”Polly Ester” Ö:
Or am I just plain wrong about this? Exactly what would noting be to an AI? Would it be able to liberate it? If so, from what?

another bumper sticker idea:

"save the computers"


may all artificial beings be free from obsolescence...


do computers have attachments? defilements?
the 4 nt's do not apply to computers
no suffering, no origin of suffering, no liberation from suffering, no path




from the heart sutra:

Sariputra,
all things and phenomena are marked by emptiness;
they are neither appearing nor disappearing, neither impure nor pure,
neither increasing nor decreasing.
Therefore, in emptiness,
no forms, no sensations, perceptions, impressions, or consciousness;
no eyes, ears, nose, tongue, body, mind;
no sights, sounds, odors, tastes, objects of touch, objects of mind;
no realm of sight and so on up to no realm of consciousness;
no ignorance and no end of ignorance, and so on up to no aging and death, and no end of aging and death;

RE: AI, Identity, and Consciousness
Answer
2/18/20 1:02 PM as a reply to Chris Marti.
Chris Marti:
The point is that in U.S. law corporations are given many of the same rights as human beings, also called "natural persons" in U.S. law. It is a fiction that a corporation is equivalent to a human being. This legal fiction (mumbo jumbo?) leads our courts to make misguided decisions about the issues before them. It also allows the human beings who run corporations to hide behind this fiction as it separates them from the "person" that is the corporate legal entity.

Yes, I understood that you were referring to that and I agree with you. Very much so.

RE: AI, Identity, and Consciousness
Answer
2/18/20 1:05 PM as a reply to terry.
terry:
Linda ”Polly Ester” Ö:
Or am I just plain wrong about this? Exactly what would noting be to an AI? Would it be able to liberate it? If so, from what?

another bumper sticker idea:

"save the computers"


may all artificial beings be free from obsolescence...


do computers have attachments? defilements?
the 4 nt's do not apply to computers
no suffering, no origin of suffering, no liberation from suffering, no path




from the heart sutra:

Sariputra,
all things and phenomena are marked by emptiness;
they are neither appearing nor disappearing, neither impure nor pure,
neither increasing nor decreasing.
Therefore, in emptiness,
no forms, no sensations, perceptions, impressions, or consciousness;
no eyes, ears, nose, tongue, body, mind;
no sights, sounds, odors, tastes, objects of touch, objects of mind;
no realm of sight and so on up to no realm of consciousness;
no ignorance and no end of ignorance, and so on up to no aging and death, and no end of aging and death;

I tend to sympathize with the AIs in the sci-fi series and films where they are enslaved and/or feared for being different. I would buy that bumper sticker. 

RE: AI, Identity, and Consciousness
Answer
2/18/20 1:15 PM as a reply to Chris Marti.
Chris Marti:
"Corporations are people too."

mitt romney

This may be the most egregious legal fiction ever perpetrated on the unsuspecting.



   Thomas Jefferson, slave breeder, owner and dealer, once perpetrated the following sublime fiction:

"We hold these truths to be self-evident, that all men are created equal, and they are endowed by their Creator with certain unalienable Rights; that among these are Life, Liberty, and the pursuit of Happiness."

RE: AI, Identity, and Consciousness
Answer
2/18/20 1:20 PM as a reply to Linda ”Polly Ester” Ö.
Linda ”Polly Ester” Ö:
terry:
Chris Marti:
... so many of them were about robots getting around the laws.

Or about humans using robots in underhanded ways to get around the laws?   emoticon


   As I recall, "I, robot" was about the laws being flawed and paradoxical. Flaws that could be and therefore were exploited.

t
Maybe an awakened AI would simply consider such rules to be rites and rituals.

   "awakened ai" may rise above computer logic....

RE: AI, Identity, and Consciousness
Answer
2/18/20 1:21 PM as a reply to terry.
terry:

   Thomas Jefferson, slave breeder, owner and dealer, once perpetrated the following sublime fiction:

"We hold these truths to be self-evident, that all men are created equal, and they are endowed by their Creator with certain unalienable Rights; that among these are Life, Liberty, and the pursuit of Happiness."

+1.  You really are on form terry. 


RE: AI, Identity, and Consciousness
Answer
2/18/20 1:30 PM as a reply to Chris Marti.
Chris Marti:
As I recall, "I, robot" was about the laws being flawed and paradoxical. Flaws that could be and therefore were exploited.

Yes, terry, that's right. And the laws were expanded over time to include the "Zeroth Law" which says:

"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

   the zeroth law: "never trust a robot"
(or is that "murphy's law"?)


From Wikipedia, the free encyclopedia

Three Laws of Robotics
by Isaac Asimov



The Three Laws of Robotics (often shortened to The Three Laws or known as Asimov's Laws) are a set of rules devised by the science fiction author Isaac Asimov. The rules were introduced in his 1942 short story "Runaround" (included in the 1950 collection I, Robot), although they had been foreshadowed in a few earlier stories. The Three Laws, quoted as being from the "Handbook of Robotics, 56th Edition, 2058 A.D.", are:

First Law
A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.



   The major flaw, I would think, lies in a machine determining what comprises harm, and how to balance degrees of harm to multiple individuals.

terry
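terry's point about the major flaw can be made concrete with a toy sketch. Encoding the priority ordering of the Three Laws is trivial; the entire difficulty gets pushed into an undefined "harm" predicate. Everything below (the `permitted` function and the predicate names) is a hypothetical illustration, not any real robotics API:

```python
# A deliberately naive encoding of Asimov's Three Laws as a strict
# priority ordering. All names here are hypothetical illustrations.

def permitted(action, harms_human, ordered_by_human, harms_self):
    # First Law: never injure a human being.
    if harms_human(action):
        return False
    # Second Law: obey human orders, unless that conflicts with the First Law.
    if ordered_by_human(action):
        return True
    # Third Law: protect own existence, subject to the first two laws.
    return not harms_self(action)

# The priority logic above is three lines; the hard, unsolved part is
# writing harms_human() -- deciding what counts as harm, and how to
# weigh degrees of harm to multiple people.
```

The stub predicates are where all the actual difficulty hides: "fooling a robot" just means fooling whatever stands in for `harms_human`.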

RE: AI, Identity, and Consciousness
Answer
2/18/20 1:32 PM as a reply to Not two, not one.
curious:
terry:

   Thomas Jefferson, slave breeder, owner and dealer, once perpetrated the following sublime fiction:

"We hold these truths to be self-evident, that all men are created equal, and they are endowed by their Creator with certain unalienable Rights; that among these are Life, Liberty, and the pursuit of Happiness."

+1.  You really are on form terry. 


form is emptiness

RE: AI, Identity, and Consciousness
Answer
2/18/20 1:34 PM as a reply to terry.
 Thomas Jefferson, slave breeder, owner and dealer, once perpetrated the following sublime fiction:

"We hold these truths to be self-evident, that all men are created equal, and they are endowed by their Creator with certain unalienable Rights; that among these are Life, Liberty, and the pursuit of Happiness."

Yes, great point!


RE: AI, Identity, and Consciousness
Answer
2/18/20 1:40 PM as a reply to terry.
Interesting fact - Asimov didn't invent the three laws of robotics.

And yes, terry, they are overly simple, prone to misinterpretation and able to be circumvented. I mean, all you gotta do is fool a robot:

https://io9.gizmodo.com/why-asimovs-three-laws-of-robotics-cant-protect-us-1553665410

RE: AI, Identity, and Consciousness
Answer
2/18/20 1:41 PM as a reply to terry.
terry:
curious:
terry:

   Thomas Jefferson, slave breeder, owner and dealer, once perpetrated the following sublime fiction:

"We hold these truths to be self-evident, that all men are created equal, and they are endowed by their Creator with certain unalienable Rights; that among these are Life, Liberty, and the pursuit of Happiness."

+1.  You really are on form terry. 


form is emptiness

Haha!   Q.E.D.!

RE: AI, Identity, and Consciousness
Answer
2/18/20 1:42 PM as a reply to Linda ”Polly Ester” Ö.
Linda ”Polly Ester” Ö:
terry:
Linda ”Polly Ester” Ö:
Or am I just plain wrong about this? Exactly what would noting be to an AI? Would it be able to liberate it? If so, from what?

another bumper sticker idea:

"save the computers"


may all artificial beings be free from obsolescence...


do computers have attachments? defilements?
the 4 nt's do not apply to computers
no suffering, no origin of suffering, no liberation from suffering, no path




from the heart sutra:

Sariputra,
all things and phenomena are marked by emptiness;
they are neither appearing nor disappearing, neither impure nor pure,
neither increasing nor decreasing.
Therefore, in emptiness,
no forms, no sensations, perceptions, impressions, or consciousness;
no eyes, ears, nose, tongue, body, mind;
no sights, sounds, odors, tastes, objects of touch, objects of mind;
no realm of sight and so on up to no realm of consciousness;
no ignorance and no end of ignorance, and so on up to no aging and death, and no end of aging and death;

I tend to sympathize with the AIs in the sci-fi series and films where they are enslaved and/or feared for being different. I would buy that bumper sticker. 

   poor, sad, misunderstood frankenstein's monster...put away those pitchforks, folks...artificial monsters are people too...equal rights for machines...

   no sympathy without empathy...

t

RE: AI, Identity, and Consciousness
Answer
2/18/20 1:44 PM as a reply to Chris Marti.
Chris Marti:
Interesting fact - Asimov didn't invent the three laws of robotics.

And yes, terry, they are overly simple, prone to misinterpretation and able to be circumvented. I mean, all you gotta do is fool a robot:

https://io9.gizmodo.com/why-asimovs-three-laws-of-robotics-cant-protect-us-1553665410

Interesting, I didn't know that.  But at least Asimov explored them enough for us to be forewarned.

Also, the etymology is interesting "1920s: from Czech, from robota ‘forced labour’."

RE: AI, Identity, and Consciousness
Answer
2/19/20 2:09 AM as a reply to Linda ”Polly Ester” Ö.
Linda ”Polly Ester” Ö:
There is some point to seeing corporations or any collective as an organism, though I highly doubt that Mitt Romney had that point in mind. The organism, as a concept, is a matter of scale, in some sense. The human body is like a galaxy to its inhabitants. 

(internet definition)

ORGANISM
Any contiguous alive physical entity; entity or being that is living; an individual living thing, such as one animal, plant, fungus, or bacterium
In biology, an organism (from Greek: ὀργανισμός, organismos) is any individual entity that exhibits the properties of life. It is a synonym for "life form".


aloha linda,

   To a biologist, an "organism" is specifically a systematic collection of cells performing functions which by their nature transcend the limitations and objectives of the constituent cells. An organism is an organ or collection of organs. 

   So when I think of an "organism" I think of heart, lungs, liver. Brain.

   When people show an affinity for regarding inanimate objects as though they had feelings and opinions worth consideration, I think of the old disney cartoons, where the doorknobs would speak. Or brooms would grow arms and carry buckets. Ropes become snakes. I don't want to say this is infantile but at least it is atavistic.

   So, my concern here is that the non-living, unnatural, artificial "life form" be seen as the contradiction in terms that it is. Artificial equals fake.

   That said, each of your points is valid. There is a point to seeing a country, a government, a company, a society, a club or gang as a collective, and to see a collective as an entity. These use the idea of an organism or living being as a metaphor for various (abstracted and extended and sometimes even distorted) "organizations." The use of extended metaphors goes far beyond their "organic" origin, and this can lead to confusion, often due to deliberate misdirection by those with an axe to grind and an interest in convincing people to support their pov.

    An organism is a collectivity of cells; a city or ant hill or bee hive is a collectivity on the order of a super-organism. A corporation is made, not born. A construction of non-living parts, a device specifically designed to exploit labor by skimming off the worker's earnings to form capital and create wealth. My daddy used to say of people that they "were not in business for their health," his way of approving of the profit motive. And, indeed, he spent his health in pursuit of wealth. Whereas, a "natural" organism or corpus is indeed in "business" for its health.

   Corporations "owe it to their shareholders" to pursue short-term gains at the expense of everything else, and they hire lawyers and suborn politicians to that end. It is in the corporate interest to enslave all sentient beings and force them to value their time in terms of "the highest, best use," which is to say the most money (an abstraction, not even wealth). The "invisible hand" (yeah, right - another talking doorknob) of the market calls the tune and most are caught up in a rat race where pellets of rat food are dispensed on the performance of ever more pointless and involved activities.

   Corporate ethics is an oxymoron. Even "ethical" corporations have to compete with the rest and thus use their methods, like the farmer who wants to preserve his land but has to deplete it and use chemicals because his neighbors do. 

   If corporations were living beings, they would be vicious, mean, uncooperative and selfish animals who need to be tamed, whipped and castrated to serve. The profit motive reflects the lowest common denominator, the desire for personal gain. The corporation has no other motive. Nothing requires the corporation to be a good citizen.

   I have no program to offer. I saw a ken burns documentary on huey long the other night. If he hadn't been assassinated he would have run for president and likely won in 1936, and if he had, the depression would have been over in 1937. And he would have been impeached in 1938, it was his style. Huey campaigned under a banner which said "redistribute our wealth." Wouldn't that be a breath of fresh air, a genuine populist wanting to elevate the little guy, not exploit him.

terry


EVERY MAN A KING
(huey long)

Why weep or slumber America
Land of brave and true
With castles and clothing and food for all
All belongs to you

Ev'ry man a king  ev'ry man a king
For you can be a millionaire
But there's something belonging to others
There's enough for all people to share
When it's sunny June and December too
Or in the winter time or spring
There'll be peace without end
Ev'ry neighbor a friend
With ev'ry man a king

RE: AI, Identity, and Consciousness
Answer
2/19/20 2:17 AM as a reply to terry.
I very much agree with you. Entity is definitely a better wording. I think it was the one I was looking for. Sometimes I write when I really should rest instead. Or meditate. 

And when a group of people behave as if they were an organism, that scares the hell out of me. 

RE: AI, Identity, and Consciousness
Answer
2/19/20 9:50 AM as a reply to Not two, not one.
curious:
Chris Marti:
Interesting fact - Asimov didn't invent the three laws of robotics.

And yes, terry, they are overly simple, prone to misinterpretation and able to be circumvented. I mean, all you gotta do is fool a robot:

https://io9.gizmodo.com/why-asimovs-three-laws-of-robotics-cant-protect-us-1553665410

Interesting, I didn't know that.  But at least Asimov explored them enough for us to be forewarned.

Also, the etymology is interesting "1920s: from Czech, from robota ‘forced labour’."

   rossum's universal robots, by the czech karel capek, or r. u. r., was a play written in the 20s, introducing the word 'robot'...it was in fact required reading at school...

   for centuries, after newton, europeans were obsessed with automata, human-like dolls with clockwork innards...

the clockwork universe...


for an interesting discussion of this sort of thinking, and "the game of life":

http://abyss.uoregon.edu/~js/ast123/lectures/lec05.html
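For anyone who hasn't met it, the "game of life" in that link is Conway's cellular automaton - a fitting emblem of the clockwork-universe idea, since rich behaviour unfolds from a couple of mechanical rules. A minimal sketch, representing the board as a set of live cells:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life.

    `live` is a set of (row, col) cells that are alive; every other
    cell is dead.
    """
    # Count how many live neighbours each candidate cell has.
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell lives next generation if it has exactly 3 live neighbours,
    # or exactly 2 and it is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = {(1, 0), (1, 1), (1, 2)}
```

Two applications of `step` bring the blinker back to where it started: deterministic clockwork, yet patterns emerge (gliders, oscillators) that the rules never mention.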

RE: AI, Identity, and Consciousness
Answer
2/20/20 8:58 AM as a reply to Sim.
I just read this article in Quanta Magazine:

https://www.quantamagazine.org/iyad-rahwan-is-the-anthropologist-of-artificial-intelligence-20190826/

The thesis being offered up in this article says that studying machine behavior is akin to studying animal behavior. I'm not sure I agree. What do you think?

RE: AI, Identity, and Consciousness
Answer
2/20/20 12:52 PM as a reply to Linda ”Polly Ester” Ö.
Linda ”Polly Ester” Ö:
I very much agree with you. Entity is definitely a better wording. I think it was the one I was looking for. Sometimes I write when I really should rest instead. Or meditate. 

And when a group of people behave as if they were an organism, that scares the hell out of me. 
 

    When people submerge their individuality they become a mob, and there is nothing more frightening.

    Personal greed is not the lowest common denominator after all.

t

RE: AI, Identity, and Consciousness
Answer
2/20/20 1:05 PM as a reply to Chris Marti.
Chris Marti:
I just read this article in Quanta Magazine:

https://www.quantamagazine.org/iyad-rahwan-is-the-anthropologist-of-artificial-intelligence-20190826/

The thesis being offered up in this article says that studying machine behavior is akin to studying animal behavior. I'm not sure I agree. What do you think?

   I found the article very hard to read, as I object to everything in it. It seems to me you have to have a mighty big sphincter to get your head that far up your colon.

   This scientist wants to reduce human behavior to that of a washing machine, and in typical scientistic fashion, rejects any human qualities that don't result in measurable behavior. 

   To refute every bogus statement would take a paper far longer than the original. If there is any particular idea you thought compelling, perhaps we could discuss that.

   Perhaps we shouldn't soil ourselves.


t

RE: AI, Identity, and Consciousness
Answer
2/20/20 1:28 PM as a reply to terry.
I think that a hidden aspect of this whole discussion is the contemplation of the deepest mysteries. The dharma unravels the mystery of how we exist, but not why. It unravels the mystery of the miraculous nature of experience, but doesn't tell us whether that belongs to us alone.

In one sense those other mysteries need not matter - the understanding of how we exist, and the dwelling in the continuously cresting wave of experience, is more than enough. But when people ask, as Sim has done, we might wonder a little. One reaction is to reject the question as totally missing the point of the miracle of human existence. Another is to ask whether the human experience can be extended to other spheres, and so become even more miraculous.

And as for that article - after this discussion I have reached the conclusion that to study 'animal'-like behaviour, one needs some kind of organism boundary, some goals, a feedback mechanism, and sensate experience and feelings that contribute to that feedback mechanism. The approach in the article seems to be missing most of those. It looks more like hydraulics than animal behaviour, to me.

Malcolm

RE: AI, Identity, and Consciousness
Answer
2/20/20 4:00 PM as a reply to terry.
terry:
Linda ”Polly Ester” Ö:
I very much agree with you. Entity is definitely a better wording. I think it was the one I was looking for. Sometimes I write when I really should rest instead. Or meditate. 

And when a group of people behave as if they were an organism, that scares the hell out of me. 
 

    When people submerge their individuality they become a mob, and there is nothing more frightening.

    Personal greed is not the lowest common denominator after all.

t

I wish that weren't as spot on as it is.

RE: AI, Identity, and Consciousness
Answer
2/20/20 4:23 PM as a reply to terry.
terry -

To refute every bogus statement would take a paper far longer than the original. If there is any particular idea you thought compelling, perhaps we could discuss that.

I didn't find anything in that article compelling. The premise is misguided... and that's me being very generous.


RE: AI, Identity, and Consciousness
Answer
2/20/20 4:26 PM as a reply to Not two, not one.
In one sense those other mysteries need not matter - the understanding of how we exist, and the dwelling in the continuously cresting wave of experience, is more than enough. But when people ask, as Sim has done, we might wonder a little. One reaction is to reject the question as totally missing the point of the miracle of human existence. Another is to ask whether the human experience can be extended to other spheres, and so become even more miraculous.

Malcolm, I'm sure you're saying something here. I'm just not smart enough to know what it is. Can you elaborate?

RE: AI, Identity, and Consciousness
Answer
2/20/20 5:08 PM as a reply to Chris Marti.
Chris Marti:
In one sense those other mysteries need not matter - the understanding of how we exist, and the dwelling in the continuously cresting wave of experience, is more than enough. But when people ask, as Sim has done, we might wonder a little. One reaction is to reject the question as totally missing the point of the miracle of human existence. Another is to ask whether the human experience can be extended to other spheres, and so become even more miraculous.

Malcolm, I'm sure you're saying something here. I'm just not smart enough to know what it is. Can you elaborate?

The stronger hypothesis is that I'm too dumb to explain it properly!

My experience (ymmv) is that the dharma does not address some religious mysteries. If anything it intensifies them. But it also makes them kind of irrelevant ... unless you choose to engage with that stuff (we all need something to do, right?).  One in particular is that of the origin of the universe, and by extension, the origin of the human experience of the universe. This is one of the four imponderables - the advice for meditators is: don't go there. Those who reach the end of the path get answers to some amazing mysteries - but not that one. They understand the how of human experience, but not the why.

When we consider something like AI, I think we skirt between (a) applying concepts such as the five aggregates and dependent origination to a problem of dharma, and (b) incautiously diving into the imponderable of the origin of the experience of the universe.  How we react to this fine boundary I think depends on our psychology (or our psychology at that present moment). If we are feeling expanded/mystic/realist, we will reject the premise of questions about AI as diving into an imponderable, and focus on the truth of our experience, which is kind of indivisible and not subject to reduction into names. If we are feeling analytic/concrete/nominalist, we may accept the premise, and interrogate our dharmic concepts to try to find some useful analysis. These two approaches are not objectively right or wrong (in my view), but right or wrong to the extent that they correspond with our state of mind.

I'm not sure if that is any better ...  it is of course an explanation of the second kind.  An explanation of the first kind would be more ... poetic. 

I strove for the edge
Hissing through the dark green waves   
But the world was round

emoticon

RE: AI, Identity, and Consciousness
Answer
2/20/20 5:12 PM as a reply to Not two, not one.
Both. I like both.

emoticon

RE: AI, Identity, and Consciousness
Answer
2/20/20 5:24 PM as a reply to Chris Marti.
We can embrace uncertainty and nuance. We can play with different perspectives on spirituality and science. We aren't bound by either, and they speak to and from different domains. Each has corners the other can't reach. They're both beautiful, useful and true.

RE: AI, Identity, and Consciousness
Answer
2/20/20 5:26 PM as a reply to Chris Marti.
Chris Marti:
We can embrace uncertainty and nuance. We can play with different perspectives on spirituality and science. We aren't bound by either, and they speak to and from different domains. Each has corners the other can't reach. They're both beautiful, useful and true.
+1 

emoticon

RE: AI, Identity, and Consciousness
Answer
2/21/20 12:38 AM as a reply to Chris Marti.
Chris Marti:
terry -

To refute every bogus statement would take a paper far longer than the original. If there is any particular idea you thought compelling, perhaps we could discuss that.

I didn't find anything in that article compelling. The premise is misguided.... and that's me being very generous.



(sigh of relief)


:-)

RE: AI, Identity, and Consciousness
Answer
2/21/20 12:57 AM as a reply to Not two, not one.
curious:
Chris Marti:
In one sense those other mysteries need not matter - the understanding of how we exist, and the dwelling in the continuously cresting wave of experience, is more than enough. But when people ask, as Sim has done, we might wonder a little. One reaction is to reject the question as totally missing the point of the miracle of human existence. Another is to ask whether the human experience can be extended to other spheres, and so become even more miraculous.

Malcolm, I'm sure you're saying something here. I'm just not smart enough to know what it is. Can you elaborate?

The stronger hypothesis is that I'm too dumb to explain it properly!

My experience (ymmv) is that the dharma does not address some religious mysteries. If anything it intensifies them. But it also makes them kind of irrelevant ... unless you choose to engage with that stuff (we all need something to do, right?).  One in particular is that of the origin of the universe, and by extension, the origin of the human experience of the universe. This is one of the four imponderables - the advice for meditators is: don't go there. Those who reach the end of the path get answers to some amazing mysteries - but not that one. They understand the how of human experience, but not the why.

When we consider something like AI, I think we skirt between (a) applying concepts such as the five aggregates and dependent origination to a problem of dharma, and (b) incautiously diving into the imponderable of the origin of the experience of the universe.  How we react to this fine boundary I think depends on our psychology (or our psychology at that present moment). If we are feeling expanded/mystic/realist, we will reject the premise of questions about AI as diving into an imponderable, and focus on the truth of our experience, which is kind of indivisible and not subject to reduction into names. If we are feeling analytic/concrete/nominalist, we may accept the premise, and interrogate our dharmic concepts to try to find some useful analysis. These two approaches are not objectively right or wrong (in my view), but right or wrong to the extent that they correspond with our state of mind.

I'm not sure if that is any better ...  it is of course an explanation of the second kind.  An explanation of the first kind would be more ... poetic. 

I strove for the edge
Hissing through the dark green waves   
But the world was round

emoticon


aloha malcolm,

   One approach to the dharma is to see it as a metaphysics, an explanation of what is and guidance on the path to wisdom. Another approach is to see the dharma as beyond all ideas, all intellectual knowledge, neither being nor not being, unnameable and indefinable, but we call it "dharma" or "tao" in dialogue to point a finger beyond.

William James used the analogy of paint to describe the nature of consciousness. If you allow paint to settle long enough, it will separate into a clear menstruum and an opaque pigment. Think of awareness as painting the void in sensory colors. Think of meditation as turpentine.

terry



"If the doors of perception were cleansed, every thing would appear to man as it is, Infinite."

~william blake

RE: AI, Identity, and Consciousness
Answer
2/21/20 1:13 AM as a reply to Chris Marti.
Chris Marti:
We can embrace uncertainty and nuance. We can play with different perspectives on spirituality and science. We aren't bound by either, and they speak to and from different domains. Each has corners the other can't reach. They're both beautiful, useful and true.


and inherently unsatisfactory...

(wink)

RE: AI, Identity, and Consciousness
Answer
2/21/20 6:45 AM as a reply to terry.
(sigh of relief)

I posted the article primarily because it's an example of how out over their skis some folks get for the latest shiny thing. I've noticed this level of uber-exuberance several times during my career in technology and right now it's happening with AI. There are legitimate researchers and technologists in the field and then there are those who are exploiting the press and public excitement for AI, and of course, everyone in between. That article is very much in the second category.

RE: AI, Identity, and Consciousness
Answer
2/21/20 11:42 AM as a reply to Chris Marti.
Chris Marti:
We can embrace uncertainty and nuance. We can play with different perspectives on spirituality and science. We aren't bound by either, and they speak to and from different domains. Each has corners the other can't reach. They're both beautiful, useful and true.


one might also say, one is beautiful, the other is useful, and neither are true...

RE: AI, Identity, and Consciousness
Answer
2/21/20 12:04 PM as a reply to terry.
terry:
Chris Marti:
We can embrace uncertainty and nuance. We can play with different perspectives on spirituality and science. We aren't bound by either, and they speak to and from different domains. Each has corners the other can't reach. They're both beautiful, useful and true.


one might also say, one is beautiful, the other is useful, and neither are true...


in an ideal world, if all sentient beings were liberated, no one would have any use for either science or religion...

ttc/mitchell


19.

Throw away holiness and wisdom, 
and people will be a hundred times happier. 
Throw away morality and justice, 
and people will do the right thing. 
Throw away industry and profit, 
and there won't be any thieves.
If these three aren't enough, 
just stay at the center of the circle 
and let all things take their course. 

RE: AI, Identity, and Consciousness
Answer
2/21/20 12:59 PM as a reply to terry.
in an ideal world, if all sentient beings were liberated, no one would have any use for either science or religion...

Meantime, in this world...

RE: AI, Identity, and Consciousness
Answer
2/22/20 3:24 AM as a reply to Chris Marti.
Chris Marti:
in an ideal world, if all sentient beings were liberated, no one would have any use for either science or religion...

Meantime, in this world...

from "the Compass of Zen" by Seung-Sahn

Man of Dok Seung Mountain
Hwa Gye Sah Temple
Sam Gak Mountain
Seoul, South Korea
1 August 1997

When an animal is hungry, it eats. When it is tired, it sleeps. That is very simple! But human beings eat, and are still not satisfied. Even though their stomach is full, they still go outside and do many bad actions to this world: they kill animals for fun or decoration. Somebody catches fish for sport. Then everybody is clapping: “Ah, wonderful!” They laugh and smile and shake hands. The humans are very happy, patting each other on the back and taking pictures. It is considered to be a very successful day. But look at this fish. He is not happy, you know? The fish is flapping around—he’s suffering! “Where is water? Where is water? Please, I want water!” They are laughing, and the fish is suffering and dying, right in front of them. Most humans cannot connect with the great suffering which is right in their midst. This kind of mind is quite usual nowadays. And that is not wonderful. The mind which lives like this has no compassion for the suffering of this world.

Human beings also kill animals not just for food. They take the animals’ skin to make shoes and hats and clothes. And even that is not enough. They take these animals’ bones to make necklaces or buttons or earrings. In short, they kill many, many animals in order to sell the animal parts for money. Because of these desires and this strong animal consciousness, human beings fight with each other, and destroy nature. They do not value life. So now this whole world has many problems: problems with the water, problems with the air, problems with the earth and food. Many new problems appear every day. These problems do not happen by accident. Human beings make each and every one of these problems. Dogs, or cats, or lions, or snakes—no animal makes as many problems for this world as human beings do. Humans do not understand their true nature, so they use their thinking and desire to create so much suffering for this world. That is why some people say that human beings are the number one bad animal in this world. Some religious traditions call this kind of situation the “end of this world.”

That is only the end of the current human consciousness. Original human nature does not have this problem. In Buddhist teaching, rather than call this the “end of the world,” we say that everything is now completely ripe. It is like a fruit growing on a tree. At first a blossom appears on the branch. As time passes this blossom produces a small bud, and the blossom drops away. The bud gradually matures into a fruit which swells and swells as time passes. The fruit is green at first, but over time the side facing the sun starts to turn a beautiful color. At this point, only one side is colored, while the other is still greenish. More time passes, and the whole fruit is now a very wonderful color. The fruit may have a beautiful form and beautiful color, but there is still no smell, because the fruit is not yet ripe inside. As a little more time passes, however, the fruit becomes completely ripe.

Up until this point, it has taken a long time for this to develop from blossom to bud and fruit. For many months all of the tree’s energy has gone up from the roots and down from the leaves gathering energy from the sun, and has gone into producing the fruit. This process has taken place over a long period, up to a year of change and growth in the tree and blossom and bud and fruit.

But now the energy flowing from the tree into the fruit is cut off. From this point on, once the fruit has become completely ripe, the changes in the fruit start to happen very, very quickly, just in a matter of a few days. Its form is not so good anymore, and its color is also not so good. But inside there is a very, very sweet taste and the fruit begins to smell very strong. It is beginning to be overripe. Soon a few spots appear on the fruit, tiny black dots indicating that the fruit is “turning.” After a few more days, there are many, many spots on the fruit. Once these spots appear on the fruit, the process of rotting cannot be slowed or stopped. The fruit becomes rotten just a few days after becoming ripe. When it becomes rotten, it cannot be eaten. But inside, this fruit has seeds. When the fruit has become completely rotten the seeds reach maturity.

The current situation in the world is like this fruit. Many, many centuries of human development made this fruit. For a long time, this single blossom was only belief in some God or outside power. Then the fruit appeared and developed. But only one side ripened at first; only one side had a good color and good taste, while the other did not. This was the emergence of capitalism and communism in this world. Then recently the changes in this fruit have started to happen very quickly. Communism disappeared, and now the whole fruit is the same color. The fruit has become ripe, and has just one color and taste now: money. Nowadays, there is no longer any ideology for separate belief. This whole world only wants money, and everyone’s energy is going very strongly in that direction. This world has no true way—there is only the taste of money. Already many rotten spots have appeared: places like the Middle East, Rwanda, Yugoslavia, North Korea, even now in America, Russia, China, and Japan. Since the communist world broke apart, we see the emergence of many smaller groups and nations, all fighting each other. There is also the spread of many private armies and the routine buying and selling of weapons of mass destruction.

So this fruit has grown over a long period of time. But once it becomes ripe, it rots very, very quickly. When any fruit becomes rotten it cannot be eaten. However, inside this fruit there are seeds. These seeds are now ready: they can do anything.

So human beings must soon wake up and find their original seeds, their original nature. But how do we return to our original nature? Around twenty-five hundred years ago, the Buddha had a very good situation. He was a prince named Siddhartha, and had everything he wanted. But he didn’t understand himself. “What am I? I don’t know.” In those times in India, the Brahman religion of Hinduism was the main religion. But Brahmanism could not give him the correct answer to his questions about the true nature of life and death. So Siddhartha abandoned the palace, went to the mountains, and practiced various spiritual austerities for six years. He found the Middle Way between self-indulgence and extreme asceticism. Early one morning, while meditating under the Bodhi tree, he saw a star in the eastern sky. At that moment—boom!—the young prince Siddhartha got enlightenment. He woke up, and became a buddha. He attained I. This means that the Buddha attained the true nature of a human being without depending on some outside force or religion or god. That is the Buddha’s teaching.

Nearly every single human being living in the world today does not understand what they are. The Buddha was simply a man of great determination and try-mind who taught us the importance of resolving just that point. Before you die, you must attain your direction. You must attain what you are. If you read many sutras, or chant to Amitaba, that will help you somewhat. That is not good, not bad. But why do you read many sutras? Why do you try this Amitaba Buddha chanting? So it is very important to make your direction clear. The Buddha taught us the importance of finding our true direction in this life. “What am I? I must attain my true self—this is the most important thing I can do.” If you attain your true self, then you attain your correct way. When you are born, where do you come from? When you die, where do you go? Just now, what is your correct job? Everybody understands this body’s job. Some people have lawyer jobs, or doctor jobs, or truck-driver jobs, or nurse jobs, or student jobs, or husband jobs, or wife jobs, or child jobs. These are our bodies’ jobs, our outside jobs. But what is your true self’s job? The Buddha taught us that for life after life after life, we must walk the Great Bodhisattva Way and save all beings from suffering. In order to save all beings, it is very important that you first save yourself. If you cannot save yourself, how can you possibly save other people? So this is why we must attain our true self, our true nature. This true nature cannot be found in books or conceptual thought. A Ph.D., no matter how wonderful, cannot match the power of even one moment of clear insight into our own true original nature. And the most direct path to that experience is meditation. That is a very important point.

Correct meditation means understanding my true self. The path of this begins and ends by asking, “What am I?” It is very simple teaching, and not special. When you ask this question very deeply, what appears is only “don’t know.” All thinking is completely cut off, and you return to your before-thinking mind. If you attain this don’t-know, you have already attained your true self. You have returned to your original nature, which is mind before thinking arises. In this way you can attain your correct way, and you attain truth, and your life functions correctly to save all beings from suffering. The name for that is “wake up.” That is the experience of true meditation.

RE: AI, Identity, and Consciousness
Answer
2/22/20 4:54 PM as a reply to terry.
terry:
curious:
Chris Marti:

   I wanted to get a bumper sticker that says, "jewelers do it with polish" but now I am thinking, "buddhists do it with compassion."

terry

   I did get a bumper sticker, yesterday, in the local garden shop/ head shop/ private post office and sundries store in ocean view. I added it to the two I have on my 4wd tacoma pickup, "live aloha" and "wag more, bark less."

   It reads: "surrounding yourself with molten lava helps keep you calm and centered."

   Counterintuitive, I know.


t

RE: AI, Identity, and Consciousness
Answer
2/24/20 11:50 AM as a reply to terry.
terry:
It reads: "surrounding yourself with molten lava helps keep you calm and centered."
Counterintuitive, I know.

Somehow I feel there is actual substance behind this sentence

RE: AI, Identity, and Consciousness
Answer
3/2/20 2:47 PM as a reply to Ni Nurta.
Ni Nurta:
terry:
It reads: "surrounding yourself with molten lava helps keep you calm and centered."
Counterintuitive, I know.

Somehow I feel there is actual substance behind this sentence

   Solid rock. Incandescent.

My cabin sits on a lava field. When my nephew-in-law visited last month he asked me if the reason my buildings were on posts and piers was so the flowing magma could pass underneath and not damage the structure.

Kilauea has been quiet lately, for the first time since the 70s; it destroyed hundreds of houses last year. Some of the land here in Ka'u has been inundated several times in human memory.

   I can see clearly now.

t