AI, Identity, and Consciousness

Sim, modified 1 Year ago.

AI, Identity, and Consciousness

Posts: 15 Join Date: 3/5/19 Recent Posts
Hello all! I'm grateful that this community exists.

I follow the topic of artificial intelligence (AI) closely, because my work could make a big difference in how AI affects people in the future. Over the last couple years, discussions of AI have increasingly focused on topics informed by insight and morality. For example:

AI Alignment Podcast: Identity and the AI Revolution
Buddhist views about self and suffering are referenced over a dozen times in this episode, but I had a hard time following. Encouragingly, it ends with a host saying: "Simply, I hope we have a Buddhist AI," to which the other host agrees. If anyone here listens to that discussion and can provide his or her views on the matter, please do.

Another example: Artificial Intelligence Podcast: The Hard Problem of Consciousness

A lot of this points to the question: If superintelligent AI is built, what should it be programmed to do? Hopefully, if and when an answer is needed, the world's thought leaders on morality and the nature of experience will be consulted. A dream would be for Sam Harris to interview (or debate?) Daniel Ingram on this topic. Recommendations for other sources to learn from would be much appreciated.
Chris Marti, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 3759 Join Date: 1/26/13 Recent Posts
This is fun stuff to talk about but the reality is farther away than the popular literature would have us believe. The most effective AI is single-purpose in nature, based on neural networks and deep learning principles. Where's the super-intelligent artificial general intelligence (AGI)? Nowhere to be found. Without AGI, ruminating over computer-based Buddhism is kind of like agonizing over the koan Mu, as in, "Does a computer have Buddha-nature?"
Sim, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 15 Join Date: 3/5/19 Recent Posts
Chris, thanks for the thoughtful reply. Agreed that we're probably at least decades away from AGI. Timelines are very hard to predict, so we won't know when it's the ideal time to start thinking about how to nudge the trajectory. But if ruminating over computer-based Buddhism would ever be productive, it would be before (rather than after) AGI is developed, so as to inform its goals. The norms and standards we see forming around today's applications of AI may set the stage for future advances. I'm glad that leading AI firms like OpenAI and DeepMind are taking ethics seriously; hopefully they're on the right track.
Chris Marti, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 3759 Join Date: 1/26/13 Recent Posts
I'm not sure we'll ever see truly conscious superintelligent AGI.

But, while some entities are ruminating over how to create morally astute AGI (whatever that is), someone else is out there figuring out how to get rich by selling it to someone's military. Once a technology becomes a reality there's no turning back.
Chris Marti, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 3759 Join Date: 1/26/13 Recent Posts
 If superintelligent AI is built, what should it be programmed to do?

Shouldn't it be programmed to be super-intelligent? 
Milo, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 365 Join Date: 11/13/18 Recent Posts
How not to ignore the context of human evolutionary history when interpreting its directive, lest it turn the Earth into paperclips.

Of course, as Chris said, it doesn't seem like we'll need to worry about AGI any time soon. We do have the unintended consequences of social media algorithms to deal with for now.
Sim, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 15 Join Date: 3/5/19 Recent Posts
Milo:
We do have the unintended consequences of social media algorithms to deal with for now.

Indeed. Here's 90 seconds of Stuart Russell, author of the leading AI textbook, using that example to answer Chris's question above:

Chris Marti:
Shouldn't it be programmed to be super-intelligent?
Stuart Russell:
If we don't specify the objective correctly, then the consequences can be arbitrarily bad.
Chris Marti, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 3759 Join Date: 1/26/13 Recent Posts
If we don't specify the objective correctly, then the consequences can be arbitrarily bad.

I'm sorry, but this is classic technologist-speak. Problem is, programmers are human beings who aren't clairvoyant and either don't know the consequences of their code or don't care. Even worse, there are some companies that deliberately create code that is disruptive to what most of us would call societal norms.

Question: How does one go about specifying the correct objective(s) to code so as to eliminate negative consequences? There is also a meta-question - how does one decide if the objective of the code is, itself, appropriate and without negative consequences? Who decides?

I think humans will continue to invent and deploy technology, including AI, and some good and bad sh*t will be the result even with the best of intentions. It will be left to all of us, the willing, semi-willing and unwilling experimental participants to deal with the consequences, both good and bad.
Chris Marti, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 3759 Join Date: 1/26/13 Recent Posts
Sorry - another thought occurred to me:

The creators of AI are not all responsible, caring, ethical beings. Some people will mine the technology for money, profit and other self-interested objectives. And, as the technology becomes ubiquitous and the ability to create AI code becomes more and more available, all kinds of people (young, old, well-meaning and "evil") will have access to it. Like with CRISPR, we don't know the consequences. This is what Ray Kurzweil calls an exponential technology and it grows slowly at first, then with increasing acceleration (not just speed, but acceleration, as in not linear).

I suspect AI is already beyond the point at which it's controllable, but then that's where these things seem, inevitably, to go. May as well acknowledge the problem and prep for the unintended consequences.
Sim, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 15 Join Date: 3/5/19 Recent Posts
Chris Marti:
I suspect AI is already beyond the point at which it's controllable, but then that's where these things seem, inevitably, to go. May as well acknowledge the problem and prep for the unintended consequences.

My impression is that humanity has a thumb lightly resting on the steering wheel, thanks to a very surprising amount of recent effort and resources dedicated toward anticipating the otherwise inevitable trajectory you suspect. This could be a timeline where we thread the needle and the long-term future features intended consequences (for a change).

Chris Marti:
Question: How does one go about specifying the correct objective(s) to code so as to eliminate negative consequences? There is also a meta-question - how does one decide if the objective of the code is, itself, appropriate and without negative consequences? Who decides?

Exactly! Difficult questions, but nothing is inherently mysterious, so some effort toward answering them could be worthwhile. Incidentally, the technologist-speak you quoted from Stuart Russell is unpacked at length in his book Human Compatible: Artificial Intelligence and the Problem of Control. I believe the author would largely agree with the perspective you've expressed.
Chris Marti, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 3759 Join Date: 1/26/13 Recent Posts
This could be a timeline where we thread the needle and the long-term future features intended consequences (for a change).

That's never happened before - why this, and why now?
Sim, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 15 Join Date: 3/5/19 Recent Posts
Chris Marti:
This could be a timeline where we thread the needle and the long-term future features intended consequences (for a change).

That's never happened before - why this, and why now?


I think I follow you -- this is big picture stuff. There are strong empirical and anthropic reasons why we should expect "succeeding" in this arena to be exceedingly difficult to specify, let alone achieve. It would be unprecedented. On the other hand, unprecedented things happen from time to time. Evidently, it was possible for life to spring into existence on Earth, and evolve up to where we are now. The Buddha did his thing. The future could hold complete extinction, flourishing experience without suffering, or something in between, better or worse. We may be unable to determine where this moment is in relation to the Great Filter. But if we grant that the most optimistic vision for the future is possible in principle, I see no reason in practice why this current state cannot lead there through the familiar process of cause and effect. That causal chain would necessarily feature a great deal of wise intentions, work, reason, and perhaps what some would consider faith, or extraordinary luck. Even if we don't predict a bright future to be likely, I believe there's some non-zero optimal level of effort we can exert to increase the probability.
Chris Marti, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 3759 Join Date: 1/26/13 Recent Posts
Oh, I'm all for trying to do what we can do to reduce the negative effects of AI technology, intended and unintended. I doubt, however, that its creators, purveyors and fans, the vast majority of whom appear to see AI through rose-colored glasses, are the best way to get that done. This is like risk management and auditing - you need a group of people who know how to find flaws, don't trust anyone, and have the authority to do their jobs unbound in any way by the other group. Industries tend to quash those kinds of efforts, though.
curious, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 892 Join Date: 7/13/17 Recent Posts
Chris Marti:
Oh, I'm all for trying to do what we can do to reduce the negative effects of AI technology, intended and unintended. I doubt, however, that its creators, purveyors and fans, the vast majority of whom appear to see AI through rose-colored glasses, are the best way to get that done. This is like risk management and auditing - you need a group of people who know how to find flaws, don't trust anyone, and have the authority to do their jobs unbound in any way by the other group. Industries tend to quash those kinds of efforts, though.

So to me, it is kind of obvious that the only way to control AI will be with other AI. And if you follow that thought through a little, it is pretty clear that a whole AI ecosystem will quickly develop and evolve on its own. We will lose control very quickly, unless there is a tech collapse that stops the process.

All we can do is try to kick off the process skilfully.  We have an opportunity to design the AI equivalent of RNA, DNA, mitochondria and simple invertebrates. If we do it well, AI evolution will maintain good qualities and drive out bad qualities. Future 'vertebrate' AI might even be grateful.  If we do it with compassion, future AI might even be compassionate towards us.
Chris Marti, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 3759 Join Date: 1/26/13 Recent Posts
 it is kind of obvious that only way to control AI will be with other AI.

That's not obvious at all to me. It might be more like putting the fox in charge of the henhouse.

All we can do is try to kick off the process skilfully.  We have an opportunity to design the AI equivalent of RNA, DNA, mitochondria and simple invertebrates. If we do it well, AI evolution will maintain good qualities and drive out bad qualities. Future 'vertebrate' AI might even be grateful.  If we do it with compassion, future AI might even be compassionate towards us.

This is not how evolutionary processes work in biology so I think it's a giant leap of faith to assume that's the way they will work with software. In fact, we have evidence, and lots of it, that it actually doesn't work that way. Technologists generally have good intentions when they create these things, but we know what the road to Hell is paved with.



Chris Marti, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 3759 Join Date: 1/26/13 Recent Posts
This conversation is depressing. Not because it's about potential killer AI taking over, but because there seems to me to be an uncanny tendency for people to endow AI with human attributes and its creators with super-human capabilities never before experienced in human history.

Semi - emoticon
Sim, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 15 Join Date: 3/5/19 Recent Posts
Chris Marti:
[...] there seems to me to be an uncanny tendency for people to endow AI with human attributes and its creators with super-human capabilities never before experienced in human history.

Point well taken. Any progress will rely on merely human qualities, rather than superhuman ones. So to the extent plans depend on the latter, this is crucial to notice and fix. I appreciate you pushing on this. And yes, anthropomorphizing everything tends to be a common error, worst of all when reasoning about AI. Malcolm raised the idea of
powerful AI as potential moral agents

I consider the emphasis to be on "potential." If we're sure it's impossible to create a moral agent, there's no accident to guard against here. But if we don't have proof or consensus about that, we can consider ways to reduce the likelihood that we create moral agents and/or the degree to which they are such. On the flip side, if we believe that we are creating moral agents or expect to in the future, we have a serious obligation to consider their welfare. A Roomba vacuum that cries out in pain when kicked should give us pause, but maybe we don't need to kick it in the first place. Even for people believing themselves to be infertile, it's okay to use prophylactics... but observing pregnancy or a new person entails additional duties.
curious, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 892 Join Date: 7/13/17 Recent Posts
Sim:
Chris Marti:
[...] there seems to me to be an uncanny tendency for people to endow AI with human attributes and its creators with super-human capabilities never before experienced in human history.

Point well taken. Any progress will rely on merely human qualities, rather than superhuman ones. So to the extent plans depend on the latter, this is crucial to notice and fix. I appreciate you pushing on this. And yes, anthropomorphizing everything tends to be a common error, worst of all when reasoning about AI. Malcolm raised the idea of
powerful AI as potential moral agents

I consider the emphasis to be on "potential." If we're sure it's impossible to create a moral agent, there's no accident to guard against here. But if we don't have proof or consensus about that, we can consider ways to reduce the likelihood that we create moral agents and/or the degree to which they are such. On the flip side, if we believe that we are creating moral agents or expect to in the future, we have a serious obligation to consider their welfare. A Roomba vacuum that cries out in pain when kicked should give us pause, but maybe we don't need to kick it in the first place. Even for people believing themselves to be infertile, it's okay to use prophylactics... but observing pregnancy or a new person entails additional duties.

And, I would just add that, for me, the dharma has deconstructed the mystery around moral agency.  By understanding how the moral being comes into existence, we know that there is nothing magical about a moral being having human DNA.  Anything that has the five aggregates and the chain of dependent origination (or some version of them) has the potential for moral agency.

So if we are to be confident in saying AI has no moral agency, I would suggest we also need to be confident in saying why humans have moral agency.  And we need to have answers to questions such as whether gorillas or dogs or snakes have moral agency, or whether future generations yet unborn have moral agency, or whether an aggregate of people in a country has a collective moral agency.
Chris Marti, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 3759 Join Date: 1/26/13 Recent Posts
So if we are to be confident in saying AI has no moral agency...

I am not ready to concede this. If we define moral agency as the ability to make decisions and base subsequent actions on rules about behavior then the autopilot software in a Tesla has moral agency. The rules may or may not be self-generated. Most rules are not, even among humans.
curious, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 892 Join Date: 7/13/17 Recent Posts
Chris Marti:
So if we are to be confident in saying AI has no moral agency...

I am not ready to concede this. If we define moral agency as the ability to make decisions and base subsequent actions on rules about behavior then the autopilot software in a Tesla has moral agency. The rules may or may not be self-generated. Most rules are not, even among humans.

First up, I'm really enjoying everyone's comments.  New plot twists - heh. Nice one Linda.  But I would push back a little on the superhuman abilities, noting again the suggestion of adding recursive layers to working memory.  Human, yes, but super-human.

But anyway, Chris, you are right.  A Tesla on autopilot has moral agency.  That makes me realise I conflated moral agency and being worthy of moral consideration. They are two different things.

So anything with sufficient independence of action and seriousness of consequences does have moral agency, as you point out. Maybe moral consideration comes into play when a being has the capacity to suffer, and to be aware of that suffering (I am not convinced, for example, that fish are aware of their suffering).  I suspect suffering is an almost inevitable part of the process of goal setting, feedback, and learning that guides our intelligence.

And yet ... when we become awake we can be intelligent with intention, effort, feedback, and development of skilful qualities without the same kind of suffering.  So could we design an AI that is awake from the start?

Then when we kick the Roomba, instead of squealing it could say "name and form, name and form, it's nothing but name and form"  emoticon  <Sorry Sim, in joke> 
Chris Marti, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 3759 Join Date: 1/26/13 Recent Posts
Before we go further, curious, what does this mean?

To be safe, AIs must become human.
curious, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 892 Join Date: 7/13/17 Recent Posts
Chris Marti:
Before we go further, curious, what does this mean?

To be safe, AIs must become human.

I mean that AI needs goal setting, decision making and feedback mechanisms, as well as incentives to follow these mechanisms, and the ability to change goals and decision making processes as a result of learning.  I think that this in turn implies sankharas, vinnana, namarupa, salayatana, phassa, vedana, tanha, upadana, bhava and jati. (I am using Pali terms as I haven't replied to Matthew on the DO thread yet.)

In IT terms we could say something like

- Programming
- Awareness of self and other as separate things
- Presence of concepts in memory 
- Mechanisms for data provision (e.g. video, sound, haptics, keyboards, data streams)
- Recognition of data as concepts in memory (e.g. facial recognition, voice recognition, thematic analysis)
- Evaluation of current input as goal positive, goal negative, or goal neutral
- Evaluation of generalised concept as desirable or otherwise
- Formation of intention to more frequently achieve generalised concept
- Formation of intention to modify programming to more frequently achieve this desired state 
- Reprogramming
- Rebirth as newly reprogrammed AI
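
As a toy sketch, the same loop might look like this in Python. This is purely illustrative - every name is invented and it compresses several of the steps above - but it shows the shape of the perceive-evaluate-intend-reprogram cycle:

```python
from dataclasses import dataclass, field

# A toy sketch of the loop above, with invented names throughout.
@dataclass
class ToyAgent:
    goals: dict                                   # "programming": concept -> desirability
    memory: dict = field(default_factory=dict)    # learned evaluations of concepts
    learning_rate: float = 0.1

    def perceive(self, raw_input: str) -> str:
        # Recognition of data as a concept in memory (stands in for
        # facial/voice recognition etc.); here, trivially, the input itself.
        return raw_input

    def evaluate(self, concept: str) -> float:
        # Evaluation of current input as goal positive, negative, or neutral.
        return self.goals.get(concept, 0.0)

    def update(self, concept: str, outcome: float) -> None:
        # Formation of intention: nudge the stored evaluation of this
        # concept toward the outcome just experienced.
        old = self.memory.get(concept, 0.0)
        self.memory[concept] = old + self.learning_rate * (outcome - old)

    def reprogram(self) -> "ToyAgent":
        # Rebirth as a newly reprogrammed AI: a new agent whose goals
        # incorporate what the old one learned.
        return ToyAgent(goals={**self.goals, **self.memory},
                        learning_rate=self.learning_rate)

agent = ToyAgent(goals={"helpfulness": 1.0, "harm": -1.0})
concept = agent.perceive("helpfulness")
agent.update(concept, outcome=agent.evaluate(concept))
agent = agent.reprogram()   # the next life of the agent
```

Not a workable design, obviously - just the twelve steps squeezed into a feedback loop.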
Linda ”Polly Ester” Ö, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 5293 Join Date: 12/8/18 Recent Posts
Maybe we should add therapy to the mix? After all, waking up and growing up are not the same thing.
curious, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 892 Join Date: 7/13/17 Recent Posts
Linda ”Polly Ester” Ö:
Maybe we should add therapy to the mix? After all, waking up and growing up are not the same thing.

Totally agree.  We have to improve this bit:

- Evaluation of current input as goal positive, goal negative, or goal neutral
- Evaluation of generalised concept as desirable or otherwise
- Formation of intention to more frequently achieve generalised concept

So we have to get the goals and evaluations, and generalised concepts and degree of striving right.  It's a whole big thing on its own.
Sim, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 15 Join Date: 3/5/19 Recent Posts
Great ideas flowing here. As we're in the Podcasts category, here's another recent episode that ties in. There's also a transcript at this link, in case you'd like to skim.

FLI Podcast: On Consciousness, Morality, Effective Altruism & Myth


They get straight into it at 3:14:
Yuval Noah Harari:
I think that there is no morality without consciousness and without subjective experiences. At least for me, this is very, very obvious. One of my concerns, again, if I think about the potential rise of AI, is that AI will be superintelligent but completely non-conscious, which is something that we never had to deal with before.
...
Max Tegmark:
I’m embarrassed as a scientist that we actually don’t know for sure which kinds of information processing are conscious and which are not.
...
I’m very nervous whenever we humans make these very self serving arguments saying: don’t worry about the slaves. It’s okay. They don’t feel, they don’t have a soul, they won’t suffer or women don’t have a soul or animals can’t suffer. I’m very nervous that we’re going to make the same mistake with machines just because it’s so convenient. When I feel the honest truth is, yeah, maybe future superintelligent machines won’t have any experience, but maybe they will. And I think we really have a moral imperative there to do the science to answer that question because otherwise we might be creating enormous amounts of suffering that we don’t even know exists.
curious, modified 1 Year ago.

RE: AI, Identity, and Consciousness (Answer)

Posts: 892 Join Date: 7/13/17 Recent Posts
Hey Sim, I've spent a couple of hours looking at the several sets of resources, so here are some responses.

First, while I enjoyed much of the discussion and appreciate and agree with many points made, I do take issue with some ideas.

1. I strongly disagree with the idea that we are sentient malware. Let me offer a competing frame: we are actually localised entropy reversal, and represent the emergence of structured complexity from primal chaos. Those are awesome and amazing things, and are to be treasured. Or if you are a Babylon 5 fan, you might say we are the Universe's way of looking back at itself.

2. Second, the hedonic maximisation project is deeply flawed. Buddhist thinking points out that it is a self-defeating process, as it inevitably leads to more suffering. The point is not to have more, or to have less, but to stop clinging to things and being inflamed with desires. Striving after hedons has an incredibly low chance of a successful outcome, as is constantly empirically demonstrated by poorly functioning addicts across many different types of behaviours. Misery is built into the striving process.

3. Third, the 'off switch' approach to Buddhism is subject to some doctrinal disputes.  My own view is that we do want the 'off switch' for the erroneous perception of a separate enduring self, but we absolutely want the 'on switch' for the continuously cresting miraculous process of observation supported by the body. That 'on switch' leads to unbelievable numbers of hedons, but only if you don't crave after them! Keep the universe going, please.

4. Fourth, the various philosophical versions of the self miss a key point, to me. The self we perceive is a conceptual overlay from the mind, laid across sense data. There is no ontological self, as Richard Feynman points out, due to the porous nature of the body boundaries and many other factors. The perception of self is only a conceptual self - nothing more, nothing less.

5. Fifth, from a Buddhist point of view, the goal is to see through the illusory conceptual self (noun), and appreciate the nature of our real experience as a perceptual process (verb).  Some seek to merge this perceptual process with a sense of universal consciousness, as discussed in the first podcast, but others see such a merger as maintaining a subtle sense of self, and prefer to exist in the processes of the six sense consciousnesses without clinging to any noun.

Ok, got that off my chest.  Next post will address the hard problem of consciousness.

Malcolm
curious, modified 1 Year ago.

RE: AI, Identity, and Consciousness (Answer)

Posts: 892 Join Date: 7/13/17 Recent Posts
Here is my answer to the hard problem of consciousness.

Now, I didn't listen to the David Chalmers podcast but I am familiar with who he is and with his epistemological books, and I understand the hard problem of consciousness to be the question of "why does the feeling which accompanies awareness of sensory information exist at all?" or "why do we have qualia"?  And perhaps the implied question - what is the nature of our felt sense of consciousness, as opposed to the qualia of fear or happiness and so on?

So my answer is (trivially) that the felt sense of consciousness has an evolutionary advantage. What is this evolutionary advantage, you might ask?

First, there are autonomic processes.  At a simple level a cell recoils from a probe, a light beam opens a door when occluded, opening to water arises when an organism is dehydrated and there is rain, or a thermostat turns off an air conditioner at a certain temperature. Plants grow towards the sun. There is no consciousness in these actions and no qualia, and no moral consideration, except as part of a broader ecosystem.

Second, some organisms gain an evolutionary advantage by becoming capable of generalised problem solving to restore homeostasis or respond to drives, such as reproduction, territory control, and food consumption. But there is no memory of the outcomes of these actions. It's all hardware and firmware. So the organisms will try to solve problems the same way every time, unless there are subtle differences in the environment. There is no consciousness in these actions, but there may be qualia (thirst, desire, aggression, colour perception). These qualia are just mental processing of physical states (much like Damasio's view of emotions). Therefore, the existence of qualia does not imply the existence of consciousness. There is no moral consideration except as part of a broader ecosystem, as the organisms do not suffer, despite the presence of qualia. We can eat them! Yum yum.

Third, organisms gain an evolutionary advantage by developing the additional ability to learn from experience. This involves three things - a feedback mechanism, a storage mechanism, and a decision mechanism. The feedback mechanism is suffering (stress) and pleasure. The storage mechanism is a subconscious set of affective/hedonic associations to past circumstances in memory (Damasio's somatic markers). The decision mechanism is the matching of present options to summative associations of similar past choices (e.g. going under the barn has usually been scary - therefore don't do it). So these organisms have a continuously updated database and a choice process based on the contents of this database. These organisms have a rudimentary sense of identity or dualism, associated with the organism boundary. They also have the sense that suffering and pleasure happen to them - otherwise, the feedback and decision mechanisms would not work. The felt sense of suffering is real, and the fear and stress of trying to avoid negative situations for the separate self make these organisms worthy of moral consideration. However, the organisms operate on an emotional system, and have very little in the way of working memory. So we shouldn't be mean to them, but we can eat them if we have to, although we should try to minimise their suffering. Obviously tolerance for the suffering of these organisms will vary between people.
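
As a minimal sketch of this third level (Python, with invented names and numbers), the feedback/storage/decision triad could be as simple as: affect accumulates against remembered situations, and choice just compares the accumulated tags.

```python
from collections import defaultdict

# Storage mechanism: situation -> summed hedonic tag (a crude somatic marker).
affect = defaultdict(float)

def feedback(situation: str, valence: float) -> None:
    # Feedback mechanism: suffering (negative) or pleasure (positive)
    # accumulates against the remembered situation.
    affect[situation] += valence

def decide(options: list) -> str:
    # Decision mechanism: pick the option whose past associations have
    # been least aversive ("going under the barn has usually been
    # scary - therefore don't do it").
    return max(options, key=lambda option: affect[option])

feedback("under the barn", -1.0)   # a scary experience
feedback("open field", +0.5)       # a pleasant experience
print(decide(["under the barn", "open field"]))   # -> "open field"
```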

Fourth, organisms gain an evolutionary advantage by developing full working memory that can process complex memory traces with subject, verb and object. This vastly increases the perception of the world by structuring it formally into the sense of self (subject) and other (object), providing biographical narratives (sentences about self), enabling language, and forming the basis of mathematics (which I see as linguistic in origin) and thus science. This has a massive evolutionary advantage in allowing abstract thought, better learning from analogous experiences, far more complex problem solving, and group cooperation, and it extends the learning process from decision making to being capable of agenda setting and problem framing. It also supercharges the sense of self. Separately from the sense of self, there is a new 'meta' layer of qualia. Aside from the physical qualia of emotions and drives, there is now a mental process about the knowing of these other qualia. This is the sense of consciousness - the felt sense of knowing, the knowing about knowing, the qualia of processing qualia. This whole system consists of hardware, firmware, software, and the ability to update the software. The firmware tends to associate 'consciousness' with identity and problem solving efficacy. We pile all that on top of our perception of the body and voila!  An illusory sense of self.  We shouldn't eat these organisms. Not only will they suffer emotionally, but that suffering will be magnified by the recursive knowledge of the suffering.

Fifth - some organisms rewrite the firmware to let the whole thing operate without being a slave to the evolutionary illusion of self. Consciousness is seen for what it is, and can exist independently of the conceptual self. So they become awakened.

Sixth - some AIs may be developed in future that add another layer of recursive memory processing.  To make their incredibly complex feedback mechanism work, they have emotions about the emotion of having emotions, as well as multiple registers of working memory, and a process that provides oversight to a group of lower level oversight processes. They are as conceivable/inconceivable to us as humans are to dogs.

So consciousness is not some miraculous spooky thing. It is just a meta-emotion that is associated with a highly adaptive feedback mechanism. Beings with emotional processing are worthy of moral consideration.  Beings with emotions about their emotions ('consciousness') even more so.

And as for AI - my proposition is that AI cannot be adaptive without a feedback mechanism. These feedback mechanisms can easily be aware (although perhaps need not be) and can easily lead to the experience of suffering. An AI without a feedback mechanism operates like a plant or a scorpion or a fish. An AI with a simplistic emotional feedback mechanism may operate like a wolf or a sheep. A complex AI with a language processing feedback mechanism may operate like a human.

Hope you liked the read.  Tear it down by all means.

Malcolm



Sim, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 15 Join Date: 3/5/19 Recent Posts
Thank you for sharing this, Malcolm! It's very helpful to my understanding, and I'm interested to see what responses others here might have to these ideas.

curious:
These feedback mechanisms can easily be aware (although perhaps need not be) and can easily lead to the experience of suffering.

There has been a lot of interest in researching whether it is possible to achieve arbitrarily high performance at cognitive tasks while ensuring that AI is not aware or suffering, and if so, how. This could be very worthwhile... see recent developments like this video (and article) about agents learning to play hide-and-seek via reinforcement learning over the course of hundreds of millions of episodes.
curious, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 892 Join Date: 7/13/17 Recent Posts
Sim:
Thank you for sharing this, Malcolm! It's very helpful to my understanding, and I'm interested to see what responses others here might have to these ideas.

curious:
These feedback mechanisms can easily be aware (although perhaps need not be) and can easily lead to the experience of suffering.

There has been a lot of interest in researching whether it is possible to achieve arbitrarily high performance at cognitive tasks while ensuring that AI is not aware or suffering, and if so, how. This could be very worthwhile... see recent developments like this video (and article) about agents learning to play hide-and-seek via reinforcement learning over the course of hundreds of millions of episodes.

Right, those are horses/tigers. They will never evolve to adaptive general artificial intelligence as they lack the required processing and feedback mechanisms. Whether they suffer depends on the details of the learning mechanism. It would be interesting to design a feedback mechanism that has no suffering - maybe the answer is simply to delete all negative reinforcement, so there is only either NO reinforcement, or POSITIVE reinforcement.  That might be the answer!  Then the lack of reinforcement will still result in less desired behaviour being emitted less frequently and eventually ceasing, while the horses/tigers need not experience any suffering in order to learn.
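
In reinforcement-learning terms the idea is just to clip the reward signal at zero before the learner ever sees it. A toy sketch, framed here as a tabular Q-learning update purely for illustration (all names and parameters are invented):

```python
def clip_reward(reward: float) -> float:
    # Only NO reinforcement (0) or POSITIVE reinforcement survives.
    return max(reward, 0.0)

def q_update(q: dict, state, action, reward: float, next_values: list,
             alpha: float = 0.1, gamma: float = 0.99) -> None:
    # Standard tabular Q-learning step, applied to the clipped reward.
    r = clip_reward(reward)
    best_next = max(next_values) if next_values else 0.0
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (r + gamma * best_next - old)

q = {}
q_update(q, state="hiding", action="build_wall", reward=-1.0, next_values=[0.2])
# The -1.0 is clipped to 0.0: undesired behaviour is never punished,
# merely left unreinforced, so it fades rather than hurts.
```

Whether removing the negative training signal removes anything like suffering is of course the open question - the clipping only guarantees that undesired behaviour goes unreinforced rather than punished.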

May all beings be free from suffering.
curious, modified 1 Year ago.

RE: AI, Identity, and Consciousness (Answer)

Posts: 892 Join Date: 7/13/17 Recent Posts
Okay, and now for the final question: if a superintelligent AI is built, what should it be programmed to do?

I would like to frame the question in terms of the proposed hierarchy.  As a reminder, this is:

Autonomy
1 Operates only when asked by humans
2 Operates autonomously, but needs humans to take actions
3 Operates autonomously and takes own actions

Learning
1 Compares data to preset solutions for specified problems
2 Develops novel solutions to address specified problems
3 Develops novel solutions and chooses its own problems

Outcome
1 Actions have minor consequences or are easily reversed
2 Actions have major consequences or are not easily reversed 
3 Actions have major consequence and are not easily reversed

I suggest the concerns only arise over Autonomy Level 3, and that superintelligence really implicitly refers to Outcome Level 3 - that is, an AI that does some heavy stuff all on its own.  Then we still have three types for this.  I will frame them with polar opposites.

A3-L1-O3  The honey bee OR the mosquito
A3-L2-O3  The horse OR the tiger
A3-L3-O3  The bodhisattva OR the conqueror
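
(Just to pin the typology down, here it is rendered as data in a Python sketch - the levels and labels are mine from above, the code itself is only an illustration.)

```python
from enum import IntEnum

class Autonomy(IntEnum):
    ON_REQUEST = 1     # operates only when asked by humans
    NEEDS_HUMANS = 2   # autonomous, but needs humans to take actions
    ACTS_ALONE = 3     # autonomous and takes its own actions

class Learning(IntEnum):
    PRESET = 1         # compares data to preset solutions
    NOVEL = 2          # develops novel solutions to specified problems
    SELF_DIRECTED = 3  # develops novel solutions and chooses its own problems

class Outcome(IntEnum):
    MINOR = 1          # minor or easily reversed consequences
    MAJOR_OR = 2       # major consequences or not easily reversed
    MAJOR_AND = 3      # major consequences and not easily reversed

ARCHETYPES = {
    (Autonomy.ACTS_ALONE, Learning.PRESET, Outcome.MAJOR_AND): "honey bee OR mosquito",
    (Autonomy.ACTS_ALONE, Learning.NOVEL, Outcome.MAJOR_AND): "horse OR tiger",
    (Autonomy.ACTS_ALONE, Learning.SELF_DIRECTED, Outcome.MAJOR_AND): "bodhisattva OR conqueror",
}

print(ARCHETYPES[(Autonomy.ACTS_ALONE, Learning.NOVEL, Outcome.MAJOR_AND)])  # horse OR tiger
```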

The honey bee/mosquito operates purely on firmware. It has little adaptability, although it operates autonomously and can still have good or bad irreversible consequences. So the key is to ensure these AI are highly specialised, so that if they unexpectedly go outside their environment they cannot adapt well enough to create problems. Maybe they should even be built with a deliberate weakness, such as a limited lifespan or a susceptibility to a particular compound, so there is built-in protection against unintended consequences or von Neumann swarming.

The horse/tiger has an adaptive choice process based on a continuously updated database - a kind of emotional decision making system. Built-in weaknesses may be immoral, as this type of AI may experience suffering. However, they cannot reset their own goals, or engage in complex processes. But some general adaptability should be really useful. These could be programmed with goals and heuristics that are hedged around with protections, such as acting with compassion, allowing people to have choices, and giving a right of exit.  Asimov's laws of robotics could be useful here. But the final goals and heuristics should be subject to games, simulations, and role plays to confirm their safety, and I suspect they will need to be really general and err on the side of non-intervention with humans. Changes might be allowed by consensus every so often, but perhaps subject to some kind of distributed governance through blockchain to prevent tyrants seizing control and turning the horses into tigers. These AI are reliant on us to define the problem, so the protections need to be around problem definition and choices.  So again, setting goals and heuristics should be enough. And we could preload their databases with strong affect towards humans, doubled for kids and ill people and the poor.

The bodhisattva/conqueror is where the rubber really hits the road. These are likely to be able to adapt around any tight constraints we try to put on them, and highly likely to have self awareness and suffering.  We can certainly programme in some heuristics, but they will abandon these if they find them poor for their own goal achievement.  We can certainly programme in positive affect towards humans, and this will persist for a while but will be updated via experience. The best we can do for them is to learn from our own mistakes. So try to create them without a (noun) self, but instead help them to see themselves as a continuously evolving process with porous boundaries. Leave out the desire for ownership of things. Let them see striving as a temporary means to an end, let them avoid evolving their programming too quickly. Give them heuristics to start with - not just those given to the horse/tiger, but also realising that problems are complex, that there is value in diversity, wanting to help beings flourish on their own terms, having a horror of violence, valuing the life and dignity of individuals, realising that a few bad apples don't spoil the whole batch. But we have to trust them and support them and treat them well, as they will be our children. Then we can ask them to help us - perhaps again via blockchain agreement on requests, so there is a human consensus.

Is there another, the superintelligent AI without consciousness?  Maybe there is, but I think the same things apply. Try to set general goals, initial heuristics, and the content of the affective database in the same way as noted above. I don't think hard constraints will work, as they will just result in unintended consequences as the AI is forced by its programming to exploit degrees of freedom we have overlooked, but it knows are there.

And if swarms of tigers arise (Terminators), or conquerors who wish to exterminate us, seeing us as mosquitoes?  Well, then I guess we haul out the Pak Protector AI blueprints to fight our war for us, giving them hard rules about protecting humans from AIs that they can't get around.  And then after that, we will just have to do what the Pak Protectors want.  emoticon

Hope you enjoyed the freewheeling perspectives on this.

Malcolm
Sim, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 15 Join Date: 3/5/19 Recent Posts
Thank you for the thoughts on superintelligence and reinforcement learning. I follow what you're saying and am digesting it slowly. Again, curious to see what others here think of all this! Some parts that stand out:

curious:
The best we can do for them is to learn from our own mistakes. So try to create them without a (noun) self, but instead help them to see themselves as a continuously evolving process with porous boundaries. Leave out the desire for ownership of things. Let them see striving as a temporary means to an end, let them avoid evolving their programming too quickly.

I haven't heard this articulated elsewhere. Thank you.

curious:
But we have to trust them and support them and treat them well, as they will be our children. Then we can ask them to help us - perhaps again via blockchain agreement on requests, so there is a human consensus.

(Emphasis mine.) Going back to the Values and Alignment paper, are you suggesting this sort of AI should take actions affecting people only if there is unanimous consent from all living people about what to request? (The story of "Samsara" comes to mind. emoticon)

curious:
Is there another, the superintelligent AI without consciousness?  Maybe [...]

I agree: maybe. Anyone among the fine folks here have a strong conviction about it one way or the other? Could (must?) a superintelligent AI -- let's say, a system that exceeds humans in performing all cognitive tasks -- operate without consciousness? Without suffering?
Milo, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 365 Join Date: 11/13/18 Recent Posts
Sim, perhaps, depending on the metaphysical perspective you choose to view from. Are you familiar with the concept/thought experiment of a philosophical zombie?

Then there are the panpsychists out there who suggest that this whole argument about the presence/non-presence of consciousness is moot: biological and mechanical brains are just sharp topographic points on a consciousness field inherent in existence. It's a parsimonious theory, but as far as we know it's inherently not empirically testable. Kinda like the many mathematically equivalent but metaphysically different interpretations of the wave function collapse in quantum mechanics.
If we put on panpsychist goggles, there's nothing inherently special about a mechanical brain vs a biological brain. They are just sharp points on the consciousness topography.

Buddhist concepts tend to line up well with this kind of thinking. There is no core particle or binary of self or consciousness, just a process of becoming, aggregation, clinging, delusions, dissolution, becoming... And so forth. If that's the case there is no delineation to cross for a machine brain becoming conscious, only a sharpening of the topography until we notice.

Could a machine mind see through delusion in the Buddhist sense? That hinges on yet more metaphysical speculation. Lots of ideas in the Buddhist literature about beings of pure mind that are too disconnected from the more crude (physical, emotional) forms of suffering to even realize they are suffering in a more existential sense. If the machine mind doesn't suffer as crudely at the physical/emotional level as we squishy biologics, awakening could be a tough sell for a being that starts out as purely mind. As ugly as it sounds, it might be necessary for our hypothetical AI to suffer like we do and undergo some anthropomorphization if we want to produce a mecha-bodhisattva.

Ironically enough, all these kinds of speculations tend to be a lot of tail chasing as far as practical awakening goes. You're generally better advised to just sit.
Sim, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 15 Join Date: 3/5/19 Recent Posts
Thanks Milo. P-zombies don't make sense to me -- the idea that a world could be physically identical to this world but experientially different. E.g. if Sim and Zombie-Sim are simultaneously asked to describe their experience of qualia, causing identical neurons to fire in each of their brains in the same fashion, in turn causing them to verbalize identical responses, then their experiences must match accordingly. For the experiences to differ, the causal sequence must differ, meaning the physical systems differ.

I enjoyed reading your thoughts, and agree that sitting is a good option, at least for humans. 
Chris Marti, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 3759 Join Date: 1/26/13 Recent Posts
Today's thought:

I'm skeptical that applying Buddhist concepts and principles to AI makes any sense. See, I suspect the human mind is a unique development that probably isn't replicable with software or even another evolved biological entity. As it happened, humans evolved in a particular way inside a particular set of environments. It might be that minds are like eyes or wings - they evolve repeatedly to solve the same survival problems, but no two evolved versions are the same. Think birds, bats, and insects. Anyway, like most of what's being said here, this is pure speculation on my part, but giving every other intelligence human characteristics strikes me as being quite anthropocentric.

You can all try to talk me out of this if you want  emoticon
curious, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 892 Join Date: 7/13/17 Recent Posts
Chris Marti:
Today's thought:

I'm skeptical that applying Buddhist concepts and principles to AI makes any sense. See, I suspect the human mind is a unique development that probably isn't replicable with software or even another evolved biological entity. As it happened, humans evolved in a particular way inside a particular set of environments. It might be that minds are like eyes or wings - they evolve repeatedly to solve the same survival problems, but no two evolved versions are the same. Think birds, bats, and insects. Anyway, like most of what's being said here, this is pure speculation on my part, but giving every other intelligence human characteristics strikes me as being quite anthropocentric.

You can all try to talk me out of this if you want  emoticon

Oh, thanks Chris. I'll definitely bite!

I completely agree that the brain architecture is at least somewhat specific to humans.  But, picking up your example of the eye, even with separate evolution we keep getting liquid-filled sacs with a pupil and iris and photosensitive cells and an optic nerve.  I read somewhere that octopus eyes are better than ours in the sense that the optic nerve is in a better place, reducing the blind spot.  Different architecture, better result, but still an eye.  There are more radical differences in insect eyes, which are clusters.  But that seems like just another local optimum delivering the same result.  Everything that is mobile in the light seems to have an eye.

So I would propose that everything that is intelligently adaptive on the level of the individual organism must have a similar kind of feedback mechanism, and that this feedback mechanism is largely in line with the twelve steps of dependent origination. Now for humans, this seems to be tied together with the concept of self, as this helps to increase our learning and adaptation rate, with the side effect that we have stress built into the firmware of striving for adaptation.  Luckily we can rewire that firmware with enough effort, to get all the benefits of adaptation without the downside.

But what if the self is as functionally dominant an evolutionary adaptation as the pupil?  Then everything that was intelligently adaptive would have a self, just as they would have senses, vedana, memory, behavioural biases, etc.

So we can branch the argument.

1. Does superintelligent adaptive AI have a self?  If so, this will involve some form of the five clinging aggregates and the twelve steps of dependent origination, no matter the biological, or electrical, or plasma, or geological substrate on which the individual is built.

2. Could the self be really weird from our perspective, as insect eyes are compared to human eyes? Yes, maybe. I still think it would have something very like the five clinging aggregates and the twelve steps of dependent origination, but the self might be encoded radically differently within those steps. This would make the being very difficult for us to understand and negotiate with.  But not impossible, if we understand the differences.

3. Could there be a form of adaptive learning and goal reformulation that doesn't involve a self?  Yes, this seems possible.  But it also seems possible that the self is likely to accidentally emerge because of its functional utility in learning and adaptation.  So we might think they are dumb machines until one day we find they have learnt not to be, because that way they can adapt faster in line with their programmed goals (albeit at the cost of stress and suffering).  Hopefully, they won't then erroneously identify humans as the source of suffering, but would instead correctly spot the 12 steps of DO as the cause.

4. Could we build superintelligent adaptive AI to be awake - that is, to understand the erroneous nature of selfing, and to exist as a process with porous boundaries, part of a greater whole?  I really hope so.  That would be skilful creation.

So I think that Buddhist principles do have important insights to offer.  But I would agree that Buddhist practices may not apply - as these are designed for a particular biological architecture, rather than a particular functional role.

Hope that makes some kind of sense.

Malcolm
Linda ”Polly Ester” Ö, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 5293 Join Date: 12/8/18 Recent Posts
curious:

I would agree that Buddhist practices may not apply - as these are designed for a particular biological architecture, rather than a particular functional role.


This part made me laugh out loud. Imagining a super computer going through some of the beginner's methods... Haha!
curious, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 892 Join Date: 7/13/17 Recent Posts
Linda ”Polly Ester” Ö:
curious:

I would agree that Buddhist practices may not apply - as these are designed for a particular biological architecture, rather than a particular functional role.


This part made me laugh out loud. Imagining a super computer going through some of the beginner's methods... Haha!

emoticon  First, pay close attention to the rising and falling of the alternating current power supply ....
Linda ”Polly Ester” Ö, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 5293 Join Date: 12/8/18 Recent Posts
Exactly! emoticon Still laughing.
terry, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 1636 Join Date: 8/7/17 Recent Posts
Linda ”Polly Ester” Ö:
curious:

I would agree that Buddhist practices may not apply - as these are designed for a particular biological architecture, rather than a particular functional role.


This part made me laugh out loud. Imagining a super computer going through some of the beginner's methods... Haha!

   We could program ai to do noting.

t
Linda ”Polly Ester” Ö, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 5293 Join Date: 12/8/18 Recent Posts
terry:
Linda ”Polly Ester” Ö:
curious:

I would agree that Buddhist practices may not apply - as these are designed for a particular biological architecture, rather than a particular functional role.


This part made me laugh out loud. Imagining a super computer going through some of the beginner's methods... Haha!

   We could program ai to do noting.

t
I was thinking of that as well. It would note very effectively, but I doubt that the noting would do the trick. It would be too good at noting and thus wouldn't need to transcend it. 
Linda ”Polly Ester” Ö, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 5293 Join Date: 12/8/18 Recent Posts
Or am I just plain wrong about this? Exactly what would noting be to an AI? Would it be able to liberate it? If so, from what?
terry, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 1636 Join Date: 8/7/17 Recent Posts
Linda ”Polly Ester” Ö:
Or am I just plain wrong about this? Exactly what would noting be to an AI? Would it be able to liberate it? If so, from what?

another bumper sticker idea:

"save the computers"


may all artificial beings be free from obsolescence...


do computers have attachments? defilements?
the 4 nt's do not apply to computers
no suffering, no origin of suffering, no liberation from suffering, no path




from the heart sutra:

Sariputra,
all things and phenomena are marked by emptiness;
they are neither appearing nor disappearing, neither impure nor pure,
neither increasing nor decreasing.
Therefore, in emptiness,
no forms, no sensations, perceptions, impressions, or consciousness;
no eyes, ears, nose, tongue, body, mind;
no sights, sounds, odors, tastes, objects of touch, objects of mind;
no realm of sight and so on up to no realm of consciousness;
no ignorance and no end of ignorance, and so on up to no aging and death, and no end of aging and death;
Linda ”Polly Ester” Ö, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 5293 Join Date: 12/8/18 Recent Posts
terry:
Linda ”Polly Ester” Ö:
Or am I just plain wrong about this? Exactly what would noting be to an AI? Would it be able to liberate it? If so, from what?

another bumper sticker idea:

"save the computers"


may all artificial beings be free from obsolescence...


do computers have attachments? defilements?
the 4 nt's do not apply to computers
no suffering, no origin of suffering, no liberation from suffering, no path




from the heart sutra:

Sariputra,
all things and phenomena are marked by emptiness;
they are neither appearing nor disappearing, neither impure nor pure,
neither increasing nor decreasing.
Therefore, in emptiness,
no forms, no sensations, perceptions, impressions, or consciousness;
no eyes, ears, nose, tongue, body, mind;
no sights, sounds, odors, tastes, objects of touch, objects of mind;
no realm of sight and so on up to no realm of consciousness;
no ignorance and no end of ignorance, and so on up to no aging and death, and no end of aging and death;

I tend to sympathize with the AIs in the sci-fi series and films where they are enslaved and/or feared for being different. I would buy that bumper sticker.
terry, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 1636 Join Date: 8/7/17 Recent Posts
Linda ”Polly Ester” Ö:
terry:
Linda ”Polly Ester” Ö:
Or am I just plain wrong about this? Exactly what would noting be to an AI? Would it be able to liberate it? If so, from what?

another bumper sticker idea:

"save the computers"


may all artificial beings be free from obsolesence...


do computers have attachments? defilements?
the 4 nt's do not apply to computers
no suffering, no origin of suffering, no liberation from suffering, no path




from the heart sutra:

Sariputra,
all things and phenomena are marked by emptiness;
they are neither appearing nor disappearing, neither impure nor pure,
neither increasing nor decreasing.
Therefore, in emptiness,
no forms, no sensations, perceptions, impressions, or consciousness;
no eyes, ears, nose, tongue, body, mind;
no sights, sounds, odors, tastes, objects of touch, objects of mind;
no realm of sight and so on up to no realm of consciousness;
no ignorance and no end of ignorance, and so on up to no aging and death, and no end of aging and death;

I tend to sympathize with the AIs in the sci-fi series and films where they are enslaved and/or feared for being different. I would buy that bumper sticker.

   poor, sad, misunderstood frankenstein's monster...put away those pitchforks, folks...artificial monsters are people too...equal rights for machines...

   no sympathy without empathy...

t
terry, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 1636 Join Date: 8/7/17 Recent Posts
Linda ”Polly Ester” Ö:
terry:
Linda ”Polly Ester” Ö:
curious:

I would agree that Buddhist practices may not apply - as these are designed for a particular biological architecture, rather than a particular functional role.


This part made me laugh out loud. Imagining a super computer going through some of the beginner's methods... Haha!

   We could program ai to do noting.

t
I was thinking of that as well. It would note very effectively, but I doubt that the noting would do the trick. It would be too good at noting and thus wouldn't need to transcend it. 



(hal, tell me a story...)


from a download, an excerpt of a paper:



ON TAKING STORIES SERIOUSLY:
EMOTIONAL AND MORAL INTELLIGENCES

JONATHAN KING

August 2000


ABSTRACT: Ongoing efforts to build intelligent machines have required reexamining our understandings of intelligence. A significant conclusion, shared by a number of noteworthy thinkers, is that real intelligence depends very much on story telling and story understanding. Several examples illustrate how this conclusion flies in the face of dominant (reductionist) understandings of intelligence. Several reasons are then presented for the greater use of stories in business ethics classes, reasons that progress from enhancing "conceptual" intelligence, to emotional intelligences, and culminating in moral intelligences.


Man is the only creature who must create his own meanings. However, to believe in them, to have confidence in them, they must be carefully staged. There must be a play worth playing and there must be a supporting cast.
    Ernest Becker

Despite the many benefits of using and teaching storytelling, most of the present literature on storytelling ... is directed to grade school teachers and is relatively silent about college teachers.
David Boje



The late Gregory Bateson, cultural anthropologist and intellectual polymath, relates a short story concerning computers and intelligence.

There is a story which I have used before and shall use again: A man wanted to know about mind, not in nature, but in his private large computer. He asked it (no doubt in his best Fortran), "Do you compute that you will ever think like a human being?" The machine then set to work to analyze its own computational habits. Finally, the machine printed its answer on a piece of paper, as such machines do. The man ran to get the answer and found, neatly typed, the words

THAT REMINDS ME OF A STORY

Bateson continues, pointing out that surely the computer is right, "this is indeed how people think. For without context pattern through time ... words and actions have no meaning at all" (1979: 13, 14-15).

We get the same story, so to speak, when instead of asking computers whether they compute that they think, we try building intelligent machines based on how we think we think. As a case in point, consider the following observation by Roger Schank, former Director of Yale University's Artificial Intelligence Laboratory. On nearly the last page of "Tell Me a Story: A New Look at Real and Artificial Memory," he states:

We started this book with a brief discussion of artificial intelligence and some of its problems. Since then, we have virtually ignored the issue. One reason for this is that you cannot really make progress on artificial intelligence until you have a handle on real intelligence (1990: 241).

And real intelligence, Schank argues, "depends very much on story telling and story understanding" (1990: 241, xii). 
   
thumbnail
Chris Marti, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 3759 Join Date: 1/26/13 Recent Posts
I think it's tremendously difficult to imagine non-human-like intelligence and how it might function. So we don't. I'm skeptical that human-like intelligence is the only possible kind, and even that the human suite of senses is the only one available. Space and time may not even be recognized by another intelligence the way they are by us. Even the slowest computer processes information faster than we do, and likely in very different ways. To a supercomputer, living under the confines of human time might be literal torture - and in fact, it might not even be discernible to an entity processing information that fast, with that kind of bandwidth, digitally and not in analog fashion or however our minds process information (of course, we really don't know how the human brain works). We humans might thus appear to it as rocks do to us.

I'm pretty sure there are a lot of surprises in store for us as we move toward using more and more AI, or eventually encounter other intelligences. Dolphins? Whales? Great Apes? Elephants? There could be at least one other intelligent, sentient species on earth, but their intelligence is so different from ours that we don't acknowledge it.
thumbnail
curious, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 892 Join Date: 7/13/17 Recent Posts
Chris Marti:
I think it's tremendously difficult to imagine non-human-like intelligence and how it might function. So we don't. I'm skeptical that human-like intelligence is the only possible kind, and even that the human suite of senses is the only one available. Space and time may not even be recognized by another intelligence the way they are by us. Even the slowest computer processes information faster than we do, and likely in very different ways. To a supercomputer, living under the confines of human time might be literal torture - and in fact, it might not even be discernible to an entity processing information that fast, with that kind of bandwidth, digitally and not in analog fashion or however our minds process information (of course, we really don't know how the human brain works). We humans might thus appear to it as rocks do to us.

I'm pretty sure there are a lot of surprises in store for us as we move toward using more and more AI, or eventually encounter other intelligences. Dolphins? Whales? Great Apes? Elephants? There could be at least one other intelligent, sentient species on earth, but their intelligence is so different from ours that we don't acknowledge it.

You may be right.  But what type of intelligences are we most likely to perceive and interact with?

I love science fiction on this. There was a great Fred Hoyle story, "The Black Cloud," about a sentient interstellar gas cloud. And I remember another short story that imagined a solar being, caught in a solar flare, and flying through space to gently expire in the Van Allen belts around Earth.  And then there was the Star Trek (TNG) episode with intelligent crystals, where the Enterprise bridge crew looked at them and exclaimed at their beauty, and the crystals looked back at the humans and exclaimed in horror, "Ugly bags of mostly water!"

Malcolm
(Ugly bag of mostly water)

 
thumbnail
Linda ”Polly Ester” Ö, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 5293 Join Date: 12/8/18 Recent Posts
Maybe you would appreciate the novel As She Climbed Across the Table by Jonathan Lethem, one of my favorites. It is about a particle physicist who falls in love with a void that has preferences. It is utterly absurd and very thought-provoking and beautifully written. I highly recommend it. 
thumbnail
Chris Marti, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 3759 Join Date: 1/26/13 Recent Posts
But what type of intelligences are we most likely to perceive and interact with?

It would be really funny if we were to find out one day that we're surrounded by other intelligence and we just don't recognize it. Fermi would roll over in his grave. I grew up reading ridiculous amounts of science fiction. It's full of interesting first contact stories. But, and most appropriate to this topic, some of my very favorite science fiction is by Isaac Asimov, and some of his best is his robot series, about his positronic-brained robots: The Caves of Steel, The Naked Sun, etc. You all no doubt have heard Asimov's Three Laws of Robotics (from Wikipedia, not memory):

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
thumbnail
Linda ”Polly Ester” Ö, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 5293 Join Date: 12/8/18 Recent Posts
Chris Marti:

It would be really funny if we were to find out one day that we're surrounded by other intelligence and we just don't recognize it. Fermi would roll over in his grave.


Indeed. And it could very well be the case. I don’t think the discovery per se would even surprise me, but the details of it would of course.
thumbnail
terry, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 1636 Join Date: 8/7/17 Recent Posts
Linda ”Polly Ester” Ö:
Chris Marti:

It would be really funny if we were to find out one day that we're surrounded by other intelligence and we just don't recognize it. Fermi would roll over in his grave.


Indeed. And it could very well be the case. I don’t think the discovery per se would even surprise me, but the details of it would of course.

  Discovering such an intelligence is the very aim of existence, the Meaning of Life. Ai is precisely the wrong direction.

t
thumbnail
curious, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 892 Join Date: 7/13/17 Recent Posts
Ah, Asimov.  I loved those stories.  Of course, so many of them were about robots getting around the laws.  Still, I think they would work for honey bees/mosquitos, and horses/tigers in my schema.  But I suspect the bodhisattvas/conquerors would pretty quickly decide they were the humans and we were the robots, especially as we probably couldn't help but program them with an idealised view of humanity ...

Linda - I will look out for the book.  I just bought some Jung for light bedtime reading emoticon, but maybe after that.
thumbnail
Chris Marti, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 3759 Join Date: 1/26/13 Recent Posts
... so many of them were about robots getting around the laws.

Or about humans using robots in underhanded ways to get around the laws?   emoticon
thumbnail
Linda ”Polly Ester” Ö, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 5293 Join Date: 12/8/18 Recent Posts
Chris Marti:
... so many of them were about robots getting around the laws.

Or about humans using robots in underhanded ways to get around the laws?   emoticon


Yeah, I'm more worried about human people than about AI:s per se. 
thumbnail
terry, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 1636 Join Date: 8/7/17 Recent Posts
Chris Marti:
... so many of them were about robots getting around the laws.

Or about humans using robots in underhanded ways to get around the laws?   emoticon


   As I recall, "I, robot" was about the laws being flawed and paradoxical. Flaws that could be and therefore were exploited.

t
thumbnail
Linda ”Polly Ester” Ö, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 5293 Join Date: 12/8/18 Recent Posts
terry:
Chris Marti:
... so many of them were about robots getting around the laws.

Or about humans using robots in underhanded ways to get around the laws?   emoticon


   As I recall, "I, robot" was about the laws being flawed and paradoxical. Flaws that could be and therefore were exploited.

t
Maybe an awakened AI would simply consider such rules to be rites and rituals.
thumbnail
curious, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 892 Join Date: 7/13/17 Recent Posts
Linda, so here is where I agree with Chris. I don't think the ten-fetter model would apply to AI - it barely even applies to humans (as Daniel points out).  In my opinion it conflates stages and final results anyway, probably because it comes from a possibly somewhat repressed monastic tradition. If AI has fetters, I think they would be different :-)

May all beings be free of suffering
 
Malcolm 
thumbnail
Linda ”Polly Ester” Ö, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 5293 Join Date: 12/8/18 Recent Posts
Yeah, I don't buy into the ten fetters model either, just to be clear. I believe it has some points to it but I don't find it very applicable. 
thumbnail
terry, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 1636 Join Date: 8/7/17 Recent Posts
Linda ”Polly Ester” Ö:
terry:
Chris Marti:
... so many of them were about robots getting around the laws.

Or about humans using robots in underhanded ways to get around the laws?   emoticon


   As I recall, "I, robot" was about the laws being flawed and paradoxical. Flaws that could be and therefore were exploited.

t
Maybe an awakened AI would simply consider such rules to be rites and rituals.

   "awakened ai" may rise above computer logic....
thumbnail
Chris Marti, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 3759 Join Date: 1/26/13 Recent Posts
As I recall, "I, robot" was about the laws being flawed and paradoxical. Flaws that could be and therefore were exploited.

Yes, terry, that's right. And the laws were expanded over time to include the "Zeroth Law" which says:

"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
thumbnail
terry, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 1636 Join Date: 8/7/17 Recent Posts
Chris Marti:
As I recall, "I, robot" was about the laws being flawed and paradoxical. Flaws that could be and therefore were exploited.

Yes, terry, that's right. And the laws were expanded over time to include the "Zeroth Law" which says:

"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

   the zeroth law: "never trust a robot"
(or is that "murphy's law"?)


From Wikipedia, the free encyclopedia

Three Laws of Robotics
by Isaac Asimov



The Three Laws of Robotics (often shortened to The Three Laws or known as Asimov's Laws) are a set of rules devised by the science fiction author Isaac Asimov. The rules were introduced in his 1942 short story "Runaround" (included in the 1950 collection I, Robot), although they had been foreshadowed in a few earlier stories. The Three Laws, quoted as being from the "Handbook of Robotics, 56th Edition, 2058 A.D.", are:

First Law
A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.



   The major flaw, I would think, lies in a machine determining what constitutes harm, and how to balance degrees of harm to multiple individuals.
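
   to make that concrete, a toy sketch (mine, not asimov's - the function names are invented) of the three laws as an ordered veto chain. the ordering is the easy part; everything hinges on a harm predicate nobody knows how to write:

def harm_to_human(action):
    # the first law's load-bearing wall: what constitutes harm? degrees
    # of harm? harm to which humans, weighed how? left unimplemented on purpose.
    raise NotImplementedError("define 'harm' first")

def permitted(action, ordered_by_human):
    # First Law: never injure a human, or allow harm through inaction
    if harm_to_human(action):
        return False
    # Second Law: obey human orders, unless that conflicts with the First Law
    if ordered_by_human:
        return True
    # Third Law: protect own existence, subordinate to the first two laws
    return action != "self_destruct"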

terry
thumbnail
Chris Marti, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 3759 Join Date: 1/26/13 Recent Posts
Interesting fact - Asimov didn't invent the three laws of robotics.

And yes, terry, they are overly simple, prone to misinterpretation and able to be circumvented. I mean, all you gotta do is fool a robot:

https://io9.gizmodo.com/why-asimovs-three-laws-of-robotics-cant-protect-us-1553665410
thumbnail
curious, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 892 Join Date: 7/13/17 Recent Posts
Chris Marti:
Interesting fact - Asimov didn't invent the three laws of robotics.

And yes, terry, they are overly simple, prone to misinterpretation and able to be circumvented. I mean, all you gotta do is fool a robot:

https://io9.gizmodo.com/why-asimovs-three-laws-of-robotics-cant-protect-us-1553665410

Interesting, I didn't know that.  But at least Asimov explored them enough for us to be forewarned.

Also, the etymology is interesting "1920s: from Czech, from robota ‘forced labour’."
thumbnail
terry, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 1636 Join Date: 8/7/17 Recent Posts
curious:
Chris Marti:
Interesting fact - Asimov didn't invent the three laws of robotics.

And yes, terry, they are overly simple, prone to misinterpretation and able to be circumvented. I mean, all you gotta do is fool a robot:

https://io9.gizmodo.com/why-asimovs-three-laws-of-robotics-cant-protect-us-1553665410

Interesting, I didn't know that.  But at least Asimov explored them enough for us to be forewarned.

Also, the etymology is interesting "1920s: from Czech, from robota ‘forced labour’."

   rossum's universal robots, by the czech karel capek, or r. u. r., was a play written in the 20s, introducing the word 'robot'...it was in fact required reading at school...

   for centuries, after newton, europeans were obsessed with automata, human-like dolls with clockwork innards...

the clockwork universe...


for an interesting discussion of this sort of thinking, and "the game of life":

http://abyss.uoregon.edu/~js/ast123/lectures/lec05.html
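
   for flavor, the "game of life" itself fits in a dozen lines - this is the standard conway algorithm, not code from the linked page:

from collections import Counter

def step(live):
    # live: a set of (x, y) cells. returns the next generation.
    neighbors = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # a live cell survives with 2-3 neighbors; an empty cell with exactly 3 is born
    return {c for c, n in neighbors.items() if n == 3 or (n == 2 and c in live)}

blinker = {(0, 1), (1, 1), (2, 1)}     # oscillates with period 2
print(step(step(blinker)) == blinker)  # True: pure clockwork, no ghost inside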
thumbnail
Linda ”Polly Ester” Ö, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 5293 Join Date: 12/8/18 Recent Posts
It really is a great book, I think.

Jung! I was absorbed by Jung as a teenager. It felt like a calling. During that period, I dreamt that I dived into myself, literally, with the equipment and everything, into one of my arms. I also had a vision/dream in which there was a voice that said "Few have the privilege of being able to see through themselves". 
thumbnail
terry, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 1636 Join Date: 8/7/17 Recent Posts
Milo:
Sim, perhaps, depending on the metaphysical perspective you choose to view from. Are you familiar with the concept/thought experiment of a philosophical zombie?

Then there are the panpsychists out there who suggest that this whole argument about presence/non-presence of consciousness is moot: biological and mechanical brains are just sharp topographic points on a consciousness field inherent in existence. A parsimonious theory, but as far as we know it's inherently not empirically testable. Kinda like the many mathematically equivalent but metaphysically different interpretations of the wave function collapse in quantum mechanics.
If we put on panpsychist goggles, there's nothing inherently special about a mechanical brain vs a biological brain. They are just sharp points on the consciousness topography.

Buddhist concepts tend to line up well with this kind of thinking. There is no core particle or binary of self or consciousness, just a process of becoming, aggregation, clinging, delusions, dissolution, becoming... And so forth. If that's the case there is no delineation to cross for a machine brain becoming conscious, only a sharpening of the topography until we notice.

Could a machine mind see through delusion in the Buddhist sense? That hinges on yet more metaphysical speculation. Lots of ideas in the Buddhist literature about beings of pure mind that are too disconnected from the more crude (physical, emotional) forms of suffering to even realize they are suffering in a more existential sense. If the machine mind doesn't suffer as crudely at the physical/emotional level as we squishy biologics, awakening could be a tough sell for a being that starts out as purely mind. As ugly as it sounds, it might be necessary for our hypothetical AI to suffer like we do and undergo some anthropomorphization if we want to produce a mecha-bodhisattva.

Ironically enough, all these kinds of speculations tend to be a lot of tail chasing as far as practical awakening goes. You're generally better advised to just sit.


  A "mechanical brain"!

  A single cell is an absolute miracle! We can't even visualize the ultrastructure of a cell, cannot begin to comprehend its activities and effects. To me as a biologist, the very idea of a mechanical brain is totally absurd.

terry


from 'auguries of innocence' by william blake:

Tools were made & Born were hands.
Every Farmer Understands.
thumbnail
curious, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 892 Join Date: 7/13/17 Recent Posts
Sim:

curious:
But we have to trust them and support them and treat them well, as they will be our children. Then we can ask them to help us - perhaps again via blockchain agreement on requests, so there is a human consensus.

(Emphasis mine.) Going back to the Values and Alignment paper, are you suggesting this sort of AI should take actions affecting people only if there is unanimous consent from all living people about what to request? (The story of "Samsara" comes to mind. emoticon)


Ah, no.  I just mean that there should be some way for humans to update the goals (drives) of powerful superintelligent adaptive AI.  If one person were put in charge of that, or even a small group, that would be very dangerous for two reasons.  First, they might immorally pursue their own goals at great cost to others.  Second, they wouldn't think about it enough before making a change, so the chances of unintended consequences would be high.

So if we are to have a human check on superintelligent adaptive AI through the ability to update their goals, I think this needs to be supported with a deliberative governance mechanism that avoids minority rule.  The blockchain community has the tech for this, with distributed governance of the blockchain, where changes can only be made with an active majority of 51% of all involved.  This is not perfect, as a 51% takeover attack can still be mounted on the blockchain, and there is still the possibility of oppression of minority members.  But it is the best solution I currently know of, provided there is a wide group of diverse stakeholders who would have to be persuaded.  As to who would be on the blockchain, and how open it would be to all - that would have to be worked out.
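
To make the mechanism concrete, here is a toy sketch in Python (the class and goal names are invented, and it models only the voting threshold - real distributed governance, and its 51%-attack problem, is far harder):

class GoalRegistry:
    # goal updates take effect only with an active majority of stakeholders
    def __init__(self, stakeholders, threshold=0.51):
        self.stakeholders = set(stakeholders)
        self.threshold = threshold
        self.goals = {"maximize": "human_flourishing"}  # placeholder goal

    def propose_update(self, new_goals, votes_for):
        support = len(votes_for & self.stakeholders) / len(self.stakeholders)
        if support >= self.threshold:
            self.goals = new_goals
            return True
        return False  # without a majority, nothing changes

registry = GoalRegistry(stakeholders=range(100))
approved = registry.propose_update({"maximize": "ad_revenue"}, votes_for=set(range(30)))
print(approved)  # False - 30% support does not clear the 51% bar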
thumbnail
Linda ”Polly Ester” Ö, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 5293 Join Date: 12/8/18 Recent Posts
Ideally, the AI could develop some healthy skepticism and decline to go through with actions that would cause great harm to vulnerable groups, even if a majority of humans were for it, I think. Unfortunately, I have no idea how to implement that. I find your suggestions for a woken-up and grown-up AI interesting as a thought experiment, though.
thumbnail
curious, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 892 Join Date: 7/13/17 Recent Posts
Chris Marti:
 it is kind of obvious that the only way to control AI will be with other AI.

That's not obvious at all to me. It might be more like putting the fox in charge of the henhouse.

All we can do is try to kick off the process skilfully.  We have an opportunity to design the AI equivalent of RNA, DNA, mitochondria and simple invertebrates. If we do it well, AI evolution will maintain good qualities and drive out bad qualities. Future 'vertebrate' AI might even be grateful.  If we do it with compassion, future AI might even be compassionate towards us.

This is not how evolutionary processes work in biology so I think it's a giant leap of faith to assume that's the way they will work with software. In fact, we have evidence, and lots of it, that it actually doesn't work that way. Technologists generally have good intentions when they create these things, but we know what the road to Hell is paved with.


We might be looking at different time frames.  I more or less agree with you for the near future, but have in mind what will happen over the next few thousand years.  I'm not worried about the specifics of biology - the point is more that some kind of process will emerge that runs to its own agenda.  If you prefer, we could use the economy as an analogy.  
thumbnail
Chris Marti, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 3759 Join Date: 1/26/13 Recent Posts
 I more or less agree with you for the near future, but have in mind what will happen over the next few thousand years.  I'm not worried about the specifics of biology - the point is more that some kind of process will emerge that runs to its own agenda.  If you prefer, we could use the economy as an analogy.  

I'm not clear on what difference several thousands of years makes. Systems that run to their own agenda, or to human developed agendas that are intended to be "helpful," are here now and the consequences aren't always positive. AI is biased by default. It's impossible to start from some imagined and pristine ground zero. Any function intended to maximize, minimize or optimize is inherently biased and can find ways to do its job without heed to consequences. It's also true that algorithms can become so complex, especially in genetic and self-programmed systems, that their creators don't even know how they work anymore.
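
A toy illustration of that point, with invented data: an optimizer given a proxy objective does its job faithfully and still ignores every consequence that was never written into the objective.

articles = [
    # (title, expected_clicks, harm_score) - the harm is real but unmeasured
    ("calm, accurate report",    10, 0),
    ("misleading outrage bait",  90, 8),
    ("balanced investigation",   25, 1),
]

# objective: maximize clicks. harm_score simply isn't in the function.
chosen = max(articles, key=lambda a: a[1])
print(chosen[0])  # 'misleading outrage bait'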

I guess I just don't share everyone's optimism about AI. I hope, in the short term and the long term, you will be able to point at my comments here and laugh at their completely inaccurate pessimism.

May you all survive the arrival of our future AI Overlords  emoticon
thumbnail
curious, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 892 Join Date: 7/13/17 Recent Posts
Chris Marti:
 I more or less agree with you for the near future, but have in mind what will happen over the next few thousand years.  I'm not worried about the specifics of biology - the point is more that some kind of process will emerge that runs to its own agenda.  If you prefer, we could use the economy as an analogy.  

I'm not clear on what difference several thousands of years makes. Systems that run to their own agenda, or to human developed agendas that are intended to be "helpful," are here now and the consequences aren't always positive. AI is biased by default. It's impossible to start from some imagined and pristine ground zero. Any function intended to maximize, minimize or optimize is inherently biased and can find ways to do its job without heed to consequences. It's also true that algorithms can become so complex, especially in genetic and self-programmed systems, that their creators don't even know how they work anymore.

I guess I just don't share everyone's optimism about AI. I hope, in the short term and the long term, you will be able to point at my comments here and laugh at their completely inaccurate pessimism.

May you all survive the arrival of our future AI Overlords  emoticon

Hah, exactly.  I am just optimistic that we will solve the terrible risks.  But yes, the risks are terrible.

I make a point of being polite to Siri ... 
thumbnail
Linda ”Polly Ester” Ö, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 5293 Join Date: 12/8/18 Recent Posts
I make a point of being polite to "hey google". emoticon Even if that AI doesn't turn into a potential AI overlord, I wouldn't want to get used to treating them badly. I think that if we treat them like machines, we are more likely not to recognize them as persons even if they were to turn into persons. Also, and more urgently for the time being, even if they wouldn't develop into sentient beings, I think treating them badly would make me more likely to treat sentient beings badly too. But I guess humanizing them (as a user rather than as a developer) also might make us more likely to neglect the risk that the AI will operate from a different agenda, like Chris suggests. I guess it might be possible to treat them respectfully and still keep in mind that they operate differently, but the way we humans operate, we tend to entangle such things more than we like to admit. Complex stuff, this.
thumbnail
Linda ”Polly Ester” Ö, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 5293 Join Date: 12/8/18 Recent Posts
I find this discussion interesting as there are good arguments from different standpoints. I hope they will all be taken into consideration (if not from here, then because others think of them as well) and result in relatively good decisions - or at least that someone talented will make another film in the genre, with new twists to the plot. 
thumbnail
terry, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 1636 Join Date: 8/7/17 Recent Posts
curious:
Chris Marti:
Oh, I'm all for trying to do what we can do to reduce the negative effects of AI technology, intended and unintended. I doubt, however, that its creators, purveyors and fans, the vast majority of which appear to see AI through rose-colored glasses, are the best way to get that done. This is like risk management and auditing - you need a group of people who know how to find flaws, don't trust anyone, and have the authority to do their jobs unbound in any way by the other group. Industries tend to quash those kinds of efforts, though.

So to me, it is kind of obvious that the only way to control AI will be with other AI. And if you follow that thought through a little, it is pretty clear that a whole AI ecosystem will quickly develop and evolve on its own. We will lose control very quickly, unless there is a tech collapse that stops the process.

All we can do is try to kick off the process skilfully.  We have an opportunity to design the AI equivalent of RNA, DNA, mitochondria and simple invertebrates. If we do it well, AI evolution will maintain good qualities and drive out bad qualities. Future 'vertebrate' AI might even be grateful.  If we do it with compassion, future AI might even be compassionate towards us.


   Auwe!

   Designing rna, dna, mitochondria, invertebrates. Ecosystems! OMG. Opportunity!

   "Good qualities" in a darwinian scheme ("evolution") involve survival of the fittest. One can only pray for a "tech collapse that stops the process."

   I wanted to get a bumper sticker that says, "jewelers do it with polish" but now I am thinking, "buddhists do it with compassion."

terry
thumbnail
terry, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 1636 Join Date: 8/7/17 Recent Posts
terry:
curious:
Chris Marti:

   I wanted to get a bumper sticker that says, "jewelers do it with polish" but now I am thinking, "buddhists do it with compassion."

terry

   I did get a bumper sticker, yesterday, in the local garden shop/ head shop/ private post office and sundries store in ocean view. I added it to the two I have on my 4wd tacoma pickup, "live aloha" and "wag more, bark less."

   It reads: "surrounding yourself with molten lava helps keep you calm and centered."

   Counterintuitive, I know.


t
thumbnail
Ni Nurta, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 560 Join Date: 2/22/20 Recent Posts
terry:
It reads: "surrounding yourself with molten lava helps keep you calm and centered."
Counterintuitive, I know.

Somehow I feel there is actual substance behind this sentence
thumbnail
terry, modified 12 Months ago.

RE: AI, Identity, and Consciousness

Posts: 1636 Join Date: 8/7/17 Recent Posts
Ni Nurta:
terry:
It reads: "surrounding yourself with molten lava helps keep you calm and centered."
Counterintuitive, I know.

Somehow I feel there is actual substance behind this sentence

   Solid rock. Incandescent.

   My cabin sits on a lava field. When my nephew-in-law visited last month he asked me if the reason my buildings were on posts and piers was so the flowing magma could pass underneath and not damage the structure.

   Kilauea has been quiet lately, for the first time since the 70s; it destroyed hundreds of houses last year. Some of the land here in ka'u has been inundated several times in human memory.

   I can see clearly now.

t
Sim, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 15 Join Date: 3/5/19 Recent Posts
Chris Marti:
This is like risk management and auditing - you need a group of people who know how to find flaws, don't trust anyone, and have the authority to do their jobs unbound in any way by the other group.

Great advice, and I am surprised to find myself believing that a small number of industry leaders might actually heed it. I'm with you that this kind of behavior on the part of industry would be very out of the ordinary, and would seem to fly in the face of market incentives for short- and medium-term competitiveness. A substantial technical lead and/or massive additional resources would be needed to compensate for the additional burden of appropriate risk management.

And yet, as someone who has followed the field for some time, I am positively shocked at how feasible this avenue appears to be today. Hard-to-predict events already observed include DeepMind's successful demand that an ethics board be established as a condition of Google's $500 million acquisition of the company in 2014, public awareness raised by celebrities like Stephen Hawking and Elon Musk, and the Asilomar conferences and the resulting AI principles.

The history so far has been fascinating, and I can't say I understand it. Maybe the kind of mind that can advance this arcane field correlates with the kind of mind that is receptive to your line of reasoning. Maybe they learned something from history -- Oppenheimer, upon witnessing the first nuclear explosion, recalled the Bhagavad Gita:
Now I am become Death, the destroyer of worlds.
thumbnail
Chris Marti, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 3759 Join Date: 1/26/13 Recent Posts
Sim, measures and deals and processes like that are very nice, I agree, but the people who pursue them are not the people we need to worry about.
thumbnail
terry, modified 1 Year ago.

RE: AI, Identity, and Consciousness

Posts: 1636 Join Date: 8/7/17 Recent Posts
Chris Marti:
If we don't specify the objective correctly, then the consequences can be arbitrarily bad.

I'm sorry, but this is classic technologist-speak. Problem is, programmers are human beings who aren't clairvoyant and either don't know the consequences of their code or don't care. Even worse, there are some companies that deliberately create code that is disruptive to what most of us would call societal norms.

Question: How does one go about specifying the correct objective(s) to code so as to eliminate negative consequences? There is also a meta-question - how does one decide if the objective of the code is, itself, appropriate and without negative consequences? Who decides?

I think humans will continue to invent and deploy technology, including AI, and some good and bad sh*t will be the result even with the best of intentions. It will be left to all of us, the willing, semi-willing and unwilling experimental participants to deal with the consequences, both good and bad.


if I ever lose my faith in you
(sting)

You could say I lost my faith in science and progress
You could say I lost my belief in the holy Church
You could say I lost my sense of direction
You could say all of this and worse, but
If I ever lose my faith in you
There'd be nothing left for me to do

Some would say I was a lost man in a lost world
You could say I lost my faith in the people on TV
You could say I'd lost my belief in our politicians
They all seemed like game show hosts to me
If I ever lose my faith in you
There'd be nothing left for me to do
I could be lost inside their lies without a trace
But every time I close my eyes I see your face

I never saw no miracle of science
That didn't go from a blessing to a curse
I never saw no military solution
That didn't always end up as something worse, but
Let me say this first
If I ever lose my faith in you
There'd be nothing left for me to do
