@leshy
Created November 8, 2012 22:18
answer to some questions about hard ai and rights

answer to http://josephdreamboatlevitt.tumblr.com/post/35273273231/just-some-notes-for-a-new-project-im-working-on

totally give me feedback if you disagree with something.


you are asking a question about valuing AI life, so I'll answer your last question first, on value of life in general.

Even if an AI valued human life more than animal/plant life - would it value a human life as more than millions of animals or plants? Would it ignore scale? Why?

I don't think that an AI's thoughts on this should be any different than our own.

we have to answer: how does a being in the world attribute value to anything?

  • either arbitrarily (dogma, intuition, whatever; this is the position most animal rights groups take)
  • or rationally, once that being has some goals in life defined (those goals are, again, defined arbitrarily)

so an AI can have hardcoded arithmetic on the values of lives. whatever that arithmetic is, it should be easy to answer your question once you have that definition. or an AI can have particular goals in life hardcoded, and then it can decide on the values of living things depending on those goals. (cats are probably more valuable than humans if your life purpose is to fill the world with hairballs)

  • and now! a bit on human feelings/moral intuitions on value of life, in order to be able to talk about the value of AI life:

most people wouldn't like to eat monkeys but feel that carrots are ok to eat. I think that most of us feel that there is this gradient of 'value' of life related to 'the amount of soul' (intelligence/identity/self) a being has. (intelligence vs identity is an important distinction I won't go into now, and mere potential for those is another important distinction, hi abortion debate)

we draw a line on which identities are small enough for us to eat. some vegetarians have drawn this line a bit lower than other people did. from fauna, some people eat only fish. fish are really dumb. I bet that it's really hard to find a person that eats only monkeys and fish. but you could find a person that eats only chicken and fish.

the size of some creature's identity doesn't have to be an intuitive, esoteric thing. we can talk about the kolmogorov complexity of an identity. in short: how much would it take for us to implement the identity of some being as a computer program, and how big would that program be? this is a concrete number.
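
kolmogorov complexity itself isn't computable exactly, but compressed size gives an upper bound on it, and that's the usual practical stand-in. a minimal sketch of the idea in python (the 'behavioural descriptions' below are made-up placeholders, not anything from the original post):

```python
import random
import zlib

def identity_size(description: bytes) -> int:
    # crude upper bound on kolmogorov complexity: length after compression
    return len(zlib.compress(description, 9))

# hypothetical stand-ins for behavioural descriptions of two beings
carrot = b"grow toward light; store sugar in root; " * 50  # very repetitive
monkey = random.Random(0).randbytes(2000)                  # rich, hard to compress

print(identity_size(carrot))  # tiny: the repetition compresses away
print(identity_size(monkey))  # close to 2000: little structure to exploit
```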

then there is the next question: how do we accumulate the value of lives? is it ok to just add kolmogorov complexities up? I think that whenever we formalize those things and get some kind of "morality algebra", we are bound to deduce situations in which it parts ways with our intuitions, as our intuitions are not consistent.

all of us are forced to answer those questions implicitly, by killing (yes plants too) for food. we are forced to compare values of lives.

not only do we compare the values of different lives, we are also forced to compare the value of life with the value of non-living things. (we give life a monetary value) for example, will we add a safety device to an elevator? how about a safety device for the safety device? (etc) how frequently will we inspect it? every 5 minutes? probably not. why? it would be too expensive, our lives are not worth that much.
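
here is a back-of-the-envelope sketch of that implicit trade-off, with every number invented purely for illustration:

```python
# invented numbers, purely to illustrate the trade-off
cost_per_inspection = 50.0                 # technician time, downtime, ...
inspections_per_year = 365 * 24 * 12       # one inspection every 5 minutes
extra_deaths_prevented_per_year = 0.0001   # vs. inspecting once a year
value_of_a_statistical_life = 10_000_000.0

yearly_cost = cost_per_inspection * inspections_per_year
yearly_benefit = extra_deaths_prevented_per_year * value_of_a_statistical_life

print(yearly_cost)     # about 5.3 million
print(yearly_benefit)  # about 1 thousand -> we implicitly decide it's not worth it
```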

What would it take for Artificial Intelligence to be real and worthy of rights of its own? The capacity for suffering? And how can you get past the Chinese Room argument? How can you prove actual suffering as opposed to flawless imitation of suffering?

yeah, this is partly a question about the validity of the turing test. each element of the chinese room doesn't understand chinese, but the room as a whole does. just like each neuron is not intelligent, but the brain is intelligent as a whole.

  • first, I'll prove that an AI that feels like we do is possible:

our brains are machines. we can simulate physical machines. there is no reason to believe that we won't be able to scan our brains and run them in computers at some point. (check the blue brain project) would the emotions we'd experience then somehow be invalid?

if you don't believe that a simulation of a brain is possible, you don't believe that the brain is a physical machine, which means that you've introduced some kind of "magical soul juice". two arguments against this:

  1. you can't use something you can't define as an argument. I'm taking a physicalist position and saying that a thing you can't define and test doesn't exist.
  2. magical soul juice would need to have motor control of a physical machine. there would be a break in causality in our brains. electrons in our neurons would have to magically spawn in order for the soul juice to move our arms. this is not happening.
  • then I'll talk about imitation

in order to "flawlessly imitate" something, you need to implement that something fully. if an actor is flawlessly imitating someone, is that someone living inside of an actor, in some form? it is still an imitation in a way that actor is more then the thing he is imitating, but he can't be less.

I'll refer to kolmogorov complexity again: there is always a minimum amount of computation that someone needs to do in order to arrive at some complex result. anyone that arrives at this complex result did that computation in one form or another.

a thing acting intelligently has to be intelligent.

  • and now the hard one: How can you prove actual suffering as opposed to flawless imitation of suffering?

you are asking which test you can perform in order to prove that something that passes all tests doesn't pass all tests. this is paradoxical. you are asking if we can prove that a philosophical zombie is a zombie.

I don't think that you can.

proving emotions is harder than proving intelligence, as intelligent behaviour is something that you do, while emotion is something that you experience, and I can't peek there.

but that means that we have a problem. a being could pretend to suffer. perfectly pretend. and there is a difference between fully understanding and simulating an emotion, and experiencing it. you need that qualia, sentience, that magical soul-juice we can't put our finger on.

I don't know.

but AI is not at a loss here in comparison with other people. you might be faking it too. I choose to ignore this until I figure out something better.

to me, intelligence and identity, which are testable, are enough to give 'human' rights to AI. if it ends up writing songs about life and making drunken calls to its crushes, I'd conclude that it 'feels', while keeping in mind that it might be just pretending, just like I'm keeping in mind that everyone that's not me might be pretending too.

How tied is AI to its physical housing? Does making a copy of itself raise the same questions that star trek transporters always made me wonder about? I.E. is it really you on the other side? Or are you destroyed, and a copy of you created with all your memories? Do you die? Is that the same with AI? Or can they create a copy, and then connect with that copy over the network, integrate with it, and then remove the section of themselves that is left on the original server, without any disruption in the flow of consciousness?

yeah, you can copy it. you can pause it, print out the code together with the memory state, take it elsewhere, type it in and run it, and the program won't notice the difference.
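
a toy version of that in python, with a dictionary standing in for the 'memory state' of a mind (nothing more serious than that):

```python
import pickle

# a stand-in 'mind': some mutable state plus a rule for updating it
state = {"memories": ["woke up"], "step": 0}

def think(s):
    s["step"] += 1
    s["memories"].append(f"thought number {s['step']}")

think(state)
think(state)

snapshot = pickle.dumps(state)     # 'print out the code together with the memory state'
restored = pickle.loads(snapshot)  # 'type it in elsewhere and run it'
think(restored)

print(restored["memories"])  # continues exactly where it left off; it can't tell it was moved
```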

you could even implement it out of wood and marbles and dominos.

I thought that you'd take this question elsewhere... for example, if you copy an AI, is it morally ok to erase one of the copies? this reminds me of the abortion debate again... also, how about after they've been running for 5 minutes? we could again talk about the kolmogorov complexity of the changes that have happened in those 5 minutes. you could say "this soul has grown by one dog and a chicken. am I ok with killing those now?" now think about identical twins... the situation is the same.
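
that '5 minutes of divergence' could in principle be measured the same way as before: compress the difference between the two copies' states. again just a made-up sketch, not a serious measure:

```python
import pickle
import zlib

copy_a = {"memories": ["woke up"]}
copy_b = {"memories": ["woke up"]}

copy_b["memories"] += ["saw a dog", "saw a chicken"]  # 5 minutes of separate experience

# rough proxy for how much genuinely new information copy_b carries over copy_a:
# compressing both together mostly pays only for the new part
baseline = len(zlib.compress(pickle.dumps(copy_a), 9))
combined = len(zlib.compress(pickle.dumps((copy_a, copy_b)), 9))

print(combined - baseline)  # rough size of 'one dog and a chicken' worth of growth
```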
