@tra38
Last active September 15, 2016 01:22
When you search for a term that does not appear in the corpus, you receive an error.
require 'classifier-reborn'
lsi = ClassifierReborn::LSI.new
strings = ["A dumb oven doesn't deserve credit, but a 'smart' oven does. Machine capabilities are constantly increasing, meaning they are doing more and more of the stuff that was traditionally reserved to humans. So...as a result, they deserve more of the credit. The long-divison algorithm, hm, I would guess it'd dependent on who's performing the algorithm. If a human did it, good job. If you have the calculator perform the long-division algorithm for you (like I reluctantly do), well...the calculator gets the credit for the math problem.","I guess we're back to 'agree to disagree' :slightly_smiling_face: i have no way to understand giving credit for a long-division problem to either a calculator, or to the long-division algorithm","tra38 so, by that reasoning, would you give credit for baking bread to an oven? (if i'm slipping back into the argument, apologies. maybe i just can't pin down the cause of my confusion.) and indulge me only one more example: would you give credit to the long-division algorithm for correctly producing the right answer?","Only if it's some sort of fancy oven that can self-regulate it's own temperature (as opposed to the baker continually checking the temp manually and adjusting the heat via gas-jet or adding/subtracting wood)."," I am concerned about the ethics of giving AIs credit for the same reason I'm concerned about the ethics of giving humans credit. Just like a human wrote something and should get credit for it...the AI 'generated' text and thus should get credit for it. I don't support treating algorithms as 'mere tools' to be used, but as their own external entities that are stand-alone and work independently of their creators, and being pro-'AI credit' logically follows from that worldview.(I also tend to be anti-AI, and so arguing for AI credit is in some way an indirect argument against pro-AI views that sees algorithms as tools to be used and abused.)","Lather, rinse, repeat.","tra38 while still honoring the agree-to-disagree truce, i'm curious. if it doesn't matter to you whether something is self-conscious, and you're concerned more with outcomes, why are you so concerned about the ethics of giving AIs credit?","@michaelpaulukonis: It doesn't matter to me whether something is self-conscious. For me, I care about other factors (Are the machines being productive members of society? Are their capabilities equal to or better than the human race? Are they intelligent, creative, innovative etc.?) Self-awareness and resemblance to humanity does not enter into my calculations at all. But *other people* (especially science fiction fans and non-technical laymen) do care about these sort of factors, and I'd want to figure out whether machines could meet their criteria. After all, if the US government wants to pass a law mandating humans to give moral considerations to certain robots, they are more likely to trust @danbernier's criteria than my own. So it's important to grapple with these concepts...if only because other humans want to grapple with these concepts.","i'm not saying suffering is a hallmark of being human - i'm saying it's a good criteria for whether an entity deserves moral consideration. it's an ethics question, not a test for intelligence.","I'm not convinced that suffering is a hallmark of being human. Say there's a person who's lived a great life - toast never burnt, water always at the right temperature, etc etc etc. They've never suffered. Perhaps they can't create art. But are they in-human? 
Ditto for self-consciousness - it's not an inescapable part of intelligence.","i think i'm the one that keeps bringing up suffering. i don't see the similarity to IQ tests that require cultural knowledge, though - can you clarify?","@tr38 - why does it even matter if some is self-conscious? There seems to be an assumption that 'artificial intelligence' will be recognizably human intelligence, instead of some other form of intelligence (say an octopus). And quite frankly all this talk about 'suffering' seems to be like IQ tests that require candidates to know what baseball and the president is. Perhaps that's true of many Americans, but you can't judge the intelligence of somebody in a remote Kalihari village with such criteria. Nor do neural-typical strictures that fail to recognize all humans work for non-human intelligences. This is probably why I like my generative texts to not worry about looking like 'normal' literature.","i get that tension, tra38 - we can't yet know if something is self-conscious, so it's tempting to settle for a proxy measure: something objective, knowable. but as you point out, we don't have any options like that right now. and anyway, proxy measures are tricky: it's easy, and dangerous, to forget that they're not what you _really_ want to measure, and using the wrong one can steer you seriously off-course, like using lines of code as a measure of programmer productivity. for criteria, i'd rather hang on to actual-self-consciousness, because i think it's the best and clearest criterion. then, we can a) keep researching cognition, and b) argue over proxy measures in the meantime.","@danbernier: How do you measure if something is actually conscious/self-conscious? Right now, since we can't peer into our own brains, our only recourse is to look at the external behavior of an entity, and assume that if the entity acts as though it is conscious, it must be conscious. This approach, of course, wouldn't work in the case of evaluating a machine. Can the 'Chinese room' be conscious? If I program a bot to convincingly claim to be able to suffer, and be able to provide proof of that, then does it actually 'feel' that pain? Or is it just a clever parlor trick, just like ELIZIA, merely relying on apophenia and human empathy to succeed? Maybe we can represent the bot as a state machine and that it 'suffers' when the bot is set to the 'suffering' mode. But is that 'actually' feeling pain? Or is it just another parlor trick? Since the process of identifying consciousness/self-consciousness is subjective and prone to error (either us declaring a bot as being conscious when it's not...or us denying a bot's consciousness), I'd rather prefer a more objective standard, for moral consideration. I don't really know what that objective standard could be though, other than labor or 'free/unpredictable will'.","Can there be a fully-conscious person who cannot suffer? Such as a sociopath (no emotions?) with leprosy (no physical pain) ?","i keep coming back to the capacity for suffering. a car that's driven too hard, and not well-maintained, falls apart, but we don't feel bad for it, because it can't suffer. if the car could sense itself, then maybe it _could_ suffer, and then in that case, we'd have more of a moral obligation to take better care of it. 
if i'm right about all that, then, @tra38, i think the difference between a Narrow AI & a general AI is much bigger than the difference between a plumber and a doctor, because the Narrow AI probably isn't self-conscious, so probably can't suffer, so we probably don't owe it moral consideration.","related: i've been reading about/playing with the Markov Cluster Algorithm. you feed it a sparse matrix of items, and their similarity to each other. it interprets this as a proximity graph, and then does random walks. it infers clusters from graph-reachability. it seems to work pretty well. a lot depends on how you calculate those similarity scores - get that wrong, and you're lost. but if those are good, it seems to work pretty well. punch-line: should we consider the Markov Cluster Algorithm - which is basically just some graph-walking, implemented as matrix math - to be an AI? (i think we should.) is it narrow? (i think it clearly is - _very_ narrow.) can it suffer? (I strongly doubt it.) is it self-aware? (i strongly doubt that, too - again, 'it' is just turning an adjacency graph into a matrix, and doing matrix math.) do we owe it moral consideration? rights? credit for its efforts? (i can't see a way to do this that makes any kind of sense. to even say 'its efforts' feels very strange.)", "you're right, being intelligent != being self-conscious, self-aware. though i think being self-conscious is sort of a prerequisite for a general intelligence, isn't it? and i think it's also sort of a prerequisite for deserving rights, or moral consideration.","Back to the topic, there seems to be some sort of conflation between being intelligent and being self-conscious; these are separate things.","Deep Mind, for its part does believe that the building of 'real AI' (or artificial general intelligences) could indeed be used to solve all kinds of problems. The Bloomberg headline for the interview with Deep Mind's CEO is entitled 'Who Should Control Our Thinking Machines?' (spoiler: the answer is the United Nations, or some other entity that can ensure that these thinking machines can be used by all *humans*...never mind the wishes of the actual AGI). Since he does assume that this AGI would be a useful tool, he may implicitly assume that we can predict what the AGI does. Yet if we can predict...then it's would then be dismissed as being not 'real AI'...and the search will keep going...hm... The AGI will probably lack equal rights (or indeed, any rights) in Google's future. So, just like the present...","Couple of Problems I Have With the CEO's Argument:
1) Humans are intelligent, and yet they make horrible, horrible mistakes anyway. Why should we trust an AGI to be any different? They can think faster, they can make less mistakes, but that doesn't mean you can expect them to solve climate change, aging, and other pressing issues.
2) The 'other pressing issues' can kill us long before we develop AGI.
3) Intelligence can't be seen as the single criteria that makes humanity superior. After all, we have these 'mechanical brutes' that are successful at their assigned tasks, far more so than human's vaunted intelligence'. Maybe, to keep the idea of 'human supremacy' alive, we need to resort to vaguer terms like 'creativity' and 'innovation'.
4) Can we even recognize an AGI once we invent it? Or will we dismiss it as just another algorithm? It's possible that our intuition will recognize an AGI once we see it, but I'd rather we have specific definitions of what an AGI is actually supposed to be rather than just guessing at whether a specific algorithm is 'general enough'.
5) Do we even need AGIs? Narrow AIs would be more maintainable and better at certain tasks than a Generalist AI. Humans are 'generalists', yet we don't take plumbers and tell them to be doctors. People are specialized to face certain tasks, and there's no reason why algorithms can't also be specialized.","regarding the AI effect, there's an interesting connection to free will here. if we live in a mechanistic universe, the only definition of free will that makes sense & is even moderately satisfying is that a free action is one that we could not predict without going through it. we can consider an AI to be a program whose behavior we can't predict without using all our intelligence in the same kind of theory-of-mind way we use to empathise with other people -- which is, of course, a terribly inefficient way of predicting behavior, and means that the AI program is a singularly useless program insomuch as it would be easier to put a person in place of it. once something is readily explained it is no longer ai not because ai refers to things that are new but because we no longer need to use the full force of our cognitive effort to poorly predict its behavior. in other words, we can't ascribe free will to it anymore because it's easy for us to predict in the normal case -- and thus, it is useful as a component in a task-driven project. (real AI is oddly immune to being subsumed by the normal capitalist machinery of the spectacle because even as it never sleeps it would be unwilling to do the kind of work we want because it doesn't have a lot of strong needs by which it can be manipulated. in other words, it's cheaper and easier to build more humans. as a result, to the extent that ai is close to true ai, it gets closer and closer to the category of 'useless machines' & thus more like an art project than a utility)","This suggest that the algorithms are intelligent, More than anything else, it suggests that our traditional definitions of 'intelligence' are poor.","It probably is. We would be better off not using the term AI and instead start referring to these algorithms as being 'Optimizers' or 'Experts'. But I do think that maybe we use the term AI as a sort of acknowledgment of the increasing capabilities of our machines, and that fear about AI is really fear about automation in general (with the 'intelligence' question being more of a sideshow if anything).","@danbernier: Yeah, I don't really distinguish between an algorithm and AIs, and use the terms interchangeably. Intelligence is a fuzzy concept, but it is clear that the algorithms we have built today are able to do actions that traditionally were seen as signs of intelligence. This suggest that the algorithms are intelligent, and since the algorithms are artificial...artificial intelligence. I do admit that these algorithms are 'weak AI' though, but I am more concerned about the proliferation of 'weak AI' than that of 'strong AI'. Arguments against my blog post does sound interesting as well, and does unsettle some of the assumptions I made (especially about how prediction can actually mean some level of control...nd whether it is possible to truly 'know' what a neural network is doing).","But, in physics - don't we (pop-sci-wise, anyway) go down to Heisenberg's uncertainty principle? Say we could predict the 'Free will choices' in a mind at the sub-atomic level -- but that would require us to know both position and velocity of all particles _and we can't_. 
So, perhaps our 'free will' is an illusion, but we are unable to provide identical inputs or know all variables in order to predict. So, for all practical purposes (for certain, limited definitions of 'practical' implied by known physics constraints), the will is unpredictable, and thus 'free'? [NB - typed prior to your 'ether' post] And so, for an algorithm of sufficient complexity that we would call it 'strong AI', it could still have free will, and we would be well advised to treat it politely, and restrict it from the internets. As you would with a 5-year-old holding an chainsaw that could shoot atomic bombs.","maybe 'free will' is like 'ether' - a crappy approximation, a stand-in notion for something we just totally don't understand yet.","Yeah, I'll avoid the whole philosophical 'Free will' debate. But extending it to algorithms -- most algorithms of sufficient complexity and varied multiple parameters are unpredictable for most reasonable human-levels of prediction (on an initial run).","isn't free will a contested notion, even in humans? i'm thinking of pop-psych books here, like predictably irrational, etc. i have no idea what i'm talking about here though. this convo got me wondering whether all things are eventually reducible. i want to say no, but then i remember arguing with my uncle about evolution, how he kept shouting 'Irreducible complexity!'' (meaning that the organism in question had to have been created by god). maybe i think that all things are eventually reducible, but when we don't know _how_ they reduce, we make all kinds of hilariously screwy guesses - guesses so bad they tie us up in deductive knots.","How atomic do the steps have to be? Do they have to include shift the register left (itself an abstraction)? Or can we just jump to 'sort the collection alphabetically' ? Back to 'knowing how something works trumps free will' would seem to imply that we can control it, because we can predict behavior. But that is demonstrably untrue -- cf. the simple double pendulum, strange attractors, and other elements of chaos science.","i guess here, when i say AI, i mean a strong AI - a sentient intelligence. to declare a system to be a strong AI is a statement about its capabilities, not its implementation. OTOH, to declare a process to be an algorithm is to say that it's a straightforward series of steps to be executed - everything from a banana bread recipe, to long division, to curve-fitting... like i said, i think of them as totally distinct, and to compare them produces confusion (in me, anyway)","What _would_ the difference between an AI and an algorithm be? The AI would be a 'collection' of algorithms?","tra38, the main part of your argument that i still struggle with is that you don't distinguish between an algorithm and an AI. they seem fundamentally different to me. it just occurred to me that this is what Hofstadter talked about in the beginning of GEB - the difference between 'symbol shunting,' moving symbols around according to the rules of the formal system, and actually understanding, intuitively, what addition is about. (he was talking about humans in both cases, though - he wasn't making an argument about AI.)","michaelpaulukonis - agreed, that's a sticky point for me, too. also the next part: > If we know how it works, then they do not have any independent 'will' i'm not so sure that that follows necessarily, either",">Basically, if we built an AI, we know how it works. 
Unless it uses a neural network, in which case you know what it uses, and how you trained it, but not really how it works.","holding contradictory views is a very human thing to do :slightly_smiling_face: ","@danbernier: You are right about our disagreement being based on whether the algorithm and the AI are distinct cases. I tend to prefer erring on the side of giving credit, partly to counter the 'AI effect' and shifting goalposts ('Oh, machines can play Go, paint pictures, write short stories, etc. but you know that's not REALLY important, long live human supremacy...') that humans tend to experience. The AI Effect annoys me because it probably lull people into a false sense of safety and security. In addition, since we programmed the AI and know how it works, we may not really respect the AI we do build and find some loophole to claim that this AI doesn't actually meet the criteria we set out for it, and still keep searching for a better algorithm...Better to try to stop the search now, IMHO.","Also, apparently a blog post that I written in the past (arguing against the very concept of 'robot rights', due to us being able to learn about the AI we built) could be used to argue against both our viewpoints about whether AI deserves credit (tra38.github.io/blog/c21-robot-rights.html). Basically, if we built an AI, we know how it works. If we know how it works, then they do not have any independent 'will'. Without any independent will, they do not deserve any rights (including the right to credit). I like my blog post argument, but I also like giving credit to algorithms too. Holding contradictory views isn't exactly a virtue, but I'll have to accept it for the time being. (The blog post itself cites two papers dealing with AI 'creativity'...and two of the co-authors (Mr. Bringsjord and David Ferrucii) also wrote the 'Artificial Intelligence and Literary Creativity' book that @michaelpaulukonis linked to. )","at any rate, here's the preface, which I've yet to complete: kryten.mm.rpi.edu/brutus.preface.pdf whoah, one of the authors -- David Ferrucci, was the principal behind IBM's Watson, the Jeopardy! winner. Which is a direct line from his Brutus work - Jeopardy answers involve 'creative' think -- but of a type that can be brute-forced, with the right type of algorithmic brutality.","well, can a person in a coma suffer? are they aware at all? and, beyond that, is there an argument for human dignity? (that one feels specist. species-ist. i've pronounced that word, but idk if i've ever spelled it.)","We're getting far afield of the slack, but do you owe a person in a coma individual moral consideration? (and I will note that the NLP-side of me sees how hard it is to segment coma from individual). I will grant that a person in a coma is highly unlikely to create art. One of the books I hauled to a previous meetup is all about the machine capacity for creativity -- and the authors firmly stand on the side of 'not'. https://smile.amazon.com/Artificial-Intelligence-Literary-Creativity-Storytelling/dp/0805819878 (link updated from one that was ... _problematic_)","@michaelpaulukonis: i'm not saying capacity for suffering is a criteria for making art (though that _is_ a common claim). i'm saying it's a good heuristic for whether we owe an individual moral consideration - we don't have treat rocks morally. and, for the record, elephants have enormous capacity for suffering - they're very emotional creatures. 
it's not often my gen-art interests blur into my vegetarian ones :smile:","@danbernier: re: suffering - I think we go down a slippery-slope of reductionist thinking when we define against neuro-typical-but-not-absolute criteria. Can a psychopath create art (can a psychopath suffer)? How about an elephant? Also, that selection thing use important for evolution. And programmatic criteria for art-selection is pretty-much a non-starter right now. Other than Amazon's Mechanical Turk.","i think, to give credit to an algorithm for some generated works of art, is as confusing and misleading as giving credit to the Fibonacci definition for generating the next term. my fractal circles are, i think, a good example of this: http://designischoice.com/projects/fractal-circles/ i imagined, and wrote, and tweaked, an algorithm - a very, very simple algorithm - to generate a huge number of pictures, and i looked at (most of) them, and picked ones that i thought looked cool. i don't think i should share credit for these images with the few hundred lines of code i wrote - not because i'm greedy, but because i don't think there's anything _there_ to share credit with; it's an inanimate tool - a series of steps meant to be carried out exactly as i designed, and instructed. OTOH, if i somehow find myself conversing with a sentient AI, and we decide to make art together, then that AI _absolutely_ deserves to share the credit! i suggested that we separate entities that deserve moral consideration from those that don't based on capacity for suffering, not because i want them to suffer, but because it seems like the best-defensible position: it's hard to argue that we should increase suffering. but i like how you err in favor of giving credit to an AI even if it can't - if a specific case came up, i'd probably be arguing on your side. if we disagree, i think it may be that i see those two cases - the algorithm, and the AI - as distinct, or maybe as differences in amount that become a difference in kind. i think you see them as morally, ethically equivalent - am i right? or am i misreading you? btw, tra38, i just finished reading 'Speak' by Louisa Hall, which is very much about these ideas: https://www.goodreads.com/book/show/23215488-speak (whoops! sorry, i fixed the URL.) Several (or one, depending on how you look at it) of the characters are ChatBots, and the humans regularly tell them 'but you don't _really_ understand me, you're just a program! you can't feel anything.'' (edited)","tra38 i think you & i are talking about different things, when we talk about this :slightly_smiling_face:","@danbernier, I would really feel disappointment in the human race if they did decide to program an AI to suffer. I would at least ask what would be the purpose of actually doing it, and whether the programmers are covert sadists. But another possible question is: is it even possible to even induce suffering and feelings in an algorithm? I actually asked this question at Philosophy StackExchange, and got some very interesting answers. (http://philosophy.stackexchange.com/questions/34779/is-the-simulation-of-emotional-states-equivalent-to-actually-experiencing-emotio) As for me, I tend to believe in giving credit to the direct author (the algorithm) even if it does not feel or suffer (or, rather, does not feel or suffer in the same way that humans can). For me, the AI clearly made it, so it deserves credit...even if it doesn't really care whether it gets the credit or not. 
It's all about attribution and respect, so even if inanimate objects show no preference, humans would have a preference (towards giving credit or refusing to give credit), and I lean towards 'giving credit where credit's due'. Of course, I know that if I give credit to every entity (living or non-living) for the creative artifacts I'd wrote, then the by-lines will be much longer than the creative artifacts themselves. And not all 'credit' is equal. So it's a delicate balancing act.","that's a winning strategy, michaelpaulukonis - start with crap, select, repeat for a long time. it got us evolved.","@danbernier: algorithmic monkeys can write Shakespeare - the key criteria is the ratio of quality to gibberish, or crap, and the ability to self select. (edited)","tra38 i think it's useful to take intelligence out of the question - does painting/writing/etc require intelligence? side-step the question, & look at the results. does this painting/novel/etc _work_? is it quality? do you like it? this is a useful question, because quality work can be randomly generated - with an algorithm so simple, so obviously incapable of sensing anything, or any kind of awareness, that it clearly deserves no moral consideration. in one sense, natural beauty is a controlled random process that produces some beautiful results.","now, for your moral question - supposing we have an AI that makes art, and it's sentient, capable of feeling, capable of suffering. if i wrote that AI, and it makes art, did _it_ make the art, or did _i_ make it, through the AI? i think that it's clearly the AI that made it, and the AI should get credit. claiming that _i_ made it is nearly the same as taking credit for my (biological) children's accomplishments. it's theft, and should be treated as such. i think capacity for suffering is the important distinction, because i think reduction of suffering is a loose heuristic for morality, so that's where i start digging, with moral questions. but an AI that can suffer, that can feel - i think that's a ways off. until then, i have no problem taking credit for writing a nifty algorithm that mechanically produces a pleasing result","@danbernier: I have to write* a longform response to this sort of attitude one day. For now, I'll just summarize by saying that the fact that the AI 'works' matters a lot more than simply coming up with ideas and telling the AI to do them**. The AI may be a glorified paintbrush executing the whim of societal inputs that it cannot fully understand, but that just means that *we* are glorified paintbrushes executing the whim of societal inputs that we cannot fully understand. Writing and painting was once considered an intelligent action reserved for humans. If computers can do them, then either computers are intelligent...or writing/painting *doesn't* require intelligence. That sounds almost more horrifying. The human ego still has to adapt; the angst can't be easily excused away by simply claiming that we're all cyborgs now. *Or write an algorithm to generate the longform response. Haven't really decided yet. **Though, again, I do realize that humans have used tools before, and I know this is a tension in my thought. So I have to try to figure out how to distinguish between today's AIs and yesterday's non-mechanical tools. Why is this time different? How do we give credit to the 'idea guys' in a manner that doesn't detract from the 'automated labor'? (edited)"]
puts "We have #{strings.length} strings to scan."
strings.each do |x|
lsi.add_item(x)
end
puts "Complete!"
puts "Tell me what you want to search for"
input = gets.chomp
array = lsi.search(input, 8)
puts array
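One way to keep the script from crashing is to guard the search call. The following is a minimal sketch, not part of the original reproduction: it assumes the failure surfaces as a raised exception, and since the exact exception class depends on the classifier-reborn version, it simply rescues StandardError.

# Hedged guard (assumption): rescue whatever classifier-reborn raises when the
# query shares no terms with the indexed corpus, instead of letting it crash.
begin
  results = lsi.search(input, 8)
  puts(results.empty? ? "No matches found." : results)
rescue StandardError => e
  puts "Search failed for #{input.inspect}: #{e.class} - #{e.message}"
end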