Is our AI learning?
Tyler Cowen points us to YouChat, a new AI chatbot that, as far as I can see after studying this carefully for 15 seconds, is supposed to be more up to date than the OpenAI bot and integrated with a search engine, which naturally makes it the next new thing and presumably worth billions of dollars to potential investors.
In the interest of being scrupulously fair, I decided to give YouChat a chance to answer the same question that OpenAI fumbled badly, viz.,
If a three dimensional polygon has nine sides and nine vertices, how many edges does it have? How do you know?
Here is the reply:
A three-dimensional polygon with nine sides and nine vertices is called a nonagon. A nonagon has 12 edges. The number of edges in a polygon can be determined using the formula E = V + F – 2, where E is the number of edges, V is the number of vertices, and F is the number of faces. In the case of a nonagon, V = 9 and F = 9, so the formula gives E = 9 + 9 – 2 = 12.
Hmmm . . . seems familiar . . . oh wait . . . this is a mish-mash of the answers given by the OpenAI chatbot that I posted here yesterday. The footnotes are all to yesterday’s post. It somehow picked out the right formula, but it also picked out one of the wrong answers even though the right answer was clearly presented in my post. So YouChat is up to date, give it that, but it has no judgment and cannot do basic arithmetic.
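For the record, even taking the formula the bot itself cited at face value, the arithmetic comes out to 16, not 12. A minimal sketch (function name mine, assuming the shape is treated as a polyhedron with 9 faces and 9 vertices):

```python
# Euler's formula for a convex polyhedron: V - E + F = 2,
# which rearranges to E = V + F - 2.
def edges_from_euler(vertices: int, faces: int) -> int:
    """Edge count of a polyhedron with the given vertex and face counts."""
    return vertices + faces - 2

# The shape in question: 9 vertices and 9 faces ("sides").
print(edges_from_euler(9, 9))  # prints 16 -- not the 12 the chatbot reported
```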
Verdict: more A than I. A search engine that just gave you a link would be more useful. Any human reading my post would be able to identify the right answer.
“…Verdict: more A than I…”
[Exactly. Should we ever expect otherwise? The “I” will always be derived from the perceptions of those coding the “A” software with regards to what they believe the definition of “I” is. Too bad if that sounds too much like “It depends on what the meaning of the word ‘is’ is.”]
IOW, just because genetics is the coding system for life, that does not mean that machine code makes machines alive. This means that mankind is far closer to being dog than it is to being god, regardless of how backwards our thinking may be.
back in the day when i was “any normal human” i am pretty sure i could have given the right answer to Kramer’s question. I am no longer sure I could…at least not in the time I am willing (able?) to give to it. Does that make me not a normal human? Ask the machine that question.
as for closer to dog than god, see the T.S. Eliot poem The Hippopotamus [no relation, i think, to the hippopotami who “ask no more what am I?” in the children’s story (poem?)].
I think I can find god in a dog more reliably than i can find god in a man…but that begs the question. I suspect that god… whatever or whoever that may be if anything (but what is a no-thing?) may be a great deal more complex than i can imagine…even containing evil, or at least concerned with what is evil in men. or just sufficiently stupid.
while i am here, I just read an article on Effective Altruism. When I read an article on that subject here on AB a little while ago, I did not realize it was “a thing.” All I can say is god help us. That humans, much less “philosophers,” concern themselves with such nonsense longer than it takes to finish the last beer and say, “well, time to get home” causes me to worry about the human race, not even counting global warming or loose nukes.
Well, yes sir – exactly so.
Shouldn’t YouChat and other similar AI tools find critiques like this and make corrections to itself by itself? Try again in a couple of days?
it took me a few seconds to realize what you were saying…damn carbon based hardware…which i understand (now) to be the same as what Kramer and Ron and I are saying.
The AI machine may “learn” but it is still a machine and its answers unreliable because they don’t “mean” anything. Humans actually have the same problem, producing logic that is meaningless because it is based on simple un-free association.
Nevertheless our un-free associations are likely based on more meaningful input than a machine’s….except in the case of computer programmers and middle managers…from my observations.
if i got it wrong and you are not agreeing with Kramer and Ron but saying something like the machine got it right after it took in Kramer’s input is evidence of “learning,” I’d say “no.” The intelligence was supplied by Kramer and the machine only “found” it. Suppose I had written what Kramer did but got the wrong answer. Then the machine would have reported my wrong answer as a solution. I don’t know if the algorithm has some clever error checking routine, but from Kramer’s experiment I would guess it’s not a very reliable one.
In other news, I read that a self-driving car caused a serious accident recently. I would like to report that I predicted this without needing to research the data or consult the experts. What I worry about is that we have already, to some considerable extent, tied ourselves to machines and somewhat arrogant programmers…because the people who buy their product are happy enough to “save money” while not worrying their pretty little heads about the consternation caused their customers when the programmer’s foresight was inadequate to the situation, the machines having no foresight whatsoever…
not quite AI but right next to it…I am told that back in the day the inventors of game theory…intelligent men all…argued that since we had the Bomb and the Russians did not yet have the Bomb, our best strategy was to bomb the Russians now (that is, then). While Putin may be proving that they were right, I am grateful that less intelligent humans prevailed at the time and have given us, so far, an extra seventy years or so of life.
Machines aren’t smart, and humans who act like machines aren’t smart either.
Sweet. In the famous words of Forrest Gump “That’s all that I have to say about that.”
The machine was wrong, but this was identified in a publicly searchable document (AB), so isn’t the machine supposed to be looking around for this kind of input, assessing it and then correcting its original errors?
Depends what you want it to do…or what you want to call what it is doing. if it’s just looking through publicly available documents for previous solutions to some problem…you would be about right, but don’t call it learning. i am not sure how it goes about “assessing” it, but based on Kramer’s result, it doesn’t sound like it does a very good job of that.
A fine allusion to a notorious George Bush Jr. (Yale ’68) speech 22 years ago.
‘Is Our Children Learning?’
Yale News – Dec 2012
In what will likely become one of his most enduring rhetorical legacies, George W. Bush ’68 asked a crowd of South Carolinians in 2000: “Is [sic] our children learning?” …
(But the Yale News piece ostensibly is a review of a 2012 book ‘How Children Succeed: Grit, Curiosity and the Hidden Power of Character’, available at used-book shops everywhere.)