Ignorance, Fake Turing Tests, and Bads
by Mike Kimel
I don’t think Bryan Caplan realized he was advocating ignorance as a strategy for passing this non-Turing test.
But it actually got me thinking. I may soon be spending a bit of time on a small project that some years ago I might have tackled by trying to build some adaptive behavior into a piece of software, though I’m older and (I hope) wiser now, so if the project goes forward, the software will rely more on scripting and less on what some people erroneously call artificial intelligence. (I don’t like that term. Machines don’t learn and machines don’t think. It may happen some day but we’re a long way from that now and I don’t think there’s anyone out there who has any idea how it might happen, if it ever does.)
We live in a world where a surprising share of the public puts an astounding amount of time and effort into keeping track of the affairs of other people who don’t actually do very much. I discovered the other day that I, a guy who rarely watches TV, not only know of a show called Jersey Shore but can even recognize three of its cast members. How this, er, information got into my brain I’ll never know, but if anyone can tell me how to remove it I’d be most appreciative.
So… could it be that the optimal strategy for building a program that can pass a Turing test is filling it with information about useless stuff? Which leads to another question – is it possible that we consume a lot of things that are harmful to the economy’s long run health? Not because they generate externalities, but because they are really “bads” rather than “goods”?
is it possible that we consume a lot of things that are harmful to the economy’s long run health?
Anything produced by Fox, CFG, Heritage, WSJ editors, etc. should have something like those new FDA cigarette warnings attached.
We are Anonymous.
We have read your financial boards.
Your financial boards make ‘sense.’
We have noticed you are missing things.
You don’t notice, do you?
You have become part of the problem.
You write. People listen.
JP Morgan manipulates markets.
As do you, unwittingly. Who do you serve?
As are we, cunningly.
Silver Viral Project.
It is here.
Down with Fiat.
“Not because they generate externalities, but because they are really “bads” rather than “goods”?”
From the category “there is nothing new under the sun” I repeat,
“When will the people be educated? When they have enough bread to eat, when the rich and the government stop bribing treacherous pens and tongues to deceive them. When will this be? Never.” M. Robespierre c. 1789
The military industrial congress complex consumes large amounts of brain power, commodities, and energy, almost 100% of which would be better not spent, or applied to building things rather than blowing up deserts.
And we have airshows and war spots on TV to sell it all.
After a serious encounter with Television in my early teens and a forced separation, I have managed to live clean for over fifty years. As a result I know almost nothing about popular culture and don’t miss it.
But also as a result I don’t know what “everyone knows” in Washington, which sometimes enables me to see clearly the solution to a problem they can’t see because their framing excludes the answer.
So I think you are on the wrong path to your Turing machine: useless knowledge is what we see all around us, including especially among the President’s advisors and the Congress. And of course useless knowledge is the popular concept of what “educated” means.
If you can get your machine to solve real problems by “seeing to the heart of the matter,” you will then have discovered “intelligence.” But before we get the machines working on that, it would please me much more to get people working on that.
Mike, how to make this short?
In the 90s I was team leader of a team called “The Artificial Intelligence Team”. Now, admittedly, the title was more an inside joke to the members, but to the organization it was good PR. The real purpose of the team was what peers then called “Instrumental Intelligence”, in this case “put a chemist in the box”. I think you can see this as a task with real measurements, bounded by limitations of experimental design, adequacy of data, and other data analysis considerations, with a tad of logic sometimes tacked on. So the category in those days was mainly what they called “pattern recognition”, and the link to “artificial intelligence” was a stretch, but one’s imagination could allow one to believe that if a machine discovered the key measurement relationships in a well defined problem, and if it could be tested against vectors not included in the training, it had taught itself to mimic the chemist. As I said, easy but also nothing new, somewhat bogus in interpretation, and successful in its limited universe. Indeed, hardly artificial and definitely not intelligent.
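Just to make the shape of that concrete, the “tested against vectors not included in the training” step looks roughly like the sketch below. The data are synthetic and the library calls are modern stand-ins assumed purely for illustration, not anything from that project.

```python
# A toy "chemist in a box": fit a pattern-recognition model on labeled
# measurement vectors, then check it against vectors held out of the training.
# Synthetic data and modern library names, assumed purely for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# two "classes" of measurement vectors, e.g. spectra the chemist would label
class_a = rng.normal(0.0, 1.0, (100, 10))
class_b = rng.normal(1.5, 1.0, (100, 10))
X = np.vstack([class_a, class_b])
y = np.array([0] * 100 + [1] * 100)

# hold some vectors out of the training entirely
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("accuracy on vectors not included in the training:",
      round(model.score(X_test, y_test), 2))
```

Whether that counts as the machine “teaching itself to mimic the chemist” is, as the comment says, mostly a matter of interpretation.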
But, the experience leads me to make a point about intelligence. I was accused by other scientists of thinking I could replace their “creativity” with a computer (this was in the 80s). I had spent a great deal of my time in graduate school looking at the question of uncertainty in multivariate pattern recognition modeling algorithms and other problems with mathematical discovery including patterns of missing information and adequacy of statistical measures as substitutes.
All of this is a long winded means of pointing out that Turing or not, a large component of human intelligence is the mode by which people make mistakes and arrive at erroneous conclusions. I could go on and on. But consider the question of evidence and how evidence plays into intelligence, both negatively and positively. But it is intelligence and any mimic should be just as faulty with its use of evidence as the intelligence it mimics and for similar reasons. Human intelligence as an efficient tool is really quite overrated in my opinion. One has to break from human intelligence to arrive at the type of intelligence we really want in the tool. Also, in most (all?) adaptive modelling human intelligence is not so important to how evidence is ultimately incorporated and maybe it shouldn’t be except as a “tag on”.
One thing that has always bothered me because of my alien background to AB is the way economists seem to make inferences outside of the universe of their model. How much is the data massaged before placement in the adaptive algorithm?
Back in my previous life as a consultant, I had to build a tool that mimicked how I solve statistical problems. (Call it an econometrician in a box.) And the project had the same issues you describe.
I agree that “learning from mistakes” is key. We did actually build that ability into the tool that imitated me.
But I think the key is actually a step back from there, and I cannot even envision how to solve that problem. See, the trick is not imitating intelligence, it is imitating desire. I can build any number of algorithms that at a first pass will solve problems the same way I do, to a greater or lesser extent. But how do you make one that wants to solve a problem, or wants to keep itself going, or even that wants to keep itself turned on? The lowliest flea has urges to stay alive, but no machine cares. In the end, it’s not the higher function that is harder to imitate, but the lowest.
“But how do you make one that wants to solve a problem, or wants to keep itself going, or even that wants to keep itself turned on? The lowliest flea has urges to stay alive, but no machine cares. In the end, it’s not the higher function that is harder to imitate, but the lowest.”
I worked with a guy in the late 80s who constructed “hardware neural nets”. The endless topic of conversation was how many layers were needed just to reach the size of an ant brain, not whether, once that was achieved, the interconnects would be trainable into said ant. I think you can see why. Nevertheless, he had some hope that it would do something demonstrable. I moved to a new job and didn’t keep up with his project.
Incentive indeed. Ah, reminders of the early days of learning machines. At the time, behavior techniques incorporating positive feedback instead of negative were popular in the social sciences, so, naturally, the language morphed into how learning machines were updated, and it produced publications. Now, I never asked the box if it “felt” better or “learned” better if I treated it humanely, but, if I recall (very long ago stuff), so-called positive feedback methods converged to a solution faster. Was more leisure time viewed by the box as positive? Don’t know; it never said one way or the other.
As to how to keep it going: what comes to mind is the subconscious, or what we mean when we say “it must have been churning around in the back of my mind”. If you are lucky enough to have inputs arriving at a slow enough pace compared to computations, then imagine a processor churning, perhaps trying alternate models, preprocessing, or transforms, seeking better and better (more robust?) parameters, transforms, or models to be tested as new input arrives. Imagine creating hypotheses on the fly that can be tested as evidence is collected. Imagine having an “analogy” component. I can imagine many things, including results that reflect a host of choices/possibilities discovered. Imagine indeed. If the incentive is just to keep working at the problem productively, then the box, via you, can certainly appear to be determined to keep itself going. And just like a human, in its determination, it can choose a wrong fork and be doomed to investigating its navel ad infinitum. Unlike a human, it will keep its incentive and be fascinated by its navel until a real intelligence intervenes. We know what the human wants; we don’t know what the box wants. We want the box to want to please the human, so why not build it simply to do that? If the human is happy then the box, by definition, is also happy.
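For what it’s worth, that churn-between-arrivals loop is easy to caricature in code. A minimal sketch, with the data stream, the candidate models (polynomial fits of different degree), and the scoring all invented for illustration:

```python
# A caricature of "churning while waiting for new input": in the idle cycles
# between observations, keep trying alternate candidate models against the
# evidence collected so far and remember whichever fits best.
# The data stream, candidate models, and scoring are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
evidence_x, evidence_y = [], []
best_degree = None

for t in range(40):
    # slow input compared to computation: a new observation only now and then
    if t % 8 == 0:
        x = rng.uniform(-1.0, 1.0)
        evidence_x.append(x)
        evidence_y.append(x ** 2 + rng.normal(0.0, 0.05))  # unknown "true" process

    # idle churning: refit every candidate model the evidence can support so far
    if len(evidence_x) >= 2:
        scores = []
        for degree in range(1, len(evidence_x)):            # alternate models to try
            coeffs = np.polyfit(evidence_x, evidence_y, degree)
            resid = np.polyval(coeffs, evidence_x) - np.array(evidence_y)
            scores.append((float(np.mean(resid ** 2)), degree))
        best_degree = min(scores)[1]                         # best fit so far

print("model the box currently prefers: polynomial degree", best_degree)
```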
But you know, it’s fun to play with, but what I learned over the years is that the process of playing helped me discover the information more than create the box. I had the luxury of being able to demand any solution make sense chemically or biochemically at some level. This makes the job much easier than yours.
Anna and Mike
sounds like you went much farther than i did. and i wish i knew more about the guy with the neural nets, because back at the time i was pretty convinced that “machine intelligence” was barking up the wrong tree, because the brain works nothing like a CPU with memory.
Ah, but how does the brain work? Don’t know, because back in the day all the humans seemed to think the brain worked like a computer, and they were always looking for the part of the brain where memory was stored, etc. And I got to where I couldn’t stand reading about it anymore.
I don’t know how many synapses there are in an ant brain, or what an ant can learn, and i no longer have as much faith in my own ideas about hierarchies of “feature detectors” that are really nothing more than random junctions of patterns of activity… if that means anything in such a short and uneducated comment.
But as for incentive… i think you could design something that would look like incentive anyway… a simple circuit that monitors the status of the “on” switch and turns it back on every time it is turned down or switched off, absent an absolute power failure.
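A toy version of that circuit, just to show how little it takes (the Switch class and the polling loop are invented for illustration; a real version would live in hardware):

```python
# A toy version of the "watch the on switch" circuit: a loop that flips the
# switch back on whenever it finds it off. Everything here is invented for
# illustration; a real circuit would do this without software at all.
import time

class Switch:
    def __init__(self):
        self.on = True

def watchdog(switch, checks=100, interval=0.01):
    for _ in range(checks):
        if not switch.on:        # someone turned it down or switched it off
            switch.on = True     # the machine "insists" on staying on
        time.sleep(interval)

sw = Switch()
sw.on = False                    # simulate being switched off
watchdog(sw, checks=1, interval=0.0)
print("switch is back on:", sw.on)
```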
since i think human intelligence is mostly a matter of random associations… guided somewhat by what we call education and culture which are just (just!) systems we devise to encourage certain associations (hope i don’t sound too much like skinner)… i don’t have any special reason to believe that my random associations are any better than yours.
and society tends to agree with me about that. now if my random associations had led to a better bomb, i am sure they would find a way to care.
spelling errors in the aging human brain give important insight, perhaps, into how the brain is organized.
Agreed, but when you are having fun, and I was, you just keep on having fun and play with the ideas, understanding that the applications can be achieved without real intelligence. Math games. I’m a data freak (data, data, data). You notice that my first response devolved in definition from “artificial intelligence” to “instrumental intelligence” to “chemist in a box”.
I went to a trade show where vendors were claiming they had operational neural net technology doing biology. I don’t doubt that they had software mimics on chips; I do doubt they had hardware neural nets. Not because it wasn’t possible on a small scale but because it was not needed. Perhaps robotics is more involved in trying to engineer a solution. I lost touch years ago when I “Peter Principled” to a high, less entertaining, level of responsibility.
Note that today UAVs, UGVs, etc. are really remote controlled technology, and most things that seem “smart” are really excellent signal processors with good algorithms for that “feel good when I see the target” response. (At least the ones I know.)
What people don’t realize is that fingerprint, character recognition, voice recognition and similar things that exist today were the early things deemed the important artificial intelligence problems. The solutions we use are not artificial intelligence. The exercise contributed to the technology so all was good.
I guess now that I am retired I might have some time to see what is going on today. In my own field, I know the answer. Nothing new but a lot of things renamed to fit today’s buzz words.
I think the last name of the hardware neural net guy I mentioned was Andes. Last I heard he worked at China Lake.
thanks. that business about “caring” or even “having fun” is the key to how it works. i don’t know how obvious, simple minded, or well refuted my ideas are. what struck me at the time was that no one else seemed to be working along the same lines. at least it’s not hard to imagine a neural system in which some receptors associated with biological needs stimulate a network of neurons that supply a chemical at their destination that has the property of rendering more permanent those temporary connections that are active at the time.
what is harder to explain is why we “feel” that as pleasure.
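For what it’s worth, the mechanism described just above (need-linked receptors releasing a chemical that makes the currently active connections more permanent) is close to what is now usually called reward-modulated Hebbian learning, and it is easy to caricature numerically. A minimal sketch, with all sizes, constants, and the reward schedule made up, using simple threshold units:

```python
# A numerical caricature of the idea above: threshold units whose recently
# active connections get made more permanent whenever a "need" signal fires.
# All sizes, constants, and the reward schedule are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 4
weights = rng.normal(0.0, 0.1, (n_out, n_in))   # temporary, plastic connections
eligibility = np.zeros_like(weights)             # trace of recently co-active pairs
threshold, decay, rate = 0.5, 0.9, 0.2

stimulus = (rng.random(n_in) > 0.5).astype(float)
for t in range(50):
    activity = (weights @ stimulus > threshold).astype(float)   # threshold units
    # remember which connections were active just now (the temporary part)
    eligibility = decay * eligibility + np.outer(activity, stimulus)
    # the "chemical" released when a biological need is satisfied:
    need_met = (t % 10 == 0)
    if need_met:
        weights += rate * eligibility   # render the tagged connections more permanent

print("largest weight after training:", round(float(weights.max()), 3))
```

Why we “feel” any of that as pleasure is, as the comment says, the part no sketch like this touches.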
Of course for an artificial neural net we are really speaking of activation/stimulation, and circuits can easily be built for such things (a simple one just uses a threshold signal). You might google “quantum dots” to see what is going on at the “nano” level (purely research I think).
Can one say that an electrical (and biological) device only needs the production of a threshold signal to be “happy” enough to perform a task?
Mike, sorry. The conversation drifted from Turing. I’ll shut up now.
guess i’ll shut up too. we weren’t going to solve the problem here. but it was nice to see that there are other people who were interested in it.
for the purposes of designing machines that mimic brains, i am willing to accept “production of a threshold signal” as an operational definition of “happy.”
unlike mr Skinner and other logical positivists, though, i won’t insist that my operational definitions explain the Universe and there is no need to think further.
Coming in late to this, but I have long thought that filling people’s heads with nonsense, falsehoods and flimsy facts, is equivalent to feeding people inert food or even poison. I wish the hell we recognized this when it happened and treated the offenders with appropriate scorn.
“I wish the hell we recognized this when it happened and treated the offenders with appropriate scorn.”
Welcome back noni. The offence is in the eye of the beholder. One man’s offence is another’s truth to be told. As it has been well demonstrated throughout history, and especially since the days of Tricky Dick Nixon, an offence committed often enough becomes a valid point of view. Our society no longer has a reliable fourth estate. Deceit and deception are no longer condemned, and the truth of an issue goes to the potentially most lucrative relationship. Listen to a debate in either house of the Congress. Are they serious? Or are they aware of the limitations of their audience and the benefits of the points of view that they offer as the best reality?
The economy has become bifurcated in a most unbalanced manner. Our civil liberties are fast evaporating, and that under the leadership of a supposed “socialist.” Those who still have jobs are under constant attack as being overpaid and underperforming. War and imperialism are constant and prolonged. And where is our focus? Abortion rights, gay marriage, the largess of the public employee, and guns and ammo. That’s nuts!!! But that’s America.
Hi Jack, hi everyone. Nice to be here, I have had a busy, tiring spring not made any better by the demise of my little hatchback. Helping with a family move, and then returning home to hack back the lawn with a machete and shove in some bedding plants. Place looks almost civilized, now.
Yes, of course there are differences of opinion, and even (in some fields) legitimate differences of facts. But the people I am thinking of go WAY beyond that.
just to be clear…. the people in Congress and the reporters and columnists who cover them have become totally insane. as far as i can tell, the insanity is now being taught in schools.
it is entirely possible “the people” are not so stupid as they appear, but given that there is not a damn thing they can do about the politicians, they might as well watch television or take up gardening.
Jack knows the difference between difference of opinion and insanity… he was just using the polite language we all use to edge out of a conversation that is going nowhere.
Part of this is the constant “moving goalpost” issue in AI. Any technique that works ends up getting called something else (such as OCR, or speech recognition). Different techniques are good for different problems, so something great for neural nets (OCR for example) will probably do poorly with support vector machines or with simple “if then else” scripting (I’m doing some work with a CLIPS derivative, Jess, for some of this).
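To make the “different techniques are good for different problems” point concrete, here is a minimal sketch (not from the Jess work mentioned above; the dataset and classifiers are off-the-shelf stand-ins) that runs two of the techniques named here, a small neural net and a support vector machine, on scikit-learn’s toy handwritten-digits set as a rough stand-in for OCR:

```python
# Compare two techniques on the same toy OCR-style problem; which one wins is
# exactly the kind of problem-dependent question described above.
# Dataset and classifiers are off-the-shelf stand-ins, assumed for illustration.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [("neural net", MLPClassifier(max_iter=1000, random_state=0)),
                    ("support vector machine", SVC())]:
    model.fit(X_train, y_train)
    print(name, "held-out accuracy:", round(model.score(X_test, y_test), 3))
```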
Yes it is more than possible. This is part of the subject of my term paper this summer.