Learning is struggle
I have heard of this but have not used it. I wonder, though, what will happen to our own creativity. We humans are supposed to be a curious, thoughtful, and intelligent species. We do the unexpected in different environments and situations, which makes us unique. No two of us are alike or react the same. “It is a spectacular scientific puzzle that human beings are the sole species that seems to be able to think and feel beyond the limits of the scale for their species.”1 Do we need a machine to do our thinking for us when exposed to reality? Prof. Zetland, The One-Handed Economist, talks of ChatGPT and its dangers.
~~~~~~~~
Learning is struggle, The One-Handed Economist, David Zetland
ChatGPT excites people who think (I use this word with caution) that they can use GPT to do less work/impress people/advance their careers.
This ideal may be true for those who already know how to do the work they are asking GPT to do (e.g., writing a blog post), but it won’t work for learners who admire GPT output without being able to do it themselves. They will pass GPT’s work as theirs, but they will not be able to explain “their” logic or conclusions. “GPT-cheats” will get caught. Hopefully they will just be disciplined, but others will do far more damage in their assertive ignorance (a human version of hallucinating). I am reminded of the massive damage caused by Bush’s loyal-but-incompetent agents in Iraq.
In the meantime, GPT users will be busy trying to fool each other into getting paid for work that GPT has done while non-GPT users will find the entire situation frustrating.
Non-augmented humans will take hours to do what GPT can do in seconds; they will struggle to understand complex ideas and integrate them into reasonable thoughts. They will question the point of going on. But then they will be the ones to spot the errors, to suggest novel alternatives, to add value.
In the land of the blind, the one-eyed man is king.
With GPT, we will see adults losing their analytical skills. Students will not even acquire them. Average IQ will drop, as will productivity.
(The only exception will be the few people who use GPT as a “Socratic sparring partner” to push their knowledge and/or skills. They can benefit from GPT, but the vast majority will fall for an “apple of knowledge” that is rotten inside.)
My one-handed conclusion is that GPT will take the jobs of anyone who uses GPT to do those jobs, let alone study for them.
1. “The Scope of Human Thought,” On the Human (nationalhumanitiescenter.org), Professor Mark Turner.
ChatGPT excites people who think (I use this word with caution) that they can use GPT to do less work/impress people/advance their careers….
[ What nonsense, as well as meanly written. ]
https://news.cgtn.com/news/2023-03-28/Chinese-ministry-deploys-AI-to-promote-frontier-sci-tech-research-1ixeEjdhp2U/index.html
March 28, 2023
Chinese ministry deploys AI to promote frontier sci-tech research
China has officially started the deployment of a project to promote the use of Artificial Intelligence (AI) in frontier sci-tech research, focusing on key problems in basic disciplines and research needs in key sci-tech fields such as drug development, gene research and biology breeding.
Jointly launched by the Ministry of Science and Technology and the National Natural Science Foundation of China, the project is called AI for Science.
It will further strengthen system layout and overall guidance to promote the deep integration of AI and sci-tech research, so as to promote the opening and convergence of resources, and enhance innovation capabilities, according to the ministry.
Under the project, the ministry will promote the innovation of AI models and algorithms for major scientific problems, develop a number of platforms for typical research fields, and accelerate the construction of a national open innovation platform for the new generation of AI public computing power.
It also pledged to bring together interdisciplinary research and development teams, promote the establishment of an innovation consortium, and build international academic exchange platforms to offer solutions to common human scientific challenges, including cancer treatment and the climate crisis.
AI is a tool that scientists and engineers are creating for any of us to use distinctively, for problem solving and exploration, just as we use a telescope, a microscope, or a range of lenses. Even now, lenses modelled on lobster eyes are being crafted and employed to create altogether new images from space telescopes.
https://news.cgtn.com/news/2022-10-11/Lobster-eyes-inspire-Chinese-scientists-for-universe-observation-1e2pMhYpCRW/index.html
October 11, 2022
Lobster eyes inspire Chinese scientists for universe observation
Who could have thought that lobsters could provide inspiration for a state-of-the-art telescope that could help us peer into the depths of the universe? But it did for scientists at the National Astronomical Observatories of the Chinese Academy of Sciences (NAOC) who developed the Lobster Eye Imager for Astronomy (LEIA), previously cited as a Wide-field X-ray Telescope (WXT).
Evidenced by images released in August, the most special feature of LEIA is its 36 micro-pore lobster-eye glasses and four large-array CMOS sensors….
https://www.nytimes.com/2023/03/28/technology/ai-chatbots-chatgpt-bing-bard-llm.html
March 28, 2023
How Does ChatGPT Really Work?
Learning how a “large language model” operates.
By Kevin Roose
In the second of our five-part series, * I’m going to explain how the technology actually works….
* https://www.nytimes.com/article/ai-artificial-intelligence-chatbot.html
Now for severe criticism. I may well have been too dismissive and harsh in my opening comment:
https://www.nytimes.com/2023/03/29/technology/ai-artificial-intelligence-musk-risks.html
March 29, 2023
Elon Musk and Others Call for Pause on A.I., Citing ‘Profound Risks to Society’
More than 1,000 tech leaders, researchers and others signed an open letter that urged a moratorium on the development of the most powerful artificial intelligence systems.
By Cade Metz and Gregory Schmidt
More than 1,000 technology leaders and researchers, including Elon Musk, have urged artificial intelligence labs to pause development of the most advanced systems, warning in an open letter that A.I. tools present “profound risks to society and humanity.” …
ChatGPT excites people who think (I use this word with caution) that they can use GPT to do less work/impress people/advance their careers….
[ I was too dismissive and harsh about this criticism. ]
YEP, but that is because AI is a general concept that applies some common search and correlation threads of reasoning across a vast variety of disciplines. IOW, scientific research is one thing and ChatGPT quite another.
Compare this to how all people think, but not all people think the same.
I have found that reductionist thinking is an unreliable method for fully understanding any complex problem, yet a very reliable method for finding a starting point for examining almost any complex problem.
ltr
yes you were. it comes from not thinking.
“thinking is struggle.”
what we are calling AI may indeed be useful. but it is not intelligence. as far as i can tell it is a library research tool made dangerous by being able to write arguably connected sentences. and its proposed uses…by people…will only add to the computerization of our society..by which i mean people trained to act like computers, using computers to replace people, to the great detriment of “service,” where a little human intelligence would go a long way toward not wasting customers’ time. high level human stupidity looks a lot like AI. i think the technique of AI is ..gathering a lot of unexamined information and putting it into an arguably readable essay which satisfies professors not trying too hard to think…any more than the would-be Harvard graduate is thinking. of course we get a lot of that even here.
Coberly,
Yes sir, exactly so sir.
Humans believe they are unique as a species because they communicate with one another, solve problems, remember the past, talk about it, and talk about the future. Perhaps whales and dolphins do exactly the same things, to name a couple of species. We just don’t respect them because we have no idea what’s going on in their heads.
May be true of ants & cockroaches also, for all we know.
And of course, if god exists, we must have been made in his image.
As for AI, if one insists it is sentient and we choose to respect it as such, then it will be sentient. Not nearly all will accept this, however. For starters, someone will pull out the module that allows it to speak.
It does seem like we humans have a unique power to destroy our own civilization, if not our entire species (or planet even), as a sort of re-set as in after the Fall of Rome. Perhaps far worse even. Yay US!
Otherwise, we are certainly capable of doing away with other species that might even be sentient.
They say that the cockroaches will make it through at least. No other species can lay claim to this ability.
Maybe we should make sure that the cockroaches don’t make it either.
Of course some believe our world (by which is usually meant our universe) exists in a simulation.
Not only that, but sci-fi writers have imagined that we could set up our own simulation that people could upload themselves into (Neal Stephenson in particular). Once done, millions will proceed to do so, leaving the hoi polloi behind to take out the trash, etc.
I fondly remember a joke from physics circles long ago about our universe residing in a jug under some grad student’s lab bench.
Suggested reading: Mr g: A Novel About the Creation, by Alan Lightman.