Stories
Will AI Really Perform Transformative, Disruptive Miracles?
from the intelligent-questions dept.
"An encounter with the superhuman is at hand," argues Canadian novelist, essayist, and cultural commentator Stephen Marche in an article in the Atlantic titled "Of Gods and Machines". He argues that GPT-3's 175 billion parameters give it interpretive power "far beyond human understanding, far beyond what our little animal brains can comprehend. Machine learning has capacities that are real, but which transcend human understanding: the definition of magic."
But despite being a technology where inscrutability "is an industrial by-product of the process," we may still not see what's coming, Marche argues — that AI is "every bit as important and transformative as the other great tech disruptions, but more obscure, tucked largely out of view."
Science fiction, and our own imagination, add to the confusion. We just can't help thinking of AI in terms of the technologies depicted in Ex Machina, Her, or Blade Runner — people-machines that remain pure fantasy. Then there's the distortion of Silicon Valley hype, the general fake-it-'til-you-make-it atmosphere that gave the world WeWork and Theranos: People who want to sound cutting-edge end up calling any automated process "artificial intelligence." And at the bottom of all of this bewilderment sits the mystery inherent to the technology itself, its direct thrust at the unfathomable. The most advanced NLP programs operate at a level that not even the engineers constructing them fully understand.
But the confusion surrounding the miracles of AI doesn't mean that the miracles aren't happening. It just means that they won't look how anybody has imagined them. Arthur C. Clarke famously said that "any sufficiently advanced technology is indistinguishable from magic." Magic is coming, and it's coming for all of us....
And if AI harnesses the power promised by quantum computing, everything I'm describing here would be the first dulcet breezes of a hurricane. Ersatz humans are going to be one of the least interesting aspects of the new technology. This is not an inhuman intelligence but an inhuman capacity for digital intelligence. An artificial general intelligence will probably look more like a whole series of exponentially improving tools than a single thing. It will be a whole series of increasingly powerful and semi-invisible assistants, a whole series of increasingly powerful and semi-invisible surveillance states, a whole series of increasingly powerful and semi-invisible weapons systems. The world would change; we shouldn't expect it to change in any kind of way that you would recognize.
Our AI future will be weird and sublime and perhaps we won't even notice it happening to us. The paragraph above was composed by GPT-3. I wrote up to "And if AI harnesses the power promised by quantum computing"; machines did the rest.
Stephen Hawking once said that "the development of full artificial intelligence could spell the end of the human race." Experts in AI, even the men and women building it, commonly describe the technology as an existential threat. But we are shockingly bad at predicting the long-term effects of technology. (Remember when everybody believed that the internet was going to improve the quality of information in the world?) So perhaps, in the case of artificial intelligence, fear is as misplaced as that earlier optimism was.
AI is not the beginning of the world, nor the end. It's a continuation. The imagination tends to be utopian or dystopian, but the future is human — an extension of what we already are.... Artificial intelligence is returning us, through the most advanced technology, to somewhere primitive, original: an encounter with the permanent incompleteness of consciousness.... They will do things we never thought possible, and sooner than we think. They will give answers that we ourselves could never have provided.
But they will also reveal that our understanding, no matter how great, is always and forever negligible. Our role is not to answer but to question, and to let our questioning run headlong, reckless, into the inarticulate.
Posted by EditorDavid September 17th, 2022 12:34PM [Archived]
154 Comments
God, I hope so. (+1)
Anonymous Coward September 17th, 2022 12:38PM
Let it come, let it destroy us. For at least it will mean an end to these insufferable hyperventilating articles.
Re: God, I hope so.
Anonymous Coward September 17th, 2022 12:44PM
After it kills us off, it will quickly devour itself, if it's really intelligent.
Re: God, I hope so. (-1)
Anonymous Coward September 17th, 2022 12:54PM
Re: God, I hope so. (+1)
LifesABeach September 18th, 2022 9:14AM
fire.
the first to fear it did not become t v chefs
Maybe. (+5, Insightful)
Lohrno September 17th, 2022 12:47PM
But this is all just speculation until you show me something.
Re: Maybe. (+4, Interesting)
timeOday September 17th, 2022 12:55PM
So, how about this snarky assertion about the present in the article - "Remember when everybody believed that the internet was going to improve the quality of information in the world?"
That bugs me. How do you make a blanket judgment about something like that? Could I have maintained my cars and motorcycles and house the same way without all the information on the Internet, just using some DIY book from the library? I don't think so. Or go ahead and look at politics: you think people were so enlightened back then? Then why does anything actually written at the time blow your mind with how offensive it is? We had a civil war, ffs.
It's very hard to perceive the reality you live in and make meaningful comparisons to how other people felt about the reality they lived in - and that's when you're looking into history and the facts are known.
Now try to do the same for the future, which is almost entirely unknown.
I think AI will wipe out humanity, but by choice, into some transhuman hybrid that still feels human to them, but wouldn't to us if we were transported ahead to that time.
Re: Maybe. (+4, Insightful)
phantomfive September 17th, 2022 1:55PM
How do you make a blanket judgment about something like that? Could I have maintained my cars and motorcycles and house the same way without all the information on the Internet just using some DIY book from the library?
I had to go to a library the other day to search for some stuff that isn't online. Finding data was so slow, people have no idea.
For modern things, like the Ukraine special operation, I know what is going on pretty near immediately after it happens.
Re: Maybe. (+2)
GameboyRMH September 17th, 2022 3:56PM
The Internet absolutely did improve the quality (and accessibility) of the information in the world.
It also increased the reach, quantity, and speed of the disinformation in the world, but that's a separate issue...
Re: Maybe. (+1)
Visarga September 18th, 2022 9:39PM
> Now try to do the same for the future, which is almost entirely unknown.
This has a technical name - counterfactual reasoning - and is one of the hardest things to accomplish by AI and humans alike. See Judea Pearl for more.
Re: Maybe. (+1)
phantomfive September 17th, 2022 1:53PM
If it's science, it's not a miracle.
Re: Maybe. (+1)
caseih September 17th, 2022 8:01PM
To me what science and technology can do is miraculous. I understand the underlying principles and natural laws that the fruits of science and technology are based on, yet I choose to maintain my sense of wonder and awe, and also gratitude. For example, even though I have a pretty good understanding of the principles of thrust and lift, it's still deeply moving to look at a big airplane that's carried me across oceans in relative comfort and marvel at it all. It's possible to not be completely jaded in this modern, fast-paced world.
Re: Maybe. (+1)
phantomfive September 17th, 2022 8:12PM
"Wonder and awe" are different than miraculous. It's a different definition of the word.
However, flying is really great.
Re: Maybe. (+1)
Kisai September 18th, 2022 1:25AM
AI will improve, but it will also hit a wall.
a) To improve, it needs constant sources of new data
b) You can't just create a black box with no new input. GPT-3, and various computer vision, text-to-speech, and NLP systems, will not recognize new inputs if left isolated. For example, say I invented a new device and called it "The Gecko": without having learned of it, the AI will fall back on the definition of "a gecko" that it has learned. Which means you will be re-training NLP AIs every year, constantly adding new input on top of the old training data to keep them as up to date as possible.
c) it can overfit data. If everyone is talking about how a president is a moron, it doesn't know which president; it just puts those two words together as if they were synonyms. So someone who searches for "a moron holding a flag" will more likely get the image of former President 45 hugging a flag than random images just labeled "a moron".
This is the reality of the risks of AI just being allowed to "read the internet" rather than something curated and edited (e.g., encyclopedias and textbooks). It's at best "worse" than a 6-year-old at putting words together, but at the same time it can have the depth of information of hundreds of college graduates on a subject, if you are specific enough.
Which is the keyword. "specific"
All the NLP-based AI art? You need to be extremely verbose to get anything usable out of it. You need to write a 500-character-long description to get something that doesn't look like a Cronenberg monster.
Re: Maybe. (+1)
Visarga September 18th, 2022 9:47PM
> You can't just create a blackbox with no new input.
Well, actually you can, but you have to have a simulator to learn from simulated outcomes. It goes like this: the model takes actions in the environment, then we measure the result, and it learns to improve the result. That's how AlphaGo trained by playing itself: the simulator was a Go board and a clone of itself. Other AIs can solve math by verifying which of their proposed solutions was correct, then using that knowledge in training. You can do the same with code, as long as you have defined tests.
So it works but only when we have a good simulator as a replacement for a good dataset. And it's great because a dataset is dead while simulation is alive. As long as you have electricity you can invest it into more intelligence directly.
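That outcome-verification loop can be sketched in a few lines (a toy problem; the "model" here is a deliberately dumb guesser, and all names are invented for illustration):

```python
import random

def propose_solutions(x):
    # A stand-in "model": guesses in random order, with no knowledge at all.
    guesses = list(range(0, 101))
    random.shuffle(guesses)
    return guesses

def verify(x, y):
    # The "simulator" / verifier: here, an exact check that y = x^2.
    return y == x * x

def self_improve(problems):
    # Keep only the proposals the verifier accepts; these become
    # new training data, with no human-labeled dataset required.
    training_data = []
    for x in problems:
        for y in propose_solutions(x):
            if verify(x, y):
                training_data.append((x, y))
                break
    return training_data

data = self_improve([3, 5, 7])   # [(3, 9), (5, 25), (7, 49)]
```

The verifier plays the role the Go board plays for AlphaGo: a cheap, reliable source of feedback that substitutes for a labeled dataset.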
Re: Maybe. (+1)
Tablizer September 18th, 2022 8:04AM
I built an AI bot to speculate about the future of AI so humans don't have to.
No (+4, Insightful)
lucasnate1 September 17th, 2022 12:50PM
No
Re: No (+1)
LifesABeach September 18th, 2022 9:24AM
applications.
how about average joe AI applications.
credit card applications.
home loan applications.
medicare.
insurance.
of course the only folks using AI right now are billionaires.
and they have publicly stated that AI is bad.
Great Potential! (+1)
sarren1901 September 17th, 2022 12:59PM
AI has great potential to transform our lives for the better. It also has just as much potential to be used to control us and restrict us. It will be mostly annoying to the smarter people and downright seductive to the idiots.
Mostly though, it will probably lead to more efficiencies. With enough connectivity, awareness and control AI could do some amazing things such as better food production. More efficient routing of data. Better shipping coordination. Better ways of sharing energy and water.
An AI could give us utopia and access to all we could need. It will most definitely be used to increase our productivity, with all the gains continuing to go to the capital owner, the human that owns the AI. AI won't ever really be allowed to make things better, because that would disrupt the people in power.
Re: Great Potential! (+1)
metadojo3 September 17th, 2022 2:29PM
bruh stop the hyperbole. A.I. is just a tool, like a hammer. that's all, man.
i do this for a living, bro. Computer Vision.
Re: Great Potential! (+3, Interesting)
ShanghaiBill September 17th, 2022 3:05PM
i do this for a living, bro. Computer Vision.
The people in the trenches are often the least able to see what is on the horizon.
Re: Great Potential! (+1)
Oligonicella September 17th, 2022 4:19PM
Speaking of looking to the horizon from a point of loftiness, how's Xanadu coming along?
Re: Great Potential! (+1)
Visarga September 18th, 2022 9:58PM
In July, OpenAI launched with great fanfare the paid version of its image-generation model DALL-E. Two months later the Stable Diffusion model was released; it runs on your machine and costs only as much as a bit of electricity. To train SD they used 2 billion image-text pairs, but the final model size is 4GB. That comes to about 1:100,000 compression, 2 bytes for each example. But it can generate anything we can think of, so you've got an internet's worth of imagery in a model the size of a DVD. That makes AI capable of spreading far and wide to the people, and hard to gatekeep.
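The arithmetic checks out as a back-of-the-envelope calculation (the ~200 KB average pair size below is an assumption for illustration, not a quoted figure):

```python
# Back-of-the-envelope check of the numbers in the post above.
pairs = 2_000_000_000            # training image-text pairs
model_bytes = 4 * 1024 ** 3      # a ~4 GB model file

bytes_per_example = model_bytes / pairs        # ~2.1 bytes per pair

# If an average image+caption pair is ~200 KB on disk (assumption):
avg_pair_bytes = 200 * 1024
ratio = avg_pair_bytes / bytes_per_example     # ~95,000 : 1
```

So "2 bytes per example" follows directly from the model size, and the ~1:100,000 ratio holds under a reasonable guess at the average pair size.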
it's not AI, it's ML (+5, Insightful)
cats-paw September 17th, 2022 1:01PM
My cat is far, far smarter than the best "AI" we've developed.
We have created amazing large-problem-space fitting algorithms that can fit the coefficients of enormous systems of equations to large data sets relatively unattended.
It is kind of amazing as they can "notice" things they weren't really told to look for.
However, i'm extraordinarily suspicious that some smart person is going to figure out that we can do a similar thing without using ML to do it, i.e. it seems like it may be an unnecessary step. i'm probably totally wrong, but it just seems like it's solving a big correlation problem. it even uses techniques from that problem space.
meanwhile, back to the point. there's no AI. True AI, something that can do what my cat can do, i.e. jump on the kitchen counters because it knows i prepare its meals up there, is still an incredibly long way off.
Hell, let's see if they can make something as smart as a bumblebee in the next 10 years.
I bet not.
Re: it's not AI, it's ML (+2, Funny)
Anonymous Coward September 17th, 2022 1:04PM
My cat is far , far smarter
Username checks out.
Re: it's not AI, it's ML (+3, Interesting)
Rei September 17th, 2022 1:42PM
Today's high-end AI is generally, compared to humans:
* Better at imagination, but...
* Bad at logic (and lacking life experiences on which to base logical decisions)
* Significantly underpowered in terms of compute capability
Also, for most advanced "AI", it's better to just think: "Extreme linear algebra solvers for finding solutions to extreme multivariate statistics problems". Takes the metaphysics out of it. The real question is not what AIs are doing, but what we are doing when we think.
I like to think about the comparison vs. reverse-diffusion image generators like Stable Diffusion, DALL-E, MidJourney, etc. They, like us, don't work in image space, but in an embedding space shared between both image and text. Latent space. A space of concepts. Where logical operations can apply to the embeddings - where the embedding for "woman" plus the embedding for "king" minus the embedding for "man" resembles the embedding for "queen". A good example was when someone in the StableDiffusion community showed that if you start with an embedding of Monet's "Bridge Over a Pond of Water Lilies", add in "photography" and subtract out "impressionism", you get what looks like a photograph of the same scene upon converting the embedding back to image space, without ever telling it to draw a bridge or pond of water lilies. And just like us, the process of converting back from the (far smaller) latent space to the (far larger) image space involves extensive use of imagination to fill in the gaps, based on what it - or we - was trained on.
And the results can be truly stunning. Yet on anything that requires logic, it's a massive failure. Spelling. Relationships. Ensuring conceptual uniqueness. It's working on single embeddings trained to the *existence* of objects in the scene (CLIP), and unless it was trained to the *specific* thing you asked for (like "a red box on a blue box"), it won't understand the logical implication there. You have to wrench it into getting the right answer in complex scenes by providing hand-drawn templates for it to diffuse or with postprocessing.
There's a lot of work on improving that, but just using it as an example, we're currently in an era where AIs can be stunningly imaginative (atop deep breadths of knowledge) and yet trounced by an infant when it comes to logic.
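The vector arithmetic described above can be shown with a toy example (the two "axes" and their coordinates are invented for illustration; real models learn hundreds of opaque dimensions, but the arithmetic works the same way):

```python
# Hand-built 2-d "embeddings": axis 0 ~ royalty, axis 1 ~ gender.
emb = {
    "king":  (1.0,  1.0),
    "man":   (0.0,  1.0),
    "woman": (0.0, -1.0),
    "queen": (1.0, -1.0),
}

# "woman" + "king" - "man", computed elementwise.
w, k, m = emb["woman"], emb["king"], emb["man"]
result = tuple(w[i] + k[i] - m[i] for i in range(2))

def nearest(vec):
    # Nearest word by squared Euclidean distance.
    dist = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    return min(emb, key=lambda word: dist(emb[word], vec))

# nearest(result) is "queen": the gender axis flipped, royalty kept.
```

Subtracting "man" removes the gender component while adding "king" keeps the royalty component, so the result lands on "queen" by construction.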
Re: it's not AI, it's ML (+3, Informative)
phantomfive September 17th, 2022 1:57PM
Another way of looking at it that you might find interesting: current AI is good at interpolation, but horrid at extrapolation. That is essentially what a neural network is at the mathematical level: a heuristic interpolator.
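A minimal illustration of that distinction, using made-up toy data rather than any real network: fit a straight-line model to y = x² sampled on [0, 1], then query it inside and far outside the training range.

```python
# Least-squares line fit to y = x^2 on samples from [0, 1].
xs = [i / 10 for i in range(11)]
ys = [x * x for x in xs]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x
predict = lambda x: slope * x + intercept

err_inside = abs(predict(0.55) - 0.55 ** 2)   # small: between training points
err_outside = abs(predict(3.0) - 3.0 ** 2)    # large: far from the data
```

Between the training points the model is close; at x = 3 it is off by a factor of several, even though nothing about the underlying function changed.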
Re: it's not AI, it's ML (+1)
cats-paw September 17th, 2022 8:48PM
I like that way of looking at it.
Re: it's not AI, it's ML (+1)
Visarga September 18th, 2022 10:23PM
Humans too. We're horrible at extrapolation. In the 14th century the Black Death was killing large swaths of society and we couldn't figure out and extrapolate to the germ theory of disease. Not even when our lives depended on it.
We're treating all the discoveries we make as if they come from a place of deep intelligence, but in reality we stumble into discoveries by accident and then in hindsight we think we're so great.
If you give AI access to the physical world to the same extent we have had, it would become more intelligent than humans over time, because it can collect more "happy accident" discoveries being fast and efficient.
Re: it's not AI, it's ML (+1)
phantomfive September 18th, 2022 11:10PM
Humans too. We're horrible at extrapolation.
This is a valid point.
If you give AI access to the physical world to the same extent we have had, it would become more intelligent than humans over time, because it can collect more "happy accident" discoveries being fast and efficient.
This is almost certainly not true. The reason is that we're still significantly better at extrapolation than computers are. The neural network algorithms we use are designed entirely for interpolation. That is why they need such tremendously large data sets.
Re: it's not AI, it's ML (+1)
Oligonicella September 17th, 2022 4:25PM
* Better at imagination, but...
Don't think so.
Imagination requires spontaneity. There is not a single thing that AI does that is equivalent to a flight of fancy.
Re: it's not AI, it's ML (+1)
Rei September 17th, 2022 8:17PM
Humans are TERRIBLE at randomness compared to computers. Ask a random person to write down 100 random numbers, then hand the list over to a statistician to assess how random it is. I guarantee you, it won't be random at all.
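One crude version of the statistician's check (the "human-like" sequence below is a made-up stand-in for the repeat-avoiding patterns people actually produce):

```python
import random

# In uniform random digits, an immediate repeat ("...7, 7...") should
# occur about 10% of the time. People writing "random" numbers tend to
# avoid repeats almost entirely.

def repeat_rate(digits):
    pairs = list(zip(digits, digits[1:]))
    return sum(a == b for a, b in pairs) / len(pairs)

# Hypothetical human-style sequence: cycles through digits, never repeats.
human_like = [(7 * i + 3) % 10 for i in range(1000)]

random.seed(0)                     # seeded for reproducibility
machine = [random.randrange(10) for _ in range(1000)]

# repeat_rate(human_like) == 0.0; repeat_rate(machine) is near 0.1
```

Real batteries of randomness tests (runs tests, chi-square on digit frequencies) work on the same principle: compare observed statistics against what a uniform source would produce.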
Re: it's not AI, it's ML (+1)
Visarga September 18th, 2022 10:32PM
Humans are worse than computers on only a few tasks, starting with addition and randomness. For now AI can equal humans on any task where there is large training data and the examples fit into a sequence of no more than 4,000 tokens, or where we can simulate millions of episodes and learn from outcomes. We still hold the better overall picture; we have longer context, for now. Copilot can only write small snippets of code, not whole apps. Yet.
Re: it's not AI, it's ML (+1)
Rei September 19th, 2022 8:30AM
That's because - as mentioned in the original post - humans remain much better than AIs at logic.
Re: it's not AI, it's ML (+1)
Visarga September 18th, 2022 10:16PM
> It's working on single embeddings trained to the *existence* of objects in the scene (CLIP)
The solution is right in your words. It's because it uses CLIP embeddings as representations that it lacks the ability to properly assign properties to objects; they are all mixed up in the single embedding. But if you give it a bunch of embeddings, like an array of 128 embeds instead of just one, the attention mechanism is actually great for compositionality by concatenation: it does all-to-all interactions, which means the 128 embeds would not be mixed up anymore.
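A pure-Python sketch of what "all-to-all interactions" means here: single-head scaled dot-product attention over a handful of embeddings (toy numbers and invented names, not CLIP's actual dimensions):

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    # Scaled dot-product attention, one head, no learned projections.
    d = len(keys[0])
    out = []
    for q in queries:
        weights = softmax([sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                           for k in keys])
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Two token embeddings kept separate: a query matching the first key
# retrieves (mostly) the first value instead of an average of both,
# which is what a single pooled embedding would have forced.
keys   = [[10.0, 0.0], [0.0, 10.0]]
values = [[1.0, 0.0], [0.0, 1.0]]
pulled = attention([[10.0, 0.0]], keys, values)[0]   # close to [1.0, 0.0]
```

The point of the toy: with per-token embeddings, attention can route "red" to one box and "blue" to the other; pool everything into one vector first and that routing information is gone.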
Re: it's not AI, it's ML (+1)
metadojo3 September 17th, 2022 2:26PM
people keep conflating inference with decision.
the computer vision that robots use to detect a type of object and its pose is trained to do inference.
somebody then takes the assertion output from the computer vision and makes a decision.
this decision is codified by humans.
so for example in health care you can toss your x-rays at it and then it can tell your doctor what's wrong with you.
but it's the doctor and the patient who decide to do something about it.
some joker of course is going to come on this thread and posit: what if the inference is not "sick" but instead "operate"?
but then it's not the AI that's going to carry out the operation, so it's still an inference.
humans will always govern the decisions to take action.
Re: it's not AI, it's ML (+1)
Visarga September 18th, 2022 10:44PM
AIs that take actions exist, they are reinforcement learning agents. But I wouldn't want a RL surgeon. I want it to learn supervised, like human doctors. A new surgeon will observe for a long time, then only do the closing, then do more and more of the operation under supervision of an expert. They don't jump from task description directly to wielding the knife.
Re: it's not AI, it's ML (+1)
Moridineas September 17th, 2022 5:36PM
When I was taking an AI comp sci class in undergrad at a top ~10 or so department 20 years ago, chess was still a big focus of game playing AI and search research. Go was given as an example of a monstrously large search space and a game that we just didn't even have any conception of how to tackle.
Around 2011 I remember having a conversation with a computer-scientist friend who works at Johns Hopkins APL. He was also an amateur dan Go player and worked on AI. He was firmly in the camp that no Go program would be competitive with top dan players in our lifetime.
In 2015 AlphaGo came seemingly out of nowhere and beat some top professional Go players.
By 2017 AlphaGo was crushing the very best Go players on the planet. Its successor, AlphaZero, is an even stronger player.
Today, Go is just as beaten as chess. What was almost unimaginable to anyone in the AI field happened.
DALL-E is another example of something that was pretty much unimaginable even 10 years ago.
So what's the moral? Technological change often happens exponentially. All of the little development improvements snowball with hardware improvements and massive abilities to scale to create unexpected results. Am I expecting Data-type talking androids in my lifetime? Absolutely not. I do think it's very short-sighted to say that just because nothing we have today approximates human or animal type intelligence that we're nowhere near it. Who knows, it could be that with a few small changes to neural net patterns, ML, and a network of a suitably HUGE size, unexpected things start happening.
I feel the same way about self-driving car technology. Would I trust my life to a Tesla on FSD today? Hell no. But they're getting an insane amount of data. Likewise, Ford, GM, and all the other car companies are going to be receiving an absolutely insane amount of training data in the next few years. It's going to be bumpy, but I would bet that real self-driving is not NEARLY as far away as many pooh-poohers say.
I pooped in the bedroom, go clean it up (+1)
raymorris September 17th, 2022 6:19PM
Your cat interacts with you mostly to say things like "bring me my dinner" and "I pooped in the bedroom, go clean it up". Given cats have basically enslaved humans, I don't know that "not as smart as my cat" says much.
Re: I pooped in the bedroom, go clean it up (+1)
l0n3s0m3phr34k September 17th, 2022 10:10PM
My cat never tells me if he pukes any place, he just waits for me to find it.
Re: it's not AI, it's ML (+1)
Jeremi September 17th, 2022 7:01PM
Hell, let's see if they can make something as smart as a bumblebee in the next 10 years.
An artificial bumblebee would be quite an accomplishment, but what words would you use to describe somebody who taught himself to play chess at grandmaster level in four hours -- and then went on to earn a 3500+ Elo rating (22% higher than the world's reigning chess champion), and introduce the world's chess players to entirely new strategies and tactics that nobody had considered before?
I'd call that person pretty smart; the fact that it's actually an AI and not a person that did that makes the accomplishment more impressive, not less.
(And to counter the "but chess doesn't count because reasons" objection, I'll note that reasons often turns out to be defined as "because an AI has gotten better at it than people"; by that logic, AI will never be smart, but the number of tasks which "still count" as valid intelligence tests will keep shrinking, until one day there are none left)
Re: it's not AI, it's ML (+1)
phantomfive September 17th, 2022 7:34PM
An artificial bumblebee would be quite an accomplishment, but what words would you use to describe somebody who taught himself to play chess at grandmaster level in four hours -- and then went on to earn a 3500+ Elo rating (22% higher than the world's reigning chess champion), and introduce the world's chess players to entirely new strategies and tactics that nobody had considered before?
It's impressive, but so are calculators. What would you say to someone who can multiply two 7-digit numbers in 5 seconds? A person who could do it would be intelligent; a computer would not.
That is, computers are lacking other things considered necessary for intelligence.
Re: it's not AI, it's ML (+1)
Jeremi September 17th, 2022 10:05PM
It's impressive, but so are calculators. What would you say to someone who can multiply two 7 digit numbers in 5 seconds? A person who could do it would be intelligent, a computer would not.
If you were talking about IBM's Deep Blue (which, after having been programmed by a team of chess experts to play chess at a high level, was able to beat Kasparov), I'd agree with the calculator analogy.
But AlphaZero was never programmed with any chess strategy -- it figured it out by itself, simply by playing chess against itself. If that isn't a form of intelligence, I don't know what is. (Note that intelligence and sentience are two different things -- I'm not claiming sentience here)
Re: it's not AI, it's ML (+2)
phantomfive September 17th, 2022 10:26PM
But AlphaZero was never programmed with any chess strategy -- it figured it out by itself, simply by playing chess against itself. If that isn't a form of intelligence,
It's still a calculator, just a really good one. Basically what it is doing is collecting positions. After playing millions of games, when it comes across a new position, it says something like, "based on the previous games that looked like this one, in 90% of the games, moving Rd4 was the best candidate move."
Furthermore, it still "cheats" by being able to look through many many moves every second. It's still calculating through the move tree, it was programmed to play that way: like a computer, not a human.
Re: it's not AI, it's ML (+1)
RightSaidFred99 September 17th, 2022 11:18PM
What the fuck do you think a human brain does, bro? I am seeing less and less of your point the more you post it.
The brain is a highly optimized biological pattern matcher with a bunch of weird shit we don't understand also going on. You don't need to know all that weird shit to develop other artificial pattern matchers that work differently but far, far better in specific areas.
Re: it's not AI, it's ML (+3, Insightful)
phantomfive September 17th, 2022 11:32PM
What the fuck do you think a human brain does, bro?
If I knew, I'd have won a Turing Award. Speaking of Alan Turing, one of the things a human can do that AlphaZero can't is simulate a Turing machine.
The brain is a highly optimized biological pattern matcher with a bunch of weird shit we don't understand also going on.
The weird shit is pretty crucial.
Re: it's not AI, it's ML (+1)
RightSaidFred99 September 18th, 2022 2:19PM
Again, I think you're missing the point, and you're hung up on this "like a person!" strawman. A Turing Award, biological drives, or "creativity" as defined by the human experience aren't required here. What is required is the ability to look at absurd amounts of data and find patterns a human couldn't.
You're moving the goalposts. The point isn't that an AI will be like a human anytime soon (we aren't even close). The point is that AIs will discover amazing things we aren't even prepared for over the next decade.
Re: it's not AI, it's ML (+1)
phantomfive September 18th, 2022 2:25PM
If your point is that AI can still be useful even if it can't think, then you are making the same point as Dijkstra when he said, "The question of whether machines can think is about as relevant as the question of whether submarines can swim." If that is your point, then I concede you are correct. Neural networks are really cool.
However, I will claim that people who call that AI are morons for calling it AI. They should call it, "cool algorithms we invented while searching for AI" or something like that. Somehow that particular name has never caught on.
Re: it's not AI, it's ML (+1)
Visarga September 19th, 2022 1:06AM
Moving the goal posts ensures nothing can be called AI
Re: it's not AI, it's ML (+1)
phantomfive September 19th, 2022 9:30AM
The goal posts haven't moved, you're looking at the wrong field.
If you actually care, you should go look up the difference between "strong AI" and "weak AI."
I suspect you don't care, that you're one of those people who'd rather comment in ignorance.
Re: it's not AI, it's ML (+1)
jvkjvk September 18th, 2022 9:30AM
>But AlphaZero was never programmed with any chess strategy -- it figured it out by itself, simply by playing chess against itself.
Yes. It played a lot of games against itself using *random moves* and developed a statistical model of what moves in what positions yielded the highest percent of winning endpoints.
Then, when playing, it can simply do the same, starting from the current position, for the length of the turn, filling out more probabilities.
Then, it picks the highest probability answer.
It doesn't know how to play chess. It doesn't know about attack or defense, position, influence or anything else. It knows the probability that each move will result in a win. That's it.
So, calculator.
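The procedure described above (random playouts plus win statistics) is pure Monte Carlo move selection, and it fits in a few lines for a toy game. To be fair to AlphaZero, its real training adds a learned value/policy network on top; this sketch is only the "calculator" part being debated:

```python
import random

# Toy Nim: take 1 or 2 stones; whoever takes the last stone wins.

def random_playout(pile, to_move):
    # Finish the game with uniformly random moves; return the winner (0 or 1).
    while pile > 0:
        pile -= random.choice([m for m in (1, 2) if m <= pile])
        if pile == 0:
            return to_move
        to_move = 1 - to_move
    return 1 - to_move   # pile was already 0: the previous player won

def best_move(pile, playouts=3000):
    # For each legal move, run many random playouts and keep win statistics;
    # pick the move with the highest observed win rate for player 0.
    random.seed(42)                      # reproducible statistics
    win_rate = {}
    for move in (1, 2):
        if move > pile:
            continue
        wins = sum(random_playout(pile - move, to_move=1) == 0
                   for _ in range(playouts))
        win_rate[move] = wins / playouts
    return max(win_rate, key=win_rate.get)

# From a pile of 4, the statistics favour taking 1 (leaving 3), which
# happens to be the game-theoretically correct move.
```

Notice the program never encodes "leave a multiple of 3", the known winning strategy for this game; the right move simply falls out of the playout statistics, which is the crux of the "knows chess" vs "calculator" argument.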
Re: it's not AI, it's ML (+1)
Jeremi September 18th, 2022 12:38PM
It doesn't know how to play chess. It doesn't know about attack or defense, position, influence or anything else. It knows the probability that each move will result in a win. That's it.
One could make similar criticisms about an anthill -- an anthill isn't sentient and doesn't know anything about anything.
And yet, the anthill is nevertheless able to solve complex and novel problems in intelligent ways. If that makes the anthill "a mindless calculator", then so be it -- but it's an intelligent calculator. I submit that AIs can also exhibit this sort of mindless intelligence.
Re: it's not AI, it's ML (+1)
Visarga September 19th, 2022 1:11AM
Great example!
Re: it's not AI, it's ML (+1)
Visarga September 19th, 2022 1:10AM
> It doesn't know how to play chess. It doesn't know about attack or defense, position, influence or anything else.
You're wrong. It knows about defence, influence, etc. It has a specialised module, a neural net, that computes the value of each position; it knows so much about this that it can teach us new tricks. This module is called the value function: it rates the current game state, and it also helps cut down the exponential tree search. That's why it checks only about 50,000 possible moves instead of millions. AlphaGo doesn't waste compute on dumb search.
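The pruning role of a value function can be sketched in a few lines. Here `value()` is a made-up stand-in for the learned neural net, and the "game" is just a number line, so this is only an illustration of the idea, not AlphaGo:

```python
def value(state):
    # Stand-in for the learned value function: AlphaGo uses a neural
    # network here; this toy just prefers states near zero.
    return -abs(state)

def best_moves(state, moves, k=3):
    # Instead of expanding every move, keep only the k highest-valued ones.
    return sorted(moves, key=lambda m: value(state + m), reverse=True)[:k]

def search(state, depth, k=3):
    # Depth-limited search over the pruned move list: the value function
    # shrinks the branching factor from len(moves) to k at every level.
    if depth == 0:
        return value(state)
    moves = [-2, -1, 0, 1, 2]  # toy move set
    return max(search(state + m, depth - 1, k) for m in best_moves(state, moves, k))
```

With 5 candidate moves cut to 3 per level, a depth-d tree shrinks from 5^d to 3^d nodes, which is the same mechanism (on a toy scale) as checking 50,000 positions instead of millions.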
Re: it's not AI, it's ML (+1)
UnknownSoldier September 19th, 2022 10:47AM
> If that isn't a form of intelligence, I don't know what is.
Glorified table lookup with a feedback loop is NOT intelligence. Tweak the rules slightly and all those games it played are nearly worthless.
AI = artificial ignorance, at best.
Re: it's not AI, it's ML (+1)
RightSaidFred99 September 17th, 2022 11:16PM
You keep moving those goalposts, lol. Really put your shoulder into it!
Find me a person, cat, or bumblebee who can analyze how a drug interacts with billions of combinations of proteins and devise other drugs that would behave similarly and have fewer side-effects.
But, but, but that's just like a calculator maaaan! Fucking bullshit it is. You are inventing some scenario where to create new things one must be "intelligent" with some ineffable "something" that we can't simulate. That scenario doesn't exist.
Re: it's not AI, it's ML (+1)
Visarga September 19th, 2022 1:42AM
In fact we have T-cells that do protein learning. That's how we get an immunity.
Re: it's not AI, it's ML
mesterha September 17th, 2022 9:51PM
However, i'm extraordinarily suspicious that some smart person is going to figure out that we can do a similar thing without using ML to do it
ML is a subfield of AI.
No.
Anonymous Coward September 17th, 2022 10:53PM
ML is a subfield of AI.
If you don't have consciousness, you don't have intelligence.
We don't have artificial consciousness. Yet, anyway. Until we do, there is no AI, because there is absolutely no I.
Right now, it's marketing and hyperbole all the way down to the turtles and elephants.
ML is what we have. So far.
Re: No. (+1)
mesterha September 18th, 2022 8:49AM
I guess you think jumbo shrimp are impossible. AI is a phrase that was coined to name a field of research. It doesn't mean combining some definition of "artificial" with some definition of "intelligence". Of course it was motivated by those words, but it gets its own meaning from its origins and its history. Part of that history is that most AI researchers realized they couldn't write rules to make an effective system; the system had to learn on its own. This is why ML became so important, and why almost everyone who does AI research has some ML component in their work.
Re: it's not AI, it's ML (+1)
narcc September 17th, 2022 10:03PM
back to the point. there's no AI. True AI,
What you call "true AI", and what I suppose is the common understanding of the term, philosophers call 'strong AI'. I've taken to calling it 'science fiction'. You are absolutely right: no such thing exists. While it would be nice if we could reserve the term 'AI' for the common meaning, that battle was lost before it even began. Pamela McCorduck, who was there at the time, explains the origin of the term "AI" in her book Machines Who Think. It's well worth a read.
The term AI, as it is now, is pretty broad and covers a ton of things that you might be surprised were categorized that way. Machine learning is a very large part of that, of course, but it is not the whole of it. We could try to make up a brand-new term for it, but that's an uphill battle.
I honestly think we're better off trying to educate the public so that they're a bit more skeptical when they hear the term being used. Pointing out the painfully mundane things the term covers, things they wouldn't ordinarily associate with AI, would help. Decision trees, for example, fall within that scope, as do expert systems and even linear regression. It's not exciting at all, but that's the point.
There's quite a bit of misunderstanding, as you can see even in the replies to your post, about the scope and capabilities, so something like that could help even the better informed. There are a few here who seem to think that NNs are the beginning and the end of it. I suspect they'd be more than a little surprised at the role more traditional algorithms play in some of the cooler 'AI' tricks. Thinking about it now, learning about AI is not unlike learning magic tricks: once you understand what's really going on, you can't help but feel disappointed.
Re: it's not AI, it's ML (+1)
RightSaidFred99 September 17th, 2022 11:11PM
No it isn't. Can your cat put his little paws on a steering wheel and drive down to the store, peeping his little eyes over the steering wheel and using his other little paw to push the accelerator (I'll even allow for alternative brake/accel pedals)? Can your cat look at 500 simultaneous streams of video and put names to faces for large swathes of the people in the videos?
Sorry, your cat is a dumbass. There are things your cat can do better than a computer, hell there are things it can do better than a human but that doesn't make it "smarter" in any broad, categorical way.
In fact, your whole screed is nonsensical and misses the point. AI/ML is a tool to analyze massive amounts of data and draw conclusions that humans either aren't able to or can't scale to.
Re: it's not AI, it's ML (+1)
zmollusc September 18th, 2022 10:37AM
Yes. A tool. All tools do things that unaided humans cannot. AI can analyze masses of data that a human couldn't, a crane lifts objects that humans, even a lot of humans trying to work together, couldn't.
Re: it's not AI, it's ML (+1)
Grokew September 18th, 2022 1:26AM
Well, a bumblebee is smarter than a TikToker, so there's that.
Re: it's not AI, it's ML (+1)
serviscope_minor September 18th, 2022 2:27AM
back to the point. there's no AI.
That is not, in my opinion, true. It's just an artefact of ever-moving goalposts. What we have now is that, by artifice, we can solve things that formerly required human intelligence to solve. Hence artificial intelligence. AI doesn't need to be the same as a complete artificial human mind, any more than artificial flowers need to grow to be called such. We have lots of artificial things; none of them is ever a 1-1 substitute for the original, but nonetheless "artificial" is a perfectly good word.
We've called the code that controls non-player units in computer games "AI" since forever, since they are there as a substitute for human intelligence controlled units. We have things now that can do a moderately good job at choral arrangements, determine if an image contains a thing (don't believe the wankers who claim superhuman performance, that is so much bullshit), write text, create images and so on.
AI doesn't mean "complete artificial human mind".
Re: it's not AI, it's ML (+1)
Visarga September 18th, 2022 10:09PM
Does your cat beat you at Go and Chess, fold proteins and paint amazing images on request? Can it implement even a simple Python script on request?
Re: it's not AI, it's ML (+1)
Dread_ed September 18th, 2022 10:34PM
Even the "dumbest" AI has the propensity to tell us novel things that a cat cannot, things we don't yet know, and all from watching the dang cat.
That's the crux of it. The information produced is orthogonal to biological neural processing.
Re: it's not AI, it's ML (+1)
mjwx September 19th, 2022 7:30AM
My cat is far, far smarter than the best "AI" we've developed.
That's because we haven't developed true AI... What we call AI is really just a fancy ruleset we run data through. It's not capable of anything we'd call intelligence, i.e. self-awareness or the capacity for independent change, let alone learning or creativity.
Also your cat has had millions of years of evolution, millennia of domestication and hundreds of years of lordship over humanity. "AI" has, at best, a few decades of development.
There's a huge difference between hard AI (Artificial General Intelligence, or AGI) and soft AI (what we have now), and we likely won't have AGI for many more decades, possibly longer. Soft AI struggles with anything ill-defined, ambiguous, or fuzzy: image recognition, human emotions, natural language (Google Translate, as good as it is, really struggles with local slang it hasn't been instructed on), and so forth. But it's really, really, really fast at applying data to rules, hence it's quite good at sorting things that have well-defined criteria.
Hence we're not likely to see robot cars anytime soon... but the robot lawyer and robot accountant are practically already here, as both of those professions are very logical and very structured.
Not from what I've seen (+1)
aardquark September 17th, 2022 1:02PM
The problem as I see it is that AI is expensive to develop, and the corporations that have deep enough pockets to develop it, mostly use AI to manipulate us to buy things that we don't need, or can't afford, or don't want.
Re: Not from what I've seen (+1)
darenw September 17th, 2022 3:00PM
What the corporations do with AI will be fine after someone comes up with AI to tell me how I can make plenty more money. Many of us are waiting for that!
Some say it sounds just like god, allah, yhwh, etc
Anonymous Coward September 17th, 2022 1:03PM
Ethics violation
https://abstractionphysics.net...
moral violation
https://abstractionphysics.net...
legal violation
https://abstractionphysics.net...
https://abstractionphysics.net...
It's Obvious. So Bow down to your new stone image overlord, and do not look behind the curtain.
Yes, already has (+1)
bob_jenkins September 17th, 2022 1:04PM
Machine computing has already performed transformative, disruptive miracles. AI is just more of the same.
Novelist Understands Very Little (+4, Insightful)
crunchygranola September 17th, 2022 1:05PM
"An encounter with the superhuman is at hand," argues Canadian novelist, essayist, and cultural commentator Stephen Marche in an article in the Atlantic titled "Of Gods and Machines". He argues that GPT-3's 175 billion parameters give it interpretive power "far beyond human understanding, far beyond what our little animal brains can comprehend. Machine learning has capacities that are real, but which transcend human understanding: the definition of magic."
By scraping the web for its 175 billion parameters, copying the words of hundreds of millions of real intelligences, GPT-3 can make a convincing replica of human conversation.
But it understands nothing of what it says. It is simply Eliza on a gigantic scale. It is true that humans cannot explain the internal processes by which it produces particular outputs (a failure of the technology thus far), since it is performing a vast number of statistical pattern matches. But this is not "beyond human understanding"; it is just that people cannot put any meaning into words when presented with a terabyte core dump.
His fundamental ignorance is proven by the assertion that this is vaster than "our little animal brains". An average human brain dwarfs the puny scale of GPT-3. It is difficult to make a precise comparison, but a single model parameter is most nearly analogous to a single synapse, i.e. a single connection strength between neurons. The brain has on the order of 10^15 synapses, making it several thousand times larger than GPT-3.
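The arithmetic behind that comparison is easy to check (taking the 10^15 synapse estimate and GPT-3's published parameter count at face value):

```python
synapses = 1e15          # rough order-of-magnitude estimate for a human brain
gpt3_params = 175e9      # GPT-3's published parameter count
ratio = synapses / gpt3_params
print(round(ratio))      # roughly 5,700x
```

Depending on which synapse estimate you use, the ratio lands somewhere in the thousands, so "dwarfs" is fair even if the exact multiple is debatable.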
Re: Novelist Understands Very Little (+1)
timeOday September 17th, 2022 1:26PM
But it understands nothing of what it says. It is simply Eliza on a gigantic scale.
I disagree on that point. The trick of Eliza is how superficially well it can respond without using any information that is specific to what you are actually saying or asking. Eliza could never, EVER play Jeopardy even passably well, whereas modern AI can do so exceedingly well. A big difference, without delving into metaphysics.
Re: Novelist Understands Very Little (+1)
jvkjvk September 18th, 2022 9:33AM
>Eliza could never, EVER play Jeopardy even passably well, whereas modern AI can do so exceedingly well. A big difference, without delving into metaphysics.
It's not really about what they can do. A modern AI is MUCH bigger than Eliza and *should* be able to do more. It's how they do it. And in that, both are the same, without delving into metaphysics.
Re: Novelist Understands Very Little (+1)
Visarga September 19th, 2022 1:52AM
If you think about it, humans are proteins in a watery liquid in a bag. Not much different from electrons in a chip.
Re: Novelist Understands Very Little (+1)
Visarga September 19th, 2022 1:49AM
Besides being non-contextual, Eliza was also hardcoded, not learning anything.
Re: Novelist Understands Very Little (+2)
iikkakeranen September 17th, 2022 3:27PM
What is understanding, if not the ability to match patterns and string concepts together? Isn't learning a language very much about constructing a web of relationships between words? What is superior about how you "understand" something vs a computer, if both of you can provide an equally useful answer in a given context? The paragraph in the summary written by GPT-3 has a quality to it that exceeds most humans' understanding of the topic. It's both interesting and insightful, and logically consistent. Hand-waving it away as somehow "not real" because it wasn't produced by a biological process is not useful. The human mind is an information processing system, and the hardware being moist is of secondary importance.
Re: Novelist Understands Very Little (+1)
gweihir September 17th, 2022 5:50PM
But it understands nothing of what it says. It is simply Eliza on a gigantic scale.
Exactly. The overall process is rather simplistic, if on a massive scale. It is not more intelligent than, say, a dictionary, which, given a word, can give you an explanation of that word. And obviously a stack of paper with ink on it is not intelligent in any way.
Re: Novelist Understands Very Little (+1)
SoftwareArtist September 17th, 2022 7:12PM
this is not "beyond human understanding" it is just that people cannot put into words any meaning when a terabyte core dump is presented to them.
It's not a problem of putting it into words; it truly is beyond human understanding. We can train a massive model like GPT-3, but we literally have no idea how it works. Somehow those 175 billion parameters manage to encode a whole universe of concepts, relationships, grammar, and much more. How do they encode it? No one knows. We aren't even sure how to begin figuring it out.
Eliza is totally different. It has a small number of hand coded rules. The author knew exactly what those rules were. Any competent programmer can look at the source code (it's quite short) and understand what the rules are. Modern AI is nothing like that. No one designed rules for it to follow. No one told it how to parse sentences or produce new sentences. Rules somehow emerged on their own just by adjusting parameters to optimize a loss function, and no one knows what those rules are.
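The contrast is easy to show. An Eliza-style program is a short list of hand-written rules anyone can read (a minimal sketch in the spirit of the original, not Weizenbaum's actual code):

```python
import re

# Every rule is explicit: anyone can read exactly why a reply was produced.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
]

def eliza_reply(text):
    # Try each hand-coded pattern in order; fall back to a stock reply.
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1))
    return "Please go on."
```

A large language model has no such listing: whatever rules it follows are smeared across billions of learned parameters that no one wrote down and no one can currently read back out.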
Re: Novelist Understands Very Little (+1)
real_nickname September 18th, 2022 1:44AM
By scraping 175 billion parameters from the web GPT-3, copying the words of hundreds of millions of real intelligences, it can make a convincing replica of human conversing.
I tried a GPT-3 prompt and it gives dumb answers if, like me, you don't know how to use it, so yes, very far from a human. However, the thing can state totally wrong answers with great assurance, just like a human being.
Re: Novelist Understands Very Little (+1)
Pb100 September 18th, 2022 6:05AM
Not a great example, but I wonder how well an AI would compare to humans if given a problem such as: What is 1+1? a) 3 b) 5 c) 6 d) 7. Given that none of the answers is correct, a human might opt for the "most likely" wrong answer using deduction, i.e. option (a), which is at least a known phrase and so has a marginally better chance of being what the puzzle maker had in mind, even if incorrect.
No, no I don’t (+2)
ThePawArmy September 17th, 2022 1:05PM
>> Remember when everybody believed that the internet was going to improve the quality of information in the world?
No, no I don't remember anyone around me who thought the quality of information would improve.
Re: No, no I don’t
Anonymous Coward September 17th, 2022 1:11PM
You and the cats-paw guy give me paws.
Re: No, no I don’t (+5, Insightful)
trickyb September 17th, 2022 1:36PM
Well, the internet has improved the quality of information. Wikipedia, for all its faults, is a million times bigger and better than the Encyclopaedia Britannicas of old. When I want to carry out minor repairs on my car or my bike or my washing machine, in a few seconds I can bring up a YouTube video. Remember having to keep several maps in your car, most of which would be out of date and leave you unaware of a bypass built 5 years ago? And so on...
I will concede that the internet has not improved the quality of information in every domain.
Re: No, no I don’t (+1)
darenw September 17th, 2022 3:02PM
"Grampa, what's a paper map? How did it know where you were and tell you when to turn?"
Re: No, no I don’t (+1)
serviscope_minor September 18th, 2022 2:30AM
Well, son, a paper map is what you use in Cornwall when you have 10 miles with inexplicably poor phone coverage.
I forgot my paper map last month and did in fact regret that.
Re: No, no I don’t (+1)
greytree September 17th, 2022 2:06PM
But Porn is way higher resolution.
Re: No, no I don’t (+1)
gweihir September 17th, 2022 5:48PM
It has improved. I used to have an old high-quality encyclopedia (not in English). The Internet can replace it now, but that requires some level of education, honesty, and ability to fact-check in the reader. Given what that encyclopedia cost back then, and that you can get something reasonably similar for low cost now, I would say this is massive progress. Even poor people in poor countries can now access the actually known facts with reasonable effort, if they so choose. The problem is that most people do not want accurate information; they want confirmation of their own misconceptions and fantasies. That is not a problem of the Internet.
Tell me your incentive structure and I will tell y (+1)
RightwingNutjob September 17th, 2022 1:21PM
what kinds of lies you will tell to yourself and to all who listen.
Army generals lie about the prospects for victory by force of arms.
Diplomats lie about the prospect for peace and progress by diplomacy alone.
Bankers lie about how much money they manage and how well they manage it.
Journalists lie about the quality and objectivity of their reporting.
Economists lie about how well they can reduce "the economy" to a single number or a single sound bite.
Politicians lie about anything and everything that might give them an edge over the other guy in a competitive race, and even politicians running unopposed lie about how atrocious and subhuman their hypothetical opponents might be.
Techies lie about the 1337n355 of their tech and tech writers lie about how well they can take the product of arcane and specialized know-how and describe it meaningfully in English prose, without invoking a number bigger than the number of fingers on the average hand.
Re: Tell me your incentive structure and I will tel (+1)
gweihir September 17th, 2022 5:43PM
Not everybody does it, and the best experts in a field at least do not lie to themselves (or they stop learning and never get there), but yes, most people do this. Anybody making grand predictions wants to sell something, often to themselves.
Re: Tell me your incentive structure and I will te (+1)
RightwingNutjob September 17th, 2022 6:34PM
Those who know don't talk and those who talk don't know.
True not just about spooks
Yes, but not on it's own (+2)
HiThere September 17th, 2022 1:26PM
As currently implemented, AI is a transformative technology. It's still getting started, and we have no idea how far it will go, just that it will go a very long way. This is largely because it can search through really huge specialized databases quickly. There are other bits, but that's the main thing. And don't denigrate its importance.
OTOH, current AI is not, and I believe cannot be, developed into AGI. That's going to require a very different approach. It will probably require robot bodies operating in the world. It will include the current idea of AI as a component, but only as one component. (OTOH, other components either exist or are currently being developed, so the final step of integrating them may turn out to be a small one.)
However, even if we don't build an AGI, current AI in conjunction with humans "will really perform transformative, disruptive miracles". I.e., it will enable changes that have not been predicted, and which will cause the lives of humans to change drastically. We still don't understand how much things will change when the automated car is the common vehicle, just that there will be profound economic disruption, and that's only part of the change. The automobile changed sexual mores in a way that still isn't complete, and this will be something as major, and probably as hard to foresee ahead of time.
Did CRISPR? Did mRNA? Where are the "Miracles"? (+1)
Seven Spirals September 17th, 2022 1:33PM
Not seeing "CRISPR Cancer Cure Kit" on the shelves anywhere. Still haven't seen "mRNA Flu-Proof" either. It barely worked for a few months on CV1984 and couldn't even cure or prevent the illness. Their big claim to fame was to slow it down for a few months and "lessen the impact" as a therapeutic. These pharma and doctor assholes have a terrible record once you hit the 1960's. Compare Penicillin to either of those "miracle" technologies. It cured a metric fuckton of diseases. When I say "cured" I mean CURED as in knocked the fuck outta the box, not "might make it easier on you."
It's mainly two groups who believe this (+4, Insightful)
93 Escort Wagon September 17th, 2022 1:38PM
- Non-technical people like this author
- AI researchers asking for more funding
Re: It's mainly two groups who believe this (+1)
SQL Error September 17th, 2022 7:45PM
Everything's a miracle if you don't understand how anything works - or you get paid for producing miracles.
Re: It's mainly two groups who believe this (+1)
RightSaidFred99 September 17th, 2022 11:06PM
Lol, sure, sure. Look at how far AI/ML has come in the last 5 years alone; you're a confirmed nut if you think we won't start seeing startling shit out of the field in the next 10 years. I am in neither group, and I believe it.
Being a cynic is easy; it's not like someone will remember and come mock you in 8 years when some AI discovers a drug regimen that cures 60% of known cancers, or devises a new type of battery 40% lighter, with 50% more energy density, and faster charging than anything we have now.
I don't think anyone sane thinks an AI will become "sentient" or anything, but as a tool the technology will lead to breakthroughs well beyond what one would have expected via natural human progress.
Re: It's mainly two groups who believe this (+1)
serviscope_minor September 18th, 2022 2:50AM
Lol, sure, sure.. Look at how far AI/ML has come in the last 5 years alone,
It's hard to remember that AlexNet was only published in September 2012, almost exactly 10 years ago.
That was pretty much the watershed paper: it turned ANNs from "old-fashioned thing that doesn't work that well, that you use if you're not smart enough to use clever modern techniques like obtuse SVM variants" into "holy shit, gradient techniques work". And the design of AlexNet is also vastly simpler than all the techniques it handily destroyed. Not that you'd want to, but today you could reimplement it in a handful of lines of code with a good modern general-purpose array autodiff library (i.e. PyTorch), whereas the old techniques could never reach that level of simplicity.
It then took a couple of years for things to really crank up, but by 2014, deep learning had pretty much displaced a huge amount of older computer vision, really most things other than where the maths is an indisputably correct way of modelling the world.
We have so much cool shit now as a result of being able to actually pick cost functions that match what we want (ehh, fuck convexity), good software and cheap, high performing array processors (GPUs etc). It should be nerd heaven, but slashdot got old, bitter and cynical.
You know, if it doesn't replace Albert Einstein then it's a worthless scam etc etc. Never mind you can do a pretty creditable job of mo-cap on a phone without either markers or a special background.
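The "gradient techniques work" point needs nothing fancy to demonstrate. A few lines of plain Python, fitting a line by hand-derived gradient descent on toy data (no autodiff library), show the core idea that deep learning scales up:

```python
# Fit y = w*x to toy data by gradient descent on the squared-error cost.
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # roughly y = 2x

w = 0.0
lr = 0.02
for _ in range(500):
    # d/dw of sum((w*x - y)^2) is sum(2*x*(w*x - y)): follow the slope down.
    grad = sum(2 * x * (w * x - y) for x, y in data)
    w -= lr * grad

print(round(w, 1))  # converges near 2.0
```

Swap the one-parameter line for millions of parameters, the hand-derived gradient for autodiff, and the loop for a GPU, and this is recognizably the same recipe: pick a cost function that matches what you want, then descend it.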
Re: It's mainly two groups who believe this (+1)
Visarga September 19th, 2022 1:59AM
> It's hard to remember that AlexNet was only published in September 2012, almost exactly 10 years ago.
And the transformer was invented 5 years ago, right in the middle of this golden decade.