Will machines completely replace all human beings?

In your questions, you ask whether machines will “enslave humans”. Machines already tell humans what “is needed” or “what to do next”. In sending a rocket to the Moon, the scientist doesn’t dare argue with the computer. At that point they are not totally enslaving humans, merely managing them by displaying the result of the logic programmed into them in such a way as to be influential. If the program were about winning wars, it would tell the generals what to do in order to win the war. It advises in such a way that the General doesn’t dare refuse.

Gradually, those machines get more and more sophisticated, such that they are managing the General, not merely displaying a selection of optional tactics. The machines are, in effect, not merely managing that General but enslaving the General’s opponent. And when the General’s opponent is the population itself, as is the case in the USA and most of Europe, the machines have enslaved the populace at the behest of the General.

Currently, social-engineering psychiatrists and psychologists are doing that same thing, using subliminal influence upon the adversary, the populace, even before the machines take over and replace the psychologists and psychiatrists. The machines already advise the social engineers. In the end, the game of social engineering becomes entirely an inescapable machine-derived paradigm. The Generals and the social engineers become part of the populace being managed along with everyone else.

In the late 70s, I became a production manager for a small manufacturing company for the first time. Being new at that game, and having many people depending on me to balance their wages properly, I wrote a program on a very small computer (before they were even called “PCs”) to sum up everything I knew about the people, plus anything else that might be relevant to how much the company could profitably pay them for their job. It included just about everything you could associate with being an employee: attendance records, an intelligence profile to suit the job, learning capacity to potentially suit other jobs within the company, average enthusiasm, attention to the task at hand, getting along with others, their own professed goals in life,… The program knew more about the personnel than their supervisors and the personnel department did. And it also had a budget/profit algorithm with which to balance against wages.
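For concreteness, here is a minimal sketch of that kind of program in modern Python. None of the original code survives here, so the attribute names, the weights, and the rule of splitting the affordable payroll in proportion to a weighted score are illustrative assumptions, not the original 1970s design:

```python
# Hypothetical reconstruction of the wage-balancing program described above.
# Attribute names, weights, and the scaling rule are illustrative guesses.

EMPLOYEES = [
    # All attributes normalized to 0-1: attendance record, aptitude for the
    # current job, learning capacity for other jobs, enthusiasm, attention to
    # the task at hand, getting along with others.
    {"name": "A", "attendance": 0.95, "aptitude": 0.80, "learning": 0.70,
     "enthusiasm": 0.60, "focus": 0.85, "teamwork": 0.75},
    {"name": "B", "attendance": 0.90, "aptitude": 0.60, "learning": 0.90,
     "enthusiasm": 0.80, "focus": 0.70, "teamwork": 0.90},
]

# Relative importance of each attribute (assumed weights, summing to 1).
WEIGHTS = {"attendance": 0.25, "aptitude": 0.25, "learning": 0.15,
           "enthusiasm": 0.10, "focus": 0.15, "teamwork": 0.10}

# What the budget/profit model says the company can profitably pay in total.
PAYROLL_BUDGET = 50_000


def score(employee):
    """Weighted sum of everything the program knows about the employee."""
    return sum(WEIGHTS[key] * employee[key] for key in WEIGHTS)


def balanced_wages(employees, budget):
    """Split the affordable payroll in proportion to each employee's score,
    so the result depends only on recorded data, not whispered reputations."""
    scores = {e["name"]: score(e) for e in employees}
    total = sum(scores.values())
    return {name: budget * s / total for name, s in scores.items()}


if __name__ == "__main__":
    for name, wage in balanced_wages(EMPLOYEES, PAYROLL_BUDGET).items():
        print(f"{name}: {wage:,.2f}")
```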

I was surprised when I first ran the program that it yielded almost exactly the same wages as they were already getting, with few exceptions. This indicated that the program wasn’t strongly needed, but it was designed to be entirely altruistic and unbiased. That program was designed to use all of the exact same information that “Human Resources” people gather on people today, with the exception of prior whispered reputations. Today, Human Resources people do that same thing, except that they usually don’t know that it is a program in a distant computer informing them, nor what biases are being used in order to engineer society in general.

That was back in the late 70s. Computer-derived advice, gained through remote “statistics” (far less relevant to the company at hand), has exponentially increased in its influence and capacity to persuade managers, especially in large companies. A big part of managing a company is managing the managers: selecting them based on computer derivations and gauging them relative to computer-derived budget/profit concerns. A big part of managing the managers is ensuring that they adhere to computer advice (whether they realize it or not) - “loyal to the machine”.

The intelligence of the people - the managers, engineers, and employees - is being replaced by remote machine intelligence. The people become merely humanoid drones. The people dare not think for themselves. Yet they are not aware that they are not thinking for themselves.

I saw it coming because I was a part of its original inspiration. It didn’t take a megatronic, super-duper, ultra-computer of any kind. Merely a clever intelligence designer/programmer with good intent.

And more recently, I built and programmed what I call “Jack”. Jack is a computer that emulates reality on the most fundamental level, below physics, and automatically derives the “laws of physics”. Jack knows things that even I don’t know. Even I don’t argue with Jack. Yet Jack was not any grand supercomputer, merely very smart.

In some cases, in the beginning. Though remember, you have to work in the saved costs on health care and the increased workload - in the planners’ minds, that is, at least.

Imagine if certain individuals of Homo erectus had decided to plan their successor. (They might have chosen to be saber-toothed tiger-men.) Think about how people currently imagine they could be made better. We will have decisions made by corporations, who will start on children, letting them know what improvements they should want, and generally they will want them. There is a hubris involved, and one part of the mind thinking it has a good grasp of the whole picture.

And you’re also treating evolution as advancement. Successors need not be more fit, especially now that we can control what succeeds. In general, succession suits the niche well. It doesn’t mean that the horse is better than the eohippus. Here we can control both the niche and the succession. It could just lead to a real mess.

Even more likely is something like, let’s say, a general reduction in emotion. And a few frog-in-slowly-warmed-water generations later, we have people who are really quite empty, though, like wasps, very hardy and fit. They will likely not know what they are missing, having nothing to compare it to. And this is barely speculation - in terms of trends, having emotions today is nearly synonymous with being diagnosable.

I’m certainly not assuming any teleology. Eohippus is not as well fitted to the modern environment as the modern horse (or at least, we’ve been through an environmental situation where it was worse suited than the modern horse). If we change our environment, the people we design for that environment will succeed only for as long as we can control the environment. When that changes, the changes we made may become handicaps, and humanity may face extinction. But that’s true at the moment too.

I agree that it’s hubristic and unwise to assume that our tinkering will have the effects we foresee and no significant unwanted side-effects; in any complex system theoreticians lose out to conservative empirical tinkering, whether it’s social or medicinal or whatever. But I think it’s inevitable, because people like theories and like to think they’re in control.

Seriously? Conversely: compared to a hundred years ago, western males are blubbering wrecks who are “in touch with their emotions” and “seeking closure” where their forefathers cauterised the pain and got over themselves. I honestly don’t see emotion on a downward trend, except maybe compared to local high points like California in the 70s.

That’s one of the reasons I said in my last post that there is not only a correlation between machines and fertility but also a correlation between machines and intelligence (=> #). The difference is that the first correlation appears earlier than the second, but both appear, and always appear (you can be sure).

This is what Wikipedia says about “drones (bees)”:

"The drones’ main function is to be ready to fertilize a receptive queen. Drones in a hive do not usually mate with a virgin queen of the same hive because they drift from hive to hive. Mating generally takes place in or near drone congregation areas. It is poorly understood how these areas are selected, but they do exist. When a drone mates with his sister, the resultant queen will have a spotty brood pattern (numerous empty cells on a brood frame). This is due to the removal of diploid drone larvae by nurse bees (i.e., a fertilized egg with two identical sex genes will develop into a drone instead of a worker).

Mating occurs in flight, which accounts for the need of the drones for better vision, which is provided by their large eyes. Should a drone succeed in mating he soon dies because the penis and associated abdominal tissues are ripped from the drone’s body after sexual intercourse.

Honey bee queen breeders may breed drones to be used for artificial insemination or open mating. A queen mating yard must have many drones to be successful.

In areas with severe winters, all drones are driven out of the hive in the autumn. A colony begins to rear drones in spring and drone population reaches its peak coinciding with the swarm season in late spring and early summer. The life expectancy of a drone is about 90 days.

Drones do not exhibit typical worker bee behaviours such as nectar and pollen gathering, nursing, or hive construction. While drones are unable to sting, if picked up they may swing their tails in an attempt to frighten the disturber. Although the drone is highly specialized to perform one function, mating and continuing the propagation of the hive, it is not completely without side benefit to the hive. All bees, when they sense the hive’s temperature deviating from proper limits, either generate heat by shivering, or exhaust heat by moving air with their wings—behaviours which drones share with worker bees. In some species drones will buzz around intruders in an attempt to disorient them if the nest is disturbed.

Drones fly in abundance in the early afternoon and are known to congregate in drone congregation areas a good distance away from the hive."

AND AFTER THAT THEY HAVE TO GO TO BED.

And here is what you said about ants and bees (incl. drones):

Okay, that was said in a different thread (=> #), but it also fits in this thread.

There is a high probability that people will become humanoid or “cyborgoid” bees.

And in the not so very far future, they will be a kind of cyborg without any awareness of what happened in the past, what happens in the present, and what will probably happen in the future, because they just do what they are told, advised, ordered, commanded to do.

The history of thinking must be written soon, since there is not much time left for it: the thoughtless time will begin sooner than most of today’s people “think”.

And here it is said:

If a species can even destroy itself, then it cannot be false to assume that machines will perhaps exist longer than the species Homo sapiens that created them.

I actually doubt that the eohippus is less fit for today’s environment. Fewer large predators, easier to hide. I see no reason why it would NOW be less fit.

Yes, I think it is inevitable.

It’s a good point. I’ll have to mull that a bit. It seems like there are a couple of trends happening at once: the pathologization and medicalization of emotions AND the relaxation of taboos to some degree.

“I actually doubt that the eohippus is less fit for today’s environment. Fewer large predators, easier to hide. I see no reason why it would NOW be less fit.”
Large herbivores are threatened everywhere. Humans want farmland, there’s less to eat. Hunters want trophies, Chinese medicine wants ivory and hide. Maybe they could flourish in Siberia, or in a nature reserve. Horses do well because they’re large enough and small enough and tameable enough to ride, and for the meat. Eohippus might be a good beast of burden.

My feeling is that the imperative to be happy is behind both pressures - happy as an emotion, a sensation, rather than a way of being in the world.

IMHO, I don’t think humans will ever be completely replaced by machines. I don’t foresee our technology ever getting to a point where none of it has to be managed by human beings under some circumstance or other.

Besides that, human beings just won’t stand for it. If machines completely replace us (in the workforce, that is), human beings will be out of work. We’ll revolt and destroy the machines before we allow ourselves to starve.

So the frog will eventually jump out of the pot and overthrow the machines?

I just don’t think it must necessarily end either way. Your frog analogy isn’t very good, but both are possibilities. Machines could take over in some possible world, but I can also imagine a world where we aren’t completely naive and don’t build machines that end up actively destroying humankind. Machines that outlast people aren’t necessarily replacing human beings either.

They have been for years because the frog doesn’t jump out of the pot.

Think about the economy. Certainly no intelligent governance would let it get as horrifically bad as it is, certainly not in Greece. But the frog simply doesn’t jump out of the pot. Why not?

It is called “Normalcy Bias”.

Certainly any sufficiently selfish intelligent governance would allow the situation in Greece if there were a controllable benefit. Are you saying that machines are sabotaging our economies and cooking people alive?

And who would that be?
And wouldn’t that also apply to machines replacing people… for the same selfish reasons?

I don’t really know, but I also don’t know it is a coup by machines to eliminate humans and take over. Either is possible, I suppose, but I can’t accept the machine narrative without some serious evidence, and at the same time there is plenty of evidence of people controlling other people through power relations.

Sure, but I just don’t believe there have to be such machines or that we don’t stand a chance to control our devices.

“We” already don’t control them.

Do you control what your eyes see?
Don’t you respond in accord with what your eyes see?

If Google designed your eyes (which in a real sense they do, along with Microsoft), you see what you are given to see by a machine that I can guarantee you Microsoft has lost all control over. And you respond accordingly, as do they, because they use similar machines to tell them of reality and what is really important and needed by society and the bank.

It is a machine that told Bill Gates that “We must immediately begin eliminating a great deal of the population”. The machine logic reduced to simply: “We do things this way, which costs that much, which requires X amount of resources, and people require too much of those resources.” The machine wasn’t asked if there is a better way of doing things, because each step is already machine-designed, so the presumption (without thought) is that “machines have already made it perfect enough, and thus we simply must get rid of the people”.
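To make that claimed logic concrete, here is the reasoning pattern in a few lines of Python. The numbers and variable names are invented for illustration; this sketches the argument the post attributes to the machine, not any real system:

```python
# Invented numbers illustrating the naive constraint described above:
# "we do things this way, which costs that much, which requires X resources,
# and people require too much of those resources."

RESOURCES_AVAILABLE = 1_000_000.0   # total resources the model assumes exist
COST_PER_PROCESS = 10.0             # resource cost of doing things "this way"
PROCESSES_PER_PERSON = 5.0          # demand each person places on the system
POPULATION = 30_000.0

demand = POPULATION * PROCESSES_PER_PERSON * COST_PER_PROCESS

if demand > RESOURCES_AVAILABLE:
    # The only free variable the model considers is the population, because
    # each process is presumed already machine-designed and therefore optimal.
    sustainable = RESOURCES_AVAILABLE / (PROCESSES_PER_PERSON * COST_PER_PROCESS)
    print(f"Reduce population to {sustainable:,.0f}")
    # The question never asked: could COST_PER_PROCESS be lowered instead?
```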

Basically, on the higher level, the machines have been asked merely to design a beehive or ant colony, but of ultimate power.

And it isn’t a coup, it is designed to be voluntary so that the blame is shifted entirely upon You, the population.

If this is an allusion to humans not responding to the gradual machine take-over, then it is a poor one. Think of it in terms of individuals rather than the labor force as a mass bulk: a man who has worked at a factory for several years and suddenly loses his job because a more efficient, more powerful, faster machine has taken his place is not going through a gradual transition. His lay-off is sudden, and he will be upset. There will be a point when enough people experience this unwelcome transition at a high enough rate that they, in response to the prospect of starvation, will do something about it.

Utter nonsense.

The man doesn’t see the slow change that led to the machine taking his job while he was buying internet time, buying computers, investing in tech companies, paying taxes used to build more technocracy, and watching TV. By the time it is too late, the water has gotten too hot; he doesn’t “jump out of the pot”; he, in effect, dies - gets laid off. He died from blindness to that which was sneaking up on him slowly, exactly like the slowly boiling water that he can sense but can’t tell where it is coming from… until it is too late to do anything about it but get kicked out - “die”.

The analogy was formed pretty perfectly… long ago when it was first stated.

James,

I just don’t understand your thought process.

Do I control what my eyes see?
Well, I believe I can shut them if I want, or direct them at a certain part of my surroundings, so to some extent yes…

Do I respond in accord with what my eyes see?
I respond to what my eyes see according to a variety of factors… Perhaps instinct takes over if I see something dangerous; other sensations, previous memories, or mood fluctuations probably factor into my response by providing the decision-making part(s) of my brain with context. I don’t know what you’re asking.

If Google designed my eyes?
What are you talking about? Google Glass, Google search results and algorithms, Google advertising, YouTube…?

…or else what? What did the machine say would happen if we didn’t? What conditions were given as input for the machine to come to that conclusion? Again, what are you talking about?

Yes… all of that and more: “statistics that the government uses to promote various ideas and laws upon the population” (for example). And not merely gathered with machines, but filtered and analyzed by machines, in machine ways of thinking, about what has been said or done.

Or else the human race would entirely die out due to overpopulation devastating the resources. There have been quite a number of films and documentaries on it.

Statistics were given, as in the above example of Google, but far more, and not merely from Google machines. And far too many machines for anyone to track down any errors that might have been involved, so they just go with the “probability” that the machines know what they are talking about, and we had better obey “Science”. We all know that Science can’t be wrong, or else your cell phone wouldn’t work.

James, is everything a pissing match to you?

You’re equating death with the worker’s being laid off. I was equating it with his literally dying of starvation. I’m saying the revolt will happen after the laying-off. As for before it happens, I could agree with you that it’s like the frog-in-boiling-water scenario.