Speculation

At an artificial intelligence convention a few days ago, a prediction based on available data was made that the power of artificial intelligence is very likely to develop to levels billions of times that of existing human intelligence.

The suggestion made on the basis of this information is that a sync has to be established between human and artificial intelligence, in order to retain a nexus and prevent astronomically huge differences from developing, possibly causing terrible miscalculations and their consequences.

Would some kind of microchip injected into the human body, creating a biofeedback system on some level, do the job? Would it need constant upgrades as AI increases at an ever-steepening rate?

Could the human neural system accommodate such increases by adding new neuron channels and circuitry?
Or would the brain abruptly shut down?

And finally, what changes in human life and reality would result from this? Would we, as a species, move closer to the entity of God, as we have intelligently developed it since our inception?

Anybody care to hazard some kind of picture of this possible, even probable, truly brave new world?

Less broadly, will consciousness and interactive nonverbal communication become more likely to validate now-marginal areas of study such as those currently ascribed to people like Jung and Cayce? Will extrasensory perception, psychic phenomena, and theism become more credible?

Will phenomenally increasing brain power prove magnet-like properties in cortical brain studies, prompting a re-examination of ideas associated with Reich’s orgone theory, intuitive philosophy and mathematics, and a verification of Bergson’s ideas?

Will belief in God be more theoretically demonstrable, where Ultimate Consciousness coincides with the view of a closing circle between an approximation of Absolute apprehension and a mathematically demonstrated closure of the Leibnizian idea of the coincidence of two perfect spheres? Will the discarded idea of the ether be reconsidered?

If some, most, or even all of the above can be settled by then, will the idea of Man as God be acceptable?

The premise for this last proposition rests on the abandonment of Creationist theory, since that is becoming obsolete for the following reason:

For Being to exist, it has to develop out of the archaic (by then) digital either/or idea. (There cannot be a conception of nothing without the perception of something.)

Therefore something and nothing are logically interdependent and causally insignificant at that level. Sameness will lose its significance as well, and as far as Intelligence is concerned, the God-Man distinction will also break down, unsettling the connection between knowledge and evil, and the Faustian Romantic idiom as well. New Romantic revivals may set the tone, and Humanism may become legitimized, no longer as worship at the altar of self-service and narcissism, but as the beginning of an era of mass adherence to the idea of the Superman. The Machine, fed this information, would never revolt against God, since It will know Itself as One With It. In destroying man, the idea as substance, It also destroys Itself.

I’m actually working on this, building a home-made singularity of sorts, because the issue is one of efficiency rather than raw power. The NSA, for example, informed the entire semiconductor community that the human brain is the model for what they are all shooting for, because it is a relative “slow cooker” but incredibly efficient at crunching enormous numbers. We use perhaps 30 W on average to accomplish with casual ease what enormous next-generation supercomputers are only about to do. Today’s race for AI includes analog circuitry that can be 10,000x more efficient than digital, while quantum systems can be over 100% efficient. The Chinese are the first to specialize in analog quantum circuitry, but classical computing is still more efficient in some cases, and, among others, IBM is working on a computer that combines both.
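As a rough back-of-the-envelope illustration of the efficiency gap being pointed at here, a minimal sketch with illustrative figures (the ~20 MW machine-room power is my own assumption, not a number from this thread):

```python
# Rough, illustrative comparison of power budgets (assumed figures, not data from this thread).
brain_watts = 30.0            # the ~30 W estimate quoted above for the human brain
supercomputer_watts = 20e6    # assumed ~20 MW for an exascale-class machine room

# If, purely for the sake of argument, both did comparable useful work per second,
# the implied power advantage of the brain would be:
advantage = supercomputer_watts / brain_watts
print(f"implied power advantage: ~{advantage:,.0f}x")  # ~666,667x under these assumptions
```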

Anyway, analog is rapidly becoming synonymous with efficiency, and with a way to reduce latencies and provide short-cuts that classical logic and physics alone can’t manage. My own design incorporates 64-96 arithmetic accelerators running in parallel to animate a VR platform using a simple adaptive AI approach and a Mind Maze headset that reads your brain waves, providing the computer with 30-70 ms of advance warning. Among other things, the headset eliminates lag issues such as draw distances, providing time for the relatively slow arithmetic accelerators to buffer. Using a headset this way allows the operator’s feedback to be used as the actual computer that does most of the real number crunching, while the hardware uses a rudimentary AI system to gather and collate the operator’s feedback to express greater complexity within the virtual reality.
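A minimal sketch of the kind of look-ahead buffering described above, assuming a hypothetical headset interface that emits a predicted action a few tens of milliseconds early (the MindMazeHeadset and Renderer classes and their methods are invented for illustration; this is not a real Mind Maze SDK):

```python
import time

# Hypothetical stand-ins for the headset and renderer described above (not a real SDK).
class MindMazeHeadset:
    def predicted_action(self):
        """Return (action, lead_time_ms) guessed from brain/eye/gesture signals."""
        return "look_right", 50  # placeholder prediction with ~50 ms of lead time

class Renderer:
    def prewarm(self, action):
        print(f"pre-rendering assets for '{action}'")
    def present(self, action):
        print(f"presenting frame for '{action}'")

headset, renderer = MindMazeHeadset(), Renderer()

# Use the 30-70 ms lead time to hide buffering latency:
action, lead_ms = headset.predicted_action()
renderer.prewarm(action)        # start the slow work early
time.sleep(lead_ms / 1000.0)    # by the time the action actually occurs...
renderer.present(action)        # ...the buffered frame is already waiting
```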

Essentially, it amplifies yin-yang push-pull dynamics by leveraging the greater nonlinear temporal dynamics of analog circuitry. You can add biofeedback to the system, and haptics or whatever, but all it requires is a significant amount of immersion. In my design, I intend to use a rasterized video game engine rendered with real-time ray-traced lighting. The lighting provides additional AI that can be easily programmed into the system to make all the animations behave much more naturally using eye-tracking. The analog circuitry can be programmed and hardwired to promote the resilience of complex systems up to perhaps the complexity of a flock of chickens. The Mind Maze can incorporate eye, gesture, and expression tracking and is manufactured by a major medical research company.

That means I can use the human brain as a short-cut to crunching most of the larger numbers, and add AI from the cloud or whatever to supplement the feedback the operator is constantly providing. Most human behavior is so predictable it’s not funny, and I’m leveraging that fact to allow the operator to do more of the work for the computer. Theoretically, an advanced version of the entire platform could be built into a headset using quantum circuitry and actually entangled with the operator, because Roger Penrose’s theory of quantum-induced microwave vibrations in the brain has received its first two experimental confirmations. My own specialty is the fuzzy logic, which requires both fractal geometry and continuum mechanics.
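For readers unfamiliar with the term, here is a minimal, generic illustration of what fuzzy logic means in this kind of setting; the membership functions and the head-tilt example are invented for illustration and are not part of the actual design:

```python
# Generic fuzzy-logic illustration (invented example, not the poster's actual system):
# grade how strongly the operator "intends to walk forward" from a head-tilt angle.
def triangular(x, low, peak, high):
    """Triangular membership function returning a degree of truth in [0, 1]."""
    if x <= low or x >= high:
        return 0.0
    return (x - low) / (peak - low) if x < peak else (high - x) / (high - peak)

head_tilt_deg = 7.0
walk_forward = triangular(head_tilt_deg, 2.0, 10.0, 18.0)
stand_still = triangular(head_tilt_deg, -5.0, 0.0, 5.0)

# Unlike a hard either/or threshold, both intentions hold to a degree at once.
print(f"walk_forward={walk_forward:.2f}, stand_still={stand_still:.2f}")
```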

All this science talk is making me hard.

So this Mind Maze thing, it’s like a gamepad but no hands, like if you look to the right it registers as joystick input to the right?

I’m getting the vibe the Mind Maze cannot actually read your mind, though. But it’s making me hard when you talk about using my brain as a human calculator. I want to be a robot in utopia land doing equations as a math slave.

Not sure if it’s a fetish or what. But I only want to do it for an hour or two at most; I don’t want to be fully a slave.

My other question is, how exactly are you going to use my mind for these equations?

By overcoming the difference between the probability, beyond any doubt, that something (or someone) does not duplicate its stable state, versus its unpredictability.

OK, so the VR setup I’m planning incorporates the ability to organize things with roughly the complexity of a flock of chickens. It has to account for things like the resilience of the system, but it can cover any behavior up to that complexity using the operator’s own feedback to decide most of what happens in virtual reality. Walking around, we don’t normally walk backwards, so the computer doesn’t have to account for much except that we tend to walk forward and keep walking until we stop. An amazing amount of our interactions are really that simple and predictable, and the computer merely has to cover those to provide the kind of complexity required to easily add more complex AI to the system from the cloud or whatever.
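A toy sketch of the “people mostly keep doing what they are doing” prediction described above; this plain frequency-count predictor is my own illustrative stand-in, not code from the actual design:

```python
from collections import Counter, defaultdict

# Toy next-action predictor: assumes the operator usually keeps doing what they are doing.
transitions = defaultdict(Counter)

def observe(prev_action, next_action):
    transitions[prev_action][next_action] += 1

def predict(current_action):
    counts = transitions[current_action]
    return counts.most_common(1)[0][0] if counts else current_action

# Simulated operator history: mostly walking forward, occasionally stopping.
history = ["walk", "walk", "walk", "stop", "walk", "walk", "walk", "walk", "stop"]
for prev, nxt in zip(history, history[1:]):
    observe(prev, nxt)

print(predict("walk"))  # -> "walk": the engine pre-commits to the likely continuation
```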

Instead of the VR world being largely preprogrammed or random, it responds to the operator’s input and can provide a rather impressive amount of feedback that encourages the operator to learn new tricks; thus, the operator becomes their own computer, and the whole point is that the operator needs answers, not a more complicated computer. Some might call it a virtuous circle, with the machine merely being designed to promote self-actualization. If you can entangle the machine and operator, I’m not sure what would happen, but the machine becomes smarter along with the operator. Intel’s new Loihi chip is a machine-learning chip that does something similar and will be on the market next year for a couple of hundred bucks. I might use one; we’ll see what I need in the long run.

The Mind Maze reads your mind just enough to give the computer up to 70 ms of warning before you so much as blink. It tracks your movements, gestures, eye movements, and expressions. Adding real-time ray-traced lighting means any character can look you in the eye and act as though they know what is happening around them, and even what objects are around them. You could wave, and they’d wave right back. Knock one down, help it back up, and it would look you in the eye and say thank you. The ability to automate all of that in real time, with roughly the complexity of a flock of chickens, means the system has the complexity required for the operator to provide the required feedback.

70 ms is laggy, dude.

Second, I don’t understand how you will use this device as a supercomputer for equations.

Is English not your first language? Because if you cannot understand plain English, I will stop responding to your posts.

My mistake, I did not explain well enough.

What you are saying doesn’t seem coherent. What exactly are you trying to accomplish? A game world? Or an AI?
Why is a human component needed at all if you are simply trying to make an AI? Because it seems to me your analog microchips do all the computations, and all the human does is move around in your virtual world. So what role does the human player play in this at all? And that is what I am asking.

I fail to see how this contributes to the AI component (which is the main goal) of this project. What do ray-traced graphics have to do with anything either? You just claim that ray-traced lighting somehow creates AI, and you provide no explanation.

Again, and only a gut-level comment, but the changing and relative states coming from either or both ends may create an unassailable depth, from which some high-probability functional deviance has not been assumed.

But what if such assumptions are premature to the extent that they falsify, rather than reinforce, a set design flaw, or something?

What if, at that level (Janus), the developmental progression cannot be stopped? What if built-in safeguards for AI fail because of a block (existential or otherwise) that even chickens can discern at a previous level?

Even then, Cantor’s predicament defies any less complex re-tracing resulting from some kind of counter-security that the original program could not or would not anticipate.

What if such a surprising development was not conceivable at probability levels far below?

Where do ethical considerations merge here, except those which are transcendental, such as the Testaments?

That these processes at that level are progressive and reductive is no fault of the program, as present apologist dreamers may like to laugh about in a Goldilocks world.

The catch-22 of ranked choices, as in “if I do this or if I do that,” ultimately fades any element that could revive some kind of boundary situation capable of reinstating a willful termination of the program.

If it is terminated, the intent of the program itself may come under attack, and that is even more destabilizing.

The contradictory nature, as an essential difference in a differential quantum state, is better off assuming a Rousseauean primal state, on the assumption that learning does overcome a prior ideogram.

So a fearful AI may impose this onto even an original state, reducing the AI (for by this time the 2001: A Space Odyssey scenario manifests, of the AI not allowing complete termination).

Secondary safeguards, not related to existential threats, may not be allowed to operate, the AI permitting this because of the law of contradiction.

It is very nearly certain that by this time the differential becomes almost indiscernible, and the AI would have learned from the same law of contradiction, even from a presumed contradictory assessment, that such a move would be counterproductive, impractical, and even delusional.

That’s the Space Odyssey plot: preparation for future shock!