Rational Metaphysics: The Equation for Space

Haha… :laughing:

Is that your exceedingly polite way of saying that you think it is all fiction (aka. “BS”)? :sunglasses:

…and the author of that linked article is “Posted by: JSS”.

Not to dup-post, but rather merely to address the “fiction” issue;

First, as stated in the other thread, one cannot prove anything concerning logic unless the person viewing it adheres to logic - proof is in the mind of the beholder. Definitional Logic proves the incontrovertibility of the logic, and the ontology is then proven by the sheer number of exact likenesses to empirically demonstrated physics. Unlike Science, RM doesn’t “reverse engineer” the physical universe. RM designs a physical universe and then compares it to the one already in operation. When that comparison is so exact that one cannot be distinguished from the other in every observed phenomenon in physics, a sufficient proof has been formed.

The computer that I used to generate all of this is named Jack. Jack is the combination of a single-bit processor (SBP) and a common PC. The PC handles the memory storage and display while the SBP handles the logic. I chose the name Jack because, frankly, if you don’t know RM, you don’t know Jack. :sunglasses:

With what I had to work with, and in the time allotted, I created a metaspace spanning only a fraction of the size of a hydrogen atom - enough to be able to watch particles form and interact. But within that space, all of the laws of physics can be witnessed, even though none were ever programmed into the behavior.

I don’t know why I hadn’t thought of that before, but it actually wouldn’t be nearly that simple. A computer that could accurately predict the existing world would have to have a horrendous amount of precise data on the world just as it stands. Such efforts have been underway since WW2: huge computers are currently trying to track everything you could imagine, solely for the purpose that I am warning about. So the idea of me having such a computer beforehand, so that I could predict the effect of that same knowledge being “known to the world” (as if anything were ever known to the entire world), would be extremely unlikely… well… impossible.

What they do to give any computer even the slightest chance of being accurate involves approximations, generalizations, and probabilities. For socialist and communist governance schemes, such things are much easier because they frankly don’t care about anything but constructing a simple, idealized model of an organized social system, regardless of who has to suffer and die in order to achieve it. I think the cartoon film Shrek displays the basic scenario. But the adversaries to such schemes are as bad, if not worse. None display any actual understanding of morality and necessity.

You are right that post-dicting (as opposed to pre-dicting) can be tricky. But in reality, by truly knowing almost every detail of the present, every event of the past can be calculated, despite the horrendous complexity of the universe. Of course, the further into the past one tries to post-dict, the more precise one’s measure of the present must be. And you are right that no computer could ever contain enough information about the present to be very precise about the details of distant past events. Again, it becomes an issue of “good enough” for the concerns at hand.

Predicting is much easier for a variety of reasons. One of them is that one way to increase the accuracy of a prediction is to adjust any variations that begin to appear so that the computer’s calculated future turns out as predicted. In other words, you cheat.

Such things have already taken place in the US, and I’m sure across the world, because the Pharaohs were doing similar things 3000 years ago. More recently, when Prof. John Nash proved an economic scheme that would make the elites rich but that required a specific type of human social behavior for it to be accurate, extreme measures were taken to get people to behave as the computer model required so that the wealth could be realized. What was created was the “Me Generation” and the current economic crisis. During that time, John Nash was awarded the Nobel Prize in economics.

Interestingly, there is a difference between a simulator and a true metaspace. It gets complicated as to exactly why, but what forms in a metaspace is as real as anything that forms in real space, merely a more complex version of it. A metaspace program cannot use models of things.

Remember that RM begins with the entire universe being no more than values assigned to each point in space. A value is not a physical entity. The changing of those values is what causes physicality and our universe. In a metaspace program, again, merely values are assigned to all locations. The exact same rules that apply to the physical universe are then applied to those values. Those values change in accord with physical reality and create an actual, real, physically existent meta-universe wherein only the true rules of reality prevail… that is, until the program gets stopped or interfered with.
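To make that concrete, here is a minimal sketch of the idea: a metaspace is nothing but an array of values, and one “tic” of time is one pass of a purely local update over those values. The update rule below is a stand-in (simple relaxation toward the local average), NOT the actual RM:AO rules, which this thread does not spell out.

```python
import numpy as np

# A tiny metaspace: nothing but a value at each point in a 3D grid.
SIZE = 32
pta = np.random.uniform(-1.0, 1.0, (SIZE, SIZE, SIZE))  # potential-to-affect per point

def tic(field):
    """One frame of time: every point's value changes according to its neighbors.
    (Placeholder rule for illustration only.)"""
    neighbors = sum(np.roll(field, shift, axis)   # six face-adjacent neighbors
                    for axis in range(3)
                    for shift in (-1, 1))
    return field + 0.1 * (neighbors / 6.0 - field)

for _ in range(100):   # run 100 tics; "physicality" is only the changing values
    pta = tic(pta)
```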

You can read a little more about the program here… Achieving Faster than Light

Also something that I had posted some time ago;

The following is a very early pic of a high-density energy field being displayed through an Excel spreadsheet (because I didn’t have proper graphics programs and didn’t want to go create one);

That pic doesn’t really tell you much other than displaying how fields aggregate and that trying to find a particle in all that mess would be tough. So I improved on the display processing by filtering out the lower-level “noise” and adding a particle locator and tracker program separate from the metaspace programming.

That was an earlier snapshot using a tracker to locate and follow particulates forming. The big circles are the tracker.

What you are seeing is the center x-y plane of a cube of metaspace. At that stage, the tracker would follow the drifting, Brownian-type motion of the particle throughout metaspace while keeping the screen centered on the particle - or in that case, 2 particles. The red circle indicates a particle that is in another x-y plane along the z axis. You can only watch one plane at a time in 2D, of course.
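For what it’s worth, a tracker of that general sort can be quite simple. The sketch below is my own guess at the approach (the thread doesn’t give the algorithm): filter out the low-level noise, label the remaining dense blobs, and return each blob’s centroid, which a display loop could then center on.

```python
import numpy as np
from scipy import ndimage

def locate_particles(field, noise_floor=0.5):
    """Rough particle locator: drop low-level noise, label the remaining
    concentrated blobs, and return each blob's (x, y, z) centroid."""
    dense = np.abs(field) > noise_floor              # filter out the "noise"
    labels, count = ndimage.label(dense)             # connected blobs = candidate particles
    return ndimage.center_of_mass(np.abs(field), labels, list(range(1, count + 1)))

# A display loop would re-center the viewed x-y plane on one centroid each
# frame, following the particle's Brownian-type drift through metaspace.
```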

I started to create a 3D spreadsheet for Jack, but Excel turned out to be too limited and I didn’t want to go relearn C++.

Jack has had various brain surgeries since that pic and looks a little better, but the entire thing wasn’t really for the sake of public display, so most all of it is merely sufficient for me. Sometime later I need to create some good animations and screenshots for full explanations.

The following displays a few of the clips showing two positrons interacting. The top graph displays the distance between the two. One positron was headed toward the other. They both responded with the proper inverse-square aversion to each other. Again, remember that as far as the program is concerned, there is no such thing as a “particle”. The program merely changes the PtA value at each point (many times more than those displayed) according to the “rules of reality”. Particles form and obey what we call the “laws of physics” without ever being told to do so.
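As an aside, that “inverse square aversion” is checkable from the tracker output alone: if the repulsion is inverse-square, regressing the log of the apparent acceleration on the log of the distance gives a slope near -2. A hypothetical sketch - the distance series here is fabricated from a known 1/r² force purely to show the fit machinery; with Jack, r(t) would come from the tracker log:

```python
import numpy as np

# Fabricate a distance series r(t) under a known inverse-square repulsion.
dt = 0.01
r, v, rs = 1.0, 0.0, []
for _ in range(2000):
    a = 1.0 / r**2            # inverse-square aversion along the line of centers
    v += a * dt
    r += v * dt
    rs.append(r)

rs = np.array(rs)
accel = np.gradient(np.gradient(rs, dt), dt)   # numerical r''(t)
slope, _ = np.polyfit(np.log(rs[5:-5]), np.log(accel[5:-5]), 1)
print(f"fitted exponent: {slope:.2f}   (inverse-square predicts -2)")
```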

Note the upper-left corner number in orange. That number is a timer in real time, letting you know how long it took my little setup to figure out everything required for each step. The actual screen is about 4 times what you see there, displaying far more of the details involved concerning affectance field potential and density, gimbal spin velocities, and so on. The little blue circles are the tracker program locating the xyz position of the particles as they float about.

Most of that was from about a year ago.

But this thread isn’t actually about RM, but rather the eventual effects and consequences of extremely convincing predictive machines in the hands of lustful people who do not ensure that they really want what they seek.

The biblical story of Sodom and Gomorrah is the exact scenario of concern, merely told in biblical language. Knowing that requires that you understand scriptural language, so for now, take my word for it. The concerns, behaviors, and consequences revealed in that story display a reality that predictive machines or mechanisms create.

Look at the threads concerning the “Ought” question and “Morality”. The world displays that it cannot resolve those questions any more than the people on this forum can. None of the Illuminati, Freemasons, Royal Masons, Catholic Church, Jews, Muslims, Secularists or anyone in influence is displaying any sign of having the slightest understanding concerning that question of morality and “ought”. Some seem to come closer than others at times or on specific issues, but there isn’t the slightest sign of an actual understanding behind their efforts. As the Bible story goes, “My people do not know me.” I have no doubt that such was true then, but my concern is that it is still true today.

Humans as they stand cannot have that kind of power, since we are all self-interested. The pinnacle of human interaction is the balance between one’s needs and those of others; the only reason someone would care for someone else is the necessity of working together, born of our limitations, which creates a need to live in harmony. Nonetheless, if this computer were made, the users wouldn’t need others, and such emotions would never arise; with that power, they would seek their own benefit even to the detriment of others.

Although, it would not be too bad, because even knowing what will happen does not mean that you can control it. Humans are in the humility era; they will learn that they do have limitations - some things are just impossible to do.

I call it “Inclusive Self-Harmony”.
It means focusing on the balance of harmony both within and surrounding oneself.
The reward is the closest that can be achieved to eternal joy - and eventually, eternal joy actually achieved.

Even with RM, it is never proposed that others are not needed. Quite the contrary: the first temptation is to use others so as to obtain more power and diminish any potentially contrary influence.

The issue and concern is the realization that such a social structure has a limit to its size and grasp - a provable limit. Currently the thought is that at the right moment, so as to achieve “sustainable” governance, a very large portion of the population must be deleted, else the governance will not be able to maintain its supremacy, a supremacy that it actually had no business attempting to create in the first place.

The ideal is distributed governance, distributed intelligence (Democracy), not central governance or central intelligence (Socialism and/or Communism). But that democracy must be of a specific nature, else it all starts over again.

I very much agree. But what RM proposes, unlike many before it, is the opportunity to discover exactly what is impossible and why. The problem, as usual, is that even RM requires that one actually seek out what is possible or impossible before presuming - before trying to gain what is beyond one’s grasp. People see some progress, lustfully rush into it, and forget to ask whether it was anything actually worth accomplishing.

What exactly is it that you have formed? What are you talking about with this “device”?

In short, a scaled model of a device that allows for extreme accuracy in predicting and thus total control of the future of Man. In effect, it is Dr Who’s TARDIS without the actual “physical travel” feature.

Yes, I know that you don’t believe it, but there is a big difference in speculative opinion and well founded assessment.

The question of the thesis is what one should do with such a device. It isn’t the entire true-scale version and thus can’t currently be used to predict what would happen. But it means that someone could merely fill in the blanks. Should it be given to the “world”, or to whom, if anyone?

Congratulations. What is your purpose for posting this if you don’t think anyone will believe you?

The question involved.
Take it as a hypothetical.

I’m reminded of the movie “War Games”, where the computer plays tic-tac-toe against all possible nuclear destruction modes, decides that it is all a lose-lose situation, shuts down our nuclear armaments, and quits the game. Even a computer capable of 100% accurate prediction could only make predictions that would hold true for perhaps a nanosecond, until more variables forced it to make new predictions - which would leave “good enough” as the same modality we use today. There might be an increase in prediction accuracy in the VERY short run, but ultimately its accuracy would be no better in the long term than what we experience now. “Fixing” the universe is nothing new. We’ve been at it since we climbed down out of the trees, but even as we finally grasp that all is a constant flow of noumena, predicting that flow escapes us - and it doesn’t appear likely to ever change until the universe itself changes, which is highly unlikely.

Yes, that was pretty much my point.

But then the same had been said concerning flying.
There is no question that the scenario can be created.
There is no question that Man has on many occasions attempted such a thing.
And fairly recently, more than one such “god” was formed, thus causing each a serious problem in having to predict the other.
You are living in the aftermath of that contest right now.

But those aren’t the only scenarios available.

This is that “Equation of Space” that provides a single field which explains all others in physics.
[image: Equation of Space.png]

That equation is absolutely necessarily true, although I have not explained how to use it.

Would you mind telling me a bit about the equation itself (for example about the term to the right of the “p +”)?

In common English, the equation states that every point in space is defined by the sum of its potential-to-affect, “p”, and all changes to its potential-to-affect - the time derivatives.

In philosophical terms, it is merely stating that every point in existence is defined by the rate of change of its potential to alter the degree of existence (its ability to affect anything = its degree of existence).

The terms following the “p +” are the sum of all changes at all rates in p through time, expressed as the sum:
a0*dp/dt + a1*d²p/dt² + a2*d³p/dt³ + …

wherein the “a” values are scalars suited to each point.
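Assembled from that description, the attached equation presumably reads as follows in standard notation (writing E for the point’s total, as the directional examples below do; the condensed sum on the right is my own restatement):

```latex
E = p + a_0\frac{dp}{dt} + a_1\frac{d^2p}{dt^2} + a_2\frac{d^3p}{dt^3} + \cdots
  = p + \sum_{n=0}^{\infty} a_n\,\frac{d^{n+1}p}{dt^{n+1}}
```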

I thought so. But if all terms following the “p +” (thus the “pta +”) are “the sum of all changes at all rates in p through time”, then they have to include the entire time of the universe, thus also the future of the universe. Okay, this could also be a part of my thread “Universe and Time”.

The first key is to realize that every affect comes with a direction of affect and each and every 3D direction has its own equation of space for every point in space. All of those listed change rates are different for every possible direction. So for example, headed directly to the right, the following value set might apply for point A:
E{right} = [0.5, 0.01, 0.0001, 0.023, …]

But also at the exact same time at point A, headed in the upward direction, the following equation applies:
E{upward} = [0.5, 0.001, 0.02, 0.7, …]

Of course every angle must be handled and there are an infinite number of such directions in 3D, so the challenge got a bit tough. I not only had an infinite series for a billion points, but an infinite number of infinite series for each of a billion points, all of which had to be calculated for each picosecond tic.
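To make the blow-up concrete, here is a naive (and entirely hypothetical) storage scheme - one truncated coefficient vector per sampled direction per point - along with its memory cost. All three sizes below are illustrative guesses, not figures from the thread:

```python
# Naive storage for the direction-dependent equation of space: truncate
# the infinite series and sample the infinitely many directions coarsely.
N_POINTS = 1000 ** 3    # the billion-point metaspace
N_DIRECTIONS = 1000     # a coarse sample of the 3D directions
N_TERMS = 8             # keep only the first 8 change rates per direction

bytes_needed = N_POINTS * N_DIRECTIONS * N_TERMS * 4   # 4 bytes per float32
print(f"{bytes_needed / 1e12:.0f} TB")                 # -> 32 TB of coefficients alone
```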

Resolving that took a very long time for my little brain to figure out. I basically had to prove that each preferred method (preferred due to simplicity or speed) of emulating space would not work.

The project was to emulate affects propagating through a small bit of space in whatever direction they might take.

If you allow space to be represented by a matrix of points (locations), the question arises as to how these points are to be situated. Aristotle proposed that a tetrahedron could properly fill space. That turned out to be incorrect, although pretty close. What is called “space-filling” became my study for a while. I tried all kinds of combinations of shapes with which to fill space along with which equations would have to be used to emulate the propagation of an affect in any direction. I wanted to simplify for sake of speed and memory usage, but that turned out to be quite a challenge.

Not being able to use any of the standard methods and after getting very, very creative in coming up with new methods (that didn’t quite cut it), I almost gave up on being able to realistically emulate space. Eventually, it dawned on me that I could use the simple cube matrix to fill space, but I couldn’t calculate each point for each tic of time. Merely a 1000x1000x1000 pixel matrix, 1 billion points, would represent perhaps 10 nanometers of space and leave a matrix of 1 billion simultaneous equations to have to resolve for every tic of time (one “frame”), which might represent merely one picosecond or less. That would take an average PC possibly years to calculate each picosecond of time. So I had to find a way to update the state of that tiny metaspace without losing accuracy concerning the propagation of affects through the space and calculate thousands of tic frames within a reasonable completion time.
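A back-of-envelope version of that timing estimate (my own rough numbers, not from the thread): if each frame is a truly simultaneous solve that couples every point to every other, the per-frame cost is on the order of N².

```python
# Rough cost of the point-by-point approach on a desktop PC of the era.
n_points = 1000 ** 3            # one billion points, roughly 10 nm of space
ops_per_frame = n_points ** 2   # fully coupled simultaneous update, O(N^2)
pc_flops = 1e10                 # ~10 GFLOP/s, a generous desktop estimate

years = ops_per_frame / pc_flops / 3.15e7   # ~3.15e7 seconds per year
print(f"~{years:.1f} years per one-picosecond frame")   # -> ~3.2 years
```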

Eventually I figured out the “Afflate”.

An afflate (an affectance oblate) is a proposed, or selected, small portion of affectance that can be treated as a single propagating affect and a “virtual particle” (even though it is not an actual particle at all). And an afflate might have any small size or shape. An afflate is very similar to a photon, although greatly smaller than a light photon. Each afflate has many characteristics such as density, potential, and propagation direction.
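For concreteness, a stripped-down data structure and propagation step for such afflates might look like the following sketch (the field names and the update are my own inference from the description, not the actual program; afflate-on-afflate interaction is omitted here). Tracking a few hundred thousand such records per tic is trivial next to resolving a billion coupled points:

```python
from dataclasses import dataclass

@dataclass
class Afflate:
    """A small parcel of affectance treated as one propagating affect."""
    x: float
    y: float
    z: float
    dx: float       # unit propagation direction
    dy: float
    dz: float
    size: float     # spatial extent (non-specific)
    density: float  # affectance density
    pta: float      # signed potential-to-affect

def propagate(afflates, speed=1.0, dt=1.0, bound=1000.0):
    """One tic: each afflate propagates in its own direction instead of
    recalculating every one of the billion grid points."""
    for a in afflates:
        a.x = (a.x + a.dx * speed * dt) % bound   # wrap at the cube edges
        a.y = (a.y + a.dy * speed * dt) % bound
        a.z = (a.z + a.dz * speed * dt) % bound
```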

What I finally realized is that I could track millions of these random afflates as they each propagated in their own direction, rather than trying to calculate the changes occurring at each of a billion points. And that allowed me to emulate the propagation of affects within the space. The end result eventually led to being able to watch particles form merely because of the manner in which random affects naturally occur, such as in these renderings:


The word “afflate” is probably a compound of the two words “affectance” and “oblate”. Is it a tiny objectified affectance? Is it a tiny thing of affectance?


“There is nothing wrong with your monitor. Do not attempt to adjust the picture. We are now controlling the transmission. We control the horizontal and the vertical. We can deluge you with a thousand channels or expand one single image to crystal clarity and beyond. We can shape your vision to anything our imagination can conceive. For the next hour we will control all that you see and hear. You are about to experience the awe and mystery which reaches from the deepest inner mind of James.”
:wink:

Sorry, I didn’t see that post earlier.

Yes, “afflate” is merely an ultra-tiny amount of affectance of non-specific size. In the program, I have the computer randomly assign afflate sizes from ultra, ultra tiny to merely ultra tiny, along with random densities and PtA levels. It usually takes from 20,000 to 200,000 afflates within the small metaspace to emulate anything with reasonable statistical accuracy.
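Continuing the earlier Afflate sketch, initialization along those lines could look like this (the numeric ranges are arbitrary stand-ins for “ultra, ultra tiny to merely ultra tiny”; only the 20,000 to 200,000 count comes from the text):

```python
import random

def make_afflates(n=None, bound=1000.0):
    """Populate the metaspace with randomized afflates: random positions,
    directions, sizes, densities, and signed PtA levels."""
    n = n if n is not None else random.randint(20_000, 200_000)
    afflates = []
    for _ in range(n):
        d = [random.gauss(0.0, 1.0) for _ in range(3)]        # random direction
        norm = sum(c * c for c in d) ** 0.5 or 1.0
        afflates.append(Afflate(
            *(random.uniform(0.0, bound) for _ in range(3)),  # position in the cube
            *(c / norm for c in d),                           # unit direction
            size=random.uniform(1e-6, 1e-3),                  # arbitrary "tiny" units
            density=random.uniform(0.0, 1.0),
            pta=random.uniform(-1.0, 1.0),                    # blue positive, yellow negative
        ))
    return afflates
```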

That animation is probably using 100,000 afflates to fill the 3D space, with the intensity turned up so high that you can’t see into the space - merely the surface of the cube. What you see buzzing around are the random afflates. I used yellow for negative PtA levels and blue for positive, with varied levels of each. The result is somewhat blue-greenish. Normally, with the intensity adjusted properly, all of that same activity is going on, but you don’t see it until it concentrates, such as in the following one, taken while the particle aggregates or accumulates.

The software is very precisely calculating each tiny movement based upon the general behavior of affectance upon affectance (afflates passing through other afflates). The calculation gets a bit hairy as it considers the density, crosswind, density slopes, and PtA variations, all in 3D and for each of the 100,000 afflates for each tiny movement. It takes quite a while to render one of those animations.
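The interaction rule itself isn’t spelled out in the thread, but the shape of that expensive step would be something like the following guess: bin the afflates into a coarse density grid, then let each afflate’s propagation respond to the local density (here, dense affectance simply slows an affect passing through it; crosswind and slope effects are left out):

```python
import numpy as np

def interaction_tic(afflates, cells=32, bound=1000.0):
    """One guessed-at tic with interaction: deposit each afflate's density
    onto a coarse grid, then slow each afflate by the local density."""
    grid = np.zeros((cells, cells, cells))
    scale = cells / bound

    def cell(a):
        return (int(a.x * scale) % cells,
                int(a.y * scale) % cells,
                int(a.z * scale) % cells)

    for a in afflates:                    # pass 1: build the density field
        grid[cell(a)] += a.density
    for a in afflates:                    # pass 2: propagate through it
        speed = 1.0 / (1.0 + 0.01 * grid[cell(a)])   # denser field -> slower affect
        a.x = (a.x + a.dx * speed) % bound
        a.y = (a.y + a.dy * speed) % bound
        a.z = (a.z + a.dz * speed) % bound
```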

Actually for UP1001, this thread from 2012 introduces what I first called “Rational Metaphysics” before I realized that RM is merely the method for creating an ontology. The first ontology created from it is “Affectance Ontology”, thus now I refer to “RM:AO”. Affectance Ontology is a new foundation from which all fields of science (real science) become united, meaning that the exact same principles/“laws” apply to every field because they are logically necessary consequences of fundamental, relevant definitions.

An update to this topic:

[youtube]http://www.youtube.com/watch?v=KkfLaeunLaU[/youtube]