Basic questions of the moral life

Here are some questions upon which I have been reflecting lately, and to which I invite you to propose some solutions:

How can we maximize human fun and minimize human suffering?

How can we attain widespread prosperity?

How can we – perhaps with the help of machines that learn – converge on answers to the major problems holding back human progress? Do we need intelligent learning machines to teach us?

How might a super-intelligence module implanted in our bodies show us what we would want, if only we knew what was truly in our self-interest and of maximum benefit?

Will it take a super-intelligent learning machine to teach us that each human can only get by if that individual helps make it possible for others to get by?

What are our goals as individuals and as a society? How can we best align our shared goals with the goals programmed into Artificial Intelligence learning machines?

—Answers, anyone?

Sometimes it’s a blessing that your wish isn’t granted. The bravery of attempting to stop torment can, in ignorance, cause massive torment. Humility and gratitude go a long way as we approach our “case by case basis.”

Very good thoughts to reflect upon, except for one thing … you left out the issue of individual survival.

It is easy to arrange for maximum fun, if you are not concerned with surviving it.

So where does “what is good for us” lie? Fun is a part of it, but so is surviving. Trying to balance those is the issue in life. Losing is sacrificing either one for the sake of the other.

Winning is maintaining the perfect balance of both fun and survival.

And that changes all of your questions.

Leaving out survival, leaving out thymos, leaving out rage, leaving out threat … and so on and so forth - all this means leaving out a complete side of the issue (the side mostly called “the negative side”, “the bad side”, “the evil side”, “the heavy side”, “the difficult side”). So this leaving out leads only to more problems.

Your two questions are not new; they are pretty old, and history is full of failed attempts at answers.

We should not only consider the light and easy things, but also the heavy and difficult things.

Why are you not asking for harmony?

In this type of civilization one is unable to live a moral and ethical life; people are only deluding themselves in thinking they can.

thinkdr

By not maximizing that fun to the heights, but rather being aware that balance is important.
We can also minimize human suffering by doing the above. As the saying goes, to paraphrase it, “The more we play, the more we pay.”
Also, the more others pay as a result of our need for so much pleasure.

But do you see “human fun” as the be-all and end-all? What about the deep satisfaction which comes from achieving something worthwhile? What about the struggle and challenge which come from attempting something? That can be, if not so much fun, still exhilarating and stimulating, and afford one such a deep sense of qualia even though the struggle and challenge are still there.

It is the seeking of so much fun by the hedonist which leads to disaster and chaos in the world.

…and who will be inputting all of the intelligence and wisdom into these machines to teach us?
But of course, someone like the android Data, from Star Trek: The Next Generation, might be helpful. :evilfun:
But do such machines even exist that could teach things like INTELLIGENCE and WISDOM?

I know very little about machines, but at first glance I would say no. Where does the Consciousness come from which this supposedly super-intelligent learning machine would teach or give us? From a human, right - or would I be wrong?

I am not so sure - I do not intuit - that any machine can teach us love, compassion, wisdom, intelligence.
Machines are capable of inputting and outputting(?) facts - thinking like Sherlock Holmes :blush: how, I do not know lol - I have built humans, not machines. :evilfun:
I do not believe that machines get their knowledge from the ether or from the gods.
So such important things as wisdom, intelligence, morals and ethics, virtue - how can those come from a machine which only thinks and has no Heart?

That would depend on the individuals and their societies. I may be wrong here, but I think that question is a bit too broad.
I can honestly say that the more I learn, the more I realize I know absolutely nothing about most things.

I may be wrong here, but wouldn’t/shouldn’t the goals which are programmed into AI learning machines already be aligned with human goals?
But my statement is a bit too simple.