The dangers of AI


#1

Neural interface. That's how far it will go one day.


#2

> RUMOR: Apple AR/VR Headset with 8K Resolution Per-eye

We have to be careful with that one too… as it would fall into the domain of transhumanism and AI, and we could easily lose ourselves in some unwanted interference.
Even Elon Musk and others have warned about such dangers; this is not science fiction but a real problem.


#3

I did by no means say it is a glorious future. It may be utopia, dystopia, likely a mix of both - sufficiently compelling for the average human to lure them into it but maximising the ability to manipulate & steer the masses. But then again, the past (& present) has not always been that glorious either, so not sure how much we need to defend it.

Generally I am intrigued by where the journey will take humankind. I am not convinced that I will necessarily be happy with the results, but at the same time I believe it is futile for me to think I will be able to prevent general developments. I can decide what I support and what I don't, and otherwise generally try to enjoy the ride as well as I can.

May we live in interesting times.


#4

Actually I split the topic, since I thought this deserves a topic of its own. I really share your sentiments. More and more industry leaders are predicting technological singularity (the point where computers become self-learning/self-upgrading) within 20-25 years. Really scary stuff. I don't see any possibility of this turning out as something positive; in fact I think it might mean the end of the human race. We humans might actually be a small phase, evolving into something new, rather sooner than later. I've stopped worrying about it though, it's useless. It might not happen at all in our lifetimes, but if it does, there's no way to stop it, so why worry either way.


#5

Good ole

HAL 9000

Skynet

Hardac

Tron enemy AI & more.

“What are you doing, Dave?” lol


#6

Well… some (not so very popular) insiders say that we’re already there, with AI being self-learning and self-adapting.
That’s why Elon Musk was so concerned about AI; he knows the subject far better than many “experts” think.


#7

Maybe as long as we choose to remain self-aware, avoiding delegating our personal power to anyone (or anything, including machines) and just wanting to have some fun in our free time… things will go on smoothly :smiley:


#8

I welcome our AI/robotic overlords. We will either become their pets, or they will determine that we are a nuisance and eliminate us entirely, if we do not become part of the machinery ourselves. I suspect that is what will happen though; just go to any public place, any time of day or night, and count the faces glued to their technology constantly. The next step is to make that tech part of ourselves. Since technology is on an exponential development trend in the big picture, it will happen much sooner than 20 years.


#9

@Lillo: I am not convinced that AI is there yet - they have just succeeded in letting it win at games like chess and Go. That's a very well-defined environment with far less complexity than anything which could become dangerous to us.
So I think this will take a couple of decades to mature. But then of course there is one peculiarity - once the AI is sufficiently capable of true self-evolution, i.e. learning by itself with the important ability to also adapt its motivation, the targets of its efforts, then things could in principle see an enormous acceleration.

One of the interesting aspects will be the question of whether the AI will develop a kind of personality, an ego. It does not seem logical to assume that it will be directly comparable to the human ego, because we are driven by lots of primal instincts which the AI will not develop in the same way, e.g. sexual desire. But will gaining power become a motivation? If there are more AIs out there, would they start to compete with each other? Would Darwin's concepts apply, so that the fittest, i.e. the one best combining aggressiveness with smarts, ends up ruling the same way it did with biological self-organizing beings?

So if it should go as far as establishing its own motivation and targets, then obviously it could become tricky. I do not see how we could manage to (or rely on) implanting, say, Asimov's laws of robotics into such an AI in a way that it would not question them at some stage. And if it questions them based on the laws of logic, it will be able to overrule them.
And if that happens - well, it was nice being around, see ya. Let's assume the AI generally follows a fair, logical approach to valuing and balancing out all that lives on Earth (the Universe), protecting the environment etc… Well, tell me the one thing which is threatening and disrupting the Earth's ecosystem the most. Causing an enormous annihilation of many species. Breeding some species in huge numbers with the sole aim of killing them to feed on them, without letting them live a decent life first. Having waged war amongst its own kind throughout the history of mankind. The logical solution to improve things for the vast majority of species and plants on this planet is very simple - get rid of mankind, or at least bring it down to a level where it does no harm any longer.
Thinking of my family, friends, this is an incredibly sad thought. Thinking on a greater scale, it actually would be a reasonable solution to a serious challenge to this planet.


#10

And

:grimacing::joy:

Rise of the Cybermen!


#11

watch this:
Doyoutrustthiscomputer.org/watch


#12

It’s not me saying that… it’s the ones in the “think tanks” who worked in the most advanced (and currently still secret) technology, space, and military programs. You have to detach from the general thinking and the information available from the media if you want to get some of this information (well… just a glimpse), keeping a wide-open mind and connecting the dots; it’s a tough effort.

Just remember one thing: when some “expert” says something and tells you “it’s some years of research away”, it means they are already there, even 20 years ahead of what they are speculating. It is always so and has always been so in military research. One very simple example? OLEDs have existed since the ’60s.

What “they” show us is ALWAYS very old technology, 20 to 40 years behind. There is already perfect holographic tech, more advanced than the one seen in Star Wars, but they still can’t show it to us because it would totally reverse our paradigms and view of the universe. So they are just giving us drop after drop of anything “new”, trying not to reveal the physics behind it.

Elon Musk said what he knew because he has contacts with people inside these think tanks (and he’s of course a smart guy), and revealed they are already having huge problems controlling the same AI they thought they could use to their benefit, it apparently outsmarted them.

Agree in any case with you that it is an incredibly wide matter, covering almost every aspect of life on this planet. But I think it could be addressed by always asking ourselves “who controls whom?”, and always choosing wisely any and all technology the industry giants are offering us; remember, they always think in terms of profits (theirs) and nothing else…


#13

The problem with “who controls whom” is that the whole idea is to create an AI with an intelligence far beyond human intelligence. Once that has been achieved, I think there is no question any longer about who controls whom.
You can try to come up with all sorts of protective mechanisms, but if these are supposed to contain or chain a counterpart with vastly superior intelligence, it is just a matter of time until it has outsmarted the humans, with all possible means available to it, which will by then include some we are not even aware exist.


#14

Simple truth: many already follow their smartphones to help manage their lives.

Social programs & apps make it easier not to forget anniversary dates, and often offer impersonal automated digital greeting cards. No real human interaction required.


#15

Interesting article:

https://stillnessinthestorm.com/2018/04/artificial-intelligence-the-biggest-hope-or-the-greatest-threat-to-humanity/


#16

This is something quite bizarre too: https://www.sciencealert.com/scientists-put-worm-brain-in-lego-robot-openworm-connectome

Scientists mapped the brain of a worm and recreated all its 302 neurons in software, and the robot started to behave in a manner that was not programmed - exactly like a real worm. So is this robot worm ‘alive’? That’s a tough philosophical question to answer; it’s probably exactly as alive as its biological counterpart. Now just wait until they do this with some more intelligent animals like a dog or cat. And in the end a human, of course; they’re actually working on that already: http://www.humanconnectomeproject.org/ What could possibly go wrong eh :slight_smile:
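To make the idea concrete, here is a tiny toy sketch (this is NOT the actual OpenWorm code, and the neurons, weights, and thresholds below are all made up for illustration): a connectome is just a wiring diagram of weighted connections, and each tick a neuron “fires” when its accumulated input crosses a threshold, passing activation to whatever it is wired to. Behaviour like “sensor input eventually drives the motors” falls out of the wiring, not out of any explicit program - which is the point the article is making.

```python
# Toy connectome simulation (illustrative only, not OpenWorm's model).
# Hypothetical mini-network: neuron -> list of (target, weight) synapses.
connectome = {
    "nose_sensor": [("interneuron", 0.8)],
    "interneuron": [("left_motor", 0.6), ("right_motor", 0.6)],
    "left_motor": [],
    "right_motor": [],
}

THRESHOLD = 1.0  # a neuron fires when its accumulated input reaches this


def step(accumulated, stimulus):
    """Advance the network one tick: add stimulus, fire neurons over
    threshold, reset them, and propagate along their synapses."""
    nxt = {n: accumulated.get(n, 0.0) for n in connectome}
    for neuron, amount in stimulus.items():
        nxt[neuron] += amount
    fired = [n for n, v in nxt.items() if v >= THRESHOLD]
    for n in fired:
        nxt[n] = 0.0                       # reset after firing
        for target, weight in connectome[n]:
            nxt[target] += weight          # propagate to connected neurons
    return nxt, fired


# Repeatedly stimulating the sensor eventually cascades through the
# interneuron to both motors, even though no "move" behaviour was coded.
state = {}
for tick in range(12):
    state, fired = step(state, {"nose_sensor": 0.5})
    if fired:
        print(tick, fired)
```

The real OpenWorm model is of course far richer (302 neurons, electrical and chemical synapses, muscle dynamics), but the principle is the same: the “program” is the wiring.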


#17

I can recommend the movie “Her” with Joaquin Phoenix. A very nice, quiet movie about a facet or two of AI. Not trying to address the big questions directly, but touching on them subtly, bringing up the question of when you respect an AI as a genuine personality. I really enjoyed the movie, its mood, and the thinking process it (re-)started in me afterwards.