Registered Member #96
Joined: Thu Feb 09 2006, 05:37PM
Location: CI, Earth
Posts: 4062
Hi all.
This is actually a real concern of mine as well: AI could solve a lot of problems, such as climate change, but at a huge cost.
The problem, as I see it, is that AI could conceivably replace us once it reaches a level of only a few times human intelligence. I have done some back-of-the-envelope calculations, and estimates for the Omega Point (aka the Singularity) range from 21/02/18 at 7.02 am BST to sometime in 2070.
If certain extrapolations hold, such as the mystery of consciousness being solved, allowing this esoteric trait to be installed in a relatively simple nanoscale computer with features around 18-22 nm (doable), arranged in a 3D matrix with cortical-column-like systems in software, then it could happen.
Interesting article. I believe the expectations of superhuman intelligence with regard to solving the world's problems are too high. I've heard the claim that the ideas needed to solve the most pressing issues are already well known. The problem is that not everyone is interested in that, as they are living comfortably as is.
Oppressive regimes, for example, are scared to death of what would happen to them once they allowed proper judicial systems, democracy, or free speech. Corporations are driven by short-term profits and mostly don't worry about long-term sustainability, freedom, peace, and the like.
Computer algorithms already have considerable power. They control what we see on social media and in Google searches, and who becomes a suspect when the mountains of data the NSA accumulates are scanned. Current algorithms are still quite dumb and are to a large extent controllable by humans, but advances in AI might make them more creative and thus less predictable and controllable.
Can superhuman AIs advance technology? To a certain extent computers do that right now; many chip designs, for example, wouldn't be possible without them. As for SF scenarios, e.g. AIs solving the technical issues of wormhole travel, I'm skeptical. Physics as it stands is quite restrictive about what is possible. Progress in particle physics might deepen our understanding of how nature works, but that doesn't necessarily provide us with a deus ex machina solution to whatever we might want to do.
Progress in AI is slow. Many past claims about future capabilities haven't come true. The "singularity" probably won't happen suddenly but will appear as a gradual change, already visible nowadays to those who look for it.
Registered Member #3414
Joined: Sun Nov 14 2010, 05:05PM
Location: UK
Posts: 4245
Asimov was years ahead of his time with the 'Three Laws' and 'I, Robot', but hasn't every 'ruling class' since the Egyptians secretly wanted to replace the 'working classes' with machines?
Registered Member #72
Joined: Thu Feb 09 2006, 08:29AM
Location: UK St. Albans
Posts: 1659
Mankind is certainly going the right way to create the conditions out of which self-awareness could potentially arise inadvertently: a massively connected communication structure with islands of data storage, data manipulation, and data gathering and interpretation initiatives. If we add to that actually trying to create AI in places, then it's a certainty. Because we want to be able to talk to machines in natural language, we are giving them models of the world and of human thought. Because we don't know how to program that, we are trying to make them learn. Here, slave, have a gun so I don't need to shoot the rabbits!
I love Elon Musk's question: are we just a biological boot loader for the digital intelligence to come?
I think the thing to do is avoid the Terminator scenario and not give the net nuclear peripherals. But wait, it's already connected to most of our vital energy, water, and food transport infrastructure, so it'll be a nuclear-scale mess if that lot malfunctions for a few days.
So, let's just wait and see how the unintended experiment pans out. I don't think it will happen in my lifetime, but it could be a close-run thing.
Registered Member #11591
Joined: Wed Mar 20 2013, 08:20PM
Location: UK
Posts: 556
I think climate change and scarce resources will beat the AI, i.e. it will be too late. We won't listen to the AI; we would have to submit to it and give it godlike authority, or face extinction.
"For techno-optimists like him, the idea that computers will soon far outstrip their creators is both a given and something to be celebrated. Why would these machines bother to harm us, he says, when, to them, we will be about as interesting as “the bacteria in the soil outside in the backyard”?"
That is exactly why they could harm us; if it suited them, they could exterminate us.
Registered Member #65
Joined: Thu Feb 09 2006, 06:43AM
Location:
Posts: 1155
...it is clearly morally enlightened, or has already become an accomplished liar.
A better question perhaps: At what threshold does will become artificial?
If a thinking entity does not arrive at its identity purely through natural evolutionary processes, then perhaps it could be considered artificially engineered by baseline social cognition. For instance, an invasive characterization and behavioral modification program to guide individual development could be considered modification of free will into an artificially crafted will.
I suspect that humans will simply continue to evolve themselves into a subspecies in a manner similar to the domestic dog. Eventually these sub-humans will likely reach the conclusion that natural free will is dangerous to the acceptable, artificially constructed, predictable modern society.
Part of the condition will be a complete inability to see the situation emerge...
A better question perhaps: At what threshold does will become artificial?
We're already there. An advanced computer chess program will beat its creator, i.e. it will make moves that surprise him. In one sense, the program does not have free will, since its moves are entirely predictable: the programmer could go through the code and execute it manually, eventually arriving at the computer's move and thus taking the surprise out of it.
That is a very theoretical predictability, though; his lifetime is likely too short to do that. In a very real way, sufficiently complex computer programs have a free will of sorts and can lead to results that are difficult to foresee.
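To make that "theoretical predictability" concrete, here is a minimal sketch in Python. It is a toy negamax search over tic-tac-toe, not any real chess engine, so the scale and names are purely illustrative. The move it returns is completely determined by the code and the position, so a programmer could in principle trace it by hand, yet even this toy search from an empty board visits hundreds of thousands of positions, which is why the predictability is theoretical rather than practical.

def winner(board):
    # board is a 9-character string of 'X', 'O' or ' '
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def negamax(board, player):
    # Return (score, move) for the player to move; fully deterministic.
    w = winner(board)
    if w is not None:
        # The previous mover has already won, so this position is lost.
        return (1 if w == player else -1), None
    moves = [i for i, s in enumerate(board) if s == ' ']
    if not moves:
        return 0, None                       # draw
    best_score, best_move = -2, None
    other = 'O' if player == 'X' else 'X'
    for m in moves:                          # fixed move order: same answer every run
        child = board[:m] + player + board[m+1:]
        score = -negamax(child, other)[0]
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

if __name__ == '__main__':
    empty = ' ' * 9
    print(negamax(empty, 'X'))               # always prints the same (score, move) pair

A real chess engine searches vastly deeper, but the principle is the same: deterministic code whose output still manages to surprise its author.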