
AI

Conundrum
Sun Nov 02 2014, 10:22PM
Conundrum Registered Member #96 Joined: Thu Feb 09 2006, 05:37PM
Location: CI, Earth
Posts: 4059
Hi all.
Link2

This is actually a real concern of mine as well: AI could solve a lot of problems, such as climate change, but at a huge cost.

The problem, as I see it, is that AI could conceivably replace us once it reaches a level corresponding to only a few times human intelligence.
I have done some back-of-the-envelope calculations, and estimates for the Omega Point, aka the Singularity, range from 21/02/18 at 7.02 am BST to sometime in 2070.

If certain extrapolations hold, such as the mystery of consciousness being solved, allowing this esoteric trait to be installed in a relatively simple nanoscale computer with features around 18-22 nm (doable), arranged in a 3D matrix with cortical-column-like systems in software, then it could happen.

Comments?
-A
Uspring
Mon Nov 03 2014, 12:06PM
Uspring Registered Member #3988 Joined: Thu Jul 07 2011, 03:25PM
Location:
Posts: 711
Interesting article. I believe the expectations that superhuman intelligence will solve the world's problems are too high. I've heard the claim that the ideas needed to solve the most pressing issues are already well known. The problem is that not everyone is interested in solving them, as some are living comfortably as is.

Oppressive regimes, for example, are scared to death of what would happen to them once they allowed proper judicial systems, democracy or free speech. Corporations are driven by short-term profits and mostly don't worry about long-term sustainability, freedom, peace and the like.

Computer algorithms already have considerable power. They control what we see on social media, what turns up in Google searches, and who becomes a suspect when the mountains of data the NSA accumulates are scanned. Current algorithms are still quite dumb and to a large extent controllable by humans, but advances in AI might make them more creative and thus less predictable and controllable.

Can superhuman AIs advance technology? To a certain extent computers do that right now; many chip designs, for example, wouldn't be possible without them. As for SF scenarios, e.g. AIs solving the technical issues of wormhole travel, I'm skeptical. Physics as it stands is quite restrictive about what is possible. Progress in particle physics might deepen our understanding of how nature works, but that doesn't necessarily provide us with a deus ex machina solution to whatever we might want to do.

Progress in AI is slow, and many past claims about future capabilities haven't come true. The "singularity" probably won't happen suddenly but will appear as a gradual change, already visible nowadays to those who look for it.

Ash Small
Mon Nov 03 2014, 02:25PM
Ash Small Registered Member #3414 Joined: Sun Nov 14 2010, 05:05PM
Location: UK
Posts: 4245
Asimov was years ahead of his time with the 'Three Laws' and 'I, Robot', but hasn't every 'ruling class' since the Egyptians secretly wanted to replace the 'working classes' with machines?
Shrad
Mon Nov 03 2014, 07:26PM
Shrad Registered Member #3215 Joined: Sun Sept 19 2010, 08:42PM
Location:
Posts: 780
Google could already have been secretly taken over by a self-aware and conscious entity which currently leads the world...
Dr. Slack
Mon Nov 03 2014, 07:37PM
Dr. Slack Registered Member #72 Joined: Thu Feb 09 2006, 08:29AM
Location: UK St. Albans
Posts: 1659
Mankind is certainly going the right way to create the conditions out of which self-awareness could arise inadvertently: a massively connected communication structure with islands of data storage, data manipulation, and data gathering and interpretation initiatives. If we add to that actually trying to create AI in places, then it's a certainty. Because we want to be able to talk to machines in natural language, we are giving them models of the world and of human thought. Because we don't know how to program that, we are trying to make them learn. Here, slave, have a gun so I don't need to shoot the rabbits!

I love Elon Musk's question: are we just a biological boot loader for the digital intelligence to come?

I think the thing to do is avoid the Terminator scenario and not give the net nuclear peripherals. But wait: it's already connected to most of our vital energy, water and food transport infrastructure, so it'll be a nuclear-scale mess if that lot malfunctions for a few days.

So, let's just wait and see how the unintended experiment pans out. I don't think it will happen in my lifetime, but it could be a close-run thing.
hen918
Mon Nov 03 2014, 07:51PM
hen918 Registered Member #11591 Joined: Wed Mar 20 2013, 08:20PM
Location: UK
Posts: 556
I think climate change and scarce resources will beat the AI, i.e. it will be too late.
We won't listen to the AI; we would have to submit to it and give it godlike authority, or face extinction.

"For techno-optimists like him, the idea that computers will soon far outstrip their creators is both a given and something to be celebrated. Why would these machines bother to harm us, he says, when, to them, we will be about as interesting as “the bacteria in the soil outside in the backyard”?"

That is exactly why they could harm us: if it suited them, they could exterminate us.
Carbon_Rod
Tue Nov 04 2014, 08:44AM
Carbon_Rod Registered Member #65 Joined: Thu Feb 09 2006, 06:43AM
Location:
Posts: 1155
...it is clearly morally enlightened, or has already become an accomplished liar.
Link2

A better question perhaps: At what threshold does will become artificial?

If a thinking entity does not arrive at its identity purely through natural evolutionary processes, then perhaps it could be considered artificially engineered by baseline social cognition. For instance, an invasive characterization and behavioral modification program to guide individual development could be considered modification of free will into an artificially crafted will.

I suspect that humans will simply continue to evolve themselves into a subspecies, in a manner similar to the domestic dog. Eventually these sub-humans will likely reach the conclusion that natural free will is dangerous to the acceptable, artificially constructed, predictable modern society.

Part of the condition will be a complete inability to see the situation emerge...
wink
Shrad
Tue Nov 04 2014, 09:10AM
Shrad Registered Member #3215 Joined: Sun Sept 19 2010, 08:42PM
Location:
Posts: 780
then I'm proud that I don't engage in the same way as the masses...

it means I'm part of the evolutionary fork which will evolve into something different from the sub-human bunch and not watch reality shows all day

it's appealing even if I'll have to fight for it
Uspring
Tue Nov 04 2014, 12:05PM
Uspring Registered Member #3988 Joined: Thu Jul 07 2011, 03:25PM
Location:
Posts: 711
Carbon Rod wrote:
A better question perhaps: At what threshold does will become artificial?
We're already there. An advanced computer chess program will beat its creator, i.e. make moves that surprise him. In one sense, the program does not have free will, since its moves are entirely predictable: the programmer could go through the code and execute it manually, eventually arriving at the computer's move and thus taking the surprise out of it.

But that is a very theoretical predictability; his lifetime is likely too short to do that. In a very real way, sufficiently complex computer programs have a free will and can lead to results which are difficult to foresee.
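The chess point can be made concrete with a toy sketch (my own illustration, not anything from the thread): a deterministic game-playing program whose move is fixed entirely by its code and the position, so it is predictable in principle, yet whose choices can still surprise its author. Single-pile Nim stands in for chess here to keep the search small.

```python
# A deterministic "player" for single-pile Nim (take 1-3, last take wins).
# Same position in -> same move out: predictable in theory, even if the
# author never traced the recursion by hand.

from functools import lru_cache

@lru_cache(maxsize=None)
def wins(pile, max_take=3):
    """True if the player to move can force a win from this pile size."""
    return any(not wins(pile - t, max_take)
               for t in range(1, min(max_take, pile) + 1))

def best_move(pile, max_take=3):
    """Pick the smallest take that leaves the opponent in a losing position."""
    for t in range(1, min(max_take, pile) + 1):
        if not wins(pile - t, max_take):
            return t
    return 1  # no winning move exists: take one and hope

# Fully reproducible: piles that are multiples of 4 are lost for the mover,
# and from 10 the program always answers "take 2" (leaving 8).
```

Running `best_move(10)` twice always yields the same answer; the "surprise" lies only in the author not having done the exhaustive case analysis himself.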
Ash Small
Tue Nov 04 2014, 01:49PM
Ash Small Registered Member #3414 Joined: Sun Nov 14 2010, 05:05PM
Location: UK
Posts: 4245
Uspring wrote ...

In a very real way, sufficiently complex computer programs have a free will and can lead to results which are difficult to foresee.


Well, Windows XP seems to have a will of its own sometimes wink
Moderator(s): Chris Russell, Noelle, Alex, Tesladownunder, Dave Marshall, Dave Billington, Bjørn, Steve Conner, Wolfram, Kizmo, Mads Barnkob
