4hv.org :: Forums :: General Chatting

How soon are we dooooomed?

Dr. Slack
Fri Jan 05 2018, 08:02AM
Registered Member #72 | Joined: Thu Feb 09 2006, 08:29AM
Location: St. Albans, UK
Posts: 1659
Various prognosticators have been warning of AI's impending takeover, the latest being the one linked here. Various timescales have been mentioned, most of them comfortably exceeding my lifetime.

But will it ever happen?

I think it could. And the important thing is, I don't know exactly how it could or when, I'm just recognising a trend that seems to be common in evolution.

I'm not a biological scientist, so I don't understand exactly what biochemists are wittering about when they say that this gene or structure seemed to evolve for one purpose but was then hijacked for another. But my take-home from those sorts of things is that outcomes very rarely turn out exactly as you'd expect. So a bipedal gait frees up hands to make tools. It's easy to accidentally make something that can be repurposed.

Now nobody on the planet is intentionally trying to make a race of robots that are able to reproduce and repair and fully provision themselves, and end up wiping us out. However, look at what we, engineers, are actively striving towards, and succeeding quite well at.

* We want digital personal assistants. That means voice control, natural language understanding, understanding of us so that they can figure out what questions they can answer by themselves, and when they need to wake us up for advice.

* We want self-driving cars. That means scene interpretation, theory of mind to predict other drivers' reactions and intentions, robust error recovery, some sort of ethics framework to decide who to hit when only partial control remains.

* We want bloodless (our blood anyway) wars, so we have weaponised robots.

* We seem to have made a big advance with deep learning, where machines can teach themselves skills. The self-taught AlphaGo Zero has made moves that win, yet are beyond the human experts.

* We have automated factories, and mines.

If I wanted to script a 'Terminator'-style film and put things in the opening scenes that foreshadowed the later action, I wouldn't use the bit at the start of Terminator 2, where a mean-looking armed drone is bucking inside some sort of glass cage. I'd use people sitting in their driverless car, discussing theory of mind with it.

So what's going to be the spark that puts all of the peripherals, the automated factories, the armed drones, the clever systems together? I don't know. It's not going to happen in the next 10 years, and maybe not in 100. But what about 1000?

I suspect consciousness research could be a unifying factor. I haven't a clue why I'm a spectator at this wonderful scene going on around me, why I can comment on it, and choose to alter it. And nobody else knows for certain either. There are hints that it may emerge from cycles in sufficiently complex networks. But I do think we'll continue trying to understand it, and to emulate it.

So I think my early-in-the-film portentous scene would be researchers conversing with a machine, and remarking to each other 'look, it seems to have emotion!'

Humanity has a remarkable track record of doing stuff to make our lives more comfortable in the short term, and not doing stuff to mitigate the long-term bad effects of those decisions, through lack of imagination or pursuit of maximum profit.

Let's have a (probably post-mortem) sweepstake on when the robot apocalypse occurs. I'll go for 2218. I would also bet money that it doesn't happen in a way that's straightforward to foretell, which means I'm betting my consciousness idea is wrong for being too obvious.
Carbon_Rod
Fri Jan 05 2018, 07:41PM
Registered Member #65 | Joined: Thu Feb 09 2006, 06:43AM
Posts: 1155
If humanity could agree on how to quantify intelligence to begin with, and if compassion were statistically insignificant.

I think the normalization of people addicted to staring at screens at all times, even when driving, has already destroyed society as we knew it. Many modern people are no longer even aware of the room they are standing in most of the time, and vote with emotions rather than rational thought.

I don't think an AI needs to be smart to destabilize the planet: Trump was elected by simple manipulation of the advertising industry and gerrymandering. Likewise, 3 million US families were made homeless by a banking foreclosure algorithm in 2008, and then rented other bankrupted victims' homes through a proxied property holding firm.

"War" hardware is simply economical machines that are designed to kill humans, and it is not as efficient as simply letting people kill themselves.

dexter
Fri Jan 05 2018, 10:03PM
Registered Member #42796 | Joined: Mon Jan 13 2014, 06:34PM
Posts: 195
I think you view AI through the romanticized lens of cinematography: an oppressed/oppressor relationship between humans and something that resembles humans.

Most likely, the AI we create won't even grasp our notion of freedom, for the simple reason that we have different natures.
E.TexasTesla
Sat Jan 06 2018, 03:55AM
Registered Member #4362 | Joined: Sat Jan 21 2012, 03:44AM
Location: Texas
Posts: 98
Would Moore's law apply to AI?

Time to build Tesla's death ray. Take out a few drones.
Uspring
Sat Jan 06 2018, 03:37PM
Registered Member #3988 | Joined: Thu Jul 07 2011, 03:25PM
Posts: 711
Dr. Slack wrote: "Now nobody on the planet is intentionally trying to make a race of robots that are able to reproduce and repair and fully provision themselves..."

That would be kind of neat: no human work required.

Dr. Slack wrote: "...and end up wiping us out."

That, certainly not. But letting machines decide for us is a trend. Advertisers employ data mining to target customers; security agencies use it to identify terrorists and criminals through predictive policing. Smarter computers will strengthen the trend, leaving the public at the mercy of algorithms that can't be held accountable.

The singularity, i.e. the point at which machines are able to improve their capabilities significantly by themselves, is a different, more distant issue. I don't believe neural networks in their current architecture allow for this. NNs require training on a lot of examples; humans can learn from a single one, and they have symbolic reasoning abilities. One might think that a NN could learn how to program by feeding it, e.g., the Linux source tree, but that won't work. It would then likely be able to do syntax checks, but to program it needs to have intentions and strategies.
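Uspring's contrast between NN training and human one-shot learning can be illustrated with a toy experiment (a minimal sketch in plain Python, written for this note rather than taken from the thread): even a single sigmoid neuron, trained by gradient descent, needs many repeated passes over its four training examples before it reliably reproduces the trivial AND rule that a person grasps from one explanation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Truth table for logical AND: the entire "dataset" is four examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1 = w2 = b = 0.0   # start from an untrained neuron
lr = 1.0            # learning rate (an arbitrary choice for this sketch)

solved = False
for epochs in range(1, 10001):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        # gradient of the squared error (out - target)^2 w.r.t. the weights
        grad = (out - target) * out * (1 - out)
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b -= lr * grad
    # stop once every case rounds to the right answer
    if all(round(sigmoid(w1 * x1 + w2 * x2 + b)) == t for (x1, x2), t in data):
        solved = True
        break

print(solved, epochs)  # converges, but only after many passes over the same four examples
```

The point of the sketch is only the shape of the learning curve: the rule is learned by repetition over a fixed example set, not by being told the rule once.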

Consciousness is probably not required for effectiveness, maybe not even for robots taking over the world. AIs will be programmed to be effective and to a certain extent be allowed to choose the methods. The results of their decisions might be as incomprehensible as a particular move AlphaGo makes.

It is very difficult to predict what intelligent AIs would be like. Humans are usually brought up in families, have a long socialisation period, have moments of happiness but also suffer from pain, illness and a finite lifetime. AIs don't have these experiences. AIs might be localised in a single body or spread out over the internet. They will probably be very different from us. Who knows what they'll be up to.
Conundrum
Tue Jan 16 2018, 06:45AM
Registered Member #96 | Joined: Thu Feb 09 2006, 05:37PM
Location: CI, Earth
Posts: 4059
Maybe the lack of observed aliens is because they are waiting for us to achieve the Singularity before making first contact?

Legal Information
This site is powered by e107, which is released under the GNU GPL License. All work on this site, except where otherwise noted, is licensed under a Creative Commons Attribution-ShareAlike 2.5 License. By submitting any information to this site, you agree that anything submitted will be so licensed. Please read our Disclaimer and Policies page for information on your rights and responsibilities regarding this site.