Is AI Humanity's biggest threat?
I work in technology and am continually amazed by the leaps forward we see daily. Right now the latest iPhone in my pocket is more powerful than my first computer, a home-built 286 that I loaded with XTree Pro and DOS.
Ray Kurzweil, a director of engineering at Google, refers to the point in time when machine intelligence surpasses human intelligence as "the singularity," which he predicts could come as early as 2045.
I have heard 2025, 2032 and 2090. It may be even later. Either way, I believe my career will probably not last past 2025. Right now my iPhone has predictive text that is uncannily good at picking the word I need to use next.
Stephen Hawking has warned about the potential dangers of artificial intelligence. Hawking, together with well-known physicists Max Tegmark and Frank Wilczek of MIT and computer scientist Stuart Russell of the University of California, Berkeley, believes that the creation of AI will be "the biggest event in human history."
And possibly the last.
Elon Musk, the CEO of the spaceflight company SpaceX and the electric car company Tesla Motors, has also said that artificial intelligence is "our biggest existential threat." He believes humanity needs to be "very careful" with AI, and he has called for national and international oversight of the field.
At the same time, Musk is working with Facebook founder Mark Zuckerberg and actor Ashton Kutcher to build an artificial brain. (Probably because Ashton needs one.)
But other experts disagree that AI will spell doom for humanity.
Charlie Ortiz, from the AI development company Nuance, stated recently: "I don't see any reason to think that as machines become more intelligent ... which is not going to happen tomorrow ... they would want to destroy us or do harm."
So where are we heading? Is this worthy of a thread?