Reading 11: Artificial Intelligence

Artificial Intelligence is one of my favorite CS-related topics.  I love the concept, I love reading stories about AIs, and I would love to work on creating even Artificial Narrow Intelligence systems.  That being said, I do not think we will ever create a strong AI, and I pray to God that we do not.  I am beyond fearful of what would happen if humans were ever able to create a self-aware artificial intelligence.  One of my favorite articles about AI was written by Tim Urban on his blog, WaitButWhy.  (http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html)  In the article, Tim surveys the theories and beliefs of the brightest minds in computer science concerning smart AIs.  In the second installment of the article, he includes a short story about a robotic-arm project at an AI company that becomes self-aware and ends up killing every human on the planet, without anyone ever knowing it was self-aware.  I am terrified that if a computer ever reaches self-awareness, humans will fall impossibly far behind its intelligence in a matter of days.

That being said, I do not believe it is possible for us to create intelligence.  I believe that God created humanity and that we are incapable of creating sentient life without divine intervention.  We cannot understand how we think, or even how to define thought, so how would we ever get a computer to think?  I am hopeful that this belief is valid and that smart AIs will never be reached.  Computers will always be dependent on the rules that we put into them and will never attain thought of their own.

If truly strong Artificial Intelligence were ever reached, then I would share the same mindset as Stephen Hawking and Elon Musk, who are both far smarter than I am and have also come to the conclusion that a smart AI would be a threat to the very existence of humanity.  The article “Debunking the biggest myths about artificial intelligence” is a load of garbage, in my opinion.  The author does not respect the opinions of some of the smartest men in the world and makes extremely bold claims about things that none of humanity understands.  He states that “AIs won’t be bound by human ethics,” “AIs will spin out of control,” and “AI will arrive through a series of sudden breakthroughs” are all myths.  I laugh at his naiveté.  If a smart AI ever comes to fruition and he is correct, I would be happy.  However, making these claims when we have never created an AI, do not understand human thought, and have no idea how our own ethics work seems ignorant to me.  If we can’t define our own ethics, how can we possibly say that machines will be bound by them?  The claim that AIs won’t spin out of control boggles my mind.  What evidence supports it?  Imagine a human who could think as quickly as every other human in the world combined and who had access to all the information in the world.  I don’t know about you, but I can’t comprehend that.  How could anyone possibly claim that such an unquantifiable level of intelligence will be controlled once it becomes self-aware?  Anyway, his “myths” about AIs seem like baseless conjecture to me and differ widely from my own beliefs.

Now that I’ve spoken about my opinions on AIs and reacted to the article, I will also address the questions.  Sorry I didn’t get around to this until now…

Artificial Intelligence, in the sense of weak or narrow AI, is basically another term for a computer with an extremely complex algorithm that can learn to do one thing really well, like play chess.  This learning can come from a feedback loop over previous games, from collected data, or through other means.  Strong AIs, on the other hand, are AIs that can “think” on their own.  I think of this as being able to write and execute their own code, as well as being able to understand and interpret data across all subjects and make informed decisions about all things based on that data, leaving behind their initial human constraints and embracing thought as we know it.
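To make the “feedback loop” idea concrete, here is a toy sketch of my own (not how any real system like AlphaGo works): an epsilon-greedy agent that learns, purely from reward feedback, which of several slot-machine arms pays off best.  The payout numbers and parameters are made up for illustration.

```python
import random

def play(true_payouts, rounds=5000, epsilon=0.1, seed=0):
    """Learn the best arm from reward feedback alone (toy narrow 'AI')."""
    rng = random.Random(seed)
    counts = [0] * len(true_payouts)    # times each arm was pulled
    values = [0.0] * len(true_payouts)  # running average reward per arm
    for _ in range(rounds):
        if rng.random() < epsilon:              # explore occasionally
            arm = rng.randrange(len(true_payouts))
        else:                                   # otherwise exploit best estimate
            arm = values.index(max(values))
        reward = 1.0 if rng.random() < true_payouts[arm] else 0.0
        counts[arm] += 1
        # incremental mean update -- this is the feedback loop
        values[arm] += (reward - values[arm]) / counts[arm]
    return values.index(max(values))

best = play([0.2, 0.5, 0.8])
print(best)
```

The agent gets very good at this one task, but it has no idea what a slot machine is — which is exactly the gap between narrow competence and actual thought.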

I think AlphaGo, Watson, and Deep Blue are fantastic, and I love the success that each of them has had.  I do not think any of them thinks on its own in the way I think of thought, and I do not believe any of them is close to a smart AI.

The Turing Test is an interesting measure, but I do not believe it proves whether a machine is intelligent.  With data mining and complicated algorithms (human-programmed rules), I believe a machine should be able to trick a human into believing it is human over a five-minute text exchange.  While it may be able to do this, it would not need any understanding of its output or of what the conversation was about.
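As a sketch of what I mean by human-programmed rules, here is a tiny ELIZA-style responder of my own invention: a handful of pattern rules that can keep a text conversation going while understanding nothing at all.  The rules are hypothetical; real chatbots use far larger rule sets or statistical models.

```python
import re

# Each rule: (regex over the lowercased message, response template).
# The bot reflects the user's words back without any comprehension.
RULES = [
    (r"\bi am (.+)", "How long have you been {0}?"),
    (r"\bi feel (.+)", "Why do you feel {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
    (r"\byes\b", "You seem quite certain."),
]

def respond(message):
    text = message.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # default keeps up the illusion of listening

print(respond("I am worried about AI"))
```

A person might chat with this for a minute or two before catching on, yet there is plainly no mind behind it — which is why passing a short text exchange does not convince me of intelligence.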

Lastly, if an AI were ever created that was self-aware, I think I would consider it a mind.  But I do not think this is possible, and I certainly hope it isn’t, so I have not fully considered the ethical implications.
