Wednesday, December 12, 2018

Artificial Intelligence: Small Harm or Exponential Threat


Stojanche Andov
(Stoyan Serafim Andov)
This research paper was done for
the purpose of my English 1A class.
It's an interesting one, so I decided to share it.



Artificial intelligence is something that started off as an idea and quickly became a reality. To some extent, it is a very fascinating concept. It takes only the click of a button to instruct an artificial being to do something that only humans were capable of doing before. If someone driving a car receives a text message, for example, but wants to keep driving safely, a robot built into their phone can read it aloud for them. This is just one of the many ways artificial intelligence is manifesting itself in this evolving world. Although artificial intelligence has many beneficial aspects, a lot is still unknown about its capabilities. Before going deeper into AI, it helps to define what intelligence and AI are. Intelligence is the ability to understand things, to learn, and to create, as well as to have emotions and self-awareness. AI systems, by contrast, are machines made and programmed by humans to perform certain tasks: driving cars autonomously, answering questions based on people's voices, playing games, recognizing patterns in data, and so on. Although humans have created AI, and it has developed humanlike characteristics through technology, it has also created many ethical issues, such as being a safety hazard, creating a risk of autonomous robotic warfare, taking over many manual labor jobs, and raising concerns about privacy.
Technology serves a purpose just like many things in the world, but as in most cases, too much of anything can cause problems. In a TED talk called "Can We Build AI Without Losing Control Over It?" the neuroscientist and philosopher Sam Harris explores exactly that question. He poses a simple scenario. To paraphrase his words, it is easy to close one's eyes and imagine a superintelligent AI that is no smarter than an average team of researchers. Knowing that electronic circuits function at least one million times faster than biochemical ones, it can be concluded that such an AI could "think about one million times faster than the minds that built it." To be exact, one week of AI work would then equal roughly twenty thousand years of human work. It is clear from this scenario that AI has the potential to become a very dangerous thing. Therefore, humans should not create machines and artificial beings whose long-term abilities and safety to society are presently unknown.
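Harris's figure follows from simple arithmetic. A quick back-of-the-envelope check makes the point concrete (the millionfold speedup is his assumption in the thought experiment, not a measured number):

```python
# Back-of-the-envelope check of Harris's scenario: an AI "thinking"
# a million times faster than humans does millennia of work in a week.
SPEEDUP = 1_000_000      # electronic vs. biochemical circuits (Harris's assumption)
WEEKS_PER_YEAR = 52

def human_years_per_ai_week(weeks=1):
    """Human-equivalent years of intellectual work done in `weeks` of AI time."""
    return weeks * SPEEDUP / WEEKS_PER_YEAR

print(round(human_years_per_ai_week()))  # 19231 -- roughly twenty thousand years
```

One week at a millionfold speedup works out to about 19,231 human years, which is why the scenario is usually summarized as "twenty thousand years of work in a week."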
Next, the era of technology is booming more than ever before. Major smartphone companies like Google, Apple, and Microsoft have introduced intelligent personal assistants built to follow human commands and respond with humanlike mannerisms. These assistants can perform simple tasks like sending a text or giving directions when prompted. Although these intelligent assistants were created with good intentions, these forms of AI have been known to have trouble understanding human commands, thereby distracting those who try to use them while driving. In contrast with worries about AI being overly intelligent, findings in the Canadian Journal of Experimental Psychology bring to light that "although voice-based technology may allow the driver to keep their eyes on the road, they may actually increase the level of cognitive workload associated with interactions with technology in the vehicle." This is a major issue, since human eyes, as the article puts it, "cannot focus on two disparate locations at the same time."
Another pressing issue in the world of AI is autonomous robotic warfare, which some in society think will help in the fight against foreign enemies, but which could in turn cause the deaths of innocent individuals. Although AI in battles against foreign enemies could be beneficial to society, it should still be approached with caution. Just as a smartphone personal assistant makes mistakes, autonomous robots are likely to display similar failures. On this topic, the professor of artificial intelligence Noel E. Sharkey argues that the idea of autonomous robots is gradually manifesting itself through "the use of drones in the conflicts in Iraq and Afghanistan and by the US Central Intelligence Agency for targeted killings and signature strikes in countries outside the war zones: Pakistan, Yemen, Somalia, and the Philippines" (Sharkey). Consequently, United States forces are working toward a much more dangerous goal: fielding "autonomous battlefield robots," where "plans to take the human out of the control loop is well underway for aerial, ground, and underwater vehicles. And the US is not the only country with autonomous robots in their sights" (Sharkey). This has given way to more worry regarding the ethics of AI. All people and systems are expected to follow basic ethical standards and regulations regardless of the company or product involved, and this creates a problem especially in the realm of autonomous robots. Robots lack a major component required "to ensure compliance with the principle of distinction." They unfortunately lack "adequate sensory or vision processing systems for separating combatants from civilians, particularly in insurgent warfare, or for recognizing wounded or surrendering combatants." This is a major problem, as innocent civilians could be fired at and killed when robots misread a situation as a threat, causing more harm both for those firing the shots and for the opposing side.
To go further into these issues, AI also plays a big role in the medical field. The technology has helped enormously to detect various illnesses, and if a person is in a coma, doctors can follow the situation through all of the technological equipment. One example of how AI is being used is in ophthalmology, the branch of medicine that deals with diseases and surgery of the eyeball and orbit. As the technology has developed, doctors are able to detect almost any type of eye problem nearly instantly through AI. "Deep integration of ophthalmology and artificial intelligence has the potential to revolutionize current disease diagnose pattern and generate a significant clinical impact" (Lu et al. 1). The article "Applications of Artificial Intelligence in Ophthalmology" traces the discipline back to 1956, when the computer scientist John McCarthy founded the field of artificial intelligence; machines programmed on these principles have since helped many patients and doctors detect diseases of the eye.
In recent years, AI techniques have shown to be an effective diagnostic tool to identify various diseases in healthcare. Applications of AI can make great contributions to provide support to patients in remote areas by sharing expert knowledge and limited resources. (Lu et.al 7)
The same article explains how the use of AI is broken into different steps for building a system and detecting eye diseases: building AI models, data preprocessing, training, validation, and testing. Researchers have also developed AI for specific eye diseases such as diabetic retinopathy (which can lead to blindness), glaucoma (damage to the optic nerve), age-related macular degeneration (blurred vision), and cataract (symptoms such as a cloudy lens).
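In very simplified form, the steps the article lists can be sketched as a generic supervised-learning workflow. The toy data, the single-threshold "model," and the split sizes below are hypothetical placeholders chosen for illustration, not the actual deep-learning systems Lu et al. describe:

```python
import random

# Steps 1-2: build a (toy) dataset and preprocess it. Each example is one
# feature standing in for an eye scan, labeled diseased when it exceeds 0.6.
random.seed(0)
data = []
for _ in range(100):
    x = random.random()
    data.append((x, x > 0.6))
random.shuffle(data)
train, val, test = data[:60], data[60:80], data[80:]

def accuracy(threshold, examples):
    """Fraction of examples the threshold classifier labels correctly."""
    return sum((x > threshold) == label for x, label in examples) / len(examples)

# Step 3: "train" an extremely simple model by picking the threshold
# that best separates the training examples.
best_t = max((t / 20 for t in range(21)), key=lambda t: accuracy(t, train))

# Steps 4-5: validate and test the chosen model on held-out data.
print(best_t, accuracy(best_t, val), accuracy(best_t, test))
```

Real diagnostic systems replace the single threshold with a deep neural network trained on retinal images, but the division into model building, preprocessing, training, validation, and testing works the same way.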
From all of the above, we see the benefit of AI used to detect various eye diseases. Over the years it has improved dramatically, to the point where, if anyone has an eye issue today, the machine can often tell where the problem is. However, the disadvantage of AI in this field, what the article calls the "black box" problem, is that after examining the eyes and detecting the problem, the systems cannot determine what treatment or decision should follow. Doctors and patients cannot fully trust those machines or rely on them, because AI machines do not give an explanation of why the patient has been diagnosed with that specific disease.
There is also another interesting form of AI that has been created for children: personal robots, also called "assistive robots for play." Ideally, they are created and operated by robotics specialists, and their intention is to make people, especially children, productive and to develop their brains. In the article "Will Artificial Intelligence Be a Blessing or Concern in Assistive Robots for Play?" the authors analyze these personal robots and their effect on children, specifically those with special needs, and on people with disabilities. The people who created these robots intended to help children in their development, so that through interaction with robots they will "experience a sense of internal control and mastery" (Adams et al. 214). At the same time, it was interesting to learn that those robots can also be an obstacle to children's development. The children can become less confident, sometimes frustrated, and instead of playing on their own they become "slaves" of the personal robot. The article gives an example of children with disabilities who, when given such a robot, are not able even to turn it on or switch it off. These robots, also called assistive manipulation devices, are programmed to manipulate objects and are supposed to help children start recognizing them.
Besides those assistive devices, Lego robots have also been invented for a similar purpose. The Lego robots have been tested with children with cerebral palsy. The results show that the children's engagement increased and that the children were able to do functional Lego robot play; in other words, they drive the robot around instead of pretending that the Lego robot is some type of gatekeeper. Even so, some of the children had difficulties operating the robots. Another issue that arises with personal robots is the privacy of the children. Some of the robots have cameras and small computer programs inside; they record the child's face, and that recording can end up somewhere else. Here the privacy is automatically lost. When it comes to making personal robots, the privacy of the child and the family must be considered. "Improvements in technology should not be the goal, but improvements should be valued for how they can support goals in healthy development and functional independence" (Adams et al. 216). When building assistive robots for play, considering the ethical implications for families is a must.
Building upon the issue of ethics, consider how AI manifests itself in simple things like social media websites and smartphone applications. As the article "Ethical, Explainable Artificial Intelligence" explains, think of a simple scenario: "Travel and mapping tools such as Google Maps and Waze find the fastest routes for commuters traveling to and from work and school" (Gordon-Murnane 23). In and of itself, this is very helpful to people. However, the fact that these apps can make an educated guess about where people are going before they even type it into their devices raises questions about personal privacy and information. This type of privacy concern can also be linked to search engine history manifesting itself through ads on Facebook or Instagram. For example, one can search for a clothing item on Amazon through Google, and the next day the very same item will appear in that person's Facebook news feed. This is a clear example that people's personal and private information is being analyzed by the technological world.
Often overlooked by society, AI is replacing jobs that were once done only by human manual labor. In Alan Brown's article "Robots at Work Where Do We Fit?" he clearly addresses just that. The truth is that robots are rapidly taking over the job market, especially in America. Reflecting back on U.S. history, particularly in the business of textile making, many workers felt threatened by the invention of the power loom. In that case, however, the power loom did not have a major impact on employment, but artificial intelligence does. AI does not take away manual labor jobs in just one line of work, but in many, since so many things are now internet- and robot-based. For example, "when we wanted to travel we called travel agents, who would ask where and when we wanted to go, query some proprietary databases (or even paper catalogs), talk us through our options, and book reservations. Today, we simply go online" (Brown 33). This is a harsh but realistic picture of what AI was even a few years ago and what it continues to evolve into today. This is what the article calls "the Second Economy," in which AI replaces people's jobs. There are machines that charge our credit cards and make reservations at restaurants, hotels, and on flights. In a word, "Everything takes place within seconds, without any human intervention" (Brown 34). This has also been understood as a threat, or a revolution of digital technology. Many AI systems are being created for this purpose with the explanation that they are faster and more secure, but on the other side, not many people pay attention to how much these advances will affect people's jobs. The article warns that by creating robots that replace people's jobs, the world will face a serious problem within the next thirty years.
To weigh whether artificial intelligence is a small harm or an exponential threat, the debate called "Don't Trust the Promise of Artificial Intelligence" raises very important discussions and arguments for and against AI. In the beginning of the debate, Jaron Lanier, a computer scientist and the author of Who Owns the Future?, argues for the motion and brings up interesting points. One of them is automatic translation. He says there is nothing wrong with it in itself, but it was created by scraping the efforts of many people around the world who are not even aware it is happening. In other words, the automatic translation machines take the credit of the human translators, and as those machines operate, the world creates an enormous amount of unemployment. Lanier says that "we should not be shrinking the economy over fantasy. This fantasy of these artificial creatures make us ignore our own lives, our own contributions" (Lanier). The main question he raises is: can people use AI machines responsibly?
The other side of the debate argues against the motion, that is, in favor of trusting AI. Martine Rothblatt, a transhumanist, entrepreneur, and author of Virtually Human, defends AI. She begins by naming three terms around which the promise of AI revolves, noting that all of them have to be defined: replication, application, and fascination. For the first, she states that "with replication, we replicate a human mind"; she believes the function of the human mind can be replicated. For the second, she says that AI will be good for humanity: once replication is achieved, the application will be beneficial, and humans will have real uses for AI. What she finds most amazing here concerns diseases such as dementia, where she believes AI replication could help those who suffer from them. For the third, fascination, she goes on to say, "we love these AI's. We love them in the same way as we love our cats, our dogs, our friends" (Rothblatt).
As the debate continues, Andrew Keen, an internet entrepreneur and the author of the book The Internet Is Not the Answer, speaks for the motion and makes his points in the following manner. He says strongly that this issue is a very serious deal and that we humans "cannot replicate our minds that live forever in the clouds some other digital space" (Keen). He sees a big problem in wanting AI to replicate humans, human intelligence, human beings, human identity, or the human soul. Keen says that the biggest issue in the debate is not really about trusting artificial intelligence itself; rather, the biggest argument is that "we should not trust the promise" (Keen). He is not against technology, but when someone tries to create an AI that can replicate humans, there is an issue; the problem is not with the technology but with the promise and the ideology behind it. He argues against Rothblatt and the way she justifies her ideas, "unless you humanise them" (Keen) and make moral judgments by replicating humans. He points out that in the 19th century there was a philosophical idea of liberation theology, which is now re-emerging in society: the same thing is happening today with the promise of AI, whose creators promise to "liberate our bodies and live forever" (Keen). He also says that this concept is being taught philosophically and ideologically instead of through the context of the real world. Another interesting point he makes concerns big companies such as Google and Facebook: they invest millions of dollars in AI, yet they do not care about people so much as about improving their business; they like to own AI technology. In the debate, Keen wants to warn people to be more aware of what is really happening with AI, for as he says, "the promise is scary" (Keen).
His opponent, James Hughes, executive director of the Institute for Ethics and Emerging Technologies, believes that we humans can use the technology to be smarter and happier and to live longer, healthier lives. He says that "AI is codification of the way we do things together" (Hughes). His argument is that AI could be very helpful in the field of medicine, where machines can detect and treat diseases. He mainly sticks with health care, explaining how beneficial AI can be for the human body, and for that reason he believes technology can make humans smarter. He says that "future AI will allow us to understand the complexity of the genome, unlock health and longevity for our children" (Hughes).
To summarize, artificial intelligence has certain advantages and has helped in many fields and industries. At the same time, however, there are big disadvantages. As the technology improves, things risk getting out of control. Robots are being created with the intention of doing the same tasks as humans, which automatically reduces employment. The bottom line should be that all AI must be taken in moderation. We, the people, have to be sober about what technology we purchase and how we use it. We should not rely fully on AI; we should use our brains and continue to develop our own learning. If we rely fully on AI, we get so comfortable that we no longer worry about learning certain things, knowing that AI can do them for us.



Works Cited
Strayer, David L., et al. "The Smartphone and the Driver's Cognitive Workload: A Comparison of Apple, Google, and Microsoft's Intelligent Personal Assistants." Canadian Journal of Experimental Psychology, vol. 71, no. 2, June 2017, pp. 93-110. EBSCOhost, doi:10.1037/cep0000104.

Sharkey, Noel E. “The Evitability of Autonomous Robot Warfare.” International Review of the Red Cross, vol. 94, no. 886, June 2012, pp. 787–799. EBSCOhost, doi:10.1017/S1816383112000732.


Brown, Alan S. "Robots at Work Where Do We Fit?" Mechanical Engineering, vol. 138, no. 4, Apr. 2016, pp. 32-37. EBSCOhost, libproxy.mpc.edu/login?url=https://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=119357129&site=ehost-live&scope=site.

Gordon-Murnane, Laura. “Ethical, Explainable Artificial Intelligence: Bias and Principles.” Online Searcher, vol. 42, no. 2, Mar. 2018, pp. 22–44. EBSCOhost, libproxy.mpc.edu/login?url=https://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=128582745&site=ehost-live&scope=site.
Lu, Wei, et al. "Applications of Artificial Intelligence in Ophthalmology: General Overview." Journal of Ophthalmology, Nov. 2018, pp. 1-15. EBSCOhost, doi:10.1155/2018/5278196.

Adams, Kim, et al. “Will Artificial Intelligence Be a Blessing or Concern in Assistive Robots for Play?” Revista Brasileira de Crescimento e Desenvolvimento Humano, vol. 28, no. 2, May 2018, pp. 213-218. EBSCO, doi:10.7322/jhgd.147242.


Harris, Sam. "Can We Build AI without Losing Control Over It?" TED, June 2016, https://www.ted.com/talks/sam_harris_can_we_build_ai_without_losing_control_over_it#t-2338.

Lanier, Jaron. "Don't Trust the Promise of Artificial Intelligence." Intelligence Squared Debates, YouTube, 14 Mar. 2016, https://www.youtube.com/watch?v=yC_dfbxQqRI.

Rothblatt, Martine. "Don't Trust the Promise of Artificial Intelligence." Intelligence Squared Debates, YouTube, 14 Mar. 2016, https://www.youtube.com/watch?v=yC_dfbxQqRI.

Keen, Andrew. "Don't Trust the Promise of Artificial Intelligence." Intelligence Squared Debates, YouTube, 14 Mar. 2016, https://www.youtube.com/watch?v=yC_dfbxQqRI.

Hughes, James. "Don't Trust the Promise of Artificial Intelligence." Intelligence Squared Debates, YouTube, 14 Mar. 2016, https://www.youtube.com/watch?v=yC_dfbxQqRI.