Do we want machines to take all our jobs?

Week 9: March 11, 2018

Source #16:

Sokol, Joshua. “Why Self-Taught Artificial Intelligence Has Trouble With the Real World.” Quanta Magazine, 21 February 2018, https://www.quantamagazine.org/why-self-taught-artificial-intelligence-has-trouble-with-the-real-world-20180221/. Accessed 16 March 2018.

In this article, Sokol looks into how creating game-playing algorithms can and cannot improve researchers’ chances of developing general artificial intelligence. Researchers have been building algorithms that can compete in games like chess, checkers, and more complex video games. The end goal is to create general artificial intelligence (AI that simulates human intelligence in its ability to solve multiple problems and interpret hidden information) by mimicking real-world settings within games and by having each new version of a machine play against its predecessor. The problem, however, is that researchers cannot create a perfect model of the real world for AI to practice within. Algorithms also require an “objective function,” meaning they need a single end goal to aim for in order to be effective. Creating an objective function for game play is easy enough, as the goals are simple and the data isn’t hidden, but in the real world things are not so simple.
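
To make the “objective function” idea more concrete, here is a small sketch of my own (not taken from Sokol’s article); the function names, scenarios, and reward numbers are invented for illustration. A game’s objective maps a fully visible outcome onto a single score, while a real-world goal forces you to guess at weights for messy, partially hidden trade-offs.

```python
# A minimal, hypothetical sketch (not from the article) of why game
# objectives are easy to encode and real-world objectives are not.
# All names and weights below are invented for illustration.

def game_objective(outcome: str) -> float:
    """Reward for a finished board game: the result is fully visible
    and maps cleanly onto a single number an algorithm can maximize."""
    return {"win": 1.0, "draw": 0.0, "loss": -1.0}[outcome]

def real_world_objective(situation: dict) -> float:
    """A made-up 'tidy the kitchen' goal for a household robot. Every
    weight here is a guess: how much is a broken dish worth against a
    clean counter? The data is partial and the trade-offs are fuzzy."""
    return (
        2.0 * situation.get("surfaces_cleaned", 0)
        - 5.0 * situation.get("dishes_broken", 0)
        - 0.1 * situation.get("minutes_taken", 0)
    )

if __name__ == "__main__":
    print(game_objective("win"))  # 1.0 -- unambiguous success
    print(real_world_objective(
        {"surfaces_cleaned": 4, "dishes_broken": 1, "minutes_taken": 30}
    ))  # 0.0 -- success or failure? Hard to say.
```

Running the sketch shows the game score is unambiguous, while the household score of 0.0 is hard to read as either success or failure, which is roughly the gap the article describes.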

Source #15:

Bollegala, Danushka. “I lost my job to a robot.” Vice Unlimited, 1 August 2017, https://www.unlimited.world/vice/i-lost-my-job-to-a-robot. Accessed 15 March 2018.

Bollegala doesn’t actually lose his job to a robot; rather, he discusses the possibility of all human beings eventually losing their jobs to automation. He thinks this outcome is inevitable; however, he doesn’t believe it spells doom for humanity. He notes that machines can be useful tools, and that allowing them to run society would leave people free to do what they’ve always wanted to do. He acknowledges that humans will always be better at interacting with other human beings, so people will have to intervene in cases where human compassion is necessary. He proposes an AI tax (taxing businesses that use automation instead of human labour) that would allow governments to fund a Universal Basic Income. In his final takeaway, Bollegala warns that the biggest challenge humans will face in the future isn’t a machine takeover but overcoming our egos enough to accept something that does our jobs better than we do.

Week 8: March 4, 2018

Source #14:

Ardagh, Arjuna. “Is Technology Good For You?” Huffington Post, 18 August 2016, https://www.huffingtonpost.com/arjuna-ardagh/is-technology-good-for-yo_b_11569108.html. Accessed 3 March 2018.

In this article, Ardagh prompts readers to think about the long-term effects technology could have on our health, relationships, and minds by asking a series of questions. Some of these questions include: “Does it make you more Self Reliant or more Dependent?”; “Is this device or software improving your health, or negatively impacting your health?”; “Is technology in general leaving you with more time to relax and do what you really love, or is it taking time away from you?” He looks at the ways technology might impact people from multiple angles and concludes that technology isn’t going anywhere, but that we should make sure advancements in this area align with our human values and goals. That means creating tech that doesn’t interfere with our ability to socialize and grow as happy, healthy, mentally sound beings.

Source #13:

Lay, Stephanie. “Uncanny Valley: why we find human-like robots and dolls so creepy.” The Conversation, 10 November 2015, https://theconversation.com/uncanny-valley-why-we-find-human-like-robots-and-dolls-so-creepy-50268. Accessed 3 March 2018.

In this article, Lay explains what the ‘uncanny valley’ is and how it works. The uncanny valley is the feeling of disgust and unease people experience when they encounter a robot, animated character, or doll that looks or behaves in an almost human-like manner but isn’t quite right. This feeling should fade once a human-like object attains more appealing and acceptable human qualities. One theory is that uncanny human-like entities remind us of people with psychopathic traits. The sensation of the uncanny valley may be only a symptom of this particular point in AI history, or it may be something humans continue to deal with well into the future.

Week 7: February 25, 2018

Source #12:

Snow, Richard E. “Aptitude development and education.” Psychology, Public Policy, and Law, vol. 2, no. 3-4, 1996, https://search-proquest-com.ezproxy.library.yorku.ca/psycarticles/docview/614398334/fulltextPDF/9D7B08D235EE42F1PQ/1?accountid=15182. Accessed 24 February 2018.

In this academic article, Snow argues that intelligence (which he takes to be best measured by IQ score) can be developed. Snow’s findings suggest IQ is not static and may be influenced by environmental factors and education. People, if given rigorous education, can experience a surge in intellectual development. However, education must be tailored to individual needs in order to effectively enhance IQ. Other factors, like poor nutrition and impoverished living conditions, should also be addressed, in addition to meeting personalized educational needs, when attempts to increase IQ are being made. Overall, intelligence can be developed, but doing so isn’t necessarily easy.

Source #11:

Winthrop, Henry. “Some Psychological And Economic Assumptions Underlying Automation, I.” American Journal of Economics & Sociology, 1958, http://web.b.ebscohost.com.ezproxy.library.yorku.ca/ehost/pdfviewer/pdfviewer?vid=4&sid=0f5d6c8f-92b5-480b-800f-29c868ec1017%40sessionmgr103. Accessed 23 February 2018.

Winthrop’s academic article counters the optimistic predictions about job automation expressed at a 1956 symposium: namely, that employees could be retrained with ease and would enjoy greater job satisfaction after upgrading to new roles. Winthrop is concerned with the practicality of these predictions. He worries that semi-skilled and unskilled workers won’t have the IQ necessary to be retrained for the skilled jobs that will replace lower-level jobs as technology advances. He also argues that placing unskilled or semi-skilled workers (or those with lower IQs) in positions that require higher mental capacity and concentration won’t lead to higher job satisfaction and will instead increase anxiety levels. Workers of any intellectual category may also fail to feel more job satisfaction, seeing as the pool of new jobs may not be to the personal tastes of individual workers.

Week 6: February 18, 2018

Source #10:

Dowd, Maureen. “Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse.” Vanity Fair, 26 March 2017, https://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x. Accessed 17 February 2018.

This article provides an in-depth analysis of what Elon Musk, a leading technology entrepreneur, has to say about AI. Musk believes AI could one day be the cause of humanity’s demise. Not every expert agrees with him, though. Some accuse him of being a sensationalist who has seen one too many dystopian sci-fi movies. Others accuse Musk of stirring up fear to promote his own “safer” AI ventures, such as OpenAI, which he co-founded. Musk proposes that AI would be safer if it were merged with the human brain, that is, if we focused on creating cyborgs rather than autonomous artificial intelligence, which would give human beings more control over advanced technology. Musk argues human beings are already cyborgs anyway. Overall, Musk seems to want to play a part in the development of AI, but he wants to find ways to minimize the potential dangers and the possible obsolescence of the human race.

Source #9:

“Brian Cox on Q&A | Artificial intelligence a ‘real threat to civilisation’ | #qanda #ai #briancox.” YouTube, uploaded by News Bite Global, 13 November 2017, https://www.youtube.com/watch?v=NX7YYI2mhNE

In this Q&A video, Brian Cox, a physicist and professor at the University of Manchester, mostly answers questions about the future of harvesting resources from outer space, but he does briefly touch on his predictions for AI in the first few minutes. He warns people not to leave complex, moral decision-making to machines and algorithms, believing people are better equipped to make these difficult choices. He also advises governments to build a system, in response to the growing automation of work, that replaces old, soon-to-be outdated jobs with new ones and provides funding for career retraining programs. He believes governments have a responsibility to prepare citizens for new types of jobs, seeing as we can’t “hang onto the old jobs” as technology expands.

Week 5: February 11, 2018

Source #8:

Basl, John. “The Ethics of Creating Artificial Consciousness.” Northeastern University, n.d., https://philarchive.org/archive/BASTEO-11. Accessed 10 February 2018.

In this scholarly paper, Basl examines how researchers might apply the concept of moral patiency to an artificial consciousness if such a being were ever created in the future. He argues that the treatment of an artificial consciousness should be relative to the circumstances, without strictly adhering to any normative value structure. The consciousness may be intelligent but have different needs and interests from any current moral patient, and therefore require different treatment than a human being, dog, chimp, or any other creature considered worthy of moral patiency. It may also be worthy of moral patiency yet be deemed too important for research to have that status honoured. How an intelligent artificial consciousness should be treated raises difficult ethical questions and sparks debate.

Source #7:

“Philosophy of Intelligence with Matthew Crosby – TWiML Talk #91.” This Week in Machine Learning & Artificial Intelligence from Sam Charrington, 21 December 2017.

In this podcast, guest speaker Matthew Crosby, a researcher from Imperial College London, talks about the Kinds of Intelligence project he works on. Crosby’s research looks at intelligence from a philosophical point of view, including studies into “predictive processing” and “controlled hallucination,” with a special focus on applying these theories to AI. The podcast provides insight into the differences between human intelligence and artificial intelligence, as well as the differences between human and machine communication and language use. Crosby also mentions the idea of “moral patiency,” an ethical concept that asks how intelligent a machine needs to be in order to be considered worthy of moral consideration (in other words, if machine intelligence began matching our own, would using these machines for work be considered slavery?).

Week 4: February 4, 2018

Episode Pitch:

Whether we like it or not, the structure of the job market is changing in response to developments in robotic tech. Fears that machines will take every job available to people, rendering human beings obsolete and useless, loom on the horizon. Panic is already in the air, but Ben Dickson, a contributor at TechFinancials, is optimistic about the future in spite of this growing cultural anxiety.

Dickson wants the machines to take our jobs. Not only does he highlight the potential for reduced prices on the goods we need to survive once human labour costs are dropped, he also believes humans will be freer to express their individual creativity and connect with other people. Dickson’s view is nothing if not positive, and I, too, am inclined to hope for the best possible outcome here.

Automation is a new frontier for humanity. The discussion is home to more questions than answers. Some people, possibly the majority, worry a robotic revolution will signal the end of the human race. Other people believe automation is the key to our salvation. No matter which side of the debate you’re tempted to lean toward, there’s no question we are in the beginning stages of a massive transition. And maybe this transformation is exactly what we need.

Nobody is saying this revolution is going to be easy, or painless. It’s incredibly terrifying, especially when nobody can deliver any concrete answers concerning the final outcome. We’re just picking sides out here, either warning our fellow humans of the upcoming dangers or dreaming of utopias. We’re questioning whether capitalism will falter and die, and where our jobs might go. If all our jobs are gone, then what are we going to do with ourselves? Are we building a utopia or a dystopia? These questions are already springing up. The discussion has already started. This isn’t some sci-fi scenario; it’s happening right now. We’ve got to talk about it. It’s our future. It’s our present.

Dickson’s article struck a chord with me. From what I’ve gathered, most people aren’t as optimistic as he is. I appreciate his point of view. I’m an idealist at heart, and I believe the utopia is within our grasp. But how do we make this happen? How do we control the outcome? Well, I don’t believe I know. But I can speculate. Further speculation isn’t going to hurt us, not in a debate where guesswork is all we have. Why not join the discussion?

Music: bensound.com

Source #6:

Silman, Victoria. “‘Ghost in the Machine’ debates AI.” Excalibur, 31 January 2018, p. 2.

Silman’s article recounts the debate points covered at the “Ghost in the Machine” event hosted by York’s Lassonde School of Engineering. A group of industry professionals and academics gathered at this event to discuss artificial intelligence and how it could impact the future of work, particularly for students. The overall tone of the event was overwhelmingly optimistic, with multiple experts agreeing that AI would likely benefit humanity and that any risks it posed could be contained. When asked about the future of work, the panellists advised students entering the workforce to strengthen their critical thinking abilities and other soft skills, all of which distinguish human beings from AI technology.

Source #5:

“Artificial intelligence, robots and the future of work (Encore September 13, 2017).” Ideas from CBC Radio, 31 January 2018.

This podcast from CBC Ideas explores the ways in which human society could change as automation technology progresses. The general belief expressed in the recording is that the nature of this expected change is uncertain. Change will happen, but it’s not yet known whether it will be positive or negative; the end result depends entirely on humanity’s responses to growing technologies. There’s no question that the job market, economy, and social values will be affected by robotic technologies, but whether the end result will be something we’ve never seen before or something reminiscent of the past is yet to be determined. History shows that as technology changes, the job market changes and new jobs are invented. Some scholars don’t believe increased automation will destroy capitalism, only that it will change the nature of work, with more focus on environmental, social, and technological jobs. Other scholars are not sure history can repeat itself this time, seeing as developing AI technology has the potential to outperform human beings and thus make them obsolete. Both optimism and pessimism fuel this discussion.

Week 3: January 28, 2018

Source #4:

Manyika, James, et al. “What the future of work will mean for jobs, skills, and wages.” McKinsey Global Institute, November 2017, https://www.mckinsey.com/global-themes/future-of-organizations-and-work/what-the-future-of-work-will-mean-for-jobs-skills-and-wages. Accessed 27 January 2018.

This statistical report from the McKinsey Global Institute aims to predict future job trends in the face of automation. The central finding is that about 50% of jobs, or at least certain elements of many jobs, could be automated by 2030. The findings do not indicate that the job market will be fully eradicated in the near future, but the report does advise workers to develop new skills in areas that involve social interaction, creativity, advanced logic, and technological innovation. Jobs that involve these particular skills will still be in high demand by 2030, since they require a human element that is beyond the capacities of present or predicted robotic technologies.

Source #3:

Beckett, Andy. “Post-work: the radical idea of a world without jobs.” The Guardian, 19 January 2018, https://www.theguardian.com/news/2018/jan/19/post-work-the-radical-idea-of-a-world-without-jobs. Accessed 24 January 2018.

In this article, Beckett explores the perspectives of “post-workists,” a group of theorists who dream of a future world where jobs don’t exist. They envision a utopia where office buildings are turned into community and art centres, and where family and other social connections are not overshadowed by the demands of capitalism. In recent years, in light of increasing unemployment (which can partially be blamed on automation), some members of younger generations have adopted a more cynical view of working culture. The heavy burdens of working so-called pointless jobs, and the negative health effects that come along with this type of stress, are being called into question. Alternatives to present work conditions, such as 15-hour work weeks or a society based entirely on leisure, are being considered. Beckett (like a number of scholars and politicians), however, is hesitant to believe all jobs will ever be fully eliminated. He acknowledges that careers often bring a sense of meaning and purpose to people’s lives, and that post-workist visions of utopia greatly resemble what’s already being practised in modern working society.

Week 2: January 21, 2018

Source #2:

Chan, Kelvin. “Robotics company working toward building trust between humans, robots.” The Toronto Star, 19 January 2018, https://www.thestar.com/business/tech_news/2018/01/19/robotics-company-working-toward-building-trust-between-humans-robots.html. Accessed 21 January 2018.

Chan’s article reminds us that we still have a long way to go before we successfully build a robot that functions like a human being and blends in perfectly with human society. The first step is to build trust between humans and robots, as well as to find ways to decrease the uncanny-valley effect people currently experience when dealing with realistic humanoid robots. Hanson Robotics, the creator of Sophia the robot, is attempting to build the perfect social robot, one designed to take care of human beings without eliciting fear. Sophia, the company’s most famous social robot, is constantly being refined to look and behave in the most lifelike and socially acceptable fashion.

Source #1:

Istvan, Zoltan. “Will capitalism survive the robot revolution?” TechCrunch, 29 March 2016, https://techcrunch.com/2016/03/29/will-capitalism-survive-the-robot-revolution/. Accessed 21 January 2018.

Istvan’s pop-science opinion piece suggests that capitalism as a global economic system will not survive the growing robotic revolution, seeing as machines will one day be able to outperform humans in almost every field. A new system will have to be put into place, and society will have to adopt different values in order to move forward. Istvan urges us to keep an open mind while forming solutions, considering that currently developed models may not hold up when the reality of a fully automated world bears down on us. He does offer a bit of speculation, though, when he imagines a society built around the exchange of knowledge as currency, even going so far as to bring up “the singularity.” Istvan further asserts that, for all their work in technological innovation, human beings deserve a life of luxury and pampering at the hands of their robotic creations.

 

- Luke M.