A.I., TROLLEY PROBLEM AND FUTURES

I’ll come back to the π-type learner soon. This post connects loosely to that string of thought.

I read in the New Scientist (9 December 2015) that “Super-literate software reads and comprehends better than humans”, and an ABC Australia news report on an 11 December 2015 Science article described how scientists taught a computer to “learn” like a human. Both articles caught my attention: partly because we still know so little about the brain and all its glorious mysteries (although the past decade has brought leaps and bounds in clarifying many hypotheses, so how intriguing to reverse-engineer the process in an attempt to understand it), and partly because they point to a possible future in which A.I. devices or entities do many white-collar jobs, just as technology transformed factory production lines.

In one of my classes today a student gave a presentation on “bosses”. There was, of course, the usual confusion among many students in grasping that leader and manager are not synonymous; both are crucial. Notwithstanding, the presentation provided a useful springboard for a discussion on future working scenarios. Nearly all the students were oblivious to A.I. ideas – and I can’t really see any reason why they should not be, since no-one teaches them about such ideas, philosophical or practical. There are courses on philosophy, and an introductory course on the Mind (taught by me, blending brain science, cognitive science and behavioural science/economics); one faculty member is a mathematician who also teaches rudimentary information-systems material; but none, as far as I am aware, teach technology and/or A.I. and future scenarios. The students’ awareness of A.I. was almost zero – even less than mine! No-one had even heard of the Turing Test. A few had heard about some mysterious science fiction – a “driverless car” – but little more.

I mentioned the above articles to the students and asked them to discuss the future scenarios likely to play out within their working lives (most probably within half of their working lives), and which would certainly be reality in their children’s world. During the discussion that followed, one student felt that a world in which A.I. managed and operated white-collar jobs would not be desirable, because if the A.I. made a mistake it could not take “responsibility” for the error, say, as a human might. (I pointed out that many bureaucrats and others of similar mindset already take no responsibility…) The student’s opinion provided a wonderful opportunity to introduce the “Trolley Problem”.

In line with most other reported results for this problem, about 15% of the class chose not to pull the lever to divert the trolley, killing one person but saving three (or five, or whatever the number). The other 85% believed it was morally better to kill one than three. The 15%, however, felt it would be a wrong (moral) choice to act, since their action would directly cause a person’s death, whereas doing nothing is not a result of their action (though it is, arguably, inaction). Do we have a moral obligation to act to save the three (or five), even though we would still kill one?

I am not the first to transpose this dilemma onto a driverless-car scenario: you are the only person in the autonomous vehicle; the A.I. that controls the vehicle has to make a decision, either allow the car to hit a tree and kill you, or steer onto another path and kill three bystanders. Which would you prefer the A.I. to choose?
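To make the two moral stances concrete, here is a toy sketch in Python. It is purely illustrative: the scenario class, the two policies and their names are my own inventions for this post, not anyone’s actual vehicle software. It simply shows how a crude utilitarian rule and a crude protect-the-occupant rule diverge on exactly this scenario:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """A hypothetical forced choice for an autonomous vehicle."""
    deaths_if_stay: int        # deaths if the A.I. does nothing (hits the tree)
    deaths_if_swerve: int      # deaths if the A.I. actively swerves
    occupant_dies_if_stay: bool

def utilitarian_policy(s: Scenario) -> str:
    # Minimise total deaths, regardless of who dies or whether it acts.
    return "swerve" if s.deaths_if_swerve < s.deaths_if_stay else "stay"

def protect_occupant_policy(s: Scenario) -> str:
    # Never choose an option that kills the occupant, whatever the body count.
    return "swerve" if s.occupant_dies_if_stay else "stay"

# The scenario above: staying kills the occupant (you); swerving kills three.
s = Scenario(deaths_if_stay=1, deaths_if_swerve=3, occupant_dies_if_stay=True)
print(utilitarian_policy(s))       # "stay"   – one death beats three
print(protect_occupant_policy(s))  # "swerve" – you survive, three do not
```

Neither rule is obviously “correct”; the point is that someone, somewhere, has to decide which of them ships in the vehicle.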

Until the enthusiastic designers of the driverless car can convince lawmakers – and more importantly, the consumer – that self-sacrifice is an acceptable option, I cannot envision rapid uptake of such a vehicle by the wider public, especially if it is to carry you and your family!

Although the trolley problem made for a mind-numbing detour for the students, the discussions highlighted to them that perhaps the education they were receiving was not sufficient: necessary, but insufficient. “And why aren’t we being exposed to skills and proficiencies that might benefit our future?” asked one student.

“Well, it is complicated, but I will simplify and highlight a few factors,” I responded. “Firstly, there is the matter of experience. As we discussed before [on escalator promotion and sempai–kohai cultures, experience may be only of the organisation, which may make for good managers but not necessarily leaders or leadership], many in universities are highly competent in their own area but have limited experience of other worldly or organisational contexts. Simply, they cannot see. Their view of the future is much the same as their view of the present – and of their personal past. And, secondly, it would be a scary thing to admit that what one had been teaching for twenty years was now obsolete. What would they do?” (Collaborating with others outside their discipline would be a good start.)

If much of our educational process and repertoire does not adapt, and creative ideas continue to arrive upon us at ever-increasing rates as the norm, will the masses have the skills, proficiencies or mental capabilities to participate in society? Humans are an adaptable lot, so no doubt the entrepreneurial types amongst us will enable us to feel safe on the 13th Floor. Perhaps the whole paradigm will shift, the orange skin will slip, and we will be blissfully assimilated. Then, perhaps, as a friend joked, the Japanese will emerge victors in their future ageing/aged society and economy, as A.I. smoothly integrates into organisations and society to fill the void left by human absence (and increases GDP by factors humans could never achieve).

I am sure the Oxbridges, MITs, Harvards, Stanfords et al., not to mention Google, Facebook and the unknown newbies yet to be discovered, have it all in hand…

New Scientist – “Super-literate software reads and comprehends better than humans”, 9 December 2015: https://www.newscientist.com/article/mg22830512-600-super-literate-software-reads-and-comprehends-better-than-humans/

ABC News Australia – “Scientists teach computers how to learn like humans”, 11 December 2015: http://www.abc.net.au/news/2015-12-11/scientists-teach-computers-how-to-learn-like-humans/7020740

Science – 11 December 2015, Vol. 350, No. 6266, pp. 1332–1338: http://www.sciencemag.org/content/350/6266/1332
