Human Being Human

If you’re in or around Toronto, please Save The Date – Thursday, May 2 – for a new kind of Event Experience!

Logical Thinking.

I’ve written before about my very first gig…developing software for an Insurance company in Waterloo, Ontario.

When I started, I was merely ambivalent about the role…with enough time, I came to truly despise it. Having very little interest and only slightly greater aptitude for that type of work was a nasty combination.

Still, it wasn’t all bad. Waterloo was (and still is) a lovely place; I met some incredible people, many of whom I’m still friends with to this day. Even the job itself yielded one massive long-term benefit. It taught me various coding-related concepts, knowledge I would end up using in almost every single job to come, including the past two decades of Project Management consulting.

Ethical Thinking.

Looking back at it from 2019, the ‘latest and greatest’ technology of the 1990s seems so incredibly antiquated. With advances in computing, and Artificial Intelligence in particular, coding decisions aren’t merely logical in nature…they increasingly deal with moral questions and issues.

For example, if an autonomous vehicle were to encounter a situation where all outcomes resulted in pain and suffering, ethical considerations immediately jump to the forefront.

Say a pedestrian steps in front of a self-driving car. The vehicle’s programming will immediately have to kick in and decide whether to: a) swerve left, into another car; b) swerve right, into a cyclist; c) continue on, and hit the pedestrian; or d) slam on the brakes, potentially injuring the passengers, and possibly still hitting the pedestrian. Of course, these are the same choices a human driver would face; the difference is that the car (or, to be more accurate, the programmer behind the scenes) would be making that decision on behalf of the individual. Some of you might be familiar with the ‘Trolley Problem’, which captures a few of these scenarios.
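To make the point a bit more concrete, here is a deliberately over-simplified sketch (in Python, with entirely made-up option names and harm scores) of what “the programmer making that decision on behalf of the individual” can amount to. The hard part isn’t the code – it’s deciding who gets to assign those numbers.

    # Illustrative only: hypothetical option names and harm weights.
    # No real autonomous-vehicle system is anywhere near this simple.
    from typing import Dict

    def choose_action(harm_estimates: Dict[str, float]) -> str:
        """Pick the option with the lowest estimated harm.

        Assigning the numbers below IS the ethical decision: someone has
        to decide how to weigh a cyclist against a pedestrian.
        """
        return min(harm_estimates, key=harm_estimates.get)

    options = {
        "swerve_left_into_car": 0.7,       # a)
        "swerve_right_into_cyclist": 0.8,  # b)
        "continue_hit_pedestrian": 0.9,    # c)
        "brake_hard": 0.4,                 # d)
    }

    print(choose_action(options))  # "brake_hard", given these made-up weights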

Many of the ethical dilemmas we’ll face aren’t new. The difference is that someone – manufacturers, governments, the public – will have to collectively decide on a common set of rules…a massive departure from the way things have worked throughout the history of technology.

One intriguing twist is that it may force us to confront questions we’ve been able to ignore. In that sense, it’s possible that building smarter machines may make us better humans.

What Were They Thinking?

What may be even more concerning is that there are already reports that coders and engineers are unable to understand or validate all the logic and conclusions of even rudimentary Artificial Intelligence. Some are suggesting that doing so isn’t even necessary, and that we should simply ‘trust the machines’.

Given the many cases of AI-related bias (which again, is essentially human-related bias), ‘trusting the machines’ sounds like a recipe for trouble, if not outright disaster. If we were to follow that path, surrendering our humanity to automation would not be a necessary evil…rather, it would be an evil of our own making.

Please join me each week for experiences, observations, and thoughts related to our upcoming project launch (March 2019). Your likes, comments, and shares are very much appreciated…and thanks for taking the time to stop by! Nigel Oliveira  Nigel’s LinkedIn Profile

2 thoughts on “Human Being Human”

  1. Your post makes me think of the great Asimov and his Laws of Robotics (written in 1942!!!): elegant and simply written, yet incredibly complex to execute when faced with a moral quandary.
    You assert that we will collectively have to agree on a common set of rules. How intricate do you think these rules will be? For example, auto manufacturers have to meet certain safety criteria (ABS, crash test performance, etc), but HOW they achieve these criteria isn’t necessarily spelled out in the legislation.
    That is, the rules don’t state how the equipment must be designed, they just state that the equipment must achieve a certain outcome, such as “prevent the brakes from locking up”. Will we see the same with “ethical” AI, much like Asimov’s Laws? Will the programmers be given free rein to code as they see fit, provided they ensure human safety?


    1. FYI:

      1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
      2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
      3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

