Algorithms and Agency — Learning to Live With Learning Machines

Larry Weeks
7 min read · Oct 1, 2019

Humans are making machines smarter and the machines are learning.

So, what is happening to humans using smart machines?

These are the types of questions I wanted to ask Kartik Hosanagar.

Kartik is a professor at the Wharton School of the University of Pennsylvania and one of the world’s top 40 business professors under 40. His new book, A Human’s Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control, is the topic of our discussion, which you can listen to here.

Problem solving at scale

Just to level-set: an algorithm is a set of sequential instructions performed to solve a problem. For our discussion, a programming algorithm is a computer procedure that tells a device or application the precise steps to take, transforming inputs into outputs.

These computer algorithms automate procedures and can solve complicated problems we wouldn’t otherwise be able to solve.
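To make that concrete, here is one of the oldest algorithms on record, sketched in Python: Euclid's method for finding the greatest common divisor. It is exactly the pattern described above, a fixed sequence of steps that turns inputs into an output.

```python
def greatest_common_divisor(a: int, b: int) -> int:
    """Euclid's algorithm: repeat one simple step until done."""
    while b != 0:
        # Replace (a, b) with (b, a mod b); the remainder shrinks
        # each pass, so the loop is guaranteed to terminate.
        a, b = b, a % b
    return a

print(greatest_common_divisor(48, 18))  # → 6
```

The inputs are two integers, the steps are the loop body, and the output is their greatest common divisor. Every algorithm, however sophisticated, follows this same inputs-steps-output shape.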

Remember that scene from The Social Network where Mark Zuckerberg tells Eduardo Saverin, “I need the algorithm you used to rank chess players” and Eduardo writes it on the dorm window? That’s an algorithm. It’s a real equation, by the way (modified in the film to rank looks), called the Elo rating algorithm, named after its creator, Arpad Elo.
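The Elo update is simple enough to fit in a few lines. This is the standard textbook formula, not the exact code from the film: each player has a rating, the formula predicts each player's expected score, and ratings move by how much the actual result beats or misses that expectation.

```python
def elo_update(rating_a: float, rating_b: float,
               score_a: float, k: float = 32) -> tuple[float, float]:
    """One round of the Elo rating update.

    score_a is 1 for a win by player A, 0.5 for a draw, 0 for a loss.
    k controls how fast ratings move after a single game.
    """
    # Expected score for A: probability-like value between 0 and 1.
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    expected_b = 1 - expected_a
    # Ratings shift by how far the result deviated from expectation.
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1 - score_a) - expected_b)
    return new_a, new_b

# Two evenly matched players (expected score 0.5 each); A wins.
print(elo_update(1500, 1500, 1))  # → (1516.0, 1484.0)
```

Beating an equal gains you a modest 16 points; upsetting a much stronger player gains you far more. That single design choice is why the ranking self-corrects over time.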

I love algorithms — and so do you.

When properly applied to the right problems, they make work and life better: not only through the multi-billion-dollar economic impact of businesses built on organizing content, like search, but through improvements in logistics, flight routing, encryption, drug design and disease diagnostics, to name only a few. I would guess there will be future malpractice cases when a doctor doesn’t use machine learning for diagnosis and treatment planning, if it’s available to them.

Let’s first acknowledge the amazing things we can now accomplish because of algorithms.

Yay algos! Seriously.

And although they are not sentient things waiting for a day to overthrow us, algorithms have become very intimate parts of our lives by being embedded in the devices we own, wear and drive.

They run the apps we use for many of the decisions we make.

Hence the risk.

Agency atrophy

In my opinion, the risk of ubiquitous algorithms, and of A.I. for that matter, is not necessarily that machines become more intelligent but that people become less intelligent.

David Krakauer has interesting thoughts about complementary vs. competitive cognitive artifacts. Competitive artifacts are those that improve our ability to perform a cognitive task but, if taken away, leave us worse at the task than we were before.

Consider spatial orientation.

There is fMRI (functional magnetic resonance imaging) research showing that individuals using their native spatial-navigation abilities have increased activity in the hippocampus. Researchers also found the opposite: excessive reliance on GPS navigation could lead to atrophy in the hippocampus.

Dr. Veronique Bohbot, a researcher at the Douglas Mental Health University Institute, goes further, voicing concern about “hippocampus disuse,” as quoted in the book Wayfinding: The Science and Mystery of How Humans Navigate the World:

“The sedentary, habitual, and technology-dependent conditions of modern living today are changing how children and adults use their brains.”

To the point, Kartik asked me to consider the impact on our decision-making ability.

“They’re making so many choices for us, mostly in ways that allow us to be productive. The flip side is the extent to which we are fully in control of our decisions. It’s not quite what it used to be. The algorithms are nudging us in different ways.”

Kartik cites research that shows they have a significant impact on our purchase decisions and entertainment choices. “Over a third of our choices on Amazon are driven by algorithmic recommendations. On Netflix over 80% of what we view is driven by algorithmic recommendations.”

Don’t dismiss that. As innocuous as shopping or movies may be, it’s an illusion to think we are making these choices solely via free will. And the impact is much broader; their utility has raised the stakes.

Kartik tells me one tech giant used an algorithm that was discovered to carry gender bias. And here’s the rub: the engineers could not eliminate the bias despite trying multiple solutions. Yipes!

So much for a machine’s pure objectivity.

In a research report, the Partnership on A.I. opposed the use of AI algorithms by law enforcement in decisions about parole, bail, and probation. The report said algorithms dedicated to helping police in the jailing process are “potentially biased, opaque, and may not even work.”

Filter bubbles and priming

I think it’s obvious by now that an algorithm serving up content solely to maximize some click metric, rewarding our personal biases along the way, is not good for us.

There is a disconcerting story I came across in The New Yorker about the growing number of people who believe the earth is flat. One of those profiled is Darryle Marble, who for two years “drank in” conspiracy stories on YouTube as the algorithm served up one related video after another. The article notes:

“Marble found the light in his YouTube sidebar, he said. ‘I was already primed to receive the whole flat-earth idea because we had already come to the conclusion that we were being deceived about so many other things.’”

Growing pains

Look, this is all so new in our evolutionary journey and we are still learning, hopefully.

To use an extreme example, the transition from horse and buggy to cars was dangerous. People went from trotting down a street on four hooves to hurtling along at 30 miles an hour in cages of steel, on poor roads with no markings.

What soon followed: stop signs, lane dividers, speed limits, traffic lights and one of the biggest safety measures of all, driver licensing. In other words, learning how to properly use the machines.

The point being, regardless of how you feel about cars, emergent issues arise whenever a new technology is mixed with mass populations of humans with varying levels of skill and emotional balance.

Ignorance and fear

There seems to be a lot of angst about the future. Buzzwords like digital disruption and A.I. evoke in people a sense of helplessness because they are very big question marks.

Will machines take over?

Will I be “obsolete?”

This is a primal fear of the unknown. If something is foreign, we tend to fear it.

Taking back control

Shining a light into the dark places reveals the hat on the stand.

Understanding replaces fear with familiarity. Cultivating a habit of curiosity and learning can increase your sense of agency while decreasing your fear.

Let’s throw in decreasing your frustration level as well.

Do you have any idea why your Maps app took you the way it did?

Instead of constantly yelling at it, check your map settings, which you can change (in Google Maps, clicking and dragging a route forces the app to take a different way than the algorithm’s default). It also helps to have a general idea of the inputs it uses: what data is the program crunching when it recommends a route? Things like real-time traffic data from the devices of millions of drivers on the road when you are, tolls, highways, time through intersections (left turns usually take much longer), route complexity, and so on.
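Under the hood, routing apps generally treat the road network as a graph and search it for the cheapest path. This toy sketch is not Google's actual code; it uses the classic Dijkstra shortest-path algorithm, with a hypothetical road graph whose edge costs already fold together inputs like drive time and tolls, to show how those inputs shape the route you get.

```python
import heapq

def cheapest_route(graph, start, goal):
    """Dijkstra's shortest-path search over a road graph.

    graph maps each node to a list of (neighbor, cost) pairs, where
    cost is a single number combining inputs such as travel time,
    tolls, and turn penalties.
    """
    frontier = [(0, start, [start])]  # (cost so far, node, path)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(
                    frontier, (cost + edge_cost, neighbor, path + [neighbor])
                )
    return None  # no route exists

# Hypothetical road graph; each cost = minutes of driving plus
# a toll penalty, all baked into one number.
roads = {
    "home":       [("highway_on", 5), ("main_st", 2)],
    "highway_on": [("office", 10)],   # fast but tolled
    "main_st":    [("office", 12)],   # slower surface streets
}
print(cheapest_route(roads, "home", "office"))
# → (14, ['home', 'main_st', 'office'])
```

Change the toll penalty on the highway edge and the recommended route flips. That is exactly why knowing the inputs matters: the “best” route is only best with respect to the costs the algorithm was told to minimize.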

That said, when you default your decisions to others, including devices, you give away a bit of your agency each time.

I am in no way suggesting you shun algo-driven tech; quite the opposite. Where you have complicated problems, seek it out. Use smart devices and machines as tools, incorporate the automation where you can, and work with them.

Just learn what they are doing and how they are doing it.

When Google gives you an answer to your question, where did it come from? Search uses multiple algorithms, but at its core it’s information retrieval, and rank is no proxy for truth.

Sources matter.

You don’t need to know how to program a computer to get better at understanding what they’re doing. Just being more aware that an algorithm may be involved in the decisions you delegate puts you in a more empowered place.

To that point, this conversation might help.

My chat with Kartik is a verbal look inside the brains of all of your devices, at least the software that’s running them and how they are learning and influencing you.

You really don’t need a lot of tech know-how to enjoy it. In fact, the less you know technically the more it might behoove you to listen.

Listen to the entire interview here.

Kartik does a masterful job not only explaining how algorithms work but how advances in A.I. are impacting your life — and what we can do about it personally as well as collectively as a society. He also proposes an algorithmic bill of rights.

And if you’re curious about all the buzz around artificial intelligence, Kartik provides a unique take on its history and evolution, from narrow automation to autonomous learning, via the story of algorithms.

If you only have a few minutes you might want to listen here for a great retelling of how AlphaGo beat the world’s best *human* Go player. A bit eerie.

There’s so much more in this episode, including…

  • Human and algorithmic decision making
  • Input/output; data and data brokers
  • How algorithms work, and human bias
  • On “raising” A.I., comparisons in human nature
  • Black box problems, not knowing why a machine takes an action
  • The predictability-resilience paradox, advancing A.I. and risk mitigation
  • Kartik’s view of the future and how to prepare for it

Listen on iTunes

Listen on Google Podcasts

Listen on Stitcher


Larry Weeks

Ex-Googler, host Bounce Podcast | larryweeks.com/podcast, maker Eurekaa.io. Compelled to talk to interesting people, ask bad questions and record it.