Meme Engine

What I've been thinking about...

Argument of the Day:

soycrates:

No bad men should govern.
Every majority is made of bad men.
Therefore, no majority should govern.

Do you agree / disagree with this argument, and why?

What are the argument’s major flaws?

Check the notes if you want to see my slightly smarmy answer.

Another idea concerning the autonomous car problem: why not program them to have the same reaction most humans would have? For example, in the "two-bicyclists" situation, a human would arguably be more likely to swerve aside to avoid the bicyclist directly in front of him, if only because bodily instinct forces that reaction. If we program these same "instincts" into cars, then at the very least they won't react less ethically than humans would.

I think something like this might be the only solution… this or actually introducing randomness!

There’s still the trouble of deliberation time though.  I can think of lots of scenarios where we’d forgive a person for doing the “wrong” thing because they had to act in a split second.  There’s no such excuse with programmed behaviour - programmers have leisure to consider the best choices and consequences.

And I return to this: the only way around it is randomness, so that the cars are also deciding in a split second.  But… doesn’t it seem somehow wrong to just avoid deciding like this?  Shouldn’t we take a stand on what is really right?

Again, if people are curious, this discussion stems from this post.

It would probably be best not to go down that road - not to consider ethical decisions in programming at all.
Though we are talking about rare situations, I’m sure… I don’t think you can avoid talking about them.  Certainly, you could avoid having cars consider heroically crashing off of overpasses.  But it would be hard not to consider how passenger safety balances against the safety of those in nearby vehicles… I’m assuming self-driving cars will be networked to each other in some way; that always made the most sense to me.
But the ethical choices come as action OR inaction, so I really think they have to be addressed.  In practice, they’ll probably arise because the algorithms used cover most common-sense situations; ethics will come in when people look at how those algorithms act in some of the unusual situations I mentioned.
Original post here if anyone wants to see what I’m talking about.

Autonomous car ethics

This was a great segment on “Spark” this week.  Even if you assume the problems all have perfect solutions (i.e. the programmed cars always act the way we intend, and they can predict and understand their environments completely), there are still big ethical problems to confront in the case of unavoidable crashes.

Here are some examples… some of these might remind you of the trolley problem!

  • A child runs into the road in front of your car… should it swerve to avoid the child if it means pushing another car into oncoming traffic?
  • In an unavoidable collision, a car must choose between striking a motorcyclist wearing a helmet, and another motorcyclist who is bare-headed.  Which should it strike?
  • A heavy truck is bearing down on a busy intersection, unable to stop.  If a car on a nearby overpass notices it could stop the truck by crashing off the overpass, should it do so?

I think people are tempted to use Utilitarianism to answer these questions - that is, assign numerical scores to more or less favorable outcomes (involving the number of people hurt or killed, etc.), and then choose the action with the best score.
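As a toy sketch of that utilitarian scoring (the action names and scores below are entirely made up, just to show the mechanics of "pick the outcome with the best number"):

```python
# A toy sketch of utilitarian scoring: every candidate action gets a
# numeric harm score, and the car simply takes the minimum.
# The action names and scores here are hypothetical.

def choose_action(outcomes):
    """Return the action with the lowest predicted harm score."""
    return min(outcomes, key=outcomes.get)

# Hypothetical scores for the motorcyclist scenario:
outcomes = {
    "strike helmeted motorcyclist": 2,    # helmet reduces expected injury
    "strike bare-headed motorcyclist": 8,
}

print(choose_action(outcomes))  # prints "strike helmeted motorcyclist"
```

Even this toy version immediately produces the uncomfortable answer: the rider who took the precaution is the one who gets struck.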

However, this would lead, in the second scenario, to striking the helmeted motorcyclist, since he or she is least likely to be hurt.  But that effectively punishes the helmet-wearer and rewards the bare-headed one.  Is this okay?

In the third scenario, utilitarian views can pull unrelated parties into collisions if doing so reduces the total body count.  This may be okay, but it also punishes just being *near* dangerous people or programs.  One could argue both sides of this decision, I think.

Alternative: the only alternative to utilitarianism I can imagine is some sort of absolute rule-following (“Stay between the lines”), with exceptions explicitly written in (“unless swerving will prevent injury”), and so on.  But I feel like any rule we make will have some applications we’d want to exempt - this is why we have humans interpret vague laws, instead of just applying them absolutely.
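A minimal sketch of that rules-with-exceptions idea (the rule texts and the situation flag are hypothetical): rules are checked in priority order, exceptions first, and the first rule that applies wins.

```python
# Hypothetical ordered rule list: exceptions come before the base rule,
# and the first rule whose condition holds decides the action.

RULES = [
    ("swerve: swerving will prevent injury",
     lambda situation: situation.get("swerve_prevents_injury", False)),
    ("stay between the lines",          # the absolute base rule
     lambda situation: True),
]

def decide(situation):
    """Return the text of the first applicable rule."""
    for rule_text, applies in RULES:
        if applies(situation):
            return rule_text

print(decide({"swerve_prevents_injury": True}))  # the exception fires
print(decide({}))                                # prints "stay between the lines"
```

The trouble described above shows up immediately: every situation nobody anticipated falls through to the base rule, whether or not that is what we would have wanted.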

Besides Utilitarianism and rule-following, there could be learning programs, which learn from human judgement and situational similarity which moral actions are correct.  But this is hardly problem-free either.  How do we fix errors, for instance?

Why is this harder for programs?  All the ethical choices given above would also be ethical choices for human drivers in those situations, so why do they feel more problematic for software-driven cars?  I can think of a couple of reasons:

  • Heroism from a distance: In the third scenario, the car sacrifices itself and its passengers for the good of others.  We feel this might be alright for a driver in the car to decide… but maybe not for a programmer who is not the one to be sacrificed.  Just like in the trolley problem, we don’t want to push the man onto the tracks, but we’d applaud him if he jumped on himself.
  • Premeditation: I feel like we can forgive a lot of human ethical judgements because they had to be made in a fraction of a second.  A driver who causes an accident by staying between the lines can be forgiven for their erroneous rule-following, because they just didn’t have time to consider it.  But for self-driving cars, the decisions will presumably all be pre-made in the programming.  If somebody decided the car should hit the child rather than crash into other cars, the programmer made that decision with all the hours or days of deliberation they needed.  This makes it far more subject to judgement.

I can’t think of ready solutions to this problem!  Really though, this makes it a good topic to consider: it is real/practical, it is difficult, and it is an application of Ethical philosophy!

Anyone out there have thoughts to share?

mflikes:

Watch out for these sneaky data visualization hacks that misrepresent the numbers!

For those not looking closely, these graphs show the same data!  Thanks for the heads-up!

I tore apart my house this morning looking for my copy of Scott McCloud’s “Understanding Comics”.

I finally found it, where else, in a cardboard box full of comic books in the top of the bedroom closet.  I’m going to lend it out to a coworker today, but I’m having second thoughts.

This book is really special.  Engaging, Entertaining, and even Philosophical, without any of the layered abstraction that usually detracts from enjoyment of philosophy. It speaks about more than one of my favorite subjects in a language usually used only to tell superhero stories, and the result is incredible.

My particular copy has an inscription from the friend who gave it to me in 1993 (21 years ago!) - admittedly, they only knew I liked comics, and didn’t know what an incredible find this book was.

I will lend it, though.  The knowledge is more precious shared than collecting dust among my old Knights of Pendragon comics.

Can You Detect Intelligence by Appearance?

I came across this really interesting study via “The Skeptic’s Guide to the Universe” podcast.  It was an admittedly small study, but I really like the experimental setup; here’s what they did:

They photographed 40 men and 40 women with neutral expressions against white backgrounds.  They then gave each photographed person a series of intelligence tests to determine their IQ.

Armed with these actual IQs to go with the photos, they showed the photos to subjects and asked them to rate either the attractiveness or intelligence of the faces, and then analyzed the results.

They got some pretty interesting results too:

  • Perceived Intelligence and Attractiveness strongly correlate.  For a given photo, higher scores for attractiveness and higher scores for perceived intelligence go together.
  • Attractiveness and Actual Intelligence do not correlate.  The “smart” and the “dumb” were scattered all over the attractiveness scale.
  • If Attractiveness is controlled for, Perceived and Actual Intelligence of male photos correlate.  Say you take all the photos judged to be at some specified attractiveness level.  Among this group, it turns out that people could judge a little better than chance who was intelligent and who was not.
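That "controlled for" result is essentially a partial correlation: remove the attractiveness component from both the perceived and the actual scores, then correlate what remains.  Here's a sketch on randomly generated data (none of these numbers come from the study; the shared `signal` term is just a stand-in for whatever real cue the raters picked up):

```python
# Sketch of "controlling for attractiveness": regress both perceived and
# actual IQ on attractiveness, then correlate the residuals (a partial
# correlation).  All data here is randomly generated and purely illustrative.

import numpy as np

rng = np.random.default_rng(0)
n = 80
attractiveness = rng.normal(size=n)
# Hypothetical: perceived IQ tracks attractiveness plus a cue shared with actual IQ
signal = rng.normal(size=n)
actual_iq = 100 + 10 * signal + rng.normal(size=n)
perceived_iq = 100 + 5 * attractiveness + 4 * signal + rng.normal(size=n)

def residuals(y, x):
    """Remove the best-fit linear component of x from y."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

partial_r = np.corrcoef(
    residuals(perceived_iq, attractiveness),
    residuals(actual_iq, attractiveness),
)[0, 1]
print(round(partial_r, 2))  # positive: residual agreement remains
```

With this construction the raw attractiveness/actual-IQ relationship stays small, while the residuals still agree - the same pattern the study reports.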

One can’t draw too many conclusions from a smallish study conducted entirely within a university population, but still, this is interesting stuff!

For some reason I’ve always thought intelligence could be seen in a face via a certain muscular poise around the eyes and mouth - in a way that could be detectable in a still photo.

Ambiguity always pleases me.  I know the commas eliminate the ambiguity, but you can’t always *hear* the comma in speech.

It also reminds me of the joke from Mary Poppins:

I once knew a man with a wooden leg named Smith.

Really, what was the name of his other leg?

In 1984 your host was twelve years old and like George Orwell’s protagonist Winston Smith, he kept a diary, for the citizens of the future…

To me, this podcast was pretty amazing.  I was 6 in 1984, pretty close to the host’s age, so maybe that just makes me nostalgic about the same stuff.  Karate Kid.  “Where’s the Beef?”  Reagan.  Go-bots.  Terminator (the first one).

But really, this is an excellent podcast, and coming from me, that means something.  It’s 50ish minutes of something hard to define as either fiction or non-fiction, but why not hit play and see if it hooks you… especially if you remember what a Go-bot is.

Reblogging to look at later, and check if this is randomness or something interesting.  Er, or both.

(Source: p9sh)

Advertising

I read in “Who Owns the Future” today that the advertising we see on Google searches shouldn’t really be thought of as advertising.  Lanier (the author) says that much of traditional advertising has been celebratory, glorifying human activities.

Google search ads, by contrast, are more like a direct manipulation of the list of options put before us.  This might seem more plausible if you keep in mind that the vast majority of click-throughs from these searches come from the first page of results.

This portrayal seems a bit exaggerated to me, but I think maybe it’s right that these search ads are a different animal.

Having young kids can be like having poltergeists.  You find odd collections of objects in inexplicable arrangements.

Leftfield
"Melt"
Leftism

From someone who rarely posts music… here’s something I think is rare and beautiful.

antinegationism:

This is actually something that’s bothered me for a while. I come from a family of adept politicians, so to some extent I may have been a product of my environment, but growing up I was always *really* good at winning arguments. Unfortunately I learned pretty early on that the ease with which I won an argument was in no way indicative of the success of decisions made based on those arguments.

So I set aside my previous habit of arguing with the goal of winning and started arguing with the goal of getting to the truth, and I realized how much harder winning is then. If you have one debate opponent seeking truth, and the other seeking to win, the truth is way more often than not going to lose out. Which in retrospect is obvious. Arguing for truth immediately forces you to give up useful fallacies and rhetoric.

That totally resonates.  Not as in the personal progression, but the conclusion definitely.  I’ve always thought of myself as smart in terms of both knowledge and insight… but it has never, NEVER manifested itself as an ability to argue my points.  I routinely lose arguments in which I know I am right.

I think I learned the lesson not to trust charisma from legal dramas on TV.  Some of them were clever enough to sequentially convince the viewers to side with the plaintiff, then the defendant, without actually revealing any new facts - only presenting a new charismatic lawyer’s arguments.

It’s hard to know what to do about that, in politics or in life.  I’m almost coming around to using a broad assessment of the arguer (including their history, their stake in the argument…) to evaluate their arguments.  That’s officially a logical fallacy, I know, but I think in real life it’s the way I lean.  When ANY point can be effectively argued by someone with charisma, I feel like I have to step back from the “front” of the argument to have a chance at seeing the truth.

philosophy bites: Michael Ignatieff on Political Theory and Political Practice

Hmmm, look what I stumbled across while downloading this week’s podcasts.  Canadians: yes, it is that Michael Ignatieff.  It will be interesting to hear someone I “know” from his public life speak on this academic program.