Meme Engine

What I've been thinking about...

A joker is a little fool who is different from everyone else. He’s not a club, diamond, heart, or spade. He’s not an eight or a nine, a king or a jack. He is an outsider. He is placed in the same pack as the other cards, but he doesn’t belong there. Therefore, he can be removed without anybody missing him.

- Jostein Gaarder (via menna-t-o-n)

Gaarder’s joker is also a questioner.  He never stops wondering why everyone has a rank and a suit, and why it must be so.

It’s a bit vain, but I hope I’m a little like the joker.

An organic way to make a decision is to drop it into the back of your mind, like an imperfection into a winter cloud bank. At first the crystallization of ideas around it is too small to detect, but eventually the imperfection becomes the seed of a symmetrical snowflake, visible to the naked eye. You can return to the idea then and look at it to see what you’ve decided.

- memeengine (me)

When decisions call for more than rational problem-solving, I’ve always felt like crystallization was a great metaphor for how to let them happen.

Sort of inspired by this quote from “Watership Down”.

As an example, I’ll mention what I’ll call the “radio theory” of brains. Imagine that you are a Kalahari Bushman and that you stumble upon a transistor radio in the sand. You might pick it up, twiddle the knobs, and suddenly, to your surprise, hear voices streaming out of this strange little box. If you’re curious and scientifically minded, you might try to understand what is going on. You might pry off the back cover to discover a little nest of wires. Now let’s say you begin a careful, scientific study of what causes the voices. You notice that each time you pull out the green wire, the voices stop. When you put the wire back on its contact, the voices begin again. The same goes for the red wire. Yanking out the black wire causes the voices to get garbled, and removing the yellow wire reduces the volume to a whisper. You step carefully through all the combinations, and you come to a clear conclusion: the voices depend entirely on the integrity of the circuitry. Change the circuitry and you damage the voices.

Proud of your new discoveries, you devote your life to developing a science of the way in which certain configurations of wires create the existence of magical voices. At some point, a young person asks you how some simple loops of electrical signals can engender music and conversations, and you admit that you don’t know — but you insist that your science is about to crack that problem at any moment.

Your conclusions are limited by the fact that you know absolutely nothing about radio waves and, more generally, electromagnetic radiation. The fact that there are structures in distant cities called radio towers — which send signals by perturbing invisible waves that travel at the speed of light — is so foreign to you that you could not even dream it up. You can’t taste radio waves, you can’t see them, you can’t smell them, and you don’t yet have any pressing reason to be creative enough to fantasize about them. And if you did dream of invisible radio waves that carry voices, who could you convince of your hypothesis? You have no technology to demonstrate the existence of the waves, and everyone justifiably points out that the onus is on you to convince them.

So you would become a radio materialist. You would conclude that somehow the right configuration of wires engenders classical music and intelligent conversation. You would not realize that you’re missing an enormous piece of the puzzle.

I’m not asserting that the brain is like a radio — that is, that we’re receptacles picking up signals from elsewhere, and that our neural circuitry needs to be in place to do so — but I am pointing out that it could be true. There is nothing in our current science that rules this out.

- David Eagleman – Incognito (via frrrst)

Why Physicists Are Saying Consciousness Is A State Of Matter, Like a Solid, A Liquid Or A Gas

This article has lots of interesting points crammed into only a few paragraphs.  But something about it rubs me the wrong way.

It begins by searching for a theory of consciousness that can satisfy two criteria:

  1. Consciousness must be able to store and retrieve information.
  2. "A consciousness" shouldn’t be divisible, in the sense that information is lost if the parts are considered separately.

So a hard association is made between consciousness and information.  I would like to see this argued for as an interesting theory, rather than taken as a premise.  I just don’t think it’s obvious that all conscious things have memory.

Of course I believe that solipsism is the correct philosophy, but that’s only one man’s opinion.

- Melvin Fitting (via nomadic-curls)

(Source: quotedojo)

This was their way of reaching a conclusion… The fact flickered and oscillated down among them as a penny wavers down through deep water, moving one way and the other, shifting, vanishing, reappearing, but always sinking towards the firm bottom.

- Richard Adams, from “Watership Down”, with a beautiful metaphor for how consensus may be gently reached (never mind that it’s a consensus among rabbits).

Another abuse of “literally” on the radio today:

"This one is literally a no-brainer!"

*sigh*

frrrst:

Veritasium – Forest of Friendship, Baggage Carousel of Jerks

We often imagine that unregulated competition produces optimal outcomes, behaviours, efficiencies, but trees and baggage carousels are two examples where the stable solution is worse for everyone than another strategy. This I find surprising and interesting - that evolution doesn’t come to the best solution, it comes to the most stable one.

He doesn’t use the words, but this excellent video seems like another great example of a multi-player prisoner’s dilemma (played by either trees or travellers collecting luggage).

Also, some hints that the non-cooperative options may be the more “natural” or “stable” ones.  Really interesting how stability can trump advantage in these examples!
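
To make that “stable beats optimal” point concrete, here’s a toy payoff table for two neighbouring trees (the numbers are pure invention; only their ordering matters):

```python
# A toy 2-player "tree height" game.  Strategies: stay short (cooperate)
# or grow tall (defect).  Payoffs are invented: energy left after growth costs.
payoff = {
    ("short", "short"): 10,  # both get sun, neither wastes energy on a tall trunk
    ("short", "tall"):   2,  # I'm shaded out by my tall neighbour
    ("tall",  "short"): 12,  # I hog the sun at my neighbour's expense
    ("tall",  "tall"):   6,  # both reach the sun, both pay for tall trunks
}

for theirs in ("short", "tall"):
    better = max(("short", "tall"), key=lambda mine: payoff[(mine, theirs)])
    print(f"if my neighbour stays {theirs}, my best reply is {better}")

# "tall" wins either way (12 > 10 and 6 > 2), so (tall, tall) is the stable
# outcome -- even though (short, short) leaves *both* trees better off (10 > 6).
```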

Air Conditioning is an example of Multi-player Prisoner’s Dilemma

Here’s a weird fact: Air Conditioners are net heat creators.

This is related to a semi-famous thought experiment about trying to cool an impermeable room using an open refrigerator (you can’t!).  If you view them as black boxes with unknown stuff going on inside, then both ‘fridges and air conditioners don’t actually remove heat; they just move it from one place to another.  A fridge moves the heat from its interior to the exterior, and a window AC unit moves heat from inside the house to the outside.

In fact, this movement of heat requires work - work that itself generates heat - so AC units move the heat around, and also create just a bit of excess heat in the bargain.  If you draw a big enough circle around a ‘fridge or an AC unit, those appliances are net heat creators, and not the reverse.
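
If numbers help, here’s a back-of-the-envelope sketch of that energy balance (the coefficient of performance of 3 is just an assumed, typical value):

```python
# Energy balance for a fridge or AC unit, seen from outside the "big circle":
# heat dumped outside = heat removed from inside + work done to move it.
cop = 3.0                          # COP: heat moved per unit of work (assumed)
heat_removed_kw = 3.0              # heat pumped out of the room
work_kw = heat_removed_kw / cop    # electricity the compressor consumes
heat_dumped_kw = heat_removed_kw + work_kw

print(f"{heat_removed_kw} kW removed from inside,")
print(f"{heat_dumped_kw} kW dumped outside - a net creation of {work_kw:.1f} kW of heat.")
```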

As a result, deciding whether to cool yourself using AC in an overheated city is like a huge, multiplayer game of prisoner’s dilemma!
(if you need to know what prisoner’s dilemma is, there’s an audio post explaining it here.)

That is, if each person switching on an AC unit decreases their own heat by 5 degrees, but increases the heat of all fellow citizens by a fraction of a degree, then it’s ultimately worse to have an entire city running AC than an entire city not running it (because the heat from everyone else’s AC units adds more than 5 degrees to each individual).  Despite this, it’s in the rational best interest of each citizen to run their own AC: whatever the neighbors decide to do, switching on your own unit will certainly make you 5 degrees cooler.
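
Here’s a little sketch of that arithmetic, using the 5-degree figure from above (the city size and the per-unit spillover are numbers I invented for illustration):

```python
# The city-wide AC dilemma: running your unit cools you by 5 degrees,
# but every running unit warms the whole city by a tiny amount.
CITY = 10_000          # number of citizens (invented)
SELF_COOLING = 5.0     # degrees you gain by running your own unit
SPILLOVER = 0.001      # degrees each running unit adds for everyone (invented)

def my_temperature_change(i_run_ac: bool, others_running: int) -> float:
    """My temperature change, relative to a city with no AC at all."""
    total_running = others_running + (1 if i_run_ac else 0)
    change = total_running * SPILLOVER      # everyone's waste heat, mine included
    if i_run_ac:
        change -= SELF_COOLING              # my own unit cools me directly
    return change

print(f"nobody runs AC : {my_temperature_change(False, 0):+.1f} degrees")
print(f"everyone does  : {my_temperature_change(True, CITY - 1):+.1f} degrees")  # hotter!

# Yet defecting is individually rational no matter what the others do:
for others in (0, CITY - 1):
    gain = my_temperature_change(False, others) - my_temperature_change(True, others)
    print(f"with {others} others running, my own AC still buys me {gain:.3f} degrees")
```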

Air conditioners are a pretty cool, concrete example of a real, large-scale prisoner’s dilemma… but not the only one, I think.  Deciding whether to drive patiently or like a jerk is sort of the same thing.  So is reaping the benefits of energy that pollutes the environment.  So is a cell deciding whether to die when it’s supposed to, for the good of the organism, or to become cancerous and survive.  Important stuff!

And I suppose the implication is that the smarter choice is the one made from the group perspective.  Can individuals be wise enough to realize that every other individual is facing the same choice, and conclude that they should choose as though everyone will make the same choice they do?

In short:

Prisoner’s Dilemma ==> Kant’s Categorical Imperative

!!!!

Any functioning, authentic economy has to by definition be sustained more by voluntary participation than by enforcement. In the physical world it’s not all that hard to break into someone’s house or car, or to shoplift, and there aren’t all that many police. The police have a crucial role, but the main reason people don’t go around stealing in the physical world is that they want to live in a world where stealing isn’t commonplace.

- Jaron Lanier, with a reminder that social contracts *have* to be voluntary.

Argument of the Day:

soycrates:

No bad men should govern.
Every majority is made of bad men.
Therefore, no majority should govern.

Do you agree / disagree with this argument, and why?

What are the argument’s major flaws?

Check the notes if you want to see my slightly smarmy answer.

Another idea concerning the autonomous car problem: Why not program them to have the same reaction that most humans would have? For example, in the "two-bicyclists" situation, a human would be arguably more likely to sway to the side to avoid the bicyclist right in front of him, if only because that's a reaction his bodily instinct forced him to have. If we implement these same "instincts" into cars by programming, at the very least, they're not going to react less ethically than humans would.

I think something like this might be the only solution… this or actually introducing randomness!

There’s still the trouble of deliberation time, though.  I can think of lots of scenarios where we’d forgive a person for doing the “wrong” thing because they had to act in a split second.  There’s no such excuse with programmed behaviour - programmers have the leisure to consider the best choices and consequences.

And I return to randomness as the only way around this, so that the cars are also deciding in a split second.  But… doesn’t it seem somehow wrong to just avoid deciding like this?  Shouldn’t we take a stand on what is really right?

Again, if people are curious, this discussion stems from this post.

Probably would be best to not go down that road, to not consider ethical decisions in programming.

Though we are talking about rare situations, I’m sure… I don’t think you can avoid talking about them.  Certainly, you could avoid having cars consider heroically crashing off of overpasses.  But it would be hard not to consider some way of balancing passenger safety against the safety of passengers in nearby vehicles… I guess I’m assuming self-driving cars will be networked to each other in some way.  That always made the most sense to me.

But the ethical choices come as action OR inaction, so I really think they have to be addressed.  In practice, they’ll probably arise because the algorithms used will cover most common-sense situations; ethics will come in when people look at how those algorithms would act in some of the unusual situations I mentioned.

Original post here if anyone wants to see what I’m talking about.

Autonomous car ethics

This was a great segment on “Spark” this week.  Even if you assume the problems all have perfect solutions (i.e. the programmed cars always act the way we intend them to act, and they can predict and understand their environments completely), there are still big ethical problems to confront in the case of unavoidable crashes.

Here are some examples… some of these might remind you of the trolley problem!

  • A child runs into the road in front of your car… should it swerve to avoid the child if it means pushing another car into oncoming traffic?
  • In an unavoidable collision, a car must choose between striking a motorcyclist wearing a helmet and another motorcyclist who is bare-headed.  Which should it strike?
  • A heavy truck is bearing down on a busy intersection, unable to stop.  If a car on a nearby overpass notices that it could stop the truck by crashing off the overpass, should it do so?

I think people are tempted to use Utilitarianism to answer these questions - that is, assign numerical scores to more or less favorable outcomes (involving the number of people hurt or killed, etc.), and then choose the option with the best score.
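
A sketch of how that scoring might look for the second scenario (the expected-harm numbers are pure invention):

```python
# A toy utilitarian chooser: score each unavoidable outcome by expected
# harm, then pick the minimum.  Harm numbers are invented for illustration.
options = {
    "strike the helmeted motorcyclist":    0.3,  # helmet lowers expected harm
    "strike the bare-headed motorcyclist": 0.9,
}
choice = min(options, key=options.get)
print(choice)  # -> "strike the helmeted motorcyclist"
```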

However, this would lead, in the second scenario, to striking the helmeted motorcyclist, since he or she is least likely to be hurt.  But that punishes the helmeted rider for taking a precaution, and is something of a reward for the bare-headed one.  Is this okay?

In the third scenario, utilitarian views can pull unrelated parties into collisions if doing so reduces the total body-count.  This may be okay, but it also punishes just being *near* dangerous people or programs.  One could argue both sides of this decision, I think.

Alternative: The only alternative to utilitarianism I can imagine is some sort of absolute rule-following (“Stay between the lines”), with exceptions explicitly written in (“unless swerving will prevent injury”), and so on.  But I feel like any rule we make will have some cases we’d want to except - this is why we have humans interpret vague laws, instead of just following the letter of the law absolutely.
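
Roughly, I imagine that rule-plus-exceptions approach looking something like this (the rules and situation flags are hypothetical stand-ins):

```python
# Sketch of absolute rule-following with explicitly written-in exceptions.
def decide(situation: dict) -> str:
    # Rule: stay between the lines...
    if not situation.get("swerving_prevents_injury"):
        return "stay in lane"
    # ...unless swerving will prevent injury...
    if situation.get("swerve_endangers_others"):
        # ...unless swerving endangers someone else... and so on.  Every
        # patch invites another exception, which is the problem.
        return "stay in lane"
    return "swerve"

print(decide({"swerving_prevents_injury": True, "swerve_endangers_others": False}))
# -> "swerve"
```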

Besides utilitarianism and rule-following, there could be learning programs, which learn from human judgement and situational similarity which moral actions are correct.  But this is hardly problem-free either.  How do we fix errors, for instance?

Why is this harder for programs?  All the ethical choices given above would also be ethical choices for human drivers in those situations, so why do they feel more problematic for software-driven cars?  I can think of a couple of reasons:

  • Heroism from a distance: In the third scenario, the car sacrifices itself and its passengers for the good of others.  We feel this might be alright for a driver in the car to decide… but maybe not for a programmer who is not the one being sacrificed.  Just like in the trolley problem: we don’t want to push the man onto the tracks, but we’d applaud him if he jumped himself.
  • Premeditation: I feel like we can forgive a lot of human ethical judgements because they had to be made in a fraction of a second.  A driver who causes an accident by staying between the lines can be forgiven for the erroneous rule-following, because they just didn’t have time to reconsider it.  But for self-driving cars, the decisions will presumably all be pre-made in the programming.  If somebody decided to hit the child to avoid crashing into other cars, they [the programmer] decided that with all the hours or days of deliberation they needed.  This makes it much more subject to judgement.

I can’t think of ready solutions to this problem!  Really though, this makes it a good topic to consider: it is real/practical, it is difficult, and it is an application of Ethical philosophy!

Anyone out there have thoughts to share?

mflikes:

Watch out for these sneaky data visualization hacks that misrepresent the numbers!

Look closely - these graphs actually show the same data!  Thanks for the heads-up!
