Overconfidence

A military AI outperforms humans at correctly lowering its confidence when judgments must be made on limited information:
They couldn’t explain why they were overconfident; they just were. Overconfidence is human and a particular trait among highly functioning expert humans, one that machines don’t necessarily share.
It's worth remembering that, especially if you happen to be a high-functioning expert human. At least some of you are.

14 comments:

David Foster said...

Machines may not share the tendency toward overconfidence, but their human keepers can and do... as exhibited by the treatment of mathematical modeling results involving Climate and Coronavirus.

Grim said...

True. I'm not sure if this really solves the problem, since the humans are the ones who are going to make the final decision. Still, it's at least a useful reminder to us that we should beware the tendency.

E Hines said...

1) Whose definition of overconfidence? The programmers'?

2) Overconfidence is how we make long odds come home for us instead of simply surrendering meekly to them.

3) Overconfidence would seem to include a confidence level in our ability to recognize overconfidence and when it's counterproductive.

I've outthought and outfought too many computers, especially in the USAF, to have any doubts about my...self confidence.

Eric Hines

Grim said...

I have noticed that you have a high degree of self-confidence, Mr. Hines. I imagine it is a function of expertise.

I don't know what definition of overconfidence they're using, but for poker I can define it as placing greater confidence in a victorious outcome than the math warrants. If you're playing other humans, that often pays out, because they are likely to read your confidence as justified (since you know your hand and they only know theirs).

If you play against a computer, though, you will lose if you adopt this strategy. Your three-of-a-kind is a pretty good hand, and if you're playing four other people, three of a kind will win most of the time. But there remain a whole bunch of better hands, so if you bet overconfidently, then over enough iterations you'll eventually lose everything to the computer, which will follow the odds perfectly.

Against a computer, you can win if and only if you also play the odds -- or if you get lucky and then have the sense to stop playing. Since we aren't as good at calculating the odds as a computer, an AI should be the better poker player, and it can't be bluffed besides.

Of course, Han Solo-style, you can always say 'Never tell me the odds!' and throw big chips on the table. You might win, and if you do and have the sense to stop playing, you can come out ahead even against a computer. But, well, the odds are against it.

MikeD said...

An interesting fact I noticed earlier this year came while I was listening to "Major Tom" by Peter Schilling.

Standing there alone
The ship is waiting
All systems are go
"Are you sure?"
Control is not convinced
But the computer
Has the evidence
No need to abort
The countdown starts


And everything goes pear-shaped from there in the song. And yet our real-world experience with catastrophic failures in spaceflight is the exact opposite: instrumentation is overridden or ignored in an effort to "make a launch window" or "proceed with the mission." Apollo 13 was neutral in this regard, as neither man nor machine knew there was an issue until it literally exploded. But Columbia and Challenger both had instrumentation warnings that were ignored or discounted (or, in Columbia's case, noted but judged beyond remedy). Isn't it interesting that in the artist's mind, it was far more likely that humans would suspect a problem while the computer falsely told them everything was fine?

And I think that same "prejudice" is on display in your piece, Grim. We assume the machine would be more likely to misjudge a situation on limited information, but in fact we are more likely to. That said, it's also a credit to the human programmers that they built in that lack of overconfidence in the face of uncertainty. The computer didn't "come up with it" on its own.

Grim said...

Heh. I used the poker example because I regularly lose to computers, and almost never to people. I even won a few base poker tournaments in Iraq. But when I play a machine it turns out that my three aces can’t beat a small straight, or a flush, or a full house, or four of anything. Those hands are hard to get yourself, but if you’re simulating four other hands it’s not that unlikely — and the fact that nobody has had better than two pair for the whole game is irrelevant, though the human mind illogically prices it in if it’s not carefully trained against it.
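The odds claim above can be sanity-checked with a quick simulation. This is only a rough sketch under assumed rules -- plain five-card hands dealt from a single deck, no draws, no betting, and an arbitrarily chosen pair of kickers for the three aces:

```python
# Monte Carlo check: how often do three aces hold up against four
# simulated five-card hands dealt from the rest of the deck?
import random
from collections import Counter

def hand_rank(hand):
    """Rank a 5-card hand as a tuple; bigger tuples beat smaller ones."""
    ranks = sorted((r for r, _ in hand), reverse=True)
    counts = Counter(ranks)
    # Order ranks by count, then by rank, so trips/pairs compare first.
    groups = sorted(counts.items(), key=lambda rc: (rc[1], rc[0]), reverse=True)
    ordered = [r for r, _ in groups]
    shape = sorted(counts.values(), reverse=True)
    flush = len({s for _, s in hand}) == 1
    straight = len(counts) == 5 and (ranks[0] - ranks[4] == 4
                                     or ranks == [14, 5, 4, 3, 2])
    if straight and ranks == [14, 5, 4, 3, 2]:
        ordered = [5, 4, 3, 2, 1]           # the "wheel": ace plays low
    if straight and flush:    return (8, ordered)
    if shape == [4, 1]:       return (7, ordered)
    if shape == [3, 2]:       return (6, ordered)
    if flush:                 return (5, ordered)
    if straight:              return (4, ordered)
    if shape == [3, 1, 1]:    return (3, ordered)
    if shape == [2, 2, 1]:    return (2, ordered)
    if shape == [2, 1, 1, 1]: return (1, ordered)
    return (0, ordered)

def trips_win_rate(trials=5000, seed=1):
    """Fraction of deals where three aces beat four random hands."""
    rng = random.Random(seed)
    hero = [(14, 's'), (14, 'h'), (14, 'd'), (9, 'c'), (5, 's')]
    deck = [(r, s) for r in range(2, 15) for s in 'shdc' if (r, s) not in hero]
    hero_rank = hand_rank(hero)
    wins = 0
    for _ in range(trials):
        dealt = rng.sample(deck, 20)        # four opponents, five cards each
        if all(hand_rank(dealt[i:i + 5]) < hero_rank for i in range(0, 20, 5)):
            wins += 1
    return wins / trials

if __name__ == '__main__':
    print(f"three aces beat all four hands about {trips_win_rate():.0%} of the time")
```

The point survives the check: trip aces win the large majority of deals against four random hands, but a straight, flush, full house, or four of a kind turns up at one of the simulated seats a few percent of the time -- and over enough iterations of big, overconfident bets, those few percent are where the stack goes.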

Assistant Village Idiot said...

Overconfidence does help us bring home long odds. Until it doesn't. Winston Churchill's hypomania was exactly what was needed for England in 1941-43. ("Gentlemen, I find it all rather inspiring.") But it usually gets people killed. Many entrepreneurs would not have succeeded without overconfidence. But most of them fail anyway, taking others down with them. Tolkien's commentary on ofermod in "The Battle of Maldon" captures the balance well, I think.

My experience is that intelligent people often have the fault of jumping to conclusions. It works 50% of the time, is neutral 30% of the time, and is inaccurate 20% of the time, so on balance, they get rewarded for continuing. I have liked having such bosses, but it is not an unalloyed gain.

Texan99 said...

What I look for in an executive is not so much a subjective conviction of being right, as a willingness to make a decision despite remaining less than 100% sure, unless there's a reasonable way to hedge or delay. He probably looks sure, so maybe he really feels sure or maybe he knows how to keep a straight face. His business. But there must be no dithering simply because he misses the comfort of actually feeling sure. Some decisions can't be delayed until our information is perfect. The patient can die on the operating table while you hope for a risk-free strategy to appear. So go ahead and doubt, but CHOOSE.

E Hines said...

I imagine it is a function of expertise.

Or at least a function of self-perceived expertise.

Regarding poker, that's very much less a matter of gambling or of odds playing and very much a matter of skill--of shaping the table over the course of the game's hands, of reading people, and of money management in a closed environment. Strictly speaking, the odds associated with a particular poker hand are almost irrelevant. I've probably lost that edge, since it's been a long time since I bought my college books with other players' poker money.

As to beating the computer, that's not a matter of beating the computer alone. I'm also playing against the computer's programmers, and that'll be the case for everyone at least until programs progress to the point of programming themselves over enough generations to sufficiently attenuate the original humans' involvement. Many's the time during intercepts that I've instructed Bonnie or Clyde to refigure its offered solution. Many's the time, too, when I've read a three-dimensional dogfight that's displayed as a single, planar phosphor blob on a radar display to the benefit of friendly fighters and detriment of the bad guys also in the blob. Statistically, that should not have been possible at the frequency a few of my fellows and I achieved it. That skill was due to the continued superiority of our wetware computers over the silicon varieties.

T99 is right, too: a lot of beating bad odds routinely (not 100% of the time; the Universe gets a vote, too) is simply being willing to act on incomplete information--and not being married to that action, but willing to alter the deal as more data come in, along with being able and willing to evaluate the legitimacy of those data in real time, which latter also demands a high level of self confidence or perhaps a measure of overconfidence.

And sometimes that amateur Holmes is right: when you've eliminated the outright impossible, the improbable, however unlikely, is the thing.

Eric Hines

E Hines said...

On the Defense One piece, a question comes to mind.

In the second phase, were the computer and the humans looking at a static snapshot of available data, or were they looking at a sequence of data flow? The flow of events is much more important than any snapshot, and the ability to dead reckon is a well-established and strong skill for properly trained humans. It would be, too, for a properly trained computer, but that would depend also on those responsible for doing the machine learning.

So far, we're still playing against the human programmers as much as we are the computer itself.

Eric Hines

David Foster said...

"What I look for in an executive is not so much a subjective conviction of being right, as a willingness to make a decision despite remaining less than 100% sure, unless there's a reasonable way to hedge or delay"...years ago, in Investors Business Daily, the then-CEO of some company...it may have been John Deere, but I'm not sure...had some thoughts on decision-making that I like.

He said the key is that you have to wander in the thicket of ambiguity for a while, but then come out the other side.

Both parts are important: understand the complexity of the problem, but don't let it paralyze you.

I would add: You've got to modify your decision-making process according to the situation. In some cases...airplane stalls close to the ground, building is on fire...you need an instinctive fast reaction, but in other cases, some time can be taken.

douglas said...

"My experience is that intelligent people often have the fault of jumping to conclusions. It works 50% of the time, is neutral 30% of the time, and is inaccurate 20% of the time, so on balance, they get rewarded for continuing. I have liked having such bosses, but it is not an unalloyed gain."
That's right, I think. Intelligent people often are just better at seeing the patterns and deducing from incomplete info faster than others, but of course sometimes you're too fast, buoyed by the confidence built up from being right so often.

Put another way- if you aren't making mistakes, are you really intelligent? If AI never gambles, how can it really learn?

douglas said...

"Some decisions can't be delayed until our information is perfect. The patient can die on the operating table while you hope for a risk-free strategy to appear. So go ahead and doubt, but CHOOSE."

Wise words for our times, Tex. We need leaders that understand this. The experts can play it safe, and wait for the data- that's their job- but leaders have to make decisions even with incomplete info, lest the patient die on the table, as you say.

Grim said...

“If AI never gambles, how can it really learn?”

Gödel thought that machines could never think, because they are bound to mathematics and math is necessarily incomplete; thus, intuition is the human advantage.

https://www.firstthings.com/article/2010/08/the-god-of-the-mathematicians