🤔💭Justin’s Comments on Yes Or No Philosophy, Part 1👨🏻‍💻📝

(These are comments on the first 30 minutes of the longest/“main” video in the Yes/No educational product. This is a selective summary/discussion and will omit many points and details; you’ll have to buy the whole product to get those!)

Elliot describes the standard view, which is that ideas have amounts of goodness. These amounts can be described numerically or with words. Favorable args or evidence increase support, and crits reduce it. But no one knows how to measure an idea’s goodness.

Elliot says people use the idea that criticism merely reduces an idea’s goodness/support (rather than refuting it) as a way to ignore criticism. That’s bad!

Elliot mentions that there are various words people use for idea goodness and specifically mentions authority, which is controversial. Some people reject authority and try to think for themselves, but then their method is to look at support!

J’s Comment: a good example of how people can fall into an “intellectual trap” without the right epistemology. People can rightly reject authority but then switch to a method which makes the same sort of epistemological mistake. They might still improve their ideas and understanding, but their efforts could be more successful if they had more philosophical perspective on the issue.

On the issue of words for goodness, some that I would not have recognized as “goodness” terms before watching the video were “educated guess” and “myth.”

Elliot discusses how the support approach leads to people reaching different, irreconcilable conclusions because they assign things different “weights.” The weights aren’t really the process people use in their minds to determine the truth of the matter; the weights are an argument technique.
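
To make the weighting problem concrete, here’s a minimal sketch (my own illustration, not from the video; the factor names, scores, weights, and threshold are all arbitrary assumptions) of how the same inputs plus different personal weights produce opposite conclusions:

```python
# Hypothetical example: identical factor scores, two different sets of weights.
# Nothing in the method says whose weights (or whose threshold) are the right ones.
factors = {"evidence": 7, "authority": 2}          # made-up "amounts of support"

alice_weights = {"evidence": 9, "authority": 1}    # Alice mostly weighs evidence
bob_weights = {"evidence": 2, "authority": 8}      # Bob mostly weighs authority

def support_score(weights):
    # Weighted sum of the factor scores.
    return sum(weights[name] * factors[name] for name in factors)

THRESHOLD = 50  # another arbitrary choice

for person, weights in [("Alice", alice_weights), ("Bob", bob_weights)]:
    score = support_score(weights)
    print(person, score, "accept" if score >= THRESHOLD else "reject")
# prints:
# Alice 65 accept
# Bob 30 reject
```

The disagreement doesn’t come from any fact about the idea; it’s baked into the weights, and the method offers no way to say which weighting is correct.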

J’s Lengthy Comment: in US law there is frequent use of “balancing tests.” The idea is you consider a list of factors and “weigh” them somehow to come to a conclusion.

So for instance, when considering what procedures are required to deprive someone of life, liberty, or property, a court will supposedly weigh:

(1) The importance of the private interest affected.

(2) The risk of erroneous deprivation through the procedures used, and the probable value of any additional or substitute procedural safeguards.

(3) The importance of the state interest involved and the burdens which any additional or substitute procedural safeguards would impose on the state.

Justice Scalia once said of a balancing test:

This process is ordinarily called “balancing,” but the scale analogy is not really appropriate, since the interests on both sides are incommensurate. It is more like judging whether a particular line is longer than a particular rock is heavy.

And I think that’s a very good way to put it. In figuring out (for example) what procedural due process is required in some circumstance, you can’t take a bunch of criteria and “weigh” their relative importance in order to reach a conclusion. How many super important private interests equal a moderately important state interest? There’s no answer.

Elliot says that one reason people like talking in terms of numbers is that even a very high number for their “certainty” level gives them a built-in excuse: if they say 99% and they’re wrong, well, they never said 100%. Basically, people don’t like dealing with fallibility, unlikely stuff, etc.

J’s Comment: People might say it’s like 99.999999999999% certain the sun will rise tomorrow. They think talking about the sun rising is pretty safe, but wanna cover their bases in case a giant asteroid hits us and knocks us out of orbit or something wacky like that. But really what’s going on is we have an explanatory model of reality which says that, under certain conditions, events we call the sun rising will happen. And as long as our explanatory model is true and those conditions hold, then the sun will rise, 100%. And when those conditions don’t hold anymore, or our theory turns out to deviate from reality in some relevant respect, then the sun definitely won’t rise.

And also as a side note, I bet there’s modeling for things like the statistical chance of SURPRISE SNEAKY ASTEROID KNOCKING US OUT OF ORBIT, and it has actual numbers, not arbitrary tiny percentage guesses.

Elliot says people think support works cuz they think they do it themselves and attribute lots of successful progress to it. But they’re wrong about how their thinking works.

Elliot talks about the relationship between authority and support. Basically, prestigious people believing an idea adds to its support. Elliot makes the good point that if you aren’t judging the idea itself, you’re left with authority (fame/prestige/academic degrees of speaker, popularity of idea).

J’s Comment: One thing I bring up a lot when talking about prestigious people is … they disagree! You can find people with fancy Harvard and Yale degrees who think all sorts of stuff. So what do you do with that situation? Do you go by number of people? What if the more prestigious people (who are numerically fewer) think one thing on some issue, and a larger number of less prestigious people think something else? Do the more prestigious people count more? How much more?

Seems like a big, impossible mess to try and sort that out, just to avoid thinking about issues directly!!!

Here’s an example: CNN ran a whole big hit piece on Sebastian Gorka (who just left the White House) basically saying he’s not considered prestigious enough by other experts in the field.

That’s what authority-based approaches lead to…fighting over credentials instead of ideas.

Elliot says we should reject the whole support model and use yes or no/boolean judgments instead. Support doesn’t work and can’t solve the problem of how to believe good ideas and reject bad ideas.

Under this new approach, we can believe good ideas (“yes” ideas) and reject bad ones (“no” ideas). But we can’t directly compare two ideas we currently think are good. We have to come up with criticisms that will allow us to reject one of the ideas.

J’s Comment: If people go by authorities, they still have to pick which authorities to go by. They are still making a judgment and are still responsible. But it seems much easier for people not to feel responsible when they rely on other people’s thinking. To explicitly and consciously take responsibility for one’s ideas is a big deal and a hard step for many people. So I think this would be an objection many people would have to moving away from support to a YES/NO approach.

Elliot points out that when you decide between a “good” idea and a “great” idea, you’re choosing, you’re picking a side, you’re saying yes to one and no to the other. So just admit that!

Ideas are “yes” by default, and “no” if you refute them. So all ideas can be categorized this way.
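
As a rough illustration of that categorization (my sketch, not anything from the product; the example ideas and criticisms are made up), an idea stays “yes” unless some criticism refutes it:

```python
# Hypothetical sketch: ideas default to "yes"; a refutation flips them to "no".
# There is no score in between, just the two categories.
ideas = {
    "idea A": [],                                  # no known refutation
    "idea B": ["a criticism that refutes it"],     # refuted
}

def judge(criticisms):
    return "no" if criticisms else "yes"

for name, criticisms in ideas.items():
    print(name, "->", judge(criticisms))
# prints:
# idea A -> yes
# idea B -> no
```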

