What are IQ test questions that people get right at different IQ levels (e.g., 100, 110, 120, 130, etc.)? Some folks have asked me to pull up data about this from a big study we ran on intelligence. These are all very rough approximations, but here you go:
IQ question thread 🧵
A question indicative of (very approximately) 100 IQ
A question indicative of (very approximately) 110 IQ
A question indicative of (very approximately) 120 IQ
A question indicative of (very approximately) 130 IQ
A question indicative of (very approximately) 135 IQ (you have to check ALL that apply to get it correct)
Keep in mind: one cannot get an accurate IQ score just by looking at the questions above. And there is a LOT more to getting what you want in life than your score on an IQ test anyway.
If you found this thread interesting, I'd appreciate a follow!
You may also enjoy my newsletter (One Helpful Idea) - where I send out one idea weekly (a 30 sec read) about psychology, philosophy, or society:
Fake experts are everywhere on the internet. How do you spot a reliable expert and differentiate them from unreliable fakes?
Here are 12 signs to look for, each of which can provide evidence of a proclaimed expert's reliability:
[expertise megathread] 🧵
1) They have deep factual knowledge
Experts possess extensive knowledge of relevant information, demonstrating a command of the (non-disputed) facts. It's far easier to tell whether someone knows the non-disputed facts than to evaluate whether they are right on the disputed points.
2) They communicate confidence levels
It's a sign of reliability when an expert states their confidence level, distinguishing between well-supported theories and less certain areas. Ideally, they openly discuss the evidence's strengths and limitations.
Can you tell whether a correlation between two things is meaningful from a scatter plot? Well, take a look at this example (see image).
Do you see a meaningful relationship between x and y?
Take a look at the thread below for the "answer":
The scatter plot above reflects a correlation of r=0.2.
Okay, but is that a "meaningful" correlation?
Whether a correlation is "meaningful" depends on two things:
(1) Robustness: is this correlation likely to be the result of chance (a false positive), or would we get a similar result if we measured these two variables again?
Stronger correlations (i.e., ones further away from r=0) are more likely to be robust.
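The robustness question above can be sketched with a quick permutation test: shuffle one variable many times and see how often chance alone produces a correlation as strong as the one observed. This is a minimal sketch with simulated data, not the study's actual numbers; the sample size and true r below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: n points with a true correlation of about 0.2
n = 300
x = rng.normal(size=n)
y = 0.2 * x + rng.normal(size=n) * (1 - 0.2**2) ** 0.5

r_obs = np.corrcoef(x, y)[0, 1]

# Permutation test: how often does shuffled (uncorrelated) data
# produce a correlation at least this far from zero by chance?
perm_rs = np.array([
    np.corrcoef(rng.permutation(x), y)[0, 1] for _ in range(2000)
])
p_value = np.mean(np.abs(perm_rs) >= abs(r_obs))

print(f"observed r = {r_obs:.2f}, permutation p = {p_value:.3f}")
```

With a few hundred points, an r of 0.2 is rarely a fluke; with only a couple dozen points, the same r is easy to get by chance — which is why robustness depends on sample size as well as the size of r.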
Are people who make more money happier? Let me tell you a tale with many twists and turns about the scientific investigation of this question - along with a surprise ending that you're unlikely to uncover even if you read the papers on this topic
Income/happiness megathread 🧵
One way to measure "happiness" is to ask questions like "How satisfied are you with your life?" or "On a ladder numbered 0 to 10, where 10 is the best possible life for you, and 0 is the worst, where do you place yourself?"
Questions like these measure "life satisfaction."
Life satisfaction is just one way of measuring happiness. We'll come back to that point in a moment. But 1st, how are life satisfaction and income related?
If we plot the GDP per capita of different countries (x-axis) versus life satisfaction (y-axis), we see this:
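The characteristic shape of that plot can be sketched in a few lines. The country values below are hypothetical, invented only to illustrate the pattern: satisfaction flattens out against raw dollars but rises roughly linearly against log income, which is why these plots are usually drawn on a log-scaled x-axis.

```python
import numpy as np

# Hypothetical country-level data (GDP per capita in USD, mean life
# satisfaction on the 0-10 ladder) chosen to mimic the usual shape:
gdp = np.array([1_000, 2_000, 5_000, 10_000, 20_000, 40_000, 80_000])
satisfaction = np.array([4.0, 4.4, 5.0, 5.5, 6.0, 6.6, 7.1])

# Against raw dollars the curve bends and flattens; against log(GDP)
# the relationship is close to a straight line.
r_raw = np.corrcoef(gdp, satisfaction)[0, 1]
r_log = np.corrcoef(np.log(gdp), satisfaction)[0, 1]

print(f"r with raw GDP: {r_raw:.2f}, r with log GDP: {r_log:.2f}")
```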
Gaslighting, where someone causes another person to doubt their feelings and senses, can cause psychological damage.
There's an opposite thing, though, that can also be damaging. As far as I know, it has no name. I call it Lightgassing.
Here's how lightgassing works:
🧵
Lightgassing is when one person agrees with or validates another person's false beliefs or misconceptions in order to be supportive.
Unlike gaslighting, a tactic of jerks and abusers, lightgassing is an (unintentionally harmful) tactic of friends and supporters.
Ideally, when you're upset, friends should validate your feelings and help you feel heard and understood but do so without agreeing with statements they themselves know to be false.
We do a disservice to people when we encourage their false beliefs.
I just encountered a dramatic, real-life example of Simpson's paradox!
In our giant intelligence study, we found a perplexing result: a negative correlation (r=-0.27) between how much effort participants reported putting into doing the tasks and their IQ scores. What gives?
🧵
Of course, normally, you'd think that more effort means better scores on a test, so why the negative correlation?
Perhaps, we reasoned, people with higher IQs put in much less effort, and that produces the counterintuitive finding?
But then we found another surprise:
When we split our data into the two subpopulations that comprise the whole (paid study participants vs. internet participants), in EACH of those two groups considered separately, the correlation between reported effort and IQ is positive (r≈0.13), not negative!
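This subgroup reversal can be reproduced with synthetic data. The sketch below is not the study's data: the group means and within-group slope are invented, chosen only to mimic the reported pattern (positive correlation within each group, negative correlation when the groups are pooled).

```python
import numpy as np

rng = np.random.default_rng(42)

def make_group(n, effort_mean, iq_mean):
    # Within a group, more reported effort -> slightly higher score
    effort = rng.normal(effort_mean, 1.0, n)
    iq = iq_mean + 2.0 * (effort - effort_mean) + rng.normal(0, 15, n)
    return effort, iq

# Hypothetical subpopulations: one group reports high effort but scores
# lower on average; the other group is the reverse.
e1, q1 = make_group(2000, effort_mean=7.0, iq_mean=95.0)
e2, q2 = make_group(2000, effort_mean=4.0, iq_mean=110.0)

r1 = np.corrcoef(e1, q1)[0, 1]
r2 = np.corrcoef(e2, q2)[0, 1]
r_all = np.corrcoef(np.concatenate([e1, e2]),
                    np.concatenate([q1, q2]))[0, 1]

print(f"group 1: r={r1:.2f}, group 2: r={r2:.2f}, pooled: r={r_all:.2f}")
```

The pooled correlation flips sign because the between-group difference (high-effort group scores lower on average) swamps the weaker within-group trend — the essence of Simpson's paradox.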