Turns out it’s harder than I thought. (Writing about the “science” in “the science of attraction,” I mean.)
It’s been over a month without a new post, which means I’m failing to deliver on my promises. Although I only broke a promise to myself, not my approximately three readers (who doubtless do not care), I wanted to explain myself anyway. So this post is to serve as an update and an explanation of why I’m changing my goal.
I’ve decided to lower my standards
My goal was to write high-quality, well-researched posts about “the science of attraction.” But from now on, my goal is going to be more like writing medium-quality, inconsistently researched ramblings about “Kate’s opinion on attraction.”
The main problem: research is hard
I thought the hardest part of writing this website would be making the charts on the factors of male and female attractiveness. I just have to get this part over with, I told myself; after that, I assumed it would be easy to pump out articles on why these things are attractive, and how you can achieve them.
Unfortunately, the first part — explaining why these factors are attractive according to science — is way harder than I thought it would be. Let me tell you about the process I recently went through to write a single post (which I have yet to publish because it’s still only half done).
Step one is choosing a topic from the list of the factors of attractiveness. Then I look for scientific articles on the topic and start skimming them. Every article includes references to tons of other interesting articles, so I have to find and skim those too.
Eventually I have 30+ tabs open and the spinning beach ball of doom appears on my MacBook, at which point I re-focus on the topic at hand and reopen just a few articles. I read these closely, and take notes on what we can conclude from them.
However, none of the studies address exactly the same topic, and all the experiments have different parameters, so they’re never perfectly comparable. Plus sometimes their results contradict each other.
Even if all their results say the same thing, how should I decide whether the consensus is trustworthy? (I am not critiquing the scientific method in general, only lamenting the difficulty of constructing a “facts”-based argument as a lone non-scientist blogger.) I’m not entirely qualified to decide whether each study is based on rock-solid experimental procedures/analyses and a large enough sample size.
So now I’ve spent three weeks on an article called “Why straight white teeth are attractive.” All I have to show for it is a long-winded description of different studies; and in the end, I couldn’t even find enough scientific evidence to show that straight white teeth are universally attractive.
Also, none of my friends want to read drafts of the post — apparently they already know straight white teeth are attractive, and they don’t wish to learn, through a lengthy and tedious reading assignment, why that belief may or may not be well-founded according to some obscure scientific studies.
Should I just skip to the “what you should do about it” part?
Maybe I should just go with my intuition about what’s attractive, and worry about the “science” part later.
My best friend likes to remind me of “the 5-year-old test” — if I’m discussing something I would have known at age 5, then it’s not worth the trouble of proving it to readers, because they probably already agree with me.
For example, rather than trying to prove why straight white teeth are attractive, I could have written the following instead.
Think back to when you were 5 years old. You probably would have agreed that people look better when their teeth are nice-looking — as in non-yellow, healthy, and well-aligned. (Studies have confirmed this.)
You might also have agreed that there was very little difference, in terms of attractiveness, between people with normal white teeth, slightly-less-white (or slightly crooked) teeth, and extremely bright white teeth. (Studies have confirmed this.)
Indeed, you might have thought artificially bright-white teeth, or teeth that are straight but the wrong size or shape for the person’s face, can even decrease someone’s attractiveness. (Studies have confirmed this.)
So overall, you probably already know that straight white teeth are attractive — assuming they’re not too straight and not too white — and now all you need to do is find out how to attain nice-enough teeth for yourself.
The would-a-5-year-old-know-this approach certainly makes it easier to write more posts, since I don’t have to spend so much time researching. Plus, most people would probably prefer this type of article, assuming it also contains useful advice.
On the other hand…
What would be the point of calling this “the science of attraction”?
The danger of not treating scientific research carefully enough is that when science is misreported to the point of seeming inconclusive, people may decide that scientists don’t know much about anything. There’s a segment on Last Week Tonight with John Oliver called Scientific Studies (May 8 2016) that talks about this:
Here’s my transcription of the most relevant parts:
(If you’d prefer to watch the whole video, you can skip this part since I didn’t add any analysis.)
(5:11) Scientists themselves know not to attach too much significance to individual studies until they’re placed in the much larger context of all the work taking place in that field. But too often, a small study with nuanced tentative findings gets blown out of all proportion when it’s presented to us, the lay public.
(7:38) And there is no doubt some of this is on us, the viewing audience. We like fun poppy science that we can share like gossip, and TV news producers know it.
That is why you constantly hear stuff like this: [from a Fox news segment titled “Cooking Up Love”] “Men, listen up: A brand new study says a woman is more open to romance when they are full as opposed to being hungry.”
(8:04) No shit. […] But you should know, that study involved only 20 women. And you cannot presume that 20 women can speak for all women. This is science, not the United States senate.
(9:55) Now to be fair, it’s not always the news media. Sometimes researchers themselves will oversimplify the science. Even TED talks, which have had some amazing speakers, have also featured some morning-show-style science in the past. […]
(13:16) You may think, well hold on, where’s the harm here? […]
(13:27) Think of it this way. This is a chart mapping the results of studies of things like coffee, eggs, and wine. All of them have been linked to raising or lowering your risk of cancer depending on the study. And “everything causes cancer” is not the conclusion you want to draw from science. It’s the conclusion you should draw from logging on to WebMD, where that is their motto.
Because if I were to tell you about each of those studies in isolation, you might reasonably think, ‘well, no one knows anything about what causes cancer.’ And that is a problem.
(14:01) Because that’s the sort of thing that enabled tobacco companies for years to insist ‘the science isn’t in yet.’ And if you think I’m exaggerating about the impact that this misreporting can have on our faith in science, look at an example from some of the people most guilty of it.
(14:15) The Today Show, which lives for scientific studies, recently concluded one segment like this.
(Today Show people discuss whether full-fat dairy is good or bad for you)
(14:39) (Today Show guy) “I think the way to live your life is you find the study that sounds best to you, and you go with that.”
(14:44) (John Oliver) NOOO! No no no no no no no! In science, you don’t just get to cherry-pick the parts that justify what you were going to do anyway. That’s religion. You’re thinking of religion.
(15:04) This is really dangerous. If we start thinking that science is à la carte and that if we don’t like one study, don’t worry another will be along soon, that is what leads people to think that man-made climate change isn’t real, or that vaccines cause autism, both of which the scientific consensus is pretty clear on.
(15:22) Science is, by its nature, imperfect, but it is hugely important, and it deserves better than to be twisted out of proportion and turned into morning-show gossip. So if they are going to keep saying “a study says,” they should have to provide sourcing and context, or not mention it at all.
And I know what you’re thinking: “well hold on, if that happens, where am I going to get all my interesting bullshit from?” Don’t worry, we have you covered. [TODD talks segment begins 15:49]
(16:25) (TODD talks segment narrator) “At TODD Talks, we’ve raised the bar on entertainment, by lowering the bar on what constitutes science. […]
(17:00) (Guy on TODD Talks stage) “I conducted a randomized double-blind study on the effects of coffee on cancer of the esophagus, and while there were statistically significant decreases in incidences of cancer in the mice that were given the coffee compared to the control group, [audience looks bored] any definitive conclusions will of course have to await human trials, peer-review, and replication. Now of course this—”
(17:18) (other guy runs on stage) “I think what he’s trying to say is: coffee cures cancer!” [..]
(19:13) (narrator) “TODD Talks. Because science doesn’t have to be an exact science.” (the end)
Obviously, I don’t want to misrepresent science on this site in a way that’s as blatantly oversimplified as the examples in this segment.
But on the other other hand…
It seems like everyone oversimplifies science, the viewing audience likes it, and nobody else cares. Even Last Week Tonight with John Oliver is sometimes guilty of the exact same thing.
This is from their segment on Standardized Testing (May 3 2015):
(8:01) Florida [..] uses this formula to assess teachers. [see image below.] A formula which looks like the kind of thing that aliens carve into an anti-Semite’s cornfield. [poster from the movie Signs starring Mel Gibson] [laughter]
(8:13) And many of these formulas on which teachers’ careers depend were partly inspired by research, and this is true, that modelled the reproductive trends of livestock. [audience sounds mildly appalled]
Basically, we judge the nuance of what happens in the complicated world of a child’s mind the same way that we judge this. [photo of two cows mating][laughter]
‘Look, I don’t know what we did wrong, but your child is going to either pass algebra, or birth a healthy calf. I don’t know, flip a coin.’ [end of transcription]
To quote John: NOOO! NO NO NO NO NO NO NO!!!!!!!!!
This line of argument is completely illogical, and you don’t need to know anything about genetic algorithms to see why.
Even if the show’s conclusion is true — that this formula is useless for evaluating teachers — no real evidence has been offered to support it. Oliver says the equation looks all weird and complicated, and it is based on something (livestock reproduction) that has nothing to do with what it purports to measure (teacher effectiveness).
But does looking weird, and being modelled on something weird, automatically mean an equation’s results are invalid? NO!
To adequately prove its conclusion, the show would have had to explain why this formula outputs the wrong teacher evaluations, NOT simply present some cases where students’ “projected grades” were wrong, followed by this B.S.
(It’s possible that part of the formula is wrong, but the show doesn’t explain what that has to do with the rest of the argument. Oliver would have had to say why outputting the wrong projected grades, and thus the wrong teacher evaluations, is inevitably and necessarily the result of the formula looking too complicated and being modelled on livestock reproduction.)
So basically, here’s what I’m trying to tell you:
I see the danger in falsely presenting scientific research, but I can also see that most people would rather read/watch simple extraordinary statements instead of complicated technical details (especially since the latter often end up saying something most people already thought was obvious).
Sensational headlines (followed by logical fallacies and advice to trust your intuition) are both easier to write and more popular among readers. Is it any wonder most websites, daytime TV producers, and even John Oliver writers go that route? And is saying “but everyone else is doing it!” enough to justify giving in to my own temptation to go that route too?
And now you might be wondering: Why did I bother to write this?
(as a follow-up question… Why do I bother to write anything?)
I know most people will neither read this article nor care whether the rest of the articles on this site are perfectly “scientific.” But I wanted to clarify things for the tiny minority of readers who do care about scientific standards of truth; and to explain why meeting that standard is currently too time- and labor-intensive for me to do on a consistent basis.
…Which is obviously the justification used by every morning talk show and tabloid paper when they promote oversimplified, hyped-up versions of scientific findings. I don’t want to go down that path, either, because I don’t respect people who do it, and I think it’s bad for society (since I think having an informed public is good for society, and misinformation and sensationalism only spread confusion about what constitutes a legitimate source of knowledge). Ultimately I’m hoping to strike a compromise here.
Anyway, I haven’t given up on my original goal of writing thoroughly researched articles about the genuine science of attraction; I’m just shifting it to the future, when I’ll hopefully have enough money to hire a research assistant.
So now you know more about the struggles I face to produce any writing on this site (and of course they’re the same as the struggles all writers face: self-imposed). I’m hoping that now that I’ve got this off my chest, I can stop feeling guilty about oversimplifying other people’s scientific research, and return to writing the straightforward articles people actually want to read. Onwards!