Psych News Daily reports that “Conversations Rarely End When People Want Them to End.” Scientific American reports that “People Literally Don’t Know When to Shut Up or Keep Talking.” You might think those two headlines describe the same finding. Are you sure about that?
Psych News Daily states “On average, participants wished their conversations had been 1.9 minutes (or 24%) longer. They also said they believed that their partners wished that the conversation had been 5.8 minutes (62%) longer. The second study (157 women, 92 men, average age 23) paired two strangers to discuss anything for as long as they wanted, up to 45 minutes. In this study as well, participants indicated that they wished the conversation had lasted longer than it did.”
Scientific American states “Participants in both studies reported, on average, that the desired length of their conversation was about half of its actual length. To the researchers’ surprise, they also found that it is not always the case that people are held hostage by talks: In 10 percent of conversations, both study participants wished their exchange had lasted longer. And in about 31 percent of the interactions between strangers, at least one of the two wanted to continue.”
As an aside, I think ScienceMag.org’s article on the study, When Should You End a Conversation? Probably Sooner Than You Think, is much better. The author, Cathleen O’Grady, focused on study results indicating that people didn’t really understand whether the other participant wanted the conversation to go longer or shorter. As you will see in a minute, I think that was the much better takeaway.
Ok. Which is it? Did the majority of study participants want the conversations to be shorter or did they want them to be longer?
Both articles cite the same research paper. The abstract says: “Do conversations end when people want them to? Surprisingly, behavioral science provides no answer to this fundamental question about the most ubiquitous of all human social activities. In two studies of 932 conversations, we asked conversants to report when they had wanted a conversation to end and to estimate when their partner (who was an intimate in Study 1 and a stranger in Study 2) had wanted it to end. Results showed that conversations almost never ended when both conversants wanted them to and rarely ended when even one conversant wanted them to and that the average discrepancy between desired and actual durations was roughly half the duration of the conversation. Conversants had little idea when their partners wanted to end and underestimated how discrepant their partners’ desires were from their own. These studies suggest that ending conversations is a classic “coordination problem” that humans are unable to solve because doing so requires information that they normally keep from each other. As a result, most conversations appear to end when no one wants them to.”
So the abstract just says conversations generally lasted a different length than participants wanted, without indicating which direction. The actual data can be found in the supplementary appendix. Can we take a look at that and validate one or the other attempt at journalism?
This appendix is very confusing.
Page 3 talks about data for Study 1, collected in January 2019, but doesn’t report the results or the number of participants. There is a Study 1 and a Study S1. Why?
Page 6, for example, starts talking about Study S1, run in August 2020, and does give information about the respondents. Page 7 gives results (why results here and not for Study 1?) and states
“As in Study 1, a large majority of participants (75.41%) reported that they had felt ready for the conversation to end before it actually did. These participants also reported the point at which they felt ready for the conversation to end. Of the participants who reported that they had felt ready for the conversation to end before it actually did, 85.35% also reported that they would have preferred the conversation to have ended at that point. Of the participants who reported that they had felt ready for the conversation to end before it actually did, 72.89% of these participants reported that it would have been better if the conversation had ended at that point.”
Page 9 gives the results of Study 2: “Analyses provided two key results: As in Study 1, a large majority of participants (80.92%) reported that they had felt ready for the conversation to end before it actually did. Of the participants who reported that they had felt ready for the conversation to end before it actually did, 90.57% also reported that they continued to feel that way throughout the conversation”
The Scientific American article seems to be directionally consistent with this text. Page 9 also states that “The main measures of Study 1, S1, and S2 are reported in Table S1.” So just to be safe, let’s look at that table, which appears on page 28.
Let’s look at Mean Durations First
The Study 1 mean duration was 13.97 minutes and the participants’ mean desired duration was 15.88 minutes, which gives us the 1.9 minutes cited by Psych News Daily by which participants would have preferred the conversation last longer. The actual mean duration for Study S1 was 18.42 minutes with a mean desired duration of 18.19 minutes. That is not consistent with the Psych News Daily summary. Study 2 is worse: the mean duration was 20.34 minutes and the participants’ mean desired duration was 17.84 minutes, so the conversations lasted almost three minutes longer than the participants wanted. Not what Psych News Daily claims, although with respect to Study 2, Psych News Daily carefully said that “participants indicated they wished the conversation had lasted longer than it did.” It didn’t say that the majority of participants wished the conversation had lasted longer.
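To keep the arithmetic straight, here is a quick Python sketch using the mean durations from Table S1 (the dictionary layout and variable names are mine; the numbers are the appendix’s):

```python
# Mean actual and desired conversation durations, in minutes,
# as reported in Table S1 of the supplementary appendix.
means = {
    "Study 1":  {"actual": 13.97, "desired": 15.88},
    "Study S1": {"actual": 18.42, "desired": 18.19},
    "Study 2":  {"actual": 20.34, "desired": 17.84},
}

for study, d in means.items():
    # Positive means participants wanted the conversation to run longer.
    diff = d["desired"] - d["actual"]
    print(f"{study}: desired minus actual = {diff:+.2f} minutes")
```

Only Study 1 comes out positive; Studies S1 and 2 both have conversations running longer than participants wanted, on average.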
Now the Median Durations
The median duration for Study 1, both actual and desired, was 10 minutes. For Study S1, the median actual duration was 15 minutes while the participants’ median desired duration was 10 minutes. For Study S2, the median actual duration was 20 minutes and the participants’ median desired duration was 12 minutes. Scientific American is evidently going by the median rather than the mean, but the statement that the “desired length of their conversation was about half of its actual length” really overstates the data. The Study 1 difference was 0, the Study S1 difference was a third of the actual length, not half, and the S2 difference was 40%, not 50%. What are they doing at that magazine?
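The same sketch works for the medians from Table S1 (again, the structure is mine; the numbers are the table’s):

```python
# Median actual and desired durations, in minutes, from Table S1.
medians = {
    "Study 1":  {"actual": 10, "desired": 10},
    "Study S1": {"actual": 15, "desired": 10},
    "Study S2": {"actual": 20, "desired": 12},
}

for study, d in medians.items():
    gap = d["actual"] - d["desired"]
    # Express the gap as a share of the actual conversation length.
    pct = gap / d["actual"] * 100
    print(f"{study}: gap = {gap} min ({pct:.0f}% of actual length)")
```

None of the three gaps is “about half” of the actual length: they are 0%, 33%, and 40%.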
What about the Proportional Differences?
What about that 24% number, where Psych News Daily says that participants wanted the conversations in Study 1 to be 24% longer? The table shows a 24.08% proportional difference between the actual duration and the participants’ desired duration, but the equation provided is (actual duration − desired duration) / actual duration. Based on the numbers provided, that would be (13.97 − 15.88) / 13.97 = −13.67% (surprise!), not 24%. When we apply the same calculation to Study 2, the equation gives (20.34 − 17.84) / 20.34 = 12.29%, but the number in the table is 1.06%. Hmm.
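To show the mismatch concretely, here is the appendix’s stated formula applied to the Table S1 means (the function name is mine):

```python
def proportional_difference(actual, desired):
    """Proportional difference as defined in the appendix:
    (actual duration - desired duration) / actual duration, in percent."""
    return (actual - desired) / actual * 100

# Study 1 means from Table S1: the table reports 24.08%.
print(f"Study 1: {proportional_difference(13.97, 15.88):.2f}%")

# Study 2 means from Table S1: the table reports 1.06%.
print(f"Study 2: {proportional_difference(20.34, 17.84):.2f}%")
```

Neither result matches the tabled value, which is exactly the discrepancy the text is complaining about.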
But now look at Table S2 on page 29, which gives “Results for All Participants” and “Participants Included in Study 2”. The first group shows a mean actual duration of 27.92 minutes and a desired duration of 30.07 minutes. The second group shows a mean actual duration of 20.19 minutes and a desired duration of 20.81 minutes. Now I’m really confused. How does this reconcile with Table S1?
Conclusion
There may be some way of reconciling the text with the numerical summary tables, but it is not obvious. The authors of these studies really should be much more careful in explaining their published results. Reports like these provide ammunition for the continued backlash against scientific studies and the mistrust of science.
By the way, in news about as surprising as water being wet, the studies did show that people who wanted the conversation to last longer than it did reported enjoying the conversation.
As usual, feel free to disagree using this contact link. My world view is a hypothesis, not a belief.