Discuss two common mistakes people make when thinking about random distributions and statistics. We’ll frame our discussion around the issues with identifying cancer clusters.

When a young child develops a rare form of cancer, the parents often want to know why. While their child is in treatment, they may meet other parents who have children with cancer, and they may share their stories. When they find commonalities, the parents may begin to suspect that their children have been exposed to an environmental carcinogen, and they may look for further evidence of a “cancer cluster”. A classic (i.e. very old) example of this was described in a PBS Frontline episode called “Currents of Fear.” This episode looks at the claim that electromagnetic radiation from power lines causes cancer. I have included a transcript of the video since accessing the actual video might be difficult.

As the video describes, claims about cancer clusters are often based on erroneous reasoning about random distributions and statistics. The reasoning errors include the “Texas sharp shooter fallacy” and the “multiple comparisons fallacy”. Let’s discuss each of these. Remember this is a group discussion, so instead of superficially touching on every question, I want you to discuss one or two questions in depth. Please choose only 1 or 2 of these questions.
Texas Sharp Shooter Fallacy
1. What is the Texas Sharp Shooter Fallacy?
2. How does it relate to cancer clusters?
3. Why is it important to formulate a specific hypothesis before you conduct an experiment?
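To build an intuition for the fallacy, here is a toy simulation (my own illustration, not from the video): scatter “cases” uniformly at random across a grid of neighborhoods, then look for the densest cell after the fact, which is exactly the sharpshooter painting the target around the bullet holes. All names and parameter values here are arbitrary choices for the sketch.

```python
import random

# Hypothetical setup: 200 cases placed at random over a 10x10 grid of
# neighborhoods, so the average is 2 cases per cell.
random.seed(42)
GRID = 10
CASES = 200

counts = [[0] * GRID for _ in range(GRID)]
for _ in range(CASES):
    x, y = random.randrange(GRID), random.randrange(GRID)
    counts[x][y] += 1

peak = max(cell for row in counts for cell in row)
print(f"average cases per cell: {CASES / GRID**2:.1f}")
print(f"cases in the densest cell: {peak}")

# With purely random placement, some cell almost always ends up holding
# several times the average number of cases -- an apparent "cluster"
# with no cause behind it. Drawing a circle around that cell after the
# fact, and only then asking "what are the odds?", is the fallacy.
```

Running this a few times with different seeds shows that a striking-looking “hot spot” appears in nearly every run, which is why a hypothesis must be fixed before looking at the data.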
Multiple Comparisons Fallacy
Last week, we discussed a replication failure in psychology. According to some reports, up to 50% of positive findings fail to replicate. By “fail to replicate,” I mean the original study found a statistically significant effect, but an exact replication failed to find an effect. Fifty percent is obviously high, but what failure rate should we expect? (For the sake of argument, assume that it is possible to exactly replicate the methods used in the original study.) To answer this question, we need to review the meaning of p-values.
4. What does it mean to have a statistically significant effect with p < .05? (Be sure to write this in your own words!)
5. Imagine that we conduct the same extrasensory perception (ESP) experiment 100 times. Let’s agree that ESP isn’t real, so the results are random. If our p-value cut-off is .05, how many times would we expect the experiment to produce a statistically significant result? Explain.
6. Imagine 100 independent researchers each conducting an experiment on ESP. Using the .05 cut-off for p-values, how many researchers would we expect to obtain a statistically significant result? (see above!) Which ones would be most likely to publish their results (and get media attention)? (If you want to look into this more, look for information on “publication bias” or “the file drawer effect”.)
7. Now consider the problem of multiple comparisons discussed in this week’s video. The Swedish researchers examined a large number of health variables to see if any correlated with proximity to transmission lines. Let’s say they examined 100 variables and that they used a p-value of .05. If transmission lines have no effect whatsoever on health, how many statistically significant correlations should they expect to find? What should the Swedish researchers have done to make their findings credible?
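The questions above can be checked with a minimal simulation (again my own sketch, not taken from the video or the Swedish study). Under the null hypothesis the p-value is uniformly distributed on [0, 1], so running 100 tests of a nonexistent effect shows how many clear the .05 bar by chance alone; the Bonferroni correction shown at the end is one standard remedy, not necessarily the one the researchers used.

```python
import random

# Simulate 100 tests of an effect that does not exist. Under the null
# hypothesis, each test's p-value is uniform on [0, 1].
random.seed(0)

def null_experiment():
    """One study of a nonexistent effect; returns its p-value."""
    return random.random()

p_values = [null_experiment() for _ in range(100)]

# How many "significant" results at the usual cutoff? On average,
# 0.05 * 100 = 5 false positives are expected.
false_positives = sum(p < 0.05 for p in p_values)
print(f"significant results out of 100 null tests: {false_positives}")

# One standard fix for multiple comparisons: the Bonferroni correction,
# which divides the cutoff by the number of tests performed.
bonferroni_cutoff = 0.05 / len(p_values)
survivors = sum(p < bonferroni_cutoff for p in p_values)
print(f"significant after Bonferroni correction: {survivors}")
```

The same arithmetic answers question 7: examining 100 health variables at p < .05 should yield about 5 “significant” correlations even if transmission lines have no effect at all, so a credible analysis must either adjust the cutoff for the number of comparisons or confirm any hit in an independent, pre-registered follow-up study.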