On Sun, 30 Jun 1996 01:40:55 -0700, Chris Lawson <firstname.lastname@example.org> wrote:

>Contrary to Damien's assertion that I only think meta-analysis is flawed in
>parapsychology, I think it is a fundamentally flawed process in ANY field. It is not the
>meta-analysis in parapsychology that I disagree with, it's that ONLY meta-analysis seems
>to give statistical significance to the field's data. Now THAT is a bad omen for the
>field, not the meta-analysis itself. By contrast, in pharmacology, standard study
>designs make up the vast majority of studies, and meta-analyses are reserved for
>difficult areas, where it is too expensive, or logistically or ethically difficult to
>perform double-blind studies on large sample sizes.

I believe it is possible to evaluate how meaningful the conclusions of a regular double-blind study are by examining its methodology. Is it not also possible to evaluate meta-analyses in some similar manner, to determine whether their conclusions are meaningful? Such an evaluation could compensate for the very understandable biases of researchers who want their studies to yield positive findings, for reasons of continued tenure, funding, status in their field, etc.
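For readers unfamiliar with why pooling matters here, a minimal sketch (with made-up effect sizes and standard errors, and the usual fixed-effect inverse-variance weighting) shows how several studies that are individually non-significant can combine into a significant pooled estimate. This is the legitimate power of meta-analysis, and also why Chris finds it a bad omen when a field's data reach significance only at the pooled level:

```python
import math

# Hypothetical study results: (effect estimate, standard error).
# Each study alone has |z| < 1.96, i.e. not significant at p < 0.05.
studies = [(0.10, 0.06), (0.08, 0.05), (0.12, 0.07), (0.09, 0.06)]

def z_to_p(z):
    """Two-sided p-value for a standard-normal z statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

# Fixed-effect (inverse-variance) pooling: weight each study by 1/SE^2.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

for est, se in studies:
    print(f"single study: z = {est/se:.2f}, p = {z_to_p(est/se):.3f}")
print(f"pooled:       z = {pooled/pooled_se:.2f}, p = {z_to_p(pooled/pooled_se):.4f}")
```

The pooled z statistic clears the 1.96 threshold even though no single study does. Of course, this pooling is only as trustworthy as the inputs: if the individual studies share a methodological bias, or if negative studies went unpublished, the pooled estimate inherits and amplifies the problem — which is the crux of the disagreement above.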