What is pseudoscience?
Eve & Dunn (1990, p. 10) defined pseudoscience as ideas “for which their proponents claim scientific validity, but which in actuality lack empirical support, or were arrived at either through faulty reasoning or poor scientific methodology.”
This matches Thagard’s (1978) two-part criterion for pseudoscience:

P1. The theory has been less progressive than alternative theories over a long period of time, and faces many unsolved problems.

P2. Despite P1, the community of practitioners makes little attempt to develop the theory towards solutions of the problems, shows little concern for attempts to evaluate the theory in relation to others, and is selective in considering confirmations and disconfirmations.
It’s possible that some pseudoscientific beliefs result from ‘folk psychology’ and similar concepts; Rodriguez (2006) describes how terms from neuroscience have diffused into everyday language, so that brain-based explanations, albeit reductive ones, are becoming acceptable in everyday conversation. Some beliefs, e.g. that ‘photographic memory’ means being able to recall any memory in photograph-like detail, might plausibly have originated as simple misinformation. Many pseudoscientific beliefs, however, have fan clubs and public funding, and even build museums to spread their views – often using pseudoscience to justify their assertions.
So why is this a problem?
Rosenthal (1993) argues that unscientific and pseudoscientific beliefs seriously hinder attempts to improve scientific literacy. These ideas are resistant to change: Eve & Dunn (1990) found that a substantial percentage of secondary-school biology teachers hold pseudoscientific beliefs.
You don’t have to look far to see the incredible damage that holding false beliefs (especially if they’re held as being scientific fact) can do. They can give false hope, scam people out of money, and waste significant public resources.
One well-known example of pseudoscience is The Bell Curve (Herrnstein & Murray, 1994), which is infamous for two arguments. The first is that the average genetic IQ of the United States is declining, because the more intelligent tend to have fewer children than the less intelligent, generation length is shorter for the less intelligent, and large-scale immigration brings in those with lower intelligence (the book also argues against policies of affirmative action) – in other words, the plot of Idiocracy. The second is that IQ is predicted by race: the book claims average IQs of 85 for African Americans, 103 for Whites, and 106 for Asians. There were hundreds of critical reviews. Aside from the many problems with IQ measurement in the first place, the book was never submitted to any kind of peer review, and much of the work it cites was funded by the Pioneer Fund, which is notorious for promoting scientific racism. Herbert (1994) puts it best in a NYT review of the book: “a scabrous piece of racial pornography masquerading as serious scholarship.” (Nastier words were used that I’ve chosen to omit.) Herbert also points to Murray’s cross-burning past; at the very least, it’s evidence of extreme, active prejudice.
Noam Chomsky (1972) said that even if there were a correlation between race and intelligence, it would have no “social consequences except in a racist society in which each individual is assigned to a racial category and dealt with not as an individual in his own right, but as a representative of this category…In a non-racist society, the category of race would be of no greater significance [than height].”
Is there anything we can do?
Rosenthal (1993) found that a learning cycle approach, which has been shown to be effective in bringing about conceptual change, seems to stimulate students who hold pseudoscientific beliefs to examine them more critically, if not abandon them. That’s a great start, but clearly we need to do more.
Part of the problem appears to be that some people who buy into these beliefs are simply misinformed – the ‘photographic memory’ example, for instance, is easily corrected. For such cases, fact-checkers built into browsers could flag dubious claims as the user reads them; once a decent database is built up, it would be easy to judge the reliability of a source. The other group of people who hold pseudoscientific beliefs, however, will hold on to them no matter what. As Thagard’s definition says, they’re “selective in considering confirmation and disconfirmation” – in other words, they believe whatever fits with their previous belief set (cf. Rosenthal again). Some people sincerely hold these beliefs, and no evidence will dissuade them; they may believe that any evidence countering their beliefs is part of a wider conspiracy, for example. These people are obviously harder to reach – they can even hold contradictory beliefs about conspiracies.
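To make the browser fact-checker idea concrete, here is a minimal sketch in Python. It assumes a hand-curated database mapping source domains to reliability notes; every domain and rating below is an invented illustration, not a real service or dataset.

```python
# Sketch of a source-reliability lookup for a hypothetical browser fact-checker.
# The database contents are made-up examples, not real ratings.
from urllib.parse import urlparse

RELIABILITY_DB = {
    "example-journal.org": "high: peer-reviewed publication",
    "example-tabloid.com": "low: no editorial fact-checking",
    "example-chain-survey.com": "low: marketing survey, not a study",
}

def rate_source(url: str, db: dict = RELIABILITY_DB) -> str:
    """Return a reliability note for the URL's domain, or flag it as unknown."""
    # Extract the host part of the URL and strip a leading "www."
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    return db.get(domain, "unknown: not yet in the database")

print(rate_source("https://www.example-tabloid.com/miracle-cure"))
# prints "low: no editorial fact-checking"
```

In a real extension the lookup would sit behind the page renderer and annotate links inline; the hard part, of course, is curating the database, not the lookup itself.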
There’s also a huge issue in how scientific discoveries are reported. Newspapers have their own agendas to support, which in itself produces biased reporting. They also have a low quality threshold, calling surveys commissioned by restaurant chains “studies”, and reliability and validity are simply assumed for every study mentioned. As a result, people end up believing reductive and possibly false versions of scientific theories and discoveries.
So the best thing we can do, in my opinion, is address these kinds of beliefs (starting with the most popular ones), keep them out of the NHS etc. (which means governments critically examining them in the first place), encourage responsible journalism that doesn’t make wild, baseless claims, and teach critical thinking skills in schools from an early age. What does everyone else think?
P.S. I couldn’t leave this out, nor could I fit it in – but it’s very relevant and worth watching if you have a spare 8 minutes.