Pseudosciences are practices that masquerade as science but have little or no scientific rigour or coherence. They claim to be factual and scientific, yet do not adhere to scientific methodology and principles, notably the principle of falsifiability.
It can be difficult for the non-scientist to discern whether something claimed to be scientific actually is. Fortunately, pseudoscience has many recognisable features that are distinct from genuine science. These features are outlined below. Whilst not every feature will be common to every form of pseudoscience, the more of these features a claimed scientific practice displays, the more likely it is to be pseudoscientific.
Features of pseudoscience:
- It's dogmatic
A dogmatic belief or position is one that is deemed, by its proponents, to be an accepted authority and, as such, not to be doubted or disputed. Pseudosciences tend to have evolved very little, or not at all, since the dogma was first established. Any research or experimentation carried out in the field is generally done more to justify the belief than to improve knowledge.
In science, observations are made, a hypothesis is formed, data are gathered and testing is done, and if the results of testing support the hypothesis, a theory (provisional conclusion) is formulated. If any evidence comes to light that invalidates the conclusion, the conclusion will be amended or even rejected and a replacement theory sought. In pseudoscience, practitioners begin with a fixed conclusion (such as 'homeopathy works'), form theories as to why it works, collect data that support the conclusion, and reject or explain away data that don't; which inevitably results in the conclusion being confirmed. With this system, no evidence is capable of contradicting the conclusion.
Challenging the accepted dogma is often considered a hostile act; and such challenges will be fought off with attacks on the critic's character or motives rather than embraced as a way of testing claims as in real science. As a consequence, the same arguments and counter-arguments are seen time and again: scientists give reasons why the practice is a pseudoscience; the pseudoscientists respond with excuses and attacks on scientists and/or science itself.
- The idea is aimed directly at the public
Scientific breakthroughs will normally have been published in science journals, scrutinised by other scientists, and only announced to the public once scientists have agreed that the scientific breakthrough is indeed genuine. The progress of the acceptance of the idea will be documented and anyone can reference this information in the relevant journals.
Pseudoscientific ideas are sometimes driven by cultural or ideological reasons, but very often they're driven by commercial goals. A company that is trying to sell its products or ideas without having subjected them to this scientific scrutiny is giving out a telltale sign that those products will not stand up to it. A new 'miracle breakthrough' healing device, for example, that is being sold directly to the public, but which has no scientific references to support it, probably doesn't work.
- Ideas that are non-testable
A crucial problem with many pseudoscientific ideas is that they cannot be tested in any meaningful way. This can come about because what is being claimed is so nebulous and vague that it is difficult to conceive of how one would test it. Such vagueness also facilitates a legion of 'possible' interpretations, where just about anything could be made to fit the outcome and so support the original claim. If a claim or theory cannot be tested then it cannot be falsified, and thus it violates a central principle of science (that of falsifiability: see Braithwaite, 2006; Carroll, 2004). If a theory cannot be falsified then no evidence can be gleaned that would speak to the issue one way or the other – it is thus scientifically meaningless. Ideas that cannot be tested are no more right than they are wrong.
- Verbose language
One reason that theories from pseudoscience are vague and untestable is that the language used by their proponents is itself far too vacuous. This often results in a 'theory' so conceptually slippery that it becomes difficult to identify what is actually being argued – or how one might test it. Due to their nebulous content, such practices also nearly always conceal all sorts of circular-reasoning errors. Over-complex words, phrases and over-long sentences are employed in an attempt to 'look' scientific and intelligent.
Indeed, in pseudoscience, the more scientific-sounding the language employed, the more 'plausible' the claim appears to be. However, all this really accomplishes is confusion. Poorly defined terms like 'energy', 'resonance', 'quantum', 'nano' and 'dimensions' are all used with no useful explicit definitions provided. They are meant to look scientific and respectable, in order to add weight to an idea which is in reality both implausible and improbable. Poor writing often reflects poor thought and poor understanding. Whenever one encounters flowery and verbose language it is likely the authors or speakers do not fully understand what they are talking about. Verbose language is used to fill in the gaps of knowledge by making it sound as if something profound and insightful is being said, when in fact the sentence rarely goes anywhere.
- Conceptual hijacking
An increasing trend in contemporary pseudoscience is to hijack aspects of mainstream science in an attempt to appear more scientific. This is usually done with very new areas of science where the public's understanding (and that of scientists themselves) is low. Recent examples include areas like quantum mechanics and string theory from physics. Paranormal theories that hijack these areas (in an attempt to make their poor ideas look more plausible) are riddled with huge misunderstandings of the concepts involved. Conceptual hijacking plays on the public's lack of understanding and presents a twisted version of science that bears little resemblance to the truth.
- Confirmation-bias (selective evidence)
Many people report the common experience of thinking about someone just before the phone rings, and the caller turns out to be the person they were thinking of. Is this strong evidence for a psychic link between these people? The answer is no. It reflects a selective bias in memory and reasoning. Although we remember the instances when this does happen (as they can be striking), we rarely remember the instances when the caller is not the person we were thinking of. Our memory is biased to place an emphasis on the 'hits' and ignore the 'misses'.
In a similar manner, researchers can sometimes concentrate only on the evidence that is consistent with the argument being developed (the hits) and ignore other evidence that contradicts it (the misses). This is known as the confirmation bias: we are biased to notice only observations that confirm our assumptions. The confirmation bias involves a positive weighting towards evidence that is consistent with the current belief or world-view, and a negative bias that ignores results which challenge that view. It may be impressive to see a dowser find water in a single trial, but this on its own does not mean dowsing works. When we run many trials and see that the dowser usually failed to locate water, the few sporadic successes no longer look impressive.
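The effect of counting only the hits can be sketched with a short simulation. The scenario below is hypothetical (a dowser guessing among ten possible spots, performing purely at chance); the point is that a tally of successes alone looks impressive, while the full record of hits and misses reveals chance-level performance.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

# Hypothetical scenario: on each trial a dowser picks one of 10 spots,
# only one of which (spot 0) actually hides water. The dowser guesses
# at random, so success should occur around 10% of the time by chance.
N_TRIALS = 1000
N_SPOTS = 10

hits = sum(1 for _ in range(N_TRIALS)
           if random.randrange(N_SPOTS) == 0)  # guess happens to match
hit_rate = hits / N_TRIALS

# Reporting only the successes ('remembering the hits') sounds striking;
# the hit rate over *all* trials shows the performance is merely chance.
print(f"Successes remembered: {hits}")
print(f"Hit rate over all {N_TRIALS} trials: {hit_rate:.1%}")
```

Run repeatedly with different seeds, the count of bare successes always provides a stock of impressive-sounding anecdotes, yet the overall hit rate stays near the 10% expected by chance.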
- Metaphorical/analogy driven thinking
Metaphors and analogies are essential to science and theory. Complex and more abstract areas of science rely particularly on metaphor and analogy to add clarity to knowledge and to communicate that knowledge. This is perfectly legitimate and indeed, to some extent, unavoidable. In science, analogies and metaphors may emerge as useful ways to think about, describe, and explain objective facts and evidence. For example, psychologists have employed the metaphor of visual selective attention being like a ‘spotlight’ illuminating the relevant information out there in the world from the surrounding darkness of all that we ignore. In many respects this has proved a very fruitful metaphor guiding thinking in this area of study. The problem here is not the use of analogies or metaphor in scientific thinking, but the clear abuse of them.
The problem with pseudoscience is its use of, and over-reliance on, metaphor as an argument in and of itself. Rather than employing metaphors and analogies as illustrations of scientific knowledge, pseudoscience employs analogies to deduce new conclusions and propose alternative truths. At this point it is no longer a mere illustration; it becomes an argument by analogy (or metaphor; Thouless, 1968).
Quite often, the richer and more intuitively appealing the analogy, the more true the claim being made appears to be. This can occur to such an extent that the analogy becomes a potent mind-trap and dominates all thinking on the issue. This is an error. Scientific arguments should be based on evidence, not analogy. The role of analogy in science is illustration and communication – it is not a basis for a claim of provisional truth. All analogies provide a degree of similarity to that to which they are applied – this is why they are recruited as illustrations. However, there is also much dissimilarity, and this is often missed (another form of selection bias). Ultimately, every analogy and metaphor will cease to work, so it is crucial that no argument depends solely on its analogy for its claim to truth. As Thouless (1968) goes on to point out:
“Even the most successful analogies in the history of science break down at some point. Analogies are a valuable guide as to what facts we may expect, but are never final evidence as to what we shall discover. A guide whose reliability is certain to give out at some point must obviously be accepted with caution. We can never feel certain of a conclusion which rests only on analogy and we must always look for more direct proof.”
(Thouless, 1968, pp. 142-143)
In some cases the analogy has no direct relevance or implication for the case being argued (the fallacy of argument by irrelevant analogy; a special case of the non-sequitur fallacy). For example, modern creationists and advocates of intelligent design use analogies drawn from human design and engineering to argue for similar patterns in nature. The implication of such a comparison is that a designer must have been involved in the creation of the universe. Here the fallacy is to use a metaphor and analogy of a 'known' designer (i.e., something humans have designed and built) to prove the case of a divine designer. This type of comparison is an irrelevance. In addition, a closer examination often reveals that most pseudoscientific ideas are almost purely metaphorical in nature, form and content. That is to say, there are no reliable data, no firm facts, no evidence – just metaphor. This basically amounts to little more than a nice story – though not necessarily a correct or true one.
A good example of an over-reliance on metaphor and analogy is the 'stone-tape' metaphor that parapsychologists have used to explain ghostly sightings. According to the stone-tape account, human 'energies' and actions are somehow recorded in the immediate atmosphere and stored in the stone of a building or room, which can then be played back 'somehow', in 'some way', as a ghostly manifestation at a later date. The metaphor here is the notion of making and playing back recordings. However, despite its popularity, there is no scientific evidence to support this idea – and there never has been. Indeed, it is not at all clear how such recordings could be made by stone, or how they could be played back. All we are told is that it can occur 'somehow', in 'some way' – even though no plausible physical mechanism exists. This is an example of an over-reliance on a metaphor to support a non-scientific idea. The problem here is that the analogy and metaphor itself can blind the untrained mind to the lack of actual facts and evidence present in the argument.
“The mere fact that the argument is in the form of an analogy is often enough to force the immediate irrational acceptance. There seems to be no other explanation of the extraordinary extent to which otherwise intelligent people become convinced of highly improbable things because they have heard them supported by an analogy whose unsoundness should be apparent to an imbecile”
(Thouless, 1968, p. 146)
- Anecdotes as evidence
Although anecdotal evidence has its place in scientific theory, no theory should be solely dependent on it: anecdotal evidence is a poor and unreliable source of evidence. For example, it is important that any theory of memory can explain the anecdotal experience of forgetting, but the theory should be based not just on anecdotes of forgetting, but on empirical demonstrations of the failure to retain information under controlled conditions. This leads to reliable and valid data on which to build a scientific account of the object of study. Similarly, theories of language need to be able to explain tip-of-the-tongue experiences (where we feel as if what we want to say is just failing to reach our ability to actually say it) and slips of the tongue (where we say a related word instead of the one we meant). However, the anecdotal experience of these instances does nothing to explain why and how they actually occur. These experiences are the products of psychological processes, but the products do nothing to explain the underlying processes themselves. Knowing that we have the phenomenal experience of consciousness does not explain what consciousness is, or how it occurs.
One major problem with pseudoscience is that it places a strong and selective emphasis on anecdotes, and anecdotes alone, as support for its claims and theories. In reality, personal anecdotes alone are not a viable argument against data, facts, theory, empirical observation, and objective measurement. Lots of anecdotes do not support a case any more than a few anecdotes do. This is because all anecdotes are provided via a process which is itself fallible and prone to many sources of error. Anecdotal evidence has its place in scientific theory – but it is no contender as a source of information that can provide a mechanistic understanding of the mental universe. Contrary to the popular saying, the plural of anecdote is not data.
- Lack of explicit mechanisms
Pseudoscience is characterised by a complete lack of viable explicit mechanisms of action for the object being studied. Even if we were to accept some instances as fact, there is still no clear idea how these phenomena would work or how they could work. There is no clear and plausible proposed mechanism for how apparitions are supposed to be recorded in stone, no clear mechanism for how astrology is supposed to influence human behaviour, no clear mechanism for how the mind could survive bodily death or how liquids can hold a memory (as is claimed in homeopathy).
This lack of explicitness is related to some of the other characteristics listed above. For example, an idea that is nebulous is in turn difficult to test (i.e., it cannot be falsified), and an idea can be nebulous due to verbose language (see above). However, even when these factors are not a major concern, there is still a lack of a workable explicit mechanism. Even the best and clearest explanations of homeopathy, apparitions, alternative health, and psychic phenomena still fail badly at outlining a specific mechanism for how they are supposed to work. Although the lack of any mechanism is not, in itself, evidence against the existence of such phenomena, the lack of any plausible mechanism awaiting verification is not particularly convincing evidence that they are genuine either.
There are many areas of experimental science where mechanisms of action are not well understood – however, under these circumstances there will be some factual and accepted knowledge that provides a framework for thinking. In addition, although a mechanism may not be known, candidate mechanisms will be well specified to a level that guides future experimentation and thinking. What counts in science is the ability for a provisional explanation to feasibly account for the phenomena via a proposed mechanism that is more explicit than any other. An explicit mechanism should also generate clear predictions and these predictions should be testable (and falsifiable). The mechanism should say why the phenomenon occurs, what the principal components are, how it works, and what it does.
In contrast, parapsychology has been actively investigating paranormal and psychic phenomena since the 1940s – and yet, despite the decades that have passed, no reliable evidence, and no explicit and plausible mechanism, has ever been produced to suggest that paranormal phenomena are real, objective events.
- Special pleading (elusive evidence)
Proponents of pseudoscience often claim that scientific testing is not the best way to test their claim; there is something special about the claim that makes it different from other disciplines. This special pleading is often accompanied by other fallacious reasoning, such as scientists being too 'closed-minded' to see the truth, or that 'science has been wrong before'.
These claims invariably arise because when the pseudoscience is tested by scientists, the claimed results do not occur. One of the hallmarks of science is not only producing results and having those results reviewed by peers and published, but that those results can be reproduced (under controlled conditions) by other scientists independently.
This point is an important one. If something is real it will manifest itself regardless of who's doing the testing or whether the testers believe in it or not. If the phenomenon requires special (i.e. non-scientific) conditions or the testers have to believe in it for it to show up then it's highly likely that the phenomenon is not real and is merely a result of wishful thinking and confounding factors introduced by non-scientific testing.
Genuine phenomena will stand up to scrutiny.
- Conspiracy theory
Pseudosciences are often portrayed as real truths that "they" don't want you to know about. They, whoever they may be, are accused of suppressing the evidence for their own self-interest. For example, the 'real' cure for cancer is suppressed by 'big pharma' (pharmaceutical companies) so that they can keep making huge profits selling useless drugs whilst people die.
Just how the conspiracy theorists come to be aware of this suppressed evidence, however, is never explained.
The marketing of 'miracle products' using a direct-to-consumer model is often done with a secrecy pitch. This adds to the appeal of the product (you shouldn't really have something this good – 'they' don't want you to have it); but it also helps traders hide the fact that their pills, potions or devices have no evidence trail behind them proving they can do what is claimed of them.
The defining feature of science is that hypotheses and theories that are put forward must be capable of being tested and shown to be false should they actually be so - this is the scientific criterion of falsifiability. As our examples above show, the tell-tale sign of pseudoscience is that the claims, theories, or products are always pitched in a manner that leads them away from being testable and falsifiable.
Pseudoscience, then, can be described as theories, methodologies or practices that claim to be scientific but which are presented in such a manner that they cannot be tested or falsified empirically.