On the necessity of trust
Joshua Loo
A combination of scientific and philosophic observations indicates that there are certain theoretic limitations to one’s knowledge. For example, the uncertainty principle shows that there is a trade-off between the precision of one’s knowledge of a particle’s momentum and of its position.1 Gödel’s incompleteness theorems2 impose certain limits on mathematic knowledge that are beyond the scope of this article.3
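To make the first of these limits concrete, the trade-off can be stated in its standard modern form (a later sharpening of Heisenberg’s original, heuristic argument), with Δx and Δp the standard deviations of position and momentum:

\[
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2},
\]

so that arbitrarily precise knowledge of the one quantity forces a corresponding imprecision in the other.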
Some of us will be fortunate enough to reach these limits of knowledge. However, most of us will not; none will reach all these boundaries. Consider, for example, Heisenberg, or Gödel: both were experts in their respective fields, yet both relied, throughout their lives, upon expertise from fields outside their own. Gödel, at some point in his life, probably consumed some sort of medicine. He was almost certainly intellectually capable of understanding why such medicines worked. Nevertheless, it is possible that he did not. Some questions emerge, separate from the theoretic possibility of knowledge of the workings of these medicines, foremost amongst which is this: should Gödel have taken the medicine without knowing its precise workings?
More abstractly, what is one to do with known unknowns? Does one trust those who have studied their field more, even when ignorant of the contents of such study? In most analogous circumstances, the option open to Gödel, that is, understanding how such medicine works, is unavailable, for a variety of reasons: science is often difficult to understand, and not all of us are capable of understanding all necessary fields of science; most of us are insufficiently industrious to learn of the workings of every single apparatus we use; there are limitations on the time we have—should one have to work, one already spends most of one’s waking hours doing something other than acquiring such understanding; and so on.
The basis on which societal trust in, for example, medicine, or climate science, or particle physics, emerges is not scientific but social; it must reflect the fact that in most circumstances, for one reason or another, one cannot use the best truth-seeking apparatus available. We take medicine not due to trust in science so much as trust in scientists.
Judgement of institutions, though sub-optimal, is often far easier than judgement of the claims that these institutions propagate. It may, for example, be difficult to determine whether Xinhua is always publishing true stories. It is easier, however, to note that Xinhua is funded by the Chinese government, which uses it as a propaganda outlet. These two sorts of judgement are not mutually exclusive. However, the second per se is not without its uses.
A similar sort of judgement must necessarily be used in evaluating most reporting. Few of us will have the chance to see many of the events described in newspapers; none of us will see all of them. Coverage of matters with which one is unfamiliar by direct study must be evaluated by other factors. As societies become more complex, epistemic dependence on others increases—that is, dependence on other sources of truth, the veracity of whose pronouncements is difficult to verify directly. Often, this dependence is not simply optional, but unavoidable: even a reporter for a newspaper does not know what the other reporters are doing, or whether they too are reporting the truth.
Indeed, such reliance permeates not just the news cycle but many other aspects of life. Academics trust other academics: they certainly trust that data collected by others are accurate, and, because of time constraints, it is inevitable that they will not verify all that their colleagues have found. Many mathematicians will not know of the metamathematic struggles of their colleagues, but simply accept certain axioms; some of their trust is, if not blind, not based on personal verification. Yet one cannot take it as axiomatic that academia will produce good results: the replication crisis4, deliberate fraud5,6, and p-hacking7, to name a few problems, suggest that, at the very least, academic competence cannot be assumed. Systemic concerns are at least as important as internal metrics of academic success.
In schools, and to a lesser extent universities, we trust textbooks, teachers and curricula, without always being able to verify their claims. In history, we can hardly spatially, let alone temporally, be in a position to verify what occurred; even the tools of the modern historian—the analysis of sources and primary research—are absent from the modern textbook-based history course. The same applies to many other subjects: occasionally pupils are encouraged to ‘prove’ some assertion or other that has been included in the scientific curriculum, but normally one simply gathers a few data that, only with unverified interpretation, seem to ‘prove’ the initial claim.
Quite apart, therefore, from questions as to whether there is an objective reality, are questions as to whether it is possible to reconcile different possible realities from different perspectives. It is at least likely that a significant part of different persons’ experiences will overlap, even given something akin to the strong programme: aeroplanes fly, trains move, computers operate, tidal mechanics seem broadly consistent, and so on, suggesting that, at the very least, modelling an objective, or reference, reality is useful; this helps in epistemic modelling involving different actors.
These considerations are particularly relevant in light of concerns about false news. Governments worry that some people do not accept the truth, instead consuming false news. This is the implicit epistemic paradigm of at least one body concerned with falsehoods.8 The sort of model that such bodies espouse involves a dogmatic acceptance of truth from the bodies that have replaced the church as the principal source of information in European countries, and religion in general in others; it must be axiomatic that approved media are correct. Thus the web-page9 to which the hyper-link marked ‘[w]hy you can trust the BBC’ at the bottom of BBC News leads says, presumably to confirm that it is trustworthy, that ‘the BBC is seen as by far the most trusted and impartial news provider in the UK’, that it has ‘its own Editorial Guidelines’, and so on, ignoring the absurdity of relying upon an organisation to certify itself as trustworthy. Of course, within this model, the approved media will sometimes make mistakes; these are accidental, and do not impugn the bodies that make them, except to the extent that procedures must be improved and, in egregious circumstances, a few people fired. At the same time, it is enough to point to a few falsities from other institutions to justify the other side of the aforesaid axiom: they are, a priori, wrong, and these falsities are merely icing on the cake; yet these falsities are also propagated despite their limitations as evidence, perhaps in a tacit admission that the axiom may be sub-optimal.
What is a better model? Perhaps it is best to first ask why an epistemic model of this sort is needed. Rawls provides a reasonable account of the need for mutually beneficial coöperation. Coöperation necessitates communal decision-making; preferences are created by a combination of value judgements and beliefs about the probability distributions of outcomes. Thus, if there is no agreement on what would be, were a course of action followed, there can be no agreement on what should be, except in the rare case that two differing sets of value judgements and probability distributions were to coïncide in their recommendations. This, of course, is not particularly probable, and so cannot be relied upon. Epistemic bubbles make collective decision-making extremely difficult, since there are no common beliefs as to what is. Hence disagreement spreads not only from value judgements, but also from empirical beliefs, which are often moulded to follow value judgements unless they are carefully established otherwise.
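This decomposition can be made precise in the standard expected-utility form (a sketch only; the argument here does not depend on this particular formalism). If \(U(o)\) encodes an actor’s value judgements over outcomes \(o\), and \(P(o \mid a)\) her empirical beliefs about the outcomes of an action \(a\), her preference between actions is determined by

\[
V(a) \;=\; \sum_{o} P(o \mid a)\, U(o).
\]

Two actors agree on a course of action only if their respective \(V\) rank the same action highest, which in general requires agreement on both factors, not merely on \(U\); hence the rarity of the coïncidence noted above.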
Specialisation requires mutual trust: builders must trust engineers, physicists must trust mathematicians, mathematicians logicians, engineers other engineers, and so on. Since each domain is vast, and there are a great number of such domains, it is not possible for an individual to master all of the work that has been done to enable even simple work to occur: a librarian, for example, would struggle to learn both all the processes that enable the construction of the plastic used in the library’s equipment and the inner workings of the software used to manage loans. Thus an epistemic model is needed to enable trust, so as to enable coöperation.
How can trust be encouraged? First, social mobility helps to create trust in institutions. One is more likely to trust a relative or friend than a distant bureaucrat who is neither. More importantly, it is far easier to burst or merge epistemic bubbles with greater surfaces of contact. That is to say that if two epistemic bubbles contact each other through large numbers of people, and both have some reasonably good intuitive approximation of the results of the study of probability, the two bubbles are more likely to trust each other. If, for example, one hears a single civil servant insisting that the deep state, as conceived by some of those who propagate ‘conservative news’10,11, does not exist, it may be that this person is lying, especially on television rather than in conversation. If, however, there are many such civil servants, and they all have little reason to lie, because they are acquaintances or friends, both the perceived and the true probability of such a conspiracy, from the perspective of the acquaintance, decrease.
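The intuition admits a simple, if idealised, calculation (the independence assumption is mine, added for illustration). If each of \(n\) civil servants would independently lie about such a conspiracy with probability at most \(p\), then the probability that all of them are lying is at most

\[
p^{n},
\]

so that even a fairly high individual probability of deceit, say \(p = 0.5\), yields \(p^{10} \approx 0.001\) across ten acquaintances. Real testimony is, of course, correlated, which is why the argument requires that the witnesses ‘have little reason to lie’ rather than relying on mere numbers.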
Second, institutions should acknowledge their failings. If common institutions acknowledge that they are fallible, the perceived significance of their mistakes will decrease, thus increasing trust. This is particularly true of intent. The narrative that the media have deliberately attempted to fool the people of the United States is prevalent in part because the lack of acknowledgement by the media seems to suggest that factual mistakes, for example, were deliberate. Thus, as Paxman echoes Heren—‘why is this lying bastard lying to me?’—so too Trump echoes Paxman: ‘[a]nd the FAKE NEWS [sic] winners are …’ It is far easier to gain trust when mistakes are seen to be mistakes, not conspiracies.
Third, institutional transparency engenders trust. Legislation such as the Freedom of Information Act enables researchers who might ascribe ulterior motives to harmful government action to discover, for example, that it is not harmful, that it has some other beneficial purpose, that it is the product of accidental mal-administration, or that it reflects the capture of government at one level instead of all levels. Sometimes, direct verification is possible. Elementary comprehension of the scientific method, for example, need not be limited to scientists. The risk here is a lapse into dogma: the scientific method works because we are told that it works. The avoidance of such a lapse might even require greater circumspection in mainstream education, and an admission that verification may not always be possible. Yet, though this might not have been a price worth paying while such dogma was still widely accepted, in the age of an internet that creates equally coherent alternative narratives, it is quite possible that there is no price to pay.
W. Heisenberg, “Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik,” Zeitschrift für Physik 43, nos. 3–4 (March 1927): 172–98, doi:10.1007/BF01397280.↩
Kurt Gödel, “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I,” Monatshefte für Mathematik und Physik 38, no. 1 (December 1931): 173–98, doi:10.1007/BF01700692.↩
Our scientific editor remarks that mathematicians desire to know that their theorems are definitely correct. Logicians define a set of axioms, in a given notation, known as the ‘language of the theory’; they use a series of rules to derive new theorems from previous theorems. These derivation rules are entirely mechanical, such that a computer could check all properly written proofs. A set of axioms that manages to represent numbers and their arithmetic is, by the theorems, unable to prove its own consistency, and will be incomplete. Consistency here means that two contradictory statements are never both proven; completeness means that all statements that are true and can be expressed in the language of the theory must be provable.↩
Harold Pashler and Eric-Jan Wagenmakers, “Editors’ Introduction to the Special Section on Replicability in Psychological Science: A Crisis of Confidence?” Perspectives on Psychological Science 7, no. 6 (November 2012): 528–30, doi:10.1177/1745691612465253.↩
John Power, “The Cancer Researcher Catching Scientific Fraud at Rapid Speed,” The Atlantic, April 3, 2018, https://www.theatlantic.com/science/archive/2018/04/jennifer-byrne-science-fraud/557096/.↩
Daniele Fanelli, “How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data,” ed. Tom Tregenza, PLoS ONE 4, no. 5 (May 29, 2009): e5738, doi:10.1371/journal.pone.0005738.↩
Megan L. Head et al., “The Extent and Consequences of P-Hacking in Science,” PLOS Biology 13, no. 3 (March 13, 2015): e1002106, doi:10.1371/journal.pbio.1002106.↩
“Select Committee on Deliberate Online Falsehoods - Causes, Consequences and Countermeasures | Parliament of Singapore,” accessed May 6, 2018, https://www.parliament.gov.sg/sconlinefalsehoods.↩
“Learn How the BBC Is Working to Strengthen Trust and Transparency in Online News,” BBC News, accessed June 2, 2018, https://www.bbc.co.uk/news/help-41670342.↩
“Letter to Social Media Platforms Calling for A Transparency Review,” GOP, accessed June 2, 2018, https://gop.com/letter-to-social-media-platforms-calling-for-a-transparency-review.↩
Of course, the sort of thing that news outlets report is so far removed from the theoretic limits of knowledge that it does not make sense to distinguish ‘conservative’ and hypothetical ‘liberal’ news.↩