Research

Our project is called "Formal Epistemology". We deal with topics from both the traditional theory of knowledge and the modern epistemology of belief. We do so in part by employing the formal tools of logic and mathematics.

Knowledge and Justification

It is better to know a truth than merely to believe it. But why? Conceptual analysis alone does not necessarily answer this question. The reason is that, in the end, conceptual analysis is based on the analyzing philosopher's intuitions. Gettier (1963) refutes the tripartite definition of knowledge by providing an example of a justified true belief that intuitively is not knowledge. Ditto Lewis (1996), who appeals to our intuitions in claiming knowledge to be infallible. But just because my intuitions tell me that no two parallels have a point in common, it does not follow that this is actually the case.

Epistemic consequentialism (Percival 2002, Stalnaker 2002) may provide an answer that does not depend on anyone's intuitions. It is compatible with both formal learning theory (Kelly 1996) and virtue epistemology (Pritchard 2005). It says that it is meaningless to call a norm (say, "You should prefer knowledge over mere true belief!") justified - just as it is meaningless to say that Germany has a higher population without saying higher than what. The justification of a norm is not a property of the norm in question. Rather, it is a relation between the norm and a goal that is to be furthered by the norm. From this point of view it is not even irrational as such to believe a contradiction. It is irrational to believe a contradiction only if truth is the goal - not if falsehood is the goal.

The project Knowledge and Justification investigates which answer(s) we get from this position to the question why knowledge is better than mere true belief.

Belief and Its Revision (Benjamin Bewersdorf)

The formal epistemology of belief takes belief to be a yes-or-no affair. A person's beliefs simply correspond to a set of propositions.

It is interesting to see what this set looks like if we revise it by new information in the form of a proposition. If the new information is compatible with the totality of the old beliefs, we may simply add it as a conjunct. This simple rule is also handled by the update rules of subjective probability theory.
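
To make the probabilistic side of this concrete, here is a minimal sketch, in Python, of updating a credence function defined over a small set of possible worlds by Strict Conditionalization. The world names and the particular numbers are illustrative assumptions, not part of the theory.

    # A minimal sketch of Strict Conditionalization over a finite set of worlds.
    def conditionalize(credence, evidence):
        """Shift all probability onto the evidence (a set of worlds) and renormalize."""
        p_evidence = sum(p for w, p in credence.items() if w in evidence)
        if p_evidence == 0:
            raise ValueError("evidence has prior probability 0; the update is undefined")
        return {w: (p / p_evidence if w in evidence else 0.0)
                for w, p in credence.items()}

    # Prior credences over four illustrative worlds.
    prior = {"w1": 0.4, "w2": 0.3, "w3": 0.3, "w4": 0.0}

    # Information compatible with the old beliefs: renormalize on it.
    print(conditionalize(prior, {"w1", "w2"}))  # w1: 0.571..., w2: 0.428..., rest 0.0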

If, however, the new information is incompatible with the set of old beliefs, update rules such as Strict Conditionalization break down: conditionalizing on information to which the agent assigns probability zero is undefined. Thus the question arises how to consistently combine the old beliefs with the new information. This question is addressed by AGM belief revision theory (Alchourrón & Gärdenfors & Makinson 1985, Rott 2001). The basic idea is that the new belief set should contain the new information and retain as much of the old beliefs as the requirement that the new belief set be consistent allows.
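
The following sketch illustrates this basic idea semantically, in the spirit of the sphere models often used to interpret AGM revision: worlds are ordered by plausibility, and revising by new information means retreating to the most plausible worlds compatible with it. The worlds and the plausibility ranks are illustrative assumptions, not part of the theory.

    # A minimal semantic sketch of AGM-style revision: keep the most plausible
    # worlds that are compatible with the new information. Lower rank = more plausible.
    def revise(plausibility, evidence):
        """Return the most plausible worlds compatible with the evidence."""
        candidates = {w: r for w, r in plausibility.items() if w in evidence}
        if not candidates:
            return set()  # the evidence rules out every world: no consistent revision
        best = min(candidates.values())
        return {w for w, r in candidates.items() if r == best}

    # The agent believes whatever holds in all rank-0 worlds (here: only w1).
    plausibility = {"w1": 0, "w2": 1, "w3": 1, "w4": 2}

    # Information compatible with the old beliefs: nothing has to be given up.
    print(revise(plausibility, {"w1", "w2"}))        # {'w1'}

    # Information incompatible with the old beliefs: retreat to the most
    # plausible worlds that verify it, keeping as much of the old view as possible.
    print(revise(plausibility, {"w2", "w3", "w4"}))  # {'w2', 'w3'}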

The project Belief and Its Revision deals with the question whether it is degrees of belief or categorical beliefs that are basic. A further goal is to get rid of the idealizing assumption that we revise our belief system only if there is a logical contradiction between our old beliefs and the new information.

Degrees of Belief and Belief (Alexandra Zinke, Franz Huber)

The formal epistemology of belief is divided into two research programs. The formal epistemology of qualitative belief focuses on the categorical notion of yes-or-no belief and the laws it should obey. The formal epistemology of quantitative belief focuses on degrees of belief and the laws they should obey.

Intuitively, there is a close connection between these two notions, known as the Lockean thesis (Foley 1992, Hawthorne & Bovens 1999): categorical belief is sufficiently high degree of belief. Surprisingly, almost no theory of degrees of belief induces in a natural way a reasonable notion of categorical belief satisfying the Lockean thesis. This is also true of approaches which represent an agent's epistemic state as a set of degree of belief functions (Levi 1980) rather than a single one, although there are attempts to define a categorical notion of belief (Roorda 1995).
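
The lottery paradox gives a quick illustration of the difficulty. In the minimal sketch below, with an illustrative Lockean threshold of 0.9 and a fair 100-ticket lottery, the agent believes of each ticket that it loses, yet does not believe the conjunction of these propositions, so the induced beliefs are not closed under conjunction.

    # The Lockean thesis applied to a fair lottery; the threshold and the
    # lottery size are illustrative assumptions.
    THRESHOLD = 0.9
    N_TICKETS = 100

    # Degree of belief that ticket i loses.
    def prob_ticket_loses(i):
        return 1 - 1 / N_TICKETS                 # = 0.99 for every ticket

    believed = [i for i in range(N_TICKETS) if prob_ticket_loses(i) > THRESHOLD]
    print(len(believed))                          # 100: each single proposition is believed

    prob_all_tickets_lose = 0.0                   # some ticket must win
    print(prob_all_tickets_lose > THRESHOLD)      # False: the conjunction is not believed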

One exception is ranking theory (Spohn 1988), although there is a price to be paid here: the degrees of belief in ranking theory are of a purely ordinal nature and thus not rich enough to be used for decision making (see, however, Giang & Shenoy 2000).
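
A minimal sketch of how ranking theory manages this: worlds receive ranks (degrees of disbelief), at least one world has rank 0, the rank of a proposition is the minimum rank of its worlds, and a proposition is believed just in case its negation has a rank of at least some positive threshold. The worlds and ranks below are illustrative assumptions.

    # A minimal sketch of a ranking function in Spohn's sense.
    WORLDS = {"w1", "w2", "w3", "w4"}
    kappa_world = {"w1": 0, "w2": 1, "w3": 2, "w4": 2}   # at least one world has rank 0

    def kappa(prop):
        """Rank (degree of disbelief) of a proposition, i.e. a set of worlds."""
        return min((kappa_world[w] for w in prop), default=float("inf"))

    def believed(prop, z=1):
        """Lockean-style belief: a proposition is believed iff its negation is
        disbelieved to degree at least z (z = 1 gives plain positive disbelief)."""
        return kappa(WORLDS - prop) >= z

    print(believed({"w1", "w2"}))   # True:  its negation {w3, w4} has rank 2
    print(believed({"w2", "w3"}))   # False: its negation contains the rank-0 world w1
    # Every believed proposition contains every rank-0 world, so the induced
    # belief set is consistent and deductively closed.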

The goal of the project Degrees of Belief and Belief is to develop a unified account of epistemic states that comprises both categorical belief and degrees of belief and verifies the Lockean thesis.

Theories of Degrees of Belief (Franz Huber)

Theories of degrees of belief can be evaluated according to the plausibility-informativeness theory (Huber 2008). The latter claims that a hypothesis is acceptable if it is sufficiently plausible and sufficiently informative. For instance, tautologies, while maximally plausible, are too uninformative to be acceptable, while complete hypotheses may be too implausible.

A tautological theory of degrees of belief says no more than that a degree of belief function is a function, thereby excluding nothing. A complete theory of degrees of belief specifies one single function as the correct degree of belief function - say, a logical probability measure à la Carnap, or the truth value function corresponding to the actual (or, perhaps, a merely possible) world. Theories such as subjective probability theory lie somewhere between these extremes.

Interestingly, probability measures are a special case of both Dempster-Shafer belief functions and interval-valued probabilities - just as Spohn's (1990) ranking functions are a special case of both Weydert's (1994) general belief measures and Huber's (2006) ranking functions. In addition, Dempster-Shafer belief functions and interval-valued probabilities can be represented as convex sets of probabilities, which in turn satisfy the conditions of Halpern's (2003) plausibility measures.
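
The first of these special-case relations can be made vivid with a minimal sketch: a Dempster-Shafer belief function is determined by a mass function on subsets of a frame, and when every focal element is a singleton, belief and plausibility coincide and behave like a probability measure. The frame and the mass assignments below are illustrative assumptions.

    # A minimal sketch of Dempster-Shafer belief and plausibility over a small frame.
    def belief(mass, prop):
        """Bel(A): total mass of the focal elements contained in A."""
        return sum(m for focal, m in mass.items() if focal <= prop)

    def plausibility(mass, prop):
        """Pl(A): total mass of the focal elements compatible with A."""
        return sum(m for focal, m in mass.items() if focal & prop)

    A = frozenset({"b"})

    # A genuinely Dempster-Shafer mass function: some mass sits on a non-singleton.
    ds_mass = {frozenset({"a"}): 0.5, frozenset({"b", "c"}): 0.5}
    print(belief(ds_mass, A), plausibility(ds_mass, A))      # 0.0 0.5 -- they come apart

    # A Bayesian mass function: every focal element is a singleton.
    prob_mass = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.3, frozenset({"c"}): 0.2}
    print(belief(prob_mass, A), plausibility(prob_mass, A))  # 0.3 0.3 -- a probability measure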

Hence comparing different theories of degrees of belief is similar to comparing two hypotheses one of which entails the other. The theory of subjective probabilities is logically stronger, and hence more informative, than the theory of Dempster-Shafer belief functions. On the other hand, the theory of Dempster-Shafer belief functions is more plausible than the theory of subjective probabilities. The question to be answered by the project Theories of Degrees of Belief is which of these accounts achieves the best trade-off between plausibility and informativeness.

Degrees of Belief and Justification (Franz Huber)

There are several competing theories of degrees of belief. This raises the question of their justification as normative accounts: why should an ideally rational epistemic agent's degrees of belief satisfy the laws of one rather than another such theory, or any at all?

The traditional arguments for the theory of subjective probabilities - the Dutch Book Argument, Cox's theorem, and the representation theorems of measurement theory - are not sufficient for its justification as a normative theory of epistemic states. The Dutch Book Argument, even in its depragmatized form, faces a series of objections, ranging from presupposing too close a link between believing and betting to being unsound (Hájek 2005). Cox's theorem depends on several purely mathematical assumptions whose epistemological significance is unclear. And the representation theorems inherit the drawbacks of operationalism and presuppose principles, such as expected utility maximisation, which are in need of justification themselves.
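
For orientation, here is the core of the Dutch Book Argument in numbers: an agent whose degrees of belief in A and in not-A sum to more than 1, and who regards bets at the corresponding odds as fair, accepts a pair of bets that guarantees a net loss. The credences and the stake in the minimal sketch below are illustrative assumptions; the example also makes plain how tightly the argument ties believing to betting.

    # A minimal numeric illustration of a Dutch Book against incoherent credences.
    CREDENCE_A = 0.6       # degree of belief in A
    CREDENCE_NOT_A = 0.6   # degree of belief in not-A (incoherent: the sum exceeds 1)
    STAKE = 1.0

    # The agent pays credence * stake for each bet that returns the stake if it wins.
    price_paid = (CREDENCE_A + CREDENCE_NOT_A) * STAKE

    for a_is_true in (True, False):
        payoff = STAKE     # exactly one of the two bets pays out, whatever happens
        print(f"A is {a_is_true}: net gain = {payoff - price_paid:.2f}")
        # prints -0.20 in both cases: a guaranteed loss, the 'Dutch Book'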

Against this background Joyce's (1998) attempt to justify the theory of subjective probabilities in purely epistemic terms is extremely important. Given some assumptions on how to measure the inaccuracy of degrees of belief, Joyce is able to show that an agent's degrees of belief avoid unnecessary inaccuracy just in case they satisfy the probability calculus. However, Joyce's argument faces the objection that different inaccuracy measures (all satisfying his assumptions) have the agent adopt different probability measures. Thus the question arises why an agent with a degree of belief function violating the probability calculus should adopt a probability measure in the first place.
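
A small numerical illustration of this dominance result, using the Brier score as one inaccuracy measure among those satisfying Joyce's assumptions: the incoherent credences (0.6, 0.6) in A and not-A are more inaccurate than the coherent credences (0.5, 0.5) in every possible world. The particular numbers are illustrative assumptions.

    # Accuracy dominance with the Brier score as the measure of inaccuracy.
    def brier_inaccuracy(credences, truth_values):
        """Sum of squared distances between credences and truth values (1 or 0)."""
        return sum((c - t) ** 2 for c, t in zip(credences, truth_values))

    incoherent = (0.6, 0.6)   # credences in A and not-A, summing to more than 1
    coherent = (0.5, 0.5)     # a probability measure over {A, not-A}

    for world in ((1, 0), (0, 1)):                   # A true, or not-A true
        print(brier_inaccuracy(incoherent, world),   # 0.52 in both worlds
              brier_inaccuracy(coherent, world))     # 0.50 in both worlds
    # The coherent credences are strictly less inaccurate however the world turns out.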

Huber (2007) provides an argument for the thesis that degrees of belief should obey the ranking calculus (including update rules for evidence of different formats): they should do so because doing so is necessary and sufficient for always having categorical beliefs that are consistent and deductively closed.

These attempts to justify various theories of degrees of belief will be further pursued in the project Degrees of Belief and Justification.

Belief Revision in Dynamic Epistemic Logic and Ranking Theory (Peter Fritz)

Epistemic states such as knowledge and belief can be modeled in epistemic logic. Adding a dynamic component produces dynamic epistemic logic, which allows us to represent the epistemic states and actions of multiple epistemic agents (van Ditmarsch & van der Hoek & Kooi 2007). Recently, several ways have been proposed in which belief revision can be incorporated into such a logic (e.g. van Benthem 2007). Ranking theory (Spohn 1988) describes a specific kind of belief revision that is able to account for iterated revision (Hild & Spohn 2008).
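
The dynamic component can be illustrated with a minimal sketch of a public announcement, the simplest action studied in dynamic epistemic logic: announcing a proposition removes all worlds in which it is false, and an agent knows a proposition at a world just in case it holds at all worlds she cannot distinguish from that one. The two-world model below, with a single agent who is unsure whether p holds, is an illustrative assumption.

    # A minimal sketch of knowledge and public announcement in a two-world model.
    worlds = {"w_p": {"p"}, "w_notp": set()}     # the atoms true at each world
    indist = {"a": {("w_p", "w_notp"), ("w_notp", "w_p"),
                    ("w_p", "w_p"), ("w_notp", "w_notp")}}

    def knows(agent, atom, world, worlds, indist):
        """The agent knows the atom at the world iff it holds at every world
        the agent cannot distinguish from it."""
        return all(atom in worlds[v] for (u, v) in indist[agent] if u == world)

    def announce(atom, worlds, indist):
        """Public announcement: keep only the worlds at which the atom is true."""
        kept = {w: val for w, val in worlds.items() if atom in val}
        new_indist = {ag: {(u, v) for (u, v) in rel if u in kept and v in kept}
                      for ag, rel in indist.items()}
        return kept, new_indist

    print(knows("a", "p", "w_p", worlds, indist))    # False: a cannot rule out w_notp
    worlds2, indist2 = announce("p", worlds, indist)
    print(knows("a", "p", "w_p", worlds2, indist2))  # True after the announcement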

The project Belief Revision in Dynamic Epistemic Logic and Ranking Theory is concerned with the relation between belief revision in ranking theory and belief revision in dynamic epistemic logic, specifically with the question of how, and to what extent, the account of belief revision given in ranking theory can be incorporated into dynamic epistemic logic.

Understanding Normality (Corina Strößner)

In the last two decades normality has become an important subject of logical research. The logics of normality suggested so far, for example Frank Veltman's Defaults in Update Semantics or Boutilier's Conditional Logics of Normality, assume an ordering of more or less normal possibilities. In this sense something is normally the case if it is true in the most typical circumstances. On the other hand, normality is often associated with majority. In this sense something is normally the case if it is true in most circumstances. The first is qualitative normality, the second quantitative normality. One crucial difference is the validity of the rule of conjunction. While "Normally A and normally B. Therefore: Normally A and B" is undoubtedly valid for qualitative normality, it is invalid for quantitative normality, as the sketch below illustrates. This project discusses the philosophical background of these logically different approaches to normality statements.
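
In the minimal sketch below, each of A and B holds in 60 of 100 illustrative cases, so each is normally the case in the majority sense, while their conjunction holds in only 20 cases and hence is not. The case numbers are an assumption made purely for illustration.

    # Quantitative (majority) normality and the failure of the conjunction rule.
    CASES = range(100)
    A = set(range(0, 60))      # A holds in 60% of the cases
    B = set(range(40, 100))    # B holds in 60% of the cases

    def normally(prop, cases=CASES, threshold=0.5):
        """Quantitative normality: the proposition holds in most of the cases."""
        return len(prop & set(cases)) / len(cases) > threshold

    print(normally(A), normally(B))   # True True
    print(normally(A & B))            # False: A-and-B holds in only 20% of the cases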

 

References

Alchourrón, Carlos E. & Gärdenfors, Peter & Makinson, David (1985), On the Logic of Theory Change: Partial Meet Contraction and Revision Functions. The Journal of Symbolic Logic 50, 510-530.

Cox, Richard T. (1946), Probability, Frequency, and Reasonable Expectation. American Journal of Physics 14, 1-13.

Foley, Richard (1992), The Epistemology of Belief and the Epistemology of Degrees of Belief. American Philosophical Quarterly 29, 111-124.

Gettier, Edmund L. (1963), Is Justified True Belief Knowledge? Analysis 23, 121-123.

Giang, Phan H. & Shenoy, Prakash P. (2000), A Qualitative Linear Utility Theory for Spohn's Theory of Epistemic Beliefs. In C. Boutilier & M. Goldszmidt (eds.), Uncertainty in Artificial Intelligence 16. San Francisco: Morgan Kaufmann, 220-229.

Hájek, Alan (2005), Scotching Dutch Books? Philosophical Perspectives 19, 139-151.

Halpern, Joseph Y. (2003), Reasoning about Uncertainty. Cambridge, MA: MIT Press.

Hawthorne, James & Bovens, Luc (1999), The Preface, the Lottery, and the Logic of Belief. Mind 108, 241-264.

Hild, Matthias & Spohn, Wolfgang (2008), The Measurement of Ranks and the Laws of Iterated Contraction. Artificial Intelligence 172, 1195-1218.

Huber, Franz (2006), Ranking Functions and Rankings on Languages. Artificial Intelligence 170, 462-471.

------ (2007), The Consistency Argument for Ranking Functions. Studia Logica 86, 299-329.

------ (2008), Assessing Theories, Bayes Style. Synthese 161, 89-118.

Joyce, James M. (1998), A Non-Pragmatic Vindication of Probabilism. Philosophy of Science 65, 575-603. 

Kelly, Kevin T. (1996), The Logic of Reliable Inquiry. Oxford: Oxford University Press.

Levi, Isaac (1980), The Enterprise of Knowledge. Cambridge, MA: MIT Press.

Lewis, David K. (1996), Elusive Knowledge. Australasian Journal of Philosophy 74, 549-567.

Percival, Philip (2002), Epistemic Consequentialism. Proceedings of the Aristotelian Society, Supplementary Volume 76, 121-151.

Pritchard, Duncan (2005), Epistemic Luck. Oxford: Oxford University Press.

Roorda, Jonathan (1995), Revenge of Wolfman: A Probabilistic Explication of Full Belief. http://www.princeton.edu/~bayesway/pu/Wolfman.pdf

Rott, Hans (2001), Change, Choice, and Inference: A Study of Belief Revision and Nonmonotonic Reasoning. Oxford: Clarendon Press.

Spohn, Wolfgang (1988), Ordinal Conditional Functions: A Dynamic Theory of Epistemic States. In W.L. Harper & B. Skyrms (eds.), Causation in Decision, Belief Change, and Statistics II. Dordrecht: Kluwer, 105-134.

------ (1990), A General Non-Probabilistic Theory of Inductive Reasoning. In R.D. Shachter & T.S. Levitt & J. Lemmer & L.N. Kanal (eds.), Uncertainty in Artificial Intelligence 4. Amsterdam: North-Holland, 149-158.

Stalnaker, Robert C. (2002), Epistemic Consequentialism. Proceedings of the Aristotelian Society, Supplementary Volume 76, 153-168.

van Benthem, Johan (2007), Dynamic Logic for Belief Revision. Journal of Applied Non-Classical Logics 17, 129-155.

van Ditmarsch, Hans & van der Hoek, Wiebe & Kooi, Barteld (2007), Dynamic Epistemic Logic. Dordrecht: Springer.

Weydert, Emil (1994), General Belief Measures. In R. Lopez de Mantaras & D. Poole (eds.), Proceedings of Uncertainty in Artificial Intelligence 1994. San Mateo, CA: Morgan Kaufmann, 575-582.