Book of Abstracts (LRR1)

Keynote lectures

 

Leen De Vreese

Explanation and Scientific Understanding in Medicine

Starting from an explanatory pluralist view of the medical sciences, and on the basis of a couple of medical case studies that exemplify this approach, I will challenge some of the basic assumptions underlying the approach to the notion of scientific understanding as it is developed in the compilation on this subject by De Regt, Leonelli and Eigner (2009). I will argue (1) that if one starts reasoning from a pragmatic approach to scientific explanation, which already takes the epistemic interests of the researcher into account, it is not the inclusion of the cognizing subject and the context-dependency that makes the difference between scientific understanding and scientific explanation, and (2) that talk in terms of different kinds, types, or forms of scientific understanding is therefore flawed. Rather, I will argue, one should talk about different ways in which cognizing subjects can come to scientific understanding, based on differences in epistemic interests and background knowledge (which imply different perspectives and different "interpretative frameworks", the latter including, but not only, different (kinds of) scientific explanations). The importance of philosophical thought about the notion of scientific understanding then lies in getting a grip on how these different perspectives and "interpretative frameworks" can lead to scientific understanding and how they relate to each other (rather than in getting a grip on different kinds of scientific understanding in themselves).

I will also explain the consequences of this shift in approach to the notion of scientific understanding, for example with respect to the meaning of the notion, the assessment of the quality of scientific understanding, and the relation between scientific understanding and scientific knowledge.

 

Igor Douven

Inference to the Best Explanation: What is it, and why should we care?

Opponents of Inference to the Best Explanation (IBE) have complained that this rule has never properly been explicated, and also that given any explication that does not reduce the rule to Bayes’ rule, it faces serious problems. In my talk, I address both criticisms.

 

Gerhard Schurz

Abductive Belief Revision

I start from the observation that neither belief revision in the AGM tradition nor belief base revision contains mechanisms for learning new hypotheses from new evidence. An account of input-driven abductive belief expansion and revision is developed which captures the learning of new hypotheses from new evidence. The account models procedures of belief revision in science and in common-sense cognition. Abductive expansion and revision functions are described within three specific domains: inductive generalization, factual abduction, and theoretical model abduction. It turns out that abductive belief revision does not satisfy the Levi identity.

 

 

Contributed papers

 

Diderik Batens

A General Recipe for Devising Adaptive Logics for Theoretical Abduction

During the past ten years, different approaches to adaptive logics of abduction have been devised, elaborated, and published. In a technical report, Frederik Van De Putte has shown the defective character of those logics: each has features that make it inadequate as an explication of abduction.

Being one of the two felons responsible for introducing the distinction between practical and theoretical abduction, I shall argue that the notion of practical abduction was a mistake. The name was introduced to refer to an inference that occurs frequently both in everyday reasoning and in the sciences. However, this inference cannot sensibly be called abductive. So I now take it that all abduction is theoretical, viz. boils down to deriving potential explanantia from generalizations and theories on the one hand and explananda on the other.

For theoretical abductions, I shall present a general recipe. The generality is required because the generalizations and theories involved in theoretical abductions may have a multiplicity of underlying logics. These logics reveal whether the generalizations and theories are material or rather nomological in one of several senses, and whether they are counterfactual.

I shall argue that theoretical abduction involves a requirement that pertains to general statements rather than to singular ones. The requirement comes to an implicative connection that displays maximal specificity. The popular requirement on singular statements is erroneously copied from the explication of the mistaken notion of practical abduction. It is pointless with respect to theoretical abductions.

The past confusion seems partly due to the fact that the problem was diagnosed as a logical one rather than as an epistemological one. Also striking is the resemblance between part of the criticism of past adaptive proposals on the one hand and the literature on explanation from the 1960s on the other hand.

 

Mathieu Beirlaen & Bert Leuridan

Discovering Causal Regularities: A Formal Explication

We present a qualitative (non-probabilistic) logic, ELIMr, for the discovery of candidate causal regularities starting from empirical data. Our approach is inspired by Mackie’s account of causes as inus-conditions, which focusses on deterministic causal relations at the generic or type level. From Mill, Mackie borrows the idea that causation is seldom, if ever, an invariable sequence or regularity between a single antecedent (e.g. a short circuit) and a single consequent (e.g. a fire). Instead, it is often the case that the effect P occurs when some conjunction of factors (e.g. ABC; a short circuit, the presence of oxygen, the presence of inflammable materials) occurs, but not when any of these conjuncts fails to occur. Moreover, alternative conjunctions of factors (e.g. the conjunctions DGH and JKL) may also be followed invariably by P. A, in this example, is an insufficient but non-redundant part of an unnecessary but sufficient condition for P. In short, using the first letters of the italicized words, it is an inus-condition for P.

Mackie stresses the fact that our knowledge of complex causal regularities is seldom, if ever, complete. “What we know are certain elliptical or gappy universal propositions.” (Mackie, 1974, 66) Moreover, he writes that “the elliptical character of causal regularities as known is closely connected with our characteristic methods of discovering and establishing them: it is precisely for such gappy statements that we can obtain fairly direct evidence from quite modest ranges of observation” (Mackie, 1974, 68). The adaptive logic ELIMr to be presented in this paper will serve as an explication of Mackie’s views on these ‘characteristic methods’. ELIMr consists of two preliminary logics: ELIr and Mr. ELIr allows one to derive logical equivalences of a particular, Mackie-style type from empirical data. Mr then serves to minimize these equivalences; intuitively, Mr serves to throw out redundant factors.

In presenting the logic ELIMr, we will pay special attention to how it connects to some other adaptive logics for explicating defeasible reasoning. In particular, we will discuss its relation to logics for inductive generalization (see e.g. Batens (2011)) as well as its relation to logics for explicating abductive reasoning (see e.g. Meheus & Batens (2006)). We will also compare it with an algorithm by Michael Baumgartner which, like ELIMr, explicates the discovery of deterministic causal structures (Baumgartner, 2009).
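
As a rough illustration of the kind of output such regularity discovery aims at (a toy analogue in the spirit of Mackie's fire example, not the logic ELIMr itself), the following Python sketch enumerates conjunctions of factors that are sufficient for the effect in a small invented dataset and keeps only the minimal ones, i.e. throws out redundant factors. The data, factor names, and procedure are illustrative assumptions only.

```python
from itertools import combinations

# Toy data in the spirit of Mackie's fire example: S = short circuit,
# O = oxygen present, I = inflammable material present, L = lightning,
# F = fire. All rows and factor names are invented for illustration.
rows = [
    {"S": 1, "O": 1, "I": 1, "L": 0, "F": 1},
    {"S": 1, "O": 1, "I": 0, "L": 0, "F": 0},
    {"S": 0, "O": 1, "I": 1, "L": 1, "F": 1},
    {"S": 1, "O": 0, "I": 1, "L": 0, "F": 0},
    {"S": 0, "O": 1, "I": 1, "L": 0, "F": 0},
]
factors = ["S", "O", "I", "L"]

def sufficient(conj, data):
    """A conjunction of factors is sufficient for F if every row satisfying it has F."""
    relevant = [r for r in data if all(r[f] == 1 for f in conj)]
    return bool(relevant) and all(r["F"] == 1 for r in relevant)

# Collect all sufficient conjunctions, then drop those containing a
# sufficient proper sub-conjunction (i.e. throw out redundant factors).
suff = [set(c) for n in range(1, len(factors) + 1)
        for c in combinations(factors, n) if sufficient(c, rows)]
minimal = [c for c in suff if not any(s < c for s in suff)]
print(minimal)  # two alternative minimal sufficient conditions: {'L'} and {'S', 'O', 'I'}
```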

References

Batens, D. (2011). Logics for qualitative inductive generalization. Studia Logica, 97:61–80.

Baumgartner, M. (2009). Uncovering deterministic causal structures: a Boolean approach. Synthese, 170(1):71–96.

Mackie, J. L. (1974). The Cement of the Universe: A Study of Causation. Clarendon Press, Oxford.

Meheus, J. and Batens, D. (2006). A formal logic for abductive reasoning. Logic Journal of the IGPL, 14:221–236.

 

Peter Brössel

On the role of explanatory and systematic power in scientific reasoning

The talk has the following structure. Part 1 introduces various probabilistic measures of explanatory power that have been discussed in the literature from Popper (1959) to Crupi and Tentori (2012). These measures enable us to compare the strength of the different explanations provided by different hypotheses and thus to define IBE. Common features of all these measures are (i) that they presuppose a notion of explanation and (ii) that the application of these measures for quantifying the level of explanatory power presupposes that the hypothesis in question is indeed an explanation for the evidence. In the context of theory choice, this presupposition is usually not satisfied: even though the hypothesis might explain parts of the (total) evidence, almost no hypothesis explains all the evidence.

Thus, Part 2 discusses what the proposed measures quantify if the above presupposition is not satisfied, i.e., if the hypothesis does not explain the evidence. It is argued that in this case they measure the systematic power of the hypothesis with respect to the evidence. It is also argued that in the context of theory choice, we should take into account the entire systematic power of the hypotheses, not just their explanatory power. In addition, the corresponding inference schema, Inference to the Best Systematization (IBS), is defined.

Part 3 investigates whether it is possible to provide a vindication of systematic power as a criterion of theory choice and of the inference schema IBS. It is argued that this is indeed the case. In particular, Part 3 demonstrates that in science, systematic power is a very fruitful criterion for theory choice: after finitely many pieces of evidence, and for every piece of evidence thereafter, (i) true hypotheses display a higher systematic power than false hypotheses, and (ii) logically stronger true hypotheses display a higher systematic power than logically weaker true hypotheses. Part 3 also demonstrates that the inference schema IBS is a fruitful inference schema in science, since it takes one to the logically strongest true hypotheses among the hypotheses available. The reason why we cannot achieve similar results for explanatory power and IBE is also discussed. Roughly, the reason is that our hypotheses cannot usually be considered an explanation of the total evidence available to us.

Part 4 discusses how to reconcile consideration of explanatory and systematic power with Bayes’ rule. More specifically, even though IBE and IBS are based on a probabilistic measure of systematic power, strictly speaking they are not Bayesian at heart, as they force agents either to accept a hypothesis or to reject it for another, whereas Bayesian epistemology recommends assigning probabilities to the hypotheses under consideration. However, Part 4 shows that Bayes’ rule can be reformulated in such a way that one can see how explanatory power and systematic power both inform the degrees of belief of Bayesian agents. Finally, Part 5 discusses the results achieved and relates them to van Fraassen’s famous criticism of IBE.
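
By way of illustration of the kind of measure surveyed in Part 1, the following Python sketch computes one classical proposal, the measure of explanatory power usually attributed to Popper (1959); the choice of this particular measure and the toy probabilities are illustrative assumptions, not taken from the talk.

```python
# One classical probabilistic measure of explanatory power, usually attributed
# to Popper (1959): E(e, h) = (P(e|h) - P(e)) / (P(e|h) + P(e)).
# The probability values below are toy numbers chosen only for illustration.

def popper_power(p_e_given_h: float, p_e: float) -> float:
    """Explanatory power of h for e; ranges from -1 (maximal disconfirmation) to 1."""
    return (p_e_given_h - p_e) / (p_e_given_h + p_e)

# Two rival hypotheses for the same evidence e, with P(e) = 0.2:
print(popper_power(0.90, 0.2))  # h1 makes e highly expected -> ~0.64
print(popper_power(0.25, 0.2))  # h2 barely raises P(e)      -> ~0.11
```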

 

Szymon Chlebowski & Andrzej Gajda

Abductive question-answer systems

The term ‘abduction’ was originally used to denote a kind of reasoning in which a new, hypothetical premise is added with the purpose of increasing the probability of a conclusion that was not accepted before (or was accepted with lesser firmness) [Peirce, 1958; Urbanski, 2009, p. 10]. Abductive reasoning is very often described as interesting because it is considered a kind of reasoning which is fast and, at the same time, enables one to create accurate explanations of surprising phenomena or events.

There is an open discussion about how to understand abduction: as a process or as a product, as a construction or as a selection (Aliseda, 2006). We distinguish the following three concepts:

• An abductive problem is a situation in which it is impossible to derive A from the knowledge base Γ, and the abductive question (abductive problem) arises: which propositions should be added to the knowledge base in order to derive A?

• Abductive reasoning is a process whose goal is to find propositions such that the addition of these propositions to the knowledge base Γ allows us to explain the phenomenon described by A.

• An abductive hypothesis is the result of this process, i.e. a proposition that allows us to derive the sentence A from the knowledge base Γ.

In our presentation we focus on describing an Abductive Question-Answer System (AQAS) for Classical Propositional Logic, and we discuss possible modal extensions. The whole structure is based on Wiśniewski’s Inferential Erotetic Logic (IEL), which enables us to transform an initial abductive question into auxiliary questions (Wiśniewski, 2004). Answers to the auxiliary questions make up the answer we were seeking at the beginning, i.e. the answer to the initial question. Through this process we obtain two kinds of abductive hypotheses: analytic and non-analytic. The first gives us an answer that contains only information from our database, while the second allows us to introduce a new piece of information. We also introduce rules and restrictions for generating abductive hypotheses which guarantee that those hypotheses are significant (the explanandum is not a consequence of the abductive hypotheses alone) and consistent with a given knowledge base. The introduced rules have questions as their premises and propositions as their conclusions. The effect of introducing such rules and restrictions is that the set of possible hypotheses is reduced to an optimal one, i.e. redundant (non-significant or inconsistent) cases are excluded by the restrictions. As a result, the Abductive Question-Answer System generates ‘good’ abductive hypotheses in one step, in contrast to the more standard approach where this process is divided into two parts: generation of hypotheses and their evaluation with qualifying selection (see for example Komosinski et al., 2014).
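
As a rough, brute-force illustration of these constraints (not the one-step, erotetic procedure of AQAS itself), the following Python sketch enumerates literal hypotheses for an invented toy knowledge base and keeps those that close the deductive gap while being consistent with the knowledge base and significant. The knowledge base, explanandum, and candidate set are assumptions made for the example.

```python
from itertools import product

# Toy truth-table illustration of the constraints on abductive hypotheses
# mentioned above (closing the deductive gap, consistency, significance).
# NOT the erotetic AQAS procedure; formulas are invented for the example.

atoms = ["p", "q", "r"]
valuations = [dict(zip(atoms, vs)) for vs in product([True, False], repeat=len(atoms))]

def entails(premises, goal):
    """Classical propositional entailment checked by exhaustive truth-table search."""
    return all(goal(v) for v in valuations if all(f(v) for f in premises))

def consistent(formulas):
    return any(all(f(v) for f in formulas) for v in valuations)

kb = [lambda v: (not v["p"]) or v["q"]]   # knowledge base Γ: p -> q
A = lambda v: v["q"]                      # explanandum A: q

candidates = {                            # candidate hypotheses: single literals
    "p": lambda v: v["p"],   "~p": lambda v: not v["p"],
    "q": lambda v: v["q"],   "~q": lambda v: not v["q"],
    "r": lambda v: v["r"],   "~r": lambda v: not v["r"],
}

good = [name for name, h in candidates.items()
        if entails(kb + [h], A)       # Γ, H |= A: H closes the deductive gap
        and consistent(kb + [h])      # H is consistent with Γ
        and not entails([h], A)]      # significance: A does not follow from H alone
print(good)                           # -> ['p']
```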

Our future work will also cover an implementation of the Abductive Question-Answer System in a programming language. This would enable us to test the system on large datasets and compare it with already existing solutions, like the one presented by Komosinski et al. (2014). This stage has already begun, and Haskell was chosen as the implementation language. The reason for this choice is that Haskell is a purely functional language, which enables us to define the Abductive Question-Answer System in almost the same manner as we introduce it in the logical formalism.

References

Aliseda, A. (2006). Abductive reasoning. Logical investigations into discovery and explanation. Springer, Netherlands.

Komosinski, M., Kupś, A., Leszczyńska-Jasion, D., and Urbanski, M. (2014). Identifying efficient abductive hypotheses using multi-criteria dominance relation. ACM Transactions on Computational Logic (TOCL), 15(4):28:1–28:20.

Peirce, C. S. (1931–1958). Collected Papers of Charles Sanders Peirce. Harvard University Press, Cambridge, MA.

Urbanski, M. (2009). Rozumowania abdukcyjne. Wydawnictwo Naukowe UAM, Poznan.

Wiśniewski, A. (2004). Socratic proofs. Journal of Philosophical Logic, 33(3):299–326.

 

Matteo Colombo & Jan Sprenger

Explanatory value and probabilistic reasoning: An empirical study

The interplay of explanatory, causal, and probabilistic reasoning is tight and multidirectional. While the question of how judgments of explanatory value (should) inform probabilistic inference has received much attention within both philosophy and psychology (e.g., in the literature on abductive inference), the related question of how probabilistic and causal information (should) affect judgments of explanatory value has not been well studied.

One way to address this question is to begin with the hypothesis that explanation is “a two-tiered structure consisting of statistical relevance relations on one level and causal processes and interactions on the other” (Salmon 1997: 475-6). According to this hypothesis, explanatory value depends on the joint contribution of statistical relevance relations and causality: both factors are indispensable to explanatory value, which has also been stressed recently by the literature on probabilistic causation (e.g., Halpern and Pearl 2005; Hitchcock 2008).

In the present paper, we elucidate this hypothesis by addressing whether and under which circumstances judgments of explanatory value are associated with causal and probabilistic characteristics of a potential explanation. To address these issues, we conducted two experimental studies. In both studies, experimental participants read well-constrained problem situations where information was provided about statistical and causal relevance relations between an explanandum and a potential explanatory hypothesis. Participants were asked to make a series of explanatory judgments along several dimensions, including judgments about the explanatory value of the hypothesis and its cognitive and causal relevance, but also about its plausibility, degree of confirmation and its logical relation to the evidence.

In the first study, we examined explanations for a certain event-type, where no alternative explanation was explicitly given, but many potential alternative explanations could be easily produced. We tested three hypotheses: (i) that judgments of explanatory value were reliably predicted by the prior subjective credibility of the candidate explanation; (ii) that judgments of explanatory value were predicted by the degree of statistical relevance of the candidate explanation for the explanandum; and (iii) that judgments of explanatory power were sensitive to the framing of the candidate explanation in causal as opposed to non-causal terms.

In the second study, we examined explanations for singular event-tokens, where exactly one alternative explanation was provided and no other alternative explanation could be easily produced. We tested three hypotheses: (i) that judgments of explanatory value could be dissociated from posterior probabilities or other indicators of rational acceptability; (ii) that judgments of explanatory value were positively associated with causal reasoning and a sense of understanding; (iii) that judgments of explanatory value were positively affected by statistical relevance.

Results from the first study showed that for generic types of explanations involving a complex causal mechanism, the prior credibility of the hypothesis and causal framing jointly raised the perceived explanatory value of the hypothesis. Statistical relevance relations had a negligible impact on explanatory value where there was an unrestricted number of potential explanations, leaving causal credibility as the main determinant of explanatory value. Results from the second study provided evidence that for explanations of single events, judgments of explanatory value were highly sensitive to relations of statistical relevance, and were dissociable from posterior probabilities and other indicators of the rational acceptability of the explanatory hypothesis.

Collectively, these findings support the hypothesis that explanation is a complex structure that taps into distinct sources of information in different contexts (Lombrozo 2012). They also call for a reassessment of the rationality of explanatory modes of inference like abductive inference (Lipton 2004). Specifically, our findings indicate that two different kinds of probabilistic cues, the credibility of the explanation and its statistical relevance for the explanandum, contribute to explanatory value, albeit in different circumstances. The level of generality of the explanation (and the explanandum) makes a crucial difference: for generic (type) explanations, the prior credibility, but not the statistical relevance, boosts explanatory value, whereas for individual (token) explanations, explanatory value co-varies with statistical relevance, but not with prior credibility. This indicates that the probabilistic coherence of explanatory modes of inference is context-specific, and that the rationality of abductive reasoning should thus be assessed on a case-by-case basis.

We hope our results will promote “the prospects for a naturalized philosophy of explanation” (Lombrozo 2011, 549), contributing to a theory of explanatory reasoning that is both psychologically accurate and philosophically appealing.

References

Halpern, J., and Pearl, J. (2005). Causes and Explanations: A Structural-Model Approach. Part II: Explanations. British Journal for the Philosophy of Science 56: 889–911.

Lipton, P. (2004). Inference to the Best Explanation (second edition). London: Routledge.

Lombrozo, T. (2011). The instrumental value of explanations. Philosophy Compass 6: 539–551.

Lombrozo, T. (2012). Explanation and abductive inference. In K. J. Holyoak & R. G. Morrison (eds.): Oxford Handbook of Thinking and Reasoning, 260–276. Oxford, UK: Oxford University Press.

Salmon, W. (1971/1984). Statistical Explanation. Reprinted in Salmon (1984): Scientific Explanation and the Causal Structure of the World, 29–87. Princeton: Princeton University Press.

 

Ludwig Fahrbach

IBE & Bayesianism: A Couple in Harmony

The two main contenders for a correct account of scientific inference are arguably IBE and Bayesianism. However, the two accounts differ profoundly. Actually, they don’t share a single epistemic concept (except the basic notions of hypothesis and observation). We can only start to compare them if we first establish ways to relate the central epistemic notions of the two accounts. The literature contains a number of different proposals concerning how one might do so. I want to offer an especially simple and straightforward proposal.

According to my suggestion, IBE and Bayesianism are related by the following three correspondences. First, the notion in IBE of the explanatory quality of a hypothesis H with respect to some evidence E corresponds to the product Pr(H) · Pr(E|H). Pr(H) captures intrinsic epistemic properties of H such as simplicity and elegance. Pr(E|H) concerns the relation between the hypothesis and the observation. The second correspondence concerns the notion of acceptance. It is very plausible that the notion of acceptance of a hypothesis in IBE corresponds to the notion in Bayesianism that the posterior of the hypothesis is near one. The third correspondence concerns the cores of the two accounts, namely criteria for successful inference. The criterion for acceptance in IBE is that a hypothesis H is accepted if it explains the evidence E substantially better than any of the rival hypotheses Hi. I suggest that a corresponding criterion in Bayesianism is the strong inequality

 

Pr(H) · Pr(E|H) >> Σi Pr(Hi) · Pr(E|Hi). (STROQ)

 

STROQ expresses that the explanatory quality of H relative to E is much better than the sum of the explanatory qualities of all rival hypotheses Hi relative to E. Happily, STROQ is equivalent to the statement that the posterior of H is near one. (Proof: Dividing STROQ by Pr(E) and applying Bayes’ theorem yields Pr(H|E) >> Σi Pr(Hi|E) = Pr(¬H|E).) Therefore, STROQ is an apt counterpart to the acceptance criterion of IBE. This is the harmony between our couple. I discuss several consequences of my approach, some plausible, some not so plausible, and compare it with other approaches.
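
A quick numerical illustration of this equivalence in Python, with invented priors and likelihoods for H and its rivals (a sketch only; the numbers are not from the talk and the hypotheses are assumed to be exclusive and exhaustive):

```python
# Toy check: when STROQ holds, i.e. Pr(H)·Pr(E|H) is much larger than the summed
# "explanatory qualities" of the rivals, the posterior of H comes out near one.
# All priors and likelihoods below are invented for illustration.

priors      = {"H": 0.30, "H1": 0.35, "H2": 0.35}
likelihoods = {"H": 0.90, "H1": 0.02, "H2": 0.01}   # Pr(E | hypothesis)

quality = {h: priors[h] * likelihoods[h] for h in priors}   # Pr(h)·Pr(E|h)
p_e = sum(quality.values())                                 # law of total probability
posterior_H = quality["H"] / p_e                            # Bayes' theorem

rival_quality = sum(q for h, q in quality.items() if h != "H")
print(round(quality["H"], 4), round(rival_quality, 4))  # 0.27 vs 0.0105 -> STROQ holds
print(round(posterior_H, 3))                            # 0.963, i.e. near one
```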

Of course, every couple has some disharmonies. It is then interesting to examine the different ways in which one may deal with these. For example, STROQ is somewhat stronger than the acceptance condition of IBE, as it requires that the explanatory quality of H be substantially better than the combined explanatory qualities of all rival hypotheses Hi. Such discordances lead to the question whether one of the two accounts is more fundamental than the other. The three correspondences seem to constitute a way of representing IBE within the Bayesian framework suggesting that Bayesianism is the more basic account. However, I show that this is not the only way to view the situation. To discuss this issue I present a general system with which one can compare any given pair of accounts of scientific inference. The system refers to, among other things, the relationship between the domains of the two accounts of scientific inference, the distribution of agreement and disagreement between the two accounts over the intersection of the domains of the two accounts, the distribution over the two domains of correctness and incorrectness according to an ultimate standard of correctness of inference, and ways to understand the relative fundamentality of the two accounts.

 

Joachim Frans

A contextual approach to mathematical explanation

Mathematical explanation is a hot topic in current philosophy of mathematics. Apart from the discussion of whether and how mathematics can explain empirical facts, there is an increasing consensus that the idea that mathematics can have explanatory power within mathematics itself should receive more attention. A few philosophers have tried to explicate the notion of this kind of mathematical explanation, including contributions by Mark Steiner and Philip Kitcher.

Both accounts, however, have been criticized. Furthermore, it is hard to find a great deal of agreement in the literature about which proofs or theories are considered to be explanatory. One approach to investigating this lack of consensus is to shift from an objectivist approach to a contextual approach to mathematical explanation.

In this talk I will discuss such an approach, heavily based upon the work of De Regt & Dieks on the contextual approach to understanding in physics. Many authors agree that the topics of explanation and understanding are deeply connected. A common feature of all kinds of explanation is, in this view, that they increase our understanding of a phenomenon. By shifting our attention to understanding we take some specific aspects of the subject into account. De Regt & Dieks show that this does not necessarily lead to a purely subjectivist discussion, as understanding in their view is linked both with the virtues of a theory and with the skills of a scientist. On this contextual view, understanding is achieved with the help of conceptual tools such as causal reasoning, abstract reasoning, visualisation or unification.

I will argue that such an approach provides an interesting starting point for mathematical explanation as well. By presenting several cases, in which I investigate the explanatory value of visualisation and unification in mathematics, I show that neither is a necessary or sufficient condition for explanation or understanding, but that both play important roles depending on the context. As a result, we take a step towards a pluralist and pragmatic view of mathematical explanation that can capture some of the important heterogeneous aspects of mathematical practice.

 

Raoul Gervais

Explanatory inferences in cognitive science

A popular maxim among philosophers of science is that one should ‘let science speak for itself.’ While it is not easy to state what precise consequences the adoption of this maxim has or should have for the business of doing philosophy of science, in broad terms the idea seems to be that one’s philosophy should not clash with science, or that if it does clash, it is the former that should give way rather than the latter. While I think this is a healthy policy, in the philosophy of cognitive science it seems that this policy has gone haywire, especially in the work of John Bickle, who no longer practices philosophy but what he calls ‘metascience’. Although the precise difference between metascience and philosophy of science remains unclear, in discussing it, Bickle espouses an aversion to philosophy, arguing that it can play no role of significance in scientific practice.

In a recent article, Paul Thagard took the opposite view. He argued that there are two aspects of philosophy that can be of great benefit to cognitive science: generality and normativity. In this paper I will follow and expand Thagard’s approach by applying it to explanatory inferences as they are made by cognitive scientists. I will conclude that, pace Bickle, there is indeed an important role for philosophy to play in cognitive science, though not the role that philosophers themselves have traditionally claimed.

 

Dorota Leszczyńska-Jasion & Adam Kups

Identifying efficient abductive hypotheses using multi-criteria dominance relations

Abduction is a type of reasoning in which a key role is assigned to the generation and evaluation of hypotheses. In [1] the authors describe a mechanism for generating hypotheses, worded in the language of Classical Propositional Logic, which is based on the method of Synthetic Tableaux (ST-method; see [2] for an overview). By and large, the ST-method is a proof method based on direct reasoning: instead of analysing a formula into its subformulas (and/or their negations), an attempt is made to synthesize the formula from all possible sets of literals. The merit of this proof method in the context of abduction is that one can specify the literals that are actually entangled in the process of synthesizing a formula. These literals are then used to formulate hypotheses which fill the deductive gap between a set of premises and a description of the phenomenon to be explained.

The logical algorithm of hypotheses generation has been implemented in a scripting programming language. An automatic abductive problem generator provided a number of problems to allow extensive analyses. As in most cases the number of generated hypotheses was quite large, and moreover many hypotheses were semantically equivalent, several filters, including the Quine-McCluskey method, have been implemented to reduce the number of hypotheses. The “sieved” hypotheses were then further evaluated according to a set of selected criteria (including e.g. consistency or complexity).

Since the aforementioned abductive hypotheses were evaluated on several equally important criteria, a multi-criteria dominance relation was employed. The main advantage of this tool is that it does not require imposing an order of importance on the criteria or aggregating them. The ultimate stage of applying the multi-criteria dominance relation was the generation of a set of non-dominated abducibles for each of the problems. These sets may be seen as ones that contain interesting hypotheses.
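
To illustrate the dominance idea itself (not the authors' implementation), here is a minimal Python sketch in which each hypothesis is scored on several criteria and the non-dominated set is extracted; the criteria names, scores, and the convention that lower scores are better are assumptions made for the example.

```python
# Minimal sketch of non-dominated filtering over equally important criteria
# (not the authors' implementation). Criteria names and scores are invented;
# lower scores are taken to be better.

hypotheses = {
    "h1": {"complexity": 3, "inconsistency": 0, "redundancy": 1},
    "h2": {"complexity": 5, "inconsistency": 0, "redundancy": 1},  # dominated by h1
    "h3": {"complexity": 2, "inconsistency": 1, "redundancy": 0},
    "h4": {"complexity": 3, "inconsistency": 0, "redundancy": 0},  # dominates h1
}

def dominates(a, b):
    """a dominates b: at least as good on every criterion, strictly better on one."""
    return all(a[c] <= b[c] for c in a) and any(a[c] < b[c] for c in a)

non_dominated = [name for name, score in hypotheses.items()
                 if not any(dominates(other, score)
                            for o, other in hypotheses.items() if o != name)]
print(non_dominated)   # -> ['h3', 'h4']: the non-dominated, "interesting" candidates
```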

The near-future plans concern applying the presented procedures to some more advanced logics. As the Synthetic Tableau Method for modal propositional calculi is currently under development, we would like to apply the procedures described above to generate modal abducibles.

References

[1] M. Komosinski, A. Kups, D. Leszczynska-Jasion, M. Urbanski, “Identifying efficient abductive hypotheses using multi-criteria dominance relation”, ACM Transactions on Computational Logic, Vol. 15, Issue 4, 2014.

[2] M. Urbanski, “Synthetic Tableaux and Erotetic Search Scenarios: Extension and Extraction”, Logique et Analyse, Vol. 173–175, pp. 69–91, 2001.

 

Joke Meheus

Explanation and abduction in medicine: some formal challenges

TBA

 

Rune Nyrup

Empirical Problems for Explanationism

Explanationists claim that inference to the best explanation (IBE) provides a reliable guide to (approximately) true theories. It is often defended on the basis of the descriptive claim that IBE is ubiquitous in scientific inferential practice (Lipton 2004). I criticise the empirical premises of these arguments and propose an alternative view of IBE, according to which it first and foremost justifies pursuing hypotheses, i.e. spending time and resources investigating whether they are true (cf. Paavola 2006, McKaughan 2008).

Two kinds of empirical arguments for the reliability of IBE can be distinguished: direct and indirect (Douven 2011). Discussion of these arguments is usually framed in terms of the scientific realism debate, focusing on whether they beg the question against antirealists. I argue that even assuming the truth of realism, these arguments face a more fundamental problem: the empirical premises necessary for them to support explanationism are most likely false.

According to the direct strategy, successful applications of IBE provide evidence for its reliability (Douven 2005). But for this to support explanationism, it is not enough that IBE has often led scientists to infer (approximately) true theories. Rather, it must be the case that IBE has led to the truth more often than not. As Duhem (1954), and later Laudan (1981), pointed out, even if the theories we currently accept provide very good explanations, they were preceded by numerous false theories which also provided very good explanations. Most realists respond to this problem by (i) requiring successful theories to also have novel predictive successes and (ii) restricting their realist commitments to the “working posits” of theories (Psillos 1999). Although these moves may rescue scientific realism from the pessimistic induction, they cause problems for explanationism.

First, move (i) suggests that it is novel predictive success, rather than explanatory quality, which is doing all the epistemic work. Second, as (ii) is typically implemented, it is the explanatory posits of past theories, e.g. caloric (Chang 2003), which are deemed to be “idle wheels”.

The indirect strategy starts from the premise that IBE plays an important role in scientific inquiry, avoiding commitments to the success of individual IBEs. This, together with the assumption that scientific inquiry is generally reliable, is supposed to show that IBE is generally reliable (e.g. Thagard 1988, ch. 8; Lipton 2004, ch. 9). However, in this simple formulation, the argument commits the fallacy of division: even if scientific inquiry as a whole is generally reliable, it does not follow that any individual inference-pattern used in scientific inquiry is reliable as well. In particular, it fails to rule out that explanatory reasoning plays a different role in scientific inquiry from being a guide to the approximate truth of theories. I examine some of the case studies usually taken to support explanationism, arguing that explanatory reasoning in these cases is more plausibly interpreted as generating and selecting hypotheses it would be worthwhile pursuing.

References

Chang, Hasok (2003): “Preservative Realism and Its Discontents: Revisiting Caloric”, Philosophy of Science 70:902-912.

Duhem, Pierre (1954): The Aim and Structure of Physical Theory, Princeton UP.

Douven, Igor (2005): “Evidence, Explanation, and the Empirical Status of Scientific Realism”, Erkenntnis 63:253-291.

Douven, Igor (2011): “Abduction”, in: Zalta (ed.): The Stanford Encyclopedia of Philosophy.

Laudan, Larry (1981): “A Confutation of Convergent Realism”, Philosophy of Science 48:19-49.

Lipton, Peter (2004): Inference to the Best Explanation (2nd ed.), Routledge.

McKaughan, Daniel J. (2008): “From Ugly Duckling to Swan: C. S. Peirce, Abduction, and the Pursuit of Scientific Theories”, Transactions of the Charles S. Peirce Society 44:446-468.

Paavola, Sami (2006): “Hansonian and Harmanian Abduction as Models of Discovery”, International Studies in the Philosophy of Science, 20:93-108.

Psillos, Stathis (1999): Scientific Realism, Routledge.

Thagard, Paul (1988): Computational Philosophy of Science, MIT Press.

 

Jan Potters & Erik Weber

Unification and explanation in linguistics

A grammatical theory is a theory that explains why certain sentences are considered to be acceptable (in a language), while others are not. In this talk, we will be looking at one grammatical theory in particular: the principles-and-parameters approach to generative grammar (P&P), which was first formulated by Noam Chomsky (1981). It is our claim that the explanations offered by P&P fit Philip Kitcher’s account of unificatory explanation.

Our talk will consist of three parts. First, we will give a short outline of some central aspects of P&P, and show how it tries to explain the (un)acceptability of sentences with respect to their phrase structure. The second part of our talk will then concern Kitcher’s account of unificatory explanation, and the way in which the P&P-explanations offered in part one fit Kitcher’s framework. In the last part, we will talk about some of the more problematic aspects of P&P and how these relate to a problematic issue concerning unification, viz. ontological versus derivational unification.

 

Miroslava Trajkowski

Abduction, perception and emotion: Pattern recognition of body maps

In this paper I examine the abductive process of recognizing the patterns of body maps in the case of feeling of emotions.

C. S. Peirce relates emotions to sensations (or perceptions) and argues that both are governed by abduction. Peirce’s claim that “an emotion is directly felt as a bodily state” (Collected Papers, 1.250) is obviously in tune with William James’ theory that emotions are perceptions (or feelings) of bodily changes (“What is an Emotion?”, Mind, 1884). What is new in Peirce’s view is that in emotions, as in sensations and perceptions, one simple predicate is substituted for “a highly complicated predicate” (Peirce, op. cit., 5.292, 2.643). This substitution is based on an abduction. The resulting account can be stated as follows: an emotion is the result of an abductive reasoning.

Note that the conclusions of the inferential processes behind feeling and perceiving are not judgments: we do not judge that we are angry and we do not judge that we see a chair. We do not judge these conclusions; we just feel angry, we just see the chair. These reasonings are practical, not theoretical. Their consequences are practical: they consist of what is actually felt or seen. But do these practical consequences appear simultaneously or in succession? James claimed the former. Antonio Damasio argues for the latter. Namely, according to Damasio, we first perceive that our body is in a certain state, and only afterwards might we feel an emotion. Due to body-sensing brain regions there are neural maps that represent our emotional states. But being in a certain emotional state is still no guarantee that this state is felt as a corresponding emotion. Feelings will “emerge when the sheer accumulation of mapped details reaches a certain stage” (Looking for Spinoza, 2003, p. 86). This is clearly, as I argue, a case of induction by properties (that is, of abduction): when a critical amount of properties of a certain class is reached, they are felt as belonging to that class. The very fact that whether an emotion is felt depends on how detailed a certain neural map is proves, I will argue, that the patterns of that map serve as an abductive basis for the process of recognizing (i.e. feeling) emotions.

 

Mariusz Urbański

Abduction: Some conceptual issues

What I shall argue for in this talk is that in research on abduction the very basic concept of abductive reasoning is constructed rather than described or even explicated. Such different constructions usually satisfy some common constraints of Aristotelian or Peircean provenance, but do so in different ways and to different degrees.

In his reflection on abduction C. S. Peirce famously went from an early ‘syllogistic’ theory, in which abduction is seen as one element of a tripartite division of reasoning along with deduction and induction, to a late ‘inferential’ theory, in which there is much more to abduction than just finding a missing premise to produce a valid syllogism. On this second view abduction is a complex form of reasoning, one of the most fundamental cognitive processes, allowing for successful interpretative interactions with the physical world and with other minds as well. However, under the inferential theory all we can decisively claim about the logical structure of abduction is that affirming the consequent has something to do with it.

In the case of deduction we have at our disposal sound intuitions concerning what constitutes deductive reasoning, and generally we are able to effectively operationalize criteria for testing deductiveness. In the case of abduction, however, we have intuitions only. Depending on the answers to the following two questions, existing accounts of abduction can be broadly divided into three classes of models: explanatory-deductive, explanatory-coherentist, and apagogical ones. The questions are: (1) Is abduction intrinsically explanatory? (2) If so, is this explanation of a deductive character? What is common to all of these models is that under each of them the aim of abduction is to make sense of some puzzling phenomena. What it means to ‘make sense’ and what counts as ‘puzzling’ remains debatable.

I shall explore differences and similarities between these three classes of models, in particular with respect to criteria for deciding what makes one instance of abductive reasoning, or one hypothesis, better than another. I shall draw on research on abduction in the computational setting of Inferential Erotetic Logic (a multi-criteria dominance relation as a basis for evaluating abductive hypotheses generated by a procedure based on the Synthetic Tableaux Method, and a procedure for generating abductive hypotheses in the form of law-like statements based on Socratic transformations) as well as in the context of experimental data on problem-solving (exploration of solutions to Raven’s Advanced Progressive Matrices test and to the MindMaze game).