Book of abstracts (LRR2)

Keynote lectures

Henk de Regt

Scientific explanation and understanding: Lessons from Carl Hempel

When Carl Hempel presented his covering law model of scientific explanation, he emphasized that philosophers should focus on the objective notion of explanation and not on the subjective understanding that explanations may generate. In response to this neglect of understanding, philosophers of science have more recently developed philosophical accounts of scientific understanding that highlight its epistemic import. My own contextual theory of scientific understanding is one of these. But although Hempel has long served as a whipping boy for the ‘friends of understanding’ (and for many philosophers of explanation as well), it is now time to acknowledge that not all his ideas were misguided. In my talk I will reassess Hempel’s views on explanation in the light of current analyses of scientific understanding, particularly by comparing my own theory of understanding with some of Hempel’s theses. It will turn out that we can still draw important lessons from Hempel’s work.

 

Phyllis McKay Illari

How are mechanistic explanations understood?

For a long time the philosophical literature on explanation has neglected understanding, following Hempel in focusing on ‘the objective’ notion of explanation. The literature on mechanistic explanation has largely followed this, so that the debate about ontic (Craver: the mechanism explains) versus epistemic (Bechtel: the description of the mechanism explains) explanation seems crucial.

Recently, however, a burst of work has re-examined understanding. In this paper I examine de Regt’s claim that understanding is vital to the epistemic aims of science, so much so that an account of explanation is incomplete without an account of understanding.

I will defend and extend de Regt’s contextual theory of understanding, which broadly holds that scientists regard a theory as intelligible when they can use it. I will apply it to our understanding of the mechanisms of supernovae, and argue that the account, to be successful for such cases, requires emphasizing the importance of community and of a multiplicity of skills, including the embodied skills defended by Leonelli. I examine astrophysics in order to find a case that is unusual both for philosophy of physics and for the mechanisms literature, and that allows me to argue that Leonelli’s views also extend beyond the life sciences.

 

Erik Weber

The Results of Philosophical Inquiries into Scientific Explanation: “Accounts”, “Logics”, “Models” Or “Theories”?

The paper is motivated by some observations:

  1. In the early days (1948 until about 1970) it was common to call the results of philosophical investigations of scientific explanation a “logic of explanation”. Here are some examples:
    • Hempel Carl & Oppenheim Paul (1948), ‘Studies in the Logic of Explanation’ (in Philosophy of Science).
    • Hempel Carl (1957), ‘The Logic of Functional Analysis’ (in L. Gross (ed.) Symposium on Sociological Theory).
    • Nagel Ernest (1961), The Structure of Science: Problems in the Logic of Explanation.
  2. Many authors in that era indeed used simple logical tools (classical first order predicate logic) in their philosophical analysis.
  3. In the more recent literature this label no longer occurs. Instead, there is a variety of other labels. Examples include:
    • Woodward James (2003), Making Things Happen. A Theory of Causal Explanation. New York: Oxford University Press.
    • Strevens Michael (2008), Depth. An Account of Scientific Explanation. Cambridge: Harvard University Press.

I will reflect on the idea of “logic of explanation”. Issues that I will address are:

  1. What could it mean to offer a “logic” of explanation (as opposed to a theory, model, account, …)?
  2. What are the benefits of having such a “logic” (again: on top of a theory, account, …)?
  3. What is the role of formal logic in a logic of explanation?
  4. Is there a role (in the development of a logic of explanation) for other formal tools besides formal logic (e.g. probability theory or decision theory)?

 

Contributed papers

Ken Aizawa and Carl Gillett

The Ontology and Methodology of Mechanistic Explanation

Much of the recent “New Mechanistic” work on non-causal explanation in the sciences has had three foci. First, mechanistic explanations are (typically) taken to be explanations of processes, such as the propagation of action potentials, the opening and closing of ion channels, and phototransduction. Call this the “process focus” of New Mechanism.

Second, it is widely supposed that the explanation of processes invokes a “dualist” ontology of entities and activities. (See, for example, Bechtel and Abrahamsen, 2005, Craver, 2007, Machamer, Darden, and Craver, 2000, Craver and Darden, 2013, and Thagard, 2003.) Figure 1.1 from Craver, 2007, is exceptionally useful in displaying this structure. Craver proposes that the process of an entity S’s engaging in an activity of ψ-ing is explained in terms of, say, an entity X1’s engaging in an activity of Φ1-ing causally influencing an entity X2’s engaging in an activity of Φ2-ing and an entity X3’s engaging in an activity of Φ3-ing, and so forth. Call this the “dualist focus” of the New Mechanism.


Figure 1. A mechanistic explanation of what S does in terms of what X’s do.

A third focus of the New Mechanism is on the role of interventions in the investigation of mechanisms. Craver and Darden, 2013, propose that mutual manipulability underlies experiments for testing mechanistic explanations. The core idea here is also nicely summarized in a simple figure (reproduced here as Figure 2), about which they comment:

“As shown in Figure [2], interlevel experiments can be bottom-up or top-down. On the left is a bottom-up experiment, in which one intervenes into a component in a mechanism and detects changes in the behavior of the mechanism as a whole. On the right is a top-down experiment, in which one intervenes to manipulate the phenomenon and detects changes in the activities or properties of the components in the mechanism.” (Craver and Darden, Kindle Locations 2680-2683.)

Figure 2. Bottom-up and top-down interlevel experiments (reproduced from Craver and Darden, 2013).

Call this the “interventionist focus” of the New Mechanism.

This paper will argue that these three foci of New Mechanist research are too narrow. Scientists use a more expansive ontology and additional methodologies. To state the matter succinctly: 1) not all inter-level explanations are of processes, 2) not all inter-level explanations are in terms of entities and activities, and 3) the investigation of inter-level explanations is not always pursued through experiments involving mutual manipulability. This paper’s argument for broader foci will be based on a simple, familiar, and accessible case study: examples from Robert Hooke’s research with the light microscope reported in his 1665 book Micrographia.

 

Sandy Berkovski

Is there a metaphysical explanation?

A major motivation of the metaphysics of ground is to provide explanations: a fact X is grounded in a fact Y when Y explains X. Ground theorists insist that the grounding relation exhibits a distinct kind of metaphysical explanation. Here I argue that the putative grounding facts explain by a familiar procedure of unification, and that the explanatory role of ground is incompatible with the realist metaphysics which the ground theory is supposed to affirm.

Since the ground theorists’ justification for the existence of ground is usually piecemeal, it is only too natural to follow them and work with examples. Thus the fact P that Beijing is a city of 21 million people is said to be grounded in more fundamental facts ⟨pi⟩ about the number of people residing in a particular geographic location, the facts of administrative and employment records (DeRosset, 2013). Other than an intuition, what makes them suitable candidates for grounding facts is their explanatory role: P is explained by ⟨pi⟩.

But why think that any such explanatory link exists? Unless, again, we resort to a bare intuition, the main reason, I speculate, is that in postulating the putative ground we are able to reduce the number of brute facts (other possible reasons are mentioned in the full version of the paper). The administrative records of the city, combined with certain facts about residence, explain the fact of the city’s population, because they allow us to establish connections with further facts—say, of literacy. Suppose we read in a chronicle that in the year 1600 Beijing’s population was five million, but that one year later it dropped to four million. If, improbably, we were to take these two facts as brute, ungrounded, then we would have left it at that. If, on the other hand, we think they are grounded in the way described, then we would enquire whether perhaps the records were more adequately kept in 1600 than one year later. If they were, we would further ask whether this was due to greater literacy. A whole new line of enquiry is initiated. Therefore, the facts of city population may be said to be explained by other facts if these facts integrate them into further descriptions of the world that previously appeared irrelevant.

Thus, far from exhibiting a novel form of explanation, ground explanations are of a piece with scientific explanation, in particular with the familiar unification model. The ground theorist might respond by claiming that the best possible explanation of P should necessarily cite the ground of P. That is: any explanation is good, genuine, or successful, only when the explanandum is grounded in the explanans (Rodriguez-Pereyra, 2005). I think the response does not work for the following reasons. (1) Explanations using a false theory are not pointless. Under certain conditions, they increase our understanding of the world. (2) Grounds have to be described in our language, with our concepts, with all the limitations that follow. Is the possibility of a final explanation (often asserted in ground metaphysics) so much as intelligible? (3) As the ground theorist sees it, the maximally successful explanation is the maximally specific one. As such, it will be able to fully specify the grounds of the given fact. But in providing explanations, it is essential to leave some things out (Batterman, 2002). The full specification of fundamental ontology will be explanatorily inept. This is unsurprising, since theoretical reasoning, in furnishing explanations, sorts available evidence into relevant and irrelevant.

References
Batterman, R. W. (2002). The Devil in the Details. Oxford University Press.
DeRosset, L. (2013). Grounding explanations. Philosophers’ Imprint, 13(7).
Rodriguez-Pereyra, G. (2005). Why truthmakers? In Beebee, H. and Dodd, J., editors, Truthmakers. Clarendon Press.

 

Matteo Colombo

Experimental philosophy of explanation rising. The case for a plurality of concepts of explanation

While all explanations answer some why- or how-question, significant variation is observed across contexts in what is accepted as an explanation, in what type of explanatory information is sought, and in what norms are assumed to govern good explanation. This apparent variation motivates a central question in the philosophy and psychology of explanation: How many concepts of explanation do we have in our psychology?

According to pluralists, we have more than one concept of explanation; and this plurality is reflected in the plurality of philosophical models of explanation. Monists oppose pluralism, claiming that we have only one concept of explanation; the plurality of models observed in philosophy would be grounded in this one concept. Monism appears to be the predominant view in the philosophy of science, where all major models of explanation “are ‘universalist’ in aspiration — they claim that a single, ‘one size’ model of explanation fits all areas of inquiry in so far as these have a legitimate claim to explain” (Woodward 2014). A third answer comes from eliminativists, according to whom there is no distinct concept of explanation in our psychology; the variety of philosophical models would be grounded in a heterogeneous, indistinct suite of psychological processes and representations used and reused in a variety of complex capacities spanning causal, confirmation theoretic, counterfactual, deductive, inductive, and probabilistic reasoning.

This paper brings together results from the philosophy and the psychology of explanation in order to argue that there are multiple concepts of explanation in human psychology. It draws on results from what can be called experimental philosophy of explanation with the hope of furthering a conversation “about the prospects for a naturalized philosophy of explanation” (Lombrozo 2011, 549).

Specifically, it is shown that pluralism about explanation coheres with the multiplicity of models of explanation available in the philosophy of science, and is supported by evidence from the psychology of explanatory judgment. Focusing on the case of a norm of explanatory power, the paper concludes by responding to the worry that, if there is a plurality of concepts of explanation, one will not be able to normatively evaluate what counts as a good explanation.

 

Leen De Vreese

Explaining Medically Unexplained Physical Symptoms: Mix & Match?!?

Since MUPSs (Medically Unexplained Physical Symptoms, including syndromes such as chronic fatigue syndrome, irritable bowel syndrome and unexplained chronic back pain) are very common in medical practice, they form a big challenge for medical practitioners. Although these symptoms and syndromes are labeled “unexplained”, several explanatory models are available in the literature. However, these are clearly not (yet) accepted by the medical scientific community. In this paper, I will thoroughly analyse the case of MUPS and try to shed more light on this paradoxical situation.

First, I will turn to the question of what medical scientists and practitioners expect from an explanatory model for symptoms and syndromes that are now labelled “MUPS”, and how this relates to the acceptability of the explanations offered in the literature. My analysis will further lead me to wonder whether, and – if so – to what extent, medical scientists and medical practitioners maintain a covert demand for a single scientific narrative that can explain all similar clusters of symptoms, in line with the biomedical model. I will contrast this with the need for explanatory pluralism in medicine (as defended in De Vreese, Weber & Van Bouwel, 2010) and question whether an explanatory pluralist approach is helpful for getting a grasp on the case of MUPS. At the least, a pluralist approach seems to fit with the actual medical literature, which advises practitioners to “mix and match” the available explanations in such a way that the story that is built is acceptable and helpful for both the practitioner and the patient. However, one might argue that this will lead to an approach within which “anything goes” and which is not acceptable from a scientific point of view.

I will discuss these worries and relate this to the role that “scientific narratives” in medicine play in bridging the gap between the general theory and the individual case, and between the clinical evidence and the personal experience. This will further be related to the question of the goals of medicine and of its status as a “science”.

Reference:
De Vreese, L., Weber, E. & Van Bouwel, J. (2010). Explanatory pluralism in the medical sciences: theory and practice. Theoretical Medicine and Bioethics, 31, 371-390.

 

Fons Dewulf

How Hempel changed the philosophical reflection on the historical sciences

During my talk I will argue that Hempel’s paper “The Function of General Laws in History” turned the dominant, neo-Kantian question concerning the historical sciences from the early 20th century on its head. Before Hempel’s contribution to the philosophy of historiography the dominant question was the following: which principle allows one to have objective experience of a cultural/historical object? After his contribution this question was dropped completely. Philosophers of science now asked themselves how objective experience of historical objects should be systematized in order to yield an explanation. My argument consists of two parts.

In the first part I start from Wilhelm Windelband’s 1904 paper “Nach hundert Jahren”. Windelband argues that the field of scientific knowledge since Kant has expanded to the area of historical and mental [geistige] events. This field, which he called historical knowledge, is based on a different form of objective experience than the natural sciences. According to Windelband it is the challenge of philosophy to understand how historians can have objective experience of the historical particular, without losing its specific content. Next, I show that this central challenge was taken up by a wide range of Kant-inspired philosophers of science for the next forty years. All of them tried to show how certain constitutive principles transformed the perceptual manifold into historical experience. To this end Heinrich Rickert uses his theory of value-relation [Wertbeziehung], Wilhelm Dilthey his notion of structural cohesion [Strukturzusammenhang], Max Weber his theory of the Ideal Type and Rudolf Carnap his idea of the relation of manifestation [Manifestationsbeziehung]. All of these authors believe that the objective experience of historical objects cannot be separated from a theoretical structure or explanatory framework.

In the second part I investigate Hempel’s framing of the debate. Hempel takes for granted that every aspect of an historical explanation, as he understands this, is “amenable to objective checks”. (Hempel 1942, 38) Whereas “the objective checks” were the philosophical problem before Hempel, they now become an epistemic element that can arbitrate which are the true explanations in the historical sciences. If an explanation, e.g. subsumption of an object under a general idea, is not amenable to empirical tests, then it amounts to a pseudo-explanation. (Hempel 1942, 45) Within this Hempelian framework philosophy of science studies the explanatory structure and not the empirical experience of historical events. The latter is rather something that is taken for granted or can be understood independently from the explanatory systematization. In the current literature on historical explanation this tacit assumption is still operational. (Glennan 2010; Mey and Weber 2003; Leuridan and Froeyman 2012)

In this way the history of the philosophy of historiography can show us two different frameworks for reflecting on the relation between an historical object and its explanation. The Kant-inspired framework has not been taken up seriously since Hempel shifted what counts as a relevant philosophical question concerning historiography. This, however, does not imply that the Kant-inspired framework is illegitimate or meaningless for philosophers in the 21st century.

 

Melinda Bonnie Fagan

Explanation and collaboration

I approach explanation as a collaborative activity among practicing scientists. In many scientific fields (e.g., molecular biology, experimental physics, neuroscience) explanatory models are constructed by integrating results from diverse research groups, frequently across traditional disciplinary boundaries (e.g., Craver 2007). Successful integration requires compatibility in the aims and standards of participating researchers or research groups. Often, this compatibility takes the form of complementarity rather than correspondence; i.e., research aims and standards differ, but can be combined into an overall plan that is coherent and effective in its own right. I use two cases from recent systems biology to show contrasts between successful and unsuccessful explanatory collaborations (Huang 2011, Jaeger and Crombach 2012). Construction of explanatory models, at least in some fields, is a special case of cooperative social action, requiring compatible goals, intentions, and sub-plans (e.g., Bratman 2014).

I then extend this approach to outcomes: the explanatory models produced by these collaborative practices. Often these models exhibit a multi-level structure, aligning two or more levels of description of a phenomenon of interest, and thereby representing the same target of inquiry at different scales or levels of resolution. Mechanistic explanations in biology are one prominent example, but the same pattern appears in social science and chemistry, as well as systems biology (e.g., Andersen and Wagenknecht 2013, Bechtel 2011, Woody 2004). Multilevel explanations of this sort are often complex, particularly at lower levels of description. They do not conform to traditional ideals of explanation, such as simplicity, generality, or systematization of a wide range of phenomena under a single formula. Although they include causal relations between lower-level entities, other details are also represented, including similar or complementary shape, relative spatio-temporal location, and combination into complexes (e.g., macromolecular assemblages, social groups).

I suggest that explanations of this kind have distinctive virtues, which can be explicated in terms of collaborative concepts. Lower-level components represented in a multilevel model are unified in the sense of being organized together, interconnected to form a complex system. Their connections (causal, complementary, mutual, etc.) organize lower-level components into a complex exhibiting some phenomenon of interest, such as a cell with a distinct phenotype, or a molecule with specific chemical reactivity. In describing how components are unified, the explanation traverses multiple levels; organized components and overall system are shown to be different perspectives on the same thing. Multilevel explanations of this kind yield understanding by putting together different perspectives on the target of inquiry. This mode of understanding, which can co-exist with others, has features associated with successful collaboration. In particular, connections among components are of primary interest, and diversity is treated as a resource. Explanatory models that represent diverse parts ‘working together,’ and thereby constituting higher levels of organization, are both an outcome of collaborative practices and built around collaborative concepts. This collaborative approach to explanation could also be deployed at the meta-level, to examine relations among different philosophical accounts of explanation (i.e., mechanistic and “minimal models” of biological phenomena; Batterman and Rice 2014).

References
Andersen, H, and Wagenknecht, S (2013) Epistemic dependence in interdisciplinary groups. Synthese 190: 1881-1898.
Batterman, R, and Rice, C (2014) Minimal model explanations. Philosophy of Science, 81, 349-376.
Bechtel, W. (2011). Mechanism and biological explanation. Philosophy of Science, 78, 533–557.
Bratman, M (2014) Shared Agency: A Planning Theory of Acting Together. Oxford: Oxford University Press.
Craver, C. (2007). Explaining the Brain: Mechanisms and the Mosaic Unity of Neuroscience. Oxford: Oxford University Press.
Huang, S. (2011). Systems biology of stem cells: Three useful perspectives to help overcome the paradigm of linear pathways. Philosophical Transactions of the Royal Society, Series B, 366, 2247–2259.
Jaeger, J., & Crombach, A. (2012). Life’s attractors: Understanding developmental systems through reverse engineering and in silico evolution. In O. Soyer (Ed.), Evolutionary Systems Biology (pp. 93–119). London: Springer.
Woody, A (2004) Telltale signs: what common explanatory strategies in chemistry reveal about explanation itself. Foundations of Chemistry 6: 13–43.

 

Joachim Frans

Is mathematics a domain for philosophers of explanation?

Steiner’s paper ‘Mathematical Explanation’ (1978) started a small but still continuing line of research in the philosophy of mathematics, one that focuses on the distinction between explanatory and non-explanatory proofs in mathematics. Recently, Zelcer (2013) has put forward several claims about explanation in mathematics that challenge the fruitfulness of this work. Two of the main claims are that explanation is absent from mathematical practice and that explanation adds nothing relevant to mathematical knowledge. If Zelcer is right about this, it is not clear what the significance is of the philosophical inquiry into mathematical explanation. Furthermore, he argues that philosophical theories of explanation were developed to analyse the natural, behavioural and social sciences. As a consequence, they have properties that make them unfit for application in mathematics.

We will argue that, even if we accept that explanation plays a smaller role in mathematics than in the empirical sciences, explanation is still a legitimate topic for philosophers of mathematics. To make this clearer, we make a distinction between two goals philosophers of explanation can adopt. The first aim, which we call the analytical aim, is to analyse certain aspects of mathematical practice by means of philosophical models of explanation. The second aim, which we call the reflective aim, intends to clarify the idea that mathematicians do not merely pile up proofs of theorems, but can do something more. Providing understanding is a clear example of what this ‘extra’ could be. In this case, models of explanation are used to make meaningful the idea that mathematics should be more than proving theorems. We will demonstrate that the leading authors in the field all pursue the reflective aim, and for this aim it is not necessary to presuppose that explanation is present in mathematical practice. The goal is rather to expose potential explanatory roles of mathematics.

The question that remains open at that point is whether models of explanation developed by philosophers of science are useful tools for philosophers of mathematical explanation. According to Zelcer, the answer is negative. We will argue that his arguments are unfounded and, rather, argue for a positive answer to this question. One reason for this is that such models have an important heuristic value: they provide a good starting point but often require suitable adaptation. This heuristic value will be clarified and supported by means of two examples. Furthermore, we argue that an optimal use of the potential of the literature on scientific explanation requires what Hafner and Mancosu have called a “bottom-up approach”.

References
[1] Hafner James and Mancosu Paolo (2005), ‘The varieties of mathematical explanation’. In: Mancosu, P., Jorgensen, K.F. and Pedersen, S.A. (eds), Visualization, Explanation and Reasoning Styles in Mathematics, Berlin, Springer: pp. 215-250.
[2] Steiner Mark (1978), ‘Mathematical explanation’. Philosophical Studies, vol. 34: pp. 135-151.
[3] Zelcer Mark (2013), ‘Against mathematical explanation’. Journal for General Philosophy of Science, vol. 44: pp. 173-192.

 

Raoul Gervais

The alleged tradeoff between explanatory breadth and predictive power

There seems to be a tradeoff between the explanatory breadth of theories and their predictive power. One can increase one’s explanatory breadth, but at the cost of diminishing one’s predictive power; conversely, one can only gain in predictive power by losing breadth. This tradeoff is well enshrined in the literature. Following a recent suggestion by Trafimow and Uhalt (2015), however, in this paper I will use the notion of auxiliary assumptions to argue that this tradeoff is not nearly as clear-cut as it seems. The tradeoff will be questioned not only in physics, but also in the less exact domains of psychology and archaeology.

References
Trafimow, D. & Uhalt, J. (2015). The alleged tradeoff between explanatory breadth and predictive power. Theory and Psychology, available through DOI: 10.1177/0959354315591052.

 

Mary S. Morgan

Narrative Ordering and Explanation

Narratives appear in the modern human, social and natural sciences, where they play a significant cognitive role for scientists that goes well beyond the simple act of reporting. Thus scientific narratives can be analysed not as rhetorical strategies, but as sense-making strategies. Such narratives do not occur in all sciences, nor with all modes of scientific investigation, but they are endemic in the historical sciences (geology, paleontology, evolutionary biology) and in the case studies of the human sciences (medicine, psychiatry); and they appear in the complex open system sciences (ecology and anthropology) as well as in conjunction with simulation modelling (for example, in chemistry and economics). Where they are found, narratives function as a mode of explanation: being able to make a ‘narrative ordering’ is the way that scientists make sense of things and explain things to themselves. In such sites and cases, scientists come to understand their world (real or hypothetical, empirical or theorized) not through their narratives but in their narratives.

This paper will address the issue of how such ‘narrative knowing’ is constituted, and how scientists’ narratives function to provide ‘explanatory services’ that are likely complementary to other modes and forms of explanation. It will address particularly the configuring processes by which narratives enable the scientist to order and to organise diverse evidential and theoretical materials to make sense of complex cases, including cases where the complexity is a set of possibilities rather than actualities. Such ordering processes are based on making sense of: causes, conjunctions, disjunctions, contingencies, and possibilities. These processes may, in turn, be understood in terms of different minimalist notions of what makes a narrative in accordance with various definitions of narrative. The paper uses resources from the philosophy of science, of narrative and of history to make its case that scientists’ processes of ordering make use of narrative as a mode of explanation.

 

Mark Pexton

An Extension of Manipulationism for Non-Causal Model Explanations

Woodward’s manipulationist account of causal explanation can be extended to cover some types of non-causal explanation. The extension will apply to non-causal explanations which are explanatory in virtue of providing counterfactual information through a modelling stage. The modelling stage allows the extension of the notion of a manipulation. Internal to a model there are changes to parameters (pseudo-manipulations) which are well defined, even though such changes in the real physical system are not well defined and cannot be classed as manipulations. These pseudo-manipulations allow scientists to explore the counterfactual structure of model worlds and, insofar as those model worlds are good representations of target systems, make indirect counterfactual inferences about non-causal dependencies in target systems which would otherwise be obscured.

 

Guglielmo Tamburrini and Nicola Angius

Functional and mechanistic explanations of computing systems’ behaviours. A pluralist account.

This paper examines the relation between functional and mechanist scientific explanations in the specific context of computer science, to argue in favour of Salmon’s (1998) thesis that the two models of explanation are complementary and that mechanist explanations do not override functional ones.

Computing systems form a vast class of physical systems, all characterised by being implementations of stakeholders’ requirements and specifications. One of the main activities computer scientists are engaged in is evaluating the realized artefacts with respect to the provided specifications. Accordingly, explaining computing systems’ behaviours (CSB), both correct and incorrect, is a pervasive and varied activity in computer science. It is pointed out here that many explanations of CSB, which are acknowledged as adequately addressing the explanatory requests from which they arose, bottom out without making any reference to physical components and processes of computing systems. Accordingly, these explanations rely on purely functional decomposition strategies (Cummins 1975) while abstracting away from all physical components of computing systems and physical descriptions of the processes they engage in. This analysis of CSB explanations is brought to bear on mechanistic models of explanation (Piccinini 2007). The latter often come with the regulative ideal of the full physical instantiation of functional roles as a means to achieve greater explanatory force (Piccinini and Craver 2011). However, following this regulative ideal does not invariably lead to better explanations in computer science, insofar as the functional decomposition strategy can be decoupled from the functional role filler instantiation strategy without loss of explanatory force.

In this paper, the problem of scientific explanation in computer science is exemplified in terms of two why-questions of interest concerning CSB:

  1. Why has digital computer C displayed an incorrect/correct behaviour in these particular executions (or runs) of program P on inputs I1, …, In?
  2. Why were physically heterogeneous digital computers D1 and D2 capable of running the same program P?

The examination of question (1) allows us to show how functional analyses decoupled from mechanistic instantiations are capable of satisfactorily supplying explanations that answer this type of question. Question (2) concerns the explanation of chains of events that are observed across different runs of P on architecturally dissimilar computing systems. This paper shows that, even in the explanation of P-commonalities across architecturally heterogeneous physical systems, when descending to the physical device level one need not take into account all the physical details of the system in order to achieve better explanations. Quite the contrary: distortive idealizations of the physical processes of each system are essential to adequately address explanatory requests of this sort.

REFERENCES
Cummins, R. (1975). Functional analysis. Journal of Philosophy, 72(20), 741–765.
Machamer, P., Darden, L., & Craver, C. F. (2000). Thinking about mechanisms. Philosophy of Science, 67, 1-25.
Piccinini, G. (2008). Computers. Pacific Philosophical Quarterly, 89, 32-73.
Piccinini, G., & Craver, C. (2011). Integrating psychology and neuroscience: Functional analyses as mechanism sketches. Synthese, 183(3), 283-311.
Salmon, W. C. (1998). Causality and explanation (pp. 320-9). Oxford: Oxford University Press.

 

Dingmar van Eck and Cory Wright

Ontic, mechanistic, and idealized explanations

In this contribution we raise problems for an ontic conception (OC) of scientific explanation, focusing in particular on (ontically conceived) mechanistic explanations. One claim that we will attack is the recent idea that OC can be salvaged by appeal to ‘ontic constraints’. Rather, such an appeal defuses OC and turns it into an epistemic conception of explanation. We further argue that the ontic constraints invoked by some protagonists of OC make for an inferior epistemic conception as well. These claims are illustrated and supported by means of cases of explanation and idealization in neuroscience.

For starters, OC cannot plausibly be a descriptively adequate account of scientific explanation, notwithstanding its protagonists’ advertisements to the contrary, since the scientific explanations procured by scientists are shot through with representational features, such as diagrams, equations, verbal reports, and the like.

OC, however, commits itself to two fundamental theses: explanations are entities located in reality, i.e. explanations are in re (in the case of ontic mechanistic explanation, extant mechanisms in the world), and explanations are non-representational. They are not explanatory texts (e.g., diagrams, models), but rather the phenomena themselves that are represented by such texts (Craver 2012; Wright 2015; cf. Salmon 1984; Strevens 2008).

Most authors writing on mechanistic explanation endorse OC (Machamer et al. 2000; Craver 2007, 2012; Glennan 2005; Illari 2013), despite some ‘heavy fire’ emphasizing that explaining a phenomenon requires reasoning steps on the part of agents/scientists to connect explanandum and explanans. Explanation proceeds via the construction of (explanatory) representations (models, diagrams, etc.), and subsequent reasoning on or with such representations (Bechtel and Abrahamsen 2005; van Eck 2015; Wright 2012, 2015). Features/entities in the world do not and cannot perform this explanatory work.

Perhaps sensitive to such criticism and the related observation that OC squares poorly with explanatory practices, there have been recent attempts to salvage OC by appealing to ‘ontic constraints’ (Craver 2012, Illari 2013). As Craver (2012, p. 36-37) and Illari (2013, p. 15) put the ontic constraint view, respectively:

“[T]he norms of [mechanistic] explanation fall out of a commitment by scientists to describe as accurately and completely as possible the relevant ontic structures in the world” (Craver)

“Describe the (causal) structure of the world: to be distinctively mechanistic, describe the entities and activities and the organization by which they produce the phenomenon or phenomena” (Illari)

Now, this just concedes the debate. If, on this version, OC commits itself to the view that descriptions (!!!) ought to be ontically constrained (in the case of Craver, by being as complete and accurate as possible with respect to the relevant structures, i.e. mechanisms, in the world), it is no longer a version of OC proper: one of the fundamental pivots of OC, namely that explanation is non-representational, is flagrantly contradicted. OC becomes an epistemic conception of explanation. This insight perhaps seems too obvious to elaborate on. Yet advocates of the ontic constraint view explicitly refuse to concede that it is an epistemic conception; the majority of authors writing on mechanistic explanation endorse OC; mechanistic explanation is considered by said authors to be the main model of explanation for the life sciences; and ‘explanation’ is a key concept in the metaphysics of science. For all these reasons, it is relevant to expose and correct this misconception.

In doing so, we also argue that endorsement of the constraints of maximum accuracy and completeness makes for a poor (epistemic) account of explanation and of the explanatory power of mechanistic explanations. We use cases of idealization in neuroscience to show that the ‘accuracy and completeness’ perspective fares poorly in handling the tradeoffs between generality and descriptive adequacy (of explanatory practice) on the one hand and maximum accuracy on the other. Idealizations intentionally distort causal relations and causal roles in order to highlight what is explanatorily relevant and what is not, and specific idealizations depend on specific ways in which the target mechanism gets described.

The practice of idealization, in particular multiple-models idealization, shows that scientists do not aim for maximum accuracy and completeness. Tradeoffs between accuracy on the one hand and generality and descriptive adequacy with respect to explanatory practice on the other can be far better handled with a multiple-models view on explanation and idealization (cf. Love and Nathan 2015).

A more general lesson that the idealization cases offer is that explanation, at its core, is representational, and thus that OC proper is untenable. Idealizations, in the case of mechanistic explanation, depend on, inter alia, knowledge of the role functions of mechanistic components. Function ascription and description, in turn, rely on descriptions of the containing mechanisms in which these components figure. Hence, idealizations rely on descriptions of containing mechanisms as well.

Bechtel, W., and A. Abrahamsen. (2005). Explanation: A Mechanist Alternative. Studies in History and Philosophy of Biological and Biomedical Sciences 36:421-41.
Craver, C.F. (2007). Explaining the brain: Mechanisms and the mosaic unity of neuroscience. New York: Oxford University Press.
Craver, C.F. (2012). Explanation: the ontic conception. In A. Hutteman, M. Kaiser (eds.), Explanation in the biological and historical sciences. Berlin: Springer.
Glennan, S. 2005. Modeling mechanisms. Studies in the History and Philosophy of the Biological and Biomedical Sciences 36(2):375-88.
Illari, P. (2013). Mechanistic explanation: integrating the ontic and epistemic. Erkenntnis, online first, DOI 10.1007/s10670-013-9511-y.
Love, A. & Nathan, M. (2015, forthcoming). The idealization of causation in mechanistic explanation. Philosophy of Science.
Machamer, P., Darden, L., & Craver, C. (2000). Thinking about mechanisms. Philosophy of Science, 67, 1-25.
Salmon, W. (1984). Scientific explanation: three basic conceptions. Proceedings of the biennial meeting of the philosophy of science association 2: 293-305.
Strevens, M. (2008). Depth: an account of scientific explanation. Harvard University Press.
Van Eck, D. (2015). Reconciling ontic and epistemic constraints on mechanistic explanation, epistemically. Axiomathes, 25 (1), pp. 5-22. DOI 10.1007/s10516-014-9243-x.
Wright, C.D. (2012). Mechanistic explanation without the ontic conception. European Journal for the Philosophy of Science 2: 375-394.
Wright, C.D. (2015, forthcoming). The ontic conception of scientific explanation. Studies in History and Philosophy of Science.

 

Naftali Weinberger

Explanation in Causal Modeling

In their groundbreaking book Discovering Causal Structure, Glymour, Spirtes, Scheines and Kelly (1987) present an algorithm for choosing among causal
models based on probabilistic evidence. One principle upon which the algorithm relies is Spearman’s Principle, which says that one should prefer models that entail certain probabilistic facts for all values of their free parameters. The “free parameters” in a causal model represent the strengths of the causal
relationships. The authors defend Spearman’s Principle on the grounds that models that predict the relevant probabilistic facts for a wider range of parameter values are more explanatory.

Discovering Causal Structure pioneered a large literature on causal inference using Bayes nets. Although Spearman’s Principle is no longer referred to by name, the Causal Faithfulness Condition plays the same role in more recently developed modeling methods. Yet the more recent literature on causal modeling has made no appeal to the explanatory virtues of causal models. In this talk, I clarify the explanatory basis for Spearman’s Principle and advocate the reintroduction of explanatory principles into contemporary discussions of causal modeling.

The one philosopher who has provided an explanatory defense of Spearman’s Principle is Marc Lange (1994). According to Lange, models that entail certain features of the probability distribution for all values of their free parameters leave fewer facts unexplained than those that do not. If a model only entails the relevant features for some values of its parameters, this allegedly raises the explanatory question of why the parameters have those particular values and not others. Taking a step back, the general explanatory principle Lange invokes is that we should prefer theories that do not treat certain facts (in this case, the values of parameters) as brute. The problem with this general principle, which Lange considers but never adequately addresses, is that no theory is able to provide an explanation for all facts in a domain, and we only sometimes count a theory’s inability to explain a particular phenomenon against it. Without some criterion for distinguishing between facts that do and do not call for explanation, Lange’s principle is not helpful.

In presenting Spearman’s Principle, Lange abstracts away from its role in causal modeling procedures. In doing so, however, he makes the principle too general to be useful. By looking more carefully at the role the principle plays in causal modeling, we can propose a more plausible explanatory principle. Spearman’s Principle is motivated not by the idea that the particular values of parameters require explanation, but rather by the idea that fine-tuned combinations of parameters do. The models that Spearman’s Principle treats as deficient are those in which one must stipulate that distinct causal relationships between a cause and its effect have equal and opposite strengths. Using widely accepted assumptions of causal inference, I show that this type of coordination does require an explanation, such as a common cause. The discussion reveals the importance of paying careful attention to modeling assumptions in evaluating a model’s explanatory virtues.
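[Editorial illustration.] The kind of fine-tuned coordination at issue can be made concrete with a minimal numerical sketch, not drawn from the abstract itself: a linear structural equation model in which two causal paths from X to Y have equal and opposite total strengths, so that the model entails zero covariance between X and Y only for this one coordinated setting of its free parameters. All variable names and parameter values below are illustrative assumptions.

```python
# A faithfulness-violating linear SEM: X -> Z -> Y and a direct path
# X -> Y whose strength is fine-tuned to cancel the indirect path.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

a = 0.8          # strength of X -> Z (illustrative value)
b = 0.5          # strength of Z -> Y (illustrative value)
c = -a * b       # direct X -> Y strength, chosen so a*b + c = 0

# Structural equations with independent standard-normal noise terms.
X = rng.normal(size=n)
Z = a * X + rng.normal(size=n)
Y = b * Z + c * X + rng.normal(size=n)

# X causes Y along two paths, yet the total effect a*b + c is zero,
# so the sample covariance of X and Y is (approximately) zero.
cov_xy = np.cov(X, Y)[0, 1]
print(round(cov_xy, 2))  # → 0.0 (up to sampling noise)
```

Only this exact cancellation, c = -a*b, produces the vanishing covariance; any perturbation of the parameters destroys it. This is the sense in which such models are "fine-tuned," and why Spearman's Principle (and the Causal Faithfulness Condition) prefers models whose probabilistic entailments hold for all parameter values.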