Three graduate students, Chuan-Ya Liao, Hong-Ui Tenn, and Meng-Yun Hsu, and one graduate, Fun Chi Chang, all supervised by Professor Karen Yan, submitted papers to the 9th Biennial Conference of the Society for the Philosophy of Science in Practice, and all four papers were accepted. They will have the honor of giving talks at the University of Ghent from 2 to 4 July 2022. The titles and abstracts of their papers are below.
Cross-Disciplinarity in the Empirically-Informed Philosophy of Mind
Karen Yan, Chuan-ya Liao
Empirically-informed philosophy of mind (EIPM) has become a dominant research style in the 21st century (Knobe, 2015). However, the extant literature lacks a systematic empirical description of EIPM. Moreover, though EIPM is essentially a form of cross-disciplinary research, it has not yet been analyzed as such. In this paper, we aim to fill these gaps in the literature and provide quantitative and qualitative descriptions of EIPM to enhance our understanding of its research practice. Our quantitative analysis uses a scientometric tool called co-citation network analysis (Chen et al., 2010). Our qualitative analysis draws on insights from the philosophy of interdisciplinarity (Klein, 2010; Grüne-Yanoff, 2016; Mennes, 2020; Pohla et al., 2021).
Co-citation network analysis employs statistical methods to map out the internal intellectual structure (i.e., the different research themes) of a given body of literature. We analyzed 2,761 philosophy journal articles published between 1950 and 2019, together with the 11,794 articles cited by them. We found six statistically significant clusters of articles with different research themes. We then manually picked the three most-cited scientific articles within each cluster and downloaded all philosophy journal articles citing these selected scientific articles (267 philosophy articles in total).
In the qualitative stage, we closely read these 267 philosophy articles to analyze how they cite the targeted scientific articles (and related articles, i.e., articles published by the same authors). Based on the literature on the philosophy of interdisciplinarity and our reading of the citing practices of those 267 articles, we developed five categories of cross-disciplinarity: encyclopedic multi-disciplinarity (MD), contextualizing MD, empirical interdisciplinarity (ID), methodological ID, and theoretical ID. These five categories structure the presentation of our qualitative results. Our results show that the three most common citing practices are encyclopedic MD, contextualizing MD, and theoretical ID, though the distribution of these three types varies across clusters.
In conclusion, our empirical description of EIPM shows that cross-disciplinarity in EIPM often does not aim at integration with scientific research at the level of experimental practice and methodology, but rather at the conceptual level. We will also raise some worries about how philosophers integrate philosophical conceptual practice with scientific conceptual practice.
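The counting step at the heart of co-citation network analysis can be sketched in a few lines. This is a toy illustration only, not the authors' actual pipeline (which uses dedicated scientometric tooling and statistical clustering); all article identifiers below are invented:

```python
from itertools import combinations
from collections import Counter

# Toy reference lists: each citing article maps to the works it cites.
# All names are hypothetical, for illustration only.
reference_lists = {
    "phil_article_1": ["sci_A", "sci_B", "sci_C"],
    "phil_article_2": ["sci_A", "sci_B"],
    "phil_article_3": ["sci_B", "sci_C"],
}

# Two works are co-cited whenever they appear together in one reference
# list; their co-citation count is the number of lists containing both.
cocitations = Counter()
for refs in reference_lists.values():
    for pair in combinations(sorted(set(refs)), 2):
        cocitations[pair] += 1

print(cocitations[("sci_A", "sci_B")])  # cited together by articles 1 and 2 -> 2
```

The resulting pair counts define a weighted network over the cited works; clustering that network is what surfaces the distinct research themes described above.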
Causal Complexity and the Causal Ontology of the Health-Related Quality of Life Model
Patient-centered care (PCC) promotes the kind of healthcare that values patients’ rights, perspectives, and autonomy. Clinical practitioners usually employ health-related quality of life (HRQL) measurement tools to help them assess how well they implement PCC. HRQL is a construct that comprises different dimensions of patients’ health conditions, such as biomedical factors, functional status, general health perception, and overall quality of life (McClimans, 2019; Wilson and Cleary, 1995). Developing an HRQL measurement tool requires a theoretical model, and Wilson and Cleary (1995) developed the most widely used model informing the design and development of such tools (Bakas et al., 2012).
In this paper, I will point out that Wilson and Cleary’s model implicitly instills a causal bias into current HRQL measuring practice, even though they do not explicitly endorse any causal ontology in their model (1995, p. 60). Based on my literature analysis, most HRQL research guided by Wilson and Cleary’s model tests the same type of causal hypothesis, i.e., from biomedical factors to non-biomedical factors. Hypotheses about how non-biomedical factors cause biomedical factors are rarely investigated. This causal bias is an obstacle to implementing PCC because it implicitly directs researchers’ attention away from how patients’ values, preferences, and overall quality of life can causally affect their HRQL.
To rectify this implicit causal bias that impedes PCC implementation, I will propose a way to strengthen the causal ontology of Wilson and Cleary’s model, employing Rocca and Anjum’s (2020) notion of causal complexity. According to Rocca and Anjum, causal complexity means that variables from different dimensions of a patient can cause each other or jointly cause an illness. I propose changes to how Wilson and Cleary represent causal connections in the diagram of their theoretical model. These changes will provide clear guidance and motivation for clinical researchers to investigate how patients’ values, preferences, and overall quality of life can causally affect their HRQL.
Epistemic Iteration as the Process of Interdisciplinary Knowledge Production
Philosophers of scientific practice have turned their focus from epistemic objects to epistemic activities. In the context of interdisciplinary knowledge, the philosophical focus has shifted from the propositional contents of knowledge to the process of knowledge production (Nersessian, 2019; MacLeod, 2018). This paper explores possible mechanisms for producing interdisciplinary knowledge. Two candidates are considered: the exemplar-based approach (Shan, 2018) and the process of epistemic iteration (Chang, 2016). Both were initially proposed as ways of analyzing the history of science. In this paper, I will examine how suitable each is for analyzing interdisciplinary knowledge production.
Let us first apply Shan’s notion of exemplar to the process of interdisciplinary knowledge production. Shan (2018) defines an exemplar as “a set of contextually well-defined research problems and the corresponding solutions” (p. 11). Exemplars are supposed to function as epistemic targets in the analysis of scientific practice. By identifying the characteristics and constituents of exemplars, how they are constructed, and how they guide subsequent research, philosophers can reconstruct the process of interdisciplinary knowledge production as exemplars are transferred to another discipline.
However, using this approach to reconstruct the process of interdisciplinary knowledge production might be problematic. MacLeod (2018) argued that disciplinary practice is domain-specific in the sense that researchers are trained to focus on a specific domain of epistemic targets so that their problem-solving practice remains cognitively manageable. This domain-specificity, however, also generates cognitive obstacles when disciplinary researchers engage in cross-disciplinary activities. Exemplars are domain-specific as well. Though analogical reasoning might sometimes help, it remains unclear what the exact mechanism of interdisciplinary knowledge production is.
I will show that Chang’s notion of epistemic iteration provides a way to deconstruct the domain-specificity of disciplinary practice and to scaffold the cognitive resources conducive to successful interdisciplinary knowledge production.
The Importance of Indicating How Researchers Maintain Trustworthiness in Interview-Based Research
Typically, when writing research papers, people tend to present problems or difficulties that have already been solved while staying quiet about unsolved ones. In some cases, however, revealing more details about unsolved difficulties is important, especially when the research data cannot be replicated or recollected. This paper argues that it is important to reveal how researchers cope with a specific difficulty in interview-based research: maintaining trustworthiness. Since trustworthiness is crucial to an interview and directly influences the data, it confronts the interviewer with a dilemma: how active should they be in maintaining trustworthiness while keeping a proper distance from participants so as not to take part in answering the questions during data collection?
To demonstrate my argument, I will first introduce Anna Alexandrova’s (2017) examination of measures of well-being and her improved proposal, which illuminates the issues of validating measures. I will then show that even though Alexandrova does not directly address the validation of interviews, we inevitably face it if we try to put her proposal into practice. Additionally, I will illustrate the specific difficulties we might encounter, drawing on my experience of conducting open-ended interviews with 40 patients about how they understand a measure of well-being, the Spiritual Index of Well-Being.
This paper is divided into three parts. First, I will summarize Alexandrova’s examination of measures of well-being and her improved proposal in order to introduce the issue of construct validation. Then, instead of discussing how to put her proposal into practice as a whole, I will focus on the process of collecting data and show that adding an interview procedure is the best way to practice her proposal. Finally, I will point out at least three difficulties that interviewers might encounter. Without handling these difficulties properly, researchers would collect the data they want, but in a way that violates the spirit Alexandrova’s proposal aims to preserve: the spirit of accommodating the perspective of the subjects.
The three difficulties are as follows. First, the constantly changing cognitive status of participants, which makes it hard to identify their understanding exactly. Second, the complexity of maintaining trustworthiness: interviewers need to actively maintain trustworthiness with participants while being cautious not to take part in answering the questions. Third, the lack of restrictions on how interviews are to be used: without proper guidance, interviewers may too easily act at their own discretion in mass application, from collecting data to manipulating it for efficiency.

I will conclude that when the target of interview-based research is relevant to issues of validation, it is better to indicate more details about how researchers maintain trustworthiness, both successfully and unsuccessfully. By doing so, we not only gain a better understanding of the complexity of interviews but can also rethink the concept of validation in practice.