Measurement at the Crossroads


 

TABLE OF CONTENTS

Panel 1 (symposium):
Computation and measurement at the LHC: managing complexity in high energy physics experiments

Panel 2: Historical foundations of the philosophy of measurement

Panel 3: Standardization at issue

Panel 4: Measurement, intersubjectivity and trust

Panel 5: Revisiting the coordination problem

Panel 6: Measurement, social norms, and public health policies

Panel 7: Investigating models of social measurement

Panel 8: The Making of instruments and standards of measurement

Panel 9: Managing data: Three studies

Panel 10: Measurement practices: from state regulation to mathematical guidance

Panel 11: Theory dependence, models and idealization in measurement

Panel 12: Errors of measurement in historical perspective

Panel 13: Measurement issues in the life sciences

Panel 14 (symposium): The measurement of non-quantitative properties in the human sciences

Panel 15: Reconsidering the Representational theory of measurement

Panel 16: Constructing measurement: quantifications, institutions, and numerical notations

 ABSTRACTS


 

Panel 1 (symposium)
Computation and measurement at the Large Hadron Collider: managing complexity in high energy physics experiments

Chair: Thomas Coudreau (University Paris Diderot)
Room: Luc Valentin 454A


The Large Hadron Collider (LHC) at CERN in Geneva, Switzerland, is one of the most complex experimental set-ups ever constructed, and the largest and most powerful particle accelerator built to date. The LHC consists of a 27 km ring with four detectors located at four collision points. Each detector is run independently by an experimental collaboration: ALICE, ATLAS, CMS, and LHCb (with over 3000 members each in ATLAS and CMS, and over 700 members each in ALICE and LHCb). Many diverse measurements have been completed, are currently being attempted, and have been planned for the next two decades. This symposium will explore the complexity of the measurements attempted by the experimental collaborations at the LHC and the epistemic roles of computer simulations and machine learning.

------------------

Sophie Ritson (Alpen-Adria-Universität Klagenfurt)
Measurement and machine learning at the Large Hadron Collider (LHC): the Higgs self-coupling as a case study

In 2012, two experiments at the LHC, ATLAS and CMS, announced the observation of a new particle. Both experiments then conducted measurements of the particle’s properties to determine whether it was the predicted Higgs boson. This paper focuses on the measurement of the self-coupling of the Higgs boson (the Higgs boson’s interaction with itself). There is a significant increase in complexity: the measurement of self-coupling requires, as just one step, the identification of two simultaneous Higgs bosons in the detector. Physicists at the LHC are attempting to use machine-learning techniques to manage this complexity, presenting an opportunity to examine attempts at automation in measurement. Currently pursued by ATLAS and CMS in parallel, the measurements may take as long as 20 years and are beset by institutional and epistemic difficulties. These complex co-evolving processes are occurring within institutions dedicated not only to developing and regulating measuring procedures, but also to designing and running experiments.

------------------

Florian Boge (Bergische Universität Wuppertal)
How to infer from simulated measurements?

Computer simulations (CS) are involved in virtually every step of the highly complex measurement procedures at CERN’s Large Hadron Collider (LHC). This poses the question of how these CS function in generating knowledge of the measured quantities. In my talk, I will first defend a reasonable view that mediates between extremes and understands CS as experiments on computers. This implies an emphasis on the inferences back to the world that we draw from CS in order to generate the desired knowledge, which in turn raises the question of what kind of inference is at stake. In a second step, I will use a measurement of the top quark mass by the ATLAS experiment at the LHC as a case study to demonstrate that these inferences are abductive inferences by appeal to analogy, which I call the abduction thesis (AT). I will then argue that AT holds for a well-defined range of scientific contexts.

------------------

Paul Grünke (Karlsruher Institut für Technologie)
The epistemic status of experimental measurements involving computer simulations

In debates about computer simulations, the thesis that computer simulations can be epistemologically on a par with experiments remains controversial. Morrison (2009) argues that, in the context of experimental measurement, simulations can in some instances be treated as having the same epistemic status as experiments. Massimi and Bhimji (2015) argue similarly, relying on details of the Higgs discovery and measurement. Others argue for an epistemic privilege of the experiment. In this talk, I will argue that these authors disagree with one another because they have different understandings of the notion of being “epistemologically on a par”. Depending on these understandings, as well as on the context of application, computer simulations can be understood as having the same epistemic status as experiments or not. I will use the ATLAS experiment at the LHC as an example of a case in which experiments and computer simulations interact.

 


 

Panel 2
Historical foundations of the philosophy of measurement

Chair: Nadine de Courtenay (University Paris Diderot)

Room: Malevitch 483A

------------------

Michael Heidelberger (Eberhard Karls Universität Tübingen)
Ernst Mach's Theory of Measurement

In his Principles of the Theory of Heat (1896), Ernst Mach developed a theory of measurement that opposed Helmholtz’s modern approach in certain respects. Instead of taking length measurement as paradigmatic for measurement in general, he based it on the measurement of temperature. In my talk, I shall outline the main features of Mach’s theory. Mach defended a nominalist approach and maintained that an unobservable attribute (like temperature) can only be measured if it is conventionally tied to an observable physical characteristic (Merkmal) (like volume expansion) that replaces our sensations (of heat). It is maintained that the representational theory of measurement allows for several different philosophical interpretations (as, e.g., the probability axioms do). Mach’s theory can then be classified as a correlative interpretation, alongside classical, additive, and operational ones. Einstein followed Mach when he reconsidered the measurement of space and time.

------------------

Francesca Biagioli (University of Vienna)
Hermann von Helmholtz and the Quantification Problem of Psychophysics

Hermann von Helmholtz’s theory of measurement offered a new perspective on quantification in the nineteenth-century context. I will address in particular Helmholtz’s position in the debate about the measurability of sensations which followed Gustav Fechner’s and Wilhelm Wundt’s attempts to measure psychological processes. Helmholtz replaced the metaphysical distinction between extensive and intensive quantities with a relative distinction between additive and nonadditive magnitudes. The qualities of sensation offered an example of attributes for which composition according to additive principles was unknown, although a different (indirect) form of quantification could not be excluded. I suggest that the advantage of Helmholtz’s approach over both the proponents and the opponents of psychophysics – who argued for the incommensurability of sense qualities per se – lies in the fact that Helmholtz offered a dynamical perspective on quantification and measurement practices.

------------------

Pablo Acuña (Pontificia Universidad Católica de Valparaíso)
Measuring the Epistemology of Geometry

The rise of non-Euclidean geometries in the 19th century prompted a shift from an aprioristic stance towards the epistemology of (Euclidean) physical geometry to an empiricist one: questions about the geometric structure of physical space came to be conceived as a matter of experience and measurement. The most elaborate philosophical account of the empiricist view can be found in the work of Riemann and, especially, Helmholtz. By the end of the 19th century Poincaré introduced his famous argument for the conventionality of geometry. The French mathematician rejected the aprioristic stance, but by claiming that the question of the geometric structure of physical space is a matter of convention – so the usual interpretation goes – he also rejected the empiricist view. I argue that this traditional interpretation is wrong. Poincaré’s conventionalism not only does not contest Helmholtz’s empiricism, but actually presupposes it. That is, the conventionalist thesis relies on an empiricist view of the epistemology of geometry. For Poincaré, just as for Helmholtz, the question of the geometric structure of physical space is a matter of measurement.

 


 

Panel 3
Standardization at issue

Chair: Youna Tonnerre (University Rennes 1 & University Paris Diderot)
Room: Mondrian 646A

------------------

Aashish Velkar (University of Manchester)
The Cultural and Economic Consequences of Global Metrological Standardisation

Metric reforms of the 18th century disconnected anthropocentric measurement units from artefacts and tied them to fundamental constants. The resulting metrology, a ‘technology of precision’, fundamentally changed the way people now communicate at a scientific and social level. Globally standardized metrology, initially stemming from colonialism, has enabled a global economic system based on institutionalism and cooperation. Metrology is also a ‘technology of coordination’, allowing industrial firms to compete more effectively and internationally. Metrology as a ‘technology of governance’ has enabled governments to devise newer ways of governing economic and social life. Metrology, whilst enabling a shared cognition around values of precision, still retains deep political and cultural significance. Metrological units have been stripped of any social significance by tying them to quantum phenomena that most people rarely think about. But measurements retain a social and cultural meaning similar to that found in historical societies, which used human forms to measure the world they experienced.

------------------

Rebecca Jackson (Indiana University)
“The Uncertain Method of Drops”: How a Non-uniform Fluid Unit Survived the Century of Standardization

In the early 19th century, British medical practitioners denounced the “drop” as too variable and uncertain for use in dosages, and attempted to replace it with a standardized fluid unit, the minim. To explain how the non-uniform drop remained in common use while the standardized minim became obsolete, I first discuss the challenges unique to measuring small amounts of fluid and the devices designed to address them. Second, I identify two audiences for fluid units, the discursive audience and the practical audience, and explain how drops communicated to the audience of practitioners. Third, I argue that, at a time when medicine was turning towards more patient-centric and evidence-based practices, drops were well-suited for investigating and communicating a gradual process where attention to individual outcomes was important for verifying the proper dosage. This study exemplifies how examining non-standard measurement practices can be instructive for understanding the role and function of standardization.

------------------

Edward Gillin (University of Cambridge)
Mathematicians, musicians, and the measurement of musical pitch in mid-Victorian Britain

In 1859 the celebrated astronomer John Herschel declared that the determination of a standard musical pitch would provide the third essential measure of nature (along with time and space). While Napoleon III had decreed 522 vibrations for a C to be France’s national musical standard, Herschel believed that 512 was more correct. In 1859 the Society of Arts established a committee to investigate and agree on a uniform musical measure which could be implemented throughout the nation and Empire. However, the pitch eventually agreed on was a compromise between scientific authorities, musical performers, instrument builders, mechanics, and politicians. My paper explores the heated debates between these different groups and argues that commercial interests, musical practices, and political notions of liberalism ensured that it was hard to apply mathematical theory to the construction of an accurate measure of sound.

 


 

Panel 4
Measurement, intersubjectivity and trust

Chair: Fabien Grégis (Tel Aviv University)
Room: Luc Valentin 454A

------------------

Andrew Maul (University of California, Santa Barbara), Luca Mari (Università Cattaneo), Mark Wilson (University of California, Berkeley)
Intersubjectivity of measurement across the sciences: Unit definition and dissemination

Metrological traceability to a unit is a critical condition for measurement results, making them context-independent, and thus identically interpretable by different measurers. We have proposed to call this the intersubjectivity of measurement, and consider it the characterizing feature that explains the societal role of measurement. We sketch the three-step evolutionary process followed in physical measurement (up to the forthcoming “revised SI”) to guarantee intersubjectivity, and in the light of the principles underlying this evolution we explore the problem of intersubjectivity in the measurement of properties in the human sciences, for which different and original solutions have been found. The fact that, despite such differences, traceability to units can be structurally guaranteed in both physical and non-physical measurement and can be presented in a single and consistent framework is a significant step towards the development of a conception of measurement across the sciences.

------------------

Rafael Lattanzi Vaz (National Institute of Metrology, Quality and Technology, Brazil)
Metrological Traceability and the bridge between Reliability and Trust

Metrology, ‘the science of measurement and its applications’, aims to promote the universality, uniformity, reliability, objectivity and long-term stability of measurement standards for scientific, sociopolitical or economic applications. The route to meeting this wide range of needs is to set out reliable measurement standards and a trustworthy interinstitutional background. In this sense, good laboratory practice (GLP) protocols, certificates and intergovernmental agreements play the role of technologies just as much as, e.g., measurement systems and statistical models do. I argue that metrology establishes a liaison between knowledge production and dissemination through a key concept: metrological traceability. Starting from the particular shifts in its definition, I highlight how the concept constitutes a workable heterogeneity: it behaves as both a property and a systematization tool, in which reliability and trust are mutually dependent but not interchangeable. Consequently, shifts in the notion promote new understandings of the roles of epistemic and non-epistemic values present in metrology.

------------------

Florence Hsia (University of Wisconsin-Madison)
Measuring a Chinese eclipse

Making time and space commensurable across localized practices and cultural conventions has long been the work of astronomers, whose métier depends so critically on the judicious use of observational data made by others. Such efforts at metrological reconciliation intersect in the field of historical astronomy, whether undertaken by Ptolemy in establishing models of planetary motion or by modern practitioners seeking to establish the rate of the earth’s secular acceleration. In the course of the seventeenth century, European scholars began to work closely with Chinese astronomical material. This paper assesses the emergence of metrological assumptions and interpretive strategies that European scholars used to domesticate an apparently distinct and largely unfamiliar scientific tradition through their accumulation and analysis of Chinese observational data.

 


 

Panel 5
Revisiting the coordination problem

Chair: Francesca Biagioli (University of Vienna)
Room: Malevitch 483A

------------------

Rick Shang (Washington University in St Louis)
Does the Coordination Problem Exist: Measurement in Neuroimaging

Contemporary philosophers of science, such as van Fraassen, Chang, and Tal, have revived the discussion of the coordination problem. The coordination problem is a chicken-and-egg problem in which scientific measurement and scientific concepts mutually presuppose the prior success of each other. In this paper, I use the history of Positron Emission Tomography as a case study to argue that the coordination problem does not exist. The coordination problem does not exist because scientists do not create measurements for an entirely foreign or largely unknown concept. Instead, scientists iteratively and gradually extend their measurements from well-known phenomena to unknown phenomena.

------------------

Tzur Karelitz (National Institute for Testing and Evaluation), Charles Secolsky (Rockland Community College), Thomas Judd (United States Military Academy)
The Evolution of Face Validity from Inception to Reinstatement

The roots of face validity are traced up to modern-day psychometrics in order to provide future direction for its utility to researchers. Although dismissed by measurement theorists as having little psychometric value, there now exist sound reasons for its reinstatement. Modern-day theorists hold that validators construct logical arguments requiring additional consideration of alternative uses and interpretations of test scores in order to possibly refute these same arguments. If laypersons have the opinion that the test is too expensive or unfair, the test may perish, making validity study irrelevant. Therefore, it is important to collect non-expert perceptions of test content, response processes, scoring procedures, score reporting, uses and interpretations, and even societal impact. Face validity is not evidence of validity, but a lack of it undermines the test’s purpose and consequently its validity. Therefore, perception-based evidence can support the validity of test scores by showing that they do not lack face validity.

------------------

William P. Fisher, Jr. (University of California, Berkeley)
Blending Objectivity and Subjectivity in Measurement: Benjamin Wright's Personal Approach to Learning

Between 1947 and 1958, Ben Wright moved from physics and computer science to certification as a psychoanalyst supervising teachers of children with autism. In 1958, two years before he met Rasch, motivated by dissatisfaction with education's research methods, Wright devised what he called a personal approach to learning. Following Freud, instead of identifying subjectivity only to try to remove it as a factor influencing results, Wright methodically used it to inform "a course of action that includes a feeling for what moves the child." Wright's personal approach foreshadows his later work on construct mapping, uncertainty and location estimation, and model innovations in the context of Rasch's theory of measurement. An important consequence of this work is the way it provides "occasions for the phenomena to differ," to use Latour's phrase, and so contributes, in Galison's terms, to a "joint epistemic project" of "mutually conditioning" subjective and objective forms of knowledge.

 


 

Panel 6
Measurement, social norms, and public health policies

Chair: Claude-Olivier Doron (University Paris Diderot)
Room: Mondrian 646A

------------------

Moran Levy, Gil Eyal (Columbia University)
Politicizing Imprecision

Precision medicine is one of the predominant reforms of our time. In the name of ‘precision’, scientists, doctors, patients, legislators, regulators and industry are moving away from the aspirations and methodologies of ‘Evidence-Based Medicine’ and towards a new paradigm of research and healthcare. How did we move away from randomized clinical trials aiming to generalize and standardize knowledge so that we can treat large heterogeneous populations, towards developing ‘smart drugs’ that would work for one in a thousand patients? Rather than assuming that this is simply ‘progress’, we argue that this transition calls for a sociological explanation. Building on academic papers, popular science books, op-eds and editorials, we propose that the problematization of scientific imprecision by various actors in various disciplines has served as a political tool to transform the priorities and agenda of biomedical research and healthcare in the US since the 1990s.

------------------

Marion Boulicault (Massachusetts Institute of Technology)
Gender and the measurement of fertility: a case study in critical metrology

Human fertility is in an apparent state of crisis. In July 2017, scientists reported that sperm counts among men from North America, Europe and Australia have decreased by 50 – 60% since 1973 (Levine et al. 2017). For women, the story is bleak and familiar: women’s fertility decreases with age, yet women are waiting longer and longer to have children (Kincaid 2015). Undergirding these crisis narratives is an unstated assumption: fertility is measurable. That is, scientific reports that fertility is declining presuppose that it’s possible to successfully measure and compare fertility diachronically. I investigate this assumption by examining the practice of fertility measurement, i.e. the standards, methods and instruments by which the phenomenon of fertility is quantified. By comparing two current gold standard fertility measures – semen analysis in men, and ovarian reserve testing (ORT) in women – I show how gender ideologies play a role in constructing fertility as a measurable phenomenon.

------------------

Nicolas Rasmussen (University of New South Wales, Sydney)
Measuring Fatness and its Hazards: Precision Adipometry versus a 1950s Public Health Campaign against Obesity

Systems and units of measurement can be used not only for collaboration, but as weapons against alternative systems and the facts of nature built upon them. Here I discuss one such metrological offensive, against epidemiological findings concerning the dangers of obesity. For two decades the physiologist Ancel Keys fought to replace the measure of obesity established by the insurance industry through half a century of extensive statistical research, body weight relative to height, with the alternative measure of “adiposity” or body fat content. His elaborate project of precision adipometry using caliper measures of subcutaneous fat was never shown to be superior to relative weight as a disease predictor, but it nevertheless served to undermine the medical community’s confidence in high body weight as a heart disease risk factor and to advance the status of Keys’s favoured risk factor (serum cholesterol), and it ultimately led to the establishment of the Body Mass Index as the standard measure of obesity.

 


 

Panel 7
Investigating models of social measurement

Chair: Alain Leplège (University Paris Diderot)
Room: amphithéâtre 310 (ENSA Paris-Val de Seine)

------------------

Leslie Pendrill (Research Institutes of Sweden), Stefan Cano (Modus Outcomes), Theresa Köbe (Charité - University Medicine Berlin), Jeanette Melin (Research Institutes of Sweden), Ariane Fillmer (Physikalisch-Technische Bundesanstalt)
Restitution of ability and difficulty from decision-making: the metrology of human-based perceptions

Fundamental reappraisal of metrology, beyond superficial analogies with traditional measurement instruments, is needed if ordinal properties – such as ‘counted fractions’ (bounded by zero and one), performance metrics for ability tests, customer satisfaction, etc. – are to be included in an extended quantity calculus for a new SI. Regarding a human being (or other ‘probe’) as a Measurement Instrument within Rasch measurement theory (i.e., postulating a Generalised Linear Model link function z = θ – δ between a ‘probe’ attribute θ and a ‘target’ attribute δ) not only handles the ordinal properties of the measurement system response, Psuccess, but also provides attribute separability in restitution, which is essential to underpinning measurement traceability and uncertainty. Examples from person-centred care (e.g., of Alzheimer’s disease patients) will demonstrate causal Rasch models relating task difficulty and patient ability to explanatory variables such as test sequence entropy and brain atrophy, thereby enabling novel certified reference materials for traceability.
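For readers unfamiliar with the formalism: a standard textbook form of the (dichotomous) Rasch model consistent with the link function quoted above – offered here only as a reference formula, not quoted from the talk – writes the success probability as a logistic function of the difference of the two attributes,

\[ P_{\text{success}} = \frac{e^{\theta-\delta}}{1+e^{\theta-\delta}}, \qquad \text{equivalently} \qquad \log\frac{P_{\text{success}}}{1-P_{\text{success}}} = z = \theta - \delta, \]

where θ is the ‘probe’ attribute (e.g. person ability) and δ the ‘target’ attribute (e.g. task difficulty).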

------------------

David Andrich (The University of Western Australia)
The Gaussian distribution as a culmination of the Rasch measurement theory of invariance

The derivation of a distribution of random errors of replicated measurements culminated in the “quadratic exponential law of Gauss”. Gauss appreciated that this distribution did not accommodate the finite range of an instrument. The derivation of a distribution of inferred replicated measurements from Rasch’s theory of invariance also culminates in a “quadratic exponential law”. Rasch’s distribution is a function of an object’s measure, the instrument’s unit, and its finite range. The unit in Rasch’s discrete distribution is the inverse of the variance of the commensurate, continuous Gaussian distribution. The convergence of Rasch’s derivation to Gauss’s suggests that Rasch’s derivation may be as fundamental as Gauss’s. It also suggests that Rasch’s distribution may be applicable in the natural sciences when the unit and range of an instrument need to be accommodated explicitly. The paper illustrates a social science application in which the range of possible measurements is 0 to 100 units.
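As a point of reference for the ‘quadratic exponential law’ invoked here (a standard result, not taken from the talk itself): the Gaussian density of random errors is

\[ f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right), \]

whose exponent is quadratic in the deviation x − μ. On the abstract’s account, Rasch’s discrete distribution shares this quadratic-exponential form, with the instrument’s unit equal to 1/σ², the inverse of the variance of the commensurate continuous Gaussian.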

------------------

Jan Kutylowski (University of Oslo)
Measurement and modelling of categorical variables in the socio-sciences: a comparison of traditional and modern approaches, with prospects for the future

Since categorical attributes present difficulties with respect to quantifiable measurement, S. Stevens postulated restrictions as to the admissible algebraic operations on such variables. Stevens’s dicta are often disregarded through the “illicit” elevation of ordinal variables to an interval level (the “Quantum Leap Fallacy”), which leads to unjustified substantive conclusions. This will be demonstrated in the context of three prominent substantive research frameworks. A probabilistic framework will then be reviewed in terms of two types of ordinality, based on cumulative and hazard probabilities, which lead to special cases of multivariate generalized linear models. Their characterization includes behavioral motivation and a unifying description, with or without a hidden (latent) factor. This leads in turn to the correction of some prevailing misconceptions in the literature. Comments will also be presented on the reasons for the incidence and prevalence of the Quantum Leap Fallacy in the socio-sciences, the resulting epistemic dangers, and viable tactics of amelioration.
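To illustrate the two types of ordinality mentioned above (standard formulations of ordinal generalized linear models, offered only as background and not drawn from the talk): for an ordered response Y with categories k = 1, …, K, covariates x and a link function g,

\[ g\big(P(Y \le k \mid x)\big) = \alpha_k + \beta^{\top} x \quad \text{(cumulative-probability models),} \]
\[ g\big(P(Y = k \mid Y \ge k,\, x)\big) = \alpha_k + \beta^{\top} x \quad \text{(hazard- or continuation-ratio-type models),} \]

with the familiar proportional-odds model arising when g is the logit in the cumulative case.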

 


 

Panel 8
The Making of instruments and standards of measurement

Chair: Jan Lacki (University of Geneva)
Room: Luc Valentin 454A

------------------

Liqun Zhou (Beijing Foreign Studies University & Needham Research Institute, Cambridge)
A DIY Water Clock (Clepsydra) from a Chinese Text of Yuan Dynasty

The text “The Nimble Way of the Water Clock (Kelou jiefa 刻漏捷法)” records a kind of do-it-yourself (DIY) pot water clock (Yulou 盂漏) made by intellectuals of the Yuan dynasty or earlier. It describes how to make two water pots, the small copper one floating on the surface of the bigger one, with water springing up into the small pot from outside through a hole in its bottom. This design resembles the Indian water clock, which was in use for more than 1500 years, rather than the typical three-layer water-clock system of China. The text also discusses making the chips, how to add and subtract Taiping coins to control the water flow, and how to calculate the 24 Solar Terms, all from the traditional Chinese system. Some scholars regard the device as a Quanzhen Taoist invention, while others regard it as a contribution of Buddhist monks.

------------------

Dieter Hoffmann (Max-Planck-Institut für Wissenschaftsgeschichte)
Wilhelm Kösters (1876–1950) and the development of a new standard of length based on the wavelength of light

The physicist Wilhelm Kösters (1876-1950) was among the pioneers who sought to define length by the wavelength of light and thereby to make the metre standard independent of time, place and external influences. In his research he could build on the work of the American physicist Albert A. Michelson as well as his French colleagues Jean Benoit, Charles Fabry and Alfred Perot, who had determined the wavelength of light with great accuracy by interferometric methods and shown that it could be used as a standard of length. At that time, however, this worked only in principle. In overcoming the serious difficulties of the instruments and measuring methods then in use, Kösters and his laboratory at the PTR became instrumental, and this work led to numerous innovative instruments and devices. The talk will trace the path taken by Kösters and other researchers towards the fundamentally new definition of the metre based on the wavelength of light, and will also portray Kösters as a metrologist.

------------------

Eckhard Wallis (Institut de Mathématiques de Jussieu - Paris Rive Gauche)
Making clocks – Research at the Laboratoire de l'Horloge Atomique during the 1960s

This paper proposes an historical study of the role of experimental research on atomic clocks during the transition from astronomical towards “atomic” timekeeping in the 1960s. I will analyze the work of the French Laboratoire de l’Horloge Atomique (LHA) between its foundation in 1958 and the international adoption of the “atomic second” in 1967. Bringing together disciplinary cultures like chronometry, radio engineering and quantum physics, the LHA had yet to find a stable balance between the investigation of physical phenomena and metrological routine work. The place of empirical knowledge on atomic clocks in the international timekeeping system has been analyzed by Eran Tal in “Making Time” (2016). While Tal’s study focusses on a stabilized and formalized metrological system, this paper aims to show how research on atomic clocks found its place in a metrological system still under construction.

 


 

Panel 9
Managing data: Three studies

Chair: Patrice Delon (University Paris Diderot)
Room: Malevitch 483A

------------------

Chris Partridge, Sergio De Cesare, David Leal, Mesbah Khan, Hayden Atkinson (University of Westminster), Andrew Mitchell
Explaining Measurements to Machines

The 21st century has seen a significant increase in the automation of engineering measurement. In practice, this has led to significant problems. In particular, a variety of conceptual data structures in the measurement domain have emerged ad hoc in response to local requirements without much consideration of semantic interoperability and integration.

------------------

Jean-Baptiste Grodwohl (University of Cambridge)
Measuring Natural Selection in Population Genetics

Is natural selection a process amenable to measurement, and is it worth measuring? The concept of natural selection encompasses different processes affecting the change in frequency of a gene, and these processes can be erratic in nature, as they change through time and space. There thus seems to be little hope of finding typical or invariant properties of natural selection. I analyze here the different functions of measurement and how they came to matter. For a given study system, measuring natural selection can mean detecting selection (distinguishing it from neutrality), identifying the kind of selection at work, quantifying its intensity and identifying the causal factors involved. This talk will discuss more closely the issue of detection, which has resurfaced each time population geneticists accessed a new level of variation, be it chromosome variants (1930s), proteins (1960s), or DNA sequences (1980s).

------------------

Jean-Pierre Llored (Université Paris Diderot)
Investigating Measurement: The Case of Chemical Metrology

This work is the result of studies carried out in many laboratories of analytical chemistry. We will investigate how chemists use various procedures in order to stabilize a measure. Using this study, we will then query the meaning of the ceteris paribus clause in the domain of chemistry. We will show that the clause “all things being equal” encompasses the co-adaptation and the channeling of multifarious fluctuations which, in turn, lead to the very possibility of making holistic inferences. A relational consistency thus stems from a practice of stabilization, the meaning of which will be queried in our talk.

 


 

Panel 10
Measurement practices: from state regulation to mathematical guidance

Chair: Christine Proust (Centre National de la Recherche Scientifique)
Room: Mondrian 646A

------------------

Carlos Gonçalves (University of São Paulo)
Measurement in an Ancient Mesopotamian Loan Archive

This presentation focuses on the archive of Nūr-Šamaš, so called because in its more than a hundred documents Nūr-Šamaš figures as a lender of silver and barley. The archive was produced in the region of the Diyala during the Old Babylonian period (c. 2000-1600 B.C.E.). The specific aims of this presentation are to describe the units of measure used in the archive, how these documents expressed weights and capacities and how capacity values were employed to express interest to be added to the principal amount of loans. Furthermore, the range of values present in the archive informs us about the orders of magnitude involved. Finally, in a few cases the documentation allows us to calculate the total amount borrowed by individuals that appear in more than one contract and, although imperfectly, to follow their trajectories.

------------------

Guy Sechrist (University of Cambridge)
“False Measures”: Seventeenth-Century English Gauging Instruments and legitimizing English Excise

My work focuses on how the standards of measurement and value that the English state established, regulated, and controlled in the seventeenth century for the gauging and assessment of ale casks and barrels played a large role in the process of levying taxes on ale. While my work does address the process by which the state sought to regulate casks, thereby standardizing them and transforming them into units of measurement, my paper examines the ways in which gauging instruments, both before and after the implementation of state-regulated excise, played an even larger role in codifying such standards for tax revenue. Considering this, my paper contends that the establishment of excise in England (introduced in 1643) not only correlates with the introduction of the earliest English gauging instruments and slide rules, but also reveals an important process in which theoretical mathematics came to serve as a practical means for assessing commodities for the state to regulate and tax.

------------------

Jennifer Egloff (Zayed University)
Artisans' Resistance to Geometrical Measurement Techniques in the Early Modern English Atlantic: Challenging the Persistent Notion of Linear Change in Mathematics

This paper challenges long-held perceptions of linear change in mathematics by analyzing early modern English-language didactic literature, which encouraged artisans to adopt geometrical measurement techniques, in place of traditional rule-of-thumb methods. Although Leonard Digges began promoting geometrical measurement in the mid-sixteenth century, there is evidence of ongoing resistance to certain techniques throughout the seventeenth century, and even of certain authors advocating the use of geometrically faulty methods in the later eighteenth century. This paper argues that individuals’ decisions whether to adopt geometrical measurement techniques were likely related to numerous practical concerns, including the temporal and monetary investment required to learn new skills, the risks versus potential rewards of new methods, the criteria by which expertise was defined in their professions, and the logistics of implementing these techniques in practical situations. It shows that the availability of vernacular mathematical instruction did not guarantee that mathematical techniques would be adopted on a popular level.

 


 

Panel 11
Theory dependence, models and idealization in measurement

Chair: Theodore Porter (University of California, Los Angeles)
Room: amphithéâtre 310 (ENSA Paris-Val de Seine)

------------------

Kent Staley (Saint Louis University)
An Epistemological Function for Systematic Uncertainty in Measurements in High Energy Physics

In experimental High Energy Physics (HEP), standard practice requires that reports of measurement results divide uncertainty into statistical and systematic components. I argue in defense of HEP's retention of the statistical/systematic distinction on the grounds of the distinctive epistemological function of systematic uncertainty. I then consider the consequences of this approach for debates over methodology in uncertainty estimation. To what extent can the epistemological function of systematic uncertainty serve as a constraint on the choice of statistical approaches to its estimation (frequentist, Bayesian, hybrid, etc.)? I consider two approaches to this question. The first maintains that systematic uncertainties express quantitatively the uncertain character of any theoretical assumptions relied upon in arriving at a measurement result. The second regards the estimation of systematic uncertainty as a means of securing the premises of a model-based argument for the measurement result.
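As background to the statistical/systematic distinction discussed here (a conventional illustration, not part of the abstract itself): HEP measurement results are typically reported with the two components quoted separately, e.g.

\[ m = \hat{m} \pm \sigma_{\text{stat}} \pm \sigma_{\text{syst}}, \]

and when a single uncertainty is wanted the components are often combined in quadrature, \( \sigma_{\text{tot}} = \sqrt{\sigma_{\text{stat}}^{2} + \sigma_{\text{syst}}^{2}} \), under an assumption of independence.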

------------------

Alessandro Giordani (Università Cattolica del Sacro Cuore di Milano), Luca Mari (Università Cattaneo)
On theory dependence of truth in measurement

The recent focus on modeling has raised some significant questions as to how truth in measurement is to be conceived, challenging the realist position that what is measured is a property of the object under measurement and that truth in measurement is correspondence between what the proposition stating the measurement result says and what is actually the case. Against this, antirealist positions have been developed according to which what is measured is essentially dependent on a theory, so that truth in measurement can be identified at best as truth-in-a-model, thus losing its link with the world. A conceptual framework aimed at clarifying this situation is proposed, starting from an analysis of how the truth value of a proposition stating a measurement result is established, in terms of the distinction between the sense and the reference of an expression. Our conclusion is that, at least in some cases, truth in measurement is partly theory independent.

------------------

Qiu Lin (Duke University)
Idealization and Measurement: A Comparative Case Study

Given the obvious discrepancy between an idealization and the real-world phenomenon it purports to represent, how does the former advance our knowledge of the latter? This paper develops an answer from the perspective of measurement. First, I argue that devising a proper conception of chosen quantities and a corresponding way to measure them are critical to all scientific disciplines. Second, I highlight the connection between idealization and measurement. A comparison between the Keplerian orbit and perfectly spherical molecules is given to illustrate the connection: in research, while every measured deviation from the former indicates hitherto undiscovered details in the world (such as perturbation from a third body), deviation from the latter only betrays failures on our part to measure the molecular constitution. From the point of view of measurement, I conclude, while the former is a legitimate counterfactual basis that enters constitutively into the research, the latter is a mere heuristic device supplying a way of conceptualizing.

------------------

Roman Zdzislaw Morawski (Warsaw University of Technology)
Measurement as abduction

An attempt will be made to interpret measurement in terms of abduction, understood as inference from effects to causes, or from observational data to explanatory theories. It will be argued that:
- The definition of any measurand must refer to a mathematical model of a system under measurement for which this measurand is defined.
- The definition of any measurement act must refer to a mathematical model of a corresponding measuring system.
- The latter model must include an inverse model of the relationship between the measurand and the so-called raw result of measurement.
- The evaluation of the measurement uncertainty must refer to some extended models of the system under measurement and of the measuring system.
It will be shown, moreover, that the key operation underlying any measurement, inverse modelling under uncertainty, is equivalent to quantitative abductive reasoning, which consists in the selection of the best estimate of the measurand from a set of admissible solutions, using a priori information on the measurand and on the measuring system.
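One common way to formalize ‘inverse modelling under uncertainty’ of this kind (an editorial sketch under generic assumptions, not the author’s own formulation): given an observation model y = f(m) + e, linking the measurand m to the raw result y through the measuring-system model f with error e, the abductive step selects the best admissible estimate

\[ \hat{m} = \arg\max_{m \in M_{\text{adm}}} \; p(y \mid m)\, p(m), \]

where the prior p(m) encodes the a priori information on the measurand and M_adm is the set of admissible solutions.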

 


 

Panel 12
Errors of measurement in historical perspective

Chair: Giora Hon (University of Haifa)
Room: Luc Valentin 454A

------------------

Robert Middeke-Conlin (Max Planck Institute for the History of Science)
The limits of measured value in Ancient Mesopotamia

This paper will examine measurement practices in the early second millennium BCE kingdom of Larsa (modern southern Iraq). For an economy to function properly, units of measure must be agreed upon and trusted. Both the standards used to define value and the instruments used to assess value must be understood by the practitioners who assess value so that accountability may be maintained. This paper asks whether and how ancient practitioners understood value assessment, whether they were aware of instrumental limits to observation, and how they coped with these constraints. In the end, several case studies are used to explore the limiting factors of Ancient Mesopotamian metrology and how the ancient practitioners understood and coped with these limits.

------------------

George Borg (University of Pittsburgh)
Accentuating the Positive: Observation and Measurement in Kepler’s Optics

In this talk I reflect on the relationship between Kepler’s efforts to improve astronomical observation and the manner in which he theorizes about the eye. Certain modern interpreters of Kepler’s Optics (1604) have drawn negative epistemological consequences from the theory of vision described therein. According to what I call the “naturalist” interpretation, Kepler purges the theory of any essential role for human agency. In doing so, he would have paved the way for Cartesian skepticism by “estranging” the observer from nature. My goal is not to refute this interpretation, but to suggest a different one that accentuates a more positive epistemological moral. I suggest a “metrological” interpretation, according to which Kepler subordinates his theorizing about vision to the requirements of accurate measurement. On this interpretation, a role for human agency is retained and the method leads to more rather than less certainty. I marshal recent work in the philosophy of measurement (van Fraassen, Baird) to support my view.

------------------

Maarten Bullynck (Université Paris VIII & SPHERE laboratory)
The fine distinctions of error. Getting knowledgeable about errors at the crossroads of theory, instrument and observation

During the Peru expedition, Bouguer and de la Condamine discovered the intricacy of observational error, its persistence and its many forms, in the course of their measurements of the meridian. Their lucid and informal depiction of how errors crept into their everyday practice stimulated discussions during the 18th century, especially in the German states. These discussions engaged notable mathematicians such as Tobias Mayer, A.G. Kästner, J.H. Lambert, R.J. Boscovich and C.F. Gauss, and groups of practitioners such as astronomers, surveyors, instrument builders or even cameralists. Partial theories of errors and methods to diminish errors were developed at the frontier of theory and practice, between the instrument and its usage, between making an observation and using it in a computation. The debates are part of a process of professionalization (of both surveying and mathematics) and would prepare the ground for the 19th-century approach to error.

 


 

Panel 13
Measurement issues in the life sciences

Chair: Céline Lefève (University Paris Diderot)
Room: Mondrian 646A

------------------

Maria Estela Jardim, Nádia Jardim (University of Lisbon)
Measuring body functions at the turn of the 19th Century through serial photography and cinema

At the turn of the 19th century, cinema opened up the possibility of capturing medical phenomena related to body movements. Serial photography of human movement, first produced by Etienne-Jules Marey (1830-1904) in France and Eadweard Muybridge (1830-1904) in the United States, was of key importance for the invention of cinema as well as providing data on the physiology of the human body. Both serial photography and cinema were used to measure, segment and quantify pathological movements in neurological diseases. In the 1920s the Portuguese neurologist and Nobel prize winner in Medicine (1949), Egas Moniz (1874-1955), undertook the task of obtaining serial angiographs in order to measure the speed of blood in the brain with an instrument designed by one of his collaborators, Pereira Caldas. In this paper we will examine medical cases from this period, the late 19th to early 20th centuries, in which measurements were performed by physicians using serial photography and cinema.

------------------

Caterina Schuerch (Ludwig-Maximilians-Universität München)
Quantification – the key to understanding physiological processes

By accurately measuring biological functions and fitting to them the simple equations of chemical kinetics, one can reveal their underlying physicochemical mechanisms. This is, in a nutshell, the methodological premise biophysicist Selig Hecht’s (1892-1947) work on photoreception was based on. I will introduce Hecht’s quantitative experiments, and the model of the mechanism of photoreception he put forward. I will then carve out the factors that enabled quantitative analysis of organismic photoreception in the first place. Thirdly, I will explain the significance of quantification to both Hecht’s mechanism modeling as well as his assessment of the model’s applicability to photoreception across the animal kingdom; from tunicates to clams, man, insects, amphibians, and birds. Finally, I will submit a set of methodological norms that guided Hecht’s measurement practice. These norms are derived from his research actions, his method discussions, and his critique of other researchers’ measurements as misguided, inaccurate, or meaningless.

------------------

Daniel Ott (University of Cambridge)
Can Pain Be Measured? Emerging technologies, epistemological uncertainty, and pragmatic realism

Clinical pain measurements are an essential component of modern medicine’s diagnostic and prognostic procedures. Typically recorded using either a visual analog scale (VAS) or a numeric rating scale (NRS), these pain measures relate a spatial location or number to the perceived amount of pain reported. Newly emerging technologies further complicate this measurement practice, namely functional Magnetic Resonance Imaging (fMRI), through an effort to establish an objective measure of pain directly from neural data. This tension, between the patient’s subjective experience and the purported objective measures found using novel technologies, raises substantial questions about pain measurement, including: what specifically is measured when probing pain experiences? Are psychological states categorically immune to quantification? And who has authority on pain experiences when objective measures are introduced? I use the difficulties encountered through established and emerging measurement practices to highlight inconsistent philosophic pain concepts, and explore the possibilities of a revised notion.

 


 

Panel 14 (symposium)
The measurement of non-quantitative properties in the human sciences

Chair: Mark Wilson (University of California, Berkeley)
Room: Luc Valentin 454A


As metrology broadens its scope to incorporate the measurement practices of human sciences, it runs the risk of inheriting the harmful philosophical baggage of such disciplines’ practices. Specifically, psychometrics has been accused of a quantitative imperative whereby the desire to be recognised as a quantitative discipline has been given priority over scientifically investigating psychological properties on their own structural terms. This imperative has been deepened by the fallacious belief that the presence of ordered differences in these properties, such as different levels of intelligence or academic attainment, necessarily entails the presence of quantitative differences. In this symposium, we will examine a conception of measurement and corresponding models that do not commit the human sciences to this imperative, do not conflate order and quantity, and do not defer to quantitative, statistical theory. Moreover, we discuss how these models may be applied to qualitatively evaluate properties like intelligence and educational attainment.

------------------

Joshua McGrane (University of Oxford)
An inclusive conception of measurement for the human sciences minus the philosophical baggage

Over the past century, conceptions of measurement have diversified and, in some cases, become quite permissive regarding the kinds of properties that are taken to be measurable. For the human sciences, the most impactful contribution in this respect was Stevens’ (1946) paper. However, by hitching quantification to an extreme interpretation of operationalism, Stevens opened the floodgates on the proliferation of questionable measurement practices in the human sciences. In response, this paper attempts to provide an inclusive conception of measurement that discards some of this erroneous philosophical baggage. Specifically, measurement is conceived as the empirical attainment of information regarding the structural relations of a property instantiated in different objects/agents/events, which is represented on an abstract parameter space. The structural focus of this information-theoretic conception of measurement is not limited to quantitative structure, but also includes qualitative structures concerned with differences in kind, order, and combinations thereof, such as for partial-order structures.

------------------

Trisha Nowland (Macquarie University)
Rough Set Theory for psychometric research: A modest proposal

The aim of this presentation is to illustrate insights afforded by examining what is otherwise merely cast as statistical measurement error for a given dataset, drawing on a methodology informed broadly by axiomatic set theory, following Suppes (1972, 1999, 2002). An example is included that demonstrates the indiscernibility relations of rough set theory (Pawlak, 1982, 1998) in data analysis. The consequences of the assumptions necessary for statistical modelling are contrasted with the outcomes of a rough set theory approach, which makes no assumptions regarding, for example, the linearity of relations between attributes. A further advantage of data mining within what is otherwise classed as meaningless variation in between-subjects data is also shown. Such information may be particularly useful in bringing to light problems associated with the adaptive testing paradigm and with the employment of tests as psychometric measures of individual performances.

------------------

Alex Scharaschkin (University of Oxford)
Measurement without quantification? The case of educational assessment

In educational assessment students are given tasks and summary scores of their performances are often said to measure some feature of interest, such as ‘attainment’ or ‘proficiency’. Usually, this summary information is collected about different attributes of each performance in the form of sub-scores, which can be regarded as partial valuations of the performances. These partial valuations encode assessors’ judgements of quality against criteria that define what ‘better’ or ‘worse’ performance consists in. This paper aims to avoid the psychometricians’ fallacy, i.e., assuming these ordinal judgements are necessarily quantitative, by applying ‘qualitative mathematics’ to them. It will explore the idea that assessors’ judgements can be modelled as having the structure of fuzzy truth-degrees, and that an appropriate method of combining partial valuations of performances is not via arithmetical operations, but via the logical operations on a lattice of ‘value-concepts’. Applications to educational assessments in the United Kingdom will be discussed.
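As a minimal illustration of the kind of lattice-based combination gestured at here (a standard construction offered as background, not the author’s specific proposal): fuzzy truth-degrees valued in [0, 1] form a lattice under

\[ a \wedge b = \min(a, b), \qquad a \vee b = \max(a, b), \]

so that partial valuations \(v_1, \dots, v_n\) of a performance can be combined by the logical meet \(v_1 \wedge \dots \wedge v_n\) (or join), rather than by adding or averaging sub-scores arithmetically.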

 


 

Panel 15
Reconsidering the Representational theory of measurement

Chair: Michael Heidelberger (University of Tübingen)
Room: Malevitch 483A

------------------

Matthias Neuber (University of Tübingen)
Helmholtz, Kaila, and the Representational Theory of Measurement

The problem of measurement can be reformulated as the problem of measurability: What are the conditions under which measurement becomes possible at all? And what is the ontological status of concrete measurement outcomes? It will be shown in the course of this paper that what Joel Michell deemed the 'representational theory' of measurement provides an adequate framework for answering these questions. However, contrary to Michell, I will point out that Hermann von Helmholtz can be seen as an important forerunner of the representational view. Furthermore, it will be argued that the Finnish logical empiricist Eino Kaila carried on Helmholtz's approach by combining it with a certain form of ontological invariantism. On the whole, a 'structural realist' account of measurement will be suggested.
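For orientation, the textbook core of the representational view discussed in this panel (a standard formulation for extensive attributes, not specific to this talk) is that measurement is a structure-preserving mapping: a function φ from an empirical relational structure ⟨A, ≿, ∘⟩ into the numerical structure ⟨ℝ, ≥, +⟩ such that

\[ a \succsim b \iff \varphi(a) \ge \varphi(b), \qquad \varphi(a \circ b) = \varphi(a) + \varphi(b), \]

with φ unique up to multiplication by a positive constant (a ratio scale).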

------------------

Jean Baccelli (Munich Center for Mathematical Philosophy)
Beyond the Metrological Viewpoint

In this paper, I discuss what I call the “metrological critique” of representational measurement theory (RTM). The critique is that RTM offers poor descriptions of the measurement procedures actually followed in science and that, thereby, it provides the philosophy of science with poor insights on measurement. I claim that it is not RTM’s goal to offer such descriptions. To support this claim, I examine various cases where RTM can be said to investigate measurement without specifying any measurement procedure. I argue that such limit cases reveal what RTM’s goal truly is, and that they discredit, rather than trivially vindicate, the metrological critique.

------------------

Pierre Uzan (SPHERE laboratory)
From measurement-representation to measurement as a semantic act

As shown by many examples from a wide range of fields, from physics to biology, psychology and economics, the distribution of numerical values that can be assigned to the supposedly “objective” properties of objects obviously depends on the sequence of operations implemented to carry out the measurement. Its outcome is thus relative to an experimental context and, even more, to the whole scientific and socio-cultural paradigm in which this measurement is realized and interpreted. A measurement cannot therefore be conceived as a mere assignment of numerical values to predefined properties, but as a contextual process. This operational and contextual approach to measurement can be complemented by semantic considerations on the activity of the “Measurer” (or the Observer), whose role can be regarded as part of the “human activities of seeking and finding” (Hintikka 1983) that make scientific knowledge possible. Taking this into account finally leads us to conceive of measurement as a process of elaboration of meaning.

 


 

Panel 16
Constructing measurement: quantifications, institutions, and numerical notations

Chair: Nadine de Courtenay (University Paris Diderot)
Room: Mondrian 646A

------------------

Daniel Jon Mitchell (Rheinisch-Westfälische Technische Hochschule Aachen)
The Second quantification of physics

Historians traditionally trace the quantification of physics to the period 1780–1830, during which time many quantitative physical concepts, such as latent heat or electrical capacity, were invented and operationalized. I identify and characterize a subsequent “Second Quantification” during the nineteenth century that distinguishes itself from the traditional one along three key lines:
(1) Substantive conceptual, physico-mathematical and operational issues posed by the historical definition and realization of “absolute” units, or the integration of measuring scales.
(2) Development of novel algebraic practices for representing physical quantities and the results of measurement in experimental physics.
(3) Pursuit of a widespread program of theoretical and experimental implementation by William Thomson and his mathematically-minded allies across epistemic discontinuities between different communities of physical practice, e.g. telegraphic engineers.
The relevance of this novel historiographic proposal to the philosophy of measurement is readily discernible through the enduring legacy of the Second Quantification in modern metrology.

------------------

Frans Van Lunteren (Vrije Universiteit)
The International Bureau of Weights and Measures and the politics of science

On May 20, 1875 diplomats of 17 nations convened in Paris to sign an international treaty, known as the Metre Convention. Its aim and outcome were the replacement of the existing French prototypes by new international standards and the establishment of an International Bureau of Weights and Measures. Three years earlier an international commission had installed a Permanent Committee, meant to supervise the construction of the new prototypes and to prepare the way for the convention. The negotiations leading up to the convention clearly reflected the European transfer of power, both politically and scientifically, from France to Germany. The main bone of contention in these negotiations was the international bureau. The correspondence of the Dutch secretary of the Permanent Committee, Johannes Bosscha, brings out much of the political scheming behind the scenes. Remarkably, Bosscha himself did everything in his power to thwart its foundation.

------------------

Qiu Gaoxing (China Jiliang University)
Imperial notation and Bodhisattva notation - Illustrated by the example of Avatamsaka Sutra

The way of expressing numbers in the Avatamsaka Sutra is known as Bodhisattva Notation. It is noteworthy that in this notation only a fraction of the numbers are expressed in meaningful Chinese words, while the majority are directly transliterated from Sanskrit. There are several types of this numerical notation. The first uses Chinese adjectives; from the perspective of traditional Chinese expression, these words would in no way be regarded as numerical concepts. The second uses verbs: normally these verbs denote an action or a state, yet here they indicate numbers, a rare usage in traditional Chinese. The third uses negative words, which stand for numbers that are too large to be counted, in other words, beyond the mathematical capability of human beings.
