
Scientific Graphs and the Hierarchy of the Sciences: A Latourian Survey of Inscription Practices
Author(s): Laurence D. Smith, Lisa A. Best, D. Alan Stubbs, John Johnston, Andrea Bastiani Archibald
Source: Social Studies of Science, Vol. 30, No. 1 (Feb., 2000), pp. 73-94
Published by: Sage Publications, Ltd.
Stable URL: http://www.jstor.org/stable/285770
Accessed: 14/05/2009 18:05



ABSTRACT: Studies comparing the cognitive status of the sciences have long sought to identify the distinguishing features of 'hard' and 'soft' science. Attempts by philosophers of science to ground such distinctions in abstract principles, and by sociologists of science to detect relevant differences (for example, in consensus levels), have met with limited success. However, recent investigations of scientists' concrete practices of data representation provide new leads on this problem. In particular, Bruno Latour has argued that graphs are essential to science due to their ability to render phenomena into compact, transportable and persuasive form. Applying Latour's notion of 'graphism' to the hierarchy of sciences, we found that the use of graphs across seven scientific disciplines correlated almost perfectly with their hardness, and that the same pattern held up across ten specialty fields in psychology.

Keywords: data representation, fractional graph area, graphism, hard science, Latour, soft science

Scientific Graphs and the Hierarchy of the Sciences: A Latourian Survey of Inscription Practices
Laurence D. Smith, Lisa A. Best, D. Alan Stubbs, John Johnston and Andrea Bastiani Archibald
Among the most familiar and widespread beliefs about science is that a distinction can be drawn between the 'hard' sciences and the 'soft' sciences. Dating back at least to the writings of Auguste Comte, it has been thought that the sciences can be arrayed in a hierarchy, with well-developed natural sciences (such as physics) at the pinnacle, the social sciences at the bottom, and the biological sciences occupying an intermediate position.1 A recent demonstration of the continuing influence of this widespread belief is provided by Janice Beyer Lodahl and Gerald Gordon, who asked scientists to rank different scientific disciplines according to their level of development. The results neatly mirrored the traditional Comtean hierarchy, with rankings ranging from physics at the top to social sciences such as sociology and political science at the bottom.2 Given the broad acceptance of the Comtean ordering, it is perhaps surprising that scholars who study science have failed to reach agreement on exactly what characteristics of scientific fields are responsible for their relative standing along this dimension of 'hard' and 'soft' science. Among
Social Studies of Science 30/1 (February 2000) 73-94 © SSS and SAGE Publications (London, Thousand Oaks CA, New Delhi) [0306-3127(200002)30:1;73-94;012783]


philosophers of science, earlier faith in the logical principles of verifiability or falsifiability as the relevant distinguishing features has been undermined by a growing realization that the perplexities of the Duhem-Quine problem call into question the empirical testability of even hard-science theories. As a result, some philosophers of science, like Stephen Toulmin, have recast the hard-soft distinction as a matter of relative degrees of disciplinary 'compactness',3 while others, like Larry Laudan, have questioned the utility of any distinction between the hard (or mature) sciences and their soft (or immature) counterparts. Arguing that sciences never undergo a permanent transition to paradigmatic status in a Kuhnian sense, Laudan writes that 'it is extremely unclear whether the notion of "mature" science finds any exemplification whatsoever in the history of science'.4 Similar sceptical conclusions have been drawn by other scholars, such as Lloyd Houser, who declares that 'the "hard science-soft science" notion has been revealed as a myth'.5 In scientometrics and the sociology of science, efforts to explicate the status of disciplines across the Comtean hierarchy have focused on the search for measurable correlates of hardness. Some of these studies have looked for differential rates of progress, as reflected for example in Derek Price's 'Immediacy Index' of citations, which appeared to show a more rapid rate of obsolescence for papers in the hard sciences than in the soft.6 Others have focused on measures of consensus, which were expected to be higher in the harder or more 'codified' sciences.7 Despite some successes with these approaches, the results have generally been disappointing. 
Price's Index, which initially appeared to be highly correlated with disciplinary hardness, turned out upon re-analysis to be largely an artefact of different rates of growth in scientific literatures.8 Similarly, it turns out that some of the once-promising measures of consensus, such as journal rejection rates, can be attributed in large part to the differential availability of journal page-space,9 while others, such as inter-judge reliability in journal refereeing and grant reviewing, simply failed to exhibit the predicted effects. For example, Stephen Cole reported results from seven separate investigations designed to ascertain variables that would, across disciplines, correlate reliably with standard notions of disciplinary hardness. None turned out to do so, leading Cole to conclude that 'there are no systematic differences between sciences at the top and at the bottom of the hierarchy in either cognitive consensus or the rate at which new ideas are incorporated'.10 Although such conclusions remain controversial,11 Susan Cozzens has noted that such traditional approaches to the problem of differentiating the sciences 'have now largely been abandoned, for a variety of reasons', and that quantitative measures of the sort used for standard science indicators are unlikely to provide breakthroughs. She suggests instead that the search for differences can be conducted more promisingly at the level of scientific specialties, rather than whole disciplines, and by focusing on the concrete social interactions and practices that scientists engage in while formulating and reformulating knowledge claims.12


Cozzens' suggestion is, of course, in line with the turn in science studies away from the earlier global focus on theories and toward investigations of the situated practices whereby scientists construct, negotiate and communicate scientific facts. These practices include the use of laboratory instruments, shared techniques for transforming and analyzing data, the development of specialized vocabularies and, broadly speaking, the adoption of various technologies for representing scientific findings. Such representational techniques - referred to by Bruno Latour and Steve Woolgar as 'inscription devices'13 - have been viewed by historians and rhetoricians of science as crucial discursive practices for enrolling allies to one's own form of science, and for persuading other scientists of the value of one's research.14 In particular, Latour and Woolgar have characterized modern scientific laboratories as organized sites for persuasion by means of inscription devices. One important form of inscription device that has begun to receive attention in science studies, including the history and rhetoric of science, is the scientific graph. In his landmark essay 'Drawing Things Together', Latour laid out the features of graphs that make them an especially powerful and persuasive form of inscription. First, they are able to transcend scales of time and place, rendering invisible phenomena (such as quarks, ion pumps, gross national products) into easily apprehended icons. Second, they are manipulable, and can be superimposed and compared in ways that lead to seeing new connections between seemingly unrelated phenomena, discerning similarities between events vastly separated in time and place, and assessing the congruence of empirical and theoretical curves. As such, they encourage the sort of abstraction from detail to generalities that is characteristic of theoretical science.
Third, graphs are 'mobile' or transportable: they can be carried from laboratory to laboratory, or from laboratories to scientific conferences, or from research sites to sites of application. Fourth, they are 'immutable', both in the sense of fixing the flux of phenomena - and thus stabilizing what may be only ephemeral in nature or the laboratory - and in the sense of remaining stable as they are transported across contexts. Fifth, as 'immutable mobiles', graphs can be enlisted in the task of persuading scientists in competing camps of the validity of one's evidence. As Latour puts it, a well-constructed graph raises the cost of dissenting from one's own favoured viewpoint, forcing scientific adversaries to muster their own evidence in the form of even better graphs. To the extent that scientists are able to mobilize consensus on data and evidence, it is through competition and negotiation over graphical representations (hence Latour's motto that 'inscriptions allow conscription'). The centrality and pervasiveness of graphs in science led Latour to conclude that scientists exhibit a 'graphical obsession', and to suggest that, in fact, the use of graphs is what distinguishes science from nonscience.15 Others who analyze the representational practices of scientists share Latour's conviction that graphical displays of data play a central rather than peripheral role in the process of constructing and communicating scientific knowledge.16


On the face of it, the striking claim that graph use distinguishes science from nonscience may appear hyperbolic, but Latour argues that large effects - such as the powerful role of science in Western culture - can in fact arise from small-scale, local practices of knowledge production that are consistently applied. In his view, earlier efforts to demarcate science from nonscience, especially those arising in the philosophy of science, were misguided in their search for large causes for large effects; in looking for demarcation in all the wrong places (such as the 'logic' of science), philosophers neglected the evidence before their eyes that could be found through the ethnography of everyday scientific practice. In Latour's own ethnographic studies, he observed that when disagreements arose over the nature of phenomena and their interpretation, scientists invariably reverted to the use of graphical displays, even if only scribbled on a cocktail napkin, in order to negotiate laboratory facts. So strong was their dependence on graphs that they often found themselves dumbstruck when deprived of access to graphical materials to help present their case.17 If Latour is right in his claim that graphs are essential to science - that 'graphism' is the distinguishing feature of science - his thesis has researchable implications for the question of how to understand the differences between supposedly well-developed, high-consensus fields (such as physics) and what are thought to be less-developed, low-consensus fields (such as sociology). In general, we would expect the 'harder' sciences to exhibit a higher rate of graph use than the 'softer' sciences, and that such differences might also appear at the level of specialty fields within disciplines. Although novel, this notion may actually cohere with certain longstanding beliefs about the hierarchy of the sciences. 
For example, if the hard sciences make more use of graphs than the soft sciences, then the intuitions of many that hard sciences enjoy higher degrees of consensus, and work with phenomena that are more stable and clearly defined, might have a Latourian explanation: the unique capacity of graphical displays to render phenomena into transportable yet immutable representations tends to forge consensus of scientific belief. As Latour himself suggests, 'to go from "empirical" to "theoretical" sciences is to go from slower to faster mobiles, from more mutable to less mutable inscriptions'.18 In this paper, we present evidence bearing on this issue of how 'graphism' relates to the hierarchy of the sciences. We do so first by presenting a re-analysis of existing data on graph use across seven scientific disciplines; then we extend the analysis to an original archival study of graph use across journals in ten subfields of psychology.

Graph Use Across the Sciences

In an extensive study of scientists' use of graphs in various disciplines, William Cleveland surveyed articles in scientific journals from the years 1980-81.19 For each discipline, four journals were surveyed (or five in the case of economics and physics), with 50 articles randomly drawn from each journal.20 As a measure of graph use, he recorded 'fractional graph


area' (FGA), which represents the proportion of the total page area in articles devoted to graphical data displays. Figure captions were excluded in computing the area of graphs, so that FGA represented, in effect, the amount of text that was displaced by graphs. This page-space measure reflects the common understanding that journal space is a crucial limited resource in the sciences, and hence that whatever occupies it is a valuable commodity. Cleveland defined 'graphs' as figures that have scales and convey quantitative information. Included under this definition are graphs such as scatter plots, line graphs, time series graphs, dot plots and histograms, as well as bar charts (which have one scale at a nominal or ordinal level of measurement and a second scale at an interval or ratio level); maps were counted as graphs only if they conveyed statistical information other than geographical location (for example, colour coding of regions by population density). Excluded under Cleveland's definition are figures such as apparatus illustrations, theoretical diagrams and flow charts.21 The results, reported in Figure 3 of Cleveland's paper, revealed that natural science journals tended to have much higher FGAs than social science journals. In fact, the mean FGA for the hard disciplines - physics, chemistry, medicine and biology - was 0.14, whereas the mean FGA for the soft disciplines - psychology, economics and sociology - was 0.03.22 As Cleveland noted: 'Clearly graph usage is much greater among the natural science journals than among the social science . . . journals'.23 Further analysis indicated that these differences were not due to differences in the sizes of graphs, but rather to differences in their number. Moreover, these observed differences in graph use did not merely reflect the presence or absence of data in the articles surveyed, but rather the means by which the data were presented. 
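Cleveland's page-space measure lends itself to a simple computation: sum the measured areas of an article's graphs (captions excluded) and divide by the article's total page area. The following is a minimal sketch; the function name, the use of square centimetres, and the example numbers are illustrative assumptions, not values from the study.

```python
# Sketch of a Cleveland-style fractional graph area (FGA): the proportion of
# total article page area devoted to graphical data displays, with figure
# captions excluded from the graph areas. Units and numbers are illustrative.

def fractional_graph_area(graph_areas_cm2, total_page_area_cm2):
    """FGA = (summed area of graphs, captions excluded) / total page area."""
    if total_page_area_cm2 <= 0:
        raise ValueError("total page area must be positive")
    return sum(graph_areas_cm2) / total_page_area_cm2

# A hypothetical 10-page article (~600 cm^2 per page) with three graphs:
fga = fractional_graph_area([180.0, 120.0, 150.0], 10 * 600.0)
print(round(fga, 3))  # 450 / 6000 = 0.075
```

Excluding captions matters because, as the text notes, FGA is meant to represent the amount of text displaced by graphs, not by their accompanying prose.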
Cleveland observed that 'many of the social science journals have much data yet make very little use of graphs', an observation in line with the prevalence of tables as the primary means of data presentation in most social science journals.24 To gain a more detailed view of the possible relation between graph use and disciplinary hardness, we asked a group of 36 respondents (psychologists and psychology doctoral candidates at the University of Maine) to rate the hardness of seven disciplines for which Cleveland had collected FGA measures. The instructions on the rating sheets began as follows:

It is commonly believed in our culture that a distinction can be drawn between the 'hard' sciences and the 'soft' sciences. Although these categories are not always clear-cut, most people have some sense of what the hard-soft distinction means. In the survey you are being asked to fill out, we are interested in your impressions of which areas of science can be considered relatively hard and which can be considered relatively soft.

The respondents were then asked to rate each of the disciplines on a 10-point Likert scale, with 1 representing the soft end and 10 the hard end. The resulting mean ratings for the disciplines were as follows: physics (9.35), chemistry (8.85), biology (7.95), medicine (7.15), psychology


(6.15), economics (5.10) and sociology (3.39). These ratings conform very closely to the rankings provided by Lodahl and Gordon's respondents, confirming that belief in the Comtean hierarchy of sciences is widespread and consensual. As a further check on the validity of our hardness ratings, they were also compared with two other measures of the hardness of disciplines: (a) Hanna Ashar and Jonathan Shapiro's index of 'paradigm development',25 which is a ranking of hardness based on three field-specific measures (length of dissertation abstracts, length of dissertations, and the longest course-chain of prerequisites for upper-level undergraduate courses in the field); and (b) Anthony Biglan's hard-soft dichotomy,26 a binary variable derived from multidimensional scaling of similarities between fields.27 For the six fields for which there were data on all four measures (medicine was not included in the three comparison measures), the correlations with our respondents' hardness ratings were: 0.94 for Lodahl and Gordon's rankings (Spearman rho); 0.94 for Ashar and Shapiro's paradigm development measure (Spearman rho); and 0.91 for Biglan's hard-soft dichotomy (point biserial correlation).28 In Figure 1, Cleveland's measures of graph use (mean FGAs) for the seven disciplines are plotted against the hardness ratings. As can be seen, the FGAs ranged from a low of 0.01 for sociology to a high of 0.18 for chemistry, and the correlation between hardness and graph use was nearly perfect (Pearson r = 0.97, p < .01). The sole deviation from monotonicity in the relationship was physics, which had a slightly lower FGA (0.17) than chemistry (0.18) despite having a slightly higher hardness rating. These findings, although preliminary, clearly support Latour's thesis that graph
[FIGURE 1. Graph Use as a Function of the Rated Hardness of Seven Scientific Disciplines. FGA (y-axis, 0 to 0.2) plotted against rated hardness (x-axis, 0 to 10), with labelled points for Sociology, Economics, Psychology, Medicine, Biology, Physics and Chemistry, and a fitted line with r = .97. Source: Data from Cleveland, op. cit. note 19.]


use is the hallmark of science. Not only is the relationship between graph use and disciplinary hardness a strong one, but the magnitude of the differences is substantial, approaching a 20:1 ratio in FGAs in the case of sociology and chemistry. On the whole, these results would seem to warrant further investigation, and suggest that the current disillusionment with earlier efforts to find quantitative correlates of hardness at the level of disciplines may be premature.29

Graph Use Across Subfields of Psychology

In view of the promise shown by graph use as a correlate of disciplinary hardness, we subjected Latour's thesis of graphism to a further and more stringent test. It is well known, at least among the scientists involved, that the hierarchy of sciences is mirrored within disciplines, in that the various subdisciplines making up a field of study are commonly regarded as exhibiting differing levels of hardness. In physics, for example, particle physics is usually viewed as being more prestigious and better-codified - that is, harder - than solid-state physics. In the biological sciences, molecular biology and cell biology are viewed as having higher status in the subdisciplinary hierarchy than systematics or ecology.30 Similarly, psychologists routinely regard such fields as physiological psychology or experimental cognitive psychology as harder than (say) social psychology or educational psychology. If graph use is in fact a representational practice intimately related to the hardness of scientific fields, we would also expect it to correlate highly with measures of subdisciplinary hardness - that is, at the level of both within- and between-discipline differences. As an initial attempt to investigate this possibility, we examined subfield differences in psychology. The same respondents who had rated the disciplines for hardness were also asked to rate the 25 journals published by the American Psychological Association in terms of the hardness of the subfield represented by each (again on a 10-point Likert scale). These ratings were compiled, and the 25 journals were ranked according to their mean hardness rating.
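Rank orderings like this one, and the Spearman rank correlations used earlier as validity checks (for example, rho = 0.94 against Lodahl and Gordon's rankings), can be sketched in a few lines. This pure-Python version assumes no tied values, so the classic rank-difference formula applies; the example numbers are illustrative.

```python
# Spearman's rho via the rank-difference formula,
#   rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)),
# which is valid when there are no ties in either variable.

def ranks(values):
    """Rank 1 = largest value; assumes no tied values."""
    ordered = sorted(values, reverse=True)
    return [ordered.index(v) + 1 for v in values]

def spearman_rho(xs, ys):
    n = len(xs)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(ranks(xs), ranks(ys)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Two measures that order four fields identically correlate perfectly:
print(spearman_rho([9.35, 8.85, 7.95, 5.10], [4, 3, 2, 1]))  # 1.0
```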
Two journals were then chosen from each quintile of hardness ratings, and the ten selected journals were surveyed for their use of graphs.31 For each of the ten (with one exception), fractional graph area was measured for 16 randomly sampled articles from the period 1980-95, with four drawn from each of the years 1980, 1985, 1990 and 1995.32 For the resulting 156 articles, FGA was measured following exactly the procedure used by Cleveland. Table 1 presents the numerical ratings of hardness for the journals. As expected, journals representing the biological and experimental areas of psychology (for example, Behavioral Neuroscience, Journal of Experimental Psychology) were consistently rated as harder than those journals that focus on social and educational phenomena (such as Journal of Counseling Psychology, Journal of Educational Psychology). This finding parallels, at the level of subdisciplines, both Comte's conception and Lodahl and Gordon's


TABLE 1
Hardness Ratings for Ten Psychology Journals

Journal                                                         Rated Hardness
Behavioral Neuroscience                                                   8.77
Journal of Experimental Psychology: Animal Behavior Processes             7.69
Journal of Experimental Psychology: General                               6.91
Developmental Psychology                                                  6.06
Journal of Comparative Psychology                                         5.97
Journal of Abnormal Psychology                                            5.53
Journal of Personality and Social Psychology                              5.18
Journal of Consulting and Clinical Psychology                             4.93
Journal of Educational Psychology                                         3.67
Journal of Counseling Psychology                                          3.46

findings. Thus it seems that the notion of a hierarchy reproduces itself within scientific fields, at least in the case of psychology. Interestingly, the mean rating for the 10 journals represented in Table 1 (M = 5.82) fell close to the mean rating for the discipline of psychology (M = 6.42) as derived from the earlier discipline-rating task. This suggests not only that the selected journals are typical of the discipline (at least in terms of perceived hardness), but also that the raters were using the rating scale consistently, regardless of whether journals or disciplines were being rated.33 The results of the graph-use survey showed that the mean FGA for the 156 articles in the 10 psychology journals was 0.046; this means that, overall, about one-twentieth of the page space in these articles was devoted to graphical displays. This overall mean FGA accords well with Cleveland's finding of a mean FGA of 0.053 for the four psychology journals he surveyed (only one of which, the Journal of Experimental Psychology, was in our sample). The results also showed, again as expected, that the harder areas within psychology (such as physiological psychology) exhibited higher graph use than the softer areas (such as social psychology). The highest FGA among the 10 journals was in Behavioral Neuroscience (M = 0.118), a value that approaches the mean for biology (M = 0.128) in Cleveland's data. The lowest FGA was in the Journal of Counseling Psychology (M = 0.007), which falls near Cleveland's mean of 0.01 for sociology. The use of graphs as inscription devices in psychology thus appears to span a large range, just as the field itself encompasses subfields ranging from those closely allied with biology to those nearer the social sciences. As to the crucial issue of whether the relationship between graph use and hardness holds within a single discipline, Figure 2 shows that it is again nearly linear: the correlation between rated hardness and FGA is r = 0.93, p < .01.
These results within psychology closely mirror Cleveland's findings for graph use in science at large: our results for psychology indicate that the harder subfields tend to devote more space to graphs than do the softer areas.34
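Correlations of this kind can be computed directly from the published figures. The sketch below uses the discipline-level hardness ratings quoted earlier; the FGAs for physics, chemistry, biology, psychology and sociology are the values quoted in the text, while the FGAs for medicine and economics are assumed, illustrative values (the text reports only group means of 0.14 and 0.03 for the hard and soft clusters).

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Physics, chemistry, biology, medicine, psychology, economics, sociology.
hardness = [9.35, 8.85, 7.95, 7.15, 6.15, 5.10, 3.39]    # mean ratings (text)
fga      = [0.17, 0.18, 0.128, 0.12, 0.053, 0.03, 0.01]  # 0.12, 0.03 assumed

print(round(pearson_r(hardness, fga), 2))  # close to the reported r = .97
```

Even with the two assumed values, the correlation comes out near the nearly perfect value reported for the seven disciplines, illustrating how robust the upward sweep of FGA with hardness is.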

[FIGURE 2. Graph Use as a Function of the Rated Hardness of Ten Journals in Psychology. FGA (y-axis, 0 to 0.14) plotted against rated hardness (x-axis, 2 to 10), with labelled points for JCP, JEdP, JCCP, JPSP, JAP, DP, COMP, JEP:G, JEP:ALB and BNS, and a fitted line with r = .93. See Table 1 for full titles of journals.]

Psychology in the Hierarchy: Graphism as an Integrating Concept

As noted above, the diverse discipline of psychology appears to mirror the hierarchy of the sciences as a whole, even as it holds a place within that hierarchy. For this reason, it may be instructive to examine its place in the hierarchy in more detail. One way to situate it would be to interpolate our findings on graph use in psychology into Cleveland's findings for the disciplines. This is done graphically in Figure 3, where the disciplines studied by Cleveland are arrayed horizontally according to the rank order of their hardness, and the ten psychology journals, also in rank order of hardness, are placed in the position of psychology as a whole. The unified visual impression produced by the resulting upward sweep of data points is suggestive, for it vividly conveys the manner in which psychology bridges the span between the soft and hard disciplines, overlapping in its graph use the sciences of sociology and economics at one end, and the biomedical sciences at the other. This display lends credence to the notion that graphs, although not fully universal to science, at least provide a potentially universal index of the 'scientificity' of fields and subfields across science's hierarchy. By depicting the hierarchy of psychology within this larger context, the display also suggests one possible resolution of the issue of whether aspects of the cognitive structure of science that apply at the discipline level also apply at the specialty level; and it implies that the abandonment of one level in favour of the other may be premature. Because Figure 3 shows graph use as a function only of ranked disciplines and journals, with psychology journals interpolated in the array


[FIGURE 3. Graph Use in Six Disciplines and Ten Psychology Journals, as a Function of Ranked Hardness. FGA (y-axis, 0 to 0.2) plotted against discipline or journal, arrayed in rank order of hardness. The psychology journals are placed in the position occupied by psychology in Figure 1.]

of sciences, it involves discontinuities in the actual hardness ratings. For example, the Journal of Counseling Psychology appears to the right of economics, even though its hardness rating actually fell below that of economics. Thus, a metrically preferable way to display the integrated findings of the two graph-use surveys would be to present the FGAs as a function of the actual numerical ratings: this is done in Figure 4. Like Figure 3, this graph conveys the close relationship between rated hardness and graph use, regardless of an item's status as a discipline or a journal representing a subdiscipline. The correlation between rated hardness and FGA for the six disciplines and ten journals, taken together, is 0.94, p < .01.35

Discussion

On the face of it, our findings support Latour's view that graphs - with all their virtues as immutable and mobile inscription devices - are crucial to the scientific enterprise. The use of graphs, as measured by the proportion of journal page space devoted to them, appears to be a sensitive index of the hardness of scientific fields, whether at the level of entire disciplines or at the level of specialty subfields. Considering that page space is a


[FIGURE 4. Graph Use in Seven Disciplines and Ten Psychology Journals, as a Function of Rated Hardness. FGA (y-axis, 0 to 0.2) plotted against rated hardness (x-axis, 0 to 10), with open and filled symbols distinguishing psychology journals from scientific disciplines; r = .94.]
precious commodity, FGA has a good deal of face validity as an indicator of the importance of graphs, and would seem to deserve a place among the indicators used in future studies of the cognitive status of scientific fields. One way to put the present results into perspective is to compare them with the results of previous work on correlates of disciplinary hardness. As noted earlier, attempts by sociologists of science to isolate quantitative indices of hardness have met with little success. In reviewing the literature, we found more than 20 variables that have been proposed as indicators of hardness and subjected to empirical investigation.36 Fully one third of these variables (7 of 21) failed even to correlate in the predicted direction with our measure of rated hardness. Of the remaining 14 measures that were at least in the expected direction, only two were correlated strongly enough with hardness ratings to reach statistical significance. These were journal rejection rate,37 and citation concentration by author.38 Of these two, only Cole's measure of citation concentration showed a correlation as strong as that reported here for FGA. The use of FGA as an index of hardness thus compares favourably with the performance of other proposed indices, underscoring our conclusion that graph use is a viable and promising indicator of the hierarchical status of scientific disciplines. For all of the suggestiveness of the present findings, however, they are only preliminary, and are based on limited samples. Although Cleveland's data were drawn from 200 articles in each discipline (250 in the case of economics and physics), the samples came from only four or five journals in any one discipline, and it remains uncertain how representative those


journals are. Similarly, our data for psychology were limited to relatively small samples of articles in each of ten journals; and although the journals are likely representative of those published by the American Psychological Association, APA journals may differ systematically from those found in the discipline as a whole. Still, the strength of the observed relationships would warrant further use of the FGA measure, particularly given its potential to provide an integrated understanding of representational practices across the spectrum of disciplines and subfields (as seen in Figures 3 and 4). Further studies would help assess the generality of our results, especially studies of diverse disciplines, such as biology, that comprise a wide range of specialties varying in their degree of scientificity.39

Other questions of generality arise from the fact that our sample, like Cleveland's, was taken from a limited time period. Obviously, a historical dimension to graph use remains to be explored. As a number of commentators have noted, the boundaries between hard and soft science are fluid and historically conditioned,40 and there is little reason to presuppose that graph use in the various sciences has remained stable over time. In one of the few studies relevant to this issue, Charles Bazerman found that articles on spectroscopy in the Physical Review made increasing use of graphs over the years 1893-1980, a period during which spectroscopy was being increasingly codified under the theoretical rubric of quantum mechanics.41 The study of scientific practices that accompany the codification of fields from soft into hard - including inscription practices such as the use of graphs - would appear to offer a rich field for historical analysis.42

A more difficult issue is the interpretive problem of whether FGA could be an artefact or epiphenomenon of some other general characteristic that distinguishes the hard and soft sciences.
In other words, the relationship between graph use and hardness may not be specific to graph use, but rather reflect another underlying variable. One obvious candidate for such an alternative interpretation would be the degree to which scientific fields work with quantitative data. Some such interpretation of our data is prima facie plausible, but there is reason to suspect that it is faulty. As noted earlier, Cleveland concluded that the differences he observed in graph use could not be attributed to the absence of numerical data in soft-science journals, which he found to be plentiful. Informal observation of our sample of psychology journals indicated that the soft-psychology journals were heavily laden with quantitative data, but that the data were usually presented in the form of tables; differences in graph use did not appear to correlate with the amount of data presented. These impressions could be made more precise, of course, and the question of the specificity of the relation of graph use to hardness could presumably be resolved empirically by measuring table use in a manner analogous to Cleveland's FGA.43 Indeed, a profitable direction for future research on inscription practices would be to quantify the relative use of graphs and tables across the spectrum of sciences.
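The proposed comparison can be made concrete. The following sketch (in Python) shows how an FGA-style index and an analogous 'fractional table area' might be computed side by side; the journal names and area figures below are hypothetical illustrations, not Cleveland's protocol or our data.

```python
# Illustrative sketch: Cleveland's FGA is the fraction of an article's total
# page area occupied by graphs. An analogous "fractional table area" (FTA)
# could be computed the same way, allowing the two kinds of inscription to
# be compared directly. All names and measurements here are hypothetical.

def fractional_area(display_areas_cm2, total_page_area_cm2):
    """Fraction of total page area devoted to one kind of display."""
    return sum(display_areas_cm2) / total_page_area_cm2

# Hypothetical measurements for two imaginary journals (areas in cm^2):
journals = {
    "hard_journal": {"graphs": [48.0, 55.0, 62.0], "tables": [30.0], "pages": 4000.0},
    "soft_journal": {"graphs": [25.0], "tables": [80.0, 95.0, 70.0], "pages": 4000.0},
}

for name, m in journals.items():
    fga = fractional_area(m["graphs"], m["pages"])
    fta = fractional_area(m["tables"], m["pages"])
    print(f"{name}: FGA = {fga:.3f}, FTA = {fta:.3f}")
```

On this scheme, the specificity question reduces to whether hardness tracks the first index but not the second across a sample of journals.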


There are further reasons to suppose that variations in graph use are not simply a manifestation of disciplinary differences in reliance on quantitative data. Many authors have observed that the soft sciences typically offer masses of quantitative data, sometimes to the point of being swamped in them. Fritz Machlup, for example, points out that:
... if the availability of numerical data were in and of itself an advantage in scientific investigation, economics would be on the top of all sciences. Economics is the only field in which the raw data of experience are already in numerical form.44

As we have seen, economics nonetheless ranks low both in hardness and in graph use. More generally, in assessing the status of quantitative social science, George Bohrnstedt concluded that 'quantification alone is not sufficient for the development of a viable social science'.45 Ian Hacking has also argued that what the social sciences lack in hardness cannot be attributed to their inability to generate and manipulate numerical data:

Social scientists don't lack experiment; they don't lack calculation; they don't lack speculation; they lack the collaboration of the three.46

The common point here is that masses of data, however carefully collected and assembled, do not in themselves yield immutable facts or spontaneously evince relationships to theory.

If 'graphism' in science is not simply a reflection of a field's level of quantification, then what does account for the relationship between graph use and the hardness of fields? Although there is no univocal answer to this question at present, we concur with Latour (and others who study representational practices) in suggesting that graphs play the crucial rôles of stabilizing facts and relating data to theoretical formulations, performing the function that Bazerman calls 'theoretical integration'.47 As noted by Francoise Bastide, tables, the common mode of data display in the soft sciences, generally lack this capability: natural scientists prefer to avoid the use of tables because they are perceptually inefficient, rhetorically unpersuasive, and often 'perfectly undecipherable'.48 This conclusion is supported by Bazerman's finding that as the field of spectroscopy matured through the 20th century, the use of tables declined, and that the field's codification was accompanied by an increased use of graphs containing multiple panels and multiple curves, making possible direct comparisons of theoretical and empirical values in a way that was efficient and persuasive.49 As Michael Lynch argues, graphs are 'revelatory
objects' that 'simultaneously analyze what they reveal'.50

These comments suggest that graph use is a concomitant of theoretical development, and indeed the task of relating data to theoretical values appears to be one important function of graphs. Ronald Giere, for example, has noted that nuclear physicists routinely assess the fit between data and theoretically derived values by plotting the data against theoretical predictions and making informal judgments of the degree of fit. When


Giere queried a physicist about why statistical tests for goodness of fit are not used in physics, the reply was: 'More kinds of data can be assimilated by the eye and brain in the form of a graph than can be captured with χ²'.51 Such a remark clearly evinces the role (as discussed by Latour and Bazerman) that graphs play in relating observations to theoretical formulations.

Yet, even in non-theoretical scientific contexts, graphs can be used to discover, summarize and stabilize empirical relationships, serving the 'revelatory' function alluded to by Lynch. Examples of such non-theoretical uses of graphs are not hard to find in the history of science. Thomas Hankins has recounted how the Harvard physiologist L.J. Henderson used a form of graphs called 'nomograms' to represent the complex interactions of the components of blood, in the absence of any formal theory of those interactions.52 Similarly, Frederic Holmes and Kathryn Olesko have shown that Helmholtz's path-breaking measurement of the speed of the neural impulse depended critically on his use of graphical methods (as Helmholtz himself understood); again, this was achieved in the interest of demonstrating a single (although important) fact, without guidance from any particular theory of neural functioning.53

Between the stages of fact-construction and the higher-order relating of data to pre-existing theories, graphs may also play other important rôles, most notably in the process of formulating tentative theories.
According to Hankins, the physicist Willard Gibbs 'began his work in thermodynamics with the stated purpose of creating a better graph', and indeed Gibbs' first publication in thermodynamics was devoted to the use of graphical methods in that field; it was the construction of an adequate graphical representation that then allowed him 'to mathematize and transform the entire science of thermodynamics in a profound way'.54 At a more mundane level, Roger Krohn has described how biologists attempting to improve existing models of algal bloom in lakes proceeded with their task by inspecting time-series graphs, so as to identify new causal factors to be incorporated in revised theories.55 As Russ Hanson has argued, the step from familiarity with relevant data to the formulation of a hypothesis becomes less mysterious once one recognizes that the transition from data to theory is largely a matter of pattern recognition guided by tentative concepts.56 Among the tools available to scientists, graphs would seem to be especially, perhaps uniquely, well suited to the detection and recognition of such patterns.57 It thus appears that if graphs are crucial to science, their power is not achieved in any single way; rather, their importance would seem to stem from their use in at least three contexts - fact-construction, theory-testing, and the intermediate process of theory-formation.58

Although the attempt to attribute the power of graphs to any single rôle they play in science seems dubious, legitimate questions can be raised about the relative frequency with which graphs function in various ways to achieve their effects. The alleged rhetorical and integrative power of graphs can presumably be tested by empirical means, and the particular roles they


play may well differ systematically from one field to another. For example, if Latour is correct about the rhetorical force of graphical inscriptions, scientific papers that make extensive use of graphs should generally enjoy higher citation rates than those which do not. Moreover, papers that use graphs chiefly for theoretical integration, as indicated by their use of both theoretical and empirical curves, should be especially well-cited in journals (such as Physical Review and Psychological Review) that specialize in publishing theoretical articles, and in fields that enjoy relatively high levels of theoretical codification. On the other hand, papers using graphs so as to reveal and depict novel empirical relationships between variables would be expected to receive citations chiefly in journals devoted to reports of empirical research in fields that lack strong theoretical integration.59

The conclusion that graphs constitute the lifeblood of science should perhaps not be surprising, as those who have studied the concrete practices of scientists pursuing research and constructing knowledge-claims have long been aware of the power of graphical displays. David Gooding, for example, has documented how the mathematically untutored Michael Faraday was able to explore and solve difficult problems in electrical theory solely through graphical means.60 Likewise, Gray Funkhouser's history of scientific graphs makes clear that they have long played a critical role in scientific praxis and communication.61 As Michael Lynch has put it:

[V]isual displays are distinctively involved in scientific communication and in the very 'construction' of scientific facts.... Such representations constitute the physiognomy of the object of the research.62

As the technologies of 'virtual witnessing' have evolved along with the genre of scientific writing, methods of graphical display have come to encompass a wide range of techniques for enlisting allies through what Jan Golinski calls 'ocular proof'.63 That the use of such methods has not been evenly distributed across the spectrum of the sciences may well speak directly to the perennial question of why the harder sciences seem to experience higher degrees of facticity and theoretical integration than their softer counterparts.64

Notes
Portions of this paper were presented at the Maine Biological and Medical Sciences Symposium (Waterville, ME, May 1997) and at the annual meeting of the American Psychological Association (Boston, MA, August 1999). We thank Steven Cohn, Linda Silka, Bruno Latour and three anonymous referees for their critical reading of the manuscript and their helpful suggestions.

1. Auguste Comte, The Positivist Philosophy of Auguste Comte, Vol. 1, trans. Harriet Martineau (London: George Bell & Sons, 1896, first published 1830-42).
2. Janice Beyer Lodahl and Gerald Gordon, 'The Structure of Scientific Fields and the Functioning of University Graduate Departments', American Sociological Review, Vol. 37 (1972), 57-72.
3. Stephen Toulmin, Human Understanding: The Collective Use and Evolution of Concepts (Princeton, NJ: Princeton University Press, 1972), 378-95.

4. Laurens Laudan, Progress and Its Problems: Towards a Theory of Scientific Growth (Berkeley: University of California Press, 1977), 151.
5. Lloyd Houser, 'The Classification of Science Literatures by Their "Hardness"', Library & Information Science Research, Vol. 8 (1986), 357-72, at 367. See also Karin D. Knorr-Cetina, 'Social and Scientific Method, or What Do We Make of the Distinction Between the Natural and the Social Sciences?', Philosophy of the Social Sciences, Vol. 11 (1981), 335-59.
6. Derek J. de Solla Price, 'Citation Measures of Hard Science, Soft Science, Technology, and Non Science', in Carnot E. Nelson and Donald K. Pollack (eds), Communication Among Scientists and Engineers (Lexington, MA: D.C. Heath, 1970), 3-22.
7. Harriet Zuckerman and Robert K. Merton, 'Age, Ageing, and Age Structure in Science', in Merton, ed. Norman W. Storer, The Sociology of Science (Chicago, IL: The University of Chicago Press, 1973), 497-559.
8. Stephen Cole, Jonathan R. Cole and Lorraine Dietrich, 'Measuring the Cognitive State of Scientific Disciplines', in Yehuda Elkana, Joshua Lederberg, Robert K. Merton, Arnold Thackray and Harriet Zuckerman (eds), Toward a Metric of Science (New York: John Wiley & Sons, 1978), 209-51.
9. Janice M. Beyer, 'Editorial Policies and Practices Among Leading Journals in Four Scientific Fields', Sociological Quarterly, Vol. 19 (1978), 68-88. See also the review of this issue in Stephen Cole, Making Science: Between Nature and Society (Cambridge, MA: Harvard University Press, 1992), 111-16.
10. Stephen Cole, 'The Hierarchy of the Sciences?', American Journal of Sociology, Vol. 89 (1983), 111-39, at 111.
11. For example, the attribution of differences in rejection rates to page-space differences has been disputed by Lowell L. Hargens, 'Cognitive Consensus and Journal Rejection Rates', American Sociological Review, Vol. 53 (1988), 139-51.
12. Susan E. Cozzens, 'Comparing the Sciences: Citation Context Analysis of Papers from Neuropharmacology and the Sociology of Science', Social Studies of Science, Vol. 15, No. 1 (February 1985), 127-53.
13. Bruno Latour and Steve Woolgar, Laboratory Life: The Construction of Scientific Facts (Princeton, NJ: Princeton University Press, 2nd edn, 1986), esp. Chapter 2.
14. See, for example, Alan G. Gross, The Rhetoric of Science (Cambridge, MA: Harvard University Press, 1990), 74-80; Caroline A. Jones and Peter Galison (eds), Picturing Science, Producing Art (New York: Routledge, 1998); and Timothy Lenoir (ed.), Inscribing Science: Scientific Texts and the Materiality of Communication (Stanford, CA: Stanford University Press, 1998), esp. Chapters 1 & 13.
15. Bruno Latour, 'Drawing Things Together', in Michael Lynch and Steve Woolgar (eds), Representation in Scientific Practice (Cambridge, MA: MIT Press, 1990), 19-68. On Latour's view that 'inscriptions allow conscription', see p. 50 (italics in original).
16. See, for example, Francoise Bastide, 'The Iconography of Scientific Texts: Principles of Analysis', in Lynch & Woolgar (eds), op. cit. note 15, 187-229; Stephen Jay Gould, 'Ladders and Cones: Constraining Evolution by Canonical Icons', in Robert B. Silvers (ed.), Hidden Histories of Science (New York: New York Review of Books, 1995), 37-67, at 39-42; Michael Lynch, 'Discipline and the Material Form of Images: An Analysis of Scientific Visibility', Social Studies of Science, Vol. 15, No. 1 (February 1985), 37-66; and B.H. Mahon, 'Statistics and Decisions: The Importance of Communication and the Power of Graphical Presentation', Journal of the Royal Statistical Society, Vol. 140 (1977), 298-323.
17. Latour, op. cit. note 15, 22.
18. Ibid., 47.
19. William S. Cleveland, 'Graphs in Scientific Publications', American Statistician, Vol. 38 (1984), 261-69.
20. Cleveland did not state how the journals for each discipline were selected, but inspection of his selections suggests that they were intended to represent a range of subfields in each discipline. For example, the physics journals include the Journal of

Physics, Journal of Geophysical Research, Physical Review (A), Physical Review Letters and Journal of Applied Physics.
21. In the survey of graph use in psychological journals (to be described below), we followed Cleveland's definition of graphs, and encountered no cases that were ambiguous as to their status as graphs or non-graphs. The vast majority of graphs in these journals fell into one of the categories just listed, and we encountered no maps, statistical or otherwise, in the sampled articles.
22. In addition to these seven traditional scientific disciplines, Cleveland's survey also included a category of general science journals (e.g. Science) and categories for mathematics, statistics, engineering, computer science, education and geography. These fields are not included in the present re-analysis of Cleveland's findings, since (with the possible exception of geography) their status as applied or formal disciplines severely complicates their placement in the hierarchy of the sciences. On the need for multiple dimensions to characterize the full range of technoscientific disciplines, see Anthony Biglan, 'The Characteristics of Subject Matter in Different Academic Areas', Journal of Applied Psychology, Vol. 57 (1973), 195-203.
23. Cleveland, op. cit. note 19, 265.
24. Ibid. On the prevalence of tables in social science journals, see the discussion below and in note 48.
25. Hanna Ashar and Jonathan Z. Shapiro, 'Are Retrenchment Decisions Rational? The Role of Information in Times of Budgetary Stress', Journal of Higher Education, Vol. 61 (1990), 123-41.
26. Anthony Biglan, 'Relationships Between Subject Matter Characteristics and the Structure and Output of University Departments', Journal of Applied Psychology, Vol. 57 (1973), 204-13.
27. For discussion of these (and other) measures, see Lowell L. Hargens and Lisa Kelly-Wilson, 'Determinants of Disciplinary Discontent', Social Forces, Vol. 72 (1994), 1177-95.
28. The close congruence of these measures may be reassuring to those who are understandably wary of possible differences between perceived hardness and objectively measured hardness: see, for example, Cozzens, op. cit. note 12, 130-31. However, the literature on subjective scaling of social phenomena, such as the seriousness of crimes, shows that such ratings are often closely related to objective measures: see Milton Lodge, Magnitude Scaling: Quantitative Measurement of Opinions (Newbury Park, CA: Sage Publications, 1981), Chapters 2 & 3.
29. Cole et al., op. cit. note 8; see discussion in Cozzens, op. cit. note 12.
30. Warren O. Hagstrom, The Scientific Community (New York: Basic Books, 1965), Chapter 4; and Charles C. Davis, 'Biology is Not a Totem Pole', Science, Vol. 141 (26 July 1963), 308-10.
31. As it turned out, the mean of the hardness ratings for these ten journals (M = 5.82) coincided exactly with the mean for the whole set of 25 journals (M = 5.82), suggesting that the strategy of selecting journals from quintiles of hardness produced a representative sample of journals in terms of their relative hardness.
32. Because the Journal of Comparative Psychology did not exist in 1980, only 12 articles were sampled from it. The journal Behavioral Neuroscience also did not exist in 1980, so the year 1983, its first year of publication, was used instead of 1980 (this approach could not be used for the Journal of Comparative Psychology because it was joined with a physiological journal before 1985). The original rationale for sampling articles across the four 5-year intervals was to allow an assessment of changes in graph use across time, but such changes proved to be minor (see note 42 below).
33. Previous research on perceptions of journals provided an opportunity to validate our respondents' ratings. In a 1967 study, Leon Jakobovits and Charles Osgood asked members of the American Psychological Association to rate 20 psychology journals on various semantic differential scales, and then subjected the results to a factor analysis. One of the resulting factors, called 'rigour', was composed of an average of the scales for 'scientific-unscientific' and 'rigorous-loose'. For the seven journals that were rated

both by our raters and by Jakobovits and Osgood's subjects, the correlation between the hardness ratings and rigour scores was r = 0.94, p < .01. This strong correlation both validates the present ratings and suggests that the hierarchy of hard and soft journals (and presumably of the research areas represented by them) has remained quite stable over the past three decades. See L.A. Jakobovits and C.E. Osgood, 'Connotations of Twenty Psychological Journals to Their Professional Readers', American Psychologist, Vol. 22 (1967), 792-800.
34. This conclusion does not depend on the particular measure of graph use employed here. In addition to FGA, we scored each article for the number of graphs it contained and for the presence or absence of any graphs. The correlation between graphs/article and hardness was r = 0.88, p < .01, and the point biserial correlation between hardness and the presence or absence of graphs was r = 0.94, p < .01. A correlation of 0.93 between FGA and graphs/article was also found, suggesting that the journals differed in number of graphs rather than the size of graphs (as Cleveland found for his cross-discipline differences).
35. Inspection of this figure suggests the possibility that the relationship between hardness and FGA is nonlinear, and indeed an exponential fit to the data produces a least-squares correlation of 0.96. However, the minimal improvement in fit and the absence of any obvious interpretation of the nonlinearity give no clear grounds for interpreting the relationship as exponential.
36. In this review, the variables we examined were limited to (a) those that were measured on a ratio or interval scale (so that correlations with rated hardness could be computed); (b) those for which data were available for four or more disciplines (to avoid spurious correlations due to extremely small samples); and (c) those that had relatively direct bearing on the cognitive status of disciplines (a criterion that excluded, for instance, gender differences and differences in political attitudes of scientists across different disciplines).
37. See Hargens, op. cit. note 11.
38. Cole, op. cit. note 9, 125-27.
39. This approach is also suggested by Cole et al., op. cit. note 8.
40. See, for example, I. Bernard Cohen, Interactions: Some Contacts Between the Natural Sciences and the Social Sciences (Cambridge, MA: MIT Press, 1994), 6-10, 189-96; Hans Zeisel, 'Difficulties in Indicator Construction: Notes and Queries', in Elkana et al. (eds), op. cit. note 8, 253-58.
41. Charles Bazerman, 'Theoretical Integration in Experimental Reports in Twentieth-Century Physics: Spectroscopic Articles in Physical Review, 1893-1980', in his Shaping Written Knowledge: The Genre and Activity of the Experimental Article in Science (Madison: University of Wisconsin Press, 1988), 153-86.
42. Cole et al. (op. cit. note 8, at 249-50) also advocate inclusion of a historical dimension in the study of science indicators. In the case of graph use, our own data sampled at 5-year intervals from the period 1980-95 showed a statistically significant but numerically slight increase over time; there was no difference between the five hardest journals and the five softest journals in the rate of increase. For other studies of graph use across time, see Howard Wainer and David Thissen, 'Graphical Data Analysis', Annual Review of Psychology, Vol. 32 (1981), 191-241; and Darrell L. Butler, 'Graphics in Psychology: Pictures, Data, and Especially Concepts', Behavior Research Methods, Instruments, & Computers, Vol. 25 (1993), 81-92. Both of these studies found increases in the use of graphs in psychological publications over several decades.
43. We recently reported on a preliminary effort in this direction, with results indicating that table use is negatively correlated with the hardness of subfields in psychology (thus suggesting that hardness is specifically related to graph use rather than to quantification in general, at least in the case of psychology). See Lisa A. Best, Andrea M. Bastiani, Laurence D. Smith, John Johnston, D. Alan Stubbs and Roger B. Frey, 'Data Presentation in Hard and Soft Psychology: Graphs and Tables', paper presented at the annual meeting of the American Psychological Association (Boston, MA, August 1999).

44. Fritz Machlup, 'Are the Social Sciences Really Inferior?', Southern Economic Journal, Vol. 27 (1961), 173-84, at 178.
45. George W. Bohrnstedt, 'Social Science Methodology: The Past Twenty-Five Years', American Behavioral Scientist, Vol. 23 (1980), 781-87, at 786.
46. Ian Hacking, Representing and Intervening (Cambridge: Cambridge University Press, 1983), 249. For the broader argument that quantitative methods are not crucial to any science, see Randall Collins, 'Why the Social Sciences Won't Become High-Consensus, Rapid-Discovery Science', Sociological Forum, Vol. 9 (1994), 155-77.
47. Bazerman, op. cit. note 41, 157, 173 & 183.
48. Bastide, op. cit. note 16, 214. The sentiment that tables are undecipherable, which will be familiar to anyone who has been confronted with page after page of tables, was expressed as early as 1891 by the economists Farquhar and Farquhar: 'Getting information from a table is like extracting sunlight from a cucumber' (quoted in Wainer & Thissen, op. cit. note 42, 236). On the prevalence of tables in social science, see the claim in the International Encyclopedia of the Social Sciences that 'statistical tables are the most common form of documentation used by the quantitative social scientist': James A. Davis and Ann M. Jacobs, 'Tabular Presentation', International Encyclopedia of the Social Sciences, Vol. 15 (New York: Macmillan, 1968), 497-509, at 497.
49. Bazerman, op. cit. note 41, 173.
50. Michael Lynch, 'The Externalized Retina: Selection and Mathematization in the Visual Documentation of Objects in the Life Sciences', in Lynch & Woolgar (eds), op. cit. note 15, 153-86, at 154.
51. Ronald N. Giere, Explaining Science: A Cognitive Approach (Chicago, IL: The University of Chicago Press, 1988), 190. Physicists' shunning of statistical tests in favour of 'eyeballing' graphs was pointed out earlier by Paul E. Meehl, 'Theoretical Risks and Tabular Asterisks: Sir Karl, Sir Ronald, and the Slow Progress of Soft Psychology', Journal of Consulting and Clinical Psychology, Vol. 46 (1978), 806-34. Meehl has continued to advocate the use of graphs over statistics in the social sciences.
52. Thomas L. Hankins, 'Blood, Dirt, and Nomograms: A Particular History of Graphs', Isis, Vol. 90 (1999), 50-80, at 50-52, 74-77, 79-80.
53. Frederic L. Holmes and Kathryn M. Olesko, 'The Images of Precision: Helmholtz and the Graphical Method in Physiology', in M. Norton Wise (ed.), The Values of Precision (Princeton, NJ: Princeton University Press, 1995), 198-221. For further examples, drawn from psychology, of empirical discoveries that depended on graphical methods, see Laurence D. Smith, Lisa A. Best, Virginia A. Cylke and D. Alan Stubbs, 'Psychology Without p Values: Data Analysis at the Turn of the 19th Century', American Psychologist (in press). Helmholtz's graphical technique involved instruments for making recordings of rapidly occurring muscle contractions, a technique popularized by the French physiologist Etienne-Jules Marey. Although not all graph-based discoveries have relied on instrument-generated graphs (as opposed to hand-drawn statistical graphs), the connection between graphs and instruments in many important cases raises the possibility that the relationship between graph use and the hardness of scientific fields is mediated in part by the differential reliance of fields on instruments. For the view that hard science is distinguished from soft science by the former's use of instruments that form a genealogy of research technologies, see Collins, op. cit. note 46.
54. Hankins, op. cit. note 52, at 78, 79.
55. Roger Krohn, 'Why Are Graphs So Central in Science?', Biology and Philosophy, Vol. 6 (1991), 181-203.
56. Norwood Russell Hanson, Patterns of Discovery: An Inquiry into the Conceptual Foundations of Science (New York: Cambridge University Press, 1958), 70-73, 82-86. A classic example of the role of pattern recognition in theory innovation is the emergence of the plate tectonics theory of continental drift as a consequence of graphical displays that revealed distinctive magnetic patterns in geological strata: see Giere, op. cit. note 51, 272. For a broader account of the functions of visual displays in geological theorizing, see Martin J.S. Rudwick, The Great Devonian Controversy: The Shaping of

Scientific Knowledge Among Gentlemanly Specialists (Chicago, IL: The University of Chicago Press, 1985).
57. For an introductory treatment of the human perceptual abilities that underlie the power of graphs to reveal patterns, see Stephen M. Kosslyn, The Elements of Graph Design (San Francisco, CA: W.H. Freeman, 1992).
58. This conclusion bears on a question raised by two referees who asked whether the present findings imply that soft science can be made hard simply by requiring the inclusion of graphs in its publications - say, by editorial decree. The answer to the question, at least stated this baldly, has to be 'no'. However, in view of the multiple (and typically beneficial) roles assumed by graphs in science, we find nothing implausible in the claim that the systematic inculcation of graphic skills and visual thinking among practitioners of soft science would in fact increase the rate of progress in such fields (where 'progress' means increased stability of facts, more rapid discovery of novel empirical relationships, greater consensus on previously reported empirical relationships, more and better-focused attention to relationships between data and theory, and so on). The training of soft scientists in graphic skills could well lead to improved ways of planning experiments, structuring data-collection processes, noticing unexpected patterns of findings, thinking about data and trying out hypotheses - in short, to a visual culture among soft scientists in which graphical practices pervade their work. Graphic skills are so taken for granted in the harder sciences that there is no separate curricular provision for them; they are part of the implicit laboratory praxis routinely acquired through graduate apprenticeships. That such skills rarely form a part of social scientists' training is a historically conditioned fact, but not one that is beyond modifying. The simplest answer to the referees' question, then, is that graphs are indeed central to science, but that published graphs are best interpreted as one public manifestation (albeit an important one) of an entire set of practices found among visually oriented scientists. For documentation of the pervasive use of inscriptions prior to the stage of publication, there are still no better sources than Latour & Woolgar, op. cit. note 13, and Lynch & Woolgar (eds), op. cit. note 15; it is also well to remember that Latour's notion of 'graphism' was intended to underscore the pervasiveness of graphs in science, and was certainly not limited in scope to published graphs. For examples of writings that explicitly advocate the training of social scientists in graphical techniques, see William J. McGuire, 'A Perspectivist Approach to the Strategic Planning of Programmatic Scientific Research', in Barry Gholson, William R. Shadish, Jr, Robert A. Neimeyer and Arthur C. Houts (eds), Psychology of Science: Contributions to Metascience (Cambridge: Cambridge University Press, 1989), 214-45, at 230-31; Armando Machado and Francisco J. Silva, 'Greatness and Misery in the Teaching of Psychology', Journal of the Experimental Analysis of Behavior, Vol. 70 (1998), 215-34, at 230-31; and Leland Wilkinson and the Task Force on Statistical Inference, 'Statistical Methods in Psychology Journals: Guidelines and Explanations', American Psychologist, Vol. 54 (1999), 594-604, at 601-602. The latter paper, written by an elite task force appointed by the American Psychological Association, concludes its treatment of graphical procedures as follows: 'It is time for authors to take advantage of them and for editors and reviewers to urge authors to do so' (602). Thus, to at least one expert committee of social scientists, the promotion of graphism by journal editors is not a far-fetched idea. For the view that training in graphical skills should actually begin in elementary school, see Howard Wainer, 'Understanding Graphs and Tables', Educational Researcher, Vol. 21 (1992), 14-23. Wainer uses the term 'graphicacy' to describe such skills, so as to highlight their status as a basic form of intellectual competence on a par with 'literacy' or 'numeracy'.
59. As noted by one referee, further research on how inscriptions are used across the range of sciences should bear in mind that visual inscriptions are not limited to graphs. For example, surveys could be done of the use of maps, sketches, tracings and photographs, comparing how these are used in different fields. At least in the case of psychology, however, our survey revealed that graphs were by far the most common type of visual inscription used in the journal literature. In our sample articles, graphs
outnumbered photographs by a ratio of 34:1, and non-graph illustrations (for example, conceptual diagrams, flow charts) by more than 7:1. (Similar results were obtained by Butler, op. cit. note 42.) Moreover, unlike graphs, neither photographs nor non-graph illustrations exhibited any consistent relationship with the rated hardness of the journals.
60. David Gooding, '"In Nature's School": Faraday as an Experimentalist', in Gooding and Frank A.J.L. James (eds), Faraday Rediscovered: Essays on the Life and Work of Michael Faraday, 1791-1867 (New York: Stockton Press, 1985), 105-36.
61. H. Gray Funkhouser, 'Historical Development of the Graphical Representation of Statistical Data', Osiris, Vol. 3 (1937), 269-404.
62. Lynch, op. cit. note 50, at 153, 154.
63. Jan Golinski, Making Natural Knowledge: Constructivism and the History of Science (Cambridge: Cambridge University Press, 1998), 145.
64. For reviews of the rapidly expanding technologies for graphical display, see John M. Chambers, William S. Cleveland, Beat Kleiner and Paul A. Tukey, Graphical Methods for Data Analysis (New York: Chapman & Hall, 1983); Cleveland, Visualizing Data (Summit, NJ: Hobart Press, 1993); Edward R. Tufte, Envisioning Information (Cheshire, CT: Graphics Press, 1990); and Wainer & Thissen, op. cit. note 42.
65. Given that Latour's classic paper on scientific graphs was the inspiration for the present investigation, it is ironic that Latour himself disregarded possible differences between the hard and soft sciences in terms of their graph use: 'There is no detectable difference between natural and social science, as far as the obsession for graphism is concerned': Latour, op. cit. note 15, 39.

Curious readers may like to know that the FGA for our paper, as it has been set in this journal, is 0.068. Readers can decide for themselves whether this information is all they need to interpret the paper's significance!

Laurence Smith holds a Master's degree in the Philosophy of Science and a PhD in the History of Psychology. He is Associate Professor of Psychology at the University of Maine, where he pursues research on the history and philosophy of data analysis practices. His publications include Behaviorism and Logical Positivism (Stanford University Press, 1986) and B.F. Skinner and Behaviorism in American Culture (Lehigh University Press, 1996), co-edited with William R. Woodward. Email: ldsmith@maine.edu

Lisa Best is a doctoral candidate in Psychology at the University of Maine. She is conducting experimental research on graph perception, with a focus on the detection of exponential trends in time series data. Email: Lisab51@maine.edu

Alan Stubbs is Professor of Psychology and Cooperating Professor of Art at the University of Maine. His current interests include visual perception, the theory and practice of data display, 'new media' (digital art, web design, etc.), and photography. Email: Alan.Stubbs@umit.maine.edu

John Johnston is a former graduate student in Psychology at the University of Maine. His interests are in data display techniques and the social psychology of Icelandic dyads. Fax: +1 207 968 2710; email: pokey@pivot.net

Address: Department of Psychology, University of Maine, 5742 Clarence Cook Little Hall, Orono, Maine 04469-5742, USA; Fax: +1 207 581 6128.
Andrea Bastiani Archibald holds a PhD in Developmental Psychology from Columbia University. She is currently a post-doctoral research fellow at the Center for Children and Families, Teachers College, Columbia University, where she conducts research (and has published) on adolescent female development and the role of puberty in the development of psychopathology, with special reference to eating disorders. Address: Center for Children and Families, Teachers College, Columbia University, New York, New York 10027, USA; fax: +1 212 678 3676; email: amb74@columbia.edu