In ‘What’s in an Error? A Lévy Walk from Astronomy to the Social Sciences’, Isaac talked about how it was really error, or more precisely the idea and observation that errors in natural-scientific experiments and measurements seem to exhibit certain repeatable properties, or laws, that formed the foundation of the study of probability and statistics. Only against a background of noise, variance and aberration could a concept of the normal (e.g. the average man), as well as the idea of truth as non-error, emerge and take shape. Versions of these concepts have remained central to the social sciences from the 18th century to this day.
Alex Weinreb brought up the point that the rise of statistics and its incorporation into social studies were coeval with the relative geographic immobility of people in the 18th century. What struck me was how closely statistical sociology has been tied to criminology since its inception. Adolphe Quetelet, the founder of statistical sociology, was a criminologist and used his method primarily to study the causation of crime. Even the Lévy walk, a random walk whose step lengths follow a heavy-tailed distribution (sketched below), was, I believe, first developed out of a practical application: tracking down prison escapees within certain calculated perimeters. Given its strong ties to Staatswissenschaft, the emergence of social statistics seems to go hand in hand with that of governance and social control, policy and police; hence the centrality of the normal-pathological distinction.
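For readers unfamiliar with the term, here is a minimal, generic sketch of a Lévy walk in Python: many short moves punctuated by occasional very long jumps, because the step lengths are drawn from a power-law (heavy-tailed) distribution rather than a Gaussian one. This is a textbook-style illustration under my own assumptions, not anything from Isaac's talk; the function name levy_walk and the parameters mu and ell_min are labels chosen here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def levy_walk(n_steps: int, mu: float = 2.0, ell_min: float = 1.0) -> np.ndarray:
    """2-D walk whose step lengths follow a power law p(l) ~ l**(-mu), l >= ell_min.

    Heavy-tailed step lengths (roughly 1 < mu <= 3) give the characteristic
    Lévy pattern: mostly short moves, occasionally a very long jump.
    """
    # Inverse-transform sampling from the Pareto-type step-length distribution.
    u = rng.random(n_steps)
    lengths = ell_min * (1.0 - u) ** (-1.0 / (mu - 1.0))
    # Each step goes in a uniformly random direction.
    angles = rng.uniform(0.0, 2.0 * np.pi, n_steps)
    steps = np.column_stack((lengths * np.cos(angles), lengths * np.sin(angles)))
    # Cumulative sum of steps gives the walker's trajectory, starting at the origin.
    return np.vstack(([0.0, 0.0], np.cumsum(steps, axis=0)))

positions = levy_walk(1000)
print(positions[-1])  # final displacement, typically dominated by the few longest jumps
```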
Nonetheless, the historical contingency of something doesn’t necessarily invalidate its internal consistency. Statistics does describe something; it presents reality, at least in part, in certain ways. Amanda’s presentation outlined some of these principles or consistencies and addressed the utilities and limitations, risks and yields, of a number of statistical models (e.g. instrumental variables). Alex pointed out that current multilevel nonlinear social-statistical models even seem to vindicate many earlier, non-statistics-based sociological observations. The underlying assumption in the debate over the merits of statistics-based versus non-statistics-based sociology is interesting: proponents typically hold that what can’t be empirically verified can’t be counted as true, while opponents counter that what’s true often lies beyond empirical verification altogether. The two positions are perhaps not as contradictory as they seem, for they share a common supposition: that large-scale phenomena are necessarily somehow more complex than small-scale phenomena (if the distinction between large and small even holds). Here the statistician’s or (for lack of a better term) ‘positive sociologist’s’ role becomes merely one of delimitation, whereas that of the ‘critical sociologist’ becomes one of extending epistemic scope. Expansion and edification are not mutually exclusive.
Perhaps it is this very assumption that needs questioning. Is it really easier to predict the motion of a cell or an atom than to predict how a person or a group will behave? Physics has shown us that the opposite can actually be the case. Large-scale phenomena, as such, are for the most part stable and relatively easy to predict; it’s when we get down to the micro- and nano-levels of reality that familiar laws collapse and things become deeply uncertain. Light may be both particle and wave, and yet this keyboard on which I’m typing, I’m fairly certain, won’t suddenly turn into a dove and fly away.
Sociologist Jean Baudrillard once said that knowledge is a high-definition screen onto which reality, a low-definition image, is projected. What might this mean for the sociological apparatus? How might the relation between theory and evidence, knowledge and object, itself be reconceived within the field? Amidst such questions, Isaac’s and Amanda’s talks remind us of a level of reflexivity that is both essential and useful to the practice and imagination of sociology.