Grant Getters and Committee Men
The characteristics of Facebook’s model (i.e., exemplary) users in many ways reflect the constraints on the model builders inside the company, i.e., the data scientists who try to build stylized versions of reality (models) based on certain data points and theories. The Facebook emotion experiment is part of a much larger reshaping of social science. To what extent will academics study data-driven firms like Facebook, and to what extent will they try to join forces with such firms’ in-house researchers to study others?
Present incentives are clear: collaborate with (rather than develop a critical theory of) big data firms. As Zeynep Tufekci puts it, “the most valuable datasets have become corporate and proprietary [and] top journals love publishing from them.” “Big data” has an aura of scientific validity simply because of the velocity, volume, and variety of the phenomena it encompasses. Psychologists certainly must have learned *something* from looking at over 600,000 accounts’ activity, right?
The problem, though, is that the corporate “science” of manipulation is a far cry from academic science’s ethics of openness and reproducibility. That has already led to some embarrassments in the crossover from corporate to academic modeling (such as the failures of Google Flu Trends). Researchers within Facebook worried about multiple experiments being performed at once on individual users, which might compromise the results of any one study. Standardized review could have prevented that. But, true to the Silicon Valley ethic of “move fast and break things,” speed was paramount: “There’s no review process. Anyone…could run a test…trying to alter people’s behavior,” said one former Facebook data scientist.