For the first time, a researcher has documented the birth of a new language, occurring in Lajamanu, a remote village in Australia's Northern Territory. Nicholas Bakalar does a nice job of telling us the story in The New York Times, carefully noting the significance of the finding and explaining how the new language was discovered.
Oddly--it seems odd to me, if to nobody else--the language is appearing in children, whose parents speak Warlpiri and sometimes Kriol, an English-based creole that Aboriginal tribes use to communicate with one another. The children learn those languages, but they are also refining a language of their own, called Light Warlpiri.
"People in Lajamanu often engage in what linguists call code-switching, mixing languages together or changing from one to another as they speak. And many words in Light Warlpiri are derived from English or Kriol," Bakalar writes. But Light Warlpiri is not simply a mix of other languages--it is something new, a "mother tongue," as one researcher tells Bakalar.
Why it is happening in Lajamanu, and why it is happening now, remain mysteries.
If you like Bakalar's story, check out the video. Mostly it recaps the story, but about halfway through you can hear a sample of Light Warlpiri. Cool.
This is the rare post about a plant-breeding project involving GMO seed giant Monsanto wherein I come to praise the effort, not bury it in scorn.
First, a bit of background. Broccoli is a fantastic thing to eat—even President Obama thinks so. It delivers compounds that seem to fight cancer and help maintain your immune system, among other benefits. It also tastes really, really good when it's fresh and in season. (Here's my simple recipe for roasted broccoli with garlic and chili pepper.)
And herein lies the rub. Broccoli plants grow well enough in warm weather, but they won't flower, meaning no delicious broccoli heads during a hot summer. And for that reason, most of the broccoli consumed in the United States—94 percent of it, in fact—is grown in the foggy zones of California. For people in the eastern half of the country, that means you can generally find fresh, locally grown broccoli only when the weather has cooled in the fall. The rest of the year, the stuff tends to be a bit the worse for wear when it reaches the table after the long haul from California. You know this stuff: limp, bland, vaguely sulfury, kind of gross. Hence, I think, broccoli's tenacious reputation as a good-for-you vegetable that sort of sucks.
As Michael Moss reports in a recent New York Times piece, a group of plant breeders from land-grant universities—including Cornell, the University of Maine, and the University of Tennessee—is looking to extend broccoli's growing season. Using conventional breeding (i.e., not genetic modification), they've created a breakthrough broccoli strain that "can thrive in hot, steamy summers like those in New York, South Carolina or Iowa," while also delivering heads that are "crisp, subtly sweet and utterly tender when eaten fresh-picked," Moss reports.
The initiative—facilitated by a $3.2 million grant from the US Department of Agriculture and called the Eastern Broccoli Project—strikes me as a proper use of public plant-breeding funds. Its goals seem impeccable: to increase the supply and appeal of a nutrient-dense vegetable in a way that cuts down on cross-country shipping and boosts local food economies. Too often, plant breeding is geared narrowly to the interests of the seed industry's shareholders—such as crops genetically engineered to resist the very herbicides sold by the companies themselves. It's great to see a breeding project geared to actual public interests, one that could transform the way a high-profile vegetable is grown and consumed over a large swath of the country. I can't think of another public seed-breeding project quite like it.
But there's a catch. As Moss reports, two gigantic agribusiness firms, Monsanto and its Swiss rival Syngenta, are partners in the project. They're best known for their GMO corn and soy, as well as pesticides, but Monsanto and Syngenta are also the globe's two biggest vegetable seed purveyors—and, according to the USDA, they and two other firms together control 70 percent of the entire global trade in vegetable seeds.
With giants like these barreling in, I wondered whether this benevolent-sounding broccoli project might turn into the wholly owned property of these firms. My concern would be market domination. There's a budding scene of small, regionally oriented seed companies in the United States, and I'd hate to see them cut off from a promising, publicly developed broccoli strain. I'd also hate to see the financial benefits of a publicly funded breeding program get completely siphoned off by these ginormous, market-dominating firms.
So I called Thomas Bjorkman, the Cornell plant scientist who's spearheading the project, to ask him about just that. Bjorkman explained that in addition to the biotech giants, partners include relatively small players like Maine-based Johnny's Selected Seeds, a purveyor widely used by small- and mid-size farmers across the eastern United States.
The way it works, Bjorkman explained, is that the Eastern Broccoli Project itself owns the breakthrough seed stock; the private partners like Monsanto and Johnny's license it and cross it with their own broccoli varieties to create proprietary hybrids. "Our goal is to get seeds of better-adapted broccoli varieties out to Eastern growers so that they can grow more local broccoli," he told me. And working with private players with established distribution networks is the fastest way to do that, he added.
In addition to the partnerships with Monsanto and Johnny's and the like, the group also plans to place open-pollinated versions of the new broccoli in the public domain—meaning that smaller seed purveyors will be able to develop and market their own strains. Monsanto and Syngenta are obviously participating because they hope to benefit from an emerging market in summer broccoli for Eastern growers, but Bjorkman convinced me that Eastern farmers who want access to the new summer-friendly broccoli traits will be able to get them without having to deal with a big biotech company if they'd prefer not to.
I asked him about the specter of genetic modification—would it be a tool his team would consider using as it refines its broccoli strains? He told me "no," for two reasons. The first involves consumer demand. "No one wants transgenic [GMO] broccoli, as far as I can tell," Bjorkman said. The second reason is even more fundamental, related to a little-discussed limitation of GM technology that I've written about before (see my 2012 piece on Monsanto's so-so "drought-tolerant" GMO corn): It ends up being not very good at overhauling complex processes, like how heat affects a plant's ability to flower. There's no one single gene that governs how, say, broccoli behaves under hot conditions, Bjorkman told me. And so GM technology "isn't a promising avenue for what we're doing."
In the cover story of a recent Atlantic, David Freedman argued that the answer to the obesity problem lies in kinder, gentler convenience food, engineered to be enticing while containing less sugar and fat. I pushed back, skeptical that Big Food could
hyper-process us a healthy diet even if it wanted to. Freedman and I rekindled our conversation in a joint interview on Minnesota Public Radio.
I got to thinking about the dustup when I read an essay, published in the journal PLOS Medicine by Brazil-based nutrition researchers Carlos Monteiro and Geoffrey Cannon, on how that country is dealing with its own emerging obesity crisis. The piece delivers insights into the relationship between Big Food's dominance of a nation's food system and obesity, as well as ways of thinking about the crisis that we might consider here in the United States.
This is the second in what was to be a two-part series on dual process reasoning and science communication. Now I’ve decided it must be three!
In the first, I described a conception of dual process reasoning that I don’t find compelling. In this one, I’ll describe another that I find more useful, at least for trying to make sense of and dispel the science communication problem. What I am planning to do in the third is something you’ll find out if you make it to the end of this post.
A brief recap (skip down to the red type below if you have a vivid recollection of part 1):
Dual process theories (DPT) have been around a long time and come in a variety of flavors. All the various conceptions, though, posit a basic distinction between information processing that is largely unconscious, automatic, and more or less instantaneous, on the one hand, and information processing that is conscious, effortful, and deliberate, on the other. The theories differ, essentially, over how these two relate to one another.
In the first post I criticized one conception of DPT, which I designated the “orthodox” view to denote its current prominence in popular commentary and synthetic academic work relating to risk perception and science communication.
The orthodox conception, which reflects the popularity and popularization of Kahneman’s appropriately influential work, sees the “fast,” unconscious, automatic type of processing—which it refers to as “System 1”—as the default mode of processing.
System 1 is tremendously useful, to be sure. Try to work out the optimal path of evasion by resort to a methodical algorithm and you’ll be consumed by the saber-tooth tiger long before you complete your computations (etc).
But System 1 is also prone to error, particularly when used for assessing risks that differ from the ones (like being eaten by saber-tooth tigers) that were omnipresent at the moment of our evolutionary development during which our cognitive faculties assumed their current form.
Our prospects for giving proper effect to information about myriad modern risks—including less vivid and conspicuous but nevertheless highly consequential ones, like climate change; or more dramatic and sensational but actuarially less significant ones like those arising from terrorism or from technologies like nuclear power and genetically modified foods the benefits of which might be insufficiently vivid to get System 1’s attention—depends on our capacity, time, and inclination to resort to the more effortful, deliberate, “slow” kind of reasoning, which the orthodox account labels “System 2.”
This is the DPT conception I don’t like.
I don’t like it because it doesn’t make sense.
The orthodox position’s picture of “reliable” System 2 “monitoring” and “correcting” “error-prone” System 1 commits what I called the “System 2 ex nihilo fallacy”—the idea that System 2 creates itself “out of nothing” in some miraculous act of spontaneous generation.
Nothing makes its way onto the screen of consciousness that wasn’t instants earlier floating happily along in the busy stream of unconscious impressions. Moreover, what yanked it from that stream and projected it had to be some unconscious mental operation too, else we face a problem of infinite regress: if it was “consciously” extracted from the stream of unconsciousness, something unconscious had to tell consciousness to perform that extraction.
I accept that the sort of conscious reflection on and re-assessment of intuition associated with System 2 truly & usefully occurs. But those things can happen only if something in System 1 itself—or at least something in the nature of a rapid, automatic, unconscious mental operation—occurs first to get System 2's attention.
So the Orthodox DPT conception is defective. What’s better?
I will call the conception of DPT that I find more compelling “IRM,” which stands for the “integrated, reciprocal model.”
The orthodox conception sees “System 1” and “System 2” as discrete and hierarchical. That is, the two are separate, and System 2 is “higher” in the sense of more reliably connected to sound information processing.
“Discrete and hierarchical” is clearly how Kahneman describes the relationship between the two modes of information processing in his Nobel lecture.
For him, System 1 and 2 are "sequential": System 1 operations automatically happen first; System 2 ones occur next, but only sometimes. So the two are necessarily separate.
Moreover, what System 2 does when it occurs is check to see if System 1 has gotten it right. If it hasn’t, it “corrects” System 1’s mistake. So System 2 “knows better,” and thus sits atop the hierarchy of reasoning processes within an ordering that ranks their contribution to rational thought.
IRM sees things differently. It says that “rational thought” occurs as a result of System 1 and System 2 working together, each supplying a necessary contribution to reasoning. That’s the integrated part.
Moreover, IRM posits that the ability of each to make its necessary contribution is dependent on the other’s contribution.
As the “System 2 ex nihilo” fallacy helps us to see, conscious reflection can make its distinctive contribution only if summoned into action by unconscious, automatic System 1 processes, which single out particular unconscious judgments as fit for the sort of interrogation that System 2 is able uniquely to perform.
But System 1 must be selective: there are far too many unconscious operations going on for all of them to be monitored, much less forced onto the screen of conscious thought, which would be overwhelmed by such indiscriminate summoning! But in being selective, it has to pick out the “right” impressions for attention & not ignore the ones whose unreflective acceptance would defeat an agent’s ends.
How does System 1 learn to perform this selection function reliably? From System 2, of course.
The ability to perform the conscious reasoning that consists in making valid inferences from observation, and the experience of doing so regularly, are what calibrate unconscious processes, training them to select certain impressions for the attention of System 2, which is then summoned to attend to them.
When it is summoned, moreover, System 2 does exactly what the orthodox view imagines: it checks and corrects, and on the basis of mental operations that are indeed more likely to get the “right” answer than those associated with System 1. That event of correction will itself conduce to the calibration and training of System 1.
That’s the reciprocal part of IRM: System 2 acts on the basis of signals from System 1, the capacity of which to signal reliably is trained by System 2.
I do not by any means claim to have invented IRM! I am synthesizing it from the work of many brilliant decision scientists.
The one who has made the biggest contribution to my view that IRM, and not the Orthodox conception of DPT, is correct is the brilliant social psychologist Howard Margolis.
Margolis presented an IRM account, as I’ve defined it, in his masterful trilogy (see the references below) on the role that “pattern recognition” plays in reasoning.
Pattern recognition is a mental operation in which a phenomenon apprehended via some mode of sensory perception is classified on the basis of a rapid, unconscious process that assimilates the phenomenon to a large inventory of “prototypes” (“dog”; “table”; “Hi, Jim!”; “losing chess position”; “holy shit—those are nuclear missile launchers in this aerial U2 reconnaissance photo! Call President Kennedy right away!” etc).
For Margolis, every form of reasoning involves pattern recognition. Even when we think we are performing conscious, deductive or algorithmic mental operations, we are really just manipulating phenomena in a manner that enables us to see the pattern in the manner requisite to an accurate and reliable form of unconscious prototypical classification. Indeed, Margolis ruthlessly shreds theories that identify critical thinking with conscious, algorithmic or logical assessment by showing that they reflect the incoherence I've described as the "System 2 ex nihilo fallacy."
Nevertheless, how well we perform pattern recognition, for Margolis, will reflect the contribution of conscious, algorithmic types of reasoning. The use of such reasoning (particularly in collaboration with experienced others, who can vouch through the use of their trained pattern-recognition sensibilities that we are arriving at the “right” result when we reason this way) stocks the inventory of prototypes and calibrates the unconscious mental processes that are used to survey and match them to the phenomena we are trying to understand.
As I have explained in a previous post (one comparing science communication and “legal neutrality communication”), this position is integral to Margolis’s account of conflicts between expert and lay judgments of risk. Experts, through a process that involves the conscious articulation and sharing of reasons, acquire a set of specialized prototypes, and an ability reliably to survey them, suited to their distinctive task.
The public necessarily uses a different set of prototypes—and sees different things—when it views the same phenomena. There are bridging forms of pattern recognition that enable nonexperts to recognize who the “experts” are—in which case, the public will assent to the experts’ views (their “pictures,” really). But sometimes the bridges collapse; and there is discord.
Margolis’s account is largely (and brilliantly) synthetic—an interpretive extrapolation from a wide range of sources in psychology and related disciplines. I don’t buy it in its entirety, and in particular would take issue with him on certain points about the sources of public conflict on risk perception.
But the IRM structure of his account seems right to me. It is certainly more coherent—because it avoids the ex nihilo fallacy—than the Orthodox view. But it is also in better keeping with the evidence.
That evidence, for me, consists not only in the materials surveyed by Margolis. It includes, too, work by contemporary decision scientists.
The work of some of those decision scientists—and in particular that of Ellen Peters—will be featured in Part 3.
I will also take up there what is in fact the most important thing, and likely what I should have started with: why any of this matters.
Any “dual process theory” of reasoning will necessarily be a simplification of how reasoning “really” works.
But so will any alternative theory of reasoning or any theory whatsoever that has any prospect of being useful.
Rather than calling them simplifications, we should say such theories are, like all theories in science, models of phenomena of interest.
The success of theories as models doesn’t depend on how well they “correspond to reality.” Indeed, the idea that that is how to assess them reflects a fundamental confusion: the whole point of “modeling” is to make tractable and comprehensible phenomena that would otherwise be too complex, or too remote from straightforward ways of seeing, to be made sense of.
The criteria for judging the success of competing models of that sort are pragmatic: How good is this model relative to that one in allowing us to explain, predict, and formulate satisfying prescriptions for improving our situation?
In Part 3, then, I will also be clear about the practical criteria that make the IRM conception so much more satisfying than the Orthodox conception of dual process reasoning.
Those criteria, of course, are ones that reflect my interest (and yours; it is inconceivable you have gotten this far otherwise) in advancing the scientific study of science communication--& thus perfecting the Constitution of the Liberal Republic of Science.