Monday, December 8, 2014
Church-goers are NOT dumber
That people are religious because they are stupid has been a frequent assertion, particularly from the Left. Some recent high-quality research (below), however, refutes that. The researchers found no association between church-going and IQ, though they did find a weak negative association between religious belief (as distinct from attendance) and IQ. And religious people are also NOT more likely to go ga-ga as they get older. See also here and here.
Religiosity is negatively associated with later-life intelligence, but not with age-related cognitive decline
Abstract
A well-replicated finding in the psychological literature is the negative correlation between religiosity and intelligence. However, several studies also conclude that one form of religiosity, church attendance, is protective against later-life cognitive decline.
No effects of religious belief per se on cognitive decline have been found, potentially due to the restricted measures of belief used in previous studies. Here, we examined the associations between religiosity, intelligence, and cognitive change in a cohort of individuals (initial n = 550) with high-quality measures of religious belief taken at age 83 and multiple cognitive measures taken in childhood and at four waves between age 79 and 90.
We found that religious belief, but not attendance, was negatively related to intelligence. The effect size was smaller than in previous studies of younger participants. Longitudinal analyses showed no effect of either religious belief or attendance on cognitive change either from childhood to old age, or across the ninth decade of life.
We discuss differences between our cohort and those in previous studies – including in age and location – that may have led to our non-replication of the association between religious attendance and cognitive decline.
SOURCE
Sunday, December 7, 2014
Kids from affluent families start out smarter than poor kids, and the gap between them widens further as they grow up
It has long been known that the rich are smarter. Charles Murray got heavy flak when he showed that two decades ago, but it is logical that people who are smart in general should also be smart with money. But the gorgeous Sophie von Stumm has amplified that in the research below. My previous comments about some of her research were rather derogatory but I find no fault with the work below.
Explaining the finding is the challenge. An obvious comment is that measuring the IQ of young children is difficult -- but not impossible -- and that the widening gap may simply reflect more accurate measurement in later life.
I would reject the explanation that the better home life in a rich family helped improve the child's IQ -- because all the twin studies show that the family environment is a negligible contributor to IQ -- counter-intuitive though that might be.
The present findings do, however, tie in well with previous findings that the genetic influence on IQ gets greater as people get older. People shed some environmental influences as they age and become more and more what their genetics would dictate.
Sophie von Stumm
Poverty affects the intelligence of children as young as two, a study has found - and its impact increases as the child ages. Deprived young children were found to have IQ scores six points lower, on average, than children from wealthier families.
And the gap got wider throughout childhood, with the early difference tripling by the time the children reached adolescence.
Scientists from Goldsmiths, University of London compared data on almost 15,000 children and their parents as part of the Twins Early Development Study (Teds). The study is an on-going investigation of socio-economic and genetic links to intelligence.
Children were assessed nine times between the ages of two and 16, using a mixture of parent-administered, web and telephone-based tests.
The results, published in the journal Intelligence, revealed that children from wealthier backgrounds with more opportunities scored higher in IQ tests at the age of two, and experienced greater IQ gains over time.
Dr Sophie von Stumm, from Goldsmiths, University of London, who led the study, said: 'We’ve known for some time that children from low socioeconomic status (SES) backgrounds perform on average worse on intelligence tests than children from higher SES backgrounds, but the developmental relationship between intelligence and SES had not been previously shown. 'Our research establishes that relationship, highlighting the link between SES and IQ.
SOURCE
Socioeconomic status and the growth of intelligence from infancy through adolescence
By Sophie von Stumm & Robert Plomin
Abstract
Low socioeconomic status (SES) children perform on average worse on intelligence tests than children from higher SES backgrounds, but the developmental relationship between intelligence and SES has not been adequately investigated. Here, we use latent growth curve (LGC) models to assess associations between SES and individual differences in the intelligence starting point (intercept) and in the rate and direction of change in scores (slope and quadratic term) from infancy through adolescence in 14,853 children from the Twins Early Development Study (TEDS), assessed 9 times on IQ between the ages of 2 and 16 years. SES was significantly associated with intelligence growth factors: higher SES was related both to a higher starting point in infancy and to greater gains in intelligence over time. Specifically, children from low SES families scored on average 6 IQ points lower at age 2 than children from high SES backgrounds; by age 16, this difference had almost tripled. Although these key results did not vary across girls and boys, we observed gender differences in the development of intelligence in early childhood. Overall, SES was shown to be associated with individual differences in intercepts as well as slopes of intelligence. However, this finding does not warrant causal interpretations of the relationship between SES and the development of intelligence.
SOURCE
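For readers unfamiliar with latent growth curve models, the kind of specification the abstract describes can be sketched roughly as follows (the notation is mine, not the authors', and the model TEDS actually fitted may differ in detail):

IQ_{it} = (\beta_0 + b_{0i}) + (\beta_1 + b_{1i})\, t_{it} + \beta_2\, t_{it}^2 + \varepsilon_{it}

b_{0i} = \gamma_0\, SES_i + u_{0i}, \qquad b_{1i} = \gamma_1\, SES_i + u_{1i}

Here b_{0i} is child i's starting point (intercept), b_{1i} the rate of change (slope), \beta_2 the quadratic term, and \gamma_0 and \gamma_1 measure how SES relates to the starting point and to gains over time. The reported finding is that both \gamma_0 and \gamma_1 are positive: higher-SES children start higher and gain more.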
Monday, December 1, 2014
There is NO American Dream?
Gregory Clark is very good at both social history and economic history. His latest work, however, leans on what I see as a very weak reed. He finds surnames that are associated with wealth and tracks those surnames down the generations. And he finds that in later generations those surnames continue to be associated with wealth.
That is all well and good but he is using only a very small sampling of the population so can tell us nothing about the society at large. The well-known effect of a man making a lot of money only for his grandchildren to blow the lot is not captured by his methods.
So if the American dream consists of raising up a whole new lineage of wealth, we can agree that such a raising up is rare, though not unknown. But if we see the American Dream as just one man "making it" (regardless of what his descendants do) Clark has nothing to tell us about it. And I think that latter version of the dream is the usual one.
But his finding that SOME lineages stay wealthy is an interesting one. And he explains it well. He says (to simplify a little) that what is inherited is not wealth but IQ. As Charles Murray showed some years back, smarter people tend to be richer and tend to marry other smart people. So their descendants stay smart, and smart people are mostly smart about money too.
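A rough way to see the arithmetic (illustrative figures, not Clark's exact estimates): if social status correlates b between one generation and the next, then under a simple generation-to-generation chain the correlation after n generations is about

r_n \approx b^{\,n}

With the conventional estimate of b \approx 0.4, great-great-grandchildren are essentially uncorrelated with the founder (0.4^4 \approx 0.03); with the b \approx 0.75 that Clark's surname method suggests, the correlation after four generations is still about 0.32 (0.75^4 \approx 0.32) -- which is why the same surnames keep turning up among the wealthy.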
And note that although IQ is about two thirds genetically inherited, genetic inheritance can throw up surprises at times. I once knew, for instance, two brown-haired parents who had three red-headed kids. The hair was still genetically inherited (there would have been redheads among their ancestors), but just WHICH genes you get out of the parental pool when you are conceived seems to be random. So you do get the phenomenon of two ordinary people having a very bright child. And that child can do very well in various ways -- monetary and otherwise. I was such a child.
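A minimal worked example of that randomness, assuming the textbook single-gene, recessive account of red hair and that both brown-haired parents happened to be carriers (an assumption for illustration; real hair-colour genetics is messier):

P(\text{red-haired child}) = \tfrac{1}{4}, \qquad P(\text{three red-haired children}) = \left(\tfrac{1}{4}\right)^{3} = \tfrac{1}{64}

Uncommon, but nothing out of the ordinary genetically. IQ is polygenic rather than single-gene, so the analogy is loose, but the same lottery over which parental genes a child receives applies.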
>>>>>>>>>>>>>>>>>>>>>
It has powered the hopes and dreams of U.S. citizens for generations. But the American Dream does not actually exist, according to one economics professor.
Gregory Clark, who works at the University of California, Davis, claims the national ethos is simply an illusion and that social mobility in the country is no higher than in the rest of the world.
'America has no higher rate of social mobility than medieval England or pre-industrial Sweden,' he said. 'That’s the most difficult part of talking about social mobility - it's shattering people's dreams.'
After studying figures from the past 100 years and applying a formula to them, Mr Clark concluded that disadvantaged Americans will not be granted more opportunities if they are hard-working.
Instead, they will be stuck in their social status for the rest of their lives - and their position will, in turn, affect the statuses of their children, grandchildren and great-grandchildren, he said.
'The United States is not exceptional in its rates of social mobility,' the professor wrote in an essay published by the Council on Foreign Relations. 'It can perform no special alchemy on the disadvantaged populations of any society in order to transform their life opportunities.'
Speaking to CBS Sacramento, he added: 'The status of your children, grandchildren, great grandchildren, great-great grandchildren will be quite closely related to your average status now.'
However, not all of Mr Clark's students agree with his findings, with some pointing out that although parents' wealth has an effect on a child's life, 'it is not the ultimate deciding factor'.
SOURCE. More HERE.
Thursday, November 20, 2014
Our Futile Efforts to Boost Children's IQ
The twin studies have always shown little influence from family environment -- both as regards IQ and personality. Charles Murray notes more evidence to that effect below
It’s one thing to point out that programs to improve children's cognitive functioning have had a dismal track record. We can always focus on short-term improvements, blame the long-term failures on poor execution or lack of follow-up and try, try again. It’s another to say that it's impossible to do much to permanently improve children's intellectual ability through outside interventions. But that’s increasingly where the data are pointing.
Two studies published this year have made life significantly more difficult for those who continue to be optimists. The first one is by Florida State University’s Kevin Beaver and five colleagues, who asked how much effect parenting has on IQ independently of genes. The database they used, the National Longitudinal Study of Adolescent Health, is large, nationally representative and highly regarded. The measures of parenting included indicators for parental engagement, attachment, involvement and permissiveness. The researchers controlled for age, sex, race and neighborhood disadvantage. Their analytic model, which compares adoptees with biological children, is powerful, and their statistical methods are sophisticated and rigorous.
The answer to their question? Not much. “Taken together,” the authors write, “the results … indicate that family and parenting characteristics are not significant contributors to variations in IQ scores.” It gets worse: Some of the slight effects they did find were in the “wrong” direction. For example, maternal attachment was negatively associated with IQ in the children.
There’s nothing new in the finding that the home environment doesn’t explain much about a child’s IQ after controlling for the parents’ IQ, but the quality of the data and analysis in this study address many of the objections that the environmentalists have raised about such results. Their scholarly wiggle-room for disagreement is shrinking.
The second study breaks new ground. Six of its eight authors come from King’s College London, home to what is probably the world’s leading center for the study of the interplay among genes, environment and developmental factors. The authors applied one of the powerful new methods enabled by the decoding of the genome, “Genome-wide Complex Trait Analysis,” to ask how much effect socioeconomic status has on IQ independently of genes. The technique does not identify the causal role of specific genes, but rather enables researchers to identify patterns that permit conclusions like the one they reached in this study: “When genes associated with children’s IQ are identified, the same genes will also be likely to be associated with family SES.” Specifically, the researchers calculated that 94 percent of the correlation between socioeconomic status and IQ was mediated by genes at age 7 and 56 percent at age 12.
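For readers wondering what "mediated by genes" means formally: in a bivariate genetic analysis the observed SES-IQ correlation is partitioned into a genetic and an environmental component (my notation, showing the standard decomposition rather than necessarily the authors' exact model):

r_{P} = \sqrt{h^{2}_{SES}}\,\sqrt{h^{2}_{IQ}}\; r_{A} \; + \; \text{(environmental terms)}

The "94 percent at age 7" is the genetic term divided by the total correlation r_{P}. GCTA estimates the h^{2} values and the genetic correlation r_{A} from genome-wide similarity among unrelated children, rather than from twin comparisons.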
How can parenting and socioeconomic status play such minor roles in determining IQ, when scholars on all sides of the nature-nurture debate agree that somewhere around half of the variation in IQ is environmental? The short answer is that the environment that affects IQ doesn’t consist of the advantages that most people have in mind -- parents who talk a lot to their toddlers, many books in the house for the older children, high-quality schools and the like.
Instead, studies over the past two decades have consistently found that an amorphous thing called the “nonshared” environment accounts for most (in many studies, nearly all) of the environmentally grounded variation. Scholars are still trying to figure out what features of the nonshared environment are important. Peers? Events in the womb? Accidents? We can be sure only of this: The nonshared environment does not lend itself to policy interventions intended to affect education, parenting, income or family structure.
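In behavioral-genetic shorthand, the distinction Murray is drawing is the standard ACE decomposition of trait variance (the numbers below are typical adult-IQ estimates, offered for illustration; they vary by study and by age):

Var(P) = A + C + E

where A is additive genetic variance, C the shared ("family") environment, and E the nonshared environment plus measurement error. For adult IQ, A is commonly estimated at roughly 0.5-0.8 of the variance and C near zero, so nearly all of the remaining environmental variance sits in E -- the component that parenting- and SES-targeted interventions do not reach.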
The relevance of these findings goes beyond questions of public policy. As a parent of four children who all turned out great (in my opinion), I’d like to take some credit. With every new study telling me that I can’t legitimately do so with regard to IQ or this or that personality trait, I try to come up with something, anything, about my children for which I can still believe my parenting made a positive difference. It’s hard.
There’s no question that we know how to physically and psychologically brutalize children so that they are permanently damaged. But it increasingly appears that once we have provided children with a merely OK environment, our contribution as parents and as society is pretty much over. I’m with most of you: I viscerally resist that conclusion. But my resistance is founded on a sustained triumph of hope over evidence.
SOURCE
Monday, November 3, 2014
Did rationing in World War 2 increase the intelligence of Britons?
The journal article is Aging trajectories of fluid intelligence in late life: The influence of age, practice and childhood IQ on Raven's Progressive Matrices and the key passage is reproduced below:
"Standardizing the MHT [original] scores indicated a difference between the cohorts of 3.7 points. This is slightly smaller than expected and may be brought about by survival and selection bias discussed above. Late life comparisons indicate a significantly greater difference between the cohorts, comparing the cohorts at age 77; where there is overlap in data we find a difference of 10.4 raw RPM points or 16.5 IQ points, which is surprisingly large."
What this says is that both groups started out pretty much the same but by the time they had got into their 70s the younger group was much brighter. The authors below attribute the difference to nutrition, which is pretty nonsensical. They say that eating "rich, sugary and fatty foods" lowers IQ but where is the evidence for that? The only studies I know are epidemiological and overlook important third factors such as social class. So those studies can only be relied on if you believe that correlation is causation, which it is not. And one might note that average IQs in Western nations have been RISING even as consumption of fast food has been rising. So even the epidemiology is not very supportive of the claims below.
Where important micronutrients (iodine and iron particularly) are largely absent from the food of a population -- as in Africa -- nutritional improvements can make a big difference, but the idea that Aberdonians in the 1920s were severely deprived of such micronutrients seems fanciful. Aberdeen has long been an important fishing port and fish are a major source of iodine -- while iron comes mostly from beef, and Scots have long raised and eaten a lot of beef. The traditional diet of poor Scots -- "mince 'n tatties" -- is certainly humble but it does include beef. Aberdeen even gave its name to a famous beef breed: the widely praised Aberdeen Angus. You can eat meat from them in most McDonald's restaurants these days.
So why was the IQ divergence between the two groups below not observed in early childhood when it was so strong in later life? A divergence of that kind (though not of that magnitude) is not unprecedented for a number of reasons: IQ measurement at age 11 is less reliable than measures taken in adulthood; IQ becomes more and more a function of genetics as we get older. In early life environmental factors have more impact and it takes a while for (say) a handicapping early environment to be overcome.
But I suspect that the main influence on the finding was that two different tests were used. IQ was measured at age 11 by an educational aptitude test, and in the subjects' 70s it was measured by a non-verbal test. The two were correlated, but only at about .75, which does allow for considerable divergence. So the oldsters (1921 cohort) were simply not good at non-verbal puzzles, probably because they had little experience with them. The test they took at age 11, however, mostly used problems similar to ones they had already encountered many times in the course of their schooling.
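To put a number on "considerable divergence": with a correlation of .75, only about half the variance in the two tests is shared, so sizeable discrepancies are expected even with no real change in ability (a back-of-envelope calculation, not a figure from the paper):

r^{2} = 0.75^{2} \approx 0.56, \qquad SD_{\text{error}} = 15\sqrt{1 - r^{2}} \approx 9.9 \text{ IQ points}

Knowing the age-11 score therefore still leaves a prediction error of nearly 10 IQ points on the later non-verbal test, ample room for cohort differences in test familiarity to show up.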
The 1936 cohort, by contrast, had most of their education in the postwar era when people spent longer in the educational system. And IQ testing in the schools was much in vogue up until the 1960s so that generation would have had a much wider testing experience.
The retest was, in other words, invalid. It was not comparing like with like.
>>>>>>>>>>>>>>>>>>>>>>>>>>
A study by the University of Aberdeen and NHS Grampian has found that children who grew up during the Second World War became far more intelligent than those who were born just 15 years before.
Researchers think that cutting rich, sugary and fatty foods out of the diets of growing children had a hugely beneficial impact on their growing brains.
The University of Aberdeen team examined two groups of people raised in Aberdeen, one born in 1921 and one born in 1936. These people are known as the Aberdeen Birth Cohort and were tested when they were aged 11 and again as adults after the age of 62. The study consisted of 751 people, all tested at age 11 and retested between 1998 and 2011 on up to five occasions.
Researchers comparing the two groups at age 11 found an increase in IQ of 3.7 points, which was marginally below what was expected but within the range seen in other studies. However, comparison in late life found an increase in IQ of 16.5 points, which is over three times what was expected.
Before the war, more than two thirds of British food was imported. But enemy ships targeting merchant vessels prevented fruit, sugar, cereals and meat from reaching the UK.
The Ministry of Food issued ration books and rationing for bacon, butter and sugar began in January 1940.
But it was the MoF’s Dig For Victory campaign, encouraging self-sufficiency, which really changed how Britain ate. Allotment [mini farm] numbers rose from 815,000 to 1.4 million.
Pigs, chickens and rabbits were reared domestically for meat, whilst vegetables were grown anywhere that could be cultivated. By 1940 wasting food was a criminal offence.
More HERE
Sunday, October 19, 2014
America's most "incorrect" man reflects
"The Bell Curve" 20 years later: A Q&A with Charles Murray
October marks the 20th anniversary of “The Bell Curve: Intelligence and Class Structure in American Life,” the extraordinarily influential and controversial book by AEI scholar Charles Murray and Richard Herrnstein. Here, Murray answers a few questions about the predictions, controversy, and legacy of his book.
Q. It’s been 20 years since “The Bell Curve” was published. Which theses of the book do you think are the most relevant right now to American political and social life?
American political and social life today is pretty much one great big “Q.E.D.” for the two main theses of “The Bell Curve.” Those theses were, first, that changes in the economy over the course of the 20th century had made brains much more valuable in the job market; second, that from the 1950s onward, colleges had become much more efficient in finding cognitive talent wherever it was and shipping that talent off to the best colleges. We then documented all the ways in which cognitive ability is associated with important outcomes in life — everything from employment to crime to family structure to parenting styles. Put those all together, we said, and we’re looking at some serious problems down the road. Let me give you a passage to quote directly from the close of the book:
Predicting the course of society is chancy, but certain tendencies seem strong enough to worry about:
An increasingly isolated cognitive elite.
A merging of the cognitive elite with the affluent.
A deteriorating quality of life for people at the bottom end of the cognitive distribution.
Unchecked, these trends will lead the U.S. toward something resembling a caste society, with the underclass mired ever more firmly at the bottom and the cognitive elite ever more firmly anchored at the top, restructuring the rules of society so that it becomes harder and harder for them to lose. (p. 509)
Remind you of anything you’ve noticed about the US recently? If you look at the first three chapters of the book I published in 2012, “Coming Apart,” you’ll find that they amount to an update of “The Bell Curve,” showing how the trends that we wrote about in the early 1990s had continued and in some cases intensified since 1994. I immodestly suggest that “The Bell Curve” was about as prescient as social science gets.
Q. But none of those issues has anything to do with race, and let’s face it: the firestorm of controversy about “The Bell Curve” was all about race. We now have 20 more years of research and data since you published the book. How does your position hold up?
First, a little background: Why did Dick and I talk about race at all? Not because we thought it was important on its own. In fact, if we lived in a society where people were judged by what they brought to the table as individuals, group differences in IQ would be irrelevant. But we were making pronouncements about America’s social structure (remember that the book’s subtitle is “Intelligence and Class Structure in American Life”). If we hadn’t discussed race, “The Bell Curve” would have been dismissed on grounds that “Herrnstein and Murray refuse to confront the reality that IQ tests are invalid for blacks, which makes their whole analysis meaningless.” We had to establish that in fact IQ tests measure the same thing in blacks as in whites, and doing so required us to discuss the elephant in the corner, the mean difference in test scores between whites and blacks.
Here’s what Dick and I said: "There is a mean difference in black and white scores on mental tests, historically about one standard deviation in magnitude on IQ tests (IQ tests are normed so that the mean is 100 points and the standard deviation is 15). This difference is not the result of test bias, but reflects differences in cognitive functioning. The predictive validity of IQ scores for educational and socioeconomic outcomes is about the same for blacks and whites."
Those were our confidently stated conclusions about the black-white difference in IQ, and none of them was scientifically controversial. See the report of the task force on intelligence that the American Psychological Association formed in the wake of the furor over “The Bell Curve.”
What’s happened in the 20 years since then? Not much. The National Assessment of Educational Progress shows a small narrowing of the gap between 1994 and 2012 on its reading test for 9-year-olds and 13-year-olds (each by the equivalent of about 3 IQ points), but hardly any change for 17-year-olds (about 1 IQ-point-equivalent). For the math test, the gap remained effectively unchanged for all three age groups.
On the SAT, the black-white difference increased slightly from 1994 to 2014 on both the verbal and math tests. On the reading test, it rose from .91 to .96 standard deviations. On the math test, it rose from .95 to 1.03 standard deviations.
If you want to say that the NAEP and SAT results show an academic achievement gap instead of an IQ gap, that’s fine with me, but it doesn’t change anything. The mean group difference for white and African American young people as they complete high school and head to college or the labor force is effectively unchanged since 1994. Whatever the implications were in 1994, they are about the same in 2014.
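For readers checking the arithmetic behind these equivalences (my conversion, using the convention that the IQ metric has a standard deviation of 15):

\text{gap in IQ-point equivalents} = \text{gap in SD units} \times 15

So the 2014 SAT reading gap of 0.96 SD corresponds to roughly 14 points on the IQ metric, and the NAEP narrowing of "about 3 IQ points" to roughly 0.2 SD.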
There is a disturbing codicil to this pattern. A few years ago, I wrote a long technical article about black-white changes in IQ scores by birth cohort. I’m convinced that the convergence of IQ scores for blacks and whites born before the early 1970s was substantial, though there’s still room for argument. For blacks and whites born thereafter, there has been no convergence.
Q. The flashpoint of the controversy about race and IQ was about genes. If you mention “The Bell Curve” to someone, they’re still likely to say “Wasn’t that the book that tried to prove blacks were genetically inferior to whites?” How do you respond to that?
Actually, Dick and I got that reaction even while we were working on the book. As soon as someone knew we were writing a book about IQ, the first thing they assumed was that it would focus on race, and the second thing they assumed was that we would be talking about genes. I think psychiatrists call that “projection.” Fifty years from now, I bet those claims about “The Bell Curve” will be used as a textbook case of the hysteria that has surrounded the possibility that black-white differences in IQ are genetic. Here is the paragraph in which Dick Herrnstein and I stated our conclusion:
"If the reader is now convinced that either the genetic or environmental explanation has won out to the exclusion of the other, we have not done a sufficiently good job of presenting one side or the other. It seems highly likely to us that both genes and the environment have something to do with racial differences. What might the mix be? We are resolutely agnostic on that issue; as far as we can determine, the evidence does not yet justify an estimate." (p. 311)
That’s it. The whole thing. The entire hateful Herrnstein-Murray pseudoscientific racist diatribe about the role of genes in creating the black-white IQ difference. We followed that paragraph with a couple pages explaining why it really doesn’t make any difference whether the differences are caused by genes or the environment. But nothing we wrote could have made any difference. The lesson, subsequently administered to James Watson of DNA fame, is that if you say it is likely that there is any genetic component to the black-white difference in test scores, the roof crashes in on you.
On this score, the roof is about to crash in on those who insist on a purely environmental explanation of all sorts of ethnic differences, not just intelligence. Since the decoding of the genome, it has been securely established that race is not a social construct, evolution continued long after humans left Africa along different paths in different parts of the world, and recent evolution involves cognitive as well as physiological functioning.
The best summary of the evidence is found in the early chapters of Nicholas Wade’s recent book, “A Troublesome Inheritance.” We’re not talking about another 20 years before the purely environmental position is discredited, but probably less than a decade. What happens when a linchpin of political correctness becomes scientifically untenable? It should be interesting to watch. I confess to a problem with schadenfreude.
Q. Let’s talk about the debate over the minimum wage for a moment. You predicted in the book that the “natural” wage for low-skill labor would be low, and that raising the wage artificially could backfire by “making alternatives to human labor more affordable” and “making the jobs disappear altogether.” This seems to be coming true today. What will the labor landscape look like in the next 20 years?
Terrible. I think the best insights on this issue are Tyler Cowen’s in “Average Is Over.” He points out something that a lot of people haven’t thought about: it’s not blue-collar jobs that are going to be hit the hardest. In fact, many kinds of skilled blue-collar work are going to be needed indefinitely. It’s mid-level white-collar jobs that are going to be hollowed out. Think about travel agents. In 1994, I always used a travel agent, and so did just about everybody who traveled a lot. But then came Expedia and Orbitz and good airline websites, and I haven’t used a travel agent for 15 years.
Now think about all the white collar jobs that consist of applying a moderately complex body of interpretive rules to repetitive situations. Not everybody is smart enough to do those jobs, so they have paid pretty well. But now computers combined with machines can already do many of them—think about lab technicians who used to do your blood work, and the machines that do it now. For that matter, how long is it before you’re better off telling a medical diagnostic software package your symptoms than telling a physician?
Then Cowen points out something else I hadn’t thought of: One of the qualities that the new job market will value most highly is conscientiousness. Think of all the jobs involving personal service—working in homes for the elderly or as nannies, for example—for which we don’t need brilliance, but we absolutely need conscientiousness along with basic competence. Cowen’s right—and that has some troubling implications for guys, because, on average, women in such jobs are more conscientious than men.
My own view is that adapting to the new labor market, and making sure that working hard pays a decent wage, are among the most important domestic challenges facing us over the next few decades.
Q. In the book you ask, “How should policy deal with the twin realities that people differ in intelligence for reasons that are not their fault and that intelligence has a powerful bearing on how well people do in life?” How would you answer this question now?
I gave my answer in a book called “In Our Hands: A Plan to Replace the Welfare State,” that I published in 2006. I want to dismantle all the bureaucracies that dole out income transfers, whether they be public housing benefits or Social Security or corporate welfare, and use the money they spend to provide everyone over the age of 21 with a guaranteed income, deposited electronically every month into a bank account. It takes a book to explain why such a plan could not only work, but could revitalize civil society, but it takes only a few sentences to explain why a libertarian would advocate such a plan.
Certain mental skillsets are now the “open sesame” to wealth and social position in ways that are qualitatively different from the role they played in earlier times. Nobody deserves the possession of those skillsets. None of us has earned our IQ. Those of us who are lucky should be acutely aware that it is pure luck (too few are), and be committed to behaving accordingly. Ideally, we would do that without government stage-managing it. That’s not an option. Massive government redistribution is an inevitable feature of advanced postindustrial societies.
Our only option is to do that redistribution in the least destructive way. Hence my solution. It is foreshadowed in the final chapter of “The Bell Curve” where Dick and I talk about “valued places.” The point is not just to pass out enough money so that everyone has the means to live a decent existence. Rather, we need to live in a civil society that naturally creates valued places for people with many different kinds and levels of ability. In my experience, communities that are left alone to solve their own problems tend to produce those valued places. Bureaucracies destroy them. So my public policy message is: Let government do what it does best, cut checks. Let individuals, families, and communities do what they do best, respond to human needs on a one-by-one basis.
Q. Reflecting on the legacy of “The Bell Curve,” what stands out to you?
I’m not going to try to give you a balanced answer to that question, but take it in the spirit you asked it—the thing that stands out in my own mind, even though it may not be the most important. I first expressed it in the Afterword I wrote for the softcover edition of “The Bell Curve.” It is this: The reaction to “The Bell Curve” exposed a profound corruption of the social sciences that has prevailed since the 1960s. “The Bell Curve” is a relentlessly moderate book — both in its use of evidence and in its tone — and yet it was excoriated in remarkably personal and vicious ways, sometimes by eminent academicians who knew very well they were lying. Why? Because the social sciences have been in the grip of a political orthodoxy that has had only the most tenuous connection with empirical reality, and too many social scientists think that threats to the orthodoxy should be suppressed by any means necessary. Corruption is the only word for it.
Now that I’ve said that, I’m also thinking of all the other social scientists who have come up to me over the years and told me what a wonderful book “The Bell Curve” is. But they never said it publicly. So corruption is one thing that ails the social sciences. Cowardice is another.
SOURCE
Friday, October 17, 2014
"Slate" rediscovers IQ -- though they dare not to call it that
They recoil in horror from applying the findings to intergroup differences, however, and claim without explanation that what is true of individuals cannot be true of groups of individuals. That is at least counterintuitive. They even claim that there is no evidence of IQ differences between groups being predictive of anything.
I suppose that one has to pity their political correctness, however, because the thing they are greatly at pains to avoid -- the black-white IQ gap -- is superb validation of the fact that group differences in IQ DO matter. From their abysmal average IQ score, we would predict that blacks would be at the bottom of every heap (income, education, crime etc.) -- and that is exactly where they are. Clearly, group differences in IQ DO matter and the IQ tests are an excellent and valid measure of them.
We are not all created equal where our genes and abilities are concerned.
A decade ago, Magnus Carlsen, who at the time was only 13 years old, created a sensation in the chess world when he defeated former world champion Anatoly Karpov at a chess tournament in Reykjavik, Iceland, and the next day played then-top-rated Garry Kasparov—who is widely regarded as the best chess player of all time—to a draw. Carlsen’s subsequent rise to chess stardom was meteoric: grandmaster status later in 2004; a share of first place in the Norwegian Chess Championship in 2006; youngest player ever to reach World No. 1 in 2010; and highest-rated player in history in 2012.
What explains this sort of spectacular success? What makes someone rise to the top in music, games, sports, business, or science? This question is the subject of one of psychology’s oldest debates. In the late 1800s, Francis Galton—founder of the scientific study of intelligence and a cousin of Charles Darwin—analyzed the genealogical records of hundreds of scholars, artists, musicians, and other professionals and found that greatness tends to run in families. For example, he counted more than 20 eminent musicians in the Bach family. (Johann Sebastian was just the most famous.) Galton concluded that experts are “born.” Nearly half a century later, the behaviorist John Watson countered that experts are “made” when he famously guaranteed that he could take any infant at random and “train him to become any type of specialist [he] might select—doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents.”
The experts-are-made view has dominated the discussion in recent decades. In a pivotal 1993 article published in Psychological Review—psychology’s most prestigious journal—the Swedish psychologist K. Anders Ericsson and his colleagues proposed that performance differences across people in domains such as music and chess largely reflect differences in the amount of time people have spent engaging in “deliberate practice,” or training exercises specifically designed to improve performance. To test this idea, Ericsson and colleagues recruited violinists from an elite Berlin music academy and asked them to estimate the amount of time per week they had devoted to deliberate practice for each year of their musical careers. The major finding of the study was that the most accomplished musicians had accumulated the most hours of deliberate practice. For example, the average for elite violinists was about 10,000 hours, compared with only about 5,000 hours for the least accomplished group. In a second study, the difference for pianists was even greater—an average of more than 10,000 hours for experts compared with only about 2,000 hours for amateurs. Based on these findings, Ericsson and colleagues argued that prolonged effort, not innate talent, explained differences between experts and novices.
These findings filtered their way into pop culture. They were the inspiration for what Malcolm Gladwell termed the “10,000 Hour Rule” in his book Outliers, which in turn was the inspiration for the song “Ten Thousand Hours” by the hip-hop duo Macklemore and Ryan Lewis, the opening track on their Grammy-award winning album The Heist. However, recent research has demonstrated that deliberate practice, while undeniably important, is only one piece of the expertise puzzle—and not necessarily the biggest piece. In the first study to convincingly make this point, the cognitive psychologists Fernand Gobet and Guillermo Campitelli found that chess players differed greatly in the amount of deliberate practice they needed to reach a given skill level in chess. For example, the number of hours of deliberate practice to first reach “master” status (a very high level of skill) ranged from 728 hours to 16,120 hours. This means that one player needed 22 times more deliberate practice than another player to become a master.
A recent meta-analysis by Case Western Reserve University psychologist Brooke Macnamara and her colleagues (including the first author of this article for Slate) came to the same conclusion. We searched through more than 9,000 potentially relevant publications and ultimately identified 88 studies that collected measures of activities interpretable as deliberate practice and reported their relationships to corresponding measures of skill. (Analyzing a set of studies can reveal an average correlation between two variables that is statistically more precise than the result of any individual study.) With very few exceptions, deliberate practice correlated positively with skill. In other words, people who reported practicing a lot tended to perform better than those who reported practicing less. But the correlations were far from perfect: Deliberate practice left more of the variation in skill unexplained than it explained. For example, deliberate practice explained 26 percent of the variation for games such as chess, 21 percent for music, and 18 percent for sports. So, deliberate practice did not explain all, nearly all, or even most of the performance variation in these fields. In concrete terms, what this evidence means is that racking up a lot of deliberate practice is no guarantee that you’ll become an expert. Other factors matter.
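"Percent of variation explained" is the squared correlation, so the meta-analytic figures translate back into correlations as follows (my arithmetic, not additional results from the study):

r = \sqrt{R^{2}}: \qquad \sqrt{0.26} \approx 0.51 \;(\text{games}), \quad \sqrt{0.21} \approx 0.46 \;(\text{music}), \quad \sqrt{0.18} \approx 0.42 \;(\text{sports})

Correlations of .4 to .5 are large by the standards of psychology, which is why the authors call deliberate practice "undeniably important" while denying that it accounts for most of the variation.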
What are these other factors? There are undoubtedly many. One may be the age at which a person starts an activity. In their study, Gobet and Campitelli found that chess players who started playing early reached higher levels of skill as adults than players who started later, even after taking into account the fact that the early starters had accumulated more deliberate practice than the later starters. There may be a critical window during childhood for acquiring certain complex skills, just as there seems to be for language.
There is now compelling evidence that genes matter for success, too. In a study led by the King’s College London psychologist Robert Plomin, more than 15,000 twins in the United Kingdom were identified through birth records and recruited to perform a battery of tests and questionnaires, including a test of drawing ability in which the children were asked to sketch a person. In a recently published analysis of the data, researchers found that there was a stronger correspondence in drawing ability for the identical twins than for the fraternal twins. In other words, if one identical twin was good at drawing, it was quite likely that his or her identical sibling was, too. Because identical twins share 100 percent of their genes, whereas fraternal twins share only 50 percent on average, this finding indicates that differences across people in basic artistic ability are in part due to genes. In a separate study based on this U.K. sample, well over half of the variation between expert and less skilled readers was found to be due to genes.
In another study, a team of researchers at the Karolinska Institute in Sweden led by psychologist Miriam Mosing had more than 10,000 twins estimate the amount of time they had devoted to music practice and complete tests of basic music abilities, such as determining whether two melodies carry the same rhythm. The surprising discovery of this study was that although the music abilities were influenced by genes—to the tune of about 38 percent, on average—there was no evidence they were influenced by practice. For a pair of identical twins, the twin who practiced music more did not do better on the tests than the twin who practiced less. This finding does not imply that there is no point in practicing if you want to become a musician. The sort of abilities captured by the tests used in this study aren’t the only things necessary for playing music at a high level; things such as being able to read music, finger a keyboard, and commit music to memory also matter, and they require practice. But it does imply that there are limits on the transformative power of practice. As Mosing and her colleagues concluded, practice does not make perfect.
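The logic behind such twin estimates is the classical Falconer comparison of identical (MZ) and fraternal (DZ) twin correlations; the example correlations below are illustrative values consistent with the reported 38 percent, not the study's actual numbers:

h^{2} \approx 2(r_{MZ} - r_{DZ}), \qquad c^{2} \approx 2 r_{DZ} - r_{MZ}, \qquad e^{2} \approx 1 - r_{MZ}

For instance, r_{MZ} = 0.59 and r_{DZ} = 0.40 would give h^{2} \approx 0.38, c^{2} \approx 0.21 and e^{2} \approx 0.41.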
Along the same lines, biologist Michael Lombardo and psychologist Robert Deaner examined the biographies of male and female Olympic sprinters such as Jesse Owens, Marion Jones, and Usain Bolt, and found that, in all cases, they were exceptional compared with their competitors from the very start of their sprinting careers—before they had accumulated much more practice than their peers.
What all of this evidence indicates is that we are not created equal where our abilities are concerned. This conclusion might make you uncomfortable, and understandably so. Throughout history, so much wrong has been done in the name of false beliefs about genetic inequality between different groups of people—males vs. females, blacks vs. whites, and so on. War, slavery, and genocide are the most horrifying examples of the dangers of such beliefs, and there are countless others. In the United States, women were denied the right to vote until 1920 because too many people believed that women were constitutionally incapable of good judgment; in some countries, such as Saudi Arabia, they still are believed to be. Ever since John Locke laid the groundwork for the Enlightenment by proposing that we are born as tabula rasa—blank slates—the idea that we are created equal has been the central tenet of the “modern” worldview. Enshrined as it is in the Declaration of Independence as a “self-evident truth,” this idea has special significance for Americans. Indeed, it is the cornerstone of the American dream—the belief that anyone can become anything they want with enough determination.
It is therefore crucial to differentiate between the influence of genes on differences in abilities across individuals and the influence of genes on differences across groups. The former has been established beyond any reasonable doubt by decades of research in a number of fields, including psychology, biology, and behavioral genetics. There is now an overwhelming scientific consensus that genes contribute to individual differences in abilities. The latter has never been established, and any claim to the contrary is simply false.
Another reason the idea of genetic inequality might make you uncomfortable is because it raises the specter of an anti-meritocratic society in which benefits such as good educations and high-paying jobs go to people who happen to be born with “good” genes. As the technology of genotyping progresses, it is not far-fetched to think that we will all one day have information about our genetic makeup, and that others—physicians, law enforcement, even employers or insurance companies—may have access to this information and use it to make decisions that profoundly affect our lives. However, this concern conflates scientific evidence with how that evidence might be used—which is to say that information about genetic diversity can just as easily be used for good as for ill.
Take the example of intelligence, as measured by IQ. We know from many decades of research in behavioral genetics that about half of the variation across people in IQ is due to genes. Among many other outcomes, IQ predicts success in school, and so once we have identified specific genes that account for individual differences in IQ, this information could be used to identify, at birth, children with the greatest genetic potential for academic success and channel them into the best schools. This would probably create a society even more unequal than the one we have. But this information could just as easily be used to identify children with the least genetic potential for academic success and channel them into the best schools. This would probably create a more equal society than the one we have, and it would do so by identifying those who are likely to face learning challenges and provide them with the support they might need. Science and policy are two different things, and when we dismiss the former because we assume it will influence the latter in a particular and pernicious way, we limit the good that can be done.
Wouldn’t it be better to just act as if we are equal, evidence to the contrary notwithstanding? That way, no people will be discouraged from chasing their dreams—competing in the Olympics or performing at Carnegie Hall or winning a Nobel Prize. The answer is no, for two reasons. The first is that failure is costly, both to society and to individuals. Pretending that all people are equal in their abilities will not change the fact that a person with an average IQ is unlikely to become a theoretical physicist, or the fact that a person with a low level of music ability is unlikely to become a concert pianist. It makes more sense to pay attention to people’s abilities and their likelihood of achieving certain goals, so people can make good decisions about the goals they want to spend their time, money, and energy pursuing. Moreover, genes influence not only our abilities, but the environments we create for ourselves and the activities we prefer—a phenomenon known as gene-environment correlation. For example, yet another recent twin study (and the Karolinska Institute study) found that there was a genetic influence on practicing music. Pushing someone into a career for which he or she is genetically unsuited will likely not work.
SOURCE