Race Realism and Other Classics

A Troublesome Inheritance: Genes, Race and Human History (2014) by Nicholas Wade

Clocking the Mind: Mental Chronometry and Individual Differences (2006) by Arthur Jensen

Making Sense of Heritability (2005) by Neven Sesardic

Hereditary Genius (1869) by Francis Galton

Intelligence: A Unifying Construct for the Social Sciences (2011) by Richard Lynn & Tatu Vanhanen

Measuring Intelligence: Facts and Fallacies (2004) by David Bartholomew

Race (1974) by John Baker

Race Differences in Intelligence: An Evolutionary Analysis (2006) by Richard Lynn

Race, Evolution and Behavior (1995) by J. Philippe Rushton

The 10,000 Year Explosion: How Civilization Accelerated Human Evolution (2009) by Greg Cochran and Henry Harpending

The Bell Curve (1994) by Richard Herrnstein and Charles Murray

The Ethnic Phenomenon (1981) by Pierre van den Berghe

The g Factor: General Intelligence and Its Implications (1996) by Chris Brand

The g Factor: The Science of Mental Ability (Human Evolution, Behavior, and Intelligence) (1998) by Arthur Jensen

Understanding Human History (2007) by Michael Hart


39 Responses to Race Realism and Other Classics

  1. 猛虎 says:

    Jesus. Do you really have ‘The g Factor’ and ‘Race, Evolution and Behavior’?
    I knew you were very special.

    (I have sent you an email with Sesardic’s 2005 book and Jensen’s 2006 book. I have not finished reading the latter, but from what I have read, it is not bad at all.)

  2. JL says:

    Chuck, do you have access to this paper? If yes, could you upload it to your site?

  3. Pingback: Race Realism Books | Ethnic Muse

  4. Kiwiguy says:

    Interview with Jensen in Journal of Educational and Behavioral Statistics
    Fall 2006, Vol. 31, No. 3, pp. 327–352

    http://www.edb.utexas.edu/robinson/danr/JEBS%2031(3)%20-06_Jensen%20profile.pdf

  5. Chuck says:

    JL, can I contact you by e-mail when I’m feeling a little better? I would like to discuss some of this with you.

    The NNAT results are interesting because the NNAT is just a figural reasoning test — a knock-off of Raven’s matrices.

    Here is a discussion of the standardization samples:

    “Naglieri and Ronning (2000) examined differences among three matched samples of African American (n = 2,306) and Caucasian (n = 2,306), Hispanic (n = 1,176) and Caucasian (n = 1,176), and Asian American (n = 466) and Caucasian (n = 466) children from 22,620 within the Naglieri Nonverbal Abilities Test standardization sample. Participants were matched on type of school setting (private or public), socioeconomic status, ethnicity, and geographic region [and urbanicity]. Minimal differences were found between Caucasian and Asian (difference ratio = .02) and Caucasian and Hispanic groups (difference ratio = .17). Similarly, a significant but small difference was found between scores for Caucasian and African-American samples (difference ratio = .25). Scores on the Naglieri Nonverbal Abilities Test were correlated with reading (.52) and mathematics (.63) across the sample.”

    Matching on these additional factors (e.g., school setting, region, and urbanicity) had an effect similar to what we have seen before:

    See table 3 on the last page:

    http://faculty.education.uiowa.edu/dlohman/pdf/Review_of_Naglieri_and_Ford.pdf

    The original sample was so unrepresentative that matching increased the difference from 3.2 to 4.1 points. The 4.1 would be the starting point. We would have to unadjust for urbanicity, SES, private/public, and geography.

    Also, as you noted, the SDs were inflated. See Lohman’s discussion. I think that our starting point would be about 4/15 of an SD, and then we would add 0.33 SD for SES, and maybe 0.1 SD for everything else (a rough sketch of this arithmetic follows below).
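
    (To make the arithmetic above explicit, here is a minimal sketch in Python. The 4.1-point matched difference is from Lohman’s Table 3, while the 0.33 SD for SES and the 0.1 SD for the remaining matching variables are the rough guesses suggested above, not established values.)

```python
# Back-of-the-envelope reconstruction of the adjustment sketched above.
# All increments are rough, illustrative estimates, not established values.
matched_gap_points = 4.1   # B/W NNAT gap after matching (Lohman, Table 3)
sd = 15.0                  # conventional IQ standard deviation

matched_gap_sd = matched_gap_points / sd   # ~0.27 SD
ses_unadjustment = 0.33                    # rough allowance for "unadjusting" SES
other_unadjustment = 0.10                  # urbanicity, school type, region (rough guess)

estimated_gap_sd = matched_gap_sd + ses_unadjustment + other_unadjustment
print(f"Estimated unadjusted gap: {estimated_gap_sd:.2f} SD")   # ~0.70 SD
```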

    Interestingly, Lohman notes: “After controlling for demographic variables, Black students actually performed better on the Reading and Math achievement tests than on the NNAT.” So this suggests that, at least in this sample, we might have a non-problem, since no one is claiming that the achievement gap has been vanquished.

    I should note that here is a school report mentioned by Lohman in a critique of the NNAT:

    http://datacenter.spps.org/uploads/GT_Identification_Report_2005-2006.pdf

    If I’m not mistaken those differences in rates come out to between 0.8 and 1.1 SD.
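
    (As an aside, here is a minimal sketch of how differences in identification rates can be converted into an implied mean difference, assuming normal distributions with equal spread and a common cutoff. The rates used are hypothetical placeholders, not the actual figures from that report.)

```python
# How a gap in gifted-identification rates maps onto a mean difference in SD units,
# assuming both groups are normally distributed with equal SDs and share one cutoff.
# The rates below are hypothetical placeholders, not figures from the linked report.
from scipy.stats import norm

rate_group_a = 0.12   # hypothetical share of group A identified as gifted
rate_group_b = 0.02   # hypothetical share of group B identified as gifted

# Distance (in SDs) from each group's mean up to the common cutoff:
z_a = norm.isf(rate_group_a)
z_b = norm.isf(rate_group_b)

print(f"Implied mean difference: {z_b - z_a:.2f} SD")   # ~0.9 SD for these rates
```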

    Lohman has a number of other critical papers on Naglieri’s NNAT.

    See also: http://faculty.education.uiowa.edu/dlohman/pdf/Lohman-Lakin%20Identification%20System.pdf

    Comment:

    Naglieri and Ford’s (2003) claim that the NNAT identifies equal proportions of high-scoring White, Black, and Hispanic students was supported only after the data had been re-weighted to make this happen. Because the data were contrived, other investigators have not been able to replicate these findings, either with Black students (Shaunessy, Karnes, & Cobb, 2004; Stephens, Kiger, Karnes, & Whorton, 1999) or with Hispanic students (Lewis, 2001). Indeed, all of these investigations found that the NNAT identified fewer high-scoring minority students than other nonverbal ability tests.
    http://faculty.education.uiowa.edu/dlohman/pdf/Identifying_Academically_Talented.pdf

    Also:

    http://faculty.education.uiowa.edu/dlohman/pdf/LohmanWallace%202006%20talk.pdf

    “The final reason schools have started using nonverbal tests to screen kids for gifted programs is harder to talk about. I keep wishing that it would just go away, but it does not. Some of you may know that, several years ago, Jack Naglieri presented a paper at NAGC (and subsequently in many other places) that purported to show that his test – the NNAT – identified equal proportions of high-scoring White, Black, and Hispanic students in a large, national sample of school children. He and Donna Ford subsequently published an article on this in the Gifted Child Quarterly. As anyone who works in education knows, differences between under-represented minority and majority students on both achievement and ability tests are enormous – typically in the range of a half to a full standard deviation. Further, as Camilla Benbow pointed out many years ago, even small group differences at the mean translate into substantial differences at the tails of the distribution. Therefore, the claim that any achievement or school ability test gives equal representation of high-scoring Black, Hispanic, and White students is, quite literally, unbelievable

    I did not want to be the one who challenged that claim, though. I knew that some would think it simply sour grapes—I work on a test that does not show these effects. In fact, some have even said this to me. I was also warned that challenging these claims would brand me as an opponent of equal opportunities for minority students. That too has happened. But I also realized that very few people who work in the field of gifted education seemed to have the technical expertise in large-scale testing to understand what was going on here. And so I challenged that claim, but was restrained in my comments. I pointed out that

    • the numbers did not add up;
    • the results were inconsistent not only with every other large data set but also with previously published analyses of the same data set;
    • and therefore that the conclusions were not to be trusted.

    But I did not explicitly say what I knew – which was that the data had been retroactively fit to the conclusions. I thought that any but the most naïve reader would get the point that the data set had been altered in a serious way. I worded the conclusions in this way because I did not want to be confrontational, and I wanted to give the authors a way out of the mess they had created. In my naiveté, I thought they might say something that would allow them to save face and reputation while setting the record straight. I also communicated my concerns privately to the editor, and warned that, if past behavior predicted future behavior, Naglieri would not address the issues that I raised, but would instead attack me.

    And, indeed, this is what happened. My motives for writing the article were questioned. The CogAT was attacked – most spectacularly with a set of readability numbers that are nothing but random noise. And the authors assumed the tone of offended advocates for the downtrodden, while caricaturing me and my work as defending the evil status quo. The only legitimate point that they raised – and illustrated in several pages of text and figures attacking the CogAT and ITBS – was their contention that ability and achievement are independent constructs. The measurement of one, they said, should not be contaminated by the other. I have a very different view – which I have articulated in a paper that will appear in the Fall 2006 edition of the Roeper Review and that is on my website.

    Needless to say, I was astonished both by what they were allowed to say and, more importantly, what they did not say. There was no admission of re-weighting the data or even of misleading unsuspecting users. Nor was there any explanation for the inconsistencies between their results and previous analyses of the same data set. Burt (because he championed a politically unpopular position) was posthumously pilloried for a decimal point. This is about moving entire distributions by amounts that would be classified as very large effects in the experimental literature. The difference between Black & White students on nonverbal tests is about as large as the difference between these groups on measures of academic achievement. For Hispanic students, the differences are reduced but still substantial. If Naglieri had honestly reported ethnic differences on his test, this is what he would be telling potential users……

    Here is what was done. First, test scores were re-weighted to make the score distributions equal. This guaranteed that there would be equal proportions of students from different ethnic groups. Here is a visual demonstration of the process. The blue distribution is for the lower-scoring minority group, the red for the higher scoring non-minority students……

    He then tallied the frequency of demographic variables for these new, re-weighted data sets. For example, the social class and other demographic characteristics of the students whose scores were up-weighted would now be much more important, and conversely for those students whose scores were down-weighted. A large table showing these …”

    I didn’t really follow this. Based on what Lohman said, is there a way to readjust to get a better estimated score? Or is the reported NNAT data unsalvageable garbage?
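
    (A minimal illustration of Benbow’s point quoted above, that even a modest difference at the mean produces a large disparity above a high cutoff; the numbers are purely illustrative.)

```python
# Benbow's point, as quoted above: a modest mean difference implies a large
# disparity in the share of each group clearing a high cutoff (normality assumed).
from scipy.stats import norm

cutoff_sd = 2.0     # a high cutoff, e.g. roughly the top 2% of the higher-scoring group
mean_gap_sd = 0.5   # a "small" mean difference of half a standard deviation

share_high = norm.sf(cutoff_sd)                 # ~2.3% of the higher-scoring group
share_low = norm.sf(cutoff_sd + mean_gap_sd)    # ~0.6% of the lower-scoring group

print(f"Representation ratio at the cutoff: {share_high / share_low:.1f}x")   # ~3.7x
```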

    • JL says:

      “JL, can I contact you by e-mail when I’m feeling a little better? I would like to discuss some of this with you.”

      Sure. You can see my address, right? I check that email infrequently though.

      Regarding Lohman’s criticisms of Naglieri, it’s ironic that Lakin and Gambrell’s study of the CogAT 7 has very similar problems with its samples. Lohman is the author of the CogAT. Perhaps he has decided that if you can’t beat Naglieri and co., you must join them. I have no idea how to get reliable estimates from these data.

  6. Ambiguous says:

    If an IQ gap between two groups is not on g, what does that signify about that particular gap? Does it mean the two groups have the same intelligence?

    • Chuck says:

      Often the term “intelligence” is used to refer to “general intelligence,” or stratum III of psychometric intelligence. This is because stratum III has the most generalizability, has the highest heritability, and accounts for 95% of the predictivity of IQ tests. This isn’t a tautology. For example, there’s a general factor of personality — but it’s barely more predictive, generalizable, and heritable than the broad factors (e.g., conscientiousness), so “personality” isn’t identified with the g of p. Anyway, the point is that lower-order factors are also, technically, psychometric intelligence. And group differences composed exclusively of such factors can technically be said to be psychometric intelligence differences — they’re just less interesting.

      As for being loaded on g, this just means that the differences correlate with general intelligence. This, if it’s repeatedly found, is a sign that the difference largely represents a difference in g. As I noted elsewhere in the case of the B/W difference:

      “The probability that Spearman’s Effect does not hold for the Black-White gap is under one in a billion. However, in their critique of Jensen’s method of correlated vectors, which has been used to establish Spearman’s correlations, Dolan et al. (2001) argue that repeated findings of Spearman’s Effect are necessary but not sufficient to establish a general intelligence difference (see also Wicherts and Dolan, 2005). It has been argued by te Nijenhuis et al. (2007), though, that the method of correlated vectors may be more robust when used with meta-analyses. The authors note: “The fact that our meta-analytical value of r = −1.06 is virtually identical to the theoretically expected correlation between g and d of −1.00 holds some promise that a psychometric meta-analysis of studies using MCV is a powerful way of reducing some of the limitations of MCV…Additional meta-analyses of studies employing MCV are necessary to establish the validity of the combination of MCV and psychometric meta-analysis. Most likely, many would agree that a high positive meta-analytical correlation between measures of g and measures of another construct implies that g plays a major role, and that a meta-analytical correlation of −1.00 implies that g plays no role. However, it is not clear what value of the meta-analytical correlation to expect from MCV when g plays only a modest role.”

      Basically, you can get spurious results using MCV, the method which is typically used to show g-loadedness, so you really want meta-analytic results, since meta-analysis reduces sampling error.
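
      (For concreteness, here is a minimal sketch of MCV as described above: correlate the vector of subtest g-loadings with the vector of standardized group differences. The loadings and gaps below are invented for illustration; a strongly positive correlation is read as a Spearman effect, though, as noted, a single such correlation can be spurious, which is why the meta-analytic approach is preferred.)

```python
# Minimal sketch of Jensen's method of correlated vectors (MCV):
# correlate subtest g-loadings with subtest group differences (in SD units).
# All numbers are invented for illustration only.
import numpy as np

g_loadings = np.array([0.55, 0.62, 0.70, 0.74, 0.81])   # hypothetical subtest g-loadings
group_d = np.array([0.40, 0.55, 0.60, 0.72, 0.85])      # hypothetical subtest gaps (SD units)

r = np.corrcoef(g_loadings, group_d)[0, 1]
print(f"MCV correlation between g-loadings and gaps: r = {r:.2f}")
```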

      Now, as for null results, these don’t mean that the differences don’t involve g, just that g is not driving the results. The differences are predominantly lower-order factor differences — which isn’t to say they involve no g difference at all. An example is secular gains, which are NOT loaded on g but may still, in some instances, entail g. For example, from Wai and Putallaz (2011):

      “For example, for tests that are most g loaded such as the SAT, ACT, and EXPLORE composites, the gains should be lower than on individual subtests such as the SAT-M, ACT-M, and EXPLORE-M. This is precisely the pattern we have found within each set of measures, and this suggests that the gain is likely not due as much to genuine increases in g, but is perhaps more likely due to the specific knowledge content of the measures. Additionally, following Wicherts et al. (2004), we used multigroup confirmatory factor analysis (MGCFA) to further investigate whether the gains on the ACT and EXPLORE (the two measures with enough subtests for this analysis) were due to g or to other factors. Using time period as the grouping variable, we uncovered that both tests were not factorially invariant with respect to cohort, which aligns with the findings of Wicherts et al. (2004) among multiple tests from the general ability distribution. Therefore, it is unclear whether the gains on these tests are due to g or to other factors, although increases could indeed be due to g, the true aspect, at least in part… (a)
      (a)…Under this model the g gain on the ACT was estimated at 0.078 of the time 1 SD. This result was highly sensitive to model assumptions. Models that allowed g loadings and intercepts for math to change resulted in Flynn effect estimates ranging from zero to 0.30 of the time 1 SD. Models where the math intercept was allowed to change resulted in no gains on g. This indicates that g gain estimates are unreliable and depend heavily on assumptions about measurement invariance. However, all models tested consistently showed an ACT g variance increase of 30 to 40%. Flynn effect gains appeared more robust on the EXPLORE, with all model variations showing a g gain of at least 30% of the time 1 SD. The full scalar invariance model estimated a gain of 30% but showed poor fit. Freeing intercepts on reading and English as well as their residual covariance resulted in a model with very good fit: χ2 (7) = 3024, RMSEA=0.086, CFI=0.985, BIC=2,310,919, SRMR=0.037. Estimates for g gains were quite large under this partial invariance model (50% of the time 1 SD). Contrary to the results from the ACT, all the EXPLORE models found a decrease in g variance of about 30%. This demonstrates that both the ACT and EXPLORE are not factorially invariant with respect to cohort which aligns with the findings of Wicherts et al. (2004) investigating multiple samples from the general ability distribution. Following Wicherts et al. (2004, p. 529), “This implies that the gains in intelligence test scores are not simply manifestations of increases in the constructs that the tests purport to measure (i.e., the common factors).” In other words, gains may still be due to g in part but due to the lack of full measurement invariance, exact estimates of changes in the g distribution depend heavily on complex partial measurement invariance assumptions that are difficult to test. Overall the EXPLORE showed stronger evidence of potential g gains than did the ACT.”

      It’s rather difficult to determine because you need huge sample sizes (this study was based on 1.7 million scores), and even then the results are ambiguous. For example, the above showed virtually no g gains for the ACT, a better measure of IQ, but modest “g” gains for the EXPLORE, a better measure of achievement. So it seems that there probably was an increase in the general factor of achievement, which implies an increase in IQ — but it’s difficult to say.

      Anyway, the point: an absence of g-loading doesn’t imply no g differences — it just implies that the g differences are not commensurate with the score differences. And an absence of g differences doesn’t mean there are no “psychometric intelligence” differences, broadly understood.

  7. B.B. says:

    Reading through The g Factor, Jensen claims (pp. 92–93) that it isn’t inevitable that factor analysis will produce a general factor, and uses personality traits as an example where it doesn’t. I find this interesting because one of Rushton’s preoccupations in the late period of his career was trying to prove the existence of a general factor of personality.

  8. Kiwiguy says:

    Nice little summary of “yes-but” rejoinders in these discussions.

  9. Bostonian says:

    How many of the books are legally in the public domain? I don’t want to point people to pirated works.

  10. Bostonian says:

    Linda Gottfredson’s professional publications page http://www.udel.edu/educ/gottfredson/reprints/index.html#Publications has many HBD papers.

  11. Bostonian says:

    Typo: “facor” should be “factor” for Brand’s book.

  12. ambiguous22 says:

    Can someone tell me the g loading of the WISC and/or WAIS FSIQ?

    • Chuck says:

      The g-loadings are going to depend on the sample and the test (e.g., WAIS-R). I don’t have the standardization sample results on hand. You could email James Flynn though, as he analyzed the results for a 2005 paper on the narrowing of the gap.

  13. I’d rather not post this, but….

  14. 猛虎 says:

    I have come across this 2012 paper:
    http://scholarworks.uno.edu/cgi/viewcontent.cgi?article=2473&context=td

    “An Investigation of the Combined Assessments Used as Entrance Criteria for a Gifted English Middle School Program”

    From the abstract:

    The purpose of this study was to determine if the four assessments for entrance into an academic middle school gifted English program were accurately predicting success, as measured by students’ grades each nine-week grading period.

    On page 103, we read:

    A final analysis used regression linear estimation. These results are presented in Table 12. It was run with score (grade) as the dependent variable against the predictors of student ID and the four entrance assessments. The R2 was 86% and the adjusted R2 was 82%. This analysis revealed that there was a significant difference in individual students, as revealed by a significance of .020. Neither aptitude (CogAT) (.407) nor NNAT (.216) was significant, but the reading (ITBS) (.000) and STEP (.000) were significant. The two aptitude tests (CogAT and NNAT) were not good predictors of student success in the program, but the two achievement instruments (ITBS Reading and STEP Writing test) seemed to better predict which students will be successful.
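
    (For readers unfamiliar with this kind of analysis, here is a minimal sketch of such a regression in Python. The data and column names are synthetic and hypothetical, so the coefficients will not match the dissertation’s Table 12.)

```python
# Sketch of the kind of analysis described above: course grade regressed on the
# four entrance assessments. Data and column names are synthetic/hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "cogat": rng.normal(100, 15, n),
    "nnat": rng.normal(100, 15, n),
    "itbs_reading": rng.normal(100, 15, n),
    "step_writing": rng.normal(100, 15, n),
})
# Synthetic grades driven mainly by the two achievement measures, echoing the study's finding.
df["grade"] = (85 + 0.10 * (df["itbs_reading"] - 100) + 0.08 * (df["step_writing"] - 100)
               + 0.02 * (df["cogat"] - 100) + rng.normal(0, 3, n))

X = sm.add_constant(df[["cogat", "nnat", "itbs_reading", "step_writing"]])
fit = sm.OLS(df["grade"], X).fit()
print(fit.summary())   # compare predictor p-values and R^2 with the study's Table 12
```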

    P.S. the new theme of the blog is horrible. Some comments are impossible to read. In contrast, the previous one was excellent.

  15. x says:

    meng hu, what is the easiest way to go about scanning a book? i’m thinking of scanning some of my HBD books, though i would need a place for them to be uploaded.

    have any of these not been scanned/put in our collection:

  16. 猛虎 says:

    If you want to scan a book, when the job is finished (to be honest, it was a lot of pain to scan Baker’s book, Race), I think you should convert your files into RAR archives. Then you can upload them to rapidshare or depositfile. You must have an account, though. I have a free account, but without a premium account the file will be removed if there is no download activity within a month (for depositfile, it is 3 months if my memory is correct), although I can download it myself to keep it alive. Even if the file is removed, I can refresh the link whenever I want; you just have to re-upload your file.

    P.S. Good collection. Personally, I don’t have Educability & Group Differences or Bias in Mental Testing, but I am more interested in Salter’s book, On Genetic Interests.

  17. x says:

    ok, i’ll see about salter’s book then.

  18. HBDNeophyte says:

    Is there any place you’d recommend to learn up on HBD and things before reading any of these? I started reading Rushton’s REB, and while I think I’m taking some of it in, I’m sure I’d get more out of it if I had a background in these things. For example, I’m not always sure of some of the terminology (REB has a glossary, so that might not be as bad) and the way the numbers are used (for example, I’m guessing if something is “0.70”, that’s the same as 70%?, but there are probably other stats and such I’m unsure of, too). Interesting subject, but it seems like you might need a background in it to really grasp everything (or maybe I’m just an idiot).
