Talk:United States National Research Council rankings

Untitled

Why does the 2005 ranking only include private schools? The original NRC report does not make this distinction. Surely there must be other sources that have taken the averages of the S-values and R-values of all schools? —Preceding unsigned comment added by 99.121.56.67 (talk) 21:40, 3 November 2010 (UTC)

I don't see why the average of all scores should be kept at the front. This is too biased and would only harm the influence of the ranking: if a school doesn't have a particular field, it gets a zero in that field.

209.173.238.105 (talk) 03:26, 11 February 2008 (UTC)


As the linked website points out (http://www.stat.tamu.edu/~jnewton/nrc_rankings/nrc1.html#TOP60), this is an extremely biased way of looking at the rankings since you are taking an average over many fields of study whether or not the school even has a program in that area. For example, a school without a specific program in oceanography or religion will have zeros averaged in with their other scores.

I suggest either using the "Average of Nonzero Scores" table below or simply breaking the rankings up by field of study (a short numerical sketch of the difference between the two averages follows the tables):


Number of Rated Programs         Average of Nonzero Scores          Average of all 41 Scores

1 Stanford              40       1 MIT                   8.70       1 Stanford              7.76
2 Michigan              38       2 UC Berkeley           8.50       2 UC Berkeley           7.46
2 Wisconsin             38       3 Harvard               8.20       3 Michigan              6.71
2 Ohio State            38       4 Princeton             8.03       4 Cornell               6.56
5 Texas                 37       5 Caltech               8.00       5 Wisconsin             6.44
5 Washington            37       6 Stanford              7.95       6 UCLA                  6.32
5 Illinois              37       7 Chicago               7.73       7 Texas                 6.12
5 Minnesota             37       8 Yale                  7.60       8 Columbia              6.07
9 UC Berkeley           36       9 Cornell               7.47       9 Washington            6.05
9 Cornell               36      10 UC San Diego          7.34       9 Illinois              6.05
9 UCLA                  36      11 Columbia              7.32       9 Penn                  6.05
9 Penn State            36      12 Michigan              7.24      12 Harvard               6.00

13 Penn                  35      13 UCLA                  7.19      13 Minnesota             5.78
14 Columbia              34      14 Penn                  7.09      14 Princeton             5.68
14 Pittsburgh            34      15 Wisconsin             6.95      15 Chicago               5.66
16 Duke                  33      16 Texas                 6.78      16 Yale                  5.56
16 Johns Hopkins         33      17 Washington            6.70      17 Ohio State            5.37
18 North Carolina        32      17 Illinois              6.70      18 Duke                  5.32
18 Virginia              32      19 Northwestern          6.63      18 Johns Hopkins         5.32
18 Rutgers               32      20 Duke                  6.61      20 Penn State            5.22
18 UC Santa Barbara      32      20 Johns Hopkins         6.61      21 UC San Diego          5.20
18 Iowa                  32      22 Carnegie Mellon       6.53      22 North Carolina        4.93
18 Florida               32      23 Minnesota             6.41      23 MIT                   4.88
18 SUNY Buffalo          32      24 North Carolina        6.31      24 Northwestern          4.85
25 Massachusetts         31      25 Brown                 6.31      25 Virginia              4.80
26 Harvard               30      26 UC Irvine             6.22      26 Rutgers               4.63
26 Chicago               30      27 NYU                   6.20      27 Brown                 4.46
26 Yale                  30      28 Virginia              6.16      28 UC Santa Barbara      4.41
26 Northwestern          30      29 Purdue                6.12      29 Pittsburgh            4.37
26 SUNY Stony Brook      30      30 Arizona               6.00      30 Arizona               4.24
26 Colorado              30      31 Penn State            5.94      31 SUNY Stony Brook      4.20
26 Michigan State        30      32 Rutgers               5.94      31 Iowa                  4.20
33 Princeton             29      33 Brandeis              5.93      33 Florida               4.15
33 UC San Diego          29      34 Washington St Louis   5.93      34 Colorado              4.07
33 Brown                 29      35 Rochester             5.89      34 Massachusetts         4.07
33 Arizona               29      36 UC Davis              5.88      36 Indiana               3.93
33 Kansas                29      37 Emory                 5.88      37 Michigan State        3.90
38 Indiana               28      38 Ohio State            5.79      37 Washington St Louis   3.90
38 Maryland              28      39 Indiana               5.75      39 Rochester             3.88
38 Boston University     28      40 SUNY Stony Brook      5.73      40 Maryland              3.80
38 Cincinnati            28      41 Rice                  5.67      41 SUNY Buffalo          3.78
42 Washington St Louis   27      42 UC Santa Barbara      5.66      41 NYU                   3.78
42 Rochester             27      43 North Carolina State  5.61      43 UC Davis              3.73
42 Kentucky              27      44 Maryland              5.57      44 Caltech               3.71
42 LSU                   27      45 Colorado              5.57      45 Purdue                3.59
46 UC Davis              26      46 Southern California   5.50      46 Southern California   3.49
46 Southern California   26      46 CUNY                  5.50      46 UC Irvine             3.49
46 CUNY                  26      48 Texas A&M             5.48      46 CUNY                  3.49
46 Vanderbilt            26      49 Dartmouth             5.45      49 Vanderbilt            3.41
46 Connecticut           26      50 Georgia Tech          5.44      50 Texas A&M             3.34
51 NYU                   25      51 Massachusetts         5.39      51 North Carolina State  3.15
51 Texas A&M             25      52 Vanderbilt            5.38      52 Kansas                3.12
51 Arizona State         25      53 Iowa                  5.38      53 Boston University     3.05
54 Purdue                24      54 Michigan State        5.33      54 Arizona State         3.00
54 Syracuse              24      55 Florida               5.31      55 Kentucky              2.95
54 Oklahoma              24      56 Utah                  5.30      56 Iowa State            2.90
57 MIT                   23      57 Case Western          5.29      56 Rice                  2.90
57 UC Irvine             23      57 RPI                   5.29      58 LSU                   2.85
57 North Carolina State  23      59 Pittsburgh            5.26      58 Connecticut           2.85
57 Iowa State            23      60 Delaware              5.23      60 Syracuse              2.78
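
As a minimal illustration of the arithmetic at issue (invented scores, not NRC data): averaging over all 41 fields treats every unrated field as a zero, while averaging only the nonzero scores rates a school on the programs it actually offers.

# Minimal sketch of the two averaging conventions discussed above.
# The scores are invented for illustration; they are not taken from the NRC report.
scores = {"chemistry": 4.2, "physics": 3.9, "history": 4.5}  # a school rated in only 3 fields
NUM_FIELDS = 41  # total number of fields covered by the assessment

avg_nonzero = sum(scores.values()) / len(scores)   # 4.20 -- average over the rated fields only
avg_all = sum(scores.values()) / NUM_FIELDS        # 0.31 -- the 38 unrated fields count as zero

print(f"Average of nonzero scores: {avg_nonzero:.2f}")
print(f"Average of all 41 scores:  {avg_all:.2f}")

A narrowly focused school's zero-filled average collapses even when its rated programs are strong, which is exactly the distortion described above.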

Release date changed

The article currently states that the report will be released in September 2008; however, the NRC website says (updated September 24, 2008): "The release schedule for reports for the NRC Assessment of Research Doctoral Programs has changed. The release of the Methodology Guide is now estimated for late October or early November. The release schedule for the project report and its database will be announced when we have precise dates." - http://www7.nationalacademies.org/resdoc/Whats_new.html 78.86.101.116 (talk) 19:02, 25 September 2008 (UTC)

Tables

Reinserted the tables showing the 2010 release rankings. The tables are calculated as described; they are simply the averages of the high rankings for the programs reviewed at each school. All of the data is publicly available from the National Research Council's website. Furthermore, many universities (e.g., Harvard, Columbia, and Duke) are using that exact same methodology to summarize their standing in the review (refer to the universities' own press releases issued after the release of the data for the institutions' own descriptions of the rankings). —Preceding unsigned comment added by 74.73.111.98 (talk) 20:12, 20 November 2010 (UTC)

Given that the NRC deliberately avoided a single numerical ranking such as the one you describe (despite the ease of creating one), what you're doing amounts to a violation of WP:SYN. I am removing the tables again. —David Eppstein (talk) 21:31, 20 November 2010 (UTC)
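
For readers unfamiliar with the kind of single-number summary at issue in this thread, here is a rough sketch, under the assumption that "averages of the high rankings" means averaging the better (high) end of each program's NRC ranking range for a school; the program names and ranges are invented, not taken from the NRC database.

# Hypothetical sketch: average the high (best) end of each program's ranking range for one school.
# The ranges below are invented for illustration only.
programs = {
    "Chemistry": (5, 18),      # (high/best ranking, low/worst ranking)
    "Mathematics": (12, 30),
    "History": (8, 22),
}

high_ranks = [high for high, _low in programs.values()]
school_summary = sum(high_ranks) / len(high_ranks)
print(f"Average of high rankings: {school_summary:.1f}")  # 8.3

Because the 2010 NRC data gives each program a range of rankings rather than a single position, any one-number summary of this kind requires choices (which end of the range, which programs to include) that the NRC itself did not make, which is the synthesis concern raised above.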

Re: References required for each assertion

It may indeed be true that the NRC rankings have been "critiqued by a lot of people," but we must cite every assertion. The passive-voice, weasel-like phrasing ("The rankings have been criticized for X and Y") is not really great; we ought to state who is doing the critique, preferably directly in the text.

It is most helpful to our readers to state directly who is making the criticism: "The 2010 rankings were critiqued by Jonathan R. Cole and the Computing Research Association." Such specificity is particularly important when objections come from particular individuals and groups whose statements are matters of perspective. Of course, editors should feel free to add other referenced material dealing with criticism from other quarters. But it must be specifically attributed and must not rely on vague wording ("The rankings have been criticized for X and Y" when criticisms X and Y in fact came from a particular individual, organization, or community).

(On a separate issue, the characterization of Cole's objections in the prior version of the article seemed inaccurate, or at least imprecise: he objects to "faulty assumptions, poor analysis, political pressure from the academy, and unexamined preconceptions." That has nothing to do with "groupthink," which has a specific connotation, and Cole never uses the term in his essay at all.) Neutrality (talk) 09:10, 18 February 2012 (UTC)

It's unfortunate that the praise for the rating system uses the term "gold standard," since the US left the gold standard in 1971, finding that it was an anachronistic impediment to a modern economy. An anachronistic impediment is not what the writers meant when they used the term. Phytism (talk) 19:19, 20 January 2020 (UTC)