Talk:Weighted least squares


Not useful for those unfamiliar with the language of matrix math

This article is fine as a reference for experts, but offers very little utility for students (for example, at the advanced high school or freshman undergraduate level) who may find this information useful but do not yet speak the language of matrix algebra. It is perfectly straightforward to explain weighted fitting in the language of summation signs; I propose that this article be expanded accordingly.

152.3.34.82 (talk) 14:54, 18 October 2019 (UTC)
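
A sketch of what such an expansion could contain, in case it helps whoever takes it up; the symbols w_i, X_{ij} and \beta_j below are generic placeholders, not necessarily the article's notation. The weighted objective in pure summation form is

  S(\beta) = \sum_{i=1}^{n} w_i \, r_i^2 , \qquad r_i = y_i - \sum_{j=1}^{m} X_{ij}\,\beta_j ,

and setting each partial derivative \partial S / \partial \beta_k to zero gives the normal equations

  \sum_{i=1}^{n} w_i \, X_{ik} \left( y_i - \sum_{j=1}^{m} X_{ij}\,\hat\beta_j \right) = 0 , \qquad k = 1,\dots,m ,

which is the same system usually written in matrix form as X^\mathsf{T} W X \hat\beta = X^\mathsf{T} W y.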


Mistakes

There is something wrong with the last line of the formula: it should be W in the middle instead of W^2.

KL

Hi. I do not see the mistake. Please edit as needed.

ManelMR 15:33, 26 September 2007 (UTC)
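
For context on the W versus W^2 question: which form is right depends on how the weight matrix is defined, so this is only a sketch, not a claim about what the article said at the time. With the common convention W = \operatorname{diag}(1/\sigma_i^2), the estimator is usually written

  \hat\beta = (X^\mathsf{T} W X)^{-1} X^\mathsf{T} W y ,

with W appearing once in the middle; if instead W is defined as \operatorname{diag}(1/\sigma_i), the same expression legitimately carries W^2 in both places.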


In the Linear Algebra Derivation, the pre-multiplication of the parameter vector, beta, by the transpose of the design matrix, X, is dimensionally incorrect. The system of equations shown is given by the multiplication of beta by X. For the rest of the derivation, the places of X and transpose(X) should be swapped.

LivioM (talk) 02:16, 8 February 2008 (UTC)
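
A quick dimensional check of the point above, under the usual shape conventions (n observations, m parameters; an assumption, since the article's conventions may differ):

  X \in \mathbb{R}^{n \times m}, \quad \beta \in \mathbb{R}^{m \times 1} \;\Rightarrow\; X\beta \in \mathbb{R}^{n \times 1} ,

whereas X^\mathsf{T}\beta is only defined when n = m, so it cannot be the model's system of equations. The transpose of the design matrix enters later, when forming the normal equations X^\mathsf{T} W X \hat\beta = X^\mathsf{T} W y.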


In the first equation, I believe it should be r_i and not r^2_i. 192.114.105.254 (talk) 11:08, 26 September 2018 (UTC)
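
Without judging which equation was meant: both forms legitimately occur. The objective contains squared residuals, but once it is differentiated the residual appears to the first power (generic notation):

  S = \sum_{i} w_i \, r_i^2 , \qquad \frac{\partial S}{\partial \beta_j} = -2 \sum_{i} w_i \, r_i \, X_{ij} = 0 ,

so r_i^2 in the definition of the objective and r_i in the stationarity conditions are both correct.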

Sources

There is no scientific source cited here. Though I hope that everything written here is correct, I would still prefer to cite a source of better-acknowledged reliability. I think the following book could be suitable for that:

  • Norman R. Draper and Harry Smith, Applied Regression Analysis, 3rd edition, Wiley, 1998, Chapter 9.2 "Generalized Least Squares and Weighted Least Squares", ISBN 0-471-17082-8.

I'll have a look at it in the library later and check the contents of the article against those of the book. If they match, I can cite that book here.


There are many references; the one you cite is well known. My only concern is that such texts sometimes use row vectors in the notation. This could lead to misinterpretation, because the placement of the transpose operators changes depending on the row or column convention.

ManelMR 15:33, 26 September 2007 (UTC)
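
To illustrate the convention issue with a sketch (Z below is my own symbol, not the article's): with the column-vector convention the model is y = X\beta + \varepsilon and

  \hat\beta = (X^\mathsf{T} W X)^{-1} X^\mathsf{T} W y ;

if the data are instead stored with one observation per column, say Z = X^\mathsf{T}, the same estimator reads

  \hat\beta = (Z W Z^\mathsf{T})^{-1} Z W y ,

so the transposes move around even though nothing substantive has changed.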


This is still an issue, especially as the notation is not always clear to me. In the Introduction, is introduced. I can only assume that it means . I do not think that it is standard enough to be used without any explanation, and frankly I am not even sure I understood it correctly. Therefore, I will not edit it. With a proper source, I could have verified this and either fixed a mistake in the notation or made it clearer.

Another thing is that I find it hard to follow the simplification of Mβ. Intermediate steps or a proper source would allow me to understand it better (and, again, to spot mistakes in case there are any). 194.230.148.197 (talk) 06:30, 23 March 2022 (UTC)
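
Assuming M^β denotes the covariance matrix of the estimated parameters (that is my reading, not something verified against a source), the intermediate steps would be roughly:

  \hat\beta = (X^\mathsf{T} W X)^{-1} X^\mathsf{T} W y , \qquad \operatorname{Var}(y) = M ,

  M^{\beta} = (X^\mathsf{T} W X)^{-1} X^\mathsf{T} W \, M \, W X \, (X^\mathsf{T} W X)^{-1} ,

and if the weights are chosen as W = M^{-1}, then X^\mathsf{T} W M W X = X^\mathsf{T} W X, so the sandwich collapses to

  M^{\beta} = (X^\mathsf{T} W X)^{-1} .

Intermediate steps along these lines, properly sourced, would probably address the readability concern.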

Dummy header to create an index

A major proposal

Please see talk:least squares#A major proposal for details, which include a proposal to merge this article into a revised version of linear least squares. Petergans (talk) 17:45, 22 January 2008 (UTC)

Missing expressions?

 The expressions given above are based on the implicit assumption that the errors are uncorrelated with each other and with the independent variables and have equal variance.

There are no expressions above this line. Either expressions are missing, or this text is somehow an artefact from a prior version? Lerad (talk) 11:56, 11 April 2019 (UTC)
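
For what it's worth, the quoted sentence describes the ordinary least-squares assumptions, which in symbols (generic notation, not necessarily the article's) are

  \operatorname{E}[\varepsilon_i \varepsilon_j] = 0 \ \ (i \neq j) , \qquad \operatorname{Var}(\varepsilon_i) = \sigma^2 \ \text{for all } i ,

and it is exactly the equal-variance part that weighted least squares relaxes, by taking \operatorname{Var}(\varepsilon_i) = \sigma_i^2 and w_i \propto 1/\sigma_i^2. Presumably the missing "expressions given above" were the unweighted estimates to which this assumption applied.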