Best introduction to Subversion → Day-to-day with Subversion
This explanation of the Fourier Transform was obtained from:
Well, I am doing the Coursera StatsLearning course from Stanford and I didn't understand the use of orthogonal polynomials in a linear regression model.
After much looking around on the web I have finally understood how it all fits together.
In linear regression you try to find the coefficients that minimize the sum of squared errors:

$$\sum_{i=1}^{N}\left(y_i - \sum_{j=0}^{d}\beta_j x_i^{j}\right)^2$$

where $i$ spans all the samples we have and $j$ spans the degrees of the polynomial we are using to fit the data.
When we use orthogonal polynomials we instead minimize the following expression to fit the data:

$$\sum_{i=1}^{N}\left(y_i - \sum_{j=0}^{d}\gamma_j P_j(x_i)\right)^2$$

where the polynomials $P_0, P_1, \dots, P_d$ are orthogonal to each other. By orthogonal we mean that:

$$\sum_{i=1}^{N} P_j(x_i)\,P_k(x_i) = 0 \qquad \text{for } j \neq k,$$
where N is the number of samples.
So the coefficients of the polynomials are chosen to make these sums equal to zero, and these are exactly the polynomials that R provides when you use the poly function inside an lm formula.
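As a quick check, here is a minimal R sketch on made-up data (the variables x and y are just placeholders) showing that the columns returned by poly satisfy this orthogonality condition and can be fed straight into lm:

```r
# Toy data (hypothetical): a cubic signal plus noise
set.seed(1)
x <- runif(100, 0, 10)
y <- 1 + 0.5 * x - 0.2 * x^2 + 0.01 * x^3 + rnorm(100)

# poly() builds an orthogonal polynomial basis of degree 3
P <- poly(x, degree = 3)

# Off-diagonal entries are (numerically) zero: the orthogonality condition above
round(crossprod(P), 10)

# Fit the regression on the orthogonal basis
fit <- lm(y ~ poly(x, degree = 3))
summary(fit)
```

Because the basis is orthogonal, adding or dropping the highest-degree term does not change the estimates of the lower-order coefficients, which is the practical reason for using it.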
These are the links I used to clarify the topic:
It is used in estimating the variance in linear discriminant analysis:
When the prior probabilities of the classes we want to classify are very imbalanced, it is good to use retrospective or case-control sampling.
For example, you can do a logistic regression with case-control sampling. You use around 4-6 times more controls than cases, and then adjust the intercept of your model with a correction: https://class.stanford.edu/c4x/HumanitiesScience/StatLearning/asset/classification.pdf (page 16); see also http://support.sas.com/kb/22/601.html for an explanation.
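A minimal sketch of that intercept adjustment in R, on simulated data (the 0.01 population prevalence and the predictor x1 are assumptions just for illustration; the correction is the one described in the slides):

```r
# Hypothetical case-control sample: roughly 4 controls per case
set.seed(1)
n_cases <- 200; n_ctrls <- 800
cc_data <- data.frame(
  case = c(rep(1, n_cases), rep(0, n_ctrls)),
  x1   = c(rnorm(n_cases, mean = 1), rnorm(n_ctrls, mean = 0))
)

# Ordinary logistic regression on the case-control sample
fit <- glm(case ~ x1, family = binomial, data = cc_data)

pi_true  <- 0.01                # assumed prevalence of cases in the population
pi_tilde <- mean(cc_data$case)  # prevalence of cases in this sample (0.2 here)

# Correct the intercept; the slope coefficients stay as fitted
beta0_adj <- unname(coef(fit)["(Intercept)"]) +
  log(pi_true / (1 - pi_true)) - log(pi_tilde / (1 - pi_tilde))
beta0_adj
```

The corrected intercept is the one to use when predicting probabilities for the general population; the slopes are unaffected by the sampling scheme.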