Well, I am taking the Coursera StatsLearning course from Stanford and I didn't understand the use of orthogonal polynomials in a linear regression model.
After much looking around on the web I have finally understood how it all fits together.
In linear regression you try to find the coefficients $a_j$ that minimize the sum of squared errors

$$\sum_{i=1}^{N}\left(y_i - \sum_{j=0}^{d} a_j x_i^j\right)^2,$$

where $i$ spans all the $N$ samples we have and $j$ spans the powers of the degree-$d$ polynomial we are using to fit the data.
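As a sketch of that minimization (in Python/NumPy rather than R, and with made-up sample data), the raw polynomial fit is just ordinary least squares on a Vandermonde design matrix:

```python
import numpy as np

# Hypothetical data: N = 50 noisy samples of a quadratic (values chosen for illustration).
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 1 + 2 * x - 3 * x**2 + rng.normal(scale=0.1, size=x.size)

d = 2  # polynomial degree
# Design matrix with columns x^0, x^1, ..., x^d (a Vandermonde matrix).
X = np.vander(x, d + 1, increasing=True)

# Coefficients a_j minimizing the sum of squared errors.
a, *_ = np.linalg.lstsq(X, y, rcond=None)
sse = np.sum((y - X @ a) ** 2)
```

The recovered coefficients should land close to the true values $(1, 2, -3)$, with the remaining SSE roughly $N\sigma^2$.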
When we use orthogonal polynomials we instead minimize the expression

$$\sum_{i=1}^{N}\left(y_i - \sum_{j=0}^{d} c_j P_j(x_i)\right)^2,$$

where the polynomials $P_1, \dots, P_d$ are orthogonal to each other. By orthogonal we mean that

$$\sum_{i=1}^{N} P_j(x_i)\, P_k(x_i) = 0 \quad \text{for } j \neq k,$$

where $N$ is the number of samples.
So the coefficients of each polynomial $P_j$ are chosen precisely to make these pairwise sums equal to zero, and these are the polynomials that R provides when you use the poly function inside an lm expression.
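A useful consequence, which you can check numerically, is that the orthogonal basis spans the same space as the raw powers (so the fitted values are identical), while the coefficients decouple: adding a higher-degree term does not change the lower-degree coefficients. A sketch with made-up data (Python/NumPy standing in for R's `poly` inside `lm`):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=x.size)

def ortho_basis(x, d):
    # Orthogonal polynomial columns via QR of a centered Vandermonde
    # matrix (a sketch of what R's poly() does, constant column included).
    Q, _ = np.linalg.qr(np.vander(x - x.mean(), d + 1, increasing=True))
    return Q

# Fit degree 2 and degree 3 in the orthogonal basis.
c2, *_ = np.linalg.lstsq(ortho_basis(x, 2), y, rcond=None)
c3, *_ = np.linalg.lstsq(ortho_basis(x, 3), y, rcond=None)

# Same-degree fit in the raw power basis, for comparison.
raw3, *_ = np.linalg.lstsq(np.vander(x, 4, increasing=True), y, rcond=None)
fit_ortho = ortho_basis(x, 3) @ c3
fit_raw = np.vander(x, 4, increasing=True) @ raw3
```

Here `c2` agrees with the first three entries of `c3` (the decoupling), and `fit_ortho` matches `fit_raw` (same fitted values, better-conditioned design matrix).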
Here are the links I used to clarify the topic: