slope of data that already contains STDEVs

For these further thoughts, I will disregard the x-axis uncertainty, which
derives from the measurement setup. How can I convert deviations in the
x-values into independent y- (or z-, ...) values?

I am very interested in the standard error of the resulting function.
One suggestion I got was simply to fit two lines through the data: one through
all the maximum values (each y-value plus its error) and one through all the
minimum values (each y-value minus its error), then take the average of those
two lines and use a simple STDEV as the standard error. I somewhat disagree
with that approach (and would be happy for comments).


Anyway, the linear OLS (ordinary least squares) fit in Excel via LINEST gives
the slope as m = sum((x - x_avg)*(y - y_avg)) / sum((x - x_avg)^2) and the
intercept with the y-axis as b = y_avg - m*x_avg.
So far, so good.
So for a simple sample set of three points (x;y) = (1;1), (2;2.1), (3;2.9),
the slope comes out as m = 0.95 and b = 0.1.
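
To make sure I have the formulas right, here they are written out as a small
Python sketch (purely illustrative, nothing to do with Excel's internals; the
variable names are my own):

x = [1, 2, 3]
y = [1, 2.1, 2.9]
n = len(x)
x_avg = sum(x) / n
y_avg = sum(y) / n
sxy = sum((xi - x_avg) * (yi - y_avg) for xi, yi in zip(x, y))
sxx = sum((xi - x_avg) ** 2 for xi in x)
m = sxy / sxx            # slope: 0.95
b = y_avg - m * x_avg    # intercept: 0.10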
Using LINEST(y, x, , TRUE), I can create an array that gives me exactly those
values, with a STDEV for m of 0.086603 and for b of 0.187.
1) How are those values calculated?
2) How come the standard deviation for b is 80% larger than the value of b itself?!?
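
My best guess is that LINEST reports the textbook standard errors for an OLS
fit; continuing the Python sketch above, the following (my guess only, please
correct me if Excel does something else) reproduces both numbers:

residuals = [yi - (m * xi + b) for xi, yi in zip(x, y)]
sse = sum(r ** 2 for r in residuals)              # 0.015
s2 = sse / (n - 2)                                # residual variance; n-2 because two parameters are fitted
se_m = (s2 / sxx) ** 0.5                          # 0.0866, matches the value LINEST gives for m
se_b = (s2 * (1 / n + x_avg ** 2 / sxx)) ** 0.5   # 0.1871, matches the value LINEST gives for b

If that is the right formula, the large error on b would come mostly from the
x_avg^2/sxx term rather than from b itself, but I would appreciate confirmation.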

Furthermore, if I now force the line to go through 0 by using LINEST(y, x, 0, TRUE),
my slope becomes 0.9928 and b = 0 (obviously), with STDEV(m) = 0.026245.
But if I use the above-mentioned calculation and just add a fourth point (0;0),
I get a slope of 0.98 and a b of 0.03, NOT what Excel does when forcing it
through zero.
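
To make the comparison concrete, here is the same thing in the Python sketch,
with my assumption of what LINEST does when the constant is forced to 0 (sums
taken about the origin and n-1 degrees of freedom) next to the add-a-(0;0)-point
approach:

# Line forced through the origin (my assumption of LINEST(y, x, 0, TRUE)):
sxy0 = sum(xi * yi for xi, yi in zip(x, y))
sxx0 = sum(xi ** 2 for xi in x)
m0 = sxy0 / sxx0                            # 0.99286, matches LINEST
sse0 = sum((yi - m0 * xi) ** 2 for xi, yi in zip(x, y))
se_m0 = (sse0 / (n - 1) / sxx0) ** 0.5      # 0.02625, matches LINEST

# Versus just adding a fourth point (0;0) and refitting with an intercept:
x4 = x + [0]
y4 = y + [0]
n4 = len(x4)
xa = sum(x4) / n4
ya = sum(y4) / n4
m4 = sum((xi - xa) * (yi - ya) for xi, yi in zip(x4, y4)) / sum((xi - xa) ** 2 for xi in x4)
b4 = ya - m4 * xa                           # m4 = 0.98, b4 = 0.03, clearly not the same fit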

Any help with how both the slope forced through zero and all the STDEVs are
actually calculated would be greatly appreciated.
Thanks.