What an extraordinarily rude response, Mr. Grove. Given the repeated rudeness of your responses in the face of sincere dialogue from Alan, there's little hope your social manners could ever improve.

"Harlan Grove" wrote:

> "Alan" wrote:
> > The 'nature' of the calculations for an NPV (let's leave IRR for now) are really quite simple.
>
> It's the seemingly simple things that tend to cause the biggest problems, because they come as such a surprise.
>
> Start with a simple interest rate like 3.0%. That seems to be a simple number. Two problems: first, 0.03 is an infinitely repeating *BINARY* fraction, just as 1/3 is an infinitely repeating decimal fraction; second, the interest rate isn't used directly, 1 plus the interest rate is. Each later period requires a successively higher integer power of 1+r, so even if the calculations were done in decimal you'd quickly reach a point at which you couldn't retain the lowest-order digits.
>
> Time 1   1.03
> Time 2   1.0609
> Time 3   1.092727
> Time 4   1.12550881
> Time 5   1.1592740743
> Time 6   1.194052296529
> Time 7   1.22987386542487
> Time 8   1.2667700813876161
>
> Time 8 exceeds 15 decimal digits, so it can't and won't be represented in Excel except as truncated or rounded.
>
> > Without wishing to be presumptuous, I am guessing that you are referring to the level of precision that can be held in a single cell (15 significant figures, if I recall correctly).
>
> Got it in one.
>
> > If so, then I think that in theory you are absolutely correct, but that in a commercial environment we rarely have the precision of inputs/assumptions to warrant showing results to more than three or four significant figures. I often wish it were otherwise!
>
> You're confusing the imprecision of inputs/estimates of future cashflows with the precision of NPV calculations based on those cashflows. I too work in a commercial environment, but I don't glibly dismiss the benefits of greater precision of calculations just because the inputs are flaky. I employ standard sensitivity testing to check the reasonableness of results by varying the cashflows stochastically.
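Both precision problems described above (0.03 having no exact binary representation, and exact powers of 1.03 outgrowing 15 significant digits by Time 8) can be checked outside Excel. A minimal sketch using Python's standard `decimal` module; only the 3% rate and the Time 1-8 factors come from the post, the rest is illustrative:

```python
from decimal import Decimal

# Problem 1: 0.03 has no exact binary representation, so the 64-bit
# float a spreadsheet stores is only close to 3/100, not equal to it.
print(Decimal(0.03))                  # exact value of the stored double
print(Decimal(0.03) == Decimal("0.03"))  # False

# Problem 2: the exact power (1.03)^t gains two decimal digits per
# period, passing 15 significant digits at Time 8.
factor = Decimal("1.03")
exact = Decimal(1)
for t in range(1, 9):
    exact *= factor                   # exact under the default 28-digit context
    print(f"Time {t}: {exact}  (as a float: {1.03 ** t!r})")
```

Because the exact products never exceed 17 significant digits, the default 28-digit `decimal` context reproduces the table in the post digit for digit, while the float column shows what an IEEE-754 double actually retains.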
> If my discount rate is a true cost-of-capital estimate, it gives me a measure of uncertainty which I can translate into variation of a cashflow around its assumed mean.
>
> > If I have misunderstood your post, please do clarify, as I am always keen to improve!
>
> Given the superficiality of comments such as "in a commercial environment, we rarely have the precision of inputs / assumptions to warrant showing results to more than three or four significant figures", it's obvious you don't see the point of distinguishing between what you display and what you calculate. It's difficult to believe there's any hope you could improve.
>
> Estimates of future cashflows are only that: estimates. You have a point that there's no reason to estimate them to 9 significant digits. However, the NPV calculation is different. There's no good reason to round your discount factors. Note that if your nominal cashflows were displayed to only 3 significant digits and your discount factors rounded to only 4 decimal places, then the products of these two sets of numbers would have 7 significant digits. If you then round those products to 3 or 4 significant digits, you're effectively eliminating the possibility of any meaningful NPV result less than 1.0% or 0.1% of your largest individual cashflow.
>
> If you're going to do that, why bother with pseudo-NPV calculations at all? Just sum up the negative cashflows and calculate their dollar-weighted average payout date, sum up the positive cashflows and calculate their dollar-weighted average receipt date, then calculate your pseudo-NPV as
>
> CumPosCFs / (1+r)^AvgReceiptDate - CumNegCFs / (1+r)^AvgPayoutDate
>
> The calculations are even simpler than your pseudo-NPV, and it's highly likely the two would give the same accept/reject results, except for those cashflows for which sensitivity testing is necessary to decide whether the project represented by the cashflows makes sense.
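The dollar-weighted shortcut above can be sketched as follows. The cashflow schedule is hypothetical (only the 3% rate and the formula come from the post), chosen just to show the full NPV and the shortcut landing close together and agreeing on accept/reject:

```python
# Hypothetical schedule: period -> cashflow (negative = payout, positive = receipt)
cashflows = {1: -1000.0, 2: -500.0, 3: 600.0, 4: 700.0, 5: 800.0}
r = 0.03

# Full NPV: discount each cashflow at its own date.
npv = sum(cf / (1 + r) ** t for t, cf in cashflows.items())

# Shortcut: discount each cumulative total at its dollar-weighted average date.
neg = {t: -cf for t, cf in cashflows.items() if cf < 0}
pos = {t: cf for t, cf in cashflows.items() if cf > 0}
cum_neg, cum_pos = sum(neg.values()), sum(pos.values())
avg_payout = sum(t * cf for t, cf in neg.items()) / cum_neg
avg_receipt = sum(t * cf for t, cf in pos.items()) / cum_pos
pseudo_npv = cum_pos / (1 + r) ** avg_receipt - cum_neg / (1 + r) ** avg_payout

# Same sign -> same accept/reject decision for this schedule.
print(round(npv, 2), round(pseudo_npv, 2))
```

For this schedule the two results differ by well under a dollar on cashflows in the hundreds, which is the point: after aggressive rounding of inputs and factors, the cheap approximation carries as much usable information as the full calculation.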