Performance long/single comparison vs converting single to long
I have a workbook that I've inherited and have already made various revisions to improve its performance and functionality (what was a 30-minute running time is down to 5). There is one set of items that I haven't changed yet, because I'm not sure whether the original programmer had a specific reason for doing it the way he did (taking some of the other changes I've made into consideration, it's a coin toss).

A *lot* of random numbers are generated and then compared against a column of cell values expressed as percentages (yeah, a lookup). Currently, the code coerces the values on *both* sides of this comparison to Integers or Longs before it does the comparison, rather than leaving the variables as-is and doing a Single-to-Single comparison. The only reason I can think of to do this is that an Integer or Long comparison *might* have some sort of performance edge over a Single comparison that the original programmer knew about and that I don't. However, it seems to me that whatever performance edge there might be (assuming there even is one) would be negated by multiplying two variables by 1000 for each comparison.

Is there any possible advantage to the existing ConvertedSingleAsLong vs ConvertedSingleAsLong code over changing it to simply read "If SingleA < SingleB Then..."? (Figuring it wouldn't hurt to ask before making the change to see what happens...)

TIA for any thoughts,
--
George Nicholson
Remove 'Junk' from return address.
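For reference, a minimal sketch of the two patterns being weighed. SingleA and SingleB are taken from the post; the CLng call, the *1000 scaling, and the < operator are my assumptions about how the existing code is written:

    ' Existing approach: scale both Singles by 1000 and compare as Longs
    If CLng(SingleA * 1000) < CLng(SingleB * 1000) Then
        ' ... handle the lookup hit ...
    End If

    ' Proposed simplification: compare the Singles directly
    If SingleA < SingleB Then
        ' ... handle the lookup hit ...
    End If

One behavioral difference worth noting: CLng rounds, so the Long version effectively compares the values at three-decimal precision. If the two Singles differ by less than 0.001, the two forms can disagree, so the direct comparison is only a drop-in replacement if that rounding was incidental rather than intentional.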
Performance long/single comparison vs converting single to long
George,
I think a Long-to-Long comparison will be quicker than a Single-to-Single comparison, but any gain might well be negated by the conversion to Longs in the first place. I would suggest you take the relevant code out, time both versions, and see whether it actually makes any significant difference.

Of more concern is Excel's random number generator. If you do have *lots* of random numbers, you are going to encounter many repetitions. I don't know if this is a concern here, or whether there is any code added to address duplicated values. You could look at PopTools for an alternative if required.

--
HTH
Bob Phillips

"George Nicholson" wrote in message ... [original message quoted above; snipped]
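Along the lines of Bob's suggestion to time it, here is a minimal sketch of a timing harness (the Sub name, loop count, and test values are arbitrary placeholders; substitute the real comparison from the workbook):

    Sub TimeComparisons()
        Dim i As Long, hits As Long
        Dim SingleA As Single, SingleB As Single
        Dim t As Single

        SingleA = 0.123
        SingleB = 0.456

        ' Time the converted-to-Long comparison
        t = Timer
        For i = 1 To 10000000
            If CLng(SingleA * 1000) < CLng(SingleB * 1000) Then hits = hits + 1
        Next i
        Debug.Print "Converted-to-Long comparison:"; Timer - t; "seconds"

        ' Time the direct Single comparison
        t = Timer
        For i = 1 To 10000000
            If SingleA < SingleB Then hits = hits + 1
        Next i
        Debug.Print "Direct Single comparison:"; Timer - t; "seconds"
    End Sub

Run it a few times and compare the timings printed in the Immediate window; with several million iterations, any meaningful difference between the two forms should be obvious.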