When our team first chose to use the open-source product R for the majority of our work, we did not realise the hornet's nest of problems it might create. The problems were not due to the application itself, but to the assumptions we brought with us. Even though we had many years of experience working in C, Java, and numerous other languages, the first problem that knocked us over was floating-point arithmetic.
For those of you who are not sure what I mean, consider the following question: does SQRT(2)*SQRT(2) == 2.0? Many Excel users will cry "Yes", but those of you used to seeing the post "See FAQ 7.31" will know that applications such as R and S-Plus do not see the world the same way, and thus return FALSE.
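You can see the same behaviour in any language that uses IEEE 754 double-precision floats; here is a minimal sketch in Python (Python is just a stand-in for R here, since both use the same underlying representation):

```python
import math

# The "obvious" identity fails under IEEE 754 double precision,
# just as it does in R and S-Plus.
product = math.sqrt(2) * math.sqrt(2)

print(product == 2.0)  # False
print(product)         # 2.0000000000000004 -- off by one unit in the last place
```

The product is not exactly 2.0 because sqrt(2) cannot be represented exactly in a finite number of binary digits, so a tiny rounding error survives the multiplication.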
The reason is that your computer has only a limited number of bits with which to represent a floating-point number. You can find more on floating-point representations in David Goldberg (1991), "What Every Computer Scientist Should Know About Floating-Point Arithmetic", ACM Computing Surveys, 23/1, 5-48 (courtesy again of the R FAQ).
So what is the (decimal) point of it all? Well, when dealing with applications that know about IEEE 754-1985, you can normally set the desired amount of precision (in R this is usually an argument called tolerance, for example in all.equal). My personal favourite is to use a threshold figure like 0.00001 to check against. But you should always be on the lookout for such precision problems. A note on the difference between accuracy and precision I will leave for another day, but you get the idea.
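The threshold approach can be sketched in a few lines; nearly_equal below is a hypothetical helper for illustration (again in Python as a stand-in), mirroring the tolerance idea behind R's all.equal:

```python
import math

def nearly_equal(a, b, tolerance=1e-5):
    # Hypothetical helper: treat two floats as equal when they
    # differ by less than the chosen threshold.
    return abs(a - b) < tolerance

# The comparison that failed exactly now succeeds within tolerance.
print(nearly_equal(math.sqrt(2) * math.sqrt(2), 2.0))  # True

# Python's standard library offers math.isclose for the same job,
# much as R offers all.equal with its tolerance argument.
print(math.isclose(math.sqrt(2) * math.sqrt(2), 2.0))  # True
```

In practice, prefer the facility your platform already provides (all.equal in R, math.isclose in Python) over a hand-rolled threshold, since library versions handle edge cases such as relative versus absolute tolerance.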
So finally, don't complain when applications are built to properly represent your floating-point numbers; recognise that they are doing their job, and praise the writers for sticking to standards (again, a topic for another time!).
Happy number crunching!