I was debugging some code in C++ with a friend when I ran into something interesting.
I was set up in NetBeans using MinGW's GCC 4.6.2, and my friend was using Dev-C++, which ships with MinGW's GCC 3.4.2.
At the time I was using the setprecision manipulator from the <iomanip> standard header.
setprecision lets you set the decimal precision used when formatting floating-point output.
It was a simple operation that had a small error: I was setting the precision to 2 on 118.125 (with std::fixed), which should have given me 118.13.
Instead my output was 118.12. I ran the same code on my friend's machine and noticed that it worked fine there, outputting 118.13.
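Here's a minimal sketch of the kind of code I mean. I'm assuming std::fixed is set; without it, setprecision(2) counts significant digits and would print 1.2e+02 instead:

#include <iomanip>
#include <iostream>

int main() {
    double value = 118.125; // exactly representable in binary, since 0.125 is 2^-3
    std::cout << std::fixed << std::setprecision(2) << value << '\n';
    // On my machine (GCC 4.6.2) this printed 118.12;
    // on my friend's (GCC 3.4.2) it printed 118.13.
}

Since 118.125 is stored exactly, the output really comes down to how each runtime breaks the tie when rounding to two decimals: round-half-to-even gives 118.12, while round-half-up gives 118.13. My guess is the two toolchains pull in runtimes that break ties differently, but I haven't confirmed that.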
It was only a minor deviation from the way setprecision normally works, but I assumed the problem was with the compiler version.
I could have it wrong, but I believe each compiler's implementation can carry minor differences in the way it implements the standard library. Although this isn't a huge issue here, it could lead to larger problems in bigger calculations, even if each result is only off by one decimal.
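One way to sidestep whatever the library is doing would be to round explicitly before printing. This is just a sketch, and round2 is a hypothetical helper I made up, but since std::round always rounds halfway cases away from zero, both toolchains should print the same thing:

#include <cmath>
#include <iomanip>
#include <iostream>

// Hypothetical helper: round to two decimals, ties away from zero,
// instead of relying on the runtime's formatting rules.
// Note: multiplying by 100 can itself introduce rounding for values
// that aren't exact in binary, so this is not a general-purpose fix.
double round2(double x) {
    return std::round(x * 100.0) / 100.0;
}

int main() {
    std::cout << std::fixed << std::setprecision(2)
              << round2(118.125) << '\n'; // 118.13 on both
}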
Like I said, I could be wrong about why this is happening, so if anyone else has an explanation I'd be interested in knowing about it.
Have you guys run into any weird inconsistencies between common compilers for the same language?