Friday, 15 July 2011

Why does Python judge 1.0 equal to 1 in some cases, but not in others?

This question already has answers here:

Python rounding error with float numbers [duplicate] (2 answers)
Floating point representation error in Python [duplicate] (1 answer)

Here is the Python code:

    import sys

    k = 3
    f = raw_input("please input the initial " + str(k) + " lambdas: ").split()
    z = []
    sumoflamba = 0.0
    for m in f:
        # Accept either a plain number or a fraction written as "a/b".
        j = m.find("/")
        if j != -1:
            e = float(m[:j]) / float(m[j + 1:])
        else:
            e = float(m)
        sumoflamba += e
        if e == 0:
            print "the initial lambda cannot be zero!"
            sys.exit()
        z.append(e)
    print sumoflamba
    if sumoflamba != 1:
        print "the initial lambdas must sum to 1!"
        sys.exit()

When I run it with 0.7, 0.2, 0.1, it prints the warning and exits! However, with 0.1, 0.2, 0.7 it works fine, and 0.3, 0.3, 0.4 works fine too. I don't have a clue... can anyone explain this, please? `print sumoflamba` gives 1.0 in the cases that work.
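The difference comes from the order of the additions, not the values themselves: floating-point addition is not associative, so summing the same three inputs left to right can land on different doubles. A minimal sketch reproducing the two cases from the question:

```python
# Floating-point addition is not associative: the same three values
# summed in a different order can round to different doubles.
a = 0.7 + 0.2 + 0.1   # evaluated as (0.7 + 0.2) + 0.1
b = 0.1 + 0.2 + 0.7   # evaluated as (0.1 + 0.2) + 0.7

print(repr(a))   # 0.9999999999999999
print(repr(b))   # 1.0
print(a == 1)    # False -> triggers the "must sum to 1" warning
print(b == 1)    # True  -> passes the check
```

Note that `print` shows both sums as 1.0 under Python 2's default formatting, which is why the failing case looks identical to the working one; `repr` reveals the actual stored value.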

Pretty much the link Lattyware provided explains it. In a nutshell, you can't expect equality comparisons to work in floating point without being explicit about precision. If you either round off the value or cast it to an integer, you get predictable results:

    >>> f1 = 0.7 + 0.2 + 0.1
    >>> f2 = 0.1 + 0.2 + 0.7
    >>> f1 == f2
    False
    >>> round(f1, 2) == round(f2, 2)
    True

python
