For example, consider the gravitational potential energy of an object, Ep = mgh.
The three parameters are recorded as follows:
m = 10 ± 1 kg
g = 9.8 ± 0.1 m s^-2
h = 10 ± 1 m
The percentage error of the measurement is:
[ (11 × 9.9 × 11) - (10 × 9.8 × 10) ] / (10 × 9.8 × 10) = +22.2%
or [ (9 × 9.7 × 9) - (10 × 9.8 × 10) ] / (10 × 9.8 × 10) = -19.8%
Obviously, the two values are not the same.
Hence, the maximum percentage error is 22.2%.
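If anyone wants to verify this arithmetic, here is a minimal Python sketch (variable names are my own) that evaluates the two extremes directly from the values above:

```python
# Exact extreme percentage errors for Ep = m*g*h,
# using the recorded values and their uncertainties.
m, dm = 10.0, 1.0    # kg
g, dg = 9.8, 0.1     # m s^-2
h, dh = 10.0, 1.0    # m

nominal = m * g * h                           # 980 J
upper = (m + dm) * (g + dg) * (h + dh)        # 1197.9 J
lower = (m - dm) * (g - dg) * (h - dh)        # 785.7 J

print((upper - nominal) / nominal * 100)      # about +22.2 %
print((lower - nominal) / nominal * 100)      # about -19.8 %
```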
In physics, however, the general method for solving this problem is:
maximum percentage error = (1/10) + (0.1/9.8) + (1/10) = 21.02%
which is easier to calculate but does not coincide with the real maximum error (22.2%).
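As a numerical check, the same first-order sum in Python (same assumed values as above):

```python
# First-order estimate: add the relative uncertainties of m, g and h.
m, dm = 10.0, 1.0    # kg
g, dg = 9.8, 0.1     # m s^-2
h, dh = 10.0, 1.0    # m

linear_estimate = (dm / m + dg / g + dh / h) * 100
print(linear_estimate)    # about 21.02 %
```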
Moreover, I found that this value is always close to the mean of the two values above:
(22.2 + 19.8)/2 = 21 ≈ 21.02
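Here is a small sketch comparing the mean of the two extreme errors with the first-order sum (same assumed values; names are mine):

```python
# Compare the mean of the two extreme errors with the first-order sum.
m, dm = 10.0, 1.0    # kg
g, dg = 9.8, 0.1     # m s^-2
h, dh = 10.0, 1.0    # m

nominal = m * g * h
up = ((m + dm) * (g + dg) * (h + dh) - nominal) / nominal      # about +0.222
down = (nominal - (m - dm) * (g - dg) * (h - dh)) / nominal    # about +0.198

print((up + down) / 2 * 100)               # about 21.03 %
print((dm / m + dg / g + dh / h) * 100)    # about 21.02 %
```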
Why should we use this general but wrong method to calculate the percentage error in an experiment?