Error and approximation can take many forms. They can be mathematical, numerical, algorithmic... pretty much any part of what we consider computer science has some level of error or approximation, arising from the software, the hardware, or simply mistakes in our code.
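A minimal Python sketch of one of these forms, numerical error: 0.1 and 0.2 have no exact binary floating-point representation, so even this trivial sum carries a small rounding error that we typically absorb with a tolerance.

```python
import math

# Numerical error in its simplest form: the decimal values 0.1 and 0.2
# cannot be represented exactly in binary floating point, so their sum
# is not exactly 0.3.
total = 0.1 + 0.2
print(total)           # slightly above 0.3
print(total == 0.3)    # False: exact comparison fails

# In practice we accept the error by comparing within a tolerance.
print(math.isclose(total, 0.3))  # True
```

This is the everyday version of "acceptable error": we decide how close is close enough rather than demanding exactness.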
When we plan to take advantage of acceptable error, as I am here, we need a decent understanding of what we consider error and the forms it can take in our software.