In Python, both floats and decimals can be rounded. If you care about the accuracy of rounding, use the Decimal type; plain floats are stored in binary and can produce surprising rounding results (a short illustration appears just after the original value below).
All of the examples use Decimal values, except for the original value, which Python evaluates as a float.
To set the context of what we are working with, let's start with an original value.
print(16.0 / 7)
Output: 2.2857142857142856
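To see the float accuracy issue mentioned at the top, here is a minimal illustration of my own (the value 2.675 is just a convenient example, not part of the original walkthrough):
# 2.675 cannot be stored exactly in binary floating point,
# so rounding the float gives 2.67 rather than the 2.68 you might expect.
print(round(2.675, 2))
Output: 2.67
# A Decimal built from a string keeps the value exact, so it rounds as expected.
from decimal import Decimal
print(round(Decimal('2.675'), 2))
Output: 2.68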
Example 1:
from decimal import Decimal
# First we take a float and convert it to a decimal
x = Decimal(16.0/7)
# Then we round it to 2 places
output = round(x, 2)
print(output)
Output: 2.29
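A quick check you can run on example 1 (my own aside, not part of the original walkthrough): building a Decimal directly from a float captures the float's exact binary value, while dividing two Decimals stays in decimal arithmetic.
from decimal import Decimal
# The float 16.0/7 is evaluated first, so the Decimal records
# its binary approximation rather than the true fraction 16/7.
print(Decimal(16.0 / 7))
Output: 2.285714285714285587403082899...  (trailing digits trimmed here)
# Dividing Decimals skips the float step; the default context
# precision of 28 significant digits applies.
print(Decimal(16) / Decimal(7))
Output: 2.285714285714285714285714286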
Example 2:
from decimal import Decimal, ROUND_HALF_UP
# This approach offers the most out-of-the-box control.
# The available rounding modes are:
# ROUND_05UP ROUND_DOWN ROUND_HALF_DOWN ROUND_HALF_UP
# ROUND_CEILING ROUND_FLOOR ROUND_HALF_EVEN ROUND_UP
our_value = Decimal(16.0/7)
# quantize already returns a Decimal, so no extra conversion is needed
output = our_value.quantize(Decimal('.01'), rounding=ROUND_HALF_UP)
print(output)
Output: 2.29
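If you are not sure which rounding mode you need, a quick comparison like this (my own sketch; the tie value 2.285 is chosen deliberately to show how the half modes differ) can help:
from decimal import Decimal, ROUND_DOWN, ROUND_UP, ROUND_HALF_EVEN, ROUND_HALF_UP
value = Decimal('2.285')
for mode in (ROUND_DOWN, ROUND_UP, ROUND_HALF_EVEN, ROUND_HALF_UP):
    # Quantize the same value to 2 places under each mode.
    print(mode, value.quantize(Decimal('.01'), rounding=mode))
Output:
ROUND_DOWN 2.28
ROUND_UP 2.29
ROUND_HALF_EVEN 2.28
ROUND_HALF_UP 2.29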
Example 3:
# If you use decimal, you need to import it
from decimal import getcontext, Decimal
# Set the precision.
getcontext().prec = 3
# Divide 16 by 7, casting both numbers as Decimals
output = Decimal(16) / Decimal(7)
# The output is limited to the 3 significant digits
# we set above.
print(output)
Output: 2.29
In example 3, prec counts significant digits rather than decimal places: if we set prec to 2, we get 2.3; if we set it to 6, we get 2.28571.
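To make that concrete, here is example 3 rerun at the two precisions mentioned above (a small check of my own):
from decimal import getcontext, Decimal
getcontext().prec = 2
print(Decimal(16) / Decimal(7))
Output: 2.3
getcontext().prec = 6
print(Decimal(16) / Decimal(7))
Output: 2.28571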
Which approach is best? They are all viable. I am a fan of the second option because it offers the most control. If you have a very specific use case (e.g. WMATA's 2010 practice of rounding fares up or down to the nearest $0.05 depending on the fare), you may have to customize this part of your code; one way that could look is sketched below.
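As a hedged sketch of that kind of customization (the nickel rule and the round_to_nickel helper are my own illustration, not WMATA's actual fare logic): you can round to the nearest $0.05 by counting nickels, quantizing that count to a whole number, and scaling back.
from decimal import Decimal, ROUND_HALF_UP

def round_to_nickel(fare, rounding=ROUND_HALF_UP):
    # Illustrative only: round a Decimal fare to the nearest $0.05.
    nickel = Decimal('0.05')
    return (fare / nickel).quantize(Decimal('1'), rounding=rounding) * nickel

print(round_to_nickel(Decimal('2.23')))
Output: 2.25
print(round_to_nickel(Decimal('2.22')))
Output: 2.20
Swapping the rounding argument (for example to ROUND_UP or ROUND_DOWN) is where a fare-specific policy would plug in.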
One caveat: Decimal carries a noticeably higher performance cost than plain floats; a rough way to measure the difference on your own machine is sketched below.
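A minimal sketch with timeit (the numbers will vary by machine and Python version, and the 1,000,000 iteration count is just an arbitrary choice of mine):
import timeit
# Time the same division performed on floats and on Decimals.
# Variables are used so the compiler cannot constant-fold the expression.
float_time = timeit.timeit('x / y', setup='x = 16.0; y = 7', number=1_000_000)
decimal_time = timeit.timeit('x / y',
                             setup='from decimal import Decimal; x = Decimal(16); y = Decimal(7)',
                             number=1_000_000)
print(float_time, decimal_time)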