I don't often time routines, because the stuff I write mostly operates in real time or runs too fast to be an issue. Lately, though, there have been some operations on larger data that I want to find faster ways to do. Sometimes my preconceived notion of how to write the fastest function demands testing, especially on larger data. Hence mark_time() and get_last_time().

The overhead of calling these functions can be factored out by running your routine in a loop and dividing the resulting time by the number of loops performed. The Python standard library has a timeit module for this, which I personally find a little unintuitive.
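For comparison, here is a minimal sketch of the standard-library approach; the statement being timed (`sum(range(1000))`) is just a stand-in for whatever routine you want to measure:

```python
import timeit

# timeit.timeit runs the statement `number` times and returns the
# total elapsed seconds; divide by `number` for the per-call cost.
loops = 1000
total = timeit.timeit('sum(range(1000))', number=loops)
print('per call: {:.5g} seconds'.format(total / loops))
```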


```python
import time

time_list = []

def mark_time():
    """Record the current time."""
    time_list.append(time.time())

def get_last_time():
    """Return the elapsed time between the two most recent marks."""
    if len(time_list) < 2:
        return "ERROR: must have two time entries to calculate the difference"
    return time_list[-1] - time_list[-2]

# mark_time()
# # your routine here
# mark_time()

# {:.5g} formats the difference to 5 significant figures
print('your routine took {:.5g}'.format(get_last_time()))
```
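Putting the two ideas together, here is a sketch of the loop-and-divide approach using the helpers above; `square_all` is a hypothetical routine standing in for whatever you want to test:

```python
import time

time_list = []

def mark_time():
    time_list.append(time.time())

def get_last_time():
    if len(time_list) < 2:
        return "ERROR: must have two time entries to calculate the difference"
    return time_list[-1] - time_list[-2]

def square_all(data):
    # hypothetical routine under test
    return [x * x for x in data]

data = list(range(10_000))
loops = 100

mark_time()
for _ in range(loops):
    square_all(data)
mark_time()

# The total time divided by the loop count gives the per-call cost;
# the one-time overhead of mark_time() is amortized across all loops.
per_call = get_last_time() / loops
print('square_all took {:.5g} seconds per call'.format(per_call))
```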