torch_simple_timing.clock#

Base class to time PyTorch code.

This class is the building block reused by torch_simple_timing.timer.Timer.

To use stand-alone:

import torch
from torch_simple_timing.clock import Clock

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# enable GPU timing only if a CUDA device is available
gpu = device.type == "cuda"

# data structure to store multiple kinds of timings
dict_store = {}

# Create a clock timing the rest of this example.
# Note that you can chain instantiation and the start() method
# (but you don't have to)
full_clock = Clock(store=dict_store, name="full", gpu=gpu).start()

# create a random tensor for the sake of this demo
# and time its creation with an existing store
with Clock(store=dict_store, name="tensor-init", gpu=gpu):
    t = torch.randn(2000, 2000, device=device)


# create a one-time clock
clock_no_store = Clock(gpu=gpu)

# Create a new clock re-using the same store for its timings.
clock_dict_store = Clock(store=dict_store, name="mult/inv", gpu=gpu)

# start clocks
clock_no_store.start()
clock_dict_store.start()

# Create a new clock and use it as a context manager,
# storing its times in a list
with Clock(gpu=gpu, store=[]) as clock_list_store:
    torch.inverse(t @ t.T)

# re-use the same clock as a context-manager
with clock_list_store:
    torch.inverse(t @ t.T)

# stop clocks
clock_no_store.stop()
clock_dict_store.stop()
full_clock.stop()

# print results
print(clock_no_store.duration)
print(clock_list_store.store)
print(full_clock.store)
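
At this point dict_store maps each clock name ("full", "tensor-init", "mult/inv") to a list of durations in seconds, one entry per completed start/stop cycle. Continuing the example above (this aggregation is not part of the library), you could summarize the timings like this:

# Aggregate the timings accumulated in dict_store
for name, times in dict_store.items():
    print(f"{name}: mean {sum(times) / len(times):.4f}s over {len(times)} run(s)")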

Module Contents#

Classes#

Clock

A utility class for timing PyTorch code.

class torch_simple_timing.clock.Clock(name=None, store=None, gpu=False, ignore=False)[source]#

A utility class for timing PyTorch code.

A clock can be used as a context manager or as a stand-alone object.

After the clock is stopped, the duration attribute contains the time in seconds between the start and stop calls.

Clock objects can be used to time GPU code. For timings to be meaningful, they use torch.cuda.synchronize() to ensure that all GPU kernels have finished before the timer starts and stops.

You can provide a dictionary or a list to store the clock’s results. With a dict, the name argument is used as a key and each measured duration is appended to the list stored under that key (see the sketch after the parameter list below).

Parameters:
  • name (str) – The name of the timer.

  • store (Dict[str, List] or List, optional) – A dictionary or list for storing timing results. Defaults to None.

  • gpu (bool, optional) – Indicates if GPU timing should be used. Defaults to False.

  • ignore (bool, optional) – If True, the timer does not record any results. Defaults to False.
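
A minimal stand-alone sketch tying these parameters together; per the description above, ignore=True means nothing is recorded, so timing code can stay in place and be switched off outside profiling runs (the profiling flag below is just illustrative):

import torch
from torch_simple_timing.clock import Clock

store = {}
profiling = True  # set to False to skip recording via ignore=True

with Clock(name="matmul", store=store, gpu=torch.cuda.is_available(), ignore=not profiling):
    x = torch.randn(512, 512)
    y = x @ x

# With profiling=True the duration is appended under store["matmul"];
# with profiling=False (ignore=True) the store is left untouched.
print(store)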

__enter__()[source]#

Starts the timer.

Returns:

The Clock instance.

Return type:

self

__exit__(*args)[source]#

Stops the timer and records the duration.

Parameters:

*args – Exception information passed by the context-manager protocol (exception type, value, and traceback), if any.

Returns:

None
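
Taken together, __enter__ and __exit__ make the with statement roughly equivalent to calling start() on entry and stop() on exit, as in this sketch:

from torch_simple_timing.clock import Clock

clock = Clock(gpu=False)

with clock:                       # __enter__() calls start()
    _ = sum(i * i for i in range(100_000))
                                  # __exit__() calls stop()

print(clock.duration)             # elapsed time in seconds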

__repr__()[source]#

Describes the clock from its constructor’s arguments. If the clock has been used, it also includes its latest duration.

# Example outputs
Clock(store=<NoneType>, name=None, gpu=False, ignore=False)
Clock(store=<list[2]>, name=None, gpu=False, ignore=False | duration=0.303)
Clock(store={'full': '<list[1]>', 'tensor-init': '<list[1]>', 'mult/inv': '<list[1]>'}, name=full, gpu=False, ignore=False | duration=0.629)
Returns:

Clock’s description.

Return type:

str

start()[source]#

Start timing. This is called automatically when using the clock as a context manager.

Returns itself for chaining:

clock = Clock().start()
Returns:

The self Clock instance.

Return type:

Clock

stop()[source]#

Stop timing. This is called automatically when using the clock as a context manager.

Raises:
  • KeyError – If the clock’s name is not a key in the self.store dict. This may happen if you tinker with the Timer’s internal data.

  • TypeError – If the clock’s store is not a dict or a list.

  • TypeError – If the clock’s store is a dict but the value under the clock’s name is not a list.

Return type:

None
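
For reference, a manual start()/stop() cycle with a dict store; the exceptions above should only surface if the store, or the list kept under the clock’s name, is replaced with an unsupported type between start() and stop():

from torch_simple_timing.clock import Clock

store = {}
clock = Clock(name="step", store=store, gpu=False)

clock.start()
_ = sum(i * i for i in range(100_000))
clock.stop()

print(clock.duration)   # float, seconds
print(store["step"])    # [clock.duration]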