The idea of `deepcopy` clearly has merits: it saves the programmer typing and permits the developer to be lazy about copying objects. Copies, however, always come at a cost. Let's measure it.
```python
import copy

class BaseMessage(object):
    def __init__(self, sender, receiver, topic):
        self.sender = sender
        self.receiver = receiver
        self.topic = topic

    def copy(self):
        return copy.deepcopy(self)

    def old_school_copy(self):
        return type(self)(**{k: v for k, v in self.__dict__.items()
                             if not k.startswith("_")})

    def classic_copy(self):
        return BaseMessage(self.sender, self.receiver, self.topic)

class NewMessage(BaseMessage):
    def __init__(self, sender, receiver, topic, payload):
        super().__init__(sender, receiver, topic)
        self.payload = payload

    def classic_copy(self):
        return NewMessage(self.sender, self.receiver, self.topic, self.payload)

m1 = BaseMessage(1, 2, 3)
m2 = NewMessage(1, 2, 3, "payload")
```
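Before timing, it is worth a sanity check that the three methods really produce equal, independent copies. A minimal sketch against a stand-alone version of `BaseMessage`; note one semantic difference it exposes: only `deepcopy` duplicates mutable attributes, the constructor-based copies share references to them.

```python
import copy

class BaseMessage:
    def __init__(self, sender, receiver, topic):
        self.sender = sender
        self.receiver = receiver
        self.topic = topic

    def copy(self):
        return copy.deepcopy(self)

    def old_school_copy(self):
        # rebuild via the constructor from the public attributes
        return type(self)(**{k: v for k, v in self.__dict__.items()
                             if not k.startswith("_")})

    def classic_copy(self):
        return BaseMessage(self.sender, self.receiver, self.topic)

m1 = BaseMessage(1, 2, [3])            # mutable topic, to probe independence
for method in (m1.copy, m1.old_school_copy, m1.classic_copy):
    c = method()
    assert c is not m1                 # a new object ...
    assert c.__dict__ == m1.__dict__   # ... with equal attributes

assert m1.copy().topic is not m1.topic       # deepcopy duplicates the list
assert m1.classic_copy().topic is m1.topic   # constructor copies share it
```

For the flat scalar/string attributes used in the benchmark below, this difference does not matter; for nested mutable payloads it would.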
```python
%timeit m1.copy()
4.69 µs ± 47.5 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
%timeit m2.copy()
5.15 µs ± 47.4 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
%timeit m1.old_school_copy()
874 ns ± 12.7 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
%timeit m2.old_school_copy()
1.21 µs ± 20.8 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
%timeit m1.classic_copy()
274 ns ± 4.03 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
%timeit m2.classic_copy()
468 ns ± 1.43 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
```
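Outside IPython, the same measurement can be sketched with the stdlib `timeit` module; absolute numbers will differ per machine, but the relative ordering should hold. A self-contained version using the `BaseMessage` class from above:

```python
import copy
import timeit

class BaseMessage:
    def __init__(self, sender, receiver, topic):
        self.sender = sender
        self.receiver = receiver
        self.topic = topic

    def copy(self):
        return copy.deepcopy(self)

    def old_school_copy(self):
        return type(self)(**{k: v for k, v in self.__dict__.items()
                             if not k.startswith("_")})

    def classic_copy(self):
        return BaseMessage(self.sender, self.receiver, self.topic)

m1 = BaseMessage(1, 2, 3)

# timeit.timeit returns total seconds for `number` calls
N = 100_000
for name in ("copy", "old_school_copy", "classic_copy"):
    secs = timeit.timeit(f"m1.{name}()", globals=globals(), number=N)
    print(f"{name:>16}: {secs / N * 1e9:.0f} ns per call")
```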
Now let's summarize:
message | copy | old_school_copy | classic_copy
---|---|---|---
m1 | 4.69 µs | 0.874 µs | 0.274 µs
m2 | 5.15 µs | 1.21 µs | 0.468 µs
Observations:

- `old_school_copy` is roughly 4-5x faster than the `deepcopy`-based `copy`.
- `classic_copy` is the fastest method, but requires the developer to be explicit.

As messages are compact data packages central to a kernel, we therefore have to ask ourselves:
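If you want to keep the generic `copy.deepcopy` interface while avoiding most of its overhead, one option (not part of the benchmark above) is to override the `__deepcopy__` hook so that `copy.deepcopy` delegates to an explicit constructor call. A sketch, using a hypothetical `FastMessage` class:

```python
import copy

class FastMessage:
    def __init__(self, sender, receiver, topic):
        self.sender = sender
        self.receiver = receiver
        self.topic = topic

    def __deepcopy__(self, memo):
        # bypass the generic deepcopy machinery with an explicit rebuild;
        # safe here only because the attributes are immutable scalars
        return FastMessage(self.sender, self.receiver, self.topic)

m = FastMessage(1, 2, 3)
c = copy.deepcopy(m)
assert c is not m and c.__dict__ == m.__dict__
```

This keeps callers on `copy.deepcopy` but shifts the explicitness into the class, with the same caveat as `classic_copy`: the class author must keep the rebuild in sync with the attributes.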
Would you accept a 17x slowdown of the most used kernel operation?
Probably not.