`typing.overload` makes it easier to type functions whose behaviour depends on their input types. Functions marked with `@overload` are ignored by Python at runtime and only used by the type checker; the final, undecorated definition provides the actual implementation:

```python
@overload
def process(response: None) -> None: ...
@overload
def process(response: int) -> tuple[int, str]: ...
@overload
def process(response: bytes) -> str: ...
def process(response):
    ...  # <actual implementation>
```
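Since only the last definition exists at runtime, the implementation has to dispatch on the actual type itself. A minimal runnable sketch (the bodies below are illustrative, not the original implementation):

```python
from typing import overload

@overload
def process(response: None) -> None: ...
@overload
def process(response: int) -> tuple[int, str]: ...
@overload
def process(response: bytes) -> str: ...
def process(response):
    # Only this body survives at runtime; the @overload stubs above
    # exist purely so the type checker can narrow the return type.
    if response is None:
        return None
    if isinstance(response, int):
        return (response, str(response))
    return response.decode()
```

A type checker will then infer `process(3)` as `tuple[int, str]` and `process(b"x")` as `str`, while plain Python sees a single dynamically-dispatched function.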
Python's multiprocessing and deadlocks
Python's multiprocessing is prone to deadlocks under a number of conditions. In my case, the running program was a standard single-process, non-threaded script, but it used complex native libraries, which might have been the trigger for the deadlocks.
The suggested workaround is using `set_start_method("spawn")`, but when we tried it we hit serious performance penalties.
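For reference, a minimal sketch of the workaround (the worker function here is illustrative): `spawn` starts a fresh interpreter per worker instead of forking, which sidesteps fork-related deadlocks but pays a startup cost per process.

```python
import multiprocessing as mp

def square(x):
    return x * x

if __name__ == "__main__":
    # "spawn" starts each worker in a clean interpreter, avoiding
    # state inherited via fork() at the cost of slower startup.
    mp.set_start_method("spawn")
    with mp.Pool(2) as pool:
        print(pool.map(square, [0, 1, 2]))  # prints [0, 1, 4]
```

The startup cost is paid once per worker process, which is why a spawn-based pool can be noticeably slower for many short tasks.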
Lesson learnt: multiprocessing is good for prototypes, but may end up being too hacky for production.
In my case, I was already generating small Python scripts corresponding to worker tasks, which were useful for reproducing and debugging Magics issues, so I switched to running those as the actual workers. In the future, this may also come in handy for dispatching work to HPC nodes.
Here's a parallel execution scheduler based on asyncio that I wrote to run them; it may come in handy on other projects, too.
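A minimal sketch of such a scheduler, assuming the worker tasks are standalone Python scripts (the function names and concurrency limit below are illustrative, not the original code):

```python
import asyncio
import sys

async def run_script(path: str, semaphore: asyncio.Semaphore):
    """Run one worker script in its own subprocess, bounded by the semaphore."""
    async with semaphore:
        proc = await asyncio.create_subprocess_exec(
            sys.executable, path,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE,
        )
        stdout, stderr = await proc.communicate()
        return proc.returncode, stdout.decode()

async def run_all(paths, max_workers: int = 4):
    """Run all worker scripts, at most max_workers at a time."""
    semaphore = asyncio.Semaphore(max_workers)
    return await asyncio.gather(*(run_script(p, semaphore) for p in paths))
```

Since each worker runs in its own interpreter, a crashing or deadlocking task cannot take the scheduler down with it, and the scripts remain directly runnable for debugging.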