Hi dig maintainers,
My organization built an internal tool with many different startup tasks, each producing a unique Go type. Before dig/fx, some tasks wrote to a shared struct, some read from it, and some did both. This meant carefully ordering the functions by hand, keeping in mind implicit dependencies hidden inside function bodies. I'm now removing this shared struct and rewriting each task as a dig/fx provider, letting dig order things automatically. It's wonderful! Can't thank you enough.
However (and sorry for frontloading this), our existing code, poor quality as it is, does perform many slow tasks in parallel. Forcing these to run serially would push our execution time past our execution interval. But dependency injection with dig is such a good match for our codebase that I would really like to work something out.
I don't believe the core of dig is incompatible with parallel execution. To prove this to myself, as a first pass I made paramList use an errgroup to call all of its children and wait for them to finish, and I changed the store maps to sync.Map. Where two goroutines could reach the same constructorNode, I added a mutex to protect the called field. It worked, but it felt too simplistic and hard to control.
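To illustrate that first pass, here is a minimal sketch with invented, simplified types (not dig's actual code): several goroutines resolve the same node in parallel, and a mutex around the called field guarantees the constructor runs exactly once.

```go
package main

import (
	"fmt"
	"sync"
)

// constructorNode is a drastically simplified stand-in for dig's node:
// the mutex protects the called flag so concurrent resolutions only
// invoke the constructor once.
type constructorNode struct {
	mu     sync.Mutex
	called bool
	ctor   func() int
	result int
}

// call runs the constructor exactly once, even under concurrent access.
func (n *constructorNode) call() int {
	n.mu.Lock()
	defer n.mu.Unlock()
	if !n.called {
		n.result = n.ctor()
		n.called = true
	}
	return n.result
}

func main() {
	node := &constructorNode{ctor: func() int { return 42 }}

	// Several params reaching the same constructorNode concurrently,
	// as an errgroup-driven paramList would produce.
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = node.call()
		}()
	}
	wg.Wait()
	fmt.Println(node.result) // prints 42
}
```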
The approach I settled on keeps all dig code on one goroutine, while user-provided constructors can run either on that same goroutine (the default) or on pooled/ephemeral goroutines. This keeps mutexes out of dig's data structures and lets users decide how much concurrency they want.
I tried this several ways, and this pull request is the cleanest approach I found. It introduces a new deferred type that params and constructorNodes return. A deferred can be observed and resolved like a very simple Promise. Since params and constructors pass values indirectly via a Scope, deferred doesn't need much state.
I would very much like to get concurrent initialization into dig by any means; it doesn't have to be this pull request. I'm willing to make any changes you deem necessary; let's work together!