Drydock does not easily handle cases like this:
- Harbormaster wants a working copy of rX.
- Drydock is set up to satisfy this by first building a "working copy cache" resource, then building the working copy from that cache (perhaps because this is faster than doing a network clone in the given environment).
- It's not currently easy to build this in a way that limits the number of active working copies per host (say, 25 per host). It's easy to say "5 working copy caches per host, 5 working copies per cache", but this may not use resources very efficiently. It's not obvious how to say "get a working copy cache, and also allow up to 25 working copies per host", at least without putting a whole lot of coordination logic into the blueprints.
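To make the inefficiency concrete, here is a minimal sketch. This is plain Python modelling the two policies, not Drydock's actual blueprint API; the function names, defaults, and counts are purely illustrative.

```python
# Hypothetical model (not Drydock's real API): compare two ways of capping
# working copies on a host when copies are built from a per-host cache.

def hierarchical_policy(copies_needed, caches_per_host=5, copies_per_cache=5):
    """Each blueprint enforces only its own local limit: caches per host,
    then copies per cache. Capacity works out (5 x 5 = 25 copies per host),
    but every cache is a separate clone on disk."""
    caches = -(-copies_needed // copies_per_cache)  # ceiling division
    hosts = -(-caches // caches_per_host)
    return {"hosts": hosts, "caches": caches, "copies": copies_needed}

def flat_policy(copies_needed, copies_per_host=25):
    """The policy we would actually like: one cache per host plus a flat
    per-host cap on copies. Neither blueprint can enforce this alone,
    because the copy blueprint would need to know how many other copies
    already share its host."""
    hosts = -(-copies_needed // copies_per_host)
    return {"hosts": hosts, "caches": hosts, "copies": copies_needed}

if __name__ == "__main__":
    print(hierarchical_policy(25))  # {'hosts': 1, 'caches': 5, 'copies': 25}
    print(flat_policy(25))          # {'hosts': 1, 'caches': 1, 'copies': 25}
```

For 25 working copies on one host, the hierarchical policy ends up maintaining 5 separate caches where one would do; that's the efficiency gap described above.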
Resolving this may actually be possible and straightforward, but there's no specific recommended approach for it right now.
(It's also not clear that this scenario is a very strong driver for it? In particular, the expectation is that working copies are recycled and serve implicitly as working copy caches. The newer WorkingCopy blueprint (which implements this more explicitly) and the actual resource lifecycle (which can do this properly) may moot this scenario. We'll see how things work in production after T9123.)
There are various other related scenarios (for example, a resource that needs two other resources) that likely face the same challenges, but the ground here generally feels very hypothetical for now.
Original Description
So I've been wrestling with this problem today. In our setup, the AWS EC2 host blueprint is constrained to have a maximum of 5 host resources. It will never exceed this amount.
When the working copy blueprint is leased against, the following often happens:
- When the working copy blueprint needs to allocate a resource, it acquires a lease from the host blueprint and then saves the resource ID of that lease.
- All future leases against that working copy are forced to lease against that host resource ID. This is because the cache provided by the working copy is host-specific.
This causes a few problems:
- If there are no constraints on the working copy (i.e. it is allowed to take as many leases per resource as it wants), then the creation of those leases will cause the host resource to become overleased (beyond what would normally be desired). For example, if the host blueprint is configured for an ideal of 5 leases per resource, with a maximum of 5 resources, the working copy will bypass these settings and you'll end up with 25 leases on a single resource (instead of 5 leases on each of 5 resources, with each resource also having a working copy lease). The sketch after this list walks through these numbers.
- If there are constraints on the working copy, then it has no way to enforce host uniqueness. When it goes to acquire a host lease as part of the working copy resource allocation, there's no guarantee this will result in a new host being created. I could force it to always allocate a new resource based on a parameter provided in the lease attributes (in the same way that the resourceID and blueprintPHID lease attributes perform filtering in the latest patches), but at that point it feels like the working copy blueprint has too much control over the allocation behaviour.
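As a rough illustration of the first problem, here is a small simulation. It is plain Python, not the real Drydock allocator; the spreading heuristic is only an approximation of the "ideal leases per resource" behaviour, and the numbers match the 5x5 example above.

```python
# Hypothetical simulation (not Drydock's real allocator): show how pinning
# every working-copy lease to a single saved host resource ID bypasses the
# host blueprint's own limits. Numbers match the example above.

MAX_HOST_RESOURCES = 5
IDEAL_LEASES_PER_HOST = 5

def allocate_with_spread(lease_count):
    """What the host blueprint would do on its own: spread leases across
    hosts, creating new hosts until it hits its resource limit."""
    hosts = [0]
    for _ in range(lease_count):
        target = min(range(len(hosts)), key=lambda i: hosts[i])
        if hosts[target] >= IDEAL_LEASES_PER_HOST and len(hosts) < MAX_HOST_RESOURCES:
            hosts.append(0)
            target = len(hosts) - 1
        hosts[target] += 1
    return hosts

def allocate_pinned(lease_count, pinned_host=0):
    """What happens when the working copy saves one host resource ID and
    forces every later lease onto it: the host limits never come into play."""
    hosts = [0]
    for _ in range(lease_count):
        hosts[pinned_host] += 1
    return hosts

if __name__ == "__main__":
    print(allocate_with_spread(25))  # [5, 5, 5, 5, 5]
    print(allocate_pinned(25))       # [25]
```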
Basically, I don't know how we're going to solve the issue where:
A resource on one blueprint has a 1:1 mapping to a resource on another blueprint, where either or both of the blueprints have resource constraints.
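Roughly, the shape of the problem in code (a hypothetical sketch, not Drydock code; the Blueprint class, its limits, and allocate_pair are all made up for illustration):

```python
# Hypothetical sketch of the 1:1 mapping constraint (not Drydock code).
# A "working copy" resource must be paired with exactly one "host" resource,
# and both blueprints have their own caps. The open question is where the
# pairing logic lives: neither blueprint alone sees both sets of limits.

from dataclasses import dataclass, field

@dataclass
class Blueprint:
    name: str
    max_resources: int
    resources: list = field(default_factory=list)

    def allocate(self):
        if len(self.resources) >= self.max_resources:
            raise RuntimeError(f"{self.name}: resource limit reached")
        rid = f"{self.name}-{len(self.resources) + 1}"
        self.resources.append(rid)
        return rid

def allocate_pair(host_bp, copy_bp, pairs):
    # Naive coordination: allocate a host, then a working copy, and record
    # the 1:1 pairing. If the second allocation fails, the first resource
    # has already been created; unwinding it (or reserving both up front)
    # is exactly the coordination that has no obvious home today.
    host = host_bp.allocate()
    copy = copy_bp.allocate()
    pairs[copy] = host
    return copy, host

if __name__ == "__main__":
    hosts = Blueprint("host", max_resources=5)
    copies = Blueprint("working-copy", max_resources=5)
    pairs = {}
    for _ in range(5):
        print(allocate_pair(hosts, copies, pairs))
    # A sixth pair would fail on whichever blueprint hits its limit first.
```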
@epriestley do you have any suggestions on how we might architect Drydock to solve this issue?