# help
n
Hello, team! I'm searching for a best-practice post, GitHub repo example, or anything similar. I'd be grateful for any advice on how to store a large number of stubs. I have a Quarkus application and 3 different APIs to mock, and each API needs 50-70 requests mocked. Not all of them are unique; some repeat but with different statuses or response bodies. The current plan is to have default mappings in JSON files, e.g. any request returns 200, and then override them in the test with `atPriority()` for specific cases, but I'm not sure whether that's the best solution or if there's something better. Thanks in advance! 🙂
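Roughly what I have in mind for a default mapping (paths and body are just for illustration) is a low-priority catch-all stored under WireMock's mappings directory, so that more specific, higher-priority stubs added from a test win (lower priority numbers match first):

```json
{
  "priority": 10,
  "request": {
    "method": "GET",
    "urlPathPattern": "/v1/order/.*"
  },
  "response": {
    "status": 200,
    "jsonBody": { "status": "OK" },
    "headers": { "Content-Type": "application/json" }
  }
}
```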
t
What you’ve described is very sensible. Having a (relatively small) common set of stubs that must always be present for basic functioning, then adding higher-priority, more specific stubs from individual test cases, is a strategy I use quite frequently.
I’d say generally it’s best to avoid having lots of long-lived stubs if possible. The nice thing about creating them in code and blowing them away at the end of the test is that there’s minimal risk of unintended behaviour when a stub you didn’t expect gets matched, and you also don’t have a big maintenance problem when the API is updated - just update the code that generates your in-test stubs.
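To sketch what that lifecycle looks like with JUnit 5 and a plain WireMockServer (class and endpoint names are made up; in Quarkus you’d often wire the server up through a QuarkusTestResourceLifecycleManager instead, but the reset-per-test idea is the same):

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.client.WireMock;
import org.junit.jupiter.api.*;

import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

class OrderServiceTest {

    // One server for the whole class; the JSON files under the mappings
    // directory are loaded once and kept as the "default" mappings.
    static final WireMockServer wireMock = new WireMockServer(options().dynamicPort());

    @BeforeAll
    static void start() {
        wireMock.start();
        WireMock.configureFor(wireMock.port());
    }

    @AfterAll
    static void stop() {
        wireMock.stop();
    }

    @BeforeEach
    void clearTestStubs() {
        // Drops every stub the previous test created in code but keeps the
        // file-based defaults, so nothing leaks between test cases.
        wireMock.resetToDefaultMappings();
    }

    @Test
    void surfacesUpstreamFailure() {
        // Short-lived, test-specific stub at higher priority than the defaults.
        stubFor(get(urlPathEqualTo("/v1/order/item"))
            .atPriority(1)
            .willReturn(aResponse().withStatus(500)));

        // ... call the application under test and assert on its behaviour ...
    }
}
```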
n
Thanks! Then I'll follow this strategy. Any hint on how to keep that set of stubs relatively small? For example (an imaginary situation, explained as briefly as I can): you have `GET /v1/order/item` and `GET /v1/order/seller`. Would you merge those stubs as `stubFor(get(urlMatching("order")).willReturn...`, so basically any GET to the `order` resource responds 200 and the JSON body gets overridden inside the test, or is it better to keep them as separate JSON files?
t
What I mean is that it’s best to keep the set of stubs that are always present small. Typically I’ll only do this for things that are used during every test’s setup or called routinely, e.g. endpoints that return lists of reference data. The temporary stubs you create for individual tests can be as abundant as you need them to be. Since they’re scoped to a single test and cleared out each time (by calling reset before each test, or using the rule/extension to do so), they’ll never get out of hand, nor will they leak into other test cases.
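So for your example, rather than one broad `urlMatching` catch-all, I’d register the two order endpoints as specific, short-lived stubs inside the tests that need them and keep only routine lookups in the always-present set. A rough sketch (URLs, bodies and method names invented):

```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;

class OrderStubs {

    // Always-present default, registered once in test setup: reference data
    // that every test needs but no individual test cares about in detail.
    static void registerDefaults() {
        stubFor(get(urlPathEqualTo("/v1/order/sellers"))
            .willReturn(okJson("[]")));
    }

    // Per-test stubs: specific URLs and bodies, created inside the test
    // itself and wiped by the reset before the next test runs.
    static void stubItemFound() {
        stubFor(get(urlPathEqualTo("/v1/order/item"))
            .willReturn(okJson("{\"id\": 42, \"name\": \"widget\"}")));
    }

    static void stubSellerOutage() {
        stubFor(get(urlPathEqualTo("/v1/order/seller"))
            .willReturn(aResponse().withStatus(503)));
    }
}
```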