#help

Chris Connor

01/09/2024, 5:11 PM
Hey folks 🙂 Very much new to WireMock. Really nice product and very powerful. I had a search on this Slack but didn’t get many hits. Is anyone using WireMock with multiple pods on Kubernetes? Just wondering if there are any documented strategies in terms of sharing stubs/mappings across the multiple pods (other than baking them into the image itself)?

Andrew Ripley

01/09/2024, 11:27 PM
Hey! I am also learning here! I think your best bet would be to use Docker’s bind volumes feature. Then each container could use the same mappings folder in one location. Would something like this work for you?

Tom

01/10/2024, 9:33 AM
Hi @Chris Connor, there are a few ways you can approach this. WireMock will read its stubs from the filesystem and if you don’t need to change them very often at runtime you can mount a filesystem into all your WireMock pods and tell WireMock to use this as its root. If you do need to change stubs without bouncing the pods you can hit the /__admin/reset endpoint on each pod and it’ll reload from the filesystem.
Alternatively, if you really want to get your hands dirty you can write an implementation of the Stores abstraction that backs onto your favourite cache/database/clustering solution.
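A minimal sketch of the reset-every-pod approach, assuming the standard JDK HTTP client; the pod addresses here are hypothetical (in practice you’d resolve them from a headless Service or the Kubernetes API):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

public class ReloadStubsOnAllPods {
    public static void main(String[] args) throws Exception {
        // Hypothetical per-pod addresses - e.g. resolved via a headless Service.
        List<String> pods = List.of("http://wiremock-0:8080", "http://wiremock-1:8080");

        HttpClient client = HttpClient.newHttpClient();
        for (String pod : pods) {
            // POST /__admin/reset asks each WireMock instance to reset and reload its stubs.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(pod + "/__admin/reset"))
                    .POST(HttpRequest.BodyPublishers.noBody())
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(pod + " -> " + response.statusCode());
        }
    }
}
```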

Chris Connor

01/10/2024, 9:36 AM
Awesome Tom, thanks. I’ll have a read of the Stores stuff - that sounds like what we might be after 🙌

Tom

01/10/2024, 9:37 AM
I’ll warn that the interface is still in beta at the moment so there may be the odd breaking change while we’re still in v3.x

Chris Connor

01/10/2024, 9:38 AM
Ah ok understood. I think for what we’re looking at initially, baking our stubs into the image will be fine which means each pod will have them anyway. Going forward, it would be nice to create / delete on the fly from our test harness - which is where the Stores stuff would be important.
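As a rough illustration of the create/delete-on-the-fly idea, the WireMock Java client can be pointed at a remote instance from a test harness; the host and port below are placeholders:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;

import com.github.tomakehurst.wiremock.client.WireMock;
import com.github.tomakehurst.wiremock.stubbing.StubMapping;

public class RemoteStubExample {
    public static void main(String[] args) {
        // Placeholder address - point this at the WireMock pod or Service.
        WireMock wireMock = new WireMock("wiremock.test.svc.cluster.local", 8080);

        // Create a stub on the fly from the test harness...
        StubMapping stub = wireMock.register(
                get(urlEqualTo("/downstream/orders"))
                        .willReturn(okJson("{\"orders\": []}")));

        // ...and remove it again once the test has finished.
        wireMock.removeStubMapping(stub);
    }
}
```

Note that a stub registered this way only lands on whichever pod the admin request happens to hit, which is why the shared-filesystem or Stores approaches above matter when running multiple replicas.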

Tom

01/10/2024, 9:40 AM
Out of curiosity - is your desire to run it over multiple pods about scaling for load, HA or something else?

Chris Connor

01/10/2024, 9:42 AM
Yeah mostly load. We’re assessing if we could use WireMock as a sort of dead end for performance testing; using WireMock to eliminate the 3rd parties downstream of our system.

Tom

01/10/2024, 9:44 AM
What kind of throughput do you need it to support? Asking because it’ll go quite a long way on a single pod if tuned correctly and this is generally a lot less hassle than trying to scale out.

Chris Connor

01/10/2024, 9:45 AM
In the short term, honestly not a lot. Our ultimate goal would be around 500 requests per second. (Again probably not a massive load for big companies)

Tom

01/10/2024, 9:46 AM
You should be able to comfortably manage that on a single 2-4 core pod provided your mock API isn’t using lots of complicated matching.

Chris Connor

01/10/2024, 9:47 AM
Ahh really? That’s promising 🙂 I literally only got it stood up in our k8s cluster yesterday so hadn’t thrown any sort of load at it. That would be even better if that was the case.

Tom

01/10/2024, 9:49 AM
We run our cloud hosts on Fargate/ECS with 4 cores and for our simpler benchmarking tests we can get about 2k req/s out of those.

Chris Connor

01/10/2024, 9:50 AM
oh wow - that’s pretty cool! 2k on single pods??

Tom

01/10/2024, 9:50 AM
Some things you’ll need to do tuning-wise:
1. Limit or disable the request journal (so it doesn’t leak the entire heap via log events)
2. Increase the container thread pool to something like 4x your cores
3. If you’re using added delays, enable async and set the pool to the same size as the container pool
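A sketch of how those three settings might look when starting the server programmatically (the standalone JAR exposes equivalent command-line flags); the numbers are illustrative and assume a 4-core pod:

```java
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

import com.github.tomakehurst.wiremock.WireMockServer;

public class TunedWireMock {
    public static void main(String[] args) {
        WireMockServer server = new WireMockServer(options()
                .port(8080)
                // 1. Cap the request journal so it can't grow without bound
                //    (or switch it off entirely with disableRequestJournal()).
                .maxRequestJournalEntries(1000)
                // 2. Container thread pool at roughly 4x the core count (4 cores assumed).
                .containerThreads(16)
                // 3. Serve delayed responses asynchronously, with a pool the same size
                //    as the container pool.
                .asynchronousResponseEnabled(true)
                .asynchronousResponseThreads(16));
        server.start();
    }
}
```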

Chris Connor

01/10/2024, 9:51 AM
amazing - this was going to be my next question ☝️ 😄

Tom

01/10/2024, 9:52 AM
Yeah, it serves everything straight out of memory by default, so it can be very fast. Obviously YMMV, as complex matching, large response bodies, templating and gzip all impose a significant processing overhead.
❤️ 1

Chris Connor

01/10/2024, 9:54 AM
Yeah absolutely, that makes sense if it’s having to work harder 👌
Really appreciate this info Tom; definitely gives me some promising info to feed back 🙌

Tom

01/10/2024, 9:55 AM
No probs, good luck with your testing!

Chris Connor

01/10/2024, 9:55 AM
Thanks again 🙏
@Tom… sorry, me again - any recommended CPU/memory allocations?

Tom

01/11/2024, 12:37 PM
You might need to experiment a bit based on how complex your mock APIs are, but I’d say 2 CPUs + 2GB of RAM is probably sufficient for your throughput needs. You may find you can get away with much less RAM so I suggest monitoring this while you’re testing.

Chris Connor

01/11/2024, 3:05 PM
Awesome, thanks again Tom 🙌
👍 1