# help
c
Hey folks 🙂 Very much new to WireMock. Really nice product and very powerful. I had a search on this Slack but didn’t get many hits. Is anyone using WireMock with multiple pods on Kubernetes? Just wondering if there are any documented strategies for sharing stubs/mappings across the multiple pods (other than baking them into the image itself)?
a
Hey! I am also learning here! I think your best bet would be to use Docker’s bind volumes feature. Then each container could use the same mappings folder in one location. Would something like this work for you?
t
Hi @Chris Connor, there are a few ways you can approach this. WireMock will read its stubs from the filesystem, and if you don’t need to change them very often at runtime you can mount a filesystem into all your WireMock pods and tell WireMock to use this as its root. If you do need to change stubs without bouncing the pods, you can hit the `/__admin/reset` endpoint on each pod and it’ll reload from the filesystem.
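Not an official recipe, but a minimal sketch of that reload step, assuming the stubs live on a volume mounted into every pod: a test harness could POST to `/__admin/reset` on each pod using plain JDK HTTP. The pod addresses below are made up; in practice you’d resolve them from a headless Service or similar.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

public class ReloadAllPods {
    public static void main(String[] args) throws Exception {
        // Hypothetical pod addresses - in a real cluster you'd look these up
        // from a headless Service rather than hard-coding them.
        List<String> pods = List.of(
                "http://wiremock-0.wiremock:8080",
                "http://wiremock-1.wiremock:8080");

        HttpClient client = HttpClient.newHttpClient();
        for (String pod : pods) {
            // POST /__admin/reset clears runtime state; per the note above,
            // each pod then reloads the stubs from its mounted files directory.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(pod + "/__admin/reset"))
                    .POST(HttpRequest.BodyPublishers.noBody())
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(pod + " -> " + response.statusCode());
        }
    }
}
```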
Alternatively, if you really want to get your hands dirty, you can write an implementation of the `Stores` abstraction that backs onto your favourite cache/database/clustering solution.
c
Awesome Tom, thanks. I’ll have a read of the `Stores` stuff - that sounds like what we might be after 🙌
t
I’ll warn that the interface is still in beta at the moment, so there may be the odd breaking change while we’re still in v3.x.
c
Ah ok, understood. I think for what we’re looking at initially, baking our stubs into the image will be fine, which means each pod will have them anyway. Going forward, it would be nice to create/delete stubs on the fly from our test harness - which is where the `Stores` stuff would be important.
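For reference, creating and deleting a stub on the fly from a harness might look roughly like this with WireMock’s Java admin client. The pod address is invented, and with several pods you’d have to repeat the calls against each one - which is exactly the gap a shared `Stores` backend would close.

```java
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

import com.github.tomakehurst.wiremock.client.WireMock;
import com.github.tomakehurst.wiremock.stubbing.StubMapping;

public class HarnessStubs {
    public static void main(String[] args) {
        // Hypothetical address of one WireMock pod.
        WireMock pod = new WireMock("wiremock-0.wiremock", 8080);

        // Create a stub at runtime from the test harness...
        StubMapping stub = pod.register(
                get(urlEqualTo("/downstream/orders"))
                        .willReturn(aResponse()
                                .withStatus(200)
                                .withBody("{\"orders\":[]}")));

        // ...and remove it again once the scenario is done.
        pod.removeStubMapping(stub);
    }
}
```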
t
Out of curiosity - is your desire to run it over multiple pods about scaling for load, HA or something else?
c
Yeah, mostly load. We’re assessing whether we could use WireMock as a sort of dead end for performance testing, using WireMock to eliminate the 3rd parties downstream of our system.
t
What kind of throughput do you need it to support? Asking because it’ll go quite a long way on a single pod if tuned correctly and this is generally a lot less hassle than trying to scale out.
c
In the short term, honestly not a lot. Our ultimate goal would be around 500 requests per second. (Again probably not a massive load for big companies)
t
You should be able to comfortably manage that on a single 2-4 core pod provided your mock API isn’t using lots of complicated matching.
c
Ahh really? That’s promising 🙂 I literally only got it stood up in our k8s cluster yesterday so hadn’t thrown any sort of load at it. That would be even better if that was the case.
t
We run our cloud hosts on Fargate/ECS with 4 cores and for our simpler benchmarking tests we can get about 2k req/s out of those.
c
oh wow - that’s pretty cool! 2k on a single pod??
t
Some things you’ll need to do tuning-wise (rough sketch after the list):
1. Limit or disable the request journal (so it doesn’t gradually fill the entire heap with recorded request events).
2. Increase the container thread pool to something like 4x your core count.
3. If you’re using added delays, enable async responses and set that pool to the same size as the container pool.
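Not gospel, but roughly what those three knobs look like if you run WireMock embedded via its Java API (the standalone JAR exposes equivalent command-line options); the 4-core figure is just an assumption to match the sizing discussed above.

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.core.WireMockConfiguration;

public class TunedWireMock {
    public static void main(String[] args) {
        int cores = 4; // assumed pod CPU allocation

        WireMockConfiguration config = WireMockConfiguration.options()
                .port(8080)
                // 1. Disable the request journal so recorded requests can't
                //    eat the whole heap over a long load test.
                .disableRequestJournal()
                // 2. Container (Jetty) thread pool at ~4x the core count.
                .containerThreads(cores * 4)
                // 3. If stubs use added delays, serve responses asynchronously
                //    with a pool the same size as the container pool.
                .asynchronousResponseEnabled(true)
                .asynchronousResponseThreads(cores * 4);

        new WireMockServer(config).start();
    }
}
```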
c
amazing - this was going to be my next question ☝️ 😄
t
Yeah, it serves everything straight out of memory by default, so it can be very fast. Obviously YMMV, as complex matching, large response bodies, templating and gzip all impose a significant processing overhead.
❤️ 1
c
Yeah absolutely, that makes sense if it’s having to work harder 👌
Really appreciate this info, Tom; definitely gives me something promising to feed back 🙌
t
No probs, good luck with your testing!
c
Thanks again 🙏
@Tom…sorry, me again - any recommended CPU/memory allocations?
t
You might need to experiment a bit based on how complex your mock APIs are, but I’d say 2 CPUs + 2GB of RAM is probably sufficient for your throughput needs. You may find you can get away with much less RAM, so I suggest monitoring this while you’re testing.
c
Awesome, thanks again Tom 🙌
👍 1