# help
b
Hi everyone! Following the advice found in the WireMock README.md on GitHub, I'm looking for some help here, trying to figure out why WireMock (apparently) drops connections whenever it's under a relatively heavy load. You'll find more details further down in this message thread, as I don't want to pollute the #help channel 😇
I've written a small proof-of-concept API with Spring MVC + WebClient, and my goal is to load test it in several situations. This POC relies on calls made to a WireMock third-party API. In my initial nominal case, each call to the POC API triggers 10 sequential calls to the WireMock backend. In that context, everything works fine. In my second test, I'm trying to increase the throughput by performing the 10 calls in parallel. In that case, the POC complains several times about
java.net.SocketException: Connection reset
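To illustrate, the parallel variant of the fan-out looks roughly like this (a simplified sketch, not the actual POC code; the class name and the /backend/{id} path are made up for illustration):

import java.util.List;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;

public class ParallelFanOutSketch {
    public static void main(String[] args) {
        // WebClient pointing at the local WireMock instance
        WebClient client = WebClient.create("http://localhost:8081");

        // flatMap makes the 10 backend calls run concurrently instead of one after the other
        List<String> responses = Flux.range(0, 10)
                .flatMap(i -> client.get()
                        .uri("/backend/{id}", i)   // hypothetical stubbed path
                        .retrieve()
                        .bodyToMono(String.class))
                .collectList()
                .block();

        System.out.println(responses.size() + " responses received");
    }
}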
So I guess that WireMock drops some of the connections, but I can't figure out why, and nothing is logged about it so far. Note that I'm starting WireMock with the following options:
--port 8081 --no-request-journal --disable-request-logging --async-response-enabled=true --verbose=true
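(i.e. the standalone jar started with something like: java -jar ~/Downloads/wiremock-standalone-3.4.1.jar --port 8081 --no-request-journal --disable-request-logging --async-response-enabled=true --verbose=true)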
Also note that whenever I remove the
--disable-request-logging
option, the trouble doesn't occur 🤔
I tried to git clone the WireMock project, build it and debug it in my favorite IDE, but I can't step into the Jetty layers because that library seems to be embedded after a package-renaming step in the build process 😛
In such a situation, my first question would be: is there a way to have WireMock log whenever it drops connections?
I don't know which techniques the Jetty library uses under the hood, so trying to play with the WireMock parameters related to it is puzzling me right now.
t
Hi Benoit, can you share your test project?
b
Hey Tom !
I'm working in one of our company's private GitHub repos, so it's not that straightforward.
I can take some time to export it to a public repo if necessary.
t
That would be very helpful. With these kinds of stability/performance issues I find it's generally impossible to diagnose based on bits of second-hand info. But if I've got something I can run that demonstrates the issue, I can usually find the cause.
Are you running the latest version of WireMock?
b
I've been running the standalone version since Monday, downloaded from the WireMock website: ~/Downloads/wiremock-standalone-3.4.1.jar. And this morning I've git cloned and built the latest version from the GitHub repo.
Is there any way to activate some Jetty observability options?
t
Are you just running it on your laptop, or is it deployed somewhere?
b
I’m running a local Prometheus server
t
--print-all-network-traffic
at startup will give you a reasonable amount of detail
b
Fully local run, as I don't want to bother my company's API Management layer, even the preproduction one.
t
OK, so the caller and WireMock are communicating over your
localhost
?
b
To be honest I'm not really good on networking subjects: sometimes I get confused by the different behavior between using localhost and 127.0.0.1 ... 🫥 But yes, I've configured everything over localhost as much as I can.
I've dropped the WireMock Docker image: I initially load tested it to check whether it could be a limiting factor, and I noticed that my colima setup doesn't handle heavy network loads well.
t
The main thing I wanted to establish is that you're not connecting over e.g. a Docker network or a VM environment's network, which it sounds like you're not. The other thing off the top of my head to look at is the underlying HTTP client being used by Spring. There have been some in the past that exhibited the kind of problem you've described under load - older versions of Netty for instance, although I've not seen this for a while.
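If you want to rule that out, one option is to pin the connector explicitly so you know exactly which client is in play. A rough sketch, assuming Reactor Netty is on your classpath (the class name and base URL are just illustrative):

import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.netty.http.client.HttpClient;

public class ExplicitConnectorSketch {
    // Build a WebClient with the underlying HTTP client pinned explicitly,
    // rather than relying on whatever connector Spring auto-detects on the classpath
    public static WebClient localWireMockClient() {
        return WebClient.builder()
                .baseUrl("http://localhost:8081")
                .clientConnector(new ReactorClientHttpConnector(HttpClient.create()))
                .build();
    }
}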
Might be worth you increasing the container threads e.g.
--container-threads=200
b
Ok, I’ll try that also
On the other hand, simply googling "jetty" & "connection reset", I came across this SO entry: https://stackoverflow.com/questions/73225548/jetty-connection-reset-timeouts-alerting Since then, I've been wondering what really happens whenever there is a huge number of incoming requests. Could they stack up to the point where a limit is reached and the request and its corresponding connection get dropped?
t
I’d be surprised if it was this issue based on the load levels you’ve been using.
b
Same trouble when simply adding the
--container-threads=200
option to the downloaded wiremock-standalone-3.4.1.jar
What is really surprising is that adding the
--print-all-network-traffic
also makes the trouble disappear. Much like when I remove the
--disable-request-logging
Apparently, console logging introduces some kind of CPU delay somewhere that makes either my POC or WireMock behave differently and prevents reaching the connection reset situation.
t
That’s irritating. I use
tcpdump
in cases like this
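For example, something like sudo tcpdump -i lo0 -w wiremock.pcap port 8081 during a failing run (lo0 on macOS, lo on Linux; the capture file name is just an example) will let you see afterwards which side is actually sending the RST.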
b
Yep, I've used it once or twice for really complex situations, but as I'm really not experienced with the OSI layers, it's always a pain in the ass for me to understand what Wireshark is spitting at me whenever I need to analyse a capture 😄 I'm more into deep-dive debugging in general. But in this particular situation I don't know where to place breakpoints, especially since it seems to be impossible for IntelliJ to do so in the Jetty layers ... I guess it's due to Jetty being embedded into the WireMock jar, but I'm not 100% certain. Anyway, I'm working on building an anonymized version of my POC to send to you. It will (hopefully) be more straightforward.
@Tom: I've finished creating a reproduction repository: https://github.com/Mumeii/wiremockConResetSample/tree/main
I hope that the README.md is clear enough to make it easy to reproduce the trouble
Note that the codebase uses the latest Java 21 features.
Hi @Tom, hope you're doing well. Did you have the opportunity to have a look at the reproduction sample?
t
I'm on a work trip at the moment so haven't had a chance to look but I think one of my team will pick it up
b
Hi, noted 🙂 Thanks for the feedback. Enjoy your trip!
o
Hi, I am having trouble with high memory usage in performance testing. OK, this may be normal, but what is weird is that memory usage goes up to a value (for example 3500 MB) and never drops back down, even after the performance test is finished or there are no incoming requests to the server. It keeps the same memory usage all the time. In this case, I need to restart the server every time. And this is a manual process and vulnerable to connection issues when it is forgotten. Could you please help? What can I do?
b
@Orçun BALCILAR: Hi. Things might get better if you use the WireMock parameters I mentioned in my second message in this thread?
If I remember right, there is also an issue on the WireMock GitHub repository where someone asks pretty much the same kind of question as you.
(or a Google group entry ?)
o
I've tried these 3 parameters -> --no-request-journal --disable-request-logging --async-response-enabled=true many times. But they didn't help. Do you have the link?
b
I have the following ones on that matter: https://github.com/Rbillon59/wiremock-loadtest https://groups.google.com/g/wiremock-user/c/2TsjGAIU350 If none of them help, only memory profiling could help you spot what is consuming so much. But that would mean git cloning the whole WireMock project, managing to get it building, and running it with a proper memory profiling tool, such as the one from IntelliJ, or MAT, which can quickly get quite complex.
o
Thank you @Benoit LEFEVRE -CAMPUS-. I've figured out that the memory consumption goes up only once and stays at that level whether the load test is run again or not. So perhaps it is Jetty/threads keeping the heap memory.
b
@Orçun BALCILAR: I can't remember during which technical presentation I heard about it, but either Netty or Jetty does very low-level memory management on its own, i.e. requesting memory directly from low-level JRE APIs (some that shouldn't be used anymore), and that can lead to strange memory behavior. But it's pure speculation on my part, as I'm not familiar at all with either of those two technologies anyway. I guess @Tom should be able to tell us more about it 🙂
t
I wouldn’t like to speculate without some data. It sounds like it’s not any of the obvious things like an ever-growing request journal. I suggest grabbing a thread dump and a heap dump while the server is idle.
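With the standalone jar that's just the standard JDK tooling, e.g. jcmd <pid> Thread.print for the thread dump and jcmd <pid> GC.heap_dump /tmp/wiremock.hprof for the heap dump (the output path is just an example).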
b
Yup, using MAT on a dump should be a good first step to understand what is clogging the memory that way.
o
I will create a repo to reproduce the issue and will share it with you. If I am able to analyze the memory, I will share the results too.
@Tom Is it expected that memory usage goes up from the startup value (110 MB) to 3500 MB (for example) due to in-memory requests in a load test? And would you expect memory to stay at that level (3500 MB)?
t
What memory measurement is this? Is it the operating system’s view of memory committed to the JVM process or the JVM’s measure of heap size, or something else?
o
The operating system's view of memory committed to the JVM process, in the Task Manager window.
t
Java doesn’t return memory to the OS by default so this isn’t surprising. I suspect most of that memory is unallocated heap inside the JVM, but you can confirm this via a thread/heap dump. If you want it to consume less OS memory you can set a lower -Xmx value on JVM startup. You probably don’t need as much as 3.5G and it’ll just mean you get smaller, more frequent GCs.
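For example, launching with something like java -Xmx512m -jar wiremock-standalone.jar <your usual options> caps the heap at 512 MB (512m is just an illustrative figure; size it to your actual working set).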
o
Ah sorry, I didn't know that Java doesn't return memory to the OS by default, so this isn't surprising. Thank you for the answer. Another point I came across is that the average response time is much higher (500 ms) when I use the custom handlebars helpers I coded together with the built-in handlebars. But when I switch to a response transformer, the average response time gets much lower (for example 2 ms). So I preferred the response transformer to get a low response time in performance testing.
t
So your custom code was faster, or the built-in response templating?
o
my custom response transformer code ( -> public class ...ResponseTransformer implements ResponseDefinitionTransformerV2) is much faster than my custom response templating ( -> public class XmlPartTemplate implements TemplateHelperProviderExtension)
t
Are you running the latest version of WireMock? There was a PR quite recently from one of our team that significantly improved the performance of larger templates.
o
Yes, WireMock 3.5.2.
b
Hi @Tom. How are you doing? I understand that the problem I've reported here is far, far from being a top priority, but on the other hand I'd rather not lose the (tiny) effort I've invested in creating the reproduction case. Do you think it would be a good alternative to turn it into an issue on the WireMock GitHub?
o
Hi @Benoit LEFEVRE -CAMPUS- From my experience, I've found that fine-tuning is a key point in scaling the WireMock server and achieving good performance. What I mean is that you need to fine-tune the Jetty connector threads, acceptors and queue size parameters depending on what TPS you are planning to generate. If you like, I can share a configuration example.
b
@Orçun BALCILAR: sure, it could help to see how to tune it the right way. But a question first: did you also encounter the
Connection reset
problem, and solve it by tuning the Jetty parameters? Or are we maybe talking about distinct problems? I'm not certain anymore, as the subject is starting to get a bit old for my tiny memory 🙂
o
I am not sure, but most probably I did. I can reproduce it and make sure whether I am getting it or not. For a load of about 1500 TPS, I've used a configuration of 400 connector threads, 200 acceptors and a queue size of 1000, along with disabling request logging. As extra info, I also encountered a performance issue with custom and built-in handlebars combined. I needed to replace that with a custom whole-response transformer extension to meet my average response time expectations and a 100 percent success rate.
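With the standalone jar, I believe that maps onto options along the lines of --container-threads=400 --jetty-acceptor-threads=200 --jetty-accept-queue-size=1000 --disable-request-logging, though double-check the exact flag names against the CLI docs for your version.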