If you are creating a lot of connections in a short time (as is often the case with load testing), then you may need to increase the size of the connection accept queue. This can be done by setting the acceptQueueSize field on the Jetty connectors. You may also need to increase the somaxconn kernel attribute to match (see below).
We have seen that, under certain loads, test cases fail when they spin up a large number of client and server connections as part of the test execution. The setting below addresses some of the issues we have encountered, and it stands to reason that under heavy load it would also benefit a server outside of the test-case environment.
> sudo sysctl -w kern.ipc.somaxconn=256
kern.ipc.somaxconn controls the size of the connection listening queue and typically only needs to be adjusted in high-performance server environments. The default value of 128 is more than adequate for a home/work machine and most workgroup servers. If, however, you are running a high-volume server and connections are being refused at the TCP level, you will want to increase it. The setting is very tweakable in such a case: too high and you will run into resource problems as the kernel tries to notify the server of a large number of connections, many of which will remain pending; too low and you will get refused connections.
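Before raising the limit it can be worth confirming the current value, and note that a change made with sysctl -w lasts only until the next reboot. A minimal check on Mac OS X (assuming the stock sysctl utility) might look like:

```shell
# Show the current accept-queue limit (128 by default on most systems)
sysctl kern.ipc.somaxconn

# Raise it for the running kernel only; this does not survive a reboot,
# so it would need to be reapplied at boot (e.g. via /etc/sysctl.conf,
# where the OS version supports reading it)
sudo sysctl -w kern.ipc.somaxconn=256
```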
> sudo /sbin/sysctl -w net.core.somaxconn=256
net.core.somaxconn is the Linux equivalent of the Mac OS X setting described above.
> sudo /sbin/sysctl -w net.core.netdev_max_backlog=3000
net.core.netdev_max_backlog controls the size of the incoming packet queue for upper-layer (Java) processing.
These settings have allowed the developers to properly address various performance-testing aspects of jetty-client; however, they are not the only mechanism for tweaking congestion on a Linux environment.
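As on Mac OS X, sysctl -w only changes the running kernel. A sketch of inspecting and persisting the two Linux values (assuming a distribution that reads /etc/sysctl.conf at boot):

```shell
# Inspect the current values before changing anything
sysctl net.core.somaxconn net.core.netdev_max_backlog

# Persist across reboots by appending to /etc/sysctl.conf
echo "net.core.somaxconn = 256" | sudo tee -a /etc/sysctl.conf
echo "net.core.netdev_max_backlog = 3000" | sudo tee -a /etc/sysctl.conf

# Apply the file to the running kernel without rebooting
sudo /sbin/sysctl -p
```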
See http://fasterdata.es.net/TCP-tuning/linux.html for further discussion of tuning TCP under Linux and alternate congestion settings.