If you are creating a lot of connections in a short time (as is often the case with load testing), you may need to increase the size of the connection accept queue. This can be done by setting the acceptQueueSize field on the Jetty connectors. You may also need to increase the somaxconn kernel attribute to match (see below).
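For a Jetty server configured through jetty.xml, the accept queue size is set on the connector definition. The following is a sketch only: the class name matches Jetty 9's ServerConnector, and the surrounding Server configuration is assumed to exist elsewhere in the file.

```xml
<!-- Sketch of a jetty.xml connector fragment (Jetty 9 ServerConnector assumed).
     The enclosing <Configure class="org.eclipse.jetty.server.Server"> element
     and the "Server" reference are assumed to be defined elsewhere. -->
<New class="org.eclipse.jetty.server.ServerConnector">
  <Arg name="server"><Ref refid="Server"/></Arg>
  <Set name="port">8080</Set>
  <!-- Depth of the connection accept queue; 0 means use the OS default. -->
  <Set name="acceptQueueSize">256</Set>
</New>
```

In embedded-Jetty code the equivalent call is connector.setAcceptQueueSize(256) on the ServerConnector before starting the server.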
We have seen that, under certain loads, test cases can fail when they spin up a large number of client and server connections as part of the test execution. The settings below address some of the issues we have encountered, and it stands to reason that under heavy load they would benefit a server outside of the test-case environment as well.
> sudo sysctl -w kern.ipc.somaxconn=256
kern.ipc.somaxconn controls the size of the connection listening queue and typically only needs to be adjusted in high-performance server environments. The default value of 128 is more than adequate for a home/work machine and most workgroup servers. If, however, you are running a high-volume server and connections are being refused at the TCP level, then you want to increase this. It is a very tweakable setting in such a case: too high and you can hit resource problems as a large number of accepted-but-unserviced connections sit pending; too low and clients get refused connections.
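Note that somaxconn is only a ceiling: the queue actually used is the backlog the application requests in its listen() call, silently capped by the kernel at somaxconn. A minimal Python sketch of the application side of this pair (the sysctl must still be raised separately for the larger backlog to take effect):

```python
import socket

# The backlog argument to listen() is the application's requested accept-queue
# depth. The kernel silently caps it at somaxconn (kern.ipc.somaxconn on
# Mac OS X, net.core.somaxconn on Linux), so raising the sysctl alone is not
# enough -- the server must also request a larger backlog, and vice versa.
REQUESTED_BACKLOG = 256  # honored only up to the somaxconn ceiling

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 0))        # ephemeral port, just for the example
server.listen(REQUESTED_BACKLOG)     # connections beyond the queue are refused

host, port = server.getsockname()
print(f"listening on {host}:{port} with requested backlog {REQUESTED_BACKLOG}")
server.close()
```

This is why load-test clients see connection refusals at the TCP level: once the accept queue is full, further SYNs are dropped or reset before the server application ever sees them.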
From http://www.macgeekery.com/tips/configuration/mac_os_x_network_tuning_guide_revisited
> sudo /sbin/sysctl -w net.core.somaxconn=256
This is the Linux equivalent of the Mac OS X setting described above.
> sudo /sbin/sysctl -w net.core.netdev_max_backlog=3000
The net.core.netdev_max_backlog controls the size of the incoming packet queue for upper-layer (Java) processing.
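Values set with sysctl -w do not survive a reboot. To persist them on Linux they can be added to /etc/sysctl.conf (a sketch; the exact file or directory, e.g. /etc/sysctl.d/, may vary by distribution):

```
# /etc/sysctl.conf -- persisted network tuning (values from this page)
net.core.somaxconn = 256
net.core.netdev_max_backlog = 3000
```

Running `sudo sysctl -p` then applies the file without a reboot.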
These settings have allowed the developers to properly address various performance-testing aspects of jetty-client; however, they are not the only mechanism for tuning congestion in a Linux environment.
See http://fasterdata.es.net/TCP-tuning/linux.html for further discussion on tuning TCP under Linux and alternate congestion settings.