thedupdup Posted January 24, 2019

I have multiple servers in multiple locations, and I need to measure the interval at which WebSocket messages are received. When I connect to a low-latency server, the interval is normal (50-60 ms). But on high-latency servers, it sometimes drops to 0. I asked a similar question not too long ago, and the answer there was that the socket is buffering messages, but I find this unlikely since it only happens on high-latency servers. Here is the code responsible for handling the WebSocket:

```javascript
let startTime = Date.now();
let msAvg = null;
let prevData = null;
let receivedData = null;
let ms = 0;
let id;

ws.onmessage = function (evt) {
  prevData = receivedData;
  receivedData = JSON.parse(evt.data);

  const endTime = Date.now();
  ms = endTime - startTime; // time since the previous message
  startTime = endTime;

  // Exponentially weighted moving average of the interval.
  msAvg = msAvg === null ? ms : Math.round((msAvg * 5 + ms) / 6);

  id = receivedData.id;
};
```

`ms` is the interval in milliseconds. How can I get to the bottom of this issue?
Antriel Posted January 24, 2019

Server sends message A, then B. Message A gets lost along the way; the client receives message B, but given how TCP works, it won't be accessible yet (head-of-line blocking). TCP handles the resend and the server sends message A again. The client receives it and delivers A and B one after another to your application.

That's one very realistic option (the longer the path, the higher the chance of packet drops). But it can be anything that causes one message to arrive sooner or later relative to another: routes can change, packets can get delayed. In general, you can't depend on the rate being perfectly steady.
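To see how a retransmit shows up in the measured numbers, here's a small sketch (the `intervalStats` helper is illustrative, not from the code above). It replays the scenario: messages sent every 50 ms, the third one is dropped and retransmitted, so it and the fourth are delivered back to back; the smoothing is the same `(avg * 5 + ms) / 6` formula from the original handler.

```javascript
// Compute per-message intervals and a smoothed average from a list
// of arrival timestamps (ms), mimicking the onmessage handler above.
function intervalStats(timestamps) {
  const intervals = [];
  let avg = null;
  for (let i = 1; i < timestamps.length; i++) {
    const ms = timestamps[i] - timestamps[i - 1];
    avg = avg === null ? ms : Math.round((avg * 5 + ms) / 6);
    intervals.push(ms);
  }
  return { intervals, avg };
}

// Message 3 is lost and retransmitted: messages 3 and 4 both arrive
// at t=250, so one interval balloons to 200 and the next is 0.
const stats = intervalStats([0, 50, 250, 250, 300]);
console.log(stats.intervals); // [50, 200, 0, 50]
console.log(stats.avg);
```

The zero intervals you're seeing are exactly the back-to-back deliveries in this trace; the average stays roughly sane while individual samples swing wildly.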
thedupdup Posted January 24, 2019 Author

15 hours ago, Antriel said:
Server sends message A, then B. Message A gets lost along the way; the client receives message B, but given how TCP works, it won't be accessible yet (head-of-line blocking). [...]

That makes sense. Is there any way to disable the head-of-line blocking? I would rather lose a couple of packets than have them blocked.

EDIT: I should also note that on the highest-latency server (200 ms avg) the interval is almost always 0.
mattstyles Posted January 25, 2019

14 hours ago, thedupdup said:
That makes sense. Is there any way to disable the head-of-line blocking?

No; in-order delivery is a defining aspect of TCP. UDP doesn't have this restriction, and consequently can't guarantee message order either, but you can't use raw UDP on the web.
Antriel Posted January 25, 2019

18 hours ago, thedupdup said:
EDIT: I should also note that on the highest-latency server (200 ms avg) the interval is almost always 0.

That is a bit weird. It shouldn't happen often, maybe for at most 1% of messages unless you are on a very bad network. This leads me to think the server has Nagle's algorithm enabled: it buffers small writes until there is enough data to send out, it is usually on by default, and higher latency can make the batching more aggressive (Nagle holds small writes until the previous packet is acknowledged). It can be disabled per socket with the TCP_NODELAY option. Try looking into that.
TheAlmightyOne Posted April 2, 2019

On 1/25/2019 at 2:53 PM, mattstyles said:
UDP doesn't have this restriction, and consequently can't guarantee message order either, but you can't use raw UDP on the web.

Sure you can. If you're brave enough, you can use WebRTC data channels over UDP and build your own reliability layer to handle packet ordering, drops, etc. This also has the advantage of less per-message overhead, which in turn can mean performance improvements.
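A minimal sketch of what that looks like with the browser's `RTCDataChannel` API (the channel label `'game'` and the helper name are illustrative, and the signaling setup is omitted): `ordered: false` removes head-of-line blocking, and `maxRetransmits: 0` makes delivery best-effort, UDP-style.

```javascript
// Options for an unordered, unreliable data channel.
function dataChannelOptions() {
  return { ordered: false, maxRetransmits: 0 };
}

// In a browser (requires a full signaling exchange, not shown):
// const pc = new RTCPeerConnection();
// const channel = pc.createDataChannel('game', dataChannelOptions());
// channel.onmessage = (evt) => {
//   // Messages may arrive out of order or not at all; tag each one
//   // with a sequence number server-side and handle gaps yourself.
// };
```

With this configuration late packets are simply dropped instead of stalling everything behind them, which is the trade-off the original question was asking for.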