We had 80 people connect to a stream on our Nimble streaming server, but the live stream stats showed 500-700 connections, which caused other connected clients to drop out and have issues with the stream. We definitely did not have 500 people watching, but there were loads of connections, almost like phantom connections. At the same time the bandwidth showed only 20 Mbps overall; with 700 viewers we would expect consumption of around 2.5 Gbps.
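As a rough sanity check (a sketch only; it assumes every viewer pulls the full 4 Mbps ingest-rate rendition, while lower ABR renditions would reduce the figure somewhat), the egress that 700 real viewers would generate:

```python
# Expected egress if the 700 reported connections were real viewers.
# Assumption: each viewer pulls the top rendition at ~4 Mbps (the RTMP ingest rate).
connections = 700
bitrate_mbps = 4
expected_gbps = connections * bitrate_mbps / 1000
print(f"~{expected_gbps:.1f} Gbps expected vs ~0.02 Gbps observed")  # ~2.8 Gbps
```

The huge gap between expected and observed bandwidth is what suggests the connections were short-lived rather than real viewing sessions.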
Nothing in the logs seems to suggest why this would have happened, but it caused big issues for any connected peers.
Server is on Azure running Ubuntu 14
Server resources are 16CPU and 32GB RAM
RTMP stream was sent to Nimble at 4Mbps
Nimble runs a transcoder creating two lower-resolution renditions, which are combined into an ABR stream for the player
SLDP player is embedded into our website and has always run the stream perfectly with up to 500 participants before.
Would really appreciate some insight into why this happened. The connection count just skyrocketed and that seemed to be the issue; then 20 minutes later all was fine, like it never happened!
It can happen if clients reconnect to Nimble multiple times, so the reported connection count is increased but the average playback session duration is low. Are there any errors in nimble.log, e.g. "client is too slow"?
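A quick way to count such errors, shown here on a sample excerpt (in production you would read the real log file, typically /var/log/nimble/nimble.log on Ubuntu; adjust the path for your install):

```python
# Count error types in a Nimble log; demonstrated on a sample excerpt.
sample = """\
[2021-09-15 09:34:43 P2607-T2632] [wsls1] E: client s=66 ip='x.x.x.x' stream='nimbleubuntu/eventname_abr' is too slow
[2021-09-15 09:19:22 P2607-T2632] [wsls1] E: failed to SSL_shutdown ws_client s=73 (SSL_errno=5)
[2021-09-15 09:07:20 P2607-T2632] [wsls1] E: failed to SSL_shutdown ws_client s=76 (SSL_errno=1)
"""
lines = sample.splitlines()
too_slow = sum("is too slow" in ln for ln in lines)
ssl_errors = sum("failed to SSL_shutdown" in ln for ln in lines)
print(too_slow, ssl_errors)  # 1 2
```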
Hi Max, thanks for your reply. Is it possible for 60 clients to reconnect so many times in quick succession that 500-700 connections show up in the dashboard? And why would this negatively impact the viewing experience for the rest of the viewers? It killed the stream, which went black for many of those 60 people. The server is pretty big and has a maximum constant bandwidth limit of 12500 Mbps.
We did actually get a few; here are some of them (IPs etc. blanked for security reasons):
[2021-09-15 09:34:43 P2607-T2632] [wsls1] E: client s=66 ip='126.96.36.1999' stream='nimbleubuntu/eventname_abr' is too slow 
[2021-09-15 09:19:22 P2607-T2632] [wsls1] E: failed to SSL_shutdown ws_client s=73 (SSL_errno=5 errno=0 e='error:00000000:lib(0):func(0):reason(0)')
[2021-09-15 09:07:20 P2607-T2632] [wsls1] E: failed to SSL_shutdown ws_client s=76 (SSL_errno=1 e='error:140E0197:SSL routines:SSL_shutdown:shutdown while in init')
[2021-09-15 08:37:30 P2607-T2632] [wsls1] E: viewer tries to play invalid stream
WMSPanel aggregates stats over 30-second intervals, so it is possible for 60 clients to make this number of re-connections, but re-connections themselves should not affect other viewers.
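To see how 60 clients could produce those numbers, here is a back-of-the-envelope sketch (the ~3-second retry interval is an assumption for illustration, not a measured value):

```python
clients = 60
retry_interval_s = 3   # assumed: each stalled player retries every ~3 seconds
window_s = 30          # WMSPanel aggregation window
connections_per_window = clients * window_s // retry_interval_s
print(connections_per_window)  # 600, within the observed 500-700 range
```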
Maybe there was some temporary network issue on the server preventing viewers from downloading the stream at an appropriate speed, causing re-connections?
Hi Max, interesting. Have there been reports of this before? There is nothing in the Azure logs for our Nimble virtual machine to suggest any network slowdown, especially considering the enormous bandwidth allocated to that VM. We do use Nimble to transcode the stream into two other resolutions for ABR delivery, but if one rendition goes down, as long as it is not the main stream, playback should just fail over to the available sources.
To describe the issue in more detail: the dashboard showed a huge number of connections, and during this time people's SLDP players went black. It appeared as if the stream was running, but it was just black. There was nothing wrong with the RTMP feed we were sending into Nimble, so I really want to get to the bottom of it, or at least know how to avoid it in the future. I am guessing a multi-server setup with load balancing is best, but in the meantime is there anything else you can think of?
Thank you James
We had similar reports about increased connection counts; previously they were caused by network problems on the servers triggering client re-connections after "too slow" errors (so the connection count increases while bandwidth and average play time decrease; check the daily and duration stats reports for your server).
The issue in your case may be different, e.g. if there were problems with transcoding and players disconnected for some other reason. I'd suggest playing the stream via HLS if the issue happens again and checking the download speed and/or errors in the Network tab of the browser.
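If it happens again, a minimal download-speed probe along these lines could complement the browser's Network tab (the URL in the comment is a placeholder; substitute your own server and stream name):

```python
import time
import urllib.request

def probe_mbps(url: str) -> float:
    """Download a URL once and return the effective speed in Mbps."""
    start = time.monotonic()
    data = urllib.request.urlopen(url, timeout=10).read()
    elapsed = max(time.monotonic() - start, 1e-9)  # guard against zero elapsed time
    return len(data) * 8 / elapsed / 1e6

# Example (placeholder URL):
# print(probe_mbps("https://your-server/nimbleubuntu/eventname_abr/playlist.m3u8"))
```

A sustained result well below the rendition bitrate would point at a network bottleneck on the delivery path.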
Load balancing should help if network speed was the bottleneck.
Hi Max, thank you for your reply. What do you mean by servers causing client reconnections, and how would that happen? Is a "too slow" error normal?
I have checked our duration stats. For a normal event we would have maybe 1000 connections for 100 people over the course of a couple of hours, but for this event we had an unreal number; see below:
Connections per duration
< 5s 32348
< 10s 1005
< 15s 483
< 20s 161
< 25s 69
< 30s 44
< 5m 281
< 15m 33
< 30m 42
< 1h 49
< 2h 7
< 3h 1
> 3h 2
A total of 34525 connections in 2 hours. What would be the most likely culprit for this many connections being logged?
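For what it's worth, the buckets above do sum to the stated total, and the striking part is that roughly 94% of sessions lasted under five seconds, which is the signature of a reconnect storm rather than real viewing:

```python
# Duration buckets from the report above; verify the total and the share
# of ultra-short sessions.
buckets = [32348, 1005, 483, 161, 69, 44, 281, 33, 42, 49, 7, 1, 2]
total = sum(buckets)
under_5s = buckets[0] / total
print(total, f"{under_5s:.1%}")  # 34525, 93.7%
```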
Surely a 16-CPU, 32 GB RAM server would be able to cope with a scenario like this. Sorry, I know it is a lot of info; I am just really trying to pinpoint the cause. Could a user's firewall mess with Nimble at all, request-wise?
Thank you James
I'm not sure what the culprit is; unfortunately it is not possible to determine the exact issue from this description. It could be insufficient outgoing bandwidth, or some issue with the stream causing players to re-connect. Try recording event streams to DVR so you can check later whether the stream itself had any problems.
I guess a user's firewall should not be the problem causing a player to re-connect to the server.