From what I understood, you are letting the connections die and letting the client reconnect, while keeping track of the messages in between (correct me if I'm wrong). That is different from the EM version, in which the connections are long-running and don't block each other on read/write. I decided to simply check the size of this array and, if it grew to a maximum size, remove the connection from the connection map.
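A minimal sketch of that bounded-queue eviction, assuming a plain Hash of per-connection event arrays guarded by a mutex; `CONNECTIONS`, `MAX_QUEUE_SIZE`, and `enqueue_event` are illustrative names, not the actual code:

```ruby
require 'thread'

CONNECTIONS = {}       # connection id => array of pending events
MAX_QUEUE_SIZE = 100   # assumed cap before a dead connection is dropped
MUTEX = Mutex.new

def enqueue_event(conn_id, event)
  MUTEX.synchronize do
    queue = (CONNECTIONS[conn_id] ||= [])
    queue << event
    # If nothing ever drains this queue, assume the client is gone and
    # free the memory by removing the connection from the map.
    CONNECTIONS.delete(conn_id) if queue.size > MAX_QUEUE_SIZE
  end
end
```

The trade-off is that a slow-but-alive client whose queue crosses the cap gets evicted too; it has to reconnect and start a fresh queue.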
Puma detects that the connection has closed and kills the thread associated with it -- which kills the loop. I inspected the code, and it seems that Sinatra doesn't "respect" the official Rack hijack API.
Puma passes the rack.hijack object down (it seems to be the Puma::Client object).
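For reference, this is roughly what the official Rack full-hijack API looks like from a bare Rack app (the handler below is an illustrative sketch, not Sinatra's behavior; writing synchronously before returning keeps it simple, whereas a real SSE endpoint would hand the IO to a long-lived writer thread):

```ruby
APP = lambda do |env|
  if env['rack.hijack']
    env['rack.hijack'].call            # full hijack: take the socket away from the server
    io = env['rack.hijack_io']         # under Puma this wraps the Puma::Client's socket
    io.write("HTTP/1.1 200 OK\r\nContent-Type: text/event-stream\r\n\r\n")
    io.write("data: hello\r\n\r\n")
    io.close
    [-1, {}, []]                       # the response triple is ignored after a full hijack
  else
    [200, { 'Content-Type' => 'text/plain' }, ['hijack not supported']]
  end
end
```

After `rack.hijack` is called, the server stops managing the socket entirely, which is why the app becomes responsible for writing the raw HTTP response line and headers itself.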
This allows you to scale very naturally with little effort, but you start to discover the evils of threads.
Any time a limited resource is shared, issues around contention for that resource will occur.
When hooked up to Rack, Puma will spawn a new thread for every request made to the web server.
You can also spawn your own threads for things like DB access, file I/O and in-memory collection access.
I am hoping to figure out a better way to detect disconnects, so that the memory for that connection's event queue can be freed right away.
@tylermauthe already read the solution and played around with it.
Inside the SSE event loop, I simply drained the array that corresponded to the connection being handled, in an endless loop.
The endless loop is killed by Puma when the connection closes, so eventually these event queues start filling up if you don't remove them after the connection terminates.
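A sketch of that drain loop, assuming the shared queue map from before; `EVENT_QUEUES`, the `out` stream, and the poll interval are all assumptions. The `ensure` clause shows one way to reclaim the queue when Puma kills the thread, though as noted it only helps once the dead connection is actually detected:

```ruby
require 'thread'

EVENT_QUEUES = {}      # connection id => array of pending events
QUEUE_MUTEX = Mutex.new

def drain(conn_id, out)
  QUEUE_MUTEX.synchronize { EVENT_QUEUES[conn_id] ||= [] }
  loop do
    # endless loop: Puma kills this thread when the client disconnects
    events = QUEUE_MUTEX.synchronize { EVENT_QUEUES[conn_id].slice!(0..-1) }
    events.each { |e| out.write("data: #{e}\r\n\r\n") }
    sleep 0.1
  end
ensure
  # Without this, dead connections leave their queues behind and memory
  # grows until the bounded-size eviction kicks in.
  QUEUE_MUTEX.synchronize { EVENT_QUEUES.delete(conn_id) }
end
```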
I'm trying to do this with Puma, but it seems impossible: the client socket is only non-blocking on reads, there is no way of reattaching it to the event loop even for writes, and clients block as soon as you reach max-threads. All of this is basically impossible without hijacking the IO and handling the event loop somewhere else (I think this is what ActionCable does with nio4r).
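To make the "handle the event loop somewhere else" idea concrete, here is a sketch of one loop iteration that multiplexes writes to many hijacked sockets using plain `IO.select` (nio4r, as used by ActionCable, is a more scalable version of the same pattern); `PENDING`, `queue_write`, and `run_event_loop_once` are illustrative names:

```ruby
require 'socket'
require 'thread'

PENDING = {}            # io => array of byte strings waiting to be written
PENDING_MUTEX = Mutex.new

def queue_write(io, data)
  PENDING_MUTEX.synchronize { (PENDING[io] ||= []) << data }
end

def run_event_loop_once
  ios = PENDING_MUTEX.synchronize { PENDING.keys }
  return if ios.empty?
  _readable, writable, = IO.select(nil, ios, nil, 0.1)
  (writable || []).each do |io|
    begin
      chunk = PENDING_MUTEX.synchronize { PENDING[io].shift }
      next unless chunk
      io.write_nonblock(chunk)  # never parks a worker thread on a slow client
    rescue Errno::EPIPE, IOError
      # client went away: drop its queue and close the socket
      PENDING_MUTEX.synchronize { PENDING.delete(io) }
      io.close rescue nil
    end
  end
end
```

Run in a single dedicated thread, this keeps slow clients from occupying Puma's worker threads at all, at the cost of owning connection lifecycle yourself.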