You are evicted! Now! Go! - the scheduler has no mercy...
In one of our projects we hit a challenging situation: our data processing flow was being interrupted by evicted pods in Kubernetes. It happened when the system was in busy-hour mode, with the incoming stream of data reaching up to 30 MB/s. In the pod we had a C# program that transposed the incoming data into database entities and pushed them out over an HTTP request. Our scheduler was prepared to scale up and out, so we could use more hosts and pods during busy time, but it was not working as expected... When a new pod was created, all the data connections were directed to it and... bum, eviction (too much memory consumed) - it was buffering all the data and could not push it out to HTTP fast enough.
Our precious in-transit data was lost, the next pod ran into a similar scenario - so in short, we were screwed.
We had some ideas to fix it:
- let's check memory usage after each received packet (huge performance drop - we needed 4x more pods for standard traffic and up to 6x at peak time)
- let's limit the number of incoming connections to one, so one worker node handles one stream of data (the effect was good, but the cluster's host allocation doubled)
- the GC (garbage collector) was useless in this case, as all the HTTP data was JSON held in live strings, and those strings were the memory consumption - there was nothing to collect
- so maybe stop accepting incoming connections when we reach around 1 GB of used memory - this seemed like the best idea - but in a Linux container we were getting garbage readings (while on a Windows host this kind of snippet worked fine):
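A minimal sketch of what such a check could look like, assuming it read the process working set via `System.Diagnostics.Process` (the 1 GB threshold and the `MemoryGuard`/`OverLimit` names are illustrative, not the original code). On Windows `WorkingSet64` behaves as expected; inside a Linux container, especially on older .NET Core runtimes that were not cgroup-aware, this kind of value could come back as zero or nonsense - consistent with the "garbage readings" above:

```csharp
using System;
using System.Diagnostics;

static class MemoryGuard
{
    // Illustrative threshold matching the ~1 GB mentioned above.
    private const long Limit = 1L * 1024 * 1024 * 1024;

    // Returns true when the pod should stop accepting new connections.
    // WorkingSet64 is the resident memory of the current process - the
    // kind of reading that was fine on a Windows host but unreliable
    // inside the Linux container.
    public static bool OverLimit()
    {
        using var process = Process.GetCurrentProcess();
        process.Refresh();
        return process.WorkingSet64 > Limit;
    }
}
```

With a guard like this, the receive loop would simply pause socket reads until the value dropped - assuming the reading can be trusted, which on Linux it could not.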
How would I fix it today? Hmm, probably I would use Akka.NET Streams, as they give us a backpressure signal: the incoming data gets rate-limited and the pod scheduler has some time to prepare new workers. An Akka Streams TCP example is here: https://getakka.net/articles/streams/workingwithstreamingio.html
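A minimal sketch of that idea, adapted from the streaming TCP server in the linked Akka.NET docs; `PushOverHttpAsync`, the frame size, and the parallelism value are illustrative assumptions, not the original code:

```csharp
using System;
using System.Threading.Tasks;
using Akka.Actor;
using Akka.IO;
using Akka.Streams;
using Akka.Streams.Dsl;

class Ingest
{
    static void Main()
    {
        var system = ActorSystem.Create("ingest");
        var materializer = system.Materializer();

        // Streaming TCP server: each incoming connection is a flow of ByteString.
        var connections = system.TcpStream().Bind("0.0.0.0", 8888);

        connections.RunForeach(connection =>
        {
            var pipeline = Flow.Create<ByteString>()
                // Split the byte stream into newline-delimited JSON frames.
                .Via(Framing.Delimiter(ByteString.FromString("\n"),
                    maximumFrameLength: 65536, allowTruncation: true))
                .Select(frame => frame.ToString())
                // SelectAsync caps the number of in-flight HTTP pushes; when
                // the sink is slow, upstream demand drops, the TCP receive
                // window shrinks, and the sender is throttled instead of us
                // buffering everything into memory.
                .SelectAsync(4, async json =>
                {
                    await PushOverHttpAsync(json); // hypothetical HTTP push
                    return ByteString.FromString("ok\n");
                });

            connection.HandleWith(pipeline, materializer);
        }, materializer);

        Console.ReadLine();
    }

    // Hypothetical placeholder for the real HTTP request.
    static Task PushOverHttpAsync(string json) => Task.Delay(10);
}
```

The key point is that backpressure propagates all the way down to the socket, so a slow HTTP sink slows the producer down instead of filling the pod's memory until eviction.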