We currently have Talk running on a number of AWS EC2 instances with a centralized Mongo (Atlas) and Redis (ElastiCache).
We’re in the process of researching and setting up Kubernetes, so at some point we want to migrate from our Docker-on-EC2 based solution to K8s.
Now we’re looking into what to do with Redis. If possible, we’d prefer to run Redis in the same pod as the application, for simplicity’s sake. But what would be the effect of pod rescheduling?
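For reference, the co-located setup we have in mind is roughly the following (a minimal sketch, not our actual manifests; the image tag and the environment variable name are placeholders, so the Redis connection variable Talk actually expects should be checked against its docs):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: talk                         # placeholder name
spec:
  containers:
    - name: talk
      image: coralproject/talk       # the Talk application container
      env:
        - name: REDIS_URL            # placeholder; use Talk's real Redis env var
          value: redis://localhost:6379   # in-pod Redis, shared network namespace
    - name: redis
      image: redis:5-alpine          # co-located Redis, lives and dies with the pod
      ports:
        - containerPort: 6379
```

In practice this would be the pod template of a Deployment, so every replica gets its own private Redis, which is exactly the situation the questions below are about.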
To my understanding, Talk uses Redis as a task queue and (I assume) also as a performance cache. So having multiple Redis instances, plus the occasional rescheduling, would mean:
- Several parallel task queues sending e-mail; each task lives in exactly one queue, so no duplicate sends.
- Less efficient caching, since each instance has to build up its own cache (duplicate cache effort).
- A short performance penalty whenever a new pod starts with an empty Redis.
- Possible loss of tasks that haven’t completed before their Redis instance is terminated.
- No effect on JWT blacklisting, as that is persisted in Mongo.
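To make the first and fourth assumptions concrete, here is a toy sketch of the semantics we’re expecting. Plain Python deques stand in for per-pod Redis lists; this is an illustration of the model, not Talk’s actual queue implementation:

```python
from collections import deque

class Pod:
    """Models one app pod with its own in-pod Redis task queue."""

    def __init__(self, name):
        self.name = name
        self.queue = deque()  # stands in for a Redis list used as a queue
        self.sent = []        # e-mails this pod's worker actually sent

    def enqueue(self, task):
        self.queue.append(task)          # roughly an LPUSH

    def work(self):
        while self.queue:
            self.sent.append(self.queue.popleft())  # roughly a BRPOP

# Two replicas; the load balancer happened to route one request to each.
a, b = Pod("a"), Pod("b")
a.enqueue("mail-1")
b.enqueue("mail-2")

a.work()
b.work()
# Each task was enqueued into exactly one pod's Redis -> no duplicate sends.
assert a.sent == ["mail-1"]
assert b.sent == ["mail-2"]

# If pod "b" is rescheduled before its worker runs, the pending task
# disappears with the in-pod Redis (assumption 4: possible task loss).
b.enqueue("mail-3")
b.queue.clear()  # pod terminated; the replacement pod starts empty
assert len(b.queue) == 0  # "mail-3" is gone
```

The upside in this model is that queues never overlap, so there are no duplicates; the downside is that a queue’s contents share the pod’s lifetime.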
Are the above assumptions correct, and would it therefore be possible to have multiple Redis instances in use at any given moment?
This would allow us to keep the K8s setup simple, and also to migrate gradually from the current deployment to the K8s-based one via DNS load balancing.