Implement weighted round robin load balancing strategy #50

Closed
@donovanmuller

Description

As per the supported load balancing strategies in the initial design, a weighted round robin strategy should be implemented to ensure the stated guarantees:

Weighted round robin - a specialisation of the above (default round robin, #45) strategy where a percentage weighting is applied to determine which cluster's Ingress node IPs to resolve, e.g. 80% to cluster X and 20% to cluster Y.
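
For illustration only (this is not the actual OhMyGlb implementation or API; the cluster type and the pickCluster function below are hypothetical names), a minimal sketch of such a weighted selection over per-cluster Ingress node IPs, using the "smooth" weighted round robin algorithm popularised by nginx, could look as follows:

package main

import "fmt"

// cluster holds the Ingress node IPs exposed by one Kubernetes cluster
// together with its configured weight (hypothetical type for this sketch).
type cluster struct {
	name          string
	ingressIPs    []string
	weight        int // e.g. 80 and 20 for an 80%/20% split
	currentWeight int // internal state for smooth weighted round robin
}

// pickCluster implements smooth weighted round robin: each call bumps every
// cluster by its configured weight, returns the cluster with the highest
// accumulated weight, and that cluster then "pays back" the total weight.
// Over time the selections converge to the configured percentages.
func pickCluster(clusters []*cluster) *cluster {
	total := 0
	var best *cluster
	for _, c := range clusters {
		c.currentWeight += c.weight
		total += c.weight
		if best == nil || c.currentWeight > best.currentWeight {
			best = c
		}
	}
	best.currentWeight -= total
	return best
}

func main() {
	clusters := []*cluster{
		{name: "cluster-x", ingressIPs: []string{"10.0.1.10"}, weight: 80},
		{name: "cluster-y", ingressIPs: []string{"10.1.1.11"}, weight: 20},
	}
	// Resolve app.cloud.example.com ten times; 8 answers point at cluster X
	// and 2 at cluster Y.
	for i := 0; i < 10; i++ {
		c := pickCluster(clusters)
		fmt.Printf("resolution %2d -> %s %v\n", i+1, c.name, c.ingressIPs)
	}
}

With weights 80 and 20 the selection cycle works out to X, X, Y, X, X, so over 10 consecutive resolutions 8 point at cluster X and 2 at cluster Y, matching the configured split.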

Scenario 1:

  • Given 2 separate Kubernetes clusters, X and Y
  • Each cluster has a healthy Deployment with a backend Service called app, and that backend Service is exposed with a Gslb resource on cluster X as:
apiVersion: ohmyglb.absa.oss/v1beta1
kind: Gslb
metadata:
  name: app-gslb
  namespace: test-gslb
spec:
  ingress:
    rules:
      - host: app.cloud.example.com
        http:
          paths:
            - backend:
                serviceName: app
                servicePort: http
              path: /
  strategy: roundRobin
  weight: 80%

and a Gslb resource on cluster Y as:

apiVersion: ohmyglb.absa.oss/v1beta1
kind: Gslb
metadata:
  name: app-gslb
  namespace: test-gslb
spec:
  ingress:
    rules:
      - host: app.cloud.example.com
        http:
          paths:
            - backend:
                serviceName: app
                servicePort: http
              path: /
  strategy: roundRobin
  weight: 20%
  • Each cluster has one worker node that accepts Ingress traffic. The worker node in each cluster has the following name and IP:
cluster-x-worker-1: 10.0.1.10
cluster-y-worker-1: 10.1.1.11

When issuing the following command, curl -v http://app.cloud.example.com, six times consecutively, I would expect the resolved IPs to be as follows:

$ curl -v http://app.cloud.example.com # execution 1
*   Trying 10.0.1.10...
...

$ curl -v http://app.cloud.example.com # execution 2
*   Trying 10.0.1.10...
...

$ curl -v http://app.cloud.example.com # execution 3
*   Trying 10.0.1.10...
...

$ curl -v http://app.cloud.example.com # execution 4
*   Trying 10.0.1.10...
...

$ curl -v http://app.cloud.example.com # execution 5
*   Trying 10.1.1.11...
...

$ curl -v http://app.cloud.example.com # execution 6
*   Trying 10.1.1.11...
...

The resolved node IPs that ingress traffic is sent to should be spread approximately according to the weighting configured on the Gslb resources. In this scenario that would be roughly 80% (4 out of 6 resolutions) to cluster X and 20% (2 out of 6) to cluster Y.

NOTE:

  • The way the weighting is indicated in the Gslb resources above is solely for the purpose of describing the scenario; it should not be considered the actual design.
  • The scenario with more than 2 clusters is currently undefined, i.e. how should the weightings be distributed when weightings are missing or uneven? E.g. given 3 clusters where only the Gslb resources in 2 of the clusters specify a weight (which might or might not add up to 100%), how does that affect the distribution over the 3 clusters?
  • Following on from the above, when the Deployments on a cluster become unhealthy, the weighting should be redistributed across the remaining clusters with healthy Deployments (see the sketch after this list).
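
As a purely illustrative sketch of that last point (not a proposed design; the clusterWeight type and redistributeWeights function are made-up names), the weighting could be re-normalised over the clusters that still have healthy Deployments so that the effective weights again sum to 100%:

package main

import "fmt"

// clusterWeight pairs a cluster's configured weight with its current health
// (hypothetical type for this sketch; not part of any proposed API).
type clusterWeight struct {
	name    string
	weight  int  // configured percentage, e.g. 80 or 20
	healthy bool // whether the cluster's Deployment is currently healthy
}

// redistributeWeights normalises the configured weights of the healthy
// clusters so that they sum to 100 again. Unhealthy clusters get 0.
func redistributeWeights(clusters []clusterWeight) map[string]float64 {
	healthyTotal := 0
	for _, c := range clusters {
		if c.healthy {
			healthyTotal += c.weight
		}
	}
	effective := make(map[string]float64, len(clusters))
	for _, c := range clusters {
		if c.healthy && healthyTotal > 0 {
			effective[c.name] = float64(c.weight) / float64(healthyTotal) * 100
		} else {
			effective[c.name] = 0
		}
	}
	return effective
}

func main() {
	// With cluster X (80%) unhealthy, cluster Y should receive all traffic.
	clusters := []clusterWeight{
		{name: "cluster-x", weight: 80, healthy: false},
		{name: "cluster-y", weight: 20, healthy: true},
	}
	fmt.Println(redistributeWeights(clusters)) // map[cluster-x:0 cluster-y:100]
}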
