Networking

There is a single CloudCasa service, “amds-envoy-grpcapi”, that needs to be exposed outside the cluster. This can be as simple as selecting the “LoadBalancer” service type at installation time (using the helm parameter “serviceType”), or the service can be exposed using an ingress or another mechanism. To be very clear, CloudCasa has no preference regarding how this service is exposed. The only requirement is that the service be reachable from all the client clusters as well as from the browser where the UI is accessed.
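
For example, requesting a LoadBalancer service at install time looks like this (a minimal sketch; the “...” stands for the other install parameters your environment requires, as in the full command later in this section):

helm install cloudcasa-server cloudcasa/cloudcasa-server --create-namespace ... \
  --set serviceType=LoadBalancer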

In addition, network communication between the “cloudcasa-server” and “cloudcasa-mongo” namespaces must be allowed. CloudCasa creates a network policy resource to allow this traffic, but if any other configuration is required, it needs to be done before using CloudCasa. Please note that this does not apply if you are using your own MongoDB instance.

TLS Considerations

CloudCasa terminates TLS in the amds-envoy-grpcapi pod before requests are propagated to specific services. When setting up an ingress or route to expose the envoy service, configure it to use “PASSTHROUGH” or a similar TLS setting to ensure the certificate mounted on the pod is used.

Certificates can be created automatically using cert-manager, or supplied during install by creating a certificate secret (see Certificate Configuration). If bringing your own certificate, verify that amdsEnvoyUrl matches the certificate’s Common Name (or one of its Subject Alternative Names). Certificates created by helm are automatically patched to match amdsEnvoyUrl during helm upgrade.
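
If you are supplying your own certificate, the secret can be created with kubectl before the install (a sketch; the file paths are placeholders, and the secret name amds-envoy.tls matches the one used in the Traefik example below; see Certificate Configuration for the exact name your install expects):

kubectl create secret tls amds-envoy.tls \
  --cert=path/to/tls.crt --key=path/to/tls.key \
  --namespace cloudcasa-server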

Using nginx ingress

Here is a sample nginx ingress resource that can be used to expose the CloudCasa service:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx  # legacy annotation; spec.ingressClassName below is the current mechanism
    nginx.ingress.kubernetes.io/backend-protocol: "GRPCS"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  labels:
    amds-app: envoy
    amds.component: envoy
  name: amds-envoy-grpcapi
  namespace: cloudcasa-server
spec:
  ingressClassName: nginx
  rules:
  - host: <FQDN-OF-CLOUDCASA-SERVICE>  # should match CloudCasa "amdsEnvoyUrl"
    http:
      paths:
      - backend:
          service:
            name: amds-envoy-grpcapi
            port:
              number: 443
        pathType: ImplementationSpecific

Using Traefik

Traefik must be installed with the following Helm values (a sample install command follows the list):

  • ports.websecure.transport.respondingTimeouts.readTimeout=0s: This prevents Traefik from interrupting the CloudCasa agent connection. (By default, Traefik closes the idle agent connection after 60 seconds.)

  • service.spec.externalTrafficPolicy=Cluster: “Cluster” allows external traffic to be routed to the CloudCasa envoy pod regardless of which node is running it.
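
For example, using the standard traefik/traefik Helm chart (an assumption; adjust the release name and namespace to your environment):

helm repo add traefik https://traefik.github.io/charts
helm upgrade --install traefik traefik/traefik \
  --namespace traefik --create-namespace \
  --set "ports.websecure.transport.respondingTimeouts.readTimeout=0s" \
  --set "service.spec.externalTrafficPolicy=Cluster"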

Now install CloudCasa with serviceType=ClusterIP. Create ServersTransport and IngressRoute resources referencing your certificate secret, with amdsEnvoyUrl as your domain:

apiVersion: traefik.io/v1alpha1
kind: ServersTransport
metadata:
  name: cloudcasa-server-transport
  namespace: cloudcasa-server
spec:
  serverName: cloudcasa-server
  insecureSkipVerify: false    ## set to "true" if using self-signed cert
  certificatesSecrets:
    - amds-envoy.tls          ## same cert secret configured for CloudCasa envoy service
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: cloudcasa-server-ingress-route
  namespace: cloudcasa-server
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`<FQDN-OF-CLOUDCASA-SERVICE>`)   # should match CloudCasa "amdsEnvoyUrl"
    kind: Rule
    services:
    - name: amds-envoy-grpcapi
      namespace: cloudcasa-server
      port: 443
      scheme: https
      serversTransport: cloudcasa-server-transport
  tls:
    domains:
    - main: <FQDN-OF-CLOUDCASA-SERVICE>
    secretName: amds-envoy.tls  ## same cert secret configured for CloudCasa envoy service

Using OpenShift Routes

Here is an example OpenShift Route that can be used to expose the CloudCasa service. Note that the certificate mounted to the amds-envoy pod will be used via TLS passthrough, so the certificate’s Common Name (or SAN) must match the hostname:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: cloudcasa-route
  namespace: cloudcasa-server
spec:
  host: <FQDN-OF-CLOUDCASA-SERVICE>  # should match CloudCasa "amdsEnvoyUrl"
  to:
    kind: Service
    name: amds-envoy-grpcapi
  port:
    targetPort: grpc-port
  tls:
    termination: passthrough

Using Istio

Here are sample Gateway and VirtualService resources that can be used to expose CloudCasa with Istio. Note that the certificate mounted to the amds-envoy pod will be used via TLS passthrough, so the certificate’s Common Name (or SAN) must match the hostname:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cloudcasa-gateway
  namespace: cloudcasa-server
spec:
  selector:
    istio: aks-istio-ingressgateway-external  # selector of your Istio ingress gateway (this one is from the AKS Istio add-on)
  servers:
  - port:
      number: 443
      name: grpc-port
      protocol: HTTPS
    tls:
      mode: PASSTHROUGH
    hosts:
    - <FQDN-OF-CLOUDCASA-SERVICE>  # should match CloudCasa "amdsEnvoyUrl"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: amds-envoy-grpcapi
  namespace: cloudcasa-server
spec:
  hosts:
  - <FQDN-OF-CLOUDCASA-SERVICE>  # should match CloudCasa "amdsEnvoyUrl"
  gateways:
  - cloudcasa-server/cloudcasa-gateway
  tls:  # with PASSTHROUGH, routing is based on SNI, so a tls match is used
  - match:
    - port: 443
      sniHosts:
      - <FQDN-OF-CLOUDCASA-SERVICE>
    route:
    - destination:
        host: amds-envoy-grpcapi.cloudcasa-server.svc.cluster.local
        port:
          number: 443

In some setups, NetworkPolicy resources may be required to allow connections to and from the cloudcasa-server namespace (the selectors below use the kubernetes.io/metadata.name label, which Kubernetes sets automatically on every namespace):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-cloudcasa-server
  namespace: aks-istio-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: cloudcasa-server
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-aks-istio-ingress
  namespace: cloudcasa-server
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: aks-istio-ingress

Agent timeouts

There is an always-on connection from the agent (specifically, from the kubeagent manager pod) to the CloudCasa service. This connection is used to send commands from the server to the agent, so it is usually idle unless some activity is in progress. Sometimes such idle connections are closed by load balancers or firewalls. The agent and CloudCasa have a keepalive mechanism to keep the connection active, but if timeouts still occur, you can use the information in this section to increase timeout values on load balancers from a few well-known providers.

Azure AKS

Edit the amds-envoy-grpcapi service in the cloudcasa-server namespace and set the service.beta.kubernetes.io/azure-load-balancer-tcp-idle-timeout: "100" annotation. Alternatively, you can use the Azure Portal: https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-tcp-idle-timeout?tabs=tcp-reset-idle-portal.
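
For example, the annotation can be applied directly with kubectl (the value is the idle timeout in minutes):

kubectl annotate service amds-envoy-grpcapi \
  --namespace cloudcasa-server --overwrite \
  service.beta.kubernetes.io/azure-load-balancer-tcp-idle-timeout="100"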

Google Cloud GKE

Create a BackendConfig resource with the desired timeout specification and reference it from the amds-envoy-grpcapi service (located in the cloudcasa-server namespace) using the cloud.google.com/backend-config annotation. You can find an example here: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-configuration#create_backendconfig
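
A minimal sketch (the name cloudcasa-backendconfig is illustrative, and 420 seconds mirrors the timeout recommended for OpenShift below):

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: cloudcasa-backendconfig
  namespace: cloudcasa-server
spec:
  timeoutSec: 420

The service would then reference it with the annotation cloud.google.com/backend-config: '{"default": "cloudcasa-backendconfig"}'.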

OpenShift

OpenShift has many options for routing traffic, and the HAProxy Ingress Controller is one of them. On AWS, this will create an EC2 Classic Load Balancer when the CloudCasa server is installed. You can find the load balancer by filtering on the tag kubernetes.io/service-name = cloudcasa-server/amds-envoy-grpcapi. By default, this load balancer has its idle timeout set to 60 seconds. The timeout must be increased to at least 420 seconds (7 minutes) to ensure that the connection between the CloudCasa server and the agent is not disrupted.
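
Once you have identified the load balancer, the timeout can be raised with the AWS CLI (a sketch; <CLB-NAME> is the name found with the tag filter above):

aws elb modify-load-balancer-attributes \
  --load-balancer-name <CLB-NAME> \
  --load-balancer-attributes "ConnectionSettings={IdleTimeout=420}"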

Load balancer annotations

In some cases, you may need to add annotations to the amds-envoy-grpcapi service in order for the load balancer to be configured properly. This can be done either by directly editing the service or, preferably, by using the CloudCasa helm parameter serviceAnnotations. It can be set in values.yaml or passed explicitly with --set.

For example, when using the AWS Load Balancer Controller on Amazon EKS, the service.beta.kubernetes.io/aws-load-balancer-scheme annotation is required to create an internet-facing load balancer, which can be done like so:

helm install cloudcasa-server cloudcasa/cloudcasa-server --create-namespace ... \
  --set serviceAnnotations."service\.beta\.kubernetes\.io/aws-load-balancer-scheme"="internet-facing"

See AWS Load Balancer Controller for a full list of service annotations.

Networking requirements

These requirements outline the necessary network access and connectivity for CloudCasa components, including client clusters, the CloudCasa server, and the user workstation.

Client clusters (clusters with CloudCasa agent installed):

  1. Must be able to open a gRPC connection with the CloudCasa server.

  2. Must be able to connect to the object storage where backups will be located.

  3. Must be able to connect to the container registry to pull required images.

CloudCasa server:

  1. Must be able to pull images from the CloudCasa ACR registry or, if specified, from the provided registry.

  2. Must be able to communicate with the authentication provider.

  3. If an external instance of MongoDB is used, must be able to communicate with it.

  4. If MongoDB backups have been set up, must be able to communicate with the provided storage.

  5. If email configuration has been set up, must be able to communicate with the provided SMTP server.

User workstation:

  1. Must have outbound HTTPS (TCP port 443) access to the CloudCasa web interface at the specified URL.