Set up an externally accessible etcd in kubernetes

As we know, the IP addresses of pods and services in kubernetes are not reachable from outside the kubernetes cluster. For an application deployed in kubernetes to be reached from outside the cluster, different methods can be used, for example an ingress, a service of LoadBalancer type, or a service of NodePort type. For instance, we can deploy a pod containing the application containers and a NodePort service directing the network traffic to the pod; then we can use the IP address of any kubernetes node and the port exposed by the service to access the application. However, when it comes to etcd, things are a little different.
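
As a quick illustration of that generic pattern (not specific to etcd), the following is a minimal sketch using a hypothetical nginx-based deployment named web; <node-ip> and <node-port> are placeholders you would read from your own cluster:

# create a hypothetical "web" deployment and expose it via a NodePort service
kubectl create deployment web --image=nginx
kubectl expose deployment web --type=NodePort --port=80

# look up the assigned node port, then reach the app from outside the cluster
kubectl get svc web
curl http://<node-ip>:<node-port>/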

If you deploy a pod for etcd and a corresponding service of NodePort type, and then pass the cluster node IP address and node port to the --endpoints option of the etcdctl command, you will get an error: the node IP and port are not a real etcd endpoint, because etcd only listens on and advertises its own pod IP as the client URL, which is not reachable from outside the cluster.
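
For example, with etcd's client port 2379 exposed directly through a hypothetical NodePort such as 31000 (no proxy in between), a command like the one below would fail rather than report a healthy endpoint; <node-ip> stands for the address of any cluster node:

ETCDCTL_API=3 etcdctl --endpoints=http://<node-ip>:31000 endpoint health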

The error can be easily fixed by adding an etcd gRPC proxy in front of the etcd cluster. The gRPC proxy communicates with the etcd cluster inside kubernetes, and we communicate with the gRPC proxy through a service of NodePort type.

The deployment definition for etcd is as below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: etcd-deploy
spec:
  selector:
    matchLabels:
      app: etcd
  replicas: 1
  template:
    metadata:
      labels:
        app: etcd
    spec:
      containers:
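      # single-node etcd, bound to the pod IP on the standard client port 2379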
      - name: etcd
        image: quay.io/coreos/etcd:latest
        env:
        - name: ETCD_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        command:
        - /usr/local/bin/etcd
        args:
        - "--name"
        - "etcd0"
        - "--listen-client-urls"
        - "http://$(ETCD_POD_IP):2379"
        - "--advertise-client-urls"
        - "http://$(ETCD_POD_IP):2379"
        ports:
        - containerPort: 2379
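      # etcd gRPC proxy sidecar: listens on 23790 and forwards client requests to etcd on 2379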
      - name: etcd-proxy
        image: quay.io/coreos/etcd:latest
        env:
        - name: ETCD_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        command:
        - /usr/local/bin/etcd
        args:
        - "grpc-proxy"
        - "start"
        - "--endpoints"
        - "$(ETCD_POD_IP):2379"
        - "--listen-addr"
        - "$(ETCD_POD_IP):23790"
        ports:
        - containerPort: 23790
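
Assuming the manifest above is saved as, say, etcd-deploy.yaml (a hypothetical file name), it can be applied and checked as below:

kubectl apply -f etcd-deploy.yaml
kubectl get pods -l app=etcd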

The service definition for etcd is as below:

apiVersion: v1
kind: Service
metadata:
  name: etcd
spec:
  ports:
    - name: client-port
      port: 23790
      targetPort: 23790
      nodePort: 32001
  selector:
    app: etcd
  type: NodePort

After deploying the above deployment and service in kubernetes, you can use the IP address of any kubernetes cluster node and the port 32001 as the argument to the --endpoints option of the etcdctl command to access the single-node etcd cluster.
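
For example, a minimal sanity check might look like the following, where <node-ip> is a placeholder for the address of any node in the cluster:

ETCDCTL_API=3 etcdctl --endpoints=http://<node-ip>:32001 put greeting "hello"
ETCDCTL_API=3 etcdctl --endpoints=http://<node-ip>:32001 get greeting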

Of course, this only sets up a simple etcd for simple purposes, e.g. a product demo or a development trial. It should not be used in a production environment.