How to Expose an NGINX Pod in Kubernetes?


To expose an NGINX pod in Kubernetes, you can use the Kubernetes Service resource. Here's how you can do it:

  1. Create a YAML file to define the Service resource. For example, you can name it nginx-service.yaml.
  2. In the YAML file, specify the kind as Service and the API version as v1.
  3. Set the metadata for the Service, including the name and labels.
  4. Define the spec for the Service, including the type and selector.
  5. Specify the type as NodePort if you want to expose the pod on a static port (allocated from the 30000-32767 range by default) on every node. Alternatively, you can use LoadBalancer if you have a cloud provider that supports it, or ClusterIP if you only need to reach the pod from within the cluster.
  6. Provide the selector to target the NGINX pod. The selector should match the labels defined in the NGINX pod's deployment or pod definition.
  7. Save the YAML file.


Example YAML file (nginx-service.yaml):

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80


  8. Apply the YAML file using the kubectl command:

kubectl apply -f nginx-service.yaml


  9. Verify that the Service is created:

kubectl get services


You should see the newly created Service listed under the given name (nginx-service in this case), along with its type, cluster IP, and port mapping.


Now the NGINX pod is exposed through the Service, and you can access it using the external IP and port provided by Kubernetes.
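To find the exact address to use, you can query the assigned node port and a node IP with kubectl's jsonpath output. This is a sketch: nginx-service matches the Service created above, and the resulting IP and port depend entirely on your cluster.

```shell
# Look up the NodePort that Kubernetes assigned to nginx-service
NODE_PORT=$(kubectl get service nginx-service \
  -o jsonpath='{.spec.ports[0].nodePort}')

# Take the internal IP of the first node in the cluster
NODE_IP=$(kubectl get nodes \
  -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')

# Fetch the NGINX welcome page through the service
curl "http://${NODE_IP}:${NODE_PORT}"
```

If the node's internal IP is not routable from your machine, substitute an ExternalIP address or whichever node address your network can reach.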



What are the different ways to expose a pod in Kubernetes?

There are several ways to expose a Pod in Kubernetes:

  1. NodePort: This type of service exposes the Pod on a static port on each worker node. It allows external access to the Pod using the node's IP address and the assigned port number.
  2. ClusterIP: This type of service exposes the Pod on an internal IP address that is only reachable from within the cluster. It is useful for communication between different services within the cluster.
  3. LoadBalancer: This type of service provisions an external load balancer, such as a cloud load balancer, and assigns it an externally reachable IP address. It automatically distributes incoming traffic to the Pods behind the service.
  4. Ingress: Ingress is an API object that manages external access to services within a cluster. It provides a way to route HTTP and HTTPS traffic to the appropriate services based on hostname or path.
  5. HostNetwork: Strictly speaking this is not a Service type but a Pod-level setting (hostNetwork: true). It places the Pod in the node's network namespace, so the Pod shares the network stack with the node and has direct access to the host's network interfaces.
  6. ExternalName: This type of service provides a way to map a service to an external DNS name. It does not have any selectors or endpoints; instead, it simply redirects the requests to the specified external name.


These are some of the common ways to expose a Pod in Kubernetes, and the choice depends on the specific use case and requirements.
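As an illustration of one of the less common options, a minimal ExternalName Service might look like the following sketch; the names external-db and db.example.com are hypothetical.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  # DNS CNAME returned to in-cluster clients; no selector or endpoints
  externalName: db.example.com
```

In-cluster clients can then address the external host through the stable name external-db.<namespace>.svc.cluster.local, which decouples application configuration from the external DNS name.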


What is the difference between SSL termination and SSL passthrough?

SSL termination and SSL passthrough are two different methods for handling SSL/TLS traffic in a network infrastructure.

  1. SSL Termination:
  • In SSL termination, the SSL/TLS connection is terminated at a load balancer, reverse proxy, or another network device before being passed on to the backend servers.
  • When the SSL/TLS connection terminates, the encrypted data is decrypted at the terminating device, and the traffic is forwarded to the backend servers in plain text.
  • The terminating device then re-encrypts the traffic before sending the response back to the client.
  • This method allows the terminating device to inspect, manipulate, or cache the traffic before distributing it to the backend servers.
  • SSL certificates are installed on the terminating device, and clients establish the connection with the terminating device using the terminating device's certificate.
  2. SSL Passthrough:
  • In SSL passthrough, the SSL/TLS connection remains intact from the client to the backend servers without termination at any intermediate device.
  • The SSL/TLS traffic is directly forwarded from the load balancer or another network device to the backend servers without being decrypted or encrypted in between.
  • The load balancer or network device acts as a transparent proxy, relaying the encrypted traffic to the appropriate backend server without inspecting or manipulating it.
  • SSL certificates are installed on the backend servers, and clients establish the connection with the server directly.
  • This method is useful when end-to-end encryption is required, and there is a need for the backend servers to handle the encrypted traffic without any interference.


In summary, SSL termination terminates the SSL/TLS connection at the intermediate device, decrypts the traffic, and re-encrypts it before forwarding to backend servers. SSL passthrough, on the other hand, forwards the encrypted SSL/TLS traffic directly to the backend servers without any decryption or encryption in between.
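For reference, SSL passthrough in NGINX itself is typically configured with the stream (layer-4) module, which relays TCP connections without touching the TLS payload. A minimal sketch, with hypothetical backend addresses:

```nginx
stream {
    upstream backend_tls {
        # Backends hold their own certificates; addresses are examples
        server 10.0.0.11:443;
        server 10.0.0.12:443;
    }

    server {
        listen 443;
        # Relay the encrypted bytes as-is; no ssl_certificate directive here
        proxy_pass backend_tls;
    }
}
```

Because nothing is decrypted, features that require reading the request (path routing, header rewriting, caching) are unavailable in this mode.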


How can you configure SSL termination for an NGINX pod in Kubernetes?

To configure SSL termination for an NGINX pod in Kubernetes, you can follow these steps:

  1. Create a secret containing the SSL certificate and private key. This secret will be mounted into the NGINX pod:
kubectl create secret tls my-ssl-certificate --key=path/to/private.key --cert=path/to/certificate.crt


  2. Create an NGINX deployment and service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        - containerPort: 443

---

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
    - name: https
      port: 443
  type: LoadBalancer


  3. Update the NGINX configuration to enable SSL termination. You can mount a custom NGINX configuration file by adding a volume mount to the NGINX container in the deployment manifest and supplying the configuration through a ConfigMap:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
        # Mount the TLS secret created in step 1 so nginx.conf can
        # reference /etc/nginx/certs/tls.crt and tls.key
        - name: ssl-certs
          mountPath: /etc/nginx/certs
          readOnly: true
        ports:
        - containerPort: 80
        - containerPort: 443
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-configmap
      - name: ssl-certs
        secret:
          secretName: my-ssl-certificate

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configmap
  namespace: <namespace>
data:
  nginx.conf: |
    events {}

    http {
      server {
        listen 80;
        server_name example.com;
        return 301 https://$host$request_uri;
      }

      server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate /etc/nginx/certs/tls.crt;
        ssl_certificate_key /etc/nginx/certs/tls.key;

        # Additional SSL configuration
        <insert additional SSL configuration here>

        # Proxy or serve your application
        location / {
          proxy_pass http://your-application;
        }
      }
    }


  4. Apply the deployment and service manifests:
kubectl apply -f nginx-deployment.yaml


After these steps, NGINX will terminate SSL/TLS connections on port 443, and pass the decrypted traffic to your application running behind it.
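You can check the termination end to end with curl. This is a sketch: EXTERNAL_IP is whatever address the LoadBalancer assigned, and -k skips certificate validation, which is handy when the certificate is self-signed.

```shell
# External IP assigned to the LoadBalancer service (may take a minute to appear)
EXTERNAL_IP=$(kubectl get service nginx-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Port 80 should answer with a 301 redirect to HTTPS
curl -sI "http://${EXTERNAL_IP}/" | head -n 1

# Port 443 should serve the decrypted, proxied application
curl -sk "https://${EXTERNAL_IP}/"
```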


How can you expose a pod using a NodePort service in Kubernetes?

To expose a Pod using a NodePort service in Kubernetes, you need to create a NodePort service manifest file or use the kubectl command-line tool. Here are the steps to follow:

  1. Create or update a Pod manifest file that declares the desired Pod configuration. For example, you might have a file named pod.yml with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
spec:
  containers:
  - name: my-container
    image: my-image:latest
    ports:
    - containerPort: 80


  2. Apply the Pod manifest using kubectl to create the Pod:
kubectl apply -f pod.yml


  3. Create or update a NodePort service manifest file that selects the target Pod and specifies a NodePort to expose. For example, you might have a file named service.yml with the following content:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
    nodePort: 30000


  4. Apply the NodePort service manifest using kubectl to create the service:
kubectl apply -f service.yml


  5. Verify that the service and Pod are created and running:
kubectl get services
kubectl get pods


You should see the my-pod Pod and my-service service listed. In the kubectl get services output, the PORT(S) column shows the node port mapping (for example, 80:30000/TCP).


Now you can access the Pod using any of the cluster's nodes IP address and the assigned NodePort. For example, if one of the nodes has an IP address of 192.168.0.100 and the NodePort is 30000, you can access the Pod via http://192.168.0.100:30000.
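If you only need temporary access for debugging, kubectl port-forward is a lighter-weight alternative to a NodePort. A sketch using the my-pod name from above:

```shell
# Forward local port 8080 to port 80 of the Pod; runs until interrupted
kubectl port-forward pod/my-pod 8080:80 &

# Hit the forwarded port from the same machine
curl http://127.0.0.1:8080
```

Unlike a NodePort, the forward exists only while the kubectl process is running and is reachable only from the machine running it.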


How can you specify path-based routing in an NGINX ingress resource?

Path-based routing itself is configured through the rules and paths fields of the Ingress spec; the nginx.ingress.kubernetes.io/rewrite-target annotation can additionally rewrite the matched path before the request is forwarded.


Here's an example of how to specify path-based routing using the NGINX ingress controller:

  1. Create an ingress resource YAML file or modify an existing one.
  2. Define the rules and, if you want path rewriting, the nginx.ingress.kubernetes.io/rewrite-target annotation in the ingress resource specification:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: app1-service
            port:
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: app2-service
            port:
              number: 80

In this example, the annotation nginx.ingress.kubernetes.io/rewrite-target: / rewrites the URL path before the request is forwarded to the backend service: /app1 and /app2 are rewritten to the root path (/) before reaching the respective backend services defined in the spec.
  3. Apply the ingress resource to your Kubernetes cluster:

kubectl apply -f example-ingress.yaml


Now, incoming requests to mydomain.com/app1 will be routed to the app1-service, and requests to mydomain.com/app2 will be routed to the app2-service. The URL path will be rewritten to / before being forwarded to the backend services.
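Before DNS for mydomain.com points at the ingress controller, you can exercise the routing with an explicit Host header. This is a sketch: the ingress-nginx namespace and ingress-nginx-controller service name are the community controller's defaults and may differ in your installation.

```shell
# External address of the NGINX ingress controller's service
INGRESS_IP=$(kubectl get service -n ingress-nginx ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Both requests hit the same ingress; routing is decided by the path rules
curl -H "Host: mydomain.com" "http://${INGRESS_IP}/app1"
curl -H "Host: mydomain.com" "http://${INGRESS_IP}/app2"
```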

