Kubernetes DaemonSet

DaemonSets
  • ensures that a copy of a single pod runs on all nodes, or a subset of nodes, in the cluster
  • land an agent or daemon on every node for, say, logging, monitoring, etc.
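Many clusters already run node-level agents this way; for example, kube-proxy is commonly deployed as a DaemonSet (this depends on how the cluster was provisioned). To see the DaemonSets running in the kube-system namespace:

$ kubectl get daemonsets --namespace=kube-system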
ReplicaSets and DaemonSets both create pods that are expected to be long-running services, and both try to reconcile the observed state of the cluster with the desired state.
Use a ReplicaSet when:
  • the app is completely decoupled from the node
  • multiple copies of the pod can be run on one node
  • no scheduling restrictions need to be in place to ensure replicas don't run on the same node
Use a DaemonSet when:
  • one pod per node or subset of nodes in the cluster
DaemonSet Scheduler
  • By default, a copy of the pod is created on every node
  • The set of nodes can be limited using node selectors, which match against a set of node labels
  • DaemonSets determine which node a pod will run on at creation time by setting the pod's nodeName field
  • Hence, the k8s scheduler ignores the pods created by DaemonSets
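One way to observe this is to look at which node each pod landed on, or to read a pod's nodeName directly; the pod name below is a placeholder:

$ kubectl get pods -o wide                                    # NODE column shows placement
$ kubectl get pod <pod-name> -o jsonpath='{.spec.nodeName}'   # prints the node the pod is bound to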
The DaemonSet controller:
  • creates a pod on each node that doesn't have one
  • if a new node is added to the cluster, the DaemonSet controller adds a pod to it too
  • it tries to reconcile the observed state and the desired state
Creating DaemonSets
  • the name must be unique within a namespace
  • includes a pod spec
  • creates the pod on every node if a node selector isn't specified
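A minimal manifest might look like the sketch below. It uses the apps/v1 API accepted by current clusters (the extensions/v1beta1 API shown in the next section was removed in Kubernetes 1.16), which also requires an explicit spec.selector matching the template labels; the fluentd image and names are placeholders for whatever node-level agent is needed. Because there is no nodeSelector, one pod is created on every node:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v0.14.10   # placeholder logging-agent image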
Limiting DaemonSets to Specific Nodes
  1. Add labels to nodes
$ kubectl label nodes k0-default-pool-35609c18-z7tb ssd=true
node "k0-default-pool-35609c18-z7tb" labeled
  2. Add the nodeSelector key to the pod spec to limit which nodes the pod will run on:

apiVersion: extensions/v1beta1
kind: "DaemonSet"
metadata:
  labels:
    app: nginx
    ssd: "true"
  name: nginx-fast-storage
spec:
  template:
    metadata:
      labels:
        app: nginx
        ssd: "true"
    spec:
      nodeSelector:
        ssd: "true"
      containers:
        - name: nginx
          image: nginx:1.10.0
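
To create the DaemonSet and confirm that its pods only land on nodes carrying the ssd=true label (the file name is assumed):

$ kubectl apply -f nginx-fast-storage.yaml
$ kubectl get pods -l app=nginx -o wide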

  • If the label specified in the nodeSelector is added to a new or existing node, the pod will be created on that node. Similarly, if the label is removed from a node, the pod will also be removed.
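For example, labeling another node schedules a pod there shortly afterwards, and removing the label (note the trailing "-") causes the DaemonSet's pod on that node to be deleted; the node name is a placeholder:

$ kubectl label nodes <node-name> ssd=true
$ kubectl label nodes <node-name> ssd-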
Updating a DaemonSet
  • For k8s < 1.6, updating a DaemonSet meant updating the object and then deleting its pods manually so they would be recreated with the new configuration
  • For k8s >= 1.6, the DaemonSet has an equivalent of the Deployment object's rollout machinery, which manages the update
Updating a DaemonSet by Deleting Individual Pods
  • manually delete the pods associated with a DaemonSet; the controller then recreates them from the updated pod template (see the sketch after this list)
  • delete the entire DaemonSet and create a new one with the updated config. This approach causes downtime, as all the pods associated with the DaemonSet are deleted as well.
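A sketch of the manual approach: after updating the DaemonSet's pod template, delete its pods one at a time and let the controller recreate each from the new template (the label and pod name are placeholders):

$ kubectl get pods -l app=nginx -o wide
$ kubectl delete pod nginx-fast-storage-7b2hd
$ kubectl get pods -l app=nginx -o wide   # the replacement pod runs the updated template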
Rolling Update of a DaemonSet
  • for backwards compatibility, the default update strategy is the delete-and-recreate approach described above (OnDelete)
  • to use a rolling update, set spec.updateStrategy.type: RollingUpdate in the YAML (see the spec fragment below)
  • any change to spec.template or its sub-fields will then initiate a rolling update
  • it's controlled by the following two params:
    • spec.minReadySeconds, which determines how long a Pod must be “ready” before the rolling update proceeds to upgrade subsequent Pods
    • spec.updateStrategy.rollingUpdate.maxUnavailable, which indicates how many Pods may be simultaneously updated by the rolling update.
A higher value for spec.updateStrategy.rollingUpdate.maxUnavailable increases the blast radius if a rollout fails, but decreases the time the rollout takes to complete.
Note: In a rolling update, the pods associated with the DaemonSet are upgraded gradually while some pods are still running the old configuration until all pods have the new configuration.
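Putting these together, the relevant portion of a DaemonSet spec might look like the following; the values are illustrative:

spec:
  minReadySeconds: 30
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1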
To check the status of the rollout, run: kubectl rollout status daemonset/<daemonset-name>
Deleting a DaemonSet
kubectl delete -f daemonset.yml
  • Deleting a DaemonSet deletes all the pods associated with it
  • in order to retain the pods and delete only the DaemonSet, use the --cascade=false flag
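For example, using the DaemonSet from the manifest above (on kubectl 1.20 and newer the flag is spelled --cascade=orphan):

$ kubectl delete daemonset nginx-fast-storage --cascade=false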

