Creating Docker Images


The distributed systems that can be deployed by k8s are made up primarily of application container images.
Applications = language runtime + libraries + source code
  • Problems occur when you deploy an application to a machine that is missing a shared library the application depends on. Such a program, naturally, has trouble executing.
  • Deployment often involves running imperative scripts, so a lot of failure cases end up unhandled.
  • The situation becomes messier if multiple apps deployed to a single machine use different versions of the same shared library.
Containers help solve the problems described above.

Container images
A container image is a binary package that encapsulates all the files necessary to run an application inside of an OS container. It bundles the application along with its dependencies (runtime, libraries, config, env variables) into a single artefact under a root filesystem. A container is a runtime instance of the image: what the image becomes in memory when executed. We can obtain container images in two ways:
  • pull a pre-existing image from a container registry (a repository of container images to which people can push images and from which others can pull them)
  • build your own locally
Once an image is present on a computer, it can be run to get an application running inside an OS container.
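For instance, pulling a public image from Docker Hub (the default registry) looks like this; the commands assume Docker is installed and its daemon is running:

```shell
# Download the official alpine image from the default registry
docker pull alpine:latest

# List the images now present on this machine
docker images
```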
The Docker Image Format
  • De facto standard for images
  • made up of a series of filesystem layers, where each layer adds, removes, or modifies files from the previous layer; this is known as an overlay filesystem.
Container images also have a configuration file that specifies how to set up the environment (networking, namespaces), the entry point for running the container, resource limits, privileges, etc.
The Docker image format = container root file system + config file
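That configuration is visible with the Docker CLI; for example (assuming the alpine image is present locally):

```shell
# Print the image's config as JSON, including Entrypoint, Env, and the layer digests
docker image inspect alpine:latest
```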
Containers can be of two types:
  1. System containers: like VMs, run a full boot process and have system services such as ssh, cron, etc.
  2. Application containers: commonly run only a single app
Building Application Images with Docker
Dockerfiles
It is a file that automates the building of container images. Read more here. The book uses a demo app; the source code is available on GitHub. To run the kuard (Kubernetes Up and Running demo) image, follow these steps:
  1. Ensure you have Docker installed and running
  2. Clone the kuard repo
  3. Run make build to generate a binary
  4. Create a file named Dockerfile (no extension) containing the following:
FROM alpine
COPY bin/1/amd64/kuard /kuard
ENTRYPOINT ["/kuard"]
  5. Next, we build the image using the following command: $ docker build -t kuard-amd64:1 . The -t flag specifies the name and tag of the image (tags are a way to version Docker images), and the . tells Docker to use the Dockerfile present in the current directory to build the image.
alpine, as mentioned in the Dockerfile, is a minimal Linux distribution used as a base image. More info can be found here.
Image Security
Don’t ever put passwords or secrets in any layer of your container image. Deleting a secret from a later layer does not help: the earlier layer that added it still contains it, and anyone with the image can recover it.
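A sketch of the problem (the file and command names here are hypothetical): each Dockerfile instruction creates a new layer, so a later rm only masks the file, and the bytes of secret.key remain stored in the image.

```dockerfile
FROM alpine
# BAD: this COPY creates a layer that permanently contains the secret
COPY secret.key /secret.key
RUN setup-using-secret /secret.key
# This rm only hides the file in a new layer; the earlier layer still holds it
RUN rm /secret.key
```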
Optimising Image Sizes
  • If a file is present in a preceding layer, it’ll be present in the image that uses that layer even though it’ll be inaccessible.
  • Every time a layer changes, every layer that comes after it also changes, so that layer and all layers built on top of it must be rebuilt, re-pushed, and re-pulled. As a rule of thumb, order layers from least likely to change to most likely to change to avoid a lot of pushing and pulling.
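A minimal sketch of this rule of thumb, assuming a Node.js app (the base image and file names are illustrative): the dependency layers change rarely and stay cached, while the frequently edited source code lands in the last layer.

```dockerfile
FROM node:16
WORKDIR /app
# Dependency manifests change rarely: keep them in early layers
COPY package.json package-lock.json ./
RUN npm install
# Source code changes often: put it last so only this layer rebuilds
COPY . .
ENTRYPOINT ["node", "server.js"]
```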
Storing Images in a Remote Registry
To easily share container images, promote reuse, and make them available on more machines, we use what’s called a registry: a remote location to which we can push our images and from which other people can pull them. Registries are of two types:
  1. Public: anyone can download images
  2. Private: authorisation is needed to download images
The book uses the Google Container Registry whereas I used the Docker Hub. After creating an account on Docker Hub, run the following commands:
  1. $ docker login
  2. $ docker tag kuard-amd64:1 $DockerHubUsername/kuard-amd64:1
  3. $ docker push $DockerHubUsername/kuard-amd64:1
Replace $DockerHubUsername with your username in the commands above.
The Docker Container Runtime
The default container runtime used by Kubernetes is Docker.
Running Containers with Docker
Run the following command to run the container that you pushed to Docker Hub in the previous step: $ docker run -d --name kuard -p 8080:8080 $DockerHubUsername/kuard-amd64:1 Let’s unpack that command one flag at a time:
  • -d tells Docker to run the container in detached mode. In this mode, the container doesn’t attach its output to your terminal and runs in the background.
  • --name gives a name to your container. Keep in mind it doesn’t alter the name of your image in any way.
  • -p enables port forwarding. It maps port 8080 on your local machine to your container’s port 8080. Each container gets its own IP address and doesn’t have access to the host network (your machine in this case) by default. Hence, we have to explicitly expose the port.
To stop & remove the container, run:
$ docker stop kuard
$ docker rm kuard
Docker allows controlling how many resources your container can use (memory, swap space, CPU) etc., using various flags that can be passed to the run command.
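For example, the kuard container from above could be capped like this (a sketch; the limit values are arbitrary):

```shell
# --memory caps RAM at 256 MB, --memory-swap caps RAM + swap combined at 1 GB,
# and --cpus limits the container to one CPU's worth of time
docker run -d --name kuard \
  --memory 256m \
  --memory-swap 1G \
  --cpus 1.0 \
  -p 8080:8080 \
  $DockerHubUsername/kuard-amd64:1
```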
Cleanup
Images can be removed using the following command: $ docker rmi <tag-name/image-id>
Docker IDs can be shortened as long as they remain unique.
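For instance, if docker images lists an image whose ID begins with 9c6f0724 (a made-up ID for illustration), any prefix that is unique among your local images is enough:

```shell
# Equivalent to passing the full 64-character image ID
docker rmi 9c6f
```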

