Unit-1-2-QBAnswers
The main difference between DevOps and waterfall is the approach to software development.
Agile focuses on continuous delivery of features, while waterfall focuses on delivering a complete
product. DevOps uses an agile approach and emphasizes communication and collaboration
between developers and operations teams.
DevOps also focuses on culture and people. It emphasizes collaboration between teams and a
shared responsibility for the success of the project. Waterfall (and, to a lesser extent, agile)
tends to have siloed teams, each with responsibility for a specific part of the project.
The culture of collaboration: This is the very essence of DevOps. Teams are no longer
separated by specialization (developers, Ops, testers, and so on); instead, people are brought
together in multidisciplinary teams that share the same objective: to deliver added value to the
product as quickly as possible.
Processes: For rapid deployment, teams must follow development processes from agile
methodologies with iterative phases that allow for better functionality quality and rapid feedback.
These processes should not only be integrated into the development workflow with continuous
integration but also into the deployment workflow with continuous delivery and deployment.
The DevOps process is divided into several phases: the planning and prioritization of
functionalities; development; continuous integration and delivery; continuous deployment; and
continuous monitoring.
Tools: The choice of tools and products used by teams is very important in DevOps.
When teams were separated into Dev and Ops, each team used their specific tools—deployment
tools for developers and infrastructure tools for Ops—which further widened communication gaps.
● Developers need to integrate with the monitoring tools used by Ops teams to detect performance
problems as early as possible, and with the security tools provided by Ops to protect access to
various resources.
● Ops, on the other hand, must automate the creation and updating of the infrastructure and
integrate the code into a code manager; this is called Infrastructure as Code,
but this can only be done in collaboration with developers who know the
infrastructure needed for applications.
● Ops must also be integrated into application release processes and tools.
● It is important that the package generated during CI and deployed during CD is the same one
that will be installed in all environments, up to production.
● There may be configuration file transformations that differ depending on
the environment, but the application code (binaries, DLL, and JAR) must
remain unchanged.
● If changes (improvements or bug fixes) are to be made to the code
following verification in one of the environments, once done, the
modification will have to go through the CI and CD cycle again.
● Tools set up for CI/CD:
● A package manager: This constitutes the storage space for the packages generated by CI
and retrieved by CD. It must support feeds, versioning, and different package types.
Tools on the market include Nexus, ProGet, Artifactory, and Azure Artifacts.
● A configuration manager: This allows you to manage configuration changes during CD.
Most CD tools include a configuration mechanism with a system of variables.
● In CD, the deployment of the application in each staging environment is
triggered as follows:
● Triggered automatically, following a successful execution in a previous
environment.
● For example: Deployment in the pre-production environment is
automatically triggered when the integration tests have been successfully
performed in a dedicated environment.
● Triggered manually, for sensitive environments such as the production environment, following
manual approval by a person responsible for validating that the application functions properly in
an environment.
● What is important in a CD process is that the deployment to the production environment, that is,
to the end user, is triggered manually by approved users.
● The CD process is a continuation of the CI process.
● The chain of CD steps is automatic for staging environments but manual for production
deployments.
● Package is generated by CI and is stored in a package manager.
● The same package is deployed in different environments.
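The trigger rules above can be sketched as a multi-stage pipeline definition. The following is an illustrative fragment in Azure Pipelines YAML syntax, not taken from the source; the stage names, environment names, and echo placeholders are assumptions:

```yaml
# Illustrative CD sketch (Azure Pipelines syntax); all names are assumptions.
stages:
  - stage: Build                  # CI: build, test, and publish the package
    jobs:
      - job: BuildJob
        steps:
          - script: echo "build, test, and publish the package"
  - stage: Staging                # triggered automatically after a successful Build
    dependsOn: Build
    jobs:
      - deployment: DeployStaging
        environment: staging
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy the same package to staging"
  - stage: Production
    dependsOn: Staging
    jobs:
      - deployment: DeployProduction
        environment: production   # a manual approval check on this environment
        strategy:                 # gates the production deployment
          runOnce:
            deploy:
              steps:
                - script: echo "deploy the same package to production"
```

The manual gate for production is configured as an approval check on the production environment in Azure DevOps, so the pipeline definition itself stays fully automated.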
9. Describe different IaC Languages used in DevOps. Explain the same with a simple code
snippet.
Scripting types: These are scripts such as Bash, PowerShell, or any other
language that uses the different clients (SDKs) provided by the cloud
provider.
● Example: commands that create a resource group in Azure.
● Using the Azure CLI:
● az group create --location westeurope --name MyAppResourcegroup
● Using Azure PowerShell:
● New-AzResourceGroup -Name MyAppResourcegroup -Location westeurope
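Besides scripting types, IaC can also be written in declarative types, where you describe the desired end state of the infrastructure rather than the commands to run. A minimal sketch in Terraform (HCL) that declares the same resource group; the resource label is an assumption:

```hcl
# Illustrative declarative IaC (Terraform); the "example" label is an assumption
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "MyAppResourcegroup"
  location = "West Europe"
}
```

With a declarative language, running the tool repeatedly converges the infrastructure to the declared state instead of re-executing imperative commands.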
10. List and explain The IaC Topologies. (study from ppt)
● The deployment and provisioning of the infrastructure (Setting up servers from scratch)
● The server configuration and templating (modifying server resources and features)
● The containerization (deploying an app on the server as containers)
● The configuration and deployment in Kubernetes.
11. Describe the IaC best practices followed in the DevOps pipeline.
● Everything must be automated in the code
● The code must be in a source control manager
● The infrastructure code must be with the application code
● Separation of roles and directories
● Integration into a CI/CD process
● The code must be idempotent
● To be used as documentation
● The code must be modular
● Having a development environment
12. What is Ansible? List and explain the reasons for using it in DevOps.
Ansible is an open source automation and orchestration tool for software provisioning, configuration
management, and software deployment. It can easily run on and configure Unix-like systems as well as
Windows systems to provide infrastructure as code. It contains its own declarative
language for system configuration and management. Why use Ansible?
● Free to use.
● No need for any special system administrator skills to install and use Ansible.
● Its modularity regarding plugins, modules, inventories, and playbooks makes Ansible useful for
orchestrating large environments.
● Lightweight, consistent, and no constraints regarding the OS or underlying hardware.
● Secure due to its agentless capabilities and use of OpenSSH security features.
● It has a template engine and a vault to encrypt/decrypt sensitive data.
● Comprehensive documentation and easy to learn structure and configuration.
13. What are Pull & Push Configuration Tools? With a neat diagram explain them.
Pull Based Configuration Management Tool
In this type of configuration management tool, a small piece of software (called an agent or client) is
installed on every node. This agent/client periodically polls the main server for configuration data,
pulls any changes, and applies them to the node.
Chef & Puppet are good examples of such pull based configuration management tools.
Push Based Configuration Management Tool
In this type of configuration management tool, the main server (where the configuration data is
stored) pushes the configuration to the nodes (hence the name). It is the main server that
initiates communication, not the nodes. This means that an agent/client may or may not be
installed on each node.
Ansible is an example of a push based configuration management tool that doesn’t need an agent
to be installed on the nodes. SaltStack is an example of a push based configuration management
tool that needs an agent (minion) to be installed on the nodes. In both cases, it's the main server
that starts the communication and sends the configuration data to the nodes without the nodes
asking for it.
15. Explain the working of Ansible Workflow with a neat diagram.
● The Management Node is the controlling node that controls the entire execution of the
playbook.
● The inventory file provides the list of hosts where the Ansible modules need to be run.
● The Management Node makes an SSH connection, executes the small modules on
the host machines, and installs the software.
● Ansible removes the modules once they have run.
● It connects to the host machine, executes the instructions, and, if the software is
successfully installed, removes that code.
17. What is the need for using an inventory file in Ansible? Explain the different types of
inventories used in Ansible.
The inventory contains the list of hosts on which Ansible will perform
administration and configuration actions.
● Dynamic inventory:
○ The list of hosts is dynamically generated by an external script (Ex: With a Python
script).
○ The dynamic inventory is used when the addresses of the hosts are not known in advance
(for example, when VMs are created dynamically in the cloud).
● Static inventory:
○ Hosts are listed in a text file in INI (or YAML) format.
○ This is the basic mode of Ansible inventory.
○ The static inventory is used in cases where we know the host addresses
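A minimal static inventory sketch in INI format; the group names, hostnames, and IP address below are illustrative assumptions:

```ini
# Illustrative static inventory (INI format); hosts and groups are assumptions
[webservers]
web1.example.com
web2.example.com

[database]
db1.example.com ansible_host=192.168.1.10
```

Ansible is then pointed at this file with the -i option, and plays target the [webservers] or [database] groups.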
18. What is the purpose of using a Playbook in the Ansible? Explain with a sample code
snippet.
● Playbook is one of the essential elements of Ansible.
● It contains the code of the actions or tasks that need to be performed to
configure or administer a VM.
● Once the VM is provisioned, it must be configured, with the installation of
all of the middleware needed to run the applications that will be hosted on
this VM.
● It is also necessary to perform administrative tasks concerning the configuration of
directories and their access.
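The points above can be illustrated with a minimal playbook sketch that installs and starts a web server on a group of hosts; the group name, package, and use of become are assumptions:

```yaml
# Illustrative Ansible playbook; group name, package, and become are assumptions
---
- hosts: webservers
  become: true                  # run tasks with elevated privileges
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
```

Running ansible-playbook with this file against an inventory applies the same configuration to every host in the webservers group.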
19. With an example explain how you improve the Playbooks with Roles in Ansible.
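Roles improve playbooks by splitting a long task list into a reusable, conventional directory structure that can be shared across playbooks. A minimal sketch, assuming a role named webserver (the name and layout follow Ansible's standard role convention):

```yaml
# Standard role layout (directory tree shown as comments):
#   roles/
#     webserver/
#       tasks/main.yml       <- tasks moved out of the playbook
#       handlers/main.yml    <- handlers, for example a service restart
#       templates/           <- Jinja2 templates for configuration files
#       vars/main.yml        <- role variables
#
# The playbook then shrinks to a role reference:
---
- hosts: webservers
  become: true
  roles:
    - webserver     # illustrative role name
```

Everything previously written inline in the playbook moves into the role's tasks/main.yml, so the same role can be reused by any playbook that needs a web server.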
22. What are the concerns with Automating configuration with Ansible? How does the
Packer tool help to overcome this?
Configuring each newly provisioned VM with Ansible at deployment time is repetitive and slow,
and the configuration must be re-applied for every new machine. Packer helps overcome this by
baking the configuration into a reusable image. Packer is an open-source VM image creation tool
from HashiCorp. It helps you automate the process of virtual machine image creation on cloud and
on-premises virtualized environments. All the manual steps performed to create a virtual machine
image can be automated through a simple Packer config template.
OR -
Creating a generalized VM image can be useful for a number of reasons. Some benefits include:
1. Reduced setup time: By creating a pre-configured VM image, you can deploy new virtual machines more
quickly and easily. This can save time and effort compared to manually installing and configuring the
operating system and all necessary software on each individual VM.
2. Consistency: A generalized VM image ensures that all of your virtual machines are configured in the
same way, which can be helpful for maintaining consistency across your infrastructure.
3. Reusable: A generalized VM image can be used as a starting point for multiple virtual machines, which
can be helpful if you need to deploy a large number of VMs with similar configurations.
4. Version control: By using a tool like Packer to create your VM images, you can version control your
images and track changes over time. This can be helpful for managing and maintaining your
infrastructure.
Overall, creating a generalized VM image can help you to streamline the process of deploying and managing
virtual machines, and can make it easier to maintain a consistent and reliable infrastructure.
25. What are the uses of Variables in Packer template? Demonstrate how to use them in
different sections with code snippets.
In the Packer template, you need to use values that are not static in the code.
● The variables section is optional.
● It is used to define variables that will be filled either as command-line arguments
or as environment variables.
● These variables are then used in the builders or provisioners sections.
Examples:
● The access_key variable is filled with the ACCESS_KEY environment variable.
● The image_folder variable is filled with the /image value.
● The vm_size variable holds the value of the VM image size.
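The examples above can be sketched in a legacy Packer JSON template. This is an illustrative fragment, not from the source; the azure-arm builder is reduced to the fields relevant to variables, and the field values are assumptions:

```json
{
  "variables": {
    "access_key": "{{env `ACCESS_KEY`}}",
    "image_folder": "/image",
    "vm_size": ""
  },
  "builders": [
    {
      "type": "azure-arm",
      "client_secret": "{{user `access_key`}}",
      "vm_size": "{{user `vm_size`}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["mkdir -p {{user `image_folder`}}"]
    }
  ]
}
```

The empty vm_size variable would then be supplied as a command-line argument, for example packer build -var "vm_size=Standard_DS2_v2" template.json (illustrative value), after checking the template with packer validate.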
26. Explain how to build a VM image using Packer with a neat diagram.
2. What are the different types of Version Control System? Explain with a neat diagram.
There are two types of VCS:
● Centralized Version Control System (CVCS): With centralized version control systems, you
have a single “central” copy of your project on a server and commit your changes to this
central copy.
● Distributed Version Control System (DVCS): With distributed version control systems
(DVCS), you don't rely on a central server to store all the versions of a project’s files.
Instead, you clone a copy of a repository locally so that you have the full history of the
project. Two common distributed version control systems are Git and Mercurial.
3. List and explain the different Git vocabulary used.
● Initialize - The git init creates an empty Git repository or re-initializes an existing one.
● Add - The git add command adds new or changed files in your working directory
● Status - The git status command lists all the modified files which are ready to be added to
the local repository.
● Commit - The "commit" command is used to save your changes to the local repository.
● Pull - The git pull origin main command fetches changes from the remote repo and merges them into the local repo.
● Push - upload local repository content to a remote repository
● Branching - creating branches to make independent changes
● Merging - procedure to connect the forked history. It joins two or more development history
together
● Rebasing - moving a sequence of commits from one branch onto a new base commit, producing a linear history.
$ git add .
v. Creating a commit
git commit: This will commit the staged snapshot.
$ git commit -a -m "Message"
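The commands above can be strung into a minimal end-to-end workflow. The following is an illustrative sketch (the directory, file, and commit identity are assumptions), assuming git is installed:

```shell
# Illustrative local Git workflow; directory, file, and identity are assumptions
mkdir -p demo && cd demo
git init -q                               # Initialize an empty repository
git config user.email "dev@example.com"   # hypothetical identity for the demo
git config user.name "Dev"
echo "hello" > README.md
git add .                                 # stage the new file
git status --short                        # prints "A  README.md" (staged)
git commit -q -m "Initial commit"         # save the snapshot locally
git log --oneline                         # one line per commit
```

From here, git push would upload the commit to a remote repository, and git pull would bring down changes made by others.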
● Gitflow is an alternative Git branching model that involves the use of feature branches and
multiple primary branches.
● Gitflow has numerous, longer-lived branches and larger commits.
● Developers create a feature branch and delay merging it to the main trunk branch until the
feature is complete
● These long-lived feature branches require more collaboration to merge and have a higher
risk of deviating from the trunk branch.
● They can also introduce conflicting updates.
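As an illustrative sketch (the repository, branch names, and commit identity are assumptions), a Gitflow feature-branch cycle looks like this, assuming git is installed:

```shell
# Illustrative Gitflow feature cycle; names and identity are assumptions
mkdir -p gitflow-demo && cd gitflow-demo
git init -q
git config user.email "dev@example.com"   # hypothetical identity for the demo
git config user.name "Dev"
echo "v1" > app.txt
git add . && git commit -q -m "Initial commit"
git checkout -q -b develop                # long-lived integration branch
git checkout -q -b feature/login develop  # feature branch off develop
echo "login" >> app.txt
git add . && git commit -q -m "Add login feature"
git checkout -q develop
git merge --no-ff -q -m "Merge feature/login" feature/login
git branch -d feature/login               # delete the merged feature branch
```

The --no-ff merge keeps a merge commit in the history, which is characteristic of Gitflow; the longer the feature branch lives, the more such merges risk conflicts with the trunk.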
7. What is the work of a CI Server? Explain the different types of CI Servers. Explain with
examples.
CI is achieved by an automated suite of tasks executed on a server, following patterns similar to
those run on a developer's laptop with the necessary continuous integration tools; this
server is called the CI server.
● CI servers (also known as build servers) automatically compile, build, and test every new
version of code committed to the central team repository.
● CI server ensures that the entire team is alerted any time the central code repository
contains broken code.
The type of CI servers are :
1. On-premise type - installed in the company data center such as Jenkins or TeamCity
2. Cloud type - such as Azure Pipelines or GitLab CI.
8. What is Continuous Delivery (CD)? Explain the different tools used for setting up a CI/CD
pipeline.
Once the application has been packaged and stored in a package manager during CI, the
Continuous Delivery process is ready to retrieve the package and deploy it in the different
environments, except the production environment.
Advantages of Azure Artifacts:
● It is fully integrated with other Azure DevOps services such as Azure Pipelines, which
allows managing CI/CD pipelines.
● In Azure Artifacts, there is also a type of package called universal packages that allows
storing all types of files (called a package) in a feed that can be consumed by other services
or users.
● Azure Artifacts is in SaaS offering mode, so there is no installation or infrastructure to
manage.
The files can contain different code and be very large, requiring multiple builds. However,
a single Jenkins server cannot handle multiple files and builds simultaneously; for that, a
distributed Jenkins architecture is necessary.
In this architecture, the Jenkins Master environment (shown on the left side of the diagram)
pushes work down to multiple Jenkins Slave environments to distribute
the workload.
That lets you run multiple builds, tests, and product environments across the entire
architecture. Jenkins Slaves can run different build versions of the code for
different operating systems, and the Jenkins Master controls how each of the builds
operates.
Azure Repos is a set of version control tools that you can use to manage your code.
(write about version control system)
Step/task: Publish
Description: Creates a ZIP package that contains the binary files of the project.

Step/task: Publish Build Artifacts
Description: Defines an artifact that is our ZIP of the application, which we will publish in
Azure DevOps, and which will be used in the deployment release, as seen in the previous
Use package manager section.