devops interview

The document provides a comprehensive overview of best practices for managing Terraform statefiles, including protection, recovery, and security measures. It covers various aspects of infrastructure management with Terraform, such as importing existing resources, validating EC2 instance types, and provisioning RDS instances. Additionally, it discusses containerization, Kubernetes setup, CI/CD pipeline tools, and Python programming, highlighting automation examples and common bash commands.

1. Terraform statefile: How will you protect it? If it gets corrupted, how will you recover? Why don't we store it locally? How will you secure it? What about ownership of the bucket?
 Protection: The statefile contains sensitive information about your infrastructure, so it must be protected. Store it in a remote backend such as AWS S3 with versioning enabled, and add state locking (e.g., a DynamoDB table) so concurrent runs cannot corrupt it. A backend sketch follows this list.

 Recovery: If the statefile is corrupted, you can roll back to an earlier version of the versioned state in S3, or use terraform state commands (and terraform import for individual resources) to rebuild it. Regular backups of the statefile minimize data loss.

 Why not store locally?: A local statefile makes teamwork difficult because there is no centralized access or locking, and it is easily lost or corrupted along with the machine it lives on.

 Security: Secure the statefile by enabling encryption at rest (S3 server-side encryption), controlling access through IAM policies, and using bucket policies to restrict who can read or write the state. S3 Object Ownership settings (e.g., "bucket owner enforced") determine who owns uploaded objects, while bucket and IAM policies control which users or roles can access the state.
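A minimal backend sketch (the bucket, key, and table names are placeholders, not a prescribed layout):

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"     # hypothetical bucket name
    key            = "prod/terraform.tfstate" # path of the state object
    region         = "us-east-1"
    encrypt        = true                     # server-side encryption at rest
    dynamodb_table = "terraform-locks"        # hypothetical table for state locking
  }
}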

2. How do you get infrastructure defined in the UI under Terraform management? How will you pass values to Terraform?
 Get infrastructure into Terraform: To import existing resources into Terraform, use the terraform import command. It brings the current state of a resource under Terraform management; note that a matching resource block must already exist in your configuration before you import. Example:

terraform import aws_instance.example i-12345678

 Passing values to Terraform: Values are passed through variables defined in .tf files or terraform.tfvars files. You can also pass them on the command line with the -var flag or via environment variables, as sketched below.
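A minimal sketch (the variable name "environment" is hypothetical):

variable "environment" {
  type    = string
  default = "dev"
}

The value can then come from terraform.tfvars (environment = "prod"), the command line (terraform apply -var="environment=prod"), or an environment variable (export TF_VAR_environment=prod).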

3. You have to provision EC2 servers of types t2 & t5; if someone passes t3 & t4, creation should fail (validation block)
To ensure that only specific EC2 instance families are allowed (t2 and t5 here), add a validation block that checks the family prefix of the instance type:
hclCopyEditvariable "instance_type" {
type = string
description = "EC2 instance type"
validation {
condition = contains(["t2", "t5"], var.instance_type)
error_message = "Only t2 and t5 instance types are allowed"
}
}

4. Have you provisioned a DB? What RDS have you provisioned? Highlight the config in main.tf
Example of an RDS instance configuration in Terraform (main.tf):

hclCopyEditresource "aws_db_instance" "example" {


identifier = "mydbinstance"
instance_class = "db.t2.micro"
engine = "mysql"
username = "admin"
password = "password123"
db_name = "mydb"
allocated_storage = 20
publicly_accessible = true
skip_final_snapshot = true
}
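The hardcoded password and publicly_accessible = true above are for illustration only. A minimal hardening sketch using a sensitive variable (the variable name is illustrative):

variable "db_password" {
  type      = string
  sensitive = true   # redacted from plan/apply output
}

Reference it with password = var.db_password and supply the value via TF_VAR_db_password or a secrets manager; production databases would normally also keep publicly_accessible = false.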

5. Have you worked on Lambda functions? Types of EC2?
 Lambda Function: Yes. Lambda is a serverless compute service where you focus only on code, not on the underlying infrastructure. Functions can be written in languages like Python, Node.js, etc.; a minimal handler sketch follows.
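A minimal Python handler sketch (the return shape follows the standard Lambda proxy convention; the body text is arbitrary):

def lambda_handler(event, context):
    # 'event' carries the trigger payload; 'context' exposes runtime metadata
    return {"statusCode": 200, "body": "Hello from Lambda"}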

 Types of EC2: EC2 instance types are categorized by use case:

o General Purpose (e.g., t2, t3)

o Compute Optimized (e.g., c5)

o Memory Optimized (e.g., r5)

o Storage Optimized (e.g., i3)

o Accelerated Computing (e.g., p3, inf1)


6. You have lost a PEM key file; how do you recover it?
Unfortunately, if you lose the PEM key file for an EC2 instance, you can't recover it directly. However, you can regain access:
1. Stop the instance, detach its root volume, attach it to a helper instance, append a new public key to ~/.ssh/authorized_keys, then reattach the volume and start the instance again.

2. Use EC2 Instance Connect (it pushes a temporary key), if the AMI supports it.

3. Use AWS Systems Manager Session Manager (if the SSM agent and instance profile are configured) to open a shell without needing the key, as sketched below.
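A sketch of the Session Manager route (the instance ID is a placeholder; this assumes the SSM agent is running and the instance profile grants SSM permissions):

aws ssm start-session --target i-0123456789abcdef0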

7. You have an opportunity to containerize 20 applications; how will you do it?
To containerize 20 applications:
1. Write a Dockerfile for each application.

2. Build a Docker image for each application.

3. Use Docker Compose for multi-container management, defining a service for each app in a docker-compose.yml file (see the sketch after this list).

4. Push the images to a Docker registry (e.g., Docker Hub, AWS ECR).

5. Deploy the applications using Kubernetes or Docker Swarm to orchestrate the containers in production.
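A minimal docker-compose.yml sketch for two of the services (the names, paths, ports, and image tags are hypothetical):

services:
  web:
    build: ./web              # Dockerfile lives in ./web
    ports:
      - "8080:8080"
  api:
    image: myregistry/api:1.0 # hypothetical pre-built image
    depends_on:
      - web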

8. Dockerfile: In what language is it written? Write a Dockerfile that uses WORKDIR and EXPOSE and runs under a specific user. How will you do it?
A Dockerfile is written in Docker's own simple, declarative instruction syntax that defines how an image is built. Example for a Python app:
FROM python:3.8-slim

# Set working directory
WORKDIR /app

# Copy requirements and install dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy app files
COPY . .

# Expose port 5000
EXPOSE 5000

# Switch to non-root user
USER nobody

# Run the application
CMD ["python", "app.py"]

 WORKDIR: Sets /app as the working directory.

 EXPOSE: Documents that the container listens on port 5000; the port is actually published at run time (e.g., docker run -p 5000:5000).

 USER: Runs the application as the nobody user for better security.

9. What are the things you have set up in K8s? What are
the core controllers of K8s? Diff between Deployment &
ReplicaSet? Diff between Deployment & DaemonSet
 Core controllers:

o ReplicaSet: Ensures a specified number of pod replicas are running at any given time.

o Deployment: Manages ReplicaSets and provides declarative updates to pods.

o DaemonSet: Ensures that a pod runs on all (or specific) nodes in the cluster.

o StatefulSet: Manages stateful applications with stable identities and storage.

 Difference:

o Deployment vs ReplicaSet: A Deployment manages ReplicaSets to handle rolling updates and scaling, while a ReplicaSet on its own only ensures the desired number of replicas. You normally create Deployments rather than bare ReplicaSets; a minimal Deployment sketch follows this list.

o Deployment vs DaemonSet: A Deployment runs a chosen number of replicas of a (typically stateless) app, while a DaemonSet ensures one pod per node (for logging agents, monitoring agents, etc.).
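A minimal Deployment sketch (the app name and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3                 # the ReplicaSet it creates keeps 3 pods running
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: nginx:1.25   # placeholder image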

10. Pods are not getting deployed to certain nodes; what is the reason?
Pods might not get scheduled onto certain nodes due to:
 Resource Constraints: The nodes might not have enough allocatable CPU or memory.

 Node Affinity / Selectors: If node affinity or nodeSelector rules are set, pods are only scheduled onto matching nodes.

 Taints and Tolerations: Taints on a node repel pods unless the pods carry matching tolerations (a cordoned node is similarly marked unschedulable).

The kubectl commands below help diagnose which of these applies.
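A quick diagnostic sketch (names in angle brackets are placeholders):

kubectl describe pod <pod-name>                    # the Events section explains scheduling failures
kubectl describe node <node-name> | grep -i taint  # list taints on a suspect node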

11. Have you worked on any monitoring tools? How do Grafana & Prometheus work?
 Grafana is used to visualize metrics and logs. It integrates with Prometheus to fetch time-series data and displays it in dashboards.

 Prometheus scrapes metrics from configured targets (e.g., Kubernetes, EC2 instances) and stores them in its time-series database; Grafana then queries Prometheus to visualize the data.
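A minimal prometheus.yml scrape sketch (the job name and target are placeholders; port 9100 assumes a node_exporter):

scrape_configs:
  - job_name: "node"
    static_configs:
      - targets: ["localhost:9100"]

Grafana is then pointed at Prometheus as a data source and queried with PromQL, e.g. rate(node_cpu_seconds_total[5m]).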

12. How will you create a Kubernetes cluster?
You can create a Kubernetes cluster using:
1. Managed services like Amazon EKS or Google GKE.

2. Kubeadm: A tool for bootstrapping a Kubernetes cluster manually (a rough sketch follows this list).

3. Minikube: For local Kubernetes development.
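A rough kubeadm sketch (the pod CIDR suits Flannel; values in angle brackets are placeholders printed by kubeadm init):

# On the control-plane node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube && sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
# Install a pod network add-on (e.g., Flannel or Calico), then on each worker:
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>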

13. Does Kubernetes give any option to check the health of containers or applications?
Yes, Kubernetes provides liveness and readiness probes to check container health (a sketch follows):
 Liveness Probe: Checks whether the container is still healthy; on failure the kubelet restarts the container.

 Readiness Probe: Checks whether the container is ready to accept traffic; until it passes, the pod is kept out of Service endpoints.
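A sketch of both probes on a container (the paths and port are placeholders):

livenessProbe:
  httpGet:
    path: /healthz
    port: 5000
  initialDelaySeconds: 10
  periodSeconds: 5
readinessProbe:
  httpGet:
    path: /ready
    port: 5000
  periodSeconds: 5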

14. What is your CI/CD pipeline tool? Which type of script are you using? How will you secure a Jenkins pipeline?
 CI/CD Tool: Jenkins (or GitLab CI, CircleCI, etc.)

 Script Type: Groovy, shell scripts, or Pipeline-as-Code using Jenkinsfiles.

 Securing a Jenkins Pipeline: Use credentials binding and environment variables via the Credentials plugin so secrets never appear in the Jenkinsfile or console output, and restrict who can modify jobs and agents.
15. Does Jenkins provide any option to store secrets & credentials?
Yes, Jenkins provides the Credentials plugin to store secrets securely:
 You can store API keys, passwords, and other secrets in Jenkins' Credentials Manager and reference them securely in your pipeline, as sketched below.
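A usage sketch with the Credentials Binding plugin (the credential ID "my-api-key" and the URL are hypothetical):

withCredentials([string(credentialsId: 'my-api-key', variable: 'API_KEY')]) {
    // Single quotes keep Groovy from interpolating the secret into the log
    sh 'curl -H "Authorization: Bearer $API_KEY" https://example.com/api'
}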

16. I have multiple nodes and want to run one stage on one node & another stage on another node; how do I do it? Matrix-based security: based on what will you provide access?
Use the matrix directive in a declarative Jenkins pipeline to fan stages out across nodes. (Matrix-based security is a separate Jenkins authorization scheme that grants permissions per user or group.)
pipeline {
    agent none
    stages {
        stage('Per-node work') {
            matrix {
                axes {
                    axis {
                        name 'NODE'
                        values 'node1', 'node2'
                    }
                }
                agent { label "${NODE}" }  // each matrix cell runs on its matching node
                stages {
                    stage('Run on node') {
                        steps {
                            echo "Running on ${NODE}"
                        }
                    }
                }
            }
        }
    }
}

 The matrix directive runs the same stages once per axis value, here once per node; alternatively, you can pin individual stages with agent { label 'node1' }.

17. Have you worked on Python? Examples of data types? Exception handling?
Yes, Python has various data types such as:
 Integers, Strings, Lists, Tuples, Dictionaries, Sets.
Example of exception handling in Python:

try:
    a = 10 / 0
except ZeroDivisionError as e:
    print(f"Error: {e}")
finally:
    print("This will always execute")

18. What have you automated in your organization?
Examples of automation in organizations:
 CI/CD Pipelines: Automated deployments using Jenkins, GitLab CI.

 Infrastructure Provisioning: Using Terraform to manage cloud resources like EC2, S3, RDS.

 Monitoring and Alerts: Using Prometheus and Grafana for monitoring, Slack for alerting.

19. In bash, what are the common commands you will use?
Common bash commands include:
 ls: List files

 cd: Change directory

 cat: View file content

 grep: Search for text in files

 awk, sed: Text processing tools

For fetching IP addresses from log files, a script could look like this:

for file in *.log; do
    grep -oP '\d+\.\d+\.\d+\.\d+' "$file" >> ips.txt
done

This will extract all IP addresses from .log files into a new file ips.txt.
