DevOps Sheet
2. Version Control
● Git
● GitHub – Cloud-based Git repository hosting
● GitLab – Git repository with built-in CI/CD pipelines
● Bitbucket – Git repository with Jira integration
● Terraform
● Ansible (playbooks, roles, inventory)
● CloudFormation (stacks, templates)
6. Configuration Management
● Networking Basics
● Ports
● Nginx (Reverse Proxy & Load Balancing)
● Apache (reverse proxy, load balancing)
● HAProxy (Load Balancing)
● Kubernetes Ingress Controller (For Managing External Traffic)
● Practical Examples: Docker for Nginx, Apache, HAProxy, and Kubernetes Ingress
2. NoSQL Databases
1. File Management
pwd
top
df -h # Human-readable format
uptime
3. Package Management (Ubuntu/Debian)
ping google.com
curl https://example.com
ifconfig
6. Process Management
7. Disk Management
8. Text Processing
awk '{print $1}' file.txt # Print the first column of each line
rsync -avz /source /destination # Sync with compression and archive mode
15. Others
find /var -name "*.log" # Find all .log files under /var
echo "new data" | tee file.txt # Write output to file and terminal
env
2. Shell scripting
1. Automating Server Provisioning (AWS EC2 Launch)
#!/bin/bash
# Variables
INSTANCE_TYPE="t2.micro"
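AMI_ID="ami-xxxxxxxx"   # Placeholder AMI ID (assumption - replace with a valid AMI for your region)
KEY_NAME="my-key"       # Assumed key pair name
# Launch the instance with the AWS CLI (a minimal sketch; requires configured credentials)
aws ec2 run-instances --image-id "$AMI_ID" --instance-type "$INSTANCE_TYPE" --key-name "$KEY_NAME" --count 1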
#!/bin/bash
CPU_THRESHOLD=80
CPU_USAGE=$(top -bn1 | awk '/Cpu\(s\)/ {print int($2)}')  # parse the "us" CPU field
if [ "$CPU_USAGE" -gt "$CPU_THRESHOLD" ]; then
  echo "High CPU usage detected: ${CPU_USAGE}%"
fi
#!/bin/bash
# Variables
DB_USER="root"
DB_PASSWORD="password"
DB_NAME="my_database"
BACKUP_DIR="/backup"
DATE=$(date +%F)
mkdir -p $BACKUP_DIR
# Backup command
mysqldump -u $DB_USER -p$DB_PASSWORD $DB_NAME > $BACKUP_DIR/backup_$DATE.sql
gzip $BACKUP_DIR/backup_$DATE.sql
#!/bin/bash
# Variables
LOG_DIR="/var/log/myapp"
ARCHIVE_DIR="/var/log/myapp/archive"
DAYS_TO_KEEP=30
mkdir -p $ARCHIVE_DIR
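# A minimal sketch of the rotation step (assumed logic): compress logs older than DAYS_TO_KEEP and move them to the archive
find "$LOG_DIR" -maxdepth 1 -name "*.log" -mtime +"$DAYS_TO_KEEP" -print0 |
while IFS= read -r -d '' log; do
  gzip "$log" && mv "$log.gz" "$ARCHIVE_DIR/"
done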
#!/bin/bash
# Jenkins details
JENKINS_URL="http://jenkins.example.com"
JOB_NAME="my-pipeline-job"
USER="your-username"
API_TOKEN="your-api-token"
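# Trigger the job via the Jenkins REST API (a sketch; assumes the job takes no parameters)
curl -X POST "$JENKINS_URL/job/$JOB_NAME/build" --user "$USER:$API_TOKEN"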
#!/bin/bash
# Variables
NAMESPACE="default"
DEPLOYMENT_NAME="my-app"
IMAGE="my-app:v1.0"
# Deploy to Kubernetes
kubectl set image deployment/$DEPLOYMENT_NAME $DEPLOYMENT_NAME=$IMAGE --namespace=$NAMESPACE
#!/bin/bash
# Variables
TF_DIR="/path/to/terraform/config"
cd $TF_DIR
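# Initialize and apply the configuration (sketch; -auto-approve skips the confirmation prompt)
terraform init
terraform apply -auto-approve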
bash
#!/bin/bash
# Variables
DB_USER="postgres"
DB_PASSWORD="password"
DB_NAME="my_database"
MIGRATION_FILE="/path/to/migration.sql"
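# Apply the migration (sketch; assumes psql is installed and the database is reachable locally)
PGPASSWORD="$DB_PASSWORD" psql -U "$DB_USER" -d "$DB_NAME" -f "$MIGRATION_FILE"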
#!/bin/bash
# Variables
USER_NAME="newuser"
GROUP_NAME="devops"
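# Create the group and user (sketch; requires root privileges)
groupadd -f "$GROUP_NAME"
useradd -m -g "$GROUP_NAME" "$USER_NAME"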
#!/bin/bash
OPEN_PORTS=$(netstat -tuln | tail -n +3)
if [ -n "$OPEN_PORTS" ]; then
  echo "Open ports detected:"
  echo "$OPEN_PORTS"
else
  echo "No open ports detected."
fi
This script clears memory caches and restarts services to free up system resources.
#!/bin/bash
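# A minimal sketch (assumed commands; the service name is an example)
sync                                       # Flush filesystem buffers to disk
echo 3 | sudo tee /proc/sys/vm/drop_caches # Clear page cache, dentries, and inodes
sudo systemctl restart myapp               # Restart an example service to free its memory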
This script runs automated tests using a testing framework like pytest for Python or
JUnit for Java.
#!/bin/bash
# Run unit tests using pytest (Python example)
pytest tests/
mvn test
This script automatically scales EC2 instances in an Auto Scaling group based on
CPU usage.
#!/bin/bash
fi
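A minimal sketch, assuming an existing Auto Scaling group and a CPU value already fetched (e.g., from CloudWatch):
#!/bin/bash
ASG_NAME="my-asg"   # Assumed Auto Scaling group name
CPU_USAGE=75        # In practice, read this from CloudWatch
if [ "$CPU_USAGE" -gt 70 ]; then
  aws autoscaling set-desired-capacity --auto-scaling-group-name "$ASG_NAME" --desired-capacity 3
fi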
14. Environment Setup
#!/bin/bash
# Select environment-specific settings based on $ENV (assumed to be set by the caller)
if [ "$ENV" = "production" ]; then
  export DB_HOST="prod-db.example.com"
  export API_KEY="prod-api-key"
elif [ "$ENV" = "staging" ]; then
  export DB_HOST="staging-db.example.com"
  export API_KEY="staging-api-key"
else
  export DB_HOST="dev-db.example.com"
  export API_KEY="dev-api-key"
fi
This script checks logs for errors and sends a Slack notification if an error is found.
#!/bin/bash
fi
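A minimal sketch, assuming a Slack incoming-webhook URL and an application log file:
#!/bin/bash
LOG_FILE="/var/log/myapp/app.log"                                  # Assumed log path
SLACK_WEBHOOK_URL="https://hooks.slack.com/services/XXX/YYY/ZZZ"   # Placeholder webhook URL
if grep -qi "error" "$LOG_FILE"; then
  curl -X POST -H 'Content-type: application/json' --data '{"text":"Errors found in application log"}' "$SLACK_WEBHOOK_URL"
fi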
This script installs Docker if it's not already installed on the system.
#!/bin/bash
if ! command -v docker &> /dev/null; then
  # Install Docker using the official convenience script
  curl -fsSL https://get.docker.com -o get-docker.sh
  sudo sh get-docker.sh
fi
This script checks the health of multiple web servers by making HTTP requests.
#!/bin/bash
SERVERS="web1.example.com web2.example.com"   # Assumed list of servers to check
for server in $SERVERS; do
  curl -s --head http://$server | head -n 1 | grep "HTTP/1.1 200 OK" > /dev/null
  if [ $? -ne 0 ]; then
    echo "$server is DOWN"
  else
    echo "$server is UP"
  fi
done
19. Automated Cleanup of Temporary Files
This script removes files older than 30 days from the /tmp directory to free up disk
space.
#!/bin/bash
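# Remove files in /tmp not modified for 30+ days (sketch)
find /tmp -type f -mtime +30 -delete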
#!/bin/bash
# Reboot if the OS indicates a reboot is required (assumed check; Debian/Ubuntu convention)
if [ -f /var/run/reboot-required ]; then
  sudo reboot
fi
This script renews SSL certificates using certbot and reloads the web server.
#!/bin/bash
certbot renew
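# Reload the web server so renewed certificates take effect (nginx assumed as an example)
systemctl reload nginx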
This script checks the CPU usage of a Docker container and scales it based on
usage.
#!/bin/bash
# Check CPU usage of a Docker container and scale if necessary
fi
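A minimal sketch, assuming docker stats provides the CPU percentage and scaling is done via Docker Compose:
#!/bin/bash
SERVICE="my-app"   # Assumed container/compose service name
CPU=$(docker stats --no-stream --format "{{.CPUPerc}}" "$SERVICE" | tr -d '%')
if (( $(echo "$CPU > 80" | bc -l) )); then
  docker compose up -d --scale "$SERVICE"=3   # Assumed target replica count
fi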
This script verifies the integrity of backup files and reports any corrupted ones.
#!/bin/bash
for file in /backup/*.gz; do
  if gzip -t "$file" 2>/dev/null; then
    echo "$file is OK"
  else
    echo "$file is CORRUPTED"
  fi
done
25. Automated Server Cleanup
This script removes unused Docker images, containers, and volumes to save disk
space.
#!/bin/bash
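# Remove stopped containers, unused images, networks, and volumes (sketch; prune is destructive)
docker system prune -af --volumes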
This script pulls the latest changes from a Git repository and creates a release tag.
#!/bin/bash
# Pull latest changes from Git repository and create a release tag
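git pull origin main
TAG="release-$(date +%Y%m%d-%H%M)"          # Assumed tag naming scheme
git tag -a "$TAG" -m "Automated release $TAG"
git push origin "$TAG"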
This script reverts to the previous Docker container image if a deployment fails.
#!/bin/bash
if [ $? -ne 0 ]; then
docker-compose down
docker-compose up -d
fi
This script collects logs from multiple servers and uploads them to an S3 bucket.
#!/bin/bash
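# A minimal sketch (assumed server list, log path, and bucket; requires SSH access and the AWS CLI)
SERVERS="web1.example.com web2.example.com"
mkdir -p /tmp/logs
for server in $SERVERS; do
  scp "$server:/var/log/myapp/*.log" /tmp/logs/
done
aws s3 cp /tmp/logs/ s3://my-log-bucket/ --recursive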
31. DNS Configuration Automation (Route 53)
#!/bin/bash
# Variables
ZONE_ID="your-hosted-zone-id"
DOMAIN_NAME="your-domain.com"
NEW_IP="your-new-ip-address"
"Changes": [
"Action": "UPSERT",
"ResourceRecordSet": {
"Name": "'$DOMAIN_NAME'",
"Type": "A",
"TTL": 60,
"ResourceRecords": [
"Value": "'$NEW_IP'"
}
]
}'
#!/bin/bash
# Run ESLint
npx eslint . --fix
# Run Prettier
npx prettier --write .
#!/bin/bash
# API URL
API_URL="https://your-api-endpoint.com/endpoint"
STATUS=$(curl -s -o /dev/null -w "%{http_code}" "$API_URL")
if [ "$STATUS" -eq 200 ]; then
  echo "API is healthy"
else
  echo "API returned status $STATUS"
fi
#!/bin/bash
# Image to scan
IMAGE_NAME="your-docker-image:latest"
# Scan with Trivy (assumed scanner) and fail on findings
trivy image --exit-code 1 --severity HIGH,CRITICAL $IMAGE_NAME
if [ $? -eq 1 ]; then
  echo "Vulnerabilities found in $IMAGE_NAME"
else
  echo "No high or critical vulnerabilities found"
fi
#!/bin/bash
THRESHOLD=80
USAGE=$(df / | awk 'NR==2 {print int($5)}')
if [ "$USAGE" -gt "$THRESHOLD" ]; then
  echo "Disk usage is at ${USAGE}%"
fi
36. Automated Load Testing (Using Apache Benchmark)
#!/bin/bash
# Target URL
URL="https://your-application-url.com"
ab -n 1000 -c 10 $URL
crontab -e
crontab -r
# * * * * * command_to_execute
# ┬ ┬ ┬ ┬ ┬
# │ │ │ │ └── day of week (0-6, Sunday = 0)
# │ │ │ └──── month (1-12)
# │ │ └────── day of month (1-31)
# │ └──────── hour (0-23)
# └────────── minute (0-59)
* * * * * /path/to/script.sh          # Run every minute
*/5 * * * * /path/to/script.sh        # Run every 5 minutes
*/10 * * * * /path/to/script.sh       # Run every 10 minutes
# Run a script at midnight
0 0 * * * /path/to/script.sh
0 * * * * /path/to/script.sh          # Run at the start of every hour
0 */2 * * * /path/to/script.sh        # Run every 2 hours
0 3 * * 0 /path/to/script.sh          # Run at 3 AM every Sunday
0 9 1 * * /path/to/script.sh          # Run at 9 AM on the 1st of every month
0 18 * * 1-5 /path/to/script.sh       # Run at 6 PM on weekdays (Mon-Fri)
0 12 1,15 * * /path/to/script.sh      # Run at noon on the 1st and 15th of the month
0 9-17 * * * /path/to/script.sh       # Run hourly from 9 AM to 5 PM
@reboot /path/to/script.sh            # Run at system startup
@daily /path/to/script.sh             # Run once a day (midnight)
@weekly /path/to/script.sh            # Run once a week
@monthly /path/to/script.sh           # Run once a month
@yearly /path/to/script.sh            # Run once a year
# Redirect cron job output to a log file (example log path)
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
0 5 * * * /path/to/script.sh >> /var/log/script.log 2>&1
Python
Python Basics
1. File Operations
● Read a file:
python
with open('file.txt', 'r') as f:
    content = f.read()
● Write to a file:
python
with open('file.txt', 'w') as f:
    f.write('Hello, World!')
2. Environment Variables
● Get an environment variable:
python
import os
db_user = os.getenv('DB_USER')
print(db_user)
python
import os
os.environ['NEW_VAR'] = 'value'
3. Subprocess Management
● Run shell commands:
python
import subprocess
result = subprocess.run(['ls', '-l'], capture_output=True, text=True)
print(result.stdout)
python
import requests
response = requests.get('https://api.example.com/data')
print(response.json())
5. JSON Handling
● Read JSON from a file:
python
import json
with open('data.json') as f:
    data = json.load(f)
● Write JSON to a file:
python
import json
with open('data.json', 'w') as f:
    json.dump(data, f)
6. Logging
● Basic logging setup:
python
import logging
logging.basicConfig(level=logging.INFO)
logging.info('This is an informational message')
python
import sqlite3
conn = sqlite3.connect('example.db')
cursor = conn.cursor()
cursor.execute('CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)')
conn.commit()
conn.close()
python
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('hostname', username='user', password='password')
9. Error Handling
● Try-except block:
python
try:
# code that may raise an exception
risky_code()
except Exception as e:
print(f'Error occurred: {e}')
python
import docker
client = docker.from_env()
containers = client.containers.list()
for container in containers:
print(container.name)
python
import yaml
python
import yaml
python
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--num', type=int, help='a number to print')
args = parser.parse_args()
print(args.num)
python
import psutil
python
from flask import Flask, jsonify
app = Flask(__name__)
@app.route('/health', methods=['GET'])
def health_check():
return jsonify({'status': 'healthy'})
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5000)
python
import docker
client = docker.from_env()
container = client.containers.run('ubuntu', 'echo Hello World', detach=True)
print(container.logs())
python
import schedule
import time
def job():
print("Running scheduled job...")
schedule.every(1).minutes.do(job)
while True:
schedule.run_pending()
time.sleep(1)
17. Version Control with Git
● Using GitPython to interact with Git repositories:
python
import git
repo = git.Repo('/path/to/repo')
repo.git.add('file.txt')
repo.index.commit('Added file.txt')
python
import smtplib
from email.mime.text import MIMEText
python
import os
import subprocess
python
import requests
url = 'http://your-jenkins-url/job/your-job-name/build'
response = requests.post(url, auth=('user', 'token'))
print(response.status_code)
bash
python
import unittest
class TestMathFunctions(unittest.TestCase):
def test_add(self):
self.assertEqual(add(2, 3), 5)
if __name__ == '__main__':
unittest.main()
python
import pandas as pd
df = pd.read_csv('data.csv')
df['new_column'] = df['existing_column'] * 2
df.to_csv('output.csv', index=False)
python
import boto3
ec2 = boto3.resource('ec2')
instances = ec2.instances.filter(Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])
for instance in instances:
print(instance.id, instance.state)
25. Web Scraping
● Using BeautifulSoup to scrape web pages:
python
import requests
from bs4 import BeautifulSoup
response = requests.get('http://example.com')
soup = BeautifulSoup(response.content, 'html.parser')
print(soup.title.string)
python
python
import boto3
s3 = boto3.client('s3')
# Upload a file
s3.upload_file('local_file.txt', 'bucket_name', 's3_file.txt')
# Download a file
s3.download_file('bucket_name', 's3_file.txt', 'local_file.txt')
python
import time
def tail_f(file):
file.seek(0, 2) # Move to the end of the file
while True:
line = file.readline()
if not line:
time.sleep(0.1) # Sleep briefly
continue
print(line)
python
import docker
client = docker.from_env()
container = client.containers.get('container_id')
print(container.attrs['State']['Health']['Status'])
python
import requests
import time
url = 'https://api.example.com/data'
while True:
response = requests.get(url)
if response.status_code == 200:
print(response.json())
break
elif response.status_code == 429: # Too Many Requests
time.sleep(60) # Wait a minute before retrying
else:
print('Error:', response.status_code)
break
python
import os
import subprocess
# Stop services
subprocess.run(['docker-compose', 'down'])
python
import subprocess
# Initialize Terraform
subprocess.run(['terraform', 'init'])
# Apply configuration
subprocess.run(['terraform', 'apply', '-auto-approve'])
python
import requests
response = requests.get('http://localhost:9090/metrics')
metrics = response.text.splitlines()
python
def add(a, b):
    return a + b
def test_add():
    assert add(2, 3) == 5
python
from flask import Flask, request
app = Flask(__name__)
@app.route('/webhook', methods=['POST'])
def webhook():
data = request.json
print('Received data:', data)
return 'OK', 200
if __name__ == '__main__':
app.run(port=5000)
python
from jinja2 import Template
python
from cryptography.fernet import Fernet
# Generate a key
key = Fernet.generate_key()
cipher_suite = Fernet(key)
# Encrypt
encrypted_text = cipher_suite.encrypt(b'Secret Data')
# Decrypt
decrypted_text = cipher_suite.decrypt(encrypted_text)
print(decrypted_text.decode())
python
import sentry_sdk
sentry_sdk.init('your_sentry_dsn')
try:
divide(1, 0)
except ZeroDivisionError as e:
sentry_sdk.capture_exception(e)
yaml
name: CI
on: [push]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: '3.8'
- name: Install dependencies
run: |
pip install -r requirements.txt
- name: Run tests
run: |
pytest
40. Creating a Simple API with FastAPI
● Using FastAPI for high-performance APIs:
python
from fastapi import FastAPI
app = FastAPI()
@app.get('/items/{item_id}')
async def read_item(item_id: int):
return {'item_id': item_id}
if __name__ == '__main__':
import uvicorn
uvicorn.run(app, host='0.0.0.0', port=8000)
python
from elasticsearch import Elasticsearch
es = Elasticsearch(['http://localhost:9200'])
python
import pandas as pd
# Extract
data = pd.read_csv('source.csv')
# Transform
data['new_column'] = data['existing_column'].apply(lambda x: x * 2)
# Load
data.to_csv('destination.csv', index=False)
python
import json
python
import redis
r = redis.Redis(host='localhost', port=6379, db=0)
# Set a key
r.set('foo', 'bar')
# Get a key
print(r.get('foo'))
python
python
from flask import Flask
from flask_restful import Resource, Api
app = Flask(__name__)
api = Api(app)
class HelloWorld(Resource):
def get(self):
return {'hello': 'world'}
api.add_resource(HelloWorld, '/')
if __name__ == '__main__':
app.run(debug=True)
python
import asyncio
async def main():
    print('Hello from asyncio')  # placeholder coroutine body
asyncio.run(main())
python
from scapy.all import sniff
def packet_callback(packet):
print(packet.summary())
sniff(prn=packet_callback, count=10)
python
import configparser
config = configparser.ConfigParser()
config.read('config.ini')
print(config['DEFAULT']['SomeSetting'])
config['DEFAULT']['NewSetting'] = 'Value'
with open('config.ini', 'w') as configfile:
config.write(configfile)
python
import websocket
def on_message(ws, message):
    print("Received:", message)
ws = websocket.WebSocketApp("ws://echo.websocket.org", on_message=on_message)
ws.run_forever()
python
import docker
client = docker.from_env()
# Dockerfile content
dockerfile_content = """
FROM python:3.9-slim
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
"""
python
import psutil
python
from alembic import command, config
alembic_cfg = config.Config("alembic.ini")
command.upgrade(alembic_cfg, "head")
python
import paramiko
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('hostname', username='user', password='your_password')
python
import boto3
cloudformation = boto3.client('cloudformation')
response = cloudformation.create_stack(
StackName='MyStack',
TemplateBody=template_body,
Parameters=[
{
'ParameterKey': 'InstanceType',
'ParameterValue': 't2.micro'
},
],
TimeoutInMinutes=5,
Capabilities=['CAPABILITY_NAMED_IAM'],
)
print(response)
56. Automating EC2 Instance
Management
● Starting and stopping EC2 instances:
python
import boto3
ec2 = boto3.resource('ec2')
# Start an instance
instance = ec2.Instance('instance_id')
instance.start()
# Stop an instance
instance.stop()
python
import shutil
import os
source_dir = '/path/to/source'
backup_dir = '/path/to/backup'
shutil.copytree(source_dir, backup_dir)
58. Using watchdog for File System
Monitoring
● Monitor changes in a directory:
python
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
class MyHandler(FileSystemEventHandler):
def on_modified(self, event):
print(f'File modified: {event.src_path}')
event_handler = MyHandler()
observer = Observer()
observer.schedule(event_handler, path='path/to/monitor', recursive=False)
observer.start()
try:
while True:
time.sleep(1)
except KeyboardInterrupt:
observer.stop()
observer.join()
python
from locust import HttpUser, task, between
class MyUser(HttpUser):
wait_time = between(1, 3)
@task
def load_test(self):
self.client.get('/')
python
import requests
url = 'https://api.github.com/repos/user/repo'
response = requests.get(url, headers={'Authorization': 'token YOUR_GITHUB_TOKEN'})
repo_info = response.json()
print(repo_info)
python
import subprocess
# Get pods
subprocess.run(['kubectl', 'get', 'pods'])
# Apply a configuration
subprocess.run(['kubectl', 'apply', '-f', 'deployment.yaml'])
62. Using pytest for CI/CD Testing
● Integrate tests in your CI/CD pipeline:
python
# test_example.py
def test_addition():
assert 1 + 1 == 2
python
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('integers', type=int, nargs='+')
parser.add_argument('--sum', dest='accumulate', action='store_const', const=sum, default=max)
args = parser.parse_args()
print(args.accumulate(args.integers))
64. Using dotenv for Environment
Variables
● Load environment variables from a .env file:
python
import os
from dotenv import load_dotenv
load_dotenv()
database_url = os.getenv('DATABASE_URL')
print(database_url)
python
import requests
from bs4 import BeautifulSoup
response = requests.get('http://example.com')
soup = BeautifulSoup(response.text, 'html.parser')
python
import yaml
python
import pika
# Sending messages
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')
# Receiving messages
def callback(ch, method, properties, body):
print("Received:", body)
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.basic_consume(queue='hello', on_message_callback=callback,
auto_ack=True)
channel.start_consuming()
python
import sentry_sdk
sentry_sdk.init("YOUR_SENTRY_DSN")
try:
# Your code that may throw an exception
1/0
except Exception as e:
sentry_sdk.capture_exception(e)
python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker
Base = declarative_base()
class User(Base):
__tablename__ = 'users'
id = Column(Integer, primary_key=True)
name = Column(String)
engine = create_engine('sqlite:///example.db')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()
# Create
new_user = User(name='Alice')
session.add(new_user)
71. Monitoring Docker Containers with
docker-py
● Fetch and print the status of running containers:
python
import docker
client = docker.from_env()
containers = client.containers.list()
python
from flask import Flask, jsonify
app = Flask(__name__)
@app.route('/api/data', methods=['GET'])
def get_data():
return jsonify({"message": "Hello, World!"})
if __name__ == '__main__':
app.run(debug=True)
python
import subprocess
# Renew certificates
subprocess.run(['certbot', 'renew'])
python
import numpy as np
python
import smtplib
from email.mime.text import MIMEText
sender = 'you@example.com'
recipient = 'recipient@example.com'
msg = MIMEText('This is a test email.')
msg['Subject'] = 'Test Email'
msg['From'] = sender
msg['To'] = recipient
python
import schedule
import time
def job():
print("Job is running...")
schedule.every(10).minutes.do(job)
while True:
schedule.run_pending()
time.sleep(1)
python
import matplotlib.pyplot as plt
x = [1, 2, 3, 4, 5]
y = [2, 3, 5, 7, 11]
plt.plot(x, y)
plt.xlabel('X-axis')
plt.ylabel('Y-axis')
plt.title('Simple Plot')
plt.show()
markdown
my_package/
├── __init__.py
├── module1.py
└── module2.py
python
from setuptools import setup, find_packages
setup(
name='my_package',
version='0.1',
packages=find_packages(),
install_requires=[
'requests',
'flask'
],
)
python
# test_sample.py
def add(a, b):
return a + b
def test_add():
assert add(1, 2) == 3
python
from requests_oauthlib import OAuth1Session
oauth = OAuth1Session(client_key='YOUR_CLIENT_KEY', client_secret='YOUR_CLIENT_SECRET')
response = oauth.get('https://api.example.com/user')
print(response.json())
python
import pandas as pd
df = pd.read_csv('data.csv')
print(df.head())
# Filter data
filtered_df = df[df['column_name'] > 10]
print(filtered_df)
82. Using requests for HTTP Requests
● Making a GET and POST request:
python
import requests
# GET request
response = requests.get('https://api.example.com/data')
print(response.json())
# POST request
data = {'key': 'value'}
response = requests.post('https://api.example.com/data', json=data)
print(response.json())
python
from http.server import HTTPServer, SimpleHTTPRequestHandler
PORT = 8000
handler = SimpleHTTPRequestHandler
HTTPServer(('', PORT), handler).serve_forever()
python
from flask import Flask, request
app = Flask(__name__)
@app.route('/webhook', methods=['POST'])
def webhook():
data = request.json
print(data)
return '', 200
if __name__ == '__main__':
app.run(port=5000)
python
import subprocess
python
import subprocess
python
import boto3
from moto import mock_s3
@mock_s3
def test_s3_upload():
s3 = boto3.client('s3', region_name='us-east-1')
s3.create_bucket(Bucket='my-bucket')
s3.upload_file('file.txt', 'my-bucket', 'file.txt')
# Test logic here
python
import asyncio
asyncio.run(main())
89. Using flask-cors for Cross-Origin
Resource Sharing
● Allow CORS in a Flask app:
python
from flask import Flask
from flask_cors import CORS
app = Flask(__name__)
CORS(app)
@app.route('/data', methods=['GET'])
def data():
return {"message": "Hello from CORS!"}
if __name__ == '__main__':
app.run()
python
import pytest
@pytest.fixture
def sample_data():
data = {"key": "value"}
yield data # This is the test data
# Teardown code here (if necessary)
def test_sample_data(sample_data):
assert sample_data['key'] == 'value'
91. Using http.client for Low-Level HTTP
Requests
● Make a raw HTTP GET request:
python
import http.client
conn = http.client.HTTPSConnection("www.example.com")
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)
data = response.read()
conn.close()
python
import redis
import json
python
import xml.etree.ElementTree as ET
tree = ET.parse('data.xml')
root = tree.getroot()
python
import venv
venv.create('myenv', with_pip=True)
python
import psutil
memory = psutil.virtual_memory()
print(f'Total Memory: {memory.total}, Available Memory: {memory.available}')
python
import sqlite3
conn = sqlite3.connect('example.db')
c = conn.cursor()
conn.close()
bash
pytest -n 4 # Run tests in parallel with 4 workers
python
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('integers', type=int, nargs='+')
parser.add_argument('--sum', dest='accumulate', action='store_const', const=sum, default=max)
args = parser.parse_args()
print(args.accumulate(args.integers))
python
from jsonschema import validate, ValidationError
data, schema = {"name": "Alice"}, {"type": "object"}  # example data and schema (assumed)
try:
    validate(instance=data, schema=schema)
print("Data is valid")
except ValidationError as e:
print(f"Data is invalid: {e.message}")
2. Version Control
● Git
7. Undoing Changes
git config --global alias.st status # Create alias for status command
git config --global alias.co checkout # Create alias for checkout command
git config --global alias.br branch # Create alias for branch command
git config --global alias.cm commit # Create alias for commit command
git config --list | grep alias # View all configured aliases
2. GitHub
Authentication & Configuration
gh config set editor <editor> – Set the default editor (e.g., nano, vim)
Repository Management
Webhooks:
Commands
Webhooks:
○ Go to Settings → Webhooks
○ Select triggers: Push events, Tag push, Merge request, etc.
○ Use GitLab CI/CD with .gitlab-ci.yml
4. Bitbucket
Commands
Repository Management
Branch Management
Pipeline Management
Issue Tracking
Webhooks:
pipeline {
agent any
environment {
APP_ENV = 'production'
}
stages {
stage('Checkout') {
steps {
git 'https://github.com/your-repo.git'
}
}
stage('Build') {
steps {
sh 'mvn clean package'
}
}
stage('Test') {
steps {
sh 'mvn test'
}
}
stage('Deploy') {
steps {
sh 'scp target/*.jar user@server:/deploy/'
}
}
}
}
5. Jenkins Pipeline (Scripted)
groovy
node {
stage('Checkout') {
git 'https://github.com/your-repo.git'
}
stage('Build') {
sh 'mvn clean package'
}
stage('Test') {
sh 'mvn test'
}
stage('Deploy') {
sh 'scp target/*.jar user@server:/deploy/'
}
}
triggers {
cron('H 4 * * *') // Run at 4 AM every day
}
triggers {
pollSCM('H/5 * * * *') // Check SCM every 5 minutes
}
pipeline {
agent any
parameters {
file(name: 'configFile')
}
stages {
stage('Read File') {
steps {
sh 'cat ${configFile}'
}
}
}
}
pipeline {
agent any
stages {
stage('Clone Repository') {
steps {
git 'https://github.com/user/repo.git'
}
}
stage('Build') {
steps {
sh 'mvn clean package'
}
}
stage('Test') {
steps {
sh 'mvn test'
}
}
stage('Deploy') {
steps {
sh 'scp target/app.jar user@server:/deploy/path'
}
}
}
}
pipeline {
agent any
environment {
DOCKER_HUB_USER = 'your-dockerhub-username'
}
stages {
stage('Build Docker Image') {
steps {
sh 'docker build -t my-app:latest .'
}
}
stage('Push to Docker Hub') {
steps {
withDockerRegistry([credentialsId: 'docker-hub-credentials', url: '']) {
sh 'docker tag my-app:latest $DOCKER_HUB_USER/my-app:latest'
sh 'docker push $DOCKER_HUB_USER/my-app:latest'
}
}
}
}
}
3. Kubernetes Deployment
groovy
pipeline {
agent any
stages {
stage('Deploy to Kubernetes') {
steps {
sh 'kubectl apply -f k8s/deployment.yaml'
}
}
}
}
4. Terraform Deployment
groovy
pipeline {
agent any
stages {
stage('Terraform Init') {
steps {
sh 'terraform init'
}
}
stage('Terraform Apply') {
steps {
sh 'terraform apply -auto-approve'
}
}
}
}
pipeline {
agent any
stages {
stage('Scan with Trivy') {
steps {
sh 'trivy image my-app:latest'
}
}
}
}
pipeline {
agent any
environment {
SONAR_TOKEN = credentials('sonar-token')
}
stages {
stage('SonarQube Analysis') {
steps {
withSonarQubeEnv('SonarQube') {
sh 'mvn sonar:sonar -Dsonar.login=$SONAR_TOKEN'
}
}
}
}
}
GitHub Actions allows automation for CI/CD pipelines directly within GitHub
repositories.
Commands
-H "Accept: application/vnd.github.v3+json" \
https://api.github.com/repos/<owner>/<repo>/actions/workflows/<workflow_file>/
dispatches \
-d '{"ref":"main"}'
on:
push:
branches:
- main
pull_request:
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v4
jobs:
docker:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v4
🔹 Kubernetes Deployment
📌 File: .github/workflows/k8s.yml
name: Deploy to Kubernetes
on:
push:
branches:
- main
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v4
🔹 Terraform Deployment
📌 File: .github/workflows/terraform.yml
name: Terraform Deployment
on:
push:
branches:
- main
jobs:
terraform:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v4
on:
push:
branches:
- main
jobs:
scan:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v4
on:
push:
branches:
- main
jobs:
sonar:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v4
on:
push:
branches:
- main
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v4
build:
stage: build
script:
- echo "Building application..."
- mvn clean package
artifacts:
paths:
- target/*.jar
test:
stage: test
script:
- echo "Running tests..."
- mvn test
deploy:
stage: deploy
script:
- echo "Deploying application..."
- scp target/*.jar user@server:/deploy/path
only:
- main
🔹 Docker Build & Push to GitLab Container Registry
📌 File: .gitlab-ci.yml
variables:
IMAGE_NAME: registry.gitlab.com/your-namespace/your-repo
stages:
- build
- push
build:
stage: build
script:
- docker build -t $IMAGE_NAME:latest .
only:
- main
push:
stage: push
script:
- echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER --password-stdin $CI_REGISTRY
- docker push $IMAGE_NAME:latest
only:
- main
🔹 Kubernetes Deployment
📌 File: .gitlab-ci.yml
stages:
- deploy
deploy:
stage: deploy
image: bitnami/kubectl
script:
- kubectl apply -f k8s/deployment.yaml
only:
- main
🔹 Terraform Deployment
📌 File: .gitlab-ci.yml
image: hashicorp/terraform:latest
stages:
- terraform
terraform:
stage: terraform
script:
- terraform init
- terraform apply -auto-approve
only:
- main
security_scan:
stage: security_scan
script:
- docker pull registry.gitlab.com/your-namespace/your-repo:latest
- trivy image registry.gitlab.com/your-namespace/your-repo:latest
only:
- main
stages:
- analysis
sonarqube:
stage: analysis
script:
- mvn sonar:sonar -Dsonar.login=$SONAR_TOKEN
only:
- main
🔹 AWS S3 Upload
📌 File: .gitlab-ci.yml
stages:
- deploy
deploy_s3:
stage: deploy
script:
- aws s3 sync . s3://my-bucket-name --delete
only:
- main
environment:
name: production
🔹 Notify on Slack
📌 File: .gitlab-ci.yml
notify:
stage: notify
script:
- curl -X POST -H 'Content-type: application/json' --data '{"text":"Deployment completed successfully!"}' $SLACK_WEBHOOK_URL
only:
- main
🔹 Tekton
🔹 What is Tekton?
Tekton is a Kubernetes-native CI/CD framework that allows you to create and
run pipelines for automating builds, testing, security scans, and deployments. It
provides reusable components such as Tasks, Pipelines, and PipelineRuns,
making it ideal for cloud-native DevOps workflows.
kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
🔹 Tekton Basics
● Tasks: The smallest execution unit in Tekton.
● Pipelines: A sequence of tasks forming a CI/CD process.
● PipelineRuns: Executes a pipeline.
● TaskRuns: Executes a task.
● Workspaces: Used for sharing data between tasks.
● Resources: Defines input/output artifacts (e.g., Git repositories, images).
Verify installation:
Commands:
# Install Tekton Pipelines
kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
Apply:
Apply:
🔹 Tekton PipelineRun
📌 File: pipelinerun.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: sample-pipelinerun
spec:
pipelineRef:
name: sample-pipeline
Check status:
🔹 Notify on Slack
📌 File: task-slack.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: slack-notify
spec:
steps:
- name: send-slack-message
image: curlimages/curl:latest
script: |
#!/bin/sh
curl -X POST -H 'Content-type: application/json' --data '{"text":"Deployment completed successfully!"}' $SLACK_WEBHOOK_URL
Circle CI
Introduction
CircleCI is a cloud-based CI/CD tool that automates software testing and
deployment. It provides seamless integration with GitHub, Bitbucket, and other
version control systems, enabling automated builds, tests, and deployments.
Installation
● Sign up at CircleCI
● Connect your repository (GitHub, Bitbucket)
● Configure the .circleci/config.yml file in your project
deploy:
docker:
- image: circleci/python:3.8
steps:
- checkout
- run:
name: Deploy application
command: ./deploy.sh # Custom deploy script
jobs:
deploy:
docker:
- image: circleci/python:3.8
steps:
- checkout
- run:
name: Deploy to Production
command: ./deploy_production.sh
workflows:
version: 2
deploy_to_production:
jobs:
- deploy:
filters:
branches:
only: main # Deploy only on the 'main' branch
jobs:
build:
docker:
- image: circleci/python:3.8
steps:
- checkout
- restore_cache:
keys:
- v1-dependencies-{{ checksum "requirements.txt" }}
- run:
name: Install dependencies
command: pip install -r requirements.txt
- save_cache:
paths:
- ~/.cache/pip # Save pip cache
key: v1-dependencies-{{ checksum "requirements.txt" }}
jobs:
deploy:
docker:
- image: circleci/python:3.8
steps:
- checkout
- run:
name: Deploy using environment variables
command: ./deploy.sh
environment:
API_KEY: $API_KEY # Use stored API keys
jobs:
deploy:
docker:
- image: circleci/python:3.8
steps:
- checkout
- run:
name: Deploy Application
command: ./deploy.sh
filters:
branches:
only: main
requires:
- build
when:
changes:
- Dockerfile # Only run deploy if the Dockerfile changes
jobs:
test:
docker:
- image: circleci/python:3.8
parallelism: 4 # Run 4 test jobs in parallel
steps:
- checkout
- run:
name: Run tests
command: pytest
jobs:
build:
docker:
- image: circleci/python:3.8
- image: circleci/postgres:13 # Additional container for PostgreSQL
environment:
POSTGRES_USER: circleci
steps:
- checkout
- run:
name: Install dependencies
command: pip install -r requirements.txt
- run:
name: Run database migrations
command: python manage.py migrate
- run:
name: Run tests
command: pytest
jobs:
manual_deploy:
docker:
- image: circleci/python:3.8
steps:
- checkout
- run:
name: Deploy to Production
command: ./deploy.sh
when: manual # Only run when triggered manually
ArgoCD (GitOps)
Introduction
Installation
Linux:
curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/v2.5.4/argocd-linux-amd64
chmod +x /usr/local/bin/argocd
Initial Password (the username is admin; in older Argo CD releases the initial password is the argocd-server pod name, while newer releases store it in the argocd-initial-admin-secret):
kubectl get pods -n argocd
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
Argo CD Commands
Login to Argo CD via CLI
argocd login <ARGOCD_SERVER> --username admin --password <password>
View the current applications
argocd app list
Delete an Application
argocd app delete <app-name>
argocd app refresh <app-name>
Managing Projects
Create a Project
argocd proj create <project-name> \
--description "<description>" \
--dest-namespace <namespace> \
--dest-server <server-url>
List Projects
argocd proj list
Best Practices
Flux CD
Introduction
Flux CD is a GitOps tool for Kubernetes that automates deployment, updates, and
rollback of applications using Git as the source of truth.
Installation
Install Flux CLI
curl -s https://fluxcd.io/install.sh | sudo bash
Verify Installation
flux --version
Bootstrap Flux in a Cluster
flux bootstrap github \
--owner=<GITHUB_USER_OR_ORG> \
--repository=<REPO_NAME> \
--branch=main \
--path=clusters/my-cluster \
--personal
General Commands
Managing Deployments
flux get sources git # List Git sources
flux get kustomizations # List kustomizations
flux reconcile kustomization <name> # Force sync a kustomization
flux suspend kustomization <name> # Pause updates for a kustomization
flux resume kustomization <name> # Resume updates for a kustomization
Uninstall Flux
flux uninstall --silent
Commands:
wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
Commands:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o
"awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
Commands:
curl -LO "https://dl.k8s.io/release/$(curl -L -s
https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}
provider "aws" {
  region = "us-west-2"
}
resource "aws_instance" "app_server" {
  ami           = "ami-08d70e59c07c61a3a"
  instance_type = "t2.micro"
  tags = {
    Name = var.instance_name
  }
}
2. Input Variables: variables.tf:
Example:
hcl
variable "instance_name" {
type = string
default = "ExampleAppServerInstance"
Example:
hcl
output "instance_id" {
value = aws_instance.app_server.id
output "instance_public_ip" {
value = aws_instance.app_server.public_ip
1. Provider Configuration:
provider "aws" {
region = "us-west-2"
2. Resource Creation:
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
tags = {
Name = "ExampleInstance"
}
3. Variable Management:
variable "region" {
default = "us-west-2"
provider "aws" {
region = var.region
4. State Management:
backend "s3" {
bucket = "my-tfstate-bucket"
key = "terraform/state"
region = "us-west-2"
encrypt = true
dynamodb_table = "terraform-locks"
5. Modules:
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
name = "my-vpc"
cidr = "10.0.0.0/16"
Check version:
ansible --version
Check inventory:
ansible-inventory --list -y
Custom inventory:
ansible -i inventory.ini all -m ping
[db]
3. Ad-Hoc Commands
4. Playbook Structure
- hosts: web
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
Run the playbook:
ansible-playbook install_nginx.yml
apt:
state: present
hosts: web
become: yes
tasks:
apt:
name: nginx
state: present
handlers:
service:
name: nginx
state: restarted
apt:
state: present
loop:
- nginx
- curl
- git
Conditional execution:
- name: Restart service only if Nginx is installed
service:
name: nginx
state: restarted
Create a role:
ansible-galaxy init my_role
roles:
- my_role
Debug a variable:
- debug:
1. Playbook Structure
hosts: all
become: yes
tasks:
debug:
hosts: web
become: yes
apt:
name: nginx
state: present
become_user: root
hosts: web
become: yes
tasks:
apt:
name: nginx
state: present
● Common Modules
○ command: Run shell commands
○ copy: Copy files
○ service: Manage services
○ user: Manage users
○ file: Set file permissions
4. Using Variables
package_name: nginx
Use them in tasks:
- name: Install {{ package_name }}
apt:
state: present
include_vars: vars.yml
5. Conditionals
service:
name: nginx
state: restarted
6. Loops
apt:
state: present
loop:
- nginx
- git
- curl
7. Handlers
apt:
name: nginx
state: present
handlers:
service:
name: nginx
state: restarted
debug:
Dry run:
ansible-playbook playbook.yml --check
Create a role:
ansible-galaxy init my_role
roles:
- my_role
1. CloudFormation Concepts
AWSTemplateFormatVersion: "2010-09-09"
Resources:
MyBucket:
Type: "AWS::S3::Bucket"
MyEC2Instance:
Type: "AWS::EC2::Instance"
Properties:
InstanceType: "t2.micro"
ImageId: "ami-0abcdef1234567890"
Outputs:
InstanceID:
Stack Operations
aws cloudformation create-stack --stack-name my-stack --template-body file://template.yaml
Drift Detection
Parameters:
InstanceType:
Type: String
Default: "t2.micro"
RegionMap:
us-east-1:
AMI: "ami-12345678"
us-west-1:
AMI: "ami-87654321"
Resources:
MyDatabase:
Type: "AWS::RDS::DBInstance"
Condition: IsProd
S3BucketName:
Export:
Name: MyBucketExport
5. CloudFormation Troubleshooting
Issue Solution
Stack creation fails Check describe-stack-events for error details.
Basic Commands
Image Commands
Container Commands
Network Commands
Volume Commands
Dockerfile Commands
1. Minimize Layers: Combine RUN, COPY, and ADD commands to reduce
layers and image size.
2. Use Specific Versions: Always specify versions for base images (e.g.,
FROM python:3.9-slim).
3. .dockerignore: Use .dockerignore to exclude unnecessary files (e.g., .git,
node_modules).
4. Multi-Stage Builds: Separate the build process and runtime environment to
optimize image size.
5. Non-root User: Always create and use a non-root user for security.
6. Leverage Docker Cache: Copy dependencies first, so Docker can cache
them for faster builds.
1. Python (Flask/Django)
dockerfile
WORKDIR /app
# Install dependencies
COPY requirements.txt .
COPY . .
EXPOSE 5000
USER appuser
2. Node.js
dockerfile
WORKDIR /app
# Install dependencies
COPY . .
EXPOSE 3000
RUN addgroup --system app && adduser --system --ingroup app app
USER app
CMD ["node", "app.js"]
Best Practices:
dockerfile
WORKDIR /app
EXPOSE 8080
RUN addgroup --system app && adduser --system --ingroup app app
USER app
Best Practices:
● Multi-stage builds for separating build and runtime.
● Use -jdk-slim for smaller images.
● Non-root user (app).
4. Ruby on Rails
dockerfile
FROM ruby:3.0-alpine
# Install dependencies
WORKDIR /app
COPY . .
EXPOSE 3000
USER app
Best Practices:
5. Go
dockerfile
WORKDIR /app
COPY . .
WORKDIR /app
EXPOSE 8080
RUN addgroup --system app && adduser --system --ingroup app app
USER app
CMD ["./myapp"]
Best Practices:
6. Angular (Frontend)
dockerfile
# Build stage
FROM node:16 AS build
WORKDIR /app
COPY . .
FROM nginx:alpine
EXPOSE 80
Best Practices:
dockerfile
FROM php:8.0-fpm
# Install dependencies
RUN apt-get update && apt-get install -y libzip-dev && docker-php-ext-install zip
WORKDIR /var/www/html
# Install Composer
COPY . .
EXPOSE 9000
CMD ["php-fpm"]
Best Practices:
● Minimize Image Size: Use smaller base images like alpine or slim, and
multi-stage builds to reduce the final image size.
● Use a Non-root User: Always run applications as a non-root user to enhance
security.
● Pin Versions: Avoid using the latest tag for images. Use specific versions to
ensure predictable builds.
● Leverage Caching: Place frequently changing files (e.g., source code) after
dependencies to take advantage of Docker's build cache.
● Avoid ADD Unless Necessary: Use COPY instead of ADD unless you need
to fetch files from a URL or extract archives.
services:
app:
image: my-app:latest
container_name: my_app
ports:
- "8080:80"
environment:
- NODE_ENV=production
volumes:
- ./app:/usr/src/app
depends_on:
- db
db:
image: postgres:latest
container_name: my_db
restart: always
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: password
POSTGRES_DB: mydatabase
ports:
- "5432:5432"
volumes:
- pgdata:/var/lib/postgresql/data
volumes:
pgdata:
Key Directives
version: '3'
services:
service_name:
ports:
environment:
volumes:
depends_on:
networks:
version: '3'
services:
web:
build: ./app
ports:
- "5000:5000"
environment:
- FLASK_APP=app.py
- FLASK_ENV=development
volumes:
- ./app:/app
networks:
- app_network
redis:
image: "redis:alpine"
networks:
- app_network
networks:
app_network:
driver: bridge
version: '3'
services:
app:
build: ./node-app
ports:
- "3000:3000"
environment:
- MONGO_URI=mongodb://mongo:27017/mydb
depends_on:
- mongo
networks:
- backend
mongo:
image: mongo:latest
volumes:
- mongo_data:/data/db
networks:
- backend
networks:
backend:
driver: bridge
volumes:
mongo_data:
version: '3'
services:
nginx:
image: nginx:alpine
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
- ./html:/usr/share/nginx/html
ports:
- "8080:80"
depends_on:
- php
networks:
- frontend
php:
image: php:8.0-fpm
volumes:
- ./html:/var/www/html
networks:
- frontend
networks:
frontend:
driver: bridge
Best Practices
● Use Versioning: Always specify a version for Docker Compose files (e.g.,
version: '3')
● Define Volumes: Use named volumes for persistent data (e.g., database
storage)
● Environment Variables: Use environment variables for configuration (e.g.,
database connection strings)
● Use depends_on: Ensure proper start order for dependent services
● Custom Networks: Use custom networks for better service communication
management
● Avoid latest Tag: Always use specific version tags for predictable builds
Advanced Options
build:
context: .
args:
NODE_ENV: production
services:
web:
image: my-web-app
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost"]  # assumed health command
  interval: 30s
  retries: 3
● Kubernetes (K8s)
1. Kubernetes Basics
2. Managing Pods
4. Managing Services
5. Namespaces
7. Troubleshooting
kubectl get events – View cluster events
kubectl describe pod my-pod – Get detailed pod information
kubectl logs my-pod – View logs of a specific pod
kubectl top pod – Show resource usage of pods
Autoscaling
Kubernetes Debugging
1. Pod
yaml
apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
2. Deployment
yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
spec:
replicas: 3
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
3. ReplicaSet
yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: my-replicaset
spec:
replicas: 3
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: nginx
image: nginx:latest
ClusterIP (default)
yaml
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app: my-app
ports:
- protocol: TCP
port: 80
targetPort: 80
NodePort
yaml
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app: my-app
type: NodePort
ports:
- protocol: TCP
port: 80
targetPort: 80
nodePort: 30080
LoadBalancer
yaml
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app: my-app
type: LoadBalancer
ports:
- protocol: TCP
port: 80
targetPort: 80
5. ConfigMap
yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: my-config
data:
key1: value1
key2: value2
6. Secret
yaml
apiVersion: v1
kind: Secret
metadata:
name: my-secret
type: Opaque
data:
yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /mnt/data
yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 500Mi
9. Ingress
yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
spec:
rules:
- host: example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: my-service
port:
number: 80
yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: my-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: my-deployment
minReplicas: 2
maxReplicas: 5
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
11. CronJob
yaml
apiVersion: batch/v1
kind: CronJob
metadata:
name: my-cronjob
spec:
  schedule: "*/5 * * * *"  # assumed schedule (every 5 minutes)
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-cron
            image: busybox
            command: ["echo", "Hello from CronJob"]  # assumed command
          restartPolicy: OnFailure
5. Cloud Services
2. Azure Storage
4. Azure Functions
6. Configuration Management
● Chef (recipes, cookbooks)
Basic Concepts
Commands
Example Recipe
package 'nginx' do
  action :install
end
service 'nginx' do
  action [:enable, :start]  # assumed actions
end
file '/var/www/html/index.html' do
  content '<h1>Hello from Chef</h1>'  # assumed content
end
● Puppet (manifests, modules)
Basic Concepts
Commands
Example Manifest
puppet
class nginx {
package { 'nginx':
}
service { 'nginx':
file { '/var/www/html/index.html':
include nginx
Basic Concepts
Commands
nginx:
pkg.installed: []
service.running:
- enable: true
/var/www/html/index.html:
file.managed:
- source: salt://webserver/index.html
- mode: 644
Prometheus Basics
prometheus.yml configuration
global:
scrape_interval: 15s
scrape_configs:
- job_name: 'my-app'
static_configs:
- targets: ['localhost:9090']
Grafana Basics
● Data Sources:
○ Prometheus → http://localhost:9090
○ Elasticsearch → http://localhost:9200
● Create Dashboard & Alerts
○ Add Panel → Select metric
○ Set Alert Conditions → Thresholds, No Data, Query Errors
Elasticsearch Commands
Logstash Configuration
logstash.conf
input {
  file {
    path => "/var/log/myapp/*.log"  # assumed log path
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
Kibana Basics
Start Kibana
systemctl start kibana
● Useful Queries
○ message: "error" → Search for logs with "error"
○ status:[400 TO 500] → Find logs with HTTP errors
Datadog
● Log Monitoring
○ /etc/datadog-agent/datadog.yaml
logs_enabled: true
○ Restart agent
● Metric Queries
○ avg:system.cpu.user{*} → CPU usage
○ top(avg:system.disk.used{*}, 5, 'mean') → Top 5 disk users
New Relic
newrelic install
1. SonarQube Integration
Jenkins Integration
groovy
pipeline {
agent any
environment {
stages {
stage('Checkout') {
steps {
git 'https://github.com/your-repo.git'
}
}
stage('SonarQube Analysis') {
steps {
script {
withSonarQubeEnv('SonarQubeServer') {
sh 'mvn sonar:sonar'
yaml
stages:
- code_analysis
sonarqube_scan:
stage: code_analysis
image: maven:3.8.7-openjdk-17
script:
variables:
SONAR_HOST_URL: "http://sonarqube-server:9000"
SONAR_TOKEN: "your-sonarqube-token"
yaml
on:
push:
branches:
- main
jobs:
sonar_scan:
runs-on: ubuntu-latest
steps:
uses: actions/checkout@v4
uses: actions/setup-java@v3
with:
distribution: 'temurin'
java-version: '17'
env:
SONAR_HOST_URL: "http://sonarqube-server:9000"
yaml
apiVersion: batch/v1
kind: Job
metadata:
name: sonarqube-analysis
annotations:
argocd.argoproj.io/hook: PreSync
spec:
template:
spec:
containers:
- name: sonar-scanner
image: maven:3.8.7-openjdk-17
env:
- name: SONAR_HOST_URL
value: "http://sonarqube-server:9000"
- name: SONAR_TOKEN
valueFrom:
secretKeyRef:
name: sonar-secret
key: sonar-token
restartPolicy: Never
2. Trivy (Container Vulnerability Scanning)
Basic Commands
Jenkins Integration
groovy
pipeline {
agent any
stages {
stage('Checkout') {
steps {
git 'https://github.com/your-repo.git'
stage('Trivy Scan') {
steps {
yaml
stages:
- security_scan
trivy_scan:
stage: security_scan
image: aquasec/trivy
script:
artifacts:
paths:
- trivy_report.json
on:
push:
branches:
- main
jobs:
trivy_scan:
runs-on: ubuntu-latest
steps:
uses: actions/checkout@v4
run: |
uses: actions/upload-artifact@v4
with:
name: trivy-report
path: trivy_report.json
yaml
apiVersion: batch/v1
kind: Job
metadata:
name: trivy-scan
annotations:
argocd.argoproj.io/hook: PreSync
spec:
template:
spec:
containers:
- name: trivy-scanner
image: aquasec/trivy
restartPolicy: Never
Kubernetes Integration (Admission Controller)
yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
name: trivy-webhook
webhooks:
- name: trivy-scan.k8s
rules:
- apiGroups: [""]
apiVersions: ["v1"]
operations: ["CREATE"]
resources: ["pods"]
clientConfig:
service:
name: trivy-webhook-service
namespace: security
path: /validate
admissionReviewVersions: ["v1"]
sideEffects: None
Basic Commands
Jenkins Integration
groovy
pipeline {
agent any
stages {
stage('Checkout') {
steps {
git 'https://github.com/your-repo.git'
steps {
sh 'mvn org.owasp:dependency-check-maven:check'
yaml
stages:
- security_scan
owasp_dependency_check:
stage: security_scan
image: maven:3.8.7-openjdk-17
script:
- mvn org.owasp:dependency-check-maven:check
artifacts:
paths:
- target/dependency-check-report.html
GitHub Actions Integration
yaml
on:
push:
branches:
- main
jobs:
owasp_dependency_check:
runs-on: ubuntu-latest
steps:
uses: actions/checkout@v4
with:
name: owasp-report
path: target/dependency-check-report.html
yaml
apiVersion: batch/v1
kind: Job
metadata:
name: owasp-dependency-check
annotations:
argocd.argoproj.io/hook: PreSync
spec:
template:
spec:
containers:
- name: owasp-check
image: maven:3.8.7-openjdk-17
Networking Basics
● IP Addressing
○ Private IPs: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16
○ Public IPs: Assigned by ISPs
○ CIDR Notation: 192.168.1.0/24 (Subnet Mask: 255.255.255.0)
● Ports
○ HTTP: 80
○ HTTPS: 443
○ SSH: 22
○ DNS: 53
○ FTP: 21
○ MySQL: 3306
○ PostgreSQL: 5432
● Protocols
○ TCP (Reliable, connection-based)
○ UDP (Fast, connectionless)
○ ICMP (Used for ping)
○ HTTP(S), FTP, SSH, DNS
2. Network Commands
Linux Networking
Show network interfaces
ip a # Show IP addresses
Trace route
traceroute google.com
DNS lookup
nslookup google.com
dig google.com
Test ports
telnet google.com 80
Allow SSH
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
Block an IP
sudo iptables -A INPUT -s 192.168.1.100 -j DROP
Netcat (nc)
Start a simple TCP listener
nc -lvp 8080
3. Kubernetes Networking
List services and their endpoints
kubectl get svc -o wide
Expose a pod
kubectl expose pod mypod --type=NodePort --port=80
4. Docker Networking
List networks
docker network ls
Inspect a network
docker network inspect bridge
AWS
List VPCs
aws ec2 describe-vpcs
List subnets
aws ec2 describe-subnets
Azure
List VNets
az network vnet list -o table
List NSGs
az network nsg list -o table
● Definition: A logically isolated section of the AWS Cloud where you can
launch AWS resources in a virtual network.
● CIDR Block: Define the IP range (e.g., 10.0.0.0/16).
● Components:
○ Subnets: Divide your VPC into public (with internet access) and
private (without direct internet access) segments.
○ Route Tables: Control the traffic routing for subnets.
○ Internet Gateway (IGW): Allows communication between instances
in your VPC and the internet.
○ NAT Gateway/Instance: Enables outbound internet access for
instances in private subnets.
○ VPC Peering: Connects multiple VPCs.
○ VPN Connections & Direct Connect: Securely link your
on-premises network with your VPC.
○ VPC Endpoints: Privately connect your VPC to supported AWS
services.
● Definition: Virtual firewalls that control inbound and outbound traffic for
your EC2 instances.
● Key Characteristics:
○ Stateful: Return traffic is automatically allowed regardless of
inbound/outbound rules.
○ Default Behavior: All outbound traffic is allowed; inbound is denied
until explicitly allowed.
● Rule Components:
○ Protocol: (TCP, UDP, ICMP, etc.)
○ Port Range: Specific ports or a range (e.g., port 80 for HTTP).
○ Source/Destination: IP addresses or CIDR blocks (e.g., 0.0.0.0/0 for
all).
● Usage:
○ Assign one or more security groups to an instance.
○ Modify rules anytime without stopping or restarting the instance.
VPC Operations
Create a VPC:
aws ec2 create-vpc --cidr-block 10.0.0.0/16
Create a Subnet:
aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.0.1.0/24
Best Practices
● Least Privilege: Only open ports and protocols that are necessary.
● Layered Security: Use both Security Groups and Network ACLs for
enhanced security.
● Monitoring & Auditing: Regularly review and update your security group
rules.
● Naming Conventions: Adopt consistent naming for easy identification and
management.
● Documentation: Keep notes on why certain rules exist to help with future
troubleshooting.
Ports
🔹 Databases
● PostgreSQL - 5432 (Relational database)
● MySQL/MariaDB - 3306 (Relational database)
● MongoDB - 27017 (NoSQL database)
● Redis - 6379 (In-memory database)
● Cassandra - 9042 (NoSQL distributed database)
● CockroachDB - 26257 (Distributed SQL database)
● Neo4j - 7474 (Graph database UI), 7687 (Bolt protocol)
● InfluxDB - 8086 (Time-series database)
● Couchbase - 8091 (Web UI), 11210 (Data access)
A Reverse Proxy helps:
✅ Improve security by hiding backend servers.
✅ Handle traffic and reduce load on backend servers.
✅ Improve performance with caching and compression.
Load Balancing distributes traffic across multiple servers to:
✅ Prevent overloading of a single server.
✅ Ensure high availability (if one server fails, others handle traffic).
✅ Improve speed and performance.
server {
listen 80;
server_name example.com;
location / {
proxy_pass http://backend_servers; # Forward requests to backend
proxy_set_header Host $host; # Keep the original host
proxy_set_header X-Real-IP $remote_addr; # Send real client IP
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
upstream backend_servers {
server server1.example.com; # Backend Server 1
server server2.example.com; # Backend Server 2
}
server {
listen 80;
server_name example.com;
location / {
proxy_pass http://backend_servers; # Send traffic to multiple backend servers
}
}
<VirtualHost *:80>
ServerName example.com
🔹 Install HAProxy
apt install haproxy # Ubuntu/Debian
yum install haproxy # RHEL/CentOS
backend backend_servers
balance roundrobin # Distribute traffic evenly
server server1 server1.example.com:80 check # First server
server server2 server2.example.com:80 check # Second server
🔹 Restart HAProxy
systemctl restart haproxy
systemctl enable haproxy # Enable on startup
4️⃣ Kubernetes Ingress Controller
🔹 Comparison Table
Tool Feature Use Case
from flask import Flask
app = Flask(__name__)
@app.route('/')
def home():
    return "Hello from Server 1"  # example response
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
from flask import Flask
app = Flask(__name__)
@app.route('/')
def home():
    return "Hello from Server 2"  # example response
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
FROM python:3.9
WORKDIR /app
COPY server1.py /app/
RUN pip install flask
CMD ["python", "server1.py"]
nginx
events {}
http {
upstream backend_servers {
server server1:5000;
server server2:5000;
}
server {
listen 80;
server_name localhost;
location / {
proxy_pass http://backend_servers;
}
}
}
version: '3'
services:
server1:
build: .
container_name: server1
ports:
- "5001:5000"
server2:
build: .
container_name: server2
ports:
- "5002:5000"
nginx:
image: nginx:latest
container_name: nginx_proxy
ports:
- "80:80"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
depends_on:
- server1
- server2
docker-compose up --build
curl http://localhost
<VirtualHost *:80>
ServerName localhost
<Proxy "balancer://mycluster">
BalancerMember "http://server1:5000"
BalancerMember "http://server2:5000"
</Proxy>
</VirtualHost>
version: '3'
services:
server1:
build: .
container_name: server1
ports:
- "5001:5000"
server2:
build: .
container_name: server2
ports:
- "5002:5000"
apache:
image: httpd:latest
container_name: apache_proxy
ports:
- "80:80"
volumes:
- ./apache.conf:/usr/local/apache2/conf/httpd.conf
depends_on:
- server1
- server2
frontend http_front
bind *:80
default_backend backend_servers
backend backend_servers
balance roundrobin
server server1 server1:5000 check
server server2 server2:5000 check
version: '3'
services:
server1:
build: .
container_name: server1
ports:
- "5001:5000"
server2:
build: .
container_name: server2
ports:
- "5002:5000"
haproxy:
image: haproxy:latest
container_name: haproxy_loadbalancer
ports:
- "80:80"
volumes:
- ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
depends_on:
- server1
- server2
docker-compose up --build
4. Kubernetes Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
spec:
rules:
- host: example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: my-service
port:
number: 80
Comparison Table
Database Management
User Management
FLUSH PRIVILEGES;
Table Management
SHOW TABLES;
Data Operations
2. NoSQL Databases
MongoDB
show dbs;
use mydb;
db.createCollection("users");
db.users.find();
db.users.deleteOne({name: "Alice"});
mongorestore /backup/
Redis
redis-cli
SET key value
GET key
DEL key
FLUSHALL
Cassandra (CQL)
DESC KEYSPACES;
USE mykeyspace;
CREATE TABLE users (id UUID PRIMARY KEY, name TEXT, email TEXT);
provider "aws" {
region = "us-east-1"
}
identifier = "devops-db"
engine = "mysql"
instance_class = "db.t3.micro"
allocated_storage = 20
username = "admin"
password = "password"
hosts: db_servers
become: yes
tasks:
apt:
name: mysql-server
state: present
service:
name: mysql
state: started
enabled: yes
mysql_db:
name: devops_db
state: present
- name: Create MySQL User
mysql_user:
name: devops_user
password: DevOps@123
priv: "devops_db.*:ALL"
state: present
groovy
pipeline {
agent any
environment {
MYSQL_ROOT_PASSWORD = credentials('mysql-root-pass')
stages {
stage('Backup Database') {
steps {
stage('Restore Database') {
steps {
wget https://github.com/prometheus/mysqld_exporter/releases/download/v0.14.0/mysqld_exporter-0.14.0.linux-amd64.tar.gz
tar xvf mysqld_exporter-0.14.0.linux-amd64.tar.gz
cd mysqld_exporter-0.14.0.linux-amd64
mv mysqld_exporter /usr/local/bin/
- job_name: 'mysql'
static_configs:
- targets: ['localhost:9104']
docker-compose up -d
version: '3.8'
services:
mongo:
image: mongo
container_name: mongodb
restart: always
environment:
MONGO_INITDB_ROOT_USERNAME: admin
MONGO_INITDB_ROOT_PASSWORD: DevOps@123
ports:
- "27017:27017"
docker-compose up -d
docker-compose down
aws dynamodb create-table --table-name <table-name> \
--attribute-definitions AttributeName=id,AttributeType=S \
--key-schema AttributeName=id,KeyType=HASH \
--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
Tool Purpose
✔ Block Storage – Used for databases, VMs, containers (e.g., EBS, Cinder)
✔ File Storage – Used for shared access & persistence (e.g., NFS, EFS)
✔ Object Storage – Used for backups, logs, and media (e.g., S3, MinIO)
Disk Management
fdisk -l
umount /mnt
Filesystem Operations
Format a disk:
mkfs.ext4 /dev/sdb1
AWS S3
List buckets:
aws s3 ls
Upload a file:
aws s3 cp file.txt s3://mybucket/
Download a file:
aws s3 cp s3://mybucket/file.txt .
Sync directories:
aws s3 sync /local/path s3://mybucket/
Upload a file:
az storage blob upload --container-name mycontainer --file file.txt --name file.txt
Download a file:
az storage blob download --container-name mycontainer --name file.txt --file
file.txt
List buckets:
gsutil ls
Upload a file:
gsutil cp file.txt gs://mybucket/
Download a file:
gsutil cp gs://mybucket/file.txt .
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-pv
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
hostPath:
path: "/mnt/data"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
yaml
apiVersion: v1
kind: Pod
metadata:
name: storage-pod
spec:
containers:
- name: app
image: nginx
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: storage-volume
volumes:
- name: storage-volume
persistentVolumeClaim:
claimName: my-pvc
provider "aws" {
region = "us-east-1"
bucket = "devops-backup-bucket"
acl = "private"
output "bucket_name" {
value = aws_s3_bucket.devops_bucket.id
provider "azurerm" {
features {}
name = "devopsstorageacc"
resource_group_name = "devops-rg"
account_tier = "Standard"
account_replication_type = "LRS"
Backup Strategies
AWS S3 Backup
✔ Artifacts Storage:
✔ Logging Storage:
✔ Use object storage (S3, MinIO, GCS) for logs and backups.
✔ Automate storage provisioning using Terraform or Ansible.
✔ Implement encryption (AES-256, KMS, Secrets Manager) for security.
✔ Optimize performance with data compression & caching (Redis, CDN).
✔ Regularly monitor storage with Prometheus, Grafana, CloudWatch.
1. What is Helm?
2. Helm Basics
Commands:
sh
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo nginx
Explanation:
● helm repo add → Adds a new chart repository (e.g., Bitnami, which has
pre-built applications).
● helm repo update → Updates the list of available applications.
● helm search repo nginx → Searches for a chart named "nginx" in the
repositories.