Cloud Computing Integrated (BCS601) Lab Manual
Created By:
Hanumanthu
Dedicated To:
📺 YouTube: https://www.youtube.com/@searchcreators7348
📸 Instagram : https://www.instagram.com/searchcreators/
📱 Telegram: https://t.me/SearchCreators
💬 WhatsApp: +917348878215
Laboratory Components
2. Cloud Shell & gcloud: Manage Google Cloud resources using gcloud
Experiment-01
Creating a Virtual Machine: Configure and deploy a virtual machine with specific CPU
and memory requirements in Google Cloud.
1. Name the VM
▪ Example:
• Run:
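The same VM can also be created from Cloud Shell. A minimal sketch, assuming a hypothetical instance name `my-vm`, zone `us-central1-a`, and an `e2-medium` machine type; pick a machine type that matches your CPU and memory requirements:

```shell
# Hypothetical names: my-vm, us-central1-a.
# e2-medium provides 2 vCPUs and 4 GB of memory; adjust as needed.
gcloud compute instances create my-vm \
  --zone=us-central1-a \
  --machine-type=e2-medium \
  --image-family=debian-12 \
  --image-project=debian-cloud
```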
Output
Experiment-02
Getting Started with Cloud Shell and gcloud: Discover the use of gcloud
commands to manage Google Cloud resources from Cloud Shell.
2. Click the Cloud Shell icon ( Terminal icon) in the top-right corner.
gcloud init
Run the following command to view all projects associated with your Google
account:
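The listing command is:

```shell
# Lists every project your account has access to,
# with project ID, name, and project number.
gcloud projects list
```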
Output
Experiment-03
nano main.py
import functions_framework

# Triggered whenever an object is finalized (uploaded) in the bucket.
@functions_framework.cloud_event
def gcs_trigger(cloud_event):
    data = cloud_event.data
    bucket = data["bucket"]
    file_name = data["name"]
    print(f"File {file_name} uploaded to bucket {bucket}.")
nano requirements.txt
functions-framework
gcloud functions deploy gcs_trigger \
  --gen2 \
  --runtime=python311 \
  --region=us-central1 \
  --source=. \
  --entry-point=gcs_trigger \
  --trigger-event-filters="type=google.cloud.storage.object.v1.finalized" \
  --trigger-event-filters="bucket=BUCKET_NAME" \
  --allow-unauthenticated
Output
Experiment-04
nano main.py
from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    # Placeholder response; replace with your page content.
    return 'Hello from App Engine!'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)
nano requirements.txt
Flask
gunicorn
nano app.yaml
runtime: python311
automatic_scaling:
  min_instances: 1
  max_instances: 5
  target_cpu_utilization: 0.65
  target_throughput_utilization: 0.75
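With `main.py`, `requirements.txt`, and `app.yaml` in place, the app can be deployed from the same directory; a sketch of the standard commands:

```shell
# Deploy to App Engine (run from the directory containing app.yaml).
gcloud app deploy app.yaml
# Open the deployed application in a browser.
gcloud app browse
```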
Output
Experiment-05
1. In the Google Cloud Console, click the Navigation Menu (☰) on the top
left.
2. Click Create.
7. Click Create.
allUsers
6. Click Save.
https://storage.googleapis.com/your-unique-bucket-name/your-file-name
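The same bucket-and-upload flow can be sketched from Cloud Shell with `gsutil`; the bucket and file names below are hypothetical placeholders (bucket names must be globally unique):

```shell
# Create the bucket and upload a file.
gsutil mb -l us-central1 gs://your-unique-bucket-name
gsutil cp your-file-name gs://your-unique-bucket-name/
# Make the object publicly readable (equivalent to granting allUsers access).
gsutil acl ch -u AllUsers:R gs://your-unique-bucket-name/your-file-name
```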
Output
Experiment-06
Cloud SQL for MySQL: Discover how Google Cloud SQL for MySQL
provides automated management and high availability for MySQL
databases.
Automated Management
3. Set:
or
Using Django/Flask
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'your-db-name',
        'USER': 'root',
        'PASSWORD': 'your-password',
        'HOST': '/cloudsql/your-project-id:your-region:your-instance',
        'PORT': '3306',
    }
}
3. Save changes.
3. Click Create.
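The instance and database can also be created from Cloud Shell; a sketch assuming a hypothetical instance name `my-mysql-instance` and a small tier (choose a tier that fits your workload):

```shell
# Create a MySQL 8.0 instance (hypothetical name and region).
gcloud sql instances create my-mysql-instance \
  --database-version=MYSQL_8_0 \
  --tier=db-f1-micro \
  --region=us-central1
# Create a database inside the instance.
gcloud sql databases create your-db-name --instance=my-mysql-instance
```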
Output
Experiment-07
4. Click Create.
4. Click Create.
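The topic and subscription from the console steps above can equally be created from Cloud Shell, using the same names as the Python code below:

```shell
gcloud pubsub topics create my-topic
gcloud pubsub subscriptions create my-subscription --topic=my-topic
```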
from google.cloud import pubsub_v1
import time

project_id = "your-project-id"
topic_id = "my-topic"

# Publish a message to the topic.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, topic_id)
publisher.publish(topic_path, b"Hello, Pub/Sub!")

# Subscribe and print incoming messages.
subscription_id = "my-subscription"
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(project_id, subscription_id)

def callback(message):
    print(f"Received: {message.data.decode('utf-8')}")
    message.ack()

subscriber.subscribe(subscription_path, callback=callback)

# Keep the main thread alive so the subscriber can receive messages.
while True:
    time.sleep(10)
Output
Experiment-08
In Google Cloud, a Virtual Private Cloud (VPC) allows you to define and control
networking environments for your resources. You can have multiple VPC
networks, each isolated from one another or interconnected for specific use
cases. Managing multiple VPCs helps you scale, secure, and organize resources
efficiently.
• Private Connectivity – Each VPC can have private IPs that do not
communicate with other VPCs unless explicitly allowed, keeping
sensitive data isolated.
• Granular Firewall Rules – Define specific firewall rules for each VPC,
limiting access to resources within a VPC or between multiple VPCs.
• Custom Subnetting – Each VPC can have its own subnet structure
tailored to the needs of specific projects or services.
Traffic Control
• VPC Peering – You can allow traffic between two or more VPCs by
creating VPC peering connections. This gives you flexibility in managing
traffic flow while maintaining network isolation.
• Private Google Access – For certain services, you can configure access
to Google Cloud services without using public IPs, enhancing security.
Scaling Flexibility
Multi-Tier Applications
You can deploy multi-tier architectures where each tier (e.g., web, app,
database) resides in separate VPC networks, enabling better isolation and
security between tiers.
Cross-Region Architecture
You can deploy resources in multiple regions for disaster recovery or to meet
local compliance requirements while maintaining network isolation between
regions. For instance, a production VPC in one region and a disaster recovery
VPC in another.
You might have managed services (like Cloud SQL or BigQuery) in one VPC
while using compute instances or other resources in another, optimizing
resource placement.
2. Specify the name, region, and subnet configuration for your VPC.
3. Click Create.
2. Choose Custom subnet mode to define your own subnets or Auto mode
for auto-assigned subnets.
3. Define the network and routes that can be shared across the VPCs.
4. Click Create.
2. Define the source and destination VPCs, and configure the firewall to
allow or deny traffic between the VPCs.
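The steps above can be sketched with gcloud commands; the network names (`vpc-a`, `vpc-b`), subnet ranges, and rule name below are hypothetical:

```shell
# Create a custom-mode VPC with one subnet.
gcloud compute networks create vpc-a --subnet-mode=custom
gcloud compute networks subnets create subnet-a \
  --network=vpc-a --region=us-central1 --range=10.0.1.0/24
# Peer vpc-a with an existing vpc-b
# (a matching peering must also be created from vpc-b).
gcloud compute networks peerings create peer-a-to-b \
  --network=vpc-a --peer-network=vpc-b
# Allow SSH into vpc-a from the peered subnet's range.
gcloud compute firewall-rules create allow-ssh-from-peer \
  --network=vpc-a --allow=tcp:22 --source-ranges=10.0.2.0/24
```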
Output
Experiment-09
Performance Tracking
• Custom Metrics – You can define your own custom metrics for specific
applications to monitor application-level health or performance.
Health Monitoring
1. Open the Google Cloud Console → Navigation Menu (☰) → APIs &
Services → Library.
3. Select Metrics from the dropdown and choose the service you want to
monitor (e.g., Compute Engine, Cloud Storage, Cloud Functions).
4. Add the required metrics to your dashboard and adjust visualizations like
line charts, heat maps, or bar charts.
4. Select the notification channels (e.g., email, Slack, SMS) where the alert
should be sent.
3. Use Log Explorer to search for specific logs, such as error messages or
performance warnings, and correlate them with metrics in Cloud
Monitoring.
2. Enable trace collection in your services (e.g., by using the Cloud Trace
SDK for your app).
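The observability APIs used in this experiment can be enabled in one step from Cloud Shell:

```shell
# Enable Cloud Monitoring, Cloud Logging, and Cloud Trace for the project.
gcloud services enable monitoring.googleapis.com \
  logging.googleapis.com cloudtrace.googleapis.com
```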
Output
Experiment-10
2. Install kubectl
kubectl is the Kubernetes command-line tool used to manage Kubernetes
clusters. The Cloud SDK includes kubectl, so if you have the SDK
installed, you already have kubectl.
1. Create a new Google Cloud project (if you don't already have one):
Replace my-cluster with your desired cluster name and adjust the zone if
necessary.
2. Get the credentials for your cluster: This command configures kubectl
to use the cluster you just created.
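These two steps can be sketched as follows, using the hypothetical cluster name `my-cluster` and zone `us-central1-a` from the instructions above:

```shell
# Create a three-node GKE cluster.
gcloud container clusters create my-cluster \
  --zone=us-central1-a --num-nodes=3
# Fetch credentials so kubectl talks to the new cluster.
gcloud container clusters get-credentials my-cluster --zone=us-central1-a
```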
Dockerfile:
FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
# Assumes package.json defines a "start" script for your app.
CMD ["npm", "start"]
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: gcr.io/PROJECT_ID/my-app:v1
        ports:
        - containerPort: 8080
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: LoadBalancer
3. Get the external IP address: It may take a few moments for the
LoadBalancer to be provisioned.
The EXTERNAL-IP column will show the public IP once the LoadBalancer is
provisioned.
2. You can also use kubectl to get the status of your pods and services:
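Applying the manifests and checking status can be sketched as:

```shell
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
# Watch until EXTERNAL-IP changes from <pending> to a public address.
kubectl get service my-app-service
# Check that all three replicas are Running.
kubectl get pods
```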
Output