Questions Are Based On The Interviews Attended by Folks
Ansible, Terraform, Docker, Kubernetes, Jenkins, Git, Prometheus, Grafana, AWS CLI, Python scripting.
4. How do you handle the continuous delivery (CD) aspect in your projects?
EC2, S3, RDS, Lambda, IAM, VPC, CloudFormation, Route 53, EBS, ELB, CloudWatch.
7. How would you access data in an S3 bucket from Account A when your application is running on an
EC2 instance in Account B?
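One widely used pattern: a bucket policy in Account A grants access to the EC2 instance's IAM role in Account B, and that role carries a matching IAM policy allowing the same S3 actions. A minimal bucket-policy sketch (account ID, role name, and bucket name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccountBRole",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<ACCOUNT_B_ID>:role/app-ec2-role" },
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```

The instance profile role in Account B must also attach an IAM policy permitting these actions on the bucket; cross-account access requires both sides to allow it.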
9. How can Instance 2, with a static IP, communicate with Instance 1, which is in a private subnet and
mapped to a multi-AZ load balancer?
Route requests through the load balancer's DNS name rather than Instance 1's private IP.
Configure security groups to allow traffic between the instances.
Use a NAT Gateway if Instance 1 also needs outbound internet access (a bastion host provides administrative inbound access, not application traffic).
10. For an EC2 instance in a private subnet, how can it verify and download required packages from the
internet without using a NAT gateway or bastion host? Are there any other AWS services that can
facilitate this?
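One AWS service that fits this question is a gateway VPC endpoint for S3, which lets private-subnet instances reach S3 without a NAT gateway. A minimal Terraform sketch (the VPC, route table, and region names are assumptions):

```hcl
# Gateway endpoint: private-subnet instances reach S3 with no NAT gateway
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = aws_vpc.main.id
  service_name      = "com.amazonaws.us-east-1.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [aws_route_table.private.id]
}
```

For Amazon Linux, the default package repositories are S3-backed, so a gateway endpoint alone is often enough; other distributions may need interface endpoints or a proxy.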
11. What is the typical latency for a load balancer, and if you encounter high latency, what monitoring
steps would you take?
12. If your application is hosted in S3 and users are in different geographic locations, how can you reduce
latency?
13. Which services can be integrated with a CDN (Content Delivery Network)?
14. How do you dynamically retrieve VPC details from AWS to create an EC2 instance using IaC?
Use Terraform data sources (e.g., data "aws_vpc", data "aws_subnets") to look up existing VPC details dynamically at plan time.
Use the terraform import command only to bring already-created resources under Terraform management; consider third-party providers for details the AWS provider does not expose.
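Dynamically retrieving VPC details is usually done with Terraform data sources; a minimal sketch (tag values and the AMI ID are placeholders):

```hcl
# Look up an existing VPC and one of its subnets by tag
data "aws_vpc" "selected" {
  tags = { Name = "prod-vpc" }
}

data "aws_subnets" "selected" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.selected.id]
  }
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.micro"
  subnet_id     = data.aws_subnets.selected.ids[0]
}
```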
16. How do you pass arguments to a VPC while using the 'terraform import' command?
Resource address: the address of the VPC resource in configuration (e.g., aws_vpc.main) to which the real VPC ID is mapped.
Matching configuration: terraform import accepts no resource arguments, so the Terraform configuration must already describe the VPC and align with the existing VPC's attributes.
Terraform state file: a valid state file is required to track the imported resource.
Indirectly through state modification: it is technically possible to edit the state file after import to adjust attributes, but this approach is error-prone and should be avoided.
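A minimal import sketch (the VPC ID and CIDR are placeholders):

```hcl
# main.tf — the configuration must already describe the VPC
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16" # must match the existing VPC's CIDR
}

# Then run:
#   terraform import aws_vpc.main vpc-0abc1234567890def
```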
18. If an S3 bucket was created through Terraform but someone manually added a policy to it, how do
you handle this situation?
Detect and revert drift: run terraform plan to surface the manually added policy, then
terraform apply to restore the policy defined in configuration. To tolerate the manual
change instead, use the lifecycle block's ignore_changes argument.
Consider custom resources: For more complex scenarios, create a custom resource to manage
the bucket policy.
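Managing the policy as its own resource makes drift visible to terraform plan; a minimal sketch (the bucket and file names are placeholders):

```hcl
resource "aws_s3_bucket_policy" "app" {
  bucket = aws_s3_bucket.app.id
  policy = file("bucket-policy.json") # the desired policy, kept in code

  # To tolerate manual edits instead of reverting them on the next apply:
  # lifecycle {
  #   ignore_changes = [policy]
  # }
}
```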
19. How do you handle credentials for a PHP application accessing MySQL or any other secrets in
Docker?
Use environment variables: Store sensitive information as environment variables within the
Docker container.
Leverage secret management tools: Tools like AWS Secrets Manager or HashiCorp Vault can
securely store and manage secrets.
Avoid hardcoding credentials: Never commit credentials directly to code.
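A minimal sketch of the environment-variable approach, pulling the value from AWS Secrets Manager at deploy time (the secret ID, hostnames, and image name are placeholders):

```shell
# Fetch the secret and inject it into the container as an environment variable
DB_PASSWORD=$(aws secretsmanager get-secret-value \
  --secret-id prod/php-app/mysql --query SecretString --output text)

docker run -d --name php-app \
  -e DB_HOST=mysql.internal \
  -e DB_PASSWORD="$DB_PASSWORD" \
  php-app:latest
```

Environment variables are visible via docker inspect, so for stricter isolation prefer Docker secrets (in Swarm) or mounting the secret as a file.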
docker logs <container_name_or_id>: This command displays the logs of a container (running or stopped).
Yes, I have upgraded Kubernetes clusters. The process typically involves careful planning,
creating backups, updating control plane components, and then upgrading worker nodes.
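For a kubeadm-managed cluster (an assumption; managed services such as EKS follow a different flow), the upgrade steps sketch as:

```shell
# On the control plane node
kubeadm upgrade plan            # list available versions and run preflight checks
kubeadm upgrade apply v1.29.4   # upgrade control plane components (placeholder version)

# On each worker node, after draining it
kubectl drain <node> --ignore-daemonsets
kubeadm upgrade node
kubectl uncordon <node>
```

The kubelet and kubectl packages are upgraded separately through the OS package manager on each node.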
Create deployment manifests: Define the desired state of the application using YAML or JSON
files.
Apply manifests: Use kubectl apply to create or update the deployment.
Monitor deployment status: Use kubectl describe deployment or kubectl get pods to track the
deployment process.
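A minimal deployment manifest matching the steps above (the names, image, and port are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0.0
          ports:
            - containerPort: 8080
```

Apply it with kubectl apply -f deployment.yaml and follow progress with kubectl rollout status deployment/web.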
23. How do you communicate with a Jenkins server and a Kubernetes cluster?
Jenkins plugins: Use plugins to integrate with Kubernetes (e.g., Kubernetes Pipeline plugin).
Kubernetes API: Interact with the Kubernetes API directly using the kubectl command or libraries.
Use kubeconfig file: Create a kubeconfig file containing authentication and authorization
information.
Leverage IAM roles for service accounts: Assign IAM roles to service accounts for secure access.
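In a Jenkins build step, the kubeconfig approach often looks like the following (the path, context name, and manifest location are assumptions):

```shell
# Point kubectl at a kubeconfig made available to the build, then deploy
export KUBECONFIG=/var/lib/jenkins/.kube/config
kubectl config use-context prod-cluster
kubectl apply -f k8s/deployment.yaml
```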
25. Do you only update Docker images in Kubernetes, or do you also update replicas, storage levels, and
CPU allocation?
Update as needed: Docker images are typically updated for code changes, while replicas, storage,
and CPU allocation are adjusted based on workload requirements.
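These updates map to distinct kubectl operations; a sketch (deployment and image names are placeholders):

```shell
kubectl set image deployment/web web=example/web:1.1.0   # roll out a new image
kubectl scale deployment/web --replicas=5                # adjust replica count
kubectl set resources deployment/web \
  --limits=cpu=500m,memory=512Mi                         # adjust CPU/memory limits
```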
27. Can you define environment variables inside your Jenkins pipeline?
Yes: Environment variables can be defined within the Jenkinsfile or passed as parameters to the
pipeline.
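A minimal declarative Jenkinsfile sketch showing both forms (the variable and stage names are placeholders):

```groovy
pipeline {
    agent any
    environment {
        APP_ENV = 'staging'   // defined for the whole pipeline
    }
    parameters {
        string(name: 'VERSION', defaultValue: '1.0.0')
    }
    stages {
        stage('Build') {
            steps {
                sh 'echo "Deploying $VERSION to $APP_ENV"'
            }
        }
    }
}
```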
28. What is the role of artifacts in Jenkins, and why do we need to push them to Nexus instead of
building and storing them locally?
Artifacts are build outputs: They can be packages, test results, or other files.
Nexus provides centralized artifact management: It offers features like versioning, search, and
security, improving artifact management and distribution.
29. If you're developing a Python-based application, how do you separate the packages needed for your
local deployment to avoid interfering with globally installed packages?
Create virtual environments: Use tools like venv or virtualenv to isolate project dependencies.
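A minimal sketch using the standard-library venv module:

```shell
python3 -m venv .venv                       # create an isolated environment
. .venv/bin/activate                        # activate it (Windows: .venv\Scripts\activate)
python -c 'import sys; print(sys.prefix)'   # now points inside .venv
```

Once activated, pip install -r requirements.txt installs packages only into .venv, leaving the global site-packages untouched.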
Use try-except blocks: Enclose code that might raise exceptions in a try block and handle them in
an except block.
Raise custom exceptions: Create custom exception classes for specific error conditions.
Utilize logging: Log errors for debugging and monitoring purposes.
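The points above combine into a short sketch (the exception class and function names are illustrative, not from the original):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


class ConfigError(Exception):
    """Custom exception for invalid configuration values."""


def read_port(raw: str) -> int:
    """Parse a port number, logging and re-raising bad input as ConfigError."""
    try:
        port = int(raw)
    except ValueError as exc:
        logger.error("invalid port %r: %s", raw, exc)
        raise ConfigError(f"port must be an integer, got {raw!r}") from exc
    if not 1 <= port <= 65535:
        raise ConfigError(f"port out of range: {port}")
    return port
```

Here read_port("8080") returns 8080, while read_port("http") logs the failure and raises ConfigError with the original ValueError chained as its cause.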
Here are some prominent tools for dynamic code analysis that can be integrated into a DevOps CI/CD
pipeline:
AppScan: Offers comprehensive vulnerability scanning, including web application and API testing.
Burp Suite: Popular for penetration testing and web application security, it can be integrated into
CI/CD for automated vulnerability scanning.
OWASP ZAP: Open-source tool for web application security testing, suitable for integration into
CI/CD pipelines.
Checkmarx: Provides both static and dynamic analysis, offering a holistic approach to security
testing.
Veracode: Another comprehensive platform that includes DAST capabilities.
Contrast Security: Offers runtime application self-protection (RASP) and IAST capabilities.
Synopsys Seeker: Provides IAST and runtime application protection.
Selenium: While primarily a test automation tool, it can be used for dynamic testing scenarios.
JMeter: For performance testing, which can indirectly identify potential vulnerabilities.
Chaos engineering tools: Tools such as Chaos Monkey can be used to test system resilience
under unexpected conditions.