PDC-Assignment#03



Assignment#03
Name: Muhammad Muneeb Khan
Roll Number: 210276

Programming Assignment: MPI Programming

Objective
The goal of this assignment is to write and run MPI programs that explore basic message-passing concepts and implement parallel algorithms.

Initial Setup

!apt update && apt install -y gcc


!apt-get update
!apt-get install -y mpich
!pip install mpi4py

Hit:1 http://archive.ubuntu.com/ubuntu jammy InRelease


Hit:2 http://archive.ubuntu.com/ubuntu jammy-updates InRelease
Hit:3 http://archive.ubuntu.com/ubuntu jammy-backports InRelease
Hit:4 https://cloud.r-project.org/bin/linux/ubuntu jammy-cran40/ InRelease
Hit:5 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64 InRelease
Hit:6 http://security.ubuntu.com/ubuntu jammy-security InRelease
Hit:7 https://r2u.stat.illinois.edu/ubuntu jammy InRelease
Hit:8 https://ppa.launchpadcontent.net/deadsnakes/ppa/ubuntu jammy InRelease
Hit:9 https://ppa.launchpadcontent.net/graphics-drivers/ppa/ubuntu jammy InRelease
Hit:10 https://ppa.launchpadcontent.net/ubuntugis/ppa/ubuntu jammy InRelease
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
50 packages can be upgraded. Run 'apt list --upgradable' to see them.
W: Skipping acquire of configured file 'main/source/Sources' as repository 'https://r2u.stat.illinois.edu/ubuntu jammy InRelease' does n
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
gcc is already the newest version (4:11.2.0-1ubuntu1).
0 upgraded, 0 newly installed, 0 to remove and 50 not upgraded.
Hit:1 http://archive.ubuntu.com/ubuntu jammy InRelease
Hit:2 http://security.ubuntu.com/ubuntu jammy-security InRelease
Hit:3 http://archive.ubuntu.com/ubuntu jammy-updates InRelease
Hit:4 https://cloud.r-project.org/bin/linux/ubuntu jammy-cran40/ InRelease
Hit:5 http://archive.ubuntu.com/ubuntu jammy-backports InRelease
Hit:6 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64 InRelease
Hit:7 https://r2u.stat.illinois.edu/ubuntu jammy InRelease
Hit:8 https://ppa.launchpadcontent.net/deadsnakes/ppa/ubuntu jammy InRelease
Hit:9 https://ppa.launchpadcontent.net/graphics-drivers/ppa/ubuntu jammy InRelease
Hit:10 https://ppa.launchpadcontent.net/ubuntugis/ppa/ubuntu jammy InRelease
Reading package lists... Done
W: Skipping acquire of configured file 'main/source/Sources' as repository 'https://r2u.stat.illinois.edu/ubuntu jammy InRelease' does n
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
mpich is already the newest version (4.0-3).
0 upgraded, 0 newly installed, 0 to remove and 50 not upgraded.
Requirement already satisfied: mpi4py in /usr/local/lib/python3.10/dist-packages (4.0.1)
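
To confirm that the compiler wrapper and launcher installed above are on the path, the standard version flags can be run (the exact output depends on the environment):

!mpicc --version
!mpirun --version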

Assignment Questions

1. Write and Implement a Hello World Program
Write and implement an MPI program where each process prints its rank and the total number of processes. Use MPI_Comm_rank() to retrieve the process rank and MPI_Comm_size() to get the total number of processes. Run the program with at least 4 processes.

%%writefile mpi_hello.c
#include <stdio.h>
#include <mpi.h>


int main(int argc, char* argv[]) {
    int rank, size;

    // Initialize the MPI environment
    MPI_Init(&argc, &argv);

    // Get the rank of the process
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Get the total number of processes
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Print "Hello World" message
    printf("Hello World from process %d of %d\n", rank, size);

    // Finalize the MPI environment
    MPI_Finalize();

    return 0;
}

Overwriting mpi_hello.c

!mpicc -o mpi_hello mpi_hello.c

!mpirun --allow-run-as-root --oversubscribe -np 4 ./mpi_hello

Hello World from process 2 of 4
Hello World from process 1 of 4
Hello World from process 3 of 4
Hello World from process 0 of 4
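
The four lines above arrive in whatever order the processes happen to reach printf, so the ranks appear interleaved (2, 1, 3, 0 here). As a rough sketch of how the output can usually be serialized by rank, each process can take its turn between barriers. The filename mpi_hello_ordered.c is only an illustrative name, and mpirun may still buffer and reorder stdout, so this is not a strict guarantee.

%%writefile mpi_hello_ordered.c
#include <stdio.h>
#include <mpi.h>

// Sketch: printing in rank order. Ranks take turns, separated by barriers,
// so the output usually appears as 0, 1, 2, 3 instead of an arbitrary order.
int main(int argc, char* argv[]) {
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int turn = 0; turn < size; turn++) {
        if (rank == turn) {
            printf("Hello World from process %d of %d\n", rank, size);
            fflush(stdout); // push the line out before the next rank's turn
        }
        MPI_Barrier(MPI_COMM_WORLD); // wait until the current rank has printed
    }

    MPI_Finalize();
    return 0;
}

It would be compiled and launched the same way as mpi_hello above.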

2. Write and Implement Array Summation
Write and implement an MPI program to calculate the sum of an integer array. Divide the array equally among processes, compute partial sums locally, and send the results to the root process using MPI_Send() and MPI_Recv(). The root process should compute and display the total sum.

%%writefile mpi_array_sum.c
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char* argv[]) {
    int rank, size;
    int *array = NULL;
    int local_sum = 0, total_sum = 0;
    int n = 100; // Size of the array
    int *local_array = NULL;
    int local_n;

    // Initialize the MPI environment
    MPI_Init(&argc, &argv);

    // Get the rank of the process
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Get the total number of processes
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Allocate memory for the array on the root process (rank 0)
    if (rank == 0) {
        array = (int*)malloc(n * sizeof(int));

        // Initialize the array with values 1 to 100
        for (int i = 0; i < n; i++) {
            array[i] = i + 1;
        }
    }

    // Divide the array equally among all processes
    local_n = n / size; // Each process gets an equal portion of the array
    local_array = (int*)malloc(local_n * sizeof(int));

    // Scatter the array to all processes
    MPI_Scatter(array, local_n, MPI_INT, local_array, local_n, MPI_INT, 0, MPI_COMM_WORLD);

    // Each process computes its partial sum
    for (int i = 0; i < local_n; i++) {
        local_sum += local_array[i];
    }

    // Root process (rank 0) collects the partial sums and computes the total sum
    if (rank == 0) {
        total_sum = local_sum;
        // Receive partial sums from the other processes
        for (int i = 1; i < size; i++) {
            int partial_sum;
            MPI_Recv(&partial_sum, 1, MPI_INT, i, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            total_sum += partial_sum;
        }
        // Display the total sum
        printf("Total sum: %d\n", total_sum);
    } else {
        // Other processes send their partial sums to the root process
        MPI_Send(&local_sum, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    // Clean up memory
    if (rank == 0) {
        free(array);
    }
    free(local_array);

    // Finalize the MPI environment
    MPI_Finalize();

    return 0;
}

Writing mpi_array_sum.c

!mpicc -o mpi_array_sum mpi_array_sum.c


!mpirun --allow-run-as-root --oversubscribe -np 4 ./mpi_array_sum

Total sum: 5050
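
The Send/Recv loop above matches what the question asks for. For comparison, the same reduction can be expressed with a single collective call. Below is a minimal sketch assuming the same 1..100 array; the filename mpi_array_sum_reduce.c is only an illustrative name, not part of the submitted solution. MPI_Reduce sums every rank's partial sum directly into total_sum on the root.

%%writefile mpi_array_sum_reduce.c
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char* argv[]) {
    int rank, size, n = 100, local_sum = 0, total_sum = 0;
    int *array = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Root initializes the full array with the values 1..n
    if (rank == 0) {
        array = (int*)malloc(n * sizeof(int));
        for (int i = 0; i < n; i++) array[i] = i + 1;
    }

    // Distribute equal chunks and sum each chunk locally
    int local_n = n / size;
    int *local_array = (int*)malloc(local_n * sizeof(int));
    MPI_Scatter(array, local_n, MPI_INT, local_array, local_n, MPI_INT, 0, MPI_COMM_WORLD);
    for (int i = 0; i < local_n; i++) local_sum += local_array[i];

    // MPI_Reduce replaces the manual Send/Recv loop: it sums all local_sum
    // values and leaves the result in total_sum on rank 0 only
    MPI_Reduce(&local_sum, &total_sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("Total sum: %d\n", total_sum);
        free(array);
    }
    free(local_array);

    MPI_Finalize();
    return 0;
}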

3. Write and Implement Matrix-Vector Multiplication
Write and implement an MPI program to perform matrix-vector multiplication. Distribute the rows of the matrix among processes, compute partial results locally, and send them to the root process using MPI_Send() and MPI_Recv(). The root process should assemble and display the final resultant vector.

%%writefile mpi_matrix_vector.c
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char* argv[]) {
    int rank, size;
    int *matrix = NULL, *vector = NULL, *local_matrix = NULL, *result = NULL, *local_result = NULL;
    int rows_per_process, n = 4; // Matrix dimensions (n x n)
    int i, j;

    // Initialize the MPI environment
    MPI_Init(&argc, &argv);

    // Get the rank of the process
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Get the total number of processes
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Every process needs its own copy of the vector, since it is
    // broadcast to all ranks below
    vector = (int*)malloc(n * sizeof(int));

    // Allocate memory for the matrix and result vector on the root process (rank 0)
    if (rank == 0) {
        matrix = (int*)malloc(n * n * sizeof(int)); // n x n matrix
        result = (int*)malloc(n * sizeof(int));     // Resultant vector

        // Initialize the matrix (for simplicity, use sequential values)
        for (i = 0; i < n; i++) {
            for (j = 0; j < n; j++) {
                matrix[i * n + j] = i * n + j + 1; // Example values
            }
        }

        // Initialize the vector
        for (i = 0; i < n; i++) {
            vector[i] = i + 1; // Example values
        }
    }

    // Calculate the number of rows per process
    rows_per_process = n / size;
    local_matrix = (int*)malloc(rows_per_process * n * sizeof(int));
    local_result = (int*)malloc(rows_per_process * sizeof(int));

    // Scatter the rows of the matrix to all processes
    MPI_Scatter(matrix, rows_per_process * n, MPI_INT, local_matrix, rows_per_process * n, MPI_INT, 0, MPI_COMM_WORLD);

    // Broadcast the vector to all processes
    MPI_Bcast(vector, n, MPI_INT, 0, MPI_COMM_WORLD);

    // Each process computes its partial result
    for (i = 0; i < rows_per_process; i++) {
        local_result[i] = 0;
        for (j = 0; j < n; j++) {
            local_result[i] += local_matrix[i * n + j] * vector[j];
        }
    }

    // Root process (rank 0) collects the partial results and assembles the final result
    if (rank == 0) {
        for (i = 0; i < rows_per_process; i++) {
            result[i] = local_result[i];
        }
        // Receive partial results from the other processes
        for (i = 1; i < size; i++) {
            MPI_Recv(&result[i * rows_per_process], rows_per_process, MPI_INT, i, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        // Display the final result
        printf("Final Resultant Vector:\n");
        for (i = 0; i < n; i++) {
            printf("%d ", result[i]);
        }
        printf("\n");
    } else {
        // Other processes send their partial results to the root process
        MPI_Send(local_result, rows_per_process, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    // Clean up memory
    if (rank == 0) {
        free(matrix);
        free(result);
    }
    free(vector);
    free(local_matrix);
    free(local_result);

    // Finalize the MPI environment
    MPI_Finalize();

    return 0;
}

Writing mpi_matrix_vector.c

!mpicc -o mpi_matrix_vector mpi_matrix_vector.c


!mpirun --allow-run-as-root --oversubscribe -np 1 ./mpi_matrix_vector

Final Resultant Vector:
30 70 110 150
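
As a quick sanity check of the output, the first entry of the result is the dot product of row 0 of the matrix, (1, 2, 3, 4), with the vector (1, 2, 3, 4): 1*1 + 2*2 + 3*3 + 4*4 = 30, which matches the first value printed above.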


4. Write and Implement Broadcasting with MPI
Write and implement an MPI program where the root process broadcasts an integer array to all other processes using MPI_Send() and MPI_Recv(). Each process should modify the array by adding its rank to each element. The root process should display the modified array after gathering the results from all processes.

%%writefile mpi_broadcast.c
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char* argv[]) {
    int rank, size;
    int *array = NULL;
    int n = 5; // Size of the array
    int i;

    // Initialize the MPI environment
    MPI_Init(&argc, &argv);

    // Get the rank of the process
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Get the total number of processes
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Every process needs a buffer to receive the broadcast into, so the
    // array is allocated on all ranks; only the root initializes it
    array = (int*)malloc(n * sizeof(int));
    if (rank == 0) {
        // Initialize the array with values 1 to 5
        for (i = 0; i < n; i++) {
            array[i] = i + 1;
        }
    }

    // Broadcast the array from the root process to all other processes
    MPI_Bcast(array, n, MPI_INT, 0, MPI_COMM_WORLD);

    // Modify the array by adding the rank to each element
    for (i = 0; i < n; i++) {
        array[i] += rank;
    }

    // Root process collects and displays the modified arrays
    if (rank == 0) {
        // Root process displays its own modified array
        printf("Modified Array:\n");
        for (i = 0; i < n; i++) {
            printf("%d ", array[i]);
        }
        printf("\n");

        // Gather the modified arrays from the other processes
        for (i = 1; i < size; i++) {
            MPI_Recv(array, n, MPI_INT, i, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            // Display the modified array received from process i
            printf("Modified Array from Process %d: ", i);
            for (int j = 0; j < n; j++) {
                printf("%d ", array[j]);
            }
            printf("\n");
        }
    } else {
        // Non-root processes send their modified array back to the root
        MPI_Send(array, n, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    // Clean up memory (allocated on every rank)
    free(array);

    // Finalize the MPI environment
    MPI_Finalize();

    return 0;
}

Writing mpi_broadcast.c

!mpicc -o mpi_broadcast mpi_broadcast.c


!mpirun --allow-run-as-root --oversubscribe -np 1 ./mpi_broadcast

Modified Array:
1 2 3 4 5
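
The run above uses a single process, so only the root's array is shown. The program itself broadcasts with MPI_Bcast; the question statement literally asks for MPI_Send()/MPI_Recv(), so below is a minimal point-to-point sketch of the same idea (the filename mpi_p2p_broadcast.c is only an illustrative name, not part of the submitted solution): the root sends the array to every other rank, each rank adds its rank to the elements, and each prints its own copy.

%%writefile mpi_p2p_broadcast.c
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

// Sketch: broadcasting an array with explicit MPI_Send/MPI_Recv instead of MPI_Bcast
int main(int argc, char* argv[]) {
    int rank, size, n = 5;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Every rank needs a buffer; the root fills it with the values 1..n
    int *array = (int*)malloc(n * sizeof(int));
    if (rank == 0) {
        for (int i = 0; i < n; i++) array[i] = i + 1;
        // Root sends the array to every other rank
        for (int dest = 1; dest < size; dest++) {
            MPI_Send(array, n, MPI_INT, dest, 0, MPI_COMM_WORLD);
        }
    } else {
        // Every other rank posts a matching receive from the root
        MPI_Recv(array, n, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    // Each rank adds its rank to every element, as in the program above
    for (int i = 0; i < n; i++) array[i] += rank;

    printf("Rank %d array after modification: %d %d %d %d %d\n",
           rank, array[0], array[1], array[2], array[3], array[4]);

    free(array);
    MPI_Finalize();
    return 0;
}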

5. Write and Implement Matrix Addition
Write and implement an MPI program to add two matrices. Divide the rows of the matrices among processes. Each process performs the addition for its assigned rows, and the root process assembles and displays the resulting matrix.

%%writefile mpi_matrix_addition.c
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char* argv[]) {
    int rank, size;
    int rows_per_process, n = 4, m = 4; // Matrix dimensions (4x4)
    int *A = NULL, *B = NULL, *C = NULL;
    int *local_A, *local_B, *local_C;
    int i, j;

    // Initialize the MPI environment
    MPI_Init(&argc, &argv);

    // Get the rank of the process
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Get the total number of processes
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Initialize the matrices in the root process
    if (rank == 0) {
        A = (int*)malloc(n * m * sizeof(int)); // Matrix A (4x4)
        B = (int*)malloc(n * m * sizeof(int)); // Matrix B (4x4)
        C = (int*)malloc(n * m * sizeof(int)); // Matrix C (result)

        // Initialize matrices A and B
        for (i = 0; i < n; i++) {
            for (j = 0; j < m; j++) {
                A[i * m + j] = i * m + j + 1;       // Values 1 to 16
                B[i * m + j] = (i * m + j + 1) * 2; // Different values
            }
        }
    }

    // Determine the number of rows each process will handle
    rows_per_process = n / size;

    // Allocate memory for the local arrays
    local_A = (int*)malloc(rows_per_process * m * sizeof(int));
    local_B = (int*)malloc(rows_per_process * m * sizeof(int));
    local_C = (int*)malloc(rows_per_process * m * sizeof(int));

    // Scatter the rows of matrices A and B to all processes
    MPI_Scatter(A, rows_per_process * m, MPI_INT, local_A, rows_per_process * m, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Scatter(B, rows_per_process * m, MPI_INT, local_B, rows_per_process * m, MPI_INT, 0, MPI_COMM_WORLD);

    // Perform the matrix addition (local calculation)
    for (i = 0; i < rows_per_process; i++) {
        for (j = 0; j < m; j++) {
            local_C[i * m + j] = local_A[i * m + j] + local_B[i * m + j];
        }
    }

    // Gather the results from all processes to the root process
    MPI_Gather(local_C, rows_per_process * m, MPI_INT, C, rows_per_process * m, MPI_INT, 0, MPI_COMM_WORLD);

    // Root process displays the resulting matrix
    if (rank == 0) {
        printf("Matrix A:\n");
        for (i = 0; i < n; i++) {
            for (j = 0; j < m; j++) {
                printf("%d ", A[i * m + j]);
            }
            printf("\n");
        }

        printf("\nMatrix B:\n");
        for (i = 0; i < n; i++) {
            for (j = 0; j < m; j++) {
                printf("%d ", B[i * m + j]);
            }
            printf("\n");
        }

        printf("\nResulting Matrix (A + B):\n");
        for (i = 0; i < n; i++) {
            for (j = 0; j < m; j++) {
                printf("%d ", C[i * m + j]);
            }
            printf("\n");
        }

        // Free memory for matrices A, B, and C
        free(A);
        free(B);
        free(C);
    }

    // Free memory for the local arrays
    free(local_A);
    free(local_B);
    free(local_C);

    // Finalize the MPI environment
    MPI_Finalize();

    return 0;
}

Writing mpi_matrix_addition.c

!mpicc -o mpi_matrix_addition mpi_matrix_addition.c


!mpirun --allow-run-as-root --oversubscribe -np 4 ./mpi_matrix_addition

Matrix A:
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16

Matrix B:
2 4 6 8
10 12 14 16
18 20 22 24
26 28 30 32

Resulting Matrix (A + B):
3 6 9 12
15 18 21 24
27 30 33 36
39 42 45 48
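
All of the programs in this notebook assume the problem size divides evenly by the number of processes (rows_per_process = n / size). When it does not, MPI_Scatterv and MPI_Gatherv accept per-rank counts and displacements. The sketch below (mpi_scatterv_rows.c is only an illustrative name, not part of the assignment) distributes 5 rows among 4 ranks by spreading the remainder:

%%writefile mpi_scatterv_rows.c
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

// Sketch: distributing n rows of an n x m matrix when n is NOT divisible
// by the number of processes, using MPI_Scatterv.
int main(int argc, char* argv[]) {
    int rank, size, n = 5, m = 4; // 5 rows cannot be split evenly among 4 ranks

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // counts[r] = number of ints sent to rank r, displs[r] = starting offset
    int *counts = (int*)malloc(size * sizeof(int));
    int *displs = (int*)malloc(size * sizeof(int));
    int offset = 0;
    for (int r = 0; r < size; r++) {
        int rows = n / size + (r < n % size ? 1 : 0); // spread the remainder
        counts[r] = rows * m;
        displs[r] = offset;
        offset += counts[r];
    }

    int *A = NULL;
    if (rank == 0) {
        A = (int*)malloc(n * m * sizeof(int));
        for (int i = 0; i < n * m; i++) A[i] = i + 1;
    }

    // Each rank receives exactly counts[rank] ints of its own rows
    int *local_A = (int*)malloc(counts[rank] * sizeof(int));
    MPI_Scatterv(A, counts, displs, MPI_INT, local_A, counts[rank], MPI_INT, 0, MPI_COMM_WORLD);

    printf("Rank %d received %d elements\n", rank, counts[rank]);

    free(local_A);
    free(counts);
    free(displs);
    if (rank == 0) free(A);
    MPI_Finalize();
    return 0;
}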

6. Write and Implement Arrays Addition
Write and implement an MPI program to add two arrays element-wise. Distribute the arrays across processes, and each process computes the sum for its assigned elements. The root process should collect the results and display the final summed array.

%%writefile mpi_array_addition.c
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char* argv[]) {

    int rank, size;
    int n = 16; // Length of the arrays
    int *A = NULL, *B = NULL, *C = NULL;
    int *local_A, *local_B, *local_C;
    int i, elements_per_process;

    // Initialize the MPI environment
    MPI_Init(&argc, &argv);

    // Get the rank of the process
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Get the total number of processes
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Initialize the arrays in the root process
    if (rank == 0) {
        A = (int*)malloc(n * sizeof(int)); // Array A
        B = (int*)malloc(n * sizeof(int)); // Array B
        C = (int*)malloc(n * sizeof(int)); // Result array C

        // Initialize arrays A and B
        for (i = 0; i < n; i++) {
            A[i] = i + 1;       // Values 1 to 16
            B[i] = (i + 1) * 2; // Values 2 to 32
        }
    }

    // Determine the number of elements each process will handle
    elements_per_process = n / size;

    // Allocate memory for the local arrays
    local_A = (int*)malloc(elements_per_process * sizeof(int));
    local_B = (int*)malloc(elements_per_process * sizeof(int));
    local_C = (int*)malloc(elements_per_process * sizeof(int));

    // Scatter the elements of arrays A and B to all processes
    MPI_Scatter(A, elements_per_process, MPI_INT, local_A, elements_per_process, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Scatter(B, elements_per_process, MPI_INT, local_B, elements_per_process, MPI_INT, 0, MPI_COMM_WORLD);

    // Perform the array addition (local calculation)
    for (i = 0; i < elements_per_process; i++) {
        local_C[i] = local_A[i] + local_B[i];
    }

    // Gather the results from all processes to the root process
    MPI_Gather(local_C, elements_per_process, MPI_INT, C, elements_per_process, MPI_INT, 0, MPI_COMM_WORLD);

    // Root process displays the resulting array
    if (rank == 0) {
        printf("Array A:\n");
        for (i = 0; i < n; i++) {
            printf("%d ", A[i]);
        }
        printf("\n");

        printf("Array B:\n");
        for (i = 0; i < n; i++) {
            printf("%d ", B[i]);
        }
        printf("\n");

        printf("Resulting Array (A + B):\n");
        for (i = 0; i < n; i++) {
            printf("%d ", C[i]);
        }
        printf("\n");

        // Free memory for arrays A, B, and C
        free(A);
        free(B);
        free(C);
    }

    // Free memory for the local arrays
    free(local_A);
    free(local_B);
    free(local_C);

    // Finalize the MPI environment
    MPI_Finalize();

    return 0;
}

Writing mpi_array_addition.c

!mpicc -o mpi_array_addition mpi_array_addition.c


!mpirun --allow-run-as-root --oversubscribe -np 4 ./mpi_array_addition

Array A:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Array B:
2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32
Resulting Array (A + B):
3 6 9 12 15 18 21 24 27 30 33 36 39 42 45 48

7. Write and Implement a Parallel Search
Write and implement an MPI program to search for a specific value in a large array. Distribute the array among processes, and each process searches its segment. Use MPI_Send() and MPI_Recv() to send the index of the value (if found) to the root process, which should display the first occurrence of the value.

%%writefile mpi_parallel_search.c
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char* argv[]) {
    int rank, size;
    int n = 100;           // Size of the array
    int search_value = 23; // Value to search for
    int *array = NULL;
    int *local_array;
    int *found_indices = NULL;  // On the root: one result per process
    int elements_per_process;
    int local_found_index = -1; // Local result for each process (-1 = not found)
    int i;

    // Initialize the MPI environment
    MPI_Init(&argc, &argv);

    // Get the rank of the process
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Get the total number of processes
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Initialize the array in the root process
    if (rank == 0) {
        array = (int*)malloc(n * sizeof(int));

        // Fill the array with values 1 to 100
        for (i = 0; i < n; i++) {
            array[i] = i + 1;
        }

        // Buffer large enough to hold one result from every process
        found_indices = (int*)malloc(size * sizeof(int));
    }

    // Determine the number of elements each process will handle
    elements_per_process = n / size;

    // Allocate memory for the local segment of each process
    local_array = (int*)malloc(elements_per_process * sizeof(int));

    // Scatter the array to all processes
    MPI_Scatter(array, elements_per_process, MPI_INT, local_array, elements_per_process, MPI_INT, 0, MPI_COMM_WORLD);

    // Each process searches for the value in its segment
    for (i = 0; i < elements_per_process; i++) {
        if (local_array[i] == search_value) {
            local_found_index = rank * elements_per_process + i; // Global index of the found value
            break;
        }
    }

    // Gather every process's result at the root process
    MPI_Gather(&local_found_index, 1, MPI_INT, found_indices, 1, MPI_INT, 0, MPI_COMM_WORLD);

    // Root process scans the results in rank order, so the first hit is the first occurrence
    if (rank == 0) {
        int found_index = -1;
        for (i = 0; i < size; i++) {
            if (found_indices[i] != -1) {
                found_index = found_indices[i];
                break;
            }
        }
        if (found_index != -1) {
            printf("First occurrence of %d is at index %d\n", search_value, found_index);
        } else {
            printf("Value %d not found in the array\n", search_value);
        }

        // Free the memory allocated on the root
        free(array);
        free(found_indices);
    }

    // Free the local array
    free(local_array);

    // Finalize the MPI environment
    MPI_Finalize();

    return 0;
}

Writing mpi_parallel_search.c

!mpicc -o mpi_parallel_search mpi_parallel_search.c


!mpirun --allow-run-as-root --oversubscribe -np 4 ./mpi_parallel_search

First occurrence of 23 is at index 22
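
Index 22 is correct: the array holds the values 1..100, so 23 sits at zero-based index 22. An alternative to gathering every rank's index is a single reduction with MPI_MINLOC, which keeps the smallest found index across ranks. The sketch below is only illustrative (mpi_search_minloc.c is not part of the assignment; it generates each rank's segment locally rather than scattering it, purely for brevity):

%%writefile mpi_search_minloc.c
#include <stdio.h>
#include <limits.h>
#include <mpi.h>

// Sketch: finding the first occurrence with one MPI_Reduce(MPI_MINLOC).
// Each rank reports the smallest global index it found (or INT_MAX if none).
int main(int argc, char* argv[]) {
    int rank, size, n = 100, search_value = 23;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Each rank owns a contiguous segment of the conceptual array 1..n,
    // so it can generate its own values instead of receiving a scatter
    int elements_per_process = n / size;
    int start = rank * elements_per_process;

    int local_pair[2] = { INT_MAX, rank }; // {found index, owning rank}
    for (int i = 0; i < elements_per_process; i++) {
        if (start + i + 1 == search_value) { // element at global index g is g + 1
            local_pair[0] = start + i;
            break;
        }
    }

    // MPI_MINLOC on MPI_2INT keeps the pair with the smallest first element,
    // i.e. the smallest global index at which the value was found
    int global_pair[2];
    MPI_Reduce(local_pair, global_pair, 1, MPI_2INT, MPI_MINLOC, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        if (global_pair[0] != INT_MAX)
            printf("First occurrence of %d is at index %d (found by rank %d)\n",
                   search_value, global_pair[0], global_pair[1]);
        else
            printf("Value %d not found\n", search_value);
    }

    MPI_Finalize();
    return 0;
}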
