
Eigenvalues and Eigenvectors

Submitted towards the partial fulfilment of the requirement
for the award of the degree of

Bachelor of Technology
in
Mechanical Engineering
Submitted by

Naman Kumar                          Rohan Sharma

(Mechanical)                         (Mechanical)

2K20/A10/01                          2K20/A10/47

Under the Supervision of

Dr. Anjana Gupta


Department of Applied Mathematics

Delhi Technological University
Bawana Road, Delhi - 110042
2020-2021
INTRODUCTION
Network analysis is the study of complex relational data that captures the
relationships between the members of a system. Typical goals of network
analysis include analyzing the structure of the system, studying the
characteristics of the relationships, ranking members on the basis of the
relationships they take part in, and identifying communities of members.
Network analysis has found applications in many fields, such as social
networks, biological networks, citation networks, peer-to-peer networks, the
World Wide Web, the Internet, particle physics, electrical networks, etc.
Accordingly, the members of a system can be anything: individual users, user
groups and organizations, molecular structures (such as proteins),
publications, computers and routers, websites, power grids and more. The
power of network analysis comes from modeling the complex relationships
between members as simply a graph with vertices (a.k.a. nodes) and edges
(a.k.a. links), which can be directed or undirected (or a combination of
both) and unweighted or weighted, depending on the nature of the interactions
between members.

We model a complex network as a graph of edges and vertices: a vertex
represents a single component of the system being modeled (e.g., users,
computers, topics, proteins, etc.) and an edge records an interaction between
two such components.

For every pair of vertices i and j in a graph G, the entry in the i-th row
and j-th column of the adjacency matrix A(G) is 1 if there is an edge from
vertex i to vertex j, and 0 otherwise. Depending on the nature of the
interactions, the edges can be undirected (giving a symmetric adjacency
matrix) or directed (giving a possibly non-symmetric adjacency matrix).
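
As an illustration (a hypothetical example, not one of the graphs analyzed
later), consider a 4-vertex cycle with edges 1-2, 2-3, 3-4 and 4-1. Its
adjacency matrix is symmetric because the edges are undirected:

          1  2  3  4
     1  [ 0  1  0  1 ]
     2  [ 1  0  1  0 ]
     3  [ 0  1  0  1 ]
     4  [ 1  0  1  0 ]

Note that this graph is bipartite, with partitions {1, 3} and {2, 4}; we
return to graphs of this kind when discussing the bipartivity index.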

In this paper, we present a spectral decomposition (eigenvalue and
eigenvector) analysis of the adjacency matrices of network graphs to
determine whether the graphs are bipartite or close to being bipartite, and,
if they fall into one of these two cases, we show how to predict the two
partitions of the "true" or "close-to" bipartite graph. A bipartite graph is
a graph in which the set of vertices can be divided into two disjoint
partitions such that the edges of the graph connect vertices across the two
partitions. In a "true" bipartite graph, there are no edges connecting
vertices within the same partition. In a "close-to" bipartite graph, there
may be one or a few edges (called frustrated edges) that connect vertices
within the same partition. The spectral decomposition is a multi-dimensional
representation of the set of eigenvalues and the corresponding eigenvectors
of the adjacency matrix of the network graph. An eigenvector gives the
coordinates of the data points along one axis of this multi-dimensional
space, and its eigenvalue is the length of the projection along that axis.
Depending on the characteristics of the network under consideration, we can
choose the axes (eigenvectors) that primarily capture the variability of the
data (in this paper, we will use the smallest eigenvalue and its
corresponding eigenvector): the first axis corresponds to the direction of
the greatest variability in the data, while the second axis (perpendicular
to the first) captures the direction of the largest residual variability,
and so on.

RELATED WORK

The majority of the work in the literature focuses on the development of
algorithms for minimizing the number of edges (i.e., frustrated edges) that
need to be removed from a graph in order to obtain a bipartite subgraph.
Although this is an NP-hard problem for general graphs, for fullerene graphs
(cubic 3-connected planar graphs with exactly 12 pentagonal faces and some
number of hexagonal faces) a polynomial-time algorithm has been found for
selecting the minimal set of edges whose removal from the fullerene graph
yields a bipartite subgraph. Other work has developed a mathematical
programming model and used a genetic algorithm to determine the minimum
number of frustrated edges that must be removed from a graph to obtain a
bipartite subgraph.

EIGENVALUES AND EIGENVECTORS -


Spectral decomposition is a standard method for processing multi-dimensional
data in statistics and for determining the directions of maximum variability.
These directions are called the eigenvectors, and the relative importance of
each direction is determined by the corresponding eigenvalue. A spectrum is
the set of all (eigenvalue, eigenvector) pairs of multi-dimensional data
represented as a matrix. In this paper, we show that the spectral
decomposition of the unweighted adjacency matrix (whose elements are either
0 or 1) of a network graph can be carried out to obtain information on the
extent of bipartivity of the underlying network, as well as to predict the
two partitions of the network graph.
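
In matrix form, for a real symmetric matrix A (such as the adjacency matrix
of an undirected graph), the spectral decomposition can be written as the
standard identity

    A = Σi λi xi xiᵀ

where the λi are the eigenvalues and the xi are the corresponding
orthonormal eigenvectors of A.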
We now show an example of calculating eigenvalues and eigenvectors. Figure 1
shows the computation of the characteristic polynomial of the adjacency
matrix of the network graph shown there. The roots of the characteristic
polynomial (i.e., the roots of the equation |A - λI| = 0) are the
eigenvalues. Accordingly, we solve the characteristic polynomial
λ^4 - 4λ^2 - 2λ + 1 = 0; the roots are λ = {2.17, 0.31, -1, -1.48}. The
eigenvector X for an eigenvalue λ satisfies (A - λI)X = 0 [7]. Note that X
is a column vector with n rows, where n is the order of the adjacency
matrix.
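
As a quick sanity check on the roots, substituting λ = -1 into the
characteristic polynomial gives (-1)^4 - 4(-1)^2 - 2(-1) + 1 = 1 - 4 + 2 + 1
= 0, confirming that -1 is an exact eigenvalue; the other three roots listed
above are rounded to two decimal places.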

Figure 1. Characteristic Polynomial for the Adjacency Matrix of the Network Graph

We show the computation of the eigenvector for the eigenvalue 2.17 in
Figure 2. Note that 2.17 is the largest eigenvalue of the adjacency matrix;
it is called the principal eigenvalue, and its corresponding eigenvector is
called the principal eigenvector. To calculate the eigenvalues and
eigenvectors of the adjacency matrices used in this paper (including
Figure 2), we use the website:
http://www.arndt-bruenner.de/mathe/script/engl_eigenwert.htm. A screenshot of
the results obtained for the adjacency matrix of Figure 1 is shown below in
Figure 3.
Figure 2. Calculation of the Principal Eigenvector for the Network Graph in Figure 1

Figure 3. Online Calculator for Eigenvalues and Eigenvectors for an Adjacency Matrix
BIPARTIVITY INDEX

A graph G = (V, E) is said to be bipartite if the vertex set V can be divided
into two disjoint subsets V1 and V2 such that there are no edges connecting
vertices within the same subset and each edge in E only connects a vertex in
V1 to a vertex in V2, or vice-versa (if the graph is directed) [8]. More
formally, G = (V, E) is said to be bipartite if the two partitions V1 and V2
of the vertex set V and the edge set E are related as follows:

(i) V1 ∪ V2 = V and V1 ∩ V2 = Φ (the empty set)

(ii) ∀ (i, j) ∈ E, either i ∈ V1 and j ∈ V2, or i ∈ V2 and j ∈ V1

Figure 4.1 illustrates a bipartite graph that has no edges within its vertex
set partitions. In practice, it may not be feasible to find network graphs
that are truly bipartite; there may be a few edges between vertices in the
same partition. Such edges are called frustrated edges. Figure 4.2
illustrates a graph that is close to being bipartite, with the majority of
the edges connecting vertices across the two partitions, but with two
frustrated edges. The eigenvalues of the adjacency matrix can be used to
determine the extent of bipartivity of a network graph G in the form of a
metric called the bipartivity index, bS(G), calculated as follows. Let λ1,
λ2, λ3, ..., λn be the eigenvalues of the adjacency matrix of G.
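
The bipartivity index is given here in the standard spectral form, which
matches the sinh-based discussion below:

    bS(G) = Σj cosh(λj) / Σj e^(λj)
          = Σj cosh(λj) / [Σj cosh(λj) + Σj sinh(λj)]

Since the eigenvalue spectrum of a bipartite graph is symmetric about zero,
the sinh terms cancel in pairs and bS(G) = 1 exactly when the graph is
bipartite.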

Figure 4. Examples of "True" and "Close-to" Bipartite Graphs


The calculation of the bipartivity index for a "true" bipartite graph and a
"close-to" bipartite graph is shown in Figures 5 and 6 respectively. We can
see that for the "true" bipartite graph, the sinh(λj) values in the
bipartivity index formula add up to 0, resulting in a bipartivity index of 1
for such graphs. On the other hand, for a non-bipartite graph, the sum of
the sinh(λj) values is positive, making the denominator larger than the
numerator in the bipartivity index formula. Therefore, the bipartivity index
of a non-bipartite graph is always less than 1; if the bS(G) value of a
graph G is close to 1, we call such a graph "close-to" bipartite, like the
one in Figure 6 (where edge 2-4 is removed from the graph of Figure 5 and
edge 1-4 is added as a frustrated edge).
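
As a quick worked check using the 4-vertex cycle from the Introduction (an
illustrative example, not one of the figures): its eigenvalues are
{2, 0, 0, -2}, so Σj sinh(λj) = sinh(2) + 0 + 0 + sinh(-2) = 0 and
bS(G) = 1, as expected for a "true" bipartite graph.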

Figure 5. Bipartivity Index Calculations for a "truly" Bipartite Graph

Figure 7 shows the impact of the number of frustrated edges and their
location on the value of the bipartivity index for some sample network
graphs. The bipartivity index of a graph decreases as the number of
frustrated edges connecting vertices within the same partition increases. We
can also see that, for a given number of frustrated edges, the bipartivity
index is relatively higher for graphs whose frustrated edges lie in the
larger of the two partitions than for graphs whose frustrated edges lie in
the smaller partition.

Figure 6. Bipartivity Index Calculations for a "close-to" Bipartite Graph


Figure 7. Impact of the Number of Frustrated Edges and their Location on the
Bipartivity Index of Network Graphs

PREDICTIONS OF THE PARTITIONS IN AN UNDIRECTED BIPARTITE GRAPH
We now illustrate how to predict the two partitions of a "true" or
"close-to" bipartite graph. For this purpose, we make use of the smallest of
the eigenvalues and its corresponding eigenvector, hereafter referred to as
the bipartite eigenvalue and the bipartite eigenvector respectively. The
bipartite eigenvector is likely to comprise both positive and negative
entries. The node IDs whose entries in the bipartite eigenvector are of
positive sign constitute one of the two partitions, and those of negative
sign constitute the other partition. The above technique has been observed
to correctly predict the two partitions of a "true" bipartite graph, as
shown in Figure 8. However, for "close-to" bipartite graphs, the partitions
predicted (using the smallest eigenvalue and its corresponding eigenvector)
may not be the same as the hypothetical partitions of the input graph whose
adjacency matrix was used to determine the eigenvalue and the eigenvector.
Nonetheless, the predicted partitions of the "close-to" bipartite graph and
the hypothetical partitions of the original input graph contribute to the
same bipartivity index value. This shows that two "close-to" bipartite
graphs that physically look similar (i.e., the same set of vertices and
edges connecting the vertices) but are logically different (i.e., differ in
their partitions) would still have the identical bipartivity index; the
difference gets compensated in the number of vertices that form the two
partitions and/or the distribution of the frustrated edges across the two
partitions. Note that the predictions of the partitions get less accurate as
the bipartivity index gets far lower than 1.
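
A minimal sketch of this prediction step is shown below in C (it assumes the
bipartite eigenvector has already been computed, e.g., with a variant of the
Power Method described later; the eigenvector entries used here are
hypothetical):

#include <stdio.h>

/* Split node IDs into two predicted partitions based on the signs of the
   entries of the bipartite (smallest-eigenvalue) eigenvector. */
int main(void)
{
    /* Hypothetical bipartite eigenvector for a 6-vertex graph */
    float v[] = {0.41f, -0.38f, 0.44f, -0.41f, 0.35f, -0.45f};
    int n = 6, i;

    printf("Partition V1 (positive entries):");
    for (i = 0; i < n; i++)
        if (v[i] >= 0.0f)
            printf(" %d", i + 1);

    printf("\nPartition V2 (negative entries):");
    for (i = 0; i < n; i++)
        if (v[i] < 0.0f)
            printf(" %d", i + 1);
    printf("\n");

    return 0;
}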

Figure 8. "True" Bipartite Graph: Predicted Partitions Match with the Hypothetical Partitions of the Input Graph

Figure 9. "Close-to" Bipartite Graph: Predicted Partitions Appear to be the Same as that of the Input Graph

Figure 9 shows the prediction of the partitions of a "close-to" bipartite
graph for which the predicted partitions turn out to be the same as the
hypothetical partitions of the input graph. However, Figure 10 shows an
example of the predicted partitions of a "close-to" bipartite graph that
differ from the hypothetical partitions of the input graph in both the
sizes of the partitions and the distribution of the frustrated edges (the
input graph is conjectured to have equal-sized partitions, each with a
frustrated edge, as shown in the figure); nevertheless, the two graphs have
the same bipartivity index. The predicted close-to bipartite graph is
composed of a larger partition with four vertices and a smaller partition
with two vertices; the larger partition contains the two frustrated edges,
while the smaller one contains none. The input close-to bipartite graph in
this figure has three vertices in each of the two partitions, with one
frustrated edge in each partition. This example reinforces the earlier
statement that two "close-to" bipartite graphs that physically look the same
and have the same bipartivity index can still be logically different: a
topology with the same number of vertices in the two partitions and fewer
frustrated edges per partition can compensate for a topology with a larger
partition containing a larger number of frustrated edges.

Figure 10. "Close-to" Bipartite Graph: Predicted Partitions do not Match with the Hypothetical Partitions of the Input Graph

Figure 11. Predicting the Partitions of a "True" Bipartite Directed Graph: Predicted Partitions Match with the Hypothetical Partitions of the Input Graph

Figure 12. Predicting the Partitions of a "Close-to" Bipartite Directed Graph: Predicted Partitions do
not Match with the Hypothetical Partitions of the Input Graph
Some Applications of the Eigenvalues and Eigenvectors of a Square Matrix
1. Communication system:
Eigenvalues and eigenvectors are used to determine the theoretical maximum
amount of information that can be sent through a communication medium, such
as from a phone through the air. This is done by calculating the
eigenvectors and eigenvalues of the communication channel, expressed as a
matrix, and then water-filling over the eigenvalues. The eigenvalues are
then, in essence, the gains of the fundamental modes of the channel, which
themselves are captured by the eigenvectors.
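
For context, the standard capacity result behind this (a textbook formula,
not taken from this report): if λ1, ..., λr are the eigenvalues of HHᴴ for a
channel matrix H, the capacity with water-filling power allocation is

    C = Σi log2(1 + pi λi / N0)

where N0 is the noise power and the powers pi are chosen to pour the
available power onto the strongest eigenvalues first.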

2. Bridge design:

The natural (Eigen) frequency of a bridge corresponds to the eigenvalue of
smallest magnitude of the system that models the bridge. Engineers use this
knowledge to ensure the stability of their designs.

3. In-Car Audio System Construction:

Eigenvalue analysis is used in the design of car stereo systems, where it
helps to reproduce the vibration of the car due to the music.

4. Electrical:

Eigenvalues and eigenvectors are very useful for decoupling three-phase
systems via the symmetrical component transformation.
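
For reference (the standard Fortescue transformation, stated here as
background rather than taken from this report): with a = e^(j2π/3), the
zero-, positive- and negative-sequence components of the phase voltages
Va, Vb, Vc are

    V0 = (Va + Vb + Vc) / 3
    V1 = (Va + a·Vb + a²·Vc) / 3
    V2 = (Va + a²·Vb + a·Vc) / 3

and, for a balanced system, this transformation diagonalizes the
phase-impedance matrix, i.e., the sequence components are its
eigen-directions.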

5. Mechanical engineering:

Eigenvalues and eigenvectors can be used to "reduce" a linear operation to
separate, simpler problems. For example, if a stress is applied to a
"plastic" solid, the deformation can be decomposed into "principal
directions", the directions in which the deformation is greatest. The
vectors in the principal directions are the eigenvectors, and the percentage
deformation in each principal direction is the corresponding eigenvalue.
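
As a small worked illustration (a generic example, not from this report):
for the 2-D stress state

    σ = [ 5  1 ]
        [ 1  3 ]   (in MPa)

the principal stresses are the eigenvalues λ = 4 ± √2, i.e., about 5.41 MPa
and 2.59 MPa, and the principal directions are the corresponding
eigenvectors.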

Oil companies frequently use eigenvalue analysis to explore land for oil.
Oil, dirt, and other substances all give rise to linear systems with
different eigenvalues, so eigenvalue analysis can give a good indication of
where oil reserves are located. The companies place probes around a site and
generate waves in the ground by means of a huge truck that shakes the
ground. The waves change as they pass through the different materials in the
ground, and analysis of these waves directs the oil companies to possible
drilling locations.

Eigenvalues are not only used to explain natural phenomena, but also to
discover new and better designs for the future. Some of the results have
been remarkable. Suppose you were asked to build the strongest column you
could to support the weight of a roof, using a specified amount of material;
what shape should that column take?

Most of us would build a cylinder, like most of the columns we have seen.
However, Steve Cox of Rice University and Michael Overton of New York
University proved, based on the work of J. Keller and I. Tadjbakhsh, that
the column would be stronger if it was largest at the top, middle, and
bottom. At the points in between, the column can be smaller, because the
column will not naturally buckle there anyway.

This new design was discovered through the study of the eigenvalues of the
system consisting of the column and the weight from above. Keep in mind that
this column would not be the strongest design if much of the pressure came
from the side, but for a column that supports a roof, the vast majority of
the pressure comes directly from above.

6. Google's PageRank:
Google has been unusually successful as a search engine due, in part, to its
efficient use of eigenvalues and eigenvectors. Since its inception in 1998,
Google's methods for obtaining accurate results for our queries have evolved
in many ways, and PageRank is no longer the dominant factor it was at the
beginning; today it is only one of many ranking factors, although it was the
one Google's project was built on from the very beginning. Google also
studied the keywords in the search phrase and compared them with the
frequency of those keywords on each site, as well as where they occur
(keywords in the titles and descriptions of pages are "worth more" than
keywords far down the page). All of these factors were easy to "game" once
people learned about them, so Google has become more secretive about what it
uses to rank web pages for a specific search term.

Currently, Google analyzes more than 200 different signals for web sites,
including a site's speed, whether it is local or not, the amount of text,
the authority of the overall site, the freshness of the content, and so on.
The algorithms are revised on an ongoing basis, based on these signals, both
to bypass the "black hat" operators who try to game the system to get to the
top, and to show the highest-quality and most influential pages at the top.
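
The core computation behind the original PageRank (the standard formulation,
included here for context): the rank vector r is the principal eigenvector
of the "Google matrix"

    G = d·M + ((1 - d)/n)·J,    with G r = r

where M is the column-stochastic link matrix of the web graph, J is the
all-ones matrix, n is the number of pages, and d ≈ 0.85 is the damping
factor. In practice, r is computed by exactly the kind of power iteration
described in the next section.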

Power Method Algorithm for Finding Dominant Eigen Value and Eigen Vector
In many real-world applications in science and engineering, it is required
to numerically find the largest or dominant Eigen value and the
corresponding Eigen vector. There are different methods, such as the
Cayley-Hamilton method, the Power Method, etc. Of these, the Power Method
follows an iterative approach and is quite convenient and well suited for
implementation on a computer.

Here, we develop an algorithm for the Power Method for computing the largest
or dominant Eigen value and the corresponding Eigen vector.

Let A be a square matrix of order n, i.e., A(n x n). The Power Method starts
with an initial approximation, of size n x 1, to the Eigen vector
corresponding to the largest Eigen value. Let this initial approximation be
X(n x 1).

After the initial assumption, we calculate AX, i.e., the product of matrix A
and X. We divide each element of the product AX by the largest element (by
magnitude) and express the result as λ1X1. The obtained values of λ1 and X1
are the next, better approximations of the largest Eigen value and the
corresponding Eigen vector. Similarly, for the next step, we multiply A by
X1, divide each element of the product AX1 by the largest element (by
magnitude) and express the result as λ2X2. The obtained values of λ2 and X2
are again better approximations of the largest Eigen value and the
corresponding Eigen vector.

We then repeat this process until the largest or dominant Eigen value and
the corresponding Eigen vector are obtained to within the desired accuracy.
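
As a small worked run (an illustrative example, not taken from this report),
take A = [[2, 1], [1, 2]] and the initial guess X = (1, 0):

    AX  = (2, 1)      =>  λ1 = 2,    X1 = (1, 0.5)
    AX1 = (2.5, 2)    =>  λ2 = 2.5,  X2 = (1, 0.8)
    AX2 = (2.8, 2.6)  =>  λ3 = 2.8,  X3 = (1, 0.929)

and the iterates converge to the dominant Eigen value 3 with Eigen vector
(1, 1).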

ALGORITHM-
1. Start
2. Read Order of Matrix (n) and Tolerable Error (e)
3. Read Matrix A of Size n x n
4. Read Initial Guess Vector X of Size n x 1
5. Initialize: Lambda_Old = 1
6. Multiply: X_NEW = A * X
7. Replace X by X_NEW
8. Find Largest Element (Lambda_New) by Magnitude from X_NEW, keeping its sign
9. Normalize, i.e., divide X by Lambda_New
10. Display Lambda_New and X
11. If |Lambda_Old - Lambda_New| > e then
    set Lambda_Old = Lambda_New
    and go to step (6), otherwise go to step (12)
12. Stop
Power Method Using C Programming for Finding Dominant Eigen Value and Eigen Vector

CODE-

#include <stdio.h>
#include <math.h>

#define SIZE 10

int main(void)
{
    float a[SIZE][SIZE], x[SIZE], x_new[SIZE];
    float temp, lambda_new, lambda_old, error;
    int i, j, n, step = 1;

    /* Inputs */
    printf("Enter Order of Matrix: ");
    scanf("%d", &n);
    printf("Enter Tolerable Error: ");
    scanf("%f", &error);

    /* Reading Matrix (stored 0-indexed, displayed 1-indexed) */
    printf("Enter Coefficients of Matrix:\n");
    for (i = 0; i < n; i++)
    {
        for (j = 0; j < n; j++)
        {
            printf("a[%d][%d]=", i + 1, j + 1);
            scanf("%f", &a[i][j]);
        }
    }

    /* Reading Initial Guess Vector */
    printf("Enter Initial Guess Vector:\n");
    for (i = 0; i < n; i++)
    {
        printf("x[%d]=", i + 1);
        scanf("%f", &x[i]);
    }

    /* Initializing Lambda_Old */
    lambda_old = 1.0f;

    do
    {
        /* Multiplication: x_new = A * x */
        for (i = 0; i < n; i++)
        {
            temp = 0.0f;
            for (j = 0; j < n; j++)
            {
                temp += a[i][j] * x[j];
            }
            x_new[i] = temp;
        }

        /* Replacing x by x_new */
        for (i = 0; i < n; i++)
        {
            x[i] = x_new[i];
        }

        /* Finding the largest element by magnitude (sign preserved, so a
           negative dominant Eigen value is reported correctly) */
        lambda_new = x[0];
        for (i = 1; i < n; i++)
        {
            if (fabs(x[i]) > fabs(lambda_new))
            {
                lambda_new = x[i];
            }
        }

        /* Normalization */
        for (i = 0; i < n; i++)
        {
            x[i] = x[i] / lambda_new;
        }

        /* Display */
        printf("\n\nSTEP-%d:\n", step);
        printf("Eigen Value = %f\n", lambda_new);
        printf("Eigen Vector:\n");
        for (i = 0; i < n; i++)
        {
            printf("%f\t", x[i]);
        }

        /* Checking Accuracy */
        if (fabs(lambda_new - lambda_old) <= error)
        {
            break;
        }
        lambda_old = lambda_new;
        step++;
    } while (1);

    printf("\n");
    return 0;
}

OUTPUT-

[Screenshot of a sample run of the above program]
CONCLUSIONS
This paper illustrates the use of eigenvalues and eigenvectors to analyze
the bipartivity of undirected and directed graphs. We see that, for a given
number of frustrated edges, the bipartivity index can be larger if most of
these edges are located in the larger of the two partitions of the bipartite
graph. For "close-to" bipartite graphs, we see that the predicted partitions
of the vertices can differ from the hypothetical partitions of the input
graph; however, as the set of vertices and edges making up the bipartite
graphs is unchanged, the bipartivity index remains the same for both the
input and predicted graphs. In other words, for a given number of vertices
and edges, there may be more than one bipartite graph structure (i.e., more
than one combination of partitions and frustrated edges) that has the same
bipartivity index value. The above argument holds for both directed and
undirected bipartite graphs.
