47 & 01 Project Report
Bachelor of Technology
in
Mechanical Engineering

Submitted by
2K20/A10/01 2K20/A10/47

Delhi Technological University
Bawana Road, Delhi - 110042
2020-2021
INTRODUCTION
Network analysis is the study of complex relational data that captures the
relationships between the members of a system. Typical goals of network
analysis are the study of the structure of the system, the study of the
characteristics of the relationships, the ranking of members on the basis of
the relationships they are part of, and the identification of communities of
members within the system. Network analysis has found applications in many
fields, such as social networks, biological networks, citation networks,
peer-to-peer networks, the World Wide Web, the Internet, particle physics,
electrical networks, etc. Accordingly, the members of a network can be almost
anything: individual users, user groups and organizations, molecular
structures (such as proteins), scientific publications, computers and routers,
websites, power grids, and more. The standard way for network analysis to
represent the complex relationships between members is simply a graph with
vertices (aka nodes) and edges (aka links), which can be directed or
undirected (or a combination of both) and unweighted or weighted, depending on
the nature of the interactions between the members.
For every pair of vertices i and j in a graph G, the entry in the i-th row and
j-th column of the adjacency matrix A(G) is 1 if there is an edge from vertex
i to vertex j, and 0 otherwise. Depending on the nature of the interactions,
the edges can be undirected (giving a symmetric adjacency matrix) or directed
(giving, in general, a non-symmetric adjacency matrix).
Related Work:-
Figure 1. Characteristic Polynomial for the Adjacency Matrix of the Network Graph
We show the calculation of the eigenvalue 2.17 in Figure 2. Note that 2.17 is
the largest eigenvalue of the adjacency matrix; it is called the principal
eigenvalue, and its corresponding eigenvector is called the principal
eigenvector. To calculate the eigenvalues and eigenvectors of the adjacency
matrices used in this paper (including Figure 2), we use the website:
http://www.arndt-bruenner.de/mathe/script/engl_eigenwert.htm. A screenshot of
the results obtained for the adjacency matrix of Figure 1 is shown below in
Figure 3.
Figure 2. Calculation of the Principal Eigenvector for the Network Graph in Figure 1
Figure 7 shows the effect of the number of frustrated edges, and of their
position, on the value of the bipartivity index for some sample networks. The
bipartivity index of a graph decreases as the number of frustrated edges
(edges that connect vertices of the same partition) increases. We can also see
that, for a given number of frustrated edges, the bipartivity index is
relatively higher for graphs whose frustrated edges lie in the larger
partition than for graphs whose frustrated edges lie in the smaller partition.
Figure 9 shows the predicted partitions of a "close-to" bipartite graph; in
this case, the predicted partitions match the hypothesized partitions of the
input graph. Figure 10, on the other hand, shows an example in which the
predicted partitions of a "close-to" bipartite graph differ in size from the
hypothesized partitions of the input graph (the input graph is hypothesized to
have partitions of equal size, with one frustrated edge in each, as shown in
the figure); nevertheless, the two graphs have the same bipartivity index. The
predicted close-to bipartite graph is composed of a larger partition with four
vertices and a smaller partition with two vertices; the larger partition
contains two frustrated edges, while the smaller one contains none. The input
close-to bipartite graph in this figure has three vertices in each of its two
partitions, with one frustrated edge in each partition. This example
reiterates the earlier observation that two "close-to" bipartite graphs with
the same sets of vertices and edges can have the same bipartivity index even
with different partitions: a topology with an equal number of vertices in the
two partitions and fewer frustrated edges per partition can compensate for a
topology with a larger partition containing a larger number of frustrated
edges.
Figure 12. Predicting the Partitions of a "Close-to" Bipartite Directed Graph: Predicted Partitions do
not Match with the Hypothetical Partitions of the Input Graph
Some Applications of the Eigenvalues and Eigenvectors of a Square Matrix
1. Communication system:
The theoretical maximum amount of information that can be transmitted through
a communication medium, such as a telephone line or the air, can be determined
by computing the eigenvectors and eigenvalues of the matrix that represents
the transmission channel and then "water-filling" on the eigenvalues. The
eigenvalues are, in essence, the gains of the fundamental modes of the
channel, which are themselves captured by the eigenvectors.
2. Bridge design:
The natural (eigen) frequency of a bridge is given by the eigenvalue of
smallest magnitude of the system that models the bridge. Engineers utilize
this knowledge to ensure the stability of their designs.
3. Car stereo design:
Eigenvalue analysis is also used in the design of car stereo systems, where it
helps to reduce the vibration of the car due to the music.
4. Electrical engineering:
Eigenvalues and eigenvectors are very useful for decoupling three-phase
systems through the symmetrical component transformation.
5. Mechanical engineering:
Eigenvalues and eigenvectors allow us to reduce a linear operation to
separate, simpler problems. For example, if a stress is applied to a plastic
solid, the deformation can be decomposed into "principal" directions, i.e.,
the directions in which the deformation is greatest. The vectors along the
principal directions are the eigenvectors, and the fractional deformation in
each principal direction is the corresponding eigenvalue.
Oil companies also make use of eigenvalue analysis to explore land for oil.
Because oil, dirt, and other substances give rise to linear systems with
different eigenvalues, eigenvalue analysis can give a good indication of where
oil reserves are located. They place probes around a site and generate waves
in the ground, for example by means of a huge truck that shakes the ground.
The waves change as they pass through the different materials in the ground,
and analysis of these waves directs the oil companies to possible drilling
locations.
Eigenvalues are not only used to explain natural phenomena, but also to
discover new and better designs for the future. Some of the results have been
remarkable. Suppose you were asked to build the strongest possible column to
hold up the weight of a roof using only a limited amount of material; what
shape should that column take?
Most of us would build a cylinder, like most of the columns we have seen.
However, Steve Cox of Rice University and Michael Overton of New York
University proved, based on the work of J. Keller and I. Tadjbakhsh, that the
column would be stronger if it was largest at the top, middle, and bottom. At
points part of the way along the column from either end, it can be thinner,
because the column will not naturally buckle there.
This new design was discovered through the study of the eigenvalues of the
system consisting of the column and the load above it. Note that this column
would not be the strongest design if a significant part of the pressure came
from the side; but for a column that supports a roof, the vast majority of the
pressure comes directly from above.
6. Google's PageRank:
Google has been unusually successful as a search engine due, in part, to its
efficient use of eigenvalues and eigenvectors. Since its founding in 1998,
Google's methods for delivering accurate results for our queries have
developed in many ways, and PageRank is no longer as dominant a factor as it
was at the beginning. It was, however, the main ranking factor used by Google
from the very beginning. Google also examined the keywords in the search
query and compared them with the frequency of those keywords on each page, as
well as where they occurred (keywords in the titles and descriptions of a page
are "worth more" than keywords further down the page). All of these factors
were easy to "game" once they became known, so Google has become more
secretive about what it uses to rank web pages for a particular search term.
Currently Google analyzes more than 200 different signals for web pages,
including a site's speed, whether it is local or not, its mobile-friendliness,
the amount of text, the authority of the overall site, the freshness of the
content, and so on. These signals are revised on an ongoing basis, both to
bypass the "black hat" operators who try to game the system to get to the top,
and to place the highest-quality and most influential pages at the top.
In the power method, we start with an initial guess vector, repeatedly
multiply it by the matrix, and normalize; we repeat this process until the
largest (dominant) eigenvalue and the corresponding eigenvector are obtained
to within the desired accuracy.
ALGORITHM-
1. Start
2. Read Order of Matrix (n) and Tolerable Error (e)
3. Read Matrix A of Size n x n
4. Read Initial Guess Vector X of Size n x 1
5. Initialize: Lambda_Old = 1
6. Multiply: X_NEW = A * X
7. Replace X by X_NEW
8. Find Largest Element (Lambda_New) by Magnitude from X_NEW
9. Normalize or divide X by Lambda_New
10. Display Lambda_New and X
11. If |Lambda_Old - Lambda_New| > e then
set Lambda_Old = Lambda_New
and go to step (6); otherwise go to step (12)
12. Stop.
Power Method Using C Programming
for Finding Dominant Eigen Value and
Eigen Vector
CODE-
#include <stdio.h>
#include <math.h>

#define SIZE 10

int main()
{
    float a[SIZE][SIZE], x[SIZE], x_new[SIZE];
    float temp, lambda_new, lambda_old, error;
    int i, j, n, step = 1;

    /* Inputs */
    printf("Enter Order of Matrix: ");
    scanf("%d", &n);
    printf("Enter Tolerable Error: ");
    scanf("%f", &error);

    /* Reading Matrix (arrays are used 1-indexed, so n must be below SIZE) */
    printf("Enter Coefficients of Matrix:\n");
    for(i = 1; i <= n; i++)
    {
        for(j = 1; j <= n; j++)
        {
            printf("a[%d][%d] = ", i, j);
            scanf("%f", &a[i][j]);
        }
    }

    /* Reading Initial Guess Vector */
    printf("Enter Initial Guess Vector:\n");
    for(i = 1; i <= n; i++)
    {
        printf("x[%d] = ", i);
        scanf("%f", &x[i]);
    }

    /* Initializing Lambda_Old */
    lambda_old = 1;

    do
    {
        /* Multiplication: x_new = A * x */
        for(i = 1; i <= n; i++)
        {
            temp = 0.0;
            for(j = 1; j <= n; j++)
            {
                temp = temp + a[i][j] * x[j];
            }
            x_new[i] = temp;
        }

        /* Replacing x by x_new */
        for(i = 1; i <= n; i++)
        {
            x[i] = x_new[i];
        }

        /* Finding the largest element of x by magnitude */
        lambda_new = fabs(x[1]);
        for(i = 2; i <= n; i++)
        {
            if(fabs(x[i]) > lambda_new)
            {
                lambda_new = fabs(x[i]);
            }
        }

        /* Normalization: divide x by lambda_new */
        for(i = 1; i <= n; i++)
        {
            x[i] = x[i] / lambda_new;
        }

        /* Display */
        printf("\n\nSTEP-%d:\n", step);
        printf("Eigen Value = %f\n", lambda_new);
        printf("Eigen Vector:\n");
        for(i = 1; i <= n; i++)
        {
            printf("%f\t", x[i]);
        }

        /* Checking Accuracy */
        if(fabs(lambda_new - lambda_old) > error)
        {
            lambda_old = lambda_new;
            step++;
        }
        else
        {
            break;
        }
    } while(1);

    return 0;
}
OUTPUT-
CONCLUSIONS
This paper illustrates the use of eigenvalues and eigenvectors to analyze the
bipartivity of undirected and directed graphs. We see that, for a given number
of frustrated edges, the bipartivity index can be larger if most of these
edges lie in the larger of the two partitions of the bipartite graph. For
"close-to" bipartite graphs, we see that the predicted partitions of the
vertices can differ from the hypothesized partitions of the input graph;
however, as long as the sets of vertices and edges of the close-to bipartite
graphs remain unchanged, the bipartivity index remains the same for both the
input and predicted graphs. In other words, for a given number of vertices and
edges, there can be more than one close-to bipartite graph structure (i.e.,
more than one combination of the two partitions) with the same bipartivity
index value. The above argument holds for both undirected and directed
close-to bipartite graphs.