Unit 1 Introduction to Data Structures

Introduction to Data Structures
Concept of Data:
 Data is a collection of numbers, alphabets and symbols combined to represent information.
 There are two types:
 Atomic data
 Composite data
Atomic data:
 Atomic data is a non-decomposable entity.
 Ex. the integer value 123 or the character 'A'.
 It cannot be divided further.
 If we divide the value 123, its meaning is lost.
Composite data:
 It is a composition of several atomic data, so it can be divided further.
 Ex. a date of birth is composite data,
 but its day, month and year are atomic data.
Data Type
 A data type is a term which refers to the kind of data a variable may hold in a programming language.
 Ex. int, float, char, double etc.
Data Types
1. Built-in data types: int, char, float etc.
2. User-defined: typedef, enum
3. Derived data types: array, structure, union
Data structure
 Def.
• A data structure can be defined as a collection of elements together with all the operations required on that set of elements.
 A data structure is defined as a triplet (D, F, A):
D - set of domains
F - set of functions (operations)
A - axioms, i.e., definitions of the functions in F
Ex. of a data structure "Natural Number (NATNO)"
OPERATIONS:
ISZERO(natno) -> boolean
SUCC(natno) -> natno
ADD(natno, natno) -> natno
EQUAL(natno, natno) -> boolean
Axioms:
For all x, y belonging to natno:
ISZERO(ZERO) is true.
ADD(ZERO, y) is y.
EQUAL(x, ZERO) is true if ISZERO(x), else false.
 D = {natno, boolean}
 F = {ISZERO, SUCC, ADD, EQUAL}
 A = {function definitions}
Abstract data type
 An ADT is a triple of
D - set of domains
F - set of functions
A - axioms, in which only what is to be done is mentioned, but how it is to be done is not mentioned.
In an ADT, all the implementation details are hidden.
So
ADT = Type + Function names + Behavior of each function
 A big program is broken down into smaller modules.
 Each module is developed independently.
 When the program is organized hierarchically, it uses the services of functions, which in turn use the services of other functions, without knowing their implementation details.
 This is called abstraction.
 Ex. int x, y, z;
 x = 13; (the details of storage are hidden)
 z = x + y; (the details of + are hidden)
Advantages of ADT
Avoids redundancy of code.
Eg: Simulate the waiting line of a bank.
Approach 1: a program that simulates the bank queue. It cannot be reused for simulation of any other queue.
Approach 2: design a queue ADT that solves any queue problem, and place it in a library for all programmers to use.
 We don't need to know how a car (or a fridge) works in order to use one!
 All you need to know is what operations it supports and how to use those operations.
 Abstract data types (ADTs) are a collection of data (values) and all the operations on that data.
Ex: Account ADT
Types of Data Structures
 Primitive Data Structures & Non Primitive
Data Structures
 Linear Data Structures & Non Linear Data
Structures
 Static & Dynamic
 Persistent & Ephemeral data structures.
Primitive Data Structures & Non Primitive Data Structures:
 Primitive data structures are int, float, char, pointers etc.
 These data types are available in most programming languages as built-in types.
 Non-primitive data structures are derived from primitive data structures.
 Here a set of homogeneous or heterogeneous data elements is stored together.
 Ex. Array, structure, union, linked list, stack, queue, tree, graph etc.
Operations performed on non-primitive data structures
 Creation
 Deletion
 Update
 Selection
 Search
 Sort
Linear Data Structures & Non Linear Data Structures
 Linear:
 Ex. Linked list, stack, queue.
 Elements are arranged in a linear fashion (in sequence).
 Only one-to-one relationships can be handled using a linear data structure.
Non Linear Data Structures
 One-to-many, many-to-one and many-to-many relationships are handled using non-linear data structures.
 Here every data element can have a number of predecessors as well as successors.
 Ex. Trees, graphs.
Static & Dynamic
 Static:
 In a static data structure, memory for objects is allocated at compile time.
 The amount of memory required is determined by the compiler during compilation.
 Ex. int a[50];
 Disadvantages:
1. Wastage of memory.
2. It may cause overflow.
3. No reusability of allocated memory.
4. It is difficult to guess the exact size of the data at the time of writing the program.
Dynamic
 Here the memory space required by variables is calculated and allocated during execution.
 Dynamic memory is managed in C and C++ through a set of library functions.
 The malloc and calloc functions are used in C; new and delete are used for dynamic memory allocation in C++.
 Linked data structures are preferably implemented as dynamic data structures.
 It gives flexibility to add, delete or rearrange data objects at run time.
 Additional space can be allocated at run time.
 Unwanted space can be released at run time.
 It gives reusability of memory space.
Syntax:
int *p;
p = (int *) malloc(size of block in bytes);
 Return value:
 On success, malloc returns a pointer to the newly allocated block of memory.
 On error (if not enough space exists for the new block), malloc returns NULL.
 If the argument size == 0, malloc returns NULL.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main()
{
    char *str;
    /* allocate memory for string */
    if ((str = (char *) malloc(10)) == NULL)
    {
        printf("Not enough memory to allocate buffer\n");
        exit(1); /* terminate program if out of memory */
    }
    /* copy "Hello" into string */
    strcpy(str, "Hello");
    /* display string */
    printf("String is %s\n", str);
    /* free memory */
    free(str);
    return 0;
}
Dynamic memory allocation
#include <stdio.h>
#include <stdlib.h>
int main()
{
    int n, avg, i, *p, sum = 0;
    printf("Enter the no. of students whose marks you want to enter: ");
    scanf("%d", &n);
    p = (int *) malloc(n * sizeof(int));
    if (p == NULL)
    {
        printf("\nMemory allocation unsuccessful");
        exit(1);
    }
    for (i = 0; i < n; i++)
        scanf("%d", (p + i));
    for (i = 0; i < n; i++)
        sum = sum + *(p + i);
    avg = sum / n;
    printf("Average marks = %d", avg);
    free(p);
    return 0;
}
calloc
 It requires two arguments.
 Ex.
int *p;
p = (int *) calloc(10, sizeof(int));
The first argument is the number of blocks required.
The second argument is the size of each block.
Difference between malloc() & calloc()
malloc() - memory allocated by malloc contains garbage values.
calloc() - memory allocated by calloc contains all zeros.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main()
{
    char *str = NULL;

    /* allocate memory for string */
    str = (char *) calloc(10, sizeof(char));
    if (str == NULL)
        exit(1);

    /* copy "Hello" into string */
    strcpy(str, "Hello");

    /* display string */
    printf("String is %s\n", str);

    /* free memory */
    free(str);

    return 0;
}
Persistent & Ephemeral data structures
 A data structure is said to be persistent if its existing versions can be accessed but not modified in place.
 Any updating operation on such a data structure creates two versions: the previous version is saved, and all changes are made in the new version.
 An operation that changes the data will create
1. a copy of the old instance of the data structure with the original values, and
2. a copy of the new instance of the data structure with the updated values.
3. Ex. call-by-value function arguments.
 If we create data cells and modify their contents in place, we have an ephemeral data structure.
 These data structures change with operations.
 The advantage is the ability to maintain state as a shared resource among many routines.
 It is more time-efficient than call by value.
 Disadvantage: complexity.
Algorithm:
 Characteristics:
1. Finiteness: the algorithm must terminate after a finite number of steps.
2. Input: a finite number of inputs.
3. Output: it must produce a finite number of outputs.
4. Definiteness: no ambiguity in any step.
5. Effectiveness: every step must be basic enough to be carried out mechanically.
How to create a program:
 Any program can be created with the help of two things:
1. Data structures
2. Algorithms
Program development cycle for creating a program:
1. Feasibility study
2. Requirement analysis & problem specification
3. Design
4. Coding
5. Debugging
6. Testing
7. Maintenance
How to Analyze Programs:
 The analysis of a program does not mean simply checking that the program works.
 It means checking whether the program works for all possible situations.
 The analysis also considers whether the program works efficiently:
 the program requires less storage space, and
 the program gets executed in less time.
 Analysis of an algorithm focuses on time and space complexity.
 Space requirement means the space required to store the input data, whether static or dynamic. The space required on top of the system stack to handle recursion and function calls should also be considered.
 The computing time an algorithm requires for its execution normally depends on the size of the input.

Frequency count:
#include <stdio.h>
int main()
{
    int i, n, sum, x;
1   sum = 0;
2   printf("enter no. of data to be added");
3   scanf("%d", &n);
4   for (i = 1; i <= n; i++)
    {
5       scanf("%d", &x);
6       sum = sum + x;
    }
7   printf("sum= %d", sum);
    return 0;
}
 Calculation of computation time:

Statement no.   Frequency   Computation time
1               1           t1
2               1           t2
3               1           t3
4               n+1         (n+1)t4
5               n           nt5
6               n           nt6
7               1           t7

 Total computation time:
 T = t1 + t2 + t3 + (n+1)t4 + nt5 + nt6 + t7
 T = n(t4 + t5 + t6) + (t1 + t2 + t3 + t4 + t7)
 For large n, T can be approximated to
 T = n(t4 + t5 + t6) = kn
 For faster computers, the time required for execution of (t1 + t2 + t3 + t4 + t7) is negligible.

Determine the frequency count for the following program.

1. j = 1;
2. while (j <= n)
   {
3.     x = x + 1;
4.     j++;
   }

Statement no.   Frequency
1               1
2               n+1
3               n
4               n
Determine frequency count
1. for (i = 1; i <= n; i++)
2.     for (j = 1; j <= i; j++)
3.         x = x + 1;

 Statement no. 1: n+1
 Statement no. 2: 2 + 3 + ... + n + (n+1)
                = (1 + 2 + 3 + ... + n) + n
                = n(n+1)/2 + n = (n² + 3n)/2
 Statement no. 3: 1 + 2 + 3 + ... + n
                = n(n+1)/2
                = (n² + n)/2
Measuring the running time of a program:
 Running time depends on:
1. The input to the program
2. The size of the program
3. The machine-language instruction set
4. The machine we are executing on
5. The time required to execute each machine instruction
6. The time complexity of the algorithm of the program
Measurement of growth rate:
 Asymptotic consideration:
 Suppose f1(n) and f2(n) are the time complexities of two different algorithms for a given problem of size n.
 If the behavior of the two functions differs for smaller and larger values of n, we ignore the conflicting behavior at the smaller values.
 f1(n) = 100n²
 f2(n) = 5n³

 n     f1(n)    f2(n)
 1     100      5
 5     2500     625
 10    10000    5000
 20    40000    40000

 f1(n) >= f2(n) for n <= 20
 f1(n) <= f2(n) for all n >= 20
 We will prefer the solution having time complexity f1(n).
The constant factor in complexity measure:
 f(n) = 100n²
 100 - a constant
 n - the size of the problem
 The time required to solve a problem depends not only on the size of the problem but also on the hardware and software used to execute the solution.
 Suppose a new computer executes a program two times faster than another computer.
 Then, irrespective of the size, the new computer solves the problem two times faster.
 So functions that differ from each other only by a constant factor, when treated as time complexities, should not be treated as different: they are complexity-wise the same.
 Ex.
 f1(n) = 5n²
 f2(n) = 100n²
 f3(n) = 1000n²
 f4(n) = n²
 The time complexity of all of these is the same.
Time complexity: Best case (Ω)
 Best case: the algorithm gives its best behavior if the element to be searched is the first element of the array.
 Only one comparison is needed to search the element.
 Big-Omega notation is used to define the lower bound of an algorithm in terms of time complexity.
 That means Big-Omega notation always indicates the minimum time required by an algorithm over all input values.
 That means Big-Omega notation describes the best case of an algorithm's time complexity.
Best case = Ω(1)
Big-Omega notation can be defined as follows...
 Consider f(n), the time complexity of an algorithm, and g(n), its most significant term. If f(n) >= C × g(n) for all n >= n0, with C > 0 and n0 >= 1, then we can represent f(n) as Ω(g(n)).
 f(n) = Ω(g(n))
Worst case: Big-Oh notation
 Worst case: the algorithm gives its worst behavior if the element to be searched is the last element of the array, or the search ends in failure.
 n comparisons are needed to search the element.
 Big-Oh notation is used to define the upper bound of an algorithm in terms of time complexity.
 That means Big-Oh notation always indicates the maximum time required by an algorithm over all input values.
 That means Big-Oh notation describes the worst case of an algorithm's time complexity.
 Worst case = O(n)
Big-Oh notation can be defined as follows...
 Consider f(n), the time complexity of an algorithm, and g(n), its most significant term. If f(n) <= C × g(n) for all n >= n0, with C > 0 and n0 >= 1, then we can represent f(n) as O(g(n)).
 f(n) = O(g(n))
Average case (Big-Theta notation)
 The number of comparisons required to search an element lies between 1 and n.
 Big-Theta notation is used to define the average bound of an algorithm in terms of time complexity.
 That means Big-Theta notation always indicates the average time required by an algorithm over all input values.
 That means Big-Theta notation describes the average case of an algorithm's time complexity.
 Average case = Θ(n)
Big-Theta notation can be defined as follows...
 Consider f(n), the time complexity of an algorithm, and g(n), its most significant term. If C1 × g(n) <= f(n) <= C2 × g(n) for all n >= n0, with C1, C2 > 0 and n0 >= 1, then we can represent f(n) as Θ(g(n)).
 f(n) = Θ(g(n))
Ordered List
 An ordered list is a set of elements, which may be empty, or can be written as a collection of elements such as (a1, a2, a3, ..., an). A list is sometimes called a linear list.
 Ex. the set of days in a week,
 the list of one-digit numbers.
 Operations: display, search, insert, delete.
Polynomials
 One classic example of an ordered list is a polynomial.
 Def.
A polynomial is a sum of terms, where each term consists of a variable, a coefficient and an exponent.

Various operations:
• Addition of two polynomials
• Multiplication
• Evaluation
Polynomial representation
Option 1: a one-dimensional array where the index represents the exponent and the stored value is the corresponding coefficient.
Ex. 2x⁸ + 4x² + 1:

index:  0  1  2  3  4  5  6  7  8  9
value:  1     4                 2

Option 2: an array of structs, each holding a coefficient and an exponent:

#define MAXSIZE 10
typedef struct poly {
    float coeff;
    int expo;
} term;
term poly1[MAXSIZE];
term poly2[MAXSIZE];
term poly3[MAXSIZE];
Drawbacks of using the array (index-as-exponent) representation:
1. If the exponent is very large, the size of the array becomes unnecessarily large. Ex. 7x⁹⁹⁹ - 10. Scanning such an array is time-consuming.
2. Wastage of space.
3. We cannot decide what the array size should be.
 Ex.
3x⁴ + 5x³ + 7x² + 10x - 19
This type of representation of polynomials is suitable if the upper limit on the exponent value is not too large and the actual number of terms is close to this limiting value.
Polynomial representation by using a structure
 2 main advantages:
1. There is no limit on the maximum value of the exponent.
2. It requires fewer terms even when there is a vast difference between the maximum and minimum exponent values.

Ex.
typedef struct poly
{
    int coeff;
    int expo;
} p;
p p1[10];

Ex. 7x⁹⁹⁹ - 10

index   coeff   expo
0       7       999
1       -10     0
...
9
Polynomial addition
 Take polynomials A & B:
A = 3x³ + 2x² + x + 1
B = 5x³ + 7x
Polynomial evaluation
 Consider the polynomial to evaluate as
-10x⁷ + 4x⁵ + 3x²

Algorithm:
Step 1: Read the polynomial array A.
Step 2: Read the value of x.
Step 3: Initialize the variable sum to zero.
Step 4: For each term, calculate coeff * pow(x, expo) and add the result to sum.
Step 5: Display sum.
Step 6: Stop.
Concept of Sequential Organization:
 It means data is stored in sequential form, in consecutive memory locations.
 Ex. Array
 There are two basic operations performed on this data:
i. Storing data at a desired location.
ii. Retrieving data from a desired location.
Storage representation for arrays:
 One-dimensional array. Ex. int a[10];
The index is used to find an element; the value is what is stored in the array:

index   value
0       10
1       20
2       30
3       40
...
9
Two-dimensional array. Ex. int a[10][3];

        columns
        0    1    2
row 0   10   20   30
row 1   40   50   60
row 2
...
row 9
 The elements of a two-dimensional array may be arranged either row-wise or column-wise.
 If the elements are stored row-wise, it is called "Row Major Representation".
 If the elements are stored column-wise, it is called "Column Major Representation".
Row Major Representation
 If the elements are stored row-wise, it is called "Row Major Representation".

Ex. If we want to store the elements
• 10 20 30 40 50 60
• then in a 2D array they are filled row by row:

        0    1    2
0       10   20   30
1       40   50   60
...
9
Column Major Representation
 If the elements are stored column-wise, it is called "Column Major Representation".

Ex. If we want to store the elements
• 10 20 30 40 50 60
• then (consider array a[3][2]) they are filled column by column:

        0    1
0       10   40
1       20   50
2       30   60
 The elements occupy successive locations. If the element is of integer type, then 2 bytes of memory are allocated (on a 16-bit compiler).
 If it is float, then 4 bytes of memory are allocated.
 Ex.
 int a[3][2] = { {10,20},
                 {30,40},
                 {50,60} };
 Then in a row-major layout:

a[0][0]  a[0][1]  a[1][0]  a[1][1]  a[2][0]  a[2][1]
10       20       30       40       50       60
100      102      104      106      108      110

And in a column-major layout:

a[0][0]  a[1][0]  a[2][0]  a[0][1]  a[1][1]  a[2][1]
10       30       50       20       40       60
100      102      104      106      108      110

 Address calculation for any element is as follows.

In a row-major matrix, the address of element a[i][j] is
base address + (i * total number of columns + j) * size of data type.

In a column-major matrix, the address of element a[i][j] is
base address + (j * total number of rows + i) * size of data type.
Sparse Matrices
 An example sparse matrix:

       15   0   0  22   0 -15
        0  11   3   0   0   0
        0   0   0  -6   0   0
A =     0   0   0   0   0   0
       91   0   0   0   0   0
        0   0  28   0   0   0

 A lot of "zero" entries.
 Thus a large amount of memory space is wasted.
 Could we use another representation to save memory space?
Representation for Sparse Matrices
 Use a triple <row, col, value> to characterize an element of the matrix.
 Use an array of triples a[] to represent a matrix:
 row by row, and within a row, column by column.

        row   col   value
a[0]    6     6     8
a[1]    0     0     15
a[2]    0     3     22
a[3]    0     5     -15
a[4]    1     1     11
a[5]    1     2     3
a[6]    2     3     -6
a[7]    4     0     91
a[8]    5     2     28
Sparse Matrices
 Definition
A sparse matrix is a matrix which has very few non-zero elements.
 Representation
Ex.
• Suppose a matrix is 6x7 and the number of non-zero elements is, say, 8; then the representation will be:

Index   Row No   Column No   Value
0       6        7           8
1       0        6           -10
2       1        0           55
3       2        5           -23
4       3        1           67
5       3        6           88
6       4        3           14
7       4        4           -28
8       5        0           99
Sparse Matrices

        col0  col1  col2  col3  col4  col5
row0     15    0     0    22    0    -15
row1      0   11     3     0    0     0
row2      0    0     0    -6    0     0
row3      0    0     0     0    0     0
row4     91    0     0     0    0     0
row5      0    0    28     0    0     0

Only 8 of the 36 entries are non-zero: a good candidate for a sparse matrix data structure.
Sparse Matrix Representation
 For the normal matrix:
• space = 6 * 6 * 2 = 72 bytes
 For the sparse (triple) representation:
• (total no. of non-zero values + 1) * 3 * 2
  = (8 + 1) * 3 * 2 = 9 * 6 = 54 bytes
The sparse matrix representation saves 72 - 54 = 18 bytes of memory.

Representation for Sparse Matrices

typedef struct {
    int col, row, value;
} term;

term a[MAX_TERMS];
Read Sparse
cout<<"\n Enter the size of matrix (rows, columns): ";
cin>>m>>n;
a[0][0]=m;
a[0][1]=n;
cout<<"\nEnter no. of non-zero elements: ";
cin>>t;
a[0][2]=t;
for(i=1; i<=t; i++)
{
    cout<<"\n Enter the next triple (row, column, value): ";
    cin>>a[i][0]>>a[i][1]>>a[i][2];
}
Display Sparse
n=a[0][2]; // no. of triples
cout<<"\nRows "<<a[0][0]<<" Columns "<<a[0][1]<<" Values "<<a[0][2];
cout<<"\n";
for(i=1; i<=n; i++)
    cout<<a[i][0]<<" "<<a[i][1]<<" "<<a[i][2]<<"\n";
Addition of sparse matrices
 Conventions
The orders of the two matrices are the same.
If there is an element at position <i, j> in one matrix and also an element at the same position in the other matrix, then add the two elements and store the sum in the resultant matrix.
Otherwise copy the element into the resulting matrix.
Algorithm:
 1. Start.
 2. Read two sparse matrices SP1 and SP2.
 3. The indices for SP1 and SP2 are i=1 and j=1 respectively. The numbers of non-zero elements are t1 and t2 for SP1 and SP2.
 4. The index k=1 is for the sparse matrix SP3, which will store the addition of the two matrices.
 5. SP3[0][0] = SP1[0][0]
    SP3[0][1] = SP1[0][1]

SP1:                    SP2:
Row  Col  Val           Row  Col  Val
3    3    4             3    3    4
0    0    1             0    0    1
1    0    2             0    1    2
2    0    3             1    0    3
2    1    4             2    1    4

SP3:
Row  Col  Val
3    3    5
0    0    2
0    1    2
1    0    5
2    0    3
2    1    8
Operations: Transpose
 c = transpose(a)  // a: m x n matrix
// Algorithm 1:
for each row i {
    take element (i, j, value) and store it as (j, i, value).
}
 Eg.

        row  col  value          row  col  value
a[0]    6    6    8        c[0]  6    6    8
a[1]    0    0    15       c[1]  0    0    15
a[2]    0    3    22       c[2]  3    0    22
a[3]    0    5    -15      c[3]  5    0    -15
a[4]    1    1    11       c[4]  1    1    11
a[5]    1    2    3        c[5]  2    1    3
a[6]    2    3    -6       c[6]  3    2    -6
a[7]    4    0    91       c[7]  0    4    91
a[8]    5    2    28       c[8]  2    5    28
Find the transpose of this sparse matrix:

Index   Row No   Column No   Value
0       6        7           8
1       0        6           -10
2       1        0           55
3       2        5           -23
4       3        1           67
5       3        6           88
6       4        3           14
7       4        4           -28
8       5        0           99

Transpose:

Index   Row No   Column No   Value
0       7        6           8
1       0        1           55
2       0        5           99
3       1        3           67
4       3        4           14
5       4        4           -28
6       5        2           -23
7       6        0           -10
8       6        3           88
Operations: Transpose
 Problem: if we just place the transposed triples consecutively, we need to do a lot of insertions to make the row ordering right.

        row  col  value
c[0]    6    6    8
c[1]    0    0    15
c[2]    3    0    22
c[3]    5    0    -15
c[4]    1    1    11
c[5]    2    1    3
c[6]    3    2    -6
c[7]    0    4    91
c[8]    2    5    28
Alg. 2 for Transpose
 Algorithm 2:
Find all elements in col. 0 and store them in row 0;
find all elements in col. 1 and store them in row 1;
... etc.

        row  col  value          row  col  value
a[0]    6    6    8        c[0]  6    6    8
a[1]    0    0    15       c[1]  0    0    15
a[2]    0    3    22       c[2]  0    4    91
a[3]    0    5    -15      c[3]  1    1    11
a[4]    1    1    11       c[4]  2    1    3
a[5]    1    2    3        c[5]  2    5    28
a[6]    2    3    -6       c[6]  3    0    22
a[7]    4    0    91       c[7]  3    2    -6
a[8]    5    2    28       c[8]  5    0    -15
Alg. 2 for Transpose
 Algorithm 2:
Running time = O(#cols x #terms)

for (j = 0; j < #cols; j++) {        // O(#cols)
    for all elements in col j {      // O(#terms)
        place element (i, j, value) in the
        next position of array c[];
    }
}
Simple Transpose
B[0][0] = A[0][1];
B[0][1] = A[0][0];
B[0][2] = A[0][2];
noterms = A[0][2];
noc = A[0][1];
if (A[0][2] > 0)
{
    nxt = 1;
    for (c = 0; c < noc; c++)                   // loop over column numbers
    {
        for (Term = 1; Term <= noterms; Term++) // loop over the non-zero elements
        {
            /* if the column number of the current triple == c,
               then insert the current triple in B */
            if (A[Term][1] == c)
            {
                B[nxt][0] = A[Term][1];
                B[nxt][1] = A[Term][0];
                B[nxt][2] = A[Term][2];
                nxt++;
            }
        }
    }
}
Complexity of simple transpose
 O(no. of columns * no. of terms)
 Fast Transpose:
 Determine the number of elements in each column of the original matrix.
 ==>
 This gives the starting position of each row in the transpose matrix.

        row  col  value
a[0]    6    6    8
a[1]    0    0    15
a[2]    0    3    22
a[3]    0    5    -15
a[4]    1    1    11
a[5]    1    2    3
a[6]    2    3    -6
a[7]    4    0    91
a[8]    5    2    28

                 [0] [1] [2] [3] [4] [5]
row_terms    =    2   1   2   2   0   1
starting_pos =    1   3   4   6   8   8
void fast_transpose(term a[], term b[])
{
    /* the transpose of a is placed in b */
    int row_terms[MAX_COL], starting_pos[MAX_COL];
    int i, j, num_cols = a[0].col, num_terms = a[0].value;
    b[0].row = num_cols;
    b[0].col = a[0].row;
    b[0].value = num_terms;
    if (num_terms > 0) {  /* nonzero matrix */
        for (i = 0; i < num_cols; i++)     /* initialise counts to 0 */
            row_terms[i] = 0;
        for (i = 1; i <= num_terms; i++)   /* count terms per column */
            row_terms[a[i].col]++;
        starting_pos[0] = 1;
        for (i = 1; i < num_cols; i++)     /* starting position of each column */
            starting_pos[i] = starting_pos[i-1] + row_terms[i-1];
        for (i = 1; i <= num_terms; i++) { /* place each element */
            j = starting_pos[a[i].col]++;
            b[j].row = a[i].col;
            b[j].col = a[i].row;
            b[j].value = a[i].value;
        }
    }
}
