Python Unit 5 New 2020
Python Iterators
Iterators are objects that can be iterated upon. In this tutorial, you will learn how
iterators work and how you can build your own iterator using the __iter__ and __next__
methods.
What are iterators in Python?
Iterators are everywhere in Python. They are elegantly implemented within for loops,
comprehensions, generators etc., but hidden in plain sight.
An iterator in Python is simply an object that can be iterated upon: an object which will
return data, one element at a time.
Technically speaking, a Python iterator object must implement two special
methods, __iter__() and __next__(), collectively called the iterator protocol.
An object is called iterable if we can get an iterator from it. Most of the built-in containers in
Python, like list, tuple and string, are iterables.
The iter() function (which in turn calls the __iter__() method) returns an iterator from them.
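For example, a list is iterable, and iter() gives us an iterator over it (a small illustration):

my_list = [4, 7, 0, 3]

# get an iterator using iter()
my_iter = iter(my_list)

# iterate through it using next()
print(next(my_iter))       # 4
print(next(my_iter))       # 7

# next(obj) is the same as obj.__next__()
print(my_iter.__next__())  # 0
print(my_iter.__next__())  # 3

# a further call would raise StopIteration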
So internally, the for loop creates an iterator object, iter_obj, by calling iter() on the iterable.
Perhaps ironically, this for loop is actually implemented as an infinite while loop.
Inside the loop, it calls next() to get the next element and executes the body of the for loop
with this value. After all the items are exhausted, StopIteration is raised, which is internally
caught, and the loop ends. Note that any other kind of exception will pass through.
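As a sketch, a for loop of the form

for element in iterable:
    do_something(element)

is roughly equivalent to the following (do_something stands for the loop body):

# create an iterator object from the iterable
iter_obj = iter(iterable)

# infinite loop
while True:
    try:
        # get the next item
        element = next(iter_obj)
        do_something(element)
    except StopIteration:
        # when StopIteration is raised, break from the loop
        break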
Building Your Own Iterator in Python
Building an iterator from scratch is easy in Python. We just have to implement the
methods __iter__() and __next__().
The __iter__() method returns the iterator object itself. If required, some initialization can be
performed.
The __next__() method must return the next item in the sequence. On reaching the end, and
in subsequent calls, it must raise StopIteration.
Here, we show an example that will give us the next power of 2 in each iteration. The power
exponent starts from zero and goes up to a user-set number.
class PowTwo:
    """Class to implement an iterator
    of powers of two"""

    def __init__(self, max=0):
        self.max = max

    def __iter__(self):
        self.n = 0
        return self

    def __next__(self):
        if self.n <= self.max:
            result = 2 ** self.n
            self.n += 1
            return result
        else:
            raise StopIteration
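A short usage sketch (the __init__ method above stores the user-set maximum exponent):

numbers = PowTwo(3)
i = iter(numbers)
print(next(i))  # 1
print(next(i))  # 2
print(next(i))  # 4
print(next(i))  # 8
# the next call raises StopIteration, since self.n is now greater than self.max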
2. Python Recursion
Recursion is the process of defining something in terms of itself. A physical world example
would be to place two parallel mirrors facing each other: any object in between them would
be reflected recursively.
The factorial of a number is the product of all the integers from 1 to that number. For example,
the factorial of 6 (denoted as 6!) is 1*2*3*4*5*6 = 720.
# An example of a recursive function to
# find the factorial of a number
def calc_factorial(x):
    """This is a recursive function
    to find the factorial of an integer"""
    if x == 1:
        return 1
    else:
        return (x * calc_factorial(x - 1))

num = 4
print("The factorial of", num, "is", calc_factorial(num))
In the above example, calc_factorial() is a recursive function, as it calls itself.
When we call this function with a positive integer, it will recursively call itself with a
decreasing number.
Each function call multiplies the number with the factorial of the number below it, until the
number is equal to one. This recursive call can be explained in the following steps.
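For num = 4, the calls unfold roughly like this (an illustrative trace):

calc_factorial(4)              # 1st call with 4
4 * calc_factorial(3)          # 2nd call with 3
4 * 3 * calc_factorial(2)      # 3rd call with 2
4 * 3 * 2 * calc_factorial(1)  # 4th call with 1, the base case
4 * 3 * 2 * 1 = 24             # the calls return and the products multiply out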
Our recursion ends when the number reduces to 1. This is called the base condition.
Every recursive function must have a base condition that stops the recursion or else the
function calls itself infinitely.
Advantages of Recursion
1. Recursive functions make the code look clean and elegant.
2. A complex task can be broken down into simpler sub-problems using recursion.
3. Sequence generation is easier with recursion than using some nested iteration.
Disadvantages of Recursion
1. Sometimes the logic behind recursion is hard to follow.
2. Recursive calls are expensive (inefficient) as they take up a lot of memory and time.
3. Recursive functions are hard to debug.
Fibonacci Sequence Using Recursion
The first two terms of the Fibonacci sequence are 0 and 1. All other terms are obtained by
adding the preceding two terms. This means the nth term is the sum of the (n-1)th and
(n-2)th terms.
Source Code
# Python program to display the Fibonacci sequence

def recur_fibo(n):
    if n <= 1:
        return n
    else:
        return (recur_fibo(n - 1) + recur_fibo(n - 2))

nterms = 10

# check if the number of terms is valid
if nterms <= 0:
    print("Please enter a positive integer")
else:
    print("Fibonacci sequence:")
    for i in range(nterms):
        print(recur_fibo(i))
Output
Fibonacci sequence:
0
1
1
2
3
5
8
13
21
34
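Tower of Hanoi
The Tower of Hanoi puzzle asks us to move n disks from one rod to another, using a third
rod as an auxiliary, moving one disk at a time and never placing a larger disk on a smaller
one. The function definition itself is missing from these notes; a minimal sketch of the
standard recursive solution, which reproduces the driver code and output below, is:

# Recursive solution to the Tower of Hanoi puzzle:
# move n disks from from_rod to to_rod, using aux_rod as a helper
def TowerOfHanoi(n, from_rod, to_rod, aux_rod):
    if n == 0:
        return
    # move the top n-1 disks out of the way, onto the auxiliary rod
    TowerOfHanoi(n - 1, from_rod, aux_rod, to_rod)
    # move the largest remaining disk to its destination
    print("Move disk", n, "from rod", from_rod, "to rod", to_rod)
    # move the n-1 disks from the auxiliary rod onto the destination
    TowerOfHanoi(n - 1, aux_rod, to_rod, from_rod)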
# Driver code
n = 4
TowerOfHanoi(n, 'A', 'C', 'B')
# A, C, B are the names of the rods
Output:
Move disk 1 from rod A to rod B
Move disk 2 from rod A to rod C
Move disk 1 from rod B to rod C
Move disk 3 from rod A to rod B
Move disk 1 from rod C to rod A
Move disk 2 from rod C to rod B
Move disk 1 from rod A to rod B
Move disk 4 from rod A to rod C
Move disk 1 from rod B to rod C
Move disk 2 from rod B to rod A
Move disk 1 from rod C to rod A
Move disk 3 from rod B to rod C
Move disk 1 from rod A to rod B
Move disk 2 from rod A to rod C
Move disk 1 from rod B to rod C
Sorting, searching and algorithm analysis
Introduction
We have learned that in order to write a computer program which performs some task we
must construct a suitable algorithm. However, whatever algorithm we construct is unlikely to
be unique – there are likely to be many possible algorithms which can perform the same task.
Are some of these algorithms in some sense better than others? Algorithm analysis is the
study of this question.
In this chapter we will analyse four algorithms, two for each of the following common tasks:
searching: finding a given value in a list
sorting: ordering the values in a list
Algorithm analysis should begin with a clear statement of the task to be performed. This
allows us both to check that the algorithm is correct and to ensure that the algorithms we are
comparing perform the same task.
Although there are many ways that algorithms can be compared, we will focus on two that
are of primary importance to many data processing algorithms:
time complexity: how the number of steps required depends on the size of the input
space complexity: how the amount of extra memory or storage required depends on the
size of the input
Sorting algorithms
The sorting of a list of values is a common computational task which has been studied
extensively. The classic description of the task is as follows:
Given a list of values and a function that compares two values, order the values in the list
from smallest to largest.
The values might be integers, or strings or even other kinds of objects. We will examine two
algorithms:
Selection sort, which relies on repeated selection of the next smallest item
Merge sort, which relies on repeated merging of sections of the list that are already sorted
Other well-known algorithms for sorting lists are insertion sort, bubble sort, heap
sort, quicksort and shell sort.
There are also various algorithms which perform the sorting task for restricted kinds of
values, for example:
Counting sort, which relies on the values belonging to a small set of items
Bucket sort, which relies on the ability to map each value to one of a small set of items
Radix sort, which relies on the values being sequences of digits
If we restrict the task, we can enlarge the set of algorithms that can perform it. Among these
new algorithms may be ones that have desirable properties. For example, Radix sort uses
fewer steps than any generic sorting algorithm.
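As an illustration of such a restricted-input algorithm, here is a minimal counting sort sketch
(a hypothetical helper, assuming the values are small non-negative integers):

def counting_sort(values, max_value):
    """Sort small non-negative integers by counting occurrences."""
    counts = [0] * (max_value + 1)
    for v in values:
        counts[v] += 1                  # tally how many times each value occurs
    result = []
    for v, count in enumerate(counts):
        result.extend([v] * count)      # emit each value count times, in order
    return result

print(counting_sort([3, 1, 0, 3, 2], 3))  # [0, 1, 2, 3, 3]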
What does O(1) mean? It means that the cost of an algorithm is constant, no matter what the
size of the input is. For both of the search algorithms discussed below (linear and binary
search), the best-case scenario happens when the first element to be tested is the correct
element – then we only have to perform a single operation to find it.
Big O notation, as used above, describes the time complexity of algorithms. It can also be
used to describe their space complexity – in which case the cost function represents the
number of units of space required for storage rather than the required number of operations.
Here are the space complexities of the algorithms covered in this unit (for the worst case,
and excluding the space required to store the input):

Algorithm        Space complexity
Linear search    O(1)
Binary search    O(1) iterative; O(log n) recursive
Selection sort   O(1)
Merge sort       O(n)
Using join() + List Slicing
The join() function can be coupled with list slicing: the slice picks a range of characters in
the list, and join() merges them into a single string.
# initializing list
test_list = ['I', 'L', 'O', 'V', 'E', 'G', 'F', 'G']

# printing original list
print("The original list is : " + str(test_list))

# using join() + list slicing to merge the elements from index 5 onwards
test_list[5:] = [''.join(test_list[5:])]

# printing result
print("The list after merging elements : " + str(test_list))
Output:
The original list is : ['I', 'L', 'O', 'V', 'E', 'G', 'F', 'G']
The list after merging elements : ['I', 'L', 'O', 'V', 'E', 'GFG']
6. Linear Search
Problem: Given an array arr[] of n elements, write a function to search a given element x in
arr[].
Example:
Input: arr[] = {10, 20, 80, 30, 60, 50, 110, 100, 130, 170}
       x = 110
Output: 6
Element x is present at index 6
Start from the leftmost element of arr[] and one by one compare x with each element of
arr[].
If x matches with an element, return the index.
If x doesn't match with any of the elements, return -1.
Example:
def search(arr, n, x):
    # compare x with each element, left to right
    for i in range(n):
        if arr[i] == x:
            return i
    # x was not found
    return -1

# Driver code
arr = [2, 3, 4, 10, 40]
x = 10
n = len(arr)

result = search(arr, n, x)
if result == -1:
    print("Element is not present in array")
else:
    print("Element is present at index", result)
Output:
Element is present at index 3
The time complexity of the above algorithm is O(n).
Linear search is rarely used in practice because other approaches, such as the binary search
algorithm and hash tables, allow significantly faster searching in comparison to linear
search.
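For instance, Python's built-in set and dict types are backed by hash tables, so a membership
test takes constant time on average (a small illustration):

values = {10, 20, 80, 30, 60, 50, 110, 100, 130, 170}

# average-case O(1) membership test, versus O(n) for a linear scan of a list
print(110 in values)  # True
print(99 in values)   # False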
7. Binary Search
Given a sorted array arr[] of n elements, write a function to search a given element x in arr[].
A simple approach is to do a linear search, whose time complexity, as seen above, is O(n).
Another approach to perform the same task is to use Binary Search.
Binary Search: Search a sorted array by repeatedly dividing the search interval in half.
Begin with an interval covering the whole array. If the value of the search key is less than the
item in the middle of the interval, narrow the interval to the lower half. Otherwise narrow it
to the upper half. Repeatedly check until the value is found or the interval is empty.
The idea of binary search is to use the information that the array is sorted and reduce the time
complexity to O(Log n).
We basically ignore half of the elements just after one comparison.
1. Compare x with the middle element.
2. If x matches with the middle element, we return the mid index.
3. Else, if x is greater than the mid element, then x can only lie in the right half subarray
after the mid element, so we recur for the right half.
4. Else (x is smaller), we recur for the left half.
Recursive implementation of Binary Search (the function itself is missing from these notes;
this reconstruction follows the steps above):

def binarySearch(arr, l, r, x):
    # check base case: the interval is non-empty
    if r >= l:
        mid = l + (r - l) // 2
        if arr[mid] == x:
            # element is present at the middle itself
            return mid
        elif arr[mid] > x:
            # element can only be present in the left subarray
            return binarySearch(arr, l, mid - 1, x)
        else:
            # element can only be present in the right subarray
            return binarySearch(arr, mid + 1, r, x)
    else:
        # element is not present in the array
        return -1

# Test array
arr = [2, 3, 4, 10, 40]
x = 10

# Function call
result = binarySearch(arr, 0, len(arr) - 1, x)
if result != -1:
    print("Element is present at index %d" % result)
else:
    print("Element is not present in array")
Output :
Element is present at index 3
Iterative implementation of Binary Search (reconstructed around the surviving
while l <= r: loop):

def binarySearch(arr, l, r, x):
    while l <= r:
        mid = l + (r - l) // 2
        if arr[mid] == x:
            # x is present at mid
            return mid
        elif arr[mid] < x:
            # ignore the left half
            l = mid + 1
        else:
            # ignore the right half
            r = mid - 1
    # element was not present
    return -1

# Test array
arr = [2, 3, 4, 10, 40]
x = 10

# Function call
result = binarySearch(arr, 0, len(arr) - 1, x)
if result != -1:
    print("Element is present at index %d" % result)
else:
    print("Element is not present in array")

Output:
Element is present at index 3
Time Complexity:
The time complexity of Binary Search can be written as
T(n) = T(n/2) + c
The above recurrence can be solved using either the Recurrence Tree method or the Master
method. It falls in case II of the Master method, and the solution of the recurrence is
O(log n).
Auxiliary Space: O(1) in the case of the iterative implementation. In the case of the recursive
implementation, O(log n) recursion call stack space.
Selection Sort
Selection sort is an algorithm that selects the smallest element from an unsorted list in each
iteration and places that element at the beginning of the unsorted list.
1. Set the first element as minimum.
2. Compare minimum with the second element. If the second element is smaller than
minimum, assign the second element as minimum. Then compare minimum with the third
element. Again, if the third element is smaller, assign minimum to the third element;
otherwise do nothing. The process goes on until the last element.
3. After each iteration, minimum is placed at the front of the unsorted list.
4. For each iteration, indexing starts from the first unsorted element. Steps 1 to 3 are repeated
until all the elements are placed at their correct positions.
Selection Sort Algorithm
selectionSort(array, size)
  repeat (size - 1) times
    set the first unsorted element as the minimum
    for each of the unsorted elements
      if element < currentMinimum
        set element as new minimum
    swap minimum with first unsorted position
end selectionSort
Python
# Selection sort in Python

def selectionSort(array, size):
    for step in range(size):
        min_idx = step
        for i in range(step + 1, size):
            # select the smallest element in the unsorted part;
            # to sort in descending order, change < to > in the line below
            if array[i] < array[min_idx]:
                min_idx = i
        # put the minimum at the front of the unsorted part
        (array[step], array[min_idx]) = (array[min_idx], array[step])

data = [-2, 45, 0, 11, -9]
size = len(data)
selectionSort(data, size)
print('Sorted Array in Ascending Order:\n')
print(data)
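Output (the result of running the program above):
Sorted Array in Ascending Order:

[-9, -2, 0, 11, 45]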
Complexity
Cycle    Number of comparisons
1st      (n-1)
2nd      (n-2)
3rd      (n-3)
...      ...
last     1

The total number of comparisons is (n-1) + (n-2) + (n-3) + ... + 1 = n(n-1)/2, which is
nearly n². Hence, Complexity = O(n²).
Also, we can analyze the complexity by simply observing the number of loops: there are 2
nested loops, so the complexity is n*n = n².
Time Complexities:
Worst Case Complexity: O(n²)
If we want to sort in ascending order and the array is in descending order, then the worst
case occurs.
Best Case Complexity: O(n²)
It occurs when the array is already sorted.
Average Case Complexity: O(n²)
It occurs when the elements of the array are in jumbled order (neither ascending nor
descending).
The time complexity of selection sort is the same in all cases: at every step, you have to find
the minimum element and put it in the right place, and the minimum element is not known
until the end of the array is reached.
Space Complexity:
Space complexity is O(1) because only a constant amount of extra space (a single temporary
variable used for swapping) is required.
Like QuickSort, Merge Sort is a Divide and Conquer algorithm. It divides the input array
into two halves, calls itself for the two halves, and then merges the two sorted halves. The
merge() function is used for merging the two halves: merge(arr, l, m, r) is the key process,
which assumes that arr[l..m] and arr[m+1..r] are sorted and merges the two sorted sub-arrays
into one.
MergeSort(arr[], l, r)
If r > l
1. Find the middle point to divide the array into two halves:
middle m = (l+r)/2
2. Call mergeSort for first half:
Call mergeSort(arr, l, m)
3. Call mergeSort for second half:
Call mergeSort(arr, m+1, r)
4. Merge the two halves sorted in step 2 and 3:
Call merge(arr, l, m, r)
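A runnable sketch of this pseudocode (a minimal implementation following the
merge(arr, l, m, r) convention described above):

def merge(arr, l, m, r):
    # copy the two sorted halves into temporary lists
    left = arr[l:m + 1]
    right = arr[m + 1:r + 1]
    i = j = 0
    k = l
    # merge the temporary lists back into arr[l..r]
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            arr[k] = left[i]
            i += 1
        else:
            arr[k] = right[j]
            j += 1
        k += 1
    # copy any remaining elements of left[] and right[]
    while i < len(left):
        arr[k] = left[i]
        i += 1
        k += 1
    while j < len(right):
        arr[k] = right[j]
        j += 1
        k += 1

def mergeSort(arr, l, r):
    if l < r:
        m = (l + r) // 2           # find the middle point
        mergeSort(arr, l, m)       # sort the first half
        mergeSort(arr, m + 1, r)   # sort the second half
        merge(arr, l, m, r)        # merge the two sorted halves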
# Driver code (example input; the original input array is not shown in these notes)
arr = [12, 11, 13, 5, 6, 7]
n = len(arr)
mergeSort(arr, 0, n - 1)

print("\n\nSorted array is")
for i in range(n):
    print("%d" % arr[i], end=" ")

Output:

Sorted array is
5 6 7 11 12 13