SPEAKER RECOGNITION
Orchisama Das
Speaker Recognition is the problem of identifying a speaker from a recording of their speech. It is an
important topic in Speech Signal Processing and has a variety of applications, especially in security
systems. Voice controlled devices also rely heavily on speaker recognition.
This is already a well-researched problem; my aim was not to come up with a new algorithm for
speaker recognition, but to implement some already famous existing methods using Python. My
motivation behind doing this independent project was to make a shift from MATLAB to Python for
scientific computing. For this I primarily used the NumPy, SciPy and Matplotlib packages, which provide extensive routines for matrix manipulation, signal processing and plotting.
The main principle behind speaker recognition is extraction of features from speech which are
characteristic to a speaker, followed by training on a data set and testing. I have relied heavily on the
algorithm suggested in [1], where they extract the Mel-Frequency Cepstral Coefficients from each
speaker and train them with Vector Quantization (using the LBG algorithm). I have also tried Vector
Quantization by extracting the Linear Prediction Coefficients (LPCs) for training. The data set I have
trained and tested on was downloaded from [1], and consists of 8 different female speakers uttering
the word ‘zero’. This data set is not extensive enough to give conclusive results (with only 8 training
and test sets), and the results are far from satisfactory. However, my intention was to learn about and
implement algorithms in Python, not carry out accurate tests. I wish to gather more data to extend this
work.
1. Speech Signal
Digital speech is a one-dimensional, time-varying discrete signal, as shown in Figure 1a. Various mathematical models of speech are available, such as the Autoregressive model and the Sinusoidal + Residual model. A popular model of speech production treats speech as a train of impulses with period equal to the pitch (for voiced sounds) or random noise (for unvoiced sounds), selected by a voiced/unvoiced switch and shaped by the vocal tract, which acts as a time-varying filter.
Speech is quasi-stationary in nature: over short intervals the signal is stationary, but over longer periods its frequency content varies. Hence a Short-Time Fourier Transform (STFT) is needed to visualize its frequency content, as shown in Figure 1b (also computed in Python, with FFT size = Hanning window size = 256 samples and 50% overlap). The plot of the STFT with time and frequency along the x and y axes, and log amplitude indicated by colour intensity, is called a spectrogram.
Figure 1 – (a) speech signal in the time domain; (b) its spectrogram
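As a quick illustration of how such a spectrogram can be computed (the program actually used for Figure 1 is given in the Supplement), the snippet below uses SciPy's scipy.signal.stft with the parameters stated above; the filename speech.wav is only a placeholder.

import numpy as np
import matplotlib.pyplot as plt
from scipy.io.wavfile import read
from scipy.signal import stft

# load a speech recording (placeholder filename)
fs, s = read('speech.wav')
s = s.astype(np.float64)

# STFT with a 256-sample Hann window, 50% overlap and FFT size = window size
f, t, Z = stft(s, fs=fs, window='hann', nperseg=256, noverlap=128, nfft=256)

# plot the log-magnitude spectrogram
plt.pcolormesh(t, f, 20 * np.log10(np.abs(Z) + 1e-10), shading='auto')
plt.xlabel('Time (s)')
plt.ylabel('Frequency (Hz)')
plt.colorbar(label='Magnitude (dB)')
plt.show()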
2. Feature Extraction
Choosing which features to extract from speech is the most significant part of speaker recognition.
Some popular features are: MFCCs, LPCs, Zero-Crossing Rates etc. In this work, I have concentrated
on MFCCs and LPCs. Here is a brief overview of these features.
To calculate MFCCs, the steps are as follows. A very good tutorial is available in [2]. A schematic of
this process is given in Figure 2.
1. The speech signal is divided into frames of 25ms with an overlap of 10ms. Each frame is
multiplied with a Hamming window.
2. The periodogram of each frame of speech is calculated by first taking a 512-point FFT of the frame, then computing the power spectrum as
$P_i(k) = \frac{1}{N}\,\left|S_i(k)\right|^2$
where $P_i(k)$ is the power spectral estimate, $S_i(k)$ is the $k$-th Fourier coefficient of the $i$-th frame of speech, and $N$ is the length of the analysis window. Only 257 of the 512 periodogram samples are preserved, since the power spectrum of a real signal is symmetric (an even function).
3. The entire frequency range is divided into ‘n’ Mel filter banks, where ‘n’ is also the number of coefficients we want. For ‘n’ = 12, the filter bank is shown in Figure 3: a set of overlapping triangular filters whose bandwidth increases with frequency (a sketch of how such a filter bank can be constructed is given after this list).
4. To calculate filter bank energies we multiply each filter bank with the power spectrum, and
add up the coefficients. Once this is performed we are left with ‘n’ numbers that give us an
indication of how much energy was in each filter bank.
5. We take the logarithm of these ‘n’ energies and compute its Discrete Cosine Transform to get
the final MFCCs.
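The mel_filterbank(nfft, nfiltbank, fs) helper called in Listing 1 is not included in the excerpt below. The following is my own minimal sketch of how such a filter bank can be built: ‘n’ triangular filters whose centres are spaced uniformly on the mel scale (the standard mel/Hz conversion from [2]); the shape of the returned array, (nfft/2 + 1, nfiltbank), is inferred from how it is used in Listing 1, and the original implementation may differ in detail.

import numpy as np

def mel_filterbank(nfft, nfiltbank, fs):
    # triangular filters spaced uniformly on the mel scale;
    # returns an array of shape (nfft//2 + 1, nfiltbank)
    def hz2mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel2hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    # nfiltbank + 2 equally spaced points on the mel axis, mapped back to Hz
    mel_points = np.linspace(hz2mel(0), hz2mel(fs / 2.0), nfiltbank + 2)
    hz_points = mel2hz(mel_points)
    # corresponding FFT bin indices
    bins = np.floor((nfft + 1) * hz_points / fs).astype(int)

    fbank = np.zeros((nfft // 2 + 1, nfiltbank))
    for i in range(nfiltbank):
        left, centre, right = bins[i], bins[i + 1], bins[i + 2]
        # rising edge of the triangle
        for k in range(left, centre):
            fbank[k, i] = (k - left) / float(centre - left)
        # falling edge of the triangle
        for k in range(centre, right):
            fbank[k, i] = (right - k) / float(right - centre)
    return fbank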
The Python code for calculating MFCCs from a given speech file (.wav format) is shown in Listing 1.
# (excerpt from mel_coefficients.py; s, fs, nfiltbank, nSamples, overlap and
#  nFrames are set up earlier in the function, and np (NumPy), fft, fftshift,
#  dct (e.g. from scipy.fftpack) and a hamming window function are imported
#  at the top of the file)

# zero pad so that the signal is long enough to hold nFrames complete frames
padding = ((nSamples - overlap) * (nFrames - 1) + nSamples) - len(s)
if padding > 0:
    signal = np.append(s, np.zeros(padding))
else:
    signal = s

# slice the padded signal into overlapping frames (hop = nSamples - overlap)
segment = np.empty((nSamples, nFrames))
for i in range(nFrames):
    start = (nSamples - overlap) * i
    segment[:, i] = signal[start:start + nSamples]

# compute the periodogram (power spectrum) of each windowed frame
nfft = 512
periodogram = np.empty((nFrames, nfft // 2 + 1))
for i in range(nFrames):
    x = segment[:, i] * hamming(nSamples)
    spectrum = fftshift(fft(x, nfft))
    # keep one half of the symmetric spectrum: nfft/2 + 1 bins
    periodogram[i, :] = abs(spectrum[nfft // 2 - 1:]) ** 2 / nSamples

# filter bank energies: nfiltbank MFCCs for each frame
fbank = mel_filterbank(nfft, nfiltbank, fs)
mel_coeff = np.empty((nfiltbank, nFrames))
for i in range(nfiltbank):
    for k in range(nFrames):
        mel_coeff[i, k] = np.sum(periodogram[k, :] * fbank[:, i])

# log of the filter bank energies, then a DCT along the filter bank axis
mel_coeff = np.log10(mel_coeff)
mel_coeff = dct(mel_coeff, axis=0)
# exclude the 0th order coefficient (much larger than the others)
mel_coeff[0, :] = np.zeros(nFrames)
return mel_coeff
Listing 1 – mel_coefficients.py
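As a usage sketch, assuming the function defined in mel_coefficients.py is named mfcc (which is how it is called in Listing 5) and that a training file exists at the placeholder path below:

from scipy.io.wavfile import read

# read one training utterance and compute 12 MFCCs per frame
fs, s = read('train/s1.wav')   # placeholder path, not the actual data set layout
mel_coeff = mfcc(s, fs, 12)    # array of shape (12, nFrames)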
The second set of features is the Linear Prediction Coefficients (LPCs), which come from the autoregressive model of speech. Each sample at the nth instant depends on ‘p’ previous samples, added with a Gaussian noise u(n):
$s(n) = -\sum_{i=1}^{p} \alpha_i\, s(n-i) + u(n)$
This model comes from the assumption that a speech signal is produced by a buzzer at the end of a tube (voiced sounds), with occasional added hissing and popping sounds (unvoiced sounds). The LPC coefficients are the $\alpha_i$. To estimate them, we use the Yule-Walker equations, which are explained in [3]; they involve the autocorrelation function $R_x$. The autocorrelation at lag $l$ is given by
$R_x(l) = \sum_{n=0}^{N-1-l} s(n)\, s(n+l)$
While calculating the ACF in Python, the Box-Jenkins estimate is used, which scales the correlation at each lag by the sample variance so that the autocorrelation at lag 0 is unity:
$\hat{\rho}(l) = \frac{1}{N\hat{\sigma}^2}\sum_{n=0}^{N-1-l}\left(s(n)-\bar{s}\right)\left(s(n+l)-\bar{s}\right)$
where $\bar{s}$ is the sample mean and $\hat{\sigma}^2$ is the sample variance of the frame.
In this case, I have normalised the estimated LPC coefficients so that they lie in [-1, 1]; this was seen to give more accurate results. The Python code for calculating LPCs is given in Listing 2. We
first divide speech into frames of 25ms with 10ms overlap, then calculate ‘p’ LPCs for each frame.
# (excerpt from LPC.py; the helpers autocorr() and createSymmetricMatrix()
#  are defined earlier in the file - a sketch of them is given after this
#  listing)

def lpc(s, fs, p):
    # divide into frames of 25 ms with an overlap of 10 ms
    nSamples = np.int32(0.025 * fs)
    overlap = np.int32(0.01 * fs)
    nFrames = np.int32(np.ceil(len(s) / (nSamples - overlap)))

    # zero pad so that the signal is long enough to hold nFrames complete frames
    padding = ((nSamples - overlap) * (nFrames - 1) + nSamples) - len(s)
    if padding > 0:
        signal = np.append(s, np.zeros(padding))
    else:
        signal = s

    # slice the padded signal into overlapping frames
    segment = np.empty((nSamples, nFrames))
    for i in range(nFrames):
        start = (nSamples - overlap) * i
        segment[:, i] = signal[start:start + nSamples]

    # calculate p LPCs per frame by solving the Yule-Walker equations
    lpc_coeffs = np.empty((p, nFrames))
    for i in range(nFrames):
        acf = autocorr(segment[:, i])
        r = -acf[1:p + 1]
        R = createSymmetricMatrix(acf, p)
        lpc_coeffs[:, i] = np.linalg.solve(R, r)   # equivalent to inv(R) dot r
        # normalise so that the coefficients lie in [-1, 1]
        lpc_coeffs[:, i] = lpc_coeffs[:, i] / np.max(np.abs(lpc_coeffs[:, i]))

    return lpc_coeffs
Listing 2 – LPC.py
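The helpers autocorr() and createSymmetricMatrix() called in Listing 2 are not shown above. The sketch below is my own reconstruction, consistent with the Box-Jenkins normalisation and the Yule-Walker setup described earlier; the original implementations may differ in detail.

import numpy as np
from scipy.linalg import toeplitz

def autocorr(x):
    # Box-Jenkins sample autocorrelation, normalised so that lag 0 equals 1
    x = np.asarray(x, dtype=np.float64)
    N = len(x)
    xm = x - np.mean(x)
    acf = np.empty(N)
    for l in range(N):
        acf[l] = np.sum(xm[:N - l] * xm[l:]) / N
    return acf / acf[0]   # acf[0] is the (biased) sample variance

def createSymmetricMatrix(acf, p):
    # p x p symmetric Toeplitz autocorrelation matrix for the Yule-Walker equations
    return toeplitz(acf[:p])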
3. Feature Matching
The most popular feature matching algorithms for speaker recognition are Dynamic Time Warping
(DTW), Hidden Markov Model (HMM) and Vector Quantization (VQ). Here, I have used Vector
Quantization as suggested in [1].
VQ is a process of mapping vectors from a large vector space to a finite number of regions in that
space. Each region is called a cluster and can be represented by its center called a codeword. The
collection of all codewords is called a codebook.
Figure 5a shows a conceptual diagram to illustrate this recognition process. Only 2 feature
dimensions are shown on a 2D plane for two different speakers. We can identify clusters of
vectors in the plane. During training, a codeword is chosen for each cluster by minimizing the
distortion between each vector in the cluster and the codeword. The collection of codewords
forms a speaker specific codebook unique to each speaker. The codebook for each speaker is
determined by the LBG algorithm, which is described later. To identify a speaker, the
distance (or distortion) of the speaker’s features from all the trained codebooks is calculated, and the codebook with the minimum distortion identifies the speaker.
Figure 5b shows the actual 2D diagram of 5th and 6th features of two speakers, and their
respective codebooks.
Figure 5 – Speaker 1 and Speaker 2 feature samples, codebook centroids and VQ distortion
The LBG algorithm for designing a speaker’s codebook proceeds as follows:
1. Design a 1-vector codebook; this is the centroid of the entire set of training vectors (hence, no
iteration is required here).
2. Double the size of the codebook by splitting each current codebook yn according to the rule
$y_n^{+} = y_n(1+\epsilon)$
$y_n^{-} = y_n(1-\epsilon)$
where n varies from 1 to the current size of the codebook, and $\epsilon$ is a splitting parameter (we choose $\epsilon = 0.01$).
3. Nearest-Neighbor Search: for each training vector, find the codeword in the current codebook that
is closest (in terms of similarity measurement), and assign that vector to the corresponding cell
(associated with the closest codeword).
4. Centroid Update: update the codeword in each cell using the centroid of the training vectors
assigned to that cell.
5. Iteration: repeat steps 3 and 4 until the distortion for the current iteration falls below a fraction of the previous iteration’s distortion. This ensures that the process has converged.
Intuitively, the LBG algorithm designs an M-vector codebook in stages. It starts first by
designing a 1-vector codebook, then uses a splitting technique on the codewords to initialize the
search for a 2-vector codebook, and continues the splitting process until the desired M-vector
codebook is obtained.
Listing 3 gives the details of implementing the LBG algorithm in Python. The print statements should
be used for debugging, especially to see if distortion is converging.
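Since Listing 3 itself is not reproduced here, the following is a minimal sketch of an LBG-style codebook trainer implementing the five steps above. The function name lbg, the data layout (one feature vector per column) and the relative-improvement convergence test are my assumptions; the original code may handle details such as empty cells differently.

import numpy as np

def lbg(features, M, eps=0.01, tol=1e-3):
    # features: array of shape (nDim, nVectors); returns an (nDim, M) codebook
    # step 1: one-vector codebook = centroid of all training vectors
    codebook = np.mean(features, axis=1, keepdims=True)

    while codebook.shape[1] < M:
        # step 2: split every codeword y into y(1 + eps) and y(1 - eps)
        codebook = np.hstack((codebook * (1 + eps), codebook * (1 - eps)))

        prev_distortion = np.inf
        while True:
            # step 3: nearest-neighbour search (Euclidean distance)
            dists = np.linalg.norm(features[:, :, None] - codebook[:, None, :], axis=0)
            nearest = np.argmin(dists, axis=1)
            distortion = np.mean(np.min(dists, axis=1))

            # step 4: centroid update for every non-empty cell
            for j in range(codebook.shape[1]):
                members = features[:, nearest == j]
                if members.shape[1] > 0:
                    codebook[:, j] = np.mean(members, axis=1)

            # step 5: stop when the relative improvement in distortion is small
            if (prev_distortion - distortion) / distortion < tol:
                break
            prev_distortion = distortion

    return codebook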
4. Feature Training
The main algorithms needed for speaker recognition have been implemented. Now, everything needs
to be brought together to train our dataset and derive codebooks for each speaker using VQ. The
Python code is given in Listing 4. I have hard-coded the name of the directory where the speech files
are stored and the .wav filenames, but that can be easily changed by giving them as parameters to the
training() function. The number of speakers is nSpeaker = 8. As mentioned before, speech recordings of 8 female speakers uttering the word ‘zero’ have been used for training and testing. Each
codebook should have 16 codewords, hence nCentroid = 16 (it is highly recommended to keep this
number a power of 2).
Codebooks for both MFCC features and LPC features are plotted for all 8 speakers. One of them is
shown in Figure 6. Lines 44 to 61 may be commented out as they are only used to plot the 5th and 6th
dimension MFCC features for the first two speakers on a 2D plane.
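Since Listing 4 is not reproduced here, the following is a minimal sketch of what the training stage might look like, using the mfcc() and lpc() functions from Listings 1 and 2 and the lbg() sketch above. The directory layout, the filename pattern 's1.wav' … 's8.wav' and the returned container (plain lists of codebooks) are assumptions, and the plotting of the 5th and 6th MFCC dimensions mentioned above is omitted.

from scipy.io.wavfile import read

def training(directory='train/', nSpeaker=8, nfiltbank=12, orderLPC=15, nCentroid=16):
    # build one MFCC codebook and one LPC codebook per speaker using VQ (LBG)
    codebooks_mfcc = []
    codebooks_lpc = []
    for i in range(nSpeaker):
        fname = 's' + str(i + 1) + '.wav'     # assumed filename pattern
        fs, s = read(directory + fname)
        # feature extraction (Listings 1 and 2)
        mel_coeff = mfcc(s, fs, nfiltbank)
        lpc_coeff = lpc(s, fs, orderLPC)
        # vector quantization: one codebook per speaker and per feature type
        codebooks_mfcc.append(lbg(mel_coeff, nCentroid))
        codebooks_lpc.append(lbg(lpc_coeff, nCentroid))
    return codebooks_mfcc, codebooks_lpc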
5. Testing
It is finally time to test our speaker recognition algorithm. Listing 5 contains the code for identifying
the speaker by comparing their feature vector to the codebooks of all trained speakers and computing
the minimum distance between them. Heads up: the results are not as accurate as I thought they’d be, yielding an accuracy of 50% with MFCC and 37.5% with LPC. The main reason for this low accuracy is most likely that there wasn’t enough data to train on. More complex classification algorithms such as ANNs and SVMs should yield better results. I observed that training with 12 MFCC features and LPC coefficients of order 15 gives the best results. There are other parameters that can be varied,
such as number of codewords in a codebook and FFT size while computing MFCCs. It is possible that
a different combination of these will give higher accuracy.
# (excerpt from test.py; this block sits inside a loop over the test speakers,
#  with i, directory, fname, nfiltbank, orderLPC and the trained codebooks
#  set up earlier in the file)
print('Now speaker', i + 1, 'features are being tested')
fs, s = read(directory + fname)
mel_coefs = mfcc(s, fs, nfiltbank)
lpc_coefs = lpc(s, fs, orderLPC)

# pick the trained codebook with minimum VQ distortion for each feature type
sp_mfcc = minDistance(mel_coefs, codebooks_mfcc)
sp_lpc = minDistance(lpc_coefs, codebooks_lpc)

print('Speaker', i + 1, 'in test matches with speaker', sp_mfcc + 1,
      'in train for training with MFCC')
print('Speaker', i + 1, 'in test matches with speaker', sp_lpc + 1,
      'in train for training with LPC')

if i == sp_mfcc:
    nCorrect_MFCC += 1
if i == sp_lpc:
    nCorrect_LPC += 1

# after the loop: report accuracy (true division so the percentage is not truncated)
percentageCorrect_MFCC = (nCorrect_MFCC / nSpeaker) * 100
print('Accuracy of result for training with MFCC is', percentageCorrect_MFCC, '%')
percentageCorrect_LPC = (nCorrect_LPC / nSpeaker) * 100
print('Accuracy of result for training with LPC is', percentageCorrect_LPC, '%')
Listing 5 – test.py
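The minDistance() helper used above is not shown in the excerpt. The sketch below is my own version, consistent with the matching rule described in Section 3 (average distance of each test vector to its nearest codeword, minimised over all trained codebooks); it assumes the codebooks are passed as a list of (nDim, nCentroid) arrays, and the original may compute the distortion slightly differently.

import numpy as np

def minDistance(features, codebooks):
    # return the index of the codebook with the minimum average VQ distortion
    distortions = []
    for cb in codebooks:
        # distance of every feature vector (column) to every codeword
        d = np.linalg.norm(features[:, :, None] - cb[:, None, :], axis=0)
        # distortion = average distance to the nearest codeword
        distortions.append(np.mean(np.min(d, axis=1)))
    return int(np.argmin(distortions))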
6. Results
The following table gives the identification results for each of the 8 speakers, using MFCC and LPC features with Vector Quantization (LBG algorithm) for classification.
7. Wrapping up
I had a lot of fun doing this small project, understanding algorithms and implementing them in
Python. It was a good learning experience, and the resources available online helped a lot. I would
like to extend it by experimenting with more features and more classification algorithms. This was not
an original research work (I don’t want to be accused of plagiarism), but rather an exploration and
implementation of existing methods (the code is mine). It might be particularly helpful for those who
want to get started with Python for signal processing, or those who want to explore the vast topic of
Speech Processing.
Supplement
I would like to include the Python program for plotting the spectrogram of a signal by doing an STFT.
A spectrogram carries fundamental information about speech signals and is a basic tool used widely in audio analysis. The default window type has been taken as a Hanning window, and the window
length is equal to the number of FFT points.
Listing 6 – spectrogram.py
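The body of Listing 6 did not survive in this copy of the report, so the following is my own minimal sketch of a spectrogram routine matching the description above (Hanning window, window length equal to the number of FFT points, 50% overlap by default); it is not the original spectrogram.py.

import numpy as np
import matplotlib.pyplot as plt

def spectrogram(s, fs, nfft=256, overlap=0.5):
    # plot the log-magnitude STFT of s; the window length equals nfft
    s = np.asarray(s, dtype=np.float64)
    hop = int(nfft * (1 - overlap))
    window = np.hanning(nfft)
    nFrames = 1 + (len(s) - nfft) // hop

    stft = np.empty((nfft // 2 + 1, nFrames), dtype=complex)
    for i in range(nFrames):
        frame = s[i * hop:i * hop + nfft] * window
        stft[:, i] = np.fft.rfft(frame, nfft)

    t = np.arange(nFrames) * hop / fs
    f = np.fft.rfftfreq(nfft, d=1.0 / fs)
    plt.pcolormesh(t, f, 20 * np.log10(np.abs(stft) + 1e-10), shading='auto')
    plt.xlabel('Time (s)')
    plt.ylabel('Frequency (Hz)')
    plt.show()
    return stft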