
Mysore University School of Engineering

8J99+QC7, Manasa Gangothiri, Mysuru, Karnataka 570006

Pattern Recognition
(21BR551)

MODULE 02 NOTES

Prepared by: Mr. Thanmay J S, Assistant Professor, Bio-Medical & Robotics Engineering, UoM SoE, Mysore 570006

Module 02: Course Content

2.0 Introduction to Bayesian classifiers


2.1 Decision boundaries
2.2 Two dimensional examples
2.3 D-dimensional decision boundaries in matrix notation
2.4 Estimation of error rates
2.5 Unequal costs of error
2.6 Model-based estimates
2.7 Simple counting
2.8 Fractional counting
2.9 Characteristic curves
2.10 Confusion matrices
2.11 Estimating the composition of populations.

2.0 Introduction to Bayesian Classifiers
Bayesian classifiers are a type of statistical classification model that rely on Bayes' Theorem to predict the
probability of a class label given the input features. The key idea behind Bayesian classifiers is to model the
relationship between the features of the data and the target class, using probabilistic reasoning. The general
framework uses Bayes' Theorem to estimate the posterior probabilities of different classes, which are then
used for classification.
Bayes' Theorem states that:

P(C | X) = [P(X | C) · P(C)] / P(X)

where P(C | X) is the posterior probability of class C given the feature vector X, P(X | C) is the likelihood, P(C) is the prior probability of the class, and P(X) is the evidence (a normalizing constant).
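As a small illustration (not part of the original notes), the posterior computation can be sketched in Python; the priors and likelihoods below are made-up numbers:

```python
import numpy as np

def posteriors(priors, likelihoods):
    """Return P(class | x) for each class from priors P(C) and likelihoods P(x | C)."""
    priors = np.asarray(priors, dtype=float)
    likelihoods = np.asarray(likelihoods, dtype=float)
    joint = priors * likelihoods          # numerators of Bayes' Theorem
    return joint / joint.sum()            # divide by the evidence P(x)

# Example: two classes with priors 0.6 / 0.4 and likelihoods 0.02 / 0.05
print(posteriors([0.6, 0.4], [0.02, 0.05]))  # -> [0.375 0.625]
```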
2.1 Decision Boundaries


In classification, a decision boundary is the boundary that separates the different classes in the feature space. For a two-class problem, it is the set of points (a line, or a hyperplane in higher dimensions) where the classifier is equally confident in both classes, i.e. where the posterior probabilities are equal:

P(C1 | X) = P(C2 | X)

Substituting Bayes' Theorem and cancelling the common evidence term P(X) gives P(X | C1) P(C1) = P(X | C2) P(C2); simplifying this gives the equation of the decision boundary.


For more complex classifiers, such as those involving Gaussian distributions for the likelihoods, the decision
boundaries can be curves or higher-dimensional surfaces.

2.2 Two-Dimensional Examples


In two-dimensional space, the feature vector X = (x1, x2) has two components, and the decision boundary is a curve or line that separates the class regions. For a Gaussian (Naive) Bayes classifier, in which each class follows a Gaussian distribution, the boundary is in general a quadratic curve (a conic section); it reduces to a straight line when the two classes share the same covariance.

The boundary consists of the points where the posterior probabilities of the two classes are equal, P(C1 | X) = P(C2 | X). For simple synthetic examples, such as two classes with different means and covariances, the resulting decision boundary may be linear or quadratic.
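A short numerical sketch of such a boundary, using made-up means and covariances (assumes NumPy and SciPy are available):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Two made-up Gaussian classes with equal priors
g1 = multivariate_normal(mean=[0, 0], cov=[[1, 0], [0, 1]])
g2 = multivariate_normal(mean=[2, 1], cov=[[2, 0], [0, 0.5]])

# Evaluate the log-posterior difference on a grid; the decision
# boundary is the zero level set of d(x) = log p(x|C1) - log p(x|C2).
xx, yy = np.meshgrid(np.linspace(-3, 5, 200), np.linspace(-3, 4, 200))
grid = np.dstack([xx, yy])
d = g1.logpdf(grid) - g2.logpdf(grid)

# Points with d > 0 are classified as C1, d < 0 as C2.
# matplotlib's contour at level 0 draws the (quadratic) boundary:
# import matplotlib.pyplot as plt; plt.contour(xx, yy, d, levels=[0]); plt.show()
```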

2.3 D-Dimensional Decision Boundaries in Matrix Notation


In higher dimensions (D-dimensional space), decision boundaries become hyperplanes or more complex geometric surfaces, and it is convenient to express the problem in matrix notation. For Gaussian class-conditional densities, each class Ci can be scored with the discriminant function

gi(X) = -1/2 (X - μi)ᵀ Σi⁻¹ (X - μi) - 1/2 ln |Σi| + ln P(Ci)

where μi is the class mean vector and Σi the class covariance matrix. The decision boundary between two classes is the set where g1(X) = g2(X), which in general is a quadratic surface; when Σ1 = Σ2 the quadratic terms cancel and the boundary is a hyperplane.
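A minimal sketch of this discriminant in matrix form, with hypothetical parameters:

```python
import numpy as np

def gaussian_discriminant(x, mean, cov, prior):
    """g_i(x) for a Gaussian class: quadratic form + log-det + log-prior."""
    diff = x - mean
    return (-0.5 * diff @ np.linalg.solve(cov, diff)
            - 0.5 * np.linalg.slogdet(cov)[1]
            + np.log(prior))

# Classify a point by comparing discriminants (made-up parameters):
x = np.array([1.0, 0.5])
g1 = gaussian_discriminant(x, np.zeros(2), np.eye(2), prior=0.5)
g2 = gaussian_discriminant(x, np.array([2.0, 1.0]), np.diag([2.0, 0.5]), prior=0.5)
print("class 1" if g1 > g2 else "class 2")
```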

2.4 Estimation of Error Rates


Error rates are a key measure of model performance. They can be estimated from a confusion matrix, by cross-validation, or by out-of-sample testing. In general, the error rate is defined as

Error rate = (number of misclassified samples) / (total number of samples)
Error rates are often used to evaluate the effectiveness of a classifier and to tune model parameters (e.g., regularization strength). Error rates can also be estimated through cross-validation, where the data is split into multiple subsets and the model is repeatedly trained on some subsets and validated on the remainder. This provides a more robust estimate of the error rate, especially when data is limited.
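As a hedged illustration, a cross-validated error estimate on synthetic data might look like this (assuming scikit-learn is available):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Made-up 1-D data: two classes with different means
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0, 1, 50), rng.normal(2, 1, 50)]).reshape(-1, 1)
y = np.array([0] * 50 + [1] * 50)

# 5-fold cross-validated accuracy; the error rate is its complement
acc = cross_val_score(GaussianNB(), X, y, cv=5)
print("estimated error rate:", 1 - acc.mean())
```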

2.5 Unequal Costs of Error


In many real-world applications, the costs of misclassification are not equal. For example, in medical diagnosis, misclassifying a sick patient as healthy may be far more costly than misclassifying a healthy patient as sick. In these cases the error counts are weighted by the cost of each type of misclassification; one common form is the expected cost

Expected cost = (C_FN · FN + C_FP · FP) / N

where C_FN and C_FP are the costs of a false negative and a false positive, FN and FP are their counts, and N is the total number of samples. This allows a more nuanced evaluation of classifier performance when different errors carry different costs.
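A minimal sketch of the resulting cost-sensitive decision rule; the cost matrix below is hypothetical:

```python
import numpy as np

def min_cost_decision(posteriors, cost):
    """Pick the class with the smallest expected cost.

    cost[i][j] = cost of deciding class i when the true class is j.
    expected_cost_i = sum_j cost[i][j] * P(C_j | x)
    """
    posteriors = np.asarray(posteriors, dtype=float)
    cost = np.asarray(cost, dtype=float)
    return int(np.argmin(cost @ posteriors))

# Made-up example: missing a sick patient (true class 1) costs 10x a false alarm.
cost = [[0, 10],   # decide "healthy": free if right, very costly if patient is sick
        [1, 0]]    # decide "sick":   small cost if wrong, free if right
print(min_cost_decision([0.8, 0.2], cost))  # -> 1: treat as sick despite P = 0.2
```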

2.6 Model-Based Estimates
In model-based estimation, the error rates are computed based on the model's assumptions. For example, in a
Gaussian Naive Bayes model, you may estimate the error rate using the likelihood of observing each class's
features given the model parameters.
Model-based estimates refer to predictions or estimates generated through a statistical or computational model,
which uses existing data and assumptions to predict an outcome or future value. These models can range from
simple linear regression to complex machine learning algorithms, depending on the problem at hand.
Example Problem on Model-Based Estimates (Bayesian Classification)
Problem Statement: You are working with a model to predict whether a person will buy a product based on
their income. You are given the following training data:
Customer Income (in thousands) Purchased (Class)
1 30 Yes
2 50 No
3 40 Yes
4 60 No
5 55 No
6 45 Yes
Now, you are asked to predict if a new customer, who has an income of 47 (thousand), will purchase the
product or not. We will use Bayesian classification to solve this problem, assuming that the income feature
follows a normal distribution for each class ("Yes" and "No").
Step 1: Calculate Prior Probabilities
First, calculate the prior probabilities of the "Yes" and "No" classes from their frequencies in the dataset:

P(Yes) = 3/6 = 0.5,   P(No) = 3/6 = 0.5
Step 2: Calculate the Likelihoods for Each Class

Next, we calculate the mean and standard deviation of the income feature for each class:

For "Yes" (incomes 30, 40, 45): μ_Yes ≈ 38.33, σ_Yes ≈ 7.64
For "No" (incomes 50, 60, 55): μ_No = 55, σ_No = 5.0

(standard deviations computed with the sample formula, dividing by n − 1)
Step 3: Calculate the Likelihood of the New Customer’s Income


Now, we calculate the likelihood of observing the new customer's income (x = 47) given each class ("Yes" and "No") using the normal density

P(x | C) = (1 / (σ_C √(2π))) · exp(−(x − μ_C)² / (2σ_C²))

Step 4: Compute the Posterior Probabilities


Now, we compute the posterior probabilities for each class using Bayes' Theorem. Since we are only interested in the relative probabilities, we need only the numerators:

P(Yes | 47) ∝ P(47 | Yes) · P(Yes),   P(No | 47) ∝ P(47 | No) · P(No)
Step 5: Conclusion
Since P(Yes | 47) = 0.030 is greater than P(No | 47) = 0.0265, we predict that the new customer will purchase the product. This is a simple application of model-based estimation (Bayesian classification) to classify a new customer based on their income.
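A minimal Python reproduction of this worked example. It uses the sample standard deviation (dividing by n − 1); the exact magnitudes can differ slightly from the values quoted above depending on the variance convention, but the predicted class comes out the same:

```python
import math

incomes = {"Yes": [30, 40, 45], "No": [50, 60, 55]}
x_new = 47

def mean_std(values):
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / (len(values) - 1)  # sample variance
    return m, math.sqrt(var)

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

total = sum(len(v) for v in incomes.values())
scores = {}
for cls, vals in incomes.items():
    prior = len(vals) / total
    mu, sigma = mean_std(vals)
    scores[cls] = prior * normal_pdf(x_new, mu, sigma)   # Bayes numerator

print(scores)                                  # unnormalized posteriors
print("prediction:", max(scores, key=scores.get))  # -> "Yes"
```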

2.7 Simple Counting


Simple counting is the most direct method of error-rate estimation: count the number of misclassified samples in a dataset and divide by the total number of samples, giving the error rate as a proportion of all observations.
Example Problem:
Given Data: Let’s consider a binary classification task where a model is tasked with predicting whether an
email is spam or not spam. The model's predictions are compared to the actual labels (ground truth), and the
error rate is calculated. Let’s say we have 10 email samples, and the model has made predictions on whether
each email is spam (1) or not spam (0). The actual labels (ground truth) are also given.
Email No. Actual (True) Predicted Is Error?
1 1 (Spam) 1 (Spam) No
2 0 (Not Spam) 1 (Spam) Yes
3 1 (Spam) 0 (Not Spam) Yes
4 0 (Not Spam) 0 (Not Spam) No
5 1 (Spam) 1 (Spam) No
6 0 (Not Spam) 1 (Spam) Yes
7 0 (Not Spam) 0 (Not Spam) No
8 1 (Spam) 1 (Spam) No
9 0 (Not Spam) 1 (Spam) Yes
10 1 (Spam) 0 (Not Spam) Yes

Steps to Calculate Error Rate:
1. Count the total number of observations (emails):
o Total emails = 10
2. Count the number of errors:
o Errors occur when the predicted label does not match the actual label.
o In the table above, errors occurred in email samples 2, 3, 6, 9, and 10.
o Total number of errors = 5
3. Calculate the error rate: the error rate is the number of errors divided by the total number of observations (emails):

Error rate = 5 / 10 = 0.5

Result: The error rate of the model is 0.5, or 50%.
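The same calculation in a few lines of Python, using the prediction table above:

```python
actual    = [1, 0, 1, 0, 1, 0, 0, 1, 0, 1]  # 1 = spam, 0 = not spam
predicted = [1, 1, 0, 0, 1, 1, 0, 1, 1, 0]

errors = sum(a != p for a, p in zip(actual, predicted))
print(errors, errors / len(actual))  # -> 5 0.5
```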

2.8 Fractional Counting


Fractional counting is a technique used in machine learning, particularly with probabilistic classifiers such as Naive Bayes or logistic regression. Instead of assigning each data point wholly to its predicted class, each point contributes fractionally to every class in proportion to its predicted probabilities. This is often useful when there is class imbalance, since it lets each data point contribute in a fractionally weighted manner to the learning process. A simple example applying fractional counting with a probabilistic classifier is given below.
Example Problem:
We have a binary classification problem (spam vs. not spam). Our classifier (e.g., logistic regression, Naive
Bayes) outputs probabilities for each instance being a certain class. We want to use fractional counting to
adjust the contribution of each instance based on these predicted probabilities.

Formula for Fractional Counting: instead of adding 1 to the count of the single predicted class, each instance contributes its predicted probability to every class:

Fractional count(c) = Σᵢ P(c | xᵢ)

so an email predicted spam with probability 0.7 adds 0.7 to the spam count and 0.3 to the not-spam count.
Step-by-Step Example:
1. Prediction Probabilities: Suppose our classifier makes the following predictions for four emails:

2. Calculate Fractional Counts: Now we compute the fractional contributions of each email to the spam and not-spam classes by summing the predicted probabilities.

Summary: In this example, the total fractional count for spam is 1.9 and for not spam is 2.1. This technique is useful when dealing with imbalanced datasets, or when you want to refine your model's learning using the confidence of its predictions.
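A small sketch of fractional counting. The per-email probabilities below are hypothetical (the original prediction table is not reproduced in these notes); they are chosen only so that the totals match the summary above:

```python
# Hypothetical per-email spam probabilities for four emails.
p_spam = [0.9, 0.6, 0.3, 0.1]

frac_spam = sum(p_spam)                     # each email contributes P(spam)
frac_not_spam = sum(1 - p for p in p_spam)  # and P(not spam) = 1 - P(spam)
print(round(frac_spam, 1), round(frac_not_spam, 1))  # -> 1.9 2.1
```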

2.9 Characteristic Curves


Characteristic curves, such as ROC (Receiver Operating Characteristic) curves and Precision-Recall
curves, are used to evaluate the performance of classifiers across different threshold values. These curves plot
the true positive rate (sensitivity) against the false positive rate (1-specificity) or precision against recall.
The Receiver Operating Characteristic (ROC) curve is a graphical representation of a classification model's performance across various thresholds. It is created by calculating the True Positive Rate (TPR) and False Positive Rate (FPR) at every threshold value and plotting them against each other. An ideal model's ROC curve hugs the top-left corner of the plot.
The true positive rate (TPR) is calculated as:

TPR = TP / (TP + FN)

The false positive rate (FPR) is calculated as:

FPR = FP / (FP + TN)
Example: Consider a hypothetical medical test to detect a certain disease. There are 100 patients, 40 of whom have the disease. We will use this example to construct the ROC curve of a realistic (non-ideal) model. Suppose our classification model performed as follows:

                  Predicted Positive     Predicted Negative
Actual Positive   True Positive (30)     False Negative (10)
Actual Negative   False Positive (10)    True Negative (50)

Calculation of TPR and FPR is carried out as:

TPR = TP / (TP + FN) = 30 / (30 + 10) = 0.75
FPR = FP / (FP + TN) = 10 / (10 + 50) ≈ 0.17

Plotting the point (FPR, TPR) = (0.17, 0.75) together with the end points (0, 0) and (1, 1) traces the ROC curve for this test; the closer the curve bows toward the top-left corner, the better the classifier.
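A sketch of how a full ROC curve is traced by sweeping the decision threshold, on made-up scores:

```python
import numpy as np

# Made-up scores and labels for a small ROC sketch
y_true  = np.array([1, 1, 0, 1, 0, 0, 1, 0])
y_score = np.array([0.95, 0.8, 0.7, 0.6, 0.45, 0.3, 0.25, 0.1])

points = []
for t in np.unique(y_score)[::-1]:          # sweep thresholds high -> low
    y_pred = (y_score >= t).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    points.append((fp / (fp + tn), tp / (tp + fn)))  # (FPR, TPR)

print(points)  # plotting these pairs traces the ROC curve
```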

2.10 Confusion Matrices


A confusion matrix is a table used to evaluate the performance of a classification algorithm. It shows the
number of correct and incorrect predictions, broken down by each class. For a binary classification problem,
a confusion matrix might look like this:
Predicted Positive Predicted Negative
Actual Positive True Positive (TP) False Negative (FN)
Actual Negative False Positive (FP) True Negative (TN)
The confusion matrix helps you understand the performance of a classification model by breaking down its
predictions into four categories: true positives, false positives, true negatives, and false negatives. From this,
you can derive important performance metrics such as accuracy, precision, recall, and F1-score, which provide
insight into how well your model is performing in different aspects.

Problem Setup
We have a binary classification problem, where the goal is to predict whether an email is spam or not spam.
The model predicts a probability for each email, and we set a threshold (e.g., 0.5) to classify the email as
spam (1) or not spam (0).
Let’s assume we have the following true labels (the actual class) and the predicted labels for 5 emails:
Email   True Label (Actual)   Predicted Probability (Spam)   Predicted Label (Spam: 1, Not Spam: 0)
E1      Spam (1)              0.9                            1
E2      Not Spam (0)          0.6                            1
E3      Spam (1)              0.8                            1
E4      Not Spam (0)          0.2                            0
E5      Not Spam (0)          0.3                            0

Step 1: Thresholding
We set the threshold to 0.5. That means:
• If the predicted probability for spam is greater than or equal to 0.5, classify as spam (1).
• If the predicted probability for spam is less than 0.5, classify as not spam (0).

Step 2: Confusion Matrix Components


A confusion matrix shows the counts of the following four types of predictions:
1. True Positives (TP): Instances that are correctly classified as the positive class (spam).
2. True Negatives (TN): Instances that are correctly classified as the negative class (not spam).
3. False Positives (FP): Instances that are incorrectly classified as the positive class (spam).
4. False Negatives (FN): Instances that are incorrectly classified as the negative class (not spam).

Step 3: Calculate the Confusion Matrix


We can now compute the confusion matrix based on the true labels and predicted labels:
Email True Label (Actual) Predicted Label Correct / Not Correct
E1 Spam (1) 1 Correct (True Positive)
E2 Not Spam (0) 1 Incorrect (False Positive)
E3 Spam (1) 1 Correct (True Positive)
E4 Not Spam (0) 0 Correct (True Negative)
E5 Not Spam (0) 0 Correct (True Negative)
Now, we can count:
• True Positives (TP) = 2 (Emails E1 and E3 are correctly classified as spam)
• False Positives (FP) = 1 (Email E2 is incorrectly classified as spam, but it's not)
• True Negatives (TN) = 2 (Emails E4 and E5 are correctly classified as not spam)
• False Negatives (FN) = 0 (There are no emails that are incorrectly classified as not spam)

Step 4: Construct the Confusion Matrix


The confusion matrix is typically presented in this format:

                  Predicted Spam (1)   Predicted Not Spam (0)
Actual Spam       TP = 2               FN = 0
Actual Not Spam   FP = 1               TN = 2
• TP = 2: Two emails are correctly predicted as spam.
• FP = 1: One email is incorrectly predicted as spam (false alarm).
• TN = 2: Two emails are correctly predicted as not spam.
• FN = 0: No emails are incorrectly predicted as not spam.

Step 5: Performance Metrics from the Confusion Matrix


Once we have the confusion matrix, we can calculate some important classification performance metrics.
1. Accuracy: the proportion of correct predictions (both true positives and true negatives):

Accuracy = (TP + TN) / (TP + TN + FP + FN) = (2 + 2) / 5 = 0.80

So, the accuracy of the model is 80%.


2. Precision: the proportion of positive predictions that are actually correct (how many predicted spams were true spams):

Precision = TP / (TP + FP) = 2 / (2 + 1) ≈ 0.6667

So, the precision is 66.67%.


3. Recall (or Sensitivity): the proportion of actual positives (spam) that were correctly identified:

Recall = TP / (TP + FN) = 2 / (2 + 0) = 1.0

So, the recall is 100%.


4. F1-Score: the harmonic mean of precision and recall, providing a balance between the two:

F1 = 2 · (Precision · Recall) / (Precision + Recall) = 2 · (0.6667 · 1.0) / (0.6667 + 1.0) = 0.80

So, the F1-score is 80%.
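The same confusion matrix and metrics, computed in a few lines of Python from the five emails above:

```python
actual    = [1, 0, 1, 0, 0]   # E1..E5 true labels (1 = spam)
predicted = [1, 1, 1, 0, 0]   # labels after thresholding at 0.5

tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))
tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))

accuracy  = (tp + tn) / len(actual)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1        = 2 * precision * recall / (precision + recall)
print(tp, fp, tn, fn)                   # -> 2 1 2 0
print(accuracy, precision, recall, f1)  # -> 0.8 0.666... 1.0 0.8
```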

2.11 Estimating the Composition of Populations
Estimating the composition of populations often refers to determining the proportions of different subgroups
within a population based on sample data. This is commonly done in fields like biology, social science, or
market research.
Example Problem:
Suppose you are studying the population of a town with 10,000 residents. You want to estimate the proportion
of people in the town who own pets (dogs or cats). A simple random sample of 200 people is taken, and out
of the 200 people surveyed, 80 report owning pets.
You want to estimate the proportion of the entire population that owns pets.
Formula: the sample proportion is

p̂ = x / n

where x is the number of people in the sample with the attribute and n is the sample size.

Calculation:

p̂ = 80 / 200 = 0.40

This means the estimated proportion of people who own pets in the sample is 0.4, or 40%.
Estimating Population Composition:
Since you are dealing with a sample, this proportion is used to estimate the proportion in the entire population.
So, the estimated proportion of people in the entire town who own pets is 0.4 (or 40%).
Estimating the Total Number of Pet Owners:
Now, if you want to estimate the total number of people in the entire population who own pets, multiply the estimated proportion by the total population size N:

Estimated pet owners = p̂ · N = 0.4 × 10,000 = 4,000

Thus, you estimate that about 4,000 people in the town own pets.
Summary:
• Estimated proportion of pet owners in the sample: 0.4 or 40%
• Estimated total number of pet owners in the population: 4,000
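The whole estimate fits in a few lines of Python:

```python
n_sample, n_owners, population = 200, 80, 10_000

p_hat = n_owners / n_sample            # estimated proportion of pet owners
estimated_total = p_hat * population   # scale the proportion up to the town
print(p_hat, estimated_total)          # -> 0.4 4000.0
```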


Questions
4 Marks Questions
1) Explain the concept of Bayesian classifiers.
2) Discuss the role of Bayes' Theorem in Bayesian classification.
3) What is a decision boundary in the context of classification?
4) Given two-dimensional feature data from two classes, describe how you would visualize the decision boundary.
5) Derive the equation for a decision boundary in a D-dimensional feature space using matrix notation.
6) Describe the process of estimating error rates in classification models.
7) Explain the impact of unequal costs of error (misclassification).
8) What are model-based estimates in the context of classification?
9) Explain the concept of simple counting in the context of classification.
10) Define fractional counting in the context of Bayesian classifiers.
11) What are characteristic curves, and how are they used to evaluate the performance of classification models?
12) What is a confusion matrix, and how can it be used to assess the performance of a classification model?
13) How can classification results be used to estimate the composition of populations?

8 Marks Questions
1) Explain the working of a Bayesian classifier in detail. Describe the role of Bayes' Theorem in the classification process.
2) Describe in detail how decision boundaries are formed in classification problems.
3) Consider a classification problem with two-dimensional data points belonging to two classes. Explain how to determine and visualize the decision boundary between the two classes.
4) Derive the equation of a decision boundary in a D-dimensional feature space using matrix notation.
5) Discuss the importance of error rate estimation in classification tasks.
6) Explain the concept of unequal costs of error in classification problems.

Problems with solution for Practice
Problem on Model-Based Estimation:
a) Consider the following scenario where we are trying to classify whether a customer will buy a product
based on their age: We are given the following training data:
Customer Age Purchased (Class)
1 25 Yes
2 35 No
3 45 Yes
4 30 Yes
5 40 No
We want to classify a new customer whose age is 33.
We will use Model-Based Estimation (Bayesian classification) to classify the new customer. Assume the
feature (age) follows a normal distribution in each class, and we'll estimate the likelihoods using the sample
mean and standard deviation.
Solution:
Step 1: Calculate the Prior Probabilities
The prior probabilities of the "Yes" and "No" classes are simply their frequencies in the training data:

P(Yes) = 3/5 = 0.6,   P(No) = 2/5 = 0.4
Step 2: Calculate the Likelihoods for Each Class


Next, we calculate the mean and standard deviation of the Age feature in each class:

For the "Yes" class (ages 25, 45, 30): μ_Yes ≈ 33.33, σ_Yes ≈ 10.41
For the "No" class (ages 35, 40): μ_No = 37.5, σ_No ≈ 3.54

(sample standard deviations, dividing by n − 1)
Step 3: Calculate the Likelihood of the New Customer's Age


We will now calculate the likelihood of observing the new customer's age (x = 33) given each class, using the normal density

P(x | C) = (1 / (σ_C √(2π))) · exp(−(x − μ_C)² / (2σ_C²))

evaluated for the "Yes" (Purchased) and "No" (Not Purchased) classes with the means and standard deviations from Step 2.
Step 4: Compute the Posterior Probabilities


Now, we calculate the posterior probabilities for each class using Bayes' Theorem. We compute only the numerator of Bayes' Theorem for each class, since the denominator (the total probability) is the same for both classes and does not affect the classification decision. The priors are P(Yes) = 0.6 and P(No) = 0.4.

Step 5: Conclusion
Since P(Yes | 33) = 0.027 is greater than P(No | 33) = 0.0216, the model predicts that the customer will purchase the product.

b) Problem on Simple Counting for Error Estimation


Problem Statement: A factory produces light bulbs, and an inspector randomly selects 100 light bulbs from
the production line for testing. Out of these 100 bulbs, 8 are found to be defective. What is the estimated
error rate (defective rate) for the production line?
Solution using the Simple Counting Method:
The simple counting method for error-rate estimation calculates the fraction of defective items out of the total number of inspected items:

Error rate = (number of defective bulbs) / (total bulbs inspected) = 8 / 100 = 0.08

So the estimated defective rate of the production line is 8%.
c) Problem on Fractional Counting


Problem Statement: A factory produces a large number of electronic components. An inspector randomly
selects a sample of 150 components for testing. Out of these 150 components, 5 components are found to be
defective. Use the fractional counting method to estimate the defective rate for the entire batch of
components.
The formula for fractional counting here reduces to the sample fraction of defectives:

Defective rate = 5 / 150 ≈ 0.0333

so about 3.33% of the batch is estimated to be defective.
d) Problem on ROC characteristics Curve for Error Rate Estimation

Problem Statement: A medical test is conducted to detect a disease. The test produces the following results:
• True Positives (TP): 80 (patients correctly identified as having the disease)
• False Positives (FP): 20 (healthy patients incorrectly identified as having the disease)
• True Negatives (TN): 70 (healthy patients correctly identified as healthy)
• False Negatives (FN): 30 (patients with the disease incorrectly identified as healthy)
We are asked to calculate the True Positive Rate (Sensitivity) and the False Positive Rate (1 − Specificity) and to plot a basic ROC curve.
Solution: ROC Curve for the Medical Test

TPR = TP / (TP + FN) = 80 / (80 + 30) ≈ 0.727
FPR = FP / (FP + TN) = 20 / (20 + 70) ≈ 0.222

The ROC point for this test is therefore approximately (0.22, 0.73); together with (0, 0) and (1, 1) it sketches the ROC curve.
e) Problem on Confusion Matrices


Problem: A machine learning classifier is used to predict whether an email is spam or not spam (ham).
The classifier makes the following predictions on a test dataset of 1000 emails:
• True Positives (TP): 200 emails correctly classified as spam.
• False Positives (FP): 50 emails incorrectly classified as spam (but they are actually ham).
• True Negatives (TN): 700 emails correctly classified as ham.
• False Negatives (FN): 50 emails incorrectly classified as ham (but they are actually spam).
We are asked to create the confusion matrix and calculate some evaluation metrics such as accuracy,
precision, recall, and F1-score.
Solution: Confusion Matrix and Metrics Calculation

Step 1: Construct the Confusion Matrix. From the counts above, the confusion matrix looks like this:
Predicted Spam Predicted Ham
Actual Spam 200 50
Actual Ham 50 700
Step 2: Calculate Evaluation Metrics
1. Accuracy: The accuracy of the classifier is the proportion of correct predictions (both spam and ham) out of all predictions:

Accuracy = (TP + TN) / Total = (200 + 700) / 1000 = 0.90

Accuracy = 90%
2. Precision: Precision is the proportion of correctly predicted spam emails out of all the emails predicted as spam:

Precision = TP / (TP + FP) = 200 / (200 + 50) = 0.80

Precision = 80%
3. Recall (Sensitivity): Recall is the proportion of correctly predicted spam emails out of all the actual spam emails:

Recall = TP / (TP + FN) = 200 / (200 + 50) = 0.80

Recall = 80%
4. F1-Score: The F1-score is the harmonic mean of precision and recall:

F1 = 2 · (Precision · Recall) / (Precision + Recall) = 2 · (0.80 · 0.80) / (0.80 + 0.80) = 0.80

F1-Score = 80%
f) Problem on Estimating the Composition of Populations
Problem Definition: A researcher wants to estimate the proportion of male and female students in a large
university. Since it is not feasible to survey all students, the researcher randomly selects a sample of 200
students and records the following:
• 120 students are male.
• 80 students are female.
Using the sample, estimate the proportion of male and female students in the entire university.
Solution: The sample proportions can be applied directly as estimates of the population proportions:

p̂_male = 120 / 200 = 0.60 (60%),   p̂_female = 80 / 200 = 0.40 (40%)

So the student body is estimated to be about 60% male and 40% female.
