
SAP Hackfest Format[1]

The document outlines a project titled 'Fair Lens - An AI Hiring Model' aimed at addressing biases in AI-driven hiring systems. It details the project's objectives, methodology, and expected outcomes, emphasizing the need for ethical AI practices to mitigate discrimination based on gender, race, and educational background. The project proposes a lightweight tool for bias detection and mitigation, along with comprehensive reporting to support responsible hiring decisions.


Kammavari Sangham (R) 1952
K.S. Group of Institutions
K. S. SCHOOL OF ENGINEERING AND MANAGEMENT, BENGALURU - 560109
(Affiliated to VTU, Belagavi & Approved by AICTE, New Delhi, Accredited by NAAC)

DEPARTMENT OF ARTIFICIAL INTELLIGENCE & DATA SCIENCE

SAP HACKFEST - 2025


“FAIR LENS - AN AI HIRING MODEL”

Presented by:
Team Name: CODEBLOODED

Name                    USN
Ashwin M R              1KG22AD005
G Tejaswini             1KG22AD016
Hemanand S              1KG22AD021
Muhammed Yassir Khan    1KG23AD401
AGENDA

 Introduction
 Problem statement
 Project Objectives
 Proposed Methodology
 Expected Outcome
 Applications

Slide no: 01
INTRODUCTION

In the age of digital transformation, Artificial Intelligence (AI) is revolutionizing how businesses operate, especially in human
resources and recruitment. From parsing resumes to shortlisting candidates, AI-driven hiring platforms promise efficiency,
consistency, and scalability. However, these systems often inherit biases from historical data, data that may reflect societal
prejudices based on gender, ethnicity, educational background, or age. If left unchecked, these biases can systematically
marginalize qualified candidates and reinforce inequality at scale. Ethical AI is no longer optional: governments,
corporations, and civil society are raising critical questions about fairness, accountability, and transparency in AI systems.
Yet most companies lack the tools or expertise to identify and mitigate bias in their AI pipelines. This creates a pressing
need for practical, accessible solutions that ensure AI decisions are just, explainable, and legally compliant.

Slide no: 02
PROBLEM STATEMENT

• As businesses increasingly adopt AI models for hiring, a growing concern has emerged: these AI systems,
trained on historical hiring data, often reproduce the very discriminatory patterns they are meant to eliminate,
rejecting candidates based on gender, race, or educational background even when they are equally or more
qualified than their peers.

• This not only poses moral and ethical risks but also leads to serious legal and reputational consequences for
organizations. Furthermore, many AI developers and HR professionals lack visibility into how decisions
are made by their models, and have no built-in tools to measure, explain, or reduce these biases.

• Our project Fair Lens addresses this gap by auditing AI hiring models. It uses cutting-edge Explainable
AI (XAI) methods and fairness metrics to produce actionable reports, helping organizations make
responsible hiring decisions.

Slide no: 3
PROJECT OBJECTIVES

• To design and develop a lightweight, easy-to-integrate AI tool for bias detection in hiring models.

• To evaluate fairness in model decisions using ethical AI metrics such as Statistical Parity and Disparate Impact.

• To implement Explainable AI (XAI) techniques like SHAP or LIME to visualize feature influence and promote transparency.

• To apply and compare different bias mitigation techniques (e.g., reweighting, oversampling) for ethical model improvement.
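
The two fairness metrics named above can be computed directly from a model's selections. The following is an illustrative sketch with made-up toy data (the `selected`/`group` arrays and the function name are not part of the project); it treats Statistical Parity as the difference in selection rates and Disparate Impact as their ratio, where a ratio below 0.8 fails the common "80% rule":

```python
import numpy as np

def fairness_metrics(selected, group):
    """selected: 1 = hired, 0 = rejected; group: 1 = privileged, 0 = unprivileged."""
    selected = np.asarray(selected)
    group = np.asarray(group)
    rate_priv = selected[group == 1].mean()    # selection rate, privileged group
    rate_unpriv = selected[group == 0].mean()  # selection rate, unprivileged group
    statistical_parity = rate_unpriv - rate_priv  # 0 means parity
    disparate_impact = rate_unpriv / rate_priv    # 1 means parity; < 0.8 fails the 80% rule
    return statistical_parity, disparate_impact

# Toy example: 6 privileged candidates (4 hired), 6 unprivileged (2 hired)
selected = [1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0]
group    = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
sp, di = fairness_metrics(selected, group)
print(sp, di)  # statistical parity ≈ -0.333, disparate impact = 0.5 (biased)
```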

Slide no: 4
PROPOSED METHODOLOGY

• Step 1: Data Input - Import candidate data with features such as gender, education, and experience.

• Step 2: Bias Detection - Apply statistical metrics (e.g., disparate impact, equal opportunity difference) to detect bias
across groups.

• Step 3: Model Explanation - Use SHAP or LIME to interpret model decisions and visualize the contribution of sensitive
features.

• Step 4: Bias Mitigation - Apply techniques such as reweighting, oversampling, or adversarial debiasing to reduce unfair
influence.

• Step 5: Fairness Report - Generate a visual and textual report showing before-and-after metrics, a fairness score, and
recommendations.
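
As a sketch of the reweighting technique mentioned in Step 4, the snippet below follows the classic reweighing idea (Kamiran & Calders): each (group, label) cell gets a weight equal to its expected count under independence divided by its observed count, so that group membership and the outcome look statistically independent in the weighted training data. The toy `group`/`hired` arrays are illustrative, and the resulting weights would be passed as `sample_weight` to whatever classifier is trained:

```python
import numpy as np

def reweigh(group, label):
    """Return per-sample weights making group and label independent when weighted."""
    group, label = np.asarray(group), np.asarray(label)
    n = len(group)
    w = np.empty(n, dtype=float)
    for g in (0, 1):
        for y in (0, 1):
            mask = (group == g) & (label == y)
            # expected cell count if group and label were independent, over observed count
            expected = (group == g).sum() * (label == y).sum() / n
            w[mask] = expected / mask.sum()
    return w

# Toy example: the privileged group (1) is hired twice as often as group 0
group = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
hired = [1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0]
weights = reweigh(group, hired)
# After weighting, both groups have the same (weighted) selection rate.
```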

Slide no: 5
EXPECTED OUTCOMES

• Bias Identification:
- Detects gender, ethnicity, age, and educational bias in AI hiring models.
- Computes fairness metrics like Statistical Parity, Disparate Impact.

• Bias Mitigation:
- Applies fairness techniques (e.g., reweighting, oversampling).
- Reduces bias with minimal impact on accuracy.

• Fairness Report Generation:


- Outputs comprehensive bias reports with graphs and interpretations.
- Summarizes fairness metrics and applied mitigation strategies.
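
The textual half of the fairness report could be as simple as a before/after metrics table. This is a minimal sketch; the function name and the metric values are illustrative placeholders, not actual project results:

```python
def fairness_report(metrics_before, metrics_after, mitigation):
    """Render a plain-text before/after fairness summary."""
    lines = [f"Fair Lens fairness report (mitigation: {mitigation})",
             f"{'metric':<22}{'before':>10}{'after':>10}"]
    for name in metrics_before:
        lines.append(f"{name:<22}{metrics_before[name]:>10.3f}{metrics_after[name]:>10.3f}")
    return "\n".join(lines)

# Illustrative numbers only
report = fairness_report(
    {"disparate_impact": 0.50, "statistical_parity": -0.33},
    {"disparate_impact": 0.92, "statistical_parity": -0.04},
    mitigation="reweighting",
)
print(report)
```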

Slide no: 6
APPLICATIONS

• Recruitment & HR Tech:


- Supports unbiased candidate screening.
- Reduces legal and reputational risks.

• Audit Tools for AI:


- Used by internal/external auditors for AI ethics compliance.
- Provides audit-ready documentation and visualizations.

• Educational Tool:
- Demonstrates real-world bias detection and correction.
- Supports training in ethical AI practices.

• AI Ethics Consulting:
- Helps firms verify and improve fairness in proprietary AI tools.

Slide no: 7
