Multimodal Emotion and Sentiment Analysis Frontend: a unified deep learning framework for emotion and sentiment recognition from video, audio, and text, powered by BERT, ResNet3D, and CNNs. End-to-end training and robust evaluation, built for research and real-world affective computing.

lalitdotdev/neurosense-frontend

🧠 NeuroSense: Multimodal Sentiment & Emotion Classification Frontend

Next.js Prisma AWS

A full-stack platform for secure video uploads and AI-powered sentiment analysis, featuring robust API management and usage quotas.

Decode human emotion and sentiment from video, audio, and text—at scale, in real-time, and with research-grade accuracy.


📖 Overview

NeuroSense is a next-generation multimodal AI framework that fuses video, audio, and text to recognize emotions and sentiments in human communication. Designed for research, real-world deployment, and SaaS applications, NeuroSense combines the power of deep learning, cloud scalability, and a modern web interface.


🚀 Features

  • 🎥 Video Frame Analysis — Extracts facial and contextual cues using ResNet3D.
  • 🎙️ Audio Feature Extraction — Captures vocal emotion with Mel spectrograms and CNNs.
  • 📝 Text Embeddings with BERT — Understands semantic sentiment from transcripts.
  • 🔗 Multimodal Fusion — Late fusion of 128D features from each modality for robust affect detection.
  • 📊 Dual-Head Classification — Simultaneous prediction of 7 emotion classes and 3 sentiment classes.
  • 🧪 Model Training & Evaluation — Efficient PyTorch pipeline with TensorBoard logging.
  • ☁️ Scalable Cloud Deployment — AWS SageMaker for training, S3 for data, and real-time inference endpoints.
  • 🔐 Authentication & API Keys — Auth.js and secure key management for SaaS users.
  • 📈 Usage Quota Tracking — Monitor and limit API usage per user.
  • 🌐 Modern Frontend — Next.js, Tailwind CSS, and T3 Stack for a seamless user experience.
  • 🖼️ Rich Visualizations — Confusion matrices, training curves, and interactive analytics.

🏗️ Model Architecture

```
Video Frames ──► [ResNet3D] ──┐
Text ─────────► [BERT] ───────┼─► [Fusion Layer] ─┬─► [Emotion Classifier]   ─► 7 emotions
Audio ──► [CNN + Mel Spec] ───┘                   └─► [Sentiment Classifier] ─► 3 sentiments
```

🚀 Platform Features

🔒 Secure Authentication

  • JWT-based session management with NextAuth
  • Credential authentication with bcrypt hashing
  • Role-based API key system with crypto-safe secret generation
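
The key-management idea above can be sketched with Node's built-in `crypto` module: generate a high-entropy key, hand it to the user once, and persist only a hash so a database leak never exposes live keys. Helper names, the `ns_` prefix, and the SHA-256 choice are illustrative assumptions, not the repository's actual code:

```typescript
import { randomBytes, createHash } from "node:crypto";

// Generate a crypto-safe API key: a recognizable prefix plus 32 random bytes.
// The raw key is shown to the user exactly once.
export function generateApiKey(prefix = "ns_"): string {
  return prefix + randomBytes(32).toString("hex");
}

// Store only this digest; on each request, hash the presented key and
// look the digest up, so plaintext keys never touch the database.
export function hashApiKey(key: string): string {
  return createHash("sha256").update(key).digest("hex");
}
```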

🎥 Video Processing Pipeline

  • Secure S3 presigned URL generation for uploads
  • AWS SageMaker integration for ML analysis
  • Supported formats: MP4, MOV, AVI

⚡ Developer-Friendly API

  • Type-safe endpoints with Zod validation
  • Usage quotas with monthly resets
  • Interactive API documentation with TS/cURL examples

📊 Dashboard Features

  • Animated UI components with real-time feedback
  • Analysis visualization with emotion timelines
  • Quota monitoring and API key management

🛠 Tech Stack

Core

  • Next.js 13 (App Router)
  • TypeScript
  • Prisma (PostgreSQL)
  • NextAuth.js

AI/Cloud

  • AWS S3 (Video Storage)
  • AWS SageMaker (ML Inference)
  • AWS SDK v3

Utilities

  • Zod (Schema Validation)
  • bcryptjs (Password Hashing)
  • react-icons (UI Icons)

🚀 Getting Started

Prerequisites

  • Node.js 18+
  • PostgreSQL
  • AWS account with S3/SageMaker access

Installation


```bash
git clone https://github.com/yourusername/neurosense-frontend.git
cd neurosense-frontend
bun install
```

Configuration

  1. Create .env file:

```env
DATABASE_URL="postgresql://..."
AWS_ACCESS_KEY_ID="..."
AWS_SECRET_ACCESS_KEY="..."
AWS_INFERENCE_BUCKET="..."
AWS_ENDPOINT_NAME="..."
```

  2. Initialize database:

```bash
bunx prisma migrate dev
```
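
The API-key and quota features imply a Prisma data model roughly like the following. This is a hypothetical sketch of the shape such a schema could take; the repository's actual model and field names may differ:

```prisma
model User {
  id           String   @id @default(cuid())
  email        String   @unique
  passwordHash String
  apiKeys      ApiKey[]
}

model ApiKey {
  id         String   @id @default(cuid())
  keyHash    String   @unique // only the hash of the key is stored
  usageCount Int      @default(0) // reset monthly against the user's quota
  userId     String
  user       User     @relation(fields: [userId], references: [id])
  createdAt  DateTime @default(now())
}
```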

Running


pnpm dev

🔑 API Usage

Get Upload URL


```bash
curl -X POST {BASE_URL}/api/upload-url \
  -H "Authorization: Bearer {API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"fileType": ".mp4"}'
```

Upload File


```bash
curl -X PUT "{PRESIGNED_URL}" \
  -H "Content-Type: video/mp4" \
  --data-binary @video.mp4
```

Analyze Video


```bash
curl -X POST {BASE_URL}/api/sentiment-inference \
  -H "Authorization: Bearer {API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"key": "inference/uuid.mp4"}'
```
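
The three calls above can be chained from TypeScript with `fetch`. The base URL and the response field names (`url`, `key`) are assumptions inferred from the cURL examples, so treat this as a sketch rather than a definitive client:

```typescript
const BASE_URL = process.env.NEUROSENSE_URL ?? "http://localhost:3000";

// Map a supported file extension to its MIME type for the S3 PUT.
export function contentTypeFor(fileType: string): string {
  const map: Record<string, string> = {
    ".mp4": "video/mp4",
    ".mov": "video/quicktime",
    ".avi": "video/x-msvideo",
  };
  const ct = map[fileType];
  if (!ct) throw new Error(`Unsupported file type: ${fileType}`);
  return ct;
}

export async function analyzeVideo(apiKey: string, file: Blob, fileType = ".mp4") {
  const auth = { Authorization: `Bearer ${apiKey}` };

  // 1. Request a presigned upload URL.
  const { url, key } = await fetch(`${BASE_URL}/api/upload-url`, {
    method: "POST",
    headers: { ...auth, "Content-Type": "application/json" },
    body: JSON.stringify({ fileType }),
  }).then((r) => r.json());

  // 2. Upload the video bytes directly to S3.
  await fetch(url, {
    method: "PUT",
    headers: { "Content-Type": contentTypeFor(fileType) },
    body: file,
  });

  // 3. Trigger SageMaker inference on the uploaded object.
  return fetch(`${BASE_URL}/api/sentiment-inference`, {
    method: "POST",
    headers: { ...auth, "Content-Type": "application/json" },
    body: JSON.stringify({ key }),
  }).then((r) => r.json());
}
```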

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch: `git checkout -b feature/amazing-feature`
  3. Commit changes: `git commit -m 'feat: add amazing feature'`
  4. Push the branch: `git push origin feature/amazing-feature`
  5. Open a Pull Request
