
quantizer_hqq should not require a gpu/cuda device to run #38439

Open
@learning-chip

Description


quantizer_hqq.py requires a CUDA device:

if not torch.cuda.is_available():
    raise RuntimeError("No GPU found. A GPU is needed for quantization.")

However, the original HQQ library also runs on the CPU by falling back to the default ATen operators: https://github.com/mobiusml/hqq?tab=readme-ov-file#usage-with-models
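A minimal sketch of how the check could be relaxed to allow a CPU fallback instead of raising unconditionally. The function name `resolve_quantization_device` and the `allow_cpu` flag are hypothetical illustrations, not part of the transformers API; the actual fix would live inside `quantizer_hqq.py`'s environment validation.

```python
def resolve_quantization_device(cuda_available: bool, allow_cpu: bool = True) -> str:
    """Hypothetical device resolution: prefer CUDA, optionally fall back to CPU.

    In the real quantizer, `cuda_available` would come from
    `torch.cuda.is_available()`; it is a plain argument here so the
    logic can be shown (and tested) without a GPU or torch installed.
    """
    if cuda_available:
        return "cuda"
    if allow_cpu:
        # HQQ supports CPU execution via default aten operators,
        # so quantization can proceed, just more slowly.
        return "cpu"
    raise RuntimeError("No GPU found. A GPU is needed for quantization.")
```

With `allow_cpu=True` (the behavior this issue requests), a machine without CUDA would quantize on the CPU; keeping `allow_cpu=False` preserves the current hard requirement.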

