Batched multi_dot / chain_matmul + let it accept a tensor instead of tuple #55261

@vadimkantorov

Description

This is useful for computing an accumulated transition matrix from individual transition matrices:

  • First use case: (BxN1xN2, BxN2xN3, ...) -> BxN1xN3
  • Second use case: (BxTxNxN, dim = 1) -> BxNxN; this would save an unbind call and maybe a GPU sync (could also be (TxBxNxN, dim = 0) -> BxNxN). A workaround sketch follows after this list.
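
Both use cases can be emulated today with an unbind plus a left-to-right fold over torch.bmm. The sketch below only illustrates the requested semantics; batched_chain_matmul is a hypothetical name, not an existing PyTorch API, and a native op could additionally pick a better multiplication order, as torch.linalg.multi_dot does in the unbatched case:

```python
import functools
import torch

def batched_chain_matmul(mats, dim=None):
    """Hypothetical helper, not an existing PyTorch API.

    First use case: mats is a sequence (BxN1xN2, BxN2xN3, ...) -> BxN1xN3.
    Second use case: mats is a single stacked tensor, e.g. BxTxNxN with
    dim=1 (or TxBxNxN with dim=0) -> BxNxN.
    """
    if dim is not None:
        # Unbind along the chain dimension; a native op could avoid this.
        mats = mats.unbind(dim)
    # Left-to-right fold with batched matmul.
    return functools.reduce(torch.bmm, mats)

# Example: accumulated transition matrix over T steps.
B, T, N = 4, 7, 5
P = torch.rand(B, T, N, N).softmax(dim=-1)   # row-stochastic transition matrices
total = batched_chain_matmul(P, dim=1)       # B x N x N
assert torch.allclose(total, functools.reduce(torch.bmm, P.unbind(1)))
```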

It may also be useful to allow this for logmm in addition to mm:
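
For reference, matrix multiplication in log space can be written with logsumexp. This is a minimal sketch under the assumption that logmm means a matmul in the (logsumexp, +) semiring; logmm is not an existing PyTorch op:

```python
import torch

def logmm(log_a, log_b):
    # Hypothetical log-space matmul (not a PyTorch op):
    # out[..., i, j] = logsumexp_k(log_a[..., i, k] + log_b[..., k, j]).
    return torch.logsumexp(log_a.unsqueeze(-1) + log_b.unsqueeze(-3), dim=-2)

# Chaining logmm accumulates log-transition matrices without leaving log space.
B, N = 4, 5
log_p1 = torch.rand(B, N, N).log_softmax(dim=-1)
log_p2 = torch.rand(B, N, N).log_softmax(dim=-1)
log_total = logmm(log_p1, log_p2)            # B x N x N
assert torch.allclose(log_total.exp(), torch.bmm(log_p1.exp(), log_p2.exp()), atol=1e-6)
```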

In the wild this happens in https://github.com/ajabri/videowalk/blob/master/code/model.py#L149 (computing the cycle's total transition matrix)

Originally requested here:

cc @jianyuh @nikitaved @pearu @mruberry @heitorschueroff @walterddr @IvanYashchuk

Metadata

Assignees

No one assigned

    Labels

    • enhancement - Not as big of a feature, but technically not a bug. Should be easy to fix
    • module: linear algebra - Issues related to specialized linear algebra operations in PyTorch; includes matrix multiply matmul
    • triaged - This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
