
FX graph visualization by cehongwang · Pull Request #3528 · pytorch/TensorRT · GitHub

FX graph visualization #3528


Open

cehongwang wants to merge 12 commits into main from the graph-visualization branch

Conversation

cehongwang
Collaborator

Description

Debugging FX graphs can be challenging due to the complexity of analyzing node connections directly from the FX table. Therefore, providing a clear visualization of the FX graph is essential to facilitate effective debugging.
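For context, here is a minimal sketch of how an FX graph can be rendered with PyTorch's built-in FxGraphDrawer. It illustrates the kind of drawing this feature produces inside the lowering pipeline, but it is not necessarily the exact mechanism used by this PR; it assumes pydot and graphviz are installed, and the module and file names are placeholders.

import torch
from torch.fx.passes.graph_drawer import FxGraphDrawer


class TinyModule(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1


# Trace the module into an FX GraphModule, then render its graph to an SVG file.
gm = torch.fx.symbolic_trace(TinyModule())
drawer = FxGraphDrawer(gm, "tiny_module")  # requires pydot + graphviz to be installed
drawer.get_dot_graph().write_svg("tiny_module.svg")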

Fixes # (issue)

Type of change

Please delete options that are not relevant and/or add your own.

  • New feature (non-breaking change which adds functionality)

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that relevant reviewers are notified

@github-actions github-actions bot added component: lowering Issues re: The lowering / preprocessing passes component: build system Issues re: Build system component: api [Python] Issues re: Python API component: dynamo Issues relating to the `torch.compile` or `torch._dynamo.export` paths labels May 21, 2025
@github-actions github-actions bot requested a review from narendasan May 21, 2025 17:49
@github-actions github-actions bot left a comment
There are some changes that do not conform to Python style guidelines:

--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/lowering/passes/pass_manager.py	2025-05-23 04:32:05.196604+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/lowering/passes/pass_manager.py	2025-05-23 04:32:28.295008+00:00
@@ -31,11 +31,11 @@
                Callable[
                    [torch.fx.GraphModule, CompilationSettings], torch.fx.GraphModule
                ]
            ]
        ] = None,
-        constraints: Optional[List[Callable]] = None
+        constraints: Optional[List[Callable]] = None,
    ):
        super().__init__(passes, constraints)

    @classmethod
    def build_from_passlist(
@@ -66,11 +66,11 @@

    def remove_pass_with_index(self, index: int) -> None:
        del self.passes[index]

    def insert_debug_pass_before(
-        self, passes: List[str], output_path_prefix: str=tempfile.gettempdir()
+        self, passes: List[str], output_path_prefix: str = tempfile.gettempdir()
    ) -> None:
        """Insert debug passes in the PassManager pass sequence prior to the execution of a particular pass.

        Args:
            passes: List of pass names to insert debug passes before
@@ -80,18 +80,22 @@
        in the pass sequence.
        """
        new_pass_list = []
        for ps in self.passes:
            if ps.__name__ in passes:
-                new_pass_list.append(_generate_draw_fx_graph_pass(output_path_prefix, f"before_{ps.__name__}"))
+                new_pass_list.append(
+                    _generate_draw_fx_graph_pass(
+                        output_path_prefix, f"before_{ps.__name__}"
+                    )
+                )
            new_pass_list.append(ps)

        self.passes = new_pass_list
        self._validated = False

    def insert_debug_pass_after(
-        self, passes: List[str], output_path_prefix: str=tempfile.gettempdir()
+        self, passes: List[str], output_path_prefix: str = tempfile.gettempdir()
    ) -> None:
        """Insert debug passes in the PassManager pass sequence after the execution of a particular pass.

        Args:
            passes: List of pass names to insert debug passes after
@@ -102,12 +106,15 @@
        """
        new_pass_list = []
        for ps in self.passes:
            new_pass_list.append(ps)
            if ps.__name__ in passes:
-                new_pass_list.append(_generate_draw_fx_graph_pass(output_path_prefix, f"after_{ps.__name__}"))
-
+                new_pass_list.append(
+                    _generate_draw_fx_graph_pass(
+                        output_path_prefix, f"after_{ps.__name__}"
+                    )
+                )

        self.passes = new_pass_list
        self._validated = False

    def __call__(self, gm: Any, settings: CompilationSettings) -> Any:
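
The diff above also shows the new debug-pass insertion hooks, insert_debug_pass_before and insert_debug_pass_after. A rough usage sketch follows; the import path, the pass-manager object, and the pass name are assumptions made for illustration, not confirmed API from this PR.

import tempfile

# ATEN_PRE_LOWERING_PASSES is referenced elsewhere in this PR; treating it as a
# pass-manager instance that exposes the new insert_debug_pass_* methods is an
# assumption made here for illustration.
from torch_tensorrt.dynamo.lowering.passes import ATEN_PRE_LOWERING_PASSES

# Dump an FX graph drawing immediately before and after a chosen lowering pass.
# "remove_detach" is a placeholder pass name; output defaults to the temp dir.
ATEN_PRE_LOWERING_PASSES.insert_debug_pass_before(
    ["remove_detach"], output_path_prefix=tempfile.gettempdir()
)
ATEN_PRE_LOWERING_PASSES.insert_debug_pass_after(
    ["remove_detach"], output_path_prefix=tempfile.gettempdir()
)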

@cehongwang cehongwang force-pushed the graph-visualization branch from 2a91f9d to f6a3f86 Compare May 27, 2025 19:29
@github-actions github-actions bot added component: core Issues re: The core compiler component: runtime labels May 28, 2025
@@ -15,6 +15,7 @@
DLA_SRAM_SIZE = 1048576
ENGINE_CAPABILITY = EngineCapability.STANDARD
WORKSPACE_SIZE = 0
+ENGINE_VIS_DIR = None
@narendasan narendasan (Collaborator) May 28, 2025

Just set to temp_dir/torch_tensorrt_debug or something
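
A minimal sketch of that suggestion, assuming the constant keeps its current name (this is illustrative, not the committed change):

import os
import tempfile

# Default the visualization output directory to a stable location under the
# system temp dir rather than None, per the review suggestion above.
ENGINE_VIS_DIR = os.path.join(tempfile.gettempdir(), "torch_tensorrt_debug")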

@narendasan narendasan (Collaborator)

@cehongwang can you target the debugging branch and we can pull all those changes in at once?

@cehongwang cehongwang self-assigned this May 28, 2025
@cehongwang cehongwang force-pushed the graph-visualization branch from a6fd323 to 031267c Compare May 29, 2025 16:32
@github-actions github-actions bot added the component: conversion Issues re: Conversion stage label May 30, 2025
@cehongwang cehongwang force-pushed the graph-visualization branch 2 times, most recently from d3e3058 to 74bb32d Compare June 2, 2025 20:51
@cehongwang cehongwang force-pushed the graph-visualization branch from 6a8e2a0 to 2fff7ad Compare June 3, 2025 19:38
@cehongwang cehongwang force-pushed the graph-visualization branch from 6e2af0b to 2c92ec0 Compare June 6, 2025 17:43
@cehongwang cehongwang force-pushed the graph-visualization branch from 2c92ec0 to 861a684 Compare June 6, 2025 17:46
ATEN_PRE_LOWERING_PASSES,
)

_LOGGER = logging.getLogger("torch_tensorrt [TensorRT Conversion Context]")
Collaborator

Should the default be conversion context?

@narendasan narendasan (Collaborator) Jun 6, 2025

Doesn't need to. This is just the channel that messages from this file will be submitted on, but the config needs to be in the debugger.

@cehongwang cehongwang (Collaborator, Author)

Should I delete this line?

@cehongwang cehongwang force-pushed the graph-visualization branch from 861a684 to 95db34d Compare June 6, 2025 20:28
@cehongwang cehongwang force-pushed the graph-visualization branch from 95db34d to fb5dc81 Compare June 6, 2025 22:25
Labels
  • cla signed
  • component: api [Python] (Issues re: Python API)
  • component: build system (Issues re: Build system)
  • component: conversion (Issues re: Conversion stage)
  • component: core (Issues re: The core compiler)
  • component: dynamo (Issues relating to the `torch.compile` or `torch._dynamo.export` paths)
  • component: lowering (Issues re: The lowering / preprocessing passes)
  • component: runtime
Projects
None yet
Development

Successfully merging this pull request may close these issues.

4 participants







