Labels:
- module: flaky-tests (Problem is a flaky test in CI)
- oncall: cpu inductor (CPU Inductor issues for Intel team to triage)
- skipped (Denotes a (flaky) test currently skipped in CI)
Description
Platforms: asan, linux, slow
This test was disabled because it is failing in CI. See recent examples and the most recent trunk workflow logs.
Over the past 6 hours, it has been flaky in 3 workflows, with 3 failures and 3 successes.
Debugging instructions (after clicking on the recent samples link):
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. Flaky tests are now shielded from developers, so CI stays green even when this test fails, which makes the relevant log snippets harder to find.
To find relevant log snippets:
- Click on the workflow logs linked above
- Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
- Grep for `test_qlinear_add_int8_mixed_bf16_use_relu_False_is_qat_False_is_dynamic_True`
- There should be several runs of the test (flaky tests are rerun in CI) whose logs you can study; a helper for scanning downloaded logs is sketched below.
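If you prefer to search a downloaded copy of the raw logs rather than the in-browser viewer, here is a minimal Python sketch. It assumes the raw logs were saved locally as `logs.txt` (a placeholder name); the test name is taken from this issue and the context width is arbitrary.

```python
# Hypothetical helper: scan a locally saved raw log for the failing test name
# and print some surrounding lines. "logs.txt" is a placeholder for whatever
# file you downloaded the workflow logs to.
from pathlib import Path

TEST_NAME = "test_qlinear_add_int8_mixed_bf16_use_relu_False_is_qat_False_is_dynamic_True"
CONTEXT = 20  # lines of context around each hit; tune as needed

lines = Path("logs.txt").read_text(errors="replace").splitlines()
for i, line in enumerate(lines):
    if TEST_NAME in line:
        start, end = max(0, i - CONTEXT), min(len(lines), i + CONTEXT + 1)
        print(f"--- match at line {i + 1} ---")
        print("\n".join(lines[start:end]))
```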
Sample error message
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_mkldnn_pattern_matcher.py", line 2935, in test_qlinear_add_int8_mixed_bf16
self._qlinear_add_test_helper(
File "/var/lib/jenkins/workspace/test/inductor/test_mkldnn_pattern_matcher.py", line 2894, in _qlinear_add_test_helper
self._test_code_common(
File "/var/lib/jenkins/workspace/test/inductor/test_mkldnn_pattern_matcher.py", line 246, in _test_code_common
actual, (source_code,) = run_and_get_code(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/utils.py", line 1928, in run_and_get_code
result = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 395, in __call__
return super().__call__(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 797, in compile_wrapper
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1886, in _call_user_compiler
raise BackendCompilerFailed(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1861, in _call_user_compiler
compiled_fn = compiler_fn(gm, example_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 150, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/__init__.py", line 2392, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 2468, in compile_fx
return aot_autograd(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 109, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1199, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/autograd_cache.py", line 1150, in load
compiled_fn = dispatch_and_compile()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1184, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 575, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 836, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 246, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1955, in fw_compiler_freezing
optimized_function = inner_compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 773, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 925, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1622, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1485, in codegen_and_compile
compiled_module = graph.compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2289, in compile_to_module
return self._compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2299, in _compile_to_module
mod = self._compile_to_module_lines(wrapper_code)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2367, in _compile_to_module_lines
mod = PyCodeCache.load_by_key_path(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3237, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmp157wk8xc/fk/cfkzfsoatpzvoc2vpharnmlodxfgihh2gksjiws6suoxbkimxtap.py", line 491, in <module>
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 547, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 567, in _wait_futures
kernel = result.result()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3971, in result
return self.result_fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 2714, in future
result = get_result()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 2510, in load_fn
future.result()
File "/opt/conda/envs/py_3.10/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/opt/conda/envs/py_3.10/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/opt/conda/envs/py_3.10/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 2540, in _worker_compile_cpp
builder.build()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cpp_builder.py", line 1711, in build
run_compile_cmd(build_cmd, cwd=_build_tmp_dir)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cpp_builder.py", line 401, in run_compile_cmd
_run_compile_cmd(cmd_line, cwd)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cpp_builder.py", line 396, in _run_compile_cmd
raise exc.CppCompileError(cmd, output) from e
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
CppCompileError: C++ compile error
Command:
g++ /tmp/tmp157wk8xc/pr/cpr2f3kpb4x35ufo2mcrqrqxy5omx4j3g4dqatmu3xyazbxczolv.main.cpp -D TORCH_INDUCTOR_CPP_WRAPPER -D STANDALONE_TORCH_HEADER -D C10_USING_CUSTOM_GENERATED_MACROS -D CPU_CAPABILITY_AVX512 -shared -fPIC -O3 -DNDEBUG -fno-trapping-math -funsafe-math-optimizations -ffinite-math-only -fno-signed-zeros -fno-math-errno -fexcess-precision=fast -fno-finite-math-only -fno-unsafe-math-optimizations -ffp-contract=off -fno-tree-loop-vectorize -march=native -Wall -std=c++17 -Wno-unused-variable -Wno-unknown-pragmas -pedantic -fopenmp -include /tmp/torchinductor_jenkins/precompiled_headers/cx2mc37rivycrx5ql7mjs62h4cve22fn3jcq5jiqhcagqg7vgeg7.h -I/opt/conda/envs/py_3.10/include/python3.10 -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -mavx512f -mavx512dq -mavx512vl -mavx512bw -mfma -o /tmp/tmp157wk8xc/pr/cpr2f3kpb4x35ufo2mcrqrqxy5omx4j3g4dqatmu3xyazbxczolv.main.so -ltorch -ltorch_cpu -ltorch_python -lgomp -L/opt/conda/envs/py_3.10/lib -L/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/lib
Output:
g++: internal compiler error: Segmentation fault signal terminated program cc1plus
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-11/README.Bugs> for instructions.
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ASAN=1 PYTORCH_TEST_WITH_UBSAN=1 PYTORCH_TEST_WITH_SLOW=1 PYTORCH_TEST_SKIP_FAST=1 python test/inductor/test_mkldnn_pattern_matcher.py TestPatternMatcher.test_qlinear_add_int8_mixed_bf16_use_relu_False_is_qat_False_is_dynamic_True
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
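Because the failure is an intermittent g++ internal compiler error, a single local run may well pass. Below is a minimal sketch (not part of the original report) that reruns the test a few times with the same environment as the command above, stopping at the first failure; the retry count is arbitrary, and `TORCHDYNAMO_VERBOSE=1` is added per the suggestion above.

```python
# Sketch: rerun the disabled test in a loop to try to catch the intermittent
# g++ segfault locally. Assumes it is run from the base repo dir.
import os
import subprocess

env = dict(
    os.environ,
    PYTORCH_TEST_WITH_ASAN="1",
    PYTORCH_TEST_WITH_UBSAN="1",
    PYTORCH_TEST_WITH_SLOW="1",
    PYTORCH_TEST_SKIP_FAST="1",
    TORCHDYNAMO_VERBOSE="1",  # fuller internal stack traces on failure
)
cmd = [
    "python",
    "test/inductor/test_mkldnn_pattern_matcher.py",
    "TestPatternMatcher.test_qlinear_add_int8_mixed_bf16_use_relu_False_is_qat_False_is_dynamic_True",
]
for attempt in range(10):  # arbitrary retry count
    result = subprocess.run(cmd, env=env, capture_output=True, text=True)
    if result.returncode != 0:
        print(f"attempt {attempt + 1} failed:\n{result.stdout}\n{result.stderr}")
        break
else:
    print("no failure reproduced in 10 attempts")
```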
Test file path: inductor/test_mkldnn_pattern_matcher.py
For all disabled tests (by GitHub issue), see https://hud.pytorch.org/disabled.
cc @clee2000