feat: add special_shifted_chebyshev_polynomial_u base #479

Draft
voltjia wants to merge 2 commits into feat/torch-codegen from codex/add-special_shifted_chebyshev_polynomial_u-base

Conversation

@voltjia
Collaborator

@voltjia voltjia commented May 5, 2026

Summary

  • Add the hand-written InfiniOps base class for special_shifted_chebyshev_polynomial_u in src/base/special_shifted_chebyshev_polynomial_u.h.
  • Let the torch code generator reuse src/base/special_shifted_chebyshev_polynomial_u.h instead of emitting generated/base/special_shifted_chebyshev_polynomial_u.h.
  • Apply the base class member-spacing convention required by scripts/check_conventions.py.
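The header itself is not reproduced in this description. As a rough sketch only — the class name follows the PR title, but the method name, signature, and the scalar reference override below are illustrative assumptions, not the real InfiniOps interface — a base declaration of this shape would evaluate the shifted Chebyshev polynomial of the second kind, U*_n(x) = U_n(2x - 1), which satisfies the recurrence U_{k+1}(y) = 2y U_k(y) - U_{k-1}(y) with U_0(y) = 1 and U_1(y) = 2y:

```cpp
#include <cstddef>

namespace infiniops {

// Hypothetical sketch of the declaration in
// `src/base/special_shifted_chebyshev_polynomial_u.h`; the member names
// and signature are assumptions, not the actual file contents.
class SpecialShiftedChebyshevPolynomialU {
public:
    virtual ~SpecialShiftedChebyshevPolynomialU() = default;

    // Element-wise U*_n(x) = U_n(2x - 1); inputs first, outputs last.
    virtual void Compute(const double* x, const double* n, double* out,
                         std::size_t size) const = 0;
};

// Trivial scalar reference override, included here only so the sketch is
// self-contained; real platform implementations are out of scope for this PR.
class ReferenceImpl : public SpecialShiftedChebyshevPolynomialU {
public:
    void Compute(const double* x, const double* n, double* out,
                 std::size_t size) const override {
        for (std::size_t i = 0; i < size; ++i) {
            const double y = 2.0 * x[i] - 1.0;
            double prev = 1.0;     // U_0(y)
            double cur = 2.0 * y;  // U_1(y)
            const int deg = static_cast<int>(n[i]);
            if (deg == 0) {
                out[i] = prev;
                continue;
            }
            // Three-term recurrence U_{k+1}(y) = 2y U_k(y) - U_{k-1}(y).
            for (int k = 2; k <= deg; ++k) {
                const double next = 2.0 * y * cur - prev;
                prev = cur;
                cur = next;
            }
            out[i] = cur;
        }
    }
};

}  // namespace infiniops
```

A quick sanity check on the recurrence: at x = 1 we have y = 1 and U_n(1) = n + 1, so U*_3(1) = 4; at x = 0 we have y = -1 and U_n(-1) = (-1)^n (n + 1), so U*_2(0) = 3.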

Motivation

This PR is part of the feat/torch-codegen base-header migration. The generated SpecialShiftedChebyshevPolynomialU base declaration is moved into src/base so code generation can reuse a reviewed hand-written header.

N/A: no linked issue.

Type of Change

  • feat - new feature / new operator / new platform
  • fix - bug fix
  • perf - performance improvement (no behavioral change)
  • refactor - code restructuring without behavior change
  • test - adding or fixing tests only
  • docs - documentation only
  • build / ci - build system or CI configuration
  • chore - tooling, formatting, or other non-code changes
  • Breaking change (requires a ! in the Conventional Commits prefix or a BREAKING CHANGE: footer)

Platforms Affected

  • CPU (WITH_CPU)
  • NVIDIA (WITH_NVIDIA)
  • Iluvatar (WITH_ILUVATAR)
  • MetaX (WITH_METAX)
  • Cambricon (WITH_CAMBRICON)
  • Moore (WITH_MOORE)
  • Ascend (WITH_ASCEND)
  • PyTorch C++ bindings (WITH_TORCH)
  • Build system / CMake / CI
  • Python bindings / user-facing API

Test Results on Supported Platforms

| Platform | Built | pytest Result | Notes / Hardware |
| --- | --- | --- | --- |
| NVIDIA | N/A | Not run | Not required for this non-master feat/torch-codegen base-header PR; no runtime implementation is added. |
| Iluvatar | N/A | Not run | Not required for this non-master feat/torch-codegen base-header PR; no runtime implementation is added. |
| MetaX | N/A | Not run | Not required for this non-master feat/torch-codegen base-header PR; no runtime implementation is added. |
| Cambricon | N/A | Not run | Not required for this non-master feat/torch-codegen base-header PR; no runtime implementation is added. |
| Moore | N/A | Not run | Not required for this non-master feat/torch-codegen base-header PR; no runtime implementation is added. |
| Ascend | N/A | Not run | Not required for this non-master feat/torch-codegen base-header PR; no runtime implementation is added. |
Full `pytest` output (optional)
N/A: pytest was intentionally not run because this PR targets `feat/torch-codegen`, not `master`, and only adds a reusable base header declaration.

Benchmark / Performance Impact

N/A. This PR only adds a base operator declaration for torch codegen reuse and does not add a runtime implementation.

Notes for Reviewers

  • This PR targets feat/torch-codegen, not master.
  • The branch diff against feat/torch-codegen contains only src/base/special_shifted_chebyshev_polynomial_u.h.
  • Original branch validation reported clang-format 21 passing on src/base/special_shifted_chebyshev_polynomial_u.h; the follow-up formatting commit applies the class member spacing required by scripts/check_conventions.py.

Checklist

Title, Branch, and Commits

  • PR title follows Conventional Commits (e.g. feat(nvidia): …, fix(cuda/gemm): …).
  • N/A: this automated batch uses existing codex/add-special_shifted_chebyshev_polynomial_u-base PR branches targeting feat/torch-codegen; branch renaming is intentionally out of scope.
  • Each commit message follows Conventional Commits.
  • N/A: this batch intentionally keeps the base-header addition and convention-formatting follow-up as two meaningful, squashable commits.
  • N/A: this PR is based on feat/torch-codegen, not master; no master rebase is required for this integration target.
  • No fixup! / squash! / wip commits remain.

Scope and Design

  • Changes are minimal - nothing unrelated to the stated motivation was added (CONTRIBUTING.md §Code/General).
  • No dead code, commented-out blocks, debug prints, printf/std::cout/print(...) left behind, or TODO without an owner and issue link.
  • No unrelated formatting churn that would obscure the diff.
  • Public API changes are intentional and limited to the SpecialShiftedChebyshevPolynomialU base operator declaration used by torch codegen.

General Code Hygiene (applies to all languages)

  • The code is self-explanatory; comments were added only where the why is non-obvious (CONTRIBUTING.md §Code/General).
  • Every modified or added file ends with a single trailing newline (CONTRIBUTING.md §Code/General).
  • No trailing whitespace, tab/space mixing, or stray BOMs.
  • Identifiers in comments and error messages are wrapped in backticks (e.g. the `seqlens_k` tensor) (CONTRIBUTING.md §Code/General).
  • All comments and error messages are in English (CONTRIBUTING.md §Code/General).
  • Comments and error messages are complete sentences - capitalized first letter, terminal punctuation - unless the language/framework convention says otherwise (CONTRIBUTING.md §Code/General; §Python).

C++ Specific (if C++ files changed)

  • Code follows the Google C++ Style Guide strictly.
  • clang-format (version 21, per .github/workflows/clang-format.yml) has been run against all modified .h, .cc, .cuh, and .mlu files; the diff is clean.
  • N/A: clang-tidy was not run because this PR only adds a base declaration header for feat/torch-codegen; no runtime implementation is added.
  • Operator parameter order is inputs first, outputs last; attributes are between inputs and outputs; naming follows PyTorch → ONNX → CUDA API precedence (CONTRIBUTING.md §C++).
  • N/A: this base declaration does not add C++ error paths or exceptions.
  • N/A: this base declaration does not add error or warning messages.
  • N/A: this base declaration does not add kernel files.
  • N/A: this base declaration does not add kernel launchers.
  • Constructor initializer list order matches member declaration order (CONTRIBUTING.md §C++).
  • Exactly one blank line between classes, between classes and functions, and between functions (CONTRIBUTING.md §C++).
  • Exactly one blank line between members (functions and variables) within a class (CONTRIBUTING.md §C++).
  • Exactly one blank line before and after the contents of a namespace (CONTRIBUTING.md §C++).
  • N/A: this PR adds only src/base/special_shifted_chebyshev_polynomial_u.h for torch codegen reuse; platform implementations are out of scope.
  • No raw new/delete; RAII / smart pointers / existing allocators are used.
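The spacing items above can be illustrated with a minimal snippet (the class and its members are hypothetical examples, not code from this PR): exactly one blank line separates members within the class, and one blank line pads each side of the namespace contents, with the initializer list matching declaration order.

```cpp
#include <string>

namespace infiniops {

// Hypothetical class used only to illustrate the member-spacing
// convention checked by `scripts/check_conventions.py`; it is not
// part of this PR.
class ExampleOp {
public:
    ExampleOp() : name_("example") {}

    const std::string& name() const { return name_; }

private:
    std::string name_;
};

}  // namespace infiniops
```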

Python Specific (if Python files changed)

N/A: no Python files changed.

Testing

  • N/A: platform pytest was intentionally not run because this PR targets feat/torch-codegen, not master, and only adds a reusable base header declaration.
  • N/A: the table above records the reason platform testing was skipped.
  • N/A: no runtime functionality was added, so no new tests/ coverage is required.
  • N/A: no new pytest parameterization was added.
  • N/A: no Payload-returning test was added.
  • N/A: no dtype / device parameterization was added.
  • N/A: no flaky test was added.
  • N/A: this is not a runtime bug fix.

Build, CI, and Tooling

  • N/A: full platform builds were not run because this PR targets feat/torch-codegen, not master, and only adds a reusable base header declaration.
  • N/A: compile_commands.json behavior was not changed.
  • N/A: no new backend or device was added.
  • N/A: CUDA-like backend mutual exclusion was not changed.
  • Existing CI formatting expectations are preserved; original validation reported clang-format 21 passing on src/base/special_shifted_chebyshev_polynomial_u.h.
  • N/A: no new runtime dependency was added.

Documentation

  • N/A: README.md, CONTRIBUTING.md, and developer workflow are unchanged.
  • N/A: SpecialShiftedChebyshevPolynomialU is an internal base declaration for torch codegen reuse; no user-facing documentation is required.
  • N/A: no user-visible breaking change.

Security and Safety

  • No secrets, access tokens, internal URLs, customer data, or personal hardware identifiers have been committed.
  • N/A: no third-party code was added.
  • N/A: no unsafe pointer arithmetic, uninitialized reads, or missing bounds checks were introduced.

@voltjia voltjia force-pushed the codex/add-special_shifted_chebyshev_polynomial_u-base branch from 9febe13 to 048c0bd on May 5, 2026 at 22:22
@voltjia voltjia force-pushed the codex/add-special_shifted_chebyshev_polynomial_u-base branch from 56b1168 to 17a93ac on May 7, 2026 at 09:59
@voltjia voltjia changed the title feat: add SpecialShiftedChebyshevPolynomialU base feat: add special_shifted_chebyshev_polynomial_u base May 7, 2026