
Releases: tensorzero/tensorzero

2025.5.7

19 May 19:12
5066d92

Bug Fixes

  • Fix a regression in the evaluations UI where incremental results were not displayed until all generated outputs were ready.
  • Fix a regression in the evaluations UI where users could not open the detail page for completed datapoints in partial (e.g. failed) runs.

& multiple under-the-hood and UI improvements (thanks @adithya-adee @Garvity!)

2025.5.6

17 May 02:17
adc9179

Bug Fixes

  • Fix an edge case in the Python client affecting list_datapoints and get_datapoint for datapoints without output.

2025.5.5

15 May 23:15
8990e72

Warning

Planned Deprecations

  • We are renaming the Python client types ChatInferenceDatapointInput and JsonInferenceDatapointInput to ChatDatapointInsert and JsonDatapointInsert for clarity. Both versions will be supported until 2025.8+ (#2131).

Bug Fixes

  • Handle a regression in the Fireworks AI SFT API.

New Features

  • Add endpoints (and client methods) for listing and querying datapoints programmatically.

& multiple under-the-hood and UI improvements

2025.5.4

14 May 01:30
d12d3f4

Warning

Completed Deprecations

  • This release completes the planned deprecation of the gateway.disable_observability configuration option. Use gateway.observability.enabled instead.
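The rename above is a one-line configuration change. A minimal sketch of the before and after, assuming a standard tensorzero.toml layout (here a deployment that had observability disabled):

```toml
# Before (deprecated, removed in this release):
# [gateway]
# disable_observability = true

# After (note the inverted meaning: disable_observability = true
# corresponds to enabled = false):
[gateway.observability]
enabled = false
```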

Bug Fixes

  • Allow multiple text content blocks in individual input messages.

2025.5.3

12 May 22:37
6dd317b

Important

Please upgrade to this version if you're having performance issues with your ClickHouse queries.

Bug Fixes

  • Optimize the performance of additional ClickHouse queries in the UI that previously consumed excessive time and memory at scale.

& multiple under-the-hood and UI improvements (thanks @bhatt-priyadutt!)

2025.5.2

10 May 14:35
158b25c

Warning

Planned Deprecations

  • This release patches our OpenAI-compatible inference API to match the OpenAI API format when using dynamic output schemas. Both API formats are accepted for now, but we plan to deprecate the legacy (incorrect) format in 2025.8+ (#2094).

Bug Fixes

  • Comply with the OpenAI API format when using dynamic output schemas in the OpenAI-compatible inference API.
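For reference, the OpenAI API format (the one that will remain supported after the legacy format is deprecated) nests the dynamic schema under `response_format.json_schema`. A minimal sketch of the request body; the function name and schema below are hypothetical, and only the `response_format` shape follows the documented OpenAI structure:

```python
import json


def build_request_body(schema: dict) -> dict:
    """Build an OpenAI-format chat request with a dynamic output schema.

    Only the response_format shape is load-bearing here; the model name
    and message content are placeholders for illustration.
    """
    return {
        "model": "tensorzero::function_name::extract_data",  # hypothetical function
        "messages": [{"role": "user", "content": "Extract the fields as JSON."}],
        "response_format": {
            "type": "json_schema",
            "json_schema": {"name": "output", "schema": schema, "strict": True},
        },
    }


body = build_request_body(
    {"type": "object", "properties": {"name": {"type": "string"}}, "required": ["name"]}
)
print(json.dumps(body["response_format"], indent=2))
```

Requests using the legacy (incorrect) shape continue to work until the planned deprecation lands.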

2025.5.1

09 May 18:56
ebe5c6b

Bug Fixes

  • Optimize the performance of ClickHouse queries in the UI that previously consumed excessive time and memory at scale.

New Features

  • Add support for managing datasets and datapoints programmatically.
  • Enable tool use with SGLang (thanks @subygan!).
  • Improve error messages across the stack.

& multiple under-the-hood and UI improvements (thanks @subham73 @Daksh14!)

2025.5.0

05 May 15:44
162c3da

Bug Fixes

  • Fix an issue in the UI where multimodal inferences fail to parse when no object storage region is specified.
  • Handle an edge case with Google AI Studio's streaming API where some response fields are missing in certain chunks.

New Features

  • Support tensorzero::extra_body and tensorzero::extra_headers in the OpenAI-compatible inference endpoint.
  • Allow users to specify inference caching behavior in the evaluations UI.
  • Improve the performance of some database queries in the UI.
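As a sketch of the first feature above, `tensorzero::extra_body` and `tensorzero::extra_headers` ride along in the body of an OpenAI-compatible request. The function name, variant name, pointer, and header below are hypothetical, and the entry field names are assumptions; consult the TensorZero documentation for the exact entry schema:

```python
import json

# Sketch of an OpenAI-compatible request that forwards provider-specific
# fields and headers via the tensorzero::extra_body / tensorzero::extra_headers
# passthroughs. All names here are placeholders for illustration.
payload = {
    "model": "tensorzero::function_name::my_function",  # hypothetical function
    "messages": [{"role": "user", "content": "Hello!"}],
    "tensorzero::extra_body": [
        {"variant_name": "my_variant", "pointer": "/temperature", "value": 0.2}
    ],
    "tensorzero::extra_headers": [
        {"variant_name": "my_variant", "name": "x-custom-header", "value": "demo"}
    ],
}
print(json.dumps(payload, indent=2))
```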

& multiple under-the-hood and UI improvements

2025.4.8

01 May 01:13
cf29526

Important

This release addresses an issue affecting a small subset of users caused by a change to the OpenAI API: the OpenAI API suddenly started rejecting non-standard HTTP request headers sent by our gateway. If you receive the error message "Input should be a valid dictionary" from the OpenAI API, please upgrade to the latest version of TensorZero.

Bug Fixes

  • Fix an issue affecting a small subset of users caused by a recent change to the OpenAI API.
  • Support Windows-specific signal handling to enable non-WSL Windows users to run the gateway natively.

New Features

  • Support exporting OpenTelemetry traces (OTLP) from the TensorZero Gateway.
  • Add batch inference for GCP Vertex AI Gemini models (gcp_vertex_gemini provider).
  • Enable users to send extra headers to model providers at inference time (thanks @oliverbarnes!).

& multiple under-the-hood and UI improvements (thanks @nyurik @rushatgabhane!)

2025.4.7

25 Apr 17:46
e2f38ab

Bug Fixes

  • Fix an edge case that could result in duplicate inference results in the database for batch inference jobs.
  • Improve the performance of a database query in the inference detail page in the UI.
  • Fix an edge case that prevented the UI from parsing tool choices correctly in some scenarios.
  • Avoid unnecessarily parsing tool results sent to GCP Vertex AI models.

New Features

  • Add examples integrating TensorZero with Cursor and OpenAI Codex.
  • Make the OpenAI-compatible inference endpoint respect include_usage.
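`include_usage` is part of OpenAI's standard `stream_options`; when set, the stream ends with a final chunk reporting token usage. A minimal sketch of a streaming request body that opts in (the function name is a placeholder):

```python
import json

# Streaming request that asks for a trailing usage chunk via OpenAI's
# standard stream_options.include_usage flag. Model name is hypothetical.
payload = {
    "model": "tensorzero::function_name::my_function",
    "messages": [{"role": "user", "content": "Hi!"}],
    "stream": True,
    "stream_options": {"include_usage": True},
}
print(json.dumps(payload))
```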

& multiple under-the-hood and UI improvements
