This repository was archived by the owner on Apr 13, 2026. It is now read-only.

Grafana bridge fix #24

Merged

haoruizhou merged 8 commits into main from grafana-bridge-fix on Apr 12, 2026

Conversation

@haoruizhou
Collaborator

No description provided.

haoruizhou and others added 8 commits April 1, 2026 20:46
- Set per-service memory limits to prevent OOM crashes
- influxdb3: 4096M, file-uploader: 1536M, data-downloader-api: 1024M
- sandbox: 1024M, grafana: 512M, lap-detector: 512M (disabled)
- slackbot/health-monitor/grafana-bridge/influxdb3-explorer: 256M
- data-downloader-frontend: 128M, code-generator: 512M
- Disable lap-detector via profiles (not needed currently)

- file-uploader: write to WFR database, season name as measurement table
- file-uploader: getSeasons() queries WFR tables, falls back to SEASONS env var (a sketch follows this list)
- file-uploader: UI label Bucket -> Season
- data-downloader: all seasons share INFLUX_DATABASE (WFR), table=season name
- .env: INFLUX_DATABASE=WFR, INFLUXDB_DATABASE=WFR (updated separately, gitignored)
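
A minimal sketch of the getSeasons() fallback described above, assuming a list_tables callable that returns the table names in the shared WFR database (the callable and the function shape are illustrative, not the repo's actual code):

```python
import os

def get_seasons(list_tables):
    # Prefer live table names from the shared WFR database; each season is
    # stored as its own table, so table names double as season names.
    try:
        tables = list_tables()  # assumed: returns an iterable of table names
        if tables:
            return sorted(set(tables))
    except Exception:
        pass  # fall through to the static default below
    # Fallback: comma-separated SEASONS env var, e.g. SEASONS=WFR24,WFR25
    return [s.strip() for s in os.getenv("SEASONS", "").split(",") if s.strip()]
```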
Introduce optional GitHub-based DBC support:
- Add GITHUB_DBC_TOKEN/REPO/BRANCH to .env.example and expose them through docker-compose.
- Implement server-side helpers in file-uploader/app.py to list repository .dbc blobs and download a selected .dbc to a temp file, and add a /dbc/list endpoint.
- Extend the upload flow to accept either a custom uploaded .dbc or a repo path (dbc_github_path), and enforce validation/error handling (the token is never sent to the client).
- Update the UI and JS: add a team DBC select list, hint, and custom .dbc input; load team DBCs via /dbc/list, manage select modes, append the chosen DBC to the upload FormData, and improve response parsing and progress behavior.
- Misc: small import/type-hint cleanup and various client-side UX tweaks.
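
The repository-listing helper could look roughly like this: a minimal sketch assuming the GitHub Git Trees API, with the function name and parameters being illustrative rather than the repo's actual code. The token is used server-side only, matching the constraint above:

```python
import requests

def list_repo_dbc_files(token: str, repo: str, branch: str) -> list[str]:
    # One call to the Git Trees API returns every blob in the branch;
    # `repo` is "owner/name", and recursive=1 walks subdirectories too.
    resp = requests.get(
        f"https://api.github.com/repos/{repo}/git/trees/{branch}",
        params={"recursive": "1"},
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return sorted(
        item["path"]
        for item in resp.json().get("tree", [])
        if item.get("type") == "blob" and item["path"].lower().endswith(".dbc")
    )
```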
Add server-side support for .zip archives containing CSVs and rename UI/vars from "bucket" to "season":
- Introduce configurable upload limits in .env.example.
- Implement safe zip expansion (size/member/count limits, zip-slip/path traversal protection) in app.py and helper.py.
- Update CANInfluxStreamer to accept database/table semantics, write CSVs to a safe temp tree (recursive CSV discovery), and preserve progress reporting using the new "season" keys.
- Update the frontend templates, JS, and CSS to accept .zip files and to use the season select control and user-facing text.
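
A hedged sketch of the zip-safety checks described above, using only the stdlib; the limit values and function shape are illustrative (the actual limits are configured via .env.example):

```python
import zipfile

def safe_csv_members(zip_path: str,
                     max_members: int = 1000,
                     max_total_bytes: int = 1 * 2**30) -> list[str]:
    out = []
    with zipfile.ZipFile(zip_path) as zf:
        infos = zf.infolist()
        if len(infos) > max_members:
            raise ValueError("too many members in archive")
        total = 0
        for info in infos:
            name = info.filename
            # Zip-slip / path traversal guard: reject absolute paths and
            # any ".." component (normalize backslashes first).
            parts = name.replace("\\", "/").split("/")
            if name.startswith(("/", "\\")) or ".." in parts:
                raise ValueError(f"unsafe path in archive: {name}")
            if info.is_dir() or not name.lower().endswith(".csv"):
                continue  # recursive CSV discovery: skip non-CSV members
            total += info.file_size
            if total > max_total_bytes:
                raise ValueError("archive exceeds uncompressed size limit")
            out.append(name)
    return out
```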
@haoruizhou haoruizhou merged commit 59bcf60 into main Apr 12, 2026
5 of 6 checks passed
@haoruizhou haoruizhou deleted the grafana-bridge-fix branch April 12, 2026 18:02

@chatgpt-codex-connector (Bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: db23746e6a

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +51 to +53
if (ds.type === "influxdb" && ds.name) {
  const match = SEASON_RE.exec(ds.name);
  if (match && match[1] === season) {


P1: Use deployed datasource UID when season name does not match

This lookup now requires datasource names to start with WFR##, but the provisioned Grafana datasource in this repo is named InfluxDB-WFR (see installer/grafana/provisioning/datasources/influxdb.yml) and was previously addressed via a fixed UID, so findDatasourceUidForSeason will throw and /api/grafana/create-dashboard will return 500 for normal requests. Please fall back to the configured UID (or match against the datasource's database/default bucket metadata) so dashboard creation still works with current provisioning.
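
The suggested fallback could look roughly like this. The bridge itself is JavaScript, so this Python sketch only illustrates the control flow against Grafana's /api/datasources endpoint; the fallback UID value and the function shape are assumptions:

```python
import re
import requests

SEASON_RE = re.compile(r"^(WFR\d{2})")
FALLBACK_UID = "influxdb-wfr"  # assumed: the UID from the provisioned datasource

def find_datasource_uid_for_season(grafana_url: str, api_key: str, season: str) -> str:
    resp = requests.get(
        f"{grafana_url}/api/datasources",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()
    for ds in resp.json():
        if ds.get("type") == "influxdb" and ds.get("name"):
            match = SEASON_RE.match(ds["name"])
            if match and match.group(1) == season:
                return ds["uid"]
    # No WFR##-prefixed datasource found (e.g. the provisioned "InfluxDB-WFR"):
    # fall back to the fixed UID instead of raising, so dashboard creation
    # keeps working with current provisioning.
    return FALLBACK_UID
```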


Comment on lines +247 to +251
with z.open(i, "r") as fp:
    body = fp.read()
if len(body) != i.file_size:
    return [], f"Size mismatch for {i.filename} in {name}"
out.append((f"_z{zlabel}/{leaf}", body))


P1: Stream zip members instead of buffering the full payload in RAM

The zip path reads each CSV entry fully (fp.read()) and stores all extracted bytes in out before processing, while the configured limits allow multi‑GiB archives (UPLOAD_ZIP_MAX_TOTAL_UNCOMPRESSED_BYTES defaults to 24 GiB), so large uploads can exhaust memory and kill the uploader before streaming starts. This is especially risky because the compose service memory limit is much lower than the accepted uncompressed size; write entries to temp files incrementally or cap accepted data to a safe in-memory bound.
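
A minimal sketch of the suggested fix, streaming each member to a temp file in bounded chunks instead of calling fp.read(); the function name and chunk size are illustrative:

```python
import shutil
import tempfile
import zipfile

def extract_member_to_tempfile(zf: zipfile.ZipFile, info: zipfile.ZipInfo) -> str:
    # copyfileobj moves data in fixed-size chunks, so peak memory stays
    # bounded no matter how large the uncompressed member is.
    tmp = tempfile.NamedTemporaryFile(suffix=".csv", delete=False)
    with zf.open(info, "r") as src, tmp:
        shutil.copyfileobj(src, tmp, length=1024 * 1024)  # 1 MiB chunks
    return tmp.name  # caller processes and then deletes the temp file
```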


