Grafana bridge fix #24
Conversation
- Set per-service memory limits to prevent OOM crashes
  - influxdb3: 4096M, file-uploader: 1536M, data-downloader-api: 1024M
  - sandbox: 1024M, grafana: 512M, lap-detector: 512M (disabled)
  - slackbot/health-monitor/grafana-bridge/influxdb3-explorer: 256M
  - data-downloader-frontend: 128M, code-generator: 512M
- Disable lap-detector via profiles (not needed currently)
- file-uploader: write to WFR database, season name as measurement table
- file-uploader: getSeasons() queries WFR tables, falls back to SEASONS env var
- file-uploader: UI label Bucket -> Season
- data-downloader: all seasons share INFLUX_DATABASE (WFR), table=season name
- .env: INFLUX_DATABASE=WFR, INFLUXDB_DATABASE=WFR (updated separately, gitignored)
Introduce optional GitHub-based DBC support: add GITHUB_DBC_TOKEN/REPO/BRANCH to .env.example and expose them through docker-compose. Implement server-side helpers in file-uploader/app.py to list repository .dbc blobs and download a selected .dbc to a temp file, and add a /dbc/list endpoint. Extend upload flow to accept either a custom uploaded .dbc or a repo path (dbc_github_path) and enforce validation/error handling (token never sent to the client). Update the UI and JS: add a team DBC select list, hint, and custom .dbc input; load team DBCs via /dbc/list, manage select modes, append the chosen DBC to the upload FormData, and improve response parsing and progress behavior. Misc: small import/type hint cleanup and various client-side UX tweaks.
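The server-side listing of repository .dbc blobs could look roughly like the sketch below. This is an assumption about the implementation, not the actual code in file-uploader/app.py: the helper names (`filter_dbc_paths`, `list_repo_dbc_paths`) are illustrative, and it assumes the GitHub Git Trees API is used with the `GITHUB_DBC_*` variables. The token is only ever read server-side, matching the "token never sent to the client" constraint.

```python
import json
import os
import urllib.request

# Values come from .env / docker-compose, as described above.
GITHUB_DBC_TOKEN = os.environ.get("GITHUB_DBC_TOKEN", "")
GITHUB_DBC_REPO = os.environ.get("GITHUB_DBC_REPO", "")      # e.g. "org/repo"
GITHUB_DBC_BRANCH = os.environ.get("GITHUB_DBC_BRANCH", "main")


def filter_dbc_paths(tree_entries):
    """Keep only blob entries whose path ends in .dbc (case-insensitive)."""
    return sorted(
        e["path"]
        for e in tree_entries
        if e.get("type") == "blob" and e.get("path", "").lower().endswith(".dbc")
    )


def list_repo_dbc_paths():
    """Fetch the branch tree and return the .dbc paths for /dbc/list.

    The token stays on the server; only the resulting path list is
    returned to the browser.
    """
    url = (
        f"https://api.github.com/repos/{GITHUB_DBC_REPO}"
        f"/git/trees/{GITHUB_DBC_BRANCH}?recursive=1"
    )
    req = urllib.request.Request(
        url,
        headers={
            "Authorization": f"Bearer {GITHUB_DBC_TOKEN}",
            "Accept": "application/vnd.github+json",
        },
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        tree = json.load(resp)["tree"]
    return filter_dbc_paths(tree)
```

A `/dbc/list` endpoint would then just return `list_repo_dbc_paths()` as JSON, and the upload handler would accept either an uploaded .dbc or a `dbc_github_path` validated against that list.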
Add server-side support for .zip archives containing CSVs and rename UI/vars from "bucket" to "season". Introduces configurable upload limits in .env.example and implements safe zip expansion (size/member/count limits, zip-slip/path traversal protection) in app.py and helper.py. Updates CANInfluxStreamer to accept database/table semantics, writes CSVs to a safe temp tree (recursive CSV discovery), and preserves progress reporting using the new "season" keys. Frontend templates, JS and CSS are updated to accept .zip files and to use the season select control and user-facing text.
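The zip-slip/path traversal protection mentioned above amounts to resolving each member name against the extraction directory and rejecting anything that escapes it. A minimal sketch of that check (the function name is illustrative; the PR's actual helper in app.py/helper.py may differ):

```python
import os


def is_within_directory(base_dir, member_name):
    """Zip-slip guard: resolve member_name against base_dir and verify the
    result stays inside base_dir. Rejects ../ escapes and absolute paths."""
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, member_name))
    return target == base or target.startswith(base + os.sep)
```

During extraction, every archive member would be passed through this guard before any bytes are written, alongside the size/member-count limits.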
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: db23746e6a
```js
if (ds.type === "influxdb" && ds.name) {
  const match = SEASON_RE.exec(ds.name);
  if (match && match[1] === season) {
```
Use deployed datasource UID when season name does not match
This lookup now requires datasource names to start with WFR##, but the provisioned Grafana datasource in this repo is named InfluxDB-WFR (see installer/grafana/provisioning/datasources/influxdb.yml) and was previously addressed via a fixed UID. As a result, findDatasourceUidForSeason will throw and /api/grafana/create-dashboard will return 500 for normal requests. Please fall back to the configured UID (or match against the datasource's database/default-bucket metadata) so dashboard creation still works with the current provisioning.
```python
with z.open(i, "r") as fp:
    body = fp.read()
if len(body) != i.file_size:
    return [], f"Size mismatch for {i.filename} in {name}"
out.append((f"_z{zlabel}/{leaf}", body))
```
Stream zip members instead of buffering full payload in RAM
The zip path reads each CSV entry fully (fp.read()) and stores all extracted bytes in out before processing, while the configured limits allow multi-GiB archives (UPLOAD_ZIP_MAX_TOTAL_UNCOMPRESSED_BYTES defaults to 24 GiB). Large uploads can therefore exhaust memory and kill the uploader before streaming even starts. This is especially risky because the compose service memory limit is far below the accepted uncompressed size; write entries to temp files incrementally, or cap accepted data to a safe in-memory bound.
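The incremental-extraction fix the reviewer suggests could be sketched as follows. This is an assumed shape, not the PR's code: it streams one member to a temp file in bounded chunks and aborts (removing the partial file) as soon as the declared limit is exceeded, so RAM use stays at roughly one chunk regardless of archive size.

```python
import os
import tempfile
import zipfile

CHUNK = 1024 * 1024  # copy in 1 MiB chunks so RAM use stays bounded


def extract_member_to_temp(z, info, max_bytes):
    """Stream one zip member to a temp file instead of buffering it in RAM.

    Writes at most max_bytes bytes; if the member is larger, the partial
    file is removed and a ValueError is raised. (Illustrative sketch of
    the reviewer's suggestion; names are not from the PR.)
    """
    fd, path = tempfile.mkstemp(suffix=".csv")
    written = 0
    try:
        with z.open(info) as src, os.fdopen(fd, "wb") as dst:
            while True:
                chunk = src.read(CHUNK)
                if not chunk:
                    break
                written += len(chunk)
                if written > max_bytes:
                    raise ValueError(f"{info.filename} exceeds size limit")
                dst.write(chunk)
    except Exception:
        os.unlink(path)
        raise
    return path, written
```

The caller would then hand the temp-file paths to CANInfluxStreamer rather than accumulating `(name, body)` tuples, keeping the existing size-mismatch and count checks intact.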