
feat(docs): add production deployment guide#259

Open
marythought wants to merge 8 commits into main from docs/production-guide

Conversation


@marythought marythought commented Mar 20, 2026

Summary

  • Adds a comprehensive guide covering both local development setup and production deployment of OpenTDF Platform
  • Part 1: Local Development — prerequisites, infrastructure setup, Keycloak provisioning, attribute/policy configuration, SDK integration (Go + browser), and a full encrypt/decrypt round-trip
  • Part 2: Production Deployment — split-horizon DNS, Keycloak hardening, KAS key management, container registry mirroring, production config with TLS/DPoP/CORS hardening, and SDK changes for production
  • Includes troubleshooting tables for both environments
  • Documents advanced patterns: per-field encryption with compound attribute values, browser-side decrypt for cryptographic access enforcement
  • Based on real-world experience deploying the GLP-1 Tracker demo to Railway

Test plan

  • npm run check-vendored-yaml passes
  • npm run build succeeds with no new broken links
  • Verify guide renders correctly in Surge preview
  • Walk through Part 1 steps against a fresh local setup
  • Verify Part 2 recommendations match current platform behavior

🤖 Generated with Claude Code

Summary by CodeRabbit

  • Documentation

    • Added comprehensive production deployment guide covering end-to-end integration with OpenTDF.
    • Added local development setup guide with prerequisites, step-by-step instructions, configuration examples, and troubleshooting.
    • Added production deployment guide with production-specific configuration, Keycloak setup, key handling, SDK updates, and advanced features.
  • Improvements

    • Enhanced GitHub API request handling with improved error reporting.

Two-part guide covering local development setup and production
deployment of OpenTDF Platform. Based on real-world experience
deploying the GLP-1 Tracker demo app to Railway.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@marythought marythought requested review from a team as code owners March 20, 2026 21:57
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a comprehensive guide designed to assist developers in setting up and deploying OpenTDF applications. It provides a structured walkthrough for both local development environments and production deployments, addressing common challenges and offering best practices to ensure secure and efficient integration of OpenTDF capabilities.

Highlights

  • Comprehensive Deployment Guide: Added a new guide covering the full lifecycle of building an OpenTDF-powered application, from local development to production deployment.
  • Local Development Setup: Detailed steps for setting up OpenTDF locally, including prerequisites, infrastructure (PostgreSQL, Keycloak), attribute/policy configuration, and SDK integration examples for Go and JavaScript/TypeScript.
  • Production Deployment Best Practices: Outlined critical considerations for production environments, such as split-horizon DNS, Keycloak hardening, secure KAS key management, container registry mirroring, and production-ready configuration with TLS/DPoP/CORS hardening.
  • Troubleshooting and Advanced Patterns: Included dedicated troubleshooting tables for both local and production environments, and documented advanced patterns like per-field encryption with compound attribute values and browser-side decryption for cryptographic access enforcement.


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize your Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


coderabbitai bot commented Mar 20, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

The PR adds GitHub token authentication support to the OpenAPI specification checking utility with improved error handling and API response validation, and introduces a comprehensive three-part production deployment guide covering architecture overview, local development setup, and production-hardened configuration.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **GitHub API Enhancements**<br/>`src/openapi/check-vendored-yaml.ts` | Added GITHUB_TOKEN environment variable support for authenticated API requests. Enhanced error handling to capture HTTP response bodies and status codes. Added validation that GitHub Contents API responses are arrays, throwing descriptive errors for unexpected response shapes. |
| **Production Deployment Guide**<br/>`docs/guides/production-deployment-guide/index.mdx`, `docs/guides/production-deployment-guide/local-development.mdx`, `docs/guides/production-deployment-guide/production-deployment.mdx` | New three-part end-to-end guide for deploying OpenTDF: (1) architecture and overview, (2) local development setup with Docker/Keycloak/PostgreSQL provisioning, policy configuration, and SDK integration, and (3) production deployment with hardened settings, managed database TLS, DPoP enforcement, and troubleshooting guidance. |
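The token, status-check, and shape-validation behavior summarized in the walkthrough can be sketched roughly as follows. This is a hedged sketch using Node's global `fetch`; the actual script in `src/openapi/check-vendored-yaml.ts` uses `https.get` callbacks, and its exact names and error messages may differ.

```typescript
// Sketch of the pattern described above; function names mirror the PR
// summary, but the real implementation details may differ.
const GITHUB_TOKEN = process.env.GITHUB_TOKEN;

function githubHeaders(): Record<string, string> {
  const headers: Record<string, string> = {
    'User-Agent': 'opentdf-docs-check-vendored-yaml',
  };
  // An authenticated request raises GitHub's rate limit from 60 to 5000 req/hr.
  if (GITHUB_TOKEN) headers['Authorization'] = `token ${GITHUB_TOKEN}`;
  return headers;
}

async function fetchJson(url: string): Promise<unknown> {
  const response = await fetch(url, { headers: githubHeaders() });
  if (response.status !== 200) {
    // Surface the status code and body instead of a cryptic downstream TypeError.
    const body = await response.text();
    throw new Error(
      `Failed to fetch ${url}: Status ${response.status}\n  Response: ${body.slice(0, 200)}`
    );
  }
  return response.json();
}

// The Contents API returns an array for a directory listing; a rate-limit
// error returns an object, so validate the shape before iterating.
function assertContentsArray(data: unknown): asserts data is Array<{ name: string }> {
  if (!Array.isArray(data)) {
    throw new Error(
      `Unexpected GitHub Contents API response: ${JSON.stringify(data).slice(0, 200)}`
    );
  }
}
```

Validating the shape up front is what turns the "contents is not iterable" TypeError mentioned in the commits into a descriptive error.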

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

Poem

🐰 Hopping through tokens with GitHub's great might,
Errors now caught in the pale morning light,
From local dev burrows to production's grand stage,
A guide for the ages, written page after page!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 25.00%, which is insufficient; the required threshold is 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The pull request title 'feat(docs): add production deployment guide' accurately reflects the main objective of the PR, which is to add comprehensive production deployment documentation. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a comprehensive guide for deploying the OpenTDF Platform, covering both local development and production environments. The guide is well-structured and provides valuable information, including setup instructions, configuration examples, and troubleshooting tips. My review focuses on improving the clarity and correctness of the provided commands and configurations to ensure a smoother experience for users following the guide. I've identified a few areas for improvement: a potentially incorrect keyring configuration is shown in two places, a command snippet is missing a variable definition, and there's an opportunity to make instructions for using otdfctl more explicit.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
docs/guides/production-deployment-guide.md (1)

133-212: Consider extracting long configuration examples to code_samples/.

The YAML configuration blocks at lines 133-212 (79 lines) and 535-630 (95 lines) are substantial. As per coding guidelines, long examples should be kept in code_samples/ and referenced from docs to improve maintainability and avoid duplication.

For example, you could create:

  • code_samples/opentdf-local-config.yaml
  • code_samples/opentdf-production-config.yaml

Then reference them with includes or links in the documentation.

Example approach

Create the sample files in code_samples/ directory, then reference them in the docs:

See the complete configuration in [`opentdf-local-config.yaml`](../../code_samples/opentdf-local-config.yaml):

This also applies to other lengthy code blocks like the multi-step bash scripts (lines 86-117, 258-298, etc.).

Also applies to: 535-630

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/guides/production-deployment-guide.md` around lines 133 - 212, Extract
the long YAML configuration blocks (the large examples containing keys like
logger, services.kas, entityresolution, server.tls/auth/cors, cryptoProvider and
port) into standalone sample files under code_samples (suggested names:
opentdf-local-config.yaml and opentdf-production-config.yaml), replace the
inline code fences in the docs with a short include or link pointing to those
files, and do the same extraction for other lengthy code blocks (multi-step bash
scripts) into code_samples script files and reference them from the
documentation to remove duplication and improve maintainability.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/guides/production-deployment-guide.md`:
- Around line 1-730: Rename the documentation file
production-deployment-guide.md to production-deployment-guide.mdx and update any
references; specifically, run a move (git mv production-deployment-guide.md
production-deployment-guide.mdx), ensure the new .mdx filename is used wherever
the old name is linked or imported (sidebar config, cross-doc links, and any
docs tooling configs), and commit the change so the documentation build picks up
the .mdx file.

---

Nitpick comments:
In `@docs/guides/production-deployment-guide.md`:
- Around line 133-212: Extract the long YAML configuration blocks (the large
examples containing keys like logger, services.kas, entityresolution,
server.tls/auth/cors, cryptoProvider and port) into standalone sample files
under code_samples (suggested names: opentdf-local-config.yaml and
opentdf-production-config.yaml), replace the inline code fences in the docs with
a short include or link pointing to those files, and do the same extraction for
other lengthy code blocks (multi-step bash scripts) into code_samples script
files and reference them from the documentation to remove duplication and
improve maintainability.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 3ad968f4-e023-476a-ab5f-6d2e1c83d9dc

📥 Commits

Reviewing files that changed from the base of the PR and between 1ed492d and fb3a74d.

📒 Files selected for processing (1)
  • docs/guides/production-deployment-guide.md

@github-actions
Contributor

📄 Preview deployed to https://opentdf-docs-pr-259.surge.sh

marythought and others added 2 commits March 20, 2026 15:10
- Rename to .mdx per repo convention
- Add comment explaining duplicate kid entries (current + legacy)
- Note that otdfctl commands output IDs needed for subsequent steps
- Add TOKEN acquisition step to VERIFY_PROFILE example

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The check-vendored-yaml script hits the GitHub Contents API without
authentication, causing rate-limit failures (403) in CI. The error
manifested as a cryptic "contents is not iterable" TypeError because
fetchJson didn't check HTTP status codes.

Changes:
- Use GITHUB_TOKEN env var for authenticated API requests (already
  passed by CI workflows, raises rate limit from 60 to 5000 req/hr)
- Add HTTP status code checking in fetchJson with clear error messages
- Validate that Contents API response is an array before iterating

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/openapi/check-vendored-yaml.ts (1)

71-85: ⚠️ Potential issue | 🟡 Minor

fetchText lacks HTTP status checking unlike fetchJson.

fetchJson now properly rejects on non-200 responses (lines 48-59), but fetchText still resolves with whatever body is returned, even on 404 or other errors. This affects hasApiPaths(), which uses fetchText to fetch raw file content—if a file doesn't exist or the request fails, it will attempt to parse the error response as YAML, leading to confusing errors or silent failures.

Consider adding the same status check pattern:

Proposed fix
       https.get(url, { headers } as any, (response: any) => {
+        if (response.statusCode !== 200) {
+          let body = '';
+          response.on('data', (chunk: string) => { body += chunk; });
+          response.on('end', () => {
+            reject(new Error(
+              `Failed to fetch ${url}: Status ${response.statusCode}\n` +
+              `  Response: ${body.slice(0, 200)}`
+            ));
+          });
+          return;
+        }
         let data = '';
         response.on('data', (chunk: string) => { data += chunk; });
         response.on('end', () => resolve(data));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openapi/check-vendored-yaml.ts` around lines 71 - 85, fetchText currently
resolves with any response body and doesn't check HTTP status, which lets
404/500 responses get parsed as YAML (used by hasApiPaths); update fetchText to
mirror fetchJson's behavior: after https.get receives the response, collect the
body, then check response.statusCode and if it's not in the 200–299 range reject
the Promise with an Error that includes status and optionally body (or
statusMessage); otherwise resolve the body. Keep the existing
headers/GITHUB_TOKEN logic and error handlers, and follow the same response
handling pattern used in fetchJson to ensure callers like hasApiPaths receive
proper rejections on non-successful HTTP responses.
🧹 Nitpick comments (1)
src/openapi/check-vendored-yaml.ts (1)

19-38: Consider adding auth header to downloadFile for consistency.

fetchJson and fetchText now include the GitHub token for authentication, but downloadFile (which fetches from raw.githubusercontent.com) doesn't. While raw content from public repos doesn't typically require auth, adding it would provide consistency and help if rate limits become an issue.

Optional: Add auth header
 function downloadFile(url: string, dest: string): Promise<void> {
   return new Promise((resolve, reject) => {
     import('https').then(https => {
       const file = fs.createWriteStream(dest);
-      https.get(url, (response: any) => {
+      const headers: Record<string, string> = { 'User-Agent': 'opentdf-docs-check-vendored-yaml' };
+      if (GITHUB_TOKEN) {
+        headers['Authorization'] = `token ${GITHUB_TOKEN}`;
+      }
+      https.get(url, { headers } as any, (response: any) => {
         if (response.statusCode !== 200) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openapi/check-vendored-yaml.ts` around lines 19 - 38, downloadFile
currently uses https.get without any Authorization header; update the function
(downloadFile) to include the GitHub token when present by passing an options
object to https.get/https.request that sets the Authorization: `token
${process.env.GITHUB_TOKEN}` (or equivalent config used by fetchJson/fetchText),
preserving existing error and stream handling and still unlinking dest on error;
locate the downloadFile function and add the header logic so downloads from
raw.githubusercontent.com use the same auth as fetchJson/fetchText.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Outside diff comments:
In `@src/openapi/check-vendored-yaml.ts`:
- Around line 71-85: fetchText currently resolves with any response body and
doesn't check HTTP status, which lets 404/500 responses get parsed as YAML (used
by hasApiPaths); update fetchText to mirror fetchJson's behavior: after
https.get receives the response, collect the body, then check
response.statusCode and if it's not in the 200–299 range reject the Promise with
an Error that includes status and optionally body (or statusMessage); otherwise
resolve the body. Keep the existing headers/GITHUB_TOKEN logic and error
handlers, and follow the same response handling pattern used in fetchJson to
ensure callers like hasApiPaths receive proper rejections on non-successful HTTP
responses.

---

Nitpick comments:
In `@src/openapi/check-vendored-yaml.ts`:
- Around line 19-38: downloadFile currently uses https.get without any
Authorization header; update the function (downloadFile) to include the GitHub
token when present by passing an options object to https.get/https.request that
sets the Authorization: `token ${process.env.GITHUB_TOKEN}` (or equivalent
config used by fetchJson/fetchText), preserving existing error and stream
handling and still unlinking dest on error; locate the downloadFile function and
add the header logic so downloads from raw.githubusercontent.com use the same
auth as fetchJson/fetchText.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: df0dff54-ad33-4b91-b61f-b2a968a746d6

📥 Commits

Reviewing files that changed from the base of the PR and between fb3a74d and 15dc1ef.

⛔ Files ignored due to path filters (1)
  • static/img/filecontents.svg is excluded by !**/*.svg
📒 Files selected for processing (2)
  • docs/guides/production-deployment-guide.mdx
  • src/openapi/check-vendored-yaml.ts
✅ Files skipped from review due to trivial changes (1)
  • docs/guides/production-deployment-guide.mdx

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (4)
docs/guides/production-deployment-guide.mdx (4)

98-119: Consider using jq instead of python3 for JSON parsing.

The shell examples use python3 -c "import sys,json; ..." for JSON extraction. Since jq is a common CLI tool and is already used elsewhere in the documentation, it would be more consistent:

♻️ Suggested change
-  | python3 -c "import sys,json; print(json.load(sys.stdin)['access_token'])")
+  | jq -r '.access_token')
-  | python3 -c "import sys,json; print(json.load(sys.stdin)[0]['id'])")
+  | jq -r '.[0].id')
-  | python3 -c "import sys,json; print(json.load(sys.stdin)['id'])")
+  | jq -r '.id')
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/guides/production-deployment-guide.mdx` around lines 98 - 119, Replace
the inline Python JSON parsing used for extracting tokens/IDs in the
ADMIN_TOKEN, ALICE_ID, and STD_ROLE_ID commands with jq calls to keep
consistency with other docs; update the three occurrences that pipe curl output
into python3 -c "import sys,json; ..." so they instead pipe into jq and extract
the same fields (access_token, [0].id, id) and ensure the surrounding shell
variable assignment syntax remains valid.

141-220: Consider extracting large YAML configs to code_samples/.

The two full opentdf.yaml examples (local dev ~80 lines, production ~87 lines) are substantial. As per coding guidelines, long examples should be kept in code_samples/ and referenced from docs instead of duplicating.

The collapsible <details> blocks help, but extraction would:

  • Enable easier maintenance and testing of configs
  • Allow users to download/copy the files directly
  • Keep the guide focused on explanation rather than raw config

Example structure:

code_samples/
  production-deployment/
    opentdf-local.yaml
    opentdf-production.yaml

Then reference with imports or links.

Also applies to: 568-654

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/guides/production-deployment-guide.mdx` around lines 141 - 220, The
large inline YAML example blocks (the opentdf.yaml config in
docs/guides/production-deployment-guide.mdx) should be extracted into standalone
files under a new code_samples/production-deployment/ directory (e.g.,
opentdf-local.yaml and opentdf-production.yaml), commit those files, and replace
the long inline fenced code blocks in the MDX with brief references or
imports/links to the new files (keeping the <details> summaries but pointing to
the external samples). Update any filenames referenced inside the YAML (e.g.,
kas-private.pem, kas-ec-private.pem) if necessary and ensure the doc still shows
minimal example snippets or instructions on where to download/copy the full
configs.

449-449: Clarify DPoP SDK version requirements.

The DPoP note mentions "Requires SDK and IdP support" but doesn't specify which SDK versions support DPoP. Consider adding a link to SDK release notes or version requirements.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/guides/production-deployment-guide.mdx` at line 449, The DPoP table row
for the "DPoP" feature (the cell mentioning `enforceDPoP: false` → `enforceDPoP:
true`) lacks SDK version requirements and links; update that cell to mention
which SDK versions support DPoP and add a link to the SDK release notes or
changelog—for example reference the Browser SDK and Backend SDK names and the
minimum version numbers that implement DPoP, and include a concise link to the
corresponding release notes or docs page so readers can verify compatibility
before enabling `enforceDPoP`.

744-748: Consider adding a code example for browser-side decrypt.

The explanation of server-side vs client-side decrypt is valuable, but a brief code snippet showing the browser decrypt pattern would make this more actionable. The current text explains the "why" but not the "how."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/guides/production-deployment-guide.mdx` around lines 744 - 748, Under
the "Server-side vs client-side decrypt" heading add a short browser-focused
example that shows (1) acquiring the user's Keycloak access token via the
Keycloak JS adapter (e.g., keycloak.init() and keycloak.token), (2) calling the
KAS decrypt endpoint (use an Authorization: Bearer <token> header) with the
ciphertext payload, and (3) handling the decrypted plaintext in the browser;
reference the existing Subject Mapping Guide and GLP-1 Tracker demo for full
flow and include brief notes about security (never send the user's token to
untrusted parties).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/guides/production-deployment-guide.mdx`:
- Line 492: Update the paragraph that currently mentions "Keycloak 25" and the
explanation of VERIFY_PROFILE: change the version reference to "Keycloak
24.0.0", remove the claim that VERIFY_PROFILE requires a specific kc.org
attribute, and replace it with a concise statement that VERIFY_PROFILE is
triggered when a user's profile does not comply with the realm's User Profile
configuration; also note that this enforcement is enabled by default for new
realms while existing realms keep their prior setting. Ensure the text still
references VERIFY_PROFILE and User Profile so readers can find the correct
Keycloak behavior.
- Line 51: Update the Go version requirement text in the production deployment
guide by replacing the current "Go 1.21+" wording with "Go 1.24+" so the docs
match the platform's minimum; locate the string in
docs/guides/production-deployment-guide.mdx (the line containing "Go 1.21+ (to
run the platform from source — needed for provisioning)") and change it to "Go
1.24+ (to run the platform from source — needed for provisioning)".

---

Nitpick comments:
In `@docs/guides/production-deployment-guide.mdx`:
- Around line 98-119: Replace the inline Python JSON parsing used for extracting
tokens/IDs in the ADMIN_TOKEN, ALICE_ID, and STD_ROLE_ID commands with jq calls
to keep consistency with other docs; update the three occurrences that pipe curl
output into python3 -c "import sys,json; ..." so they instead pipe into jq and
extract the same fields (access_token, [0].id, id) and ensure the surrounding
shell variable assignment syntax remains valid.
- Around line 141-220: The large inline YAML example blocks (the opentdf.yaml
config in docs/guides/production-deployment-guide.mdx) should be extracted into
standalone files under a new code_samples/production-deployment/ directory
(e.g., opentdf-local.yaml and opentdf-production.yaml), commit those files, and
replace the long inline fenced code blocks in the MDX with brief references or
imports/links to the new files (keeping the <details> summaries but pointing to
the external samples). Update any filenames referenced inside the YAML (e.g.,
kas-private.pem, kas-ec-private.pem) if necessary and ensure the doc still shows
minimal example snippets or instructions on where to download/copy the full
configs.
- Line 449: The DPoP table row for the "DPoP" feature (the cell mentioning
`enforceDPoP: false` → `enforceDPoP: true`) lacks SDK version requirements and
links; update that cell to mention which SDK versions support DPoP and add a
link to the SDK release notes or changelog—for example reference the Browser SDK
and Backend SDK names and the minimum version numbers that implement DPoP, and
include a concise link to the corresponding release notes or docs page so
readers can verify compatibility before enabling `enforceDPoP`.
- Around line 744-748: Under the "Server-side vs client-side decrypt" heading
add a short browser-focused example that shows (1) acquiring the user's Keycloak
access token via the Keycloak JS adapter (e.g., keycloak.init() and
keycloak.token), (2) calling the KAS decrypt endpoint (use an Authorization:
Bearer <token> header) with the ciphertext payload, and (3) handling the
decrypted plaintext in the browser; reference the existing Subject Mapping Guide
and GLP-1 Tracker demo for full flow and include brief notes about security
(never send the user's token to untrusted parties).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: a4d43797-d58f-4f39-858d-d204b44c18f4

📥 Commits

Reviewing files that changed from the base of the PR and between 15dc1ef and 5cfa3cf.

📒 Files selected for processing (1)
  • docs/guides/production-deployment-guide.mdx

### Prerequisites

- Docker and Docker Compose
- Go 1.21+ (to run the platform from source — needed for provisioning)

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check go.mod in the platform repo for actual Go version requirement
cat go.mod 2>/dev/null | grep -E "^go [0-9]+\.[0-9]+" || echo "go.mod not found in current directory"

# Also check if there's a .go-version file
fd -t f "\.go-version$" --exec cat {}

Repository: opentdf/docs

Length of output: 93


🏁 Script executed:

# Search for any references to go.mod or Go versions in the docs
rg "go\.mod|Go 1\.[0-9]+|golang" --type mdx --type md -i | head -20

# Check for any references to the platform repository or links
rg "opentdf|platform.*repo|github\.com" --type mdx --type md | head -10

# Look for any configuration files that might reference Go version
fd -e toml -e yaml -e yml -e json | xargs grep -l "go" 2>/dev/null | head -5

Repository: opentdf/docs

Length of output: 297


🌐 Web query:

opentdf platform Go version requirement minimum version 2024 2025

💡 Result:

  • 2024 (OpenTDF Platform SDK v0.1.0, released 2024-04-22): workspace requires Go 1.22 (see go.work: go 1.22). [1][2]
  • 2025: workspace moved to Go 1.24.x:
    • 2025-10-21 (sdk/v0.10.0): go 1.24.0 (and pins toolchain go1.24.6). [3]
    • 2025-11-19 (service/v0.11.6): go 1.24.9. [4]
    • 2025-12-19 (protocol/go/v0.14.0): go 1.24.11. [5]

So the minimum Go version was Go 1.22 in 2024, and Go 1.24 (starting at 1.24.0) in 2025. [1][3]


Update Go version requirement to reflect current platform minimum.

Documentation specifies Go 1.21+, but the OpenTDF platform has required Go 1.22 since 2024 (SDK v0.1.0) and moved to Go 1.24+ in 2025. Update line 51 to require Go 1.24+ to match current platform requirements.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/guides/production-deployment-guide.mdx` at line 51, Update the Go
version requirement text in the production deployment guide by replacing the
current "Go 1.21+" wording with "Go 1.24+" so the docs match the platform's
minimum; locate the string in docs/guides/production-deployment-guide.mdx (the
line containing "Go 1.21+ (to run the platform from source — needed for
provisioning)") and change it to "Go 1.24+ (to run the platform from source —
needed for provisioning)".
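The corrected requirement can also be sanity-checked locally before provisioning. A minimal preflight sketch — the 1.24 floor is taken from the web-query result above, `go` is assumed to be on `PATH`, and `sort -V` (GNU coreutils) does the version comparison:

```shell
# Preflight sketch: compare the locally installed Go version against the
# platform's documented minimum (1.24 per the query result above; adjust
# as the platform's go.work changes).
meets_min() {
  # succeeds when $2 >= $1 under version-sort ordering
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

installed=$(go version 2>/dev/null | grep -oE 'go[0-9]+\.[0-9]+' | head -n1 | sed 's/^go//')
if meets_min "1.24" "${installed:-0.0}"; then
  echo "Go ${installed} satisfies the 1.24+ requirement"
else
  echo "Go ${installed:-missing} is older than 1.24; upgrade before provisioning" >&2
fi
```

This avoids hardcoding the requirement in prose logic: when the platform bumps its `go.work` floor, only the `1.24` literal needs updating.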


#### Required Keycloak fixes for production

**Disable VERIFY_PROFILE**: Keycloak 25 requires a `kc.org` attribute that blocks login. Disable it:
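One scriptable way to disable the required action is to patch a realm export before (re-)import — a sketch, assuming `jq` is installed and using a minimal stand-in JSON in place of a real realm export; the same toggle lives in the admin console under Authentication → Required actions → "Verify Profile":

```shell
# Sketch: flip the VERIFY_PROFILE required action to disabled in a realm
# export. The JSON below is a minimal illustrative stand-in; a real export
# has many more fields but the same requiredActions shape.
cat > /tmp/realm-export.json <<'EOF'
{
  "requiredActions": [
    { "alias": "VERIFY_PROFILE", "name": "Verify Profile", "enabled": true }
  ]
}
EOF

jq '(.requiredActions[] | select(.alias == "VERIFY_PROFILE") | .enabled) = false' \
  /tmp/realm-export.json > /tmp/realm-export.patched.json

# Show the patched action
jq '.requiredActions[] | {alias, enabled}' /tmp/realm-export.patched.json
```

Re-importing the patched export (or applying the equivalent toggle via the admin console or `kcadm.sh`) stops the required action from interrupting logins.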
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

Keycloak VERIFY_PROFILE kc.org attribute requirement version

💡 Result:

  • What attributes does VERIFY_PROFILE require?
    VERIFY_PROFILE (aka verify-profile) doesn’t hardcode a specific attribute list. It triggers when the user’s profile is not compliant with the realm’s User Profile configuration (for example: missing attributes that are marked required, or values failing configured validators). Keycloak describes it as requiring users to update their profile “as long as some user attributes do not match the requirements according to the user profile configuration.” [1]

  • Which Keycloak version made this behavior “on by default” for new realms?
    Keycloak 24.0.0 enabled the verify-profile required action by default for new realms (not automatically for upgraded realms). The upgrading guide states it’s enabled by default for new realms, while existing realms keep their previous setting (typically disabled), and the 24.0.0 release notes include the change (“Enable verify-profile required action by default”). [2] [3]

Sources:
[1] Keycloak Server Administration Guide (latest) — Required actions / verify profile behavior
[2] Keycloak Upgrading Guide (latest) — “Verify Profile required action enabled by default” section
[3] Keycloak 24.0.0 release notes — item “Enable verify-profile required action by default” (#25985)


Correct the Keycloak version reference and VERIFY_PROFILE explanation.

The documentation incorrectly states "Keycloak 25" when the relevant change was introduced in Keycloak 24.0.0. Additionally, VERIFY_PROFILE doesn't require a specific kc.org attribute—it's triggered when a user's profile fails to comply with the realm's User Profile configuration. The behavior is also enabled by default only for new realms; existing realms retain their previous setting. Update the documentation to accurately reflect these details.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/guides/production-deployment-guide.mdx` at line 492, Update the
paragraph that currently mentions "Keycloak 25" and the explanation of
VERIFY_PROFILE: change the version reference to "Keycloak 24.0.0", remove the
claim that VERIFY_PROFILE requires a specific kc.org attribute, and replace it
with a concise statement that VERIFY_PROFILE is triggered when a user's profile
does not comply with the realm's User Profile configuration; also note that this
enforcement is enabled by default for new realms while existing realms keep
their prior setting. Ensure the text still references VERIFY_PROFILE and User
Profile so readers can find the correct Keycloak behavior.

marythought and others added 4 commits March 20, 2026 15:30
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Index page introduces the guide scope and links to both parts
- Part 1: Local Development (setup, Keycloak, attributes, SDK integration)
- Part 2: Production Deployment (hardening, DNS, containers, config)
- Add discussion forum links for more examples
- Clarify Go SDK is encrypt-only, browser SDK handles decrypt
- Add step-by-step for assigning opentdf-standard to default roles
- Fix Go version to link to go.mod instead of hardcoding
- Fix VERIFY_PROFILE description accuracy

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add frontend with OIDC auth as first prerequisite
- Link to demo app examples throughout: auth setup, TDF client,
  policy manager, create-users script, local config, Railway deploy
- Note that production uses JWT auth from your own IdP

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Keep the domain-scoped GITHUB_TOKEN checks from PR #252.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>