feat(docs): add production deployment guide #259
Conversation
Two-part guide covering local development setup and production deployment of OpenTDF Platform. Based on real-world experience deploying the GLP-1 Tracker demo app to Railway. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces a comprehensive guide designed to assist developers in setting up and deploying OpenTDF applications. It provides a structured walkthrough for both local development environments and production deployments, addressing common challenges and offering best practices to ensure secure and efficient integration of OpenTDF capabilities.

Highlights
📝 Walkthrough

The PR adds GitHub token authentication support to the OpenAPI specification checking utility, with improved error handling and API response validation, and introduces a comprehensive three-part production deployment guide covering architecture overview, local development setup, and production-hardened configuration.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Code Review
This pull request introduces a comprehensive guide for deploying the OpenTDF Platform, covering both local development and production environments. The guide is well-structured and provides valuable information, including setup instructions, configuration examples, and troubleshooting tips. My review focuses on improving the clarity and correctness of the provided commands and configurations to ensure a smoother experience for users following the guide. I've identified a few areas for improvement: a potentially incorrect keyring configuration is shown in two places, a command snippet is missing a variable definition, and there's an opportunity to make instructions for using otdfctl more explicit.
Actionable comments posted: 1
🧹 Nitpick comments (1)
docs/guides/production-deployment-guide.md (1)
133-212: Consider extracting long configuration examples to `code_samples/`.

The YAML configuration blocks at lines 133-212 (79 lines) and 535-630 (95 lines) are substantial. As per coding guidelines, long examples should be kept in `code_samples/` and referenced from docs to improve maintainability and avoid duplication. For example, you could create:

- `code_samples/opentdf-local-config.yaml`
- `code_samples/opentdf-production-config.yaml`

Then reference them with includes or links in the documentation.
Example approach
Create the sample files in the `code_samples/` directory, then reference them in the docs:

> See the complete configuration in [`opentdf-local-config.yaml`](../../code_samples/opentdf-local-config.yaml)

This also applies to other lengthy code blocks like the multi-step bash scripts (lines 86-117, 258-298, etc.).
Also applies to: 535-630
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/guides/production-deployment-guide.md` around lines 133 - 212, Extract the long YAML configuration blocks (the large examples containing keys like logger, services.kas, entityresolution, server.tls/auth/cors, cryptoProvider and port) into standalone sample files under code_samples (suggested names: opentdf-local-config.yaml and opentdf-production-config.yaml), replace the inline code fences in the docs with a short include or link pointing to those files, and do the same extraction for other lengthy code blocks (multi-step bash scripts) into code_samples script files and reference them from the documentation to remove duplication and improve maintainability.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/guides/production-deployment-guide.md`:
- Around line 1-730: Rename the documentation file
production-deployment-guide.md to production-deployment-guide.mdx and update any
references; specifically, run a move (git mv production-deployment-guide.md
production-deployment-guide.mdx), ensure the new .mdx filename is used wherever
the old name is linked or imported (sidebar config, cross-doc links, and any
docs tooling configs), and commit the change so the documentation build picks up
the .mdx file.
---
Nitpick comments:
In `@docs/guides/production-deployment-guide.md`:
- Around line 133-212: Extract the long YAML configuration blocks (the large
examples containing keys like logger, services.kas, entityresolution,
server.tls/auth/cors, cryptoProvider and port) into standalone sample files
under code_samples (suggested names: opentdf-local-config.yaml and
opentdf-production-config.yaml), replace the inline code fences in the docs with
a short include or link pointing to those files, and do the same extraction for
other lengthy code blocks (multi-step bash scripts) into code_samples script
files and reference them from the documentation to remove duplication and
improve maintainability.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3ad968f4-e023-476a-ab5f-6d2e1c83d9dc
📒 Files selected for processing (1)
docs/guides/production-deployment-guide.md
📄 Preview deployed to https://opentdf-docs-pr-259.surge.sh
- Rename to .mdx per repo convention
- Add comment explaining duplicate kid entries (current + legacy)
- Note that otdfctl commands output IDs needed for subsequent steps
- Add TOKEN acquisition step to VERIFY_PROFILE example

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The check-vendored-yaml script hits the GitHub Contents API without authentication, causing rate-limit failures (403) in CI. The error manifested as a cryptic "contents is not iterable" TypeError because fetchJson didn't check HTTP status codes.

Changes:
- Use GITHUB_TOKEN env var for authenticated API requests (already passed by CI workflows, raises rate limit from 60 to 5000 req/hr)
- Add HTTP status code checking in fetchJson with clear error messages
- Validate that Contents API response is an array before iterating

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
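The commit's two fixes follow a pattern that can be sketched in shell. The function names below are illustrative, not the actual helpers in `check-vendored-yaml.ts`:

```shell
# Sketch of the commit's pattern: build an Authorization header only when
# GITHUB_TOKEN is set, and turn a non-200 status into a clear error
# instead of handing an error payload to the YAML/JSON parser.
build_auth_args() {
  if [ -n "${GITHUB_TOKEN:-}" ]; then
    printf 'Authorization: token %s' "$GITHUB_TOKEN"
  fi
}

check_status() {
  # $1 = HTTP status, $2 = URL, $3 = response body
  if [ "$1" != "200" ]; then
    # Truncate the body to 200 chars, mirroring the script's error format
    printf 'Failed to fetch %s: Status %s\n  Response: %.200s\n' "$2" "$1" "$3" >&2
    return 1
  fi
}
```

In a real fetch, `build_auth_args` would feed a `-H` flag to curl and `check_status` would gate the parse step, so a 403 rate-limit response fails loudly instead of producing an "is not iterable" error downstream.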
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/openapi/check-vendored-yaml.ts (1)
71-85: ⚠️ Potential issue | 🟡 Minor

`fetchText` lacks HTTP status checking, unlike `fetchJson`.

`fetchJson` now properly rejects on non-200 responses (lines 48-59), but `fetchText` still resolves with whatever body is returned, even on 404 or other errors. This affects `hasApiPaths()`, which uses `fetchText` to fetch raw file content — if a file doesn't exist or the request fails, it will attempt to parse the error response as YAML, leading to confusing errors or silent failures. Consider adding the same status check pattern:
Proposed fix
```diff
 https.get(url, { headers } as any, (response: any) => {
+  if (response.statusCode !== 200) {
+    let body = '';
+    response.on('data', (chunk: string) => { body += chunk; });
+    response.on('end', () => {
+      reject(new Error(
+        `Failed to fetch ${url}: Status ${response.statusCode}\n` +
+        `  Response: ${body.slice(0, 200)}`
+      ));
+    });
+    return;
+  }
   let data = '';
   response.on('data', (chunk: string) => { data += chunk; });
   response.on('end', () => resolve(data));
```
Verify each finding against the current code and only fix it if needed. In `@src/openapi/check-vendored-yaml.ts` around lines 71 - 85, fetchText currently resolves with any response body and doesn't check HTTP status, which lets 404/500 responses get parsed as YAML (used by hasApiPaths); update fetchText to mirror fetchJson's behavior: after https.get receives the response, collect the body, then check response.statusCode and if it's not in the 200–299 range reject the Promise with an Error that includes status and optionally body (or statusMessage); otherwise resolve the body. Keep the existing headers/GITHUB_TOKEN logic and error handlers, and follow the same response handling pattern used in fetchJson to ensure callers like hasApiPaths receive proper rejections on non-successful HTTP responses.
🧹 Nitpick comments (1)
src/openapi/check-vendored-yaml.ts (1)
19-38: Consider adding auth header to `downloadFile` for consistency.

`fetchJson` and `fetchText` now include the GitHub token for authentication, but `downloadFile` (which fetches from `raw.githubusercontent.com`) doesn't. While raw content from public repos doesn't typically require auth, adding it would provide consistency and help if rate limits become an issue.

Optional: Add auth header
```diff
 function downloadFile(url: string, dest: string): Promise<void> {
   return new Promise((resolve, reject) => {
     import('https').then(https => {
       const file = fs.createWriteStream(dest);
-      https.get(url, (response: any) => {
+      const headers: Record<string, string> = { 'User-Agent': 'opentdf-docs-check-vendored-yaml' };
+      if (GITHUB_TOKEN) {
+        headers['Authorization'] = `token ${GITHUB_TOKEN}`;
+      }
+      https.get(url, { headers } as any, (response: any) => {
         if (response.statusCode !== 200) {
```
Verify each finding against the current code and only fix it if needed. In `@src/openapi/check-vendored-yaml.ts` around lines 19 - 38, downloadFile currently uses https.get without any Authorization header; update the function (downloadFile) to include the GitHub token when present by passing an options object to https.get/https.request that sets the Authorization: `token ${process.env.GITHUB_TOKEN}` (or equivalent config used by fetchJson/fetchText), preserving existing error and stream handling and still unlinking dest on error; locate the downloadFile function and add the header logic so downloads from raw.githubusercontent.com use the same auth as fetchJson/fetchText.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@src/openapi/check-vendored-yaml.ts`:
- Around line 71-85: fetchText currently resolves with any response body and
doesn't check HTTP status, which lets 404/500 responses get parsed as YAML (used
by hasApiPaths); update fetchText to mirror fetchJson's behavior: after
https.get receives the response, collect the body, then check
response.statusCode and if it's not in the 200–299 range reject the Promise with
an Error that includes status and optionally body (or statusMessage); otherwise
resolve the body. Keep the existing headers/GITHUB_TOKEN logic and error
handlers, and follow the same response handling pattern used in fetchJson to
ensure callers like hasApiPaths receive proper rejections on non-successful HTTP
responses.
---
Nitpick comments:
In `@src/openapi/check-vendored-yaml.ts`:
- Around line 19-38: downloadFile currently uses https.get without any
Authorization header; update the function (downloadFile) to include the GitHub
token when present by passing an options object to https.get/https.request that
sets the Authorization: `token ${process.env.GITHUB_TOKEN}` (or equivalent
config used by fetchJson/fetchText), preserving existing error and stream
handling and still unlinking dest on error; locate the downloadFile function and
add the header logic so downloads from raw.githubusercontent.com use the same
auth as fetchJson/fetchText.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: df0dff54-ad33-4b91-b61f-b2a968a746d6
⛔ Files ignored due to path filters (1)
`static/img/filecontents.svg` is excluded by `!**/*.svg`
📒 Files selected for processing (2)
- `docs/guides/production-deployment-guide.mdx`
- `src/openapi/check-vendored-yaml.ts`
✅ Files skipped from review due to trivial changes (1)
- docs/guides/production-deployment-guide.mdx
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 2
🧹 Nitpick comments (4)
docs/guides/production-deployment-guide.mdx (4)
98-119: Consider using `jq` instead of `python3` for JSON parsing.

The shell examples use `python3 -c "import sys,json; ..."` for JSON extraction. Since `jq` is a common CLI tool and is already used elsewhere in documentation, it would be more consistent:

♻️ Suggested change
```diff
-  | python3 -c "import sys,json; print(json.load(sys.stdin)['access_token'])")
+  | jq -r '.access_token')
```

```diff
-  | python3 -c "import sys,json; print(json.load(sys.stdin)[0]['id'])")
+  | jq -r '.[0].id')
```

```diff
-  | python3 -c "import sys,json; print(json.load(sys.stdin)['id'])")
+  | jq -r '.id')
```
Verify each finding against the current code and only fix it if needed. In `@docs/guides/production-deployment-guide.mdx` around lines 98 - 119, Replace the inline Python JSON parsing used for extracting tokens/IDs in the ADMIN_TOKEN, ALICE_ID, and STD_ROLE_ID commands with jq calls to keep consistency with other docs; update the three occurrences that pipe curl output into python3 -c "import sys,json; ..." so they instead pipe into jq and extract the same fields (access_token, [0].id, id) and ensure the surrounding shell variable assignment syntax remains valid.
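The suggested `jq` extractions can be checked against sample payloads. The JSON bodies below are illustrative stand-ins for the real Keycloak responses the guide's curl calls return; only the field names (`access_token`, `id`) come from the review:

```shell
# Stand-in for the token endpoint response piped into `jq -r '.access_token'`
TOKEN_JSON='{"access_token":"abc123","expires_in":300}'
ADMIN_TOKEN=$(printf '%s' "$TOKEN_JSON" | jq -r '.access_token')

# Stand-in for a user-search response (array), piped into `jq -r '.[0].id'`
USERS_JSON='[{"id":"user-1","username":"alice"}]'
ALICE_ID=$(printf '%s' "$USERS_JSON" | jq -r '.[0].id')

# Stand-in for a role lookup response (object), piped into `jq -r '.id'`
ROLE_JSON='{"id":"role-9","name":"opentdf-standard"}'
STD_ROLE_ID=$(printf '%s' "$ROLE_JSON" | jq -r '.id')

echo "$ADMIN_TOKEN $ALICE_ID $STD_ROLE_ID"
```

The `-r` flag makes `jq` emit raw strings (no surrounding quotes), which keeps the shell variable assignments drop-in compatible with the existing `python3` pipelines.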
141-220: Consider extracting large YAML configs to `code_samples/`.

The two full `opentdf.yaml` examples (local dev ~80 lines, production ~87 lines) are substantial. As per coding guidelines, long examples should be kept in `code_samples/` and referenced from docs instead of duplicating. The collapsible `<details>` blocks help, but extraction would:
- Enable easier maintenance and testing of configs
- Allow users to download/copy the files directly
- Keep the guide focused on explanation rather than raw config
Example structure:

```text
code_samples/
  production-deployment/
    opentdf-local.yaml
    opentdf-production.yaml
```

Then reference with imports or links.
Also applies to: 568-654
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/guides/production-deployment-guide.mdx` around lines 141 - 220, The large inline YAML example blocks (the opentdf.yaml config in docs/guides/production-deployment-guide.mdx) should be extracted into standalone files under a new code_samples/production-deployment/ directory (e.g., opentdf-local.yaml and opentdf-production.yaml), commit those files, and replace the long inline fenced code blocks in the MDX with brief references or imports/links to the new files (keeping the <details> summaries but pointing to the external samples). Update any filenames referenced inside the YAML (e.g., kas-private.pem, kas-ec-private.pem) if necessary and ensure the doc still shows minimal example snippets or instructions on where to download/copy the full configs.
449: Clarify DPoP SDK version requirements.

The DPoP note mentions "Requires SDK and IdP support" but doesn't specify which SDK versions support DPoP. Consider adding a link to SDK release notes or version requirements.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/guides/production-deployment-guide.mdx` at line 449, The DPoP table row for the "DPoP" feature (the cell mentioning `enforceDPoP: false` → `enforceDPoP: true`) lacks SDK version requirements and links; update that cell to mention which SDK versions support DPoP and add a link to the SDK release notes or changelog—for example reference the Browser SDK and Backend SDK names and the minimum version numbers that implement DPoP, and include a concise link to the corresponding release notes or docs page so readers can verify compatibility before enabling `enforceDPoP`.
744-748: Consider adding a code example for browser-side decrypt.

The explanation of server-side vs client-side decrypt is valuable, but a brief code snippet showing the browser decrypt pattern would make this more actionable. The current text explains the "why" but not the "how."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/guides/production-deployment-guide.mdx` around lines 744 - 748, Under the "Server-side vs client-side decrypt" heading add a short browser-focused example that shows (1) acquiring the user's Keycloak access token via the Keycloak JS adapter (e.g., keycloak.init() and keycloak.token), (2) calling the KAS decrypt endpoint (use an Authorization: Bearer <token> header) with the ciphertext payload, and (3) handling the decrypted plaintext in the browser; reference the existing Subject Mapping Guide and GLP-1 Tracker demo for full flow and include brief notes about security (never send the user's token to untrusted parties).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/guides/production-deployment-guide.mdx`:
- Line 492: Update the paragraph that currently mentions "Keycloak 25" and the
explanation of VERIFY_PROFILE: change the version reference to "Keycloak
24.0.0", remove the claim that VERIFY_PROFILE requires a specific kc.org
attribute, and replace it with a concise statement that VERIFY_PROFILE is
triggered when a user's profile does not comply with the realm's User Profile
configuration; also note that this enforcement is enabled by default for new
realms while existing realms keep their prior setting. Ensure the text still
references VERIFY_PROFILE and User Profile so readers can find the correct
Keycloak behavior.
- Line 51: Update the Go version requirement text in the production deployment
guide by replacing the current "Go 1.21+" wording with "Go 1.24+" so the docs
match the platform's minimum; locate the string in
docs/guides/production-deployment-guide.mdx (the line containing "Go 1.21+ (to
run the platform from source — needed for provisioning)") and change it to "Go
1.24+ (to run the platform from source — needed for provisioning)".
---
Nitpick comments:
In `@docs/guides/production-deployment-guide.mdx`:
- Around line 98-119: Replace the inline Python JSON parsing used for extracting
tokens/IDs in the ADMIN_TOKEN, ALICE_ID, and STD_ROLE_ID commands with jq calls
to keep consistency with other docs; update the three occurrences that pipe curl
output into python3 -c "import sys,json; ..." so they instead pipe into jq and
extract the same fields (access_token, [0].id, id) and ensure the surrounding
shell variable assignment syntax remains valid.
- Around line 141-220: The large inline YAML example blocks (the opentdf.yaml
config in docs/guides/production-deployment-guide.mdx) should be extracted into
standalone files under a new code_samples/production-deployment/ directory
(e.g., opentdf-local.yaml and opentdf-production.yaml), commit those files, and
replace the long inline fenced code blocks in the MDX with brief references or
imports/links to the new files (keeping the <details> summaries but pointing to
the external samples). Update any filenames referenced inside the YAML (e.g.,
kas-private.pem, kas-ec-private.pem) if necessary and ensure the doc still shows
minimal example snippets or instructions on where to download/copy the full
configs.
- Line 449: The DPoP table row for the "DPoP" feature (the cell mentioning
`enforceDPoP: false` → `enforceDPoP: true`) lacks SDK version requirements and
links; update that cell to mention which SDK versions support DPoP and add a
link to the SDK release notes or changelog—for example reference the Browser SDK
and Backend SDK names and the minimum version numbers that implement DPoP, and
include a concise link to the corresponding release notes or docs page so
readers can verify compatibility before enabling `enforceDPoP`.
- Around line 744-748: Under the "Server-side vs client-side decrypt" heading
add a short browser-focused example that shows (1) acquiring the user's Keycloak
access token via the Keycloak JS adapter (e.g., keycloak.init() and
keycloak.token), (2) calling the KAS decrypt endpoint (use an Authorization:
Bearer <token> header) with the ciphertext payload, and (3) handling the
decrypted plaintext in the browser; reference the existing Subject Mapping Guide
and GLP-1 Tracker demo for full flow and include brief notes about security
(never send the user's token to untrusted parties).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: a4d43797-d58f-4f39-858d-d204b44c18f4
📒 Files selected for processing (1)
docs/guides/production-deployment-guide.mdx
> ### Prerequisites
>
> - Docker and Docker Compose
> - Go 1.21+ (to run the platform from source — needed for provisioning)
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Check go.mod in the platform repo for actual Go version requirement
cat go.mod 2>/dev/null | grep -E "^go [0-9]+\.[0-9]+" || echo "go.mod not found in current directory"

# Also check if there's a .go-version file
fd -t f "\.go-version$" --exec cat {}
```

Repository: opentdf/docs
Length of output: 93
🏁 Script executed:
```shell
# Search for any references to go.mod or Go versions in the docs
rg "go\.mod|Go 1\.[0-9]+|golang" --type mdx --type md -i | head -20

# Check for any references to the platform repository or links
rg "opentdf|platform.*repo|github\.com" --type mdx --type md | head -10

# Look for any configuration files that might reference Go version
fd -e toml -e yaml -e yml -e json | xargs grep -l "go" 2>/dev/null | head -5
```

Repository: opentdf/docs
Length of output: 297
🌐 Web query:
opentdf platform Go version requirement minimum version 2024 2025
💡 Result:
- 2024 (OpenTDF Platform SDK v0.1.0, released 2024-04-22): workspace requires Go 1.22 (see `go.work`: `go 1.22`). [1][2]
- 2025: workspace moved to Go 1.24.x:
  - 2025-10-21 (sdk/v0.10.0): `go 1.24.0` (and pins `toolchain go1.24.6`). [3]
  - 2025-11-19 (service/v0.11.6): `go 1.24.9`. [4]
  - 2025-12-19 (protocol/go/v0.14.0): `go 1.24.11`. [5]

So the minimum Go version was Go 1.22 in 2024, and Go 1.24 (starting at 1.24.0) in 2025. [1][3]
Update Go version requirement to reflect current platform minimum.
Documentation specifies Go 1.21+, but the OpenTDF platform has required Go 1.22 since 2024 (SDK v0.1.0) and moved to Go 1.24+ in 2025. Update line 51 to require Go 1.24+ to match current platform requirements.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/guides/production-deployment-guide.mdx` at line 51, Update the Go
version requirement text in the production deployment guide by replacing the
current "Go 1.21+" wording with "Go 1.24+" so the docs match the platform's
minimum; locate the string in docs/guides/production-deployment-guide.mdx (the
line containing "Go 1.21+ (to run the platform from source — needed for
provisioning)") and change it to "Go 1.24+ (to run the platform from source —
needed for provisioning)".
> #### Required Keycloak fixes for production
>
> **Disable VERIFY_PROFILE**: Keycloak 25 requires a `kc.org` attribute that blocks login. Disable it:
🧩 Analysis chain
🌐 Web query:
Keycloak VERIFY_PROFILE kc.org attribute requirement version
💡 Result:

- **What attributes does `VERIFY_PROFILE` require?**
  `VERIFY_PROFILE` (aka `verify-profile`) doesn't hardcode a specific attribute list. It triggers when the user's profile is not compliant with the realm's User Profile configuration (for example: missing attributes that are marked required, or values failing configured validators). Keycloak describes it as requiring users to update their profile "as long as some user attributes do not match the requirements according to the user profile configuration." [1]
- **Which Keycloak version made this behavior "on by default" for new realms?**
  Keycloak 24.0.0 enabled the `verify-profile` required action by default for new realms (not automatically for upgraded realms). The upgrading guide states it's enabled by default for new realms, while existing realms keep their previous setting (typically disabled), and the 24.0.0 release notes include the change ("Enable verify-profile required action by default"). [2] [3]
Sources:
[1] Keycloak Server Administration Guide (latest) — Required actions / verify profile behavior
[2] Keycloak Upgrading Guide (latest) — “Verify Profile required action enabled by default” section
[3] Keycloak 24.0.0 release notes — item “Enable verify-profile required action by default” (#25985)
Correct the Keycloak version reference and VERIFY_PROFILE explanation.
The documentation incorrectly states "Keycloak 25" when the relevant change was introduced in Keycloak 24.0.0. Additionally, VERIFY_PROFILE doesn't require a specific kc.org attribute—it's triggered when a user's profile fails to comply with the realm's User Profile configuration. The behavior is also enabled by default only for new realms; existing realms retain their previous setting. Update the documentation to accurately reflect these details.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/guides/production-deployment-guide.mdx` at line 492, Update the
paragraph that currently mentions "Keycloak 25" and the explanation of
VERIFY_PROFILE: change the version reference to "Keycloak 24.0.0", remove the
claim that VERIFY_PROFILE requires a specific kc.org attribute, and replace it
with a concise statement that VERIFY_PROFILE is triggered when a user's profile
does not comply with the realm's User Profile configuration; also note that this
enforcement is enabled by default for new realms while existing realms keep
their prior setting. Ensure the text still references VERIFY_PROFILE and User
Profile so readers can find the correct Keycloak behavior.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Index page introduces the guide scope and links to both parts
- Part 1: Local Development (setup, Keycloak, attributes, SDK integration)
- Part 2: Production Deployment (hardening, DNS, containers, config)
- Add discussion forum links for more examples
- Clarify Go SDK is encrypt-only, browser SDK handles decrypt
- Add step-by-step for assigning opentdf-standard to default roles
- Fix Go version to link to go.mod instead of hardcoding
- Fix VERIFY_PROFILE description accuracy

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add frontend with OIDC auth as first prerequisite
- Link to demo app examples throughout: auth setup, TDF client, policy manager, create-users script, local config, Railway deploy
- Note that production uses JWT auth from your own IdP

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Keep the domain-scoped GITHUB_TOKEN checks from PR #252. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary
Test plan
- `npm run check-vendored-yaml` passes
- `npm run build` succeeds with no new broken links

🤖 Generated with Claude Code
Summary by CodeRabbit
Documentation
Improvements