
Commit 35d2296

feat(run-tasks): add run tasks integration support
This commit adds comprehensive support for Terraform Cloud/Enterprise Run Tasks Integration to the python-tfe SDK. This feature allows developers to create webhook servers that can validate Terraform runs and send results back to TFC/TFE.

Key additions:
- Production-ready webhook server example with deployment instructions
- Complete documentation explaining architecture and flow
- Support for multiple cloud deployment platforms (AWS EC2, Heroku, GCP, etc.)
- Comprehensive validation examples (cost control, security, compliance)
- Clean implementation following HashiCorp patterns
1 parent 40471ee commit 35d2296

2 files changed

Lines changed: 561 additions & 0 deletions

# Run Tasks Integration Example - Explanation

## What is `examples/run_tasks_integration.py`?

It's a **webhook server** that integrates with Terraform Cloud/Enterprise (TFC/TFE) run tasks. This is NOT a test file - it's a fully functional example server that you can deploy and customize.

---

## How It Works: The Complete Flow

### Step 1: You Start the Server
```bash
python examples/run_tasks_integration.py --port 8888
```

The server starts and waits for incoming webhooks from TFC/TFE.

### Step 2: Configure in TFC/TFE
You configure a run task in TFC/TFE pointing to your server:
- **URL**: `http://your-server:8888`
- **Stage**: When to run (pre-plan, post-plan, pre-apply, post-apply)
- **Enforcement**: Advisory (warn) or Mandatory (block)

### Step 3: Someone Triggers a Terraform Run
When a user clicks "Start Run" in TFC/TFE or pushes code:

```
User triggers run
        ↓
TFC/TFE prepares the run
        ↓
TFC/TFE sends webhook → http://your-server:8888
```

### Step 4: Your Server Receives the Webhook
The webhook payload contains:
```json
{
  "run_id": "run-abc123",
  "workspace_name": "prod-app",
  "organization_name": "my-company",
  "stage": "pre_plan",
  "access_token": "secret-token",
  "task_result_callback_url": "https://app.terraform.io/api/v2/task-results/xyz",
  ...
}
```

### Step 5: Your Server Processes It
```python
# Parse the incoming webhook
request = RunTaskRequest.model_validate(payload)

# YOUR CUSTOM VALIDATION LOGIC HERE
# Examples:
# - Check if resources have required tags
# - Validate naming conventions
# - Run security scans (Checkov, tfsec, etc.)
# - Check cost estimates
# - Verify compliance policies
# - Check for sensitive data in configs

result_status = "passed"  # or "failed"
result_message = "All checks passed!"
```

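Before running any validation logic, it is worth rejecting requests that don't actually come from TFC/TFE. When the run task is configured with an HMAC key, TFC/TFE signs each request body with HMAC-SHA512 and sends the hex digest in the `x-tfc-task-signature` header. A minimal check might look like this (the key `secret-hmac` is illustrative):

```python
# Sketch: verifying the x-tfc-task-signature header on an incoming webhook.
import hashlib
import hmac

def verify_signature(body: bytes, signature: str, hmac_key: str) -> bool:
    """Return True if the signature header matches the HMAC-SHA512 of the body."""
    expected = hmac.new(hmac_key.encode(), body, hashlib.sha512).hexdigest()
    # compare_digest avoids timing side channels when comparing secrets
    return hmac.compare_digest(expected, signature)

# Simulate what TFC/TFE would send for a given body and shared key:
body = b'{"run_id": "run-abc123"}'
sig = hmac.new(b"secret-hmac", body, hashlib.sha512).hexdigest()
assert verify_signature(body, sig, "secret-hmac")
assert not verify_signature(body, sig, "wrong-key")
```

Requests that fail this check should be answered with an error status and never forwarded to your validation logic.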
### Step 6: Your Server Sends Results Back
```python
client.run_tasks_integration.callback(
    callback_url=request.task_result_callback_url,
    access_token=request.access_token,
    options=TaskResultCallbackOptions(
        status="passed",  # or "failed"
        message="All checks passed!",
        url="https://your-dashboard.com/results",
        outcomes=[
            TaskResultOutcome(
                outcome_id="check-1",
                description="Security scan passed",
                body="No vulnerabilities found",
                tags={
                    "Status": [TaskResultTag(label="Passed", level="info")],
                    "Severity": [TaskResultTag(label="Low")]
                }
            )
        ]
    )
)
```

### Step 7: TFC/TFE Receives and Displays Results
In the TFC/TFE UI, users see:
- ✅ **Run Task Status**: Passed or Failed
- 📝 **Message**: Your custom message
- 📊 **Outcomes**: Detailed results with tags
- 🔗 **Link**: To your detailed results page

If mandatory and failed → Run is blocked ⛔
If advisory and failed → Run continues with warning ⚠️

---

## Real-World Use Cases

### Example 1: Cost Control
```python
# Check estimated costs
if estimated_cost > 10000:
    result_status = "failed"
    result_message = f"Cost ${estimated_cost} exceeds budget limit"
```

### Example 2: Production Safety
```python
# Require approval for production
if request.workspace_name.startswith("prod-"):
    result_status = "failed"
    result_message = "Production changes require manual approval"
```

### Example 3: Security Scanning
```python
# Run Checkov security scan
scan_results = run_checkov(request.configuration_version_download_url)
if scan_results.has_critical_issues:
    result_status = "failed"
    result_message = f"Found {len(scan_results.critical)} critical security issues"
```

### Example 4: Tagging Enforcement
```python
# Check if all resources have required tags
if not all_resources_have_tags(config, required_tags=["owner", "project"]):
    result_status = "failed"
    result_message = "All resources must have 'owner' and 'project' tags"
```

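The `all_resources_have_tags` helper above is hypothetical; one way to sketch it is to walk the planned resources of a `terraform show -json`-style plan. The exact plan structure you receive depends on how you fetch the configuration, so treat the paths below as illustrative:

```python
# Sketch: a possible implementation of the all_resources_have_tags helper,
# assuming a Terraform plan dict in `terraform show -json` format.

def all_resources_have_tags(plan: dict, required_tags: list[str]) -> bool:
    """Check that every planned resource carries each required tag key."""
    resources = (
        plan.get("planned_values", {}).get("root_module", {}).get("resources", [])
    )
    for res in resources:
        tags = (res.get("values") or {}).get("tags") or {}
        if any(tag not in tags for tag in required_tags):
            return False
    return True

plan = {
    "planned_values": {
        "root_module": {
            "resources": [
                {"address": "aws_instance.app",
                 "values": {"tags": {"owner": "team-a", "project": "web"}}},
                {"address": "aws_s3_bucket.logs",
                 "values": {"tags": {"owner": "team-a"}}},  # missing "project"
            ]
        }
    }
}
print(all_resources_have_tags(plan, ["owner", "project"]))  # → False
```

A production version would also need to descend into child modules, which the sketch skips for brevity.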
### Example 5: Compliance Checking
```python
# Check against compliance policies
if not meets_compliance_standards(config):
    result_status = "failed"
    result_message = "Configuration violates compliance policy XYZ-123"
```

---

## What the Example Demonstrates

The example file shows you how to:

- ✅ **Receive webhooks** from TFC/TFE using a simple HTTP server
- ✅ **Parse `RunTaskRequest`** - the webhook payload from TFC/TFE
- ✅ **Access run information** - workspace, organization, stage, run ID
- ✅ **Add custom validation logic** - where you insert your checks
- ✅ **Create detailed outcomes** - with descriptions, tags, and links
- ✅ **Send results back** - using the `callback()` method
- ✅ **Handle errors gracefully** - proper error handling and responses

---

## Why This Example is Important

### Without Run Tasks Integration:
- ❌ Manual code reviews for every change
- ❌ Inconsistent policy enforcement
- ❌ Security issues discovered after deployment
- ❌ Cost overruns without warnings

### With Run Tasks Integration:
- ✅ Automated validation before apply
- ✅ Consistent policy enforcement
- ✅ Security issues caught early
- ✅ Cost controls built into workflow
- ✅ Detailed audit trail
- ✅ Custom business logic enforcement

---

## How to Use This Example

### 1. Basic Usage (Local Testing)
```bash
# Start the server
python examples/run_tasks_integration.py --port 8888

# In another terminal, test with mock data
python test_run_tasks_local.py
```

### 2. Deploy to Cloud (Real Usage)
```bash
# On your cloud server (EC2, Azure, GCP, etc.)
python examples/run_tasks_integration.py --port 8888

# Configure in TFC/TFE:
# URL: http://your-server-ip:8888
```

### 3. Customize the Logic
Edit the example file around lines 54-67:
```python
# Replace this section with your custom checks
# Example: Check workspace naming
if not request.workspace_name.startswith(("dev-", "prod-", "staging-")):
    result_status = "failed"
    result_message = "Workspace must be prefixed with dev-, prod-, or staging-"
```

---

## Key Components Used

### 1. `RunTaskRequest`
Parses the incoming webhook from TFC/TFE:
- `run_id` - The Terraform run ID
- `workspace_name` - Which workspace
- `organization_name` - Which organization
- `stage` - When it's running (pre-plan, post-plan, etc.)
- `access_token` - Token for sending the callback
- `task_result_callback_url` - Where to send results

### 2. `TaskResultCallbackOptions`
Defines the result to send back:
- `status` - "passed", "failed", "running"
- `message` - Short summary
- `url` - Link to detailed results (optional)
- `outcomes` - Detailed results list (optional)

### 3. `TaskResultOutcome`
Individual check result:
- `outcome_id` - Unique identifier
- `description` - What was checked
- `body` - Detailed explanation
- `url` - Link to more info
- `tags` - Categorization (Status, Severity, etc.)

### 4. `TaskResultTag`
Tag for categorization:
- `label` - Tag name (e.g., "Critical", "Passed")
- `level` - Severity (e.g., "error", "warning", "info")

### 5. `run_tasks_integration.callback()`
Sends results back to TFC/TFE:
- Uses the callback URL from the webhook
- Authenticates with the access token
- Sends structured result data

---

## Testing Strategy

### Level 1: Unit Tests ✅
```bash
pytest tests/units/test_run_tasks_integration.py
```
Tests parsing and validation logic.

### Level 2: Local Integration ✅
```bash
python test_run_tasks_local.py
```
Simulates the complete flow with a mock TFC/TFE server.
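The idea behind that local test can be sketched with nothing but the standard library: stand up a tiny HTTP endpoint and POST a mock run-task payload at it, playing the role of TFC/TFE. The handler below is a stand-in, not the actual example server, and the payload mirrors the webhook shown in Step 4:

```python
# Sketch: simulating a TFC/TFE run-task webhook against a local endpoint.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

received = {}  # captures whatever the "server" was sent

class MockRunTaskHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        received.update(json.loads(self.rfile.read(length)))
        self.send_response(200)  # TFC/TFE expects a 200 ack; results come later via callback
        self.end_headers()

    def log_message(self, *args):
        pass  # keep test output quiet

server = HTTPServer(("127.0.0.1", 0), MockRunTaskHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

payload = {"run_id": "run-abc123", "workspace_name": "prod-app", "stage": "pre_plan"}
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    assert resp.status == 200
server.shutdown()
print(received["run_id"])  # → run-abc123
```

Acknowledging with 200 first and reporting the verdict later via the callback is what keeps slow checks from blocking TFC/TFE.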

### Level 3: Cloud Deployment ✅
Deploy to EC2/cloud and test with real webhooks.

### Level 4: Real HCP Terraform ✅
Configure in actual TFC/TFE and trigger real runs.

---

## Summary

**What it is**: A working webhook server that integrates with TFC/TFE run tasks

**What it does**: Receives run information, validates it, sends results back

**Why it's important**: Enables automated policy enforcement and custom validation

**How to use it**: Deploy the server, configure in TFC/TFE, customize the validation logic

**Not a test**: It's a functional example you can deploy and use in production!

---

## Next Steps

1. ✅ Review the example code
2. ✅ Test locally with `test_run_tasks_local.py`
3. ✅ Customize validation logic for your needs
4. ✅ Deploy to cloud server
5. ✅ Configure in TFC/TFE
6. ✅ Monitor and iterate

**The example gives you everything you need to build your own run tasks integration!**
