|
  * ## Related Components
  *
  * Services work together with other Stackattack components:
- * - [cluster](/components/cluster) - Provides compute capacity for running services
- * - [vpc](/components/vpc) - Provides networking foundation with private/public subnets
- * - [load-balancer](/components/load-balancer) - Routes external traffic to services
+ * - [cluster](/components/cluster/) - Provides compute capacity for running services
+ * - [vpc](/components/vpc/) - Provides networking foundation with private/public subnets
+ * - [load-balancer](/components/load-balancer/) - Routes external traffic to services
  *
  * ## Costs
  *
  * ECS service costs depend on the underlying compute resources and are **usage-based**:
  *
- * - **EC2 instances** - If using EC2 capacity providers, you pay for the underlying EC2 instances (~$0.0116/hour for t3.micro). The [cluster](/components/cluster) component manages auto-scaling groups that can scale to zero when no tasks are running.
+ * - **EC2 instances** - If using EC2 capacity providers, you pay for the underlying EC2 instances (~$0.0116/hour for t3.micro). The [cluster](/components/cluster/) component manages auto-scaling groups that can scale to zero when no tasks are running.
  *
  * - **Fargate** - If using Fargate capacity providers, you pay per vCPU-hour (~$0.04048/vCPU/hour) and per GB-hour (~$0.004445/GB/hour). A 0.5 vCPU, 1GB task running 24/7 costs ~$18/month.
  *
|
  *
  * - **CloudWatch Logs** - Log storage is ~$0.50/GB/month. Use the `logRetention` parameter to automatically delete old logs and control costs.
  *
  * Cost optimization strategies:
- * - Use the [cluster](/components/cluster) component's auto-scaling features to scale EC2 instances to zero during low usage
+ * - Use the [cluster](/components/cluster/) component's auto-scaling features to scale EC2 instances to zero during low usage
  * - Set appropriate `logRetention` periods (default: 30 days)
  * - Consider spot instances for non-critical workloads through capacity provider configuration
  *
@@ -345,7 +345,7 @@ export interface ServiceArgs extends TaskDefinitionArgs { |
   orderedPlacementStrategies?: pulumi.Input<
     pulumi.Input<aws.types.input.ecs.ServiceOrderedPlacementStrategy>[]
   >;
-  /** Specify an auto-scaling configuration for your service. Cannot be used with `replicas`. See the [serviceAutoScaling](/components/service-autoscaling) component for argument documentation. */
+  /** Specify an auto-scaling configuration for your service. Cannot be used with `replicas`. See the [serviceAutoScaling](/components/service-autoscaling/) component for argument documentation. */
   autoScaling?: Omit<ServiceAutoScalingArgs, "service">;
 }

|
|