diff --git a/README.md b/README.md new file mode 100644 index 0000000..017fd44 --- /dev/null +++ b/README.md @@ -0,0 +1,464 @@ +# Amazon EKS cluster management using kro & ACK + +This example demonstrates how to manage a fleet of Amazon EKS clusters using kro, ACK (AWS Controllers for Kubernetes), and Argo CD across multiple regions and accounts. You'll learn how to create EKS clusters and bootstrap them with required add-ons. + +The solution implements a hub-spoke model where a management cluster (hub) is created during initial setup, with EKS capabilities (kro, ACK and Argo CD) enabled for provisioning and bootstrapping workload clusters (spokes) via a GitOps flow. + +![EKS cluster management using kro & ACK](docs/eks-cluster-mgmt-central.drawio.png) + +## Prerequisites + +1. AWS account for the management cluster, and optional AWS accounts for spoke clusters (management account can be reused for spokes) +2. AWS IAM Identity Center (IdC) is enabled in the management account +3. GitHub account and a valid GitHub Token +4. GitHub [cli](https://cli.github.com/) +5. Argo CD [cli](https://argo-cd.readthedocs.io/en/stable/cli_installation/) +6. Terraform [cli](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) +7. AWS [cli v2.32.27+](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) + +## Instructions + +### Configure workspace + +1. 
Create variables + + First, set these environment variables that typically don't need modification: + + ```sh + export KRO_REPO_URL="https://github.com/kubernetes-sigs/kro.git" + export WORKING_REPO="eks-cluster-mgmt" # Try to keep this default name as it is referenced in terraform and gitops configs + export TF_VAR_FILE="terraform.tfvars" # the name of terraform configuration file to use + ``` + + Then customize these variables for your specific environment: + + ```sh + export MGMT_ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account) # Or update to the AWS account to use for your management cluster + export AWS_REGION="us-west-2" # change to your preferred region + export WORKSPACE_PATH="$HOME" # the directory where repos will be cloned + export GITHUB_ORG_NAME="iamahgoub" # your Github username or organization name + ``` + +2. Clone kro repository + + ```sh + git clone $KRO_REPO_URL $WORKSPACE_PATH/kro + ``` + +3. Create your working GitHub repository + + Create a new repository using the GitHub CLI or through the GitHub website: + + ```sh + gh repo create $WORKING_REPO --private + ``` + +4. Clone the working empty git repository + + ```sh + gh repo clone $WORKING_REPO $WORKSPACE_PATH/$WORKING_REPO + ``` + +5. Populate the repository + + ```sh + cp -r $WORKSPACE_PATH/kro/examples/aws/eks-cluster-mgmt/* $WORKSPACE_PATH/$WORKING_REPO/ + ``` + +6. Update the Spoke accounts + + If deploying EKS clusters across multiple AWS accounts, update the configuration below. Even for single account deployments, you must specify the AWS account for each namespace. 
+ + ```sh + code $WORKSPACE_PATH/$WORKING_REPO/addons/tenants/tenant1/default/addons/multi-acct/values.yaml + ``` + + Values: + + ```yaml + clusters: + workload-cluster1: "012345678910" # AWS account for workload cluster 1 + workload-cluster2: "123456789101" # AWS account for workload cluster 2 + ``` + + > Note: If you only want to use a single AWS account, reuse the AWS account of your management cluster for the other workload clusters. + +7. Add, Commit and Push changes + + ```sh + cd $WORKSPACE_PATH/$WORKING_REPO/ + git status + git add . + git commit -s -m "initial commit" + git push + ``` + +### Create the Management cluster + +1. Update the terraform.tfvars file with your values + + Modify the terraform.tfvars file with your GitHub working repo details: + - Set `git_org_name` + - Update any `gitops_xxx` values if you modified the proposed setup (git path, branch...) + - Confirm `gitops_xxx_repo_name` is "eks-cluster-mgmt" (or update if modified) + - Configure `accounts_ids` with the list of AWS accounts for spoke clusters (use the management account ID if creating spoke clusters in the same account) + + ```sh + # edit: terraform.tfvars + code $WORKSPACE_PATH/$WORKING_REPO/terraform/hub/terraform.tfvars + ``` + +1. Log in to your AWS management account + + Connect to your AWS management account using your preferred authentication method: + + ```sh + export AWS_PROFILE=management_account # use your own profile or ensure you're connected to the appropriate account + ``` + +1. Apply the terraform to create the management cluster: + + ```sh + cd $WORKSPACE_PATH/$WORKING_REPO/terraform/hub + ./install.sh + ``` + + Review the proposed changes and accept to deploy. + + > Note: EKS capabilities are not yet supported by the Terraform AWS provider, so we create them manually using CLI commands. + +1. 
Retrieve terraform outputs and set them as environment variables: + + ```sh + export CLUSTER_NAME=$(terraform output -raw cluster_name) + export ACK_CONTROLLER_ROLE_ARN=$(terraform output -raw ack_controller_role_arn) + export KRO_CONTROLLER_ROLE_ARN=$(terraform output -raw kro_controller_role_arn) + export ARGOCD_CONTROLLER_ROLE_ARN=$(terraform output -raw argocd_controller_role_arn) + ``` + +1. Create ACK capability + ```sh + aws eks create-capability \ + --region ${AWS_REGION} \ + --cluster-name ${CLUSTER_NAME} \ + --capability-name ${CLUSTER_NAME}-ack \ + --type ACK \ + --role-arn ${ACK_CONTROLLER_ROLE_ARN} \ + --delete-propagation-policy RETAIN + ``` + +1. Next, create the Argo CD capability. This requires IdC to be enabled in the management account. Before creating the capability, store the IdC instance details and the user that will access Argo CD in environment variables: + + ```sh + export IDC_INSTANCE_ARN='' + export IDC_USER_ID='' + export IDC_REGION='' + ``` + +1. Create Argo CD capability + + ```sh + aws eks create-capability \ + --region ${AWS_REGION} \ + --cluster-name ${CLUSTER_NAME} \ + --capability-name ${CLUSTER_NAME}-argocd \ + --type ARGOCD \ + --role-arn ${ARGOCD_CONTROLLER_ROLE_ARN} \ + --delete-propagation-policy RETAIN \ + --configuration '{ + "argoCd": { + "awsIdc": { + "idcInstanceArn": "'${IDC_INSTANCE_ARN}'", + "idcRegion": "'${IDC_REGION}'" + }, + "rbacRoleMappings": [{ + "role": "ADMIN", + "identities": [{ + "id": "'${IDC_USER_ID}'", + "type": "SSO_USER" + }] + }] + } + }' + ``` + +1. Create kro capability + + ```sh + aws eks create-capability \ + --region ${AWS_REGION} \ + --cluster-name ${CLUSTER_NAME} \ + --capability-name ${CLUSTER_NAME}-kro \ + --type KRO \ + --role-arn ${KRO_CONTROLLER_ROLE_ARN} \ + --delete-propagation-policy RETAIN + ``` + +1. Make sure all the capabilities are now enabled by checking their status using the console or the `describe-capability` command. 
For example: + ```sh + aws eks describe-capability \ + --region ${AWS_REGION} \ + --cluster-name ${CLUSTER_NAME} \ + --capability-name ${CLUSTER_NAME}-argocd \ + --query 'capability.status' \ + --output text + ``` + + Modify and run the command above for the other capabilities to make sure they are all `ACTIVE`. + +1. Retrieve the Argo CD server URL and log in with the user provided during the capability creation: + ```sh + export ARGOCD_SERVER=$(aws eks describe-capability \ + --cluster-name ${CLUSTER_NAME} \ + --capability-name ${CLUSTER_NAME}-argocd \ + --query 'capability.configuration.argoCd.serverUrl' \ + --output text \ + --region ${AWS_REGION}) + + echo ${ARGOCD_SERVER} + export ARGOCD_SERVER=${ARGOCD_SERVER#https://} + ``` + +1. Generate an account token from the Argo CD UI (Settings → Accounts → admin → Generate New Token), then set it as an environment variable: + ```sh + export ARGOCD_AUTH_TOKEN="" + export ARGOCD_OPTS="--grpc-web" + ``` + +1. Configure GitHub repository access (if using a private repository). We automate this process using the Argo CD CLI; you can also configure it in the web interface under "Settings / Repositories". + + ```sh + export GITHUB_TOKEN="" + argocd repo add https://github.com/$GITHUB_ORG_NAME/$WORKING_REPO.git --username $GITHUB_ORG_NAME --password $GITHUB_TOKEN --upsert --name github + ``` + + > Note: If you encounter the error "Failed to load target state: failed to generate manifest for source 1 of 1: rpc error: code = Unknown desc = authentication required", verify your GitHub token settings. + +1. Connect to the cluster + + ```sh + aws eks update-kubeconfig --name hub-cluster + ``` + +1. 
Install Argo CD App of App: + ```sh + kubectl apply -f $WORKSPACE_PATH/$WORKING_REPO/terraform/hub/bootstrap/applicationsets.yaml + ``` + +### Bootstrap Spoke accounts + +For the management cluster to create resources in the spoke AWS accounts, we need to create IAM roles in the spoke accounts that the ACK capability in the management account can assume. + +> Note: Even if you're only testing this in the management account, you still need to perform this procedure, replacing the list of spoke account numbers with the management account number. + +We provide a script for this. Connect to each of your spoke accounts in turn and execute the script. + +1. Log in to your AWS Spoke account + + Connect to your AWS spoke account. This example uses specific profiles, but adapt this to your own setup: + + ```sh + export AWS_PROFILE=spoke_account1 # use your own profile or ensure you're connected to the appropriate account + ``` + +2. Execute the script to configure IAM roles + + ```sh + cd $WORKSPACE_PATH/$WORKING_REPO/scripts + ./create_ack_workload_roles.sh + ``` + +Repeat these steps for each spoke account you want to use with the solution. + +### Create a Spoke cluster + +Update the following configuration in $WORKSPACE_PATH/$WORKING_REPO: + +1. Add cluster creation by kro + + Edit the file: + + ```sh + code $WORKSPACE_PATH/$WORKING_REPO/fleet/kro-values/tenants/tenant1/kro-clusters/values.yaml + ``` + + Configure the AWS accounts for management and spoke accounts: + + ```yaml + workload-cluster1: + managementAccountId: "012345678910" # replace with your management cluster AWS account ID + accountId: "123456789101" # replace with your spoke workload cluster AWS account ID (can be the same) + tenant: "tenant1" # We have only configured tenant1 in the repo. 
If you change it, you need to duplicate all tenant1 directories + k8sVersion: "1.30" + workloads: "true" # Set to true if you want to deploy the workloads namespaces and applications + gitops: + addonsRepoUrl: "https://github.com/XXXXX/eks-cluster-mgmt" # replace with your GitHub account + fleetRepoUrl: "https://github.com/XXXXX/eks-cluster-mgmt" + platformRepoUrl: "https://github.com/XXXXX/eks-cluster-mgmt" + workloadRepoUrl: "https://github.com/XXXXX/eks-cluster-mgmt" + ``` + +2. Add, Commit and Push + + ```sh + cd $WORKSPACE_PATH/$WORKING_REPO/ + git status + git add . + git commit -s -m "add workload cluster" + git push + ``` + +3. After some time, the cluster should be created in the spoke account. + + ```sh + kubectl get EksClusterwithvpcs -A + ``` + + ```sh + NAMESPACE NAME STATE SYNCED AGE + argocd workload-cluster1 ACTIVE True 36m + ``` + + If you see `STATE=ERROR`, this may be transient, as it takes some time for all dependencies to become ready. Check the logs of the kro and ACK controllers for possible configuration errors. + + You can also list resources created by kro to validate their status: + + ```sh + kubectl get vpcs.kro.run -A + kubectl get vpcs.ec2.services.k8s.aws -A -o yaml # check for errors + ``` + + If you see errors, double-check the multi-cluster account settings and verify that IAM roles in both management and workload AWS accounts are properly configured. + + When VPCs are ready, check EKS resources: + + ```sh + kubectl get eksclusters.kro.run -A + kubectl get clusters.eks.services.k8s.aws -A -o yaml # Check for errors + ``` + +4. 
Connect to the spoke cluster + + ```sh + export AWS_PROFILE=spoke_account1 # use your own profile or ensure you're connected to the appropriate account + ``` + + Get kubectl configuration (update name and region if needed): + + ```sh + aws eks update-kubeconfig --name workload-cluster1 --region us-west-2 + ``` + + View deployed resources: + + ```sh + kubectl get pods -A + ``` + Output: + + ```sh + NAMESPACE NAME READY STATUS RESTARTS AGE + external-secrets external-secrets-679b98f996-74lsb 1/1 Running 0 70s + external-secrets external-secrets-cert-controller-556d7f95c5-h5nvq 1/1 Running 0 70s + external-secrets external-secrets-webhook-7b456d589f-6bjzr 1/1 Running 0 70s + ``` + + This output shows that our GitOps solution has successfully deployed the add-ons to the cluster. + + +You can repeat these steps for any additional clusters you want to manage. + +Each cluster is created by its kro RGD, deployed to AWS using ACK controllers, and then automatically registered with Argo CD, which installs add-ons and workloads on it. + +## Conclusion + +This solution demonstrates a powerful way to manage multiple EKS clusters across different AWS accounts and regions using three key components: + +1. **kro (Kubernetes Resource Orchestrator)** + - Manages complex multi-resource deployments + - Handles dependencies between resources + - Provides a declarative way to define EKS clusters and their requirements + +2. **AWS Controllers for Kubernetes (ACK)** + - Enables native AWS resource management from within Kubernetes + - Supports cross-account operations through namespace isolation + - Manages AWS resources like VPCs, IAM roles, and EKS clusters + +3. 
**Argo CD** + - Implements GitOps practices for cluster configuration + - Automatically bootstraps new clusters with required add-ons + - Manages workload deployments across the cluster fleet + +Key benefits of this architecture: + +- **Scalability**: Easily add new clusters by updating Git configuration +- **Consistency**: Ensures uniform configuration across all clusters +- **Automation**: Reduces manual intervention in cluster lifecycle management +- **Separation of Concerns**: Clear distinction between infrastructure and application management +- **Audit Trail**: All changes are tracked through Git history +- **Multi-Account Support**: Secure isolation between different environments or business units + +To expand this solution, you can: +- Add more clusters by replicating the configuration pattern +- Customize add-ons and workloads per cluster +- Implement different configurations for different environments (dev, staging, prod) +- Add monitoring and logging solutions across the cluster fleet +- Implement cluster upgrade strategies using the same tooling + +The combination of kro, ACK, and Argo CD provides a robust, scalable, and maintainable approach to EKS cluster fleet management. + +## Clean-up + +1. Delete the workload clusters by removing them from the following file: + + ```sh + code $WORKSPACE_PATH/$WORKING_REPO/fleet/kro-values/tenants/tenant1/kro-clusters/values.yaml + ``` + + In the Argo CD UI, synchronize the cluster ApplicationSet with the prune option enabled, or use the CLI: + + ```bash + argocd app sync clusters --prune + ``` + + > **Known issue**: We noticed that some VPC resources (route tables) do not get properly deleted when workload clusters are removed from the manifests. Until the issue is resolved, if this happens to you, delete the affected VPC resources manually so the clean-up can complete. + +1. 
Delete Argo CD App of App: + ```sh + kubectl delete -f $WORKSPACE_PATH/$WORKING_REPO/terraform/hub/bootstrap/applicationsets.yaml + ``` + +1. Delete the EKS capabilities on the management cluster + + ```sh + aws eks delete-capability \ + --cluster-name ${CLUSTER_NAME} \ + --capability-name ${CLUSTER_NAME}-argocd + + aws eks delete-capability \ + --cluster-name ${CLUSTER_NAME} \ + --capability-name ${CLUSTER_NAME}-kro + + aws eks delete-capability \ + --cluster-name ${CLUSTER_NAME} \ + --capability-name ${CLUSTER_NAME}-ack + ``` + +1. Make sure all the capabilities are deleted by checking the console or using the `describe-capability` command shown earlier. + +1. Delete Management Cluster + + After successfully de-registering all spoke accounts, remove the management cluster created with Terraform: + + ```sh + cd $WORKSPACE_PATH/$WORKING_REPO/terraform/hub + ./destroy.sh + ``` + +1. Remove ACK IAM Roles in workload accounts + + Finally, connect to each workload account and delete the IAM roles and policies created initially: + + ```bash + cd $WORKSPACE_PATH/$WORKING_REPO/ + ./scripts/delete_ack_workload_roles.sh ack + ``` diff --git a/addons/bootstrap/default/addons.yaml b/addons/bootstrap/default/addons.yaml new file mode 100644 index 0000000..f94b802 --- /dev/null +++ b/addons/bootstrap/default/addons.yaml @@ -0,0 +1,76 @@ +syncPolicy: + automated: + selfHeal: true + allowEmpty: true + prune: true + retry: + limit: -1 # number of failed sync attempt retries; unlimited number of attempts if less than 0 + backoff: + duration: 5s # the amount to back off. Default unit is seconds, but could also be a duration (e.g. "2m", "1h") + factor: 2 # a factor to multiply the base duration after each failed retry + maxDuration: 10m # the maximum amount of time allowed for the backoff strategy + syncOptions: + - CreateNamespace=true + - ServerSideApply=true # Big CRDs. 
+syncPolicyAppSet: + preserveResourcesOnDeletion: false # to be able to cleanup +useSelectors: true +repoURLGit: '{{.metadata.annotations.addons_repo_url}}' +repoURLGitRevision: '{{.metadata.annotations.addons_repo_revision}}' +repoURLGitBasePath: '{{.metadata.annotations.addons_repo_basepath}}' +valueFiles: + - default/addons + - environments/{{.metadata.labels.environment}}/addons + - clusters/{{.nameNormalized}}/addons +useValuesFilePrefix: true +valuesFilePrefix: 'tenants/{{.metadata.labels.tenant}}/' + +######################################## +# define Addons +######################################## + +external-secrets: + enabled: false + enableACK: false + annotationsAppSet: + argocd.argoproj.io/sync-wave: "3" # Needs to be after KRO RGD + namespace: external-secrets + chartName: external-secrets + defaultVersion: "0.10.3" + chartRepository: "https://charts.external-secrets.io" + selector: + matchExpressions: + - key: enable_external_secrets + operator: In + values: ['true'] + valuesObject: + serviceAccount: + name: "external-secrets-sa" + +kro-eks-rgs: + enabled: false + type: manifest + namespace: kro + annotationsAppSet: + argocd.argoproj.io/sync-wave: "-2" # Needs to be before resources that needs PodIdentity + path: 'charts/kro/resource-groups/eks' + chartRepository: '{{.metadata.annotations.addons_repo_url}}' + targetRevision: '{{.metadata.annotations.addons_repo_revision}}' + selector: + matchExpressions: + - key: enable_kro_eks_rgs + operator: In + values: ['true'] + +multi-acct: + enabled: false + namespace: kro + annotationsAppSet: + argocd.argoproj.io/sync-wave: "-5" # Needs to be before KRO RGD + defaultVersion: "0.1.0" + path: charts/multi-acct + selector: + matchExpressions: + - key: enable_multi_acct + operator: In + values: ['true'] \ No newline at end of file diff --git a/addons/tenants/tenant1/clusters/hub-cluster/application-sets/addons.yaml b/addons/tenants/tenant1/clusters/hub-cluster/application-sets/addons.yaml new file mode 100644 
index 0000000..a170ee3 --- /dev/null +++ b/addons/tenants/tenant1/clusters/hub-cluster/application-sets/addons.yaml @@ -0,0 +1,11 @@ +useSelectors: true # necessary to enable addons with cluster secret labels + +#We are using this to enable applicationSets, then use cluster secret to enable applications +# globalSelectors: +# fleet_member: control-plane #If we activate this, only cluster from this selector will have the applicationsets enabled +external-secrets: + enabled: true +kro-eks-rgs: + enabled: true +multi-acct: + enabled: true \ No newline at end of file diff --git a/addons/tenants/tenant1/default/addons/multi-acct/values.yaml b/addons/tenants/tenant1/default/addons/multi-acct/values.yaml new file mode 100644 index 0000000..af4a115 --- /dev/null +++ b/addons/tenants/tenant1/default/addons/multi-acct/values.yaml @@ -0,0 +1,2 @@ +clusters: + workload-cluster1: "012345678910" # AWS account for workload cluster 1 diff --git a/charts/application-sets/.helmignore b/charts/application-sets/.helmignore new file mode 100644 index 0000000..0e8a0eb --- /dev/null +++ b/charts/application-sets/.helmignore @@ -0,0 +1,23 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*.orig +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/charts/application-sets/Chart.yaml b/charts/application-sets/Chart.yaml new file mode 100644 index 0000000..3546ee5 --- /dev/null +++ b/charts/application-sets/Chart.yaml @@ -0,0 +1,24 @@ +apiVersion: v2 +name: application-sets +description: A Helm chart for Kubernetes + +# A chart can be either an 'application' or a 'library' chart. +# +# Application charts are a collection of templates that can be packaged into versioned archives +# to be deployed. 
+# +# Library charts provide useful utilities or functions for the chart developer. They're included as +# a dependency of application charts to inject those utilities and functions into the rendering +# pipeline. Library charts do not define any templates and therefore cannot be deployed. +type: application + +# This is the chart version. This version number should be incremented each time you make changes +# to the chart and its templates, including the app version. +# Versions are expected to follow Semantic Versioning (https://semver.org/) +version: 0.1.0 + +# This is the version number of the application being deployed. This version number should be +# incremented each time you make changes to the application. Versions are not expected to +# follow Semantic Versioning. They should reflect the version the application is using. +# It is recommended to use it with quotes. +appVersion: "1.16.0" diff --git a/charts/application-sets/templates/_application_set.tpl b/charts/application-sets/templates/_application_set.tpl new file mode 100644 index 0000000..3ca8928 --- /dev/null +++ b/charts/application-sets/templates/_application_set.tpl @@ -0,0 +1,58 @@ +{{/* +Template to generate additional resources configuration +*/}} +{{- define "application-sets.additionalResources" -}} +{{- $chartName := .chartName -}} +{{- $chartConfig := .chartConfig -}} +{{- $valueFiles := .valueFiles -}} +{{- $additionalResourcesType := .additionalResourcesType -}} +{{- $additionalResourcesPath := .path -}} +{{- $values := .values -}} +{{- if $chartConfig.additionalResources.path }} +- repoURL: {{ $values.repoURLGit | squote }} + targetRevision: {{ $values.repoURLGitRevision | squote }} + path: {{- if eq $additionalResourcesType "manifests" }} + '{{ $values.repoURLGitBasePath }}{{ if $values.useValuesFilePrefix }}{{ $values.valuesFilePrefix }}{{ end }}clusters/{{`{{.nameNormalized}}`}}/{{ $chartConfig.additionalResources.manifestPath }}' + {{- else }} + {{ 
$chartConfig.additionalResources.path | squote }} + {{- end}} +{{- end }} +{{- if $chartConfig.additionalResources.chart }} +- repoURL: '{{$chartConfig.additionalResources.repoURL}}' + chart: '{{$chartConfig.additionalResources.chart}}' + targetRevision: '{{$chartConfig.additionalResources.chartVersion }}' +{{- end }} +{{- if $chartConfig.additionalResources.helm }} + helm: + releaseName: '{{`{{ .name }}`}}-{{ $chartConfig.additionalResources.helm.releaseName }}' + {{- if $chartConfig.additionalResources.helm.valuesObject }} + valuesObject: + {{- $chartConfig.additionalResources.helm.valuesObject | toYaml | nindent 6 }} + {{- end }} + ignoreMissingValueFiles: true + valueFiles: + {{- include "application-sets.valueFiles" (dict + "nameNormalize" $chartName + "valueFiles" $valueFiles + "values" $values + "chartType" $additionalResourcesType) | nindent 6 }} +{{- end }} +{{- end }} + + +{{/* +Define the values path for reusability +*/}} +{{- define "application-sets.valueFiles" -}} +{{- $nameNormalize := .nameNormalize -}} +{{- $chartConfig := .chartConfig -}} +{{- $valueFiles := .valueFiles -}} +{{- $chartType := .chartType -}} +{{- $values := .values -}} +{{- with .valueFiles }} +{{- range . }} +- $values/{{ $values.repoURLGitBasePath }}{{ . }}/{{ $nameNormalize }}{{ if $chartType }}/{{ $chartType }}{{ end }}/{{ if $chartConfig.valuesFileName }}{{ $chartConfig.valuesFileName }}{{ else }}values.yaml{{ end }} +- $values/{{ $values.repoURLGitBasePath }}{{ if $values.useValuesFilePrefix }}{{ $values.valuesFilePrefix }}{{ end }}{{ . 
}}/{{ $nameNormalize }}{{ if $chartType }}/{{ $chartType }}{{ end }}/{{ if $chartConfig.valuesFileName }}{{ $chartConfig.valuesFileName }}{{ else }}values.yaml{{ end }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/application-sets/templates/_git_matrix.tpl b/charts/application-sets/templates/_git_matrix.tpl new file mode 100644 index 0000000..2395d84 --- /dev/null +++ b/charts/application-sets/templates/_git_matrix.tpl @@ -0,0 +1,37 @@ +# {{/* +# Template creating git matrix generator +# */}} +# {{- define "application-sets.git-matrix" -}} +# {{- $chartName := .chartName -}} +# {{- $chartConfig := .chartConfig -}} +# {{- $repoURLGit := .repoURLGit -}} +# {{- $repoURLGitRevision := .repoURLGitRevision -}} +# {{- $selectors := .selectors -}} +# {{- $useSelectors := .useSelectors -}} +# generators: +# - matrix: +# generators: +# - clusters: +# selector: +# matchLabels: +# argocd.argoproj.io/secret-type: cluster +# {{- if $selectors }} +# {{- toYaml $selectors | nindent 16 }} +# - key: fleet_member +# operator: NotIn +# values: ['control-plane'] +# {{- end }} +# {{- if $chartConfig.selectorMatchLabels }} +# {{- toYaml $chartConfig.selectorMatchLabels | nindent 18 }} +# {{- end }} +# {{- if and $chartConfig.selector $useSelectors }} +# {{- toYaml $chartConfig.selector | nindent 16 }} +# {{- end }} +# values: +# chart: {{ $chartConfig.chartName | default $chartName | quote }} +# - git: +# repoURL: {{ $repoURLGit | squote }} +# revision: {{ $repoURLGitRevision | squote }} +# files: +# - path: {{ $chartConfig.matrixPath | squote }} +# {{- end }} \ No newline at end of file diff --git a/charts/application-sets/templates/_helpers.tpl b/charts/application-sets/templates/_helpers.tpl new file mode 100644 index 0000000..c705613 --- /dev/null +++ b/charts/application-sets/templates/_helpers.tpl @@ -0,0 +1,48 @@ +{{/* +Expand the name of the chart. Defaults to `.Chart.Name` or `nameOverride`. 
+*/}} +{{- define "application-sets.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Generate a fully qualified app name. +If `fullnameOverride` is defined, it uses that; otherwise, it constructs the name based on `Release.Name` and chart name. +*/}} +{{- define "application-sets.fullname" -}} +{{- if .Values.fullnameOverride }} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- printf "%s-%s" .Release.Name (default .Chart.Name .Values.nameOverride) | trunc 63 | trimSuffix "-" }} +{{- end }} +{{- end }} + +{{/* +Create chart name and version, useful for labels. +*/}} +{{- define "application-sets.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Common labels for the ApplicationSet, including version and managed-by labels. +*/}} +{{- define "application-sets.labels" -}} +helm.sh/chart: {{ include "application-sets.chart" . }} +app.kubernetes.io/name: {{ include "application-sets.name" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- if .Chart.AppVersion }} +app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} +{{- end }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +{{- end }} + +{{/* +Common Helm and Kubernetes Annotations +*/}} +{{- define "application-sets.annotations" -}} +helm.sh/chart: {{ include "application-sets.chart" . 
}} +{{- if .Values.annotations }} +{{ toYaml .Values.annotations }} +{{- end }} +{{- end }} diff --git a/charts/application-sets/templates/_pod_identity.tpl b/charts/application-sets/templates/_pod_identity.tpl new file mode 100644 index 0000000..5c08f4a --- /dev/null +++ b/charts/application-sets/templates/_pod_identity.tpl @@ -0,0 +1,27 @@ +{{/* +Template to generate pod-identity configuration +*/}} +{{- define "application-sets.pod-identity" -}} +{{- $chartName := .chartName -}} +{{- $chartConfig := .chartConfig -}} +{{- $valueFiles := .valueFiles -}} +{{- $values := .values -}} +- repoURL: '{{ $values.repoURLGit }}' + targetRevision: '{{ $values.repoURLGitRevision }}' + path: 'charts/pod-identity' + helm: + releaseName: '{{`{{ .name }}`}}-{{ $chartConfig.chartName | default $chartName }}' + valuesObject: + create: '{{`{{default "`}}{{ $chartConfig.enableACK }}{{`" (index .metadata.annotations "ack_create")}}`}}' + region: '{{`{{ .metadata.annotations.aws_region }}`}}' + accountId: '{{`{{ .metadata.annotations.aws_account_id}}`}}' + podIdentityAssociation: + clusterName: '{{`{{ .name }}`}}' + namespace: '{{ default $chartConfig.namespace .namespace }}' + ignoreMissingValueFiles: true + valueFiles: + {{- include "application-sets.valueFiles" (dict + "nameNormalize" $chartName + "valueFiles" $valueFiles + "values" $values "chartType" "pod-identity") | nindent 6 }} +{{- end }} diff --git a/charts/application-sets/templates/application-set.yaml b/charts/application-sets/templates/application-set.yaml new file mode 100644 index 0000000..78857ac --- /dev/null +++ b/charts/application-sets/templates/application-set.yaml @@ -0,0 +1,177 @@ +{{- $values := .Values }} +{{- $chartType := .Values.chartType }} +{{- $namespace := .Values.namespace }} +{{- $syncPolicy := .Values.syncPolicy -}} +{{- $syncPolicyAppSet := .Values.syncPolicyAppSet -}} +{{- $goTemplateOptions := .Values.goTemplateOptions -}} +{{- $repoURLGit := .Values.repoURLGit -}} +{{- $repoURLGitRevision := 
.Values.repoURLGitRevision -}} +{{- $repoURLGitBasePath := .Values.repoURLGitBasePath -}} +{{- $valueFiles := .Values.valueFiles -}} +{{- $valuesFilePrefix := .Values.valuesFilePrefix -}} +{{- $useValuesFilePrefix := (default false .Values.useValuesFilePrefix ) -}} +{{- $useSelectors:= .Values.useSelectors -}} +{{- $globalSelectors := .Values.globalSelectors -}} + +{{- range $chartName, $chartConfig := .Values }} +{{- if and (kindIs "map" $chartConfig) (hasKey $chartConfig "enabled") }} +{{- if eq (toString $chartConfig.enabled) "true" }} +{{- $nameNormalize := printf "%s" $chartName | replace "_" "-" | trunc 63 | trimSuffix "-" -}} +apiVersion: argoproj.io/v1alpha1 +kind: ApplicationSet +metadata: + name: {{ $nameNormalize }} + namespace: {{ default "argocd" $namespace }} + annotations: + {{- include "application-sets.annotations" $ | nindent 4 }} + {{- if $chartConfig.annotationsAppSet }}{{- toYaml $chartConfig.annotationsAppSet | nindent 4 }}{{- end }} + labels: + {{- include "application-sets.labels" $ | nindent 4 }} + {{- if $chartConfig.labelsAppSet }}{{- toYaml $chartConfig.labelsAppSet | nindent 4 }}{{- end }} +spec: + goTemplate: true + {{- if $chartConfig.goTemplateOptions }} + goTemplateOptions: + {{ toYaml $chartConfig.goTemplateOptions | nindent 2 }} + {{- else }} + goTemplateOptions: {{ default (list "missingkey=error") $goTemplateOptions }} + {{- end }} + {{- if $chartConfig.syncPolicyAppSet }} + syncPolicy: + {{- toYaml $chartConfig.syncPolicyAppSet | nindent 4 }} + {{- else }} + syncPolicy: + {{- toYaml $syncPolicyAppSet | nindent 4 }} + {{- end }} + {{- if $chartConfig.gitMatrix }} + {{ include "application-sets.git-matrix" (dict + "chartName" $nameNormalize "chartConfig" $chartConfig + "repoURLGit" $repoURLGit "repoURLGitRevision" $repoURLGitRevision + "selectors" $globalSelectors "useSelectors" $useSelectors + ) | nindent 2 }} + {{- else }} + generators: + {{- if $chartConfig.environments }} + - merge: + mergeKeys: [server] + generators: + {{- 
end }} + - clusters: + selector: + matchLabels: + argocd.argoproj.io/secret-type: cluster + {{- if $globalSelectors }} + {{- toYaml $globalSelectors | nindent 18 }} + {{- end }} + {{- if $chartConfig.selectorMatchLabels }} + {{- toYaml $chartConfig.selectorMatchLabels | nindent 18 }} + {{- end }} + {{- if and $chartConfig.selector $useSelectors }} + {{- toYaml $chartConfig.selector | nindent 16 }} + # If you want, you can exclude some clusters based on their fleet membership: + # - key: fleet_member + # operator: NotIn + # values: ['control-plane'] + {{- end }} + {{- if not $chartConfig.resourceGroup }} + values: + addonChart: {{ $chartConfig.chartName | default $nameNormalize | quote }} + {{- if $chartConfig.defaultVersion }} + addonChartVersion: {{ $chartConfig.defaultVersion | quote }} + {{- end }} + {{- if $chartConfig.chartRepository }} + addonChartRepository: {{ $chartConfig.chartRepository | quote }} + {{- end }} + {{- if $chartConfig.chartNamespace }} + addonChartRepositoryNamespace: {{ $chartConfig.chartNamespace | quote }} + chart: {{ printf "%s/%s" $chartConfig.chartNamespace ($chartConfig.chartName | default $nameNormalize) | quote }} + {{- else }} + chart: {{ $chartConfig.chartName | default $nameNormalize | quote }} + {{- end }} + {{- end }} + {{- if $chartConfig.environments }} + {{- range $chartConfig.environments }} + - clusters: + selector: + matchLabels: + {{- toYaml .selector | nindent 18 }} + values: + addonChartVersion: {{ .chartVersion | default $chartConfig.defaultVersion | quote }} + {{- end }} + {{- end }} + {{- end }} + template: + metadata: + {{- if $chartConfig.appSetName }} + name: {{ $chartConfig.appSetName }} + {{- else }} + name: '{{ $nameNormalize }}-{{`{{ .name }}`}}' + {{- end }} + spec: + project: default + sources: + - repoURL: {{ $repoURLGit | squote }} + targetRevision: {{ $repoURLGitRevision | squote }} + ref: values + {{- if eq (toString $chartConfig.enableACK) "true" }} + {{ include "application-sets.pod-identity" (dict +
"chartName" ($chartConfig.chartName | default $nameNormalize) + "valueFiles" $valueFiles + "chartConfig" $chartConfig "values" $values ) | nindent 6 }} + {{- end }} + {{- if $chartConfig.path }} + - repoURL: {{ $repoURLGit | squote }} + path: {{$chartConfig.path | squote }} + targetRevision: {{ $repoURLGitRevision | squote }} + {{- else }} + - repoURL: '{{`{{ .values.addonChartRepository }}`}}' + chart: '{{`{{ .values.chart }}`}}' + targetRevision: '{{`{{.values.addonChartVersion }}`}}' + {{- end }} + {{- if ne (default "" $chartConfig.type) "manifest" }} + helm: + releaseName: {{ default "{{ .values.addonChart }}" $chartConfig.releaseName | squote }} + ignoreMissingValueFiles: true + {{- if $chartConfig.valuesObject }} + valuesObject: + {{- $chartConfig.valuesObject | toYaml | nindent 12 }} + {{- end }} + {{- if $valueFiles }} + valueFiles: + {{- include "application-sets.valueFiles" (dict + "nameNormalize" ($chartConfig.chartName | default $nameNormalize) + "chartConfig" $chartConfig + "valueFiles" $valueFiles "values" $values) | nindent 12 }} + {{- end }} + {{- if $chartConfig.additionalResources}} + {{ include "application-sets.additionalResources" (dict + "chartName" ($chartConfig.chartName | default $nameNormalize) + "valueFiles" $valueFiles + "chartConfig" $chartConfig + "values" $values + "additionalResourcesType" $chartConfig.additionalResources.type + "additionalResourcesPath" $chartConfig.additionalResources.path ) | nindent 6 }} + {{- end}} + {{- end }} + destination: + namespace: '{{ $chartConfig.namespace }}' + name: '{{`{{ .name }}`}}' + {{- if $chartConfig.syncPolicy }} + syncPolicy: + {{- toYaml $chartConfig.syncPolicy | nindent 8 }} + {{ else }} + syncPolicy: + {{- toYaml $syncPolicy | nindent 8 }} + {{- end }} + {{- with $chartConfig.ignoreDifferences }} + ignoreDifferences: + {{- toYaml . 
| nindent 8 }} + {{- end }} +--- +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/kro-clusters/Chart.yaml b/charts/kro-clusters/Chart.yaml new file mode 100644 index 0000000..6fc7633 --- /dev/null +++ b/charts/kro-clusters/Chart.yaml @@ -0,0 +1,6 @@ +apiVersion: v2 +name: eks-fleet-clusters +description: A Helm chart for managing EKS Fleet clusters +type: application +version: 0.1.0 +appVersion: "1.0.0" diff --git a/charts/kro-clusters/templates/NOTES.txt b/charts/kro-clusters/templates/NOTES.txt new file mode 100644 index 0000000..4797a46 --- /dev/null +++ b/charts/kro-clusters/templates/NOTES.txt @@ -0,0 +1,21 @@ +Thank you for installing {{ .Chart.Name }}. + +Your EKS Fleet clusters have been configured with the following details: + +{{- range $name, $cluster := .Values.clusters }} +Cluster: {{ $name }} + - Tenant: {{ $cluster.tenant }} + - K8s Version: {{ $cluster.k8sVersion }} + - Domain: {{ $cluster.domainName }} +{{- end }} + +To manage your clusters: +1. Edit the values.yaml file to add, modify, or remove cluster configurations +2. Use helm upgrade to apply changes: + helm upgrade <release-name> ./chart + +To verify the cluster resources: + kubectl get ekscluster + +For more information about the chart and available configuration options, +please refer to the chart's documentation. 
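The management workflow described in the NOTES above can be illustrated with a minimal `values.yaml` sketch for this chart. The keys mirror those read by `templates/clusters.yaml` (tenant, environment, region, k8sVersion, accountId, managementAccountId, domainName, workloads, vpc); the cluster name, account IDs, domain, and CIDR below are placeholder assumptions.

```yaml
# values.yaml sketch: one cluster entry per key under `clusters`.
# Unset fields fall back to the defaults in templates/clusters.yaml
# (e.g. tenant "tenant1", region "us-west-2", k8sVersion "1.32").
clusters:
  workload-cluster1:                    # placeholder cluster name
    tenant: "tenant1"
    environment: "staging"
    region: "us-west-2"
    k8sVersion: "1.32"
    accountId: "111111111111"           # placeholder spoke account ID
    managementAccountId: "222222222222" # placeholder hub account ID
    domainName: "example.com"           # placeholder domain
    workloads: "true"
    vpc:
      create: true
      vpcCidr: "10.1.0.0/16"            # placeholder CIDR
```

Adding or editing an entry and running `helm upgrade` as described above renders one `EksCluster` instance per entry, which the kro ResourceGraphDefinitions then reconcile.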
diff --git a/charts/kro-clusters/templates/clusters.yaml b/charts/kro-clusters/templates/clusters.yaml new file mode 100644 index 0000000..f58b6bf --- /dev/null +++ b/charts/kro-clusters/templates/clusters.yaml @@ -0,0 +1,42 @@ +{{- range $name, $cluster := .Values.clusters }} +--- +apiVersion: kro.run/v1alpha1 +kind: EksCluster +metadata: + name: {{ $name }} + annotations: + argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true + argocd.argoproj.io/sync-wave: "1" +spec: + name: {{ $name }} + tenant: {{ $cluster.tenant | default "tenant1" | quote }} + environment: {{ $cluster.environment | default "staging" | quote }} + region: {{ $cluster.region | default "us-west-2" | quote }} + k8sVersion: {{ $cluster.k8sVersion | default "1.32" | quote }} + accountId: {{ $cluster.accountId | quote }} + managementAccountId: {{ $cluster.managementAccountId | quote }} + adminRoleName: {{ $cluster.adminRoleName | default "Admin" | quote }} + fleetSecretManagerSecretNameSuffix: {{ $cluster.fleetSecretManagerSecretNameSuffix | default "argocd-secret" | quote }} + domainName: {{ $cluster.domainName | default "" | quote }} + workloads: {{ $cluster.workloads | default "false" | quote }} + {{- if $cluster.subHostedZone }} + subHostedZone: + {{- toYaml $cluster.subHostedZone | nindent 4 }} + {{- end }} + {{- if $cluster.vpc }} + vpc: + {{- toYaml $cluster.vpc | nindent 4 }} + {{- end }} + {{- if $cluster.gitops }} + gitops: + {{- toYaml $cluster.gitops | nindent 4 }} + {{- else }} + gitops: {} + {{- end }} + {{- if $cluster.addons }} + addons: + {{- toYaml $cluster.addons | nindent 4 }} + {{- else }} + addons: {} + {{- end }} +{{- end }} \ No newline at end of file diff --git a/charts/kro/instances/pod-identity/.helmignore b/charts/kro/instances/pod-identity/.helmignore new file mode 100644 index 0000000..0e8a0eb --- /dev/null +++ b/charts/kro/instances/pod-identity/.helmignore @@ -0,0 +1,23 @@ +# Patterns to ignore when building packages. 
+# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*.orig +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/charts/kro/instances/pod-identity/Chart.yaml b/charts/kro/instances/pod-identity/Chart.yaml new file mode 100644 index 0000000..8c2b8b3 --- /dev/null +++ b/charts/kro/instances/pod-identity/Chart.yaml @@ -0,0 +1,24 @@ +apiVersion: v2 +name: kro-pi-instance +description: A Helm chart for Kubernetes + +# A chart can be either an 'application' or a 'library' chart. +# +# Application charts are a collection of templates that can be packaged into versioned archives +# to be deployed. +# +# Library charts provide useful utilities or functions for the chart developer. They're included as +# a dependency of application charts to inject those utilities and functions into the rendering +# pipeline. Library charts do not define any templates and therefore cannot be deployed. +type: application + +# This is the chart version. This version number should be incremented each time you make changes +# to the chart and its templates, including the app version. +# Versions are expected to follow Semantic Versioning (https://semver.org/) +version: 0.1.0 + +# This is the version number of the application being deployed. This version number should be +# incremented each time you make changes to the application. Versions are not expected to +# follow Semantic Versioning. They should reflect the version the application is using. +# It is recommended to use it with quotes. 
+appVersion: "1.16.0" diff --git a/charts/kro/instances/pod-identity/templates/_helpers.tpl b/charts/kro/instances/pod-identity/templates/_helpers.tpl new file mode 100644 index 0000000..815affa --- /dev/null +++ b/charts/kro/instances/pod-identity/templates/_helpers.tpl @@ -0,0 +1,62 @@ +{{/* +Expand the name of the chart. +*/}} +{{- define "kro-pi-instance.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. +*/}} +{{- define "kro-pi-instance.fullname" -}} +{{- if .Values.fullnameOverride }} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- $name := default .Chart.Name .Values.nameOverride }} +{{- if contains $name .Release.Name }} +{{- .Release.Name | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }} +{{- end }} +{{- end }} +{{- end }} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "kro-pi-instance.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Common labels +*/}} +{{- define "kro-pi-instance.labels" -}} +helm.sh/chart: {{ include "kro-pi-instance.chart" . }} +{{ include "kro-pi-instance.selectorLabels" . }} +{{- if .Chart.AppVersion }} +app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} +{{- end }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +{{- end }} + +{{/* +Selector labels +*/}} +{{- define "kro-pi-instance.selectorLabels" -}} +app.kubernetes.io/name: {{ include "kro-pi-instance.name" . 
}} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- end }} + +{{/* +Create the name of the service account to use +*/}} +{{- define "kro-pi-instance.serviceAccountName" -}} +{{- if .Values.serviceAccount.create }} +{{- default (include "kro-pi-instance.fullname" .) .Values.serviceAccount.name }} +{{- else }} +{{- default "default" .Values.serviceAccount.name }} +{{- end }} +{{- end }} diff --git a/charts/kro/instances/pod-identity/templates/instance.yaml b/charts/kro/instances/pod-identity/templates/instance.yaml new file mode 100644 index 0000000..00c6010 --- /dev/null +++ b/charts/kro/instances/pod-identity/templates/instance.yaml @@ -0,0 +1,63 @@ +{{- $cluster := .Values.clusterName -}} +{{- $namespace := .Values.piNamespace -}} +{{- $name := .Values.name -}} +{{- $root := . -}} +{{- $serviceAccounts := .Values.serviceAccounts -}} +{{- $policyDocument := .Values.policyDocument -}} +{{- range $serviceAccounts }} +apiVersion: kro.run/v1alpha1 +kind: PodIdentity +metadata: + name: "{{ include "kro-pi-instance.name" $root }}-{{ . 
}}" + annotations: + argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true + argocd.argoproj.io/sync-wave: "-5" +spec: + name: {{$name}} + values: + aws: + clusterName: {{ $cluster }} + policy: + policyDocument: | + { + "Version": "2012-10-17", + "Statement": [ + {{- range $index, $policy := $policyDocument }} + { + "Effect": "Allow", + "Action": [ + {{- range $i, $action := $policy.actions }} + "{{ $action }}"{{ if not (eq (add $i 1) (len $policy.actions)) }},{{ end }} + {{- end }} + ], + "Resource": [ + {{- if $policy.customArn }} + "{{ $policy.customArn }}" + {{- else if eq $policy.resourceName "*" }} + "*" + {{- else }} + "arn:aws:{{ $policy.resourceType }}:{{ $.Values.region }}:{{ $.Values.accountId }}:{{ $policy.resourceName }}" + {{- end }} + ] + {{- if $policy.conditions }} + ,"Condition": { + {{- range $j, $condition := $policy.conditions }} + "{{ $condition.test }}": { + "{{ $condition.variable }}": [ + {{- range $k, $value := $condition.values }} + "{{ $value }}"{{ if not (eq (add $k 1) (len $condition.values)) }},{{ end }} + {{- end }} + ] + } + {{- end }} + } + {{- end }} + }{{ if not (eq (add $index 1) (len $.Values.policyDocument)) }},{{ end }} + {{- end }} + ] + } + piAssociation: + serviceAccount: {{ . 
}} + piNamespace: {{ $namespace }} +--- +{{- end}} \ No newline at end of file diff --git a/charts/kro/instances/pod-identity/values.yaml b/charts/kro/instances/pod-identity/values.yaml new file mode 100644 index 0000000..362a50a --- /dev/null +++ b/charts/kro/instances/pod-identity/values.yaml @@ -0,0 +1,12 @@ +# region: eu-west-2 +# name: myname +# serviceAccounts: +# - "test" +# - "test2" +# piNamespace: "default" +# clusterName: "spoke-workload2" +# policyDocument: +# - resourceType: ssm +# resourceName: "*" +# actions: +# - "ssm:DescribeParameters" \ No newline at end of file diff --git a/charts/kro/resource-groups/efs/Chart.yaml b/charts/kro/resource-groups/efs/Chart.yaml new file mode 100644 index 0000000..e69de29 diff --git a/charts/kro/resource-groups/efs/templates/rg-efs.yaml b/charts/kro/resource-groups/efs/templates/rg-efs.yaml new file mode 100644 index 0000000..087c6a5 --- /dev/null +++ b/charts/kro/resource-groups/efs/templates/rg-efs.yaml @@ -0,0 +1 @@ +# TODO: rg that creates EFS file system (using ACK EFS controller) and corresponding StorageClass \ No newline at end of file diff --git a/charts/kro/resource-groups/efs/values.yaml b/charts/kro/resource-groups/efs/values.yaml new file mode 100644 index 0000000..e69de29 diff --git a/charts/kro/resource-groups/eks/rg-addons-iam.yaml b/charts/kro/resource-groups/eks/rg-addons-iam.yaml new file mode 100644 index 0000000..e69de29 diff --git a/charts/kro/resource-groups/eks/rg-eks-basic.yaml b/charts/kro/resource-groups/eks/rg-eks-basic.yaml new file mode 100644 index 0000000..58705e1 --- /dev/null +++ b/charts/kro/resource-groups/eks/rg-eks-basic.yaml @@ -0,0 +1,342 @@ +# yamllint disable rule:line-length +--- +apiVersion: kro.run/v1alpha1 +kind: ResourceGraphDefinition +metadata: + name: eksclusterbasic.kro.run + annotations: + argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true + argocd.argoproj.io/sync-wave: "-1" +spec: + schema: + apiVersion: v1alpha1 + kind: EksClusterBasic + spec: + 
name: string + tenant: string + environment: string + region: string + accountId: string + managementAccountId: string + k8sVersion: string + adminRoleName: string + fleetSecretManagerSecretNameSuffix: string + domainName: string + aws_partition: string | default="aws" + aws_dns_suffix: string | default="amazonaws.com" + network: + vpcID: string + subnets: + controlplane: + subnet1ID: string + subnet2ID: string + workers: + subnet1ID: string + subnet2ID: string + workloads: string # Define if we want to deploy workloads application + gitops: + addonsRepoBasePath: string + addonsRepoPath: string + addonsRepoRevision: string + addonsRepoUrl: string + fleetRepoBasePath: string + fleetRepoPath: string + fleetRepoRevision: string + fleetRepoUrl: string + addons: + enable_external_secrets: string + external_secrets_namespace: string + external_secrets_service_account: string + status: + clusterARN: ${ekscluster.status.ackResourceMetadata.arn} + cdata: ${ekscluster.status.certificateAuthority.data} + endpoint: ${ekscluster.status.endpoint} + clusterState: ${ekscluster.status.status} + + + resources: + + ########################################################### + # EKS Cluster + ########################################################### + - id: clusterRole + template: + apiVersion: iam.services.k8s.aws/v1alpha1 + kind: Role + metadata: + namespace: "${schema.spec.name}" + name: "${schema.spec.name}-cluster-role" + annotations: + services.k8s.aws/region: ${schema.spec.region} + spec: + name: "${schema.spec.name}-cluster-role" + policies: + - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy + - arn:aws:iam::aws:policy/AmazonEKSComputePolicy + - arn:aws:iam::aws:policy/AmazonEKSBlockStoragePolicy + - arn:aws:iam::aws:policy/AmazonEKSLoadBalancingPolicy + - arn:aws:iam::aws:policy/AmazonEKSNetworkingPolicy + assumeRolePolicyDocument: | + { + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": "eks.amazonaws.com" + }, + "Action": [ 
+ "sts:AssumeRole", + "sts:TagSession" + ] + } + ] + } + - id: nodeRole + template: + apiVersion: iam.services.k8s.aws/v1alpha1 + kind: Role + metadata: + namespace: "${schema.spec.name}" + name: "${schema.spec.name}-cluster-node-role" + annotations: + services.k8s.aws/region: ${schema.spec.region} + spec: + name: "${schema.spec.name}-cluster-node-role" + policies: + - arn:aws:iam::aws:policy/AmazonEKSWorkerNodeMinimalPolicy + - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly + - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore + - arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy + assumeRolePolicyDocument: | + { + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": "ec2.amazonaws.com" + }, + "Action": [ + "sts:AssumeRole", + "sts:TagSession" + ] + } + ] + } + # https://aws-controllers-k8s.github.io/community/reference/eks/v1alpha1/cluster/ + - id: ekscluster + readyWhen: + - ${ekscluster.status.status == "ACTIVE"} + template: + apiVersion: eks.services.k8s.aws/v1alpha1 + kind: Cluster + metadata: + namespace: "${schema.spec.name}" + name: "${schema.spec.name}" + # implicit dependencies with roles + annotations: + clusterRoleArn: "${clusterRole.status.ackResourceMetadata.arn}" + nodeRoleArn: "${nodeRole.status.ackResourceMetadata.arn}" + services.k8s.aws/region: ${schema.spec.region} + spec: + name: "${schema.spec.name}" + roleARN: "${clusterRole.status.ackResourceMetadata.arn}" + version: "${schema.spec.k8sVersion}" + accessConfig: + authenticationMode: "API_AND_CONFIG_MAP" + bootstrapClusterCreatorAdminPermissions: true + computeConfig: + enabled: true + nodeRoleARN: ${nodeRole.status.ackResourceMetadata.arn} + nodePools: + - system + - general-purpose + kubernetesNetworkConfig: + ipFamily: ipv4 + elasticLoadBalancing: + enabled: true + logging: + clusterLogging: + - enabled: true + types: + - api + - audit + - authenticator + - controllerManager + - scheduler + storageConfig: + blockStorage: + 
enabled: true + resourcesVPCConfig: + endpointPrivateAccess: true + endpointPublicAccess: true + subnetIDs: + - ${schema.spec.network.subnets.controlplane.subnet1ID} + - ${schema.spec.network.subnets.controlplane.subnet2ID} + zonalShiftConfig: + enabled: true + tags: + kro-management: ${schema.spec.name} + tenant: ${schema.spec.tenant} + environment: ${schema.spec.environment} + + - id: podIdentityAddon + template: + apiVersion: eks.services.k8s.aws/v1alpha1 + kind: Addon + metadata: + name: eks-pod-identity-agent + namespace: "${schema.spec.name}" + annotations: + clusterArn: "${ekscluster.status.ackResourceMetadata.arn}" + services.k8s.aws/region: ${schema.spec.region} + spec: + name: eks-pod-identity-agent + addonVersion: v1.3.4-eksbuild.1 + clusterName: "${schema.spec.name}" + + ########################################################### + # ArgoCD Integration + ########################################################### + - id: argocdSecret + template: + apiVersion: v1 + kind: Secret + metadata: + name: "${schema.spec.name}" + namespace: argocd + labels: + argocd.argoproj.io/secret-type: cluster + # Compatible fleet-management + fleet_member: spoke + tenant: "${schema.spec.tenant}" + environment: "${schema.spec.environment}" + aws_cluster_name: "${schema.spec.name}" + workloads: "${schema.spec.workloads}" + #using : useSelector: true for centralized mode + + enable_external_secrets: "${schema.spec.addons.enable_external_secrets}" + + annotations: + # GitOps Bridge + accountId: "${schema.spec.accountId}" + aws_account_id: "${schema.spec.accountId}" + region: "${schema.spec.region}" + aws_region: "${schema.spec.region}" + aws_central_region: "${schema.spec.region}" # used in fleet-management gitops + oidcProvider: "${ekscluster.status.identity.oidc.issuer}" + aws_cluster_name: "${schema.spec.name}" + aws_vpc_id: "${schema.spec.network.vpcID}" + # GitOps Configuration + addons_repo_basepath: "${schema.spec.gitops.addonsRepoBasePath}" + addons_repo_path: 
"${schema.spec.gitops.addonsRepoPath}" + addons_repo_revision: "${schema.spec.gitops.addonsRepoRevision}" + addons_repo_url: "${schema.spec.gitops.addonsRepoUrl}" + fleet_repo_basepath: "${schema.spec.gitops.fleetRepoBasePath}" + fleet_repo_path: "${schema.spec.gitops.fleetRepoPath}" + fleet_repo_revision: "${schema.spec.gitops.fleetRepoRevision}" + fleet_repo_url: "${schema.spec.gitops.fleetRepoUrl}" + # Generic + external_secrets_namespace: "${schema.spec.addons.external_secrets_namespace}" + external_secrets_service_account: "${schema.spec.addons.external_secrets_service_account}" + + access_entry_arn: "${accessEntry.status.ackResourceMetadata.arn}" + type: Opaque + # TODO bug in KRO, it always see some drifts.. + stringData: + name: "${schema.spec.name}" + server: "${ekscluster.status.ackResourceMetadata.arn}" + project: "default" + - id: accessEntry + readyWhen: + - ${accessEntry.status.conditions.exists(x, x.type == 'ACK.ResourceSynced' && x.status == "True")} #check on ACK condition + template: + apiVersion: eks.services.k8s.aws/v1alpha1 + kind: AccessEntry + metadata: + namespace: "${schema.spec.name}" + name: "${schema.spec.name}-access-entry" + annotations: + services.k8s.aws/region: ${schema.spec.region} + spec: + clusterName: "${schema.spec.name}" + accessPolicies: + - accessScope: + type: "cluster" + policyARN: "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy" + principalARN: "arn:aws:iam::${schema.spec.managementAccountId}:role/hub-cluster-argocd-controller" + type: STANDARD + + - id: accessEntryAdmin + template: + apiVersion: eks.services.k8s.aws/v1alpha1 + kind: AccessEntry + metadata: + namespace: "${schema.spec.name}" + name: "${schema.spec.name}-access-entry-admin" + annotations: + services.k8s.aws/region: ${schema.spec.region} + spec: + clusterName: "${schema.spec.name}" + accessPolicies: + - accessScope: + type: "cluster" + policyARN: "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy" + principalARN: 
"arn:aws:iam::${schema.spec.accountId}:role/${schema.spec.adminRoleName}" + type: STANDARD + + + ########################################################### + # External Secrets AddOn Pod Identity + ########################################################### + - id: externalSecretsRole + template: + apiVersion: iam.services.k8s.aws/v1alpha1 + kind: Role + metadata: + namespace: "${schema.spec.name}" + name: "${schema.spec.name}-external-secrets-role" + annotations: + services.k8s.aws/region: ${schema.spec.region} + spec: + name: "${schema.spec.name}-external-secrets-role" + policies: + - arn:aws:iam::aws:policy/SecretsManagerReadWrite + assumeRolePolicyDocument: | + { + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": "pods.eks.amazonaws.com" + }, + "Action": [ + "sts:AssumeRole", + "sts:TagSession" + ] + } + ] + } + - id: externalSecretsPodIdentityAssociation + readyWhen: + - ${externalSecretsPodIdentityAssociation.status.conditions.exists(x, x.type == 'ACK.ResourceSynced' && x.status == "True")} #check on ACK condition + template: + apiVersion: eks.services.k8s.aws/v1alpha1 + kind: PodIdentityAssociation + metadata: + name: "${schema.spec.name}-external-secrets" + namespace: "${schema.spec.name}" + annotations: + services.k8s.aws/region: ${schema.spec.region} + spec: + clusterName: "${schema.spec.name}" + namespace: argocd + roleARN: "${externalSecretsRole.status.ackResourceMetadata.arn}" + serviceAccount: external-secrets-sa + tags: + environment: "${schema.spec.environment}" + managedBy: ACK + application: external-secrets + diff --git a/charts/kro/resource-groups/eks/rg-eks.yaml b/charts/kro/resource-groups/eks/rg-eks.yaml new file mode 100644 index 0000000..64f9a0a --- /dev/null +++ b/charts/kro/resource-groups/eks/rg-eks.yaml @@ -0,0 +1,175 @@ +apiVersion: kro.run/v1alpha1 +kind: ResourceGraphDefinition +metadata: + name: ekscluster.kro.run + annotations: + argocd.argoproj.io/sync-options: 
SkipDryRunOnMissingResource=true + argocd.argoproj.io/sync-wave: "0" +spec: + schema: + apiVersion: v1alpha1 + kind: EksCluster + spec: + name: string + tenant: string | default="auto1" + environment: string | default="staging" + region: string | default="us-west-2" + k8sVersion: string | default="1.34" + accountId: string + managementAccountId: string + adminRoleName: string | default="Admin" + fleetSecretManagerSecretNameSuffix: string | default="argocd-secret" + domainName: string | default="cluster.example.com" + vpc: + create: boolean | default=true + vpcCidr: string | default="10.0.0.0/16" + publicSubnet1Cidr: string | default="10.0.1.0/24" + publicSubnet2Cidr: string | default="10.0.2.0/24" + privateSubnet1Cidr: string | default="10.0.11.0/24" + privateSubnet2Cidr: string | default="10.0.12.0/24" + vpcId: string | default="" + publicSubnet1Id: string | default="" + publicSubnet2Id: string | default="" + privateSubnet1Id: string | default="" + privateSubnet2Id: string | default="" + workloads: string | default="false" # Define if we want to deploy workloads application + gitops: + addonsRepoBasePath: string | default="addons/" + addonsRepoPath: string | default="bootstrap" + addonsRepoRevision: string | default="main" + addonsRepoUrl: string | default="https://github.com/allamand/eks-cluster-mgmt" + + fleetRepoBasePath: string | default="fleet/" + fleetRepoPath: string | default="bootstrap" + fleetRepoRevision: string | default="main" + fleetRepoUrl: string | default="https://github.com/allamand/eks-cluster-mgmt" + + addons: + + enable_external_secrets: string | default="true" + external_secrets_namespace: string | default="external-secrets" + external_secrets_service_account: string | default="external-secrets-sa" + + resources: + - id: vpc + includeWhen: + - ${schema.spec.vpc.create} + readyWhen: + - ${vpc.status.conditions.exists(x, x.type == 'Ready' && x.status == "True")} # Check on kro conditions + template: + apiVersion: kro.run/v1alpha1 + kind: Vpc + 
metadata: + name: ${schema.spec.name} + namespace: ${schema.spec.name} + labels: + app.kubernetes.io/instance: ${schema.spec.name} + annotations: + argocd.argoproj.io/tracking-id: clusters:kro.run/Vpc:${schema.spec.name}/${schema.spec.name} + spec: + name: ${schema.spec.name} + region: ${schema.spec.region} + cidr: + vpcCidr: ${schema.spec.vpc.vpcCidr} + publicSubnet1Cidr: ${schema.spec.vpc.publicSubnet1Cidr} + publicSubnet2Cidr: ${schema.spec.vpc.publicSubnet2Cidr} + privateSubnet1Cidr: ${schema.spec.vpc.privateSubnet1Cidr} + privateSubnet2Cidr: ${schema.spec.vpc.privateSubnet2Cidr} + - id: eksWithVpc + includeWhen: + - ${schema.spec.vpc.create} + readyWhen: + - ${eksWithVpc.status.conditions.exists(x, x.type == 'Ready' && x.status == "True")} # Check on kro conditions + template: + apiVersion: kro.run/v1alpha1 + kind: EksClusterBasic + metadata: + name: ${schema.spec.name} + namespace: ${schema.spec.name} + labels: + app.kubernetes.io/instance: ${schema.spec.name} + annotations: + argocd.argoproj.io/tracking-id: clusters:kro.run/EksCluster:${schema.spec.name}/${schema.spec.name} + spec: + name: ${schema.spec.name} + tenant: ${schema.spec.tenant} + environment: ${schema.spec.environment} + region: ${schema.spec.region} + accountId: ${schema.spec.accountId} + managementAccountId: ${schema.spec.managementAccountId} + k8sVersion: ${schema.spec.k8sVersion} + adminRoleName: ${schema.spec.adminRoleName} + fleetSecretManagerSecretNameSuffix: ${schema.spec.fleetSecretManagerSecretNameSuffix} + domainName: ${schema.spec.domainName} + network: + vpcID: "${vpc.status.vpcID}" + subnets: + controlplane: + subnet1ID: "${vpc.status.privateSubnet1ID}" + subnet2ID: "${vpc.status.privateSubnet2ID}" + workers: + subnet1ID: "${vpc.status.privateSubnet1ID}" + subnet2ID: "${vpc.status.privateSubnet2ID}" + workloads: ${schema.spec.workloads} + gitops: + addonsRepoBasePath: ${schema.spec.gitops.addonsRepoBasePath} + addonsRepoPath: ${schema.spec.gitops.addonsRepoPath} + 
addonsRepoRevision: ${schema.spec.gitops.addonsRepoRevision} + addonsRepoUrl: ${schema.spec.gitops.addonsRepoUrl} + fleetRepoBasePath: ${schema.spec.gitops.fleetRepoBasePath} + fleetRepoPath: ${schema.spec.gitops.fleetRepoPath} + fleetRepoRevision: ${schema.spec.gitops.fleetRepoRevision} + fleetRepoUrl: ${schema.spec.gitops.fleetRepoUrl} + addons: + enable_external_secrets: ${schema.spec.addons.enable_external_secrets} + external_secrets_namespace: ${schema.spec.addons.external_secrets_namespace} + external_secrets_service_account: ${schema.spec.addons.external_secrets_service_account} + - id: eksExistingVpc + includeWhen: + - ${!schema.spec.vpc.create} + readyWhen: + - ${eksExistingVpc.status.conditions.exists(x, x.type == 'Ready' && x.status == "True")} # Check on kro conditions + template: + apiVersion: kro.run/v1alpha1 + kind: EksClusterBasic + metadata: + name: ${schema.spec.name} + namespace: ${schema.spec.name} + labels: + app.kubernetes.io/instance: ${schema.spec.name} + annotations: + argocd.argoproj.io/tracking-id: clusters:kro.run/EksCluster:${schema.spec.name}/${schema.spec.name} + spec: + name: ${schema.spec.name} + tenant: ${schema.spec.tenant} + environment: ${schema.spec.environment} + region: ${schema.spec.region} + accountId: ${schema.spec.accountId} + managementAccountId: ${schema.spec.managementAccountId} + k8sVersion: ${schema.spec.k8sVersion} + adminRoleName: ${schema.spec.adminRoleName} + fleetSecretManagerSecretNameSuffix: ${schema.spec.fleetSecretManagerSecretNameSuffix} + domainName: ${schema.spec.domainName} + network: + vpcID: "${schema.spec.vpc.vpcId}" + subnets: + controlplane: + subnet1ID: "${schema.spec.vpc.privateSubnet1Id}" + subnet2ID: "${schema.spec.vpc.privateSubnet2Id}" + workers: + subnet1ID: "${schema.spec.vpc.privateSubnet1Id}" + subnet2ID: "${schema.spec.vpc.privateSubnet2Id}" + workloads: ${schema.spec.workloads} + gitops: + addonsRepoBasePath: ${schema.spec.gitops.addonsRepoBasePath} + addonsRepoPath: 
${schema.spec.gitops.addonsRepoPath} + addonsRepoRevision: ${schema.spec.gitops.addonsRepoRevision} + addonsRepoUrl: ${schema.spec.gitops.addonsRepoUrl} + fleetRepoBasePath: ${schema.spec.gitops.fleetRepoBasePath} + fleetRepoPath: ${schema.spec.gitops.fleetRepoPath} + fleetRepoRevision: ${schema.spec.gitops.fleetRepoRevision} + fleetRepoUrl: ${schema.spec.gitops.fleetRepoUrl} + addons: + enable_external_secrets: ${schema.spec.addons.enable_external_secrets} + external_secrets_namespace: ${schema.spec.addons.external_secrets_namespace} + external_secrets_service_account: ${schema.spec.addons.external_secrets_service_account} diff --git a/charts/kro/resource-groups/eks/rg-vpc.yaml b/charts/kro/resource-groups/eks/rg-vpc.yaml new file mode 100644 index 0000000..910bc45 --- /dev/null +++ b/charts/kro/resource-groups/eks/rg-vpc.yaml @@ -0,0 +1,247 @@ +apiVersion: kro.run/v1alpha1 +kind: ResourceGraphDefinition +metadata: + name: vpc.kro.run + annotations: + argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true + argocd.argoproj.io/sync-wave: "-1" +spec: + schema: + apiVersion: v1alpha1 + kind: Vpc + spec: + name: string + region: string + cidr: + vpcCidr: string | default="10.0.0.0/16" + publicSubnet1Cidr: string | default="10.0.1.0/24" + publicSubnet2Cidr: string | default="10.0.2.0/24" + privateSubnet1Cidr: string | default="10.0.11.0/24" + privateSubnet2Cidr: string | default="10.0.12.0/24" + status: + vpcID: ${vpc.status.vpcID} + publicSubnet1ID: ${publicSubnet1.status.subnetID} + publicSubnet2ID: ${publicSubnet2.status.subnetID} + privateSubnet1ID: ${privateSubnet1.status.subnetID} + privateSubnet2ID: ${privateSubnet2.status.subnetID} + resources: # how to publish a field in the RG claim e.g. 
vpcID + - id: vpc + template: + apiVersion: ec2.services.k8s.aws/v1alpha1 + kind: VPC + metadata: + namespace: ${schema.spec.name} + name: ${schema.spec.name}-vpc + annotations: + services.k8s.aws/region: ${schema.spec.region} + spec: + cidrBlocks: + - ${schema.spec.cidr.vpcCidr} + enableDNSSupport: true + enableDNSHostnames: true + tags: + - key: "Name" + value: ${schema.spec.name}-vpc + - id: internetGateway + template: + apiVersion: ec2.services.k8s.aws/v1alpha1 + kind: InternetGateway + metadata: + namespace: ${schema.spec.name} + name: ${schema.spec.name}-igw + annotations: + services.k8s.aws/region: ${schema.spec.region} + spec: + vpc: ${vpc.status.vpcID} + tags: + - key: "Name" + value: ${schema.spec.name}-igw + - id: natGateway1 + template: + apiVersion: ec2.services.k8s.aws/v1alpha1 + kind: NATGateway + metadata: + namespace: ${schema.spec.name} + name: ${schema.spec.name}-nat-gateway1 + annotations: + services.k8s.aws/region: ${schema.spec.region} + spec: + subnetID: ${publicSubnet1.status.subnetID} + allocationID: ${eip1.status.allocationID} + tags: + - key: "Name" + value: ${schema.spec.name}-nat-gateway1 + - id: natGateway2 + template: + apiVersion: ec2.services.k8s.aws/v1alpha1 + kind: NATGateway + metadata: + namespace: ${schema.spec.name} + name: ${schema.spec.name}-nat-gateway2 + annotations: + services.k8s.aws/region: ${schema.spec.region} + spec: + subnetID: ${publicSubnet2.status.subnetID} + allocationID: ${eip2.status.allocationID} + tags: + - key: "Name" + value: ${schema.spec.name}-nat-gateway2 + - id: eip1 + template: + apiVersion: ec2.services.k8s.aws/v1alpha1 + kind: ElasticIPAddress + metadata: + namespace: ${schema.spec.name} + name: ${schema.spec.name}-eip1 + annotations: + services.k8s.aws/region: ${schema.spec.region} + spec: + tags: + - key: "Name" + value: ${schema.spec.name}-eip1 + - id: eip2 + template: + apiVersion: ec2.services.k8s.aws/v1alpha1 + kind: ElasticIPAddress + metadata: + namespace: ${schema.spec.name} + name: 
${schema.spec.name}-eip2 + annotations: + services.k8s.aws/region: ${schema.spec.region} + spec: + tags: + - key: "Name" + value: ${schema.spec.name}-eip2 + - id: publicRoutetable + template: + apiVersion: ec2.services.k8s.aws/v1alpha1 + kind: RouteTable + metadata: + namespace: ${schema.spec.name} + name: ${schema.spec.name}-public-routetable + annotations: + services.k8s.aws/region: ${schema.spec.region} + spec: + vpcID: ${vpc.status.vpcID} + routes: + - destinationCIDRBlock: 0.0.0.0/0 + gatewayID: ${internetGateway.status.internetGatewayID} + tags: + - key: "Name" + value: ${schema.spec.name}-public-routetable + - id: privateRoutetable1 + template: + apiVersion: ec2.services.k8s.aws/v1alpha1 + kind: RouteTable + metadata: + namespace: ${schema.spec.name} + name: ${schema.spec.name}-private-routetable1 + annotations: + services.k8s.aws/region: ${schema.spec.region} + spec: + vpcID: ${vpc.status.vpcID} + routes: + - destinationCIDRBlock: 0.0.0.0/0 + natGatewayID: ${natGateway1.status.natGatewayID} + tags: + - key: "Name" + value: ${schema.spec.name}-private-routetable1 + - id: privateRoutetable2 + template: + apiVersion: ec2.services.k8s.aws/v1alpha1 + kind: RouteTable + metadata: + namespace: ${schema.spec.name} + name: ${schema.spec.name}-private-routetable2 + annotations: + services.k8s.aws/region: ${schema.spec.region} + spec: + vpcID: ${vpc.status.vpcID} + routes: + - destinationCIDRBlock: 0.0.0.0/0 + natGatewayID: ${natGateway2.status.natGatewayID} + tags: + - key: "Name" + value: ${schema.spec.name}-private-routetable2 + - id: publicSubnet1 + template: + apiVersion: ec2.services.k8s.aws/v1alpha1 + kind: Subnet + metadata: + namespace: ${schema.spec.name} + name: ${schema.spec.name}-public-subnet1 + annotations: + services.k8s.aws/region: ${schema.spec.region} + spec: + availabilityZone: ${schema.spec.region}a + cidrBlock: ${schema.spec.cidr.publicSubnet1Cidr} + mapPublicIPOnLaunch: true + vpcID: ${vpc.status.vpcID} + routeTables: + - 
${publicRoutetable.status.routeTableID} + tags: + - key: "Name" + value: ${schema.spec.name}-public-subnet1 + - key: kubernetes.io/role/elb + value: '1' + - id: publicSubnet2 + template: + apiVersion: ec2.services.k8s.aws/v1alpha1 + kind: Subnet + metadata: + namespace: ${schema.spec.name} + name: ${schema.spec.name}-public-subnet2 + annotations: + services.k8s.aws/region: ${schema.spec.region} + spec: + availabilityZone: ${schema.spec.region}b + cidrBlock: ${schema.spec.cidr.publicSubnet2Cidr} + mapPublicIPOnLaunch: true + vpcID: ${vpc.status.vpcID} + routeTables: + - ${publicRoutetable.status.routeTableID} + tags: + - key: "Name" + value: ${schema.spec.name}-public-subnet2 + - key: kubernetes.io/role/elb + value: '1' + - id: privateSubnet1 + template: + apiVersion: ec2.services.k8s.aws/v1alpha1 + kind: Subnet + metadata: + namespace: ${schema.spec.name} + name: ${schema.spec.name}-private-subnet1 + annotations: + services.k8s.aws/region: ${schema.spec.region} + spec: + availabilityZone: ${schema.spec.region}a + cidrBlock: ${schema.spec.cidr.privateSubnet1Cidr} + vpcID: ${vpc.status.vpcID} + routeTables: + - ${privateRoutetable1.status.routeTableID} + tags: + - key: "Name" + value: ${schema.spec.name}-private-subnet1 + - key: kubernetes.io/role/internal-elb + value: '1' + - id: privateSubnet2 + template: + apiVersion: ec2.services.k8s.aws/v1alpha1 + kind: Subnet + metadata: + namespace: ${schema.spec.name} + name: ${schema.spec.name}-private-subnet2 + annotations: + services.k8s.aws/region: ${schema.spec.region} + spec: + availabilityZone: ${schema.spec.region}b + cidrBlock: ${schema.spec.cidr.privateSubnet2Cidr} + vpcID: ${vpc.status.vpcID} + routeTables: + - ${privateRoutetable2.status.routeTableID} + tags: + - key: "Name" + value: ${schema.spec.name}-private-subnet2 + - key: kubernetes.io/role/internal-elb + value: '1' diff --git a/charts/kro/resource-groups/iam/Chart.yaml b/charts/kro/resource-groups/iam/Chart.yaml new file mode 100644 index 0000000..e69de29 
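For reference, a claim instantiating the `Vpc` ResourceGraphDefinition above might look like the following sketch. The cluster name, namespace, and region are illustrative, and the `cidr` fields are optional, falling back to the defaults declared in the RGD schema:

```yaml
# Hypothetical Vpc instance; kro generates the Vpc CRD from the RGD schema above.
apiVersion: kro.run/v1alpha1
kind: Vpc
metadata:
  name: workload-cluster1        # illustrative cluster name
  namespace: workload-cluster1
spec:
  name: workload-cluster1
  region: us-west-2              # illustrative region
  cidr:
    vpcCidr: "10.0.0.0/16"       # matches the schema default; may be omitted
```

Once reconciled, the claim's `status` exposes `vpcID` plus the four subnet IDs that the RGD projects from the underlying ACK resources.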
diff --git a/charts/kro/resource-groups/iam/templates/rg-iam.yaml b/charts/kro/resource-groups/iam/templates/rg-iam.yaml new file mode 100644 index 0000000..cfbd656 --- /dev/null +++ b/charts/kro/resource-groups/iam/templates/rg-iam.yaml @@ -0,0 +1 @@ +# TODO: rgi for creating IAM role/policy, ServiceAccount, and EKS pod identity association \ No newline at end of file diff --git a/charts/kro/resource-groups/iam/values.yaml b/charts/kro/resource-groups/iam/values.yaml new file mode 100644 index 0000000..e69de29 diff --git a/charts/kro/resource-groups/pod-identity/pod-identity.yaml b/charts/kro/resource-groups/pod-identity/pod-identity.yaml new file mode 100644 index 0000000..82e9e78 --- /dev/null +++ b/charts/kro/resource-groups/pod-identity/pod-identity.yaml @@ -0,0 +1,80 @@ +apiVersion: kro.run/v1alpha1 +kind: ResourceGroup +metadata: + name: podidentity.kro.run + annotations: + argocd.argoproj.io/sync-wave: "-5" +spec: + schema: + apiVersion: v1alpha1 + kind: PodIdentity + spec: + name: string | default="pod-identity" + values: + aws: + clusterName: string + policy: + description: 'string | default="Test Description"' + path: 'string | default="/"' + policyDocument: string | default="" + piAssociation: + serviceAccount: string + piNamespace: string + status: + policyStatus: ${podpolicy.status.conditions} + roleStatus: ${podrole.status.conditions} + resources: + - id: podpolicy + readyWhen: + - ${podpolicy.status.conditions[0].status == "True"} + template: + apiVersion: iam.services.k8s.aws/v1alpha1 + kind: Policy + metadata: + name: ${schema.spec.name}-pod-policy + spec: + name: ${schema.spec.name}-pod-policy + description: ${schema.spec.values.policy.description} + path: ${schema.spec.values.policy.path} + policyDocument: ${schema.spec.values.policy.policyDocument} + - id: podrole + readyWhen: + - ${podrole.status.conditions[0].status == "True"} + template: + apiVersion: iam.services.k8s.aws/v1alpha1 + kind: Role + metadata: + name: ${schema.spec.name}-role + 
spec: + name: ${schema.spec.name}-role + policies: + - ${podpolicy.status.ackResourceMetadata.arn} + assumeRolePolicyDocument: | + { + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": "pods.eks.amazonaws.com" + }, + "Action": [ + "sts:TagSession", + "sts:AssumeRole" + ] + } + ] + } + - id: piAssociation + readyWhen: + - ${piAssociation.status.conditions[0].status == "True"} + template: + apiVersion: eks.services.k8s.aws/v1alpha1 + kind: PodIdentityAssociation + metadata: + name: ${schema.spec.name}-pod-association-${schema.spec.values.piAssociation.serviceAccount} + spec: + clusterName: ${schema.spec.values.aws.clusterName} + roleARN: ${podrole.status.ackResourceMetadata.arn} + serviceAccount: ${schema.spec.values.piAssociation.serviceAccount} + namespace: ${schema.spec.values.piAssociation.piNamespace} \ No newline at end of file diff --git a/charts/multi-acct/Chart.yaml b/charts/multi-acct/Chart.yaml new file mode 100644 index 0000000..95128fb --- /dev/null +++ b/charts/multi-acct/Chart.yaml @@ -0,0 +1,19 @@ +apiVersion: v2 +name: ack-multi-account +description: A Helm chart for Kubernetes + +# A chart can be either an 'application' or a 'library' chart. +# +# Application charts are a collection of templates that can be packaged into versioned archives +# to be deployed. +# +# Library charts provide useful utilities or functions for the chart developer. They're included as +# a dependency of application charts to inject those utilities and functions into the rendering +# pipeline. Library charts do not define any templates and therefore cannot be deployed. +type: application + +# This is the chart version. This version number should be incremented each time you make changes +# to the chart and its templates, including the app version. 
+# Versions are expected to follow Semantic Versioning (https://semver.org/) +version: 0.1.0 + diff --git a/charts/multi-acct/templates/iam-role-selector.yaml b/charts/multi-acct/templates/iam-role-selector.yaml new file mode 100644 index 0000000..6379037 --- /dev/null +++ b/charts/multi-acct/templates/iam-role-selector.yaml @@ -0,0 +1,12 @@ +{{- range $key, $value := .Values.clusters }} +--- +apiVersion: services.k8s.aws/v1alpha1 +kind: IAMRoleSelector +metadata: + name: {{ $key }}-namespace-config +spec: + arn: arn:aws:iam::{{ $value }}:role/ack + namespaceSelector: + names: + - {{ $key }} +{{- end }} \ No newline at end of file diff --git a/charts/multi-acct/templates/namespace.yaml b/charts/multi-acct/templates/namespace.yaml new file mode 100644 index 0000000..97be724 --- /dev/null +++ b/charts/multi-acct/templates/namespace.yaml @@ -0,0 +1,7 @@ +{{- range $key, $value := .Values.clusters }} +--- +apiVersion: v1 +kind: Namespace +metadata: + name: {{ $key }} +{{- end }} diff --git a/charts/pod-identity/.helmignore b/charts/pod-identity/.helmignore new file mode 100644 index 0000000..0e8a0eb --- /dev/null +++ b/charts/pod-identity/.helmignore @@ -0,0 +1,23 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*.orig +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/charts/pod-identity/Chart.yaml b/charts/pod-identity/Chart.yaml new file mode 100644 index 0000000..aae321e --- /dev/null +++ b/charts/pod-identity/Chart.yaml @@ -0,0 +1,24 @@ +apiVersion: v2 +name: pod-identity +description: A Helm chart for Kubernetes + +# A chart can be either an 'application' or a 'library' chart. 
+# +# Application charts are a collection of templates that can be packaged into versioned archives +# to be deployed. +# +# Library charts provide useful utilities or functions for the chart developer. They're included as +# a dependency of application charts to inject those utilities and functions into the rendering +# pipeline. Library charts do not define any templates and therefore cannot be deployed. +type: application + +# This is the chart version. This version number should be incremented each time you make changes +# to the chart and its templates, including the app version. +# Versions are expected to follow Semantic Versioning (https://semver.org/) +version: 0.1.0 + +# This is the version number of the application being deployed. This version number should be +# incremented each time you make changes to the application. Versions are not expected to +# follow Semantic Versioning. They should reflect the version the application is using. +# It is recommended to use it with quotes. +appVersion: "1.16.0" diff --git a/charts/pod-identity/templates/_helpers.tpl b/charts/pod-identity/templates/_helpers.tpl new file mode 100644 index 0000000..235c382 --- /dev/null +++ b/charts/pod-identity/templates/_helpers.tpl @@ -0,0 +1,74 @@ +{{/* +Expand the name of the chart. +*/}} +{{- define "pod-identity.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. 
+*/}} +{{- define "pod-identity.fullname" -}} +{{- if .Values.fullnameOverride }} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- $name := default .Chart.Name .Values.nameOverride }} +{{- if contains $name .Release.Name }} +{{- .Release.Name | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }} +{{- end }} +{{- end }} +{{- end }} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "pod-identity.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Common labels +*/}} +{{- define "pod-identity.labels" -}} +helm.sh/chart: {{ include "pod-identity.chart" . }} +{{ include "pod-identity.selectorLabels" . }} +{{- if .Chart.AppVersion }} +app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} +{{- end }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +{{- end }} + +{{/* +Selector labels +*/}} +{{- define "pod-identity.selectorLabels" -}} +app.kubernetes.io/name: {{ include "pod-identity.name" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- end }} + +{{/* +Create the name of the service account to use +*/}} +{{- define "pod-identity.serviceAccountName" -}} +{{- if .Values.serviceAccount.create }} +{{- default (include "pod-identity.fullname" .) .Values.serviceAccount.name }} +{{- else }} +{{- default "default" .Values.serviceAccount.name }} +{{- end }} +{{- end }} +{{/* +Construct a dynamic ARN based on the values passed from ArgoCD or values.yaml. 
+*/}} +{{- define "pod-identity.resourceArn" -}} +arn:aws:{{ .resourceType }}:{{ .region }}:{{ .accountId }}:{{ .resourceName }} +{{- end }} +{{- define "pod-identity.accountID" -}} +{{ .accountId }} +{{- end }} +{{- define "pod-identity.region" -}} +{{ .region }} +{{- end }} \ No newline at end of file diff --git a/charts/pod-identity/templates/pod-identity-association.yaml b/charts/pod-identity/templates/pod-identity-association.yaml new file mode 100644 index 0000000..6f9b1e8 --- /dev/null +++ b/charts/pod-identity/templates/pod-identity-association.yaml @@ -0,0 +1,27 @@ +{{- if .Values.create | default false }} +{{- $cluster := .Values.podIdentityAssociation.clusterName -}} +{{- $namespace := .Values.podIdentityAssociation.namespace -}} +{{- $tags := .Values.podIdentityAssociation.tags -}} +{{- $root := . -}} +{{- $serviceAccounts := .Values.podIdentityAssociation.serviceAccounts -}} +{{- range $serviceAccounts }} +apiVersion: eks.services.k8s.aws/v1alpha1 +kind: PodIdentityAssociation +metadata: + name: "{{ include "pod-identity.fullname" $root }}-{{ . }}" + annotations: + argocd.argoproj.io/sync-wave: "-1" +spec: + clusterName: {{ $cluster }} + roleRef: + from: + name: "{{ include "pod-identity.fullname" $root }}" + namespace: {{ $namespace }} + serviceAccount: {{ . }} + {{- if $tags}} + tags: + {{- $tags| toYaml | nindent 10 }} + {{- end }} +--- +{{- end }} +{{- end }} diff --git a/charts/pod-identity/templates/pod-identity-policy.yaml b/charts/pod-identity/templates/pod-identity-policy.yaml new file mode 100644 index 0000000..71783e2 --- /dev/null +++ b/charts/pod-identity/templates/pod-identity-policy.yaml @@ -0,0 +1,56 @@ +{{- if and (.Values.create | default false) (.Values.podIdentityPolicyCreate | default false) }} +apiVersion: iam.services.k8s.aws/v1alpha1 +kind: Policy +metadata: + name: {{ include "pod-identity.fullname" . }} + annotations: + argocd.argoproj.io/sync-wave: "-3" +spec: + name: {{ include "pod-identity.fullname" . 
}} + description: {{ .Values.podIdentityPolicy.description }} + {{- if .Values.podIdentityPolicy.path }} + path: {{ .Values.podIdentityPolicy.path }} + {{- end }} + policyDocument: | + { + "Version": "2012-10-17", + "Statement": [ + {{- range $index, $policy := .Values.podIdentityPolicy.policies }} + { + "Effect": "Allow", + "Action": [ + {{- range $i, $action := $policy.actions }} + "{{ $action }}"{{ if not (eq (add $i 1) (len $policy.actions)) }},{{ end }} + {{- end }} + ], + "Resource": [ + {{- if $policy.customArn }} + "{{ $policy.customArn }}" + {{- else if eq $policy.resourceName "*" }} + "*" + {{- else }} + "arn:aws:{{ $policy.resourceType }}:{{ $.Values.region }}:{{ $.Values.accountId }}:{{ $policy.resourceName }}" + {{- end }} + ] + {{- if $policy.conditions }} + ,"Condition": { + {{- range $j, $condition := $policy.conditions }} + "{{ $condition.test }}": { + "{{ $condition.variable }}": [ + {{- range $k, $value := $condition.values }} + "{{ $value }}"{{ if not (eq (add $k 1) (len $condition.values)) }},{{ end }} + {{- end }} + ] + } + {{- end }} + } + {{- end }} + }{{ if not (eq (add $index 1) (len $.Values.podIdentityPolicy.policies)) }},{{ end }} + {{- end }} + ] + } + {{- if .Values.podIdentityPolicy.tags }} + tags: + {{- .Values.podIdentityPolicy.tags | toYaml | nindent 10 }} + {{- end }} +{{- end }} diff --git a/charts/pod-identity/templates/pod-identity-role.yaml b/charts/pod-identity/templates/pod-identity-role.yaml new file mode 100644 index 0000000..5f76215 --- /dev/null +++ b/charts/pod-identity/templates/pod-identity-role.yaml @@ -0,0 +1,66 @@ +{{- if .Values.create | default false }} +apiVersion: iam.services.k8s.aws/v1alpha1 +kind: Role +metadata: + name: {{ include "pod-identity.fullname" . }} + annotations: + argocd.argoproj.io/sync-wave: "-2" +spec: + name: {{ include "pod-identity.fullname" . 
}} + assumeRolePolicyDocument: | + { + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": "pods.eks.amazonaws.com" + }, + "Action": [ + "sts:TagSession", + "sts:AssumeRole" + ] + } + ] + } + description: {{ .Values.podIdentityRole.description }} + + {{- if .Values.podIdentityRole.managedPolicies }} + policies: + {{- if and (.Values.podIdentityPolicyCreate | default false) .Values.podIdentityRole.managedPolicies }} + - "arn:aws:iam::{{ $.Values.accountId }}:policy/{{ include "pod-identity.fullname" . }}" + {{- end }} + {{- range .Values.podIdentityRole.managedPolicies }} + - "{{ . }}" + {{- end }} + + {{- else if .Values.podIdentityRole.policyRefs }} + policyRefs: + {{- if .Values.podIdentityPolicyCreate | default true }} + - from: + name: "{{ include "pod-identity.fullname" . }}" + {{- end }} + {{- range .Values.podIdentityRole.policyRefs }} + - from: + name: "{{ .name }}" + {{- if .namespace }} + namespace: "{{ .namespace }}" + {{- end }} + {{- end }} + + {{- else }} + policyRefs: + - from: + name: "{{ include "pod-identity.fullname" . 
}}" + {{- end }} + + {{- if .Values.podIdentityRole.inlinePolicies }} + inlinePolicies: + {{ .Values.podIdentityRole.inlinePolicies | toYaml | nindent 4 }} + {{- end }} + + {{- if .Values.podIdentityRole.tags }} + tags: + {{ .Values.podIdentityRole.tags | toYaml | nindent 4 }} + {{- end }} +{{- end }} diff --git a/charts/pod-identity/values.yaml b/charts/pod-identity/values.yaml new file mode 100644 index 0000000..c9a674a --- /dev/null +++ b/charts/pod-identity/values.yaml @@ -0,0 +1,61 @@ +# region: us-west-2 +# accountId: "471112582304" +# create: true +# podIdentityPolicyCreate: false +# podIdentityRole: +# description: "Test" +# # Only one of the two can be true Managed Policy or Policy Refs +# # If Policy is created it will automatically add it on managed Policies or PolicyRefs +# managedPolicies: +# - "arn:aws:iam::aws:policy/AmazonSSMReadOnlyAccess" +# - "arn:aws:iam::aws:policy/Admin" +# policyRefs: +# - name: "custom-policy-1" +# namespace: kube-system +# - name: "AmazonSSMReadOnlyAccess" +# namespace: kube-system +# podIdentityAssociation: +# clusterName: control-plane +# namespace: default +# serviceAccounts: +# - serviceAccount1 +# - serviceAccount2 +# podIdentityPolicy: +# description: "Test" +# policies: +# - resourceType: ssm +# resourceName: "*" +# actions: +# - "ssm:DescribeParameters" +# - resourceType: ssm +# resourceName: parameter/* +# actions: +# - "ssm:GetParameter" +# - "ssm:GetParameters" +# - resourceType: secretsmanager +# resourceName: secret:* +# actions: +# - "secretsmanager:GetResourcePolicy" +# - "secretsmanager:GetSecretValue" +# - "secretsmanager:DescribeSecret" +# - "secretsmanager:ListSecretVersionIds" +# - "secretsmanager:CreateSecret" +# - "secretsmanager:PutSecretValue" +# - "secretsmanager:TagResource" +# - resourceType: secretsmanager +# resourceName: secret:* +# actions: +# - "secretsmanager:DeleteSecret" +# conditions: +# - test: "StringEquals" +# variable: "secretsmanager:ResourceTag/managed-by" +# values: +# - 
"external-secrets" +# - resourceType: kms +# resourceName: "key/*" +# actions: +# - "kms:Decrypt" +# - resourceType: ecr +# resourceName: "*" +# actions: +# - "ecr:GetAuthorizationToken" diff --git a/charts/storageclass-resources/.helmignore b/charts/storageclass-resources/.helmignore new file mode 100644 index 0000000..0e8a0eb --- /dev/null +++ b/charts/storageclass-resources/.helmignore @@ -0,0 +1,23 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*.orig +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/charts/storageclass-resources/Chart.yaml b/charts/storageclass-resources/Chart.yaml new file mode 100644 index 0000000..7d00b19 --- /dev/null +++ b/charts/storageclass-resources/Chart.yaml @@ -0,0 +1,24 @@ +apiVersion: v2 +name: efs-classes +description: A Helm chart for Kubernetes + +# A chart can be either an 'application' or a 'library' chart. +# +# Application charts are a collection of templates that can be packaged into versioned archives +# to be deployed. +# +# Library charts provide useful utilities or functions for the chart developer. They're included as +# a dependency of application charts to inject those utilities and functions into the rendering +# pipeline. Library charts do not define any templates and therefore cannot be deployed. +type: application + +# This is the chart version. This version number should be incremented each time you make changes +# to the chart and its templates, including the app version. +# Versions are expected to follow Semantic Versioning (https://semver.org/) +version: 0.1.0 + +# This is the version number of the application being deployed. This version number should be +# incremented each time you make changes to the application. 
Versions are not expected to +# follow Semantic Versioning. They should reflect the version the application is using. +# It is recommended to use it with quotes. +appVersion: "1.16.0" diff --git a/charts/storageclass-resources/templates/_helpers.tpl b/charts/storageclass-resources/templates/_helpers.tpl new file mode 100644 index 0000000..5bc5bbe --- /dev/null +++ b/charts/storageclass-resources/templates/_helpers.tpl @@ -0,0 +1,62 @@ +{{/* +Expand the name of the chart. +*/}} +{{- define "efs-classes.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. +*/}} +{{- define "efs-classes.fullname" -}} +{{- if .Values.fullnameOverride }} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- $name := default .Chart.Name .Values.nameOverride }} +{{- if contains $name .Release.Name }} +{{- .Release.Name | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }} +{{- end }} +{{- end }} +{{- end }} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "efs-classes.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Common labels +*/}} +{{- define "efs-classes.labels" -}} +helm.sh/chart: {{ include "efs-classes.chart" . }} +{{ include "efs-classes.selectorLabels" . }} +{{- if .Chart.AppVersion }} +app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} +{{- end }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +{{- end }} + +{{/* +Selector labels +*/}} +{{- define "efs-classes.selectorLabels" -}} +app.kubernetes.io/name: {{ include "efs-classes.name" . 
}} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- end }} + +{{/* +Create the name of the service account to use +*/}} +{{- define "efs-classes.serviceAccountName" -}} +{{- if .Values.serviceAccount.create }} +{{- default (include "efs-classes.fullname" .) .Values.serviceAccount.name }} +{{- else }} +{{- default "default" .Values.serviceAccount.name }} +{{- end }} +{{- end }} diff --git a/charts/storageclass-resources/templates/storageclass.yaml b/charts/storageclass-resources/templates/storageclass.yaml new file mode 100644 index 0000000..d0e3c9d --- /dev/null +++ b/charts/storageclass-resources/templates/storageclass.yaml @@ -0,0 +1,39 @@ +{{- $fileSystemId := "" -}} +{{- if .Values.storageClasses.efs }} + {{- $fileSystemId = .Values.storageClasses.efs.fileSystemId | default "" -}} +{{- end }} + +{{- range $storageClassType, $storageClasses := .Values.storageClasses }} + {{- range $storageClassName, $storageClass := $storageClasses }} + {{- if ne $storageClassName "fileSystemId" }} +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: {{ $storageClassName }} + annotations: + storageclass.kubernetes.io/is-default-class: "false" +provisioner: {{ if eq $storageClassType "efs" }}efs.csi.aws.com{{ else }}ebs.csi.aws.com{{ end }} +{{- if and (eq $storageClassType "efs") $fileSystemId }} +parameters: + fileSystemId: {{ $fileSystemId }} + directoryPerms: "{{ $storageClass.directoryPerms | default "700" }}" + provisioningMode: {{ $storageClass.provisioningMode | default "efs-ap" }} + basePath: {{ $storageClass.basePath | default "/" }} +mountOptions: +{{- range $storageClass.mountOptions }} + - {{ . 
}} +{{- end }} +{{- else if eq $storageClassType "ebs" }} +parameters: + type: {{ $storageClass.volumeType }} + fsType: ext4 + iopsPerGiB: "{{ $storageClass.iops | default "3000" }}" + throughput: "{{ $storageClass.throughput | default "125" }}" +{{- end }} +reclaimPolicy: {{ $storageClass.reclaimPolicy | default "Delete" }} +allowVolumeExpansion: true +volumeBindingMode: WaitForFirstConsumer +--- + {{- end }} + {{- end }} +{{- end }} diff --git a/charts/storageclass-resources/values.yaml b/charts/storageclass-resources/values.yaml new file mode 100644 index 0000000..934777f --- /dev/null +++ b/charts/storageclass-resources/values.yaml @@ -0,0 +1,17 @@ +storageClasses: + # efs: + # fileSystemId: fs-12345678 + # efs-sc: + # reclaimPolicy: Delete + # directoryPerms: "700" + # basePath: /data + # mountOptions: + # - nfsvers=4.1 + + ebs: + ebs-sc-gp3: + reclaimPolicy: Retain + volumeType: gp3 + size: 20Gi + iops: 3000 + throughput: 125 diff --git a/docs/eks-cluster-mgmt-central.drawio.png b/docs/eks-cluster-mgmt-central.drawio.png new file mode 100644 index 0000000..15225f8 Binary files /dev/null and b/docs/eks-cluster-mgmt-central.drawio.png differ diff --git a/fleet/bootstrap/addons.yaml b/fleet/bootstrap/addons.yaml new file mode 100644 index 0000000..d4aa5ca --- /dev/null +++ b/fleet/bootstrap/addons.yaml @@ -0,0 +1,52 @@ +apiVersion: argoproj.io/v1alpha1 +kind: ApplicationSet +metadata: + name: cluster-addons + namespace: argocd + annotations: + argocd.argoproj.io/sync-wave: "-1" +spec: + syncPolicy: + preserveResourcesOnDeletion: false # to be able to cleanup + goTemplate: true + goTemplateOptions: + - missingkey=error + generators: + - clusters: + selector: + matchLabels: + fleet_member: control-plane + values: + addonChart: application-sets + template: + metadata: + name: cluster-addons + spec: + project: default + sources: + - ref: values + repoURL: '{{.metadata.annotations.addons_repo_url}}' + targetRevision: '{{.metadata.annotations.addons_repo_revision}}' + 
- repoURL: '{{.metadata.annotations.addons_repo_url}}' + path: 'charts/{{.values.addonChart}}' + targetRevision: '{{.metadata.annotations.addons_repo_revision}}' + helm: + ignoreMissingValueFiles: true + valueFiles: + - $values/{{.metadata.annotations.addons_repo_basepath}}bootstrap/default/addons.yaml + - $values/{{.metadata.annotations.addons_repo_basepath}}environments/{{ .metadata.labels.environment }}/addons.yaml + - $values/{{.metadata.annotations.addons_repo_basepath}}tenants/{{ .metadata.labels.tenant }}/default/{{ .values.addonChart }}/addons.yaml + - $values/{{.metadata.annotations.addons_repo_basepath}}tenants/{{ .metadata.labels.tenant }}/clusters/{{ .name }}/{{.values.addonChart}}/addons.yaml + destination: + namespace: argocd + name: '{{.name}}' + syncPolicy: + automated: + selfHeal: false + allowEmpty: true + prune: false + retry: + limit: 100 + syncOptions: + - CreateNamespace=true + - ServerSideApply=true # Big CRDs. \ No newline at end of file diff --git a/fleet/bootstrap/clusters.yaml b/fleet/bootstrap/clusters.yaml new file mode 100644 index 0000000..f67b227 --- /dev/null +++ b/fleet/bootstrap/clusters.yaml @@ -0,0 +1,52 @@ +apiVersion: argoproj.io/v1alpha1 +kind: ApplicationSet +metadata: + name: clusters + namespace: argocd + annotations: + argocd.argoproj.io/sync-wave: "0" +spec: + syncPolicy: + preserveResourcesOnDeletion: true + goTemplate: true + goTemplateOptions: + - missingkey=error + generators: + - clusters: + selector: + matchLabels: + fleet_member: control-plane + template: + metadata: + name: clusters + labels: + environment: '{{.metadata.labels.environment}}' + tenant: '{{.metadata.labels.tenant}}' + spec: + project: default + sources: + - repoURL: '{{.metadata.annotations.fleet_repo_url}}' + targetRevision: '{{.metadata.annotations.fleet_repo_revision}}' + ref: values + - repoURL: '{{.metadata.annotations.fleet_repo_url}}' + path: 'charts/kro-clusters/' + targetRevision: '{{.metadata.annotations.fleet_repo_revision}}' + helm: + 
releaseName: 'kro-clusters' + ignoreMissingValueFiles: true + valueFiles: + - '$values/{{.metadata.annotations.fleet_repo_basepath}}kro-values/default/kro-clusters/values.yaml' + - '$values/{{.metadata.annotations.fleet_repo_basepath}}kro-values/tenants/{{.metadata.labels.tenant}}/kro-clusters/values.yaml' + destination: + namespace: argocd + name: '{{.name}}' + syncPolicy: + automated: + selfHeal: false + allowEmpty: true + prune: true + retry: + limit: 100 + syncOptions: + - CreateNamespace=true + - ServerSideApply=true # Big CRDs. \ No newline at end of file diff --git a/fleet/kro-values/tenants/tenant1/kro-clusters/values.yaml b/fleet/kro-values/tenants/tenant1/kro-clusters/values.yaml new file mode 100644 index 0000000..ac79a6b --- /dev/null +++ b/fleet/kro-values/tenants/tenant1/kro-clusters/values.yaml @@ -0,0 +1,41 @@ +clusters: + # workload-cluster1: + # managementAccountId: "XXXXXX" + # accountId: "XXXXXX" + # tenant: "tenant1" + # k8sVersion: "1.34" + # vpc: + # create: true + # gitops: + # addonsRepoUrl: "https://github.com/XXXXXX/eks-cluster-mgmt" + # fleetRepoUrl: "https://github.com/XXXXXX/eks-cluster-mgmt" + + # workload-cluster2: + # managementAccountId: "XXXXXX" + # accountId: "XXXXXX" + # tenant: "tenant1" + # k8sVersion: "1.34" + # vpc: + # create: true + # gitops: + # addonsRepoUrl: "https://github.com/XXXXXX/eks-cluster-mgmt" + # fleetRepoUrl: "https://github.com/XXXXXX/eks-cluster-mgmt" + # addons: + # enable_external_secrets: "true" + + # workload-cluster3: + # managementAccountId: "XXXXXX" + # accountId: "XXXXXX" + # tenant: "tenant1" + # k8sVersion: "1.34" + # workloads: "true" + # vpc: + # create: false + # vpcId: "vpc-XXXX" + # publicSubnet1Id: "subnet-XXXX" + # publicSubnet2Id: "subnet-XXXX" + # privateSubnet1Id: "subnet-XXXX" + # privateSubnet2Id: "subnet-XXXX" + # gitops: + # addonsRepoUrl: "https://github.com/XXXXXX/eks-cluster-mgmt" + # fleetRepoUrl: "https://github.com/XXXXXX/eks-cluster-mgmt" diff --git 
a/scripts/create_ack_workload_roles.sh b/scripts/create_ack_workload_roles.sh new file mode 100755 index 0000000..1f83467 --- /dev/null +++ b/scripts/create_ack_workload_roles.sh @@ -0,0 +1,124 @@ +#!/bin/bash + +# Disable AWS CLI paging +export AWS_PAGER="" + +create_ack_workload_roles() { + local MGMT_ACCOUNT_ID="$1" + + if [ -z "$MGMT_ACCOUNT_ID" ]; then + echo "Usage: create_ack_workload_roles <mgmt-account-id>" + echo "Example: create_ack_workload_roles 123456789012" + return 1 + fi + # Generate the trust policy: it allows the hub cluster's ACK controller role in the management account to assume this workload role (adjust the principal if your controller role is named differently) + generate_trust_policy() { + cat <<EOF > trust.json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "AWS": "arn:aws:iam::${MGMT_ACCOUNT_ID}:role/${CLUSTER_NAME}-ack-controller" + }, + "Action": ["sts:AssumeRole", "sts:TagSession"] + } + ] +} +EOF + } + generate_trust_policy + + # Create the role with the trust policy + local ROLE_NAME="ack" + local ROLE_DESCRIPTION="Workload role for ACK controllers" + echo "Creating role ${ROLE_NAME}" + aws iam create-role \ + --role-name "${ROLE_NAME}" \ + --assume-role-policy-document file://trust.json \ + --description "${ROLE_DESCRIPTION}" + + if [ $? -eq 0 ]; then + echo "Successfully created role ${ROLE_NAME}" + local ROLE_ARN + ROLE_ARN=$(aws iam get-role --role-name "${ROLE_NAME}" --query Role.Arn --output text) + echo "Role ARN: ${ROLE_ARN}" + rm -f trust.json + else + echo "Failed to create/configure role ${ROLE_NAME}" + rm -f trust.json + return 1 + fi + + #for SERVICE in iam ec2 eks secretsmanager; do + for SERVICE in iam ec2 eks; do + echo ">>>>>>>>>SERVICE:$SERVICE" + + # Download and apply the recommended policies + local BASE_URL="https://raw.githubusercontent.com/aws-controllers-k8s/${SERVICE}-controller/main" + local POLICY_ARN_URL="${BASE_URL}/config/iam/recommended-policy-arn" + local POLICY_ARN_STRINGS + POLICY_ARN_STRINGS="$(wget -qO- ${POLICY_ARN_URL})" + + local INLINE_POLICY_URL="${BASE_URL}/config/iam/recommended-inline-policy" + local INLINE_POLICY + INLINE_POLICY="$(wget -qO- ${INLINE_POLICY_URL})" + + # Attach managed policies + while IFS= read -r POLICY_ARN; do + if [ -n "$POLICY_ARN" ]; then + echo -n "Attaching $POLICY_ARN ... 
" + aws iam attach-role-policy \ + --role-name "${ROLE_NAME}" \ + --policy-arn "${POLICY_ARN}" + echo "ok." + fi + done <<< "$POLICY_ARN_STRINGS" + + # Add inline policy if it exists + if [ ! -z "$INLINE_POLICY" ]; then + echo -n "Putting inline policy ... " + aws iam put-role-policy \ + --role-name "${ROLE_NAME}" \ + --policy-name "ack-recommended-policy-${SERVICE}" \ + --policy-document "$INLINE_POLICY" + echo "ok." + fi + + if [ $? -eq 0 ]; then + echo "Successfully configured role ${ROLE_NAME}" + else + echo "Failed to configure role ${ROLE_NAME}" + return 1 + fi + done + + return 0 +} + +# Main script execution +if [ -z "$MGMT_ACCOUNT_ID" ]; then + echo "You must set the MGMT_ACCOUNT_ID environment variable" + echo "Example: export MGMT_ACCOUNT_ID=123456789012" + exit 1 +fi + +if [ -z "$CLUSTER_NAME" ]; then + echo "You must set the CLUSTER_NAME environment variable" + echo "Example: export CLUSTER_NAME=hub-cluster" + exit 1 +fi + +echo "Management Account ID: $MGMT_ACCOUNT_ID" +echo "Cluster Name: $CLUSTER_NAME" +create_ack_workload_roles "$MGMT_ACCOUNT_ID" \ No newline at end of file diff --git a/scripts/delete_ack_workload_roles.sh b/scripts/delete_ack_workload_roles.sh new file mode 100755 index 0000000..bbc4aef --- /dev/null +++ b/scripts/delete_ack_workload_roles.sh @@ -0,0 +1,87 @@ +#!/bin/bash + +# Script to delete IAM roles by first removing all attached policies +# Usage: ./delete_ack_workload_roles.sh role1 role2 role3 ... +# ./delete_ack_workload_roles.sh eks-cluster-mgmt-iam eks-cluster-mgmt-ec2 eks-cluster-mgmt-eks + +set -e + +# Check if AWS CLI is installed +if ! command -v aws &> /dev/null; then + echo "AWS CLI is not installed. Please install it first." + exit 1 +fi + +# Check if at least one role name is provided +if [ $# -eq 0 ]; then + echo "Usage: $0 role1 role2 role3 ..." + echo "Please provide at least one role name to delete." 
+ exit 1 +fi + +# Function to delete a role +delete_role() { + local role_name=$1 + echo "Processing role: $role_name" + + # Check if role exists + if ! aws iam get-role --role-name "$role_name" &> /dev/null; then + echo "Role $role_name does not exist. Skipping." + return 0 + fi + + # List and detach managed policies + echo "Checking for attached managed policies..." + local attached_policies=$(aws iam list-attached-role-policies --role-name "$role_name" --query "AttachedPolicies[*].PolicyArn" --output text) + + if [ -n "$attached_policies" ]; then + echo "Detaching managed policies from $role_name..." + for policy_arn in $attached_policies; do + echo " Detaching policy: $policy_arn" + aws iam detach-role-policy --role-name "$role_name" --policy-arn "$policy_arn" + done + else + echo "No managed policies attached to $role_name." + fi + + # List and delete inline policies + echo "Checking for inline policies..." + local inline_policies=$(aws iam list-role-policies --role-name "$role_name" --query "PolicyNames" --output text) + + if [ -n "$inline_policies" ] && [ "$inline_policies" != "None" ]; then + echo "Removing inline policies from $role_name..." + for policy_name in $inline_policies; do + echo " Removing inline policy: $policy_name" + aws iam delete-role-policy --role-name "$role_name" --policy-name "$policy_name" + done + else + echo "No inline policies for $role_name." + fi + + # Delete instance profiles associated with the role (if any) + echo "Checking for instance profiles..." + local instance_profiles=$(aws iam list-instance-profiles-for-role --role-name "$role_name" --query "InstanceProfiles[*].InstanceProfileName" --output text) + + if [ -n "$instance_profiles" ] && [ "$instance_profiles" != "None" ]; then + echo "Removing role from instance profiles..." 
+ for profile_name in $instance_profiles; do + echo " Removing role from instance profile: $profile_name" + aws iam remove-role-from-instance-profile --instance-profile-name "$profile_name" --role-name "$role_name" + done + else + echo "No instance profiles for $role_name." + fi + + # Finally delete the role + echo "Deleting role: $role_name" + aws iam delete-role --role-name "$role_name" + echo "Role $role_name successfully deleted." + echo "----------------------------------------" +} + +# Process each role +for role in "$@"; do + delete_role "$role" +done + +echo "All specified roles have been processed." diff --git a/terraform/hub/.gitignore b/terraform/hub/.gitignore new file mode 100644 index 0000000..0216ebd --- /dev/null +++ b/terraform/hub/.gitignore @@ -0,0 +1,32 @@ +# Local .terraform directories +**/.terraform/* +.terraform.lock.hcl +# .tfstate files +*.tfstate +*.tfstate.* +tfstate.* + +# Crash log files +crash.log + +# Exclude all .tfvars files, which might contain sensitive data, such as +# password, private keys, and other secrets. + +# Ignore override files as they are usually used to override resources locally. 
+override.tf +override.tf.json +*_override.tf +*_override.tf.json + +# Ignore CLI configuration files +.terraformrc +terraform.rc +backend.hcl + +# Ignore log files +*.log + +# Ignore temporary files +*.tmp +*.temp +.envrc diff --git a/terraform/hub/argocd.tf b/terraform/hub/argocd.tf new file mode 100644 index 0000000..1997db1 --- /dev/null +++ b/terraform/hub/argocd.tf @@ -0,0 +1,58 @@ +# Create ArgoCD namespace +resource "kubernetes_namespace_v1" "argocd" { + metadata { + name = local.argocd_namespace + } +} + +locals { + cluster_name = module.eks.cluster_name + argocd_labels = merge({ + cluster_name = local.cluster_name + environment = local.environment + "argocd.argoproj.io/secret-type" = "cluster" + }, + try(local.addons, {}) + ) + argocd_annotations = merge( + { + cluster_name = local.cluster_name + environment = local.environment + }, + try(local.addons_metadata, {}) + ) +} + +locals { + config = <<-EOT + { + "tlsClientConfig": { + "insecure": false + } + } + EOT + argocd = { + apiVersion = "v1" + kind = "Secret" + metadata = { + name = module.eks.cluster_name + namespace = local.argocd_namespace + annotations = local.argocd_annotations + labels = local.argocd_labels + } + stringData = { + name = module.eks.cluster_name + server = module.eks.cluster_arn + project = "default" + } + } +} +resource "kubernetes_secret_v1" "cluster" { + metadata { + name = local.argocd.metadata.name + namespace = local.argocd.metadata.namespace + annotations = local.argocd.metadata.annotations + labels = local.argocd.metadata.labels + } + data = local.argocd.stringData +} \ No newline at end of file diff --git a/terraform/hub/bootstrap/applicationsets.yaml b/terraform/hub/bootstrap/applicationsets.yaml new file mode 100644 index 0000000..13bdf05 --- /dev/null +++ b/terraform/hub/bootstrap/applicationsets.yaml @@ -0,0 +1,31 @@ +apiVersion: argoproj.io/v1alpha1 +kind: ApplicationSet +metadata: + name: bootstrap + namespace: argocd +spec: + goTemplate: true + syncPolicy: + 
preserveResourcesOnDeletion: false # to be able to cleanup + generators: + - clusters: + selector: + matchLabels: + fleet_member: control-plane + template: + metadata: + name: bootstrap + spec: + project: default + source: + repoURL: '{{.metadata.annotations.fleet_repo_url}}' + path: '{{.metadata.annotations.fleet_repo_basepath}}{{.metadata.annotations.fleet_repo_path}}' + targetRevision: '{{.metadata.annotations.fleet_repo_revision}}' + directory: + recurse: false + exclude: exclude/* + destination: + namespace: 'argocd' + name: '{{.name}}' + syncPolicy: + automated: {} diff --git a/terraform/hub/data.tf b/terraform/hub/data.tf new file mode 100644 index 0000000..9128d6a --- /dev/null +++ b/terraform/hub/data.tf @@ -0,0 +1,18 @@ +data "aws_region" "current" {} +data "aws_caller_identity" "current" {} +data "aws_availability_zones" "available" { + # Do not include local zones + filter { + name = "opt-in-status" + values = ["opt-in-not-required"] + } +} +data "aws_ecr_authorization_token" "token" {} + +data "aws_iam_session_context" "current" { + # This data source provides information on the IAM source role of an STS assumed role + # For non-role ARNs, this data source simply passes the ARN through issuer ARN + # Ref https://github.com/terraform-aws-modules/terraform-aws-eks/issues/2327#issuecomment-1355581682 + # Ref https://github.com/hashicorp/terraform-provider-aws/issues/28381 + arn = data.aws_caller_identity.current.arn +} diff --git a/terraform/hub/destroy.sh b/terraform/hub/destroy.sh new file mode 100755 index 0000000..04b28c8 --- /dev/null +++ b/terraform/hub/destroy.sh @@ -0,0 +1,7 @@ +#!/bin/bash + +#if var not exit provide default +TF_VAR_FILE=${TF_VAR_FILE:-"terraform.tfvars"} + +terraform init +terraform destroy -var-file=$TF_VAR_FILE \ No newline at end of file diff --git a/terraform/hub/eks-capability-iam.tf b/terraform/hub/eks-capability-iam.tf new file mode 100644 index 0000000..afee91d --- /dev/null +++ b/terraform/hub/eks-capability-iam.tf @@ 
-0,0 +1,153 @@ +# IAM role for ACK controllers with assume role capability +resource "aws_iam_role" "ack_controller" { + name = "${local.name}-ack-controller" + + assume_role_policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Effect = "Allow" + Principal = { + Service = "capabilities.eks.amazonaws.com" + } + Action = [ + "sts:AssumeRole", + "sts:TagSession" + ] + } + ] + }) + + tags = local.tags +} + +# IAM policy allowing the role to assume any role +resource "aws_iam_policy" "ack_assume_role" { + name = "${local.name}-ack-assume-role" + description = "Policy allowing ACK controller to assume any role" + + policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Effect = "Allow" + Action = [ + "sts:AssumeRole", + "sts:TagSession" + ] + Resource = "*" + } + ] + }) + + tags = local.tags +} + +# Attach the assume role policy to the ACK controller role +resource "aws_iam_role_policy_attachment" "ack_assume_role" { + role = aws_iam_role.ack_controller.name + policy_arn = aws_iam_policy.ack_assume_role.arn +} + +# Grant ACK controller role admin access to EKS cluster +resource "aws_eks_access_entry" "ack_controller" { + cluster_name = module.eks.cluster_name + principal_arn = aws_iam_role.ack_controller.arn + type = "STANDARD" +} + +resource "aws_eks_access_policy_association" "ack_controller_admin" { + cluster_name = module.eks.cluster_name + principal_arn = aws_iam_role.ack_controller.arn + policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy" + + access_scope { + type = "cluster" + } + + depends_on = [aws_eks_access_entry.ack_controller] +} + +# IAM role for kro capability +resource "aws_iam_role" "kro_controller" { + name = "${local.name}-kro-controller" + + assume_role_policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Effect = "Allow" + Principal = { + Service = "capabilities.eks.amazonaws.com" + } + Action = [ + "sts:AssumeRole", + "sts:TagSession" + ] + } + ] + }) + + tags = local.tags 
+} + +# Grant kro controller role admin access to EKS cluster +resource "aws_eks_access_entry" "kro_controller" { + cluster_name = module.eks.cluster_name + principal_arn = aws_iam_role.kro_controller.arn + type = "STANDARD" +} + +resource "aws_eks_access_policy_association" "kro_controller_admin" { + cluster_name = module.eks.cluster_name + principal_arn = aws_iam_role.kro_controller.arn + policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy" + + access_scope { + type = "cluster" + } + + depends_on = [aws_eks_access_entry.kro_controller] +} + +# IAM role for argocd capability +resource "aws_iam_role" "argocd_controller" { + name = "${local.name}-argocd-controller" + + assume_role_policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Effect = "Allow" + Principal = { + Service = "capabilities.eks.amazonaws.com" + } + Action = [ + "sts:AssumeRole", + "sts:TagSession" + ] + } + ] + }) + + tags = local.tags +} + +# Grant argocd controller role admin access to EKS cluster +resource "aws_eks_access_entry" "argocd_controller" { + cluster_name = module.eks.cluster_name + principal_arn = aws_iam_role.argocd_controller.arn + type = "STANDARD" +} + +resource "aws_eks_access_policy_association" "argocd_controller_admin" { + cluster_name = module.eks.cluster_name + principal_arn = aws_iam_role.argocd_controller.arn + policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy" + + access_scope { + type = "cluster" + } + + depends_on = [aws_eks_access_entry.argocd_controller] +} \ No newline at end of file diff --git a/terraform/hub/eks.tf b/terraform/hub/eks.tf new file mode 100644 index 0000000..e09aa76 --- /dev/null +++ b/terraform/hub/eks.tf @@ -0,0 +1,55 @@ +module "eks" { + #checkov:skip=CKV_TF_1:We are using version control for those modules + #checkov:skip=CKV_TF_2:We are using version control for those modules + source = "terraform-aws-modules/eks/aws" + version = "~> 21.10.1" + + name = local.name + 
kubernetes_version = local.cluster_version + endpoint_public_access = true + + vpc_id = module.vpc.vpc_id + subnet_ids = module.vpc.private_subnets + + enable_cluster_creator_admin_permissions = true + + compute_config = { + enabled = true + node_pools = ["general-purpose", "system"] + } + + tags = { + Blueprint = local.name + GithubRepo = "https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest" + } +} + +################################################################################ +# Supporting Resources +################################################################################ +module "vpc" { + source = "terraform-aws-modules/vpc/aws" + version = "~> 5.0" + + name = local.name + cidr = local.vpc_cidr + + azs = local.azs + private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)] + public_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)] + + enable_nat_gateway = true + single_nat_gateway = true + + public_subnet_tags = { + "kubernetes.io/role/elb" = 1 + } + + private_subnet_tags = { + "kubernetes.io/role/internal-elb" = 1 + # Tags subnets for Karpenter auto-discovery + "karpenter.sh/discovery" = local.name + } + + tags = local.tags +} \ No newline at end of file diff --git a/terraform/hub/install.sh b/terraform/hub/install.sh new file mode 100755 index 0000000..65be9a5 --- /dev/null +++ b/terraform/hub/install.sh @@ -0,0 +1,7 @@ +#!/bin/bash + +#if var not exit provide default +TF_VAR_FILE=${TF_VAR_FILE:-"terraform.tfvars"} + +terraform init +terraform apply -var-file=$TF_VAR_FILE \ No newline at end of file diff --git a/terraform/hub/locals.tf b/terraform/hub/locals.tf new file mode 100644 index 0000000..022cecf --- /dev/null +++ b/terraform/hub/locals.tf @@ -0,0 +1,95 @@ +locals { + cluster_info = module.eks + vpc_cidr = "10.0.0.0/16" + azs = slice(data.aws_availability_zones.available.names, 0, 2) + enable_automode = var.enable_automode + use_ack = var.use_ack + enable_efs = var.enable_efs 
+ name = var.cluster_name + environment = var.environment + fleet_member = "control-plane" + tenant = var.tenant + region = data.aws_region.current.id + cluster_version = var.kubernetes_version + argocd_namespace = "argocd" + gitops_addons_repo_url = "https://github.com/${var.git_org_name}/${var.gitops_addons_repo_name}.git" + gitops_fleet_repo_url = "https://github.com/${var.git_org_name}/${var.gitops_fleet_repo_name}.git" + + external_secrets = { + namespace = "external-secrets" + service_account = "external-secrets-sa" + } + + aws_addons = { + enable_external_secrets = try(var.addons.enable_external_secrets, false) + enable_kro_eks_rgs = try(var.addons.enable_kro_eks_rgs, false) + enable_multi_acct = try(var.addons.enable_multi_acct, false) + } + oss_addons = { + } + + addons = merge( + local.aws_addons, + local.oss_addons, + { tenant = local.tenant }, + { fleet_member = local.fleet_member }, + { kubernetes_version = local.cluster_version }, + { aws_cluster_name = local.cluster_info.cluster_name }, + ) + + addons_metadata = merge( + { + aws_cluster_name = local.cluster_info.cluster_name + aws_region = local.region + aws_account_id = data.aws_caller_identity.current.account_id + aws_vpc_id = module.vpc.vpc_id + use_ack = local.use_ack + }, + { + addons_repo_url = local.gitops_addons_repo_url + addons_repo_path = var.gitops_addons_repo_path + addons_repo_basepath = var.gitops_addons_repo_base_path + addons_repo_revision = var.gitops_addons_repo_revision + }, + { + fleet_repo_url = local.gitops_fleet_repo_url + fleet_repo_path = var.gitops_fleet_repo_path + fleet_repo_basepath = var.gitops_fleet_repo_base_path + fleet_repo_revision = var.gitops_fleet_repo_revision + }, + { + external_secrets_namespace = local.external_secrets.namespace + external_secrets_service_account = local.external_secrets.service_account + } + ) + + argocd_apps = { + applicationsets = file("${path.module}/bootstrap/applicationsets.yaml") + } + role_arns = [] + # # Generate dynamic access 
entries for each admin role + admin_access_entries = { + for role_arn in local.role_arns : role_arn => { + principal_arn = role_arn + policy_associations = { + admins = { + policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy" + access_scope = { + type = "cluster" + } + } + } + } + } + + + # Merging dynamic entries with static entries if needed + access_entries = merge({}, local.admin_access_entries) + + tags = { + Blueprint = local.name + GithubRepo = "github.com/gitops-bridge-dev/gitops-bridge" + } +} + + diff --git a/terraform/hub/outputs.tf b/terraform/hub/outputs.tf new file mode 100644 index 0000000..a2cb716 --- /dev/null +++ b/terraform/hub/outputs.tf @@ -0,0 +1,23 @@ +# Output the ACK controller role ARN +output "ack_controller_role_arn" { + description = "ARN of the IAM role for ACK controller" + value = aws_iam_role.ack_controller.arn +} + +# Output the kro controller role ARN +output "kro_controller_role_arn" { + description = "ARN of the IAM role for kro controller" + value = aws_iam_role.kro_controller.arn +} + +# Output the argocd controller role ARN +output "argocd_controller_role_arn" { + description = "ARN of the IAM role for argocd controller" + value = aws_iam_role.argocd_controller.arn +} + +# Output cluster name +output "cluster_name" { + description = "Name of the EKS cluster" + value = module.eks.cluster_name +} \ No newline at end of file diff --git a/terraform/hub/pod-identity.tf b/terraform/hub/pod-identity.tf new file mode 100644 index 0000000..5d58d31 --- /dev/null +++ b/terraform/hub/pod-identity.tf @@ -0,0 +1,34 @@ +################################################################################ +# External Secrets EKS Access +################################################################################ +module "external_secrets_pod_identity" { + count = local.aws_addons.enable_external_secrets ? 
1 : 0 + source = "terraform-aws-modules/eks-pod-identity/aws" + version = "~> 1.4.0" + + name = "external-secrets" + + attach_external_secrets_policy = true + external_secrets_kms_key_arns = ["arn:aws:kms:${local.region}:*:key/${local.cluster_info.cluster_name}/*"] + external_secrets_secrets_manager_arns = ["arn:aws:secretsmanager:${local.region}:*:secret:${local.cluster_info.cluster_name}/*"] + external_secrets_ssm_parameter_arns = ["arn:aws:ssm:${local.region}:*:parameter/${local.cluster_info.cluster_name}/*"] + external_secrets_create_permission = false + attach_custom_policy = true + policy_statements = [ + { + sid = "ecr" + actions = ["ecr:*"] + resources = ["*"] + } + ] + # Pod Identity Associations + associations = { + addon = { + cluster_name = local.cluster_info.cluster_name + namespace = local.external_secrets.namespace + service_account = local.external_secrets.service_account + } + } + + tags = local.tags +} diff --git a/terraform/hub/providers.tf b/terraform/hub/providers.tf new file mode 100644 index 0000000..40d5029 --- /dev/null +++ b/terraform/hub/providers.tf @@ -0,0 +1,39 @@ + +provider "helm" { + # helm provider v3 (pinned in versions.tf) takes kubernetes/exec as attributes, not blocks + kubernetes = { + host = local.cluster_info.cluster_endpoint + cluster_ca_certificate = base64decode(local.cluster_info.cluster_certificate_authority_data) + + exec = { + api_version = "client.authentication.k8s.io/v1beta1" + command = "aws" + # This requires the awscli to be installed locally where Terraform is executed + args = [ + "eks", + "get-token", + "--cluster-name", local.cluster_info.cluster_name, + "--region", local.region + ] + } + } +} + +provider "kubernetes" { + host = local.cluster_info.cluster_endpoint + cluster_ca_certificate = base64decode(local.cluster_info.cluster_certificate_authority_data) + # insecure = true + exec { + api_version = "client.authentication.k8s.io/v1beta1" + command = "aws" + # This requires the awscli to be installed locally where Terraform is executed + args = [ + "eks", + "get-token", + "--cluster-name", 
local.cluster_info.cluster_name, + "--region", local.region + ] + } +} + +provider "aws" { +} diff --git a/terraform/hub/terraform.tfvars b/terraform/hub/terraform.tfvars new file mode 100644 index 0000000..ff40a07 --- /dev/null +++ b/terraform/hub/terraform.tfvars @@ -0,0 +1,19 @@ +vpc_name = "hub-cluster" +kubernetes_version = "1.34" +cluster_name = "hub-cluster" +tenant = "tenant1" + +git_org_name = "XXXXXXXX" # update this if you want to customize the gitops configurations + +gitops_addons_repo_name = "eks-cluster-mgmt" +gitops_addons_repo_base_path = "addons/" +gitops_addons_repo_path = "bootstrap" +gitops_addons_repo_revision = "main" + +gitops_fleet_repo_name = "eks-cluster-mgmt" +gitops_fleet_repo_base_path = "fleet/" +gitops_fleet_repo_path = "bootstrap" +gitops_fleet_repo_revision = "main" + +# AWS Accounts used for demo purposes (cluster1 cluster2) +account_ids = "012345678910 123456789101" # update this with your spoke aws accounts ids diff --git a/terraform/hub/variables.tf b/terraform/hub/variables.tf new file mode 100644 index 0000000..cf681b1 --- /dev/null +++ b/terraform/hub/variables.tf @@ -0,0 +1,142 @@ +variable "vpc_name" { + description = "VPC name to be used by pipelines for data" + type = string +} + +variable "kubernetes_version" { + description = "Kubernetes version" + type = string + default = "1.34" +} + +variable "github_app_credentilas_secret" { + description = "The name of the Secret storing github app credentials" + type = string + default = "" +} + +variable "kms_key_admin_roles" { + description = "list of role ARNs to add to the KMS policy" + type = list(string) + default = [] +} + +variable "addons" { + description = "Kubernetes addons" + type = any + default = { + enable_external_secrets = true + enable_kro_eks_rgs = true + enable_multi_acct = true + } +} + +variable "manifests" { + description = "Kubernetes manifests" + type = any + default = {} +} + +variable "enable_addon_selector" { + description = "select addons using 
cluster selector" + type = bool + default = false +} + +variable "route53_zone_name" { + description = "The route53 zone for external dns" + default = "" +} +# Github Repos Variables + +variable "git_org_name" { + description = "The name of Github organisation" + default = "kro-run" +} + +variable "gitops_addons_repo_name" { + description = "The name of git repo" + default = "kro" +} + +variable "gitops_addons_repo_path" { + description = "The path of addons bootstraps in the repo" + default = "bootstrap" +} + +variable "gitops_addons_repo_base_path" { + description = "The base path of addons in the repo" + default = "examples/aws/eks-cluster-mgmt/addons/" +} + +variable "gitops_addons_repo_revision" { + description = "The name of branch or tag" + default = "main" +} +# Fleet +variable "gitops_fleet_repo_name" { + description = "The name of Git repo" + default = "kro" +} + +variable "gitops_fleet_repo_path" { + description = "The path of fleet bootstraps in the repo" + default = "bootstrap" +} + +variable "gitops_fleet_repo_base_path" { + description = "The base path of fleet in the repo" + default = "examples/aws/eks-cluster-mgmt/fleet/" +} + +variable "gitops_fleet_repo_revision" { + description = "The name of branch or tag" + default = "main" +} + +variable "ackCreate" { + description = "Create Pod Identity and addon-related resources with ACK" + default = false +} + +variable "enable_efs" { + description = "Enabling EFS file system" + type = bool + default = false +} + +variable "enable_automode" { + description = "Enabling Automode Cluster" + type = bool + default = true +} + +variable "cluster_name" { + description = "Name of the cluster" + type = string + default = "hub-cluster" +} + +variable "use_ack" { + description = "Use ACK instead of Terraform for pod identity; when true, these resources are deployed with ACK" + type = bool + default = true +} + +variable "environment" { + description = "Name of the environment for the 
Hub Cluster" + type = string + default = "control-plane" +} + +variable "tenant" { + description = "Name of the tenant for the Hub Cluster" + type = string + default = "control-plane" +} + +variable "account_ids" { + description = "Space-separated list of AWS account IDs that ACK will need to connect to" + type = string + default = "" +} \ No newline at end of file diff --git a/terraform/hub/versions.tf b/terraform/hub/versions.tf new file mode 100644 index 0000000..8a843e5 --- /dev/null +++ b/terraform/hub/versions.tf @@ -0,0 +1,18 @@ +terraform { + required_version = ">= 1.0" + + required_providers { + aws = { + source = "hashicorp/aws" + version = "~> 6.27.0" + } + helm = { + source = "hashicorp/helm" + version = "~> 3.1.1" + } + kubernetes = { + source = "hashicorp/kubernetes" + version = "3.0.1" + } + } +}
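The `install.sh` and `destroy.sh` wrappers above pick the Terraform variable file via bash default-value expansion, so the `TF_VAR_FILE` exported during workspace setup carries through while `terraform.tfvars` remains the fallback. A minimal sketch of that pattern (the `prod.tfvars` name is only illustrative):

```shell
#!/bin/bash
# Fallback pattern used by install.sh/destroy.sh:
# use $TF_VAR_FILE if the caller exported it, else default to terraform.tfvars.
unset TF_VAR_FILE
TF_VAR_FILE=${TF_VAR_FILE:-"terraform.tfvars"}
echo "$TF_VAR_FILE"   # prints: terraform.tfvars

# An exported value wins over the default (prod.tfvars is a hypothetical file name):
TF_VAR_FILE="prod.tfvars"
TF_VAR_FILE=${TF_VAR_FILE:-"terraform.tfvars"}
echo "$TF_VAR_FILE"   # prints: prod.tfvars
```

This is why step 1 of the workspace setup exports `TF_VAR_FILE` once and both wrapper scripts behave consistently without extra flags.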