Optimizing development to eliminate daily updates of SSO variables
In my recent foray into code development, I found myself grappling with the intricacies of connecting to various AWS services.
The accounts in question were configured with AWS IAM Identity Center, adding an extra layer of complexity to the connectivity puzzle.
There was a daily process that the developers needed to follow to update the .env file. The reason? They had to log in through Single Sign-On (SSO) to procure the required values and manually insert them into the file.
Cut-and-Paste problem
In some cases, that can be a cumbersome, error-prone, and time-consuming task for the development team.
Recognizing the need for a more streamlined and efficient approach, I set out to explore solutions that would alleviate the daily burden of recreating the .env file. Enter a transformative journey towards optimizing AWS IAM integration and revolutionizing the Docker image build process.
Requirements
The .env file in question had the AWS environment variables in it, as shown below:

APP_API_URL="http://127.0.0.1:3000"
AWS_REGION="us-west-2"
DB_TLS_FILE=""
NODE_ENV="dev"
AWS_ACCESS_KEY_ID=<access>
AWS_SECRET_ACCESS_KEY=<secret>
AWS_SESSION_TOKEN=<token>
Manual Process
In their daily routine, developers encountered a manual process for updating the .env file, involving several steps:
- Navigate to the SSO login page and complete the login process, including redirection to the SSO provider and entering username, password, and MFA.
- Click on the account to view the accessible roles.
- Choose the required role.
- Access “Command line or programmatic access” to get the secrets.
- Copy the secrets, either individually or from the options displayed in the popup.
- Edit the .env file with the acquired values.
Given the repetitive nature of this task and the cut-and-paste actions involved, it became apparent that this process could lead to rework and challenges for the developers.
Automate all the things
To address this, I aimed to automate the retrieval of credentials displayed in the “Command line or programmatic access” screen. I started to do some Googling.
After a bit of that, I came across a helpful AWS Knowledge Center article: How do I use the AWS CLI to get temporary credentials for an IAM Identity Center user?
Reading it helped me see what I needed to do, but there were a few gaps the article didn’t address: it didn’t explain how to identify the specific file I needed to parse, or how that file tied back to the AWS profile I had logged in with.
Mind the gaps
To bridge these gaps, I began crafting a Bash script, starting with the essential step of verifying an active user session for the required profile.
In my specific case, I needed access to three distinct accounts (development, staging, and production), each configured with a separate AWS CLI profile.
First, I introduced a variable named LANE to facilitate the selection process, and used it in a case statement to set AWS_PROFILE to the correct value.

#!/usr/bin/env bash

LANE=${1:-dev}
AWS_REGION=us-west-2

# case statement on the LANE variable sets the AWS_PROFILE variable
case $LANE in
  stage)
    AWS_PROFILE=accuweaver-stg
    ;;
  prod)
    AWS_PROFILE=accuweaver-prd
    ;;
  dev)
    AWS_PROFILE=accuweaver-dev
    ;;
  *)
    echo "Invalid LANE: ${LANE}"
    echo "Valid LANE values are: stage, prod, dev"
    exit 1
    ;;
esac
When running the script, the LANE variable gets set to the first argument to the script, or to dev if there is no argument. In this example, the profile (AWS_PROFILE) gets set to accuweaver-dev.
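To make the lane handling concrete, here is how the script might be invoked; the file name update-env.sh is a placeholder of mine, not from the original post:

# no argument: LANE defaults to dev, so AWS_PROFILE becomes accuweaver-dev
./update-env.sh

# explicit lane: LANE is set to stage, so AWS_PROFILE becomes accuweaver-stg
./update-env.sh stage

# any other value prints the usage message and exits with status 1
./update-env.sh qa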
Verify login
Next, I needed to test that the profile was logged in, which is most easily done with aws sts get-caller-identity --profile ${AWS_PROFILE}:

account_id=$(aws sts get-caller-identity --query Account --output text --profile "${AWS_PROFILE}")

if [ -z "$account_id" ]; then
  echo "You must be logged in to AWS before running this script"
  if aws sso login --profile "${AWS_PROFILE}"; then
    account_id=$(aws sts get-caller-identity --query Account --output text --profile "${AWS_PROFILE}")
  fi
fi

echo "Account ID: $account_id"
Running this part of the script gets the account ID from the get-caller-identity output. If it is empty, we try to log in with the profile again.
Note that in the above, there is an assumption that the login succeeded.
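Since the script otherwise just assumes the login worked, a small guard can make it fail fast instead; this is an extra sketch of mine, not part of the original script:

# abort if we still could not determine the account after attempting login
if [ -z "$account_id" ]; then
  echo "Login failed or was cancelled for profile ${AWS_PROFILE}; aborting"
  exit 1
fi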
Find the accessToken
After verifying the login, I had to figure out how to grab the credentials. The AWS article describes finding a file in the ~/.aws/sso/cache folder.
Continuing my search, I looked in the folder on my computer. Much to my surprise, there were about a dozen files in the folder, and not all of them had the accessToken element the article says you need.
Noodling the cache
After seeing the myriad of files, I spent some time trying to figure out the naming convention. First, I removed all the files from the cache folder. Second, I logged in again with one of my profiles to see what files were created there.
This brought me to the conclusion that there are two files created by logging in.
It appears that the first file is created for each IAM Identity Center start page you log into, and the second is for the role you have currently requested. It’s that second one you need, as it is the only one with the accessToken in it.
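A quick way to see this for yourself is to print the top-level keys of each cached file; this is just an inspection one-liner I’m adding for illustration, not part of the final script:

# list the top-level JSON keys of every file in the SSO cache
for f in ~/.aws/sso/cache/*.json; do
  echo "$f: $(jq -r 'keys | join(", ")' "$f")"
done

Only the file for the current session shows an accessToken key in its output.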
Read the file and extract variables
This piece of the script assumes that only one IAM Identity Center instance is being used. The next thing I needed was to find the right file and extract the accessToken, which was a simple grep and some jq. Repeating the extraction for the region completes this part of the code.
# find the json file in ~/.aws/sso/cache that has the accessToken value
sso_cache_file=$(grep -l accessToken ~/.aws/sso/cache/*.json)

# get the accessToken value from the json file
access_token=$(jq -r '.accessToken' "$sso_cache_file")

# get the region value from the json file
region=$(jq -r '.region' "$sso_cache_file")
The next step is to extract the permission set name. That is done by using the Arn from the get-caller-identity payload.
The logic splits the ARN on specific characters and extracts specific pieces:
- First, the ARN is split by : to get the resource portion (field 6)
- Next, that is split by / to get the Role ID (field 2)
- The Role ID is split by _ to get the actual name (field 2)
Note that the above works in my case because the permission set names are default AWS ones that follow this format. I haven’t tried this with any other permission set mappings.
# get the role ID from the session using aws sts get-caller-identity --profile "${AWS_PROFILE}"
role_id=$(aws sts get-caller-identity --profile "${AWS_PROFILE}" | jq -r '.Arn' | cut -d: -f6 | cut -d/ -f2 | cut -d_ -f2)
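To illustrate what those cut commands are doing, here is the chain applied to a made-up ARN of the shape get-caller-identity returns for an SSO session; the account ID, permission set name, suffix, and user name are all invented for the example:

# a made-up ARN of the shape returned for an SSO-assumed role
arn="arn:aws:sts::111122223333:assumed-role/AWSReservedSSO_PowerUserAccess_0123456789abcdef/robert"

echo "$arn" | cut -d: -f6
# -> assumed-role/AWSReservedSSO_PowerUserAccess_0123456789abcdef/robert

echo "$arn" | cut -d: -f6 | cut -d/ -f2
# -> AWSReservedSSO_PowerUserAccess_0123456789abcdef

echo "$arn" | cut -d: -f6 | cut -d/ -f2 | cut -d_ -f2
# -> PowerUserAccess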
After getting those, I have the accessToken, role name, and region, so I can run the command to get the keys that will be placed into the .env file:
# Get role credentials for the specified LANE
role_credentials=$(aws sso get-role-credentials \
  --account-id "$account_id" \
  --role-name "${role_id}" \
  --access-token "${access_token}" \
  --region "${region}" \
  --profile "${AWS_PROFILE}" | jq -r '.roleCredentials')

AWS_ACCESS_KEY_ID=$(echo "${role_credentials}" | jq -r '.accessKeyId')
AWS_SECRET_ACCESS_KEY=$(echo "${role_credentials}" | jq -r '.secretAccessKey')
AWS_SESSION_TOKEN=$(echo "${role_credentials}" | jq -r '.sessionToken')
The aws sso get-role-credentials output is piped through jq to pull out the roleCredentials object, which contains the AWS environment variable values we’ll need.
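For reference, the roleCredentials object that jq is reading has, as I understand the CLI output, this general shape; the values below are made up:

# roleCredentials, as returned by aws sso get-role-credentials (values invented)
# {
#   "accessKeyId": "ASIAEXAMPLE",
#   "secretAccessKey": "exampleSecretKey",
#   "sessionToken": "exampleSessionToken",
#   "expiration": 1700000000000
# }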
Write the .env file
The final step in our journey is to write those values to the .env file, which is simply a cat with a heredoc:
# write out the .env file
cat << EOF > .env
APP_API_URL="http://127.0.0.1:3000"
AWS_REGION="us-west-2"
DB_TLS_FILE=""
NODE_ENV="dev"
AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
AWS_SESSION_TOKEN=${AWS_SESSION_TOKEN}
EOF
The script results in a new .env file with the current values. It can be run each day to make sure your .env file is up to date.
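If you also want those fresh values exported into your current shell session (not just written to the file), one option is to source the generated .env with allexport turned on; this is an optional convenience I’m adding here, not part of the original workflow:

# load the freshly written .env into the current shell
set -a       # export every variable assigned while allexport is on
source .env
set +a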
Conclusion
This approach laid the groundwork for a more efficient and automated workflow, minimizing the need for daily manual interventions and ensuring a smoother development experience.