Have you ever pushed a small change to your static site and spent more time deploying than coding?
You upload files to S3, something breaks, you fix it, and then permissions throw another error. The cycle repeats, and small edits snowball into wasted hours.
You’re not alone!
According to the 2025 State of Software Engineering Excellence report by Harness, 50% of application deployments still rely on manual processes. The Atlassian Developer Experience Report 2025 shows developers lose more than ten hours every week to inefficiencies like these.
The problem isn’t just one missed stylesheet or one broken image. It’s the lack of a reliable, repeatable process.
So how do you overcome it?
In this blog, we’ll look at why static sites need the same deployment discipline as larger applications and how CI/CD can make every update faster, safer, and more consistent.
Why Bother With CI/CD for Static Sites?
It’s common to believe that static sites are too simple to need automation. After all, they’re just HTML, CSS, and a bit of JavaScript. But maintaining them by hand is a different story.
Uploading files manually often leads to missed stylesheets, overwritten assets, or forgotten tweaks. CI/CD removes that risk by turning deployments into a repeatable, reliable process.
- Speed: Every push becomes a release. Instead of manual uploads, your site updates within minutes.
- Consistency: The same process runs every time, so you never wonder why production looks different from staging.
- Lower risk: Automated deployments are smaller and easier to debug than a single large manual update.
- Peace of mind: Once the pipeline is in place, it runs quietly in the background, letting you focus on code.
From Problems to Pipeline
If CI/CD solves the headaches of static site deployment, what does it look like in practice? Before creating buckets or writing YAML, it helps to see the big picture.
Think of the pipeline as a production line. Each time you push code to GitHub, it triggers a chain reaction:
- GitHub Actions starts a workflow.
- The workflow pulls your code, runs checks, and prepares the files.
- If everything passes, it syncs the files to Amazon S3.
- S3 immediately serves the updated site to your visitors.
No uploads to juggle, no files forgotten, no wondering if the site is live.
This works because the two platforms complement each other. GitHub Actions is the automation engine that handles every change the moment it’s pushed. Amazon S3 is the hosting layer, built for scale, reliability, and pay-as-you-go simplicity.
Together, they make deployments effortless. Every push becomes a release, giving you back time to focus on building instead of babysitting.
Before we set up the S3 bucket, let’s run through a few prerequisites to make sure everything is ready.
What You’ll Need
Before we touch AWS or write a line of YAML, make sure you have the basics in place. This keeps the setup smooth and avoids back-and-forth later.
1. Accounts
- GitHub account to host your repo and run Actions
- AWS account with permission to use S3 and IAM
2. Skills You’ll Use
- Basic HTML, CSS, JavaScript
- Comfort with Git for cloning, committing, and pushing
3. Local Setup
- Git installed on your machine
- AWS CLI is optional but helpful for quick checks and local testing
4. Repo Readiness
A simple, tidy structure works best:
/
├── index.html
├── css/
├── js/
├── assets/
└── README.md
5. What To Have Handy
- A bucket name you plan to use on S3
- A region you prefer for hosting
- We will add AWS credentials as GitHub Secrets during setup
That’s all you need. With the groundwork set, we can move straight to hosting.
Step 1: Setting Up Your S3 Bucket
The first building block of the pipeline is an Amazon S3 bucket, the public folder where your entire site will live. S3 is perfect for this because it’s simple, cost-effective, and scales automatically. Whether you get ten visitors or ten thousand, it keeps serving files without extra effort.
1.1 Create the Bucket
- Log in to the AWS Management Console and head over to S3.
- Click Create bucket.
- Give it a globally unique name (e.g., my-blog-site-prod). Bucket names are global, so you may need a few tries.
- Choose the AWS region closest to your audience. Hosting in the right region can cut milliseconds off load times.
- Uncheck Block all public access so visitors can reach your site (AWS will ask you to acknowledge this choice).
1.2 Enable Static Website Hosting
- Open your new bucket and go to Properties → Static website hosting.
- Enable hosting and point both the Index document and Error document to index.html.
This way, even if someone lands on the wrong URL, they’ll still get served the same entry point instead of an ugly “Access Denied” page.
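If you prefer the terminal, the same two steps can be sketched with the AWS CLI. The bucket name and region below are placeholders; substitute your own:

```shell
# Create the bucket (non-default regions need a LocationConstraint)
aws s3api create-bucket \
  --bucket my-blog-site-prod \
  --region ap-south-1 \
  --create-bucket-configuration LocationConstraint=ap-south-1

# Allow public access at the bucket level (mirrors unchecking the console box)
aws s3api put-public-access-block \
  --bucket my-blog-site-prod \
  --public-access-block-configuration \
  BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false

# Enable static website hosting with index.html as both entry points
aws s3 website s3://my-blog-site-prod/ \
  --index-document index.html --error-document index.html
```

Either route ends in the same place; the CLI version is just easier to script and repeat.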
1.3 Add a Bucket Policy
Right now, your bucket exists, but it’s still private. To actually serve web pages, you need to grant public read-only access. Attach this policy (don’t forget to replace your-bucket-name):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
This ensures that anyone can fetch your files in a browser, but nobody can modify or delete them. It is the right balance between openness for visitors and control for you.
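Rather than pasting the policy into the console, you can generate and validate it locally first. A small sketch, assuming your bucket name is stored in a shell variable:

```shell
# Bucket name is a placeholder; substitute your own.
BUCKET=your-bucket-name

# Write the public-read policy to policy.json
cat > policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::${BUCKET}/*"
    }
  ]
}
EOF

# Sanity-check the JSON before sending it to AWS
python3 -m json.tool policy.json > /dev/null && echo "policy.json is valid"
```

Once the file checks out, aws s3api put-bucket-policy --bucket "$BUCKET" --policy file://policy.json attaches it in one call.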
Once this is done, you’ll already have a working static site host. Drop an index.html into the bucket, and you could technically share it right away. But our goal isn’t to upload files manually; it is to make GitHub Actions handle this for us automatically.
That’s where IAM credentials come in, which we’ll set up next.
Step 2: Creating IAM Credentials
Now that your S3 bucket is ready, the next question is: how will GitHub Actions talk to it? Remember, your GitHub workflow will run on GitHub’s servers, not on your machine. That means it needs a way to authenticate with AWS securely.
This is where IAM (Identity and Access Management) comes into play. Instead of giving GitHub your root AWS credentials (never do this), you’ll create a dedicated IAM user with just the right permissions for deploying files to S3.
2.1 Create a Custom IAM Policy
Start by defining what this user should be able to do. In our case, the workflow only needs to upload, read, list, and delete files inside your S3 bucket.
- Go to IAM → Policies → Create policy.
- Choose JSON and paste the following (replace your-bucket-name):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::your-bucket-name",
        "arn:aws:s3:::your-bucket-name/*"
      ]
    }
  ]
}
This gives GitHub just enough permission to deploy files to your bucket and nothing more.
2.2 Create an IAM User
- Go to IAM → Users → Add user.
- Give it a descriptive name like github-actions-deployer.
- Select Programmatic access (this generates an access key and secret key).
- Attach the custom policy you just created.
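The console steps above can also be scripted with the AWS CLI. This is a sketch; the policy name, file name, and account ID are placeholders you would replace with your own:

```shell
# Create the custom policy from the JSON in section 2.1
aws iam create-policy \
  --policy-name github-actions-s3-deploy \
  --policy-document file://iam-policy.json

# Create the deploy user and attach the policy (replace the account ID)
aws iam create-user --user-name github-actions-deployer
aws iam attach-user-policy \
  --user-name github-actions-deployer \
  --policy-arn arn:aws:iam::123456789012:policy/github-actions-s3-deploy

# Generate the access key pair (shown only once, same as in the console)
aws iam create-access-key --user-name github-actions-deployer
```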
2.3 Save the Access Keys Securely
Once the user is created, AWS will show you an Access Key ID and a Secret Access Key. These two values are the credentials GitHub will use during deployments.
⚠️ Important: You’ll only see the Secret Key once. Copy it somewhere safe right now; you’ll need it in the next step.
At this stage, GitHub Actions doesn’t know about these credentials yet, but you’ve prepared a secure bridge between GitHub and AWS.
Next, we’ll move into your GitHub repository and store these credentials as secrets so the workflow can use them safely.
Step 3: Configuring Your GitHub Repository
With your S3 bucket ready and IAM credentials in hand, the next step is to configure your GitHub repo to securely communicate with AWS during deployments.
3.1 Prepare Your Repository
If you already have a repository for your static site, you’re set. If not:
- Create a new repo on GitHub.
- Add your static site files (HTML, CSS, JS, assets).
- Commit and push them to the main branch.
Tip: Keep your repo clean. Ignore unnecessary files like .vscode, .DS_Store, or build logs. They do not belong in production.
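A minimal .gitignore along those lines might look like this (the entries are suggestions; adjust them to whatever your project actually generates):

```
.vscode/
.DS_Store
*.log
```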
3.2 Add Secrets for AWS Access
Now we need to store the credentials created in Step 2. GitHub has a built-in way to manage these securely, called Secrets.
- In your repository, go to Settings → Secrets and variables → Actions.
- Click New repository secret and add the following:
- AWS_ACCESS_KEY_ID → your IAM user’s access key ID
- AWS_SECRET_ACCESS_KEY → your IAM user’s secret key
- S3_BUCKET_NAME → the exact name of your S3 bucket
These values will be injected into your GitHub Actions workflow at runtime. They stay encrypted and never appear in logs, so it’s safe to store them here.
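If you have the GitHub CLI installed and authenticated against your repo, the same secrets can be set from the terminal. The values below are placeholders:

```shell
# gh reads the repo from the current directory; values are placeholders
gh secret set AWS_ACCESS_KEY_ID --body "AKIA..."
gh secret set AWS_SECRET_ACCESS_KEY --body "your-secret-key"
gh secret set S3_BUCKET_NAME --body "my-blog-site-prod"
```

This is handy when you rotate keys later: one script updates all three in seconds.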
3.3 Confirm the Setup
At this point, your repo is more than just code. It now has the credentials and configuration needed for automated deployments. We haven’t written the workflow yet, but all the building blocks are in place.
With this, GitHub is ready to take charge of deployments.
Next, we’ll write the GitHub Actions workflow file that ties everything together.
Step 4: Writing the GitHub Actions Workflow
We’ve got the bucket, we’ve got the credentials, and we’ve wired them into GitHub. Now comes the fun part: telling GitHub what to do whenever you push code. That’s what a GitHub Actions workflow is for.
A workflow is essentially a YAML-based instruction file. It defines when the pipeline should run and what steps it should execute. In our case, the job is simple: grab the code, configure AWS, and deploy it to S3.
4.1 Create the Workflow File
In your repository, create a new folder and file:
.github/workflows/deploy-to-s3.yml
This is where all your deployment logic will live.
4.2 Add the Workflow Definition
Paste the following into the file:
name: Deploy Static Website to S3

on:
  push:
    branches: [ main ]
  workflow_dispatch: # Allows manual triggering

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Update deployment timestamp
        run: |
          echo "// Deployed on $(date)" >> js/script.js
          echo "// Run ID: ${{ github.run_id }}" >> js/script.js

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ap-south-1

      - name: Deploy to S3
        run: |
          aws s3 sync ./ s3://${{ secrets.S3_BUCKET_NAME }}/ \
            --delete \
            --exclude ".git/*" \
            --exclude ".github/*" \
            --exclude "README.md"
4.3 Breaking It Down
- Trigger: The workflow runs whenever code is pushed to the main branch. The workflow_dispatch option also lets you trigger it manually.
- Checkout: Pulls your repo into the runner environment so GitHub Actions has the latest files.
- Timestamp update (optional): Appends a deployment timestamp to js/script.js so you can confirm that deployments are fresh.
- AWS credentials: Pulled from the GitHub Secrets you configured in Step 3.
- Deploy to S3: The aws s3 sync command uploads your files to the bucket, deleting anything that no longer exists in the repo.
4.4 Why This Workflow Works
This setup is minimal but effective. It doesn’t overcomplicate things with build pipelines (since we’re dealing with static sites), yet it gives you a consistent, automated way to deploy. And because it’s just YAML, it’s easy to tweak or extend later. For example, you could add tests or integrate with CloudFront for global caching.
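As one example of such a tweak, the sync step can set browser cache headers as it uploads. This is a sketch using standard aws s3 sync flags; the bucket name and max-age value are placeholders:

```shell
# Same sync as in the workflow, but telling browsers to cache files for 1 hour
aws s3 sync ./ s3://your-bucket-name/ \
  --delete \
  --exclude ".git/*" --exclude ".github/*" --exclude "README.md" \
  --cache-control "max-age=3600"
```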
With the workflow in place, we now have a complete pipeline defined.
Next, let’s put it to the test and actually deploy the site.
Step 5: Testing the Pipeline
You have the bucket, the credentials, and the workflow. Time to see it work.
5.1 Run Your First Deployment
- Commit and push a small change to the main branch, for example some updated text in index.html. (Even without an edit of your own, the timestamp step will produce a change.)
- Go to your repo and open the Actions tab.
- You should see a new run for Deploy Static Website to S3. Its status moves from Queued to In Progress to Success.
Tip: You can also trigger it manually using Run workflow since we enabled workflow_dispatch.
5.2 Verify the Website
- Open the S3 bucket in the AWS Console.
- In Properties, find Static website hosting and copy the Website endpoint shown there.
- Visit that URL in your browser.
- You should see your latest changes live.
If you do not see the change right away, hard refresh your browser or try a private window.
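You can also verify from the terminal. The endpoint below follows the common website-endpoint pattern, but the exact hostname varies by region, so copy yours from the console:

```shell
# Check that the site responds (endpoint hostname is a placeholder)
curl -I http://my-blog-site-prod.s3-website.ap-south-1.amazonaws.com/

# List what actually landed in the bucket after the sync
aws s3 ls s3://my-blog-site-prod/ --recursive
```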
5.3 Read the Logs Like a Pro
- Open the latest workflow run in Actions.
- Expand steps to see exact commands and outputs.
- The Deploy to S3 step shows which files synced, added, or deleted.
This is your single source of truth for what went live.
5.4 Quick Sanity Checks
- Confirm that index.html exists in the bucket’s root.
- Confirm that your bucket policy allows public reads.
- Confirm that your region in the workflow matches the bucket region.
- Confirm that your secrets are spelled correctly in GitHub.
5.5 Make One More Change
- Do one more small edit and push again.
- You should see the pipeline run and your site update.
- Once you see this loop working, you are done with manual uploads for good.
Troubleshooting & Common Issues
Even with everything set up correctly, it is normal to hit a snag on the first run. Here are some common issues you might run into, and how to fix them quickly.
Access Denied Errors
If your workflow fails with “Access Denied”:
- Double-check that your IAM policy allows s3:PutObject, s3:GetObject, s3:ListBucket, and s3:DeleteObject.
- Verify that your bucket policy matches your bucket name exactly (typos in ARN paths are common culprits).
- Make sure your GitHub secrets (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and S3_BUCKET_NAME) are spelled correctly and saved.
Files Not Updating
If your site doesn’t show the latest changes:
- Confirm you pushed to the main branch (the one your workflow is watching).
- Check the workflow logs under Actions for errors in the deploy step.
- Try a hard refresh in your browser or clear the cache — S3 often serves cached versions of files.
Workflow Not Triggering
If nothing happens after pushing code:
- Ensure the workflow file is inside .github/workflows/ with a .yml extension.
- Check that the workflow’s on: section is watching the correct branch (main).
- Run the workflow manually using workflow_dispatch to confirm it executes.
Wrong Region
If the deploy step fails with region errors:
- Confirm that the aws-region in your workflow matches the region where you created your bucket.
- You can find the region in the bucket’s Properties in the AWS Console.
Debugging Like a Pro
- Expand each workflow step in GitHub Actions to review logs and see which commands ran and their outputs.
- If needed, confirm the bucket contents locally with aws s3 ls s3://your-bucket-name, or add temporary echo statements in the workflow to debug environment variables.
Conclusion & Next Steps
With your pipeline live, static site deployment is no longer a manual chore. Every commit you push to GitHub is now a release to production. No uploads, no guesswork, no late-night worries about missing files.
The real value goes beyond convenience. It is about confidence that your workflow is consistent, secure, and repeatable. You can take it further by adding automated tests, plugging into Amazon CloudFront for global delivery, or connecting Route 53 to serve your custom domain.
At Coditas, we help teams move from quick fixes to cloud native practices that scale. From lightweight pipelines like this to enterprise-grade AWS architectures with observability and GenAI integration, we build setups that last. If you are ready to take your cloud to the next level, let’s talk.
References:
- Harness, 2025 State of Software Engineering Excellence report: https://www.prnewswire.com/news-releases/new-report-reveals-alarming-state-of-software-engineering-excellence-most-organizations-failing-to-deliver-on-developer-experience-and-devops-maturity-302483517.html
- Atlassian Developer Experience Report 2025: https://www.atlassian.com/blog/developer/developer-experience-report-2025