Cloud Resume Challenge Part 1 - Deploying a Website

Welcome to Part 1 of my attempts at blogging my experience doing the Cloud Resume Challenge!

If you’re not familiar with the Cloud Resume Challenge, it is a starter project for those interested in getting their feet wet in public clouds like AWS or Azure. It uses building a resume as the foundation for practicing skills like programming, automation, and cloud administration.

I stumbled across it while looking for example projects that might give me an excuse to use AWS. I have experience with tech similar to what the challenge uses, so I will likely go a little further into some of the goals and spend more time working on them.

My blog will be broken up into two pieces rather than one long brain dump, as this is a side project and it will be very easy for me to forget my experiences at each step along the way. This first post will be about what I’ve learned deploying a static website to AWS, and the follow-up will cover the rest of the challenge, primarily the backend code and automation.

I did skip the certification for the time being; I took the training about two years ago and just want to get my hands dirty with some engineering. So I won’t be including that in my challenge walkthrough.

Without further ado, let’s get to learning!

Writing (and styling) a resume

I’ll be honest: writing pure HTML is not what I would consider fun. So I decided to make things even more complicated and include Markdown and Go templates: enter Hugo, the static site generator.

Using Hugo and a theme called Hermit, I started off with a simple landing page with links to some social media pages. The original theme was a dull grey, so I modified the _predefined.scss file to include a different color scheme.

The theme makes it easy enough to write separate pages in Markdown that Hugo converts into HTML. I started off with plain Markdown for my resume, but saw a lot of repetitive boilerplate and decided it would be a good opportunity to learn Hugo’s ‘shortcode’ mechanism. Shortcodes are snippets that can be called from content files using Hugo’s templating language (which is just Go templates), though it took some reading through Hugo’s documentation to learn how to call certain functions and format the shortcodes.

A good example from my resume is the ‘Work Experience’ section. I wanted the same HTML formatting for each block, so I used a shortcode to generate the headers; whenever a new job is added, it’s easy enough to call the shortcode with a few parameters and get the proper format.

<div class="resume-section">
  <div class="flex-container resume-line">
    <h3>{{ .Get "title" }}</h3>
    {{ $endDate := .Get "endDate" }}
    <span>{{ .Get "startDate" }} - {{ replace $endDate " " "&nbsp;" | safeHTML }}</span>
  </div>
  <i>{{ .Get "position" }}</i>
</div>
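
Calling it from the resume’s Markdown then looks something like this (assuming the snippet is saved as layouts/shortcodes/job.html — the name and values here are placeholders, not my actual entries):

{{< job title="Acme Widgets" position="Systems Engineer" startDate="Jan 2020" endDate="May 2022" >}}

The replace call in the span swaps the spaces in the end date for &nbsp; so the date never wraps onto its own line, and safeHTML keeps Hugo from escaping the entity.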

It did take some time to learn where to put certain files, like base page templates and shortcodes, so that they were used instead of the Hermit theme’s versions. Hugo’s documentation is thorough, and did a great job of explaining the order of precedence it follows when looking up templates.

I did create some custom CSS to give the resume better spacing and flexible containers. Flex containers and grids are a pain, so I didn’t fight too hard to make the pages perfect.

Deploying on S3

Hugo generates a public directory whose contents can be dropped directly into an S3 bucket for use as a static site. I named my bucket jonathandefreeuw.com since I had already purchased the domain, and configured the bucket to host a static website, which was fairly straightforward. It was available at the time at http://jonathandefreeuw.com.s3-website-us-east-1.amazonaws.com/. I did have to configure Hugo to use that URL as the baseURL, so that all content was referenced properly instead of pointing at localhost.
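
Roughly, that setup boils down to a few commands (the 404.html error document is a placeholder, not necessarily what I configured):

# Enable static website hosting on the bucket
aws s3 website s3://jonathandefreeuw.com --index-document index.html --error-document 404.html

# Build with the website endpoint as the baseURL, then upload the results
hugo --baseURL "http://jonathandefreeuw.com.s3-website-us-east-1.amazonaws.com/"
aws s3 sync public/ s3://jonathandefreeuw.com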

Configuring Route53 and CloudFront

I purchased my domain and created a hosted zone in Route53. This gave me four name servers to enter at my domain registrar so that, when someone looks up jonathandefreeuw.com, DNS queries are properly forwarded to Route53.
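
A quick sanity check that the delegation works (just a sketch, assuming dig is available):

# Should return the four awsdns-* name servers listed in the hosted zone
dig NS jonathandefreeuw.com +short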

With Route53 configured, I created a distribution in CloudFront and provisioned an SSL certificate to enable HTTPS on my website. It took me a bit longer than I’d like to admit to understand the order of registrar->Route53->CloudFront->S3, but after reading through lots of documentation, I have a better understanding of why each component points to the next.
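
The Route53 link in that chain is an alias record pointing the apex domain at the distribution. A sketch with the AWS CLI (dxxxxxxxx.cloudfront.net and <MY_ZONE_ID> are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS uses for all CloudFront alias targets):

aws route53 change-resource-record-sets --hosted-zone-id <MY_ZONE_ID> --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "jonathandefreeuw.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "dxxxxxxxx.cloudfront.net",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'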

CloudFront caching took an extra chunk of time to understand. Making sure the right cache policy was attached (Managed-CachingOptimized) was key, but the Cache Statistics tab in CloudFront still wasn’t matching what I expected. I’m assuming browser caching had a lot to do with it: if the browser has already cached the static files, there isn’t a need for the request to hit the CloudFront cache at all.
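
One way to take the browser out of the loop is to request the site with curl and read the headers CloudFront attaches:

# X-Cache reports "Hit from cloudfront" or "Miss from cloudfront" for each request
curl -sI https://jonathandefreeuw.com/ | grep -i x-cache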

Frontend CI/CD with GitHub Actions

I use GitLab Pipelines often in my current role, and really enjoy the automation side of building and deploying software. Templatizing (if that’s a word) and describing deployments as code is fun to me, so I skipped ahead on the CRC checklist and wanted to take a crack at learning GitHub Actions.

First I needed to create the proper IAM permissions to allow a third-party application to make modifications to both S3 and CloudFront. IAM is a part of AWS that I feel I need more time with; there are so many different permissions and combinations possible that I barely scratched the surface with this small project.
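
For illustration, the deploy credentials need a policy shaped roughly like this (a sketch, not necessarily the exact policy I landed on):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SyncSite",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::jonathandefreeuw.com",
        "arn:aws:s3:::jonathandefreeuw.com/*"
      ]
    },
    {
      "Sid": "InvalidateCache",
      "Effect": "Allow",
      "Action": "cloudfront:CreateInvalidation",
      "Resource": "*"
    }
  ]
}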

Adding a token to GitHub allowed me to create an Action that could deploy files to S3. I did forget that GitHub Actions only clones the repo itself, not any submodules, so I had to include a step in the job to download my fork of the Hermit theme.
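
For reference, the stock checkout action can handle that in one step (a sketch — the version tag is just an example):

- name: Checkout repo including the theme submodule
  uses: actions/checkout@v3
  with:
    submodules: true  # also clone the Hermit fork registered as a git submodule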

To reduce the number of files getting synced to S3, I included the --size-only flag on the aws s3 sync command so that only files whose size has changed get pushed. I realize that if a file is edited but ends up the same number of bytes (like deleting a comment but adding code equal to it in size), that file won’t get pushed, but that feels like a rare edge case I shouldn’t be concerned with.

I made invalidating the CloudFront cache a bit more complicated than it probably needed to be. Rather than just clearing the entire cache (which could require a lot of API calls to CloudFront), I decided to use the output of the aws s3 sync command to determine the specific URLs that needed to be invalidated. This required some grep and sed parsing to extract file paths from the sync output and format them the way CloudFront wants to see them. Because CloudFront invalidations take file paths, not full URLs, I needed to strip the domain name and trailing index.html from each uploaded file.

- name: Deploy site to S3
  run: |
    aws s3 sync ${{ github.workspace }}/public/ s3://jonathandefreeuw.com --delete --size-only > .sync.output

- name: Get changed files from S3 Sync
  run: |
    grep -e "s3://jonathandefreeuw.com.*" -o .sync.output | cut -f 4- -d '/' | sed 's/^/\//' | sed 's/index\.html//' > .changed.output