Migration in progress
Content is currently being migrated from Blogger, so please excuse the chaos. Excerpts and pagination should now be (mostly) correct; a few other usability improvements are still in progress.
-
Working with 20+ node ElastiCache Memcached clusters
ElastiCache Memcached limits the number of nodes in a cluster to 20 by default. This limit can be increased, although AWS recommends against clusters larger than 50 nodes.
-
Extracting S3 bucket sizes using the AWS CLI
A quick one-liner (using bash) for printing out the size, in bytes, of the StandardStorage class of each S3 bucket in your account.
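The post's exact one-liner isn't reproduced here, but one way to sketch the idea is to loop over the buckets and query the CloudWatch BucketSizeBytes metric (which is what reports StandardStorage size per bucket). The date flags below assume GNU date; macOS would need `-v-2d` instead.

```shell
# For each bucket, print the most recent StandardStorage size in bytes,
# as reported by the daily CloudWatch BucketSizeBytes metric.
for bucket in $(aws s3api list-buckets --query 'Buckets[].Name' --output text); do
  size=$(aws cloudwatch get-metric-statistics \
    --namespace AWS/S3 --metric-name BucketSizeBytes \
    --dimensions Name=BucketName,Value="$bucket" Name=StorageType,Value=StandardStorage \
    --start-time "$(date -u -d '2 days ago' +%Y-%m-%dT%H:%M:%S)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%S)" \
    --period 86400 --statistics Average \
    --query 'Datapoints[-1].Average' --output text)
  echo "$bucket: $size"
done
```

Note that the metric is published once a day, so a just-created bucket will show no datapoints.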
-
Understanding EC2 "Up to 10 Gigabit" network performance for R4 instances
This post investigates the network performance of AWS R4 instances with a focus on the "Up to 10 Gigabit" networking expected from the smaller (r4.large - r4.4xlarge) instance types. Before starting, it should be noted that this post is based on observation and as such is prone to imprecision and variance; it is intended as a guide to what can be expected, not a comprehensive or scientific review.
-
AWS CLI - Switching to and from regional EC2 reserved instances
AWS recently announced the availability of regional reserved instances. This post explains how to switch a reservation from AZ-specific to regional (and back) using the AWS CLI.
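The switch described above can be sketched with `aws ec2 modify-reserved-instances`; the reservation ID, instance type, and AZ below are placeholders, and the full post covers the details.

```shell
# Convert an AZ-specific reservation to a regional one by targeting
# Scope=Region (reservation ID and values are placeholders).
aws ec2 modify-reserved-instances \
  --reserved-instances-ids 11111111-2222-3333-4444-555555555555 \
  --target-configurations Scope=Region,InstanceCount=1,InstanceType=m4.large,Platform=EC2-VPC

# To switch back, target a specific Availability Zone instead:
aws ec2 modify-reserved-instances \
  --reserved-instances-ids 11111111-2222-3333-4444-555555555555 \
  --target-configurations AvailabilityZone=us-east-1a,InstanceCount=1,InstanceType=m4.large,Platform=EC2-VPC
```

The modification is asynchronous; `aws ec2 describe-reserved-instances-modifications` reports its progress.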
-
AWS troubleshooting - Lambda deployment package file permissions
When creating your own Lambda deployment packages, be aware of the permissions on the files before zipping them. Lambda requires the files to be readable by all users, particularly "other"; if this is missing you will receive a non-obvious error when trying to call the function. The fix is simple enough: perform a 'chmod a+r *' before creating your zip file. If the code is visible in the inline editor, adding an empty line and saving will also fix the problem, presumably by overwriting the file with the correct permissions.
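The fix can be reproduced locally before any upload; the file name below is just an example handler.

```shell
# Build a minimal package and ensure every file is world-readable
# before zipping (the handler file here is a stand-in).
mkdir -p package
printf 'def handler(event, context):\n    return "ok"\n' > package/lambda_function.py
chmod 600 package/lambda_function.py   # simulate the problem: no read access for "other"
chmod a+r package/lambda_function.py   # the fix: read access for all users
(cd package && zip -q ../function.zip lambda_function.py)
```

Since zip stores the Unix permissions, fixing them before zipping is what matters; re-zipping after a chmod also works.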
-
AWS Tip of the day: Tagging EC2 reserved instances
A quick post pointing out that EC2 reserved instances actually support tagging. This functionality is only available on the command line or via the API, not via the console, but it still allows you to tag your reservations, making it easier to keep track of why a reserved instance was purchased and what component it was intended for. Of course, the reservation itself is not tied to a running instance in any way; it is merely a billing construct applied to any matching instances running in your account. But if you are making architectural changes or considering different instance types for specific workloads or components, the tags allow you (and your team) to see why the reservation was originally purchased. For example, if you are scaling up the instance sizes of a specific component, let's say from m4.large to m4.xlarge, you can check your reserved instance tags and modify the reservations associated with the component to ensure you continue to benefit from the purchase.
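On the command line this comes down to `aws ec2 create-tags` against the reservation ID; the ID and tag values below are placeholders.

```shell
# List active reservations with their IDs and any existing tags:
aws ec2 describe-reserved-instances \
  --filters Name=state,Values=active \
  --query 'ReservedInstances[].{Id:ReservedInstancesId,Type:InstanceType,Tags:Tags}'

# Tag a reservation with the component it was purchased for
# (reservation ID and tag values are placeholders):
aws ec2 create-tags \
  --resources 11111111-2222-3333-4444-555555555555 \
  --tags Key=Component,Value=web-frontend Key=Purchaser,Value=ops-team
```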
-
AWS Tip: Save S3 costs with abort multipart lifecycle policy
S3 multipart uploads provide a number of benefits -- better throughput, recovery from network errors -- and many tools will automatically use multipart uploads for larger files. The AWS CLI cp, mv, and sync commands all make use of multipart uploads, and the documentation notes that "If the process is interrupted by a kill command or system failure, the in-progress multipart upload remains in Amazon S3 and must be cleaned up manually..."
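Rather than cleaning up manually, a lifecycle rule can abort incomplete uploads automatically. A sketch, with the bucket name, rule ID, and seven-day window as placeholder choices:

```shell
# Lifecycle rule: abort any multipart upload still incomplete
# 7 days after it was initiated (values are placeholders).
cat > abort-multipart.json <<'EOF'
{
  "Rules": [
    {
      "ID": "abort-incomplete-multipart",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7}
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-example-bucket \
  --lifecycle-configuration file://abort-multipart.json
```

Aborting the upload deletes the stored parts, which is where the cost saving comes from -- incomplete parts are billed like any other stored data.
-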
Enabling longer AWS IDs in a region using an IAM role
AWS is moving towards longer EC2 and EBS IDs, and you can enable them for an IAM user or at an account level using the
root credentials. You can avoid the root credentials by using an IAM role instead. This is a quick post explaining
the steps needed to use an IAM role on an instance to enable the longer IDs at an account level.
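The account-level switch itself is a single CLI call per resource type; run from an instance whose role allows `ec2:ModifyIdentityIdFormat`. The account ID below is a placeholder, and targeting the account's root principal ARN is what applies the setting account-wide.

```shell
# Enable longer IDs at the account level for instances
# (123456789012 is a placeholder account ID):
aws ec2 modify-identity-id-format \
  --principal-arn arn:aws:iam::123456789012:root \
  --resource instance \
  --use-long-ids

# Repeat for the other resource types that support longer IDs:
aws ec2 modify-identity-id-format --principal-arn arn:aws:iam::123456789012:root --resource reservation --use-long-ids
aws ec2 modify-identity-id-format --principal-arn arn:aws:iam::123456789012:root --resource volume --use-long-ids
aws ec2 modify-identity-id-format --principal-arn arn:aws:iam::123456789012:root --resource snapshot --use-long-ids
```

`aws ec2 describe-identity-id-format --principal-arn arn:aws:iam::123456789012:root` confirms the current setting.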