AWS Elastic File System Unleashed

I've been anxiously awaiting the release of Amazon's Elastic File System (EFS). This new type of file system allows multiple EC2 instances to mount the same cloud file system concurrently. The beta version has been available for a while, but only on the west coast. I received the announcement that EFS was live on us-east-1 at 6:01 this morning, and I couldn't wait to get started. So far setting things up has been an easy process, although the AWS documentation is a bit scattered and incomplete in some areas. I'll go over the steps I took using the aws cli to set up my first AWS EFS test.
 

What Is The Elastic File System?

EFS allows multiple EC2 instances to read and write to the same file system as though it were an NFS server. In layman’s terms EFS is simply a cloud NFS server. At work we have code and reference libraries that until now needed to be copied to each EC2 instance. This takes time, costs money, and is just downright wasteful. So along comes the EFS file system to solve all that. You set it up and dozens of instances can now all connect to the same NFS share, just as they do on your own in-house networks.

There are a few caveats. EFS uses the NFSv4 protocol, which needs to be tuned properly for best performance. The file system does not have all the ACL abilities or Kerberos authentication options of a typical NFSv4 file system. However, you do have IAM security, VPCs, and security groups to control access and restrict permissions if necessary. So far I haven't found the limitations of AWS EFS to be detrimental.
 

Getting Started

First off, you need to create the file system. Actually, you're just creating the address of the file system; we won't be adding any files or configuring permissions yet. The only requirement here is a unique name to assign to your file system. You also need to be working with a role that has permissions to create EFS shares; I'm using my administrator role for this test. I'm calling my file system 'TestEFS.' If your AWS credentials file includes a default region you don't need to specify it in the command. Here's a link to the aws cli efs tools if you need more information or additional options.
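Something along these lines does it; the creation token is just the unique name you picked, and the region flag is optional if your credentials file already sets it:

aws efs create-file-system --creation-token TestEFS --region us-east-1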
 

 
Here's the response back from AWS; it all looks good. Make a note of the FileSystemId, as it will be used quite a bit.
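Trimmed down, with placeholder IDs and timestamps (yours will differ), it looks something like:

{
    "FileSystemId": "fs-12345678",
    "OwnerId": "123456789012",
    "CreationToken": "TestEFS",
    "CreationTime": 1466064060.0,
    "LifeCycleState": "creating",
    "NumberOfMountTargets": 0,
    "SizeInBytes": {
        "Value": 6144
    }
}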

 

Tag the File System

Tagging is not a requirement, but tagging your resources makes things much easier down the road when you're trying to identify them. It also helps when purchasing wants you to help them reconcile the AWS bill three months after you've torn all this down. Besides, it only takes a second. We use the FileSystemId from above, and again you don't need the region if it's set in your credentials file:
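Using the placeholder FileSystemId from above, the command looks something like:

aws efs create-tags --file-system-id fs-12345678 --tags Key=Name,Value=TestEFS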

 
Always check the result to be sure you didn't make a typo:
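A describe call does the trick (same placeholder FileSystemId):

aws efs describe-tags --file-system-id fs-12345678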

And the response back showing all is well:
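Something like this, with your own tag values:

{
    "Tags": [
        {
            "Key": "Name",
            "Value": "TestEFS"
        }
    ]
}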

 

Create the Mount Target

Now that the file system has been created and tagged, we're ready to set it up so it can be mounted by an EC2 instance and written to. To create the mount target you'll need the subnet ID and security group ID where your EC2 instances will be launched. The file system can only have mount targets in one VPC, so keep that in mind when you're deciding where you want to access the file system from. It is possible to move the EFS file system from one VPC to another; you simply need to delete all the mount targets from the first VPC before moving to the new VPC.
 
Here’s the command:
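The subnet and security group IDs below are placeholders; substitute your own:

aws efs create-mount-target --file-system-id fs-12345678 --subnet-id subnet-12345678 --security-groups sg-12345678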

And the response back:
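Again trimmed and with placeholder IDs:

{
    "MountTargetId": "fsmt-12345678",
    "FileSystemId": "fs-12345678",
    "SubnetId": "subnet-12345678",
    "LifeCycleState": "creating",
    "IpAddress": "10.0.1.25",
    "NetworkInterfaceId": "eni-12345678",
    "OwnerId": "123456789012"
}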

Once the mount target is available you can mount it from your instances, but they need an NFS client installed first. If you're on Ubuntu do:
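sudo apt-get install -y nfs-common

(On Amazon Linux the equivalent package is nfs-utils.)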

If you're on Windows... better ask somebody else for help; we're a Linux shop.
Now create a directory and mount the file system:
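A minimal sketch, assuming the DNS name built from the placeholder FileSystemId and region (the mount target's IP address from the response above works too):

sudo mkdir /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs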

After I tested things a bit I discovered these tuning settings in the AWS docs; you can put this in your fstab:
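A sketch of the fstab line, again with a placeholder DNS name and mount point; the mount options are the ones the AWS docs recommend, and _netdev is added so the mount waits for the network at boot:

fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0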

 

Test EFS

I don't have examples here, but I wrote a bunch of files to EFS, then spun up several instances and read/wrote concurrently from them. I watched nload stats on several of the instances; the traffic was a bit choppy, with bursts of speed and then slowdowns, but that's all expected if you read the docs. It wasn't terribly slow; in fact it averaged a little faster than copying data from S3, and that's good enough for me at this point.

Close It Down and Clean It Up

You do get charged for the file system for as long as it’s hanging up there in the cloud, so when your tests are done be sure to tear it all down again. First, delete all the mount targets:
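If you've lost track of the mount target IDs, describe them first, then delete each one (placeholder IDs again):

aws efs describe-mount-targets --file-system-id fs-12345678
aws efs delete-mount-target --mount-target-id fsmt-12345678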

No response means it worked.
 
Now delete the file system:
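Same placeholder FileSystemId as before:

aws efs delete-file-system --file-system-id fs-12345678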

And just for good measure — check to be sure it’s really gone:
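A plain describe with no arguments lists whatever is left in the region:

aws efs describe-file-systems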

The response:
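An empty list means it's really gone:

{
    "FileSystems": []
}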

Happy Cloud Computing!
