EBS vs EFS for Multi-Instance File Sharing: What You Actually Need
A common architectural question when scaling EC2 workloads is: can I mount a single EBS volume across multiple instances to share files? The short answer is mostly no — and understanding why reveals a fundamental distinction between block storage and network file systems on AWS.
TL;DR
| Dimension | EBS (Elastic Block Store) | EFS (Elastic File System) |
|---|---|---|
| Storage Type | Block storage | Network file system (NFS v4.1/4.2) |
| Multi-instance mount | Limited — EBS Multi-Attach only for specific volume types and use cases | Native — thousands of instances concurrently |
| Shared folder use case | Not recommended for general shared folder access | Purpose-built for this |
| Availability Zone scope | Single AZ (standard); Multi-Attach within same AZ | Regional — spans all AZs in a region |
| File system semantics | Managed by OS on single instance | POSIX-compliant, shared locking |
| Typical use case | OS boot volumes, databases, single-instance apps | Shared content repos, ML training data, CMS media |
The Core Problem: Block vs. File Storage
EBS is block storage — it presents raw disk blocks to a single OS, which then formats and manages a file system on top. Think of it like a physical hard drive plugged directly into one computer. The OS owns the file system metadata, the inode table, and the journal. If two operating systems tried to write to the same block device simultaneously without coordination, you'd get immediate data corruption.
Analogy: EBS is like a USB hard drive. You can plug it into one laptop at a time. If you physically split the cable to two laptops simultaneously, both OSes would fight over the file system and corrupt it instantly. EFS, by contrast, is like a NAS (Network Attached Storage) device on your office network — every computer connects over the network and the NAS itself arbitrates all reads and writes safely.
What About EBS Multi-Attach?
AWS does offer EBS Multi-Attach, but it is heavily constrained and is not a general-purpose shared folder solution. Key restrictions include:
- Only supported on io1 and io2 (Provisioned IOPS SSD) volume types.
- All attached instances must be in the same Availability Zone.
- Maximum of 16 instances per volume.
- The application or cluster software (e.g., a cluster-aware file system like GFS2 or OCFS2) is entirely responsible for coordinating concurrent writes. A standard ext4 or XFS file system mounted on multiple instances will corrupt data.
- Not suitable for general shared folder access without a cluster file system layer.
For sharing a folder across five standard EC2 instances, EBS Multi-Attach is the wrong tool. EFS is the correct answer.
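For completeness, here is what provisioning a Multi-Attach volume looks like from the AWS CLI. This is a sketch with placeholder values: the AZ, IOPS, and size are illustrative, and the functions are defined but not invoked, since running them requires live AWS credentials.

```shell
#!/usr/bin/env bash
# Sketch: provisioning an EBS Multi-Attach volume with the AWS CLI.
# Placeholder values throughout; functions are defined, not executed.

AZ="us-east-1a"

create_multi_attach_volume() {
  # Multi-Attach must be enabled at creation time and requires io1/io2
  aws ec2 create-volume \
    --volume-type io2 \
    --iops 1000 \
    --size 100 \
    --availability-zone "$AZ" \
    --multi-attach-enabled
}

attach_volume() {
  local volume_id="$1" instance_id="$2"
  # The same volume can be attached to up to 16 instances, all in one AZ
  aws ec2 attach-volume \
    --volume-id "$volume_id" \
    --instance-id "$instance_id" \
    --device /dev/sdf
}

# Usage (with credentials configured):
#   create_multi_attach_volume
#   attach_volume vol-0123456789abcdef0 i-0123456789abcdef0
```

Even after both attachments succeed, the volume is still raw blocks: without GFS2/OCFS2 or similar on top, concurrent writes from two instances will corrupt an ordinary file system.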
Architecture: EFS Shared Across 5 EC2 Instances
```mermaid
graph TB
    EFS["EFS File System<br/>(Regional Resource)"]
    MT1["Mount Target<br/>AZ: us-east-1a"]
    MT2["Mount Target<br/>AZ: us-east-1b"]
    EC1["EC2 Instance 1<br/>(AZ: us-east-1a)"]
    EC2["EC2 Instance 2<br/>(AZ: us-east-1a)"]
    EC3["EC2 Instance 3<br/>(AZ: us-east-1a)"]
    EC4["EC2 Instance 4<br/>(AZ: us-east-1b)"]
    EC5["EC2 Instance 5<br/>(AZ: us-east-1b)"]
    SHARED["Shared Folder<br/>/mnt/shared"]
    EFS --> MT1
    EFS --> MT2
    MT1 -->|"NFS TCP 2049"| EC1
    MT1 -->|"NFS TCP 2049"| EC2
    MT1 -->|"NFS TCP 2049"| EC3
    MT2 -->|"NFS TCP 2049"| EC4
    MT2 -->|"NFS TCP 2049"| EC5
    EC1 --- SHARED
    EC2 --- SHARED
    EC3 --- SHARED
    EC4 --- SHARED
    EC5 --- SHARED
    style EFS fill:#FF9900,color:#fff,stroke:#FF9900
    style MT1 fill:#1A73E8,color:#fff,stroke:#1A73E8
    style MT2 fill:#1A73E8,color:#fff,stroke:#1A73E8
    style SHARED fill:#34A853,color:#fff,stroke:#34A853
```
- EFS File System is a regional resource. AWS manages the underlying storage infrastructure.
- Mount Targets are created per Availability Zone inside your VPC. Each mount target gets an IP address within the subnet.
- Each EC2 instance mounts the EFS file system via the NFS protocol, connecting to the mount target in its AZ.
- All five instances see the same directory tree with POSIX-compliant file and directory semantics.
- EFS handles concurrent access, file locking, and consistency automatically.
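To make the shared-visibility point concrete, here is a tiny local simulation: a single temporary directory stands in for the EFS mount that every instance would see at /mnt/shared. On real EC2, "instance 1" and "instance 2" would be separate machines, but the file system semantics are the same.

```shell
#!/usr/bin/env bash
# Local simulation of shared-folder behavior: one directory stands in
# for the EFS file system mounted at /mnt/shared on every instance.
SHARED="$(mktemp -d)"   # on real EC2 this would be /mnt/shared

# "Instance 1" writes a file into the shared tree
mkdir -p "$SHARED/uploads"
echo "uploaded by instance-1" > "$SHARED/uploads/report.txt"

# "Instance 2" reads the same path and sees the content; on EFS this
# works because all instances mount one regional file system
cat "$SHARED/uploads/report.txt"
```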
How to Mount EFS on EC2 Instances
AWS provides the EFS mount helper (amazon-efs-utils) which simplifies mounting and supports encryption in transit via TLS automatically.
Step 1: Install the EFS Mount Helper
```shell
# Amazon Linux 2 / Amazon Linux 2023
sudo yum install -y amazon-efs-utils

# Ubuntu / Debian: amazon-efs-utils is not in the default repositories,
# so build the package from the official GitHub source
git clone https://github.com/aws/efs-utils
cd efs-utils
./build-deb.sh
sudo apt-get install -y ./build/amazon-efs-utils*deb
```
Step 2: Create the Mount Point and Mount EFS
```shell
# Replace fs-0123456789abcdef0 with your actual EFS File System ID
EFS_ID="fs-0123456789abcdef0"
MOUNT_POINT="/mnt/shared"

sudo mkdir -p "$MOUNT_POINT"
sudo mount -t efs -o tls "$EFS_ID":/ "$MOUNT_POINT"
```
Step 3: Persist the Mount Across Reboots
Add the following line to /etc/fstab on each instance:
```shell
fs-0123456789abcdef0:/ /mnt/shared efs _netdev,tls 0 0
```
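It is worth verifying both the live mount and the new fstab entry before the next reboot. The check below is a sketch, defined as a function because it only makes sense on an EC2 host where the EFS mount exists; the mount point is a parameter with /mnt/shared as the assumed default.

```shell
#!/usr/bin/env bash
# Sketch: verifying an EFS mount and its /etc/fstab entry on an instance.

verify_efs_mount() {
  local mount_point="${1:-/mnt/shared}"
  # 'mount -a' applies /etc/fstab now, so a typo in the new line
  # surfaces here instead of on the next reboot
  sudo mount -a
  # the EFS mount helper appears as an nfs4 mount at the mount point
  mount | grep "$mount_point"
  # EFS reports a very large available size, since it grows elastically
  df -h "$mount_point"
}

# Usage (on the instance): verify_efs_mount /mnt/shared
```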
Full CloudFormation snippet: EFS + Mount Targets
```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: EFS File System with Mount Targets in two AZs

Resources:
  SharedEFS:
    Type: AWS::EFS::FileSystem
    Properties:
      Encrypted: true
      PerformanceMode: generalPurpose
      ThroughputMode: bursting
      FileSystemTags:
        - Key: Name
          Value: SharedFileSystem

  EFSMountTargetAZ1:
    Type: AWS::EFS::MountTarget
    Properties:
      FileSystemId: !Ref SharedEFS
      SubnetId: subnet-0abc111111111111a  # Replace with your subnet in AZ1
      SecurityGroups:
        - !Ref EFSSecurityGroup

  EFSMountTargetAZ2:
    Type: AWS::EFS::MountTarget
    Properties:
      FileSystemId: !Ref SharedEFS
      SubnetId: subnet-0abc222222222222b  # Replace with your subnet in AZ2
      SecurityGroups:
        - !Ref EFSSecurityGroup

  EFSSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow NFS from EC2 instances
      VpcId: vpc-0abc000000000000c  # Replace with your VPC ID
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 2049
          ToPort: 2049
          SourceSecurityGroupId: sg-0abc333333333333d  # EC2 instance SG
```
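Deploying the template and confirming the resulting endpoints can be done from the CLI. This is a sketch: the template filename, stack name, and region are illustrative, and the functions need live AWS credentials to run.

```shell
#!/usr/bin/env bash
# Sketch: deploying the template above (saved as efs-shared.yaml) and
# listing the mount targets it creates. Names are placeholders.

STACK_NAME="shared-efs"

deploy_efs_stack() {
  aws cloudformation deploy \
    --template-file efs-shared.yaml \
    --stack-name "$STACK_NAME" \
    --region us-east-1
}

list_mount_targets() {
  local fs_id="$1"
  # Each mount target gets an IP in its subnet; instances in that AZ
  # connect to it over NFS (TCP 2049)
  aws efs describe-mount-targets \
    --file-system-id "$fs_id" \
    --query 'MountTargets[].{AZ:AvailabilityZoneName,IP:IpAddress}'
}

# Usage: deploy_efs_stack && list_mount_targets fs-0123456789abcdef0
```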
IAM & Security: Least Privilege for EFS
Attach the following IAM policy to your EC2 instance role to allow EFS mount operations. Scope the resource ARN to your specific file system.
IAM Policy for EFS Access
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEFSClientMount",
      "Effect": "Allow",
      "Action": [
        "elasticfilesystem:ClientMount",
        "elasticfilesystem:ClientWrite",
        "elasticfilesystem:DescribeMountTargets"
      ],
      "Resource": "arn:aws:elasticfilesystem:us-east-1:123456789012:file-system/fs-0123456789abcdef0"
    }
  ]
}
```
Network security note: Your EFS mount target security group must allow inbound TCP port 2049 (NFS) from the security group attached to your EC2 instances. No other ports are required.
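One caveat worth knowing: by default (with no file system policy attached), EFS permits any client with network access to mount, so IAM identity policies like the one above only take effect when the client mounts with the `iam` option. The sketch below assumes the EFS mount helper is installed and the instance has an attached role; the IDs are placeholders.

```shell
#!/usr/bin/env bash
# Sketch: mounting with IAM authorization via the EFS mount helper.

mount_with_iam() {
  local efs_id="$1" mount_point="$2"
  sudo mkdir -p "$mount_point"
  # 'tls' enables encryption in transit; 'iam' presents the instance
  # role's credentials with the mount request so IAM policies apply
  sudo mount -t efs -o tls,iam "$efs_id":/ "$mount_point"
}

# Usage (on the instance): mount_with_iam fs-0123456789abcdef0 /mnt/shared
```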
EFS Performance and Throughput Modes: Choosing the Right Ones
EFS has two independent settings: a performance mode, fixed at creation, and a throughput mode, which can be changed later. The original heading "performance modes" is often used loosely for both, but they are separate choices.
| Mode | Setting | Best For | Consideration |
|---|---|---|---|
| General Purpose (default) | Performance mode | Web serving, CMS, dev environments, most workloads | Lower latency per operation |
| Max I/O | Performance mode | Highly parallelized workloads (big data, media processing) | Higher aggregate throughput, slightly higher latency per operation |
| Bursting | Throughput mode | Spiky, intermittent workloads | Throughput scales with storage size |
| Provisioned | Throughput mode | Consistently high throughput needs independent of storage size | Additional cost; check AWS pricing page |
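Both settings are specified when creating the file system. The CLI sketch below uses illustrative values (128 MiB/s provisioned throughput); the function is defined but not run, since it needs live credentials, and remember that the performance mode cannot be changed after creation.

```shell
#!/usr/bin/env bash
# Sketch: choosing performance and throughput modes at creation time.
# Values are illustrative; the function is not invoked here.

create_shared_efs() {
  aws efs create-file-system \
    --performance-mode generalPurpose \
    --throughput-mode provisioned \
    --provisioned-throughput-in-mibps 128 \
    --encrypted \
    --tags Key=Name,Value=SharedFileSystem
}

# Usage: create_shared_efs
```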
Decision Flowchart: EBS or EFS?
```mermaid
graph TD
    A["Start: storage for EC2"]
    B{"Shared access across<br/>multiple EC2 instances?"}
    C{"Cluster-aware file system<br/>+ same AZ + io1/io2 only?"}
    D["Use EBS Multi-Attach<br/>(Specialized use case)"]
    E["Use EFS<br/>(Shared folder, recommended)"]
    F{"Single instance?"}
    G["Use EBS<br/>(General Purpose or Provisioned IOPS)"]
    A --> B
    B -->|"Yes"| C
    B -->|"No"| F
    C -->|"Yes, strict conditions met"| D
    C -->|"No, general shared folder"| E
    F -->|"Yes"| G
    F -->|"No"| E
    style E fill:#34A853,color:#fff,stroke:#34A853
    style D fill:#FBBC04,color:#000,stroke:#FBBC04
    style G fill:#1A73E8,color:#fff,stroke:#1A73E8
```
Wrap-Up & Next Steps
If your goal is to share a folder across multiple EC2 instances, EFS is the correct and purpose-built solution. EBS is a single-instance block device; EBS Multi-Attach exists for specialized cluster workloads with cluster-aware file systems, not general shared folder access.
Glossary
| Term | Definition |
|---|---|
| Block Storage | Raw disk storage presented to an OS as a device; the OS manages the file system. Examples: EBS, local NVMe. |
| NFS (Network File System) | A distributed file system protocol allowing a client to access files over a network as if they were local. |
| Mount Target | An EFS endpoint within a specific VPC subnet and AZ that EC2 instances connect to via NFS. |
| POSIX | A standard defining file system semantics including permissions, ownership, and file locking — EFS is POSIX-compliant. |
| EBS Multi-Attach | An EBS feature allowing a single io1/io2 volume to be attached to up to 16 instances in the same AZ, requiring cluster-aware file system software. |