Serverless File Upload with AWS Cognito and S3

Ali Certel · Published in Hipo · Jun 20, 2018


At Hipo we use AWS EC2 instances for backend services. Each backend service we create has at least 2 servers.

Any application we build must be horizontally scalable, since we start with two application servers. When making an application scalable, the first challenge is serving static files properly. Static files can be web app assets like JS, CSS and images, or files uploaded by users. The simple solution is to use S3 to serve all kinds of static files. It’s scalable, fast and reliable.
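
For a Django backend like ours, a common way to wire this up is django-storages; the bucket and region below are illustrative assumptions, not our production configuration.

# settings.py - a minimal django-storages sketch (bucket and region are assumptions)
INSTALLED_APPS += ["storages"]

AWS_STORAGE_BUCKET_NAME = "hipo-test"
AWS_S3_REGION_NAME = "us-east-1"

# route user-uploaded files (and, with a similar setting, static assets) to S3
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"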

For smaller files like photos, uploading the file to the server and sending it from the server to S3 is done synchronously. But for large files, in order not to keep the user waiting, it’s much more effective to send them asynchronously. We use Celery to run our asynchronous tasks, and in this post we’ll talk about our use of Celery and how we overcame various challenges while handling asynchronous tasks.

How to upload large files without timeout errors

For reliable uploads and to prevent timeout errors on large file uploads, we used the nginx upload module. It allowed us to break files down and upload them in smaller chunks, preventing possible timeout errors.

How to make sure the app server that runs the background task to upload the file to S3 is the same one that received the request

Let’s say we have two app servers called ServerA and ServerB. A user uploads a file, ServerA gets the request and fires a task. Now the file sits in a temporary folder on ServerA. If the worker that receives the task is on ServerA, there is no problem. But if ServerB’s worker receives the task, it fails because it cannot find a file that was uploaded to ServerA.

This problem could be solved by having only one server run Celery tasks. But most of the time we don’t really need an extra machine just to run background jobs; it would sit mostly idle. Why spend more money on it and have another server to maintain?

To solve this without adding an extra instance, each server must have its own unique queue, and when the task is fired it must be routed to that queue.

The way we solved this was to simply add a setting to create a unique queue for the server.
import socket

MEDIA_UPLOAD_QUEUE_KEY = "media_upload_%s" % socket.gethostname()

This key is used when the task is fired, so the server that fires the task will always be the one that receives it.
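
Here is a minimal sketch of how that routing can look with Celery; the broker URL, task name and arguments are illustrative assumptions rather than our exact code.

# a per-host queue so the task lands on the server that has the temporary file
# (broker URL, task name and arguments are illustrative assumptions)
import socket

from celery import Celery
from kombu import Queue

app = Celery("uploads", broker="redis://localhost:6379/0")  # assumed broker

# same host-specific key as the setting above
MEDIA_UPLOAD_QUEUE_KEY = "media_upload_%s" % socket.gethostname()

# each server consumes only its own host-specific queue
app.conf.task_queues = (Queue(MEDIA_UPLOAD_QUEUE_KEY),)

@app.task
def upload_media_to_s3(tmp_path):
    # read the file from the local temporary folder and push it to S3
    ...

# in the upload view, route the task to this server, so the worker that picks
# it up can find the temporary file on local disk:
# upload_media_to_s3.apply_async(args=[tmp_path], queue=MEDIA_UPLOAD_QUEUE_KEY)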

There may come a time when your project needs a whole server to handle upload requests; most projects don’t. That’s why we start by handling uploads on the app server. It is, however, possible that tens or hundreds of users upload files at the same time. This would give your API requests very high latency, as your server workers would be tied up with slow I/O operations.

Having this happen to us made it clear that this is not the perfect infrastructure for handling large file uploads. It works, but we love simplicity, and we don’t want to get our hands dirty if we can achieve the same thing more easily.

What if users upload directly to S3?

We thus decided to upload files directly to Amazon S3 without the data first going through our servers. Scalable, reliable and fast upload without needing a server. Now that was exciting to hear.

To be able to upload files directly to S3, users must have AWS credentials with the necessary permissions. We can’t directly hand over our AWS keys to users. What we can do is create temporary credentials for each user. To achieve this we use Amazon Cognito Identity.

Amazon Cognito Identity enables you to create unique identities for your users and authenticate them with identity providers. With an identity, you can obtain temporary, limited-privilege AWS credentials to synchronize data with Amazon Cognito Sync, or directly access other AWS services. Amazon Cognito Identity supports public identity providers — Amazon, Facebook, and Google — as well as unauthenticated identities. It also supports developer authenticated identities, which let you register and authenticate users via your own backend authentication process.

To get started with Cognito, first you need to create an identity pool. An identity pool is a pool of app users; an identity is an individual user, and it can also be a guest user.

How to create a new identity pool for your application

  1. Log in to the Amazon Cognito console and click Create new identity pool. We are going to create an identity pool with a custom authentication provider, which is our backend service. (A scripted alternative using boto3 is sketched after this list.)
  2. Click Create pool.
  3. Create an IAM role for your authenticated users.
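
If you would rather script the pool creation than click through the console, here is a minimal boto3 sketch; the region, pool name and developer provider name are illustrative assumptions, not values from our setup.

# create the identity pool with our backend as the developer (custom) auth provider
# (region, pool name and provider name below are assumptions)
import boto3

cognito = boto3.client("cognito-identity", region_name="us-east-1")

pool = cognito.create_identity_pool(
    IdentityPoolName="hipo_test_pool",
    AllowUnauthenticatedIdentities=False,
    DeveloperProviderName="login.hipo.backend",
)
print(pool["IdentityPoolId"])  # keep this ID, it is needed when issuing tokens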

Here is a sample IAM role which allows users to upload files to a specific directory.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "mobileanalytics:PutEvents",
                "cognito-sync:*",
                "cognito-identity:*"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::hipo-test/uploads/${cognito-identity.amazonaws.com:sub}/*"
            ]
        }
    ]
}

The ${cognito-identity.amazonaws.com:sub} policy variable holds the user’s unique Cognito identity ID, so each user can only upload to a folder named with their Cognito ID.

This allows us to create an isolated folder for each user, and it’s one of the reasons why we chose Cognito over STS.

Cognito’s Developer Authenticated Identities Authflow

  1. Login via Developer Provider
  2. Validate the user’s login
  3. GetOpenIdTokenForDeveloperIdentity (Backend) [API reference]
    Registers (or retrieves) a Cognito IdentityId and an OpenID Connect token for a user authenticated by your backend authentication process.
  4. GetCredentialsForIdentity (Client) [API reference]
    Returns credentials for the provided identity ID.

The client gets the authenticated user’s Cognito IdentityId and OpenID Connect token from the backend service, makes a request to the Cognito service with them to retrieve a temporary AWS access key and secret key, and then uses these AWS keys to upload files to S3.
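
On the backend, step 3 boils down to a single boto3 call. A minimal sketch follows; the identity pool ID, developer provider name and user identifier are illustrative assumptions.

# backend: issue a Cognito IdentityId and OIDC token for an authenticated user
# (pool ID, provider name and user identifier below are assumptions)
import boto3

cognito = boto3.client("cognito-identity", region_name="us-east-1")

def get_cognito_token(user_id):
    response = cognito.get_open_id_token_for_developer_identity(
        IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555",
        Logins={"login.hipo.backend": str(user_id)},  # provider name -> user identifier
        TokenDuration=3600,  # seconds the token stays valid
    )
    # returned to the client, e.g. in the login or profile API response
    return response["IdentityId"], response["Token"]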

We use Amazon S3 Transfer Manager to upload files on the client side.

Amazon S3 Transfer Manager makes it easy for you to upload and download files from S3 while optimizing for performance and reliability. It hides the complexity of transferring files behind a simple API. Whenever possible, uploads are broken up into multiple pieces, so that several pieces can be sent in parallel to provide better throughput. This approach enables more robust transfers, since an I/O error in any individual piece means the SDK only needs to retransmit the one affected piece, and not the entire transfer.

S3 Transfer Manager provides simple APIs to pause, resume, and cancel file transfers.
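
Our clients use the AWS mobile SDKs for this, but the same exchange-and-upload flow can be sketched in Python with boto3, whose upload_file call similarly splits large files into multipart uploads; the bucket, region and file names below are assumptions.

# client side: exchange the Cognito token for temporary AWS keys, then upload
# (bucket name, region and file path are illustrative assumptions)
import boto3

def upload_with_cognito(identity_id, oidc_token, local_path):
    cognito = boto3.client("cognito-identity", region_name="us-east-1")

    # step 4: temporary, limited-privilege credentials for this identity
    creds = cognito.get_credentials_for_identity(
        IdentityId=identity_id,
        Logins={"cognito-identity.amazonaws.com": oidc_token},
    )["Credentials"]

    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretKey"],
        aws_session_token=creds["SessionToken"],
        region_name="us-east-1",
    )

    # the IAM policy above only allows keys under uploads/<identity id>/
    key = "uploads/%s/%s" % (identity_id, local_path.split("/")[-1])
    s3.upload_file(local_path, "hipo-test", key)  # multipart for large files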

Summary

Handling file uploads on the app servers brings an extra burden to the server, and it also requires extra work to make it scalable. Instead, let clients upload files directly to S3.
