Opening in Singapore
Transloadit currently operates from two main datacenters, one in the US (North Virginia) and one in the EU (Ireland). Most of our users are served well by these, but for people in Japan or Australia, we could still do better. So we're opening a third datacenter in Singapore (also known as Asia-Pacific, AP, or ap-southeast-1) in order to offer much lower latencies for the entire region.
We're almost done setting up shop, and have taken this opportunity to make some other big changes to our networking setup as well. AP will already be equipped with those from the start, and EU & US will soon receive them as well.
In short, we'll transition to Amazon VPCs (Virtual Private Clouds) and NLBs (Network Load Balancers). Let's geek out on some AWS networking tech!
Virtual Private Clouds
Like many other companies, Transloadit builds on AWS. When we started in 2009, AWS was just three years old, and many features that are obvious choices today didn't exist yet. One of these no-brainers is deploying a VPC.
A VPC offers enhanced security and organizational controls. For instance, it allows you to shield off machines from the internet entirely, and use networking ACLs to control what traffic is allowed between different segments of your network.
Additionally, many of the new features and machine types AWS is offering are only available inside VPCs. We've had our eye on a number of new machine types that deliver greater performance at lower costs, which we will finally be able to utilize once we've fully switched to VPCs in all datacenters. As mentioned, Singapore is already equipped with a VPC. This introduces some changes to our firewalls that could potentially affect customers with setups requiring outgoing connections to non-standard ports. One example could be someone using our /html/convert Robot to take a screenshot of a website served on a non-standard port.
Please reach out if this is an issue for you, as by default, we no longer allow dialing out to arbitrary ports.
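If you want to check whether a port your setup relies on is still reachable, a small TCP probe along these lines can tell you quickly (a minimal sketch; the host and port are whatever your own integration dials out to):

```python
import socket

def can_dial_out(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds
    within the timeout, False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running this against the host and port your Assemblies need will show whether a firewall is in the way before it shows up as a failed job.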
Network Load Balancers
So far, both our US and EU datacenters have used AWS's Elastic Load Balancer (ELB) to distribute incoming traffic across many different machines. Recently, Amazon introduced two new types of load balancers and renamed the existing one to 'Classic':
- Classic Load Balancer (ELB)
- Application Load Balancer (ALB)
- Network Load Balancer (NLB)
The ALB can be seen as an iteration over the ELB. Both are Layer 7 load balancers, meaning they can "understand" HTTP traffic, and, for instance, handle SSL termination, so that your backend servers don't need to worry about SSL. The ALB offers additional advantages, such as being able to route traffic to different servers depending on the URL.
The newly introduced Network Load Balancer is a different beast altogether. It operates on Layer 4, meaning it's blissfully unaware of things like HTTP, and routes traffic to backend machines at the IP-packet level. This makes it less feature-rich (no SSL termination or routing based on URL) but much more powerful: an NLB can handle millions of requests per second at much lower latencies, for a number of reasons.
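To get a feel for what "Layer 4" means in practice, here is a toy TCP relay that does the same conceptual job as an NLB: it shuttles raw bytes between a client and a backend without ever parsing HTTP or TLS. This is purely an illustrative sketch of the idea, not how AWS implements it:

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy raw bytes from src to dst until src closes. No protocol parsing:
    the relay has no idea whether it carries HTTP, TLS, or anything else."""
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)  # signal EOF downstream
        except OSError:
            pass

def relay_once(listen_port: int, backend: tuple) -> None:
    """Accept a single client and shuttle bytes to and from the backend,
    Layer-4 style. A real NLB does this for millions of flows at once."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", listen_port))
        srv.listen(1)
        client, _ = srv.accept()
        upstream = socket.create_connection(backend)
        t = threading.Thread(target=pipe, args=(client, upstream))
        t.start()
        pipe(upstream, client)  # backend -> client in this thread
        t.join()
        client.close()
        upstream.close()
```

Because the relay only moves bytes, it cannot terminate SSL or route by URL, which is exactly the trade-off described above.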
As for no longer being able to handle SSL on the load balancer, we have decided that is a good thing. After all, with termination, the traffic would be decrypted by AWS and flow unencrypted to our backend servers anyway. It's only the last hop, but still: handling decryption ourselves at the very last station reduces the attack surface.
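For a backend that terminates TLS itself, the setup boils down to a server-side TLS context with modern settings. A minimal Python sketch, assuming your own certificate files (the paths below are placeholders):

```python
import ssl

def backend_tls_context() -> ssl.SSLContext:
    """Server-side TLS context for a backend that terminates TLS itself.
    PROTOCOL_TLS_SERVER enables secure defaults; on top of that we refuse
    anything older than TLS 1.2."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

# In a real server you would then load your certificate and wrap sockets:
#   ctx = backend_tls_context()
#   ctx.load_cert_chain("server.crt", "server.key")  # placeholder paths
#   tls_sock = ctx.wrap_socket(plain_sock, server_side=True)
```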
Another reason for moving away from the Classic ELB is that if you expected a traffic spike, you had to ask AWS support to pre-warm the load balancers for you, or risk being severely throttled for some time. Not so great! It's clear why AWS now recommends moving away from them, and that's exactly what we're doing, starting with our new Singapore datacenter.
We are looking forward to the lower latencies and enhanced security that these changes will provide to your end-users, but it's also always good to be cautious.
We think it's unlikely that these NLBs will cause any problems, but we also don't want to make assumptions. There are differences in how traffic flows and in the way SSL is terminated, so there is always a risk of breakage. If you want to be safe, you can already try our https://api2-ap-southeast-1.transloadit.com endpoint in your staging environment to see if connections are established correctly.
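If you script your staging tests, a tiny helper can keep the endpoint choice in one place. Only the ap-southeast-1 hostname above is confirmed; generalizing the pattern to other regions is our assumption here:

```python
from typing import Optional

def api_endpoint(region: Optional[str] = None) -> str:
    """Build the Transloadit API endpoint to test against.

    With no region, you get the default latency-routed endpoint; with a
    region such as "ap-southeast-1", you pin your test traffic to that
    datacenter. (The generic pattern for other regions is an assumption.)
    """
    if region is None:
        return "https://api2.transloadit.com"
    return f"https://api2-{region}.transloadit.com"
```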
Note that some issues are to be expected, as Singapore:
- is still undergoing changes and not running at full capacity, so encoding performance will be worse
- will have higher latencies if you're testing from the US or EU. Normally, when hitting https://api2.transloadit.com, US traffic is routed to our US datacenter, while people in, e.g., Tokyo are routed to Singapore. But while testing, you may be explicitly asking us to handle traffic in a datacenter farther away from you
Despite these shortcomings, Singapore can already be used to test whether your app and integration are compatible with our new networking setup. Please do reach out if your tests fail, so we can rule out issues together.
These are the biggest changes that we think could impact our customers, but we did recreate our entire infra from the ground up, so there might be other subtle differences. This is why we will first test Singapore rigorously before rolling out these changes in our existing datacenters, and we are asking you to do the same.
We intend to launch Singapore on January 7, 2019 (so, roughly two weeks from now). This will affect only end-users in the Asia-Pacific region, improving latencies for them. Note that we can easily stop sending traffic to Singapore at the first sign of trouble, making EU & US share that load again, just as if nothing happened.
If all goes well, our aim is to roll out these changes to the existing US & EU datacenters before the end of January, 2019.
In creating Singapore, we have standardized how datacenters are provisioned, so it has become very straightforward for us to open more datacenters beyond Singapore. I have my eyes on you, São Paulo!
If you have any questions or concerns, please do get in touch or leave a comment below.