Why would you call colocation "building your own data center"? You could call it "colocation" or "renting space in a data center". What are you building? You're racking. Can you say what you mean?
I have to second this. While it takes much effort and in-depth knowledge to build up from an “empty” cage, it’s still far from dealing with everything involved in a real data center build: building permits, planning and realizing the facility to code, redundant power lines, AC, and fibre.
Still, kudos for going this path in the cloud-centric times we live in.
Yes, the second is much more work, orders of magnitude at least.
Having been around and through both, setting up a cage or two is very different than the entire facility.
I think you and GP are in agreement.
Dealing with power at that scale, arranging your own ISPs, seems a bit beyond your normal colocation project, but I haven’t been in the data center space in a very long time.
I worked for a colo provider for a long time. Many tenants arranged for their own ISPs, especially the ones large enough to use a cage.
It seems a bit disingenuous but it’s common practice. Even the hyperscalers, who do have their own datacenters, include their colocation servers in the term “datacenter.” Good luck finding the actual, physical location of a server in GCP europe-west2-a (“London”). Maybe it’s in a real Google datacenter in London! Or it could be in an Equinix datacenter in Slough, one room away from AWS eu-west-1.
Cloudflare has also historically used “datacenter” to refer to their rack deployments.
All that said, for the purpose of the blog post, “building your own datacenter” is misleading.
The hyperscalers are absolutely not colo-ing their general purpose compute at Equinix! A cage for routers and direct connect, maybe some limited Edge CDN/compute at most.
Even where they do lease wholesale space, you'd be hard pushed to find examples of more than one in a single building. If you count them as Microsoft, Google, AWS then I'm not sure I can think of a single example off the top of my head. Only really possible if you start including players like IBM or Oracle in that list.
Maybe leasing wholesale space shouldn’t be considered colocation, but GCP absolutely does this and the Slough datacenter was a real example.
I can’t dig up the source atm but IIRC some Equinix website was bragging about it (and it wasn’t just about direct connect to GCP).
Google doesn't put GCP compute inside Equinix Slough. I could perhaps believe they have a cage of routers, maybe even CDN boxes/Edge, but no general cloud compute.
Google and AWS will put routers inside Equinix Slough, sure, but that's literally written on the tin, and it's the only way a carrier hotel could work.
Then why do they obfuscate the location of their servers? If they were all in Google datacenters, why not let me see where my VM is?
Security reasons, I presume? Otherwise it would be trivial for an adversary to map out their resources by sampling VM rentals over a moderate time-period.
I’m very naive on the subject here - what advantage would this give someone?
Hyperscalers use colos all the time for edge presence.
> Why would you call colocation "building your own data center"?
The cynic in me says this was written by sales/marketing people targeting a whole new generation of people who've never laid hands on the bare metal, racked a piece of equipment, or done low-voltage cabling, fiber cabling, and "plug this into A and B power" AC cabling.
By this, I mean people who've never done anything that isn't GCP, Azure, AWS, etc. A lot of bare-metal infrastructure terminology gets misused by people who haven't been around the industry long enough to have been required to DIY all their own infrastructure on their own bare metal.
I really don't mean any insult to people reading this who've only ever touched the software side, but if a document is describing the general concept of hot aisles and cold aisles to an audience in such a way that it assumes they don't know what those are, it's at a very introductory/beginner level of understanding the OSI layer 1 infrastructure.
I think that's my fault BTW (Railway Founder here). I asked Charith to cut down a bit on the details to make sure it was approachable to a wider audience (And most people have only done Cloud)
I wanted to start off with the 101 content to see if people found it approachable/interesting. He's got like reams and reams of 201, 301, 401
Next time I'll stay out of the writing room!
Bro let him at the 401 and higher hahaha!
"Booo who let this guy cook?"
Fair tbh
We will indeed write more on this so this is great feedback for next time!
> You could call it "colocation" or "renting space in a data center". What are you building? You're racking. Can you say what you mean?
TFA explains what they're doing; they literally write this:
"In general you have three main choices: Greenfield buildout (...), Cage Colocation (getting a private space inside a provider's datacenter enclosed by mesh walls), or Rack colocation...
We chose the second option"
I don't know how much clearer they can be.
Reminds me of the old Rackspace days! Boy, we had some war stories:
- Some EMC guys came to install a storage device for us to test... and tripped over each other and knocked out an entire rack of servers like a comedy skit. (They uh... didn't win the contract.)
- Some poor guy driving a truck had a heart attack and the crash took our DFW datacenter offline. (There were bollards to prevent this sort of scenario, but the cement hadn't been poured in them yet.)
- At one point we temporarily laser-beamed bandwidth across the street to another building.
- There was one day we knocked out windows and purchased box fans because servers were literally catching on fire.
Data center science has... well, improved since the earlier days. We worked with Facebook on the OpenCompute Project, which had some very forward-looking infra concepts at the time.
> There was one day we knocked out windows and purchased box fans because servers were literally catching on fire.
Pointing the fans in or out?
This is a pretty decent write up. One thing that comes to mind is why would you write your own internal tooling for managing a rack when Netbox exists? Netbox is fantastic and I wish I had this back in the mid 2000s when I was managing 50+ racks.
https://github.com/netbox-community/netbox
we evaluated a lot of commercial and OSS offerings before we decided to go build it ourselves - we still have a deploy of Netbox somewhere. But our custom tool (Railyard) works so well because it integrates deeply into our full software, hardware and orchestration stack. The problem with the OSS stuff is that it's almost too generic - you shape the problem to fit its data model vs. solve the problem. We're likely going to fold our tool into Railway itself eventually - want to go on-prem? Button-click hardware design, commissioning, deploy and devex. Sorta like what Oxide is doing, but approaching the problem from the opposite side.
It is not that difficult to build it into your app if you're already storing information about hosts, networking, etc. All you're really doing is expanding the scope. Netbox is a fine starting point if you're willing to start there and build your systems around it, but if you've already got a system (or you need to do anything that doesn't fit Netbox's logic), you're probably better off just extending it.
In this case Railway will need to care about a lot of extra information beyond just racks, IP addresses and physical servers.
correct; I think the first version of our tool sprang up in the space of a couple of weekends. It wasn't planned - my colleague Pierre, who wrote it, just had a lot of fun building it.
My first colo box came courtesy of a friend of a friend that worked for one of the companies that did that (leaving out names to protect the innocent). It was a true frankenputer built out of whatever spare parts he had laying around. He let me come visit it, and it was an art project as much as a webserver. The mainboard was hung on the wall with some zip ties, the PSU was on the desk top, the hard drive was suspended as well. Eventually, the system was upgraded to newer hardware, put in an actual case, and then racked with an upgraded 100base-t connection. We were screaming in 1999.
It would be nice to have a lot more detail. The WTF sections are the best part. Sounds like your gear needs a "this side towards enemy" sign and/or the right affordances so it only goes in one way.
Did you standardize on layout at the rack level? What poka-yoke processes did you put into place to prevent mistakes?
What does your metal->boot stack look like?
Having worked for two different cloud providers and built my own internal clouds with PXE booted hosts, I too find this stuff fascinating.
Also, take utmost advantage of a new DC when you are booting it to try out all the failure scenarios you can think of, and the ones you can't, through randomized fault injection.
> It would be nice to have a lot more detail
I'm going to save this for when I'm asked to cut the three paras on power circuit types.
Re: standardising layout at the rack level; we do now! We only figured this out after site #2. It makes everything so much easier to verify. And yeah, validation is hard - we're doing it manually thus far; we want to play around with scraping LLDP data but our switch software stack has a bug :/. It's an evolving process: the more we work with different contractors, the more edge cases we unearth and account for. The biggest improvement is that we have built an internal DCIM that templates a rack design and exports an interactive "cabling explorer" for the site techs - including detailed annotated diagrams of equipment showing port names, etc. The elevation screenshot in the post is from part of that tool.
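(For anyone curious what the LLDP-scraping validation could look like: a minimal sketch, not Railway's tooling, assuming hosts run lldpd so `lldpcli -f json show neighbors` is available. The expected-cabling map and interface names are made up, and lldpcli's JSON shape varies between versions.)

```python
import json
import subprocess

# Hypothetical expected cabling: local interface -> (peer device, peer port).
EXPECTED = {
    "eth0": ("leaf-sw-01", "Ethernet12"),
    "eth1": ("leaf-sw-02", "Ethernet12"),
}

def lldp_neighbors() -> dict:
    """Collect LLDP neighbors as seen by lldpd on this host."""
    out = subprocess.run(
        ["lldpcli", "-f", "json", "show", "neighbors"],
        capture_output=True, text=True, check=True,
    ).stdout
    data = json.loads(out)
    ifaces = data.get("lldp", {}).get("interface", [])
    if isinstance(ifaces, dict):  # some lldpd versions key this by port name instead
        ifaces = [{name: info} for name, info in ifaces.items()]
    neighbors = {}
    # The exact nesting differs by lldpd version; treat this parsing as illustrative.
    for entry in ifaces:
        for local_port, info in entry.items():
            peer_name = next(iter(info.get("chassis", {})), "?")
            peer_port = info.get("port", {}).get("id", {}).get("value", "?")
            neighbors[local_port] = (peer_name, peer_port)
    return neighbors

def verify_cabling() -> None:
    """Print OK/MISMATCH per port by comparing observed neighbors to the plan."""
    seen = lldp_neighbors()
    for port, want in EXPECTED.items():
        got = seen.get(port)
        status = "OK" if got == want else f"MISMATCH (saw {got})"
        print(f"{port}: want {want} -> {status}")

if __name__ == "__main__":
    verify_cabling()
```

Run against every host after a cabling change, something like this catches swapped leaf uplinks before they surface as weird routing behaviour.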
> What does your metal->boot stack look like?
We've hacked together something on top of https://github.com/danderson/netboot/tree/main/pixiecore that serves a Debian netboot image + preseed file. We have some custom Temporal workers that connect to the Redfish APIs on the BMCs to puppeteer the contraption, and then a custom host agent to provision QEMU VMs and advertise the assigned IPs via BGP (using FRR) from the host.
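(For readers who haven't poked a BMC before, the Redfish part of a pipeline like that boils down to two small HTTP calls: set a one-time PXE boot override, then power-cycle the box so it hits the netboot server. A rough sketch below - not Railway's actual workers; the BMC address, credentials and system ID are placeholders, and real BMCs differ in auth and resource paths.)

```python
import requests

# Placeholder BMC endpoint and credentials - not from the post.
BMC = "https://10.0.0.42"
SYSTEM = f"{BMC}/redfish/v1/Systems/1"  # system ID varies by vendor

session = requests.Session()
session.auth = ("admin", "changeme")
session.verify = False  # many BMCs ship self-signed certs; pin them properly in production

def pxe_boot_once() -> None:
    """Standard Redfish boot override: PXE on the next power-on only."""
    r = session.patch(
        SYSTEM,
        json={"Boot": {"BootSourceOverrideEnabled": "Once",
                       "BootSourceOverrideTarget": "Pxe"}},
        timeout=30,
    )
    r.raise_for_status()

def power_cycle() -> None:
    """Force-restart so the host picks up the override and lands in the installer."""
    r = session.post(
        f"{SYSTEM}/Actions/ComputerSystem.Reset",
        json={"ResetType": "ForceRestart"},
        timeout=30,
    )
    r.raise_for_status()

if __name__ == "__main__":
    pxe_boot_once()
    power_cycle()
```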
Re: new DCs for failure scenarios - yeah, we've already blown breakers etc. testing stuff at one site (that's how we figured out our phase balancing was off), and went in with a thermal camera at another. A site in AMS is coming up next week, and the goal for that is to see how far we can push a fully loaded switch fabric.
Wonderful!
The edge cases are the gold btw, collect the whole set and keep them in a human and machine readable format.
I'd also go through and, using a color-coded set of cables, insert bad cables (one at a time at first) while the system is doing an aggressive all-to-all workload, and see how quickly you can identify the faults.
It is the gray failures that will bring the system down - often several at once, because a single failure will go undetected for months and then finally tip over an inflection point at a later time.
Are your workloads ephemeral and/or do they live migrate? Or will physical hosts have long uptimes? It is nice to be able to re-baseline the hardware before and after host kernel upgrades so you can detect any anomalies.
You would be surprised how large a systemic performance degradation major cloud providers have seen build up over months because "all machines are the same" - high precision but low absolute accuracy. It is nice to run the same benchmarks on bare metal and then again under virtualization.
I am sure you know, but you are running a multivariate longitudinal experiment, science the shit out of it.
Long-running hosts at the moment, but we can drain most workloads off a specific host/rack if required and reschedule them pretty fast. We have the advantage of a custom scheduler/orchestrator we've been working on for years, so we have a lot more control at that layer than we would with Kube or Nomad.
Re: Live Migration - we're working on adding Live Migration support to our orchestrator atm. We aim to have it running this quarter. That'll make things super seamless.
Re: kernels - we've already seen some perf improvements somewhere between 6.0 and 6.5 (I forget the exact reason/version), but it was some fix specific to the Sapphire Rapids CPUs we had. I wish we had more time to science on it; it's really fun playing with all the knobs and benchmarking stuff. Some of the telemetry on the new CPUs is also crazy - there's stuff like Intel PCM that can pull super fine-grained telemetry direct from the CPU/chipset: https://github.com/intel/pcm. We've only used it to confirm that we got NUMA affinity right so far - nothing crazy.
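(Side note for anyone wanting a quick NUMA-affinity sanity check without reaching for PCM: the kernel already exposes per-process page placement in /proc/<pid>/numa_maps, so a few lines of parsing will tell you whether a pinned QEMU process's memory actually landed on the node you expected. Sketch below; the PID and node are arguments you supply, and this is a rough check, not a PCM replacement.)

```python
import re
import sys
from collections import Counter

def pages_per_node(pid: int) -> Counter:
    """Tally resident pages per NUMA node from /proc/<pid>/numa_maps (fields like N0=1234)."""
    counts: Counter = Counter()
    with open(f"/proc/{pid}/numa_maps") as f:
        for line in f:
            for node, pages in re.findall(r"N(\d+)=(\d+)", line):
                counts[int(node)] += int(pages)
    return counts

if __name__ == "__main__":
    pid, expected_node = int(sys.argv[1]), int(sys.argv[2])  # e.g. a QEMU PID and its pinned node
    counts = pages_per_node(pid)
    total = sum(counts.values()) or 1
    share = counts.get(expected_node, 0) / total
    print(f"pages per node: {dict(counts)}; {share:.1%} of pages on node {expected_node}")
```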
Last thing.
You will need a way to coordinate LM with users, since some workloads are sensitive to LM blackouts. Not many are, but the ones that are are exactly the kinds of things customers will leave over.
If you are draining a host, make sure new VMs are on hosts that can be guaranteed to be maintenance free for the next x-days. This allows customers to restart their workloads on their schedule and have a guarantee that they won't be impacted. It also encourages good hygiene.
Allow customers to trigger migration.
Charge extra for a long running maintenance free host.
It is good you are hooked into the PCM already. You will experience accidentally antagonistic workloads and the PCM will really help debug those issues.
If I were building a DC, I'd put as many NICs into a host as possible and use SR-IOV to pass the NICs into the guests. The switches should be sized to allow for full speed on all NICs. I know it sounds crazy, but if you design for a typical CRUD serving tree, you are saving a buck while making your software problem 100x harder.
Everything should have enough headroom so it never hits a knee of a contention curve.
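(To make the SR-IOV suggestion above concrete, here's a rough sketch of the usual Linux flow: carve virtual functions out of the physical NIC through sysfs, then hand a VF's PCI address to QEMU via vfio-pci. The interface name, VF count and PCI address are placeholders; it also assumes the IOMMU is enabled and the VF has been bound to the vfio-pci driver.)

```python
from pathlib import Path

PF_IFACE = "enp65s0f0"   # placeholder: the physical NIC (PF)
NUM_VFS = 8              # how many virtual functions to create on it

def create_vfs() -> None:
    """Enable SR-IOV virtual functions on the PF via sysfs (needs root; write 0 first to reset)."""
    Path(f"/sys/class/net/{PF_IFACE}/device/sriov_numvfs").write_text(str(NUM_VFS))

def qemu_args_for_vf(vf_pci_addr: str) -> list[str]:
    """QEMU flags to pass one VF straight through to a guest via VFIO."""
    return ["-device", f"vfio-pci,host={vf_pci_addr}"]

if __name__ == "__main__":
    create_vfs()
    # Placeholder PCI address of one of the newly created VFs:
    print(" ".join(["qemu-system-x86_64", "-enable-kvm", *qemu_args_for_vf("0000:41:02.0")]))
```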
This is our first post about building out data centers. If you have any questions, we're happy to answer them here :)
I thought it was an interesting post, so I tried to add Railway's blog to my RSS reader... but it didn't work. I tried searching the page source for RSS and also found nothing. Eventually, I noticed the RSS icon in the top right, but it's some kind of special button that I can't right click and copy the link from, and Safari prevents me from knowing what the URL is... so I had to open that from Firefox to find it.
Could be worth adding a <link rel="alternate" type="application/rss+xml" ...> tag to the <head> so that RSS readers can autodiscover the feed. A random link I found on Google: https://www.petefreitag.com/blog/rss-autodiscovery/
How do you deal with drive failures? How often does a Railway team member need to visit a DC? What's it like inside?
Everything is dual-redundant. We run RAID, so if a drive fails it's fine; alerting will page on-call, which will trigger remote hands onsite, and we have spares for everything in each datacenter.
How much additional overhead is there for managing the bare-metal vs cloud? Is it mostly fine after the big effort for initial setup?
We built some internal tooling to help manage the hosts. Once a host is onboarded onto it, it's a few button clicks on an internal dashboard to provision a QEMU VM. We made a custom Ansible inventory plugin so we can manage these VMs the same way we do machines on GCP.
The host runs a custom daemon that programs FRR (an OSS routing stack) so that it advertises addresses assigned to a VM to the rest of the cluster via BGP. So zero config of network switches etc. is required after initial setup.
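(Roughly - and this is a sketch of the idea, not their daemon - the "advertise a VM's address via BGP" step can be as small as putting the address on the host's loopback and telling FRR to originate it. The ASN and address below are placeholders, and it assumes FRR is already peering with the top-of-rack switches.)

```python
import subprocess

ASN = 65001                  # placeholder private ASN for this host
VM_ADDR = "10.100.3.17/32"   # placeholder address assigned to a VM

def sh(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

def announce(addr: str) -> None:
    """Put the VM address on the loopback and have FRR originate it over BGP."""
    sh("ip", "addr", "replace", addr, "dev", "lo")
    sh("vtysh",
       "-c", "configure terminal",
       "-c", f"router bgp {ASN}",
       "-c", "address-family ipv4 unicast",
       "-c", f"network {addr}")

def withdraw(addr: str) -> None:
    """Reverse of announce(): drop the route, then the address, when the VM goes away."""
    sh("vtysh",
       "-c", "configure terminal",
       "-c", f"router bgp {ASN}",
       "-c", "address-family ipv4 unicast",
       "-c", f"no network {addr}")
    sh("ip", "addr", "del", addr, "dev", "lo")

if __name__ == "__main__":
    announce(VM_ADDR)
```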
We'll blog about this system at some point in the coming months.
This is how you build a dominant company. Good for you ignoring the whiny conventional wisdom that keeps people stuck in the hyperscalers.
You’re an infrastructure company. You gotta own the metal that you sell or you’re just a middleman for the cloud, and always at risk of being undercut by a competitor on bare metal with $0 egress fees.
Colocation and peering for $0 egress is why Cloudflare has a free tier, and why new entrants could never compete with them by reselling cloud services.
In fact, for hyperscalers, bandwidth price gouging isn’t just a profit center; it’s a moat. It ensures you can’t build the next AWS on AWS, and creates an entirely new (and strategically weaker) market segment of “PaaS” on top of “IaaS.”
Yup. Bingo. We've had to pass the cloud egress costs onto our customers, which sucks.
With this, it'll mean we can slash that in half, lower storage costs, remove "per seat" pricing, etc
Super exciting
If you’re using 7280-SR3 switches, they’re certainly a fine choice. However, have you considered the 7280-CR3(K) range? They're much better $/Gbps and more relevant edge interfaces.
At this scale, why did you opt for a spine-and-leaf design with 25G switches and a dedicated 32×100G spine? Did you explore just collapsing it and using 1-2 32×100G switches per rack, then employing 100G>4×25G AOC breakout cables and direct 100G links for inter-switch connections and storage servers?
Have you also thought about creating a record on PeeringDB? https://www.peeringdb.com/net/400940
By the way, I’m not convinced I’d recommend a UniFi Pro for anything, even for out-of-band management.
All valid points - and our ideas for Gen 2 sound directionally similar - but those are at the crayon-drawing stage.
When we started, we didn't have much of an idea of what the rack needed to look like, so we chose a combination of things we thought would let us pull it off. We're mostly software and systems folks, and there's a dearth of information out there on what to do. Vendors tend to gravitate towards selling BGP+EVPN+VXLAN or whatever "enterprise" reference designs, so we kinda YOLO'ed Gen 1. We decided to spend extra money wherever it got us to a working setup sooner. When the clock is in cloud spend, there's uh... lots of opportunity cost :D.
A lot of the chipset and switch choices were bets, and we had to pick and choose what we gambled on - and what we could get our hands on. The main bets this round were eBGP to the hosts with BGP unnumbered, and SONiC switches - this lets us do a lot of networking with our existing IPv6/WireGuard/eBPF overlay and gives us a Debian-based switch OS + FRR (so fewer things to learn). And of course figuring out how to operationalise the install process and get stuff running on the hardware as soon as possible.
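(For anyone wondering what "eBGP to the hosts with BGP unnumbered" looks like in practice: FRR lets you peer per interface over IPv6 link-local addresses, so there's no per-link addressing plan to maintain. A minimal sketch below that just renders such a config - the ASN, router-id and interface names are placeholders, not Railway's setup.)

```python
ASN = 65101                   # placeholder ASN for one leaf/host
UPLINKS = ["swp1", "swp2"]    # placeholder interfaces facing the fabric

def frr_bgp_unnumbered(asn: int, uplinks: list[str]) -> str:
    """Render a minimal FRR 'BGP unnumbered' stanza: one peer per interface, remote-as external."""
    lines = [f"router bgp {asn}", " bgp router-id 10.0.0.1"]
    for ifname in uplinks:
        # Interface-based peering: FRR learns the neighbor's link-local address
        # from IPv6 router advertisements, so no per-link /30s are needed.
        lines.append(f" neighbor {ifname} interface remote-as external")
    lines += [
        " address-family ipv4 unicast",
        "  redistribute connected",
        " exit-address-family",
    ]
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print(frr_bgp_unnumbered(ASN, UPLINKS))
```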
Now that we've got a working design, we'll start iterating a bit more on the hardware choices and network design. I'd love for us to write about it when we get through it. Plus I think we owe the internet a rant on networking in general.
Edit: Also, we don't use UniFi Pro / Ubiquiti gear anywhere?
The date and time durations given seem a bit confusing to me...
"we kicked off a Railway Metal project last year. Nine months later we were live with the first site in California".
seems inconsistent with:
"From kicking off the Railway Metal project in October last-year, it took us five long months to get the first servers plugged in"
The article was posted today (Jan 2025), was it maybe originally written last year and the project has been going on for more than a year, and they mean that the Railway Metal project actually started in 2023?
ah that's my bad - I wrote this in Dec, we only published in Jan. Obv. missed updating that.
Timeline wise:
- We decided to go for it and spend the $$$ in Oct '23
- Convos/planning started ~ Jan '24
- Picked the vendors we wanted by ~ Feb/Mar '24
- Lead times etc. meant everything was ready for us to go fit the first gear, mostly by ourselves, at the start of May (that's the 5 months)
- We did the "proper" re-install around June, followed closely by the second site in ~ Sep, around when we started letting our users on it as an open beta
- Sep-Dec we just doubled down on refining software/automation and process while building out successive installs
Lead times can be mind-numbing. We have certain switches from Arista that have a 3-6 month lead time. Servers are built to order, so again 2+ months depending on stock. And obviously holidays mean a lot of stuff shuts down around December.
Sometimes you can swap stuff around to get better lead-times, but then the operational complexity explodes because you have this slightly different component at this one site.
I used to be an EEE, and I thought the supply chain there was bad. But with DCs I think it's sometimes worse, because you don't directly control some parts of your BoM/supply chain (especially with build-to-order servers).
Was really hoping this was actually about building your own data center. Our town doesn't have a data center; we need to go an hour south or an hour north. The building that a past failed data center was in (which doesn't bode well for a data center in town, eh?) is up for lease, and I'm tempted.
But, I'd need to start off small, probably per-cabinet UPSes and transfer switches, smaller generators. I've built up cabinets and cages before, but never built up the exterior infrastructure.
Love these kinds of posts. Tried railway for the first time a few days ago. It was a delightful experience. Great work!
I would be super interested to know how this stuff scales physically - how much hardware ended up in that cage (maybe in Cloud-equivalent terms), and how much does it cost to run now that it's set up?
What brand of servers was used?
Yes, considering the importance of the power draw, I wondered if ARM servers were used.
oh yes we want to; I even priced a couple out. Most of the SKUs I found were pretty old, and we couldn't find anything compelling to risk deploying at the scale we wanted. It's on the wishlist, and if the right hardware comes along, we'll rack it up even as a bet. We maintain Nixpacks (https://nixpacks.com/docs/getting-started), so for most of our users we could rebuild most of their apps for ARM seamlessly - in fact we mostly develop our build systems on ARM (because MacBooks). One day.
> We maintain Nixpacks
I _knew_ Railway sounded familiar.
Out of curiosity: is nix used to deploy the servers?
Looks like Supermicro.
Winner winner chicken dinner!
Awesome!! Hope to see more companies go this route. I had the pleasure of doing something similar for a company (at a much smaller scale though).
It was my first job out of university. I will never forget the awesome experience of walking into the datacenter and starting to plug in cables and stuff.
I remember talking to Jake a couple of years ago when they were looking for someone with a storage background. Cool dude, and cool set of people. Really chuffed to see them doing what they believe in.
Thanks dude <3. We are indeed doing the thing :D
Can anyone recommend some engineering reading for building and running DC infrastructure?
We didn't find many good up-to-date resources online on the hardware side of things - kinda why we wanted to write about it. The networking aspect was the most mystical - I highly recommend "BGP in the datacenter" by Dinesh Dutt on that (I think it's available for free via NVidia). Our design is heavily influenced by the ideas discussed there.
What was the background of your team going into this project? Did you hire specialists for it (whether full time or consultants)?
We talked to a few - I think they're called MSPs? We weren't super impressed, so we decided to YOLO it. There are probably great outfits out there, but it's hard to find them through the noise. We're mostly software and systems folks, but Railway is an infrastructure company, so we need to own stuff down to the cage nut - we owe it to our users. All engineering, project management and procurement is in-house.
We're lucky to have a few great distributors/manufacturers who help us pick the right gear. But we learnt a lot.
We've found a lot of value in getting a broker in to source our transit though.
My personal (and potentially misguided) hot take is that most of the bare-metal world is stuck in the early 2000s, and the only companies doing anything interesting here are the likes of AWS, Google and Meta. So the only way to innovate is to stumble around, escape the norms and experiment.
First time checking out railway product- it seems like a “low code” and visual way to define and operate infrastructure?
Like, if Terraform had a nice UI?
Kinda. It's like if you had everything from an infra stack but didn't need to manage it (Kubernetes for resilience, Argo for rollouts, Terraform for safely evolving infrastructure, DataDog for observability)
If you've heard of serverless, this is one step farther; infraless
Give us your code; we will spin it up, keep it up, and automate rollouts, service discovery, cluster scaling, monitoring, etc.
for additional social proof
I've been using railway since 2022 and it's been great. I host all my personal projects there and I can go from code to a url by copy-pasting my single dockerfile around.
weird to think my final internship was running on one of these things. thanks for all the free minutes! it was a nice experience
y’all really need to open source that racking modeling tool, that would save sooooo many people so much time
More to learn from the failures than the blog haha. It tells you what the risks are with a colocation facility. There really isn't any text on how to do this stuff. The last time I wanted to build out a rack, there weren't even any instructions on how to do cable management well. It's sort of learned by apprenticeship and practice.
I'm surprised you guys are building new!
Tons of Colocation available nearly everywhere in the US, and in the KCMO area, there are even a few dark datacenters available for sale!
Cool project nonetheless. Bit jealous actually :P
They're not building new, though—the post is about renting a cage in a datacenter.
The requirements end up being pretty specific, based on workloads/power draw/supply chain
So, while we could have bought something off the shelf, that would have been suboptimal from a specs perspective. Plus then we'd have to source supply chain etc.
By owning not just the servers but the whole supply chain, we have redundancy at every layer, from the machine, to the parts on site (for failures), to the supply chain (refilling those spare parts/expanding capacity/etc)
Can you share a list of dark datacenters that are for sale? They sound interesting as a business.
More info on the cost comparison between all the options would be interesting
We pulled some cost stuff out of the post in final review because we weren't sure it was interesting ... we'll bring it back for a future post