Powering StreetGeek 0x00: The Network

I’m going to be working on a new series of blog posts with Firstyear, in which we’re going to discuss a lot of StreetGeek‘s network and server infrastructure. Information about this is around the place (or you can ask us), but we wanted to talk more publicly about what goes into making a “medium” LAN party work. I say “medium”, which is how I’d personally group any LAN with 100 to 1,000 attendees. As a contrast, I’d call an event like wiLANga with about 50 people “small”, and the major (commercial) US/EU LANs like QuakeCon with > 7,000 people “large”.

StreetGeek itself has seen many major changes over the years, as it grew from a LAN of about 20 people hanging out at The University of Adelaide to running monthly events with over 100 attendees at Colonel Light Gardens Uniting Church, running the LAN area at AVCON, and leading SAGAfest in 2009. With this, it’s had to adapt to these new, larger environments and invest in better hardware.

Here’s what StreetGeek’s network looks like, showing only the switches.

StreetGeek’s backbone is a Cisco-Linksys SGE2010: a 48-port managed, full-fabric gigabit switch with 4 SFP ports. It replaced a previous Alloy 48-port managed gigabit switch, which didn’t have as many features and had the nasty habit of overheating under the load we put it through. This switch lives on the “mu table” (formerly known as the B/C tables).

Coming off that are three Cisco-Linksys SRW2024 switches. They’re 24-port managed, full-fabric gigabit switches, each connected to the backbone by two copper links using link aggregation. This means there is 2gbit/s of full-duplex connectivity between each of these switches and the backbone. These switches live on the D, E and G tables, where together they pushed about 7 TiB of traffic during the 10.06 event, with 1.1GiB/s peaks from clients on those tables (that’s about 10gbit/s). While that amount may seem impressive, in reality it’s really not - the switches could push 8.3GiB/s (72gbit/s) of traffic, which if running flat out for an entire event would add up to about 758.6 TiB.
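
If you want to check those figures yourself, here’s a rough back-of-the-envelope sketch in Python. The 26-hour event length is my assumption for illustration (it’s roughly what the total above implies); the port counts and traffic figures come straight from the paragraph above, give or take rounding.

    # Back-of-the-envelope figures for the three SRW2024s on the D, E and G tables.
    # The 26-hour event length is an assumption for illustration; the port counts
    # and traffic figures come from the text above.

    ports_per_switch = 24                 # gigabit ports, full fabric
    switches = 3
    fabric_gbit = ports_per_switch * switches              # 72 gbit/s across the three switches
    fabric_gib_per_sec = fabric_gbit * 1e9 / 8 / 2 ** 30   # roughly 8.3 GiB/s

    event_hours = 26.0                    # assumed event length
    flat_out_tib = fabric_gib_per_sec * event_hours * 3600 / 1024
    print("Flat out for the whole event: %.0f TiB" % flat_out_tib)

    actual_tib = 7.0                      # what D, E and G actually pushed at 10.06
    peak_gib_per_sec = 1.1                # peak client traffic on those tables
    print("Actual traffic: %.1f%% of theoretical capacity"
          % (100 * actual_tib / flat_out_tib))
    print("Peak rate: %.0f%% of theoretical capacity"
          % (100 * peak_gib_per_sec / fabric_gib_per_sec))

In other words, even the 1.1GiB/s peaks only use around 13% of what the fabric could theoretically move, and the event total is well under 1%.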

On the overflow tables, and in the console area, there are Cisco-Linksys SR2024 switches. They’re the unmanaged counterpart of the SRW2024, so they don’t support link aggregation. These are connected at various points around the LAN, generally where there aren’t enough clients to justify the extra bandwidth.

At the moment, Firstyear has lent the LAN the use of his Apple Airport Extreme access point. It’s an 802.11n access point, and one of the few on the market that can do 2.4GHz and 5GHz at the same time (thereby effectively doubling the available spectrum that a single access point can occupy). It easily handles 50 clients on the network, and covers most of the venue. We’re planning to replace this in the future by having the LAN purchase two of its own Airport Extremes: one for the main hall, and one for the console area.

We’ve tested these access points by having gm run denial of service attacks against them with thousands of simulated clients, and they held up where other routers would simply crash. Previously, we used three D-Link 802.11n access points to service the network; however, their reliability under load was absolutely atrocious.

Something that may stand out here is that the console area has a very poor layout. In the end, the console area doesn’t push enough traffic to justify additional cabling to do things “properly”. The only current-generation console with gigabit ethernet is the Playstation 3, and nothing on the Playstation 3 actually pushes that kind of bandwidth. Typically, LAN games use a megabit or two per second, which is just tiny. The largest amount of bandwidth is used by the RetroLAN PCs, where 10 machines boot from an iSCSI virtual disk - and even then, that’s only used for the operating system.
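
To put some rough numbers on that, here’s a quick sketch of the console area’s bandwidth budget. The console count and the iSCSI boot burst are assumptions for illustration, not measured figures; the per-game number comes from the paragraph above.

    # Rough bandwidth budget for the console area. The console count and the
    # iSCSI boot burst are assumptions for illustration; the per-game figure
    # comes from the text above.

    consoles = 30                    # assumed number of consoles in the area
    mbit_per_game = 2.0              # LAN games use a megabit or two per second

    retrolan_pcs = 10                # boot from an iSCSI virtual disk
    mbit_per_boot = 50.0             # assumed read burst per machine while the OS loads

    steady_state_mbit = consoles * mbit_per_game
    worst_case_mbit = steady_state_mbit + retrolan_pcs * mbit_per_boot

    uplink_mbit = 1000.0             # a single gigabit run back to the backbone
    print("Steady state: %d mbit/s (%.0f%% of one gigabit uplink)"
          % (steady_state_mbit, 100 * steady_state_mbit / uplink_mbit))
    print("Everything booting at once: %d mbit/s (%.0f%% of one gigabit uplink)"
          % (worst_case_mbit, 100 * worst_case_mbit / uplink_mbit))

Even under those fairly generous assumptions, a single gigabit run back to the backbone has plenty of headroom, which is why the daisy-chained unmanaged switches are good enough there.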

That pretty much covers our layout. We run a lot of services on the network itself, which will be covered in later editions of this series. Most of them run on servers in the office, or on servers in the admin area of the LAN hanging off the backbone.