Data in, data out. I’d always found watching the pattern of data flow relaxing. Once you knew what to look for, you could kind of get a sense of what it was. Not specifically, of course, but between file size and transfer rate you could make an educated guess — a high-res movie, downloaded from somewhere nearby; a bunch of image files, from a server further away; an album of mp3s, from someone on dial-up. It wasn’t so clear on the Bucket, since mostly we were our own bandwidth bottleneck. But I watched while a couple of movie-size files moved around and a lot of small uploads trickled in — probably desktop backups of documents.
It reminded me of pictures I’d seen of the Earth at night, taken from orbit. The patterns of lights could tell a kind of story — about where there was activity, and when. Metadata, the story about the story. I liked that kind of thing.
If I hadn’t been mesmerized by the packets, I probably wouldn’t have been able to pinpoint the time it happened, but I was watching the normal peaks and valleys when, right before my eyes, the transfer rate spiked. And it stayed up, much higher than I’d seen it on these servers before. I was pretty sure the system could handle it, but it was odd, so I pulled up the logs. There was a clear increase, but not from a single user like I’d expected. The connections were coming from all over.
I frowned. It wasn’t an alarming amount of traffic by most standards — certainly on a normal, contemporary database server this wouldn’t even rate as unusual. But for us… Could it be the start of a DDoS?
A distributed denial-of-service attack was a standard black-hat hacker’s tool for causing mischief and mayhem. Using many machines, usually the personal computers of innocent victims of malware, the attackers flooded the target with traffic, aiming to cripple it by overwhelming its bandwidth and resources. Our entire business model relied on our servers being accessible to our clients — if we were down, we were losing business. Being bandwidth-challenged, we were particularly vulnerable to this kind of attack, though our distributed server network was designed to help mitigate that possibility. I wished I knew what was going on at the other server nodes.
In my orientation I’d been told that I was expected to work independently. The strength of RRD’s system was that each set of servers ran as if it were the only one; we were all each other’s failsafes. When my node was down, the others covered. If it were a deliberate attack, and each of us were overwhelmed, well, that was the point of a DDoS. Knowing whether it was happening all over didn’t really change my course of action; I just wanted to know.
That got me thinking. I looked at the data transfer graph for the last hour. The spike was obvious, and it had plateaued now, which was somewhat good news. I did a little mental math — with four nodes each handling an average of n, if one failed, the remaining three would each absorb a third of its load, an increase of roughly .33n apiece. Not enough to account for the more than doubling I was seeing. But if they’d all failed…
I made a decision. I sent a short email to headquarters describing the spike, making it clear that I had it under control for now, but that it was concerning. I’d rather catch hell for being overcautious than let the whole enchilada go bad because I didn’t want to bother anyone.
I watched a while longer, but the traffic was stable. There wasn’t anything I could do unless things changed, so I figured I might as well go back above decks and check out the view. Hopefully the open ocean would take my mind off it for a while.