Got a warning for my blog going over 100GB in bandwidth this month… which sounded incredibly unusual. My blog is text and a couple images and I haven’t posted anything to it in ages… like how would that even be possible?

Turns out it’s possible when you have crawlers going apeshit on your server. Am I even reading this right? 12,181 with 181 zeros at the end for ‘Unknown robot’? This is actually bonkers.

Edit: As Thunraz points out below, there’s a footnote that reads “Numbers after + are successful hits on ‘robots.txt’ files” and not scientific notation.

Edit 2: After doing more digging, the culprit is a post where I shared a few wallpapers for download. The bots have been downloading these wallpapers over and over, using 100GB of bandwidth in the first 12 days of November. That’s when my account was suspended for exceeding bandwidth (an artificial limit I put in place a while back and forgot about…), which is also why the ‘last visit’ for all the bots is November 12th.

  • slazer2au@lemmy.world · 72 points · 19 days ago

    AI scrapers are the new internet DDoS.

    Might want to throw something in front of your blog to ward them off, like Anubis or a tarpit.
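
    For the tarpit option, the idea is just to keep a misbehaving crawler’s connection open while feeding it bytes as slowly as possible. A minimal sketch in Python (port, drip rate, and the `/trap` link are all arbitrary; real tarpits do much more):

```python
import socket
import threading
import time

def drip(conn, delay=2.0, chunks=None):
    """Send an endless trickle of link-bearing HTML, one tiny chunk per
    `delay` seconds, so the crawler sits and waits instead of re-requesting."""
    sent = 0
    try:
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n")
        while chunks is None or sent < chunks:
            conn.sendall(b"<a href='/trap'>.</a>\n")
            sent += 1
            time.sleep(delay)
    except OSError:
        pass  # crawler gave up
    finally:
        conn.close()

def tarpit(host="127.0.0.1", port=8081):
    """Accept connections and hand each one to a dripping thread."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=drip, args=(conn,), daemon=True).start()
```

    You’d point the hammered URL at this port (via a reverse-proxy rule) rather than run it as the main site.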

  • dual_sport_dork 🐧🗡️@lemmy.world · 53 points · 19 days ago

    I run an ecommerce site and lately they’ve latched onto one very specific product with attempts to hammer its page and any of those branching from it for no readily identifiable reason, at the rate of several hundred times every second. I found out pretty quickly, because suddenly our view stats for that page in particular rocketed into the millions.

    I had to insert a little script to IP ban these fuckers, which kicks in if I see a malformed user agent string or if you try to hit this page specifically more than 100 times. Through this I discovered that the requests are coming from hundreds of thousands of individual random IP addresses, many of which are located in Singapore, Brazil, and India, and mostly resolve down into those owned by local ISPs and cell phone carriers.

    Of course they ignore your robots.txt as well. This smells like some kind of botnet thing to me.
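
    For anyone curious, the logic is roughly this (a hypothetical Python sketch of what’s described above; the 100-hit threshold and the malformed user-agent check come from the comment, while the names, the regex, and the one-hour window are made up):

```python
import re
import time
from collections import defaultdict

# Very loose sanity check: legitimate agents start like "Token/1.0 (...)".
UA_SHAPE = re.compile(r"^[\w.!#$%&'*+^-]+/[\w.]+")

recent = defaultdict(list)   # ip -> timestamps of hits on the hot page
banned = set()

def should_ban(ip, user_agent, path,
               hot_path="/products/EXAMPLE-SKU", limit=100, window=3600):
    """Ban on a malformed User-Agent string, or on hitting the one
    hammered page more than `limit` times per `window` seconds."""
    if ip in banned:
        return True
    if not user_agent or not UA_SHAPE.match(user_agent):
        banned.add(ip)
        return True
    if path == hot_path:
        now = time.time()
        recent[ip] = [t for t in recent[ip] if now - t < window]
        recent[ip].append(now)
        if len(recent[ip]) > limit:
            banned.add(ip)
            return True
    return False
```

    In practice the ban set would feed a firewall rather than live in application memory, but the shape is the same.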

    • panda_abyss@lemmy.ca · 19 points · 19 days ago

      I don’t really get those bots.

      Like, there are bots that are trying to scrape product info, or prices, or scan for quantity fields. But why the hell do some of these bots behave the way they do?

      Do you use Shopify by chance? With Shopify the bots could be scraping the product.json endpoint unless it’s disabled in your theme. Shopify just seems to show the updated at timestamp from the db in their headers+product data, so inventory quantity changes actually result in a timestamp change that can be used to estimate your sales.

      There are companies that do that and sell sales numbers to competitors.

      No idea why they have inventory info on their products table, it’s probably a performance optimization.

      I haven’t really done much scraping work in a while, not since before these new stupid scrapers started proliferating.
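
      For the curious, the mechanism in miniature: Shopify storefronts serve a public `/products.json` unless the theme disables it, and each product carries an `updated_at`. Diffing two snapshots is enough to spot which products changed (the shop URL below is a placeholder, and treating a timestamp bump as a sale is the inference described above, not a guarantee):

```python
import json
import urllib.request

def fetch_products(shop):
    """Fetch the public product list from a Shopify storefront,
    e.g. shop = "https://example-store.myshopify.com" (placeholder)."""
    url = f"{shop}/products.json?limit=250"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["products"]

def changed_ids(snapshot, previous):
    """Products whose updated_at moved between two snapshots; on an
    otherwise-unchanged product that often reflects an inventory change."""
    prev = {p["id"]: p["updated_at"] for p in previous}
    return [p["id"] for p in snapshot
            if p["id"] in prev and p["updated_at"] != prev[p["id"]]]
```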

      • dual_sport_dork 🐧🗡️@lemmy.world · 19 points · 19 days ago

        Negative. Our solution is completely home grown. All artisanal-like, from scratch. I can’t imagine I reveal anything anyone would care about much except product specs, and our inventory and pricing really doesn’t change very frequently.

        Even so, you’d think someone bothering to run a botnet to hound our site would distribute page loads across all of our products, right? Not just one. It’s nonsensical.

        • panda_abyss@lemmy.ca · 7 points · 19 days ago

          Yeah, that’s the kind of weird shit I don’t understand. Someone on the other end is paying for servers and a residential proxy to send that traffic, too. Why?

          • dual_sport_dork 🐧🗡️@lemmy.world · 9 points · 19 days ago

            It doesn’t quite work that way, since the URL is also the model number/SKU which comes from the manufacturer. I suppose I could write an alias for just that product but it would become rather confusing.

            What I did experiment with was temporarily deleting the product altogether for a day or two. (We barely ever sell it. Maybe 1 or 2 units of it a year. This is no great loss in the name of science.) This causes our page to return a 404 when you try to request it. The bots blithely ignored this, and continued attempting to hammer that nonexistent page all the same. Puzzling.

        • Lka1988@lemmy.dbzer0.com · 2 points · 19 days ago

          Could it be a competitor for that particular product? Hired some foreign entity to hit anything related to their own product?

          • dual_sport_dork 🐧🗡️@lemmy.world · 4 points · 19 days ago

            Maybe, but I also carry literally hundreds of other products from that same brand including several that are basically identical with trivial differences, and they’re only picking on that one particular SKU.

  • Thunraz@feddit.org · 43 points · 19 days ago

    It’s 12,181 hits, and the number after the plus sign is robots.txt hits. See the footnote at the bottom of your screenshot.

    • benagain@lemmy.ml (OP) · 17 points · 19 days ago

      Phew, so I’m a dumbass and not reading it right. I wonder how they’ve managed to use 3MB per visit?
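
      (The per-hit figure actually falls straight out of the AWStats numbers, assuming the roughly 39 GB of robot bandwidth visible in the screenshot:)

```python
robot_bytes = 39e9        # approximate robot bandwidth from the screenshot
robot_hits = 12_181       # 'Unknown robot' hits
mb_per_hit = robot_bytes / robot_hits / 1e6
print(f"{mb_per_hit:.1f} MB per hit")   # roughly 3.2 MB
```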

        • benagain@lemmy.ml (OP) · 7 points · 19 days ago · edited

          12,000 visits, with 181 of those to the robots.txt file makes way, way more sense. The ‘Not viewed traffic’ adds up to 136,957 too - so I should have figured it out sooner.

          I couldn’t wrap my head around how large the number was and how many visits that would actually entail to reach that number in 25 days. Turns out that would be roughly 5.64 quinquinquagintillion visits per nanosecond. Call it a hunch, but I suspect my server might not handle that.

      • EarMaster@lemmy.world · 4 points · 18 days ago · edited

        The traffic is really suspicious. Do you by any chance have a health or heartbeat endpoint that provides continuous output? That would explain why so few hits cause so much traffic.

        • benagain@lemmy.ml (OP) · 2 points · 19 days ago

          It’s super weird for sure. I’m not sure how the bots have managed to use so much more bandwidth with only 30k more hits than regular traffic; I guess they probably don’t rely on any caching and fetch each page from scratch?

          Still going through my stats, but it doesn’t look like I’ve gotten much traffic via any API endpoint (running WordPress). I had a few wallpapers available for download and it looks like for whatever reason the bots have latched onto those.

  • pendel@feddit.org · 12 points · 18 days ago

    I had to pull an all-nighter to fix an unoptimized query. I had just launched a new website with barely any visitors and hadn’t implemented caching yet for something I figured no one would use anyway, but a bot found it and broke my entire DB by hitting the endpoint again and again until nothing worked anymore.
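
    The usual band-aid until the query itself is fixed is to memoize the endpoint’s result for a short TTL, so repeated bot hits reach the database at most once per interval. A generic sketch (the decorator and names are hypothetical, not any framework’s API):

```python
import functools
import time

def ttl_cache(seconds):
    """Cache a function's result per argument tuple for `seconds`,
    so a hammered endpoint runs its query at most once per interval."""
    def deco(fn):
        state = {}   # args -> (timestamp, result)
        @functools.wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            if args in state and now - state[args][0] < seconds:
                return state[args][1]
            result = fn(*args)
            state[args] = (now, result)
            return result
        return wrapper
    return deco
```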

  • WolfLink@sh.itjust.works · 8 points (1 down) · 19 days ago

    This is why I use CloudFlare. They block the worst and cache for me to reduce the load of the rest. It’s not 100% but it does help.

    • irmadlad@lemmy.world · 4 points · 19 days ago

      LOL Someone took exception to your use of Cloudflare. Hilarious. Anyways, yeah, what Cloudflare doesn’t get, pfSense does.

  • ohshit604@sh.itjust.works · 5 points · 19 days ago

    I just geo-restrict my server to my country; for certain services I’ll run an IP blacklist and only whitelist the few known networks.

    Works okay I suppose; it kills the need for a WAF, and I haven’t had any issues with it.
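
    The allow-list check itself is a one-liner with the standard library (the CIDR blocks below are documentation ranges, stand-ins for whatever networks you’d actually whitelist):

```python
import ipaddress

# Hypothetical allow-list: your country's / your own known-good networks.
ALLOWED = [ipaddress.ip_network(cidr)
           for cidr in ("203.0.113.0/24", "198.51.100.0/24")]

def is_allowed(ip: str) -> bool:
    """True if `ip` falls inside any whitelisted network."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in ALLOWED)
```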

    • benagain@lemmy.ml (OP) · 3 points · 19 days ago

      It’s a mix; I put two screenshots together. On the left is my monthly bandwidth usage from cPanel, on the right is AWStats (though I hid some sections so the Robots/Spiders section was closer to the top).

        • benagain@lemmy.ml (OP) · 1 point · 18 days ago

          I think they’re winding down the project unfortunately, so I might have to get with the times…

          • [object Object]@lemmy.world · 1 point · 18 days ago · edited

            I mean, I thought it was long dead. It’s twenty-five years old, and the web has changed quite a bit in that time. No one uses Perl anymore, for starters. I used Open Web Analytics, Webalizer, or somesuch by 2008 or so. I remember Webalizer being snappy as heck.

            I tinkered with log analysis myself back then, peeping into the source of AWStats and others. Learned that a humongous regexp with like two hundred alternative matches for the user-agent string was way faster than trying to match them individually — which of course makes sense seeing as regexps work as state machines in a sort of very specialized VM. My first attempts, in comparison, were laughably naive and slow. Ah, what a time.
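
            The trick in miniature (Python rather than Perl, and the bot names are fabricated placeholders): one compiled alternation scans the string once, instead of two hundred separate `search` calls each paying compilation-lookup and call overhead.

```python
import re

# ~200 alternatives folded into one pattern (names are placeholders).
BOTS = [f"Fake{i}Bot" for i in range(199)] + ["SemrushBot"]
SEPARATE = [re.compile(name) for name in BOTS]
COMBINED = re.compile("|".join(BOTS))        # one pattern, one pass

def match_separately(ua):
    return any(p.search(ua) for p in SEPARATE)   # up to 200 scans

def match_combined(ua):
    return COMBINED.search(ua) is not None       # a single scan
```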

            Sure enough, working on a high-traffic site taught me that it’s way more efficient to prepare data for reading at the moment of change instead of when it’s being read, which translates to analyzing visits on the fly and writing to an optimized store like Elasticsearch.

  • hdsrob@lemmy.world · 3 points · 18 days ago

    Had the same thing happen on one of my servers. Got up one day a few weeks ago and the server was suspended (luckily the hosting provider unsuspended it for me quickly).

    It’s mostly business sites, but we do have an old personal blog on there with a lot of travel pictures on it, and 4 or 5 AI bots were just pounding it. Went from 300GB per month average to 5TB on August, and 10/11 TB in September and October.

  • ChaoticNeutralCzech@feddit.org · 3 points · 17 days ago · edited

    I don’t know what “12,181+181” means (edit: thanks @Thunraz@feddit.org, see Edit 1), but it’s absolutely not 1.2181 × 10^185. That many requests can’t be made within the 39 × 10^9 bytes of bandwidth; in fact, they exceed the number of atoms on Earth times its age in microseconds (that’s close to 10^70). Also, “0+57” in another row would be dubious exponential notation: the exponent should be 0 (or omitted) if the mantissa (and thus the value represented) is 0.

    • benagain@lemmy.ml (OP) · 1 point · 15 days ago

      My little brain broke when I started trying to figure out how big the number was… thanks for breaking it down even more intuitively. Yeah, it is way too large to have been correct!