Most C2 redirectors are set up as simple reverse proxies: Apache or Nginx sitting in front of the C2 server, forwarding everything that arrives. This works right up until a scanner finds your endpoint, a defender starts poking at it, or an AV sandbox replays captured traffic against your infrastructure.

Your redirector is the one component that faces the open internet. Everything hits it: legitimate implant callbacks, automated scanners, security researchers, sandbox detonations, and occasionally a defender investigating something suspicious. If all of that reaches your C2 server, you've got problems.

This post covers the filtering layers I put on redirectors, why each one matters, and why the order you apply them in makes a difference.

Why Filtering Order Matters

What you check matters, but the order you check it in matters more.

A request that announces itself as Nessus or python-requests in its User-Agent header doesn't need its URI path validated, its headers inspected, or its source IP checked against a blocklist. You already know it's not your implant. Catch it early and drop it before wasting cycles on deeper inspection.

Reject the obvious stuff fast, then progressively validate the things that require more context. Complex rules (CIDR lookups, cookie validation, custom header checks) take more processing time and more configuration to get right. If a scanner trips the User-Agent check in the first rule, none of that downstream logic runs.

One note before we get into the rules: every failed check on your redirector should serve a decoy page, not return an error code. A 403 Forbidden tells a defender there's filtering logic. A 502 tells them something is behind the redirector. A 200 with a generic website tells them nothing. All the examples below use decoy redirects for this reason.

It also matters for debugging. When something breaks mid-engagement and your implant stops checking in, you want to know which layer rejected it. If your rules are a tangled mess with no clear order, good luck tracing the problem at 2 AM.

The Filtering Stack

Here's the order I use, from first check to last.

Layer 1: Network Access Controls

Before a request even reaches your web server, network-level controls should be doing the first pass. In AWS, this means Security Groups.

For a C2 redirector:

  - Inbound: TCP 443, from anywhere (or restricted to target or CDN ranges)
  - Outbound: TCP to the C2 server's IP on its listener port

That's it. No SSH from the internet. No unrestricted outbound. The redirector talks to the internet on 443 and to your C2 server on whatever port your listener runs on. Everything else is denied by default.

One thing I've changed over the years: I don't even allow SSH inbound on the redirector anymore. Initial provisioning happens through Ansible or cloud-init, and ongoing access goes through a VPN or Tailscale. If a defender pulls the redirector's Security Group rules, they see a box that accepts HTTPS and nothing else. That's what a legitimate web server looks like.

The outbound restriction matters more than you'd think. If your redirector gets compromised or a defender gains access to it, restricted egress means they can't easily pivot or exfiltrate from it. They're stuck on a box that can only talk to one IP on one port.
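
Provisioning this takes only a couple of API calls. A minimal sketch with the AWS CLI, using placeholder group IDs, addresses, and ports:

# Placeholder IDs and addresses -- adjust to your environment
# Inbound: HTTPS from anywhere (or swap in target/CDN ranges)
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 --cidr 0.0.0.0/0

# Remove the default allow-all egress rule
aws ec2 revoke-security-group-egress \
    --group-id sg-0123456789abcdef0 \
    --ip-permissions 'IpProtocol=-1,IpRanges=[{CidrIp=0.0.0.0/0}]'

# Outbound: only the C2 listener (placeholder IP and port)
aws ec2 authorize-security-group-egress \
    --group-id sg-0123456789abcdef0 \
    --ip-permissions 'IpProtocol=tcp,FromPort=8443,ToPort=8443,IpRanges=[{CidrIp=203.0.113.10/32}]'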

Layer 2: User-Agent Filtering

This is your first application-layer check and it catches the most noise. Automated scanners, security tools, and lazy scripts all announce themselves in the User-Agent header.

# Block known scanners and tools
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (curl|wget|nikto|nmap|sqlmap|python-requests|ruby|perl|masscan|zgrab|nessus) [NC]
RewriteRule .* /var/www/html/decoy/index.html [L]

This one rule eliminates a huge percentage of junk traffic. Most scanners don't bother spoofing their User-Agent because they're scanning millions of hosts and don't care about stealth.

But User-Agent filtering alone isn't enough. A defender who's actively investigating your infrastructure will use a browser or spoof a legitimate User-Agent. This layer catches automated noise, not targeted investigation.

The next step is positive validation: only allow the exact User-Agent your implant sends.

# Set environment variable for matching User-Agent
SetEnvIfNoCase User-Agent "^Mozilla/5\.0 \(Windows NT 10\.0; Win64; x64\) AppleWebKit/537\.36 \(KHTML, like Gecko\) Chrome/9[0-9]\.0\.[0-9]+\.[0-9]+ Safari/537\.36$" valid_agent

# Block empty or non-matching User-Agents
RewriteCond %{HTTP_USER_AGENT} ^$ [OR]
RewriteCond %{ENV:valid_agent} !=1
RewriteRule .* /var/www/html/decoy/index.html [L]

This needs to match your malleable C2 profile exactly. If your profile specifies a Chrome 96 User-Agent and your filter accepts any Chrome 90-99 build, as the pattern above does, you're fine. If your profile uses a Firefox User-Agent and your filter only allows Chrome patterns, you'll block your own implant.

I've seen this happen in the field. Team deploys a redirector, copies filtering rules from a previous engagement, and the User-Agent pattern doesn't match the new profile. Implants can't check in, operators panic, and someone ends up disabling all filtering to "just get it working."

Layer 3: HTTP Method Validation

Most C2 frameworks only use GET and POST. If someone is sending DELETE, PUT, PATCH, or OPTIONS to your redirector, it's not your implant.

RewriteCond %{REQUEST_METHOD} !^(GET|POST)$ [NC]
RewriteRule .* /var/www/html/decoy/index.html [L]

Simple, fast, and cuts out another chunk of scanner noise. Web vulnerability scanners in particular love sending unusual HTTP methods to see how the server responds.

Layer 4: URI Path Whitelisting

Your C2 profile defines specific URI paths for callbacks. /api/v3/session, /images/logo.png, /css/style.css, whatever you've configured. Anything that doesn't match those paths isn't your implant.

# Only allow paths defined in the C2 profile
RewriteCond %{REQUEST_URI} !^/(api/v3/session|images/[^/]+\.(jpg|png)|css/style\.css|js/main\.js)$ [NC]
RewriteRule .* /var/www/html/decoy/index.html [L]

Everything else gets the decoy page. Clone a generic corporate site, a blog, or a parking page. When a defender manually browses to your redirector's IP, they see a real website. Not an error page, not a blank page, not an Apache default page. A real site that makes them move on.
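
If you need a decoy quickly, mirroring a generic template or a site you control works. A rough sketch with wget (example.com is a placeholder):

# Mirror a benign site into the decoy docroot (placeholder URL);
# wget nests the copy under a hostname directory, so move it up afterwards
wget --mirror --convert-links --adjust-extension --page-requisites \
    --no-parent -P /var/www/html/decoy/ https://example.com/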

Layer 5: HTTP Header Validation

Legitimate browsers send a predictable set of headers. Your implant, if properly configured through a malleable profile, should too. Validate the ones that matter:

# Require expected headers
RewriteCond %{HTTP:Accept-Language} !^en-US [NC,OR]
RewriteCond %{HTTP:Accept-Encoding} !^gzip [NC,OR]
RewriteCond %{HTTP:Connection} !^keep-alive [NC]
RewriteRule .* /var/www/html/decoy/index.html [L]

You can also strip headers that leak information about your redirector:

<IfModule mod_headers.c>
    Header always unset Server
    Header always unset X-Powered-By
</IfModule>

# mod_headers alone usually can't drop Apache's own Server header;
# minimize it as well (full removal needs mod_security)
ServerTokens Prod
ServerSignature Off

Strip or minimize every identifying header. Your redirector should reveal as little as possible about what software it's running.

Layer 6: Source IP and CIDR Blocking

Block IP ranges you know aren't your implants: AV/EDR sandbox ranges, known scanner networks, and cloud provider ranges you're not operating in.

# Load blocked network list
RewriteMap blocked-networks txt:/etc/apache2/blocked-networks.txt
RewriteCond ${blocked-networks:%{REMOTE_ADDR}|NOT-FOUND} !NOT-FOUND [NC]
RewriteRule .* /var/www/html/decoy/index.html [L]
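
One caveat: a txt: map does exact-key lookups, so it handles individual addresses but not ranges. For actual CIDR matching on Apache 2.4, an ap_expr condition with the -R operator works; a sketch with placeholder ranges:

# Example CIDRs only -- replace with the sandbox/scanner ranges you care about
RewriteCond expr "-R '203.0.113.0/24' || -R '198.51.100.0/24'"
RewriteRule .* /var/www/html/decoy/index.html [L]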

Your blocklist should include:

  - AV and EDR sandbox egress ranges
  - Known internet-wide scanner networks (Shodan, Censys, and similar)
  - Cloud provider ranges you're not operating from

You can also go the other direction and only allow traffic from your target's IP ranges. I've done this on every engagement for years. The concern some operators have is that blocking non-target IPs looks suspicious if a defender inspects the redirector. In practice, I've never had a blue team detect infrastructure because of this. The reduction in attack surface is worth it.
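
Allowlisting is the same ap_expr check negated. A sketch, assuming a placeholder target range:

# Anything outside the target's ranges (placeholder CIDR) gets the decoy
RewriteCond expr "!(-R '192.0.2.0/24')"
RewriteRule .* /var/www/html/decoy/index.html [L]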

If you're deploying behind a CDN like CloudFront, you can restrict your redirector's Security Group to only accept traffic from CloudFront's IP ranges. AWS publishes these ranges and you can automate the updates. This means the only way to reach your redirector is through the CDN, which adds another layer between defenders and your infrastructure.
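
AWS publishes those ranges in ip-ranges.json and also maintains a managed prefix list for CloudFront's origin-facing addresses that a Security Group can reference directly. If you'd rather script the updates yourself, pulling the current prefixes is a one-liner (assumes jq is installed):

# List CloudFront's published IP ranges
curl -s https://ip-ranges.amazonaws.com/ip-ranges.json | \
    jq -r '.prefixes[] | select(.service == "CLOUDFRONT") | .ip_prefix'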

Layer 7: Cookie and Token Validation

This is your final validation layer. Your C2 profile likely generates session cookies or custom tokens with specific formats. Validate them:

# Require valid session cookie format
RewriteCond %{HTTP_COOKIE} !^session=[a-zA-Z0-9]{32}$ [NC]
RewriteRule .* /var/www/html/decoy/index.html [L]

A defender who has somehow spoofed the right User-Agent, sent the right HTTP method, hit the right URI path, included the right headers, and is coming from an allowed IP range still won't have a valid session cookie. This is the last check before traffic reaches the C2 server.

Nginx vs Apache

Everything above uses Apache mod_rewrite because that's what most C2 redirector guides and tools are built around. Nginx can do the same thing with a different syntax:

# Block scanners -- serve decoy
if ($http_user_agent ~* (curl|wget|python|scanner|bot|nikto|nmap)) {
    rewrite ^ /index.html last;
}

# Or block aggressively -- close connection with no response
if ($http_user_agent ~* (nmap|nikto|masscan|zgrab)) {
    return 444;
}

# Only allow C2 paths, proxy to backend
location ~ ^/(api/|auth/|static/) {
    proxy_pass https://c2-server:8443;
    proxy_ssl_verify off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}

# Serve decoy for everything else
location / {
    root /var/www/html/decoy;
    index index.html;
}

Nginx's return 444 is worth noting. It closes the connection without sending any response at all. No status code, no headers, nothing. To the scanner, it looks like the port isn't open. This is more aggressive than returning a 403 or even dropping the packet with a firewall rule, and it's useful for known-bad traffic you want to completely ignore.

Apache mod_rewrite is more flexible for complex conditional logic. Nginx is faster and lighter. For most redirector setups, either works. Pick the one you're more comfortable debugging at 2 AM during an engagement.

Aligning Filters with Your C2 Profile

This is the part that actually breaks operations.

Every filtering rule on your redirector needs to match what your malleable C2 profile generates. The User-Agent pattern, the URI paths, the HTTP methods, the headers, the cookie format. If any of these are mismatched, you either block your own implant or let scanners through.

Here's how to verify alignment before going live:

  1. Deploy the redirector with all filtering rules active
  2. Generate a test beacon/implant with the same profile
  3. Run it and watch the Apache/Nginx logs
  4. Verify the request passes through every filtering layer
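
Before running the real beacon, you can also approximate its request with curl to exercise every layer end to end. A sketch using the example profile values from this post (Chrome 96 User-Agent, /api/v3/session path, 32-character session cookie); substitute your own:

# All values are placeholders from the example profile -- use your own
curl -s --http1.1 -o /tmp/resp.html \
    -A 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36' \
    -H 'Accept-Language: en-US,en;q=0.9' \
    -H 'Accept-Encoding: gzip, deflate' \
    -H 'Connection: keep-alive' \
    -b 'session=0123456789abcdef0123456789abcdef' \
    https://redirector.example.com/api/v3/session

If /tmp/resp.html contains the decoy page instead of C2 response data, one of the layers rejected the request.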

If the beacon can't check in, enable trace logging to figure out which rule is blocking it:

# Enable detailed rewrite logging
LogLevel warn rewrite:trace6

Then watch the error log in real time:

sudo tail -f /var/log/apache2/error.log | grep rewrite

The trace output shows you exactly which condition matched or failed for each rule. Fix the mismatch, disable trace logging, and clear the logs. Don't leave trace logging on during an operation because it writes your entire filtering strategy to disk.

What Not to Do

Don't return error codes for blocked traffic. This is worth repeating. A 403 or 502 leaks information about your filtering logic and backend architecture. Every failed check should serve a decoy page or, in Nginx, close the connection silently with 444.

Don't log everything forever. Logs contain your filtering logic, your backend IP, your C2 paths. If the redirector gets compromised, those logs are the first thing a defender reads. Set aggressive log rotation, encrypt what you keep, and truncate what you don't need.
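
A minimal logrotate sketch for aggressive rotation, assuming a stock Debian/Ubuntu Apache layout (adjust paths for Nginx or other distros):

# Hypothetical /etc/logrotate.d/apache2 override: keep one compressed day of logs
/var/log/apache2/*.log {
    daily
    rotate 1
    missingok
    notifempty
    compress
    delaycompress
    postrotate
        systemctl reload apache2 > /dev/null 2>&1 || true
    endscript
}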

Don't reuse filtering rules across engagements. If a defender reverse engineers your filtering patterns from one operation, they can fingerprint your redirectors on future operations. Rotate your User-Agent patterns, your URI paths, and your cookie formats between engagements.

Don't filter only at the application layer. If your Security Groups allow all traffic in and out and you're relying entirely on Apache rules, one misconfiguration in your rewrite rules exposes your C2 server to the internet. Every layer should work independently — if one fails, the others still hold.

Putting It All Together

The full filtering stack, in order:

  1. Network controls (Security Groups) - restrict ports and IPs at the infrastructure level
  2. User-Agent - drop scanners and validate implant UA
  3. HTTP method - only allow GET/POST
  4. URI path - whitelist C2 profile paths only
  5. HTTP headers - validate expected headers, strip identifying ones
  6. Source IP/CIDR - block sandboxes, scanners, non-target ranges
  7. Cookie/token - validate C2 session format

Each layer catches traffic the previous one missed. By the time a request reaches your C2 server, it has passed seven independent checks. A scanner might spoof a User-Agent, but it won't have the right cookie. A defender might hit the right path, but they won't have the right headers.

No single layer is bulletproof. All seven together make your redirector much harder to probe, fingerprint, or accidentally expose.

This technique should only be implemented during authorized security engagements with explicit written permission. Know your local laws and obtain proper authorization before testing.