Controlling Traffic to Imgix with Blocklists and Allowlists
Imgix does not provide native IP-based, geo-based, or user-agent filtering. There is no configuration option to create allowlists or blocklists directly on an Imgix source. If you need to control who can access your Imgix URLs, those controls must be implemented in your own infrastructure.
Controlling traffic is important for three primary reasons:
- Security – Prevent unauthorized access to restricted or licensed content.
- Cost control – Reduce unnecessary transformations and bandwidth usage.
- Abuse mitigation – Limit scraping, parameter tampering, and automated misuse.
Imgix is responsible for transforming and delivering media at scale. It is not designed to enforce network policy. Access restrictions must be applied before a request reaches the Imgix network.
For an overview of how requests are processed, see Imgix Overview.
Why Traffic Control Matters
If a request reaches Imgix, it may trigger a transformation, generate a new derivative, populate edge cache, and consume bandwidth. Even unwanted or malicious requests must be evaluated and processed to some degree.
A typical request flow looks like this:
End User → Your Infrastructure → Imgix → Origin
Filtering requests within your own infrastructure ensures that only approved traffic reaches Imgix. This prevents unnecessary processing, protects restricted assets, and reduces exposure to automated abuse.
Where Allowlists and Blocklists Should Be Implemented
Allowlists and blocklists must be enforced in layers you control. The correct location depends on what you are trying to protect and how strict the restriction needs to be.
Application Layer (Your Website or API)
If your goal is to restrict access to content within an authenticated product or platform, the most effective place to enforce control is within your application.
Authentication and authorization rules prevent unauthorized users from accessing pages or APIs that generate Imgix URLs. Because most crawlers and automated tools discover asset URLs through HTML responses, restricting page access significantly limits asset discovery.
When combined with signed URLs generated server-side, this model prevents parameter tampering and arbitrary transformation requests. Adding expiration via the expires parameter further limits how long a URL can be reused.
This approach is well suited for SaaS dashboards, internal tools, member-only platforms, and partner portals.
See Securing Assets and URL Expiration.
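As a minimal sketch of server-side signing (the domain, path, and token below are placeholders): imgix documents its signature as the MD5 hex digest of the source's secure token concatenated with the path and query string, appended as the s parameter, while expires is a Unix timestamp after which the URL is rejected.

```python
import hashlib
import time
from urllib.parse import urlencode

def sign_imgix_url(domain, path, params, token, ttl=3600):
    """Build a signed imgix URL that expires after `ttl` seconds.

    The signature is the MD5 hex digest of token + path + "?" + query,
    appended as the `s` parameter. The signature must cover the query
    string exactly as it will be delivered.
    """
    params = dict(params, expires=int(time.time()) + ttl)
    query = urlencode(sorted(params.items()))
    signature = hashlib.md5(f"{token}{path}?{query}".encode()).hexdigest()
    return f"https://{domain}{path}?{query}&s={signature}"

# "EXAMPLE_TOKEN" stands in for the secure token from your source settings.
url = sign_imgix_url("yourdomain.imgix.net", "/images/photo.jpg",
                     {"w": 400}, token="EXAMPLE_TOKEN")
```

Because the expiration timestamp is part of the signed query string, a client cannot extend a URL's lifetime without invalidating the signature.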
Reverse Proxy in Front of Imgix
If you need network-level restrictions such as IP allowlists, enforcement should occur in a reverse proxy that sits in front of Imgix.
assets.yoursite.com → Reverse Proxy → yourdomain.imgix.net
In this model, the proxy evaluates each request before forwarding it. Requests from unapproved IP ranges or missing required headers are denied immediately and never reach Imgix.
This prevents restricted content from being accessed outside approved networks and ensures that unauthorized traffic does not trigger image processing or consume bandwidth. It is commonly used in B2B integrations, licensed distribution environments, and scenarios where contractual agreements require strict network controls.
If you are using a Web Proxy source, see Web Proxy Source Setup.
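The allowlist check such a proxy performs can be sketched as follows (the CIDR ranges are hypothetical examples, not imgix addresses):

```python
import ipaddress

# Hypothetical approved partner networks; all other sources are denied.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_allowed(client_ip: str) -> bool:
    """Return True only if the client IP falls inside an approved range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

# Requests failing this check are rejected at the proxy and never
# forwarded to yourdomain.imgix.net.
is_allowed("203.0.113.7")  # inside an approved range
is_allowed("192.0.2.1")    # outside every approved range
```

In production this logic typically lives in the proxy's own configuration (nginx, HAProxy, or an API gateway) rather than application code, but the evaluation is the same: deny first, forward only approved traffic.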
CDN or Web Application Firewall (WAF)
A CDN or WAF can enforce broader traffic policies in front of your website or proxy. These systems are designed to evaluate traffic at scale and can block malicious IP ranges, restrict traffic by country, mitigate bots, and apply rate limits.
Protecting the application layer often provides the greatest security benefit. If attackers cannot access login endpoints, APIs, or HTML pages, they cannot generate or retrieve valid Imgix URLs. This reduces abuse before it impacts downstream systems.
For delivery considerations, see Serving Assets and CDN Guidelines.
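To illustrate the kind of rate limiting a WAF or CDN applies per client, here is a simple token-bucket sketch; the rate and burst values are arbitrary examples, and real WAF rules are configured declaratively rather than written as code:

```python
import time

class TokenBucket:
    """Per-client rate limiter: `rate` requests/second, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.updated = capacity, time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)  # 5 req/s, bursts of 10
results = [bucket.allow() for _ in range(12)]  # burst exhausts, then denials
```

A bucket like this would be keyed per client IP or per API key, so a single abusive client is throttled without affecting legitimate traffic.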
Using robots.txt for Crawl Control
In addition to allowlists and blocklists, you may want to control how search engines and well-behaved crawlers access your Imgix assets.
A robots.txt file can be configured on your website domain to:
- Prevent indexing of specific paths
- Disallow crawling of image directories
- Reduce search engine discovery of asset URLs
For example:
User-agent: *
Disallow: /images/
This tells compliant crawlers not to crawl or index those paths.
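You can verify how a compliant crawler interprets these rules with Python's standard urllib.robotparser (the example.com URLs are placeholders):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Parse the example rules directly instead of fetching a live robots.txt.
rp.parse(["User-agent: *", "Disallow: /images/"])

# Paths under /images/ are disallowed; everything else remains crawlable.
blocked = rp.can_fetch("*", "https://example.com/images/photo.jpg")
allowed = rp.can_fetch("*", "https://example.com/about")
```

This is also a useful way to test a robots.txt file before deploying it.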
However, robots.txt is not a security mechanism. It is a voluntary standard followed by reputable search engines. It does not prevent:
- Direct URL access
- Malicious scraping
- Non-compliant bots
Use robots.txt to control indexing and SEO behavior, not to protect restricted content. For restricted assets, rely on authentication, signed URLs, proxies, or WAF rules.
For related guidance on user agents and crawlers, see Understanding User Agents and Referers.
Why These Controls Are Not Enforced on Imgix
Imgix does not provide IP filtering, country blocking, or custom firewall configuration. Its responsibility is media transformation and global delivery, not request policy enforcement.
Keeping traffic controls in your own infrastructure ensures that:
- Security policies are consistent across all services
- Enforcement can be customized to your requirements
- Requests are blocked before triggering processing
This separation of responsibilities allows Imgix to remain optimized for performance while you maintain control over access policy.
Placing a CDN in Front of Imgix
Imgix already operates as a globally distributed CDN. Placing an additional CDN directly in front of Imgix is technically possible but introduces additional layers of caching and routing.
Additional edge layers can increase latency, complicate cache invalidation, and make troubleshooting more difficult. In some enterprise environments, centralized traffic enforcement may require it. If so, ensure that query parameters and URL signatures are preserved so that signed URLs continue to function correctly.
CDN in Front of Imgix Warning
If possible, we recommend not using a third-party CDN on top of the one we provide. Because all components of the Imgix service are closely integrated, performance behind a third-party CDN may be reduced and some features will not be available.
Allowlists vs Blocklists
An allowlist permits only explicitly approved traffic and denies all other requests. This model is appropriate when access must be tightly controlled, such as partner IP restrictions or internal-only content.
A blocklist denies known malicious or unwanted traffic while allowing other requests. This is more common for public-facing websites where the goal is abuse reduction rather than strict access limitation.
In both cases, enforcement occurs in your application, proxy, CDN, or WAF — not within Imgix.
Layered Protection
Traffic filtering determines who can reach your infrastructure. Signed URLs determine how Imgix assets can be requested.
Signed URLs prevent parameter tampering, restrict unauthorized transformations, and limit long-term URL reuse. When combined with authentication, upstream filtering, and expiration controls, they form a layered security model that protects both your content and your transformation resources.
Access control for Imgix must be implemented before requests reach the Imgix network. Doing so protects your content, reduces abuse, and helps manage delivery costs.