
Configuring Robots.txt

imgix outputs a default /robots.txt file for every domain with the following:

User-agent: *
Disallow:

The "User-agent: *" line means the instructions apply to all search engine robots, such as Google's Googlebot. The empty "Disallow:" directive tells each robot that it has complete access to crawl every URL associated with the domain. This is equivalent to having an empty /robots.txt file, or none at all. For more information on /robots.txt, see the robots.txt documentation.
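By contrast, a robots.txt that blocks all robots from crawling a hypothetical /private/ path (while still allowing everything else) would look like:

User-agent: *
Disallow: /private/

Any URL whose path begins with /private/ would then be excluded from crawling by robots that honor the file.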

Requests made by robots are counted as regular requests, which means they contribute to the Origin Image and bandwidth calculations for billing.

Using a custom robots.txt file

It’s possible to use your own robots.txt file on your Source. To do so:

  1. Upload your robots.txt to your Origin so that it can be accessed at the root of your imgix domain (e.g., subdomain.imgix.net/robots.txt).
  2. Contact support@imgix.com to enable your custom robots.txt to be loaded and let us know the Source(s) you’d like to enable it for.

Note: Because responses are cached, you will need to use our Purging API to purge /robots.txt whenever you make changes to the file.
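As a rough sketch of what a purge request might look like, the snippet below builds the headers and JSON body for a Purging API call. The endpoint, header names, and body shape shown here are assumptions based on imgix's JSON:API-style purge interface; the API key and domain are placeholders you would replace with your own. Check the current Purging API documentation before relying on this.

```python
import json

# Placeholder values -- substitute your own imgix API key and domain.
API_KEY = "your-imgix-api-key"
PURGE_ENDPOINT = "https://api.imgix.com/api/v1/purge"  # assumed endpoint

def build_purge_request(url):
    """Build the headers and JSON body for purging a single URL."""
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/vnd.api+json",
    }
    body = json.dumps({
        "data": {
            "attributes": {"url": url},
            "type": "purges",
        }
    })
    return headers, body

# Purge the cached robots.txt after updating it at the Origin.
headers, body = build_purge_request("https://subdomain.imgix.net/robots.txt")
print(body)
```

The request itself would then be sent as an HTTP POST to the endpoint with these headers and body, using whatever HTTP client you prefer.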