Prevent search engines from indexing pages, folders, your entire site, or just your webflow.io subdomain.
You can tell search engines which pages to crawl, and which not to crawl, by writing a robots.txt file. You can prevent crawling of individual pages, folders, or your entire site, or simply disable indexing of your webflow.io subdomain. This is useful for keeping pages like your 404 page from being indexed and listed in search results.
You can prevent Google and other search engines from indexing the webflow.io subdomain by disabling indexing in your Project settings.
A unique robots.txt will be published on the subdomain only, telling search engines to ignore it.
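For illustration, a robots.txt file that tells all crawlers to skip every URL on a domain looks like the following (a sketch of the standard directives; Webflow's generated file may differ in its exact contents):

```txt
# Applies to all crawlers
User-agent: *
# Disallow every URL on this domain
Disallow: /
```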
The robots.txt file is usually used to list the URLs on a site that you don't want search engines to crawl. You can also include your site's sitemap in the robots.txt file to tell search engine crawlers which content they should crawl.
Just like a sitemap, the robots.txt file lives in the top-level directory of your domain. Webflow will generate the /robots.txt file for your site once you populate it in your Project settings.
You can use any of these rules to populate the robots.txt file.
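For example, these are common rule patterns (illustrative only; `/folder-name/`, `/page-name`, and the sitemap URL are placeholders to replace with your own paths):

```txt
# Block all crawlers from the entire site
User-agent: *
Disallow: /

# Or: block all crawlers from a single folder
User-agent: *
Disallow: /folder-name/

# Or: block all crawlers from a single page
User-agent: *
Disallow: /page-name

# Point crawlers to your sitemap
Sitemap: https://your-domain.com/sitemap.xml
```

Use one `User-agent` group per set of rules; the groups above are shown together only to illustrate the alternatives.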
If you don't want anyone to find a particular page or URL on your site, don't use the robots.txt file to disallow the URL from being crawled. Instead, use one of the options below:
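The reason is that a disallowed URL can still appear in search results if other sites link to it, since crawlers never fetch the page to learn it should be excluded. One standard alternative (which may or may not match the options this article originally listed) is a noindex robots meta tag in the page's head, or password-protecting the page so it can't be read at all:

```html
<!-- Placed inside the page's <head>; crawlers that fetch the page will drop it from their index -->
<meta name="robots" content="noindex">
```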