Can I block search crawlers for every site on an Apache web server?

Time: 2021-06-11 16:52:05

I have something of a staging server on the public internet running copies of the production code for a few websites. I'd really rather the staging sites not get indexed.

Is there a way I can modify my httpd.conf on the staging server to block search engine crawlers?

Changing the robots.txt wouldn't really work, since I use scripts to copy the same code base to both servers. Also, I would rather not change the virtual host conf files either, as there are a bunch of sites and I don't want to have to remember to copy over a certain setting whenever I make a new site.

6 Answers

#1 (33 votes)

Create a robots.txt file with the following contents:

User-agent: *
Disallow: /

Put that file somewhere on your staging server; the document root is a good place for it (e.g. /var/www/html/robots.txt).

Add the following to your httpd.conf file:

# Exclude all robots
<Location "/robots.txt">
    SetHandler None
</Location>
Alias /robots.txt /path/to/robots.txt

The SetHandler directive is probably not required, but it might be needed if you're using a handler like mod_python, for example.

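One extra note for Apache 2.4: if the aliased file sits outside any directory Apache is already allowed to serve from, access to it also has to be granted explicitly. A minimal sketch, assuming the shared file is kept in /var/www/robots-staging/ (the path is illustrative):

Alias /robots.txt /var/www/robots-staging/robots.txt
<Directory "/var/www/robots-staging">
    Require all granted
</Directory>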

That robots.txt file will now be served for all virtual hosts on your server, overriding any robots.txt file you might have for individual hosts.

(Note: My answer is essentially the same thing that ceejayoz's answer is suggesting you do, but I had to spend a few extra minutes figuring out all the specifics to get it to work. I decided to put this answer here for the sake of others who might stumble upon this question.)

#2 (4 votes)

You can use Apache's mod_rewrite to do it. Let's assume that your real host is www.example.com and your staging host is staging.example.com. Create a file called 'robots-staging.txt' and conditionally rewrite the request to go to that.

This example would be suitable for protecting a single staging site, a bit of a simpler use case than what you are asking for, but this has worked reliably for me:

<IfModule mod_rewrite.c>
  RewriteEngine on

  # Dissuade web spiders from crawling the staging site
  RewriteCond %{HTTP_HOST}  ^staging\.example\.com$
  RewriteRule ^robots\.txt$ robots-staging.txt [L]
</IfModule>

You could try to redirect the spiders to a master robots.txt on a different server, but some of the spiders may balk after they get anything other than a "200 OK" or "404 not found" return code from the HTTP request, and they may not read the redirected URL.

Here's how you would do that:

<IfModule mod_rewrite.c>
  RewriteEngine on

  # Redirect web spiders to a robots.txt file elsewhere (possibly unreliable)
  RewriteRule ^robots\.txt$ http://www.example.com/robots-staging.txt [R]
</IfModule>

#3 (2 votes)

Could you alias robots.txt on the staging virtualhosts to a restrictive robots.txt hosted in a different location?

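A minimal sketch of that, assuming the restrictive file is kept at an illustrative path outside the site code and the directive is placed in the main server config so it applies to every virtual host:

Alias /robots.txt /srv/staging/robots-deny-all.txt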

#4 (2 votes)

To truly stop pages from being indexed, you'll need to hide the sites behind HTTP auth. You can do this in your global Apache config and use a simple .htpasswd file.

The only downside to this is that you now have to type in a username/password the first time you browse to any page on the staging server.

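A minimal sketch of that, assuming Apache 2.4 with the standard auth modules loaded; the password file path and realm name are illustrative, and the file can be created with the htpasswd utility (e.g. htpasswd -c /etc/apache2/.htpasswd-staging someuser):

# Require a login for everything this server (and all of its virtual hosts) serves
<Location "/">
    AuthType Basic
    AuthName "Staging"
    AuthUserFile /etc/apache2/.htpasswd-staging
    Require valid-user
</Location>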

#5 (1 vote)

Depending on your deployment scenario, you should look for ways to deploy different robots.txt files to dev/stage/test/prod (or whatever combination you have). Assuming you have different database config files (or whatever's analogous) on the different servers, this should follow a similar process (you do have different passwords for your databases, right?).

If you don't have a one-step deployment process in place, this is probably good motivation to get one... there are tons of tools out there for different environments - Capistrano is a pretty good one, and favored in the Rails/Django world, but is by no means the only one.

Failing all that, you could probably set up a global Alias directive in your Apache config that would apply to all virtualhosts and point to a restrictive robots.txt.

#6 (0 votes)

Try "Using Apache to stop bad robots". You can get the user agents online, or just allow browsers rather than trying to block all bots.

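A minimal sketch of the user-agent approach, assuming Apache 2.4 with mod_setenvif; the agent strings below are placeholders rather than a complete list:

# Flag requests whose User-Agent matches known crawlers (case-insensitive)
BrowserMatchNoCase "Googlebot|bingbot|Baiduspider" bad_bot

# Deny flagged requests everywhere while still allowing everyone else
<Location "/">
    <RequireAll>
        Require all granted
        Require not env bad_bot
    </RequireAll>
</Location>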
