Is it reasonable to have hundreds/thousands of open TCP sockets when using memcached?

Posted: 2022-10-02 21:21:48

I'm using Merb::Cache to store txt/xml, and I've noticed that the longer I leave my merbs running, the more open TCP sockets I accumulate -- I believe this is causing some major performance problems.

    lsof | grep 11211 | wc -l
    494

    merb      27206       root   71u     IPv4   13759908                 TCP localhost.localdomain:59756->localhost.localdomain:11211 (ESTABLISHED)
    merb      27206       root   72u     IPv4   13759969                 TCP localhost.localdomain:59779->localhost.localdomain:11211 (ESTABLISHED)
    merb      27206       root   73u     IPv4   13760039                 TCP localhost.localdomain:59805->localhost.localdomain:11211 (ESTABLISHED)
    merb      27206       root   74u     IPv4   13760052                 TCP localhost.localdomain:59810->localhost.localdomain:11211 (ESTABLISHED)
    merb      27206       root   75u     IPv4   13760135                 TCP localhost.localdomain:59841->localhost.localdomain:11211 (ESTABLISHED)
    merb      27206       root   76u     IPv4   13760823                 TCP localhost.localdomain:59866->localhost.localdomain:11211 (ESTABLISHED)
    merb      27206       root   77u     IPv4   13760951                 TCP localhost.localdomain:52095->localhost.localdomain:11211 (ESTABLISHED)

etc...

My relevant code is:

    if !exists?(:memcached) then
      register(:memcached, Merb::Cache::MemcachedStore, :namespace => 'mynamespace', :servers => ['127.0.0.1:11211'])
    end

&&

    when :xml
      unless @hand_xml = Merb::Cache[:memcached].read("/hands/#{@hand.id}.xml")
        @hand_xml = display(@hand)
        Merb::Cache[:memcached].write("/hands/#{@hand.id}.xml", @hand_xml)
      end
      return @hand_xml
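
(Aside: if the registered store in this version of merb-cache supports a block-taking `fetch` -- an assumption worth verifying against the installed gem -- the read-then-write pattern above could collapse into one call; a rough sketch:)

    when :xml
      # Hypothetical: fetch returns the cached value if present; otherwise it
      # runs the block, writes the result to the cache, and returns it.
      @hand_xml = Merb::Cache[:memcached].fetch("/hands/#{@hand.id}.xml") do
        display(@hand)
      end
      return @hand_xml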

Is this code outright wrong, or am I using the wrong version of memcache?

I have memcached 1.2.8 and the following:

libmemcached-0.25.14.tar.gz memcached-0.13.gem

This is kind of driving me crazy.

2 Answers

#1


OK, I figured out some stuff.

1) It CAN be reasonable to have hundreds/thousands of sockets connected to memcached, assuming you are using a client library built on epoll or a similar event mechanism -- however, if you are using Ruby like me, I'm not aware of a client lib that uses anything other than select() or poll(), so that option is ruled out immediately.

2) If you are like me, you only have one memcached server running right now and a handful of mongrels/thins handling requests; therefore your memcache connections should probably be no more than the number of mongrels/thins you have running (assuming you are only caching one or two sets of things) -- which was my case.

Here's the fix:

Set up memcache through the memcached gem rather than Merb::Cache (which actually wraps whatever memcache lib you are using):

    MMCACHE = Memcached.new("localhost:11211")

Get/set your values:

    # Clone the shared client so this request gets its own connection.
    @cache = MMCACHE.clone
    begin
      @hand_xml = @cache.get("/hands/#{@hand.id}.xml")
    rescue
      # Cache miss (or error): render the XML and store it for next time.
      @hand_xml = display(@hand)
      @cache.set("/hands/#{@hand.id}.xml", @hand_xml)
    end
    # Close the per-request connection so sockets don't pile up.
    @cache.quit
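
If this pattern repeats across actions, it can be wrapped in a small helper. Here's a minimal sketch using only the memcached gem calls shown above; the `fetch_cached` name is made up, and it assumes the gem raises `Memcached::NotFound` on a miss:

    # Hypothetical helper: clone the shared client, try the cache, fall back
    # to the block on a miss, and always close the per-request connection.
    def fetch_cached(key)
      cache = MMCACHE.clone
      begin
        cache.get(key)
      rescue Memcached::NotFound
        value = yield
        cache.set(key, value)
        value
      ensure
        cache.quit
      end
    end

    # usage:
    @hand_xml = fetch_cached("/hands/#{@hand.id}.xml") { display(@hand) }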

Sit back and drink a cold one, because now when you do this:

    lsof | grep 11211 | wc -l

you see something like 2 or 3 instead of 2036!

Props to reef for cluing me in that it's not uncommon for memcache connections to be persistent to begin with.

#2


I might be able to help, but I need to tell a story to do that. Here it is.

Once upon a time there was a cluster of 10 apache(ssl) servers configured to have exactly 100 threads each. There was also a cluster of 10 memcached servers (on the same boxes), and they all seemed to live peacefully. Both the apaches and the memcacheds were guarded by the evil monit daemon.

Then the King installed an 11th apache(ssl) server, and the memcacheds started restarting randomly every few hours! The King started investigating, and what did he find? There was a bug in the php memcache module documentation which said that the default constructor of the memcache connection object is not persistent, but apparently it was. What happened was that every php thread (and there were about 1000 of them) opened a connection to every memcached in the pool when it needed one, and then held on to it. That made 10*100 = 1000 connections to every memcached server, which was fine, but with 11 servers it was 1100, and 1100 > 1024: the maximum number of open sockets for memcached was 1024. When all the sockets were taken, the monit daemon couldn't connect, so it restarted the memcached.

Every story has to have a moral. So, what did the King do with all of this? He disabled the persistent connections and they all lived happily ever after, with the number of connections on the cluster peaking at 5 (five). Those servers were serving a huge amount of data, so we couldn't afford 1000 idle sockets, and it was cheaper to negotiate the memcache connection on every request.

I am sorry, but I don't know Ruby; it looks like you either have an awful number of threads or you are caching it wrong.

Good luck!
