How do I fix the "Requested array size exceeds VM limit" error in Java?

Date: 2023-01-27 20:10:22

Is there a log option that can make Tomcat log the offending query instead of just throwing this?

SEVERE: java.lang.OutOfMemoryError: Requested array size exceeds VM limit

(Tried setting the log level to FULL, but it only captures the above.)

This is not enough information to debug further.
Alternatively, can this be fixed by allocating more memory, by tweaking the following?

-Xms1024M -Xmx4096M -XX:MaxPermSize=256M

Update

-Xms6G -Xmx6G -XX:MaxPermSize=1G -XX:PermSize=512M

(The above seems to work better; will keep monitoring.)

6 Answers

#1


9  

I suspect you might be using sorts on a large index. That's one thing I definitely know can require a large array size with Lucene. Either way, you might want to try using a 64-bit JVM with these options:

-Xmx6G -XX:MaxPermSize=128M -XX:+UseCompressedOops

The last option will reduce 64-bit memory pointers to 32-bit (as long as the heap is under 32GB). This typically reduces the memory overhead by about 40%, so it can help stretch your memory significantly.

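If you want to confirm that compressed oops actually took effect on your JVM, one way (a sketch; the exact listing format varies by HotSpot version) is to print the final flag values:

```shell
# Print the value HotSpot settled on for UseCompressedOops with a 6G heap.
# On 64-bit HotSpot JVMs this flag shows up in the -XX:+PrintFlagsFinal listing.
java -Xmx6G -XX:+PrintFlagsFinal -version | grep -i UseCompressedOops
```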
Update: Most likely you don't need such a large permanent generation size, certainly not 1G. You're probably fine with 128M, and you'll get a specific error if you go over it with Java 6. Since you're limited to 8G on your server, you might be able to get away with 7G for the heap with a smaller perm gen. Be careful not to go into swap; that can seriously slow things down for Java.

I noticed you didn't mention -XX:+UseCompressedOops in your update. That can make a huge difference if you haven't tried it yet. You might be able to squeeze a little more space out by reducing the size of eden to give the tenured generation more room. Beyond that I think you'll simply need more memory or fewer sort fields.

#2


3  

If you want to find out what causes OutOfMemory, you can add

-XX:+HeapDumpOnOutOfMemoryError 

to your java opts.

The next time you run out of memory, you will get a heap dump file that can be analyzed with "jhat", located in the JDK's bin directory. jhat will show you what objects exist in your heap and how much memory they consume.

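Putting that together for Tomcat (the dump directory and the pid in the file name below are placeholders), the options plus the follow-up analysis look roughly like this:

```shell
# Write a heap dump on OutOfMemoryError; HeapDumpPath picks the directory
# (placeholder path -- use one the tomcat user can write to).
JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/tomcat/dumps"

# After the next OOM, serve the dump for browsing at http://localhost:7000
# (jhat ships with the JDK through Java 8).
jhat -port 7000 /var/log/tomcat/dumps/java_pid1234.hprof
```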
#3


2  

You will get this exception because you are trying to create an array that is larger than the maximum contiguous block of memory in your Java VM's heap.

https://plumbr.eu/outofmemoryerror/requested-array-size-exceeds-vm-limit

What is the solution?

The java.lang.OutOfMemoryError: Requested array size exceeds VM limit can appear as a result of either of the following situations:

Your arrays grow too big and end up with a size between the platform limit and Integer.MAX_VALUE.

You deliberately try to allocate arrays larger than 2^31-1 elements to experiment with the limits.

In the first case, check your code base to see whether you really need arrays that large. Maybe you could reduce the size of the arrays and be done with it. Or divide the array into smaller chunks and load the data you need to work with in batches that fit into your platform limit.

In the second case – remember that Java arrays are indexed by int. So you cannot go beyond 2^31-1 elements in your arrays when using the standard data structures within the platform. In fact, in this case you are already blocked by the compiler announcing “error: integer number too large” during compilation. But if you really work with truly large data sets, you need to rethink your options. You can load the data you need to work with in smaller batches and still use standard Java tools, or you might go beyond the standard utilities. One way to achieve this is to look into the sun.misc.Unsafe class. This allows you to allocate memory directly like you would in C.

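To make the batching idea concrete, here is a minimal sketch (the class and method names are invented for the example): a large logical range is processed through fixed-size chunks, so no single allocation ever approaches the array-length limit.

```java
public class ChunkedSum {

    /** Sum the values 0..total-1, materializing at most chunkSize
     *  elements at a time instead of one giant array. */
    static long sumRange(long total, int chunkSize) {
        long sum = 0;
        for (long start = 0; start < total; start += chunkSize) {
            int len = (int) Math.min(chunkSize, total - start);
            long[] chunk = new long[len];        // small, bounded allocation
            for (int i = 0; i < len; i++) {
                chunk[i] = start + i;            // stand-in for loading one batch
            }
            for (long v : chunk) {
                sum += v;
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        // 0 + 1 + ... + 9999 = 49995000
        System.out.println(sumRange(10_000, 1_024));
    }
}
```

The same pattern applies when pulling rows from an index or database: each batch stays well under the limit regardless of the total data size.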
#4


0  

Out of memory! Check whether an array is growing out of bounds, or whether a loop is swallowing up system resources!


1. java.lang.OutOfMemoryError: Java heap space: the JVM throws this when more than 98% of the time is spent in GC and less than 2% of the heap is recovered. The heap settings control how much memory a running Java program may use. At startup the JVM sets the heap size automatically: the initial heap (-Xms) defaults to 1/64 of physical memory and the maximum heap (-Xmx) to 1/4 of physical memory. These can be overridden with -Xms, -Xmx, -Xmn and related options.

2. Requested array size exceeds VM limit: the requested array is larger than the heap can hold, for example requesting a 512M array with only 256M of heap space.

#5


0  

Upgrading Solr to a newer version seems to have sorted out this problem; the newer version likely has better heap memory management.

#6


0  

I use this in catalina.sh

JAVA_OPTS="-Dsolr.solr.home=/etc/tomcat6/solr -Djava.awt.headless=true -server -XX:NewSize=256m -XX:MaxNewSize=256m -XX:PermSize=256m -XX:MaxPermSize=256m -XX:+DisableExplicitGC"

I never had memory problems on Tomcat/Solr with 30M small documents. I did have problems with the SolrJ indexing client, though: I had to use -Xms8G -Xmx8G for the Java client, and add documents in chunks of 250K documents.

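For reference, the conventional place for options like these is a bin/setenv.sh next to catalina.sh, which Tomcat sources at startup if it exists, so the stock scripts stay untouched (the solr home path below is the one from this answer; adjust it for your install):

```shell
#!/bin/sh
# bin/setenv.sh -- sourced by catalina.sh if present; survives Tomcat upgrades.
JAVA_OPTS="-Dsolr.solr.home=/etc/tomcat6/solr -Djava.awt.headless=true \
 -server -XX:NewSize=256m -XX:MaxNewSize=256m \
 -XX:PermSize=256m -XX:MaxPermSize=256m -XX:+DisableExplicitGC"
export JAVA_OPTS
```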