While initializing in Runtime::Init, ART creates the virtual machine heap, Heap. How the heap is initialized is determined by a number of VM startup options, for example:
1. options->heap_initial_size_: the initial size of the heap, set with -Xms.
2. options->heap_growth_limit_: the limit the heap is allowed to grow to (a soft cap), set with -XX:HeapGrowthLimit.
3. options->heap_min_free_: the minimum free heap space, set with -XX:HeapMinFree.
4. options->heap_max_free_: the maximum free heap space, set with -XX:HeapMaxFree.
5. options->heap_target_utilization_: the target heap utilization, set with -XX:HeapTargetUtilization (see the sketch after this list for how it interacts with the min/max free values).
6. options->heap_maximum_size_: the maximum size of the heap (a hard cap), set with -Xmx.
7. options->image_: the image file used to create the Image Space, set with -Ximage.
8. options->is_concurrent_gc_enabled_: whether concurrent GC is enabled, set with -Xgc.
9. options->parallel_gc_threads_: the number of threads running GC tasks concurrently during the pause phases of GC, set with -XX:ParallelGCThreads.
10. options->conc_gc_threads_: the number of threads running GC tasks concurrently outside the pause phases of GC, set with -XX:ConcGCThreads.
11. options->low_memory_mode_: whether to run in low-memory mode, set with -XX:LowMemoryMode.
12. options->long_pause_log_threshold_: the threshold for GC-induced application pauses; a warning is logged whenever it is exceeded. Set with -XX:LongPauseLogThreshold.
13. options->long_gc_log_threshold_: the threshold for total GC duration; a warning is logged whenever it is exceeded. Set with -XX:LongGCLogThreshold.
14. options->ignore_max_footprint_: a flag that lifts the limit on the heap size, set with -XX:IgnoreMaxFootprint.
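How the target utilization and the min/max free values interact is easiest to see in code. Below is a simplified sketch, not ART's actual implementation (the real logic lives in Heap::GrowForUtilization): after a GC, the next footprint is chosen so that utilization moves toward the target, while the free headroom stays between heap_min_free_ and heap_max_free_ and the total stays under the growth limit.

#include <algorithm>
#include <cstdint>
#include <cstdio>

// Simplified sketch of how the options interact after a GC.
uint64_t ComputeTargetFootprint(uint64_t bytes_allocated,
                                double target_utilization,  // -XX:HeapTargetUtilization
                                uint64_t min_free,          // -XX:HeapMinFree
                                uint64_t max_free,          // -XX:HeapMaxFree
                                uint64_t growth_limit) {    // -XX:HeapGrowthLimit
  // Aim for bytes_allocated / footprint == target_utilization.
  uint64_t target = static_cast<uint64_t>(bytes_allocated / target_utilization);
  // Keep the free headroom between min_free and max_free.
  target = std::min(target, bytes_allocated + max_free);
  target = std::max(target, bytes_allocated + min_free);
  // Never exceed the soft cap.
  return std::min(target, growth_limit);
}

int main() {
  // 32 MB live after GC, 0.5 utilization, 512KB..8MB free window, 96MB limit.
  printf("%llu\n", (unsigned long long)ComputeTargetFootprint(
      32ull << 20, 0.5, 512 << 10, 8 << 20, 96ull << 20));  // prints 41943040 (40 MB)
  return 0;
}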
/art/runtime/runtime.cc
bool Runtime::Init(const RuntimeOptions& raw_options, bool ignore_unrecognized) {
...
heap_ = new gc::Heap(options->heap_initial_size_,
options->heap_growth_limit_,
options->heap_min_free_,
options->heap_max_free_,
options->heap_target_utilization_,
options->foreground_heap_growth_multiplier_,
options->heap_maximum_size_,
options->heap_non_moving_space_capacity_,
options->image_,
options->image_isa_,
options->collector_type_,
options->background_collector_type_,
options->parallel_gc_threads_,
options->conc_gc_threads_,
options->low_memory_mode_,
options->long_pause_log_threshold_,
options->long_gc_log_threshold_,
options->ignore_max_footprint_,
options->use_tlab_,
options->verify_pre_gc_heap_,
options->verify_pre_sweeping_heap_,
options->verify_post_gc_heap_,
options->verify_pre_gc_rosalloc_,
options->verify_pre_sweeping_rosalloc_,
options->verify_post_gc_rosalloc_,
options->use_homogeneous_space_compaction_for_oom_,
options->min_interval_homogeneous_space_compaction_by_oom_);
...
The Heap constructor is quite long, so only the important parts of the code are discussed here, and only along the default path. From a log statement I added, foreground_collector_type_ is initially CollectorTypeCMS and background_collector_type_ is CollectorTypeHomogeneousSpaceCompact:
12:55:02:011I/art ( 1200): foreground_collector_type is CollectorTypeCMS,background_collector_type is CollectorTypeHomogeneousSpaceCompact
When the current process is not the zygote, the member background_collector_type_ (the background collection algorithm) is forced to match the member foreground_collector_type_ (the foreground collection algorithm). As the log above shows, the parameters Runtime passed to Heap select CMS as the foreground collection algorithm and HomogeneousSpaceCompact as the background one. (A sketch of how these two types are used later follows the next code excerpt.)
ChangeCollector sets the member collector_type_ to desired_collector_type_, which was initialized earlier to foreground_collector_type_. So at this point:
foreground_collector_type_: CollectorTypeCMS
background_collector_type_: CollectorTypeHomogeneousSpaceCompact
collector_type_: CollectorTypeCMS
desired_collector_type_: CollectorTypeCMS
/art/runtime/gc/heap.cc
// If we aren't the zygote, switch to the default non zygote allocator. This may update the
// entrypoints.
const bool is_zygote = Runtime::Current()->IsZygote();
if (!is_zygote) {
large_object_threshold_ = kDefaultLargeObjectThreshold;
// Background compaction is currently not supported for command line runs.
if (background_collector_type_ != foreground_collector_type_) {
VLOG(heap) << "Disabling background compaction for non zygote";
background_collector_type_ = foreground_collector_type_;
}
}
ChangeCollector(desired_collector_type_);
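For context on what these two types are for later: the foreground collector runs while the app is user-visible, and the heap switches to the background collector when the app leaves the foreground. The following is a conceptual sketch, not the real ART code (there the switch is requested asynchronously via Heap::UpdateProcessState and a collector transition):

// Conceptual sketch of how foreground_collector_type_ and
// background_collector_type_ are consumed; collapsed into direct calls.
enum CollectorType {
  kCollectorTypeCMS,
  kCollectorTypeHomogeneousSpaceCompact,
};

struct HeapSketch {
  CollectorType foreground_collector_type_ = kCollectorTypeCMS;
  CollectorType background_collector_type_ = kCollectorTypeHomogeneousSpaceCompact;
  CollectorType desired_collector_type_ = foreground_collector_type_;
  CollectorType collector_type_ = desired_collector_type_;

  void ChangeCollector(CollectorType t) { collector_type_ = t; }

  void UpdateProcessState(bool jank_perceptible) {
    // Foreground (user-visible): favor short pauses -> CMS.
    // Background: favor a small footprint -> homogeneous space compaction.
    desired_collector_type_ =
        jank_perceptible ? foreground_collector_type_ : background_collector_type_;
    ChangeCollector(desired_collector_type_);
  }
};

int main() {
  HeapSketch heap;
  heap.UpdateProcessState(/*jank_perceptible=*/false);  // app went to background
  return heap.collector_type_ == kCollectorTypeHomogeneousSpaceCompact ? 0 : 1;
}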
image_file_name is "/system/framework/boot.art", which is not empty, so the ImageSpace creation path is taken. Part of the ImageSpace::Create flow was already covered in 《ART加载OAT文件的过程分析》 (an analysis of how ART loads OAT files). Assume from here on that the relocated image /data/dalvik-cache/arm/system@framework@boot.art has already been generated; it is passed on to initialize the ImageSpace, that is, ImageSpace::Init is called.
/art/runtime/gc/heap.cc
if (!image_file_name.empty()) {
std::string error_msg;
space::ImageSpace* image_space = space::ImageSpace::Create(image_file_name.c_str(),
image_instruction_set,
&error_msg);
if (image_space != nullptr) {
AddSpace(image_space);
// Oat files referenced by image files immediately follow them in memory, ensure alloc space
// isn't going to get in the middle
byte* oat_file_end_addr = image_space->GetImageHeader().GetOatFileEnd();
CHECK_GT(oat_file_end_addr, image_space->End());
requested_alloc_space_begin = AlignUp(oat_file_end_addr, kPageSize);
} else {
LOG(WARNING) << "Could not create image space with image file '" << image_file_name << "'. "
<< "Attempting to fall back to imageless running. Error was: " << error_msg;
}
}
ImageSpace::Init receives "/data/dalvik-cache/arm/system@framework@boot.art" as its first argument, "/system/framework/boot.art" as its second, true as its third, and, as its fourth, a pointer used to record any error message. We also need to understand how MapFileAtAddress creates a MemMap.
MapFileAtAddress's first parameter, expected_ptr, is the address at which the mapping is requested; here it is the image base address recorded in the ImageHeader. This does not mean the mapping lands exactly at the image base, because the actual mapping address is derived from it by 4K page alignment. The second parameter is the number of bytes to map, here the image size; the actual mapped size is likewise adjusted to 4K alignment. The third parameter prot and the fourth parameter flags are the memory protection flags and the mapping type; the fifth parameter fd is the file descriptor to map; the sixth parameter start is the offset into that file; the seventh parameter reuse states whether the new MemMap may overlap an existing MemMap; the eighth parameter filename is the name of the file behind the descriptor; and the final parameter error_msg records error messages.
Internally, MapFileAtAddress does the mapping with mmap, and each mmap argument is a page-adjusted version of the corresponding input: the file offset is rounded down to a 4K boundary, the expected address is moved down by the offset's in-page remainder (page_offset) so that address and file offset stay in step, and the mapping size is the byte count plus that remainder, rounded up to 4K.
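As a worked example of this arithmetic, with purely hypothetical values (for the image mapping itself start is 0, so no adjustment happens):

#include <cstdint>
#include <cstdio>
#include <sys/types.h>

int main() {
  const uintptr_t kPageSize = 4096;
  uintptr_t expected = 0x70000000;  // hypothetical expected_ptr
  off_t start = 0x1234;             // hypothetical file offset, not 4K-aligned
  size_t byte_count = 0x5000;       // hypothetical mapping size

  off_t page_offset = start % kPageSize;                     // 0x234
  off_t page_aligned_offset = start - page_offset;           // 0x1000
  size_t page_aligned_byte_count =                           // RoundUp -> 0x6000
      ((byte_count + page_offset + kPageSize - 1) / kPageSize) * kPageSize;
  uintptr_t page_aligned_expected = expected - page_offset;  // 0x6FFFFDCC

  // mmap() is then called with the three page-aligned values; the resulting
  // MemMap exposes actual + page_offset as its Begin().
  printf("offset=%#lx size=%#zx addr=%#lx\n", (long)page_aligned_offset,
         page_aligned_byte_count, (unsigned long)page_aligned_expected);
  return 0;
}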
/art/runtime/mem_map.cc
MemMap* MemMap::MapFileAtAddress(byte* expected_ptr, size_t byte_count, int prot, int flags, int fd,
off_t start, bool reuse, const char* filename,
std::string* error_msg) {
CHECK_NE(0, prot);
CHECK_NE(0, flags & (MAP_SHARED | MAP_PRIVATE));
// Note that we do not allow MAP_FIXED unless reuse == true, i.e. we
// expect this mapping to be contained within an existing map.
if (reuse) {
// reuse means it is okay that it overlaps an existing page mapping.
// Only use this if you actually made the page reservation yourself.
CHECK(expected_ptr != nullptr);
#if !defined(__APPLE__) // TODO: Re-enable after b/16861075 BacktraceMap issue is addressed.
uintptr_t expected = reinterpret_cast<uintptr_t>(expected_ptr);
uintptr_t limit = expected + byte_count;
DCHECK(ContainedWithinExistingMap(expected, limit, error_msg));
#endif
flags |= MAP_FIXED;
} else {
CHECK_EQ(0, flags & MAP_FIXED);
// Don't bother checking for an overlapping region here. We'll
// check this if required after the fact inside CheckMapRequest.
}
if (byte_count == 0) {
return new MemMap(filename, nullptr, 0, nullptr, 0, prot, false);
}
// Adjust 'offset' to be page-aligned as required by mmap.
int page_offset = start % kPageSize;
off_t page_aligned_offset = start - page_offset;
// Adjust 'byte_count' to be page-aligned as we will map this anyway.
size_t page_aligned_byte_count = RoundUp(byte_count + page_offset, kPageSize);
// The 'expected_ptr' is modified (if specified, ie non-null) to be page aligned to the file but
// not necessarily to virtual memory. mmap will page align 'expected' for us.
byte* page_aligned_expected = (expected_ptr == nullptr) ? nullptr : (expected_ptr - page_offset);
byte* actual = reinterpret_cast<byte*>(mmap(page_aligned_expected,
page_aligned_byte_count,
prot,
flags,
fd,
page_aligned_offset));
if (actual == MAP_FAILED) {
auto saved_errno = errno;
std::string maps;
ReadFileToString("/proc/self/maps", &maps);
*error_msg = StringPrintf("mmap(%p, %zd, 0x%x, 0x%x, %d, %" PRId64
") of file '%s' failed: %s\n%s",
page_aligned_expected, page_aligned_byte_count, prot, flags, fd,
static_cast<int64_t>(page_aligned_offset), filename,
strerror(saved_errno), maps.c_str());
return nullptr;
}
std::ostringstream check_map_request_error_msg;
if (!CheckMapRequest(expected_ptr, actual, page_aligned_byte_count, error_msg)) {
return nullptr;
}
return new MemMap(filename, actual + page_offset, byte_count, actual, page_aligned_byte_count,
prot, reuse);
}
/art/runtime/gc/space/image_space.cc
ImageSpace* ImageSpace::Init(const char* image_filename, const char* image_location,
bool validate_oat_file, std::string* error_msg) {
CHECK(image_filename != nullptr);
CHECK(image_location != nullptr);
uint64_t start_time = 0;
if (VLOG_IS_ON(heap) || VLOG_IS_ON(startup)) {
start_time = NanoTime();
LOG(INFO) << "ImageSpace::Init entering image_filename=" << image_filename;
}
std::unique_ptr<File> file(OS::OpenFileForReading(image_filename));
if (file.get() == NULL) {
*error_msg = StringPrintf("Failed to open '%s'", image_filename);
return nullptr;
}
ImageHeader image_header;
bool success = file->ReadFully(&image_header, sizeof(image_header));
if (!success || !image_header.IsValid()) {
*error_msg = StringPrintf("Invalid image header in '%s'", image_filename);
return nullptr;
}
// Note: The image header is part of the image due to mmap page alignment required of offset.
// Map boot.art into a MemMap
std::unique_ptr<MemMap> map(MemMap::MapFileAtAddress(image_header.GetImageBegin(),
image_header.GetImageSize(),
PROT_READ | PROT_WRITE,
MAP_PRIVATE,
file->Fd(),
0,
false,
image_filename,
error_msg));
if (map.get() == NULL) {
DCHECK(!error_msg->empty());
return nullptr;
}
CHECK_EQ(image_header.GetImageBegin(), map->Begin());
DCHECK_EQ(0, memcmp(&image_header, map->Begin(), sizeof(ImageHeader)));
// Map the live bitmap the GC will use into a MemMap
std::unique_ptr<MemMap> image_map(
MemMap::MapFileAtAddress(nullptr, image_header.GetImageBitmapSize(),
PROT_READ, MAP_PRIVATE,
file->Fd(), image_header.GetBitmapOffset(),
false,
image_filename,
error_msg));
if (image_map.get() == nullptr) {
*error_msg = StringPrintf("Failed to map image bitmap: %s", error_msg->c_str());
return nullptr;
}
uint32_t bitmap_index = bitmap_index_.FetchAndAddSequentiallyConsistent(1);
std::string bitmap_name(StringPrintf("imagespace %s live-bitmap %u", image_filename,
bitmap_index));
// Create a ContinuousSpaceBitmap
std::unique_ptr<accounting::ContinuousSpaceBitmap> bitmap(
accounting::ContinuousSpaceBitmap::CreateFromMemMap(bitmap_name, image_map.release(),
reinterpret_cast<byte*>(map->Begin()),
map->Size()));
if (bitmap.get() == nullptr) {
*error_msg = StringPrintf("Could not create bitmap '%s'", bitmap_name.c_str());
return nullptr;
}
std::unique_ptr<ImageSpace> space(new ImageSpace(image_filename, image_location,
map.release(), bitmap.release()));
// VerifyImageAllocations() will be called later in Runtime::Init()
// as some class roots like ArtMethod::java_lang_reflect_ArtMethod_
// and ArtField::java_lang_reflect_ArtField_, which are used from
// Object::SizeOf() which VerifyImageAllocations() calls, are not
// set yet at this point.
space->oat_file_.reset(space->OpenOatFile(image_filename, error_msg));
if (space->oat_file_.get() == nullptr) {
DCHECK(!error_msg->empty());
return nullptr;
}
if (validate_oat_file && !space->ValidateOatFile(error_msg)) {
DCHECK(!error_msg->empty());
return nullptr;
}
Runtime* runtime = Runtime::Current();
runtime->SetInstructionSet(space->oat_file_->GetOatHeader().GetInstructionSet());
// Methods already set up by the class_linker from boot.oat are recorded in the
// image_roots_ member of the ImageHeader, so at runtime their entries can be
// retrieved by reading boot.art's ImageHeader
mirror::Object* resolution_method = image_header.GetImageRoot(ImageHeader::kResolutionMethod);
runtime->SetResolutionMethod(down_cast<mirror::ArtMethod*>(resolution_method));
mirror::Object* imt_conflict_method = image_header.GetImageRoot(ImageHeader::kImtConflictMethod);
runtime->SetImtConflictMethod(down_cast<mirror::ArtMethod*>(imt_conflict_method));
mirror::Object* imt_unimplemented_method =
image_header.GetImageRoot(ImageHeader::kImtUnimplementedMethod);
runtime->SetImtUnimplementedMethod(down_cast<mirror::ArtMethod*>(imt_unimplemented_method));
mirror::Object* default_imt = image_header.GetImageRoot(ImageHeader::kDefaultImt);
runtime->SetDefaultImt(down_cast<mirror::ObjectArray<mirror::ArtMethod>*>(default_imt));
mirror::Object* callee_save_method = image_header.GetImageRoot(ImageHeader::kCalleeSaveMethod);
runtime->SetCalleeSaveMethod(down_cast<mirror::ArtMethod*>(callee_save_method),
Runtime::kSaveAll);
callee_save_method = image_header.GetImageRoot(ImageHeader::kRefsOnlySaveMethod);
runtime->SetCalleeSaveMethod(down_cast<mirror::ArtMethod*>(callee_save_method),
Runtime::kRefsOnly);
callee_save_method = image_header.GetImageRoot(ImageHeader::kRefsAndArgsSaveMethod);
runtime->SetCalleeSaveMethod(down_cast<mirror::ArtMethod*>(callee_save_method),
Runtime::kRefsAndArgs);
if (VLOG_IS_ON(heap) || VLOG_IS_ON(startup)) {
LOG(INFO) << "ImageSpace::Init exiting (" << PrettyDuration(NanoTime() - start_time)
<< ") " << *space.get();
}
return space.release();
}
ImageSpace is a Space that occupies a contiguous address range (contiguous in virtual, not necessarily physical, memory), since it is backed by a single mmap. Its begin address is the image base obtained from the ImageHeader, and its end address is the begin address plus the ImageSpace's size. The log shows:
I/art ( 1200): ImageSpace::Init exiting (29.485ms) SpaceTypeImageSpace begin=0x6f2e9000,end=0x6fce42f8,size=9MB,name="/data/dalvik-cache/arm/system@framework@boot.art"]
In fact, a SpaceBitmap and the boot.oat file sit between the ImageSpace and the next space. The SpaceBitmap, which records the ImageSpace's live bitmap, immediately follows the ImageSpace's end address.
After the ImageSpace is created, OpenOatFile opens the boot.oat file that corresponds to boot.art. boot.oat is an ELF file, and loading it means placing that ELF file at the position dictated by boot.art, namely boot.art's end address rounded up to 4K. The log shows:
I/art ( 1200): oat file begin is 0x6fce5000 ,oat file end is 0x734d6000,oat file data begin is 0x6fce6000 ,oat data end is 0x734d4f48
The oat file's begin address 0x6fce5000 is exactly boot.art's end address 0x6fce42f8 rounded up to 4K (the snippet below checks this). The process of opening and loading the oat file is not detailed here. As I understand it, besides the oat file's load address, the start address of the loaded oatdata section is also dictated by boot.art. An oat file has two sections that set it apart from an ordinary ELF file: oatdata and oatexec. The oatdata section holds the contents of the original dex files, and the oatexec section holds the native machine code compiled from them; oatdata carries links that lead straight to the corresponding native code in oatexec. The oatdata section is delimited by the oatdata and oatexec symbols, and the oatexec section by the oatexec symbol and the position of the oatlastword symbol plus 4.
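A quick check of that claim, using the addresses from the log (AlignUp here is the usual round-up-to-boundary helper, as in ART's utils.h):

#include <cassert>
#include <cstdint>

int main() {
  const uintptr_t kPageSize = 4096;
  uintptr_t image_end = 0x6fce42f8;  // boot.art end address from the log
  // AlignUp(image_end, kPageSize): round up to the next 4K boundary.
  uintptr_t oat_file_begin = (image_end + kPageSize - 1) & ~(kPageSize - 1);
  assert(oat_file_begin == 0x6fce5000);  // matches "oat file begin" in the log
  return 0;
}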
When boot.art is generated, these boundary addresses are already fixed in the ImageHeader: the oat_file_begin parameter is the address at which the oat file is loaded into memory, oat_data_begin_ is the start of the oat file's oatdata section, oat_data_end is the oatlastword-plus-4 address mentioned above, and oat_file_end is the end address of the whole ELF file. All of them are obtained while opening the oat file before boot.art is generated.
/art/compiler/image_writer.cc
ImageHeader image_header(PointerToLowMemUInt32(image_begin_), static_cast<uint32_t>(image_end_),
RoundUp(image_end_, kPageSize),
RoundUp(bitmap_bytes, kPageSize),
PointerToLowMemUInt32(GetImageAddress(image_roots.Get())),
oat_file_->GetOatHeader().GetChecksum(),
PointerToLowMemUInt32(oat_file_begin),
PointerToLowMemUInt32(oat_data_begin_),
PointerToLowMemUInt32(oat_data_end),
PointerToLowMemUInt32(oat_file_end),
compile_pic_);
memcpy(image_->Begin(), &image_header, sizeof(image_header));
Heap::AddSpace appends the ImageSpace to continuous_spaces_, the vector of contiguous spaces. After that, boot.oat's end address is read from the ImageHeader, rounded up to 4K, and used as the preferred begin address of the next MemMap.
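Based on the comment in heap.cc quoted further below ("Continuous spaces are sorted in order of Begin()"), here is a simplified sketch of what AddSpace amounts to for a continuous space; the types are stand-ins, not ART's:

#include <algorithm>
#include <vector>

struct ContinuousSpaceSketch {
  unsigned char* begin;
  unsigned char* Begin() const { return begin; }
};

// Append the space and keep the vector sorted by start address, so that
// heap_begin/heap_end can later be computed from the first and last entries.
void AddContinuousSpace(std::vector<ContinuousSpaceSketch*>& continuous_spaces,
                        ContinuousSpaceSketch* space) {
  continuous_spaces.push_back(space);
  std::sort(continuous_spaces.begin(), continuous_spaces.end(),
            [](const ContinuousSpaceSketch* a, const ContinuousSpaceSketch* b) {
              return a->Begin() < b->Begin();
            });
}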
Next, a MemMap is created starting at that 4K-aligned address just past the end of boot.oat. In the zygote process this MemMap is named "zygote space"; in a non-zygote process it is named "non moving space". Once it is created, the address at 300MB is taken as the preferred begin address of the next MemMap. A new mapping function appears here, MemMap::MapAnonymous, which maps an ashmem-allocated region named "dalvik-xxx" into the process address space and wraps it in a MemMap (a sketch of the underlying mechanism follows the excerpt below).
/art/runtime/gc/heap.cc
// We may use the same space the main space for the non moving space if we don't need to compact
// from the main space.
// This is not the case if we support homogeneous compaction or have a moving background
// collector type.
bool separate_non_moving_space = is_zygote ||
support_homogeneous_space_compaction || IsMovingGc(foreground_collector_type_) ||
IsMovingGc(background_collector_type_);
if (foreground_collector_type == kCollectorTypeGSS) {
separate_non_moving_space = false;
}
std::unique_ptr<MemMap> main_mem_map_1;
std::unique_ptr<MemMap> main_mem_map_2;
byte* request_begin = requested_alloc_space_begin;
if (request_begin != nullptr && separate_non_moving_space) {
request_begin += non_moving_space_capacity;
}
std::string error_str;
std::unique_ptr<MemMap> non_moving_space_mem_map;
if (separate_non_moving_space) {
// If we are the zygote, the non moving space becomes the zygote space when we run
// PreZygoteFork the first time. In this case, call the map "zygote space" since we can't
// rename the mem map later.
const char* space_name = is_zygote ? kZygoteSpaceName: kNonMovingSpaceName;
// Reserve the non moving mem map before the other two since it needs to be at a specific
// address.
non_moving_space_mem_map.reset(
MemMap::MapAnonymous(space_name, requested_alloc_space_begin,
non_moving_space_capacity, PROT_READ | PROT_WRITE, true, &error_str));
CHECK(non_moving_space_mem_map != nullptr) << error_str;
// Try to reserve virtual memory at a lower address if we have a separate non moving space.
request_begin = reinterpret_cast<byte*>(300 * MB);
}
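For the curious, here is a minimal sketch of the mechanism behind MemMap::MapAnonymous on the ashmem-backed path. ashmem_create_region is Android's real API from <cutils/ashmem.h>; everything else is simplified and error handling is trimmed:

#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>

// Android-only API, normally pulled in from <cutils/ashmem.h>.
extern "C" int ashmem_create_region(const char* name, size_t size);

// Sketch only: create a named ashmem region and map it privately at (or near)
// the requested address. The name is what shows up as
// "/dev/ashmem/dalvik-..." in /proc/self/maps.
void* MapAnonymousSketch(const char* name, void* requested_begin, size_t byte_count) {
  int fd = ashmem_create_region(name, byte_count);
  if (fd < 0) {
    return nullptr;
  }
  void* actual = mmap(requested_begin, byte_count, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE, fd, 0);
  close(fd);  // the mapping itself keeps the region alive
  return actual == MAP_FAILED ? nullptr : actual;
}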
main_mem_map_1 is a MemMap named "main space" created at request_begin, which in the zygote process is the 300MB address. In the default configuration, a second MemMap, main_mem_map_2, identical to main_mem_map_1 in everything but its name, is then created at main_mem_map_1's end address.
/art/runtime/gc/heap.cc
// Attempt to create 2 mem maps at or after the requested begin.
main_mem_map_1.reset(MapAnonymousPreferredAddress(kMemMapSpaceName[0], request_begin, capacity_,
PROT_READ | PROT_WRITE, &error_str));
CHECK(main_mem_map_1.get() != nullptr) << error_str;
if (support_homogeneous_space_compaction ||
background_collector_type_ == kCollectorTypeSS ||
foreground_collector_type_ == kCollectorTypeSS) {
main_mem_map_2.reset(MapAnonymousPreferredAddress(kMemMapSpaceName[1], main_mem_map_1->End(),
capacity_, PROT_READ | PROT_WRITE,
&error_str));
CHECK(main_mem_map_2.get() != nullptr) << error_str;
}
Then the MemMap named "zygote space" or "non moving space" obtained earlier is used to create a DlMallocSpace named "zygote / non moving space", and AddSpace appends it to the continuous_spaces_ vector.
/art/runtime/gc/heap.cc
// Create the non moving space first so that bitmaps don't take up the address range.
if (separate_non_moving_space) {
// Non moving space is always dlmalloc since we currently don't have support for multiple
// active rosalloc spaces.
const size_t size = non_moving_space_mem_map->Size();
non_moving_space_ = space::DlMallocSpace::CreateFromMemMap(
non_moving_space_mem_map.release(), "zygote / non moving space", kDefaultStartingSize,
initial_size, size, size, false);
non_moving_space_->SetFootprintLimit(non_moving_space_->Capacity());
CHECK(non_moving_space_ != nullptr) << "Failed creating non moving space "
<< requested_alloc_space_begin;
AddSpace(non_moving_space_);
}
The main_mem_map_1 MemMap mentioned above is used to create a RosAllocSpace named "main rosalloc space". Like a DlMallocSpace it serves memory allocation; the difference lies in the allocation algorithm. This RosAllocSpace is then added to the continuous-spaces vector. Afterwards, main_mem_map_2 is used to create a backup RosAllocSpace identical to the previous one except for its name, "main rosalloc space 1".
/art/runtime/gc/heap.cc
// Create other spaces based on whether or not we have a moving GC.
if (IsMovingGc(foreground_collector_type_) && foreground_collector_type_ != kCollectorTypeGSS) {
// Create bump pointer spaces.
// We only to create the bump pointer if the foreground collector is a compacting GC.
// TODO: Place bump-pointer spaces somewhere to minimize size of card table.
bump_pointer_space_ = space::BumpPointerSpace::CreateFromMemMap("Bump pointer space 1",
main_mem_map_1.release());
CHECK(bump_pointer_space_ != nullptr) << "Failed to create bump pointer space";
AddSpace(bump_pointer_space_);
temp_space_ = space::BumpPointerSpace::CreateFromMemMap("Bump pointer space 2",
main_mem_map_2.release());
CHECK(temp_space_ != nullptr) << "Failed to create bump pointer space";
AddSpace(temp_space_);
CHECK(separate_non_moving_space);
} else {
CreateMainMallocSpace(main_mem_map_1.release(), initial_size, growth_limit_, capacity_);
CHECK(main_space_ != nullptr);
AddSpace(main_space_);
if (!separate_non_moving_space) {
non_moving_space_ = main_space_;
CHECK(!non_moving_space_->CanMoveObjects());
}
if (foreground_collector_type_ == kCollectorTypeGSS) {
...
} else if (main_mem_map_2.get() != nullptr) {
const char* name = kUseRosAlloc ? kRosAllocSpaceName[1] : kDlMallocSpaceName[1];
main_space_backup_.reset(CreateMallocSpaceFromMemMap(main_mem_map_2.release(), initial_size,
growth_limit_, capacity_, name, true));
CHECK(main_space_backup_.get() != nullptr);
// Add the space so its accounted for in the heap_begin and heap_end.
AddSpace(main_space_backup_.get());
}
The last space created is the large object space. On 64-bit processors the large object space takes the form of a FreeListSpace, and on 32-bit processors a LargeObjectMapSpace. Notably, AddSpace does not put the LargeObjectMapSpace into the continuous-spaces vector but into discontinuous_spaces_, the vector of discontinuous spaces. That is because the large object space only mmaps memory at allocation time, at an address chosen by the system, so its allocations are not contiguous (a sketch follows the excerpt below).
/art/runtime/gc/heap.cc
// Allocate the large object space.
if (kUseFreeListSpaceForLOS) {
large_object_space_ = space::FreeListSpace::Create("large object space", nullptr, capacity_);
} else {
large_object_space_ = space::LargeObjectMapSpace::Create("large object space");
}
CHECK(large_object_space_ != nullptr) << "Failed to create large object space";
AddSpace(large_object_space_);
// Compute heap capacity. Continuous spaces are sorted in order of Begin().
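To illustrate the discontinuity, here is a stripped-down sketch in the spirit of LargeObjectMapSpace (the class and its bookkeeping are hypothetical simplifications of /art/runtime/gc/space/large_object_space.cc): every allocation is served by its own anonymous mmap, so the kernel picks each address independently.

#include <sys/mman.h>
#include <cstddef>
#include <map>

class LargeObjectMapSpaceSketch {
 public:
  void* Alloc(size_t byte_count) {
    // A fresh mapping per object: consecutive allocations need not be adjacent.
    void* mem = mmap(nullptr, byte_count, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) return nullptr;
    mem_maps_[mem] = byte_count;  // track the region for Free()
    return mem;
  }
  void Free(void* obj) {
    auto it = mem_maps_.find(obj);
    if (it != mem_maps_.end()) {
      munmap(it->first, it->second);
      mem_maps_.erase(it);
    }
  }
 private:
  std::map<void*, size_t> mem_maps_;  // object address -> region size
};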
Finally the card table is created; the GC uses it to track writes into the heap.
/art/runtime/gc/heap.cc
// Allocate the card table.
card_table_.reset(accounting::CardTable::Create(heap_begin, heap_capacity));
CHECK(card_table_.get() != NULL) << "Failed to create card table";
// Card cache for now since it makes it easier for us to update the references to the copying
// spaces.
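As background on how the card table is used: the heap is divided into small cards (128 bytes in this version, i.e. kCardShift == 7), and storing a reference into an object marks the object's card dirty, so a concurrent or generational GC can rescan only the dirty cards instead of the whole heap. Below is a simplified sketch of the marking step, mirroring the biased-base trick of accounting::CardTable (treat the constants as illustrative):

#include <cstddef>
#include <cstdint>

// CardTable keeps a "biased" base pointer so that (addr >> kCardShift)
// indexes the card byte directly.
static constexpr size_t kCardShift = 7;   // 128-byte cards
static constexpr uint8_t kCardDirty = 0x70;

inline void MarkCard(uint8_t* biased_begin, const void* obj_addr) {
  biased_begin[reinterpret_cast<uintptr_t>(obj_addr) >> kCardShift] = kCardDirty;
}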
Each time the zygote is about to fork a process, Heap::PreZygoteFork runs; on its first run it splits non_moving_space_ in two, yielding a "Zygote space" and a "non moving space" (the process is not detailed here, but a conceptual sketch follows). The "Zygote space" is used to share data between the zygote process and the processes forked from it, while the "non moving space" holds non-movable objects, that is, objects allocated through the AllocNonMovableObject interface.
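Conceptually, the split freezes everything the zygote has allocated so far and hands the rest of the mapping to the new non moving space. This is a sketch of the idea only, with a hypothetical range type (the real logic is Heap::PreZygoteFork in heap.cc):

#include <cstdint>

// [begin, end) holds what the zygote has allocated so far and becomes the
// shared "Zygote space"; [end, limit) becomes the new "non moving space".
struct RangeSketch {
  uint8_t* begin;
  uint8_t* end;    // current allocation break
  uint8_t* limit;  // capacity of the original mapping
};

void SplitAtBreak(const RangeSketch& old_space,
                  RangeSketch* zygote_space, RangeSketch* non_moving_space) {
  *zygote_space = {old_space.begin, old_space.end, old_space.end};      // frozen
  *non_moving_space = {old_space.end, old_space.end, old_space.limit};  // empty
}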