A previous project had this requirement: a page that can detect the dominant color of a selected region of an image. The effect is shown in the screenshot below.
Here is the basic idea for getting the dominant color of a region:
First, get an image from the photo library or the camera;
Next, crop the region of interest out of that image;
Then, compute the dominant color of the cropped image.
That is the ideal flow, but there are a few obstacles: the image has to be fitted to the imageView (whose width and height are fixed), otherwise the cropped region is not the region you actually wanted. So the image first needs to be scaled proportionally to the imageView's size.
Now let's get straight to the code.
First, we need to scale the image proportionally:
/**
 * Scale an image proportionally to fit a target size
 *
 * @param img  the source image
 * @param size the target size after scaling
 *
 * @return the scaled image, centered within the target size
 */
+ (UIImage *)scaleToSize:(UIImage *)img size:(CGSize)size {
    CGFloat width = CGImageGetWidth(img.CGImage);
    CGFloat height = CGImageGetHeight(img.CGImage);
    CGFloat max = width >= height ? width : height;
    CGSize originSize;
    if (max <= 0) {
        return nil;
    }
    // Aspect-fit: scale so the longer side fills the target size
    if (width >= height) {
        originSize = CGSizeMake(size.width, (size.width * height) / width);
    } else {
        originSize = CGSizeMake((size.height * width) / height, size.height);
    }
    // Create a bitmap context and make it the current context
    UIGraphicsBeginImageContext(size);
    // Draw the image, centered, at the scaled size
    [img drawInRect:CGRectMake((size.width - originSize.width) / 2, (size.height - originSize.height) / 2, originSize.width, originSize.height)];
    // Create the resized image from the current context
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    // Pop the current context off the stack
    UIGraphicsEndImageContext();
    // Return the resized image
    return scaledImage;
}
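For example, after picking an image from the photo library or the camera, it can be scaled to the imageView's fixed size before being displayed, so that later crop coordinates match what is on screen. A minimal usage sketch (the ColorPickHelper class name, the imageView property, and the asset name are assumptions for illustration):
// Hypothetical usage: scale the picked image to the imageView's fixed size.
// ColorPickHelper is an assumed class that collects the methods in this post.
UIImage *picked = [UIImage imageNamed:@"sample_photo"]; // hypothetical asset name
UIImage *fitted = [ColorPickHelper scaleToSize:picked size:self.imageView.bounds.size];
self.imageView.image = fitted; // shows the aspect-fit result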
Next comes cropping the region of interest; here we crop a 10×10 square:
// Crop an image to the given rect
+ (UIImage *)imageFromImage:(UIImage *)image inRect:(CGRect)rect {
    CGImageRef sourceImageRef = [image CGImage];
    CGImageRef newImageRef = CGImageCreateWithImageInRect(sourceImageRef, rect);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    return newImage;
}
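Note that CGImageCreateWithImageInRect crops the underlying CGImage, so the rect is in pixel coordinates. For images produced by the scaleToSize: method above the scale factor is 1, so points and pixels coincide; for other images you would multiply by image.scale. A sketch of cropping a 10×10 square around a tapped point, continuing the previous snippet (the tap location is an assumption):
// Crop a 10x10 square centered on a tapped point (coordinates are in the
// scaled image's space; image.scale is 1 for images from scaleToSize:).
CGPoint tap = CGPointMake(120, 80); // hypothetical tap location
CGRect region = CGRectMake(tap.x - 5, tap.y - 5, 10, 10);
UIImage *patch = [ColorPickHelper imageFromImage:fitted inRect:region];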
Then get the dominant color of the image:
/**
 * Get the dominant color of an image
 *
 * @param image the source image
 * @param scale sampling precision, 0.1 to 1
 *
 * @return the dominant color of the image as RGB components
 */
+ (NSDictionary *)mostColor:(UIImage *)image scale:(CGFloat)scale {
#if __IPHONE_OS_VERSION_MAX_ALLOWED > __IPHONE_6_1
    int bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;
#else
    int bitmapInfo = kCGImageAlphaPremultipliedLast;
#endif
    if (scale <= 0.1) {
        scale = 0.1;
    } else if (scale >= 1) {
        scale = 1;
    }
    // Step 1: shrink the image to speed up the calculation.
    // The smaller it gets, the larger the potential error.
    CGSize thumbSize = CGSizeMake([image size].width * scale, [image size].height * scale);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 thumbSize.width,
                                                 thumbSize.height,
                                                 8, // bits per component
                                                 thumbSize.width * 4,
                                                 colorSpace,
                                                 bitmapInfo);
    CGRect drawRect = CGRectMake(0, 0, thumbSize.width, thumbSize.height);
    CGContextDrawImage(context, drawRect, image.CGImage);
    CGColorSpaceRelease(colorSpace);
    // Step 2: read the pixel value at every point
    unsigned char *data = CGBitmapContextGetData(context);
    if (data == NULL) {
        CGContextRelease(context);
        return nil;
    }
    NSCountedSet *cls = [NSCountedSet setWithCapacity:thumbSize.width * thumbSize.height];
    for (int x = 0; x < thumbSize.height; x++) {
        for (int y = 0; y < thumbSize.width; y++) {
            int offset = 4 * (x * thumbSize.width + y);
            int red = data[offset];
            int green = data[offset + 1];
            int blue = data[offset + 2];
            int alpha = data[offset + 3];
            NSArray *clr = @[@(red), @(green), @(blue), @(alpha)];
            [cls addObject:clr];
        }
    }
    CGContextRelease(context);
    // Step 3: find the color that appears most often
    NSEnumerator *enumerator = [cls objectEnumerator];
    NSArray *curColor = nil;
    NSArray *maxColor = nil;
    NSUInteger maxCount = 0;
    while ((curColor = [enumerator nextObject]) != nil) {
        NSUInteger tmpCount = [cls countForObject:curColor];
        if (tmpCount < maxCount) continue;
        maxCount = tmpCount;
        maxColor = curColor;
    }
    // Return the RGB components
    NSMutableDictionary *dic = [[NSMutableDictionary alloc] initWithCapacity:0];
    [dic setValue:@([maxColor[0] intValue] / 255.0f) forKey:@"red"];
    [dic setValue:@([maxColor[1] intValue] / 255.0f) forKey:@"green"];
    [dic setValue:@([maxColor[2] intValue] / 255.0f) forKey:@"blue"];
    return dic;
}
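Chaining the three steps together, the returned dictionary can be turned back into a UIColor. A sketch continuing the snippets above (colorPreviewView is an assumed view used to show the result):
// Dominant color of the cropped patch; the patch is tiny, so scale 1.0 is cheap.
NSDictionary *rgb = [ColorPickHelper mostColor:patch scale:1.0];
if (rgb != nil) {
    UIColor *dominant = [UIColor colorWithRed:[rgb[@"red"] floatValue]
                                        green:[rgb[@"green"] floatValue]
                                         blue:[rgb[@"blue"] floatValue]
                                        alpha:1.0];
    self.colorPreviewView.backgroundColor = dominant; // hypothetical preview view
}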
Getting the dominant color of an image region is really that simple. Below is the code for reading the color at a single point:
/**
 * Get the color of a single point in an image
 *
 * @param point the tapped point
 * @param image the source image
 * @param rect  the bounds the point must fall inside
 *
 * @return the color at the tapped point
 */
+ (UIColor *)colorAtPixel:(CGPoint)point UIImage:(UIImage *)image CGRect:(CGRect)rect {
    // Cancel if the point is outside the image coordinates
    if (!CGRectContainsPoint(CGRectMake(0.0f, 0.0f, rect.size.width, rect.size.height), point)) {
        return nil;
    }
    NSInteger pointX = trunc(point.x);
    NSInteger pointY = trunc(point.y);
    CGImageRef cgImage = image.CGImage;
    NSUInteger width = image.size.width;
    NSUInteger height = image.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    int bytesPerPixel = 4;
    int bytesPerRow = bytesPerPixel * 1;
    NSUInteger bitsPerComponent = 8;
    unsigned char pixelData[4] = { 0, 0, 0, 0 };
    // 1x1 bitmap context that will hold just the pixel we care about
    CGContextRef context = CGBitmapContextCreate(pixelData,
                                                 1,
                                                 1,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    // Draw the pixel we are interested in onto the bitmap context
    CGContextTranslateCTM(context, -pointX, pointY - (CGFloat)height);
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);
    CGContextRelease(context);
    // Convert color values [0..255] to floats [0.0..1.0]
    CGFloat red = (CGFloat)pixelData[0] / 255.0f;
    CGFloat green = (CGFloat)pixelData[1] / 255.0f;
    CGFloat blue = (CGFloat)pixelData[2] / 255.0f;
    CGFloat alpha = (CGFloat)pixelData[3] / 255.0f;
    return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}
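This single-point variant can be wired up to a tap gesture on the imageView. A hedged sketch of a handler (the gesture setup, the imageView, and the colorPreviewView are assumptions):
// Hypothetical tap handler: read the color under the finger and show it.
- (void)handleTap:(UITapGestureRecognizer *)tap {
    CGPoint location = [tap locationInView:self.imageView];
    UIColor *color = [ColorPickHelper colorAtPixel:location
                                           UIImage:self.imageView.image
                                            CGRect:self.imageView.bounds];
    if (color != nil) {
        self.colorPreviewView.backgroundColor = color;
    }
}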
This achieves the effect shown in the screenshot above. The page is a bit plain, but the feature works. If anything here is incorrect, please leave a comment.
Original article: https://blog.csdn.net/pengf_wuxiaowu/article/details/63688398