I'm compressing images with this function:
NSData *fData = UIImageJPEGRepresentation(self.photo, 1.0);
At quality 1.0 the image comes out around 7 MB. If the compression quality is 0.5, e.g.:
NSData *fData = UIImageJPEGRepresentation(self.photo, 0.5);
then the compressed image is about 1 MB. Now here's the problem.
At a quality of 0.5, the file does not shrink to 0.5× its original size; by my measurement it is roughly 0.14×, so the quality parameter doesn't map to the output size in any fixed way.
For example, a 500 KB image at quality 0.5 might come out at a little over 100 KB, which is exactly what I want. But the 7 MB image above, compressed at that same ratio, is still 1 MB, so the results swing between large and small. For users that is clearly a poor experience.
I'd like to compress images adaptively: without Wi-Fi, any image, whatever its original size, should come out at no more than 200 KB; with Wi-Fi, no more than 700 KB.
Is there a good algorithm for this, or an open-source function?
--------------------------
Here's a related function:
// Scale and crop an image to a target size
- (UIImage *)imageByScalingAndCroppingForSize:(CGSize)targetSize
{
    UIImage *sourceImage = self;
    UIImage *newImage = nil;
    CGSize imageSize = sourceImage.size;
    CGFloat width = imageSize.width;
    CGFloat height = imageSize.height;
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;
    CGFloat scaleFactor = 0.0;
    CGFloat scaledWidth = targetWidth;
    CGFloat scaledHeight = targetHeight;
    CGPoint thumbnailPoint = CGPointMake(0.0, 0.0);

    if (CGSizeEqualToSize(imageSize, targetSize) == NO)
    {
        CGFloat widthFactor = targetWidth / width;
        CGFloat heightFactor = targetHeight / height;

        if (widthFactor > heightFactor)
            scaleFactor = widthFactor; // scale to fit height
        else
            scaleFactor = heightFactor; // scale to fit width

        scaledWidth = width * scaleFactor;
        scaledHeight = height * scaleFactor;

        // center the image
        if (widthFactor > heightFactor)
        {
            thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
        }
        else if (widthFactor < heightFactor)
        {
            thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
        }
    }

    UIGraphicsBeginImageContext(targetSize); // this will crop

    CGRect thumbnailRect = CGRectZero;
    thumbnailRect.origin = thumbnailPoint;
    thumbnailRect.size.width = scaledWidth;
    thumbnailRect.size.height = scaledHeight;

    [sourceImage drawInRect:thumbnailRect];

    newImage = UIGraphicsGetImageFromCurrentImageContext();
    if (newImage == nil)
        NSLog(@"could not scale image");

    // pop the context to get back to the default
    UIGraphicsEndImageContext();

    return newImage;
}
The second parameter is the compression quality, and setting it cannot guarantee a particular file size, because the compressed size also depends on the image content; for example, if the colors in your image are very similar, the compressed file will come out smaller. As for the Wi-Fi case you care about: do you need to upload these images? If so, you could generate two versions of each image, a high-quality one and a low-quality one, and upload the high-quality version on Wi-Fi and the low-quality version otherwise.
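A minimal sketch of that two-version idea, assuming you already have a way to tell whether the device is on Wi-Fi (for example via Apple's Reachability sample code); the method name and the two quality values below are just illustrative:

// Pick the JPEG quality based on an already-determined network state.
// `isOnWiFi` is assumed to come from your own reachability check.
- (NSData *)uploadDataForPhoto:(UIImage *)photo onWiFi:(BOOL)isOnWiFi {
    CGFloat quality = isOnWiFi ? 0.8f : 0.3f; // higher quality on Wi-Fi, lower on cellular
    return UIImageJPEGRepresentation(photo, quality);
}

Note that this only biases the file size; as discussed above, a fixed quality factor still cannot guarantee a hard byte limit.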
UIImageJPEGRepresentation(self.photo, 0.5);
For any given image, JPEG compression bottoms out at some minimum size; past that point, lowering the quality factor further does nothing.
If the result is still too large, the only option left is to scale the image down:
- (NSData *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return UIImageJPEGRepresentation(newImage, 0.8);
}
You can use a loop that keeps stepping the quality down to approach the size you want:
- (UIImage *)compressImage:(UIImage *)image toMaxFileSize:(NSInteger)maxFileSize {
    CGFloat compression = 0.9f;
    CGFloat maxCompression = 0.1f;
    NSData *imageData = UIImageJPEGRepresentation(image, compression);
    while ([imageData length] > maxFileSize && compression > maxCompression) {
        compression -= 0.1;
        imageData = UIImageJPEGRepresentation(image, compression);
    }
    UIImage *compressedImage = [UIImage imageWithData:imageData];
    return compressedImage;
}
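One caveat with the loop above: stepping by 0.1 re-encodes the JPEG up to nine times, and decoding the result back into a UIImage means a later re-encode may no longer respect the limit. A rough alternative sketch with the same goal (the method name is mine): binary-search the quality factor and return the NSData directly.

- (NSData *)jpegDataForImage:(UIImage *)image maxFileSize:(NSUInteger)maxFileSize {
    CGFloat lo = 0.0f, hi = 1.0f;
    // Start from the smallest this image can get; if even this exceeds maxFileSize,
    // quality alone cannot help and the dimensions must be reduced instead.
    NSData *best = UIImageJPEGRepresentation(image, lo);
    for (int i = 0; i < 6; i++) {
        CGFloat mid = (lo + hi) / 2.0f;
        NSData *candidate = UIImageJPEGRepresentation(image, mid);
        if (candidate.length <= maxFileSize) {
            best = candidate; // fits: try a higher quality
            lo = mid;
        } else {
            hi = mid;         // too big: try a lower quality
        }
    }
    return best;
}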
"Compressing" an image is really two different operations:
1. Compressing the file: the file gets smaller, but the pixel count and dimensions stay the same, so quality may degrade.
2. Shrinking the image: the pixel dimensions get smaller, i.e. fewer pixels, and the file size drops as a result.
UIImageJPEGRepresentation(image, 0.0) does 1.
[sourceImage drawInRect:CGRectMake(0, 0, targetWidth, targetHeight)] does 2.
So you have to combine the two to meet your requirement; if you rely on 1 alone, the image turns hopelessly blurry while its dimensions stay huge.
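Roughly what combining the two looks like, as a sketch (the pixel cap, byte budget, and method name are placeholders, not fixed recommendations): first shrink the pixel dimensions, then walk the quality down until the data fits.

- (NSData *)dataForImage:(UIImage *)image maxPixelSize:(CGFloat)maxPixelSize maxBytes:(NSUInteger)maxBytes {
    // Step 1 (shrink): scale down so the longer side is at most maxPixelSize.
    CGSize size = image.size;
    CGFloat longest = MAX(size.width, size.height);
    if (longest > maxPixelSize) {
        CGFloat ratio = maxPixelSize / longest;
        CGSize newSize = CGSizeMake(size.width * ratio, size.height * ratio);
        UIGraphicsBeginImageContext(newSize);
        [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
        image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    // Step 2 (compress): lower the JPEG quality until the data fits, or we hit the floor.
    CGFloat quality = 0.9f;
    NSData *data = UIImageJPEGRepresentation(image, quality);
    while (data.length > maxBytes && quality > 0.1f) {
        quality -= 0.1f;
        data = UIImageJPEGRepresentation(image, quality);
    }
    return data; // may still exceed maxBytes for extreme inputs; shrink further if needed
}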
I have an image that may be 14 MB or more and want to get it under 500 KB. Even with UIImageJPEGRepresentation(image, 0.0), i.e. the quality factor at its minimum, the result is still larger than 500 KB. Is there any way to compress an image of arbitrary size down to a specified byte size, ignoring sharpness entirely? Has the OP solved this? Looking for an answer!
- (UIImage *)imageCompressForWidth:(UIImage *)sourceImage targetWidth:(CGFloat)defineWidth
{
    CGSize imageSize = sourceImage.size;
    CGFloat width = imageSize.width;
    CGFloat height = imageSize.height;
    CGFloat targetWidth = defineWidth;
    CGFloat targetHeight = (targetWidth / width) * height;
    UIGraphicsBeginImageContext(CGSizeMake(targetWidth, targetHeight));
    [sourceImage drawInRect:CGRectMake(0, 0, targetWidth, targetHeight)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
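To the 500 KB question above: once the quality factor has bottomed out, the only remaining lever is the pixel count. A rough sketch reusing the resize function above (it assumes both methods live in the same class; the 0.5 quality and the 100-point width floor are arbitrary choices):

- (NSData *)dataByShrinkingImage:(UIImage *)image toMaxBytes:(NSUInteger)maxBytes {
    NSData *data = UIImageJPEGRepresentation(image, 0.5);
    CGFloat width = image.size.width;
    // Halving the width (and, proportionally, the height) roughly quarters the pixel count each pass.
    while (data.length > maxBytes && width > 100.0f) {
        width *= 0.5f;
        image = [self imageCompressForWidth:image targetWidth:width];
        data = UIImageJPEGRepresentation(image, 0.5);
    }
    return data;
}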
I was recently building a forum feature where posting requires picking photos from the album and uploading them. Since up to 9 images can be uploaded at a time, the images need to be compressed. At first I used the compression methods I had always used:
// Reduce image quality
+ (UIImage *)reduceImage:(UIImage *)image percent:(float)percent
{
    NSData *imageData = UIImageJPEGRepresentation(image, percent);
    UIImage *newImage = [UIImage imageWithData:imageData];
    return newImage;
}

// Reduce image dimensions
+ (UIImage *)imageWithImageSimple:(UIImage *)image scaledToSize:(CGSize)newSize
{
    // Create a graphics image context
    UIGraphicsBeginImageContext(newSize);
    // Draw into the new size
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    // Get the new image from the context
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    // End the context
    UIGraphicsEndImageContext();
    // Return the new image
    return newImage;
}
The methods above are common enough, but they require decoding the full image into memory to process it; once the number of images grows, you get memory warnings and the app crashes. After a lot of digging I finally found a solution in a blog post:
static size_t getAssetBytesCallback(void *info, void *buffer, off_t position, size_t count) {
    ALAssetRepresentation *rep = (__bridge id)info;
    NSError *error = nil;
    size_t countRead = [rep getBytes:(uint8_t *)buffer fromOffset:position length:count error:&error];
    if (countRead == 0 && error) {
        // We have no way of passing this info back to the caller, so we log it, at least.
        NSLog(@"thumbnailForAsset:maxPixelSize: got an error reading an asset: %@", error);
    }
    return countRead;
}

static void releaseAssetCallback(void *info) {
    // The info here is an ALAssetRepresentation which we CFRetain in thumbnailForAsset:maxPixelSize:.
    // This release balances that retain.
    CFRelease(info);
}

// Returns a UIImage for the given asset, with size length at most the passed size.
// The resulting UIImage will be already rotated to UIImageOrientationUp, so its CGImageRef
// can be used directly without additional rotation handling.
// This is done synchronously, so you should call this method on a background queue/thread.
- (UIImage *)thumbnailForAsset:(ALAsset *)asset maxPixelSize:(NSUInteger)size {
    NSParameterAssert(asset != nil);
    NSParameterAssert(size > 0);

    ALAssetRepresentation *rep = [asset defaultRepresentation];

    CGDataProviderDirectCallbacks callbacks = {
        .version = 0,
        .getBytePointer = NULL,
        .releaseBytePointer = NULL,
        .getBytesAtPosition = getAssetBytesCallback,
        .releaseInfo = releaseAssetCallback,
    };

    CGDataProviderRef provider = CGDataProviderCreateDirect((void *)CFBridgingRetain(rep), [rep size], &callbacks);
    CGImageSourceRef source = CGImageSourceCreateWithDataProvider(provider, NULL);

    CGImageRef imageRef = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef) @{
        (NSString *)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
        (NSString *)kCGImageSourceThumbnailMaxPixelSize : @(size),
        (NSString *)kCGImageSourceCreateThumbnailWithTransform : @YES,
    });
    CFRelease(source);
    CFRelease(provider);

    if (!imageRef) {
        return nil;
    }

    UIImage *toReturn = [UIImage imageWithCGImage:imageRef];
    CFRelease(imageRef);
    return toReturn;
}
With this approach, memory usage stays very low!
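For reference, a minimal usage sketch of thumbnailForAsset:maxPixelSize: (the asset URL is assumed to be one you already have, e.g. from the image picker; the 1024-pixel cap and 0.8 quality are arbitrary, and the method is presumed to live on the same class):

ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library assetForURL:assetURL resultBlock:^(ALAsset *asset) {
    // The thumbnail is built synchronously, so move it off the main thread.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        UIImage *thumbnail = [self thumbnailForAsset:asset maxPixelSize:1024];
        NSData *uploadData = UIImageJPEGRepresentation(thumbnail, 0.8);
        // hand uploadData to the upload code here
    });
} failureBlock:^(NSError *error) {
    NSLog(@"could not load asset: %@", error);
}];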