Using Semaphores in iOS

Date: 2021-07-16 15:14:55

 

Core Audio render thread and thread signalling


Does iOS have any kind of very low level condition lock that does not include locking?

I am looking for a way to signal an awaiting thread from within the Core Audio render thread, without the usage of locks. I was wondering whether something as low level as a Mach system call might exist.

Right now I have a Core Audio thread that uses a non-blocking thread-safe message queue to send messages to another thread. The other thread then polls every 100 ms to see whether messages are available in the queue.

But this is very rudimentary and the timing is awful. I could use condition locks, but that involves locking, and I would like to keep any kind of locking out of the rendering thread.

What I am looking for is having the message queue thread wait until the Core Audio render thread signals it. Just like pthread conditions, but without locking and without immediate context switching? I would like the Core Audio thread to complete before the message queue thread is woken up.

 
    
You want to do inter-thread signaling without any locking and/or context-switching? If you are queueing audio buffer pointers, why would locking and/or context-switching be any kind of bottleneck? – Martin James Dec 30 '13 at 17:35 
    
This is not about the audio rendering. It's about the Core Audio thread sending messages of what's going on: it switched playback from one audio buffer to another (a track change), or it has run out of audio frames to play (a buffer underrun), etc. So what happens is, it puts this message in a struct, which it writes to a non-blocking circular buffer. The buffer is then checked every now and then (100ms) by another thread, which in turn acts on the messages it receives from the rendering thread. So I am basically just looking for a way to push to another thread instead of polling every 100ms. – Trenskow Dec 30 '13 at 17:41 
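For context, a single-producer/single-consumer ring buffer of the kind described in this comment might look roughly like the sketch below. This is illustrative only, not the poster's actual code; the Message fields, QUEUE_CAPACITY, and function names are made up.

#include <stdatomic.h>
#include <stdbool.h>

typedef struct { int type; int data; } Message;

#define QUEUE_CAPACITY 64   // power of two, so index arithmetic wraps cleanly

typedef struct {
    Message items[QUEUE_CAPACITY];
    _Atomic unsigned head;  // advanced only by the consumer
    _Atomic unsigned tail;  // advanced only by the producer
} MessageQueue;

// Called from the render thread: never blocks, drops the message if the queue is full.
static bool queue_push(MessageQueue *q, Message m) {
    unsigned tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&q->head, memory_order_acquire);
    if (tail - head == QUEUE_CAPACITY) return false;   // full
    q->items[tail % QUEUE_CAPACITY] = m;
    atomic_store_explicit(&q->tail, tail + 1, memory_order_release);
    return true;
}

// Called from the consumer thread.
static bool queue_pop(MessageQueue *q, Message *out) {
    unsigned head = atomic_load_explicit(&q->head, memory_order_relaxed);
    unsigned tail = atomic_load_explicit(&q->tail, memory_order_acquire);
    if (head == tail) return false;                    // empty
    *out = q->items[head % QUEUE_CAPACITY];
    atomic_store_explicit(&q->head, head + 1, memory_order_release);
    return true;
}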

1 Answer


Updated

dispatch_semaphore_t works well and is more efficient than a Mach semaphore_t. Rewritten to use a dispatch semaphore, the original code looks like this:

#include <dispatch/dispatch.h>

// Declare mSemaphore somewhere it is available to multiple threads
dispatch_semaphore_t mSemaphore;


// Create the semaphore
mSemaphore = dispatch_semaphore_create(0);
// Handle error if(nullptr == mSemaphore)


// ===== RENDER THREAD
// An event happens in the render thread- set a flag and signal whoever is waiting
/*long result =*/ dispatch_semaphore_signal(mSemaphore);


// ===== OTHER THREAD
// Check the flags and act on the state change
// Wait for a signal for 2 seconds
/*long result =*/ dispatch_semaphore_wait(mSemaphore, dispatch_time(DISPATCH_TIME_NOW, 2 * NSEC_PER_SEC));


// Clean up when finished
dispatch_release(mSemaphore);

Original answer:

You can use a mach semaphore_t for this purpose. I've written a C++ class that encapsulates the functionality: https://github.com/sbooth/SFBAudioEngine/blob/master/Semaphore.cpp

Whether you end up using my wrapper or rolling your own, the code will look roughly like:

#include <mach/mach.h>
#include <mach/task.h>

// Declare mSemaphore somewhere it is available to multiple threads
semaphore_t mSemaphore;


// Create the semaphore
kern_return_t result = semaphore_create(mach_task_self(), &mSemaphore, SYNC_POLICY_FIFO, 0);
// Handle error if(result != KERN_SUCCESS)


// ===== RENDER THREAD
// An event happens in the render thread- set a flag and signal whoever is waiting
kern_return_t result = semaphore_signal(mSemaphore);
// Handle error if(result != KERN_SUCCESS)


// ===== OTHER THREAD
// Check the flags and act on the state change
// Wait for a signal for 2 seconds
mach_timespec_t duration = {
.tv_sec = 2,
.tv_nsec = 0
};

kern_return_t result = semaphore_timedwait(mSemaphore, duration);

// Handle timeout if(result == KERN_OPERATION_TIMED_OUT)

// Handle error if(result != KERN_SUCCESS)


// Clean up when finished
kern_return_t result = semaphore_destroy(mach_task_self(), mSemaphore);
// Handle error if(result != KERN_SUCCESS)
 
    
Thank you! This was exactly what I was looking for. I would give you a thousand in reputation if I could. I knew there would be some low level stuff I could use. Thanks a lot! PS. Will take a look at your wrapper. – Trenskow Dec 31 '13 at 0:42
    
It works like a charm... – Trenskow Dec 31 '13 at 1:46
    
I'm glad to hear it – sbooth Dec 31 '13 at 1:55
    
Using a lock-free queue or circular FIFO to communicate from real-time audio callbacks to the UI has been recommended by several audio developers. Polling at the display frame rate (60 Hz, i.e. 16.6 ms, or a CADisplayLink frame interval of 1, not 100 ms) will allow updating the UI at full frame rate with the minimum number of threads. – hotpaw2 Dec 31 '13 at 20:32 
    
@hotpaw2 This is definitely true, and depending on the purpose even updating at the display frame rate can be excessive (for example, when updating the playback time, 5 times per second is often adequate). – sbooth Jan 1 '14 at 15:52
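A rough sketch of the polling approach hotpaw2 describes, assuming a queue_pop function like the ring-buffer sketch above and an ivar named _messageQueue (both illustrative, not from the original answer):

// Poll the lock-free queue once per display frame instead of every 100 ms.
- (void)startPolling {
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(drainMessageQueue:)];
    link.frameInterval = 1; // fire every frame (~16.6 ms at 60 Hz)
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)drainMessageQueue:(CADisplayLink *)link {
    Message msg;
    while (queue_pop(&_messageQueue, &msg)) {
        // Act on track changes, buffer underruns, etc. on the main thread
    }
}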

 

// Create a semaphore with an initial value of 0
dispatch_semaphore_t sema = dispatch_semaphore_create(0);
// Signal when the operation finishes; this increments the semaphore by 1
ABAddressBookRequestAccessWithCompletion(addressBook, ^(bool granted, CFErrorRef error) {

    dispatch_semaphore_signal(sema);

});
// At this point the semaphore is 0, so the thread blocks until the completion
// handler above signals it (+1), which unblocks the wait
dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);
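If blocking forever is risky (for example when waiting on the main thread), dispatch_semaphore_wait also accepts a timeout instead of DISPATCH_TIME_FOREVER; a small variation of the example above (the 5-second value is arbitrary):

// Wait at most 5 seconds; dispatch_semaphore_wait returns non-zero
// if the timeout elapsed before the semaphore was signalled.
long timedOut = dispatch_semaphore_wait(sema, dispatch_time(DISPATCH_TIME_NOW, 5 * NSEC_PER_SEC));
if (timedOut != 0) {
    NSLog(@"timed out waiting for the completion handler");
}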

 

// Create a group
dispatch_group_t group = dispatch_group_create();
// Create a semaphore with an initial value of 10
dispatch_semaphore_t semaphore = dispatch_semaphore_create(10);
// Get the default global concurrent queue
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
for (int i = 0; i < 100; i++)
{
    // Because the semaphore starts at 10, at most 10 tasks run at a time
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
    // Add the task to the group so it can be monitored
    dispatch_group_async(group, queue, ^{
        NSLog(@"%i", i);
        sleep(2);
        dispatch_semaphore_signal(semaphore);
    });
}
// Block until every task in the group has completed
dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
dispatch_release(group);
dispatch_release(semaphore);
 
   

block: blocks are a C runtime feature, similar to function pointers, used for callbacks. They are mainly used with concurrent threads.
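A minimal illustration of the syntax (illustrative names only): a block is declared much like a function pointer, but it can capture variables from the enclosing scope.

int factor = 3;
int (^multiply)(int) = ^(int x) {
    return x * factor;   // captures 'factor' from the surrounding scope
};
NSLog(@"%d", multiply(4)); // prints 12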

// Create a dispatch queue. The first parameter is the queue name; the second is a reserved
// dispatch_queue attribute, so pass NULL.
// Alternatively, dispatch_queue_t dispatch_get_global_queue(long priority, unsigned long flags)
// returns a global queue, where priority selects the queue's priority.
dispatch_queue_t queue = dispatch_queue_create("test_queue", NULL);
//dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
// Submit a block to a dispatch_queue; it runs once it is scheduled. A queue can hold several
// blocks, dequeued FIFO: blocks submitted earlier start earlier, and blocks submitted later
// always start later, though on a concurrent queue several blocks may run at the same time.
// Here the actual result is that the second block runs only after the first finishes.
dispatch_sync(queue, ^(void){
    for (int i = 0; i < 100; ++i) {
        NSLog(@"i:%d", i);
    }
});

Semaphore: sem

// Create semaphores with an initial value of 0 (it must not be below 0), meaning the work is
// not done yet: there is no resource available, so the main thread should not proceed.
__block dispatch_semaphore_t sem = dispatch_semaphore_create(0);
__block dispatch_semaphore_t sem2 = dispatch_semaphore_create(0);

dispatch_queue_t queue = dispatch_queue_create("test_queue", NULL);
dispatch_sync(queue, ^(void){
    for (int i = 0; i < 100; ++i) {
        NSLog(@" block 1 i:%d", i);
    }
    // Increment the semaphore count (think of it as a resource count): the work is done,
    // a resource is available, and the waiting thread may proceed.
    dispatch_semaphore_signal(sem);
});
dispatch_sync(queue, ^(void){
    dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
    for (int i = 0; i < 100; ++i) {
        NSLog(@" block 2 i:%d", i);
    }
    dispatch_semaphore_signal(sem2);
});

// Wait for the signal before the main thread continues. Waiting decrements the semaphore
// count; if no resource is available the caller blocks, FIFO, until the semaphore is
// signalled and the thread is scheduled again.
dispatch_semaphore_wait(sem2, DISPATCH_TIME_FOREVER);
dispatch_release(queue);
dispatch_release(sem);

 

group

Add blocks to a group; the main thread can continue only after all the blocks in the group have finished.

// Create a semaphore with an initial value of 0 (it must not be below 0), meaning the work
// is not done yet and the waiting block should not proceed.
__block dispatch_semaphore_t sem = dispatch_semaphore_create(0);

dispatch_queue_t queue = dispatch_queue_create("test_queue", NULL);
dispatch_group_t group = dispatch_group_create();
dispatch_group_async(group, queue, ^(void){
    for (int i = 0; i < 100; ++i) {
        NSLog(@" block 1 i:%d", i);
    }
    // Increment the semaphore count: the work is done, so the waiting block may proceed.
    dispatch_semaphore_signal(sem);
});

dispatch_block_t block2 = ^(void){
    dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
    for (int i = 0; i < 100; ++i) {
        NSLog(@" block 2 i:%d", i);
    }
};
dispatch_group_async(group, queue, block2);

// The main thread waits for all blocks in the group to finish
dispatch_group_wait(group, DISPATCH_TIME_FOREVER);

dispatch_release(group);
dispatch_release(queue);
dispatch_release(sem);
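As a non-blocking alternative to dispatch_group_wait, the same group can be observed with dispatch_group_notify; a small sketch reusing the group from the example above (it would have to run before the dispatch_release(group) call):

// Run a block on the main queue once every task already submitted to the group
// has finished, without blocking the current thread.
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    NSLog(@"all blocks in the group have completed");
});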

 

After the background thread finishes

dispatch_async(getDataQueue, ^{
    // Fetch the data; once a batch is ready, refresh the UI
    dispatch_async(mainQueue, ^{
        // UI updates must be performed on the main thread
    });
});

 

 


Several ways to synchronize with GCD

 

With GCD multithreading, there are several ways to synchronize threads:

1. Serial queues 2. Concurrent queues 3. Groups 4. Semaphores

Example: fetch an image from the network and display it in a view. This breaks down into two tasks: downloading the image and displaying it. The two tasks depend on each other, so they must be synchronized.

Let's see how each approach implements this.

 

I.

1. Serial queues

1.1 GCD background:

(1) All GCD dispatch_queues are FIFO queues: tasks execute in the order they are submitted.

They differ only in kind: <1> serial queues (user-created queues and the main queue) and <2> concurrent queues.

(2) Submission can be synchronous (dispatch_sync) or asynchronous (dispatch_async), and either can be combined with serial or concurrent queues.

1.2 With a serial queue, simply submitting the two tasks is enough.

// Serial queue
dispatch_queue_t serilQueue = dispatch_queue_create("com.quains.myQueue", 0);

// Start time
NSDate *startTime = [NSDate date];


__block UIImage *image = nil;

// 1. Download the image from the network first
dispatch_async(serilQueue, ^{
    NSString *urlAsString = @"http://avatar.csdn.net/B/2/2/1_u010013695.jpg";
    NSURL *url = [NSURL URLWithString:urlAsString];

    NSError *downloadError = nil;

    NSData *imageData = [NSURLConnection sendSynchronousRequest:[NSURLRequest requestWithURL:url] returningResponse:nil error:&downloadError];

    if (downloadError == nil && imageData != nil) {
        image = [[UIImage imageWithData:imageData] retain];
    }
    else if (downloadError != nil) {
        NSLog(@"error happened = %@", downloadError);
    }
    else {
        NSLog(@"No data download");
    }
});

// 2. Then display it on the main thread
dispatch_async(serilQueue, ^{

    NSLog(@"%@", [NSThread currentThread]);

    // Display on the main thread
    dispatch_async(dispatch_get_main_queue(), ^{
        if (image != nil) {

            UIImageView *imageView = [[UIImageView alloc] initWithFrame:self.view.bounds];

            [imageView setImage:image];

            [imageView setContentMode:UIViewContentModeScaleAspectFit];
            [self.view addSubview:imageView];
            [imageView release];

            NSDate *endTime = [NSDate date];
            NSLog(@"serial async completed in %f time", [endTime timeIntervalSinceDate:startTime]);
        }
        else {
            NSLog(@"image isn't downloaded, nothing to display");
        }
    });

});

// 3. Clean up
dispatch_release(serilQueue);
[image release];

Notes:

(1) The __block variable is allocated on the stack, so retain the image to keep it from being released.

(2) Dispatch objects have to be created and released manually.

(3) Be careful with the synchronous dispatch_sync when targeting the main queue; it can deadlock. The main queue is a serial queue, so its tasks run one at a time. If you dispatch_sync from the main thread, the main thread blocks waiting for the submitted block, but that block can only run after everything already on the main queue (including the code that is now blocked) has finished; the block therefore never gets to run and the app deadlocks. (A minimal sketch follows these notes.)

(4) Submitting to a serial queue can be done either synchronously or asynchronously.
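A minimal sketch of the deadlock described in note (3); this is exactly the pattern to avoid:

// Calling dispatch_sync onto the main queue from the main thread deadlocks:
// the main thread blocks waiting for the block, but the block can only run
// on the (now blocked) main thread. Do not run this.
dispatch_sync(dispatch_get_main_queue(), ^{
    NSLog(@"never reached when called from the main thread");
});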

 

2. Concurrent queues

With a concurrent queue, the tasks can be submitted to the queue synchronously, which also achieves synchronization.

// Create a queue
dispatch_queue_t concurrentQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

// Start timing
NSDate *startTime = [NSDate date];

// Submit to the queue
dispatch_async(concurrentQueue, ^{
    __block UIImage *image = nil;

    // 1. Download the image from the network first
    dispatch_sync(concurrentQueue, ^{
        NSString *urlAsString = @"http://avatar.csdn.net/B/2/2/1_u010013695.jpg";
        NSURL *url = [NSURL URLWithString:urlAsString];

        NSError *downloadError = nil;

        NSData *imageData = [NSURLConnection sendSynchronousRequest:[NSURLRequest requestWithURL:url] returningResponse:nil error:&downloadError];

        if (downloadError == nil && imageData != nil) {
            image = [UIImage imageWithData:imageData];
        }
        else if (downloadError != nil) {
            NSLog(@"error happened = %@", downloadError);
        }
        else {
            NSLog(@"No data download");
        }
    });

    // 2. Then display it on the main thread
    dispatch_sync(dispatch_get_main_queue(), ^{
        if (image != nil) {
            UIImageView *imageView = [[UIImageView alloc] initWithFrame:self.view.bounds];
            [imageView setImage:image];

            [imageView setContentMode:UIViewContentModeScaleAspectFit];
            [self.view addSubview:imageView];
            [imageView release];

            NSDate *endTime = [NSDate date];
            NSLog(@"concurrent sync completed in %f time", [endTime timeIntervalSinceDate:startTime]);
        }
        else {
            NSLog(@"image isn't downloaded, nothing to display");
        }
    });
});

The two synchronous tasks are wrapped in one asynchronous block and submitted to the concurrent queue, which achieves the synchronization.

 

3. Using groups

3.1 A group combines several related tasks and gives the developer a point at which the whole group is known to have finished.

Even though there is only one task here, the group's completion point can be used to block a thread and thereby achieve synchronization.

dispatch_group_t group = dispatch_group_create();

dispatch_queue_t queue = dispatch_get_global_queue(0, 0);

NSDate *startTime = [NSDate date];

__block UIImage *image = nil;

dispatch_group_async(group, queue, ^{

    // 1. Download the image from the network first
    NSString *urlAsString = @"http://avatar.csdn.net/B/2/2/1_u010013695.jpg";
    NSURL *url = [NSURL URLWithString:urlAsString];

    NSError *downloadError = nil;

    NSData *imageData = [NSURLConnection sendSynchronousRequest:[NSURLRequest requestWithURL:url] returningResponse:nil error:&downloadError];

    if (downloadError == nil && imageData != nil) {
        image = [[UIImage imageWithData:imageData] retain];
    }
    else if (downloadError != nil) {
        NSLog(@"error happened = %@", downloadError);
    }
    else {
        NSLog(@"No data download");
    }

});

// 2. Refresh the main thread once the download has finished
dispatch_group_notify(group, queue, ^{

    // Display on the main thread
    dispatch_async(dispatch_get_main_queue(), ^{
        if (image != nil) {
            UIImageView *imageView = [[UIImageView alloc] initWithFrame:self.view.bounds];
            [imageView setImage:image];
            [image release];

            [imageView setContentMode:UIViewContentModeScaleAspectFit];
            [self.view addSubview:imageView];
            [imageView release];

            NSDate *endTime = [NSDate date];
            NSLog(@"group sync completed in %f time", [endTime timeIntervalSinceDate:startTime]);
        }
        else {
            NSLog(@"image isn't downloaded, nothing to display");
        }
    });

});

// Release
dispatch_release(group);

A dispatch_group also has to be created and released manually.

dispatch_group_notify() provides a point at which the group is known to have finished. You can of course also block with dispatch_group_wait().
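For completeness, dispatch_group_wait also accepts a timeout instead of DISPATCH_TIME_FOREVER; a small sketch reusing the group above (the 10-second value is arbitrary):

// Block for at most 10 seconds; dispatch_group_wait returns non-zero
// if the group's blocks have not all completed before the timeout.
long result = dispatch_group_wait(group, dispatch_time(DISPATCH_TIME_NOW, 10 * NSEC_PER_SEC));
if (result != 0) {
    NSLog(@"group did not finish within 10 seconds");
}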

 

4. Semaphores

A semaphore behaves much like a lock and can be used to implement synchronization.

But semaphores are usually used to let several threads access a resource at once, with the semaphore limiting how many threads may access it at the same time.

// Initialize the semaphore to 1
dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);

dispatch_queue_t queue = dispatch_get_global_queue(0, 0);

NSDate *startTime = [NSDate date];

__block UIImage *image = nil;


// 1. Download the image from the network first
dispatch_async(queue, ^{

    // wait: -1
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);

    // Start the download
    NSString *urlAsString = @"http://avatar.csdn.net/B/2/2/1_u010013695.jpg";
    NSURL *url = [NSURL URLWithString:urlAsString];

    NSError *downloadError = nil;

    NSData *imageData = [NSURLConnection sendSynchronousRequest:[NSURLRequest requestWithURL:url] returningResponse:nil error:&downloadError];

    if (downloadError == nil && imageData != nil) {

        image = [[UIImage imageWithData:imageData] retain];
        //NSLog(@"heap %@", image);
        //NSLog(@"%d",[image retainCount]);
    }
    else if (downloadError != nil) {
        NSLog(@"error happened = %@", downloadError);
    }
    else {
        NSLog(@"No data download");
    }

    // signal: +1
    dispatch_semaphore_signal(semaphore);
});


// 2. Refresh the main thread once the download has finished
dispatch_async(dispatch_get_main_queue(), ^{

    // wait: -1
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);

    if (image != nil) {

        UIImageView *imageView = [[UIImageView alloc] initWithFrame:self.view.bounds];

        [imageView setImage:image];
        NSLog(@"%d", [image retainCount]);
        [image release];

        [imageView setContentMode:UIViewContentModeScaleAspectFit];
        [self.view addSubview:imageView];
        [imageView release];

        NSDate *endTime = [NSDate date];
        NSLog(@"semaphore sync completed in %f time", [endTime timeIntervalSinceDate:startTime]);
    }
    else {
        NSLog(@"image isn't downloaded, nothing to display");
    }

    // signal: +1
    dispatch_semaphore_signal(semaphore);
});

dispatch_semaphore_wait blocks the thread and checks the semaphore's value; it proceeds only when the value is greater than 0, decrementing the semaphore by 1 as it does.

dispatch_semaphore_signal is the corresponding +1 operation.


semaphore_create

http://web.mit.edu/darwin/src/modules/xnu/osfmk/man/semaphore_wait.html


Function - Create a new semaphore.

SYNOPSIS

kern_return_t semaphore_create
(task_t task,
semaphore_t *semaphore,
int policy,
int value);

PARAMETERS

task
[in task port] The task receiving the send right of the newly created semaphore.

 

semaphore
[out send right] The port naming the created semaphore.

 

policy
[in scalar] The blocked thread wakeup policy for the newly created semaphore. Valid policies are:

 

SYNC_POLICY_FIFO
a first-in-first-out policy for scheduling thread wakeup.

 

SYNC_POLICY_FIXED_PRIORITY
a fixed priority policy for scheduling thread wakeup.

 

value
[in scalar] The initial value of the semaphore count.

DESCRIPTION

The semaphore_create function creates a new semaphore, associates the created semaphore with the specified task, and returns a send right naming the new semaphore. In order to support a robust producer/consumer communication service, Interrupt Service Routines (ISR) must be able to signal semaphores. The semaphore synchronizer service is designed to allow user-level device drivers to perform signal operations, eliminating the need for event counters. Device drivers which utilize semaphores are responsible for creating (via semaphore_create) and exporting (via device_get_status) semaphores for user level access. Device driver semaphore creation is done at device initialization time. Device drivers may support multiple semaphores.

RETURN VALUES

 

KERN_INVALID_ARGUMENT
The task argument or the policy argument was invalid, or the initial value of the semaphore was invalid.

 

KERN_RESOURCE_SHORTAGE
The kernel could not allocate the semaphore.

 

KERN_SUCCESS
The semaphore was successfully created.

RELATED INFORMATION

Functions: semaphore_destroy, semaphore_signal, semaphore_signal_all, semaphore_wait, device_get_status.

   

semaphore_wait


Function - Wait on the specified semaphore.

SYNOPSIS

kern_return_t   semaphore_wait
(semaphore_t semaphore);

PARAMETERS

 

semaphore
[in send right] The port naming the semaphore that the wait operation is being performed upon.

DESCRIPTION

The semaphore_wait function decrements the semaphore count. If the semaphore count is negative after decrementing, the calling thread blocks. Device driver interrupt service routines (ISR) should never execute semaphore_wait, since waiting on a semaphore at the ISR level may, and often will, lead to a deadlock.

RETURN VALUES

 

KERN_INVALID_ARGUMENT
The specified semaphore is invalid.

 

KERN_TERMINATED
The specified semaphore has been destroyed.

 

KERN_ABORTED
The caller was blocked due to a negative count on the semaphore, and was awoken for a reason not related to the semaphore subsystem (e.g.  thread_terminate).

 

KERN_SUCCESS
The semaphore wait operation was successful.

RELATED INFORMATION

Functions: semaphore_create, semaphore_destroy, semaphore_signal, semaphore_signal_all, device_get_status.

 


So, do you use locking well in your own code? And how many ways of implementing a lock do you know?

Today let's look at several different ways of implementing a lock in Objective-C. Before that, we build a test class; think of it as a shared resource whose method1 and method2 are mutually exclusive. The code is as follows:

 
@implementation TestObj

- (void)method1
{
    NSLog(@"%@", NSStringFromSelector(_cmd));
}

- (void)method2
{
    NSLog(@"%@", NSStringFromSelector(_cmd));
}

@end

1. A lock implemented with NSLock

 
// On the main thread
TestObj *obj = [[TestObj alloc] init];
NSLock *lock = [[NSLock alloc] init];

// Thread 1
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [lock lock];
    [obj method1];
    sleep(10);
    [lock unlock];
});

// Thread 2
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    sleep(1); // make sure thread 2's code runs later
    [lock lock];
    [obj method2];
    [lock unlock];
});

Look at the output: once thread 1 has locked, thread 2 waits until thread 1 sets the lock back to unlocked before method2 executes.

NSLock is the most basic lock object Cocoa provides and the one we use most often. Besides lock and unlock, NSLock also offers tryLock and lockBeforeDate:. tryLock attempts to acquire the lock, but if the lock is unavailable (already held) it does not block the thread and simply returns NO. lockBeforeDate: tries to acquire the lock before the specified date and returns NO if it cannot do so by then.
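A small sketch of those two variants, reusing the lock and obj from the example above:

// Try to acquire the lock without blocking
if ([lock tryLock]) {
    [obj method1];
    [lock unlock];
} else {
    NSLog(@"lock is busy, doing something else");
}

// Or: give up if the lock cannot be acquired within 2 seconds
if ([lock lockBeforeDate:[NSDate dateWithTimeIntervalSinceNow:2]]) {
    [obj method1];
    [lock unlock];
} else {
    NSLog(@"could not acquire the lock within 2 seconds");
}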

2. A lock built with the @synchronized keyword

In Objective-C you can also use the @synchronized directive to implement a lock quickly:

 
// On the main thread
TestObj *obj = [[TestObj alloc] init];

// Thread 1
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    @synchronized(obj) {
        [obj method1];
        sleep(10);
    }
});

// Thread 2
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    sleep(1);
    @synchronized(obj) {
        [obj method2];
    }
});

The obj passed to @synchronized is the lock's unique identifier; mutual exclusion happens only when the identifier is the same. If thread 2's @synchronized(obj) were changed to @synchronized(other), thread 2 would not be blocked. The advantage of @synchronized is that you get locking without explicitly creating a lock object. As a safety measure, however, the @synchronized block implicitly installs an exception handler that releases the mutex if an exception is thrown, so if you don't want the extra overhead of that implicit handler you may prefer an explicit lock object.
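Conceptually, @synchronized(obj) behaves roughly like the following sketch built on the runtime's objc_sync_enter/objc_sync_exit (simplified; the real expansion is generated by the compiler):

#import <objc/objc-sync.h>

// A runtime lock keyed on the object itself, with an exception handler so the
// lock is released even if the protected code throws.
objc_sync_enter(obj);
@try {
    [obj method1];
} @finally {
    objc_sync_exit(obj);
}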

3. A lock implemented with C's pthread_mutex_t

 
// On the main thread
TestObj *obj = [[TestObj alloc] init];

__block pthread_mutex_t mutex;
pthread_mutex_init(&mutex, NULL);

// Thread 1
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    pthread_mutex_lock(&mutex);
    [obj method1];
    sleep(5);
    pthread_mutex_unlock(&mutex);
});

// Thread 2
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    sleep(1);
    pthread_mutex_lock(&mutex);
    [obj method2];
    pthread_mutex_unlock(&mutex);
});

pthread_mutex_t is defined in pthread.h, so remember to #include <pthread.h>.
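pthread mutexes can also be configured through attributes; for example, a recursive mutex that the same thread may lock more than once (a sketch, foreshadowing the recursive locks discussed later):

#include <pthread.h>

// Configure a recursive mutex via a mutex attribute object
pthread_mutexattr_t attr;
pthread_mutexattr_init(&attr);
pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);

pthread_mutex_t recursiveMutex;
pthread_mutex_init(&recursiveMutex, &attr);
pthread_mutexattr_destroy(&attr);

// ... use pthread_mutex_lock / pthread_mutex_unlock as above ...

pthread_mutex_destroy(&recursiveMutex);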
4. A "lock" implemented with GCD
The code above already uses GCD's dispatch_async to create the threads, and GCD also provides a semaphore mechanism we can use to build a "lock" (strictly speaking a semaphore is not the same thing as a mutex; see the differences between semaphores and mutexes):

 
// On the main thread
TestObj *obj = [[TestObj alloc] init];
dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);

// Thread 1
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
    [obj method1];
    sleep(10);
    dispatch_semaphore_signal(semaphore);
});

// Thread 2
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    sleep(1);
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
    [obj method2];
    dispatch_semaphore_signal(semaphore);
});

The resulting behaviour is of course exactly the same as in the previous example. If you are familiar with C programming, semaphore mechanisms will be nothing new. For more on dispatch_semaphore_t in GCD, see the earlier post on this blog: GCD介绍(三): Dispatch Sources.

Those are several ways of implementing a lock. Locks are of course mostly used together with multiple threads; I won't go into multithreaded programming here.

 

 

 

In the previous article we discussed several ways to implement locks in Objective-C (see the link), and demonstrated in code how to build a mutex to share resources safely between threads. Today we continue with some advanced uses of locks.

1. NSRecursiveLock, the recursive lock

When using locks, the easiest mistake to make is causing a deadlock, and one situation that easily leads to deadlock is recursion or looping, as in the following code:

 

 
// On the main thread
NSLock *theLock = [[NSLock alloc] init];
TestObj *obj = [[TestObj alloc] init];

// Thread 1
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{

    static void (^TestMethod)(int);
    TestMethod = ^(int value)
    {
        [theLock lock];
        if (value > 0)
        {
            [obj method1];
            sleep(5);
            TestMethod(value - 1);
        }
        [theLock unlock];
    };

    TestMethod(5);
});

// Thread 2
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    sleep(1);
    [theLock lock];
    [obj method2];
    [theLock unlock];
});

 

The code above is a classic deadlock: inside thread 1's recursive block the lock is locked multiple times, so the thread blocks on itself. The code here is short, so the deadlock is easy to spot, but in more complex code it is much less obvious. So how do you use a lock correctly inside recursion or a loop? If theLock is replaced with an NSRecursiveLock, the problem goes away: a lock defined by NSRecursiveLock can be locked multiple times by the same thread without deadlocking. The recursive lock keeps track of how many times it has been locked; every successful lock must be balanced by a call to unlock, and only when all the lock and unlock operations balance out is the lock actually released so that other threads can acquire it.
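A sketch of that fix, replacing the NSLock in the example above with an NSRecursiveLock (the rest of the example stays the same):

NSRecursiveLock *theLock = [[NSRecursiveLock alloc] init];

static void (^TestMethod)(int);
TestMethod = ^(int value)
{
    [theLock lock];          // the same thread may lock again while recursing
    if (value > 0)
    {
        [obj method1];
        TestMethod(value - 1);
    }
    [theLock unlock];        // each lock is balanced by an unlock
};
TestMethod(5);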

2. NSConditionLock, the condition lock

When working with multiple threads, a lock that can only lock and unlock sometimes isn't enough. An ordinary lock only cares about locked versus unlocked, not about which key opens it, whereas when sharing resources we often want the lock to open only when a particular condition is met:

 

 
// On the main thread
NSConditionLock *theLock = [[NSConditionLock alloc] init];

// Thread 1
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    for (int i = 0; i <= 2; i++)
    {
        [theLock lock];
        NSLog(@"thread1:%d", i);
        sleep(2);
        [theLock unlockWithCondition:i];
    }
});

// Thread 2
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [theLock lockWhenCondition:2];
    NSLog(@"thread2");
    [theLock unlock];
});

 

Thread 1 acquires the lock with plain lock, which needs no condition, so it locks straight away; but it unlocks with an integer condition, which can open the critical section for other threads waiting on that "key". Thread 2 needs a key labeled 2, so only on thread 1's last loop iteration is thread 2 finally unblocked. Even so, NSConditionLock still requires lock and unlock calls to be paired, just like any other lock; lock / lockWhenCondition: and unlock / unlockWithCondition: can be combined however your needs dictate.

3. NSDistributedLock, the distributed lock

All the locks above resolve conflicts between threads, but what if you need mutual exclusion between multiple processes or programs? That is where NSDistributedLock comes in; as the class name suggests, it is a distributed lock. NSDistributedLock is implemented on top of the file system, which is how it can provide mutual exclusion between different processes. It does not inherit from NSLock and has no lock method; it only implements tryLock, unlock, and breakLock, so if you need lock-like behaviour you have to write your own polling loop around tryLock. Here is a simple demonstration:

Program A:

 

 
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    lock = [[NSDistributedLock alloc] initWithPath:@"/Users/mac/Desktop/earning__"];
    [lock breakLock];
    [lock tryLock];
    sleep(10);
    [lock unlock];
    NSLog(@"appA: OK");
});

 

Program B:

 

 
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    lock = [[NSDistributedLock alloc] initWithPath:@"/Users/mac/Desktop/earning__"];

    while (![lock tryLock]) {
        NSLog(@"appB: waiting");
        sleep(1);
    }
    [lock unlock];
    NSLog(@"appB: OK");
});

 

Run program A first, then immediately run program B. From the output you can clearly see that while program A is running, program B keeps waiting; after about 10 seconds program B prints appB: OK. That gives us mutual exclusion between two different programs. /Users/mac/Desktop/earning__ is the path of a file or folder; if it does not exist, it is created automatically when tryLock returns YES, and it is removed when the lock is released. So choose a path that does not already exist, to avoid deleting a real file by mistake.