Loading OpenAL .wav files in Cocoa

Time: 2021-12-24 19:42:47

I need to load sound files into a Cocoa-based OpenAL app.

Progress:

  • The OpenAL utility function alutLoadWAVFile has been deprecated; the alut header is no longer included in Mac OS X SDKs. According to the TechNotes, the actual code is still there for binary compatibility. However, if I attempt to add a declaration for the function myself, the code compiles but the linker aborts, complaining that the symbol for alutLoadWAVFile could not be found. (I am linking against OpenAL.framework.)

  • Yet, Apple OpenAL sample code still uses this symbol. When I Clean the sample code project, it compiles and links just fine. Yet there is no declaration of the function to be found. (Side question: how can it build and link, then?)

So, I found some code by George Warner at Apple, containing replacement functions for alutCreateBufferFromFile and alutLoadMemoryFromFile. Although capable of creating an OpenAL buffer directly from almost any kind of audio file, the code appears to support only 8-bit mono sound files. 16-bit stereo or mono 44 kHz files result in a nasty hissing sound and clipping. (The files are OK; QuickTime plays them just fine.)

Thus, my question: can someone please point me to some .wav loading code (or other help) for Cocoa/Carbon, suitable for use with an OpenAL buffer? Thank you.

3 Answers

#1


I'm sure you've solved this already, but for people who find this via Google, here's some barely tested WAV loading code. It works, but you'd better double-check for memory leaks and whatnot before using it for anything real.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#include <AudioToolbox/AudioToolbox.h>
#include <OpenAL/al.h>

static bool LoadWAVFile(const char* filename, ALenum* format, ALvoid** data, ALsizei* size, ALsizei* freq, Float64* estimatedDurationOut)
{
    CFStringRef filenameStr = CFStringCreateWithCString( NULL, filename, kCFStringEncodingUTF8 );
    CFURLRef url = CFURLCreateWithFileSystemPath( NULL, filenameStr, kCFURLPOSIXPathStyle, false );
    CFRelease( filenameStr );

    AudioFileID audioFile;
    OSStatus error = AudioFileOpenURL( url, kAudioFileReadPermission, kAudioFileWAVEType, &audioFile );
    CFRelease( url );

    if ( error != noErr )
    {
        fprintf( stderr, "Error opening audio file. %d\n", error );
        return false;
    }

    AudioStreamBasicDescription basicDescription;
    UInt32 propertySize = sizeof(basicDescription);
    error = AudioFileGetProperty( audioFile, kAudioFilePropertyDataFormat, &propertySize, &basicDescription );

    if ( error != noErr )
    {
        fprintf( stderr, "Error reading audio file basic description. %d\n", error );
        AudioFileClose( audioFile );
        return false;
    }

    if ( basicDescription.mFormatID != kAudioFormatLinearPCM )
    {
        // OpenAL needs linear PCM. WAVs are (I believe) by definition PCM, so this check isn't necessary. It's just here
        // in case I ever use this with another audio format.
        fprintf( stderr, "Audio file is not linear PCM. %u\n", (unsigned int)basicDescription.mFormatID );
        AudioFileClose( audioFile );
        return false;
    }

    UInt64 audioDataByteCount = 0;
    propertySize = sizeof(audioDataByteCount);
    error = AudioFileGetProperty( audioFile, kAudioFilePropertyAudioDataByteCount, &propertySize, &audioDataByteCount );
    if ( error != noErr )
    {
        fprintf( stderr, "Error reading audio file byte count. %d\n", error );
        AudioFileClose( audioFile );
        return false;
    }

    Float64 estimatedDuration = 0;
    propertySize = sizeof(estimatedDuration);
    error = AudioFileGetProperty( audioFile, kAudioFilePropertyEstimatedDuration, &propertySize, &estimatedDuration );
    if ( error != noErr )
    {
        fprintf( stderr, "Error reading estimated duration of audio file. %d\n", error );
        AudioFileClose( audioFile );
        return false;
    }

    ALenum alFormat = 0;

    if ( basicDescription.mChannelsPerFrame == 1 )
    {
        if ( basicDescription.mBitsPerChannel == 8 )
            alFormat = AL_FORMAT_MONO8;
        else if ( basicDescription.mBitsPerChannel == 16 )
            alFormat = AL_FORMAT_MONO16;
        else
        {
            fprintf( stderr, "Expected 8 or 16 bits for the mono channel but got %d\n", basicDescription.mBitsPerChannel );
            AudioFileClose( audioFile );
            return false;
        }

    }
    else if ( basicDescription.mChannelsPerFrame == 2 )
    {
        if ( basicDescription.mBitsPerChannel == 8 )
            alFormat = AL_FORMAT_STEREO8;
        else if ( basicDescription.mBitsPerChannel == 16 )
            alFormat = AL_FORMAT_STEREO16;
        else
        {
            fprintf( stderr, "Expected 8 or 16 bits per channel but got %d\n", basicDescription.mBitsPerChannel );
            AudioFileClose( audioFile );
            return false;
        }
    }
    else
    {
        fprintf( stderr, "Expected 1 or 2 channels in audio file but got %d\n", basicDescription.mChannelsPerFrame );
        AudioFileClose( audioFile );
        return false;
    }

    // AudioFileReadBytes takes a UInt32 byte count, so the UInt64 property value is narrowed explicitly here.
    UInt32 numBytesToRead = (UInt32)audioDataByteCount;
    void* buffer = malloc( numBytesToRead );

    if ( buffer == NULL )
    {
        fprintf( stderr, "Error allocating buffer for audio data of size %u\n", numBytesToRead );
        AudioFileClose( audioFile );
        return false;
    }

    error = AudioFileReadBytes( audioFile, false, 0, &numBytesToRead, buffer );
    AudioFileClose( audioFile );

    if ( error != noErr )
    {
        fprintf( stderr, "Error reading audio bytes. %d\n", error );
        free(buffer);
        return false;
    }

    if ( numBytesToRead != audioDataByteCount )
    {
        fprintf( stderr, "Tried to read %lld bytes from the audio file but only got %d bytes\n", audioDataByteCount, numBytesToRead );
        free(buffer);
        return false;
    }

    *freq = (ALsizei)basicDescription.mSampleRate;
    *size = (ALsizei)audioDataByteCount;
    *format = alFormat;
    *data = buffer;
    *estimatedDurationOut = estimatedDuration;

    return true;
}
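
For what it's worth, here's a minimal usage sketch (not part of the original answer) showing how the loader's output might be handed to OpenAL. It assumes it sits in the same file as LoadWAVFile above (so the includes are already in place) and that an OpenAL device and context have already been created and made current; CreateBufferFromWAV is just a hypothetical helper name.

static ALuint CreateBufferFromWAV( const char* path )
{
    ALenum  format = 0;
    ALvoid* data = NULL;
    ALsizei size = 0;
    ALsizei freq = 0;
    Float64 duration = 0;

    if ( !LoadWAVFile( path, &format, &data, &size, &freq, &duration ) )
        return 0;

    ALuint buffer = 0;
    alGenBuffers( 1, &buffer );
    alBufferData( buffer, format, data, size, freq );  // OpenAL copies the data into its own storage,
    free( data );                                      // so the malloc'd block can be released right away

    if ( alGetError() != AL_NO_ERROR )
    {
        alDeleteBuffers( 1, &buffer );
        return 0;
    }

    return buffer;
}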

#2


Use the AudioFileReadBytes function from Audio File Services (part of the AudioToolbox framework). Examples can be found in the Finch sound engine; see the Sound+IO category.

#3


This may be an obvious suggestion, but since you didn't mention it: have you tried the library at http://www.openal.org/ as suggested in Apple's technote?

As for how the sample code builds and links: it's not finding a prototype (if you turn on -Wall, you'll get an implicit function declaration warning), but OpenAL.framework, at least in the SDK the sample project uses, does in fact export _alutLoadWAVFile, which you can check with nm. What's the exact link error you get, and what SDK are you using?
