
Help with my AudioToolBox implementation

Powder Posts: 3 Noob
edited September 2016 in iOS SDK Development
Hi folks, first time here so nice to meet you :)

I'm writing a cross-platform sound recorder for mobile devices; it has to run on both Android and iOS and must be embeddable in Cocos2d-x.

On Android I used OpenSL and it worked great after a few hours of coding.

On iOS I'm trying to use AudioToolbox, but I'm having a lot of issues. I've tried a lot of different approaches and never got a working result.

I started from here: https://developer.apple.com/library/ios/documentation/MusicAudio/Conceptual/AudioQueueProgrammingGuide/AQRecord/RecordingAudio.html

Then I took the entire SpeakHere AQRecorder code and copied it into my project:
http://www.cs.vu.nl/~eliens/media/mobile-application-10-DerbyApp-build-iphone-Classes-AQRecorder.h

What I need is to record 10 seconds of sound and then play it back on demand.

My issues:

1) In the recording callback, inBuffer->mAudioDataByteSize is always zero. I get at most three callbacks, all of them with zero data inside, and the same goes for inNumPackets. My bufferByteSize is 441000 (I'm using 16-bit mono PCM). So although my buffer and data format look right, no data ever lands in the recording buffer. Here is my format setup:
// if we want PCM, default to signed 16-bit little-endian
mRecordFormat.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
mRecordFormat.mChannelsPerFrame = 1; // mono
mRecordFormat.mBitsPerChannel   = 16;
mRecordFormat.mBytesPerPacket   = mRecordFormat.mBytesPerFrame =
    (mRecordFormat.mBitsPerChannel / 8) * mRecordFormat.mChannelsPerFrame;
mRecordFormat.mFramesPerPacket  = 1;
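
For reference, a minimal, self-contained input-queue sketch under the same 16-bit mono PCM assumptions (MyRecorder, HandleInput and StartRecording are placeholder names, not the SpeakHere ones). Two things worth checking against your own setup: the AudioStreamBasicDescription also needs mSampleRate and mFormatID (kAudioFormatLinearPCM) filled in, which the snippet above leaves out, and every buffer has to be allocated and enqueued before AudioQueueStart; on a device the app also needs microphone permission and a record-capable audio session, otherwise the queue captures nothing useful.

#include <AudioToolbox/AudioToolbox.h>
#include <vector>

struct MyRecorder {
    AudioStreamBasicDescription format {};
    AudioQueueRef               queue = nullptr;
    std::vector<SInt16>         samples;   // recorded PCM accumulates here
};

// Input callback: copy whatever the queue captured, then hand the buffer back.
static void HandleInput(void *userData, AudioQueueRef q, AudioQueueBufferRef buf,
                        const AudioTimeStamp *, UInt32, const AudioStreamPacketDescription *)
{
    auto *rec = static_cast<MyRecorder *>(userData);
    const SInt16 *p = static_cast<const SInt16 *>(buf->mAudioData);
    rec->samples.insert(rec->samples.end(), p, p + buf->mAudioDataByteSize / sizeof(SInt16));
    AudioQueueEnqueueBuffer(q, buf, 0, nullptr);   // re-enqueue for reuse
}

static OSStatus StartRecording(MyRecorder &rec)
{
    // Fill the whole ASBD -- mSampleRate and mFormatID must not stay zero.
    rec.format.mSampleRate       = 44100.0;
    rec.format.mFormatID         = kAudioFormatLinearPCM;
    rec.format.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    rec.format.mChannelsPerFrame = 1;
    rec.format.mBitsPerChannel   = 16;
    rec.format.mBytesPerPacket   = rec.format.mBytesPerFrame = sizeof(SInt16);
    rec.format.mFramesPerPacket  = 1;

    OSStatus err = AudioQueueNewInput(&rec.format, HandleInput, &rec,
                                      nullptr, nullptr, 0, &rec.queue);
    if (err != noErr) return err;

    // Three ~0.5 s buffers; all of them are enqueued before the queue starts.
    const UInt32 bufferBytes =
        static_cast<UInt32>(rec.format.mSampleRate * 0.5) * rec.format.mBytesPerFrame;
    for (int i = 0; i < 3; ++i) {
        AudioQueueBufferRef buf = nullptr;
        AudioQueueAllocateBuffer(rec.queue, bufferBytes, &buf);
        AudioQueueEnqueueBuffer(rec.queue, buf, 0, nullptr);
    }
    return AudioQueueStart(rec.queue, nullptr);
}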

2) If I don't record to a file, do I have to store the bytes from the input callback somewhere myself, or is it enough to hand the recorded buffer straight to a queue created with AudioQueueNewOutput when I want to play the sound back?
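
My understanding is that the queue recycles its buffers, so their contents are only valid inside the input callback and the bytes have to be kept somewhere, for example appended to a std::vector as in the HandleInput sketch above, and then fed to an output queue buffer by buffer. A minimal playback sketch along those lines (MyPlayer and HandleOutput are again placeholder names):

#include <AudioToolbox/AudioToolbox.h>
#include <cstring>
#include <vector>

// Playback state: a pointer to the recorded PCM plus a read cursor.
struct MyPlayer {
    const std::vector<SInt16> *samples = nullptr;
    size_t                     cursor  = 0;
};

// Output callback: refill each buffer from memory until the samples run out.
static void HandleOutput(void *userData, AudioQueueRef q, AudioQueueBufferRef buf)
{
    auto *pl = static_cast<MyPlayer *>(userData);
    const size_t remaining = pl->samples->size() - pl->cursor;
    const size_t capacity  = buf->mAudioDataBytesCapacity / sizeof(SInt16);
    const size_t n         = remaining < capacity ? remaining : capacity;
    if (n == 0) { AudioQueueStop(q, false); return; }   // nothing left to play
    std::memcpy(buf->mAudioData, pl->samples->data() + pl->cursor, n * sizeof(SInt16));
    buf->mAudioDataByteSize = static_cast<UInt32>(n * sizeof(SInt16));
    pl->cursor += n;
    AudioQueueEnqueueBuffer(q, buf, 0, nullptr);
}

To play, the idea would be to create the queue with AudioQueueNewOutput(&format, HandleOutput, &player, nullptr, nullptr, 0, &queue) using the same AudioStreamBasicDescription the recording used, allocate two or three buffers with AudioQueueAllocateBuffer, prime them by calling HandleOutput on each one once, and then call AudioQueueStart.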

3) With OpenSL I could set a buffer length based on bytes * seconds, which was useful to get a callback whenever the mic stopped recording automatically. Can I achieve something similar with Audio Queues?
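
From what I can tell there is no built-in "record for N seconds" option in Audio Queue Services; the pattern I'm considering is to derive a byte budget from the stream format and stop the queue from inside the callback once that much data has arrived. A sketch reusing the MyRecorder struct from the recording example above (MyTimedRecorder, bytesWanted and bytesSeen are illustrative additions, not part of the original class):

// Derive a byte budget for a given duration from the PCM stream format.
static UInt32 BytesForSeconds(const AudioStreamBasicDescription &fmt, double seconds)
{
    return static_cast<UInt32>(seconds * fmt.mSampleRate) * fmt.mBytesPerFrame;
}

// MyRecorder plus a byte budget and a running count of captured bytes.
struct MyTimedRecorder : MyRecorder {
    UInt32 bytesWanted = 0;   // e.g. BytesForSeconds(format, 10.0)
    UInt32 bytesSeen   = 0;
};

// Variant of the input callback that stops itself once the budget is reached.
static void HandleInputTimed(void *userData, AudioQueueRef q, AudioQueueBufferRef buf,
                             const AudioTimeStamp *, UInt32, const AudioStreamPacketDescription *)
{
    auto *rec = static_cast<MyTimedRecorder *>(userData);
    const SInt16 *p = static_cast<const SInt16 *>(buf->mAudioData);
    rec->samples.insert(rec->samples.end(), p, p + buf->mAudioDataByteSize / sizeof(SInt16));

    rec->bytesSeen += buf->mAudioDataByteSize;
    if (rec->bytesSeen >= rec->bytesWanted)
        AudioQueueStop(q, false);                 // false: let queued buffers drain first
    else
        AudioQueueEnqueueBuffer(q, buf, 0, nullptr);
}

For a notification when recording actually ends, AudioQueueAddPropertyListener with kAudioQueueProperty_IsRunning looks like the way to get a callback once the queue has stopped.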

Besides this, I'm also looking for a way to convert std::string to CFStringRef, since I have some path variables that come from external sources as C++ std::string.
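
CoreFoundation itself is a plain C API, so no Obj-C and no Foundation should be needed for this part. A small helper, assuming the paths are UTF-8:

#include <CoreFoundation/CoreFoundation.h>
#include <string>

// Returns a +1 reference: the caller must CFRelease() it when done.
static CFStringRef CFStringFromStdString(const std::string &s)
{
    return CFStringCreateWithCString(kCFAllocatorDefault, s.c_str(), kCFStringEncodingUTF8);
}

For file paths specifically, CFURLCreateFromFileSystemRepresentation can build the CFURLRef that AudioFileCreateWithURL expects directly from the string's raw bytes, which skips the CFStringRef step entirely.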



Here's my aggregated class for audio management (rename the extension to .cpp). If someone could help me, I'd be really grateful. Please remember that I can only use C++: no Obj-C and no Foundation.

Thank you so much :smile:

