I’m creating a time-dependent buffer as below, but I can’t figure out how to pass a float** array to the function without causing an error. In TD, using an In CHOP, I see only a single sample being written.
TEAudioInput.take(TEFloatBufferCreateTimeDependent(SampleRate, 2, nb, nullptr));
// nb is the number of samples per channel coming from the caller.
std::vector<float> vbuffer;
vbuffer.resize(2 * nb);
for (int i = 0; i < nb; i++)
{
    vbuffer[i] = buffer[2 * i];            // channel 0
    vbuffer[nb + i] = buffer[(2 * i) + 1]; // channel 1
}
float* a = vbuffer.data();
float** channels = &a;
TEResult result = TEFloatBufferSetValues(TEAudioInput, (const float**)channels, nb);
// Tried std::array<const float*, 2> channels{ &buffer[2 * i], &buffer[(2 * i) + 1] }; but with the same result.
So I was able to find a solution in the UE plugin, using a vector for the samples and a vector<float*> for the per-channel data pointers. However, I’m only seeing what appears to be a single value on either channel of the input CHOP in TD. I should be seeing around 480 or 576 floats for a sample rate of 48000. Is there another function I need to call when creating the timed buffer before passing it to the link?
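For reference, a sketch of that approach with the variable names from the snippet above (the actual plugin code differs slightly):

// deinterleave into one contiguous block per channel
std::vector<float> vbuffer(2 * nb);
for (int i = 0; i < nb; i++)
{
    vbuffer[i] = buffer[2 * i];            // channel 0
    vbuffer[nb + i] = buffer[(2 * i) + 1]; // channel 1
}

// one pointer per channel, rather than a pointer to a single pointer
std::vector<const float*> channels{ vbuffer.data(), vbuffer.data() + nb };

TEResult result = TEFloatBufferSetValues(TEAudioInput, channels.data(), nb);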
Hi - you’ll also need TEFloatBufferSetStartTime(); normally (unless you’ve dropped samples) the start time will increase, for every buffer, by the number of samples previously submitted.
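For example (mySamplesSubmitted here is a hypothetical running count of all samples submitted so far):

// each buffer starts where the previous one ended
TEFloatBufferSetStartTime(TEAudioInput, mySamplesSubmitted);
// ...fill the buffer and add it to the input link as before...
mySamplesSubmitted += nb;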
I’m seeing the same result when calling this before passing the buffer to the add function. I’m setting the start to whatever the count of previously submitted samples was, while it is less than the SampleRate of the system.
I’m not sure I follow your “while it is less than the SampleRate” - you want it to only increase. These times will be aligned with the render time passed to TEInstanceStartFrameAtTime() (likewise, you need to pass meaningful values to that function).
If you post your updated code to fill the TEFloatBuffer - you said you’d changed it from the snippet above - I can confirm if it’s right.
Right, so as I said above, the start time for the buffer must be aligned to the time for the frame you’re about to cook - so don’t reset it to 0 when it exceeds the sample rate, keep increasing it.
The time for TEInstanceStartFrameAtTime() should be treated as the end time for your samples - you are supplying samples for the period since the previous frame (so provide negative time to TEFloatBufferSetStartTime() at the start if your first frame time is zero). If you can use a timebase for frame times that is a multiple of your audio sample rate then you will save yourself some headaches.
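As a rough sketch, using the audio sample rate as the timebase - the variable names and the “audio_in” identifier are placeholders, I’m assuming the add function you’re using is TEInstanceLinkAddFloatBufferValue(), and the final false passed to TEInstanceStartFrameAtTime() is the discontinuity flag:

// myFrameTime is the end time of the frame being cooked, in samples; it starts at 0
// and advances by the number of samples supplied per video frame
TEFloatBufferSetStartTime(TEAudioInput, myFrameTime - nb); // negative on the first frame
// ...set the sample values, then add the buffer to the input link...
TEInstanceLinkAddFloatBufferValue(myInstance, "audio_in", TEAudioInput);

TEInstanceStartFrameAtTime(myInstance, myFrameTime, (int32_t)SampleRate, false);

myFrameTime += nb; // keep increasing - never reset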
I don’t fully understand the TETest/TEAudioInput difference - looks like you might be using a non-existent buffer for the first frame.
Interesting - so that worked, but it injected a massive delay. It must be something to do with the value I am passing to TEInstanceStartFrameAtTime(), which is SampleRate*60, assuming a 60 fps render.
I haven’t solved the delay, but I’ve found an interesting issue now where, after some amount of time, the audio breaks randomly and goes back to only a single sample changing.
Great to see you’re making progress. I haven’t fully reviewed your code, but here are a few points in no particular order:
before line 389 you don’t need to create a TEFloatBuffer to get a value, delete lines 386-388
I’m still not clear on why you are juggling multiple input buffers
// you only need sufficient capacity, not equal capacity, the actual size is set when you add the samples
if (!yourInputBuffer || nb > TEFloatBufferGetCapacity(yourInputBuffer))
{
    yourInputBuffer.take(TEFloatBufferCreateTimeDependent(SampleRate, 2, nb, nullptr));
}
else
{
    yourInputBuffer.take(TEFloatBufferCreateCopy(yourInputBuffer));
}
it’s hard to follow the rationale for your totalSamples - I’d log the times you are setting on input buffers and the times you are sending to TEInstanceStartFrameAtTime() and make sure they are what you expect. There’s one definite mistake on line 360, but the whole handling of it could be much simpler
to receive time-sliced output you should only call TEInstanceLinkGetFloatBufferValue() from the link event callback for TELinkEventValueChange and then queue those buffers. You will then use TEFloatBufferGetStartTime() and TEFloatBufferGetValueCount() to locate the samples for output from your queue. Dispose of a buffer from the queue when you have consumed all of its samples.
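Something like this, as a sketch - the queue, mutex and function name are illustrative, and I’m assuming you retrieve the current value with TELinkValueCurrent:

#include <deque>
#include <mutex>

std::mutex myQueueMutex;
std::deque<TouchObject<TEFloatBuffer>> myOutputQueue;

// call this from your link event callback when you receive TELinkEventValueChange
// for the audio output link
void queueAudioOutput(TEInstance* instance, const char* identifier)
{
    TEFloatBuffer* raw = nullptr;
    if (TEInstanceLinkGetFloatBufferValue(instance, identifier, TELinkValueCurrent, &raw) == TEResultSuccess && raw)
    {
        TouchObject<TEFloatBuffer> buffer;
        buffer.take(raw); // take ownership of the retrieved buffer

        std::lock_guard<std::mutex> guard(myQueueMutex);
        myOutputQueue.push_back(buffer);
    }
}

// when outputting, use TEFloatBufferGetStartTime() and TEFloatBufferGetValueCount()
// to locate the samples for the period you need, and pop a buffer from the front of
// the queue once all of its samples have been consumed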
because you are only cooking in OnDraw() (which is probably correct) you will not see any audio output until you have cooked a frame, meaning you will output audio at one video frame’s delay. You will have to account for this one-frame delay when outputting audio from your queue, and output silence for the first frame.
I’m not sure which iteration of the code you read, but I have it working very well now: I’m using a timebase of the sample rate, driven by total samples. The delay I was seeing was due to a mismatch in the sample counts, but it is working consistently otherwise. Moving forward I will probably implement a timer to better handle the frame cooking.