|
Post by kcatthedog on Nov 16, 2017 18:15:14 GMT -6
Yes, the Symphony doesn't have MIDI: just take the Symphony and your brain's USB to your computer/DAW; the signals will interface there.
I used to do this with my Apollo and Yamaha e-drum brain: worked fine.
|
|
|
Post by popmann on Nov 16, 2017 19:06:45 GMT -6
I'm not sure why USB would be bad. It's not, when the TD25 has USB. Sorry....he mentioned those MIDI to USB converter cables. I didn't look at the Roland box.
|
|
|
Post by matt on Nov 16, 2017 21:45:41 GMT -6
MIDI over USB works fine. The real latency rests with the hardware buffer size in your DAW. I can record MIDI into PT through BFD as a plugin at 64 samples if it's the only record-enabled track. Obviously, lower buffer size is better from a performance perspective. SD3 might kick ass as a stand-alone drum recording/playback tool. Makes me want to try it.
|
|
|
Post by popmann on Nov 16, 2017 21:58:34 GMT -6
Just use BFD. It's a better engine for an eKit.
Record the audio as you play....and go ahead and record the MIDI--then try to render the MIDI into THAT audio. Cliff's Notes**: you can't.
**old guy ref....before the internets, if you were busy partying instead of studying for a test, you got these little yellow books at the WaldenBooks.....
For perspective, I ran BFD2 here on a Pentium4 at 32samples (44.1) standalone. Since I was recording at 44.1 for the project, I had 10 outputs....8 analog, 2 SPDIF to the DAW.
|
|
|
Post by ragan on Nov 17, 2017 1:46:52 GMT -6
I totally didn't realize the TD-25 module had USB. Well, that solves that, I guess. I don't want to use BFD and don't understand what you're talking about rendering MIDI back into audio you already tracked. I will only be using the Roland sounds as something to monitor, if that. I'll mess with low-buffer PT monitoring of SD3 too, as well as the standalone mode. See what works.
|
|
|
Post by ragan on Nov 17, 2017 1:47:32 GMT -6
Also RGO is turning the letter that sounds like “EYE” into weird squares right now for some reason.
|
|
|
Post by kcatthedog on Nov 17, 2017 2:43:37 GMT -6
You've got your fast iMac, so load on the computer shouldn't be a problem.
I used to bounce my mix with no drums and import that into a new session to record my drums with MIDI in, so all the power was really focused on that task.
Then I'd import the MIDI tracks into my real session. Of course, you can render them to audio too.
|
|
|
Post by NoFilterChuck on Nov 17, 2017 3:00:28 GMT -6
|
|
|
Post by ragan on Nov 17, 2017 9:08:21 GMT -6
Oh interesting. It only just started doing it, even though me types on this phone on RGO all the time (with this iOS too) and me has never had the issue before.
|
|
|
Post by Johnkenn on Nov 17, 2017 9:51:27 GMT -6
You may apologize to RGO at any time... support.apple.com/en-us/HT208240
|
|
|
Post by Johnkenn on Nov 17, 2017 9:52:24 GMT -6
Now it's changing your "I's" to "Me's"
|
|
|
Post by popmann on Nov 17, 2017 10:52:47 GMT -6
In short, MIDI isn't....has never been....accurate in the time domain. You have to record audio....LIVE.
I will let Kat explain the routing needed to record a LIVE VSTi output as audio. It has little to do with the resources of a machine. It's a PITA in both Logic and Cubase--it's the only way I record VIs that I PLAY. WAY easier/faster/more flexible with a second box. But that makes ZERO sense if you're thinking about recording MIDI. You're actually BETTER off at that point using the VI inside your DAW, because at least you'll only have the inherent PPQN grid rounding. You'll never get that MIDI OUT of the app without another loss/change in the time domain.
|
|
|
Post by Martin John Butler on Nov 17, 2017 11:00:11 GMT -6
Now if only my iPhone would add a few zeros in my bank account.
|
|
|
Post by ragan on Nov 17, 2017 12:14:44 GMT -6
I won't pretend to understand this because I don't. But I have used e-drums before and I use MIDI for softsynths all the time. Somehow it miraculously works that the MIDI data I record as I play along plays itself back without altering itself, and I can use that MIDI as the source of the VI for however long I like. At some point, I usually turn it into audio (when I print hardware), but the MIDI track functions as advertised. I record the MIDI as a performance and it captures that. Then I can do whatever I want with it. I'm probably misunderstanding what it is you're claiming.
|
|
|
Post by kcatthedog on Nov 17, 2017 13:25:22 GMT -6
Ya, I never did anything sophisticated other than mapping the drums as triggers. Once that was done and your MIDI tracks were recognizing the trigger inputs, it just mapped in time. The only problem I ever had, other than my bad drumming, was a Slate beta-testing bug for SSD4 blowing up my RAM and CPU usage!
|
|
|
Post by popmann on Nov 17, 2017 13:31:50 GMT -6
This is a well understood, repeatable scientific fact. MIDI sequencers based on a PPQN grid system (all of them) won't play ANY note at the place in time you played it.....unless it happens to fall on a PPQN grid line. So, functionally, far more notes than not will be played back at a different place in time. The scale is small, but for something like a drum kit, where the RELATIVE timing of, say, a hat to kick, or simply the hat to the last hat, is what MAKES the pocket....it does have a tangible, real-world cumulative effect, IME.
If you believe it to be accurate enough for your work, then it is. Doesn't make it accurate. Like Wiz, I wish you luck.
|
|
|
Post by ragan on Nov 17, 2017 13:36:21 GMT -6
I see. Are you of the opinion that this (admittedly dorky) dude's pocket is being affected by these MIDI discrepancies? I'm not even saying you're wrong. I may hate the V-Drums thing (in which case they go back); just trying to understand how you think all these other drummers are able to fight through what you describe as pretty rhythmically harrowing circumstances.
|
|
|
Post by popmann on Nov 17, 2017 14:02:30 GMT -6
Where does he record MIDI? MIDI has no issues in TRANSMISSION timing. You can play that eKit with no more latency than any hardware sampler....and record that AUDIO output of Superior....and HAVE a functional replacement for recording an acoustic kit. Which is what you say you want.
When you RECORD MIDI in ANY sequencer....ProTools, Logic, Cubase, Performer....they take your input and round it to the closest Pulses Per Quarter Note (PPQN) grid. Meaning--not where it came in.
So, there's no issue using a MIDI cable to trigger something....it's in fact, the ONLY way to trigger a digital instrument of any kind. It's that you need to record the AUDIO output of that instrument if you want to capture that performance accurately.
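The rounding being described is easy to sketch in a few lines. This is a hypothetical illustration, not any specific DAW's internals: it just assumes a sequencer that snaps each incoming event to the nearest tick of its PPQN grid.

```python
# Hypothetical sketch of PPQN grid rounding: the sequencer stores each event
# on the nearest tick, so almost every hit moves slightly in time.

def quantize_to_tick(event_ms, bpm, ppqn=960):
    """Snap an event time (ms) to the nearest PPQN tick; return the stored time in ms."""
    tick_ms = 60000.0 / bpm / ppqn        # duration of one tick at this tempo
    return round(event_ms / tick_ms) * tick_ms

# A hi-hat played 500.7 ms into a 120 BPM tune gets stored on the nearest tick:
played = 500.7
stored = quantize_to_tick(played, bpm=120)
shift = stored - played                    # worst case is half a tick (~0.26 ms at 960 PPQN)
```

The shift per note is tiny, but it applies to nearly every note, which is the cumulative effect on relative timing being argued about here.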
Nick, BTW, is a world class drummer. He, like so many others, has taken gigs selling things because of the state of the industry. Feel free to look him up. I know him mostly as Kevin Gilbert's oft-collaborator....but he's got a long resume of prog rock and, shall we say, intellectual pop.
|
|
|
Post by popmann on Nov 17, 2017 14:06:50 GMT -6
Here is Nick playing....on one of the all time funniest bits of music industry humor ever:
....if anyone hasn't heard it: one of the best albums never released in the 90s. It came out after Kevin died. Nick was one of the people who chipped in to finish it.
|
|
|
Post by ragan on Nov 17, 2017 14:54:46 GMT -6
Ok, now Ragan sees (goddamn iPhone!). Ragan wasn't understanding that distinction.
|
|
|
Post by NoFilterChuck on Nov 17, 2017 15:08:01 GMT -6
Just for fun, let's explore exactly how long a tick is, in milliseconds.
Most DAWs operate at 960 TPQN when it comes to external MIDI.
At 120bpm, one quarter note is 60000 / 120 ms, or 500ms.
Divide 500ms by 960 and you'll see that one tick is 0.52083333ms. How long is one sample at 44.1kHz in milliseconds? 1000 / 44100 = 0.0226757 milliseconds.
What happens if we bump our tempo up to 300bpm? 60000 / 300 = 200ms per quarter note, and 200 / 960 = 0.2083333ms per tick.
So, at 300bpm, the accuracy of a MIDI tick is roughly 10x worse than sample-accurate timing.
Don't forget that MIDI operates at 31250 bits per sec, and each byte takes 10 bits on the wire (start bit + 8 data bits + stop bit), so that's 3125 bytes per sec. Each message consists of at least 2 bytes (with running status), which gives you a maximum throughput of 3125 / 2 = 1562.5 messages per second. Converted to ms, that means each message can arrive no faster than every 0.64ms.
Keep in mind that most DAWs don't store the recorded MIDI at MIDI rate; that's only when you export a MIDI file. They use sample-accurate timing to keep the events lined up with audio files.
Some guesses: USB transmits data waaaaay faster than the MIDI spec calls for, so who knows how those eKit brains with USB outputs are actually transmitting information to the computer. They might be sending SMPTE timecode with subframe accuracy, not that it's any better. Who knows.
Just for fun, let's figure out how accurate a subframe is, in ms, compared to sample rate. 29.97 frames/sec is pretty standard for NTSC syncing, which means one frame lasts 33.3667ms. In Logic, you can edit the position of events down to the subframe, which is 1/100th of a frame. So, 33.3667 / 100 = 0.333667ms per subframe. Definitely better than a MIDI tick @ 120bpm, but not better than the 300bpm tick, and way worse than sample accurate (0.0226ms).
None of this means anything tho. If you're using an eKit to trigger a drum library, it's probably transmitting the triggered MIDI notes from the brain to the computer with timing information based on sample rate, not MIDI rate, since the brain has to trigger samples too. Ever notice how those brains can show up as both audio devices and MIDI devices over the same cable? So it's up to the driver the eKit manufacturer created to convert those sample-accurate trigger packets into messages the DAW can use to trigger softsynths.
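The arithmetic above is quick to check in code. This sketch just redoes the tick, sample, subframe, and MIDI-throughput numbers; the only assumption beyond the post is the 10-bits-per-byte MIDI wire format (start + 8 data + stop bits).

```python
# Re-running the timing arithmetic from the post. All values in milliseconds.

def tick_ms(bpm, tpqn=960):
    """Duration of one sequencer tick at a given tempo."""
    return 60000.0 / bpm / tpqn

def sample_ms(rate_hz):
    """Duration of one audio sample."""
    return 1000.0 / rate_hz

print(tick_ms(120))          # ~0.5208 ms per tick at 120 BPM
print(tick_ms(300))          # ~0.2083 ms per tick at 300 BPM
print(sample_ms(44100))      # ~0.0227 ms per sample at 44.1 kHz
print(33.3667 / 100)         # ~0.3337 ms per SMPTE subframe (29.97 fps, 1/100 frame)

# MIDI wire rate: 31250 baud, 10 bits per byte on the wire (start + 8 data + stop).
bytes_per_sec = 31250 / 10               # 3125 bytes/sec
min_msg_ms = 1000 / (bytes_per_sec / 2)  # 2-byte messages via running status
print(min_msg_ms)                        # 0.64 ms minimum spacing between messages
```

So a tick at 120 BPM is over 20x longer than a sample at 44.1 kHz, which is the gap the whole argument turns on.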
|
|
|
Post by ragan on Nov 17, 2017 15:16:23 GMT -6
Nice breakdown. So does this mean Ragan should be more concerned or less concerned about MIDI timing inaccuracy? Kinda seems like "less".
|
|
|
Post by popmann on Nov 17, 2017 15:52:09 GMT -6
Over 25 years ago, I started AS "the midi guy"....ie: the "young kid who knew how to work a computer" all those years ago. The DRUMMERS there were the ones that warned me and I ignored them about the timing inaccuracy. I blew it off...."sounds the same to me, old man".
So, discussions like this, I guess, are my karmic punishment for not heeding the word of more experienced musicians (or engineers) then.
|
|
|
Post by NoFilterChuck on Nov 17, 2017 16:00:33 GMT -6
I still don't see how using a separate machine to record helps the situation if the DAW is receiving MIDI packets from a driver that got sample-accurate data packets from the eKit brain. Seems like you're just creating more work for yourself, as every computer is gonna record the data the same way: packet from hardware driver -> OS MIDI driver framework -> DAW MIDI input callbacks with timing based on a high-res system timer -> packet timestamp converted to sample position for rendering and to TPQN for display against a grid.
|
|
|
Post by popmann on Nov 17, 2017 16:53:38 GMT -6
You're missing the purpose of the second machine, indeed.
The eKit is plugged into the second machine. It's basically converting a cheap PC into a Haus drum sampler with 16-24 analog outputs....PCI or PCIe will allow you to run that at 32 samples for hardware-level response for the drummer....allows you to run the drum samples at whatever sample rate they sound/perform best at, without regard for the project rate.
You never record MIDI anywhere--the drummer plays the kit, triggering (and thus monitoring) the sounds of SD3 (here)....he plays to them....the DAW records the performance as audio, because that's what a DAW does. You monitor like you would your acoustic kit....you can selectively process the feeds on the way in like you might with a real kit--at that point it's an actual functional replacement for a drum kit in a studio that can't record real drums for acoustics or noise concerns.
It really saves neither time nor money. The whole proposition is more expensive....and there's all kinds of setup time to get it all dialed in...and mixing samples is more tweaky, IME, on the back end. But the upside is--you can have million-dollar-studio-sounding drum tracks in a basement or spare bedroom somewhere....
|
|