|
Post by drbill on Dec 10, 2021 9:06:10 GMT -6
I've run 5 computers linked (Gigastudio's - the ooooold days) and later, 2 Mac towers (VEP - Vienna Ensemble Pro) via networking. Both were a pita. These days, I've maxed out my Mac Tower (2012?) to the fastest hardware, put in 64G of ram, run Samsung SSD's and run all my VSTI's inside PT. Once PT went 64 bit, the real "need" for an external server to host VSTi's became smaller. I still run into issues here and there, but the glitchiness of trying to do this in 32 bit is a long gone, horrible memory.
VEP is a very slick way to do this if you're going to. But for me, it overly complicates things and slows them down. If I am hosting HUGE orchestral templates that take 5+ minutes to load, and switching between cues constantly all day - the VEP setup is the ONLY way to go IMO. It will literally save you an hour or two every day of working as you never have to reload 100+ VSTi's every half hour. These days, I'm not working too much on large films, and it just seems cumbersome. But it's the way most big film composers work.
John - if you're just having a casual thought like - "hey, maybe this would be cool and/or speed things up," my advice is to not do it. You'll know when you need to. Things will slow down and start getting glitchy to the point of not being able to work. They will "look" OK, but the computer will end up gasping its last breath. As computers get faster, ram becomes more plentiful, etc., the need becomes exponentially less. Good luck.
|
|
|
Post by drbill on Dec 10, 2021 9:11:26 GMT -6
PS - VEP can be run on your host computer too. Essentially, it off-loads your VSTi's outside your DAW - allowing the DAW to run faster while all the MIDI stuff gets hosted in VEP, which communicates with your DAW.
|
|
|
Post by OtisGreying on Dec 12, 2021 14:35:53 GMT -6
(quoting drbill's post above) doc, how do you feel about just running a second computer/interface as an analog input through a converter? This is what I'm messing with at the moment: a second laptop with a small interface going into my main setup's converter, the second computer having all my VI's loaded at minimal latency, being recorded into my main DAW.
|
|
|
Post by drbill on Dec 12, 2021 16:12:07 GMT -6
(quoting OtisGreying's question above) Are you syncing MIDI between the two, or just using the second computer as a software/hardware instrument(s)? If the second, it's no different than recording your Nord or Prophet V. Simple!
Go for it - take the interface outputs and record the same as any other piece of hardware. If the first, it gets exponentially more complicated than the second. Personally, I like simple, direct and fast when I'm creating. The only time I'll consider offloading to VEP - either on my host or a second computer - is if I'm running HUGE orchestral templates with every instrument and articulation known to man - 100-150+ instruments up and waiting to be armed at a whim. In those situations, hosting everything inside your DAW gets burdensome, and an external server - either host-oriented if your comp has enough juice, or a secondary computer - becomes almost mandatory.
|
|
|
Post by OtisGreying on Dec 12, 2021 17:16:36 GMT -6
(quoting drbill's reply above) I haven't synced any MIDI yet; I'm physically playing everything at the moment on the second computer (& MIDI keyboard), which I'm totally fine with. My need for the extra computer power is mostly because I'm working within sessions that I'm also mixing, with lots of plug-in chains on vocals, acs, etc. Those plug-in chains are really setting my latency back, so I can't add any more playable instances of VI's in my main session. This seems to be fixing that, which is exciting.
|
|
|
Post by the other mark williams on Dec 12, 2021 17:23:15 GMT -6
(quoting OtisGreying above) What's your DAW? Logic has a "Low Latency Monitoring Mode" for exactly this purpose: it disables any plugins with latency above a certain user-definable threshold. When you're done tracking that VI, you just turn Low Latency Monitoring Mode off.
I imagine other DAWs have something similar.
Apologies if I'm misunderstanding your situation, OtisGreying.
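For context on why the buffer setting matters in this conversation: one buffer's worth of samples is a fixed delay, and a DAW's monitoring path pays it at least twice (once on input, once on output), before converter latency. A quick sketch of the arithmetic - the 128-sample / 48 kHz figures below are illustrative, not anyone's actual settings:

```python
# Monitoring latency from buffer size and sample rate.
# The specific numbers (128 samples, 48 kHz) are example values only.

def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """One buffer's worth of delay, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000.0

# A DAW buffers once on the way in and once on the way out, so the
# audible round trip is at least two buffers (plus converter delay).
round_trip = 2 * buffer_latency_ms(128, 48_000)
print(f"~{round_trip:.1f} ms round trip at 128 samples / 48 kHz")
```

This is why a latency-inducing mix plugin chain hurts playability: every plugin that reports extra latency adds to this floor, which is exactly what Logic's low-latency mode bypasses.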
|
|
|
Post by OtisGreying on Dec 14, 2021 1:26:52 GMT -6
(quoting the other mark williams above) Thanks for the tip, mark - I actually forgot about that function, ha! It definitely solves the latency problem nicely. The only thing is, it doesn't solve for CPU power, which is the other part of the equation in offloading the VI's and VSTs to a secondary computer. Some of the soft synths I use can be CPU hogs. But as far as just LATENCY goes, and if CPU isn't an issue in said session, yes, this mode is the answer!
|
|
|
Post by the other mark williams on Dec 14, 2021 12:12:31 GMT -6
(quoting OtisGreying above) Totally understandable - more CPU power is always welcome!
|
|
|
Post by Quint on Jan 28, 2022 11:38:41 GMT -6
(quoting popmann) "That's how I generally do it. If/when you need MIDI on the primary DAW, you plug the keyboard into the primary's MIDI, and the primary MIDI out goes TO the secondary. Audio connections remain the same. You use a MIDI track in the primary. You can sequence MIDI just like everyone did for the first 20 years of MIDI." I get your explanation on using a second interface out of the slave, and recording that into the primary interface on the master machine. I also understand that, in that scenario, keyboard controllers would be connected directly to the slave computer IF there is no need or desire to record MIDI in the DAW on the primary computer. However, I see that you're mentioning hooking up keyboard controllers to the primary computer, and then passing that MIDI on to the slave computer, IF you DO want to record MIDI in the primary DAW. I would like to do this, if possible. Can you elaborate on this? How would you do it in practice? Run your MIDI to a splitter box and then send it to the primary AND the slave at the same time? MIDI into the primary DAW, and then use something like VEP or the new BC Connector to pipe MIDI over to the slave? How would that affect tracking latency? Does this cause any syncing/drift issues?
|
|
|
Post by popmann on Jan 28, 2022 12:44:24 GMT -6
(quoting Quint's questions above) I take it you've never used external MIDI devices? At that point, the "second computer" would behave and be addressed like, say, a Roland JD800 or Yamaha Motif. All you would technically need is your interface's MIDI in and out (and the one on the destination machine). Keyboard's MIDI to main computer... main computer's MIDI output to second computer's MIDI input. Audio connections back. Latency? It adds a little. Always did, but it's handfuls of milliseconds - not really functionally an "issue" unless that second machine is already running borderline too latent or something. Drift? This is MIDI, man. The timing SUCKS compared to audio. Always has. That said, no - it doesn't cause any drift per se (which to me means a variable speeding up and slowing down independent of the main machine), because there's no additional "sync"... the MIDI lives in your main machine DAW's MIDI engine just like it's a software instrument. It fires (MIDI) notes at a second machine alongside the audio playback.
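The "handfuls of milliseconds" figure squares with the wire itself: classic DIN MIDI is a 31250-baud serial link with 10 bits on the wire per byte (start + 8 data + stop), so a message's transmission time is fixed arithmetic. A minimal sketch, no thread-specific assumptions beyond that standard framing:

```python
# Delay added per DIN-MIDI hop, from the serial line rate alone.
MIDI_BAUD = 31250      # classic DIN MIDI line rate
BITS_PER_BYTE = 10     # start bit + 8 data bits + stop bit

def midi_message_ms(num_bytes: int) -> float:
    """Wire transmission time of a MIDI message, in milliseconds."""
    return num_bytes * BITS_PER_BYTE / MIDI_BAUD * 1000.0

# A note-on is 3 bytes (status, note, velocity): just under a millisecond.
print(f"3-byte note-on: {midi_message_ms(3):.2f} ms per hop")  # prints 0.96 ms
```

So the keyboard-to-main-computer hop and the main-to-second-computer hop each cost roughly a millisecond of wire time; the rest of the added latency comes from buffering, not MIDI itself.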
|
|
|
Post by Quint on Jan 28, 2022 14:03:17 GMT -6
(quoting popmann on addressing the second computer like an external MIDI device) I hear what you're saying about comparing the slave machine to an external MIDI device, and that made sense when the comparison was made a few posts back. However, my interface (Apollo x16) does not have MIDI in and out. Many interfaces don't these days. So that's why I was asking, from a practical standpoint, how someone would send MIDI from the primary to the slave. Or are you saying that you'd have to have interfaces with MIDI I/O for this to work? Right now, it's not a problem, using a USB cable, to get my keyboard controllers into the first computer. It's after that point that I'm not sure how to do the rest of this.
(quoting popmann on latency) I'm with you on the latency being minimal for the initial run of MIDI signal into the first computer. That, I have experience with. If you're also saying the additional latency incurred by going out of the first computer and into the second is also very minimal, then that's good to hear. If not, that would give me pause, as I would need the total latency (controller > primary computer > primary DAW > slave computer > Kontakt > slave interface DA > primary interface AD > primary interface mixer > primary interface DA) to be pretty low in this tracking scenario. If the latency will be too high, I might just connect MIDI directly to the slave computer, run audio out of the slave interface into the master interface (Apollo), and leave it at that.
For the tracking scenario described above, what is a ballpark reasonable expectation for total latency, assuming I have an M1 Mac Mini as the primary and an 8700K i7 PC with NVMe SSDs and 32GB RAM as the slave? Is it even doable to get this all to operate in a usable low-latency tracking scenario?
(quoting popmann on drift) I get that the MIDI would live in the primary DAW and send signal to the slave machine. The question concerning drift mainly had to do with whether or not the primary interface and slave interface need to be, or even should be, word clock synced together. Or does your response assume that both interfaces would be word clock synced? Maybe a word clock sync between the two doesn't matter because, as you said, "this is MIDI"?
|
|
|
Post by popmann on Jan 29, 2022 21:40:06 GMT -6
Most interfaces have MIDI. UA is breaking tradition. That's fine - irrelevant. You just need a MIDI IO... and an audio IO for both machines. It DOES mean you have to find a MIDI IO that works with Apple Silicon, which leaves out the benchmark-standard MOTU boxes, I understand. Apple done broke the shit out of a lot of old USB MIDI with their M1. I'm sure there are some that work - just saying, another research point for you. Just because a MIDI IO worked with MacOS doesn't mean it works anymore with MacOS on M1.
There should be no word clocking together. That's not what a PCM clock does. Easiest to connect analog audio. That way you can run it through analog gear, or run the two machines at disparate sample rates. I mean, technically, SPDIF works fine... but digital connections can be more functionally troublesome for the less experienced. MIDI from the main DAW to the MIDI machine... AUDIO connection from the MIDI machine to the main DAW. Those are the required connections.
The latency "added" with MIDI throughput isn't going to be more than a few milliseconds. There's no "total number" I can give you, because I have zero idea what you think you're running on the slave machine, thus what kind of interface it has and what it all adds up to at a given buffer. It's whatever latency you can currently run the VIs at on the 8700 system, with a handful of milliseconds added for an extra ADC trip and the MIDI throughput of the main DAW.
This is a logistical PIA if the only thing you've ever known is working with VIs/MIDI inside the DAW, and/or you want to leave things liquid MIDI up until mix time.
What are you doing TODAY (or whenever in the past/present) that you are hoping going this way will make better? I mean, this will entail booting two machines... loading your DAW project... THEN loading your Cantabile (or whatever host on the slave) project you stored with all the VI settings... and making sure you have them monitored properly in Console.
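Even without a "total number," the chain Quint listed is just a sum of stages, so it can be sketched as a budget. Every figure below is a placeholder assumption for illustration - nothing here was measured on either machine in the thread:

```python
# Rough latency budget for the two-machine monitoring path.
# ALL numbers are illustrative placeholders, not measurements.

chain_ms = {
    "MIDI hop (main -> slave)":     1.0,  # ~1 ms per DIN-MIDI hop
    "slave buffer (VI host)":       5.3,  # e.g. 128 samples @ 48 kHz, in + out
    "slave DAC":                    0.5,  # converter latency, typical order
    "main ADC":                     0.5,
    "main interface direct monitor": 1.0,
}

total = sum(chain_ms.values())
for stage, ms in chain_ms.items():
    print(f"{stage:32s} {ms:4.1f} ms")
print(f"{'total':32s} {total:4.1f} ms")
```

The point of the sketch: the slave machine's VI buffer dominates, and the extra converter trip plus MIDI hop only add a few milliseconds on top of whatever the 8700 system already runs at.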
|
|
|
Post by Quint on Jan 30, 2022 12:30:22 GMT -6
(quoting popmann on MIDI IO and M1 compatibility) Understood. I'll have to identify a little stereo interface that meets my audio needs for the DA on the slave PC. I have some thoughts, but I'm still researching that. As for the MIDI I/O, MOTU was actually the first thing that popped into my head, so I was disappointed to hear that about their M1 compatibility. Regardless, I've found the iConnectivity mioXM, which seems like it will work: www.iconnectivity.com/mioxm
From what I've read, it works just fine with Mac M1 and has pretty extensive routing capabilities. They also have the mioXC for simpler setups, if you don't need to do MIDI routing: www.iconnectivity.com/mioxc
(quoting popmann on word clock and latency) So no physical word clock sync needed. Got it. I hear what you're saying about not being able to predict MY latency numbers; I was just asking in a more abstract manner whether it was simply doable in a general sense. As I've been fine with the latency currently experienced going directly to my 8700K PC, a "handful" more milliseconds "shouldn't" likely be a big deal. On a related note, I get how the MIDI interface for the Mac and the MIDI interface for the PC will have to be physically connected to one another to pass MIDI back and forth. What I'm not sure about is how that MIDI is routed within the Mac to its respective MIDI interface, and how the MIDI is routed within the PC from its respective MIDI interface. I've never done this before, but I'm guessing that's relatively straightforward? However, I don't know that Luna has very advanced MIDI routing capabilities, so that "could" be a problem for routing that MIDI out to the PC from the Mac. Maybe not. I'm not sure.
(quoting popmann's question about what this is meant to improve) Yes, I've only ever done VI/MIDI along with everything else within the same singular DAW. I'm up for figuring out the logistical challenge, though. The reason this has all come up is because I'm interested in possibly switching to Mac for using Luna.
I don't want to spend $2k or more on a high-powered Mac, though, when I have a high-powered PC sitting right here. So the thought occurred to me that I could buy a more basic, cheaper Mac for running the main DAW and recording audio/mixing, and slave my PC for running samples. Best of both worlds, or at least it seems that way to me. And I DO still want to record raw MIDI in Luna for use later on during mixdown, so it's important to me to have that, as I might end up later wanting to transpose the key or something like that. I'm fine with booting two machines and all of that. I don't run crazy sample libraries anyway. I just use Kontakt, Keyscape, and things like that to access piano/organ/wurli/etc. libraries. Maybe the occasional messing around with a horn or string quartet or something like that. Templates would be my friend in a two-machine scenario, obviously.
.... To get back to the routing thing for a minute. Let me see what you think about this. The thought had also occurred to me: why, instead of routing MIDI to the Mac and then passing it through to the PC, couldn't you just use something like the mioXM that I linked above to SPLIT the MIDI signal immediately after the MIDI output devices (I have an Akai MPK261, Casio PX-S3000, Behringer Poly D), and then route the MIDI signal independently to the Mac and the PC, in parallel, at the same time? That ought to save on the latency incurred by the aforementioned pass-through between the Mac and PC, as well as avoid the logistics of having to route MIDI between the Mac and PC in a tracking situation. In this scenario, MIDI would still make its way to the PC for triggering samples during tracking, and the result would then travel as audio from the PC to the Mac, via the PC's DA interface into the Mac's AD interface.
Raw MIDI would still go to the Mac to be recorded there in Luna, for later use in triggering different samples at mixdown - either through triggering samples loaded directly on the Mac (but only used at higher buffers that would have been inappropriate for tracking), or alternatively through something like triggering samples on the PC via an ethernet connection and something like the new Blue Cat Connector, or similar, between the Mac and PC. Best of both worlds, no? Wouldn't this actually be a BETTER way to do this instead of only sending MIDI to the Mac and then passing it through to the PC? You could use the mioXM on its own to parallel-route MIDI to the Mac and PC via the network connection (and a separately purchased network switch), or a combo of the mioXM and mioXC to parallel-route MIDI to the Mac and PC via two separate USB connections. Have you had any experience with either of these two scenarios?
|
|
|
Post by popmann on Jan 30, 2022 20:55:03 GMT -6
You don't need MIDI going from the PC to the Mac... MIDI sequencing happens on the Mac... it will send C1 at Vel 123 when it's sequenced to happen, on the MIDI channel assigned on the Mac (and matched on the PC)... and that will result in what? Audio of a C1 being triggered - THAT is the audio that goes back to the Mac, now over the analog wires.
You can't just split the MIDI signal. (I don't mean there aren't boxes that will let you mult it, I mean) it will cause two MIDI notes to arrive at the VI... but even if you set it up so that you only hear the direct mult, the machine "recording" MIDI will still have the latent feed as the timing of record. You want to HEAR that when you're playing - not just when it plays BACK. That just ensures that the overall timing would be WORSE than MIDI is natively.
I feel like you're sweating the wrong element here. This is a COMPLETE change to your workflow. The couple milliseconds of MIDI throughput is trivial. I mean, I plug in directly because I'm playing live and recording the audio.
I don't get your question about MIDI "routing". There shouldn't be a need to do more than pick the channel in whatever host you're using on the PC... and also select that channel on the MIDI track you're sequencing on the Mac. You do NOT want to try to reroute things at the OS or control-panel level. You can see how channelized MIDI works with Kontakt in Cubase right now - set it up in the VI rack with 16 instruments... and 16 MIDI tracks, each on its own channel, pointed at that instance of Kontakt. That's just how it will work if Kontakt is on a PC in a real "rack". I think Kontakt has 64 or 128 MIDI channels it can use for one instance.
If you do what you're suggesting - record MIDI and audio at the same time - you'll find the objective truth is they're never the SAME. Make it about keeping the MIDI for "mixdown" and you open up even more variables for change. The one ACTUAL advancement of MIDI with the instrument inside the DAW is that there's only ONE point of loss - the capture. Once it's captured, the PLAYBACK timing is insanely consistent compared to external. You sound like you're wanting to do a lot of MIDI work. I'm not sure why you'd choose LUNA for that. I see why you're doing this now - LUNA isn't capable of hosting VIs.
If you're not recording live bands, I don't understand the attraction to LUNA - as cool as I think it IS for that given workflow. If most of what you do is MIDI, LUNA isn't going to really help. Just like HDX won't help. That's a cool tool for ANOTHER job than the one it sounds like you want to do. Cubase on that current PC will be exponentially easier to use.
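For what it's worth, the "pick the channel on both ends" scheme described above is visible right in the bytes: a note-on message carries its channel in the low nibble of the status byte, so the main DAW's track channel and the slave host's instrument channel simply have to agree. A small sketch (note number 36 for C1 assumes the common middle-C-equals-C4 convention; vendors differ on octave naming):

```python
# Build a raw channelized MIDI note-on message (3 bytes).
# Channel lives in the low nibble of the status byte (0x90 = note-on).

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Return the 3-byte note-on message for the given channel (0-15)."""
    assert 0 <= channel <= 15
    return bytes([0x90 | channel, note & 0x7F, velocity & 0x7F])

# popmann's example: C1 (note 36, under the middle-C=C4 convention) at velocity 123.
msg = note_on(channel=0, note=36, velocity=123)
print(msg.hex())  # prints "90247b"
```

Whether the receiving end is a hardware module or Kontakt on a second computer, it just filters incoming messages by that channel nibble, which is why no OS-level routing is needed beyond matching the numbers.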
|
|
|
Post by Quint on Jan 31, 2022 10:41:27 GMT -6
You don't need MIDI going from the PC to the Mac...MIDI sequencing happens on the Mac....it will send C1 and Vel123 when it's sequenced to happen on the midi channel assigned on the Mac (and matched on the PC)...that will result in what? audio of a C1 being triggered--THAT is audio that goes back to the Mac now over he analog wires. Agreed. I don't think I ever said anything to the contrary, so there must be some confusion somewhere. We're still discussing options on HOW the MIDI gets to the PC but, that aside, once the MIDI has gotten to the PC, audio comes from the PC, via "analog wires", to the Mac via the DA connected to the PC and into the Apollo AD connected to the Mac. No disagreement there. We're on the same page. You can't not split MIDI signal. (I don't mean there aren't boxes that will allow you to mult them, I mean) It will cause two midi notes to arrive at the VI...but, even if you set it up so that you only hear the direct mult...the machine "recording" MIDI will still have the latent feed as the timing of record. You want to HEAR that when you're playing--not just when it plays BACK. That just ensures that the overall timing would be WORSE than MIDI is natively. I think that there may be some confusion here because I did mention sending MIDI from the Mac to trigger samples on the PC in a NON-TRACKING scenario. That would be the only time I would do that. In such a case, I would be sending the previously recorded raw MIDI from the Mac to the PC, but this would only be in a mixdown kind of scenario where realtime latency isn't such a concern. So I just wanted to make that distinction clear. NO MIDI would be passing from the Mac to the PC during tracking in the MIDI split scenario I proposed. In this split scenario for tracking, there would be NO MIDI passing to the PC from the Mac. NONE. So I'm not sure what timing issues you think will exist? 
The PC will effectively be acting as a hardware MIDI module, which is then recorded analog into an AD input on the Apollo (which is attached to the Mac). The only thing I'm proposing to do over and above what I described above is to split the MIDI signal before it gets to the PC, so that I can take that mult and record the raw MIDI into the Mac for possible future use at a later date. While actually recording the part, the ONLY thing that will ever be heard is the audio coming out of the PC, which is being monitored (and also recorded to the Mac) via the Apollo Console. Upon playback, the ONLY thing that will ever be heard is the audio that was recorded into the Mac. Maybe it would help to think of the split of the raw MIDI as something analogous to a DI taken of a guitar before it goes to the amp. Just like that DI signal can be reamped later, I'm taking a "safety" of the MIDI, if you will, for the purposes of triggering a different piano library at a later date, if/when I decide I'd rather have a different piano or would prefer to transpose to a different key. If you're worried that, at a later date, I might try to use the originally recorded audio from the PC in conjunction with some additional library that I trigger from said raw MIDI on the Mac, I think I get what you're saying about timing issues there. Understandably, that's not something I would try to do. If/when I triggered a new library, it would be to replace the original audio, not to mix with it. Otherwise, I'm not seeing what the timing concerns are. Can you elaborate? I feel like you're sweating the wrong element here. This is a COMPELTE change to your workflow. The couple milliseconds of MIDI throughput is trivial. I mean I plug in directly because I'm playing live and recording the audio. I don't really see it as much of a change. I'm recording the audio out of the PC as if it was a real piano or guitar or any other live instrument, same as I always do. 
I often take raw guitar DI for possible reamping later. Taking the raw MIDI split is effectively no different and, if I WAS, as you propose I should do, taking just a single MIDI path and sending that directly to the Mac, to then pass through to the PC, that's not really all that different either. In both cases, I'm still maintaining raw MIDI to be able to further manipulate at a later date or otherwise trigger different samples. The only real difference is that I save a little latency and routing/throughput headaches by not having to route through to the PC from the Mac while tracking. I don't get your question about MIDI "routing". There shouldn't be a need to do more than pick the channel in whatever host you're using on the PC...and also select that channel on the MIDI track you're sequencing on the Mac. You do NOT want to try to reroute things at OS or control panel level. You can see how channelized MIDI works with Kontakt in Cubase right now--set it up in the VI rack with 16 instruments...and 16 midi tracks--each on their own channel pointed at that instance of Kontakt. That's just how it will work if Kontakt is on a PC in a real "rack". I think Kontakt has 64 or 128 midi channels it can use for one instance. Ok. That sounds simple enough. If I end up going with the "pass through from Mac to PC" you're proposing, then that sounds easy enough to figure out. If you do what you're suggesting: record MIDI and audio at the same time...you'll find the objective truth is they're never the SAME. Make it keeping the midi for "mixdown" and you open up even more variables for change. The one ACTUAL advancement of MIDI with the instrument inside the DAW is there's only ONE point of loss--the capture. Once it's captured, the PLAYBACK timing is insanely consistent compared to external. You sound like you're wanting to do a lot of MIDI work. I'm not sure why you'd choose LUNA for that. I see why you're doing this now--LUNA isn't capable of hosting VIs.
If you're not recording live bands, I don't understand the attraction to LUNA--as cool as I think it IS for that given workflow. If most of what you do is MIDI, LUNA isn't going to really help. Just like HDX won't help. That's a cool tool for ANOTHER job than the one it sounds like you want to do. Cubase on that current PC will be exponentially easier to use. I was already keeping raw MIDI data in the DAW session anyway, and keeping the VIs playing back live instead of printing the sound. So I guess I don't see how what I'm proposing is any different. Sometimes I just don't want to commit to a sound until later in the process. Plenty of people do that, so I'm not sure why that presents a problem. Also, sometimes (a lot of the time) I'll go ahead and print a track just to be able to turn the VIs off and save on resources, but I still maintain the raw MIDI to use at a later date for whatever I choose. I actually don't do a lot of MIDI work. I'm primarily traditional, live instrument, band kind of stuff, hence the appeal of Luna. I'm simply just looking for ways to get into Luna, which requires moving to Mac, while maintaining the ability to use MIDI for keys libraries in a manner that won't cause me to spend a ton on a new Mac. Leveraging the high powered PC I already own would seem to accomplish that for me, provided I can work out the kinks of doing something like this. Also, as far as I'm aware, Luna DOES host third party VIs. I've not seen anywhere that it doesn't allow that.
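As a footnote to the channelized-MIDI point above (one multitimbral instance listening on 16 channels, each sequencer track addressed to one channel): here's a toy Python sketch of that routing idea. All names are hypothetical; this is a model of the concept, not Kontakt's actual API.

```python
# Toy model of multitimbral MIDI channel routing (hypothetical names,
# NOT Kontakt's real API): one "instance" holds up to 16 instruments,
# and each incoming message is dispatched purely by its channel number.

class MultitimbralInstance:
    def __init__(self):
        self.instruments = {}          # channel (0-15) -> instrument name

    def assign(self, channel, instrument):
        if not 0 <= channel <= 15:
            raise ValueError("standard MIDI has 16 channels (0-15)")
        self.instruments[channel] = instrument

    def route(self, message):
        """Dispatch a (channel, note, velocity) tuple to its instrument."""
        channel, note, velocity = message
        target = self.instruments.get(channel)   # None if nothing assigned
        return (target, note, velocity)

rack = MultitimbralInstance()
rack.assign(0, "Grand Piano")
rack.assign(1, "Strings")

# A note played on a sequencer track set to channel 1 reaches only the
# instrument assigned to channel 1:
print(rack.route((1, 60, 100)))   # -> ('Strings', 60, 100)
```

The point being: the channel number travels with every note, so whether the instance lives in the DAW's VI rack or on a PC across the room, picking matching channels on both ends is the whole routing job.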
|
|
|
Post by popmann on Jan 31, 2022 14:14:32 GMT -6
I'm not sure how to bridge this communication gap. I'll think on it. I think what you're proposing, leaving things MIDI is a bad idea made WORSE by it being externally hosted. The only tech advancement* in MIDI sequencing VIs inside the DAW is that the timing loss happens ONLY on capture...afterwards it will play that incorrect capture (and whatever you edit it to) back more accurately than external ever could.
*other than people not having to learn MIDI 101 if that's applicable
|
|
|
Post by Quint on Jan 31, 2022 15:50:48 GMT -6
I'm not sure how to bridge this communication gap. I'll think on it. I think what you're proposing, leaving things MIDI is a bad idea made WORSE by it being externally hosted. The only tech advancement* in MIDI sequencing VIs inside the DAW is that the timing loss happens ONLY on capture...afterwards it will play that incorrect capture (and whatever you edit it to) back more accurately than external ever could. *other than people not having to learn MIDI 101 if that's applicable But the MIDI is NOT being externally hosted. It's being captured to the DAW (Luna) on the Mac, where everything else already is (including the audio of that same performance that came from the PC into the Mac via AD). The original MIDI performance is hosted in the DAW session on the Mac henceforth and forever. I think this may be the crux of the miscommunication issues we're having here. Let me try putting it this way. Forget about the tracking to or thru a separate machine for the moment. Let's say I played a performance which sent MIDI to Luna (or any DAW, for that matter) on the Mac. At this point, it's been "captured", as you say. Now I should be able to playback that MIDI performance through any VI hosted within that same DAW without any concern for timing changes at all, correct? Now let's take that same "capture" scenario described above, but send it out to an external device (hardware MIDI module, my PC, whatever) at some later date. There shouldn't be any timing issues with the MIDI track hosted within the DAW itself, at least for as long as it lives within the DAW. That I feel confident about. If not, you couldn't even depend on your audio tracks to stay in time from one playback to the next. However, are you saying that there is going to be all kinds of drift/timing issues, once it becomes external to the Mac, due to sending that MIDI track hosted in the DAW out to the aforementioned external MIDI device (my PC, in this case)? If so, how much are we talking and why should I care? 
As I understand it, people send MIDI out to external hardware all the time, and it doesn't seem to present a real world issue as far as I'm aware. Also, if this potential drift is such a problem, wouldn't I encounter those same issues using the pass through method you were proposing that I should use between the Mac and PC when tracking? I mean, that pass thru from the Mac to the PC will introduce those SAME sorts of timing issues regardless of whether or not it's happening as I play real time or whether it's coming from a playback in the DAW. I guess I'm just not seeing how this is a problem. I suspect there is a communication problem going on here between us. Help me better understand what you're saying and I will keep trying to do the same. In any case, I appreciate your help with all of this.
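For scale on the milliseconds being debated here: classic 5-pin DIN MIDI is a 31250-baud serial link, with 10 bits per byte on the wire (8 data bits plus start and stop), so the transit time of a message is simple arithmetic. A quick sketch:

```python
# Wire time for DIN MIDI: 31250 baud, 10 bits per byte, and a note-on
# is 3 bytes (status, note, velocity) -- so a single note costs just
# under a millisecond of serial transit before any software latency.

BAUD = 31250                 # classic 5-pin DIN MIDI rate
BITS_PER_BYTE = 10           # 8 data bits + start bit + stop bit

def wire_time_ms(num_bytes):
    return 1000.0 * num_bytes * BITS_PER_BYTE / BAUD

print(f"3-byte note-on: {wire_time_ms(3):.2f} ms")    # 0.96 ms
print(f"10-note chord (30 bytes): {wire_time_ms(30):.2f} ms")   # 9.60 ms
```

Running status can shave the status byte off consecutive messages, but the order of magnitude holds: single notes are sub-millisecond, while dense chords and CC streams stack up, which is where external MIDI timing complaints usually come from.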
|
|
|
Post by popmann on Jan 31, 2022 21:30:09 GMT -6
I think maybe the disconnect is that you think MIDI is used and left as MIDI to "change sounds later" more than it is. That would lead you to assume it works better than it does, which is understandable.
If in what I've said you don't see a problem, there might not be one for you.
MIDI sequencers will NOT play back the timing of the performance. Ever. INSIDE the same host, though, it WILL play back the PPQN rounded capture of that performance repeatedly without timing changes--in a DAW where MIDI/VI work is the developer's focus.
I think you should do VEP if you want to use a second machine. I didn't know you were going to leave things MIDI for the life of the project. That will handle your recalling of sounds--give you a bunch of audio pipelines back to the DAW without spending a ton on a big IO for both machines to do that routing analog.
|
|
|
Post by Quint on Feb 2, 2022 9:36:21 GMT -6
I think maybe the disconnect is that you think MIDI is used and left as MIDI to "change sounds later" more than it is. That would lead you to assume it works better than it does, which is understandable. If in what I've said you don't see a problem, there might not be one for you. I actually think MIDI is used in this way maybe more than you realize. I see people doing it this way all the time. Though I am not into the EDM/techno/dance thing, many in that crowd, in particular, do realtime, unprinted playback of recorded MIDI through multiple VIs in the DAW without issue. There's simply no reason that it shouldn't work, provided you have the CPU resources to handle all of that active processing. If the "change sounds later" approach works for those guys, it should work for my much more modest needs. In your quote below, you pretty much agreed with this, so I don't understand why we seem to be going around in circles on this? Anyway, I guess we'll have to agree to disagree on this particular subject, and that's fine, it's all good. MIDI sequencers will NOT play back the timing of the performance. Ever. INSIDE the same host, though, it WILL play back the PPQN rounded capture of that performance repeatedly without timing changes--in a DAW where MIDI/VI work is the developer's focus. You're not wrong about the "PPQN rounded capture" thing, but when I used the word "capture" in my previous post, I meant it in the same way to which you refer. We are not in disagreement here, but that also underlies my original point, which is that the same PPQN-rounding issues affecting the MIDI signal going to the Mac are the SAME PPQN-rounding issues affecting the MIDI signal going to the PC in the split scenario I proposed. They are the same PPQN-rounding issues I would still have if I wasn't doing a split and was, instead, only going to one computer, as you proposed I should. I don't see how worrying about any of that is useful?
If it's inherent to using MIDI, there's not a whole lot any of us can do about it, no? So what does it hurt to mult the MIDI and send it to two computers? Serious question. I think you should do VEP if you want to use a second machine. I didn't know you were going to leave things MIDI for the life of the project. That will handle your recalling of sounds--give you a bunch of audio pipelines back to the DAW without spending a ton on a big IO for both machines to do that routing analog. VEP is still under consideration for the MIXING phase, but NOT for the TRACKING phase. Those are two different needs and I think the intertwining discussion of both of those things is what continues to cause some communication issues here. Also, VEP will, as you've mentioned before, come with its own set of latency issues, which is NOT what I want for TRACKING. I like your earlier idea of using the PC standalone during TRACKING to print the analog audio out of the PC (via its connected DA) and into the Apollo and then into the Mac, and that's what I'm going with if I end up deciding to purchase a Mac. This MIDI split TRACKING scenario will result in me getting printed VI audio into Luna on the Mac, with reliable low latency monitoring and no need to worry about buffers on the Mac. This MIDI split TRACKING scenario will result in me also getting raw MIDI into Luna on the Mac, for me to do what I want with it at a later date, if I so choose. BOTH of these things will happen at the same time during TRACKING. That's a win win as far as I see it. As for the MIXING phase (or really just at ANY point after I've laid down the original performance), VEP is still under consideration for passing the previously recorded raw MIDI to the PC to offload those VI processing resources there, though VEP might cost more than I maybe want to spend. I'll likely try out BC Connector first to see how that goes. Also, VEP might be a little more than I need for my relatively simple requirements.
Also, it may, in fact, turn out that the Mac is capable of internally handling my VI processing needs in Luna all on its own during the MIXING phase, due to the ability to run everything at a much higher buffer that would have been unworkable for TRACKING. Regardless of which route (VEP, BC Connector, Mac, whatever) I go for in the MIXING phase, I will have an audio track recorded during TRACKING as well as a MIDI track to try something different with during MIXING. Win win. Hopefully this description finally gets my intent across. So where, as you see it, are the inherent flaws in any of this? Serious question. I'm already squared away on my I/O, other than the need to pick up a little two channel DA for the PC and the MioXM (or something like it) for interfacing MIDI to the Mac AND PC, so that's no great cost there. I'll follow all of this up by saying again that I appreciate all of your help with working through this stuff.
|
|
|
Post by popmann on Feb 2, 2022 13:17:31 GMT -6
few notes:
-You can't change the buffer size of LUNA at mixdown, unless something has changed recently. There are a lot of functional issues with a full-time 128-sample buffer. One being there are VIs that require more linear time (vs CPU time) than that for input scripting. I never meant to imply it technically couldn't host a VI. Anything CAN host a VI--doesn't make it good at it. If it's not good at it, in my language it "can't". I can see how that's confusing. I need to stop that and be more specific.
-I understand you want to take audio and MIDI for later. It's always been what people did. I've rarely been able to use the MIDI later. Take that for whatever it's worth. So I stopped taking it in most scenarios. If I build something with MIDI that I can't play I render it when I get happy with it.
-CPU is never the reason I don't leave things MIDI until mixdown. It might be a side benefit, but I can open a 96khz project and have my drums and strings and pianos and Hammond all playing MIDI "live"...maybe ultimately that might limit my mixdown DSP, I guess-I wouldn't know, as I start over once I have the arrangement ideas fleshed out typically.
-You can't archive MIDI. Not really. So...you HAVE to render VIs to audio eventually or those tracks are lost to time. The thing is--you can spend a few minutes here and there as things are created...OR...you set up a block of time to render and double check all the tracks after you're done with the mix. I'm bringing this up just to say there's a lot of reasons that add up to my "don't use MIDI unless you have to--and when you have to, render it to audio as soon as you can" attitude. Much like why I'll never use 44.1 to record shit--any given argument can be "that's not significant"--which can both be true on a SPECIFIC advantage of 96khz, and ignores that 20 "that's not that significant" adds up to being pretty significant.
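On the buffer sizes that keep coming up in this thread: a buffer is a sample count, so the latency it adds depends on the sample rate. A quick arithmetic sketch (one-way driver latency only, ignoring converter delay and round-trip doubling):

```python
# Buffer latency is buffer_size (samples) / sample_rate (Hz).
# A fixed 128-sample buffer is ~2.9 ms at 44.1k but only ~1.3 ms at
# 96k -- one reason high sample rates are friendlier for tracking VIs.

def buffer_latency_ms(buffer_samples, sample_rate_hz):
    return 1000.0 * buffer_samples / sample_rate_hz

for rate in (44100, 96000):
    for buf in (128, 512):
        ms = buffer_latency_ms(buf, rate)
        print(f"{buf:>4} samples @ {rate} Hz -> {ms:.2f} ms")
```

So a fixed 128-sample instrument buffer stays comfortably playable at either rate, while a 512-sample playback buffer would be too laggy for live keys but is a non-issue once everything is just playing back.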
|
|
|
Post by Quint on Feb 2, 2022 17:11:54 GMT -6
few notes: -You can't change the buffer size of LUNA at mixdown, unless something has changed recently. There are a lot of functional issues with a full-time 128-sample buffer. One being there are VIs that require more linear time (vs CPU time) than that for input scripting. I never meant to imply it technically couldn't host a VI. Anything CAN host a VI--doesn't make it good at it. If it's not good at it, in my language it "can't". I can see how that's confusing. I need to stop that and be more specific. Yes, that's how I understand Luna to work too. It has a set buffer of 128 samples for VIs and 512 samples for audio. I'm not sure which VIs might fall into the category you mention, but there are apparently many VIs which don't have an issue with the 128-sample buffer in Luna, based on what I've read around the web. However, I can see what you mean about there being a potential issue if a VI needs more time than a 128-sample buffer allows to simply function. I suppose that could be a problem, but then that's all the more reason that I'd be glad I was taking an audio capture off of my PC, as well as sending MIDI to the Mac. If all else fails, I've at least got the audio file and MIDI file so, if I just really wanted to mess with that recorded MIDI file, I suppose I could always pull it over to Reaper (where I could make the buffer as long as I want), render it there, and then pull it back over to Luna. Ideally, the audio file would already be the way I want it, and no further work is required. Since we were talking about VEP earlier, if I were to bump against the 128-sample buffer running a VI in Luna, I'm guessing that farming that out to VEP on my PC, realtime, isn't going to fare me any better, as the latency would just be that much worse. The same could be said for trying it with BC Connector, so I guess dragging it to Reaper for rendering would be the only real option. -I understand you want to take audio and MIDI for later. It's always been what people did. I've rarely been able to use the MIDI later.
Take that for whatever it's worth. So I stopped taking it in most scenarios. If I build something with MIDI that I can't play I render it when I get happy with it. Ok. Well that's more of a personal preference thing though, not an "it won't work" thing, so now I see what you're saying. I get what you're saying about rendering it on the spot, and I don't necessarily disagree with that approach either. In a lot of cases, that's exactly how I would do it too. The main reason I want a raw MIDI file in addition to the audio file is because I may want to edit the MIDI track to fix a screw up, or change the key. I also may decide I'd prefer a different piano library or something, and I'd need the raw MIDI to be able to do any of those things. That's not to say that I would necessarily always leave those things to the very end. In most cases I would be taking care of those things sooner than later, and often on the spot. I generally prefer to not delay these kinds of decisions, but I also like having the raw MIDI for the same reason that I always take a DI of guitars and bass. You never know when you might need it. Ideally, you never would, but such is life. -CPU is never the reason I don't leave things MIDI until mixdown. It might be a side benefit, but I can open a 96khz project and have my drums and strings and pianos and Hammond all playing MIDI "live"...maybe ultimately that might limit my mixdown DSP, I guess-I wouldn't know, as I start over once I have the arrangement ideas fleshed out typically. Reducing the CPU load is always helpful in these situations, and I too run things at 96k, so every little bit can help. I'm not sure if I quite follow everything you're saying here though? -You can't archive MIDI. Not really. So...you HAVE to render VIs to audio eventually or those tracks are lost to time.
The thing is--you can spend a few minutes here and there as things are created...OR...you set up a block of time to render and double check all the tracks after you're done with the mix. I'm bringing this up just to say there's a lot of reasons that add up to my "don't use MIDI unless you have to--and when you have to, render it to audio as soon as you can" attitude. Much like why I'll never use 44.1 to record shit--any given argument can be "that's not significant"--which can both be true on a SPECIFIC advantage of 96khz, and ignores that 20 "that's not that significant" adds up to being pretty significant. I agree with all of that. Well, I think I may be looking for a Mac Mini pretty soon. I guess I just need to decide if I want the current gen M1 or want to wait to see how much more costly the new Minis will be when they come out in a few months. I'm leaning pretty hard to just getting one of the current 512gb 16gb ram Minis and giving it a go. You can get one for around $1k, which is in a price range I'm willing to do. By the way, do you have any opinions on the MioXM or MIDI over network, in general? How about compared to MIDI over USB?
|
|
|
Post by popmann on Feb 2, 2022 21:23:36 GMT -6
I've never used midi over network.
I didn't know LUNA had a secondary (larger playback) buffer.
|
|