Here’s my experience:
Let’s get gear crap out of the way first, to contextualize what I’m writing.
Gear crap: I’m running about 1.5 generations behind the latest-and-greatest reasonable hardware: 8 GB RAM, Intel X9650, Windows XP 64, Nvidia GeForce GTX 280. A solid-state system drive with a crazy amount of page file space, and I always render to a separate external hard drive, though sometimes I read source movies from the same drive I’m rendering to.
I’ve rendered out single movies that are over 125 GB at 4000 x 2000, as uncompressed TIFF QuickTimes.
Much of the time I’ve had to render stuff in chunks (say the first 300 frames, then the next 300, and so forth) and reassemble them afterwards. The number of frames I can render without a crash corresponds directly to the complexity of my networks, or the number of huge movie files being accessed. Usually I get around 500-1500 frames per uncompressed TIFF QT created.
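(For anyone on a newer build with Python scripting: something like this in an Execute DAT could automate the chunking. It’s only a sketch, not my actual setup; the Movie File Out TOP name, the batch size, and the file naming are all placeholders.)

```python
# Rough sketch: batch-record fixed-size chunks with a Movie File Out TOP.
# 'moviefileout1', BATCH, and the file pattern are placeholders.
BATCH = 300
mov = op('moviefileout1')

def onFrameStart(frame):
    if frame % BATCH == 1:
        # give each chunk its own file so they can be reassembled later
        mov.par.file = 'render/chunk_{:06d}.mov'.format(frame)
        mov.par.record = True
    return

def onFrameEnd(frame):
    if frame % BATCH == 0:
        mov.par.record = False  # close out this chunk before the next begins
    return
```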
In fact I just rendered over seven minutes of “single-pass final-product” very complex mind-melting animation (although the network is fairly light, and all TOPs) at 4000 x 4040 (uncompressed TIFF QT) in a single night. I had to do it in batches of 900 frames to avoid a freeze of Touch or a Windows crash. I think this might be a world record. What’s even crazier is that it runs at 39 FPS in realtime at 3000 x 3000.
I’ve also rendered out ungodly amounts of particles (as lines) with textured geo at 4K x 4K, as a single pass for the final product, with lights, multiple cameras, and complex TOP and CHOP networks. This also runs in realtime at 3K.
One thing I’ve noticed is that when working at higher rez, Cache TOPs can malfunction and even cause crashes. The more Cache TOPs, the more likely they’ll malfunction. I sometimes have to break my networks up into parts and do multiple passes, but this depends on the complexity of my networks and the number of Cache TOPs and movie TOPs.
Also, there is a resolution cutoff of (I think) 2048 x 2048 for any Time Machine TOPs.
One thing that has helped me: instead of rendering at, say, 4000 x 4000, I render at 4000 x 4040 and use the extra 40 pixels for text/scene info. A $F in a Text TOP, plus certain values tracked in CHOPs with helpful channel names, which I convert to a DAT and import into a Text TOP. Sometimes I even composite in an OP Viewer TOP to show me an operator’s data from an Info CHOP.
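(In Python-era builds the same burn-in can be driven by expressions, with absTime.frame playing the role of $F. A quick sketch; the operator and channel names are placeholders:)

```python
# Sketch: drive the burn-in Text TOP from the frame counter plus a tracked channel.
# 'text1', 'values1', and the 'amp' channel are placeholders.
info = op('text1')
info.par.text.expr = (
    "'frame {}  amp {:.3f}'.format(absTime.frame, op('values1')['amp'].eval())"
)
info.par.text.mode = ParMode.EXPRESSION  # re-evaluate every frame
```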
One limitation I’ve noticed is that anything over 4095 x 4095 will not render properly; it comes out as a smeary, squiggly digital mess. Perhaps this is a limit of QuickTime, or of my graphics card, I’m not sure. That said, I’ve had Touch networks cooking at 7000 x 7000, but when it’s time to make a QT, I crop out the best part so it’s below 4095 x 4095. If I rendered the 7000 x 7000 down at 1000 x 1000, I might get 3000-30000 frames of JPEG QT before a freeze-up… but again, it depends on the complexity of the network.
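(The crop itself is just a Crop TOP in front of the Movie Out. Scripted, it would look something like this; ‘crop1’ is a placeholder and the values are in the Crop TOP’s default fraction units:)

```python
# Sketch: pull a window under 4095 px out of a 7000 x 7000 composite.
# Fractions of the input: 0.77 - 0.20 = 0.57, and 0.57 * 7000 = 3990 px.
c = op('crop1')
c.par.cropleft = 0.20
c.par.cropright = 0.77
c.par.cropbottom = 0.20
c.par.croptop = 0.77
```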
Another limitation (and I’m talking NTSC video rez or lower here) is trying to record animation and audio out of Touch at the same time. The Movie Out TOP tends to bring performance to a screeching halt. My way around this has been to record the live audio and the animation/performance into multiple Record CHOPs, then save the audio out and render the animation out afterwards. The only problem with this is that if you’re using a camera input (or any “seeded recursion” that makes every performance quite different), this approach doesn’t work.
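(Scripted in a Python build, the round trip looks roughly like this. Operator names, the menu tokens, and the file path are placeholders; CHOP.save() is what writes the captured channels to disk.)

```python
# Sketch: arm the Record CHOPs for the performance, then save the audio.
recs = [op('record_audio'), op('record_anim')]

def arm(on):
    for r in recs:
        r.par.record = 'on' if on else 'off'  # Record CHOP's record menu (assumed tokens)

arm(True)
# ...perform...
arm(False)

# write the captured audio channels out; the animation gets rendered separately
op('record_audio').save('performance_audio.aiff')
```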
hope that helps
Jim