Render Farm 44

DISCLAIMER: This is a continuing series detailing the painful story of a DIY render farm build.  It is terribly technical and somewhat frustrating.  Those who are unprepared for such “entertainment” are advised to ignore these posts.

I can’t leave well enough alone, it seems.

Not being able to remotely run jobs or even check on the render queue kind of sucks.  I should at least get these machines to email me when things happen – that’s what a 21st-century setup does.  Add to that the fact that I live in L.A. and teach in Fullerton – which is 40 miles away.  I should be able to do some of this remotely.

My current solution is simply not going to do that for me.  Dr. Queue on that little PPC Mac Mini simply will not perform certain tasks.  Dr. Queue has a couple of extensions, namely “Dr. Keewee,” a Python front-end that offers a web interface for checking the status (but maybe not remote management and submission?), and “DrQueueOnRails,” a Ruby-On-Rails implementation which DOES offer remote management.

Previously I have struck out with both of these, as they require dependencies for which there do not seem to be PPC versions.  The old PPC Mac is simply too old.  OSX 10.4 is too many iterations back – something like 7 generations of operating system…

I tried Dr. Keewee first, as the Dr. Queue distribution seemed to include a Universal Binary of the Dr. Keewee server.  And, as you may already know, “Universal Binary” is the old term for Mac code that carries both PPC and Intel instruction sets – also called a “fat” binary back in the day.  But when fired up, this binary repeatedly asked for a library called “libdrqueue.”  After all my futzing around and building things, you would think this library – which, by its name, sounds like something I would have had to build before being able to use Dr. Queue at all – should already be there.  Scouring this very log turned up no clues.
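For anyone chasing the same kind of “missing library” complaint, the usual detective work from the Terminal looks something like this – the binary name below is a stand-in for whatever the Dr. Keewee server executable is actually called:

    otool -L ./drkeewee-server | grep -i drqueue    # which dylibs does the binary link against? (name is a stand-in)
    find / -name "libdrqueue*" -print 2>/dev/null   # is the library anywhere on the disk at all?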

No information online, and only a “.h” file in the source code.  One of the versions of Dr. Queue online did have a “libdrqueue.a” file – an “ar” format static library archive?  Try unpacking one of those on a Mac – it’s not pretty.  Once I did, it seemed as though I had the pieces of libdrqueue, so I tried building from it.

failed

Oh, Fail Stamp!  How I have been away from you!  It’s like seeing an old friend.  Why the fail?  Who knows.  The ar archive does not seem to be the proper way to do this.  It bombed out on numerous missing dependencies.
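For the record, a “.a” file is a static library built with the ar tool, and it holds compiled object files rather than source – which is probably why rebuilding from it went nowhere.  Unpacking one looks roughly like this, assuming the archive is sitting in the current directory:

    ar -t libdrqueue.a              # list the object files inside the archive
    mkdir libdrqueue-objs && cd libdrqueue-objs
    ar -x ../libdrqueue.a           # extract the .o files – compiled objects, not source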

Well, how about just building everything from scratch?  Why did I stop doing this before, anyway?

Oh yeah, there was an ImageMagick dependency.  So I tried to install that via Homebrew (easier than MacPorts), but Homebrew is too new for OSX 10.4.  There is such a thing as “Tigerbrew,” made especially for my situation, it seemed.  Once that was downloaded and installed, it told me what I already knew: Tiger is too old.  MacPorts also failed to find a compatible version.
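For reference, the install attempts themselves are one-liners – it’s the package managers that refuse to play on 10.4.  Something like the following, with the caveat that formula and port names may have drifted over the years:

    brew install imagemagick         # Homebrew (or Tigerbrew on 10.4, in theory)
    sudo port install ImageMagick    # the MacPorts equivalent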

failed

So, how about that DrQueueOnRails?  It seems much better anyway, since you can manage jobs online.  I ran the .rb file that would tell me what dependencies I needed.  It stopped on LDAP, which was not on the machine.  I thought Macs came with LDAP, and it certainly seemed like there was a preference pane for LDAP in System Preferences, but OK…

So, off to download a version of LDAP.  I loaded the package, and tried to build it…
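The build itself is the standard configure-and-make dance – roughly the following, assuming an OpenLDAP source tarball (the version number is a placeholder):

    tar -xzf openldap-2.4.xx.tgz     # placeholder version
    cd openldap-2.4.xx
    ./configure                      # this is where the dependency checks happen
    make depend && make
    sudo make install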

But of course it stopped when it asked for BerkeleyDB… Now THIS sounds familiar… remembering my db48 experiences, I just put the whole thing down…

failed

Well now, the answer is probably a lot simpler than it sounds.  I have the Octocore machine I retrofitted.  Surely we could spare a core from that beast to run this scheduling software and serve as the NAS for the farm.  If I set that machine up with an Intel version of Dr. Queue, then I could run it as the master.  There’s no reason I need a PDC at all!  Not on a farm this size.

Plus there’s Pipeline, which is a free render manager, and there are the render managers you buy, like Qube and Deadline.  It might be time to stretch out a bit here if I want the extra functions.

THUS

I wheeled over the Octocore.  Next, I made a new software image based on OSX 10.11.3, including AEFX and Lux.  It’s still After Effects 2014, though, because of the well-documented issues with 2015 and multiple cores – which is to say they broke multiprocessor support in order to rush out the update.  The new Lux is on there, though, and version 1.5 promises more speed and better results.

This new machine is christened “JHVH-1” for obvious reasons.

In order not to get into TOO much trouble, I set up a new, separate little network on a 4-port hub.  If I can get one master and one slave talking, then deploying it to the rest of the farm is possible.  Until then I do not have to break the current farm.

I did jump right in on Pipeline, trying to get it to run an old AEFX project.  It set up easily, which is to say the Clients talked to the Manager immediately.  I could see that Pipeline was aware of the 10.11.3 client on the new small network.  Running a job was not so easy, though.

A note about pathnames.  If the AEFX job I’m running is on the same internal volume that the application and the manager are on, then the pathname for the Octocore and the pathname for the client machine are always going to be different.  For example, a .aep file at JHVH-1/Desktop is seen as /Volumes/JHVH-1/Desktop by an attached client.  Whereas an attached hard drive called /Volumes/Lucifer/After Effects will be the same to JHVH-1 as it is to the client.  Maybe someone much cooler than I am could figure out mount points and solve this, but it’s just as likely I need to put things on an external anyway.
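If I ever do tackle this in software rather than by shuffling files onto an external, the workaround would be a small path-translation step at submission time.  Purely a sketch – the share name matches the example above, and the user folder is made up:

    # Hypothetical helper: map a path on the master's boot drive to the path
    # a client sees once that volume is mounted over the network.
    translate_path() {
        local p="$1"
        case "$p" in
            /Volumes/*) echo "$p" ;;                 # external drives mount under the same name everywhere
            *)          echo "/Volumes/JHVH-1$p" ;;  # boot-volume paths get the share name prepended
        esac
    }

    translate_path "/Users/render/Desktop/job.aep"   # -> /Volumes/JHVH-1/Users/render/Desktop/job.aep
    translate_path "/Volumes/Lucifer/After Effects"  # -> unchanged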

After I got pathnames worked out, Pipeline still said that the job was being sent to the remote machine for rendering.

failed

And yet the remote machine was never working on anything.  Though it said it was, there was no evidence that an aerender process was running and certainly no evidence of output.
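Checking for that, for what it’s worth, just means watching the client’s process list for aerender, After Effects’ command-line renderer:

    ps aux | grep -i "[a]erender"    # the [a] keeps grep from matching its own command line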

Well, I did just kind of slap it together to see what would happen.  Maybe I should have spent the time to set it up properly?  So, I thought, maybe it’s best to get back to Dr. Queue, the one I know, and get that Ruby-On-Rails frontend up.

Of course that means setting up a whole new ecosystem on the new network, with accounts on machines, sharing setup, Dr. Queue variables and pathnames worked out… a little trouble, but not that big a deal.

But the binaries did not run.  They crapped out.  I set the environment variables in a shell script that launched the binary – as I had done on the PPC machine – and tried that.
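The wrapper-script idea is simple enough: set the DrQueue environment, then hand off to the binary.  A sketch, with placeholder paths and hostnames, and with the caveat that the variable names should be double-checked against the distribution’s docs:

    #!/bin/bash
    # Launch wrapper: set up the DrQueue environment, then run the master binary.
    export DRQUEUE_ROOT="/Volumes/Lucifer/drqueue"   # shared DrQueue directory (placeholder path)
    export DRQUEUE_MASTER="jhvh-1.local"             # hostname of the master machine (placeholder)
    exec "$DRQUEUE_ROOT/bin/master"                  # binary name is a stand-in for whatever shipped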

failed

It bombed.  Furthermore, drqman, the X11 implementation of the master program, crapped out because GTK was not installed on the machine, and it was looking for it.  GTK 2.0, apparently – an old version they don’t maintain anymore.  You can find the dylib installer, but it won’t run: it dates from the 10.5 era and does not recognize 10.11.

OK, time to install GTK another way, because they have a project page, and I’m sure the current version will be fine working with the current version of XQuartz, which is what this is all about anyway.  But GTK will not install until you first get Xcode and the Xcode command-line tools on the machine.  Some time later, I started the GTK build script, which promptly ran but did not seem to build anything.  The script did warn me to add a pathname to my bash profile, though.

When I did exactly that, adding the line to my empty .bash_profile file, I began to get really unusual results out of the shell – it warned me that it did not understand any command, even “sudo.”
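In hindsight, that is the classic PATH mistake: if the line you add replaces the search path instead of extending it, the shell can no longer find anything in /bin or /usr/bin – which is exactly the “even sudo is unknown” symptom.  The cure, with an illustrative directory rather than whatever the GTK script actually asked for:

    # Wrong: this REPLACES the search path, so /usr/bin, /bin, etc. vanish
    export PATH=$HOME/.local/bin

    # Right: prepend the new directory while keeping the existing PATH
    export PATH="$HOME/.local/bin:$PATH"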

failed

I realized I was in over my head again…

Well, argh.  So I should wipe all the Dr. Queue stuff clean and start from the very beginning, obviously.  The binaries right out of the distribution are clearly based on some very old implementations from 2007, or whenever Dr. Queue was last worked on.  If I can get that working, I’ll graduate to that Rails business.  The first priority should be to duplicate functionality on the new image, with possible future-proofing for the next several years or until those 2006 Macs die.

Or perhaps I’ll get Pipeline working and blow this whole thing off?

2 comments on “Render Farm 44”

  1. All beyond my ken, but … OSX 10.5 was the last PPC version and no doubt what your mini is running. There are, or used to be, a few PPC Linux distros — might they be able to fill the bill here?

  2. The problem with the mini is the same with ANY PPC distro – the underlying tech, GTK in particular, has all been updated as well. And since Linux distros all use live repositories and download from there, you would have to have saved special PPC versions of all the relevant technology in compatible versions. So running this on, let’s say, Debian or CentOS (which I’ve looked into!) is precisely the SAME hassle, just on an OS I understand even LESS…
