DISCLAIMER: This is a continuing series detailing the painful story of a DIY render farm build. It is terribly technical and
somewhat frustrating. Those who are unprepared for such “entertainment” are advised to ignore these posts.
Can’t tell if JHVH-1 actually sees itself. I mean, it SHOULD. The master should be able to tell the slave is running. But since the command-line tools for Dr. Queue are so taciturn, it’s hard to tell. The query function returns:
That could mean anything. It is simultaneously completely reasonable AND completely wrong. Shouldn’t it say something more? Or maybe this is right, since all it’s connected to is itself…
So I connected one slave, Wormwood, to the network to see what is up. It was not appearing on the network at startup, prompting me to actually connect a monitor to it. I’ve been running all these machines headless and through VNC for so long I rarely hook anything up besides the ethernet cable.
A simple reboot did the trick, and I installed a fresh copy of Dr. Queue. I linked the startup script. This short shell script does the following:
- sets environment variables like DR_QUEUE_TEMP to /Volumes/Lucifer/drqueue/temp and DR_QUEUE_LOGS to /Volumes/Lucifer/drqueue/logs and
- starts up /Applications/drqueue/slave.
I eschewed the usual /usr/bin location for the application in favor of the Mac-standard /Applications.
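The steps above can be sketched as a short shell script. This is my reconstruction, not the actual script from the distribution: the paths and variable names are the ones described above, and DrQueue’s real expected names may differ.

```shell
#!/bin/sh
# Hedged sketch of the slave startup script described above.
# Variable names and paths are as described in this post; the
# names DrQueue actually reads may differ.
export DR_QUEUE_TEMP=/Volumes/Lucifer/drqueue/temp
export DR_QUEUE_LOGS=/Volumes/Lucifer/drqueue/logs

# Launch the slave from the Mac-standard /Applications location,
# but only if the binary is actually there (the Lucifer share may
# not have mounted yet on a fresh boot).
SLAVE=/Applications/drqueue/slave
if [ -x "$SLAVE" ]; then
    "$SLAVE" &
fi
```

The guard around the launch is just defensive scripting: on a cold boot the network share holding the temp and log directories can come up after the login items run.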
I then linked Lucifer as a share that loads on startup, and ran the slave. Success! It seems now that the command-line query returns
ID:0, localhost
ID:1, Wormwood
So I have my answer now. Master and slave are working properly and all variables and shares are set properly. Now to get that front-end working!
Still no good. It wants libpng12.dylib. Looking back over this build log I could find no clues. An earlier post basically described how everything just worked at some point, and I had no idea why. That post indicated that there seemed to be some kind of file called “drqman.Darwin.fat,” and I certainly do NOT see any such file in this distribution… The fat binaries are the ones that have worked before, and are working now. So maybe I just need to comb through the files on the old PDC and see if I can find this universal binary?
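One way to confirm exactly which dylibs drqman is asking for is `otool -L`, which lists the dynamic libraries a Mach-O binary links against (macOS only). A small sketch, assuming drqman lives under /Applications/drqueue, which is my guess based on where I put the slave:

```shell
# Sketch: list the dynamic libraries a Mach-O binary links against,
# to confirm which dylib (e.g. libpng12.dylib) it actually wants.
# otool is macOS-only; the drqman path below is an assumption.
check_dylibs() {
    if command -v otool >/dev/null 2>&1 && [ -x "$1" ]; then
        otool -L "$1"
    else
        echo "cannot inspect $1 here"
    fi
}

check_dylibs /Applications/drqueue/drqman
```

If libpng12.dylib shows up in that list with a path that doesn’t exist on the machine, that’s the missing link.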
The Python front-end is also indeterminate as to how remote one can be with it, although the existence of a file called “server” indicates it might be just the thing.
This also fails, as it is looking for “libdrqueue.”
OK, so I tried to build Dr. Keewee from scratch. I ran the setup.py file. It bombed out, looking for “swig,” which was not installed. OK,
sudo port install swig
That seems to go fine, so I go back to the setup.py file again. Which fails, looking for both “python.swig” and “typemaps.i”. And everything is still dependent on the elusive “libdrqueue…” I think I did all this before, didn’t I? Yes, and recently too.
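In hindsight, a tiny preflight check would have saved a round trip through setup.py. This is my own sketch, not anything shipped with DrQueue:

```shell
# My own preflight sketch (not part of the DrQueue distribution):
# confirm swig is on the PATH before setup.py bombs out mid-build.
if command -v swig >/dev/null 2>&1; then
    SWIG_STATUS=present
else
    SWIG_STATUS=missing    # fix with: sudo port install swig
fi
echo "swig: $SWIG_STATUS"
```

The same pattern extends to any other tool a build shells out to: check for it up front rather than letting the build die halfway through.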
DR. QUEUE ON RAILS
Oh, why not switch gears and go with Ruby on Rails again! That was never too much trouble, was it? I had abandoned the work after drqman started to work, so I never got too far with it.
The list of dependencies is long. Fortunately there is an .rb script to check all that for you. When I ran mine I got pretty much nothing except Ruby and RubyGems installed. So I quickly ran the Ruby bindings script provided in the Dr. Queue distribution. Which failed, claiming it needed quite a few dependencies, including RMagick, etc. I got this far last time, even installing ImageMagick in preparation for installing RMagick. But that’s where I stopped.
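A dependency check along those lines can be done from the shell with `gem list -i`, which reports whether a named gem is installed. A sketch, using the gems mentioned above; whether this matches what the distribution’s own .rb checker does is an assumption:

```shell
# Hedged sketch of a RubyGems preflight for the Rails front-end.
# Gem names are the ones mentioned above (rmagick needs ImageMagick
# installed first); the distribution's actual checker script may
# test more than this.
check_gem() {
    if command -v gem >/dev/null 2>&1 && gem list -i "$1" >/dev/null 2>&1; then
        echo "$1: installed"
    else
        echo "$1: missing"
    fi
}

for g in rails rmagick; do
    check_gem "$g"
done
```

Running this before the bindings script at least tells you up front which installs are still ahead of you.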
And that’s where I stopped this time, too. I simply ran out of time. I’m now in such a time warp that Pipeline is looking better and better. But I feel so empowered having the Dr. Queue master and slave working and happy that I feel like I should push on with this. If Dr. Queue is good enough for ILM, it’s good enough for Naked Rabbit and my pile of outdated old machines…