Fun with Fusion .. or at least reading about it, or at least organizing the reading about it
Posted 2021-10-12 12:28 PDT | Tags: hardware fusion
I've been on a nuclear physics jag for the last few years and neglecting my other projects.
Specifically, the potential for using emergent fusion advances as the first phase of a hybrid fusion/fission nuclear reactor has been on my mind a lot.
Fusion is a hyped-up technology these days, and there is a lot of capital and brainpower tied up in the race to make fusion practical.  Along the way, though, everyone seems to have overlooked how fusion's side-products can be used to drive nuclear chemistry and novel fission, even while fusion itself remains short of break-even.
Well, not everyone.  Some clever Russians seem to have thought about it, and are pursuing the use of inertial confinement fusion's neutron side-product to drive the thorium fission fuel cycle: (Russian Scientists Reveal Plans for Fusion-Fission Reactor)
While they're doing that, I'm noodling on a practical means of using multi-wire z-pinch fusion to drive lithium-6 fission.  It's a similar idea.  A z-pinch can implode lithium-deuteride wires with the magnetic pinch force of their own current, fusing deuterium to produce high-energy neutrons, which then split lithium-6, a fuel which yields net energy but cannot sustain a chain reaction on its own.  Only about half of all lithium-6 fissions produce a neutron, which means sustained fission relies on an external neutron source.
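For concreteness, the textbook reactions and Q-values underneath that scheme (standard nuclear-data numbers, not anything specific to this design) look like:

```latex
\begin{align*}
\mathrm{D} + \mathrm{D} &\;\rightarrow\; {}^{3}\mathrm{He} + n\,(2.45\,\mathrm{MeV})
  & Q &= 3.27\,\mathrm{MeV} \quad \text{(neutron branch, roughly half of D-D fusions)}\\
\mathrm{D} + \mathrm{D} &\;\rightarrow\; \mathrm{T} + p
  & Q &= 4.03\,\mathrm{MeV} \quad \text{(proton branch)}\\
{}^{6}\mathrm{Li} + n &\;\rightarrow\; \mathrm{T} + {}^{4}\mathrm{He}
  & Q &= 4.78\,\mathrm{MeV} \quad \text{(dominant energy-releasing channel)}
\end{align*}
```

Each neutron the blanket captures returns 4.78 MeV on top of the fusion energy that produced it, which is where the hoped-for net gain comes from.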
Scientists have been imploding lithium deuteride wires to fuse D-D since the 1990s, but a lot more is known now about this kind of D-D fusion, and the hardware for making energy-efficient power supplies has advanced by leaps and bounds.  It seems plausible that with sufficiently careful engineering, lithium-6 fission could produce more energy than it takes to power the D-D fusion driving the reaction.
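To make that plausibility argument concrete, here is a back-of-envelope energy balance.  The Q-values are textbook numbers, but every machine parameter (shot energy, neutron yield, capture fraction) is a made-up illustrative guess, not a measurement of any real z-pinch:

```python
# Back-of-envelope energy balance for a Li-6 blanket driven by D-D z-pinch
# neutrons.  Only the reaction Q-values below are real data; the machine
# parameters are illustrative assumptions.

MEV_TO_J = 1.602e-13  # joules per MeV

# Textbook Q-values (MeV)
Q_DD_N = 3.27   # D + D -> He-3 + n   (neutron branch, ~50% of D-D fusions)
Q_DD_P = 4.03   # D + D -> T + p      (proton branch)
Q_LI6  = 4.78   # Li-6 + n -> T + He-4 (exothermic neutron capture)

# Assumed machine parameters (pure guesses for illustration)
shot_energy_j     = 1.0e6    # capacitor-bank energy per shot
neutrons_per_shot = 1.0e16   # D-D neutron yield per shot
capture_fraction  = 0.8      # fraction of neutrons absorbed in the blanket

# Every neutron implies one neutron-branch fusion, and (with ~50/50
# branching) about one proton-branch fusion alongside it.
fusion_j  = neutrons_per_shot * (Q_DD_N + Q_DD_P) * MEV_TO_J
blanket_j = neutrons_per_shot * capture_fraction * Q_LI6 * MEV_TO_J

gain = (fusion_j + blanket_j) / shot_energy_j
print(f"thermal output per shot: {fusion_j + blanket_j:.2e} J, gain = {gain:.3f}")
```

With those guessed numbers the gain comes out around 2%, which is a useful sanity check on the engineering problem: neutron yield per joule of bank energy and blanket capture efficiency are the levers that matter.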
I have a design in mind which has characteristics I like, and I split my spare time between tinkering with practical components I can learn from and reading up on the scientific literature.
My usual approach to the literature is to hoard publications on my laptop, so I can re-read and revisit them often.  Frequently I'll read something and not really understand its importance, and then hit a wall while fiddling with math, and remember that there was this paper which dealt with exactly this problem.  The next reading has more significance to me because I've run into that wall and need a way around it.
In recent years, though, I've been reading papers on my tablet, which is not conducive to that practice.  I will download a pdf and read it, but then it seems to disappear from storage after a while, and using the graphical file browser to see what papers I do have available is awkward and annoying.
Because of this I'm changing my practice.  I will go on "hoarding sessions" where I find a bunch of likely-seeming publications relevant to a topic of interest, download them to my laptop, and then read them later on my tablet.  This doesn't help with the papers I've already read and lost (and partly forgotten), but at least this way I will retain the material going forward.
The first fruits of this approach are located here -- (Z-Pinch relevant papers: Wire Explosions, Power Supplies) -- sync'd from my laptop and in no particular order.  More will follow in time, and I will get around to populating its parent directory eventually too.

Contemplating Clusters
Posted 2021-03-30 13:41 PDT | Tags: software hardware
As far as I can tell, there's not a lot of good documentation out there for making a computer cluster out of open-source software.
Back in 2003, I suggested to my boss that we publish a How-To document, since we were supposed to be all open-sourcey and stuff, but he didn't think it was a good idea.  He said everyone was doing what we were doing, and there was no point in documenting what would soon become common knowledge.
That sounded totally reasonable at the time, and the broad strokes had already been written out in the Beowulf Cluster How-To, but the industry has taken a quirky turn since then.  The FAANG companies have hoovered up nearly everyone with distributed systems experience, and most companies which would have built their own clusters fifteen years ago are renting VMs in "The Cloud" instead.
"The Cloud" is provided by those same FAANG companies. who run customers' work loads on their big-ass in-house clusters and jealously guard their operational art.  This bodes ill for open docmentation.
On one hand the economies of scale are hard to argue against, but on the other hand there are still niche uses for clustering up a big pile of hardware, and guides for doing it well haven't improved much in the last twenty years.  If anything they have grown dated or fallen off the web entirely.  I had to dig this practical guide out of the Wayback Machine:
Someone in r/HPC asked for advice on making a thirty-node cluster, and after looking at the current How-To docs, I pointed them at those and offered some advice on power and heat management, since that aspect of cluster operations was totally absent from all of the documents I could find.
It's been eating at me, because there's a lot more also absent from those documents -- scaling factors, effective use of ssh multiplexing, job control frameworks (like Gearman), tiered master nodes, monitoring .. it makes me think that either there's documentation I haven't found or more needs to be written.
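The ssh multiplexing piece, at least, is easy to demonstrate.  A minimal `~/.ssh/config` stanza (the host pattern is made up for illustration) looks like:

```
Host node*.cluster
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```

The first connection to a host becomes the master; every subsequent ssh/scp/rsync to that host rides the existing TCP connection instead of paying the handshake cost again, which adds up fast when a master node is fanning commands out to dozens of workers.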
Rather than leaping right in and brain-dumping to this blog, I've joined #clusterlabs-dev and #linux-cluster on Freenode, to get a feel for how the community thinks about these things nowadays.  I haven't built a nontrivial cluster in nearly ten years, and some of my skills are bound to be a bit stale and perhaps irrelevant.
There are a few more modern guides, but they too are narrow in scope, like
The #clusterlabs-dev channel folks have thus far pointed me at and!_and_why_do_I_care%3F which is nice and modern, but also quite narrow.
When I asked them about power/heat management best practices, they responded with a resounding "meh!" so perhaps I'll write about that first.  Doubtless my FAANG friends will rip it to shreds, but that will just make for a better second draft.

Reminiscing about First Pacific Networks
Posted 2020-09-08 19:46 PDT | Tags: hardware tales
In 1996 I was out of UCSC, living in San Jose, and needing a job.  My buddy Tash landed me a gig at First Pacific Networks as a junior system administrator, and that paid rent while I figured out how to get a programming job and kickstart my career.
FPN built these nifty little cigar-box-sized units which plugged into a broadband network, hijacked part of its band, and used it to stream voice, video, and data (ethernet).  They also made headboards which plugged into a central office and managed the network.  It turned out that in some countries (like Russia) there were extremely good broadband networks for squirting video around, but their telephone and data infrastructure was for crap.  FPN's product was an elegant solution that leveraged the existing infrastructure to provide the rest.
Well, at least it was "elegant" on paper.
The box had an RJ11 socket for plugging in a landline telephone, and that worked quite well.  It also had an AUI ethernet port, which worked rather less well.

The AUI port

My clearest memories of FPN were of those AUI ports.  Mostly that they were horribly, horribly unreliable, and hard to fasten or unfasten without bending the little clasps that flanked the socket, which of course made them even less secure.  They were always falling out, and our device drivers did not handle random disconnects well at all.
Part of the problem was that the AUI cable itself was really damn thick and stiff, so just bending it into a U-shape from the back of the box to the back of the PC put quite a bit of flexural tension on the whole rig.  Bumping the desk would often be enough to send the whole thing sproinging loose.
The temptation was to just glue the cursed things in place, but of course that was a no-no.  Such is the life of a junior sysadmin.  That, and replacing dead monitors, fixing power supplies, and rescuing borderline-tech-illiterate employees who deleted COMMAND.COM or CONFIG.SYS to "save room" on their PCs.
Fortunately I didn't have to suffer that long.  I automated away some of my more annoying tasks when I could by writing software, and after a few months of that I was able to transfer to FPN's engineering team as a junior software engineer.
It was the break I needed.  After about half a year of writing network protocol extensions and simulations to demonstrate the scalability of our network (or lack thereof), I accepted a job at Cygnus Solutions as a GNU toolchain engineer, fixing bugs in GCC, GAS, GDB and the like.  My career had finally begun.