Now THAT is a rant

From deadmemes.net:

So...the best way to fix the pernicious issue of displaying /etc/motd to the end user (read: cat) was 1) to make a PAM module responsible for it, 2) get rid of the "old way" of updating it (cron job), and last but not least, 3) add a completely undocumented behavior (if /etc/motd is a symlink, it is dynamic) that contradicts the man page for said module.

I'm going back to blaming Ubuntu. I also need to step out of the room and have an aneurysm now.
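For posterity, a quick way to see the behaviour in question (a sketch; paths assume a stock Ubuntu install of this era):

# pam_motd treats /etc/motd as dynamic if it's a symlink...
ls -l /etc/motd
# ...on the Ubuntu installs I've seen, it points at /var/run/motd, which is
# rebuilt at login from the scripts in /etc/update-motd.d/
ls /etc/update-motd.d/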

Tags: rant ubuntu

Grub2, Unity, Ubuntu

  • With Grub 2, you can change the default menu entry without changing the order by editing /boot/grub/grub.cfg. Edit the "set default" line to be one of the titles in /boot/grub/grub.cfg.

  • But in Ubuntu/Debian, you want to edit /etc/default/grub and change the appropriate line there, then run "update-grub" (see the sketch at the end of this list).

  • In Unity, there is no easy, risk-free way of changing Unity settings. You can install Compiz Settings Manager but that will make baby Jesus cry:

I am an experienced Linux user, I've contributed to the kernel and work on the Canonical OEM team; I only mention these facts to show my context, which is -- the other day, I did a fresh install of 11.10 on my laptop, and wanted to customize something (turning on focus-follows-mouse). I poked around in gnome-control-center for about 30 minutes before giving up and discovering that the only way to do this was using ccsm.

After installing ccsm, I configured ffm, and then -- accidentally! -- my mouse cursor passed over the preferences button and the touchpad on my laptop registered a click.

Boom!

Unity session dead.

Holy crap, what a crock.
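To make the Grub bullets above concrete, the Ubuntu/Debian route looks roughly like this (a sketch; the menu title is only an example):

# /etc/default/grub -- the default entry can be an index or a menu title
GRUB_DEFAULT="Ubuntu, with Linux 2.6.38-8-generic"

# then regenerate /boot/grub/grub.cfg from it:
sudo update-grub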

Tags: linux grub ubuntu

Oh the fun

At work we have a Sun X4540 (a Thumper). It acts as an NFS server for our network, serving home directories and such to local Linux servers. It does not do any other server duty.

I recently added a Perl script (a slightly modified version of the one found here: http://www.thedeepsky.com/blog/?p=54) to root's crontab to take and rotate ZFS snapshots for various filesystems. The script was run every hour. It can be configured to retain a certain number of hourly, daily, weekly or monthly snapshots named hourly.0, hourly.1, etc.

If the script decides that a snapshot should be taken, it first takes one named hourly.TEMP. The existing snapshots are then renamed: hourly.5 becomes hourly.6, hourly.4 becomes hourly.5, and so on. Eventually, it renames hourly.0 to hourly.1, then hourly.TEMP to hourly.0.
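A minimal sketch of that rotation logic in shell (the real script is Perl and does more checking; homepool/foo and the 24-snapshot limit are just the settings I used):

zfs snapshot homepool/foo@hourly.TEMP
zfs destroy homepool/foo@hourly.23                # drop the oldest, if it exists
for i in $(seq 22 -1 0); do                       # shuffle the rest up by one
    zfs rename homepool/foo@hourly.$i homepool/foo@hourly.$((i+1))
done
zfs rename homepool/foo@hourly.TEMP homepool/foo@hourly.0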

I had run the script for a few days for one ZFS filesystem, then added another for a few days. Since everything was working, I then added approximately 15 more filesystems. As with the previous filesystems, each was configured to keep 24 hourly snapshots.

4 hours after adding these new entries, the entire system became unresponsive while this script was running. These filesystems are shared out via NFS, and the Linux servers they were mounted on became similarly unresponsive. Because of a problem with our monitoring system, I did not respond to the problem for four hours. I SSH'd to Thumper's ILOM and ran "start /HOST/console"; this command worked, but I did not get a login prompt. The quickest way to get things working again seemed to be to power cycle Thumper, so I did so. Thumper came up fine, but I did not get a core dump.

Looking at the snapshots afterward, it appears to have made it through seven filesystems, and choked on the eighth. The snapshots for that filesystem look like this:

homepool/foo@hourly.3         0      -  11.9G  -
homepool/foo@hourly.1         0      -  11.9G  -
homepool/foo@hourly.0         0      -  11.9G  -
homepool/foo@hourly.TEMP      0      -  11.9G  -

Thus, the sequence appears to be:

  • take the snapshot hourly.TEMP
  • rename hourly.2 to hourly.3
  • choke renaming hourly.1 to hourly.2

I am assuming that it was the zfs rename that caused the problem; it could have been something else, but we've had very little trouble from this server and our loads are pretty modest.

The weird thing is that we've been taking daily snapshots for some time (named @YYYY-MM-DD). Deletion of snapshots has never caused a problem before. We have not done many renames at all, so it's possible we're tripping over a new (to us) bug here. And the filesystem in question (homepool/foo) has had no activity during that time (it actually belongs to a user who's returning to us RSN).

I've submitted a bug report to Oracle, so we'll see what happens.

Tags: zfs

Lululemon

From The Globe and Mail:

Two weeks ago, she was about to purchase a new pair of pants when a friend who knew her anti-Rand position told her of "John Galt" shopping bags.

"That was the last straw," she said, noting she had become increasingly disenchanted with the company after reading of its embrace of Werner's controversial Landmark Forum, and its 2007 run-in with the Competition Bureau that forced it to back down on claims that some of its clothes contained an ingredient with therapeutic attributes.

"I don't want people looking at me with that little logo on my pants or on my hoodie and thinking I'm going home to read Atlas Shrugged after, you know, downward dog," Ms. Kurchak said.

Tags: aynrand

The Social Graph is Neither

From The Social Graph is Neither:

Imagine the U.S. Census as conducted by direct marketers - that's the social graph.

Social networks exist to sell you crap. The icky feeling you get when your friend starts to talk to you about Amway, or when you spot someone passing out business cards at a birthday party, is the entire driving force behind a site like Facebook.

Because their collection methods are kind of primitive, these sites have to coax you into doing as much of your social interaction as possible while logged in, so they can see it. It's as if an ad agency built a nationwide chain of pubs and night clubs in the hopes that people would spend all their time there, rigging the place with microphones and cameras to keep abreast of the latest trends (and staffing it, of course, with that Mormon bartender).

We're used to talking about how disturbing this is in the context of privacy, but it's worth pointing out how weirdly unsocial it is, too. How are you supposed to feel at home when you know a place is full of one-way mirrors?

We have a name for the kind of person who collects a detailed, permanent dossier on everyone they interact with, with the intent of using it to manipulate others for personal advantage - we call that person a sociopath. And both Google and Facebook have gone deep into stalker territory with their attempts to track our every action. Even if you have faith in their good intentions, you feel misgivings about stepping into the elaborate shrine they've built to document your entire online life.

Open data advocates tell us the answer is to reclaim this obsessive dossier for ourselves, so we can decide where to store it. But this misses the point of how stifling it is to have such a permanent record in the first place. Who does that kind of thing and calls it social?

Tags: goodreading

Cascadia IT Conference

I've been contacted by one of the co-chairs for this year's Cascadia IT Conference to ask if I'd add a link, and I'm happy to do so. This is the second year of the conference, and if last year's is anything to go by it should be another great time.

Unfortunately I won't be able to go, but if you're anywhere in the area next March I'd recommend it. And if anyone's interested, I have no financial interest in the conference, but I have met some of the organizers at past LISAs.

Cascadia 2012

Tags: sysadmin plug

Courier company of the DAMNED

Okay, I like to rant. But honestly, sometimes there's just such good material it would be a crime not to.

I got cross-shipped a computer part recently. Although I was in the middle of a ticket with the vendor, and thus it wasn't unexpected, the first I found out about it was when I received this email from Loomis, The Courier Company of the DAMNED:

Date: Fri, 16 Sep 2011 13:06:40 -0400 (EDT)
From: nx6122@loomis-express.com
To: undisclosed-recipients: ;
Subject: 09/16/2011 13:05:16  Easyship 2.0 Shipment Notification

09/16/2011 13:05:16  See attached file/Voir le fichier ci-joint.


There was an attachment: a base64-encoded file called "110916130516REPORT.DOC". Against my better judgement, I saved the attachment and viewed it with hexdump, fully expecting it to be full of Javascript or PDF or Flash or whatever the kids are using these days. Instead, I found that it was a plain ASCII file -- not a Word document, not even an RTF document -- that had delivery information for the part I was being shipped.

The "From:" address' domain is "loomis-express.com". The "Received:" headers mentioned various "dhl.com" servers. And the plain text in the REPORT.DOC said to visit "loomisexpress.com" for further information. That's three different domains.

This is unbelievably bad. I was sure I was being phished. If a user had asked me about this email, I would have told them to delete it. And then I would have scrubbed their hard drive with lye.

Against that, the plain ol' outdated info in the second email, telling me how to schedule a pickup for the returned part, was just endearing. The link to their web page just gave a 404; when I found the pickup page by myself, it wanted me to set up an account before it would do anything. (Probably a fair request, but I don't want to be bothered since I'M NOT GOING TO USE THEM AGAIN.)

I gave up and called the toll-free number in the email; I got a stern voicemail from DHL saying that the numbers had changed, and this one was for DHL customers ONLY and Loomis Express people should either visit loomis-hyphen-express-dot-com or call a new toll-free number...which I noticed was different from the toll-free number displayed boldly on the return shipping label.

I emailed them about their poor email design, and got a response:

Thank you for taking the time to email us. On behalf of Loomis Express, please accept our sincerest apologies for the delayed response to this email.

We have copied the technical support group on this reply so that they could look into the matter and resolve if this actually came directly from Loomis Express.

I think that says it all.

Tags: rant

What I've been reading

High-throughput biological assays let us ask very detailed questions about how diseases operate, and promise to let us personalize therapy. Data processing, however, is often not described well enough to allow for reproduction, leading to forensic exercises where raw data and reported results are used to infer what the methods must have been. Unfortunately, poor documentation can shift from an inconvenience to an active danger when it obscures not just methods but errors.

In this talk, we examine several related papers using array-based signatures of drug sensitivity derived from cell lines to predict patient response. Patients in clinical trials were allocated to treatment arms based on these results. However, we show in several case studies that the reported results incorporate several simple errors that could put patients at risk. One theme that emerges is that the most common errors are simple (e.g., row or column offsets); conversely, it is our experience that the most simple errors are common. We briefly discuss steps we are taking to avoid such errors in our own investigations.

So, I make an open call for these two tasks: a simple tool to pin together slides and audio (and slides and video), and an effort to collate video from scientific conference talks and to film them where video doesn't exist, all onto a common distribution platform. S-SPAN could start as raw and underproduced as C-SPAN, but I am sure it would develop from there.

I'm looking at you, YouTube.

Tags: reading reproduciblescience

Rocks versus (OpenMPI versus MPICH2)

Last week I was running some benchmarks on the new cluster at $WORK; I was trying to see what effect compiling a new, Torque-aware version of OpenMPI would have. As you may remember, the stock version of OpenMPI that comes with Rocks is not Torque-aware, so a workaround was added that told OpenMPI which nodes Torque had allocated to it.

The change was in the Torque submission script. Stock version:

source /opt/torque/etc/openmpi-setup.sh
/opt/openmpi/bin/mpiexec -n $NUM_PROCS /usr/bin/emacs

New, Torque-aware version:

/path/to/new/openmpi/bin/mpiexec -n $NUM_PROCS /usr/bin/emacs

(Of course I benchmark the cluster using Emacs. Don't you have the code for the MPI-aware version?)

In the end, there wasn't a whole lot of difference in runtimes; that didn't surprise me too much, since (as I understand it) the difference between the two methods is mainly in the starting of jobs -- the overhead at the beginning, rather than in the running of the job itself.

For fun, I tried running the job with MPICH2, another MPI implementation:

/opt/mpich2/gnu/bin/mpiexec -n $NUM_PROCS /usr/bin/emacs

and found pretty terrible performance. It turned out that it wasn't running on all the nodes...in fact, it was only running on one node, with as many processes as the total number of CPUs I'd specified. Since this was meant to be a 4-node, 8-CPU/node job, that meant 32 copies of Emacs on one node. Damn right it was slow.

So what the hell? First thought was that maybe this was a library-versus-launching mismatch. You compile MPI applications using the OpenMPI or MPICH2 versions of the Gnu compilers -- which are basically just wrappers around the regular tools that set library paths and such correctly. So if your application links to OpenMPI but you launch it with MPICH2, maybe that's the problem.

I still need to test that. However, I think what's more likely is that MPICH2 is not Torque-aware. The ever-excellent Debian Clusters site has a good page on this, with a link to the other other other mpiexec page. Now I need to figure out if the Rocks people have changed anything since 2008, and if the Torque Roll documentation is incomplete or just misunderstood (by me).
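In the meantime, the obvious workaround to try is handing Torque's node list to mpiexec by hand (a sketch; I'm assuming this MPICH2 mpiexec accepts -machinefile, which I haven't verified):

/opt/mpich2/gnu/bin/mpiexec -machinefile $PBS_NODEFILE -n $NUM_PROCS /usr/bin/emacs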

Tags: rocks cluster torque

Compacting the Bacula catalog

Just compacted the Bacula catalog, which we keep in MySQL, as the partition it was on was starting to run out of space. (Seriously, 40 GB isn't enough?)

First thing I tried was running "optimize table" on the File table; that saved me 3 GB and took about 15 minutes. After that, I ran mysqldump and reloaded the db; that saved me another 300 MB and took closer to 30 minutes. Lesson: "optimize table" does just fine.
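For next time, the whole exercise boils down to something like this (a sketch; I'm assuming the catalog database is named bacula):

# reclaimed about 3 GB in roughly 15 minutes:
mysql bacula -e "OPTIMIZE TABLE File;"
# the dump-and-reload dance only saved another ~300 MB:
mysqldump bacula > /tmp/bacula.sql && mysql bacula < /tmp/bacula.sql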

Tags: bacula mysql

icli, perlbrew

Two things I should remember:

  • icli is a command-line interface to Icinga, the Nagios fork. However, it works well enough for Nagios itself if invoked like so:
icli  -c /var/spool/nagios/objects.cache -f /var/spool/nagios/status.dat

(assuming you've got the Nagios files in those locations). And by "well enough", I mean that it will show the current state of all services.

  • perlbrew is an excellent way of maintaining Perl in your home directory...like, say, because icli requires a newer version of Perl than CentOS provides. It even lets you switch the version of Perl you use in a shell on the fly, like modules. Ver' nice.
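The perlbrew workflow, roughly (a sketch; the version numbers are just examples):

perlbrew install perl-5.14.2      # build a Perl under ~/perl5/perlbrew
perlbrew switch perl-5.14.2       # make it the default for new shells
perlbrew use perl-5.12.3          # or switch just this shell, module-style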

Tags: nagios

How long until...

either:

a) financial regulations are deliberately made NP-complete, in order to ensure that prosecutions can prove that a human deliberately subverted them, or

b) lobbyists push to keep financial regulations deliberately simple, ostensibly to increase efficiency but in actuality to facilitate computer-assisted subversion without having to invest in AI?

ObXKCD.

Tags:

Quotas

So: the quota file on an ext3 filesystem contains usage information ("How much disk space is this user using?"). It's updated when quotacheck is run, typically at boot time. After that the kernel has up-to-date info on quotas but doesn't write it to disk for performance reasons. So the kernel will deny/allow writes as necessary.

But the userland tools used -- particularly by users -- to monitor or report on quota state ("How much space am I allowed to use?") only use those files. And those aren't updated unless quotacheck is run...which is either at boot time, or when called from cron. And to run it on a live system, you've got to turn off quotas to prevent corruption.
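Which means refreshing the on-disk files on a live filesystem looks something like this (a sketch, assuming the filesystem in question is mounted at /home):

quotaoff /home           # stop enforcement so quotacheck can run safely
quotacheck -vug /home    # rebuild aquota.user / aquota.group from actual usage
quotaon /home            # turn enforcement back on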

Bleah.

UPDATE: Near as I can tell, I was checking quotas while not realizing that quotas had been turned off for the filesystem I was checking, and thus quota reported bogus data.

CONCLUSION: I am on crack. BIG CRACK. Ignore me.

Tags: rant

Well, which one would YOU pick?

At work, I'm about to open up the Rocks cluster to production, or at least beta. I'm finally setting up the attached disk array, along with home directories and quotas, and I've just bumped into an unsettled question:

How the hell do I manage this machine?

On our other servers, I use Cfengine. It's a mix of version 2 and 3, but I'm migrating to 3. I've used Cf3 on the front end of the cluster semi-regularly, and by hand, to set things like LDAP membership, automount, and so on -- basically, to install or modify files and make sure I've got the packages I want. Unlike the other machines, I'm not using cfexecd to run Cf3 continuously.

The assumption behind Cf3 and other configuration management tools -- at least in my mind -- is that if you're doing it once, you'll want to do it again. (Of course, there's also stuff like convergence, distributed management and resisting change, but leave that for now.) This has been a big help, because the changes I needed to apply to the Rocks FE were mostly duplicates of my usual setup.

If/when I change jobs/get hit by a bus, I've made it abundantly clear in my documentation that Cfengine is The Way I Do Things. For a variety of reasons, I think I'm fairly safe in the assumption that Cf3 will not be too hard for a successor to pick up. If someone wants to change it afterward, fine, but at least they know where to start.

OTOH, Rocks has the idea of a "Restore Roll" -- essentially a package you install on a new frontend (after the old one has burned down, say) to reinstall all the files you've customized. You can edit a particular file that creates this roll, and ask it to include more files. Edited /etc/bashrc? Add it to the list.

I think the assumption behind the Restore Roll is that, really, you set up a new FE once every N years -- that a working FE is the result of rare and precious work. The resulting configuration, like the hardware it rests on, is a unique gem. Replacing it is going to be a pain, no matter what you do. There aren't that many Rocks developers, and making it Really, Really Frickin' Nice is probably a waste of their time.

(I also think it fits in with the rest of Rocks, which seems like some really nice bits surrounded by furiously undocumented hacks and workarounds. But I'm probably just annoyed at YET ANOTHER UNDOCUMENTED SET OF HACKS AND WORKAROUNDS.)

And so you have both a number of places where you can list files to be restored, and an amusing uncertainty about whether the whole mechanism works:

I found that after a re-install of Rocks 5.0.3, not all the files I asked for were restored! I suspect it has to do with the order things get installed.

So now I'm torn.

Do I stick with Cf3? I haven't mentioned my unhappiness with its obtuseness and some poor choices in the language (nine positional arguments for a function? WTF?). I'm familiar with it because I've really dived into it and taken a course at LISA from Mark Burgess his own bad self, but it's taken a while to get here. But it is the way I do just about everything else.

Or do I use the Rocks Restore Roll mechanism? Considered on its own, it's the least surprising option for a successor or fill-in. I just wish I could be sure it would work, and I'm annoyed that I'd have to duplicate much of the effort I've put into Cf3.

Gah. What a mess.

Tags: rant rocks cfengine

LinuxCon, Day 3

Friday morning I pretty much skipped the keynotes. While waiting for the tutorials to start up, I got buttonholed by Kurt von Fink of the MariaDB project. Nice guy, and he pretty much convinced me to take a serious look at it. I'd been dithering about whether to attend the talk on it in the afternoon, and decided it was worth my while.

First tutorial, though, was the Filesystem Tuning talk by Christoph Hellwig. After some minor problems (geek presentation tech problems mean 47 people in the audience shouting out suggestions for xrandr flags), the talk began...and ho boy, I took notes furiously during this one. He concentrated on ext4 and XFS, adding the disclaimer that he's an XFS developer/fan. Hopefully his slides will be up soon; if not, I'll type up my notes RSN.

Okay, one quote/summary: do not buy a cheap RAID controller -- ie, anything without a battery-backed cache. Why not? Because you'll be doing stuff you could do with mdadm and running it on an undebugged RTOS, which itself will be running on a tiny, underpowered ARM chip.
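For the record, the mdadm alternative he was pointing at is plain software RAID on the host (a sketch; device names are made up):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1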

In the afternoon was the Bufferbloat talk. Here are some links for you to look at while I come up with more summarizationalism of the talk itself:

I've got a lot of reading to do there.

Tags: linuxcon

Linux Con -- Day 2

Thursday morning was the keynote from Dr. Irving Wladawsky-Berger at IBM. His memories of Linux ascendancy were interesting...possibly because of the cheerleading/"We would simply prevail" feeling I felt. But his speculation on what would come was fuzzy and handwavy...slides with things like "Smart retail / Smart traffic/ Smart cities / Smart regions / Smart planet / Intelligent oil field technology" (wait, what happened to smart?) and graphs of Efficiency vs. Transformation, with a handy downward-sloping line delineating "Reinventing Business" from "Rethinking IT", just made THE RAGE come on.

The HP speech that came after wasn't much better, so I ducked out after five minutes...perhaps a mistake, in retrospect. I will say, though, that it amazes me that multitasking, in 2011, is something to brag about.

Next up was the presentation from IBM on "Improving Storage in KVM-based clouds". Despite the buzzwords, it boiled down to an interesting war story about debugging crappy FS performance, from verifying ("Yes, the users are right when they say it sucks") to fixes ("This long-term kernel project will add the feature we need to stop sucking!"). If I can find the slides, I highly recommend reading them...there's a lot of practical advice in there.

Next up was a presentation by the mysteriously-employed Christoph on Linux in the world of finance. It was a short presentation -- a lot of presentations at LinuxCon have been short -- but he made up for it with a lively Q&A afterward. (To be fair, he explained at the beginning that he was used to a much more hostile/loud audience and a much more interactive presentation style, and actively solicited questions.)

Right, so: Linux is used in finance a lot, because it's fast and very, very tweakable. He described this as "Linux hotrodding", which seems to capture the attitude very well. Sadly, a lot of this stays in-house because these tweaks are considered part of the "secret sauce" that makes them money.

I asked if the traders were involved in the technical side of things, or if it was more like "Let me know when my brilliant algorithm is sufficiently fast." Answer: no, traders are very, very technical (some give keynotes at tech conferences), and there is very tight integration between the two. I asked if the culture was as loud, macho and aggressive as the stereotype. Answer: yes. Someone asked why Solaris usage had declined. Answer: neither traders ("You got bought! You're a loser!") nor techies ("Oracle kills MySQL and puppies!") liked Oracle buying Sun.

And now for an opposing view.

I spoke after the talk to three sysadmins from the same trading company, and they disputed some of Christoph's points. First, their company contributes back to open source/Free software; their CTO says it's a moral imperative. They've open-sourced their own trading software, though not the algorithms ("algos" if you're a trader type) that make them money. They admit that this makes their company unusual; in their industry, secrecy is the rule.

Second, they said the culture varies from company to company, and that anyhow it's very different now that MIT PhDs and such are being hired. It's not all "Wall Street".

And one bit they confirmed: hotrodding. Things like overclocking their chips -- but to the degree that the vendors phone them up to say "You'll burn out your CPU in a week!" Response: "Okay." Because it'll make more money in the first hour it's running than the CPU costs.

I had lunch with Chris, who I used to work with, and caught up on everything. Then I hung out in the vendor area a bit. The PandaBoard was neat: Ubuntu 10.10, playing a 1080p movie trailer and drawing less than two watts. Incredible.

I buttonholed the FreeIPA guy; complimented him on the talk, and asked some questions. Master-slave in FreeIPA LDAP server? No, multi-master only. Doesn't that make you nervous? No. Doesn't keeping config information for the LDAP server in LDAP, rather than a plain text file, make you nervous? Shrug; if you can't read LDAP, you're probably hosed anyway. Oh, and btrfs is coming to Fedora 17, probably RHEL 7. Doesn't that make you nervous? No. (Conclusion for the home listeners: I am a misinformed worrywart.)

And Rik van Riel was there, but I forgot to hug him.

In the afternoon I went to a two-hour introduction to KVM-based virtualization. This was excellent; while I'm using KVM at the moment, I'm not familiar with the tools available. (Which probably means I shouldn't be using it....) He covered tools like virt-p2v, KSM, and how to monitor performance of VMs from the host, even if you don't have root privileges. Good stuff.

Tags: linuxcon mysql

LinuxCon -- Day 1

Morning came and I found myself at the beautiful Airport Hilton^W^WVancouver Hyatt, standing in line to register for LinuxCon. Ponytails, beards; jeans and t-shirts, but also jeans and open-neck dress shirts. (OH in line: "Yeah, we really have to leverage first-line early adopters to get that community buildup...") Coffee, starchy sweets and free stickers, then gawking at maddog and thinking that forgetting my FSF pin was worse than forgetting my business cards. Some signs of Scary Viking Sysadmins, which reassured me.

Sign on the window: "Don't miss the Complimentary Morning Yoga at Vancouver Corporate Yoga at the Royal Centre!"

Then at 8:55am, the BELLS OF GOD rang in the lobby, interrupting the conversation I was having with the Oracle folks ("No, we have no plans to close-source VirtualBox. No, we cannot let you fly Larry Ellison's jet.") to let me know it was time to go to THE OPENING KEYNOTE ZOMG THIS WAY. I clutched my hands to my head and staggered into the ballroom, bleeding from my ears, took a complimentary Band-Aid and found a seat.

I overheard someone behind me saying "Dude, you can see maddog here and Linus and all these people you only read about!" I turned around, noticed that they were from the Oregon State University Open Source Labs, and said "Dude! You work at a place that hosts Mozilla! And Gentoo! I'm not worthy!" Geek love...is it wrong?

The first presentation was from Jim Zemlin, pres. of the Linux Foundation. As the head of a non-profit, he was in the mandatory uniform of jeans and a grey hoodie, and his theme song was "Eye of the Tiger" (no, really). His talk: where would we be w/o Linux? Well, we wouldn't have that creepy as hell IBM commercial about Linux that I saw described as "Children of the Corn-like". Seriously, it was disturbing.

Next up, the CEO of Red Hat in jeans and open-neck dress shirt, and his theme song was that goddamn "Tonight's Gonna Be a Good Good Good Good Good (Good Good) Good Ni-i-i-i-ight" by the Black Eyed Peas. His talk: where is Linux going to be in 20 years? Spoiler: he doesn't know. I just saved you a) 30 minutes of a semi-interesting speech (he started with Slackware in the 90s, he says) and b) sorting through 8,000 goddamn retweets of that TechCrunch article ("RH CEO: I Have No Idea What's Next") while looking for any, ANY interesting tweets re: the conference. (I'm spoiled from LISA.) (Note: they showed up later in the day...but they're still thin on the ground.)

Assertion from RH CEO: Google, Facebook et al. would be nowhere w/o Linux, for only Linux provides easy, cheap prototyping without bureaucracy/licensing/etc. Sorry, what happened to the BSDs? Did I miss something? And another example: "Could Facebook have taken off if they were using Oracle Solaris with Sparc Servers, charging $10 per user to register?" Notice a) ORACLE ORACLE ORACLE and b) ignoring the actual FB business model of selling your data to the highest bidder.

Also, he used the phrase "leading business thought leader" to describe someone.

Okay, so after more sugary starch and not enough coffee, I went to the FreeIPA talk. This is definitely an interesting project, and I think I should have been using this a long time ago. Benefits:

  • Everyone does LDAP organization differently, which makes mergers hard. Also, there's no enforcement of relationship integrity. FreeIPA handles both of those things (standard layout + automatic relationship updates).

  • XML-RPC, CLI, Web interfaces, all of which do everything (quick CLI taste at the end of this list).

  • System Security Services Daemon -- offline credential caching FTW.

  • Kerberos + NTP + DNS automagic.
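As promised, a taste of the CLI (the user and group are hypothetical, and I'm going from memory on the exact options):

ipa user-add jdoe --first=Jane --last=Doe
ipa group-add-member sysadmins --users=jdoe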

(Contrary to rumour, Larry Ellison was NOT at the back shouting "This does NOTHING that NIS 4 won't do!" I don't know how these things get started.)

I left early, because while I agreed that I shoulda used FreeIPA a long time ago, it wasn't telling me much more than that. I ducked over to Greg Kroah-Hartman's talk on the stable kernel, and basically showed up in time for him to say "No more questions? Okay, then, thanks everyone." I wanted to buttonhole him afterward about how in hell Broadcom was persuaded to release wireless drivers under the GPL, but he was being buttonholed by other actual developer types...I figured I could stalk him later. (Incidentally, his voice is quite deep; someone in the audience had an app on their phone that showed it was 0.8 metric BarryWhites.)

I needed coffee, so I headed to the Starbuck's in the hotel. And who should be in line in front of me but Linus Torvalds his own bad self? True! Not only that, but he was trying to grift the poor cashier with the ol' San Francisco Shuffle. He was paying with a gift card, but there was a minimum order of $10 and he was $0.37 short, so can't you just add a small service charge...he walked away with his coffee, $53 in cash and the keys to the staff bathroom. If you ever meet him in person, hold on to your wallet with both hands.

Next up was James Turnbull's talk on OpenStack and Puppet. (ObSartorialNote: sat next to jeans + polo shirt + black & red running shoes, talking to jeans + open-neck white dress shirt; I'm guessing startup manager + mid-level VC.) This was quite a cool talk. If you're not familiar w/OpenStack (and I wasn't), it's meant to be a way of managing all your cloudy stuff (VMs, storage, networking, etc) no matter who the provider is. Think high-level API for spinning up/down instances, with low-level plugins (or something) worrying about how to do it with AWS, Rackspace, Eucalyptus...

So far they've got Nova (compute instances), Glance (image service) and Swift (simple blob storage, very little metadata); coming RSN is authorization, dashboard, block storage, message queueing, database and load balancing (code name Atlas, which is just the coolest thing ever). But even with just those three components production-ready (or nearly so), PuppetLabs is getting ready to migrate something like 20 VMs they use for testing over to OpenStack.

Interesting points:

  • Written in Python; an advantage of that is that it allows sysadmins and other non-ninja-programmer-types to contribute. It's all API-driven, with an HTTP/REST interface.

  • Currently supports Ubuntu; working to get CentOS/RHEL in there, but it's a while off.

  • His demo was using OpenStack within a VM to create another VM, then configure it with Puppet. He introduced this with a slide of five turtles all piled on top of each other in a pond.

  • Puppet is used for managing the new VMs; nothing prevents you using Cfengine. That said, they're building modules (is that the right term?) for Puppet that will be able to manipulate OpenStack directly.

  • One of PuppetLabs' customers is quite interested in OpenStack. They are -- get this -- a mortgage broker company that drives an 18-wheeler full of computers to state fairs and such, then sells mortgages on the ground to punters. Right now, fixing problems with the computers/servers is a PITA because they're in the back of beyond with no spare parts, service folks, etc. The ability to just magic VMs out of the ground, and destroy/re-create them if there are problems, is very, very appealing to them.

I talked to him after the talk. He insists that there is no bad blood between Luke and Mark Burgess, and that rumours of cage matches are completely unfounded. (I don't know how these things get started, I really don't.)

I asked him about packaging support in Puppet; Cf3 basically washes its hands of the matter, saying "I'll run your stupid installation commands but don't come crying to me if everything breaks." He said that this is a subject of much debate between Cf3, bcfg2 and Puppet; Mark's feeling is that it's simply not solvable, and his (James') own feeling is that it's merely non-trivial. He's trying to find a way to inhale the package manager's graph of dependencies and merge it with Puppet's own, but myriad differences in package manager behaviours are making this difficult.

After that was "What's inside benchmarks" by Oracle. I stuck around for a while, but it was simply not that interesting. I moved on to the "PowerNap your data centre" presentation by Dustin Kirkland, and this was definitely better. PowerNap is a Python script that will watch for activity (processes, disk or network IO, whatever) and lower power consumption if it thinks the machine has been idle long enough. Matthew Garrett was there, and offered to help put this in the kernel (if I understood his questions correctly).

At my work we don't pay for power (it's a university) so that incentive is out; instead, we worry about capacity, and this might help. A friend of mine who works with render farms was interested in modding the code so that it would throw an idle machine into the render farm, but return it to interactive use if someone sat down at it.

Oh, and PowerNap version 3 will be a client/server thing -- client says "Hey, looks like I'm idle...tell me what to do"; server will say "it's before 5pm, so stay fully powered no matter what."

I headed to the FreedomBox talk next. Eben Moglen was in the audience, and I took the opportunity to thank him for his speech (I think it was this one). (Hands shook: RMS, Eben Moglen; Linus next?)

The talk was interesting, as is the project itself. The goal is a personal server, running Free software, that creates and preserves privacy. Personal == something like a plug computer; if it's in your home, some legal jurisdictions treat data very differently than they do if it's on an external server. Privacy == enabling privacy-respecting/creating apps to replace current privacy sinkholes like Facebook et al. They're starting with Debian, due to long history with it, an eventual goal of creating FreedomBoxen easily with "apt-get", and to ensure that their work survives the project.

I'm going to be keeping an eye on this, and I suggest you do too.

I went to the Q&A with Linus, and it was interesting. He said he'd been asked by people to skip version 3.1 because of bad memories, but was still considering naming 3.11 "Linux for Workgroups".

He got asked about the Google/Android dispute. He said it'll probably happen, it's a couple years out at least, the Google team is relatively small and oversubscribed...and anyhow, he's not afraid of forks.

And after that, it was beer o'clock with Paul. Fun times.

Tags: linuxcon

Going to LinuxCon!

So a couple of weeks ago, a coworker said to me: "Hey, you going to LinuxCon? It's here in Vancouver." I had no idea. I took a look at the schedule (Greg Kroah-Hartman! Matthew Garrett! Linus his own bad self!) and started to think about it. I hadn't budgeted for it, and it was the last three days of my vacation...but yeah, I wanted to go.

I checked with my lovely wife and she said it was okay with her. I checked with my lovely boss, and he said it was okay. I checked my budget and decided I (well, work) could afford it. And now, I'm busy checking the schedule to see when I should be out the door to arrive (looks like about 7am).

There's a lot that I want to see. The FreeIPA talk, Puppet and OpenStack, Linux and Finance...it's going to be a good time. I haven't seen a lot of Twitter traffic on LinuxCon, but I'm afraid I won't be able to contribute much. However, I will be writing up the day's experience as I do with the LISA conferences. So, you know, sharpen up your copy of Lynx, 'cos this text-only layout won't optimize itself.

In other news:

  • 3- and 5-year old kids are surprisingly adaptable. When we say something like, "It's 4.30 in the afternoon and we're bored, wanna drive to a beach?" they cheer. And then they keep their cool for the half hour of preparation and the 20 minute drive. We would have KILLED for this level of spontaneity two years ago.

  • I got a new scope! It's an 8"/200mm SkyWatcher dob from Craigslist. The owner stored it outside ("but only in the summer") so it wasn't in the greatest shape. But it works, and I'm happy with it. And I was able to see Jupiter's Great Red Spot yesterday morning with it, which is hard -- it's quite pale, and not bright red at all these days.

  • I'm taking my oldest son (5 years) to camp out overnight at the Aldergrove Park Star Party this Friday. We're both looking forward to this. Originally I was just going to stay out late and drive back, but then I found our old tent when I was doing some cleaning...and camping out just makes a lot more sense. It's going to be a long day for me -- duck out of LinuxCon, come home, and drive out to Aldergrove -- but hopefully a lot of fun. I'll power up on Red Bull or something.

Tags: linuxcon

Observing report - July 18, 2011

Last night was clear, so I headed out to the local park about 10pm. Even though it was 45 minutes after sunset, it was still quite bright out, and I had a chance to set up, sit down and start scanning the skies with 10x50 binoculars.

I had a big list of stuff I wanted to get through: Saturn, some double stars from Sky and Telescope's latest issue, 55 Cancri...but Saturn was behind trees (they're quite high to the west where I observe), and it turned out to be more fun just to see what I could see.

I had a copy of July's Sky Maps, and it makes an excellent checklist. I started out with The Coathanger Cluster, which I'm still tickled at being able to find. And hey, what's this close by? M11, the Wild Duck Cluster? I haven't looked at that before...

It was tough to find just using SkyMaps, so I pulled out TriAtlas and tried to work out where to look. It's crowded in that part of the sky, so I switched from the A series to the B series (more detail, lower (higher?) limiting magnitude). It took a while to track down, but I finally saw it at 10:30pm with averted vision through the binoculars. (I came back to it a few times throughout the night, and it became quite noticeable with direct vision as the sky got darker.)

Having spent the time hunting it down with binos, it was pretty simple to zero in with the finder (though I swear it was fainter, despite being a 10x50 like the binos). After some experimentation, I settled on the 12.5mm Vixen (75X) for viewing.

And wow...it was just gorgeous. Faint nebulosity around a bright star, with one other just visible, seemed to sparkle and just be on the edge of resolution. I keep reading about this with globular clusters, but I hadn't actually seen this phenomenon 'til now.

I moved on to NGC 6633, an open cluster in Ophiuchus; again, I tracked it down with binoculars before moving on to the scope. It was a large, elongated smattering of faint stars in the binoculars, with more stars and more graininess coming out in the scope. It was quite beautiful.

(And in reading this PDF from Phil Harrington, I realize that I came across Poniatowski's Bull while searching for this cluster. And I could have seen Barnard's Star...dangit!)

I tried looking for M26, and was just able to glimpse some fuzziness with averted vision at 22X in the scope. I was unable to see anything else at any other power.

About this time, I noticed a coyote padding by. I made some noise and he ran off. Amazing what you can see, even in a big city, if you keep your eyes open.

And then the moon was up. I'd been planning to sketch, but as Douglas Adams said, sometimes the best part of planning is throwing it all away. (Or something like that.) I started looking and just couldn't stop. There was something like an X by the terminator -- quite striking. I couldn't really bump up the magnification past 75X or so due to seeing; the moon was still pretty low in the sky. But it was still breathtaking. I sometimes think I could really, really get into lunar observing...there's something about seeing all the detail, and the way it changes, that's captivating.

It was past midnight, and I started thinking I should go. But I noticed a bright star rising in the east, below Cassiopeia, and I wanted to know what it was. I got out my planisphere, moved the dials around...carry the two for daylight savings...Algol.

Algol?

Wow. I knew constellations moved around, and that it was entirely reasonable that you'd see off-season (so to speak) constellations if you stayed up late enough. But Algol? The last time I'd looked at that regularly it was January. Winter! What the hell?

Well, only one thing to do: I swivelled the scope over and looked up the Double Cluster. Whee, there it was! Not nearly as spectacular as when it was way overhead, but still.

And if I could see that, what about Andromeda? I hadn't been able to track that down in the winter (I'm still a newbie!), and it had been a big frustration for me. The moon was nearly full, it was no more than 30 degrees above the horizon, but what the heck...let's give it a try. And wow, there it was! Faint fuzzy visible in binocs and the scope. Wasn't much to look at, but I couldn't have been more thrilled.

It was a good way to end the night. I packed up, wiped the dew off things as best I could (so THAT'S what people mean when they complain about dew), and headed home.

Tags: astronomy

Observing report -- July 1, 2011

It's been four weeks of cloudy, crappy weather, but the clouds FINALLY cleared up tonight. I thought about heading to the dark site on the side of Seymour Mountain, but decided to keep it local and casual instead; I headed over to the local park with my dad.

We got out around 10pm and set up our lawn chairs to watch the stars slowly, slowly come out. It's not terribly dark at this park -- nearby street lights are visible, and we don't really have any way of shielding ourselves -- but it is nice and close.

We watched for an Iridium flare at 11:15pm...no luck. I got to show my dad Albireo, M13 and a bunch of satellites. I'd hoped to show him Saturn, but it was behind the trees by the time we got out. About 11.30, he headed in for the night.

After he left I got down to some serious observing (modulo the fact that I'm still a newbie). First on the list were some double stars. I tried splitting the Double-Double but no luck -- just a double tonight. I did split Eta Cassiopeiae, though I couldn't see any colour difference. (The last time I looked up Eta it was winter -- it's weird to think of Cassiopeia being up in the summer!). Ras Algethi was pretty, and held up to the high magnification needed -- it took 240X to split it, and that did not seem unreasonable. The pair seemed orange and blue-green to me. Very pretty.

Next up were a couple of globular clusters. I started with M13, and spent about ten minutes underneath an astronomy-quality t-shirt, trying to let my eyes adapt and trying different magnifications. Having just read Rod Mollise's "Urban Astronomy", I was enthusiastic to try high magnifications...but I gotta say, it was nearly invisible at 150X. Yes, it's a 4.5" scope on a night of probably not-great seeing, and my eyes never really dark-adapted, but I expected more. At least at 75X I can see it. There might have been some little hint of graininess at higher powers, but nothing I could see with confidence. I'm willing to put this down to equal parts inexperience, lack of dark adaptation, impatience and smaller aperture.

I looked up M5 for comparison, and found it about to head behind some trees. I was surprised at how obvious it was in binoculars and the finder, and I agree with Mollise's opinion that it at least rivals M13. Maybe the tiniest hint of graininess, but again I'm really not sure.

I saw M57 for the first time. Maybe saw the hole at higher powers (120x) but I can't say for sure. The sight was a bit disappointing for me. I found myself wishing for the open clusters that abound in winter time; they're pretty and really shine in the small scope. And it was nice that there was no moon, but without a planet to look at I felt a bit lonely.

After a second look at Albireo (colours much better in the dark!), I packed it in about 1am. Even though I missed some favourite targets, it was so good to get out after the long, long drought.

Tags: astronomy