I use dia for network maps. Turns out you can export to PNG directly from the command line like so:
dia --export foo.png --filter=png foo.dia
However, if you do that the resolution is terrible. The undocumented way around this is to install libart and then use "--filter=png-libart" for exporting. On Ubuntu, that's:
sudo apt-get install libart
dia --export foo.png --filter=png-libart foo.dia
Much better!
Yesterday I realized that, for the nth time, I'd sent emails to my local Request Tracker with the wrong subject line -- and thus they were attached to the wrong ticket. Fixing this turned out to be relatively easy (though not necessarily convenient).
First off, get the attachment number for the email you're trying to move; you can do this by viewing the ticket in your browser and hovering over the "show full" link. Second, fire up your MySQL client and run:
update Transactions set ObjectId=(new ticket number) where id=(attachment number) limit 1;
So if the attachment number was 30043, and it should be attached to ticket 4090, you'd run:
update Transactions set ObjectId=4090 where id=30043 limit 1;
Sorted!
Today I had to compile a program that needed a newer version of the Autoconf suite than is available on CentOS 5. I got around this like so:
Downloaded and rebuilt the SRPM for autoconf2.6 from the good folks at pkgs.org
Installed it on the build machine (dang! shoulda been using Vagrant for that! Or at least Mock...)
This gives you /usr/bin/auto[whatever]2.6x -- which is good! Don't overwrite stuff! But you'll still get complaints about versions that aren't new enough.
Symlink all the 2.6 binaries into ~/bin/, minus the version suffix:

for i in /usr/bin/auto*2.6x ; do
    ln -s $i ~/bin/$(basename $i | sed -e 's/2.6x//g')
done

Make sure ~/bin is in your $PATH first. Walla.
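A minimal sketch of that $PATH tweak, assuming a Bourne-style shell:

export PATH="$HOME/bin:$PATH"    # ~/bin now shadows /usr/bin
autoconf --version               # should now report 2.6x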
If you're using LWP::UserAgent -- and if you're using WWW::Mechanize, you're using it -- you can point it at a self-signed SSL certificate like so:
HTTPS_CA_FILE=/path/to/cacert.pem <script file>
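If you'd rather set it from inside the script, something like this should also work -- a minimal sketch, assuming an LWP HTTPS backend (e.g. Crypt::SSLeay) that honours this environment variable; the URL and path are made up:

#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;

# Tell the HTTPS backend which CA cert to trust before making requests.
$ENV{HTTPS_CA_FILE} = '/path/to/cacert.pem';

my $mech = WWW::Mechanize->new();
$mech->get('https://self-signed.example.com/');
print $mech->status, "\n";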
When compiling CHARMM, I'll sometimes encounter errors like this:
charmm/lib/gnu/iniall.o: In function `stopch_':
iniall.f:(.text+0x1404): relocation truncated to fit: R_X86_64_PC32 against symbol `ldbia_' defined in COMMON section in charmm/lib/gnu/iniall.o
iniall.f:(.text+0x14af): relocation truncated to fit: R_X86_64_PC32 against symbol `seldat_' defined in COMMON section in charmm/lib/gnu/iniall.o
iniall.f:(.text+0x14d7): relocation truncated to fit: R_X86_64_32S against symbol `seldat_' defined in COMMON section in charmm/lib/gnu/iniall.o
iniall.f:(.text+0x151b): relocation truncated to fit: R_X86_64_32S against symbol `seldat_' defined in COMMON section in charmm/lib/gnu/iniall.o
iniall.f:(.text+0x1545): relocation truncated to fit: R_X86_64_PC32 against symbol `shakeq_' defined in COMMON section in charmm/lib/gnu/iniall.o
iniall.f:(.text+0x1551): relocation truncated to fit: R_X86_64_PC32 against symbol `shakeq_' defined in COMMON section in charmm/lib/gnu/iniall.o
iniall.f:(.text+0x1560): relocation truncated to fit: R_X86_64_PC32 against symbol `kspveci_' defined in COMMON section in charmm/lib/gnu/iniall.o
iniall.f:(.text+0x156e): relocation truncated to fit: R_X86_64_PC32 against symbol `kspveci_' defined in COMMON section in charmm/lib/gnu/iniall.o
iniall.f:(.text+0x16df): relocation truncated to fit: R_X86_64_PC32 against symbol `shpdat_' defined in COMMON section in charmm/lib/gnu/iniall.o
charmm/lib/gnu/iniall.o: In function `iniall_':
iniall.f:(.text+0x1cae): relocation truncated to fit: R_X86_64_PC32 against symbol `cluslo_' defined in COMMON section in charmm/lib/gnu/iniall.o
iniall.f:(.text+0x1cb8): additional relocation overflows omitted from the output
collect2: ld returned 1 exit status
The problem is that the linker is running out of room:
What this means is that the full 64-bit address of foovar, which now lives somewhere above 5 gigabytes, can't be represented within the 32-bit space allocated for it.
The reason for this error is the amount of data you're using: it shows up when your program needs more than 2GB of statically-allocated data. Who needs that much data at compile time? I do, for one, and so do other people in the HPC world. For all of them the life-saver (or at least day-saver) is the compiler option -mcmodel.
This is a known problem with CHARMM's "huge" keyword.
There are a couple of solutions:
Edit the CHARMM makefiles to include GCC's -mcmodel argument (though you should be aware of the subtleties of that argument; there's a sketch after this list)
Switch to XXLARGE or some other, smaller memory keyword.
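For the first option, the flag itself looks like this -- a minimal sketch, not CHARMM's actual build machinery. -mcmodel=medium tells GCC not to assume static data fits below 2GB, so the linker no longer has to squeeze those addresses into 32 bits:

gfortran -mcmodel=medium -c iniall.f -o iniall.o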
This page, from the University of Alberta, also has excellent background information. (Oh, and also? They have a YouTube channel on using Linux clusters.)
I came across this tip on an old posting to the Bacula mailing list. To determine if exclusions in a fileset are working, run these commands in bconsole:
@output some-file
estimate job=<job-name> listing level=Full
@output
The file will contain a list of files Bacula will include in the backup.
(Incidentally, I came across this while trying to figure out why my exclusions weren't working; it turned out I needed to remove the trailing slash from the directory names in my Exclude section.)
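For illustration, here's the shape of the fix -- a hedged sketch with made-up paths:

Exclude {
  # "File = /var/tmp/" (with the trailing slash) silently failed to
  # match; listing the directory without the slash worked:
  File = /var/tmp
  File = /home/scratch
}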
Xmas vacation is when I get to do big, disruptive maintenance with a fairly free hand. Here's some of what I did and what I learned this year.
I made the mistake of rebooting one machine first: the one that held the local CentOS mirror. I did this thinking that it would be a good guinea pig, but then other machines weren't able to fetch updates from it; I had to edit their repo files. Worse, there was no remote console on it, and no time (I thought) to take a look.
Last year I tried getting machines to upgrade using Cfengine like so:
shellcommands:
    centos.some_group_of_servers.Hr14.Day29.December.Yr2009::
        "/usr/bin/yum -q -y clean all"
        "/usr/bin/yum -q -y upgrade"
        "/usr/bin/reboot"
This didn't work well: I hadn't pushed out the changes in advance, because I was paranoid that I'd miss something. When I did push them out, all the machines hit the cfserver at (more or less) the same time and didn't get the updated files because the server was refusing connections. I ended up doing it by hand.
This year I pushed out the changes in advance, but it still didn't work because of the problems with the repo. I ran cssh, edited the repos file and updated by hand.
This worked okay, but I had to do the machines in separate batches -- some needed to have their firewall tweaked to let them reach a mirror in the first place, some I wanted to watch more carefully, and so on. That meant going through a list of machines, trying to figure out if I'd missed any, adding them by hand to cssh sessions, and so on.
I may need to give in and look at RHEL, or perhaps func or better Cfengine tweaking will do the job.
Quick and dirty way to make sure you don't overload your PDUs by power-cycling every machine at once:

sleep $(expr $RANDOM / 200) && reboot

($RANDOM is between 0 and 32767, so each machine waits a random 0-163 seconds before rebooting.)
Rebooting one server took a long time because the ILOM was not working well, and had to be rebooted itself.
Upgrading the database servers w/the 3 TB arrays took a long time: stock MySQL packages conflicted with the official MySQL rpms, and fscking the arrays takes maybe an hour -- and there's no sign of life on the console while you're doing it. Problems with one machine's ILOM meant I couldn't even get a console for it.
Holy mother of god, what an awful time this was. I spent eight hours on upgrades for just nine desktop machines. Sadly, most of it was my fault, or at least bad configuration:
Graphics drivers: awful. Four different versions, and I'd used the local install scripts rather than creating an RPM and installing that. (Though to be fair, that would just rebuild the driver from scratch when it was installed, rather than do something sane like build a set of modules for a particular kernel.) And I didn't figure out where the uninstall script was 'til 7pm, meaning lots of fun trying to figure out why the hell one machine wouldn't start X.
Lesson: This really needs to be automated.
Lesson: The ATI uninstall script is at /usr/share/ati/fglrx-uninstall.sh. Use it.
Lesson: Next time, uninstall the driver and build a goddamn RPM.
Lesson: A better way of managing xorg.conf would be nice.
Lesson: Look for prefetch options for zypper. And start a local mirror.
Lesson: Pick a working version of the driver, and commit that fucker to Subversion.
These machines run some scientific software: one master, three slaves. When the master starts up at boot time, it tries to SSH to the slaves to copy over the binary. There appears to be no (or poor) rate throttling, so if the slaves are not available when the master comes up, umpty scp processes pile up on each slave, holding the binary open; the kernel then gets confused trying to run it.
I also ran into problems with a duff cable on the master; confusingly, both the kernel and the switch said it was still up. This took a while to track down.
It turned out that a couple of my KVM-based VMs did not have jumbo frames turned on. I had to use virt-manager to shut down the machines, switch their network interfaces over to the virtio drivers, then reboot. However, kudzu on the VMs then saw these as new interfaces and did not configure them correctly. This caused problems because the machines were LDAP clients, and they hung when the network was unavailable.
Reminder to myself: got a file called .nfs*? Here's what's going on:
# These files are created by NFS clients when an open file is
# removed. To preserve some semblance of Unix semantics the client
# renames the file to a unique name so that the file appears to have
# been removed from the directory, but is still usable by the process
# that has the file open.
That quote is from /usr/lib/fs/nfs/nfsfind, a shell script on Solaris 10 that's run once a week from root's crontab.
Just came across maillog, which looks very cool. From TFM:
Maillog is a powerful tool for selecting and formatting entries from a
sendmail or postfix log. When a message is selected, it collects all
the mailer entries related to that message's queue id and formats them
in a more readable fashion. By default, the log fields that are
printed are: date, from, to, ctladdr, stat, and notes.
This is much better than my cobbled-together multiple-grep scripts. Rather surprised to not find it in Debian...
I'm testing Bacula 3; the new release has just come out, and I'm very much looking forward to rolling it out here.
One of the things I've been doing is trying to get TLS working, which I utterly failed at in my last job. I must've failed to see these pages, which a) point out that the otherwise-excellent Bacula manual is (ahem) sparing when it comes to TLS, and b) explain that you need to put the cert files in places that strike me as unexpected.
Thus, in bacula-dir.conf you put the directives listing the director's cert/key in the Client section -- IOW, you say "use this key/cert combo when connecting to client foo." Meanwhile, on client foo, you add the client's cert/key directives in the Director section ("use this key/cert when talking to the director"), along with things like the CA cert and required CNs.
Oh, and did you know that you can debug SSL handshakes with openssl? True story.
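That looks something like this (a generic TLS example, not Bacula-specific; the host and path are placeholders):

openssl s_client -connect www.example.com:443 -CAfile /path/to/ca.pem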
You can configure OpenSSH's ~/.ssh/authorized_keys file to restrict the commands that a key is allowed to run via SSH -- thus, say, restricting a particular key to running rsync or dump. You can also restrict it to connections from certain hosts only; as the manual points out, this means that "name servers and/or routers would have to be compromised in addition to just the key."
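Here's a hedged example of such an authorized_keys line -- the host name, wrapper script, and key material are all placeholders:

# Only backup.example.com may use this key, and only to run the wrapper:
from="backup.example.com",command="/usr/local/bin/rsync-wrapper.sh",no-pty,no-port-forwarding ssh-rsa AAAAB3NzaC1yc2E... backup@example.com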
winerror.h describes this as a General access denied error. In the end, it turned out that when the account was created, the "user cannot change password" option was checked. Hope that'll help someone else's google-fu…

dig +short porttest.dns-oarc.net TXT

and watch the skies.
How to quiet noisy cron entries that send far too much to STDERR:

exec 3>&1 ; /path/to/script 2>&1 >&3 3>&- | egrep -v 'useless|junk' ; exec 3>&-

(The trick: fd 3 saves the original stdout; the script's stderr is redirected into the pipe while its stdout goes straight to fd 3, so only stderr gets filtered by egrep.)
I've been very busy of late, but the biggest news is that I've started a 3-month temporary part-time assignment here. It's a neat place, and feels a lot like a software startup. Even though it's a small group, they've got certain hardware requirements that are a lot bigger than what I've worked with before; it'll be interesting, to say the least.
A DHCP server at work is listening on multiple interfaces, some of which have multiple aliases, so the reply appears to come from one of those aliases -- not the "main" one that I want. The solution is to use the server-identifier option.
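With ISC dhcpd, that's a one-liner in dhcpd.conf (the address below is a placeholder for whichever interface the replies should come from):

server-identifier 192.168.1.1;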
From Ant's Eye View:
You should assume that systems on the public network will be abused. This is a lesson as old as the Internet, but every programmer seems to have to learn it for him/herself: if you make a system available to the public, people will abuse it.
Worth reading.
From dmesg(8):

-n level
       Set the level at which logging of messages is done to the
       console. For example, -n 1 prevents all messages, except panic
       messages, from appearing on the console. All levels of messages
       are still written to /proc/kmsg, so syslogd(8) can still be used
       to control exactly where kernel messages appear. When the -n
       option is used, dmesg will not print or clear the kernel ring
       buffer.
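So, to keep everything but panic messages off the console (syslogd still gets the lot):

dmesg -n 1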