So the other day I was asked to help get a bioinformatics tool
working. Tarball was up on Sourceforge, so it shouldn't be a problem,
right? Right. Download, skim the instructions, run "make" and we're
done. Case closed!
Only I had to look. Which was a mistake. Because inside the
tarball was another tarball. It was GNU coreutils, version
8.22. Which was dutifully compiled and built as part of the
toolchain. It was committed about 18 months ago because:
this will create a new sort that is used by chrysalis to run sort in
parallel speedup on hour system running a 13g dataset was from 46min
to 6min runtime
That is a significant speedup. Yes. And sure, it's newer than the
version in the last Ubuntu LTS (8.13), and 'way newer than the version
in CentOS 5 (5.97). But that is a tarball, even if it is only 8 MB,
in the subversion repo for a project that was published in Nature
Protocols. Why in hell wasn't it written up as a dependency in
the README? So yeah, I got angry: "I think I'm gonna submit a
patch with an Ubuntu ISO in it, see if they accept it."
I'm struggling with what to write here. This is bad practice, yes,
but what constructive, helpful alternative do I have to offer? The
scientists I work with are brilliant, smart people who do amazing
research, but their knowledge of proper (add scare quotes if you like)
development practice is sorely lacking. It's not their fault, and
folks like Software Carpentry are doing the angel's work to get
them up to speed. But riddle me this: if you're trying to get a tool
into the hands of a pretty new Linux user -- one who's going to base
the next 18 months of their work on how well your tool works -- how
do you handle this sort of thing?
Mark it in the README? That's great if they've got a sysadmin, and
Lord knows they should...but there are many that don't, or it's the
grad student in the corner doing the work and they're more focussed
on their thesis. (That's not a criticism.)
Throw an error? Maybe, or maybe a warning if it's less than version
umptysquat. That gets into all sorts of fun version parsing
problems.
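For what it's worth, GNU sort itself can take the sting out of the
version parsing. A sketch of the check -- the minimum version here is
made up, and yes, sort -V needs a recent-ish coreutils itself, irony
noted:

need=8.6   # hypothetical minimum for whatever feature you rely on
have=$(sort --version | head -n1 | awk '{print $NF}')
if [ "$(printf '%s\n%s\n' "$need" "$have" | sort -V | head -n1)" != "$need" ]; then
    echo "warning: coreutils $have found; $need or later recommended" >&2
fi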
Distribute a VM? Maybe -- but read C. Titus Brown's comments on
this. Plus, if we wince at the idea of telling a newbie "Just
go get it installed", imagine our faces when we tell them "just go
get the VM and run it." Ditto Docker, Vagrant or whatever new
hotness we cool kids are using these days.
Ports tree? Now we're getting somewhere. All we need to do is have
a portable, customizable, easily-extended ports tree that works for
lots of different Linux distros and maybe Unices. Hear that sound?
That's the NetBSD ports tree committer berserkers coming for your
brains. Because that work is HARD, and they are damned
unappreciated.
We have no good alternative to offer. I can be snotty all I want
(confession: IT'S SO MUCH FUN) but the truth is this is a hard
problem, and people who just want to get shit done are doing it the
best they can because they just want to get shit done. We have -- are
-- failing them. And I don't know what to do.
Did you know there was a fork of Bacula named Bareos? Not I.
Not sure whether to pronounce it "bar-ee-os" or "bear-o-s". Got
Kern Sibbald, Bacula's creator, rather upset. He promises to
bring over "any worthwhile features"...which is good, because there
are a lot.
Post by Matthew Green titled "How does the NSA break SSL?".
Should be reading that now but I'm writing this instead.
Got Netflix at home? Got IPv6? That might be why they think you're
in Switzerland and change your shows accordingly. In my case,
they thought I was in the US and offered to show "30 Rock" and
"Europa Report"...until I tried to actually stream them and they
figured out the truth. Damn.
Test-Driven Infrastructure with Chef. Have not used Chef
before, but I like the approach the author uses in the first half of
the book: here's what you need to accomplish, so go do it. The
second half abandons this...sort of unfortunate, but I'm not sure
explaining test infrastructure libraries (ServerSpec, etc) would
work well in this approach. Another minor nitpick: there's a lot
of boilerplate output from Chef et al that could easily be cut.
Overall, though, I really, really like this book.
Another busy day at $WORK, busy enough that I missed the Judea Pearl
lecture the CS dep't was hosting. On the way out the door I grabbed
the copy of "Communications of the ACM" with his face on the cover,
thinking I'd catch up. The two pieces on him were quite small,
though, so it was on to other fare.
"The Myth of the Elevator Pitch" caught my eye, but as I read it
I became more and more convinced that I was reading literary
criticism combined with mystic bullshit. Example:
At first glance, it appears that the elevator pitch is a component
of the envisioning process. The purpose of that practice is to tell
a compelling story (a "vision") of how the world would be if the
innovation idea were incorporated into it. [...] But the notion
that a pitch is a precis of an envisioning story is not quite right.
A closer examination reveals that a pitch is actually a combination
of a precis of the envisioning story and an offer. The offer to
make the vision happen is the heart of the third practice.
Ah, the heart of the third practice. (I feel like that should be
capitalized: "the Heart of the Third Practice." Better!)
The standard idea of a pitch is that it is a communication -- a
presentation transmitting information from a speaker to a listener.
In contrast, the core idea in the Eight Practices of innovation is
that each practice is a conversation that generates a commitment to
produce an outcome essential to an innovation. [....] The problem
with the communication idea is that communications do not elicit
commitments. Conversations elicit commitments. Commitments produce
action.
Mistaking it for communication -- yes, an easy mistake to make.
We can now define a pitch as a short conversation that seeks a
commitment to listen to an offer conversation. It is transitional
between the envisioning and offering conversations.
And back to the literary criticism. I feel like I've just watched a
new literary form being born. I'm only surprised that it seems the
CompSci folks scooped the LitCrit dep't.
(Of course, while I'm busy spellchecking all this, I am most definitely
not being published in "Communications of the ACM". So I suck.)
So Professor Michael Geist is running for a spot on the CIRA board of
directors. I want to vote for him. While I'm there I decide I should
really consider the other candidates as well. And after a very small
number of bios, it becomes very obvious that many -- maybe most -- are
focussing on growth, on marketing, on things that would never occur
to me to be part of what I thought was such an exclusively technical
domain. More fool me, I guess.
Example statement from candidate Jennifer Shelton, in answer to
the question: "What specific actions do you propose to overcome [CIRA's]
challenges and opportunities?"
Slowing organic growth rates will require CIRA to articulate, convey,
and deliver on a clear brand message that makes the value proposition
clear to all stakeholders. This message should include thought and
technological leadership to differentiate dot-ca from lower cost
alternatives.
GAH. GAH is my reaction. I can see what is probably meant, what I
could rephrase in a way that doesn't make me twitch, what would let me
avoid screwing up my face at the mention of this candidate's name --
but I'm damned if I can figure out why I should bother.
New competition from generic Top Level Domains (gTLDs) will arrive
this year. More than ever, CIRA needs to be focused and decisive with
marketing and strategic planning. CIRA needs the wisdom, the oversight
and the support of a knowledgeable and committed Board of Directors to
help shape the plan and to authorize appropriate action without
hesitation.
And:
CIRA is a high-performing organization. There is excellent leadership
from the CEO and in the functional areas. There is a culture of
innovation, achievement and respect. It's easy to be enthusiastic
about CIRA's very significant benefits for all Canadians and the
organization's continuing recognition as an international exemplar of
best practice in the domain industry.
(He goes on to compliment CIRA's "intentional [evolution] toward
greater effectiveness." Intelligent organizational design FTMFW!)
The further segmentation of the top domain name space will inherently
reduce some demand for .ca top level domain names. With many Canadian
companies focused on export markets and the possible availability of
custom specialized top level domains, the .ca top level domain space
may find itself facing limited growth and it may be a challenge to
project relevance to many Canadian potential registrants. It will be
critical for CIRA to project value to the registrants and seek to
maximize its relevance in the more complex and splintered top level
domain space in the near future.
The issue is a marketing one. I believe what is needed is a two pronged
approach. The first element would be designed to make the public much
more aware of the .ca domain name designation and to make the
designation much more attractive to and desirable for the users.
CIRA needs to engage its stakeholders to understand their needs and
expectations and prioritize targeted solutions.
By way of welcome relief, Adrian Buss is just plain confusing:
Members of the board of directors have to be seen to be the end
product that CIRA delivers. The membership is at the core of CIRA's
governance model, without an engaged membership .CA just becomes an
irrelevant TLD.
Jim Grey says, let's carpet the registrars with flyers:
Continuing to increase our .ca brand awareness and preference with
Canadians and to strengthen our relationship with registrars. In an
increasingly competitive world the .ca brand awareness becomes key to
continued growth. In addition our only sales channel to market is
through registrars who will be inundated with new gTLD's and increased
sales incentives from existing gTLD's. CIRA will need to strengthen
the relationship with registrars.
I understand my role as a director is to implement CIRA's mission,
adopt CIRA's vision, and live CIRA's values in my personal and
professional affairs, both with CIRA and in my office as CEO of a
group of registrars. I will at all times reflect CIRA's brand and
show my pride for .CA, which is the foremost Canadian internet
identity for a domain.
Continued implementation and deployment of current technology by
skilled professionals will provide the environment for an agile,
secure and stable registry service. Focusing on a talent management
program for all areas of the business will foster and create a culture
of excellence.
For instance, Rick Sutcliffe would best represent the interests of
academics (among the first ever connected to the net), small business,
and non-profits. He is also a fiscal conservative.
Also, "dot polar bear for all!":
In addition, [Rick Sutcliffe] believes that CIRA can better promote
its brand to Canadians, so that when they think "Internet" they
think CIRA, and when they think "domain" they automatically think
".ca".
With the many new TLDs now coming on line, he is more convinced than
ever that CIRA needs to expand its product line, offering registry
services for any new TLD that has relevance to its mandate for
Canadians.
There's the odd mention of IPv6 or DNSSEC. One candidate
mentions IDN French language support. (Sorry, two.) And
one, bless their heart, talks about the downside of a .CA domain
having its SSL certificate revoked by a foreign SSL cert registry.
I know this rant is not constructive. I know that short-sightedness
lives in my heart, that my moral failure is that I'm unable to stop
twitching at the jargon and see the good ideas that (may) lurk within.
I know that I'm giving Michael Geist and Kevin McArthur a free
pass...doubly so for McArthur, who talks about launching "a parallel
alternative root as a contingency planning exercise and as deterrent
to foreign political interference within the global root zone", which
is a BIG can of worms that is not (and cannot be) discussed in
nearly enough depth in a 500 word essay to be used as a campaign
plank.
But oh god, the...the relentless focus on growth, growth, GROWTH is
enough to make me want to huddle in the corner with a bag of chips and
a copy of "Das Kapital" on my laptop.
News flash: organizations evolve toward self-perpetuation. Film at
11.
The reason, at least for us, is that we'd installed the
python-matplotlib package from the OpenSuSE Education repository,
and python-numpy from the standard OpenSuSE library. The problem was
fixed by running:
sudo zypper install python-numpy-1.6.2-39.1.x86_64
which forced the upgrade from 1.3 to 1.6.
But, whee! you can actually install 1.6.2-39.1 from TWO repos:
Education and Science. Yes, my fault; my OpenSuSE installs are all
over the fucking map. But I wonder what it might take to ensure that
minimum versions are somehow noted in RPMs, or, you know, not having
multiple universes of packages. Fuck, I hate OpenSuSE.
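At least zypper will tell you which repo owns which version, which
helps a little when untangling the mess:

zypper search -s python-numpy    # every available version, with its repo
zypper repos -d                  # and where each of those repos points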
Software project A is kept in git. A depends on unrelated project
B, so B is configured as a submodule in A. Unfortunately, A's
Makefile assumes B's location in that subdirectory, with no
provision for looking for its libraries and header files in the
usual locations. Good or whack? @FunnelFiasco: "I concur. It's
total bullshit."
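For contrast, here's roughly what A's Makefile could have allowed; one
overridable variable and a system-installed B becomes usable. (B_DIR
and the layout are hypothetical, sketched in shell comments.)

# in A's Makefile:
#   B_DIR    ?= external/B
#   CPPFLAGS += -I$(B_DIR)/include
#   LDFLAGS  += -L$(B_DIR)/lib
# then anyone with B already installed just runs:
make B_DIR=/usr/local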
Trying to take care of the HP RFU vulnerability. Miss the bit
that says my printer doesn't have the ability to disable this built
into the web interface. Decide I need to download HP Web Jetadmin.
Forced to register for an "HP Passport Account". Fill in country of
origin, among other details. Click to go back to download page, get
"Sorry, we can't do that" message. Navigate back to download page.
Fill in country of origin again. Fill in name of company. Download
-- 300 MB. Go to download documentation; I see "installation
instructions", "terms of use" and "post-sales support." What a
crock.
-- Oh, and now I discover that it's going to install Microsoft SQL
Server. Fucking hell. And that's not even including the rat's nest
of menus.
Don't get me wrong: I can see how this would be immensely useful for
a large number of printers. (And I strongly suspect that "large"
means "greater than one".) But for one printer, it's an amazing
overhead for such a small thing. Worse, I'm willing to bet that my
whole task could be reduced to a single SNMP set command. But I'm too
lazy to install Wireshark and figure out what that would be.
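If it is settable over SNMP, the incantation would look something
like this -- with the big caveat that the OID below is a placeholder,
not anything published by HP:

snmpset -v1 -c private printer.example.com .1.3.6.1.4.1.11.X.X.X.0 i 1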
So...the best way to fix the pernicious issue of displaying /etc/motd
to the end user (read: cat) was 1) to make a PAM module responsible
for it, 2) get rid of the "old way" of updating it (cron job), and
last but not least, 3) add a completely undocumented behavior (if
/etc/motd is a symlink, it is dynamic) that contradicts the man page
for said module.
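For the record, here's how the "dynamic" behaviour shows itself on
the Ubuntu boxes I've seen (the paths are Ubuntu's; an assumption for
anything else):

ls -l /etc/motd          # /etc/motd -> /var/run/motd, hence "dynamic"
ls /etc/update-motd.d/   # the scripts pam_motd runs to regenerate it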
I'm going back to blaming Ubuntu. I also need to step out of the
room and have an aneurysm now.
Okay, I like to rant. But honestly, sometimes there's just such good
material it would be a crime not to.
I got cross-shipped a computer part recently. Although I was in the
middle of a ticket with the vendor, and thus it wasn't unexpected, the
first I found out about it was when I received this email from
Loomis, The Courier Company of the DAMNED:
Date: Fri, 16 Sep 2011 13:06:40 -0400 (EDT)
From: nx6122@loomis-express.com
To: undisclosed-recipients: ;
Subject: 09/16/2011 13:05:16 Easyship 2.0 Shipment Notification
09/16/2011 13:05:16 See attached file/Voir le fichier ci-joint.
There was an attachment: a base64-encoded file called
"110916130516REPORT.DOC". Against my better judgement, I saved the
attachment and viewed it with hexdump, fully expecting it to be full
of Javascript or PDF or Flash or whatever the kids are using these
days. Instead, I found that it was a plain ASCII file -- not a Word
document, not even an RTF document -- that had delivery information
for the part I was being shipped.
The "From:" address' domain is "loomis-express.com". The "Received:"
headers mentioned various "dhl.com" servers. And the plain text in
the REPORT.DOC said to visit "loomisexpress.com" for further
information. That's three different domains.
This is unbelievably bad. I was sure I was being phished. If a
user had asked me about this email, I would have told them to delete
it. And then I would have scrubbed their hard drive with lye.
Against that, the plain ol' outdated info in the second email,
telling me how to schedule a pickup for the returned part, was
just endearing. The link to their web page just gave a 404; when I
found the pickup page by myself, it wanted me to set up an account
before it would do anything. (Probably a fair request, but I don't
want to be bothered since I'M NOT GOING TO USE THEM AGAIN.)
I gave up and called the toll-free number in the email; I got a stern
voicemail from DHL saying that the numbers had changed, and this one
was for DHL customers ONLY and Loomis Express people should either
visit loomis-hyphen-express-dot-com or call a new toll-free
number...which I noticed was different from the toll-free number
displayed boldly on the return shipping label.
I've emailed them about their poor email design, and got a response:
Thank you for taking the time to email us. On behalf of Loomis
Express, please accept our sincerest apologies for the delayed
response to this email.
We have copied the technical support group on this reply so that they
could look into the matter and resolve if this actually came directly
from Loomis Express.
So: the quota file on an ext3 filesystem contains usage information
("How much disk space is this user using?"). It's updated when quotacheck is
run, typically at boot time. After that the kernel has up-to-date
info on quotas but doesn't write it to disk for performance reasons.
So the kernel will deny/allow writes as necessary.
But the userland tools used -- particularly by users -- to monitor or
report on quota state ("How much space am I allowed to use?") only
use those files. And those aren't updated unless quotacheck is
run...which is either at boot time, or when called from cron. And to
run it on a live system, you've got to turn off quotas to prevent
corruption.
Bleah.
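For completeness, the dance for refreshing the on-disk numbers on a
live filesystem goes something like this (a sketch; /home is a
stand-in):

quotaoff -ug /home
quotacheck -ugm /home    # -m: don't try to remount read-only
quotaon -ug /home
repquota /home           # the userland reports are now current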
UPDATE: Near as I can tell, I was checking quotas while not realizing
that quotas had been turned off for the filesystem I was checking,
and thus quota reported bogus data.
At work, I'm about to open up the Rocks cluster to production, or at
least beta. I'm finally setting up the attached disk array, along
with home directories and quotas, and I've just bumped into an
unsettled question:
How the hell do I manage this machine?
On our other servers, I use Cfengine. It's a mix of version 2 and 3,
but I'm migrating to 3. I've used Cf3 on the front end of the cluster
semi-regularly, and by hand, to set things like LDAP membership,
automount, and so on -- basically, to install or modify files and make
sure I've got the packages I want. Unlike the other machines, I'm not
using cfexecd to run Cf3 continuously.
The assumption behind Cf3 and other configuration management tools --
at least in my mind -- is that if you're doing it once, you'll want
to do it again. (Of course, there's also stuff like convergence,
distributed management and resisting change, but leave that for now.)
This has been a big help, because the changes I needed to apply to the
Rocks FE were mostly duplicates of my usual setup.
If/when I change jobs/get hit by a bus, I've made it abundantly clear
in my documentation that Cfengine is The Way I Do Things. For a
variety of reasons, I think I'm fairly safe in the assumption that Cf3
will not be too hard for a successor to pick up. If someone wants
to change it afterward, fine, but at least they know where to start.
OTOH, Rocks has the idea of a "Restore Roll" -- essentially a package
you install on a new frontend (after the old one has burned down, say)
to reinstall all the files you've customized. You can edit a
particular file that creates this roll, and ask it to include more
files. Edited /etc/bashrc? Add it to the list.
I think the assumption behind the Restore Roll is that, really, you
set up a new FE once every N years -- that a working FE is the result
of rare and precious work. The resulting configuration, like the
hardware it rests on, is a unique gem. Replacing it is going to be a
pain, no matter what you do. There aren't that many Rocks developers,
and making it Really, Really Frickin' Nice is probably a waste of
their time.
(I also think it fits in with the rest of Rocks, which seems like some
really nice bits surrounded by furiously undocumented hacks and
workarounds. But I'm probably just annoyed at YET ANOTHER
UNDOCUMENTED SET OF HACKS AND WORKAROUNDS.)
I found that after a re-install of Rocks 5.0.3, not all the files I
asked for were restored! I suspect it has to do with the order things
get installed.
So now I'm torn.
Do I stick with Cf3? I haven't mentioned my unhappiness with its
obtuseness and some poor choices in the language (nine positional
arguments for a function? WTF?). I'm familiar with it because I've
really dived into it and taken a course at LISA from Mark Burgess
his own bad self, but it's taken a while to get here. But it is the
way I do just about everything else.
Or do I use the Rocks Restore Roll mechanism? Considered on its own,
it's the least surprising option for a successor or fill-in. I
just wish I could be sure it would work, and I'm annoyed that I'd have
to duplicate much of the effort I've put into Cf3.
The Canadian Security Intelligence Service, Canada's principal
intelligence agency, routinely transmits to U.S. authorities the names
and personal details of Canadian citizens who are suspected of, but
not charged with, what the agency refers to as "terrorist-related
activity."
The criteria used to turn over the names are secret, as is the
process itself.
Quote:
In at least some cases, the people in the cables appear to have been
named as potential terrorists solely based on their associations with
other suspects, rather than any actions or hard evidence.
Quote:
The first stop for these names is usually the so-called Visa Viper
list maintained by the U.S. government. Anyone who makes that list is
unlikely to be admitted to the States.
Given Washington's policy of centralizing such information, though,
the names also go into the database of the U.S. National
Counterterrorism Centre. Inclusion in such databases can have several
consequences, such as being barred from aircraft that fly through
U.S. airspace.
Or, as Canadian Maher Arar discovered in 2002, the consequences can be
much worse: arrest, interrogation, even "rendition" to another country.
Quote:
"We don't want another Arar," said the security official. But at the
same time, he said, CSIS is acutely aware that if it did not pass on
information about someone it suspected, and that person then carried
out some sort of spectacular attack in the U.S., the consequences
could be cataclysmic for Canada.
U.S. authorities, already suspicious that Canada is "soft on terror,"
would likely tighten the common border, damaging hundreds of billions
of dollars worth of vital commerce.
A former senior official, who also spoke to CBC on the basis of
anonymity, put it more bluntly: "The reality is, sorry, there are bad
people out there.
"And it's very hard to get some of those people before a court of law
with the information you have. And so there has to be some sort of
process which allows you to provide some sort of safeguard to society
on both sides of the border."
Furthermore, he said, "it's not a fundamental human right to be able
to go to the United States."
No, it's not a fundamental human right to be able to go to the United
States. It is a fundamental human right not to be kidnapped and
tortured.
Xmas vacation is when I get to do big, disruptive maintenance with a
fairly free hand. Here's some of what I did and what I learned this year.
Order of rebooting
I made the mistake of rebooting one machine first: the one that held
the local CentOS mirror. I did this thinking that it would be a good
guinea pig, but then other machines weren't able to fetch updates from
it; I had to edit their repo files. Worse, there was no remote
console on it, and no time (I thought) to take a look.
Lesson: Don't do that.
Automating patching
Last year I tried getting machines to upgrade themselves via Cfengine.
This didn't work well: I hadn't pushed out the changes in advance,
because I was paranoid that I'd miss something. When I did push it
out, all the machines hit on the cfserver at the same time (more or
less) and didn't get the updated files because the server was refusing
connections. I ended up doing it by hand.
This year I pushed out the changes in advance, but it still didn't
work because of the problems with the repo. I ran cssh, edited the
repos file and updated by hand.
This worked okay, but I had to do the machines in separate batches --
some needed to have their firewall tweaked to let them reach a mirror
in the first place, some I wanted to watch more carefully, and so
on. That meant going through a list of machines, trying to figure out
if I'd missed any, adding them by hand to cssh sessions, and so on.
Lesson: I need a better way of doing this.
Lesson: I need a way to check whether updates are needed (sketch below).
I may need to give in and look at RHEL, or perhaps func or better
Cfengine tweaking will do the job.
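Two pieces of that better way, sketched in shell (the window size and
the yum-centric approach are mine, not gospel):

# splay each host's run so they don't all hit the server at once:
sleep $((RANDOM % 600)) && yum -y update
# check whether a host even needs updating; check-update exits 100
# when something is pending:
yum -q check-update >/dev/null; [ $? -eq 100 ] && echo "$(hostname): updates needed"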
Staggering reboots
Quick and dirty way to make sure you don't overload your PDUs:
sleep $(expr $RANDOM / 200) && reboot    # $RANDOM tops out at 32767, so 0-163 seconds of jitter
Remote consoles
Rebooting one server took a long time because the ILOM was not working
well, and had to be rebooted itself.
Lesson: I need to test the SP before doing big upgrades; the simplest way of doing this may just be rebooting them.
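Something like this before the maintenance window would have saved me
grief (a sketch; assumes the ILOMs answer IPMI over the network):

ipmitool -I lanplus -H sp-hostname -U root chassis status   # is the SP answering?
ipmitool -I lanplus -H sp-hostname -U root mc reset cold    # and/or reboot the SP itself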
Upgrading the database servers w/the 3 TB arrays took a long time:
stock MySQL packages conflicted with the official MySQL rpms, and
fscking the arrays takes maybe an hour -- and there's no sign of
life on the console while you're doing it. Problems with one machine's
ILOM meant I couldn't even get a console for it.
Lesson: Again, make sure the SP is okay before doing an upgrade.
Lesson: Fscking a few TB will take an hour with ext3.
Lesson: Start the console session on those machines before you reboot, so that you can at least see the progress of the boot messages up until the time it starts fscking.
Lesson: Might be worth editing fstab so that they're not mounted at boot time; you can fsck them manually afterward. However, you'll need to remember to edit fstab again and reboot (just to make sure)...this may be more trouble than it's worth.
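The sketch, for what it's worth (device and mountpoint are stand-ins):

# /etc/fstab: add noauto so boot neither fscks nor mounts the array
#   /dev/sdb1  /data  ext3  defaults,noauto  0 0
# then, at leisure, and with a progress bar this time:
fsck.ext3 -C0 /dev/sdb1 && mount /data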
OpenSuSE
Holy mother of god, what an awful time this was. I spent eight hours
on upgrades for just nine desktop machines. Sadly, most of it was my
fault, or at least bad configuration:
Two of the machines were running OpenSuSE 11.1; the rest were
running 11.2. The latter lets you upgrade to the latest release
from the command line using "zypper dist-upgrade"; the former does
not, and you need to run over with a DVD to upgrade them.
By default, zypper fetches a package, installs it, then fetches the
next. I'm not certain, but I think that means
there's a lot more TCP overhead and less chance to ratchet up the
speed. Sure as hell seemed slow downloading 1.8GB x 9 machines this
way.
Graphics drivers: awful. Four different versions, and I'd used the
local install scripts rather than creating an RPM and installing
that. (Though to be fair, that would just rebuild the driver from
scratch when it was installed, rather than do something sane like
build a set of modules for a particular kernel.) And I didn't
figure out where the uninstall script was 'til 7pm, meaning lots of
fun trying to figure out why the hell one machine wouldn't start X.
Lesson: This really needs to be automated.
Lesson: The ATI uninstall script is at /usr/share/ati/fglrx-uninstall.sh. Use it.
Lesson: Next time, uninstall the driver and build a goddamn RPM.
Lesson: A better way of managing xorg.conf would be nice.
Lesson: Look for prefetch options for zypper (sketch below). And start a local mirror.
Lesson: Pick a working version of the driver, and commit that fucker to Subversion.
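On the prefetch lesson: newer zypper releases grew a download mode
that fetches everything before committing; whether the 11.x zypper
has it is exactly the sort of thing I'd have to check first:

zypper dup --download in-advance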
Special machines
These machines run some scientific software: one master, three slaves.
When the master starts up at boot time, it tries to SSH to the slaves
to copy over the binary. There appears to be no, or poor, rate
throttling; if the slaves are not available when the master comes up,
you end up with the following symptoms:
Lots of SSH/scp processes on the master
Lots of SSH/scp processes on the slave (if it's up)
If you try to run the slave binary on the slave, you get errors like
"lseek(3, 0, SEEK_CUR) = -1 ESPIPE (Illegal seek)" (from strace) or
"Text file busy" (ETXTBSY, from running it in the shell).
The problem is that umpty scp processes on the slave are holding the
binary open for writing, and the kernel refuses to exec a binary
that's open for writing. lsof (sketched below) makes this painfully
obvious.
Lesson: Bring up the slaves first, then bring up the master.
Lesson: There are lots of interesting and obscure Unix errors.
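The lsof check, sketched (the binary's path is a stand-in):

lsof /opt/app/bin/slave    # every scp still holding the binary open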
I also ran into problems with a duff cable on the master; confusingly,
both the kernel and the switch said it was still up. This took a
while to track down.
Lesson: Network cables are surprisingly fragile at the connection
with the jack.
Virtual Machines
It turned out that a couple of my kvm-based VMs did not have jumbo
frames turned on. I had to use virt-manager to shut down the
machines, switch their NICs to virtio, then reboot. However, kudzu
on the VMs then saw these as new interfaces and did not configure them
correctly. This caused problems because the machines were LDAP
clients and hung when the network was unavailable.
Lesson: To get around this, go into single-user mode and copy
/etc/sysconfig/network-scripts/ifcfg-eth0.bak to ifcfg-eth0.
Lesson: Be sure you're monitoring everything in Nagios; it's a
sysadmin's regression test.
Refuse to update your software so that it uses a Makefile. Yes, I
know you're not only 'way smarter than I am but you've got a source
tree going back to 1977. I don't care; editing six different shell
scripts is not the way to do things.
Sprinkle those six scripts with assumptions about which software is
present and where the source code is being compiled. Document most
of them.
Run tests but carefully delete all the results; don't include an
option to save them. That way I have to edit your scripts to figure
out what the hell went wrong.
Assume the presence of csh for everything, rather than
POSIX-standard sh.
Put configurable options inside shell scripts, rather than in a
configuration file or allowing them to be set by arguments to those
scripts.
Include directions like "Perhaps change foo, bar and baz", without
explaining why or what they're set to. When tests later fail
because you didn't properly set foo, bar and baz, don't explain
where these are set or how they affect the tests.
Set a hard-coded location for temporary output. Die silently when
those locations aren't present, rather than explaining why or
offering to create them or using /tmp. Refuse to overwrite
already-present files, but don't explain this anywhere; instead, say
that they might be useful next time.
Have important variables, like the hard-coded location for temporary
output, set in two or more different places. Suggest editing some
of them.
Have test failure indicated by "!!FAILED", ensuring a moment's
confusion about whether that means "FAILED!", "NOT FAILED!" or
"NOT NOT FAILED".
Update, April 20th 2010:
Be so glad that you're done installing this software that you never
come back to document how you did it in the first place, leaving no
clues but a short rant on your blog. Sigh.
The flash demo for Dell's ML6000 tape library boasts that it's "completely self-aware". Not sure I want SkyNet running my backups…
O'Reilly has an upcoming webcast on -- deep breath -- "Advanced Twitter for Business". (At least they didn't call it a webinar.) When I told my wife about this, she said, "So...you and O'Reilly break up yet?"
And did I mention the dream I had a while back about a Sun laptop that looked like an X4200 server folded in half? In the dream it ran nearly perfectly, except when you tried to go to a web page with flash; then it would crash, and a movie of Matt Stone would play, apologizing on behalf of Jonathan Schwartz and everyone else at Sun.
I'm playing with the CVS version of Emacs after reading about some of the new features in what will become Emacs 23. It's nice, but the daemon mode isn't quite multi-tty — you can run Emacs server, detached from any TTY, but if you try connecting to it with multiple emacsclient instances, the first one is where all the TTY action goes. Not sure what I'm missing.
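For reference, the dance I'm attempting, assuming the Emacs 23
daemon; maybe I just need a newer snapshot:

emacs --daemon       # start the server, detached from any TTY
emacsclient -t       # terminal 1: gets a frame
emacsclient -t       # terminal 2: should get its own frame too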
From a list of known issues with installation of Office 2008 for Mac. Number one:
Office 2008 updates cannot be installed if the Microsoft Office 2008
folder was moved, renamed, or modified
Office Installer installs Microsoft Office 2008 for Mac in the
Applications folder. If you move the Microsoft Office 2008 folder to
another location on your computer, or if you rename or modify any of
the files in the Microsoft Office 2008 folder, you cannot install
product updates. To correct this issue so that you can install product
updates, drag the Microsoft Office 2008 folder to the Trash, and then
reinstall Office 2008 from your original installation disk.
I can't download the volume license version of Office 2008 for Mac by
using Safari
Cause: Downloading the volume license version of Microsoft Office 2008
for Mac is unsuccessful when you use the Safari browser.
Solution: We recommend that you use the latest version of the Mozilla
Firefox Web browser (http://www.mozilla.com) to download the volume
license versions of the Microsoft Office 2008 for Mac suite or
stand-alone applications.
There are always timesinks at a job: the things that suck up all your
spare time, that interrupt what you're doing and force you on to
something else. They're urgent, or they're complicated, or they're
obscure and you only ever touch them every six months. If you're
really unlucky, they're all three. They drain the life from you; a
good day turns shitty, and an already-shitty day becomes
nigh-unbearable.
The website is one such timesink at my current job. It's a veritable
Grand Canyon of different technologies, databases, and code. You can
examine it and, like a geologist, date particular pages or code with
great accuracy, judging by clues like composition, surroundings,
indentation patterns ("Oooh, K&R crossed with…crack?"), and
previous experience. When an Urgent Request for Web Changes comes in
(and they're all urgent), figuring out how to do it means figuring out
how that particular page was generated in the first place: static?
dynamic? CMS? And then you have to figure how you can meddle with it:
logging into Mambo, the CMS of the damned? If it's static: does the
URL map nicely to the filesystem, or is there a hidden Apache Alias
directive somewhere? Do you have permissions to open the file, or will
it take sudo to chown it, or another nagging email to a coworker to
please check their changes into RCS? And if, God help you, it's
dynamic…but no; that mess of spaghetti should stay down. There's no
sense bringing it up again simply for prurient purposes.
Sunray terminals are another timesink. When they work they work very
well. I like the energy-saving aspects of it — both electrical and my
own; one machine to manage is always better than 40. But when they
don't work, it's a pain. Has a session become wedged? Is it GNOME's
fault? Has Adobe Acrobat decided to eat up all the CPU again? If so,
is that worse than the security holes that remain unfixed in the later
version? Why is Solaris 10 randomly not sending RST packets when it
receives a SYN on a port it's not listening on? (If anyone has any
ideas, please let me know.) Has a cheap switch, installed because
no one believed that an office meant for one might someday hold four,
gone off its meds again?
These things make me throw up my hands and curse my fortune. I
have no one unfortunate enough to be my subordinate, so it's up to me
to hack and slash through the possibilities until it's finished, or at
least put off for another day.
But LDAP is worse.
When it works it works very, very well. Failover works, replication
works, and an account created here zips there without a moment's
thought. But when it fails, it's urgent and complicated and
obscure all at once, and sometimes in degrees polynomial.
At last count we have four different master-master replicas, running
three or possibly four different versions of Sun's Directory Server
(under six different product names, no less). There are replication
agreements spanning versions that aren't even supposed to tolerate
each other's existence, using two different encryption protocols and
NetBEUI. Two completely different "helpful" management tools vie for
our attention, lacking only flash plugins to trigger seizures. Only
one server can be poked or prodded with a command line
tool. Diagnostics are by turns nonexistent or endearingly fickle.
To be fair, the vendor documentation is vast and makes fine kindling,
though its promise to fully document error codes like error
457758854b: BER error 45775885b4 is best regarded as a bitter joke by
a jaded software engineer who died alone, unloved and without stock
options. (Our own documentation is marginally better: no carbon is
released when it is destroyed.) Thus, keeping track of ACLs (say), and
exactly which unholy wrath you will invite upon your head should you
make a mistake when granting or revoking privileges to read a
particular entry, means digging through half-remembered conversations,
drunken Google searches, year-old notebooks and a quiet, solitary
introspection normally reserved for contemplating your own impending
doom.
On top of everything else, LDAP encompasses everything, or nearly
so. Email routing, website privileges, database access, even TCP
checksum computation: all are kept in, or depend on, or just like to
hold hands with, LDAP. It's enough to make me wistful for the good old
days of NIS.
In a few minutes I am going to go back to work and try to figure out
why a new account has stopped, in mid-replication, halfway between
$UNIVERSITY and $OTHER_UNIVERSITY. It will take me the rest of the
afternoon. I will use words that my own son does not know I know. And
I will come out of it shrunken, withered, beaten down and humble.
Okay, so it isn't quite as bad as the time I threw 3,000 incoming
messages for an ISP into my home directory. But I've just figured out
that the reason a) $VENDOR didn't get back to me and b) it's been so
quiet for the last few days is because all email was going to a file
called X-Original-Sender because of one missing *. (In fact, that
may also have been the cause of the first big error...)
$ sudo -u sympa /opt/pkg/bin/perl /opt/pkg/sympa/bin/sympa.pl --help
Line 38, unknown field: bounce_path in sympa.conf
No web archives directory: /opt/pkg/arc
MHonArc is not installed or /usr/bin/mhonarc is not executable.
Language::SetLang(), missing locale parameter
Missing Return-Path in mail::smtpto()
Missing directory '/opt/pkg/bounce' (defined by 'bounce_path'
parameter)
Configuration file /opt/pkg/etc/sympa.conf has errors.
What this error message doesn't bother saying is that it has silently
sourced wwsympa.conf as well as sympa.conf, and that the
errors come from that file. And no, there is no explicit sourcing
of wwsympa.conf in sympa.conf.
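Brute force is about the only recourse; something like this at least
shows which file defines the offending parameters:

grep -rn bounce_path /opt/pkg/etc/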
I've complained about Blastwave before, but this is just terrible.
Trying to install VLC on a Solaris 10 machine using Blastwave. Says
that CSWcommon is out of date, so please run pkg-get -u. As this
always includes thousands of prompts that look like this:
The following package is currently installed:
CSWoldapclient openldap_client - OpenLDAP client executables (oldapclient)
(sparc) 2.3.31,REV=2007.01.07
Do you want to remove this package? [y,n,?,q] y
## Removing installed package instance <CSWoldapclient>
## Verifying package <CSWoldapclient> dependencies in global zone
WARNING:
The <CSWoldap> package depends on the package currently
being removed.
Dependency checking failed.
Do you want to continue with the removal of this package [y,n,?,q]
...I look around for a way to automate this. And surprise, there
is, and I've missed it the whole time. My bad. So: pkg-get -f
upgrade it is, then.
It runs for 45 minutes and stops with an error about CSWcommon:
Current administration requires that a unique instance of the
<CSWcommon> package be created. However, the maximum number of
instances of the package which may be supported at one time on the
same system has already been met.
Hm, sez I. That's strange, but maybe that's what it's like for package
managers that suck. pkg-get -r common and pkg-get -i common, and
I'm ready for the upgrade again.
Somehow in the process I managed to remove the pkg_get package,
which (surprise) contains the pkg-get command. Fortunately I have a
backup copy around and use that to install pkg_get. Life continues.
And it's not for another 15 minutes after that that I notice that
the package manager is going in loops. It keeps going over the same
packages again and again, giving the same error about unique
instances each time. A quick search turns up this link, which
tells me I'm a fool for believing the help offered by pkg-get:
$ pkg-get -h
pkg-get, by Philip Brown , phil@bolthole.com
(Internal SCCS code revision 3.6)
Originally from http://www.bolthole.com/solaris/pkg-get.html
pkg-get is used to install free software packages
pkg-get
Need one of 'install', 'upgrade', 'available','compare'
'-i|install' installs a package
'-u|upgrade' upgrades already installed packages if possible
'-a|available' lists the available packages in the catalog
'-c|compare' shows installed package versions vs available
'-l|list' shows installed packages by software name only
Optional modifiers:
'-d|download' just download the package, not install
'-D|describe' describe available packages, or search for one
'-U|updatecatalog' updates download site inventory
'-S|sync' Makes update mode sync to version on mirror site
'-f' dont ask any questions: force default pkgadd behaviour
Normally used with an override admin file
See /var/pkg-get/admin-fullauto
'-s ftp://site/dir' temporarily override site to get from
and that the correct way to do what I want is to run:
true | sudo pkg-get upgrade
I admit that I neither knew nor sought to find out what "default pkgadd behaviour" would be, so that's my fault. I admit that I was the one who borked things by removing the pkg-get command. I admit that I did not think to record all of this with script, so at the moment I'm going on scribbled notes and memory. This is not a bug report, which is what I really should be writing. These are all things I did wrong or badly.
But isn't this what apt has fixed? On its worst day, I've never
had to set up yes to be the drinking bird that would let me
get stuff done. And — when all was done, and I got to go back to
installing VLC — I've never had it depend on gcc.
I swear, Sympa has the worst fucking documentation of any
goddamned software I've ever used. I'd rather view /dev/random with
Internet Explorer than try to figure out what the hell this software
does.
Example: List creation. Since Sympa is mailing list software, this
would seem to be pretty basic. It is if you're using the web
management feature, which they document. But if you're not? Well, now,
why would you do such a thing? Are you a Communist or something?
Here's what I know: there's a Perl script here that creates the Sympa
config files for various lists as needed. Then these lists magically
show up. This appears to be related to the task manager portion of
Sympa, which seems to create lists for the new config files. Or maybe
Sympa checks the config files when incoming email comes in, and that's
how it works. Either way, it sure as hell isn't documented anywhere;
not for the older version we have, nor for the newer version (which
seems to be nothing more than a wiki dump, with all its attendant
lack of organization).
This is how I imagine Samuel L. Jackson leading off a conversation
with the writers of the PHP language (edited to be less obscene and
offensive).
In the name of all that is holy and right, please explain to me why
the fuck PHP's preg_replace() takes delimiters for the first
argument, but not the second. IOW, Perl's
$foo =~ s/baz/bum/;
becomes
preg_replace('/baz/', 'bum', $foo);
Yes, I should've just RTFM. You're completely right. But this just bit
me in the ass, after spending 10 minutes wondering WTF was going
wrong, and a little fucking consistency goes a long fucking way.
Do you have any idea how fucking insane the h.323 protocol is? Anyone
who runs an h.323 network should get shoved out a window, beaten, flayed,
spanked, shot, disemboweled, hung, and forced to listen to humppa music. If
you want to firewall h.323, go commit yourself to an asylum with
straitjackets and padded walls -- at least you'll be in common
company with the other Linux wackos.
Up until today, I would've told you that the stupidest thing I'd read
on the Internet was a white paper titled "Is PowerPoint An E-Learning
Solution?" But OMG ponies, the bar has been raised.
Precisely why a made-up word making it to Google should be
considered news is never really explored. Wired's whole-hearted
gushing about someone who "has registered freedbacking domains and
plans to aggregate freedbacking comments on a new website next week"
is also a nice touch -- way to accelerate the IPO! Finally, you've got
the thoughtfully-placed-last obligatory OTOH about how "consumer
ignorance and laziness could also keep the value of the suggestions
low."
A user at work wanted to move from a desktop machine to a laptop. The
Windows profile moved over just fine, so all that was left to do was
copy over his outlook.pst. Only it turns out his desktop's hard
drive has been quietly failing for a while, and there's some
corruption right in his 1.2GB Outlook file. Well, fuck.
The Inbox Repair Tool is meant to help with this sort of
thing. It took me a while to find a mention of it, longer to realize
that it was actually called scanpst.exe, and even longer to decide
that the Windows search tool wasn't going to find C:\Program
Files\Common Files\MAPI\1033 -- a fact that is fucking buried in
Microsoft's Office support section. (Why 1033? Something to do with
Unicode and US English character sets.) Of course, it didn't work.
So okay, what about getting Outlook to export to another file? Good
idea! Only it fails about 700MB through, and there's no indication
what worked and what didn't -- so no chance for the user to decide if
that's enough or not.
So what about exporting a subset of the folders, seeing what fails,
and then repeating the process without the failing folder? A little
tedious, sure, but it'll work, right? Wrong: you can export one
folder, or you can export one folder and its subfolders, but you
cannot export more than one folder at one time. Jesus fucking
Christ!
Workaround for that was to copy folders (one at a fucking time,
natch) to another folder (call it Backup) and try exporting that --
and then see what fails, yadda yadda. But natch, that doesn't work
either. You have to watch closely to see what folders are being
exported, and anyway a folder may be displayed as being exported more
than once, so you still don't know whether a given folder may have
worked.
Plus, there was the failing hard drive (remember that?); I suspect
that this new backup folder was just getting thrown on the same
crappy chunk of hard drive, making the export of the Backup folder
fail in interestingly inconsistent ways. And of course, the whole
process takes fifteen minutes to fail, during which time I can't do
anything else and neither can the user.
And in the middle of my frustration and rage, an even greater rage
welled up in me when I realized that Outlook had totally ruined this
guy's email.
Think about it! Here's all this plain text email -- even attachments
are encoded in ASCII -- and it has been completely fucking borked by
being irretrievably (well, in this case anyway) converted to some
proprietary binary format that is completely opaque to me, without at
least the saving grace of having good tools for its manipulation
available. Redundancy, ease of recovery and ease of manipulation have
been thrown away for the sake of (let's be generous here) speed and
functionality (indexing, correlation, etc). It's completely
ridiculous.
This led to the formation of Saint Aardvark's Axiom of Information
Utility:
Any sufficiently important information must be indistinguishable from plain text.
Plain text is redundant, easily (though not necessarily speedily)
recognized by the human brain, and has many automated tools to deal
with it (think of Unix). All these things make it very, very
recoverable. If the information is that important, you need to be
able to get at it even if there's a hardware failure. Binary formats
throw that away, and that is simply wrong.
But what's a self-important axiom without an equally self-important
corollary?
Any gains in the functionality or speed of information access must be obtained from derived versions of the original information, leaving the original in its plain text form.
I'm perfectly willing to give Outlook the benefit of the doubt in this
case; having used a PDA for all of two weeks, I feel uniquely
qualified to recognize the utility of having cross-referenced
contacts, to-do lists, email, and so on. But this must not come at
the expense of recovery!
Think of source code. It's possible to hack on a binary with a hex
editor or a disassembler. You can even fix bugs or change the way a
program works in this way. But you would never maintain a program in
this way: it's hard to understand, it's easy to make a mistake, and
it's hard to (say) port to a new language or hardware platform. That's
what source code is for: it's easy to understand (assuming you're a
programmer), and even if some of it gets garbled it's easy to
recover. Plus, you can use tools like indent to change how it looks,
or grep to pick out interesting bits, or tags to cross-reference
function calls with their definitions.
Of course, you wouldn't try to run source code -- that's what a
compiler is for. You gain speed by transforming the source code while
still leaving that source code intact: nothing is lost in the
process. And that's what Outlook should have done: compiled the
plain text email into whatever database (I'm assuming) format Outlook
likes, that allows Outlook to do Outlook stuff quickly, while still
leaving the original source code -- the email -- intact.
Of course, you don't have to imagine recompiling Outlook's PST file
each time; this'd be an incremental thing. And really, it shouldn't be
that much different from what it does now...same speed, just a little
more disk space taken up. And if the PST file gets borked, no matter
-- the recovery tool is nothing more than a compiler that regenerates
it from the original email.
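To make the corollary concrete: this is the usual plain-text-plus-
derived-index arrangement, sketched here with Maildir and mu (my tool
choice for the example, not a recommendation):

mu index --maildir=~/Maildir     # build the fast, disposable index
mu find from:vendor              # speed comes from the derived copy
grep -rl "PO #1234" ~/Maildir/   # slow, but survives anything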
As much as I'm picking on Outlook though, this isn't Outlook's problem
alone. I've written before about how PHPWiki obscures the
information it stores in MySQL. And I did a similar thing to myself
years ago by compressing email, since I was running out of disk
space. Somewhere along the way the files got corrupted, and I can't
get that email back because gzip barfs on it.
And of course, this is just my opinion, formed in the heat of
anger. It's almost certainly not a new idea, and might even be
wrong. I'd love to hear some feedback on this.
The Atmosphere Player for Acrobat and Adobe Reader is designed to
enable use of Atmosphere environments within a PDF document enabling
the user the ability to experience a rich variety of interactive
content, including three-dimensional objects, directional sound,
streaming audio and video, SWF animations, and physical behaviors.
+1 to HP for using Linux on their diagnostic CD. -1 to HP for
suggesting that if poor performance is experienced, you should
consider "upgrading the graphics solution". I swear to God, I'm
starting to twitch whenever I hear that fucking word.
Would you like me to pick a character for you? [YES/NO]
YES
You are the sysadmin. A user comes to you and asks for a package of
Matplotlib for python 2.2.
SEARCH GOOGLE
Google says that packages are available for Debian at the Matplotlib
homepage, in the packages section.
GO MATPLOTLIB PACKAGES
You see packages for Python 2.3 and a link to the mailing lists.
GO MATPLOTLIB HOMEPAGE
You see source distributions for Matplotlib, including the
latest version.
DOWNLOAD MATPLOTLIB
The Matplotlib tarball is in your home directory now.
UNTAR MATPLOTLIB
Okay.
COMPILE MATPLOTLIB
Things proceed well until you see the following
error:
building 'matplotlib._nc_cntr' extension
gcc -DNDEBUG -g -O3 -fno-strict-aliasing -Wall -Wstrict-prototypes -fPIC -I/usr/local/include -I/usr/include -I/usr/include/python2.2 -c src/_nc_cntr.c -o build/temp.linux-i686-2.2/_nc_cntr.o -DNUMERIC=1
src/_nc_cntr.c:1718: error: syntax error before "init_nc_cntr"
src/_nc_cntr.c:1719: warning: return type defaults to `int'
src/_nc_cntr.c: In function `init_nc_cntr':
src/_nc_cntr.c:1723: warning: `return' with no value, in function returning non-void
src/_nc_cntr.c:1729: warning: `return' with no value, in function returning non-void
error: command 'gcc' failed with exit status 1
VI SRC/_NC_CNTR.C
There is no such file.
VI src/_nc_cntr.c
There is no such file.
COMPILE MATPLOTLIB
Things proceed well un
CTRL-Z
The process has been suspended.
VI SRC/_NC_CNTR.C
There is nothing obviously wrong with this file.
GO MATPLOTLIB HOMEPAGE
You see packages for Python 2.3 and a link to the mailing lists.
GO MATPLOTLIB MAILINGLISTS
They're hosted on Sourceforge. You see lists for users and
developers.
GO LIST USERS
Sourceforge times out.
GO LIST DEVEL
Sourceforge times out.
GO LIST DEVEL
You see the search page, but without any stylesheet.
SEARCH _NC_CNTR.C
Sourceforge times out.
GO GMANE
You see the following message: "Welcome to GMANE, the NNTP-friendly
mailing list archive! You can search all your favourite mailing lists
here."
SEARCH _NC_CNTR.C
You see the following search results:
1. RH 3.2/Python 2.2 build problems (gmane.comp.python.matplotlib.general)
2. Re: (Matplotlib-users) Installation issue (gmane.comp.python.matplotlib.general)
3. Numarray problems on AIX (gmane.comp.python.numeric.general)
GO 1
You see the following post to gmane.comp.python.matplotlib.general:
I'm having problems building matplotlib on Redhat 3.2 with python
2.2. It complains about src/_nc_cntr.c. Does anyone know what's
happening? You notice a cute flag indicating the country of origin of
the email.
SEARCH REPLY
You can't do that on Gmane. You notice a cute flag indicating the
country of origin of the email.
SEARCH THREAD NEXT
You can't do that on Gmane. You notice a cute flag indicating the
country of origin of the email.
SEARCH THREAD LIST
You can't do that on Gmane. You notice a cute flag indicating the
country of origin of the email.
SEARCH SUBJECT LIST
You can't do that on Gmane. You notice a cute flag indicating the
country of origin of the email.
EDIT URL
You open the URL in vi.
EX:s/articles.gmane.org/blog.gmane.org/wq
You get the latest posts in gmane.comp.python.matplotlib.general and
lose the search result you had.
GO BACK
Okay.
EDIT URL
You open the URL in vi.
EX:s/articles.gmane.org/comments.gmane.org/wq
You see the same post with a different stylesheet. You notice a cute
flag indicating the country of origin of the email.
SEARCH REPLY
Ah, there it is! You see the following reply:
> My guess is that this is because cntr.c is using a python2.3 only
> macro, and we haven't sufficiently tested with python2.2. Try
> replacing all occurrences of MODINIT_FUNC with extern "C"
> DL_EXPORT(void).
You notice a cute flag indicating the country of origin of the email.
SEARCH REPLY
There doesn't seem to be one. The user comes back and asks if the package is done yet.
"Solution". Last seen in a SuSE install disk in place of the word "program" or "application". I swear to God, the next person who uses this word at me in this fashion will have their souls fried in a delicate cornstarch batter.
Autoyast. 'Cos, you know, why in God's name would you take a hint from Kickstart and create a config file to match the installation that just finished? You know, so you can install by hand once and then use that as a template for everything that follows?
Autoyast, part two. Some sort of checking would be nice...like, say, "Ha ha, you thought we would generate all the Grub entries that weren't specified? The ones that are necessary to boot? SUCKER!"
Debian. Why the fuck does it keep ignoring my MIME options and attempting to open, automatically, attachments of all sorts from Mutt? If I want this sort of stupid, insecure behaviour, I'll go back to Outlook Express.
Debian, part two. And why are there eleventeen goddamned user accounts, very obviously meant for system maintenance, with valid goddamned shells?
First, Netdisco just fucking ROX. A while back I was looking for
a way of telling what machines were hooked up to the ProCurve switches
scattered around our network. I'd written a half-assed program to do
just that and was glumly contemplating how much work it'd be to do it
right when I came across Netdisco. It's a pain to install, but good
Gopod, it works wonders.
However: Sourceforge irritates me beyond reason. Its mailing list
searchability is terrible, its thread display is worse, and it keeps
timing out when I try to open a bunch of mailing list links (like, 10)
in different tabs. ARGHHH.
Debian. I love the Debian. But the logs in Debian annoy me.
You can't read 'em unless you're part of the adm group, or root. Not right.
Iptables denied packets get logged to kernel.log, messages.log and syslog.
What, precisely, is the difference between kernel.log and syslog? Between daemon.log and debug.log? Why is exim4/mainlog not symlinked to mail.log?
There's the annoying habit of printing far too much to the console. My time and my screen's real estate are precious -- doubly so when I'm in single-user mode (another rant) trying to fix something. The last thing I want is to have precious, precious vi sessions drowned out with kernel: IN=eth2 SRC=24.82.14.99 LEN=64 TOS=0x00 PREC=0x00 TTL=45 ID=40654 DF PROTO=TCP SPT=2678 DPT=445 WINDOW=53760 RES=0x00 SYN URGP=0. Fuck that noise!
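(The console flood, at least, is curable without losing anything from
the logs; a sketch from memory of the Linux knobs:)

    # Keep kernel chatter off the console; syslogd still files it all
    # away. (Linux-only: sets the console log level so only
    # panic-grade messages make it to the screen.)
    dmesg -n 1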
FreeBSD, by contrast, has it fucking down. There's messages.log
for everything you're likely to need as a normal user -- except mail
messages, which are conveniently located at mail.log. For the
sysadmin, you've got security.log (firewall stuff), auth.log
(login stuff) and all.log (everything). It's simple, easy to
understand and you can bloody well read what you need to without
becoming root. Sigh.
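(For the curious, that split is only a few lines of syslog.conf; a
rough sketch of the layout described above, not a copy of the stock
FreeBSD file:)

    # FreeBSD-style split, file names as described above; the stock
    # /etc/syslog.conf differs in detail.
    *.notice;mail.none         /var/log/messages.log
    mail.*                     /var/log/mail.log
    security.*                 /var/log/security.log
    auth.info;authpriv.info    /var/log/auth.log
    *.*                        /var/log/all.log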
In other news, thanks to Mr. Dickens I came across BeleniX
today, a live OpenSolaris CD. I'm currently scrounging around for a
spare machine to boot from, as QEMU seems to confuse it.
FYT #1: New Firefox 1.5 Beta. It's great: wicked fast, and they've
added drag-n-drop tabs. Slashdot comment pages render in a
heartbeat. But it's pissing me off right now for two reasons. First,
the Profile Manager only seems to come up if no other Firefox window
is running. If there is another window running, it comes up with
that profile no matter what arguments you pass (-P, -ProfileManager,
-P "Profile Name"). (When I was first writing this entry, I tried that
last one just to make sure. When the current profile came up yet again
I closed it -- but closed the browser window that had this entry,
too. I'm writing this in vi in an xterm now.)
This is irritating because I have two profiles: Default and Wide
Open. Default is where I spend nearly all my time; Java, JavaScript,
pop-up windows and flash are turned off; AdBlock shoots to kill;
animations go once and then stop; I'm asked about cookies. I hate
dancing baloney. Wide Open is where I go if I need to visit my bank's
website (it's not that wide open, of course), or if there's
something that won't work in my Default profile that I'm convinced is
worth the effort (which doesn't happen often). Keeping two profiles is
much easier than toggling all that nonsense each time.
Second, a lot of the extensions I love aren't yet ready for 1.5 (or at
least, don't say they're ready...I seem to remember when the upgrade
to 1.0 happened that you could edit some of the extensions directly
and just lie about what version was required). Adblock is running --
if it wasn't for that, I don't think I'd be using the new version at
all. But Session Saver, Sage and Mozex aren't, and I've come to rely
on them. We'll have to see.
FYT #2: I went into work this morning to reboot a couple of
servers. I'd let everyone know about it, and got up with my wife at
4.45am. But when I got to the building, the card that let me in the
front door would not make the elevators go -- they just sat in the
lobby waiting for, I don't know, drugs or Jesus. (Double
punishment!) I'd used the card before to make the elevators go, so
WTF? (Stairwells are not an option; you can't get into your floor [or
any other] using your key or any access card.)
After failing to find a security guard anywhere, I called tenant
services for the building. They said that the elevators might be
turned off, but they couldn't be sure; I could get a better answer
calling back during the week. (Fair enough, since our building's
managed by a company that owns buildings all across Canada.) Oh, and
security starts at 8am. Fuck. I'll have to reschedule for during the
week, but after making sure that I can get in at 6am. Double fuck!
FYT #3: Why am I rebooting servers? Good question: they're running
FreeBSD, after all, so it's not like it should need to happen all that
often. The answer is: because amd sucks ass through straws. Not only
does amd:
create a mess of symlinks, as sketched below (people who complain
about SysV init symlink messes need to look at amd: /home/foo
symlinked to /net/machine/home/foo symlinked to
/.amd/_mnt/machine/host/home/foo, the only place the directory is
actually mounted) (interesting: a quick Google for sysv init symlink
turns up this post by my namesake)
interact badly with FreeBSD symlink caching (okay, FreeBSD's fault
maybe)
but it will also get wedged sometimes, requiring a reboot -- and
don't talk to me about the -r option for amd, because that simply
doesn't work.
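(The chain, sketched with the paths from above, for anyone who hasn't
had the pleasure:)

    # Following the amd breadcrumbs on an afflicted box:
    ls -ld /home/foo
    #   /home/foo -> /net/machine/home/foo
    ls -ld /net/machine/home/foo
    #   /net/machine/home/foo -> /.amd/_mnt/machine/host/home/foo
    df /.amd/_mnt/machine/host/home/foo
    #   ...the only place the directory is actually mounted.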
F'r instance: a while back one guy at work moved from FreeBSD to
Linux. I took the opportunity to give him a bigger hard drive; he'd
had a second one, mounted at /home/foo/scratch, because he'd run out
of room on the first. Unfortunately, one of the servers in question
had /home/foo/scratch mounted at the time through amd -- and when his
machine came back online w/no scratch directory, amd/NFS refused to
umount it and refused to mount his home directory, because the bogus
/home/foo/scratch was blocking it. That's what this morning's reboot
was meant to get around. Okay, again, not all amd's fault -- NFS and
me, not in that order -- but still.
I mentioned two servers, though, so what about the second? Aha,
that's the symlink caching thing. We get around this by running a
newer version of amd than is supplied w/FreeBSD; it doesn't have
quite so many problems. But I'd missed the second server, and it
didn't have the pointer to the newer version of amd. Again, my fault
-- I should've caught this a long time ago -- but dangit, it
shouldn't be necessary to do this just to restart amd. (I'm setting up
cfengine to catch this sort of thing. cfengine rox.)
Minor update re: earlier problems with Vinum and a Maxtor IDE
card: I picked up a new RocketRaid 454 that was reputed to work
much better, plus had four controllers rather than two. Cheap, too
-- $135. Long story short is that it still caused problems, I think;
the machine seized up again in the middle of backups, apropos of
nothing and with no message or panic. (Took a while for this to
happen, though, so it was an improvement. I think I should've taken
to heart the warning I got a while back that Vinum was not the most
stable of code.)
Problem: Outlook 2003 User gets a message from System Administrator
saying that his message to a coworker is undeliverable -- something
about relaying denied -- and asks me why this happens.
Pretty simple, right? Just get him to forward the message and then
check the logs. Only no, it's not simple: despite twiddling all the
bits you're supposed to, I keep getting the message attachments in MS'
TNEF format. I use Mutt. I give up and decide to look at Outlook
itself. (Yes, I know about the decoder scripts you can get, but I was
being bullheaded.)
Now we have the problem of getting the proper Internet headers in the
email. (I've given up trying to persuade Outlook that I am ritually
pure enough to look upon the shining glory that is The Message Source
without melting like some kind of Nazi-collaborating French
archaeologist; it doesn't work.)
A quick Google turns up three suggestions: a $24.95 VB plugin, giving
up entirely, or right-clicking and selecting Options, then looking at
the box that sez Internet Headers. I'm game, so I try right-clicking
and choosing Options. There's the Internet Headers box, all right, but
it's empty. WTF? I look around, but there's nowhere else to
clickyclick.
I try right-clicking on another message, and sure enough it shows the
headers. I try selecting the From address in the problem message
(helpfully labelled as "System Administrator", which I'm pretty sure
is a bald-faced lie), then Properties. It just says it's from System
Administrator, and shows no actual, real email address. You
remember...email, one of the things Outlook is meant to do?
Then it dawns on me: the mentioned-in-passing comment from the user
that the message is probably from Outlook itself is true. I'd thought
this was just a friendly gloss on an unfriendly message, but it
wasn't. This fucking message is from Outlook. And it's not until I
skip ahead and tell you the exciting conclusion -- it was our mail
server refusing to relay and saying so, something never, not once,
mentioned in the offending message -- that you're going to realize the
full horror of the situation.
For Outlook does not only mangle email, and hide attachments in weird
files called "winmail.dat", and shake its baloney all over the place
like a drugged-out Hula girl in the "Before" picture in all those
rehab clinic advertisements. No. That is not all.
Outlook -- the mail client -- also takes error messages from mail
servers and disguises them as email messages that have just arrived,
rather than showing the user the fucking error as an error when
and as it occurs! It hides the origin of the error by pretending to
be some non-existent sysadmin when it sends this message! And it does
nothing to indicate that this false email is any different from the
other messages from Bill and Bob and Ted littering your inbox about
horizontal opportunity mission statements, complete with an animated
surfing guy for Bob shouting "Whoah!" to differentiate his mangling
of the English language from Ted's, leaving me to wonder what the
fuck kind of congenital brain damage must've been at work to make
this seem like a good idea to anyone.
I hate fscks that don't provide a progress report.
When you're fscking, don't fucking touch the keyboard. Especially don't hit the up arrow or the enter key so that the forty-minute fsck is repeated; you'll have to kill it and the disk will be marked dirty.
Make sure power cords can't be removed from servers.
"Server" means, among other things, "redundant power supplies". Yes, it does.
Non-journaled filesystems are for the fucking birds.
Make sure that your SCSI cables are firmly fastened.
An hour playing around with Xine and MythTV, and what do I get? A DVD
player that exits without any visible error at the menu screen two
times out of three. And if that wasn't enough, pressing the menu
button once will get you the Xine menu -- pressing it twice will get
the MythTV splash screen, but the movie is still playing: the sound is
there, but the video isn't. Holy crap.
The last time this was happening, I'd rented Hero. All was going
well, when I accidentally hit the Back button on the XBox remote; this
dumped me back into MythTV. Annoying but no problem, right? Just start
up Xine again, and...it crapped out after all the warnings about how
copying DVDs makes Baby Jesus cry. Well, I was skipping those, so how
about if I try sitting through them? Same thing.
My memories are clouded by the red cloud of rage, but I believe I
tried running Xine on my desktop machine with X forwarding on. I
managed to get an error message about how this was an encrypted DVD,
and I should look at getting libdvdcss (assuming it's legal in my
jurisdiction). I already had this, of course.
I tried removing the .libdvdcss files with the cached keys -- that
took a while to track down -- but nope, same thing. I tried other
players, too. gxine did the same thing, natch. mplayer dumped core
(this was the Debian package, which I'd never tried using before; I
didn't want to bother compiling it from source).
Ogle was the only damned thing that worked, but I couldn't figure out
how to make it do fullscreen. I didn't have these problems with the
older version of Xine that was installed before I upgraded
Xebian. Teach me to upgrade...
On the plus side, though, I've figured out that (for Napoleon
Dynamite at least) it does work with some persistence. It's a
damned good thing I love Linux so much.
I got the iBook, I got the Slashdot t-shirt, I got the beard...but do
you think I can get a wireless signal? Oh no. Thanks, Broadcom. But
hey, enough complaining. Time for an update.
The wireless ISP is gonna do a point-to-point link between
windows of our old and new temporary offices. Should give us 100Mb/s
access or so. Which is good, because for a while I thought I'd have to
walk down to London Drugs, grab some Linksys routers, and install my
own firmware to do it. Which would have been a lot of fun...but
would have been a fuck of a lot to get ready in, like, three days. Now
I just have to get OpenVPN talking at either end, get Shaw
installed, and set up a firewall. Oboy.
And then there's the troubles I've been having with our backup
server. A while back I decided to start racking all the boxes we've
been using as servers -- transfer the hard drives to proper servers,
then use the old shell as a desktop for a new hire. Welp, the backup
server was the first to go, and man it's been a headache.
First off, I didn't take care of cooling properly, and the tape drive
(HP Ultrium 215, for those paying attention) suffered a nice little
nervous breakdown and kept spitting out the tape. I tried downloading
the HP diagnostic tool, but it only runs on Linux and the server runs
FreeBSD -- neither Linux compatibility mode (not surprising) nor a
Knoppix disk (kept hanging) allowed it to work. So I had no real idea
what was going on other than the drive was too hot for my liking.
But HP, bless their souls, came to the rescue. Once I made it through
their speech recognition voicemail tree hell, they just sent out
another one -- they didn't even bitch about not being able to run the
diagnostic tool. Not only that, it came the next day, and we don't
even have any special contract with them -- that's just
warranty. Thumbs up for them.
But now I've got different problems: the damn machine keeps seizing up
on me. See, I've got this 500GB concatenated Vinum array of three
disks that I use as a copy of yesterday's home directory for people,
and I'm trying to move it to a four-disk RAID5 drive on the Promise
array. I tried using rsync, and it just froze -- though only
eventually. I
thought maybe rsync was spending too much CPU time figuring out what
to transfer, so today I tried using dump | restore -- and sure
enough, it froze again.
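(The idiom in question, for reference; the mount points are invented
for illustration, and the flags are from memory -- check dump(8) and
restore(8) before trusting them:)

    # Copy the old Vinum volume onto the Promise array, dump piped to
    # restore, run from the destination filesystem:
    cd /promise/yesterday && dump -0af - /yesterday | restore -rf -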
I plugged in a monitor, hoping for a panic or something, but nope --
just unresponsive. I've found some mention in the FreeBSD mailing
lists about possible problems with write caching and the Adaptec 3960D
SCSI controller (which I thought was a 39160 SCSI controller, but I
guess not). I'll have to see if that does the trick or not -- but in
the meantime I'm wondering how I'm gonna get yesterday on the Promise.
Of course, figuring out why it's crashing in the first place would be
even better...
But it's not all bad news: earlier this week, the support manager at
Promise that I've been dealing with called to tell me that the word
had come down from on high. Yep...Promise is going to follow the GPL
and properly release the Linux and Busybox source code for the
firmware that goes into the VTrak 15100. Hurray! I'll have to watch,
of course, and make sure it shows up...but it sounds good. "Let's put
it this way," said the manager. "It's on my desk for me to do. And I
don't want it there for long." To the home front, now.
As if I didn't have enough on the go, I've blown my tax return on the
makings of a MythTV backend: 2.4GHz P4, umpty-GB hard drive, the
PCHDTV-300 (get it while you can!), generic 128MB Nvidia (no
onboard video on this mobo, or I would've stuck with that), a
Hauppauge PVR-500MCE, and a nice Asus mobo in an Antec case
to tie it all together. Random notes:
I like the case -- no sharp edges, very well put together, easy to assemble, and pretty damned quiet. Nice.
I think the graphics card was causing problems -- the machine kept seizing up for no apparent reason, and when I opened the case to have a look the heatsink was almost burning hot. (Memory was my first guess, but I'm running Gentoo on this and all the compiles went fine -- kernel, gcc, qt...'as a lotta compiles.)
The ivtv project lists the PVR-500 (dual tuners! yeah, baby!) as in alpha, but a fair few people have reported success with it. Me, I'm getting the finest MPEG-2 recordings of static you could ask for...but then, I'm pretty sure I'm doing something wrong, and I simply haven't had a chance to work on it since getting it assembled last weekend.
And now for something completely different: new mottoes for Harley
Davidson:
"Harley-Davidson: Because social contracts are for weak
pussy-ass losers with small dicks."
"Harley-Davidson: Because those other people aren't really
human. Not like you and me."
"Harley-Davidson: You deserve it. So do they."
"Harley-Davidson: Because if you pissed in their faces, you'd be
arrested."
"Harley-Davidson: Because 'Fuck you!' is just too damned hard to
remember."
"Harley-Davidson: Because 'Fuck you!' is just too damned eloquent."
Yet another person confusing the presence of a graphical browser
with passing the Turing test. O'Reilly's articles are usually
excellent, which makes the lack of any mention of text browsers or
the disabled all the more baffling. Yo, Tim! You listening?
Microsoft said it would encourage the use of least permissions in
Longhorn by making it easier for users to do common tasks without
administrator privileges. For example....allow developers to create
per user installations of applications, with user-specific settings
saved in the "my programs" folder, rather than a globally accessible
program files directory that requires administrative permissions to
change....Windows programs commonly save user-specific files to
critical areas of the operating system, such as the program files
directory or protected parts of the Windows registry, which stores
configuration information and is off-limits to regular users...
...splutter...Gee, individual settings saved in areas controlled by
individual users...WHY IS THIS NEWS? How is it even possible that
this never occurred to MS before?
The company also has an opportunity to brand LUA with its own
user-friendly features and interfaces, which would be a vast
improvement over platforms like Sun Microsystems's Trusted Solaris
and Unix, Gartner's Pescatore said. "They are so complex, nobody can
use them," he said. "They require every user to be a security
expert. But if you look at what Microsoft is good at, it's not
inventing ways to do security, but ways to make security easier to
implement for security administrators."
Okay, WHAT? What the fucking fuck was that? Have I been trolled? Is
this guy secretly laughing up his sleeve at the way my face is turning
RED WITH RAGE? Honestly.
Where the fuck was Microsoft when they were writing NT/2000/XP? Why
the hell are there so many fucking programs that demand admin or
power user access simply to use? No, MS did not write all these
programs themselves, but it's their damned operating system and their
damned culture of "Well of course you're the only one on the
computer! Of course you're running as a power user! Of course it
won't affect anyone else if you're given too much privilege!"
Microsoft has a LOT of shit to clean up, and it's not just in their
crappy, crappy OS: it's in the attitudes passed on to users and
developers too.
"[Solaris and Unix] require every user to be a security expert."
No, actually, they don't. That's the whole fucking point. The programs
are (generally, yes there are exceptions) well-behaved: they don't
need crazy privilege, they save user-specific files IN THE USER'S
FUCKING DIRECTORY, and so on. You need one security expert -- the
sysadmin (and hey, before anyone kicks I am not saying I'm a
security expert or anything like it) -- who sets things up safely. You
don't have a glorified text editor (hello, Code Composer!) that
requires power user to run it, and you don't have the accompanying
conversations about "please don't install that app again".
"But if you look at what Microsoft is good at, it's not inventing
ways to do security, but ways to make security easier to implement for
security administrators." HA! It is to laugh. I can hear you out
there wondering why I don't get a copy of Regmon to look at what
registry keys CC needs access to, and open up the permissions on
that. Excellent question, and I should be dropping everything to do
that right now -- point taken. But why the fuck isn't a tool like this
included with 2K to start with? Why are all the admin tools MS does
provide squirrelled away in different resource kits and download
areas, safely kept from the unschooled likes of me?
I'm ranting. There are flaws in my arguments. I don't like or trust MS
or Windows very much. I have drunk deeply of the Unix kool-aid, and
I am horribly, horribly biased. But for the love of all that is holy,
this whole article just leaves me agog. Redmond can't be that
ignorant, and I mean that sincerely. But what the hell else am I
supposed to think? Why has twenty-five years of open, easy-to-find
operating system knowledge passed them over? What lamb's blood did
they smear over their cubicle doors to prevent the Angel of Death from
entering?
(Story hit Slashdot today, and I saw it too late to get this comment
in...so this rant hits the journal. You lucky, lucky people.)
I am fucking pissed off. Over the last few weeks, I've been noticing
attempts to spam the wiki on my website. The spammers would create a
new page similar to one already existing, and fill it full of links to
Russian linkfarms (right term? who cares?). It was annoying, and I
figured it would only get worse, but I didn't get too worried. I
deleted the pages, blocked the IP address (it was all coming from one
open proxy), and watched the changes page for further action. Last
night I checked the changes page again. It was late (well, sort of; it
had been a long day) and I was making one last check before going to
bed. Just to make sure that everything was okay, you know? Every
single fucking goddamned page had been vandalized. Every single
page that I had put up had been replaced with spam, and there were a
dozen new pages with even more spam. Over the course of maybe four
hours, all my work had been removed. My only consolation is that
Google had not visited the wiki since the changes had been made. There
were maybe a hundred pages to revert. And PHPWiki, the software I
was using, sucks ass through straws when it comes to reverting
changes. Check this out, ladies and germs:
There is no easy, documented way to revert to a specific revision of a page using the web interface. The version I was using (1.3.4) forces you to go edit an old version, then save that version. The new version I tried upgrading to (1.3.10) allegedly has "action=revert", but I was unable to get this to work: it appeared to do nothing different from "action=edit". To be fair, this may be because the spammer seemed to edit most pages multiple times, perhaps to get around action=revert. But why couldn't I find any documentation on this? All I could find was this page and the words "See action=revert".
There is no easy way to revert to a specific revision of a page using the database directly. Check it out: The database appears to store metadata in a column dedicated to compressed, cached markup. That's right: instead of breaking out metadata like revision, author IP and so on into a separate table, it's stored in the middle of a big gzipped, serialized PHP object. This means I can't do something like "delete from version where versiondata like '%10.0.0.1%'"; going to the page I've done this on hits an assert in the code that appears to check that the revision listed in the cache column is available in the pagedata table. Whee! Let's get all our programming ideas from MS Office!
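(To make that concrete: with the metadata broken out into real
columns, the cleanup would be a one-liner. The author_ip column and
the database name are hypothetical -- exactly what PHPWiki doesn't
give you:)

    # What a sane schema would allow. In real PHPWiki the IP lives
    # inside a gzipped, serialized PHP object in versiondata, where
    # no WHERE clause can reach it.
    mysql phpwiki -e "DELETE FROM version WHERE author_ip = '10.0.0.1'"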
As a result, I'm pulling a backup of the database from Friday in order
to get the old pages back. I'm going to dump the pages to HTML, figure
out how to script whatever changes I want to make, then leave PHPWiki
forever the fuck behind me. Shame, really, 'cos I do like the ease of
use of Wikis. But I do not have time for this fucking nonsense. Shame
on me for not remembering these words:
Someone challenged me, Well, how am I supposed to continue hosting
these low-barrier discussions? I'm sorry, but I don't know. To quote
Bruce Schneier, "I feel rather like the physicist who just explained
relativity to a group of would-be interstellar travelers, only to be
asked, 'How do you expect us to get to the stars, then?' I'm sorry,
but I don't know that, either."
Those of you looking for info on the NWR04B, please continue to leave
comments on my blog. I'll get the documentation from the wiki back as
soon as I can.
Well, I did the right thing today -- twice. Damn right I'm
bragging.
First off, it turns out that the FreeBSD Foundation has run into
a (good!) problem: its donations have been too big. In order to keep
its US charitable status, it needs to have two-thirds of its donations
be relatively small. Due to a couple of big donations, this ratio is
a little out of whack at the moment, and they need a bunch of
small donations.
Welp, I've been administering FreeBSD systems for a living
for...well, I was gonna say four years, but it's more like two and a
half or three. I've been working on them for four, though; my rent
and food have been paid in large part because of the generosity of the
people who put together FreeBSD. A donation went off in short
order.
Then I remembered that I've been meaning to join the Free Software
Foundation for a while now. The motivation is the same: I've been
paying my bills for a long time now (and enjoying myself immensely in
the process) because of the generosity of Free-as-in-Freedom
software people: Stallman, Torvalds, Wall, and a
zillion others. I have a hard time imagining what I'd be
doing now without Free software; I suspect that, if I was lucky, I'd
be working as a grocery store manager right now. So: off to the FSF
website to sign up for an associate membership.
And what did I find but two, count 'em TWO cool things:
If you refer three people to the FSF for associate memberships, RMS
or Eben Moglen will record a message for you, suitable for voicemail,
Hallowe'en or impressing the ladies. I did a quick search on Google,
but couldn't find anyone with the link...damn shame. Better than a
free iPod, cooler than a CmdrTaco TiVo -- join the FSF and get
RMS to say "All Hail Liddy!"
The FSF is looking for a senior sysadmin. God, that'd be
cool. Decent enough pay (no, it's not the sort of job you take
because of the money, but it's nice to think about), all the Free
software you can handle, and an IBM Thinkpad to run it on. Of course,
I think I'd have some 'plainin' to do about the laptop I'm writing
this on...and, of course, it would mean living in the US. Frankly,
that scares the crap out of me these days. Goddamned PATRIOT Act...
In other news, work continues apace. We're losing two coop students
and gaining one, gaining another full-time person, and I'm still
trying to get my RAID array -- credit app is with the boss, and
after that's done the order'll finally go in.
Rough guess (wild hope) at this point is that it'll be in my hands in
mid-January, which won't be a moment too soon. There's a new Linux
server I'm setting up that I'm desperately hoping won't have problems
due to proprietary kernel modules in the software I'm installing. (I'm
just writing myself further and further out of that job, aren't I?)
And I'm wondering if the simplest way to get Nagios to make sure the
right machines are exporting the right filesystems is to check if amd
is mounting them correctly. (No matter whether the machine or amd
fails, something needs to be fixed.) Or maybe I just need to figure
out the right wrapper for showmount -e.
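(A first stab at that wrapper might look like this -- a sketch in
plain sh, with the host and filesystem arguments made up; a real
plugin would want timeouts and the other usual Nagios niceties:)

    #!/bin/sh
    # check_export <server> <filesystem>:
    # OK if the server lists the filesystem in showmount -e.
    HOST="$1"; FS="$2"
    if showmount -e "$HOST" 2>/dev/null | awk '{ print $1 }' |
        grep -qx "$FS"; then
        echo "OK: $HOST exports $FS"; exit 0
    else
        echo "CRITICAL: $HOST does not export $FS"; exit 2
    fi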
On the spam front: good god, what a smoking hole Movable Type is
turning out to be. First there were the license changes, then the
comment spammers (who seem to be posting a lot more
aggressively to MT than to WordPress)...Of course, comment
spam affects all blogs, not just MT. Still, this whole idea of
rebuilding static pages every time the stars move seems to be causing
them a lot of trouble. (Yep, that last sentence was pure FUD. Or
bullshit.) And okay, no, I don't use MT, so what precisely is my beef?
As I'm not going to put up, I should shut up. I still have to upgrade
WP -- though according to this posting, there are still lots of
XSS issues left unfixed. I'm also upgrading PHP, and I should
probably use ApacheToolbox to do that automagically, rather than
periodically editing my own Makefile.
The release party for Where Are They Coming From? came off JUST
FINE, thank you. EVERYONE was there. Top Stars include Topo,
Phil Knight and Mos Def, fresh from the set of HHGTTG. Uh huh.
Further thoughts on the MySQL + GPhoto2 thing: gphoto2 does have
the ability to pipe to STDOUT, which I don't think I knew...maybe it
won't be as much work to insert directly into a database as I
thought. Might even be able to do it as a Perl script.
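(Something like this, maybe; the gphoto2 flags and the schema are
guesses, and LOAD_FILE reads the file on the database server and
needs the FILE privilege:)

    # Grab a frame via stdout, then stuff the bytes into a BLOB column.
    gphoto2 --capture-image-and-download --stdout > /tmp/shot.jpg
    mysql photos -e \
        "INSERT INTO shots (taken, img)
         VALUES (NOW(), LOAD_FILE('/tmp/shot.jpg'))"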
Finally: what a gorgeous day. It's downtown Vancouver on the back
steps of the Art Gallery, it's sunny (in December, too) and just cold
enough to make you go "brr". The skater kids are practicing their
synchronised jumping -- just in time for the Olympics, I'm sure. A
far-too-generous co-worker has handed out chocolate, another has
handed out home-made rum and brandy balls, and I'm taking off early
to go drinking with a third. Feeling pretty damned good right
now.
Update: Too bad Topo's not so great -- fever of 102.8F, as of
a couple minutes ago. (Still haven't figured out what that is in
Celsius; bad Canuckistanian!) It's down a bit from earlier this
afternoon, though, so I'm thinking good things. And these pages say not to worry if it's less than a couple days, so I'm
not worrying. Nope.
Note to web developers: given
already-working pages in PHP that can connect successfully to the database in question,
account details for MySQL, and
all the necessary privileges in MySQL and the server
you DO NOT need me to install phpMyAdmin in order to manipulate
tables. Nor do you get bonus points for asking me how to connect to
MySQL without phpMyAdmin. No, thank you.
A real OS shows messages when it's booting to let you know how far
along it's gone. If something goes wrong, you can see where. A real OS
doesn't have a fucking marquee of slowly-changing colour that you have
to stare at intently from six inches away to see if it's frozen in
place and yet another cold reboot needs to happen. Oh, and it logs how
the boot went. Every time. Without being asked.
Also, if an OS comes with a package manager -- and a real OS does --
it shows you the fucking version of the software that's installed. A
real OS knows that knowing FooTastic is installed is only half the
battle -- knowing that it's the buggy 6.2 build is just as important.
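(By way of contrast, the operating systems that get this right make
it a one-liner -- FooTastic standing in for the package in question:)

    # Name, version and description in one shot:
    dpkg -l footastic          # Debian
    pkg_info -x footastic      # FreeBSD: footastic-6.2, comment and all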
I try to give Windows and Microsoft a chance, I really do. But this is
fucking ridiculous.