Back in January, yo, I wrote about trying to figure out how to use Cfengine3 to do SELinux tasks; one of those was pushing out SELinux modules. These are encapsulated bits of policy, usually generated by piping SELinux logs to the audit2allow command. audit2allow usually makes two files: a source file that's human-readable, and a sorta-compiled version that's actually loaded by semodule.
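(If you haven't seen it before, the usual incantation is something like this -- the module name here is made up:

# Turn recent AVC denials into a policy module; this writes mymodule.te
# (the human-readable source) and mymodule.pp (the compiled module that
# semodule actually loads).
grep avc /var/log/audit/audit.log | audit2allow -M mymodule

)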
So how do you deploy this sort of thing on multiple machines? One option would be to copy around the compiled module...but while that's technically possible, the SELinux developers don't guarantee it'll work (link lost, sorry). The better way is to copy around the source file, compile it, and then load it.
SANSNOC used this approach in puppet. I contacted them to ask if it was okay for me to copy their approach/translate their code to Cf3, and they said go for it. Here's my implementation:
bundle agent add_selinux_module(module) {
  # This whole approach copied/ported from the SANS Institute's puppet modules:
  # https://github.com/sansnoc/puppet
  files:
    centos::
      "/etc/selinux/local/."
        comment => "Create local SELinux directory for modules, etc.",
        create  => "true",
        perms   => mog("700", "root", "root");

      "/etc/selinux/local/$(module).te"
        comment   => "Copy over module source.",
        copy_from => secure_cp("$(g.masterfiles)/centos/5/etc/selinux/local/$(module).te", "$(g.masterserver)"),
        perms     => mog("440", "root", "root"),
        classes   => if_repaired("rebuild_$(module)");

      "/etc/selinux/local/setup.cf3_template"
        comment   => "Copy over setup script template.",
        copy_from => secure_cp("$(g.masterfiles)/centos/5/etc/selinux/local/setup.cf3_template", "$(g.masterserver)"),
        perms     => mog("750", "root", "root"),
        classes   => if_repaired("rebuild_$(module)");

      "/etc/selinux/local/$(module)-setup.sh"
        comment       => "Create setup script. FIXME: This was easily done in one step in Puppet, and may be stupid for Cf3.",
        create        => "true",
        edit_line     => expand_template("/etc/selinux/local/setup.cf3_template"),
        perms         => mog("750", "root", "root"),
        edit_defaults => empty,
        classes       => if_repaired("rebuild_$(module)");

  commands:
    centos::
      "/etc/selinux/local/$(module)-setup.sh"
        comment    => "Actually rebuild module.",
        ifvarclass => canonify("rebuild_$(module)");
}
Here's how I invoke it as part of setting up a mail server:
bundle agent mail_server {
  vars:
    centos::
      "selinux_mailserver_modules" slist => { "postfixpipe",
                                              "dovecotdeliver" };

  methods:
    centos.selinux_on::
      "Add mail server SELinux modules" usebundle => add_selinux_module("$(selinux_mailserver_modules)");
}
(Yes, that really is all I do as part of setting up a mail server. Why do you ask? :-) )
So in the add_selinux_module bundle, a directory is created for local modules. The module source code, named after the module itself, is copied over, and a setup script is created from a Cf3 template. The setup template looks like this:
#!/bin/sh
# This file is configured by cfengine. Any local changes will be overwritten!
#
# Note that with template files, the variable needs to be referenced
# like so:
#
# $(bundle_name.variable_name)
# Where to store selinux related files
SOURCE=/etc/selinux/local
BUILD=/etc/selinux/local
/usr/bin/checkmodule -M -m -o ${BUILD}/$(add_selinux_module.module).mod ${SOURCE}/$(add_selinux_module.module).te
/usr/bin/semodule_package -o ${BUILD}/$(add_selinux_module.module).pp -m ${BUILD}/$(add_selinux_module.module).mod
/usr/sbin/semodule -i ${BUILD}/$(add_selinux_module.module).pp
/bin/rm ${BUILD}/$(add_selinux_module.module).mod ${BUILD}/$(add_selinux_module.module).pp
Note the two kinds of disambiguating brackets here: {curly} to indicate shell variables, and (round) to indicate Cf3 variables.
As noted in the bundle comment, the template might be overkill; I think it would be easy enough to have the rebuild script just take the name of the module as an argument. But it was a good excuse to get familiar with Cf3 templates.
I've been using this bundle a lot in the last few days as I prep a new mail server, which will be running under SELinux, and it works well. Actually creating the module source file is something I'll put in another post. Also, at some point I should probably put this up on Github FWIW. (SANS had their stuff in the public domain, so I'll probably do BSD or some such... in the meantime, please use this if it's helpful to you.)
UPDATE: It's available on Github and my own server; released under the MIT license. Share and enjoy!
No native support in Cf3 for SELinux.
I've added a bundle that enables/disables booleans and have used it on one machine; this is pretty trivial.
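(Under the hood, that bundle is just a wrapper around the stock commands; roughly this, with a boolean name picked at random:

# Check the current value, then flip it persistently (-P writes it to
# the policy store so it survives reboots).
getsebool httpd_can_network_connect
setsebool -P httpd_can_network_connect on

)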
File contexts and restorecon appear to be mainly controlled by plain old files in /etc/selinux/targeted/contexts/files, but there are stern warnings about letting libselinux manage them. However, this thread on the SELinux mailing list seems to say it's okay to copy them around.
Puppet appears to be further ahead in this. This guy compiles policy files locally using Puppet; this other dude has a couple of posts on this. There are yet other folks using Puppet to do this, and it would be worth checking them out as a source of ideas.
I need to improve my collection of collective pronouns.
A long-standing project at $WORK is to move the website to a new server. I'm also using it as a chance to get our website working under SELinux, rather than just automatically turning it off. There's already one site on this server, running Wordpress, and I decided to get serious about migrating the other website, which runs Drupal.
First time I fired up Drupal, I got this error:
avc: denied { name_connect } for pid=30789 comm="httpd" dest=3306
scontext=system_u:system_r:httpd_t:s0
tcontext=system_u:object_r:mysqld_port_t:s0 tclass=tcp_socket
As documented here, the name_connect permission allows you to name sockets ("these are the mysql sockets, these are the SMTP sockets...") and set permissions that way. Okay, so now we know what prevented Drupal from working: SELinux denied httpd access to the mysqld TCP port.
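(For what it's worth, you can see how that port got its label with semanage:

# List SELinux port labels and pick out MySQL's; port 3306 is assigned
# to mysqld_port_t, which is the type named in the denial above.
semanage port -l | grep mysql

)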
What surprised me is that the Wordpress site did not seem to be encountering this error. The two relevant parts of the config files are:
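// Drupal (settings.php):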
$db_url = 'mysqli://user:password@127.0.0.1/database';
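// WordPress (wp-config.php):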
define('DB_NAME', 'wp_db');
define('DB_USER', 'wp_db_user');
define('DB_PASSWORD', 'password');
define('DB_HOST', 'localhost');
Hm, the only difference is that localhost-vs-127.0.0.1 thing...
After some digging, it appears to be PHP's mysqli at work. From the documentation:
host: Can be either a host name or an IP address. Passing the NULL value or the string "localhost" to this parameter, the local host is assumed. When possible, pipes will be used instead of the TCP/IP protocol.
See the difference? Without looking up the code for mysqli, I think that an IP address -- even 127.0.0.1 -- makes mysqli use TCP connections; using "localhost" makes it try the local socket first (the "pipe" the docs mention). Since TCP connections to the MySQL port apparently aren't allowed by default CentOS SELinux policy, the former fails.
Solution: make it "localhost" in both, and remember not to make assumptions.
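(The other way out, for what it's worth, would have been to allow httpd to talk TCP to the database port by flipping the relevant boolean rather than switching to the socket. From memory that's something like this, though I didn't go that route:

# Let httpd make TCP connections to database ports (persistent).
setsebool -P httpd_can_network_connect_db on

)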
Need to figure out what bit of SELinux policy is forbidding something? sesearch is what you want.
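For example, here's roughly how you'd ask which allow rules cover httpd_t talking to the MySQL port (setools syntax, from memory):

# Show allow rules from httpd_t to mysqld_port_t for TCP sockets.
sesearch --allow -s httpd_t -t mysqld_port_t -c tcp_socket -p name_connect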
Welp, after my training at LISA I finally got to start using SELinux. I was setting up a CentOS server with Mascot, search engine software for mass spectrometry data, and I thought I'd give it a try.
Mostly it turned out to be simple -- semanage fcontext to add some new httpd-friendly locations where the software had been installed, restorecon to set the labels.
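The commands themselves were nothing fancy; a rough sketch, with a made-up install path:

# Label a non-standard web directory so httpd can serve it, then apply
# the new context to the files already there. (Path is illustrative only.)
semanage fcontext -a -t httpd_sys_content_t "/opt/mascot/html(/.*)?"
restorecon -Rv /opt/mascot/html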
One thing that did take some tracking down was digging up exactly what this meant:
type=AVC msg=audit(1259021236.914:280): avc: denied { execstack }
for pid=6845 comm="ld-linux-x86-64"
scontext=user_u:system_r:httpd_sys_script_t:s0
tcontext=user_u:system_r:httpd_sys_script_t:s0 tclass=process
This happened when the install script tested Perl to make sure everything was okay.
As described by Dan Walsh and Ulrich Drepper, this means that the Perl executable was marked as needing an executable stack. Not only is this a Bad Thing(tm), it's not usually necessary these days (what with the Internet and all). execstack -c cleared the flag, and things appeared to work after that; it was right at the end of the day, though, so it's possible problems will show up today.
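In case it's useful, the query-then-clear dance goes roughly like this (the path to the Perl binary is an assumption on my part):

# Query the executable-stack flag (X = set, - = cleared), then clear it.
# execstack ships with the prelink package on CentOS.
execstack -q /usr/bin/perl
execstack -c /usr/bin/perl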
And then when I got home...it was wonderful. The kids'd had two-hour naps each, there was a wild rice casserole in the oven (The Cheese Fairy is always amazing), and my parents had sent the kids a calendar full of pictures of Canadian wildlife. I got to tell Trombone how the beaks of different birds (great blue heron, snowy owl, cardinal) were adapted for eating different things; I think he was interested, and that was just flat out fascinating. Ah, domestic bliss.
First up was Elizabeth Zwicky's talk on distinguishing data from non-data, and how to deal with each when solving problems. She warned us that she was not a statistician, and what she was going to say would probably give a real statistician hives, but that it would be useful for dealing with computers -- "nothing with an ethics board."
Her talk was laced with examples from her career...like the time she tried to track down missing truck axles from a major defense contractor; this was complicated by their complete lack of data collection ("How many do you make in a week?" "The schedule calls for 100." "How many of those are completed by Friday?" "We're not collecting that data."). Or the time she broke into her CEO's office ("It has a lock!") by pushing up a ceiling tile, then reaching down with a coat hanger and pulling up the handle. Lesson learned: "If it stops at the ceiling, it's not really a wall."
Funny stories aside (and they were funny; I recommend listening to the talk), the point was the danger of assuming too much from initial observations -- we schedule X, so we must produce X; it looks like a wall, so it must be impervious. Data is observations, numbers with context -- not hearsay, or conclusions, or numbers without context. Again, listen to the talk; it's worth your time.
Hell, download every MP3 on this page and listen to them; that's what I'm going to do, and I've been to some of them.
Okay, after that came the refereed papers. Mostly I was there for the SEEdit paper, which describes the SEEdit tool (available on Sourceforge!) for editing/creating SELinux policy in a high-level language. After what Rik Farrow said about policy approaching his rule-of-thumb for human comprehension, I was interested to see if this could be used to generate/edit the existing policy. I tried asking this, but I don't think I made myself clear...and I meant to follow up with the presenter later, but I didn't. My bad.
The paper on the SSH-based toolkit was interesting, but it seemed complex; from what I could gather, you SSH'd to a machine, then forwarded connections to (say) POP or SMTP over the tunnel to a daemon at the other end, which would then forward it to the right destination. It kept seeming kludgy and complicated to me, especially compared to something like authpf plus the usual sort of encryption that should be on (say) POP or SMTP to start with. I asked him about this, and he wasn't familiar with authpf; he did say it was similar to another sort of tool, which I didn't write down in my notes. I'm guessing that I missed something.
With that the conference was over for the day; my roommate used my CD to install Ubuntu on his laptop (I knew bringing it along would come in handy!).
Monday morning:
I've seen the rains of the real world come forward on the plains
I've seen the Kansas of your sweet little myth...
I'm half-drunk on babble you transmit
Through your true dreams of Wichita.
"True Dreams of Wichita", Soul Coughing
This morning I had the SELinux tutorial, held by Rik Farrow. I took a moment to shake hands with him and tell him that ;login: magazine, like, changed my life, man, you know? If you haven't picked up copies of that magazine/journal, you owe it to yourself to do so. (And if you have and you agree with me, send him an email -- he usually only gets email as editor when there's a problem.)
Matt was there, as was Jay, who I met back in 2006.
The course was quite interesting. Some choice bits:
"How many of you are using SELinux?" (Two hands) "How many of you have disabled SELinux?" (a hundred hands and six tentacles; yes, even Cthulhu disables SELinux) "See, that's why I came up with this course; I kept seeing instructions that started with 'Disable SELinux' and I wanted to know why."
Telling Matt about Jay's firewall testing script.
Me: So how do the big guys test their firewall changes?
Matt: I dunno...probably separate routers, duplicate hardware...
Me: Probably golden coffee cup holders, too.
Matt: Jerks.
You don't write SELinux policy. SELinux policy is hard. It's NP-complete and makes baby Knuth cry. Instead, you use what other people have written, and make use of booleans to toggle different bits of policy.
However, the SELinux policy is big and only getting bigger. There are something like 85,000 or more rules in recent versions of RHEL/CentOS. This is very close to RF's rule of thumb that a really, really smart and experienced person, who's been intimately involved in its creation, can only comprehend about 100,000 lines of code. This worries him.
Also, the problem of using SELinux is complicated by a lack of up-to-date documentation; like everything else it's a fast-moving target, and a book published in 2007 is now half out-of-date.
But this should not stop you from using SELinux now; it's handy, it's here, get used to it. Example of SELinux stopping ntpd from running /bin/bash; the SELinux audit file was the only sign.
"In a multi-level secure system, files tend to migrate to higher security levels, and the system becomes less unusable. But that's beyond the scope of this class."
(On programs with long histories of serious security problems) "Flash is the Sendmail of -- what do we call this decade? the naughts?"
(On the difficulty of trying to decode SELinux audit logs) "It says the program 'local' had a problem. 'Local'. What the heck is that? Part of Postfix. Oh, good. Thanks for the descriptive name, Wietse."
Something I hope to quiz him further on: "Most Linux systems have a single filesystem." Really?
During the break I met a guy who works with the Norwegian Meteorological service. This was interesting. He's got 250TB in production right now, and increasing CPU power means that their models can increase their spatial resolution, which means increasing (doubling?) their storage requirements. He talked briefly about running into problems with islands of storage, but I got distracted before I could quiz him further...
...by his story of building a new server room where they were capturing the waste heat and using it to heat the building. Interesting; what kind of contribution would it be making to the overall heating budget? Probably not much, but it all just goes on the grid anyhow, like the hot water from the garbage dump. What?
Turns out that there is a city-wide network of hot-water pipes that collects heat from, among other places, water heaters powered by waste methane from rotting garbage. So they don't use the methane to make electricity and dump it in the electrical grid; they use it to heat hot water and dump that in the hot water grid, consisting of insulated water pipes buried in the ground, which places around the city (and beyond!) will use. We've got what you could call a steam grid at UBC and probably other universities, but I'd never thought of doing this city-wide.
Oh, and he signed my LISA card, which was the second time he got asked today; he was wearing a LISA t-shirt and so he was fair game.
At lunch I buttonholed Jay a bit. I asked him about his coworker's firewall unit testing scheme. He said he's no longer working at that place, but it ended up being a lot less useful than they thought it would be. When I asked why, he said that 90% worked but 10% didn't; that 10% was things like network isolation (to avoid problems with using real IP addresses), and the fact that the interface to the three machines was QEMU serial connections...less than ideal.
The conversation shifted to firewalling, and another guy who was there mentioned that he loved OpenBSD's pf, but had to use iptables because of driver problems that prevented getting full performance out of 10GigE NICs with OpenBSD. Jay said they'd looked at the same problem at his place o' work, and in his words "It was cheaper to throw 8 GigE NICs in a box and pay someone to make Linux interface bonding not suck."