It took decades to find the first Earth-sized planet. Now, astronomers have found seven of them in one fell swoop! They orbit the cool (literally cool) red dwarf star TRAPPIST-1, 40 light years away in the constellation Aquarius. Using ground- and space-based telescopes, astronomers detected each planet as it passed in front of its parent star, causing the star's light to dip temporarily.
According to a paper appearing earlier this week in the journal Nature, three of the planets lie in the habitable zone and might host lakes or even oceans on their surfaces, giving them the potential to support life. The system has both the largest number of Earth-sized planets yet found and the largest number of worlds that could support surface water.
It would be tempting to label the new worlds Doc, Grumpy, Happy, Sleepy, Bashful, Sneezy, and Dopey after the Seven Dwarfs in Snow White, but that’s just my wishful thinking. For now, they’re labelled, like all new exoplanets, after their parent star: TRAPPIST-1b, c, d, e, f, g and h, in order of increasing distance. Astronomers found that at least the inner six planets are comparable in both size and temperature to the Earth. The seventh and most distant, also similar in size to Earth, appears too cold for liquid water but may very well sparkle with ice.
“This is an amazing planetary system — not only because we have found so many planets, but because they are all surprisingly similar in size to the Earth!” wrote lead author Michaël Gillon of the STAR Institute at the University of Liège in Belgium. Of the 3,583 alien planets known, this system is one of the most exciting and unique. TRAPPIST-1e, f and g all orbit within the star’s habitable zone, a record finding.
The host star is interesting in its own right. With just 8% the mass of the Sun, TRAPPIST-1 is a tiny thing, only a bit bigger than the planet Jupiter, and though relatively near to Earth, it’s extremely faint. That’s good news for its planets, all of which orbit very close to the central fire. Indeed, the entire system would easily squeeze within the orbit of our solar system’s closest planet, Mercury. TRAPPIST-1’s small size and low temperature mean that the energy input to its planets is similar to that received by the inner planets in our solar system.
Given such rich targets, the next step will be to search for signs of life. Already the Hubble Space Telescope is being trained on each of these orbs to search for atmospheres. Might we find some of life’s well-known byproducts: methane, oxygen, carbon dioxide? Stay tuned!
In the span of a few months in 1980, more than 100,000 Cuban immigrants arrived in Miami. So what happened to Florida's economy with all these new people coming in? And what can we learn from it?
(Image credit: Santi Palacios/AP)
Thanks to Imre Vadasz, the virtio driver in DragonFly now has PCI MSI-X support. This should help performance in virtual machines, though I say that on principle, not with any actual numbers to back it up.
For months now, protesters have lived in tents and teepees during the frigid North Dakota winter, opposing the construction of the Dakota Access Pipeline. In that time, construction was halted by the Obama administration, then re-started by the Trump administration. Recently, state officials ordered the group of Native Americans and other activists from around the country to evacuate the Oceti Sakowin camp, located on federal land, due to impending spring floods. The deadline to evacuate is today, February 22, at 2 pm. Just ahead of the deadline, some protesters set fire to several tents and other structures that remained. Some campers have now left, but others say they will remain and defy any orders to leave.
The red dwarf known as TRAPPIST-1 could not have produced a more interesting scenario. Today we learn that the star, some 40 light years out in the constellation Aquarius, hosts seven planets, all of which turn out to be comparable to the Earth in terms of size. Moreover, these worlds were discovered through the transit method, which gives us their radii directly; mass estimates, drawn from the planets’ mutual gravitational interactions, fill in the rest, so we have size and density information for these worlds. Today’s report in Nature tells us that three of the planets lie in the habitable zone, and thus could have liquid water on their surfaces.
TRAPPIST-1 b, c, d, e, f, g and h are the worlds in question, and all but TRAPPIST-1h appear to be rocky in composition, based on density measurements drawn from the mass and radius information. Drawing on existing climate models, the innermost planets b, c and d are probably too hot to allow liquid water to exist, while h may be too distant and cold. But the European Southern Observatory is reporting that TRAPPIST-1e, f and g orbit within the star’s habitable zone, leaving us with the possibility of oceans and the potential for life.
Caution compels me to home in on the word ‘potential’ in the above sentence, and also to remind readers that we’ve seen many planets described as being in the habitable zone for which later study made a much less compelling case. Thus I appreciate lead author Michaël Gillon (STAR Institute, University of Liège), whose enthusiasm is evident when he says “This is an amazing planetary system — not only because we have found so many planets, but because they are all surprisingly similar in size to the Earth!” But I also look forward to the close analysis the community gives habitable zone issues and what it will reveal. In particular, let’s see what Andrew LePage comes up with in his own Habitable Zone Reality Check.
My own reservation about habitability: The age of TRAPPIST-1, thought to be in the range of 500 million years, points to a young dwarf of the kind given to flare activity. Here I note a paper from Peter Wheatley (University of Warwick), with Michaël Gillon as one of the co-authors. In “Strong XUV irradiation of the Earth-sized exoplanets orbiting the ultracool dwarf TRAPPIST-1,” Wheatley and team present XMM-Newton X-ray observations of TRAPPIST-1, finding “a relatively strong and variable coronal X-ray source with an X-ray luminosity similar to that of the quiet Sun.” A snip from the paper:
The TRAPPIST-1 system presents a fabulous opportunity to study the atmospheres of Earth-sized planets as well as the complex and uncertain mechanisms controlling planet habitability. Whatever the mechanisms at play, it is clear that these planets are subject to X-ray and EUV irradiation that is many-times higher than experienced by the present-day Earth and that is sufficient to significantly alter their primary and any secondary atmospheres. The high energy fluxes presented here are vital inputs to atmospheric studies of the TRAPPIST-1 planets.
None of that is to downplay the significance of this discovery, but simply to put it in context (it also should remind us how many factors come into play in the word ‘habitability’). Even so, with seven planets in a compact system around this dim red star, we certainly have some interesting real estate to work with. And we’ll certainly have plenty to investigate in a system with multiple transits. TRAPPIST-1 has about 8 percent the mass of the Sun. To be in the habitable zone here, a planet needs to be close to the parent star — indeed, the planetary orbits around TRAPPIST-1 are not much larger than what we find among Jupiter’s larger moons, and much smaller than the orbit of Mercury in our own system.
That means the transits are deep, since the planets are close to a very small star. Gillon and co-author Amaury Triaud (University of Cambridge) worked with space and ground instruments to make this detection, which follows up their original discovery of three Earth-sized planets there, announced in 2016. Data from the TRAPPIST-South telescope at La Silla were complemented by the Very Large Telescope at Paranal, the Spitzer Space Telescope, and several other ground-based instruments in the course of these observations.
The news conference on the TRAPPIST-1 findings goes online just as I publish this, and I’m sure we’ll have more to say about this fascinating system in short order.
To add or remove a tag from multiple headlines in an org-mode file, select a region containing multiple headlines and then run M-x org-change-tag-in-region. You’ll be prompted for a tag and asked if you want to add it to or remove it from the selected headlines.
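The same operation is scriptable. A minimal sketch, assuming a recent Org where `org-change-tag-in-region` takes the region bounds, a tag name, and an "off" flag (non-nil to remove the tag rather than add it); the tag name here is just an example:

```
;; Add the tag "review" to every headline in the active region.
;; Pass t as the final argument to remove the tag instead.
(org-change-tag-in-region (region-beginning) (region-end) "review" nil)
```

Handy if you want to fold a bulk re-tagging step into a larger cleanup function rather than doing it interactively.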
You’ll want to see the news conference scheduled by NASA at 1300 EST (1800 UTC) today, an exoplanet finding of considerable interest to Centauri Dreams readers (I’ll have more on this later in the day). The event will air live on NASA Television and the agency’s website. Links available here.
* Thomas Zurbuchen, associate administrator of the Science Mission Directorate at NASA Headquarters in Washington
* Michael Gillon, astronomer at the University of Liege in Belgium
* Sean Carey, manager of NASA’s Spitzer Science Center at Caltech/IPAC, Pasadena, California
* Nikole Lewis, astronomer at the Space Telescope Science Institute in Baltimore
* Sara Seager, professor of planetary science and physics at Massachusetts Institute of Technology, Cambridge
A Reddit AMA (Ask Me Anything) about exoplanets will be held following the briefing at 1500 EST (2000 UTC) with scientists available to answer questions in English and Spanish.
These days, it's rare that we learn something new from the Snowden documents. But Ben Buchanan found something interesting. The NSA penetrates enemy networks in order to enhance our defensive capabilities.
The data the NSA collected by penetrating BYZANTINE CANDOR's networks had concrete forward-looking defensive value. It included information on the adversary's "future targets," including "bios of senior White House officials, [cleared defense contractor] employees, [United States government] employees" and more. It also included access to the "source code and [the] new tools" the Chinese used to conduct operations. The computers penetrated by the NSA also revealed information about the exploits in use. In effect, the intelligence gained from the operation, once given to network defenders and fed into automated systems, was enough to guide and enhance the United States' defensive efforts.
This case alludes to important themes in network defense. It shows the persistence of talented adversaries, the creativity of clever defenders, the challenge of getting actionable intelligence on the threat, and the need for network architecture and defenders capable of acting on that information. But it also highlights an important point that is too often overlooked: not every intrusion is in service of offensive aims. There are genuinely defensive reasons for a nation to launch intrusions against another nation's networks.
Other Snowden files show what the NSA can do when it gathers this data, describing an interrelated and complex set of United States programs to collect intelligence and use it to better protect its networks. The NSA's internal documents call this "foreign intelligence in support of dynamic defense." The gathered information can "tip" malicious code the NSA has placed on servers and computers around the world. Based on this tip, one of the NSA's nodes can act on the information, "inject[ing a] response onto the Internet towards [the] target." There are a variety of responses that the NSA can inject, including resetting connections, delivering malicious code, and redirecting internet traffic.
Similarly, if the NSA can learn about the adversary's "tools and tradecraft" early enough, it can develop and deploy "tailored countermeasures" to blunt the intended effect. The NSA can then try to discern the intent of the adversary and use its countermeasure to mitigate the attempted intrusion. The signals intelligence agency feeds information about the incoming threat to an automated system deployed on networks that the NSA protects. This system has a number of capabilities, including blocking the incoming traffic outright, sending unexpected responses back to the adversary, slowing the traffic down, and "permitting the activity to appear [to the adversary] to complete without disclosing that it did not reach [or] affect the intended target."
These defensive capabilities appear to be actively in use by the United States against a wide range of threats. NSA documents indicate that the agency uses the system to block twenty-eight major categories of threats as of 2011. This includes action against significant adversaries, such as China, as well as against non-state actors. Documents provide a number of success stories. These include the thwarting of a BYZANTINE HADES intrusion attempt that targeted four high-ranking American military leaders, including the Chief of Naval Operations and the Chairman of the Joint Chiefs of Staff; the NSA's network defenders saw the attempt coming and successfully prevented any negative effects. The files also include examples of successful defense against Anonymous and against several other code-named entities.
I recommend Buchanan's book: The Cybersecurity Dilemma: Hacking, Trust and Fear Between Nations.
The aurora’s been practically out of sight the past couple months unless you live in central Canada and points north. For U.S. observers, only a couple of weak displays — usually a mustache of pale green light just above the northern horizon — have been sighted. The scene might get livelier soon with a minor G1 storm forecast for this evening. Skywatchers living in the northern states with clear skies in the forecast should step outside tonight, allow 5-10 minutes for night vision to take hold and then scan the sky for a greenish arc or two a fist or more above the northern horizon.
This latest opportunity for northern lights comes to us courtesy of a big coronal hole (see above) that was aimed directly at the Earth earlier this week. If the material from the hole sweeps by Earth later today with its magnetic “compass” pointing south, it could link into our planet’s magnetic field and be funneled into the upper atmosphere, where the particles would excite air molecules and bring forth a modest display of northern lights.
Dates of planned rover activities described in these reports are subject to change due to a variety of factors related to the Martian environment, communication relays and rover status.
I’m good at Monopoly, or at least I used to be. People stopped playing with me after a while.
It’s not like I’d rub it in when I won. I pride myself on not gloating. It’s just that the game of Monopoly sort of has rubbing it in built into the game. It’s rare to lose quickly at Monopoly. Usually, getting beaten takes about an hour. It’s a well-known feature of the game, so much so that on the rare occasion that someone does get decimated in one or two turns, it’s all the more embarrassing for them.
Hey, I was temporarily without a 10mm eyepiece (long story) and I have been sufficiently happy with the Bresser 20mm 70-degree that came with my AR102S Comet Edition that I plunked down thirty bucks for the 10mm version (sale price, down from $50). It was only my second-ever purchase from OpticalInstruments.com (after the Bresser Spektar spotting scope a couple of years ago), but they rewarded my ‘ongoing support’ with this deal. You can use this link and unique code:
to get $5 off your first purchase, and if you do, I’ll get a $5 kickback. As far as I know, there is no limit to how often this can be used by people making their first purchase there. So if you’ve been tempted by something at that store, here’s your chance to save a little dough. Happy shopping!
Planet is pleased to announce that we have once again partnered with Geoplex, a mapping and GIS systems company based in Australia, to deliver timely, high resolution satellite image data. We’re especially excited about this effort because a portion of the data to be used by the Queensland government, within its Department of Natural Resources and Mines (DNRM), will be made available to the public under a Creative Commons license.
A 2016 summer mosaic of Queensland, Australia.
While satellite imagery over the State of Queensland was previously captured only once every couple of years, this partnership between Planet and Geoplex will provide DNRM with seasonally updated mosaics multiple times in a single year. The first-ever stream of quarterly-updated data will help decision-makers at the state and agency level make timely, better-informed decisions.
Geoplex was one of Planet’s first international partners, and we could not be more grateful for their continued loyalty and collaboration in our mission to make data about our planet visible, accessible and actionable.
I don’t do guest posts here. This blog is my private soapbox. You want to scream into the void? Go get your own soapbox.
Yesterday, I was privy to a private email message discussing a topic I care deeply about. I contacted the author and said “You really need to make this public and give this a wider audience.” His response boiled down to “if I wanted it to get a wider audience, I was welcome to do so myself.” So here’s my first ever guest post, from Jordan K Hubbard, one of the founders of the FreeBSD Project. While this discussion focuses on FreeBSD, it’s applicable to any large open source project.
The email discussion was about the FreeBSD Project recently giving someone the boot. I’m not linking to who it was; you can dig up the controversy elsewhere.
I did my first install of FreeBSD, version 2.0.5, in late 1995. I started reading FreeBSD mailing lists shortly afterwards. Allow me to provide some context to say that when jkh says he was a dick: yes. He was.
Like any good member of the press, I’ll give my anchor commentary after the footage. Again, it’s my soapbox.
My, what an interesting thread this has been, as well as an interesting (and probably controversial) recent talk by Benno on much the same topic.
I’m known for my long and overly verbose PhD thesis style postings, so I’ll try to make this one short(er) with a few pithy points:
1. Some of the FreeBSD project’s most energetic, motivated, and capable people have also, when viewed through the long lens of history, been total dicks, at least in electronic form. They just can’t seem to help themselves from coming off that way, one person’s “passionate concern for topic X” being another person’s “totally over-the-top behavior concerning topic X”, with neither side usually having the benefit of all the information while they form conclusions about which of the two it is.
2. The project needs driven individuals capable of achieving “10X productivity” (a software industry term, not mine) in driving various agendas, inspiring others by their progress and allowing important opportunities for project growth to be seized rather than squandered, just as it needs nice, cooperative, team-players who go out of their way to avoid stepping on toes or driving more junior, perhaps easily intimidated, volunteers away from the project. It would also be awesome to find both attributes in the same people, obviously, but that goal is usually more aspirational than one immediately (if ever) achieved.
So, how to reconcile these two seemingly fundamentally opposed goals in project membership management? “What is core going to do about it?”*
First, let me be very honest: I can only speak to you from the perspective of someone who has committed many of the sins in paragraph 1.
I have said many things I subsequently regretted. I have engaged in furious, pitched battles over topics that subsequently proved to be almost nonsensically trivial. I have definitely alienated people. The fact that I have Asperger’s syndrome also made it easy for me to be both highly driven and insensitive to other people’s feelings at the same time (it helps when you don’t even notice them) but that’s certainly no excuse because I have also learned, along the way, to grasp intellectually what I did not always grasp instinctively: Just don’t be a dick. Take a deep breath, swallow the irritation that is often my first response, and try to figure out another way of expressing myself that will lead to a better long-term outcome with less friction with my colleagues / bosses / end-users.
Does that work 100%? Heck no, I’m still a work in progress, but I’m definitely better than I was 22 years ago, and pretty much the only reason that I’m better is that people took the time to talk to me about being a dick. They sent me (oh so many) private emails saying, in effect, “Dude! Really??” They called me on the phone when it was clear I really needed a Healthy Dose of Perspective and email just wasn’t doing the job. All of my fellow developers and colleagues (and yes, occasionally HR departments) have collectively conspired to slap me across the face with the Trout of Truth when it was clear I was going, or had gone, off the rails where interpersonal communications and decision making skills were concerned. I have, in short, learned some hard lessons about being more responsible for my actions on a number of levels and I’m glad I managed to stick around long enough to learn them. I am, as I said, a work in progress.
* That’s where all of you come in. You can’t just say “What is Core Doing About It?” when it comes to addressing problems like this, because by the time Core gets involved, it’s already too late. The damage is done and probably irrevocably so because it’s been done over a long period of time. People complained and complained and finally core wearily stepped in and pulled the trigger. Bang. Too late for anything else.
If you want better outcomes than this, then you simply need to start mentoring one another. You need to take extra time to call your fellow developers on the phone / Skype / WhatsApp / whatever works when it’s clear one of them is having a bad day, or escalating a situation that doesn’t warrant escalation, or simply being a dick when they don’t need to be (and probably don’t even realize they’re being). We had that kind of close and frequent communication a lot in the early days of the project, and I absolutely know that it held things together through some rather tempestuous times. It’s also no excuse to say that the project is bigger and has outgrown this now, either, because it only takes one person to call one other person at the right time for ad-hoc mentorship to work. Don’t just wait until you see someone at the next conference. When it’s clear they are struggling to interact successfully with others in the here and now, reach out, just as so many reached out to me!
Please also take my word for it when I say that a truly successful FreeBSD project will continue to need driven people, people who are often tempted to drive right over others who won’t get out of their way or otherwise tend to show “less than perfect patience”, just as it will continue to need quieter folks who are content to follow someone else’s vision, assuming that there is one to follow, and instinctively do a better job of getting along with others. Each “type” can benefit and learn from the example the other provides, assuming there is a real commitment to doing so.
I’ll leave you with an analogy: This is like a marriage. If both in the couple are very passive, then that will probably be a long-lived but rather boring relationship where both ultimately wind up just counting the days until death comes for them. If both are fiery and impetuous, the relationship will probably be exciting but equally short-lived. The most successful marriages are usually some combination of the two extremes, the worst impulses of one being kept reasonably in check while the other gets to experience new and exciting things they just wouldn’t have thought to do (or had the will to do) on their own. Assuming that both also commit to communicating on a frequent basis and don’t just assume Everything Is Fine, it works.
What kind of marriage do you folks want?
Jordan is absolutely right here.
The open source community has some incredibly smart people in it. You folks are brilliant.
When Jordan says that he’s a “work in progress,” though, that’s applicable to every one of us. Including myself.
I won’t say that the open source community is full of people with problems like ADD, Aspergers, and so on. I will say that of the adults I know who happen to have these conditions, I met every single one of them through the open source community. I strongly suspect they gravitate there because computers are easy compared to people.
Other groups have their own issues. The writers I know run really heavy into depression and social anxiety. (I’m a writer and a techie. Thanks to my writing career and Amazon Prime’s free two-day shipping, I almost never leave the house. That’s just best.)
Brilliance is great. I admire really really smart people.
But to belong to a community, a person must be able to work with that community. I’m using “must” in the RFC sense here. It’s an absolute, non-negotiable requirement.
BSD, and open source in general, is full of brilliant but incomplete people. Everyone is incomplete. In open source, the incompleteness is often in social skills and the understanding of how to behave.
Social correction, and the establishment of social norms, comes only from the community. It’s entirely bottom-up. One on one.
While you can go to a counselor to help develop those skills, the best advice comes from peers who have been in your exact situation, who have faced those problems, and who have developed those skills.
Are you good at communicating in your open source community? You have another contributor you like, but who has social problems? Unofficially mentor them.
Are you an open source contributor who keeps getting messages from people saying something like “Dude, that’s really messed up,” or “You were really inappropriate here, stop it,” or similar? One message might not be a big deal. But if you keep getting them, it’s a sign that you’re missing a skill. A skill that can be learned. If someone you get along with offers to help: listen.
And it’s far better done via voice than electronic text. Text communication strips vital context, and it’s much much slower than voice. If a person has problems communicating via email, more email isn’t going to solve it.
One of the hardest things to do is listen when someone calls you a dick. Yes, it’s happened to me. When it comes from people I respect, I listen. It makes me less incomplete.
And if Jordan can learn to not be a dick, anyone can.
So why am I not naming the person who got booted from FreeBSD? Because he, like everyone else, is an incomplete person who lacks a particular skill. I hope he will develop that skill. And I don’t want a blog post from 2017 to hurt his chances of getting a job in 2037, or even 2018, when he’s had an opportunity to add those skills.
You have the power to make that brilliant but poorly socialized contributor a better community member. Even if that brilliant member is you.
Over the past few days, thousands and thousands of citizens around the world marched through the streets of cities and towns, voicing their opposition to, or support for, dozens of issues. From anti-Trump protests in the U.S., U.K., and Mexico, to anti-brutality demonstrations in France, to a pro-law-enforcement march in Hong Kong and a massive pro-refugee demonstration in Spain, and much more. Gathered here are just a handful of images of the varied unrest that erupted into public protests worldwide this weekend.
Charl Botha has posted a handy guide to using a paper-specific bibliography file with Org mode. His use case is roughly that he wants to manage his bibliography with Zotero but wants to use Org mode and John Kitchin’s org-ref package to write his papers. He also wants a small, paper-specific bibliography file that he can keep with the paper source.
That turns out to be pretty easy to do. You configure Org to use latexmk in your init.el file, add a few lines of headers to your paper source, and everything works as you’d expect. Botha has a small example that shows the entire process. See his post for the details.
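The init.el side amounts to a couple of lines. A sketch, assuming the standard ox-latex exporter (the variable name `org-latex-pdf-process` is Org’s own; the exact latexmk flags are my guess at a typical setup, not necessarily Botha’s):

```
;; Tell Org's LaTeX exporter to drive the build with latexmk,
;; which runs bibtex/biber and re-runs pdflatex as many times as needed.
(setq org-latex-pdf-process '("latexmk -pdf -bibtex -f %f"))
```

The nice thing about delegating to latexmk is that Org no longer needs to know how many passes your bibliography requires.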
For another take on the Org mode/Zotero workflow, see this comment by Muad Abd El Hay on a previous post of mine about Zotero workflows. It also uses org-ref but exports the entire bibliography from Zotero every time the database is changed.
Here’s one of the reasons to have your own permanent server: The New York Times has a daily feature called, not surprisingly, “The Daily”. It’s a short 15-20 minute news segment, ready by 6 AM. It’s available through Google Play Music or iTunes, but I leave for work by 6:15, and I don’t want to use up cell data downloading something that should arrive on my phone just before I leave the house. Of course, there’s no obvious way to tell Google Play, “I know it’s there; go get it right now”. I don’t know the iPhone experience, but I imagine it’s the same. I want to download on my time, not on Google or Apple’s schedule.
Luckily, there’s an RSS feed for this podcast. That, plus this simple script on my DragonFly system, means I can pull it down whenever I’m ready:
fetch -o - http://feeds.podtrac.com/zKq6WZZLTlbM | grep enclosure | cut -d '"' -f2 | xargs fetch -m
So, it’s a matter of running that script, and syncing off my own local storage, on my own schedule. FolderSync Lite will happily sync back to my phone using sftp.
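The extraction stage is easy to sanity-check offline. A sketch against a made-up feed fragment (the XML below is illustrative, not the Times’ actual markup): grep pulls the enclosure lines, and cut takes the second double-quote-delimited field, which is the url attribute as long as url is the first quoted attribute on the line.

```shell
#!/bin/sh
# Recreate the pipeline's parsing stage against a local sample feed.
cat > /tmp/sample-feed.xml <<'EOF'
<item>
  <title>Episode 1</title>
  <enclosure url="https://example.com/ep1.mp3" length="123" type="audio/mpeg"/>
</item>
EOF

# Same grep | cut stages as the one-liner, minus the network fetches.
grep enclosure /tmp/sample-feed.xml | cut -d '"' -f2
# prints: https://example.com/ep1.mp3
```

If a feed ever put another quoted attribute before url, the cut field index would need adjusting; a real XML parser is the robust fix, but for a single known feed the one-liner does the job.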
Amid the hustle and bustle of the RSA Security Conference in San Francisco last week, researchers at RSA released a startling report that received very little press coverage relative to its overall importance. The report detailed a malware campaign that piggybacked on a popular piece of software used by system administrators at some of the nation’s largest companies. Incredibly, the report did not name the affected software, and the vendor in question has apparently chosen to bury its breach disclosure. This post is an attempt to remedy that.
The RSA report detailed the threat from a malware operation the company dubbed “Kingslayer.” According to RSA, the attackers compromised the Web site of a company that sells software to help Windows system administrators better parse and understand Windows event logs. RSA said the site hosting the event log management software was only compromised for two weeks — from April 9, 2015 to April 25, 2015 — but that the intrusion was likely far more severe than the short duration of the intrusion suggests.
That’s because in addition to compromising the download page for this software package, the attackers also hacked the company’s software update server, meaning any company that already had the software installed prior to the site compromise would likely have automatically downloaded the compromised version when the software regularly checked for available updates (as it was designed to do).
RSA said that in April 2016 it “sinkholed” or took control over the Web site that the malware used as a control server — oraclesoft[dot]net — and from there they were able to see indicators of which organizations might still be running the backdoored software. According to RSA, the victims included five major defense contractors; four major telecommunications providers; 10+ western military organizations; more than two dozen Fortune 500 companies; 24 banks and financial institutions; and at least 45 higher educational institutions.
RSA declined to name the software vendor whose site was compromised, but said the company issued a security notification on its Web site on June 30, 2016 and updated the notice on July 17, 2016 at RSA’s request following findings from further investigation into a defense contractor’s network. RSA also noted that the victim software firm had a domain name ending in “.net,” and that the product in question was installed as a Windows installer package file (.msi).
Using that information, it wasn’t super difficult to find the product in question. An Internet search for the terms “event log security notification april 2015” turns up a breach notification from June 30, 2016 about a software package called EVlog, produced by an Altair Technologies Ltd. in Mississauga, Ontario. The timeline mentioned in the breach notification exactly matches the timeline laid out in the RSA report.
As far as breach disclosures go, this one is about the lamest I’ve ever seen given the sheer number of companies that Altair Technologies lists on its site as subscribers to eventid.net, an online service tied to EVlog. I could not locate a single link to this advisory anywhere on the company’s site, nor could I find evidence that Altair Technologies had made any effort via social media or elsewhere to call attention to the security advisory; it is simply buried in the site. A screenshot of the original, much shorter, version of that notice is here.
Perhaps the company emailed its subscribers about the breach, but that seems doubtful. The owner of Altair Technologies, a programmer named Adrian Grigorof, did not respond to multiple requests for comment.
“This attack is unique in that it appears to have specifically targeted Windows system administrators of large and, perhaps, sensitive organizations,” RSA said in its report. “These organizations appeared on a list of customers still displayed on the formerly subverted software vendor’s Web site. This is likely not coincidence, but unfortunately, nearly two years after the Kingslayer campaign was initiated, we still do not know how many of the customers listed on the website may have been breached, or possibly are still compromised by the Kingslayer perpetrators.”
It’s perhaps worth noting that this isn’t the only software package sold by Altair Technologies. An analysis of Eventid.net shows that the site is hosted on a server along with three other domains, eventreader.com, firegen.com and grigorof.com (the latter being a vanity domain of the software developer). The other two domains — eventreader.com and firegen.com — correspond to different software products sold by Altair.
The fact that those software titles appear to have been sold and downloadable from the same server as eventid.net (going back as far as 2010) suggests that those products may have been similarly compromised. However, I could find no breach notification mentioning those products. Here is a list of companies that Altair says are customers of Firegen; they include 3M, DirecTV, Dole Food Company, EDS, FedEx, Ingram Micro, Northrop Grumman, Symantec and the U.S. Marshals Service.
RSA calls these types of intrusions “supply chain attacks,” in that they provide one compromise vector to multiple targets. It’s not difficult to see from the customer lists of the software titles mentioned above why an attacker might salivate over the idea of hacking an entire suite of software designed for corporate system administrators.
“Supply chain exploitation attacks, by their very nature, are stealthy and have the potential to provide the attacker access to their targets for a much longer period than malware delivered by other common means, by evading traditional network analysis and detection tools,” wrote RSA’s Kent Backman and Kevin Stear. “Software supply chain attacks offer considerable ‘bang for the buck’ against otherwise hardened targets. In the case of Kingslayer, this especially rings true because the specific system-administrator-related systems most likely to be infected offer the ideal beachhead and operational staging environment for system exploitation of a large enterprise.”
A copy of the RSA report is available here (PDF).
Update, 3:35 p.m. ET: I first contacted Altair Technologies’ Grigorof on Feb. 9. I heard back from him today, post-publication. Here is his statement:
“Rest assured that the EvLog incident has been reviewed by a high-level security research company and the relevant information circulated to the interested parties, including antivirus companies. We are under an NDA regarding their internal research though the attack has already been categorized as a supply chain attack.”
“The notification that you’ve seen was based on their recommendations and they had our full cooperation on tracking down the perpetrators. It’s obviously not as spectacular as a high visibility, major company breach and surely there wasn’t anything in the news – we are not that famous.”
“I’m sure a DDoS against our site would remain unnoticed while the attack against your blog site made headlines all over the world. We also don’t expect that a large organization would use EvLog to monitor their servers – it is a very simple tool. We identified the problem within a couple of weeks (vs months or years that it takes for a typical breach) and imposed several layers of extra security in order [to] prevent this type of problem.”
“To answer your direct question about notifications, we don’t keep track on who downloads and tries this software, therefore there is no master list of users to notify. Any anonymous user can download it and install it. I’m not sure what you mean by ‘you still haven’t disclosed this breach’ – it is obviously disclosed and the notification is on our website. The notification is quite explicit in my opinion – the user is warned that even if EvLog is removed, there may still be other malware that used EvLog as a bridgehead.”
My take on this statement? I find it to be wholly incongruent. Altair Technologies obviously went to great lengths to publish who its major customers were on the same sites it was using to sell the software in question. Now the owner says he has no idea who uses the software, and his suggestion that it was never intended for large organizations sits oddly alongside those customer lists.
Finally, publishing a statement somewhere in the thick of your site and not calling attention to it on any other portion of your site isn’t exactly a disclosure. If nobody knows that there’s a breach notice there to find, how should they be expected to find it? Answer: They’re not, because it wasn’t intended for that purpose. This statement hasn’t convinced me to the contrary.
Update, 11:13 p.m. ET: Altair Technologies now has a link to the breach notification on the homepage for Evlog: http://www.eventid.net/evlog/
Have you ever seen a super-bright meteor and at the same time heard hissing or rustling sounds? Some people claim to hear the sizzle of a bright fireball, comparing it to the sound of frying bacon. I’ve never heard any sounds associated with bright fireballs. Then again, I’ve never seen one that’s approached the moon or sun in brilliance. That’s apparently what you need to break the “sound barrier.”
If you think about it for a second, there’s no way you can simultaneously watch a bright meteor and hear the sound it’s making as it tears through the atmosphere. While the light travels to your eyes in a tiny fraction of a second, sound travels much more slowly, requiring at least several minutes to arrive from even the closest fireball.
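The delay is easy to put numbers on. A quick sketch, with my own assumed figures: meteors typically ablate at roughly 80–100 km altitude, and sound in air travels at a few hundred meters per second.

```python
# Rough check on the light-vs-sound delay from a fireball overhead.
# Assumptions (mine, not from the article): ablation altitude of
# 80 km for a close fireball, and an average sound speed of 320 m/s
# over the path (it's ~343 m/s at sea level, slower in cold air aloft).

altitude_m = 80_000        # distance to a low, nearby fireball
speed_of_sound = 320.0     # m/s, rough path average
speed_of_light = 3.0e8     # m/s

sound_delay = altitude_m / speed_of_sound   # seconds
light_delay = altitude_m / speed_of_light   # seconds

print(f"light arrives in {light_delay * 1000:.2f} ms")
print(f"sound arrives in {sound_delay / 60:.1f} minutes")
```

The light reaches you in a fraction of a millisecond; the sound takes on the order of four minutes, which is why a simultaneous sight-and-sound experience can’t be ordinary acoustics.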
So how might we explain that hissing and sizzling? Meteors obviously give off lots of visible light, but they also emit very low frequency (VLF) radio waves. While these are still beyond the range of human hearing, they travel at the speed of light toward the observer and interact with familiar materials. Plant foliage such as pine needles, thin wires, aluminum foil and even dry, frizzy hair can act as a “transducer,” a physical object that can be set to vibrating by radio waves.
Their vibrations create reverberations in the air that we hear as the buzzing or rustling of brilliant fireballs. The phenomenon is called electrophonics, and it may also explain why the aurora is audible to some people — though not me, of course, despite having stood under some monster displays! Still, laboratory experiments have shown that VLF radio waves beamed at a variety of objects produce sounds that are easily heard.
Now, new research by Richard Spalding and colleagues at Sandia National Laboratories in Albuquerque, New Mexico, offers an alternative explanation for why we may sometimes hear meteors. They discovered that fireballs have strong, short-lived brightness fluctuations (on the order of milliseconds) that can rapidly warm common materials like hair, clothing, and leaves. These in turn heat the surrounding air, creating small pressure waves that our ears pick up as popping and hissing sounds. The fast-flashing flares have been recorded in nearly all bolides (large, exploding meteors) observed by the Czech Fireball Network.
This isn’t the first time that light has been observed to affect sound, called the photoacoustic effect. Telephone inventor Alexander Graham Bell and colleagues observed it in 1880 when they heard a tone in certain materials by alternately exposing them to and then blocking them from sunlight. For a wonderful demonstration on how to transmit sound using sunlight, learn how a Photophone works in this youtube video.
While the strongest signals produced by materials touched by a fireball’s light are of low frequency and not especially easy to hear, they still fall within the range of human hearing. The best materials for producing photo-acoustical sound are “dark paint, fine hair, leaves, grass, and dark clothing,” according to the researchers, who tested them using white LED lights inside a plastic dome.
Their conclusion was this: An observant person in a quiet environment surrounded by the right materials could hear photo-acoustically induced sound from a fireball of magnitude −12 or brighter (about the brightness of the Moon a day before full, or brighter), assuming it emits light that can be converted into sound by those materials.
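To see how bright magnitude −12 really is, recall that the astronomical magnitude scale is logarithmic: a difference of 5 magnitudes is a factor of 100 in flux. The comparison below is my own; the full Moon’s apparent magnitude of about −12.7 is the standard figure.

```python
# Flux ratio between two objects from their apparent magnitudes:
# flux_ratio = 10 ** (-0.4 * (m1 - m2)); 5 magnitudes = 100x in flux.

def flux_ratio(m1, m2):
    """Flux of object 1 relative to object 2, given their magnitudes."""
    return 10 ** (-0.4 * (m1 - m2))

full_moon = -12.7   # standard apparent magnitude of the full Moon
fireball = -12.0    # the researchers' audibility threshold

print(f"mag -12 fireball vs. full Moon: {flux_ratio(fireball, full_moon):.2f}")
```

That works out to roughly half the full Moon’s brightness, consistent with the Moon a day or so before full. A remarkably high bar, which is why so few of us have ever heard a meteor.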
While it’s hard to believe that light from a meteor can heat objects at a distance, the experiments indicate that it’s a real possibility. Fascinating stuff! If you’d like to know more, check out the scientific paper. And now I’ve got to ask — have you ever heard a bright meteor?
Registration is now open for the 2017 Tennessee Valley Interstellar Workshop, which will be held in Huntsville, AL on October 4-6. The title for this year’s conference is “Step By Step: Building a Ladder to the Stars.” The registration page is here, and if you’re thinking of attending, I recommend registering right away, as spaces filled up swiftly the last time around. This year’s TVIW will take place in partnership with the Tau Zero Foundation as well as Starship Century, which has already produced two successful symposia of its own.
Despite its regional name, the Tennessee Valley Interstellar Workshop has become a well received forum for interstellar discussions on a global scale, with speakers and workshop participants well known to Centauri Dreams readers. Registration at this year’s event costs $175, with discounts available for students. Pre-symposium seminars for an additional fee are to be held on Tuesday October 3. This year’s topics are Conflict in Space; Laser Propulsion: An Introduction to Laser Propulsion and Assessment of Relevant Current Technologies; and Human Life in Space – Separating Reality from Wishful Thinking.
I’ve been pleased to attend all of the previous symposia except the last one, which I had to miss because of an untimely bout of the flu. The recent call for papers jogged me into getting my registration in early, as I don’t want to miss two in a row. The text below is taken directly from the submissions page on the TVIW site:
TVIW 2017 Call for Papers, Workshop Tracks, and Posters: The Tennessee Valley Interstellar Workshop (TVIW), in collaboration with Starship Century and Tau Zero Foundation, hereby invites participation in its 2017 Symposium to be held from Wednesday, October 4 through Friday, October 6, 2017, in Huntsville, Alabama. Our Program Committee is seeking proposals for Plenary Papers/Talks, Working Tracks, and Sagan Meetings as well as other content such as posters.
Sagan Meetings are new for TVIW 2017. Carl Sagan famously employed this format for his 1971 conference at the Byurakan Observatory in old Soviet Armenia, which dealt with the Drake Equation. Each Sagan Meeting will invite five speakers to give a short presentation staking out a position on a particular question. These speakers will then form a panel to engage in a lively discussion with the audience on that topic.
Invited Talks are presentations that contain significant results or describe major activities in the field and will be solicited by sponsoring organizations or the conference organizers.
Discussion Groups are for those not participating in the working tracks or Sagan Meetings and offer opportunities for free form discussion of subjects of mutual interest to attendees. They are unstructured and no specific output is expected although it is hoped that these groups might generate teams and/or topics that would lead to future Working Tracks and possible collaborative efforts in the interstellar field. Coffee, pen, and paper will be provided. An expanded list of possible topics will be available each day of the Symposium and anyone wishing to propose a topic is free to do so. Contact David Fields (email@example.com) to suggest topics.
Other Content includes, but is not limited to, posters, displays of art or models, demonstrations, panel discussions, interviews, or public outreach events. Please refer to Appendix 1 for more information and the abstract submission guidelines.
Full information on formats and other structural matters can be found on the Submissions page. If you’re wondering about ‘working tracks,’ TVIW has used these in the past to engage up to four parallel tracks on issues of interstellar import, such as mission targets, propulsion systems, life support and the human factors needed for interstellar exploration. Each working track will be allocated two-hour blocks each day. Proposals for working tracks are still open, with the letter of intent deadline coming up on March 3, and deadline for complete proposals on March 31. TVIW hopes to have four to six working tracks in the 2017 symposium.
I notice that Andrew Siemion (UC-Berkeley), who serves as director of the UC Berkeley Center for the Search for Extraterrestrial Intelligence (SETI) will be speaking in Huntsville. Siemion is also one of the leaders of the Breakthrough Listen Initiative, which under the aegis of the Breakthrough Prize Foundation is conducting the most sensitive search yet for signs of extraterrestrial technology. At TVIW 2017, Siemion will be discussing “The Search for Ourselves Among the Stars,” a look at the past, present and future of SETI activities.
I’ve also recently heard from Kelvin Long, who heads up the Initiative for Interstellar Studies, about a workshop to be held at City Tech, CUNY in New York from June 13-15. The group’s goal is “facilitating real progress on existing problems related to interstellar studies.” This year’s session is to have a propulsion focus, but according to the workshop’s web page, the focus will change with successive meetings as issues arise and concepts change.
Sponsored by the i4IS and the Center for Theoretical Physics (CTP) at City Tech, the workshop is intended as a small gathering with informal conversations and social interactions designed to promote discussion. The deadline for ‘extended abstracts’ is March 15, though this can be extended to the 25th. Early bird registration begins April 1, with regular registration beginning April 17. The advantage of early registration is a discounted fee for attendance ($200 per person); the fee goes up to $250 once regular registration begins.
Here is the group’s overview of the event:
At the start of this new millennium we are faced with one of the greatest challenges of our age: Can we cross the vast distances of space to visit other worlds around other stars? At the end of the last century the idea of interstellar travel was considered the stuff of science fiction. In recent times that has changed, and interstellar flight has received much interest, particularly since the discovery of many planets outside of our Solar System around other stars. Indeed, we now know that an Earth-mass planet, Proxima b, orbits one of our closest stars. In addition, national space agencies and private commercial industry are beginning to turn their attention to the planets and beyond. It is time to start considering the bold interstellar journey and how we might accomplish it. Yet this challenge presents many difficult problems to solve, and who better to address them than the global physics community?
The Institute For Interstellar Studies (I4IS) and the Center for Theoretical Physics (CTP) at City Tech have partnered to bring together some of the best minds in the fields of physics to address some of the fundamental problems associated with becoming an interstellar capable civilisation.
The first day of the workshop is devoted to ‘energetic reaction engines,’ i.e., engines that involve the ejection of matter or energy rearward from the vehicle to generate thrust. This could be electric, plasma, nuclear thermal, fission, fission-fragment, fusion, antimatter catalyzed fusion, antimatter. Day 2 focuses on sails and beamed energy via photons or particle beams, covering laser sails, microwave sails, particle beamers, stellar wind pushers. Day 3 is given over to breakthrough propulsion topics, “an area of technology development that seeks to explore and develop a deeper understanding of the nature of space-time, gravitation, inertial frames, quantum vacuum, and other fundamental physical phenomena.”
Harold ‘Sonny’ White is to chair day 3 of the workshop, with Kelvin Long taking day 1 and CUNY’s Roman Kezerashvili taking day 2. White’s presence will give the opportunity for those interested in his latest EmDrive work to learn and ask questions. I haven’t seen him since we ate cheeseburgers sitting around the swimming pool at the Dallas Starship Congress meeting some years back. I’ve enjoyed being at several conferences with Kelvin, and remember Roman Kezerashvili from the Aosta conference in Italy where I first met him. He’s a rigorous scholar and an engaging conversationalist. It will be interesting to see how this crew finalizes the lineup of presentations for the upcoming event.
Submissions to the workshop are open, and those accepted are to appear in the Journal of the British Interplanetary Society. On the nature of submissions, the group says this:
All submissions should be attempted solutions of existing problems, or at least a strong discussion on the pathway towards a solution. This is a working meeting and audience participation and discussion should be expected in any results. The rule for the workshop is “no solution, no presentation”. Some spaces will be reserved for ‘special observer status’ participation.
For further information on submission format, see Foundations of Interstellar Studies Workshop at City Tech, CUNY.
Still cloudy here, but we got a gap earlier this evening, a persistent sucker hole right over Orion, and I got a whole 10 minutes of observing in. I was using the Bresser AR102S Comet Edition and for eyepieces the 20mm 70-degree that came with it, and my new 28mm RKE from Edmund.
Orion’s belt will just fit in the field of either eyepiece, with Alnitak and Mintaka in the last 5% or so of the field on either side. So the belt turns out to be a good test of edge characteristics. The 28mm RKE is way sharper at the edges, by the way. You might think that its 45-degree apparent field of view would feel positively claustrophobic after the 70-degree field of the Bresser eyepiece.
But it doesn’t, because of the magical floating stars effect. It’s real! It’s one of the most arresting things I have experienced in almost a decade of observing. As your eye gets closer to the eyepiece, you begin to be able to see the image. As you move in until you can see the entire field, the point where the eyepiece barrel disappears from view coincides exactly with the point where you are far enough to see the field stop of the eyepiece. If you hold up right there, you see the image created by the eyepiece floating in space, with a thin ring of unresolved darkness around it, which if you back out a bit will be the eyepiece barrel, and if you move in a bit will be the eyepiece field stop. In either case, the eye relief is great enough that you can still see the rest of the scope in your peripheral vision, past the thin ring of darkness at the edge of the field.
I have never, ever seen anything like this. It is exactly as cool and immersive as the legends have it. I can imagine building a whole observing kit consisting of this one eyepiece and a series of Barlows of various magnifications.
Anyway, if you have been on the fence about this eyepiece like I was, just get it. It’s amazing.
If you are anywhere near KnoxBUG’s meeting place (mid-Tennessee, US), Joe Maloney will be presenting on OpenRC and TrueOS, tomorrow night. See the link for address and times.
Karen Levy has an interesting new article critiquing blockchain-based “smart contracts.” The first part of her title, “Book-Smart, not Street-Smart,” sums up her point. Here’s a snippet:
Though smart contracts do have some features that might serve the goals of social justice and fairness, I suggest that they are based on a thin conception of what law does, and how it does it. Smart contracts focus on the technical form of contract to the exclusion of the social contexts within which contracts operate, and the complex ways in which people use them. In the real world, contractual obligations are enforced through all kinds of social mechanisms other than formal adjudication—and contracts serve many functions that are not explicitly legal in nature, or even designed to be formally enforced.
To review, “smart contracts” are a feature of some blockchain-based systems, which allow an interaction between multiple parties to be encoded as a set of rules which will be executed automatically by the system, so that neither the parties nor anyone else can prevent those rules from being enforced. There are lots of variations on the basic idea, which differ in aspects such as exactly what kind of code is used to program the rules, what kinds of actions can be expressed in a ruleset, and so on.
A simple example is an escrow arrangement, where Alice puts some money into escrow, and the money is released to Bob later if an arbiter Charlie determines that Bob performed some required action; otherwise the money returns to Alice. An escrow mechanism can be encoded as a “smart contract” so that once put into escrow the funds can only be disbursed to Alice or Bob, and only as specified by Charlie. Additional features, such as (say) splitting the money 50/50 between Alice and Bob if Charlie fails to act, can be built in. Indeed, the whole idea is that complicated rules can be encoded and then automatically executed with no dispute or appeal possible.
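The escrow rules above can be sketched in a few lines of code. To be clear, this is a toy illustration of the mechanism as described, not code for any real smart-contract platform; the names Alice, Bob, and Charlie and the 50/50 fallback come from the example, while the deadline mechanics are my own assumption for how “Charlie fails to act” might be detected.

```python
# Toy sketch of the escrow mechanism: the rule set, expressed as code.

class Escrow:
    def __init__(self, depositor, beneficiary, arbiter, amount, deadline):
        self.balances = {depositor: 0, beneficiary: 0}
        self.depositor, self.beneficiary = depositor, beneficiary
        self.arbiter = arbiter
        self.amount = amount
        self.deadline = deadline   # e.g. a block height or timestamp
        self.settled = False

    def rule(self, caller, decision=None, now=0):
        """The only ways funds can ever leave escrow."""
        if self.settled:
            raise RuntimeError("already settled")
        if caller == self.arbiter and decision in ("release", "refund"):
            # Charlie decides: release to Bob, or refund to Alice.
            winner = self.beneficiary if decision == "release" else self.depositor
            self.balances[winner] += self.amount
        elif now > self.deadline:
            # Arbiter failed to act in time: split the funds 50/50.
            self.balances[self.depositor] += self.amount / 2
            self.balances[self.beneficiary] += self.amount / 2
        else:
            raise PermissionError("no rule permits this disbursement")
        self.settled = True
```

The point of the sketch is the final `else` branch: there is no disbursement path other than the ones coded in, and no judge to appeal to if those rules turn out to be wrong.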
Karen’s argument, that contracts serve functions that are not merely legal, is correct–and that is one reason why “smart contracts” may not be street-smart. But in addition to failing to do the non-legal work that contracts do, “smart contracts” also fail to do much of the legal work that contracts do, because they don’t work in the same way as contracts.
To give just one example, a legal contract need not try to anticipate absolutely every relevant event that might occur. If some weird thing happens that is not envisioned in a regular legal contract, the parties can work out a modification to the contract that seems reasonable to them, and failing that, a judge might decide the outcome, subject to established legal principles. Similarly, a single error or “bug” in writing a regular contract, causing its literal meaning to differ from what the parties intended, is unlikely to lead to extreme results because the legal system will often resolve such a problem by trying to be reasonable.
Contrast this with “smart contracts” where a bug in a “contract’s” code can lead to a perverse result that may allow one party to exploit the bug, extracting much of the value out of the arrangement with no recourse for the other parties. That’s what happened with the DAO in Ethereum, leading to a controversial attempt to unwind a legal-according-to-the-rules set of transactions, and dividing the Ethereum community.
So if “smart contracts” may not be smart, and may not be contracts, what are they? It’s best to think of them not as contracts but as mechanisms. A mechanism is a sort of virtual machine that will do exactly what it is designed to do. Like an industrial machine, which can cause terrible damage if it’s not designed very carefully for safety or if it is used thoughtlessly, a mechanism can cause harm unless designed and used with great care. That said, in some circumstances a mechanism will be exactly what you need.
Discarding the term “smart contract” which promises too much in both respects–being sometimes not smart and sometimes unlike a contract–and instead thinking of these virtual objects as nothing more or less than mindless mechanisms is not only more accurate, but also more likely to lead to more prudent application of this powerful idea.
Last month, I had three (1, 2, 3) posts on how people are integrating their Google Calendars with Emacs. The common idea was to be able to see some or all of the calendar items in Emacs. Mike Zamansky’s solution goes further and works in both directions, so that you can add events to your Google Calendar from Emacs.
Even if you’re not a GCal user, you may be interested in emacs-calfw. It can be configured for use with Org, Emacs diary, iCalendar (GCal, iCal, etc.), and howm.
These four solutions for integrating GCal show again how easily you can adapt Emacs to your workflow. And, of course, how you can spend most of your time in Emacs.
We don’t hear much about the Soviet program to explore and return samples of the moon to Earth, but the Russians were busy with lunar missions right from the get-go. The first in a series of 24 flyby, lander and sample-return missions, dubbed the Luna Program, began with the Jan. 2, 1959 launch of Luna 1, a flyby mission, and ended with Luna 24, a sample-return mission, in August 1976. During Luna 1’s flight to the moon, it obtained new information about Earth’s Van Allen radiation belts, discovered that the moon had no magnetic field and that the sun’s “breath,” a.k.a. the solar wind, streamed through interplanetary space.
Luna 3, launched on Oct. 4, 1959, took and transmitted the first pictures ever taken of the unseen lunar farside. They showed a much more rugged landscape with few of the dark patches, called lunar seas, that make the distinctive face of the Man in the Moon on the familiar nearside. Later missions landed on the surface and deployed rovers to explore and gather moon rocks and dust that were then launched from the moon and returned to Earth. All by machine — no human rock pickers as with the Apollo program.
The then-Soviet state launched three sample return missions between September 1970 and August 1976: Luna 16, 20 and 24. In each case, a drill was used to chew into the lunar regolith, the name given to the gritty lunar soil, and gather small rocks and dust in a tube. The sample was then placed in a small capsule and launched to Earth.
The first of the three sampling attempts gathered 3.5 ounces (101 grams) from a landing site in the Sea of Fertility; the second, one ounce (30 grams) from a different site in the same sea and the third, a sample of 6 ounces (170.1 grams) from the Sea of Crisis. All arrived safely back on Earth and were retrieved for study — a total of 10.5 ounces. In contrast, the Apollo astronauts’ haul came to 842 pounds (382 kg). Each year over 400 samples of these Gollumly-precious rocks are distributed to scientists across the globe for research.
Two of the Luna missions brought rovers — Lunokhod 1 and 2 — that trundled across the dusty terrain in 1970-71 and 1973, respectively. Lunokhod 1’s controllers drove the robot an astonishing 6.5 miles (10.5 km) and transmitted more than 20,000 TV pictures including 200 panoramas. During its ten and a half months of operation, it also conducted more than 500 lunar soil tests. Lunokhod 2 operated for about 4 months, covered 23 miles (37 km) of terrain including hilly upland areas and winding lunar crevasses called rills, and sent back 86 panoramic images and over 80,000 TV pictures.
By all accounts the Luna program was highly successful. Like the manned Apollo program, its landers, rovers and equipment left their marks and presence on the moon’s surface. You may have already seen photos taken from low orbit by LRO of each of the six Apollo landing sites that show descent modules, shiny reflections from equipment, rover tracks and even the winding paths made by the astronauts.
LRO has also photographed some of the Luna landers, rovers and their tracks, all of which will be nicely preserved for countless thousands of years. It’s fun to look back and see where we’ve been, and where we may be returning to very soon. Yes, it appears that the Trump administration has drawn up new priorities for NASA including a return to the moon in as little as three years! Read more about those plans here.
This is interesting:
The My Friend Cayla doll, which is manufactured by the US company Genesis Toys and distributed in Europe by Guildford-based Vivid Toy Group, allows children to access the internet via speech recognition software, and to control the toy via an app.
But Germany's Federal Network Agency announced this week that it classified Cayla as an "illegal espionage apparatus". As a result, retailers and owners could face fines if they continue to stock it or fail to permanently disable the doll's wireless connection.
Under German law it is illegal to manufacture, sell or possess surveillance devices disguised as another object.
I always say that I was a professional comedian for twelve years, but because I was clinging to the bottom few rungs of the showbiz ladder, and I was doing so in the American Northwest, really, I was more of a professional driver than an entertainer.
If someone offered me a job driving my own car, paying for my own fuel, going eight to twelve hours a day, four or five days in a row, usually with some obnoxious person I didn’t enjoy sitting in the passenger seat, for $100 or so a day, there’s no way I’d take it. But when I was in my twenties, if they added me doing my act in a bar when I arrived at my destination every night, I took the job and thanked them for the opportunity.
I was, in short, a moron.
On an unrelated note, the last panel was done as a favor for a reader who worked for a trucking company and hoped my comic would be good for the drivers’ morale. I don’t know if it worked, but they sent me a nice hat with the company logo on it, which I still have.
Note: we were just awarded this allocation on Jetstream for DIBSI. Huzzah!
Large datasets have become routine in biology. However, performing a computational analysis of a large dataset can be overwhelming, especially for novices. From June 18 to July 21, 2017 (30 days), the Lab for Data Intensive Biology will be running several different computational training events at the University of California, Davis for 100 people and 25 instructors. In addition, there will be a week-long instructor training in how to reuse our materials, and focused workshops, such as: GWAS for veterinary animals, shotgun environmental -omics, binder, non-model RNAseq, introduction to Python, and lesson development for undergraduates. The materials for the workshop were previously developed and tested by approximately 200 students on Amazon Web Services cloud compute services at Michigan State University's Kellogg Biological Station between 2010 and 2016, with support from the USDA and NIH. Materials are and will continue to be CC-BY, with scripts and associated code under BSD; the material will be adapted for Jetstream cloud usage and made available for future use.
Keywords: Sequencing, Bioinformatics, Training
Principal investigator: C. Titus Brown
Field of science: Genomics
We are requesting 100 m.medium instances, each with 6 cores, 16 GB RAM, and 130 GB of VM space, for the instructors and students for 4 weeks. The total request is for 432,000 service units (6 cores * 24 hrs/day * 30 days * 100 people). To accommodate large data files, an additional 100 GB storage volume is requested for each person. Persistent storage beyond the duration of the workshop is not necessary.
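The allocation arithmetic from the request checks out directly (a trivial sketch; the assumption, as in the request, is that one service unit equals one core-hour):

```python
# Service units billed per core-hour: cores x hours x people.
cores_per_instance = 6
hours = 24 * 30            # 24 hrs/day for the 30-day period
people = 100               # one instance per person, per the request

service_units = cores_per_instance * hours * people
print(service_units)       # matches the 432,000 SUs requested

storage_gb = 100 * people  # the extra 100 GB volume per person
```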
These estimates are based on experience running the course on AWS cloud services for approximately 200 students over the past seven years.
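The service-unit arithmetic above is easy to sanity-check. A minimal sketch, using only the figures stated in the request (this is back-of-the-envelope accounting, not an official Jetstream billing formula):

```python
# Sanity check of the Jetstream allocation request described above.
# All figures come from the request text: 100 m.medium instances
# (6 cores, 16 GB RAM, 130 GB VM space each) running around the clock
# for the 30-day training period.

CORES_PER_INSTANCE = 6   # m.medium
PEOPLE = 100             # one instance per instructor or student
HOURS_PER_DAY = 24
DAYS = 30                # June 18 - July 21, 2017

# One service unit = one core-hour.
service_units = CORES_PER_INSTANCE * HOURS_PER_DAY * DAYS * PEOPLE
print(service_units)     # 432000
```

Multiplying it out gives the 432,000 service units requested.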
Resources: IU/TACC (Jetstream)
A handful of readers have inquired as to the whereabouts of Microsoft‘s usual monthly patches for Windows and related software. Microsoft opted to delay releasing any updates until next month, even though there is a zero-day vulnerability in Windows going around. However, Adobe did push out updates this week as per usual to fix critical issues in its Flash Player software.
In a brief statement this week, Microsoft said it “discovered a last minute issue that could impact some customers” that was not resolved in time for Patch Tuesday, which normally falls on the second Tuesday of each month. In an update to that advisory posted on Wednesday, Microsoft said it would deliver February’s batch of patches as part of the next regularly-scheduled Patch Tuesday, which falls on March 14, 2017.
On Feb. 2, the CERT Coordination Center at Carnegie Mellon University warned that an unpatched bug in a core file-sharing component of Windows (SMB) could let attackers crash Windows 8.1 and Windows 10 systems, as well as server equivalents of those platforms. CERT warned that exploit code for the flaw was already available online.
The updates from Adobe fix at least 13 vulnerabilities in versions of Flash Player for Windows, Mac, ChromeOS and Linux systems. Adobe said it is not aware of any exploits in the wild for any of the 13 flaws fixed in this update.
The latest update brings Flash to v. 24.0.0.221. The update is rated “critical” for all OSes except Linux; critical flaws can be exploited to compromise a vulnerable system through no action on the part of the user, aside from perhaps browsing to a malicious or hacked Web site.
Flash has long been a risky program to leave plugged into the browser. If you have Flash installed, you should update, hobble or remove Flash as soon as possible. To see which version of Flash your browser may have installed, check out this page.
The smartest option is probably to ditch the program once and for all and significantly increase the security of your system in the process. An extremely powerful and buggy program that binds itself to the browser, Flash is a favorite target of attackers and malware. For some ideas about how to hobble or do without Flash (as well as slightly less radical solutions) check out A Month Without Adobe Flash Player.
If you choose to keep and update Flash, please do it today. The most recent versions of Flash should be available from the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera).
Chrome and IE should auto-install the latest Flash version on browser restart, although users may need to restart the browser or manually check for updates to get it. When in doubt in Chrome, click the vertical three-dot icon to the right of the URL bar, select “Help,” then “About Chrome”: if an update is available, Chrome should install it then.
Organic molecules — those made primarily of carbon, hydrogen, and oxygen atoms — are essential for life as we know it. Carbon is the coolest, most special atom of all. Not only can it link in multiple ways to other atoms, but it happily links with itself to form incredibly complex molecules, the kind that make proteins and bodies possible. And while organic compounds alone don’t necessarily mean a living thing, finding them in meteorites, Mars, Titan and now on the dwarf planet Ceres, gives us hope that all the necessary ingredients for life were readily available to the young Earth at the dawn of the solar system.
Scientists at NASA’s Dawn mission recently announced finding evidence for organic material on Ceres, a dwarf planet and the largest body in the main asteroid belt between Mars and Jupiter. They used the spacecraft’s visible and infrared mapping spectrometer (VIR), an instrument that detects the “fingerprints” of materials by studying the sunlight they reflect, to find carbon-containing compounds around the northern-hemisphere crater called Ernutet.
The discovery, the first on a main asteroid belt object, adds to the growing list of bodies in the solar system where organics have been found. Ceres resembles the group of space rocks called carbonaceous chondrites, which are rich in water and organics, strengthening the connection between the dwarf planet and these dark, crumbly meteorites we occasionally find here on Earth.
Previously, scientists had identified carbonates (which form in water), water ice and clays on Ceres as well as evidence of heat in the formation of the dwarf planet’s tallest mountain, Ahuna Mons, a likely volcano made of oozing mud. Salts and sodium carbonate, such as those found in the bright areas of Occator Crater, are also thought to have been carried to the surface by liquid.
The newfound organic materials cover about 400 square miles (roughly 1,000 square kilometers), spread across the floor of Ernutet, its southern rim and an area just outside the crater to the southwest. Organics also were found in a very small area in Inamahari Crater, about 250 miles (400 km) away from Ernutet. Scientists could not say exactly what kind of organic material Dawn picked up, just that it looked like varieties that have straight chains of carbon atoms instead of being arranged in rings.
Dawn is now in a stretched-out elliptical orbit at Ceres, going from an altitude of 4,670 miles (7,520 km) up to almost 5,810 miles (9,350 km). On Feb. 23, it will make its way to a new altitude of around 12,400 miles (20,000 km), about the height of GPS satellites above Earth, and to a different orbital plane. This will put the probe in a position to study Ceres from a completely new perspective. Who knows what we’ll find?
I use Evernote to keep copies of resources that I think may disappear over time and I’m pretty happy with it. Recently, they’ve introduced a new fee structure and there’s been some grumbling about finding another platform. As I say, I’m happy with them for now, but if you’re looking for a way to migrate off Evernote and you’re an Org mode user, Karl Voit tells us one way to do it.
— Karl Voit (@n0v0id) February 13, 2017
EverOrg was written by Mario Martelli to migrate his Evernote data to Org mode because of the new pricing policy. It appears to be pretty complete (see the README), but if you have special needs you may have to do part of the migration by hand.
Again, I’m happy with Evernote for the time being so I haven’t used EverOrg but if you’re looking for a migration solution EverOrg is the best I’ve seen.
It’s Lazy Reading Science week!
This story started a few nights ago. I had been monkeying around with the AR102S, both at its native aperture and stopped down, and I decided to see how it compared to the C80ED. In particular, I wanted to compare the rich-field views of both scopes (such as they are here – I was observing from the driveway after all), so I was looking at the belt and sword of Orion. The results of that comparo were not very surprising – with its wider aperture and shorter focal length, the AR102S goes significantly wider and brighter, but the longer focal ratio and low-dispersion glass of the C80ED produce a better-corrected image.
What was not only surprising, but actively alarming, was that at low power I was getting ugly star images in the C80ED. Even in the center of the field, stars were not focusing down to nice little round points, but to crosses and shapes like flying geese. I wondered if my diagonal might have gotten banged up, so I swapped diagonals. The problem persisted. The scope will not reach focus without a diagonal or extension tube, and I don’t have an extension tube, so I couldn’t try straight-through viewing. Still, it was exceptionally unlikely that both of my good diagonals got horked in the same way.
I didn’t know what to make of that. I figured maybe the scope had gotten out of collimation somehow, and I was pondering whether to mess with it. It’s always been optically excellent and mechanically solid (overbuilt, in fact), and I was loathe to take it apart (as opposed to the TravelScope 70 and SkyScanner 100, both of which were crying out for disassembly).
Then a few days later I ran across this thread on CN, in which a guy was having the same problem I had. It sounded like it was more likely astigmatism (aka the Stig) in the eyes than in the telescope. Apparently it’s worse at low powers where the exit pupil is large, which makes sense – astigmatism is caused by having corneas that are out of round (football-shaped rather than basketball-shaped), but as the exit pupil gets smaller, less of the cornea is involved in vision, making it more likely that the ‘active’ portion will approximate a radially even curvature.
One commenter recommended making a little diaphragm between thumb and forefinger to stop down the exit pupil. I tried that, but it was awfully difficult to hold my finger and my eye all steady and in alignment. Then I had the idea of using a collimation cap from one of my reflectors. That stopped down the exit pupil to a 1mm circle, which made the image d-i-m, but the star images cleaned right up. Then I took away the collimation cap and tried the view with and without glasses, and the glasses also cleaned up the star images.
It wasn’t the scope, it was me. I have astigmatism, and it’s bad enough that stars look ugly at low power unless I wear glasses.
On one hand, that’s a big relief, because the C80ED scope has always been a rock-solid performer. Along with the Apex 127, it’s my reference standard for good optics. I was feeling a bit queasy at the thought that it might have gotten out of whack.
On the other hand, I now need to prioritize eye relief in my eyepiece collection. I have a bunch that are too tight to show the whole field when I’m wearing glasses. So I have some decisions to make.
That was the first major discovery of the night.
The second was that the AR102S can take 2″ eyepieces with the most minor tinkering. The 2″-to-1.25″ adapter at the top of the AR102S focuser drawtube screws right off. I had been worried that it might be permanently affixed, but when I tried turning it, it spun with remarkable ease. Once I had it off, I dropped in the 32mm Astro-Tech Titan, which is my only 2″ eyepiece, and the views were pretty darned good. Way wider than with any of my 1.25″ eyepieces, and pretty clean as well, although I need to do a little more head-to-head testing on that score. Possibly the star images looked good because they were so small at only 14x.
In any case, the 32mm Titan gives a significant boost in true field, from 3.6 degrees in the 32mm Plossl and 24mm ES68, to a whopping 4.88 degrees.
I don’t think there would be any advantage in going wider, at least in the AR102S. Astronomics seems to be out of Titans, but the equivalent 70-degree EPs are available through Bresser and Agena. The next step up would be a 35mm or 38mm, giving 13x and 12x, but those would push the exit pupil to 7.7mm and 8.5mm, and that’s just wasted light. At least in the AR102S – in the C80ED, longer 70-degree eyepieces would yield the following:
Focal length / magnification / exit pupil / true field
35mm / 17x / 4.7mm / 4.1 degrees
38mm / 16x / 5.1mm / 4.4 degrees
Either of those would be a good step up from the 3.7-degree max field that the 32mm Titan gives in the C80ED, without pushing the exit pupil uselessly wide.
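For anyone who wants to run the numbers themselves, here’s a quick sketch of that arithmetic, assuming the C80ED is an 80mm scope with a 600mm focal length (f/7.5) – the figure that’s consistent with the 3.7-degree true field quoted for the 32mm Titan:

```python
# Eyepiece arithmetic for the C80ED, assuming 80mm aperture and
# 600mm focal length (f/7.5).

APERTURE_MM = 80.0
SCOPE_FL_MM = 600.0

def eyepiece_stats(ep_fl_mm, apparent_field_deg=70.0):
    """Return (magnification, exit pupil in mm, approx. true field in deg)."""
    mag = SCOPE_FL_MM / ep_fl_mm
    exit_pupil = APERTURE_MM / mag
    true_field = apparent_field_deg / mag  # simple AFOV/magnification estimate
    return mag, exit_pupil, true_field

for fl in (32, 35, 38):
    mag, pupil, tf = eyepiece_stats(fl)
    print(f"{fl}mm: {mag:.0f}x, {pupil:.1f}mm exit pupil, {tf:.1f} deg")
# 32mm: 19x, 4.3mm exit pupil, 3.7 deg
# 35mm: 17x, 4.7mm exit pupil, 4.1 deg
# 38mm: 16x, 5.1mm exit pupil, 4.4 deg
```

The true-field line uses the quick apparent-field-over-magnification approximation rather than measured field stops, so treat the last column as a rough estimate.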
Anyway, I’m just noodling now. The big news is that the C80ED is fine, I need to prioritize long eye relief in future EP purchases (and maybe thin the herd a bit?) so I can observe with glasses on, and the AR102S can take 2″ EPs after all.
The emergence and proliferation of Internet of Things (IoT) devices on industrial, enterprise, and home networks brings with it unprecedented risk. The potential magnitude of this risk was made concrete in October 2016, when insecure Internet-connected cameras launched a distributed denial of service (DDoS) attack on Dyn, a provider of DNS service for many large online service providers (e.g., Twitter, Reddit). Although this incident caused large-scale disruption, it is noteworthy that the attack involved only a few hundred thousand endpoints and a traffic rate of about 1.2 terabits per second. With predictions of upwards of a billion IoT devices within the next five to ten years, the risk of similar, yet much larger, attacks is imminent.
The Growing Risks of Insecure IoT Devices
One of the biggest contributors to the risk of future attack is the fact that many IoT devices have long-standing, widely known software vulnerabilities that leave them open to exploit and control by remote attackers. Worse yet, the vendors of these IoT devices often have provenance in the hardware industry, but they may lack expertise or resources in software development and systems security. As a result, IoT device manufacturers may ship devices that are extremely difficult, if not practically impossible, to secure. The large number of insecure IoT devices connected to the Internet poses unprecedented risks to consumer privacy, as well as threats to the underlying physical infrastructure and the global Internet at large.
The large magnitude and broad scope of these risks compel us to seek solutions that will improve infrastructure resilience in the face of Internet-connected devices that are extremely difficult to secure. A central question in this problem area concerns the responsibility that each stakeholder in this ecosystem should bear, and the respective roles of technology and regulation (whether via industry self-regulation or otherwise) in securing both the Internet and associated physical infrastructure against these increased risks.
Risk Mitigation and Management
One possible lever for either government or self-regulation is the IoT device manufacturers. One possibility, for example, might be a device certification program for manufacturers that could attest to adherence to best common practice for device and software security. A well-known (and oft-used) analogy is the UL certification process for electrical devices and appliances.
Despite its conceptual appeal, however, a certification approach poses several practical challenges. One challenge is outlining and prescribing best common practices in the first place, particularly due to the rate at which technology (and attacks) progress. Any specific set of prescriptions runs the risk of falling out of date as technology advances; similarly, certification can readily devolve into a checklist of attributes that vendors satisfy, without necessarily adhering to the process by which these devices are secured over time. As daunting as the challenges of specifying a certification program may seem, enforcing adherence to a certification program may prove even more challenging. Specifically, consumers may not appreciate the value of certification, particularly if meeting the requirements of certification increases the cost of a device. This concern may be particularly acute for consumer IoT, where consumers may not bear the direct costs of connecting insecure devices to their home networks.
The consumer is another stakeholder who could be incentivized to improve the security of the devices that they connect to their networks (in addition to more effectively securing the networks to which they connect these devices). As the entity who purchases and ultimately connects IoT devices to the network, the consumer appears well-situated to ensure the security of the IoT devices on their respective networks. Unfortunately, the picture is a bit more nuanced. First, consumers typically lack either the aptitude or interest (or both!) to secure either their own networks or the devices that they connect to them. Home broadband Internet access users have generally proved to be poor at applying software updates in a timely fashion, for example, and have been equally delinquent in securing their home networks. Even skilled network administrators regularly face network misconfigurations, attacks, and data breaches. Second, in many cases, users may lack the incentives to ensure that their devices are secure. In the case of the Mirai botnet, for example, consumers did not directly face the brunt of the attack; rather, the ultimate victims of the attack were DNS service providers and, indirectly, online service providers such as Twitter. To the first order, consumers suffered little direct consequence as a result of insecure devices on their networks.
Consumers’ misaligned incentives suggest several possible courses of action. One approach might involve placing some responsibility or liability on consumers for the devices that they connect to the network, in the same way that a citizen might be fined for other transgressions that have externalities (e.g., fines for noise or environmental pollution). Alternatively, Internet service providers (or another entity) might offer users a credit for purchasing and connecting only devices that pass certification; another variation of this approach might require users to purchase “Internet insurance” from their Internet service providers that could help offset the cost of future attacks. Consumers might receive credits or lower premiums based on the risk associated with their behavior (i.e., their software update practices, results from security audits of devices that they connect to the network).
A third stakeholder to consider is the Internet service provider (ISP), who provides Internet connectivity to the consumer. The ISP has considerable incentives to ensure that the devices that its customer connects to the network are secure: insecure devices increase the presence of attack traffic and may ultimately degrade Internet service or performance for the rest of the ISP’s customers. From a technical perspective, the ISP is also in a uniquely effective position to detect and squelch attack traffic coming from IoT devices. Yet, relying on the ISP alone to protect the network against insecure IoT devices is fraught with non-technical complications. First, while the ISP could technically defend against an attack by disconnecting or firewalling consumer devices that are launching attacks, such an approach will certainly result in increased complaints and technical support calls from customers, who connect devices to the network and simply expect them to work. Second, many of the technical capabilities that an ISP might have at its disposal (e.g., the ability to identify attack traffic coming from a specific device) introduce serious privacy concerns. For example, being able to alert a customer to (say) a compromised baby monitor requires the ISP to know (and document) that a consumer has such a device in the first place.
Ultimately, managing the increased risks associated with insecure IoT devices may require action from all three stakeholders. Some of the salient questions will concern how the risks can be best balanced against the higher operational costs that will be associated with improving security, as well as who will ultimately bear these responsibilities and costs.
Improving Infrastructure Resilience
In addition to improving defenses against the insecure devices themselves, it is also critical to determine how to better build resilience into the underlying Internet infrastructure to cope with these attacks. If one views the occasional IoT-based attack as inevitable to some degree, one major concern is ensuring that the Internet infrastructure (and the associated cyberphysical infrastructure) remains both secure and available in the face of attack. In the case of the Mirai attack on Dyn, for example, the severity of the attack was exacerbated by the fact that many online services depended on the infrastructure that was attacked. Computer scientists and Internet engineers should be thinking about technologies that can both potentially decouple these underlying dependencies and ensure that the infrastructure itself remains secure even in the event that regulatory or legal levers fail to prevent every attack. One possibility that we are exploring, for example, is the role that an automated home network firewall could play in (1) helping users keep better inventory of connected IoT devices and (2) providing users both visibility into and control over the traffic flows that these devices send.
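To make the device-inventory idea concrete, consider a deliberately minimal sketch rather than a design proposal: on a Linux-based home router, the kernel’s neighbor table, as printed by iproute2’s `ip neigh`, already records which devices are currently active on the local network, and parsing it yields a crude inventory of IP and hardware addresses. The sample output below is fabricated for illustration; a real firewall would also need device identification, naming, and flow monitoring.

```python
# Minimal sketch of a home-network device inventory: parse the kernel
# neighbor table (the output format of iproute2's `ip neigh`) into a
# mapping from IP address to MAC address. Illustrative only.

def parse_neigh(output):
    """Parse `ip neigh` output into {ip_address: mac_address}."""
    inventory = {}
    for line in output.splitlines():
        fields = line.split()
        # Typical entry:
        # "192.168.1.23 dev eth0 lladdr aa:bb:cc:dd:ee:ff REACHABLE"
        # Entries without "lladdr" (e.g., FAILED probes) carry no MAC.
        if "lladdr" in fields:
            ip_addr = fields[0]
            mac = fields[fields.index("lladdr") + 1]
            inventory[ip_addr] = mac
    return inventory

# Fabricated sample output for illustration.
sample = """192.168.1.23 dev eth0 lladdr aa:bb:cc:dd:ee:ff REACHABLE
192.168.1.40 dev eth0 FAILED
192.168.1.57 dev eth0 lladdr 11:22:33:44:55:66 STALE"""

print(parse_neigh(sample))
```

Even this trivial inventory illustrates the privacy tension noted above for ISPs: whoever builds the inventory necessarily learns, and records, which devices a household owns.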
Improving the resilience of the Internet and cyberphysical infrastructure in the face of insecure IoT devices will require a combination of technical and regulatory mechanisms. Engineers and regulators will need to work together to improve security and privacy of the Internet of Things. Engineers must continue to advance the state of the art in technologies ranging from lightweight encryption to statistical network anomaly detection to help reduce risk; similarly, engineers must design the network to improve resilience in the face of the increased risk of attack. On the other hand, realizing these advances in deployment will require the appropriate alignment of incentives, so that the parties that introduce risks are more aligned with those who bear the costs of the resulting attacks.
This week, our “Americans at Work” photo essay features photographs of millennial freelancers living in Los Angeles made by photographer Jessica Chou:
“A full-time job with one employer has been the norm for decades, but in recent years, the gig economy has steadily grown. A study by Intuit predicts that by 2020, 40 percent of the American workforce will be independent contractors. This project explores the everyday lives of young people in Los Angeles working in short-term, temporary positions as freelancers.
To explore the motivations and better understand the circumstances, I photographed people in their 20s and 30s from different cultural and educational backgrounds working on-demand. While individual paths to the gig economy are as unique as the people themselves, the decisions are typically driven by two factors — the chance to pursue one’s passion or the necessity to make ends meet. In some cases, it can be a combination of both. I’ve found that once they have found this autonomy, the 9-to-5 work life seems less and less attractive.
The gig economy offers a unique opportunity for people looking for purpose in their work. There is the freedom to manage one’s own time, room to explore different work methods to better suit one’s personality, and the ability to provide meaningful contributions to one’s community. There is also the satisfaction through the ownership of the work — the process of investing time and effort results in the building of one’s own business.
On the downside, workers who are full-time independent contractors have little to no social safety nets. Independent contractors assume all risks, so getting sick means losing income. Additionally, all the responsibilities of running a business, like branding, marketing and bookkeeping, are now the sole responsibility of the individual. And with little financial stability, making decisions about the future becomes more difficult.
The gig economy seems to reflect people’s changing values and ideas about priorities in life and work. While greater personal freedom can result in income instability, it also provides an opportunity to shape one’s life in a more profound way. As Mai-Tam Nguyen, a pastry chef, said, ‘Even if you can make a lot of money, if you are not happy, what is the point?’”
Lots of storage this week.
Look what came in the mail today.
Something small, in a gold box.
An eyepiece wrapped in paper, and a rubber eyeguard.
And here they are.
That is a big honkin’ eye lens. And that’s why I got this eyepiece. The 28mm RKE from Edmund is legendary for its “floating stars” effect where the big eye lens, the sharply raked barrel, and the long eye relief combine to create the impression that the eyepiece has disappeared and the image is simply floating in space. I’ve never experienced this, because I’ve never gotten to look through one of these before. But the reputation of this eyepiece, illustrated by several glowing threads on Cloudy Nights (like the ones that follow), was enough to convince me to take the plunge:
It didn’t come with a case, so I made my own out of an old prescription pill bottle. A little bubble wrap stuffed in the bottom and taped inside the lid, and I’ve got a nice padded case for free.
And I need that case, because the new gear curse is in full effect. How does this eyepiece work in practice? No idea yet – with any luck, I might find out next Wednesday, when the clouds are finally supposed to part. I’ll keep you posted.
A charismatic populist president wanted to boost manufacturing and create jobs. She told companies, 'If you want to sell your stuff here, you have to build it here.' This is what happened.
(Image credit: David Paul Morris/Bloomberg via Getty Images)