Planet Aardvark

August 28, 2016

• Lunar Orbiter Earthset

• I guess #40weeksofgas wasn’t as catchy…

• Juno's first Jupiter close approach successful; best JunoCam images yet to come
NASA announced this afternoon that Juno successfully passed through its first perijove since entering orbit, with its science instruments operating all the way through. This is a huge relief, given all the unknowns about the effects of Jupiter's nasty radiation environment on its brand-new orbiter.
• Grant Rettke: outorg Lets You Convert Source-Code Buffers Temporarily To Org-Mode For Comment Editing

outorg lets you convert source-code buffers temporarily to org-mode for comment editing.

Outorg is for editing comment-sections of source-code files in temporary Org-mode buffers. It turns conventional literate-programming upside-down in that the default mode is the programming-mode, and special action has to be taken to switch to the text-mode (i.e. Org-mode).

Outorg depends on Outshine, i.e. outline-minor-mode with outshine extensions activated. An outshine buffer is structured like an org-mode buffer, only with outcommented headlines. While in Org-mode text is text and source-code is ‘hidden’ inside of special src-blocks, in an outshine buffer source-code is source-code and text is ‘hidden’ as comments.

Thus org-mode and programming-mode are just two different views on the outshine-style structured source-file, and outorg is the tool to switch between these two views. When switching from a programming-mode to org-mode, the comments are converted to text and the source-code is put into src-blocks. When switching back from org-mode to the programming-mode, the process is reversed – the text is outcommented again and the src-blocks that enclose the source-code are removed.

When the code is more important than the text, i.e. when the task is rather ‘literate PROGRAMMING’ than ‘LITERATE programming’, it is often more convenient to work in a programming-mode and switch to org-mode once in a while than vice-versa. Outorg is really fast, even big files with 10k lines are converted in a second or so, and the user decides if he wants to convert just the current subtree (done instantly) or the whole buffer. Since text needs no session handling or variable passing or other special treatment, the outorg approach is much simpler than the Org-Babel approach. However, the full power of Org-Babel is available once the outorg-edit-buffer has popped up.

Intriguing!
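The two-view round trip described above is easy to picture with a toy converter. This is my own minimal Python sketch of the idea, not outorg's actual implementation: comment lines become text, runs of code get wrapped in src-blocks, and the transformation reverses cleanly.

```python
def to_org(lines, comment=";; ", lang="emacs-lisp"):
    """Toy 'outorg' view: comments become text, code becomes src-blocks."""
    out, code = [], []

    def flush():
        # Close out any pending run of code lines as one src-block.
        if code:
            out.append(f"#+BEGIN_SRC {lang}")
            out.extend(code)
            out.append("#+END_SRC")
            code.clear()

    for line in lines:
        if line.startswith(comment):
            flush()
            out.append(line[len(comment):])  # uncomment: comment -> text
        else:
            code.append(line)                # collect code for a src-block
    flush()
    return out

def to_source(lines, comment=";; "):
    """Reverse view: text is outcommented again, src-block fences dropped."""
    out, in_src = [], False
    for line in lines:
        if line.startswith("#+BEGIN_SRC"):
            in_src = True
        elif line == "#+END_SRC":
            in_src = False
        elif in_src:
            out.append(line)
        else:
            out.append(comment + line)
    return out

src = [";; * A headline", ";; Some prose.", "(defun hello () 1)"]
assert to_source(to_org(src)) == src  # the round trip preserves the buffer
```

The real outorg handles far more (subtrees, multiple languages, Org-Babel integration), but the core idea is exactly this pair of inverse views.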

August 27, 2016

• Ghosts from a failed system
Westborough State Hospital, formerly known as the Westborough Insane Hospital, was built in the 1800s on hope and compassion. But with little scientific knowledge of mental illness, institutions like Westborough eventually became wretched warehouses. Civil rights activists, disability rights lawyers and politicians made it their mission to end the harsh restrictions imposed on people with mental illness, and the remedy was to close institutions like Westborough, which was shuttered in 2010. As part of the Spotlight team report on mental illness, Boston Globe photographer Suzanne Kreiter toured the abandoned hospital. “I know that technically these photographs have no people in them,” Kreiter says, “but they’re all right there. All these images contain the ghosts of the people who need our help the most.” -- Photos by Suzanne Kreiter, Boston Globe Staff

• Irreal: Resistance is Futile...

you will be assimilated.

• In Other BSDs for 2016/08/27

I don’t know how I ended up with 3 pfSense items to lead with – it just happened.

• An Interesting SETI Candidate in Hercules

A candidate signal for SETI is a welcome sign that our efforts in that direction may one day pay off. An international team of researchers has announced the detection of “a strong signal in the direction of HD164595” in a document now being circulated through contact person Alexander Panov. The detection was made with the RATAN-600 radio telescope in Zelenchukskaya, in the Karachay–Cherkess Republic of Russia, not far from the border with Georgia in the Caucasus.

The signal was received on May 15, 2015, 18:01:15.65 (sidereal time), at a wavelength of 2.7 cm. The estimated amplitude of the signal is 750 mJy.
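For context, converting that wavelength to a frequency puts the signal near 11 GHz; a quick back-of-the-envelope check:

```python
# Convert the reported 2.7 cm wavelength to a frequency: f = c / wavelength
c = 299_792_458.0        # speed of light, m/s
wavelength = 0.027       # 2.7 cm in metres
freq_ghz = c / wavelength / 1e9
print(f"~{freq_ghz:.1f} GHz")  # ~11.1 GHz
```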

No one is claiming that this is the work of an extraterrestrial civilization, but it is certainly worth further study. Working out the strength of the signal, the researchers say that if it came from an isotropic beacon, it would be of a power possible only for a Kardashev Type II civilization. If it were a narrow beam signal focused on our Solar System, it would be of a power available to a Kardashev Type I civilization. The possibility of noise of one form or another cannot be ruled out, and researchers in Paris led by Jean Schneider are considering the possible microlensing of a background source by HD164595. But the signal is provocative enough that the RATAN-600 researchers are calling for permanent monitoring of this target.

Image: The RATAN-600 radio telescope in Zelenchukskaya. Credit: Wikimedia Commons.

Here I’m drawing on a presentation forwarded to me by Claudio Maccone, from which I learn that the team behind the detection was led by N.N. Bursov and included L.N. Filippova, V.V. Filippov, L.M. Gindilis, A.D. Panov, E.S. Starikov, J. Wilson, as well as Claudio Maccone himself, the latter a familiar figure on Centauri Dreams. The work is to be discussed at a meeting of the IAA SETI Permanent Committee, to be held during the 67th International Astronautical Congress (IAC) in Guadalajara, Mexico, on Tuesday, September 27th, 2016.

What we know of HD 164595 is that it is a star of 0.99 solar masses at a distance of roughly 95 light years in the constellation Hercules, and an estimated age of 6.3 billion years. Its metallicity is almost identical to that of the Sun. A known planet in this system, HD 164595 b, is 0.05 Jupiter mass with a period of 40 days, considered to be a warm Neptune on a circular orbit. There could, of course, be other planets still undetected in this system.
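Those orbital numbers imply a tight orbit. As a rough check (my own arithmetic, not from the presentation), Kepler's third law in solar units gives the planet's approximate orbital distance:

```python
# Kepler's third law in solar units: a^3 = M * P^2
# (a in AU, P in years, M in solar masses)
M = 0.99           # stellar mass, solar masses (from the text)
P = 40 / 365.25    # 40-day orbital period, in years
a = (M * P**2) ** (1 / 3)
print(f"a ~ {a:.2f} AU")  # ~0.23 AU, consistent with a "warm Neptune"
```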

Image: Strong signal from the direction of HD 164595. “Raw” record of the signal together with expected shape of the signal for point-like source in the position of HD 164595. Credit: Bursov et al.

From the presentation:

The estimated probability ~2 × 10⁻⁴ to simulate the signal from the direction of the HD164595 by signal-like noise is small, therefore HD164595 is good candidate SETI. Permanent monitoring of this target is needed.

All of which makes excellent sense. We can’t claim the detection of an extraterrestrial civilization from this observation. What we can say is that the signal is interesting and merits further scrutiny.

• The Best of the Physics arXiv (week ending August 27, 2016)
This week’s most thought-provoking papers from the Physics arXiv.
• The Milky Way Sets

• Wilfred Hughes: Rustdoc Meets The Self-Documenting Editor

Emacs Lisp has a delightful help system. You can view the docstring for any function under the cursor, making it easy to learn functionality.

Rust goes a step further. All the standard library documentation is written with the source code. This means we can find docs programmatically!
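The same "docs live with the code, so tools can query them" idea is familiar from Python's docstrings; a small analogy (my own illustration, not how racer itself works):

```python
import inspect

def double(x: int) -> int:
    """Double a value.

    Examples
    --------
    >>> double(2)
    4
    """
    return x * 2

# Because the docs travel with the source, a tool can fetch them
# programmatically, which is what racer does for rustdoc comments.
print(inspect.getdoc(double).splitlines()[0])  # prints "Double a value."
```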

When I learnt that racer recently added support for rustdoc, I couldn’t resist adding support to racer.el.

The new racer-describe command actually renders the markdown in rustdoc comments. Since we’re showing a separate buffer, we can render the docs and throw away the markdown syntax. We can even convert external hyperlinks to clickable links!

This is a really nice example of composing Emacs functionality. Since we can easily highlight code snippets (it’s an editor!), we actually apply syntax highlighting to inline code! Note how Vec and T are highlighted as types in the above screenshot.

Whilst we don’t use *Help* buffers, we extend the same keymaps, so all the relevant help shortcuts just work too.

We have hit a few teething issues in racer (namely #594 and #597) but it’s changed the way I explore Rust APIs. It’s particularly useful for learning functionality via examples, without worrying about implementation:

I hope it will also encourage users to write great docstrings for their own projects.

Love it? Hate it? Let me know what you think in the /r/rust discussion.

(It’s hot off the press, so there will be bugs. If you find one, please file it on GitHub.)

August 26, 2016

• Episode 721: Unbuilding A City

Noel King

Shrinking cities have a problem: Millions of abandoned, falling-apart houses. Often, knocking them down is the best solution. But it can be remarkably hard to do that.

On today's show, we visit a single block in Baltimore and figure out why it's so hard to knock down buildings — even when everybody wants them gone.

Music: "Sex Lies and Second Thoughts" and Elizabeth Cotten's "Take Me Back To Baltimore" from 'Shake Sugaree,' Smithsonian Folkways, Used by permission. Find us: Twitter/ Facebook.

Copyright 2016 NPR.
• Exquisite Venus-Jupiter Conjunction This Weekend

I wrote about this in an earlier blog, but I just wanted to give you a reminder about Saturday’s very close conjunction of Venus and Jupiter low in the western sky shortly after sundown. Tonight the duo will be about 1° apart; on Saturday, just 0.1° (⅕ of the moon’s diameter); and on Sunday they’ll separate again to 1° before parting ways. The two planets may appear to almost touch when viewed with the naked eye.

It’s the closest conjunction of Venus and Jupiter this year, so you don’t want to miss it. Face west starting about 25 minutes after sunset and look about 4° (about two fingers held together at arm’s length) above the due west point of the horizon. Since this is very low, be sure you find a location with a wide open view in that direction.

Venus is the brighter planet, so you’ll see it first. Once you’ve found Venus, use the animation to help you find nearby Jupiter. If you’re having any trouble with either planet, sweep back and forth just above the western horizon with a pair of binoculars.

Happy planet-finding and please send me a photo at rking@duluthnews.com if you get something you like, so I can share with our readers.

• Friday Squid Blogging: Self-Repairing Fabrics Based on Squid Teeth

As shown in the video below, researchers at Pennsylvania State University recently developed a polyelectrolyte liquid solution made of bacteria and yeast that automatically mends clothes.

It doesn't have a name yet, but it's almost miraculous. Simply douse two halves of a ripped fabric in the stuff, hold them together under warm water for about 60 seconds, and the fabric closes the gaps and clings together once more. Having a bit of extra fabric on hand does seem to help, as the video mainly focuses on patching holes rather than re-knitting two halves of a torn piece.

The team got the idea by observing how proteins in squid teeth and human hair are able to self-replicate. Then, they recreated the process using more readily available materials. Best of all, it works with almost all natural fabrics.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

• Featured Item: The Trub Trapper

The Trub Trapper

Trub, pronounced “troob,” is mostly composed of fatty acids and various proteins. Historically, it has been recommended to leave the break material (trub) behind in the kettle before putting your wort into the fermenter, for fear of off-flavours. Check out this experiment done by Brulosophy for an in-depth look at the effects of trub in the fermenter.

If you use a plate chiller, hop material and trub can be a big problem, causing clogs. Make sure to leave as much of this material behind before running the wort through a plate chiller.

The Trub Trapper is a clever but simple design that helps to capture break material and hop pellets before draining your wort from the kettle. Check it out!

• High-Speed Video Footage Solves One of the Great Mysteries of Human Blood Flow
A key theory of blood dynamics has been overturned by video evidence showing how red blood cells can make blood less viscous.
• Collision Attacks Against 64-Bit Block Ciphers

We've long known that 64 bits is too small for a block cipher these days. That's why new block ciphers like AES have 128-bit, or larger, block sizes. The insecurity of the smaller block is nicely illustrated by a new attack called "Sweet32." It exploits the ability to find block collisions in Internet protocols to decrypt some traffic, even though the attackers never learn the key.
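The "block collision" problem is just the birthday bound. A rough sketch of the numbers (my own arithmetic, assuming ciphertext blocks behave like uniformly random values):

```python
import math

block_bits = 64
n = 2 ** 32                      # ~32 GB of traffic at 8 bytes per block
# Birthday bound: P(collision) ~ 1 - exp(-n^2 / 2^(b+1))
p64 = -math.expm1(-n**2 / 2 ** (block_bits + 1))
p128 = -math.expm1(-n**2 / 2 ** (128 + 1))  # same traffic, AES-sized blocks
print(f"64-bit blocks:  P ~ {p64:.2f}")     # ~0.39 after 2^32 blocks
print(f"128-bit blocks: P ~ {p128:.1e}")    # negligible
```

A ~39% collision chance after a few tens of gigabytes is well within reach of a long-lived TLS or VPN session, which is exactly the setting Sweet32 targets.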

Paper here. Matthew Green has a nice explanation of the attack. And some news articles. Hacker News thread.

• Irreal: Data Sharing in Publications

John Kitchin has a really interesting article in ACS Catalysis on Effective Data Sharing in Scientific Publishing. In it, Kitchin discusses various strategies for embedding supporting data—such as tables and processing code—in a publication's PDF or source file so that other researchers can recreate the results or use the data for further analysis.

He begins by noting that one can simply embed supporting documents directly in the PDF or Word document. The problem with that method is that it's easy for the actual data and its representation in the final paper to diverge.

A better solution, Kitchin says, is to use Org mode and generate the published tables and results directly from data embedded in the manuscript source. Followers of Kitchin's blog (or even Irreal) will recognize that this is Kitchin's longtime publishing process. He and his group write their papers in Org mode along with the supporting data and code. That way, future researchers have everything they need to reproduce and extend the results.
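In practice the manuscript might look something like this (an invented minimal example, not taken from Kitchin's paper): the raw data sits in a named Org table, and a src block computes the published figure directly from it.

```org
#+NAME: raw-data
| trial | yield |
|-------+-------|
|     1 |  0.82 |
|     2 |  0.85 |

#+BEGIN_SRC python :var data=raw-data :exports results
# Recompute the reported mean directly from the embedded table.
return sum(row[1] for row in data) / len(data)
#+END_SRC
```

Because the table travels with the source, regenerating the paper regenerates the numbers, so the data and its printed representation cannot drift apart.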

If you're interested in using Org mode to write papers or in reproducible research, you'll enjoy Kitchin's paper.

• Photos of the Week: 8/20-8/26 (35 photos)

Lenin underwater, a penguin weigh-in, monsoon flooding in India, an earthquake in central Italy, a deep blue lake in El Salvador, American partiers blown into Canada on the St. Clair River, towed back across the border by the Canadian Coast Guard, and much more.

Should Canada have an offensive cyber war capability? Comments by former National Security Advisor Richard Fadden, who retired at the end of March, suggest that Canadians need to debate this question.

Fadden raised the issue in a recent wide-ranging interview with Tom Clark of Global News. (You can watch the interview here.)

The discussion unfortunately conflated the concepts of cyber attack (also known as Computer Network Attack) and cyber spying (Computer Network Exploitation). Chinese cyber espionage operations against Canadian targets were described as "cyber attacks", for example, as if the operations were attempting to destroy or damage Canadian data or systems, or even the physical infrastructure they control, rather than simply trying to steal information.

This blog does not endorse pedantry for the sake of pedantry, but in this case a little terminological clarity would be helpful.

Computer Network Operations are commonly divided into three kinds of activity: Computer Network Attack (CNA), Computer Network Defence (CND), and Computer Network Exploitation (CNE). Stealing information falls into the category of Computer Network Exploitation.

As the diagram above shows, there are important overlaps between these three activities. CNE can be used to find vulnerabilities in an adversary's systems and prepare the ground for CNA. CNA can contribute to the effectiveness of CND. CND can collect information about adversary capabilities that can be used to support CNE operations.

All three activities draw on the same kinds of capabilities and can be used to support the others.

But there is still a crucial distinction to be drawn between cyber espionage and cyber war. One is spying, and Canada—through CSE—is already deeply engaged in it. The other seeks to damage or destroy data or information systems or even, potentially, to destroy physical objects and kill people. Cyber warfare can range from simple disruption, interfering with the communications of a terrorist organization for example, to total war.

Should Canada develop a cyber war capability?

“It may well be that in some circumstances it’s something that we’d want to do,” Fadden suggests in the interview.

But he also says it would be "expensive and dangerous", and he argues for greater emphasis on CND: "Personally I think we should be better at defensive. Really develop our capacity to resist these attacks and to make sure that people understand the level of threat that we’re under."

So, put him down—tentatively at least—as a cyber war skeptic.

It all sounds very hypothetical.

But I suspect Fadden chose to raise the issue because Canada is moving rapidly towards creating a CNA capability, and it is doing so largely in the dark, with very little public awareness or debate.

NITRO ZEUS: CNA against Iran

Recent revelations about U.S. and Israeli contingency plans for a major cyber war campaign against Iran highlight the extent to which CNA capabilities are moving from the theoretical to the real.

The Stuxnet worm, which the U.S. and Israel used to damage and delay Iran's uranium enrichment program, is the best-known example of a state-sponsored CNA operation.

But Stuxnet was only the tip of the iceberg. According to the New York Times (David E. Sanger & Mark Mazzetti, "U.S. Had Cyberattack Plan if Iran Nuclear Dispute Led to Conflict," New York Times, 16 February 2016), preparations were made for a much wider range of attacks against Iran's "air defenses, communications systems and crucial parts of its power grid" in the event that the dispute over Iran's nuclear program escalated into open use of force.

Preparations for the campaign, codenamed NITRO ZEUS, began in early 2009, and ultimately involved "thousands of American military and intelligence personnel, spending tens of millions of dollars and placing electronic implants in Iranian computer networks to “prepare the battlefield,” in the parlance of the Pentagon."

The operation was envisaged as an adjunct, or possibly an alternative, to a traditional military campaign against Iran. Bringing Israel on board was seen in part as a means of restraining the Netanyahu government from launching a unilateral attack that might prematurely foreclose options for resolving the dispute diplomatically. (More about NITRO ZEUS here.)

Unlike traditional military contingency plans, which normally don't involve actual operations within the target country prior to a decision to go to war, preparations for cyber operations require prior entry into the systems that ultimately would be attacked in order to choose targets, ensure access at the moment of attack, and maximize the effects of the operation. Thus, although the cyber warfare plan was never executed, preparations within the Iranian cyber infrastructure undoubtedly took place.

Similar contingency plans are probably also in place for other potential adversaries such as China and Russia.

As a close NSA ally and a significant CNE player in its own right—one that we know had active operations in Iran at the time NITRO ZEUS preparations were apparently underway—CSE could not fail to be aware at some level of the presence of the U.S.-Israeli operation, although almost certainly not of its details. If nothing else, NSA would have wanted to ensure that CSE's CNE operations did not interfere with or accidentally expose the NITRO ZEUS preparations.

But there is no evidence of any direct Canadian involvement in the NITRO ZEUS preparations, and there's little reason to expect there would have been any Canadian involvement.

CSE and CNA

This 2013 NSA document describing the state of NSA-CSE cooperation confirms that the two agencies work together on CNE operations in the Middle East, among other regions, but it contains no suggestion that they collaborate on CNA operations.

There are many reasons why the U.S. might want to minimize the number of additional players whose participation would complicate as sensitive and tightly-held a CNA operation as NITRO ZEUS.

But the most important roadblock to such collaboration, at least as far as CSE is concerned, is that CSE has had little or no mandate to conduct CNA activities (although it has shown interest in such capabilities; see p. 22 here).

[Update 19 April 2016: An even better example can be found on p. 23 of this presentation, where CSE says "We will seek the authority to conduct a wide spectrum of Effects operations in support of our mandates."]

The 2015 passage of Bill C-51 has probably opened the way for CSE participation in small-scale CNA activities such as efforts to disrupt the operations of terrorist organizations. Since such activities can now be conducted by CSIS under the "disruption" powers granted to the agency in Bill C-51, CSE's Mandate C, which authorizes it to assist CSIS operations, should provide a legal basis for CSE participation in limited CNA activities under CSIS auspices.

Those powers are unlikely to extend to outright cyber warfare, however. Large-scale activities against the armed forces or domestic infrastructure of an adversary state on the scale of the NITRO ZEUS plan would probably require a different set of authorities.

The Canadian Forces and cyber war

Although CSE's CNE operators might be called upon to provide advice and assistance, large-scale offensive cyber operations would probably be executed by the Canadian Forces acting under the laws of war.

In the United States, a similar division of roles has already been formalized, with the Pentagon's Cyber Command, created in 2010, now responsible for CNA. Although run by the same officer who serves as Director of the NSA and able to draw upon NSA knowledge and resources, Cyber Command is a military organization under military command.

Canada does not yet have a direct equivalent to Cyber Command, but the development of CNA authorities and capabilities has been under discussion within the Canadian Forces for a long time.

A draft strategy paper called on the Canadian Forces to develop the ability to conduct offensive computer operations as long ago as July 2000 (Jim Bronskill, “Cyber-attack capability in military’s plans?” Edmonton Journal, 11 March 2001). [Update 19 April 2016: I am reminded by a reader that early discussions of these issues can be found in documents dating to the mid-1990s.]

But few if any steps were taken in the direction of creating an actual CNA capability for many years. A December 2009 report by DND's Centre for Operational Research and Analysis (CF Cyber Operations in the Future Cyber Environment Concept) confirmed that the CF's network operations were still "not established to conduct offensive network operations".

There is reason to believe, however, that this situation has begun to change.

In April 2011, DND created the position of Director General Cyber to help "develop the military’s future cyber capabilities", potentially including offensive capabilities (Chris Thatcher, "Operationalizing the cyber domain," Vanguard, 26 June 2013).

The current DG Cyber (or DG Cyber Warfare, or DG Cyberspace) is Brigadier General Frances J. Allen, a former Commander of the Canadian Forces Information Operations Group (CFIOG) and an early advocate of CNA capabilities for the CF. (Allen wrote a paper recommending the development of CNA capabilities in 2002 when she was still a lieutenant-colonel. [Update 22 April 2016: I mistakenly said major originally.])

More recently, in September 2015, Defence Minister Jason Kenney implied that such a capability either already exists or soon would, saying, "I think you can reasonably assume that when the military develops a command, it has to have the capability to be both offensive and defensive. Potentially hostile countries need to know that, if they are going to launch cyber attacks against our critical systems, Canada and its allies have the capacity to retaliate." (Justin Ling, "Canada’s Defense Minister Talks Fighting the Islamic State, Arming the Kurds, and Cyber Warfare," Vice News, 28 September 2015)

DG Cyber is not a command as such, but Kenney's comments do suggest that Canada may be close to fielding operational CNA capabilities.

The appointment in early 2015 of a Canadian Forces liaison officer to the U.S. Cyber Command also suggests the potential existence of Canadian CNA capabilities.

The discussion document prepared by the government for the current defence policy review (Defence Policy Review: Public Consultation Document 2016) is uninformative about the state of Canada's current cyber warfare capabilities, but it does at least admit that the question is one that needs to be addressed:
Cyber capabilities can be used to disrupt threats at their source, and can offer alternative options that can be utilized with less risk to personnel and that are potentially reversible and less destructive than traditional uses of force to achieve military objectives. Some of our key allies, such as the US and the UK, have stated that they are developing cyber capabilities to potentially conduct both defensive and offensive military activities in cyberspace. We must consider how to best position the Canadian military to operate effectively in this domain.

CNA versus ISIS

CSE and/or the Canadian Forces may already be operating offensively in the cyber domain in a limited way, conducting CNA operations against the Islamic State.

Fadden floated this possibility in a hypothetical way in his interview with Global:
If we have Canadian troops somewhere around the world, Iraq as an example, and they can use somewhat offensive cyber initiatives in order to reduce the threat that they and allies are facing, I would say that’s not an unreasonable thing for the public service to pull together and ask the government if they want to do.
My own suspicion (see Murray Brewster, "Canada's electronic spy service to take more prominent role in ISIS fight," Canadian Press, 18 February 2016) is that this possibility is considerably less hypothetical than Fadden's comments suggested. The only thing that has been confirmed to date, however, is that CSE is playing a force protection role in Operation Impact.

The U.S. recently acknowledged that its own forces have begun using cyber warfare capabilities against ISIS (Phil Stewart & David Alexander, "U.S. waging cyber war on Islamic State, commandos active," Reuters, 29 February 2016), and, unlike the NITRO ZEUS plan, it seems likely that a Canadian contribution to CNA operations against ISIS would be welcomed by the U.S.

The bigger picture

The development and spread of cyber warfare capabilities poses significant new security problems for Canada and other countries.

In principle, CNA operations can be very precise and limited, but they may also have the potential to produce indiscriminate nationwide or even global effects, destroying or disabling vital infrastructure, paralyzing government operations and economic activity, and causing significant civilian casualties.

The potentially game-changing nature of cyber warfare capabilities has been compared to that of nuclear weapons.

There are of course many important differences between cyber weapons and nuclear weapons. Nuclear weapons pose a true existential threat to human civilization. Cyber weapons might cause catastrophic damage in a worst-case scenario, but they are more likely to be used like conventional weapons to produce much more limited and localized (although not necessarily entirely predictable) effects.

Still, a world with widespread cyber weaponry could prove highly unstable. Cyber weapons pose a significant attribution problem (how do you know who's actually attacking you?), and the barriers to the acquisition of cyber weapons are low, meaning a wide range of states, groups, and even individuals may be able to develop significant cyber capabilities. In addition, the effectiveness of cyber capabilities may depend on maintaining access to and even deliberately introducing vulnerabilities into potential target systems during peacetime, which could end up increasing the likelihood of hostilities. Finally, the huge range of possible damage levels in cyber warfare and the overlap between CNA and CNE activities mean there is no clear threshold between cyber peace and cyber war, and thus the possibility of blundering into an unintended conflict is potentially very high. With no clear agreement on cyber rules of the road, there are many ways even a CNA strategy focused on deterrence could fail catastrophically.

It is not necessary to frame the risks posed by cyber warfare in apocalyptic terms to nonetheless recognize that, as Fadden suggested, CNA activities could be both expensive and dangerous. A focus on defence and resilience may well be the best path to take.

At the very least, Canadians should have an open debate on the pros and cons of taking the cyber war path before the government launches us down that road.

Update 22 June 2016: More from a somewhat less skeptical-sounding Fadden here: Murray Brewster, "Former CSIS head says Canada should have its own cyber-warriors," CBC News, 22 June 2016. Transcript of Fadden's remarks to CBC The Current, 22 June 2016.

Update 26 August 2016: Former CSE Chief John Adams joins the debate, calling for the development of offensive cyber war capabilities for the Canadian Forces: John Adams, "Canada and Cyber," Canadian Global Affairs Institute, July 2016.

• Inside ‘The Attack That Almost Broke the Internet’

In March 2013, a coalition of spammers and spam-friendly hosting firms pooled their resources to launch what would become the largest distributed denial-of-service (DDoS) attack the Internet had ever witnessed. The assault briefly knocked offline the world’s largest anti-spam organization, and caused a great deal of collateral damage to innocent bystanders in the process. Here’s a never-before-seen look at how that attack unfolded, and a rare glimpse into the shadowy cybercrime forces that orchestrated it.

The following are excerpts taken verbatim from a series of Skype and IRC chat room logs generated by a group of “bullet-proof cybercrime hosts” — so called because they specialized in providing online hosting to a variety of clientele involved in spammy and scammy activities.

Facebook profile picture of Sven Olaf Kamphuis

Gathered under the banner ‘STOPhaus,’ the group included a ragtag collection of hackers who got together on the 17th of March 2013 to launch what would quickly grow to a 300+ gigabit-per-second (Gbps) attack on Spamhaus.org, an anti-spam organization that they perceived as a clear and present danger to their spamming operations.

The attack, a stream of some 300 billion bits of data per second, was so large that it briefly knocked offline Cloudflare, a company that specializes in helping organizations stay online in the face of such assaults. Cloudflare dubbed it “The Attack that Almost Broke the Internet.”
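To put 300 Gbps in everyday units (my own arithmetic):

```python
gbps = 300                            # peak attack traffic, gigabits/second
gb_per_s = gbps / 8                   # gigabytes per second
tb_per_hour = gb_per_s * 3600 / 1000  # terabytes per hour
print(f"{gb_per_s:.1f} GB/s, ~{tb_per_hour:.0f} TB per hour")
# prints "37.5 GB/s, ~135 TB per hour"
```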

The campaign was allegedly organized by a Dutchman named Sven Olaf Kamphuis (pictured above). Kamphuis ran a company called CB3ROB, which in turn provided services for a Dutch company called “Cyberbunker,” so named because the organization was housed in a five-story NATO bunker and because it had advertised its services as a bulletproof hosting provider.

Kamphuis seemed to honestly believe his Cyberbunker was sovereign territory, even signing his emails “Prince of Cyberbunker Republic.” Arrested in Spain in April 2013 in connection with the attack on Spamhaus, Kamphuis was later extradited to The Netherlands to stand trial. He has publicly denied being part of the attacks and his trial is ongoing.

According to investigators, Kamphuis began coordinating the attack on Spamhaus after the anti-spam outfit added to its blacklist several of Cyberbunker’s Internet address ranges. The following logs, obtained by one of the parties to the week-long offensive, showcase the planning and execution of the DDoS attack, including digital assaults on a number of major Internet exchanges. The record also exposes the identities and roles of each of the participants in the attack.

The logs below are excerpts from a much longer conversation. The entire, unedited chat logs are available here. The logs are periodically broken up by text in italics, which includes additional context about each snippet of conversation. Also please note that the logs below may contain speech that some find offensive.

====================================================================

THE CHAT LOG MEMBERS
————————————————————
Aleksey Frolov : vainet[dot]biz, vainet[dot]ru, Russian host.
————————————————————
Alex Optik : Russian ‘BP host’. AKA NEO
————————————————————
Andrei Stanchevici : secured[dot]md Moldova
————————————————————
Cali : Vitalii Boiko AKA Vitaliyi Boyiko AKA Cali Yhzar, alleged by Spamhaus to be dedicated crime hosters urdn[dot]com.ua AKA Xentime[dot]com AKA kurupt[dot]ru
————————————————————
Darwick : Zemancsik Zsolt, 23net[dot]hu, Hungarian host.
————————————————————
eDataKing : Andrew Jacob Stephens, Ohio/Florida-based spamware seller formerly listed on Spamhaus’s Register of Known Spam Operations (ROKSO). He was the main social media mouthpiece of Stophaus (e.g. see @stophaus). Andrew threatens to sue everyone for libel, and is likely to show up in the comments below and do the same here.
————————————————————
Erik Bais : A2B Internet, Netherlands
————————————————————
Goo : Peter van Gorkum AKA Gooweb.nl, alleged by Spamhaus to be a botnet supplier in the Netherlands.
————————————————————
Hephaistos : AKA @AnonOps on Twitter
————————————————————
HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: Sven Olaf Kamphuis
AKA Cyberbunker AKA CB3ROB
————————————————————
Karlin König : Suavemente/SplitInfinity, San Diego based host.
————————————————————
marceledler : German hoster that Spamhaus says has a history of hosting spammers, AKA Optimate-Server[dot]de
————————————————————
Mark – Evgeny Pazderin : Russian, alleged by Spamhaus to be hoster of webinjects used for man-in-the-middle attacks (MITM) against online banking sessions.
————————————————————
Mastermind of Possibilities : Norman “Chris” Jester AKA Suavemente/SplitInfinity, alleged by Spamhaus to be San Diego based spam host.
————————————————————
Narko : Sean Nolan McDonough, UK-based teenager and trigger man in the attack. Allegedly hired by Yuri to perform the DDoS; he later pleaded guilty to coordinating the 2013 attack.
————————————————————
NM : Nikolay Metlyuk, according to Spamhaus a Russian botnet provider
————————————————————
simomchen : Simon Chen AKA idear4business, seller of counterfeit Chinese products, formerly listed on Spamhaus ROKSO.
————————————————————
Spamahost : As its name suggests, a Russian host specializing in spam, spam and spam.
————————————————————
valeralelin : Valerii Lolin, infiumhost[dot]com, Ukraine
————————————————————
Valeriy Uhov : Per Spamhaus, a Russian ‘bulletproof hoster’.
————————————————————
WebExxpurts : Deepak Mehta, alleged cybercrime host specializing in hosting botnet C&Cs. AKA Turbovps (<bd[at]turbovps[dot]com>).
————————————————————
wmsecurity : off-sho[dot]re ‘Bulletproof’ hoster. Lithuania. AKA “Antitheist”. Profiled in this story.
————————————————————
Xennt : H.J. Xennt, owner of Cyberbunker.
————————————————————
Yuri : Yuri Bogdanov, owner of 2x4[dot]ru. According to Spamhaus, 2x4[dot]ru is a longtime spam-friendly Russian host, formerly part of the Russian Business Network (RBN). Allegedly hired Narko to launch the DDoS attack against Spamhaus.
============================================================

[17.03.2013 19:51:31] eDataKing: watch the show: http://www.webhostingtalk.com/showthread.php?t=1247982
[17.03.2013 19:52:02] -= Darwick =-: hell yeah!
[17.03.2013 19:52:09] -= Darwick =-: hit them hard
[17.03.2013 19:54:07] -= Darwick =-: is that a ddos attack?
[17.03.2013 19:54:56] eDataKing: but let’s forget what it is and focus on it’s consequence lol

====================================================================

A number of chat members chastise eDataKing for incessantly posting comments to what they refer to as “nanae,” a derisive reference to the venerable USENET anti-spam list (news.admin.net-abuse.email) that focused solely on exposing spammers and their spamming activities. A flustered eDataKing has been firing off rapid-fire, emotional replies to anti-spammers on nanae, but his buddies don’t want that kind of attention drawn to their cause.

[17.03.2013 20:27:57] Mastermind of Possibilities: Andrew why are you posting in nanae? Stop man lol

====================================================================

Some of the chat participants begin debating whether they should take up residence in a country that does not cooperate with the United States on extradition.

[18.03.2013 02:28:30] eDataKing: what about a place that takes an ex-felon from the US for citizenship or expat?

====================================================================

The plotters begin running scans to find misconfigured or ill-protected systems that can be enslaved in attacks. They’re scanning the Web for domain name system (DNS) servers that can be used to amplify and disguise, or “reflect,” the source of their attacks. Narko warns Sven against trying to enlist servers hosted by Dutch ISP Leaseweb, which was known to anticipate such activity and re-route attack traffic back to its true source.
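Reflection works because a small UDP query sent with a spoofed source address elicits a much larger answer, and the "amplification factor" is simply response bytes divided by query bytes. A minimal sketch of the arithmetic, using only the standard library (the packet layout follows the DNS wire format; the example name, query type and response size are illustrative, not taken from the logs):

```python
import struct

def dns_query_size(name: str) -> int:
    """Return the size in bytes of a minimal DNS query packet for `name`.

    The header is 12 bytes; the question section is the name encoded as
    length-prefixed labels plus a zero terminator, then QTYPE and QCLASS
    (2 bytes each).
    """
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 255, 1)  # QTYPE=ANY, QCLASS=IN
    return len(header) + len(question)

def amplification_factor(query_bytes: int, response_bytes: int) -> float:
    """Bytes the reflector sends to the victim per byte the attacker spoofs."""
    return response_bytes / query_bytes

# A DNS query for "example.com" is 29 bytes of payload (plus ~28 bytes of
# IP/UDP overhead on the wire); a large ANY response of a few kilobytes
# multiplies the attacker's traffic dozens of times over.
print(dns_query_size("example.com"))        # 29
print(amplification_factor(64, 3072))       # 48.0
```

This is why open resolvers matter to the attackers: the victim receives the large responses, while the small spoofed queries hide the true source.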

[18.03.2013 16:39:22] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: is just global transit thats filtered with them
[18.03.2013 16:39:33] narko: they change the ip back to your real server ip
[18.03.2013 16:39:38] narko: you will ddos your own server if you try this attack at leaseweb
[18.03.2013 16:39:46] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: hmm
[18.03.2013 16:39:50] Antitheist: what about root.lu?
[18.03.2013 16:39:54] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: very creative of them
[18.03.2013 16:39:55] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: lol
[18.03.2013 16:40:21] Antitheist: and nforce
[18.03.2013 16:49:22] Antitheist: i host many cc shops, they even appeared on krebs blog
[18.03.2013 16:49:27] narko: where?

At around 4 p.m. GMT that same day, Sven announces that the group’s various cyber armies had succeeded in knocking Spamhaus off the Internet. Incredibly, Sven advertises his involvement with the group to all 3,850 of his Facebook friends.

www.spamhaus.org still down, and that criminal bunch of self declared internet dictators will still remain down, until our demands are met. over 48h already. resolving your shit. end of the line buddy. should have called and paid for the damages.
[17.03.2013 22:25:54] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: rokso no longer exists haha
[17.03.2013 22:29:51] Mastermind of Possibilities: Where is that posted ?
[17.03.2013 22:30:01] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: my 3850 facebook friends
[17.03.2013 22:30:12] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: you know, stuff people actually -use-… unlike smtp and nntp
[17.03.2013 22:30:12] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: lol
[17.03.2013 22:30:23] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: facebook.com/cb3rob

====================================================================

Spamhaus uses a friendly blog — Wordtothewise.com — to publish an alert that it is “under major dDos.” While Spamhaus is offline, various parties to the attack begin hatching ways to take advantage by poisoning search-engine results so that when one searches for “Spamhaus,” the first several results instead redirect to Stophaus[dot]org, the forum this group set up to coordinate the attacks.

[18.03.2013 13:09:09] Alex Optik: http://www.stopspamhaus.org/2013_02_01_archive.html
[18.03.2013 13:09:35] Alex Optik: as i see there is already has same projects
[18.03.2013 13:09:59] narko: (wave)
[18.03.2013 13:10:17] eDataKing: that site is owned by a person in this group Alex
Stealing SEO to bump Spamhaus while it’s offline for 3 days.
[18.03.2013 16:14:14] Antitheist: do you mind if we put spamhaus metatags on stophaus?
[18.03.2013 16:14:24] Antitheist: so we can come up first on google soon
Filing a fake Whois inaccuracy alert with ICANN.
[18.03.2013 16:26:45] narko: Your report concerning whois data inaccuracy regarding the domain spamhaus.org has been confirmed. You will receive an email with further details shortly. Thank you.
[18.03.2013 16:29:26] narko: Any future correspondence sent to ICANN must contain your report ID number.
Please allow 45 days for ICANN’s WDPRS processing of your Whois inaccuracy
claim. This 45 day WDPRS processing cycle includes forwarding the complaint
to the registrar for handling, time for registrar action and follow-up by
ICANN if necessary.

====================================================================

Sven Kamphuis then posts to Pastebin about “OPERATION STOPHAUS,” a tirade that includes a lengthy list of demands Sven says Spamhaus will have to meet in order for the DDoS attack to be called off. Meanwhile, another spam-friendly hosting provider, helpfully named “Spamahost[dot]com,” joins the chat channel. At this point, the attack has kept Spamhaus.org offline for the better part of 48 hours.

Narko’s account on Stophaus.

[19.03.2013 00:02:43] Yuri: another one hoster, spamahost.com added.
[19.03.2013 00:02:48] Yuri: i hope he can help with some servers.
[19.03.2013 00:02:57] spamahost: Will do ^^
[19.03.2013 00:05:49] eDataKing: be safe when accessing this link, but there was an edu writeup:http://isc.sans.edu/diary/Spamhaus+DDOS/15427
[19.03.2013 00:05:51] spamahost: Spamhaus can blow me.
[19.03.2013 00:06:00] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: me too
[19.03.2013 00:06:20] spamahost: What software you using to send out attacks?
[19.03.2013 00:06:22] spamahost: IRC and bots?
[19.03.2013 00:06:28] Yuri: spamhaus like spamahost very very much.
[19.03.2013 00:06:35] Yuri: that’s the realy true love
[19.03.2013 00:06:37] spamahost: Yes they love us
[19.03.2013 00:38:20] Yuri: MEGALOL
[19.03.2013 00:38:27] Yuri: spamhaus is down 3 days
[19.03.2013 00:38:58] Yuri: this is the graph of our mail server http://mx1.2x4.ru/cgi-bin/mailgraph.cgi
that shows amount of spam rejected by our mail server.
last days there are much less SPAm
[19.03.2013 00:39:13] Yuri: http://mail.2x4.ru same graph here.

====================================================================

The Stophaus members discover that Spamhaus is now protected by Cloudflare. This amuses the group, which notes that Spamhaus has frequently listed large swaths of Cloudflare Internet addresses as sources of spam.
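The exchange below turns on a detail of how Spamhaus's blocklists actually work: mail servers consult them by resolving hostnames under the spamhaus.org zone, so taking out that zone's nameservers breaks the lists themselves, not just the website. A mail server checks a connecting IP by reversing its octets and prepending them to the blocklist zone; a minimal sketch of that query-name construction (the function name is mine, the zone names are Spamhaus's real ones):

```python
def dnsbl_query_name(ip: str, zone: str = "sbl.spamhaus.org") -> str:
    """Build the hostname a mail server resolves to check `ip` against a DNSBL.

    The IPv4 octets are reversed and prepended to the blocklist zone; if the
    name resolves (conventionally to an address in 127.0.0.0/8), the IP is
    listed. If the zone's nameservers are unreachable, every lookup times out
    and the blocklist effectively stops working.
    """
    octets = ip.split(".")
    if len(octets) != 4 or not all(o.isdigit() and 0 <= int(o) <= 255 for o in octets):
        raise ValueError(f"not an IPv4 address: {ip!r}")
    return ".".join(reversed(octets)) + "." + zone

# 127.0.0.2 is the conventional always-listed test address:
print(dnsbl_query_name("127.0.0.2"))  # 2.0.0.127.sbl.spamhaus.org
```

This dependency on a fixed, widely-configured zone name is also why, as Sven observes below, Spamhaus "can't change the domain of the dns shitlist."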

[19.03.2013 00:47:07] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: cloudflare
[19.03.2013 00:47:48] Antitheist: fuck who would believe
[19.03.2013 00:48:10] Antitheist: after they listed all cloudlares /24 for being criminal supportive because of free reverse proxying
[19.03.2013 00:49:11] Antitheist: here we go again…
[19.03.2013 00:49:12] Antitheist: http://www.spamhaus.org/sbl/query/SBL179312
[19.03.2013 00:49:14] Antitheist: lol
[19.03.2013 00:49:46] Antitheist: it had been officialy bought…b-o-u-g-h-t
[19.03.2013 00:50:45] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: hmm
[19.03.2013 00:50:57] Antitheist: narko?
[19.03.2013 00:51:11] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: k… just take down the spamhaus.org nameservers…all 8 of em
[19.03.2013 00:51:22] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: after all the client on cloudflare is ‘spamhaus.eu’
[19.03.2013 00:51:33] Cali: spamhaus under cloudflare?
[19.03.2013 00:51:35] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: they still need the spamhaus.org nameservers for that and their shitlist to work
[19.03.2013 00:51:40] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: yeah with spamhaus.eu
[19.03.2013 00:51:46] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: which is a cname to spamhaus.org
[19.03.2013 00:51:59] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: so just take out the 8 spamhaus nameservers and stop targetting the old website
[19.03.2013 00:52:09] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: that ALSO takes out their dns shitlists…
[19.03.2013 00:52:12] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: indirectly
[19.03.2013 00:52:22] Yuri: that’s a fuck. a lot of work for us
[19.03.2013 00:53:20] Yuri: may be just let’s make cloudflare down ?
[19.03.2013 00:53:29] Antitheist: thats hard yuri
[19.03.2013 00:53:31] Yuri: so they will refuse any spamhaus
[19.03.2013 00:53:43] Antitheist: you need to cripple level3 and nlayer
[19.03.2013 00:54:04] Antitheist: |OR|
[19.03.2013 00:54:12] Antitheist: you need to spend too much traffic
[19.03.2013 00:54:16] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: narko: new target… the 8 nameservers of spamhaus.org… and still smtp-ext-layer.spamhaus.org ofcourse
[19.03.2013 00:54:20] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: no more www.spamhaus.org
[19.03.2013 00:54:24] Antitheist: since cloudflares packages are traffic volume priced
[19.03.2013 00:55:44] Karlin Konig: I don’t think they are charging spamhaus
[19.03.2013 00:56:27] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: as stated before, unfair competition, in many ways
[19.03.2013 00:56:28] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: lulz
[19.03.2013 00:57:46] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: hmm is cloudflare hosting? or a reverse proxy?
[19.03.2013 00:57:57] Cali: reverse proxy.
[19.03.2013 00:58:00] Yuri: reverse
[19.03.2013 00:58:09] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: as when its a reverse proxy, it probably goes to that spamhaus.as1101.net box
[19.03.2013 00:58:13] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: aka, surfnet.
[19.03.2013 01:00:17] Cali: This website is offline
[19.03.2013 01:02:26] narko: I will make down their cloudflare if I have enough free servers
[19.03.2013 01:02:30] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: they moved it to cloudlfare
[19.03.2013 01:02:31] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: lol
[19.03.2013 01:02:43] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: then just go for the nameservers on spamhaus.org
[19.03.2013 01:02:49] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: which also breaks their dns shitlist
[19.03.2013 01:02:52] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: after 24h
[19.03.2013 01:02:55] Cali: usually websites use cloudflare dns as well.
[19.03.2013 01:02:58] Cali: so they might change soon.
[19.03.2013 01:03:03] Cali: I think you should give them some hope
[19.03.2013 01:03:10] Cali: because they will be so proud to bring it back
[19.03.2013 01:03:14] Cali: then you switch it off again
[19.03.2013 01:03:20] Cali: they will rage
[19.03.2013 01:03:23] Karlin Konig: it’s down again
[19.03.2013 01:03:24] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: they do… spamhaus.EU is on cloudflare dns
[19.03.2013 01:03:25] Karlin Konig: lol
[19.03.2013 01:03:30] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: spamhaus.org… is on spamhaus dns
[19.03.2013 01:03:45] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: for the very obvious reason that they have 70 dns shitlist servers in that zone
[19.03.2013 01:03:49] Cali: yeah but I think they might change that soon.
[19.03.2013 01:03:52] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: and those use their weird rotating system
[19.03.2013 01:03:54] Cali: ahah
[19.03.2013 01:03:57] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: cloudflare can’t do that
[19.03.2013 01:04:04] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: they can’t change the domain of the dns shitlist
[19.03.2013 01:04:05] Cali: even with the paid version?
[19.03.2013 01:04:07] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: so they have to keep that
[19.03.2013 01:04:30] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: soo… if they come up again, just kill the dns servers on their main domain spamhaus.org
[19.03.2013 01:04:33] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP:
[19.03.2013 01:04:33] Cali: ok, now it is online and responds.
[19.03.2013 01:04:50] narko: ok
[19.03.2013 01:04:52] narko: moment
[19.03.2013 01:05:07] Cali: http://www.spamhaus.org/images/spamhaus_dnsbl_basic.gif “meet spamhaus policy”
[19.03.2013 01:05:07] Cali: lol
[19.03.2013 01:05:14] Cali: like IPs have to meet Spamhaus policies
[19.03.2013 01:05:18] Cali: lol
[19.03.2013 01:05:24] narko: they are using the cloudflare paid plan
[19.03.2013 01:05:31] narko: as they have 5 IP
[19.03.2013 01:05:31] narko: not 2
[19.03.2013 01:05:44] narko: i think it means that cf will keep them longer
[19.03.2013 01:05:46] narko:
[19.03.2013 02:09:03] narko: added some extra gbit/s to two dns servers that seemed half-up lets see if google dns renews it now
[19.03.2013 02:09:28] Yuri: fuck.. no dns resolve :))))
[19.03.2013 02:09:45] narko: (mm)
[19.03.2013 02:09:57] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: when -these- time out, they’re out of business
[19.03.2013 02:10:01] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: <<>> DiG 9.8.1-P1 <<>> A b.ns.spamhaus.org
[19.03.2013 08:01:24] Yuri: good morning
[19.03.2013 08:01:32] Yuri: it was short night for me…. fuck
[19.03.2013 08:01:40] Yuri: spamhaus is down ? again ?
[19.03.2013 08:02:09] Yuri: looks it’s some our friend work
[19.03.2013 08:10:30] simomchen: how about we hijack spamhaus’s IP together , if can not take them down again ?
[19.03.2013 08:10:59] Yuri: we would like to.
[19.03.2013 08:11:08] Yuri: but we need upstream who will allow us to do that
[19.03.2013 08:11:25] simomchen: we can just announce those over IX exchange
[19.03.2013 08:11:34] simomchen: them , do not need upstream allow this
[19.03.2013 08:11:39] nmetluk: Russian upstreams allow:)
[19.03.2013 08:13:10] Yuri: (at least we have one good russian upstream here)
[19.03.2013 08:14:15] Yuri: spamhaus desided to bring some shit sbls to infiumhost.com, /22 listed just for nothing. and some extra SBLs to pinspb
[19.03.2013 08:14:28] eDataKing: that is how they do it
[19.03.2013 08:14:35] eDataKing: that is why it is terrorism
[19.03.2013 08:14:57] simomchen: SH will force upstreams disconnect them
[19.03.2013 08:15:05] simomchen: that’s their next step
[19.03.2013 08:15:15] Yuri: they are too big to be disconneted
[19.03.2013 08:15:22] eDataKing: yes, the upstream does not really make the decision because the decision is coerced through damages
[19.03.2013 08:15:43] eDataKing: who is too big to be disconnected?
[19.03.2013 08:16:03] simomchen: infiumhost.com ?
[19.03.2013 08:16:31] Yuri: pinspb.ru
[19.03.2013 08:16:33] Yuri: gpt.ru
[19.03.2013 08:16:42] Yuri: and other that was with some new sbls today
[19.03.2013 08:16:50] Yuri: currenty it’s just nothing serious
[19.03.2013 08:16:58] Yuri: they keep searching
[19.03.2013 08:24:33] simomchen: Donate to the fund needed to shut SH down for good. Send your donations via Bitcoin to 17SgMS56W6s1oMU7oEZ66NFkbEk1socnTJ

====================================================================

At this point, several media outlets begin erroneously reporting that the DDoS attack on Spamhaus and Cloudflare is the work of Anonymous (probably because Kamphuis ended his manifesto with the Anonymous tagline, “We do not forgive. We do not forget”).

[19.03.2013 12:35:51] Antitheist: lol, anonymous indonesia took the responsibility for the spamhaus ddos
[19.03.2013 12:36:38] Antitheist: wait no, its all over softpedia! hahaha
[19.03.2013 12:37:31] Antitheist: http://news.softpedia.com/news/Anonymous-Hackers-Launch-DDOS-Attack-Against-Spamhaus-338382.shtml
[19.03.2013 12:46:11] narko: http://www.spamhaus.org/sbl/query/SBL179322
[19.03.2013 12:46:39] Antitheist: http://www.spamhaus.org/sbl/query/SBL179321
[19.03.2013 12:55:30] Yuri: people report that MAIL from spamhaus start working
[19.03.2013 12:55:42] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: oeh! spam!
[19.03.2013 12:56:03] Antitheist: the mail is their weakest point, since cloudflare cannot protect it
[19.03.2013 12:56:22] Antitheist: so we need to hit there. the result means no SBL removals
[19.03.2013 14:46:09] Yuri: news.softpedia.com
[19.03.2013 14:46:16] Antitheist: they think its anonymous because of Svens pastebin
[19.03.2013 14:46:48] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: also good
[19.03.2013 14:46:56] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: then the rest of anon also thinks its anon
[19.03.2013 14:47:00] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: and starts to help
[19.03.2013 14:47:01] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: lol
[19.03.2013 14:47:17] Yuri: wow what a news
[19.03.2013 14:47:17] Antitheist: lol anon-amplification yeah
[19.03.2013 14:47:26] Yuri: spamhaus says in twitter that softpedia new is false
[19.03.2013 14:47:29] Yuri: :)))
[19.03.2013 15:10:05] eDataKing: 1. Let them think Anons were behind it and do not dispute
[19.03.2013 15:10:05] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: can’t sign up for twitter as i don’t have any working email lol
[19.03.2013 15:10:21] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: edataking: its allready all over the press that its not anons lol.
[19.03.2013 15:10:22] Antitheist: I know Mohit from thehackernews, if it gets posted there it will soon be viral
[19.03.2013 15:10:26] eDataKing: or 2. Remind them that Anons are everyone and Anonymous as a group did not orchestrate it
[19.03.2013 15:10:30] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: at least in .nl its quite clear that its the republic cyberbunker and others
[19.03.2013 15:10:30] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: haha
[19.03.2013 15:10:58] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: that anon also has some ehm… stuff to ‘arrange’ with spamhaus, is a different story
[19.03.2013 15:11:19] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: *points out that over half of my facebook friends have the masks anyway*
[19.03.2013 15:11:28] eDataKing: Anonymous name gets major media
[19.03.2013 15:11:33] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: and that i’m still officially the PR guy for anonymous germany
[19.03.2013 15:14:36] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: y my name don’t fit twitter..
[19.03.2013 15:14:40] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: HRH Sven Olaf Prince
Spamhaus fights back by getting Stophaus Twitter accounts shut down and listing stophaus on the SBL.

====================================================================

Spamhaus has by now worked out the identities of many Stophaus members, and has begun retaliating against them individually by listing Internet addresses tied to their businesses and personal lives. Here, Narko reveals that he runs his own (unprofitable) hosting firm, which Spamhaus found and listed because it was hosting stophaus[dot]org.

[19.03.2013 17:50:04] narko: im back
[19.03.2013 17:50:25] narko: the nameservers for stophaus need to be changed
[19.03.2013 17:51:04] narko: spamhaus SBLed my site and my host will terminate me unless spamhaus tells them that it’s ok
[19.03.2013 17:51:08] narko: fucking internet police
[19.03.2013 17:52:57] eDataKing: ok, what are we changing them to?
[19.03.2013 17:53:40] narko: i will set up dns servers on my home connection
[19.03.2013 17:53:41] narko: lol
[19.03.2013 17:53:45] narko: i dont think my isp gives a shit
[19.03.2013 17:53:48] narko: i’m alraedy in PBL
[19.03.2013 17:53:56] eDataKing: lol, as long as you are safe
[19.03.2013 17:53:59] narko: what does it matter if i’m in SBL?
[19.03.2013 17:54:04] narko: well.. as long as they won’t ddos me
[19.03.2013 17:54:05] eDataKing: ok, then it should be all good
[19.03.2013 17:54:06] narko: I have a static ip
[19.03.2013 17:54:50] narko: I want to buy a /24 and host this just to fuck spamhaus
[19.03.2013 17:54:57] narko: anyone selling /24 i pay €200
[19.03.2013 17:55:34] narko: i cannot believe that my host is telling me i need to leave for a fake SBL listing that is not even hosted at their network
[19.03.2013 17:55:38] Yuri: they will list all network at once and put upsteam
[19.03.2013 17:55:39] narko: why do they listen to spamhaus..?
[19.03.2013 18:21:28] simomchen: let me make a CC to them in China
[19.03.2013 18:21:35] eDataKing: then this will kill them in the end
[19.03.2013 18:22:10] Yuri: stophaus.com moved to new DNS.
[19.03.2013 18:22:16] simomchen: I brought 50K adsl Broilers just now
[19.03.2013 18:22:48] eDataKing: Then their DNS is a ticking timebomb dependent on public support. They don’t have a lot of that left
[19.03.2013 18:23:46] Yuri: 50k of what?
[19.03.2013 18:23:52] Antitheist: DNS of stophaus should be hosted on cloudflare imho
[19.03.2013 18:24:13] Antitheist: they will be afraid to list it lol
[19.03.2013 18:24:20] simomchen: 50000 ADSL broilers zombies , hehe
[19.03.2013 18:24:23] Yuri: cloudflare will kick off
[19.03.2013 18:24:27] Yuri: oohh.. shit.
[19.03.2013 18:24:48] Yuri: we need a plan how to fight
[19.03.2013 18:27:02] simomchen: Antitheist:
<<< we need bots that will do large POST requests on the search form of ROKSO
yes, that’s CC attack I said just now. ROKSO is not big enought , I’m CC their http://www.spamhaus.org/sbl/latest/ currently
[19.03.2013 18:27:11] simomchen: do not know cloudflare can handle that
[19.03.2013 18:27:24] Antitheist: SBL are not in mysql
[19.03.2013 18:27:53] Antitheist: there is no search on the DB when you request them
[19.03.2013 18:28:06] eDataKing: true
[19.03.2013 18:28:12] Antitheist: but a search form, any of them, must have at least 1 SELECT statement
[19.03.2013 18:28:23] Antitheist: yes, see the search form
[19.03.2013 18:28:27] eDataKing: RBLs are on a Logistics server at abuseat.org
[19.03.2013 18:28:29] Antitheist: you need to post long random shit there
[19.03.2013 18:28:34] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: SBL157600 5.157.0.0/22 webexxpurts.com 19-Mar 13:53 GMT Spammer hosting (escalation) SBL157599 5.153.238.0/24 webexxpurts.com 19-Mar 13:53 GMT Spammer hosting (escalation)
[19.03.2013 18:28:36] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: lol
[19.03.2013 18:28:41] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: wasn’t he in here the other day
[19.03.2013 18:28:46] eDataKing: at least the cbl is
[19.03.2013 18:28:54] eDataKing: yes
[19.03.2013 18:28:59] eDataKing: He left?
[19.03.2013 18:29:05] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: dunno
[19.03.2013 18:29:05] simomchen: okay, let me make a ‘search’
[19.03.2013 18:29:08] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: changed names?
[19.03.2013 18:29:12] eDataKing: maybe
[19.03.2013 18:29:21] eDataKing: that was who I thought Darwin was
[19.03.2013 18:29:47] eDataKing: like he changed his name in the middle of a conversation
[19.03.2013 18:29:54] eDataKing: and Darwin picked up the chat
[19.03.2013 18:29:54] Antitheist: oh good news, its available in GET as well
[19.03.2013 18:30:01] Antitheist: http://www.spamhaus.org/rokso/search/?evidence=LONGSHITGOESHERE
[19.03.2013 18:30:40] eDataKing: They are desperate to take down the content though
[19.03.2013 18:30:55] eDataKing: I knew they would be scared to show their faces to public scrutiny
[19.03.2013 18:36:03] Yuri: SBL179370 66.192.253.42/32 twtelecom.net 19-Mar 15:15 GMT Suavemente/SplitInfinity/Innova Direct : Feed to Jelly Digital (AS4323 >>> AS33431)
SBL179369 4.53.122.98/32 level3.net 19-Mar 15:03 GMT Suavemente/SplitInfinity/Innova Direct : Feed to Critical Data Network, Inc. (AS3356 >>> AS53318)
spamhaus started to fuck hardly everywhere. they are angry.
[19.03.2013 18:37:39] Antitheist: no mercy anymore, everyone who they scraped out of stophaus members gets the entire /24 listed in ROKSO
[19.03.2013 18:37:40] simomchen: cloudflare service them , we are angry too
[19.03.2013 18:40:35] simomchen: but if the ddos keeping , I think spamhaus would go bankrupt
[19.03.2013 18:40:52] narko: they won’t go bankrupt
[19.03.2013 18:40:55] narko: he will just buy a smaller boat
[19.03.2013 18:41:00] simomchen: because cloudflare must charge tons of money form them
[19.03.2013 18:41:34] simomchen: what they can do in that boat ? if they do not pay to cloudflare , they will down again
[19.03.2013 18:41:48] narko: cloudflare only cost $200 per month
[19.03.2013 19:02:27] Yuri: For SBLs spamhaus use
[19.03.2013 19:02:27] Yuri: <<< http://stopforumspam.com/ https://www.projecthoneypot.org/ – this one for sure https://zeustracker.abuse.ch/ https://spyeyetracker.abuse.ch/ those sites 100%
[19.03.2013 19:02:39] narko: ok let’s make these down
[19.03.2013 21:32:06] narko: i run my host company since FEB 2012 and i am still losing like 350$ per month lol
[19.03.2013 21:32:28] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: we’ve been doing it commercially since 1996 on ‘cb3rob’
[19.03.2013 21:32:34] eDataKing: how much would that be?
[19.03.2013 21:32:39] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: and well.. there are times where it runs at a loss
[19.03.2013 21:32:45] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: and there are times where it makes heaps
[19.03.2013 21:32:55] narko: i have not had a single month
[19.03.2013 21:33:01] narko: where the costs of servers+licenses were covered..
[19.03.2013 21:33:12] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: you don’t have your own servers either/
[19.03.2013 21:33:13] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: ?
[19.03.2013 21:33:16] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: just reselling?
[19.03.2013 21:33:32] narko: rent server, install cpanel, advertise
[19.03.2013 21:33:33] narko: (y)
[19.03.2013 21:33:45] eDataKing: agreed
[19.03.2013 21:33:54] narko: but I think soon i will buy my own servers and colo
[19.03.2013 21:33:56] narko: it will be cheaper
[19.03.2013 21:34:04] eDataKing: agreed as well
[19.03.2013 21:34:06] narko: the problem is
[19.03.2013 21:34:11] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: i’d say thats the only way to do it
[19.03.2013 23:43:05] narko: i don’t understand this
[19.03.2013 23:43:16] narko: how can cloudflare take 100gbps of udp and latency is not even increased by 1ms
[19.03.2013 23:47:05] Antitheist: http://www.apricot2013.net/__data/assets/pdf_file/0009/58878/tom-paseka_1361839564.pdf
[19.03.2013 23:47:19] Antitheist: CloudFlare has seen DNS reflection attacks hit 100Gbit traffic globally
[19.03.2013 23:47:23] Antitheist: they are used to it
[19.03.2013 23:47:49] narko: when they were hosting at rethem hosting
[19.03.2013 23:47:52] narko: I took down sprint
[19.03.2013 23:47:54] narko: i took down level3
[19.03.2013 23:47:56] narko: i took down cogent
[19.03.2013 23:48:06] narko: but cloudflare nothing!
[19.03.2013 23:48:26] narko: back in 2009 cloudflare went down with 10gbps
[19.03.2013 23:48:28] narko: all down..
[19.03.2013 23:49:34] narko: o i’m causing some dropped packets now
[19.03.2013 23:56:06] Cali: narko, was it you who DDoSed us like a year and half ago ?
[19.03.2013 23:56:14] narko: what network?
[19.03.2013 23:56:27] narko: or site
[19.03.2013 23:56:32] narko: sent it me in private chat and i can tell you
[20.03.2013 00:05:39] narko: http://i.imgur.com/M2mbNE0.png
[20.03.2013 00:05:44] narko: Spamhaus cloudflare current status
[20.03.2013 00:05:48] narko: with over 100Gbps of attack traffic
[20.03.2013 00:07:39] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: hmm does this affect other cloudflare customers, as in that case its bye bye spamhaus pretty soon
[20.03.2013 00:07:40] HRH Prinz Sven Olaf von CyberBunker-Kamphuis MP: lol
[20.03.2013 00:07:49] narko: i dont know
[20.03.2013 00:07:56] narko: i hope so because i cant keep such traffic up for a long time
[20.03.2013 00:08:02] narko: it’s probably closer to 200 than 100 Gbps
[20.03.2013 00:08:07] Cali: it will be harder than that I think.
[20.03.2013 00:09:35] Cali: no more icmp @cloudflare?
[20.03.2013 00:09:52] narko: 7 * * * Request timed out.

[20.03.2013 00:22:24] Antitheist: they list every IP/DNS that resolves stophaus in any way
[20.03.2013 00:22:31] narko: “Please update us when this client no longer utilises *any* part of our network so we can get back in touch with Spamhaus.”
[20.03.2013 00:22:35] Antitheist: we can change it every hour and block the entire internet lol
[20.03.2013 00:22:47] narko: They do not understand the word “THIS CLIENT HAS NOTHING TO DO WITH YOUR NETWORK”
[20.03.2013 00:22:53] narko: they treat it like it’s a request from law enforcement
[20.03.2013 00:22:56] narko: not some moron on a boat
[20.03.2013 00:47:00] Antitheist: so whats up with wordtothewise
[20.03.2013 00:47:02] narko: i only met you peoples on friday and never heard of most of you before then
[20.03.2013 00:47:29] eDataKing: lol, I just talk like I know everyone
[20.03.2013 00:47:48] eDataKing: It’s better than being secretive. I get nervous around quite people.
[20.03.2013 00:47:59] eDataKing: I think they are plotting on me lol
[20.03.2013 00:48:01] narko: I said too much already in this chat
[20.03.2013 00:48:04] narko: I’m expecting the raid soon
[20.03.2013 00:48:06] narko:

====================================================================

Narko has directed most of his botnet resources at Cloudflare now instead of Spamhaus, and the group is surprised to see Spamhaus go offline when it was hidden behind Cloudflare’s massive DDoS protection resources. Also, Yuri enlists the help of some other attackers to join in the assault.

[20.03.2013 01:00:32] Antitheist: This website is offline. No cached version is available
[20.03.2013 01:00:33] Antitheist: LOL
[20.03.2013 01:00:47] narko: lol
[20.03.2013 01:00:50] narko: not working for me either
[20.03.2013 01:00:56] Antitheist: narko you are the king
[20.03.2013 01:00:59] Antitheist: haha
[20.03.2013 01:01:00] narko: i didnt do anything
[20.03.2013 01:01:03] narko: i was just attacking cloudflare
[20.03.2013 01:01:16] Antitheist: well, thats not something they wanted to have
[20.03.2013 01:01:17] narko: see now its back up
[20.03.2013 01:01:36] Cali: It is offline here.
[20.03.2013 01:01:44] Antitheist: off…
[20.03.2013 01:01:45] narko: it went down again
[20.03.2013 01:01:51] narko: and back
[20.03.2013 01:03:11] Cali: yup
[20.03.2013 01:04:33] narko: let’s create some more records
[20.03.2013 01:04:36] narko: for DNS of stophaus
[20.03.2013 01:04:47] narko: dummy records, such as the IP of softlayer.com , etc
[20.03.2013 01:04:55] narko: it won’t affect the site because it will just try from the next server
[20.03.2013 01:05:01] narko: but they’re going to SBL some big sites
[20.03.2013 01:05:02] narko: lol
[20.03.2013 01:05:47] Antitheist: it will create more damage if we list MTAs
[20.03.2013 01:06:06] narko: ok let’s see
[20.03.2013 01:06:20] narko:
[20.03.2013 02:16:57] narko: Cloudflare changed the ips
[20.03.2013 02:16:59] narko: put only 2 IPs now
[20.03.2013 02:17:05] narko: will move attack to these IPs
[20.03.2013 02:18:24] narko: also I have a friend with a small botnet. I asked him to contribute
[20.03.2013 02:19:45] Yuri: i see.
[20.03.2013 02:19:59] Yuri: i asked some hackers to assist also
[20.03.2013 02:20:31] narko: my friend is in saudi arabia. he has bots in arab regions. will provide some diversity to the attack.
[20.03.2013 02:20:52] Yuri: spamhaus sbl site is the high end of iceberg
[20.03.2013 02:21:11] Yuri: did you try to put down spamhas relates sites?
[20.03.2013 02:21:23] narko: after spamhaus.org main site :))
[20.03.2013 02:21:55] narko: i am just getting very annoyed at this company now
[20.03.2013 02:22:08] narko: i just received 2 minutes ago “We are sorry to inform that your account has been terminated.” from my host.
[20.03.2013 02:22:14] narko: due to SBL
[20.03.2013 02:22:43] Yuri: on what host?
[20.03.2013 02:22:52] narko: EuroVPS.com
[20.03.2013 02:23:02] Yuri: write me pm what do you need
[20.03.2013 03:13:26] narko: lets host here
[20.03.2013 03:13:45] narko: i dont think they can even speak english. to read the abuse report from spamhaus.
[20.03.2013 03:14:03] Cali: lol
[20.03.2013 17:07:45] eDataKing: lol
[20.03.2013 17:27:58] narko: looks like one of the cloudflare dc is down
[20.03.2013 17:28:08] narko: previously my connection to spamhaus was to amsterdam
[20.03.2013 17:28:10] narko: now it’s to paris
[20.03.2013 17:28:53] simomchen: keeping ddos them , then , cloudflare will cick SH out
[20.03.2013 17:29:03] narko: i am adding more

Episode 2

• <iframe src="https://www.npr.org/player/embed/491342091/491367503" width="100%" height="290" frameborder="0" scrolling="no" title="NPR embedded audio player">

We paid $40 a barrel for our oil. A few weeks earlier, we would have paid $50. Who really sets the price of oil? And why is it so volatile?

We go from an oil well in Kansas to the Chicago Mercantile Exchange. And we track down an oil speculator.

Episode 3: How Fracking Changed The World


Episode 3

• <iframe src="https://www.npr.org/player/embed/491342091/491368222" width="100%" height="290" frameborder="0" scrolling="no" title="NPR embedded audio player">

We make a deal to sell our oil to a middleman.

And we meet the mild-mannered oil engineer who unlocked the secret of modern fracking, largely by accident. We ask him what he regrets, and how it feels to change the world.

Episode 4: How Oil Got Into Everything

Episode 4

• <iframe src="https://www.npr.org/player/embed/491342091/491368895" width="100%" height="290" frameborder="0" scrolling="no" title="NPR embedded audio player">

We follow our oil to a refinery, where it'll become gasoline, diesel, and fertilizer.

We meet the chemist who helped put oil into everything — and another chemist who's trying to get it out.

Episode 5: Imagine A World Without Oil

Episode 5

• <iframe src="https://www.npr.org/player/embed/491342091/491368965" width="100%" height="290" frameborder="0" scrolling="no" title="NPR embedded audio player">

We follow the Planet Money oil to the end of the line.

And we ask: What would the world be like if fossil fuels did not exist?

Oil, coal and natural gas are this incredible store of energy, just sitting there in the ground waiting for us to dig them up. Amazing boon to humanity! But also: Climate change!

Would a world without oil be better? Worse? Or just different?

Copyright 2016 NPR. To see more, visit NPR.
• CometWatch – late August round-up

This week's CometWatch entry was taken with Rosetta's NAVCAM on 17 August 2016, when the spacecraft was 13.9 km from the nucleus of Comet 67P/Churyumov-Gerasimenko.

Enhanced NAVCAM image of Comet 67P/C-G taken on 17 August 2016, 13.9 km from the nucleus. The scale is 1.2 m/pixel and the image measures 1.2 km across. Credits: ESA/Rosetta/NAVCAM – CC BY-SA IGO 3.0
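The quoted scale can be roughly sanity-checked from the spacecraft distance. This sketch assumes NAVCAM's approximately 5-degree square field of view and 1024 × 1024 pixel detector — figures not given in the post itself:

```python
import math

# Assumptions (not from the post): Rosetta's NAVCAM has an
# approximately 5-degree square field of view on a 1024 x 1024 detector.
FOV_DEG = 5.0
PIXELS = 1024

def navcam_scale_m_per_px(distance_km):
    """Approximate surface scale (m/pixel) at a given distance."""
    fov_m = 2 * distance_km * 1000 * math.tan(math.radians(FOV_DEG / 2))
    return fov_m / PIXELS

scale = navcam_scale_m_per_px(13.9)
print(round(scale, 2))                  # close to the quoted 1.2 m/pixel
print(round(scale * PIXELS / 1000, 2))  # image width in km, ~1.2 km
```

The result lands within a few percent of the caption's 1.2 m/pixel, which is about as close as a flat-field approximation can get for an irregular nucleus.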

This close-up view shows a portion of the Imhotep region, on the large comet lobe.

The top part of the image portrays the flat, smooth-covered portion of Imhotep, scattered with a variety of boulders of different sizes. Towards the top right is a cluster of three large boulders, including the 45-m sized Cheops, named after the Great Pyramid at Giza near Cairo in Egypt.

Around the comet's perihelion, Rosetta observed many spectacular changes on this portion of Imhotep (see blog post 'Comet surface changes before Rosetta’s eyes').

To have an idea of the surface changes, you can compare the new CometWatch with a number of images of the same region taken in the past months by Rosetta: from early images such as CometWatch 26 October 2014 and the close fly-by of 14 February 2015 to more recent ones, for example this OSIRIS wide-angle camera image taken on 25 May 2016.

In the lower part of today's image, a number of circular features are visible, many of which appear to be stacked on top of one another. These roundish features can also be seen, under a different perspective, in an OSIRIS narrow-angle camera image from 16 July 2016.

A view of the Imhotep region in the overall context of Comet 67P/C-G is provided in a recent OSIRIS wide-angle camera image, taken on 10 August 2016, and you can find more details about the various geological aspects of this region in the blog post 'Inside Imhotep'.

Meanwhile, the OSIRIS team have published a number of striking new views of the comet via their image of the day website.

An OSIRIS wide-angle camera image captured less than 7 km from the comet centre on 15 August depicts portions of both lobes of 67P/C-G. The steep cliffs of the small lobe's Hathor region are visible on the right, declining towards the neck, which is hidden from sight in this view by the dust-covered terrains of Ash.

OSIRIS wide-angle camera image taken on 15 August 2016, when Rosetta was 6.8 km from Comet 67P/C-G. The scale is 0.59 m/pixel and the image measures about 1.2 km across. Credits: ESA/Rosetta/MPS for OSIRIS Team MPS/UPD/LAM/IAA/SSO/INTA/UPM/DASP/IDA

The small, roundish feature visible in the lower part of this image was described as the only unambiguously identified impact crater on the comet's surface in a paper by N. Thomas et al. (part of the Science special issue: Catching a Comet, 2015). About 35 m in diameter, the crater appears to have been partially buried by the smooth material that covers the Ash region.

Another image, taken with the OSIRIS narrow-angle camera about 6 km from the comet centre on 18 August, depicts in extraordinary detail some of the boulders on a different portion of the Ash region.

OSIRIS narrow-angle camera image taken on 18 August 2016, when Rosetta was 6.1 km from Comet 67P/C-G. The scale is 0.11 m/pixel and the image measures about 225 m across. Credits: ESA/Rosetta/MPS for OSIRIS Team MPS/UPD/LAM/IAA/SSO/INTA/UPM/DASP/IDA

You can try and find these boulders in another recent OSIRIS image (second from top in this post), which portrays Ash in a broader context – including the above mentioned crater.

Another stunningly detailed OSIRIS narrow-angle camera image, taken only a couple of days ago, on 24 August, reveals the rough texture of the Khonsu region (left and lower part of the image) next to the slopes that separate it from Atum, a portion of which is visible in the top right corner. A recent NAVCAM image taken on 8 August 2016 presents a broader view on this region.

OSIRIS narrow-angle camera image taken on 24 August 2016, when Rosetta was 9.3 km from Comet 67P/C-G. The scale is 0.16 m/pixel and the image measures about 330 m across. Credits: ESA/Rosetta/MPS for OSIRIS Team MPS/UPD/LAM/IAA/SSO/INTA/UPM/DASP/IDA

This week's original NAVCAM image is provided below.

• Differential Privacy is Vulnerable to Correlated Data — Introducing Dependent Differential Privacy
[This post is joint work with Princeton graduate student Changchang Liu and IBM researcher Supriyo Chakraborty. See our paper for full details. — Prateek Mittal ] The tussle between data utility and data privacy Information sharing is important for realizing the vision of a data-driven customization of our environment. Data that were earlier locked up […]
• See Me in 2016

I have two more public appearances in 2016.

October 7-8, I’ll be at Ohio LinuxFest. They’ve asked me to speak on Introducing ZFS.

November 8, mug.org has invited me to talk about PAM. This is election day in the United States, so the talk is on how PAM is Un-American.

Sadly, family commitments prevent me from going to MeetBSD in Berkeley. Plus, there’s the whole “get on a plane” thing, which I try really really hard to avoid. I’d probably do it to see Berkeley, though. I’m pretty sure a pilgrimage to Berkeley is required once during my lifetime.

Other than that, you can catch me at a Semibug meeting.

• Astronomers Discover ‘Ghost Galaxy’ Made Of 99.99% Dark Matter

Dark matter is infuriating. It’s everywhere, outnumbering regular matter by 5 to 1. Yet it doesn’t reveal itself to our eyes or telescopes. Dark matter gives off no light and interacts with ordinary matter in just one way — through gravity. By measuring how much it tugs on stars and galaxies, we can estimate the mass of dark matter in a galaxy or cluster of galaxies, but that’s as far as it goes. If we look across the known universe, most ordinary matter is made up almost entirely of hydrogen and helium with a tiny fraction of all the other elements. Dark matter? No clue.

Nice explanation of how dark matter affects a star’s speed around the center of a galaxy

A leading theory says it could be exotic “weakly interacting massive particles” or WIMPs, another that it’s neutrinos. It might even be Macros, superdense chunks the size of apples to asteroids composed of chilled quarks, the wee bits that make up the familiar protons and neutrons inside atoms.

The Milky Way is embedded in a huge halo of invisible dark matter that extends well beyond the visible spiral disk. To account for the extra kick it gives stars in their revolution about the galactic center, astronomers estimate there may be up to one trillion solar masses of the stuff! When you come across those spectacular images of galactic pinwheels taken with the Hubble Space Telescope, picture them surrounded by invisible dark moats four times as big.

As if ordinary matter’s puny contribution to the universe’s energy account wasn’t pitiful enough, an international team of astronomers has discovered a massive galaxy named Dragonfly 44 that consists almost entirely of dark matter. 99.99% to be exact. Even though the galaxy is relatively nearby at 300 million light years from Earth, it’s been missed by astronomers for decades because it’s so incredibly faint.

Dragonfly 44 was discovered just last year with the Dragonfly Telephoto Array in the springtime constellation Coma Berenices. When astronomers took a closer look, they realized it had too few stars for the glue of gravity to keep it from flying apart.  Unless something was holding it together, it would quickly rip itself to pieces. Without enough visible matter to do the job, dark matter had to be at play.

To determine the amount of dark matter in Dragonfly 44, astronomers used the giant Keck II telescope in Hawaii with its 10-meter (393.7-inch) mirror to measure the velocities of stars around its core for 33.5 hours over a period of six nights so they could calculate the galaxy's mass.

“Motions of the stars tell you how much matter there is,” said Pieter van Dokkum, who headed up the international team studying the galaxy. “They don’t care what form the matter is, they just tell you that it’s there. In the Dragonfly galaxy stars move very fast. So there was a huge discrepancy: using Keck Observatory, we found many times more mass indicated by the motions of the stars than there is mass in the stars themselves.”
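Van Dokkum's point — that stellar motions weigh a galaxy regardless of what form the matter takes — comes down to a simple estimator: for material on roughly circular orbits, balancing gravity against centripetal acceleration gives an enclosed mass of M ≈ v²r/G. A minimal sketch (the Milky Way numbers below are illustrative, not from the article):

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # solar mass, kg
KPC = 3.086e19    # kiloparsec in metres

def enclosed_mass_msun(v_km_s, r_kpc):
    """Mass (in solar masses) enclosed within radius r for circular speed v.

    Balancing gravity against centripetal acceleration: M = v^2 r / G.
    """
    v = v_km_s * 1e3
    r = r_kpc * KPC
    return v**2 * r / G / M_SUN

# Illustrative: the Sun orbits the Galactic centre at ~220 km/s, ~8 kpc out,
# implying roughly 1e11 solar masses interior to the Sun's orbit --
# more than the visible stars and gas there account for.
print(f"{enclosed_mass_msun(220, 8):.2e}")
```

The same logic applied to Dragonfly 44's stellar velocities is what revealed the discrepancy: the stars move far too fast for the visible mass alone.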

The mass of the galaxy is estimated to be a trillion times the mass of the Sun – very similar to the mass of our own Milky Way galaxy. However, only one hundredth of one percent of that is in the form of stars and “normal” matter; the other 99.99% is in the form of dark matter. The Milky Way has more than a hundred times more stars than Dragonfly 44.

Finding a galaxy with the mass of the Milky Way that is almost entirely dark was unexpected. “We have no idea how galaxies like Dragonfly 44 could have formed,” Roberto Abraham, a co-author of the study, said. “The Gemini data show that a relatively large fraction of the stars is in the form of very compact clusters, and that is probably an important clue. But at the moment we’re just guessing.”

“This has big implications for the study of dark matter,” van Dokkum said. “It helps to have objects that are almost entirely made of dark matter so we don’t get confused by stars and all the other things that galaxies have.”

Before the discovery of Dragonfly 44, astronomers could only study dark matter up close in tiny galaxies, which tend to be richer in the stuff than larger ones. Dwarf galaxies are stripped of stars and gas (normal matter) through interactions with other, larger galaxies or when material is ejected into space in supernovae, leaving behind greater concentrations of the dark stuff. Finding a massive dark galaxy closer to home gives astronomers hope that there are more out there.

“The race is on to find massive dark galaxies that are even closer to us than Dragonfly 44, so we can look for feeble signals that may reveal a dark matter particle,” said van Dokkum.

• The NSA Is Hoarding Vulnerabilities

The National Security Agency is lying to us. We know that because data stolen from an NSA server was dumped on the Internet. The agency is hoarding information about security vulnerabilities in the products you use, because it wants to use it to hack others' computers. Those vulnerabilities aren't being reported, and aren't getting fixed, making your computers and networks unsafe.

On August 13, a group calling itself the Shadow Brokers released 300 megabytes of NSA cyberweapon code on the Internet. Near as we experts can tell, the NSA network itself wasn't hacked; what probably happened was that a "staging server" for NSA cyberweapons -- that is, a server the NSA was making use of to mask its surveillance activities -- was hacked in 2013.

The NSA apparently resecured itself during what were, coincidentally, the early weeks of the Snowden document release. The people behind the leak used casual hacker lingo, and made a weird, implausible proposal involving holding a bitcoin auction for the rest of the data: "!!! Attention government sponsors of cyber warfare and those who profit from it !!!! How much you pay for enemies cyber weapons?"

Still, most people believe the hack was the work of the Russian government and the data release some sort of political message. Perhaps it was a warning that if the US government exposes the Russians as being behind the hack of the Democratic National Committee -- or other high-profile data breaches -- the Russians will expose NSA exploits in turn.

But what I want to talk about is the data. The sophisticated cyberweapons in the data dump include vulnerabilities and "exploit code" that can be deployed against common Internet security systems. Products targeted include those made by Cisco, Fortinet, TOPSEC, Watchguard, and Juniper -- systems that are used by both private and government organizations around the world. Some of these vulnerabilities have been independently discovered and fixed since 2013, and some had remained unknown until now.

All of them are examples of the NSA -- despite what it and other representatives of the US government say -- prioritizing its ability to conduct surveillance over our security. Here's one example. Security researcher Mustafa al-Bassam found an attack tool codenamed BENIGNCERTAIN that tricks certain Cisco firewalls into exposing some of their memory, including their authentication passwords. Those passwords can then be used to decrypt virtual private network, or VPN, traffic, completely bypassing the firewalls' security. Cisco hasn't sold these firewalls since 2009, but they're still in use today.

Vulnerabilities like that one could have, and should have, been fixed years ago. And they would have been, if the NSA had made good on its word to alert American companies and organizations when it had identified security holes.

Over the past few years, different parts of the US government have repeatedly assured us that the NSA does not hoard "zero days" -- the term used by security experts for vulnerabilities unknown to software vendors. After we learned from the Snowden documents that the NSA purchases zero-day vulnerabilities from cyberweapons arms manufacturers, the Obama administration announced, in early 2014, that the NSA must disclose flaws in common software so they can be patched (unless there is "a clear national security or law enforcement" use).

Later that year, National Security Council cybersecurity coordinator and special adviser to the president on cybersecurity issues Michael Daniel insisted that the US doesn't stockpile zero-days (except for the same narrow exemption). An official statement from the White House in 2014 said the same thing.

The Shadow Brokers data shows this is not true. The NSA hoards vulnerabilities.

Hoarding zero-day vulnerabilities is a bad idea. It means that we're all less secure. When Edward Snowden exposed many of the NSA's surveillance programs, there was considerable discussion about what the agency does with vulnerabilities in common software products that it finds. Inside the US government, the system of figuring out what to do with individual vulnerabilities is called the Vulnerabilities Equities Process (VEP). It's an inter-agency process, and it's complicated.

There is a fundamental tension between attack and defense. The NSA can keep the vulnerability secret and use it to attack other networks. In such a case, we are all at risk of someone else finding and using the same vulnerability. Alternatively, the NSA can disclose the vulnerability to the product vendor and see it gets fixed. In this case, we are all secure against whoever might be using the vulnerability, but the NSA can't use it to attack other systems.

There are probably some overly pedantic word games going on. Last year, the NSA said that it discloses 91 percent of the vulnerabilities it finds. Leaving aside the question of whether that remaining 9 percent represents 1, 10, or 1,000 vulnerabilities, there's the bigger question of what qualifies in the NSA's eyes as a "vulnerability."

Not all vulnerabilities can be turned into exploit code. The NSA loses no attack capabilities by disclosing the vulnerabilities it can't use, and doing so gets its numbers up; it's good PR. The vulnerabilities we care about are the ones in the Shadow Brokers data dump. We care about them because those are the ones whose existence leaves us all vulnerable.

Because everyone uses the same software, hardware, and networking protocols, there is no way to simultaneously secure our systems while attacking their systems -- whoever "they" are. Either everyone is more secure, or everyone is more vulnerable.

Pretty much uniformly, security experts believe we ought to disclose and fix vulnerabilities. And the NSA continues to say things that appear to reflect that view, too. Recently, the NSA told everyone that it doesn't rely on zero days -- very much, anyway.

Earlier this year at a security conference, Rob Joyce, the head of the NSA's Tailored Access Operations (TAO) organization -- basically the country's chief hacker -- gave a rare public talk, in which he said that credential stealing is a more fruitful method of attack than are zero days: "A lot of people think that nation states are running their operations on zero days, but it's not that common. For big corporate networks, persistence and focus will get you in without a zero day; there are so many more vectors that are easier, less risky, and more productive."

The distinction he's referring to is the one between exploiting a technical hole in software and waiting for a human being to, say, get sloppy with a password.

A phrase you often hear in any discussion of the Vulnerabilities Equities Process is NOBUS, which stands for "nobody but us." Basically, when the NSA finds a vulnerability, it tries to figure out if it is unique in its ability to find it, or whether someone else could find it, too. If it believes no one else will find the problem, it may decline to make it public. It's an evaluation prone to both hubris and optimism, and many security experts have cast doubt on the very notion that there is some unique American ability to conduct vulnerability research.

The vulnerabilities in the Shadow Brokers data dump are definitely not NOBUS-level. They are run-of-the-mill vulnerabilities that anyone -- another government, cybercriminals, amateur hackers -- could discover, as evidenced by the fact that many of them were discovered between 2013, when the data was stolen, and this summer, when it was published. They are vulnerabilities in common systems used by people and companies all over the world.

So what are all these vulnerabilities doing in a secret stash of NSA code that was stolen in 2013? Assuming the Russians were the ones who did the stealing, how many US companies did they hack with these vulnerabilities? This is what the Vulnerabilities Equities Process is designed to prevent, and it has clearly failed.

If there are any vulnerabilities that -- according to the standards established by the White House and the NSA -- should have been disclosed and fixed, it's these. That they have not been during the three-plus years that the NSA knew about and exploited them -- despite Joyce's insistence that they're not very important -- demonstrates that the Vulnerabilities Equities Process is badly broken.

We need to fix this. This is exactly the sort of thing a congressional investigation is for. This whole process needs a lot more transparency, oversight, and accountability. It needs guiding principles that prioritize security over surveillance. A good place to start is the set of recommendations by Ari Schwartz and Rob Knake in their report: these include a clearly defined and more public process, more oversight by Congress and other independent bodies, and a strong bias toward fixing vulnerabilities instead of exploiting them.

And as long as I'm dreaming, we really need to separate our nation's intelligence-gathering mission from our computer security mission: we should break up the NSA. The agency's mission should be limited to nation state espionage. Individual investigation should be part of the FBI, cyberwar capabilities should be within US Cyber Command, and critical infrastructure defense should be part of DHS's mission.

I doubt we're going to see any congressional investigations this year, but we're going to have to figure this out eventually. In my 2014 book Data and Goliath, I write that "no matter what cybercriminals do, no matter what other countries do, we in the US need to err on the side of security by fixing almost all the vulnerabilities we find..." Our nation's cybersecurity is just too important to let the NSA sacrifice it in order to gain a fleeting advantage over a foreign adversary.

This essay previously appeared on Vox.com.

• Closest Star has Potentially Habitable Planet

The star closest to the Sun has a planet similar to the Earth.

• Linear Regression
• 2015-16 OCSEC report: News from the salvage operation
The 2015-2016 annual report of the Office of the CSE Commissioner (OCSEC), CSE's watchdog agency, was tabled in parliament on July 20th, whereupon it immediately sank without a trace. To the best of my knowledge, not a single news article has been published touching on any aspect of the report. [Until now.] (There was at least one commentary, however.) Not even Lloyd's List reported on the document when it went down.

It is perhaps not surprising that the report caused not a ripple. Last year's effort, tabled just six months earlier, was accompanied by a first-of-its-kind declaration that CSE had violated Canadian law. This year's report has no comparable James Cameron-class shocker: "This past year, all of the CSE activities reviewed complied with the law" (page 16).

Still, there's plenty of Glomar-worthy material in the wreck if you're willing to undertake the deep dive to recover it.

Join me as we watch the watchers' watchers and try to salvage some click-worthy items from this year's OCSEC report.

I for one would click on a headline like that.

According to this year's report, CSE's foreign intelligence, or Mandate A, program used or retained as potentially useful 342 "private communications"—communications with at least one end in Canada—that were intercepted by CSE under ministerial authorization during the 2014-2015 authorization period (page 31).

As I discussed last year, this number is only the tip of the much larger iceberg that comprises Canadian communications processed by CSE, but it is an important statistic nonetheless. And this year what it shows is a dramatic increase in the number of private communications being used or retained by the Mandate A program.

Last year, the Commissioner reported that only 16 PCs had been used or retained at the end of the 2013-2014 authorization period, and this year he adjusted that figure without explanation to just 13 PCs. Maybe three of the retained PCs were subsequently deleted, maybe there was a change in the counting rules, maybe there is some other explanation that the Commissioner was unable to provide, or maybe I'm just missing something.

In any case, 342 is about 26 times as large as 13.

And the change in the rate of PC use or retention was even greater, as the 2014-2015 authorization period was abnormally short, only seven months long. (This is discussed further below.) The rate at which the CSE Mandate A program used or retained Canadian communications that had been intercepted by CSE was 45 times as high in the 2014-15 authorization period as it was in the 2013-2014 period. That's right, forty-five times.
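Those multipliers are easy to verify. Here's a quick back-of-the-envelope sketch using only the figures cited above (342 and 13 PCs, over seven- and twelve-month periods respectively):

```python
# Ratios implied by the OCSEC figures: 342 PCs used or retained in the
# seven-month 2014-15 period vs. 13 PCs in the twelve-month 2013-14 period.
pcs_2014_15, months_2014_15 = 342, 7
pcs_2013_14, months_2013_14 = 13, 12

raw_ratio = pcs_2014_15 / pcs_2013_14                    # headline comparison
rate_ratio = (pcs_2014_15 / months_2014_15) / (pcs_2013_14 / months_2013_14)

print(round(raw_ratio, 1))   # 26.3 -- the "26 times" figure
print(round(rate_ratio, 1))  # 45.1 -- the "forty-five times" figure
```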

Now, you might think the Commissioner would offer an explanation for such a dramatic change in one of the few statistical measures that OCSEC reports provide, and—mirabile dictu—he does. In a manner of speaking.
[The increase] was a consequence of the technical characteristics of a particular communications technology and of the manner in which private communications are counted. (page 33)
Now all we need is an explanation of the explanation.

My guess, and it's just a guess, is that this refers to something like SMS texting or a Facebook chat, in which each part of an extended conversation might be counted as a separate message.

If this is correct, then the dramatic rise in the number of private communications used or retained in 2014-2015 may have resulted from a relatively small number of conversations between just a few individuals. The overall number of Canadians whose communications were used or retained may not have increased at all.

An explanation along these lines might in turn explain the striking lack of concern with which the Commissioner greets what at first glance would appear to be a huge jump in the monitoring of Canadians.

But all this is just guesswork. Those of a less Pollyannish bent might make other guesses.

Nowhere does the Commissioner explicitly say there's nothing to be concerned about, and if that's how he actually feels about it, it would have been helpful if he had let his readers know.

This simple trick cut Ministerial Authorization periods by 42%

Another fact that surfaces only when you raise and reassemble portions of the text is that the five Ministerial Authorizations (MAs) that enable CSE to lawfully intercept private communications, which normally run for one full year apiece and which in recent years have extended from December 1st of one year until November 30th of the following, were cut short last year. Instead of lasting twelve full months, they were all replaced after seven, on June 30th, 2015 (see pages 30 and 34).

No explanation is provided for this change.

It is conceivable that Jason Kenney, who became Defence Minister on February 9th of that year, had his own ideas about the MA regime and didn't want to wait 10 months to introduce them, especially with an election looming. Another possibility is they were rewritten to accommodate new activities authorized by Bill C-51, which received Royal Assent on June 18th.

What the actual explanation may be I have no idea.

Our allies promised not to target Canadians and you'll never guess what happened next

We are often told that the Five Eyes partners do not target one another's citizens. Compared to the way other countries' citizens are treated, this appears to be largely true. But exceptions certainly occur.

In recent years, the CSE Commissioner has acknowledged that our Second Party partners do sometimes target Canadians, in "exceptional circumstances". This year he put it this way (page 19):
The cooperative agreements that exist between the five eyes partners include a commitment to respect the privacy of each nation’s citizens and to act in a manner consistent with each nation’s policies relating to privacy. Nevertheless, it is recognized that each of the partners is an agency of a sovereign nation that may, in exceptional circumstances, derogate from the agreements if it is judged necessary for their respective national interests. In such exceptional circumstances, one of CSE’s partners may acquire and report on information about a Canadian or a person in Canada.
So, OK, fair enough. Exceptional circumstances. Ticking nuclear bombs, national emergencies. Who could really expect otherwise?

But how widely do those national interests extend? I recall speculating a few years ago that
If, for example, the U.S. were to decide that its national interests required it to check into the possibility that would-be terrorists are plotting against the U.S. from inside Canada, we might very well expect them to go ahead and do exactly that. (But of course what are the chances that they would decide that?)

The Commissioner goes on to say:
For example.

Let's be clear here. I have no problem with the monitoring of people who are engaged in terrorist activities (assuming due process is followed), but according to CSIS there are some 180 individuals "with a nexus to Canada" who are engaged in terrorist activity abroad.

This is starting to sound a lot more routine than exceptional.

And there's more:
When a partner does undertake an activity relating to a Canadian, the partner may acquire information that, in addition to meeting its own national security requirements, relates to the security of Canada and, as such, may be provided to the Canadian Security Intelligence Service (CSIS) in support of its mandate to investigate and advise government on threats to the security of Canada.

Prior to February 2015, the process to provide this kind of reporting to CSIS was manual and did not involve CSE. To help address the evolving terrorist threat and the increase in the number of foreign fighters, CSIS required a more timely mechanism to securely exchange information. To this end, CSIS requested CSE assistance under part (c) of CSE’s mandate (paragraph 273.64(1)(c) of the National Defence Act (NDA)), to establish a mechanism for CSIS to receive and handle these reports via CSE’s established channels. ...

The Commissioner found that CSE’s activities to transmit these reports to CSIS were conducted in accordance with the law and with ministerial direction relating to the protection of the privacy of Canadians.
So we've gone from "naw, doesn't happen" to "oh, well, sure, but only in exceptional circumstances" to "pretty much all the time" to "we had to formalize the exchange of all this stuff to ensure its regular and timely delivery".

But terrorists, right?

Or, maybe, as former Solicitor General Wayne Easter said in 2013, “terrorism, crime or sex offenders.”

That crime bit covers a pretty wide range of exceptions.

It's worth noting that all of this is separate from Canada's own ability to monitor such persons, based on judicial warrants granted to CSIS or the RCMP, which, aside from those agencies' own capabilities, includes CSE's worldwide intercept capabilities, CSE's ability to use Second Party intercept facilities by supplying Canadian "identifiers" to those systems, and the government's ability, acting through CSE, to request that the Second Parties themselves monitor specific Canadian targets using capabilities that may not be available for direct Canadian use.

Canada's ability to enlist Second Party systems suffered a setback in November 2013 when the process for Domestic Intercept of Foreign Telecommunications and Search (DIFTS) warrants blew up.

But everything appears to be back on track in that regard. The Commissioner is currently planning to conduct "a follow-up review of CSE assistance to the Canadian Security Intelligence Service (CSIS)... relating to the interception of the telecommunications of specified Canadians located outside Canada (formerly called Domestic Intercept of Foreign Telecommunications and Search warrants)." (page 52)

This little-known legal case caused CSE to suspend more metadata activities

OCSEC continues to work its way through a sweeping, multi-year review of CSE's metadata activities. This year the Commissioner finished his examination of "specific foreign signals intelligence metadata activities that were set aside during the first part of the review in order to fully investigate incidents relating to CSE’s failure to minimize Canadian identity information in certain metadata it shared with its second party partners" (i.e., the omnishambles that earned CSE its first declaration of legal non-compliance and led to the ongoing suspension of a wide range of metadata sharing with the Second Parties).

One set of activities examined by the Commissioner (see page 24), conducted by CSE's Office of Counter Terrorism, sparked a number of concerns: that "guidance on a specific metadata activity that involves Canadian identity information remains vague and should be clarified", that "a small number of the activities raised questions about CSE authorities", and that "the Commissioner noted inconsistencies in CSE documentation and record-keeping practices".

No recommendations resulted from these "issues and irregularities", however,
because, subsequent to the period under review, CSE suspended indefinitely these particular metadata analysis activities in response to case law developments (Canadian Security Intelligence Service Act (Re), 2012 FC 1437, relating to the application of “directed at”). It is positive to observe that CSE followed and modified its practices to address related jurisprudence. Prior to its decision to suspend these activities, CSE did not meet its commitment to address a recommendation the Commissioner made in a February 2014 review of the activities of the Office of Counter Terrorism (OCT) to amend relevant policy to reflect current practices and to enhance record keeping. However, this can be explained by the short period of time between the OCT review and the suspension of the activities. As long as the suspension remains in effect, the Commissioner does not expect CSE to implement the recommendation.
A couple of things are worth noting here. As the Commissioner says, it is certainly good to see CSE modifying its practices to respond to relevant jurisprudence.

It is less good to see that the suspension apparently took place sometime after February 2014, i.e., at least 15 months after Madam Justice Mactavish's ruling. Does the Commissioner have a view on the legality of CSE's conduct during the period between December 2012 and the suspension of the activities? Are we back to this model?

Also, how is it that these activities—possibly contact chaining involving Canadian identifiers—were the subject of an OCSEC recommendation back in February 2014, but that recommendation was simply to "amend relevant policy to reflect current practices and to enhance record keeping" and not to suspend the activities in response to the December 2012 ruling? Doesn't OCSEC follow and respond to related jurisprudence as well?

In last year's report, the Commissioner commented that "the Canadian legal landscape has... changed since my office last conducted an in-depth review of CSE’s collection and use of metadata". The Supreme Court's Wakeling and Spencer cases were specifically cited in this regard, but the Commissioner gave no indication of what implications, if any, he believed those and other rulings might have for CSE's activities.

The topic of the Mactavish ruling is worth a closer look. CSIS wanted to monitor the communications of one or more Canadian individuals or entities during an operation to collect foreign intelligence in Canada in accordance with s.16 of the CSIS Act. The agency argued that the Canadian communications could be directly (not just incidentally) collected despite an explicit ban on directing s.16 operations at Canadians since the operation would in fact be directed at gathering intelligence about a foreign target. The court rejected CSIS's view.

What makes this ruling especially relevant for CSE is that CSE's mandate, spelled out in the National Defence Act, dictates that the agency's foreign intelligence and cyber defence activities "shall not be directed at Canadians or any person in Canada"; CSE is permitted to intercept private communications in the course of foreign intelligence collection if a suitable Ministerial Authorization is in place, but such operations must be "directed at foreign entities located outside Canada". The meaning of the phrase "directed at" is thus fundamental to the relationship between CSE and Canadians.

That CSE suspended certain activities of the Office of Counter Terrorism in the wake of the Mactavish ruling suggests that the agency may have been directing some of its foreign intelligence activities a little too directly at its compatriots.

On a separate issue, the Commissioner also reported (pages 24-25) that he had recommended that CSE "issue written guidance to formalize and strengthen existing practices for addressing potential privacy concerns with second party partners" and, further, that the agency had subsequently "issued guidance to operational employees to address cases where the privacy of Canadians may be at risk."

One hopes this guidance is more than just "transfer the information to CSIS forthwith."

This named Canadian could be you

When a CSE report mentions a Canadian individual, corporation, or other organization, specific identifying information (name, phone number, etc.) is normally "suppressed" and replaced with a generic reference such as "a named Canadian". SIGINT clients reading the report can subsequently request the suppressed information from CSE, and if the department or agency has a suitable mandate and operational justification, CSE will provide it (without any warrant, as far as I can tell).

This year for the first time the Commissioner reported the total number of requests made by Government of Canada clients for Canadian identity information over the course of one year (1 July 2014–30 June 2015). That number was 1,126 (page 40), or about three requests per day, a total that may or may not be down slightly from the previous year.

How many of those requests were approved was not reported. CSE does sometimes deny requests for identity information, but no data has been provided as to how often this occurs; my impression is that the percentage approved is very high.

In some ways, the number of Canadian identity requests made may be more revealing of the degree to which Canadians are monitored in the course of CSE's operations than the 342 PCs number noted above. But it is far from an ideal measure. It shows only the number of requests that were made, not the total number of suppressed Canadian identities that appeared in CSE reporting during the year. (That number might be in the tens of thousands if identity requests are made in something like 10% of cases; if identity requests are made in more like 80 or 90 percent of cases, on the other hand, the practice of suppressing identities would seem to be largely a sham.) The figure also excludes both those Canadians who appear in Second Party reports made available to Canadian government clients through CSE and those who appear in intercepts or other information provided by CSE to CSIS and the RCMP under CSE's Mandate C.
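To see how sensitive that "tens of thousands" guess is to the assumed request rate, here is a rough sketch. Only the 1,126 request count comes from the report; the rates are hypothetical:

```python
# Implied number of suppressed Canadian identities in CSE reporting, for
# various assumed rates at which clients request un-suppression.
# Only the 1,126 request count comes from the OCSEC report; the rates
# below are hypothetical illustrations.
requests = 1126
for rate in (0.10, 0.50, 0.90):
    implied_identities = round(requests / rate)
    print(f"{rate:.0%} request rate -> ~{implied_identities} suppressed identities")
```

At a 10% request rate, the implied population of suppressed identities is on the order of eleven thousand; at 90%, barely more than the number of requests itself.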

It also needs to be noted, as the report itself states, that the number of identity requests is not the same as the number of individual identities requested:
the number of requests represent[s] the number of instances that institutions or partners submitted separate requests for disclosure of identity information suppressed in reports, providing a unique operational justification in each case. One request may involve multiple Canadian identities, and one Canadian identity may be disclosed multiple times to different institutions or partners.
In addition to reporting the number of identity requests by Canadian clients, the OCSEC report also provided for the first time the number of Canadian identity requests made to CSE by Canada's Five Eyes partners (111) and the number made for "disclosure to non-five eyes entities" (six: five made by a government of Canada client and one—which was denied—made by a Five Eyes partner). The approval rate for the 111 partner requests was not provided, but last year's report, which did not provide a request number, stated that partner requests "resulted in roughly an equal number of denials and disclosures of Canadian identity information".

Data recently released in the U.S. about NSA collection under the FAA Section 702 program (just one part of overall NSA collection) provides a potentially useful point of comparison: "In 2015, NSA disseminated 4,290 FAA Section 702 intelligence reports that included U.S. person information. Of those 4,290 reports, the U.S. person information was masked [equivalent to minimized] in 3,168 reports and unmasked in 1,122 reports." Some of the reports with masked identities probably contained more than one masked identity, so the total number of masked identities was probably closer to 5,000, or maybe even 10,000. (The same individual might turn up in more than one report, however, so the total number of separate identities was probably considerably lower than that.)

The U.S. data also reported that "654 U.S. person identities" were unmasked in response to requests related to these reports. This suggests that something like ten percent of masked identities were ultimately unmasked in U.S. reporting, at least with respect to the 702 program.
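The ten-percent estimate falls out of the published 702 numbers under a simple assumption about identities per report (the 1.5 multiplier below is illustrative, not from the U.S. data):

```python
# NSA FAA 702 figures for 2015, from the released U.S. transparency data.
reports_masked = 3168        # reports containing masked U.S. person information
unmasked_on_request = 654    # U.S. person identities later unmasked

# Assume masked reports averaged ~1.5 masked identities apiece (an
# illustrative assumption, not a published figure).
masked_identities_est = reports_masked * 1.5   # ~4,750, i.e. "closer to 5,000"
print(round(unmasked_on_request / masked_identities_est, 2))  # ~0.14
```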

If the NSA can publish the number of masked U.S. identities that are later revealed in response to its reporting, albeit for just one program, I see no reason why CSE cannot release comparable information for the number of minimized Canadian identities ultimately revealed. Similarly, although the U.S. data doesn't give the exact percentage of masked identities that are ultimately revealed, I see no reason why CSE couldn't release that information, and the percentage of requests that are approved, as well.

Such information would reveal a great deal to the public about the effectiveness of the measures that exist to protect their privacy while providing little or nothing of use to SIGINT targets seeking to evade CSE monitoring. What is CSE hiding, and from whom is it hiding it, when it won't show us this data?

The CSE Commissioner should insist on reporting this kind of information. And if CSE refuses to allow it, the Commissioner should indicate that parts of his report have been censored. (And, yes, in this respect the power of classification/declassification is indeed a censorship power.)

At least, that's my view.

There's more stuff worth examining in the Commissioner's 2015-2016 report, but that's it for this blog post. I'll report on my follow-up expedition in a future post.

Update 24 August 2016:

The Commissioner's report gets some news coverage:

Ian MacLeod, "Federal spies suddenly intercepting 26 times more Canadian phone calls and communications," National Post, 24 August 2016.

Update 25 August 2016:

And a very similar article:

Rachel Browne, "Canada’s Spy Agency Now Intercepting Private Messages 26 Times More Than Previously," Vice News, 25 August 2016.

...And another one:

Ian Allen, "Did domestic snooping by Canadian spy agency increase 26-fold in a year?" IntelNews, 25 August 2016.

Gotta love this line: "According to the CSE commissioner’s report for 2015, which was released in July, but was only recently made available to the media..." So that's what happened!

• How to Tell Someone How You Are "Doing"

The idea of the bowel cannon is one that predates the comic. It started out as a piece of stand-up material I could never get to work.

It’s amazing how many of my memories of my stand-up career start with the phrase “There was this bit I could never get to work ...”

The original idea was that a typical meal at a steakhouse—a bunch of bread followed by a big piece of meat followed by a cup of coffee—was analogous to the wadding, cannonball and explosive charge they used in breech-loading cannons.

Back then I described the results in greater, or at least more graphic detail. The bit did not contain the word “analogous.” Drunk comedy club patrons in smaller towns do not take kindly to words like “analogous.”

You can comment on this comic on Facebook.

• Priorities in security
I read this tweet a couple of weeks ago:

and it got me thinking. Security research is often derided as unnecessary stunt hacking, proving insecurity in things that are sufficiently niche or in ways that involve sufficient effort that the realistic probability of any individual being targeted is near zero. Fixing these issues is basically defending you against nation states (who (a) probably don't care, and (b) will probably just find some other way) and, uh, security researchers (who (a) probably don't care, and (b) see (a)).

Unfortunately, this may be insufficient. As basically anyone who's spent any time anywhere near the security industry will testify, many security researchers are not the nicest people. Some of them will end up as abusive partners, and they'll have both the ability and desire to keep track of their partners and ex-partners. As designers and implementers, we owe it to these people to make software as secure as we can rather than assuming that a certain level of adversary is unstoppable. "Can a state-level actor break this" may be something we can legitimately write off. "Can a security expert continue reading their ex-partner's email" shouldn't be.

August 25, 2016

• Stargazing Simplified: The following is a brief excerpt from a Sky & Telescope Magazine article by James Mullaney. Something for contemplation!

Stargazing Simplified!
James Mullaney, F.R.A.S.

Stargazing Simplified! Of the more than 1,000 articles on observing I’ve published over the past 60 years, this is the title of the one I consider to be the most important of them all. It appeared in the April 2014 issue of Sky & Telescope magazine. The opening paragraph appears below. If this speaks to you and you have access to back issues of the magazine, hopefully you will take the time to check out the entire article! –Jim Mullaney

Today’s hectic lifestyle, obsession with computers and high-tech electronic gadgets and mantra that “bigger is better” (in TV screens at least) has carried over into amateur astronomy. Witness the Messier and other observing “marathons,” computer-controlled remote CCD-imaging telescopes, and observatory-sized trailer-mounted Dobsonian reflectors. Casual, relaxing stargazing seems to be largely a thing of the past — something practiced by only a few of us purists. To me, stargazing should provide a relaxing interlude from the pressures and worries of everyday living rather than contribute to them.
——————————————————————————————————————————————
This little glass has yet another virtue over big ones: it has a relatively limited number of targets! Now most readers probably would not consider this an advantage — but it is! I’m not tempted to find large numbers of objects when I go out — eliminating the malady I refer to as “saturated stargazing.” Michael Covington tells us that “All galaxies deserve to be stared at for a full 15 minutes.” I would extend this advice to every celestial object. I prefer to view at most a dozen of the sky’s wonders (including the Moon and planets) during the course of an evening in a relaxed and contemplative manner. To me, glancing at an object, then rushing on to another and another is like reading the Cliff’s Notes of the world’s great novels.   James Mullaney

• Big Number Thresholds and Metrics Deep Dive

Big number thresholds let you set a visual indicator on a big number chart. We offer critical and warning thresholds, each customizable to trigger when a value rises above or falls below a numerical limit. Chart thresholds update in real time and are a perfect addition to any production spaces you have.

Short of ideas for big number thresholds? Here are a few examples.

Check out a live demo with our Bay Area DMV Wait Time dashboard.

Chart looking awry? Use Metrics Deep Dive view for all chart types.

You’ve set up a big number threshold, and things are all :-( ? Need to learn more? This is where our newly launched Metrics Deep Dive comes to the rescue. Metrics Deep Dive provides an easier way to view all metrics sources, without having to jump back into the metrics list.

This feature is available for both big number and line charts. To Deep Dive, double-click a chart to bring up the Explore View, then click on any metric name for a deep-dive breakout view of that metric.

Thank you for checking out these new features. If you have any comments or suggestions, please send them in.

• Animals Rescued From the 'Worst Zoo in the World' in Gaza (22 photos)

Four Paws, an international animal welfare group, has just completed the removal of the surviving 15 animals from the Khan Younis Zoo—dubbed the “worst zoo in the world”—in the Gaza Strip.  Years of war, few visitors, and an Israeli blockade made it increasingly difficult to keep the zoo open and the remaining animals healthy—their numbers dwindling from 44 to 15. The zoo’s owner asked for outside help, and an evacuation was organized. A tiger, several monkeys, emus, a porcupine, and other creatures were given veterinary attention and shipped to facilities in Jordan, South Africa, and elsewhere. Khan Younis Zoo, once reduced to displaying the mummified carcasses of animals that had died in its care, has now shut down.

• BSDNow 156: The Fresh BSD Experience

It’s been a very slow news week, but at least there’s a new BSDNow episode: The Fresh BSD Experience.  There’s an interview with the FreeBSD Foundation intern, Drew Gurkowski, and a lot of ARM news.

• Confusing Security Risks with Moral Judgments

Interesting research that shows we exaggerate the risks of something when we find it morally objectionable.

From an article about and interview with the researchers:

To get at this question experimentally, Thomas and her collaborators created a series of vignettes in which a parent left a child unattended for some period of time, and participants indicated the risk of harm to the child during that period. For example, in one vignette, a 10-month-old was left alone for 15 minutes, asleep in the car in a cool, underground parking garage. In another vignette, an 8-year-old was left for an hour at a Starbucks, one block away from her parent's location.

To experimentally manipulate participants' moral attitude toward the parent, the experimenters varied the reason the child was left unattended across a set of six experiments with over 1,300 online participants. In some cases, the child was left alone unintentionally (for example, in one case, a mother is hit by a car and knocked unconscious after buckling her child into her car seat, thereby leaving the child unattended in the car seat). In other cases, the child was left unattended so the parent could go to work, do some volunteering, relax or meet a lover.

Not surprisingly, the parent's reason for leaving a child unattended affected participants' judgments of whether the parent had done something immoral: Ratings were over 3 on a 10-point scale even when the child was left unattended unintentionally, but they skyrocketed to nearly 8 when the parent left to meet a lover. Ratings for the other cases fell in between.

The more surprising result was that perceptions of risk followed precisely the same pattern. Although the details of the cases were otherwise the same -- that is, the age of the child, the duration and location of the unattended period, and so on -- participants thought children were in significantly greater danger when the parent left to meet a lover than when the child was left alone unintentionally. The ratings for the other cases, once again, fell in between. In other words, participants' factual judgments of how much danger the child was in while the parent was away varied according to the extent of their moral outrage concerning the parent's reason for leaving.

• Attack of the week: 64-bit ciphers in TLS
A few months ago it was starting to seem like you couldn't go a week without a new attack on TLS. In that context, this summer has been a blessed relief. Sadly, it looks like our vacation is over, and it's time to go back to school.

Today brings the news that Karthikeyan Bhargavan and Gaëtan Leurent out of INRIA have a new paper that demonstrates a practical attack on legacy ciphersuites in TLS (it's called "Sweet32", website here). What they show is that ciphersuites built on ciphers with 64-bit blocks -- notably 3DES -- are vulnerable to plaintext recovery attacks that work even if the attacker cannot recover the encryption key.

While the principles behind this attack are well known, there's always a difference between attacks in principle and attacks in practice. What this paper shows is that we really need to start paying attention to the practice.

So what's the matter with 64-bit block ciphers?
(source: Wikipedia)

Block ciphers are one of the most widely-used cryptographic primitives. As the name implies, these are schemes designed to encipher data in blocks, rather than a single bit at a time.

The two main parameters that define a block cipher are its block size (the number of bits it processes in one go), and its key size. The two parameters need not be related. So for example, DES has a 56-bit key and a 64-bit block. Whereas 3DES (which is built from DES) can use up to a 168-bit key and yet still has the same 64-bit block. More recent ciphers have opted for both larger blocks and larger keys.

When it comes to the security provided by a block cipher, the most important parameter is generally the key size. A cipher like DES, with its tiny 56-bit key, is trivially vulnerable to brute force attacks that attempt decryption with every possible key (often using specialized hardware). A cipher like AES or 3DES is generally not vulnerable to this sort of attack, since the keys are much longer.

However, as they say: key size is not everything. Sometimes the block size matters too.

You see, in practice, we often need to encrypt messages that are longer than a single block. We also tend to want our encryption to be randomized. To accomplish this, most protocols use a block cipher in a scheme called a mode of operation. The most popular mode used in TLS is CBC mode. Encryption in CBC looks like this:

(source: Wikipedia)
The nice thing about CBC is that (leaving aside authentication issues) it can be proven (semantically) secure if we make various assumptions about the security of the underlying block cipher. Yet these security proofs have one important requirement. Namely, the attacker must not receive too much data encrypted with a single key.

The reason for this can be illustrated via the following simple attack.

Imagine that an honest encryptor is encrypting a bunch of messages using CBC mode. Following the diagram above, this involves selecting a random Initialization Vector (IV) of size equal to the block size of the cipher, then XORing the IV with the first plaintext block (P), and enciphering the result (P ⊕ IV). The IV is sent (in the clear) along with the ciphertext.

Most of the time, the resulting ciphertext block will be unique -- that is, it won't match any previous ciphertext block that an attacker may have seen. However, if the encryptor processes enough messages, sooner or later the attacker will see a collision. That is, it will see a ciphertext block that is the same as some previous ciphertext block. Since the cipher is deterministic, this means the cipher's input (P ⊕ IV) must be identical to the cipher's previous input (P' ⊕ IV') that created the previous block.

In other words, we have (P ⊕ IV) = (P' ⊕ IV'), which can be rearranged as (P ⊕ P') = (IV ⊕ IV'). Since the IVs are random and known to the attacker, the attacker has (with high probability) learned the XOR of two (unknown) plaintexts!

What can you do with the XOR of two unknown plaintexts? Well, if you happen to know one of those two plaintext blocks -- as you might if you were able to choose some of the plaintexts the encryptor was processing -- then you can easily recover the other plaintext. Alternatively, there are known techniques that can sometimes recover useful data even when you don't know both blocks.
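To make the XOR arithmetic concrete, here is a small Python sketch of the known-plaintext case (the block values are invented for illustration; the IV pair is constructed so the collision actually occurs, standing in for the collision a real attacker simply waits for):

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

P  = b"known pt"   # block the attacker knows (e.g. she injected it herself)
P2 = b"secret!!"   # victim's block the attacker wants to learn
IV = b"\x11" * 8

# Construct the IV pair that makes the two cipher inputs collide --
# in the real attack, the attacker just waits ~2^32 blocks for this:
IV2 = xor(IV, xor(P, P2))
assert xor(P, IV) == xor(P2, IV2)   # identical inputs -> identical ciphertexts

# IVs travel in the clear, so after spotting the collision the attacker
# computes (IV xor IV') = (P xor P'), then peels off the known block:
recovered = xor(P, xor(IV, IV2))
print(recovered)   # b'secret!!'
```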

The main lesson here is that this entire mess only occurs if the attacker sees a collision. And the probability of such a collision is entirely dependent on the size of the cipher block. Worse, thanks to the (non-intuitive) nature of the birthday bound, this happens much more quickly than you might think it would. Roughly speaking, if the cipher block is b bits long, then we should expect a collision after roughly 2^{b/2} encrypted blocks.

In the case of a 64-bit blocksize cipher like 3DES, this is somewhere in the vicinity of 2^32, or around 4 billion enciphered blocks.

(As a note, the collision does not really need to occur in the first block. Since all blocks in CBC are calculated in the same way, it could be a collision anywhere within the messages.)
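These estimates follow from the standard birthday-bound approximation p ≈ 1 − e^(−n²/2^(b+1)) for n random b-bit blocks, which is easy to check numerically (a sketch, not from the original post):

```python
import math

def collision_probability(n_blocks: float, block_bits: int) -> float:
    """Birthday bound: p ~= 1 - exp(-n^2 / 2^(b+1)) for n random b-bit blocks."""
    return 1.0 - math.exp(-n_blocks ** 2 / 2 ** (block_bits + 1))

# 64-bit blocks (3DES): a collision is already likely around 2^32 blocks...
print(collision_probability(2 ** 32, 64))   # ~0.39
# ...and 2^32 blocks of 8 bytes each is a plausible amount of traffic:
print(2 ** 32 * 8 / 2 ** 30)                # 32.0 (GiB)
# 128-bit blocks (AES): the same traffic is nowhere near the bound.
print(collision_probability(2 ** 32, 128))  # ~0.0 (rounds to zero in doubles)
```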

Whew. I thought this was a practical attack. 4 billion is a big number!

It's true that 4 billion blocks seems like an awfully large number. In a practical attack, the requirements would be even larger -- since the most efficient attack is for the attacker to know a lot of the plaintexts, in the hope that she will be able to recover one unknown plaintext when she learns the value (P ⊕ P').

However, it's worth keeping in mind that these traffic numbers aren't absurd for TLS. In practice, 4 billion 3DES blocks works out to 32GB of raw ciphertext. If, as the Sweet32 authors do, we assume that half of the plaintext blocks are known to the attacker, the required amount of ciphertext rises to about 64GB. That's a lot, but not impossible.

The Sweet32 authors take this one step further. They imagine that the ciphertext consists of many HTTPS connections, consisting of 512 bytes of plaintext, in each of which is embedded the same secret 8-byte cookie -- and the rest of the session plaintext is known. Calculating from these values, they obtain a requirement of approximately 256GB of ciphertext needed to recover the cookie with high probability.

That is really a lot.

But keep in mind that TLS connections are being used to encipher increasingly more data. Moreover, a single open browser frame running attacker-controlled Javascript can produce many gigabytes of ciphertext in a single hour. So these attacks are not outside of the realm of what we can run today, and presumably will be very feasible in the future.

How does the TLS attack work?

While the cryptographic community has been largely pushing TLS away from ciphersuites like CBC, in favor of modern authenticated modes of operation, these modes still exist in TLS. And they exist not only for use with modern ciphers like AES; they are often also available for older ciphers like 3DES. For example, here's a connection I just made to Google:

Of course, just because a server supports 3DES does not mean that it's vulnerable to this attack. In order for a particular connection to be vulnerable, both the client and server must satisfy three main requirements:
1. The client and server must negotiate a 64-bit cipher. This is a relatively rare occurrence, but can happen in cases where one of the two sides is using an out-of-date client. For example, stock Windows XP* does not support any of the AES-based ciphersuites. Similarly, SSL3 connections may negotiate 3DES ciphersuites.
2. The server and client must support long-lived TLS sessions, i.e., encrypting a great deal of data with the same key. Unfortunately, most web browsers place no limit on the length of an HTTPS session if Keep-Alive is used, provided that the server allows the session. The Sweet32 authors scanned and discovered that many servers (including IIS) will allow sessions long enough to run their attack. Across the Internet, the percentage of vulnerable servers is small (less than 1%), but includes some important sites.
[Table: sites vulnerable to the attack (source: Sweet32 paper).]
3. The client must encipher a great deal of known data, including a secret session cookie. This is generally achieved by running adversarial Javascript code in the browser, although it could be done using standard HTML as well.
These caveats aside, the authors were able to run their attack using Firefox, sending at a rate of about 1500 connections per second. With a few optimizations, they were able to recover a 16-byte secret cookie in about 30 hours (a lucky result, given an expected 38 hour run time).

So what do we do now?

While this is not an earthshaking result, it's roughly comparable to previous results we've seen with legacy ciphers like RC4.

In short, while these are not the easiest attacks to run, it's a big problem that there even exist semi-practical attacks that succeed against the encryption used in standard encryption protocols. This is a problem that we should address, and papers like this one can make a big difference in doing that.

Notes:

* Note that by "stock" Windows XP, I'm referring to Windows XP as it was originally sold. According to Stefan Kanthak, Microsoft added AES support to SChannel via a series of updates beginning on August 11, 2009. It's not clear when these became "automatic install". So if you haven't updated your XP in a long time, that's probably a bad thing.
• Chen Bin (redguardtoo): Emacs as C++ IDE, easy way

I designed a quick and newbie-friendly solution.

It works on Linux/macOS/Cygwin (it should work on Windows, but I don't develop on Windows).

Setup is minimal. You only need to install GNU Global and two Emacs plugins:

Here is the step-by-step guide.

1 Step 1, create sample projects for experiment

Say I have two projects ~/proj1 and ~/proj2. Both projects will use third party library C++ header files from read-only directory /usr/include.

Create a new directory ~/obj to store the index files of the third-party libraries.

mkdir -p ~/{proj1,proj2,obj}


The content of ~/proj2/lib.cpp,

void proj2_hello(int a2, char* b2) {
}


The content of ~/proj1/main.cpp,

void proj1_hello(int a1, char* b1) {
}

int main(int argc, char *argv[]) {
  return 0;
}


2 Step 2, scan C++ code and setup Emacs

Run the command below in a shell to scan the code:

cd /usr/include && MAKEOBJDIRPREFIX=~/obj gtags -O && cd ~/proj1 && gtags && cd ~/proj2 && gtags


After setting up the corresponding Emacs plugins (the minimal setup copied from their websites is enough), insert the code below into ~/.emacs:

;; Please note `file-truename' must be used!
(setenv "GTAGSLIBPATH" (concat "/usr/include"
                               ":"
                               (file-truename "~/proj2")
                               ":"
                               (file-truename "~/proj1")))
(setenv "MAKEOBJDIRPREFIX" (file-truename "~/obj/"))
(setq company-backends '((company-dabbrev-code company-gtags)))


3 Usage

Use the Emacs plugins as usual.

But you need to install the latest company build from 25th August, because I fixed a company issue yesterday.

Screenshot,

4 Technical Details (Optional)

Check GNU Global manual to understand environment variables GTAGSLIBPATH and MAKEOBJDIRPREFIX.

• Ben Simon: An Emacs Friendly Caps Lock Configuration on Windows

While this may be obvious, I was pretty dang pleased with myself when I managed to turn the Caps Lock key on my Windows 10 computer into an emacs friendly Hyper key. Here's what I did:

Step 1. Use AutoHotKey to trivially map the Caps Lock key to the Windows Menu key, or as AutoHotKey calls it, the AppsKey.

;; Add this to your standard AutoHotKey configuration
CapsLock::AppsKey


Step 2. Use this elisp code to capture the Menu key from within emacs and map it to the Hyper modifier:

;; http://ergoemacs.org/emacs/emacs_hyper_super_keys.html
(setq w32-pass-apps-to-system nil)
(setq w32-apps-modifier 'hyper) ; Menu/App key


Step 3. Enjoy! I can now map any key binding using the H- modifier. Here's some code I added to my PHP setup:

(defun bs-php-mode-hook ()
  (local-set-key '[backtab] 'indent-relative)
  (local-set-key (kbd "<H-left>") 'beginning-of-defun)
  (local-set-key (kbd "<H-right>") 'end-of-defun)
  (auto-complete-mode t)
  (require 'ac-php)
  (setq ac-sources '(ac-source-php))
  (yas-global-mode 1)
  (setq indent-tabs-mode nil)
  (setq php-template-compatibility nil)
  (setq c-basic-offset 2))


The result: when I open up a PHP file, I can jump between function definitions by holding down Caps Lock and left or right arrow.

I feel like I just won the keyboard shortcut lottery!

• William Denton: Image display size in Org

I just discovered that it’s possible to change the size of an image as displayed in Org while leaving the actual file unchanged. This is great: I can scale it down so it’s just large enough I know what it is but it doesn’t get in my way or take up much real estate.

The variable is org-image-actual-width. C-h v org-image-actual-width shows the documentation:

org-image-actual-width is a variable defined in ‘org.el’. Its value is t

Documentation: Should we use the actual width of images when inlining them?

When set to t, always use the image width.

When set to a number, use imagemagick (when available) to set the image’s width to this value.

When set to a number in a list, try to get the width from any #+ATTR.* keyword if it matches a width specification like

#+ATTR_HTML: :width 300px

and fall back on that number if none is found.

When set to nil, try to get the width from an #+ATTR.* keyword and fall back on the original width if none is found.

This requires Emacs >= 24.1, built with imagemagick support.

(I build Emacs from source, and it has ImageMagick support, though I forget if I had to do anything to get that working. I think just installing ImageMagick is enough. Do ./configure | grep -i imagemagick to check if Emacs knows about it.)

I could set the variable in an init file:

(setq org-image-actual-width nil)

But for now I’m just using it as a file local variable, with this as the first line of the Org file:

# -*- org-image-actual-width: nil; -*-

Then I have, for example, this raw text:

#+NAME: fig:moodleviz
#+CAPTION: Screenshot from Moodleviz.
#+ATTR_ORG: :width 600
#+ATTR_LATEX: :width 5in
[[file:figures/moodleviz-laps.png]]


That image is 1520 pixels wide (wider than my personal laptop's screen; it's a screenshot taken on a larger screen) and it's annoying to scroll up or down past it, so shrinking the displayed size is great. It looks like this scaled down to 600 pixels wide:

ATTR_LATEX shrinks the image to a nice size when I export the document to PDF. There is no HTML version so I don’t care about resizing for that.

• Rosetta captures comet outburst

Rosetta’s OSIRIS wide-angle camera captured an outburst from the Atum region on Comet 67P/Churyumov–Gerasimenko’s large lobe on 19 February 2016. The images are separated by half an hour each, covering the period 08:40–12:10 GMT, and as such show the comet rotating. Credits: ESA/Rosetta/MPS for OSIRIS Team MPS/UPD/LAM/IAA/SSO/INTA/UPM/DASP/IDA

In unprecedented observations made earlier this year, Rosetta unexpectedly captured a dramatic comet outburst that may have been triggered by a landslide.

Nine of Rosetta’s instruments, including its cameras, dust collectors, and gas and plasma analysers, were monitoring the comet from about 35 km in a coordinated planned sequence when the outburst happened on 19 February.

“Over the last year, Rosetta has shown that although activity can be prolonged, when it comes to outbursts, the timing is highly unpredictable, so catching an event like this was pure luck,” says Matt Taylor, ESA’s Rosetta project scientist.

“By happy coincidence, we were pointing the majority of instruments at the comet at this time, and having these simultaneous measurements provides us with the most complete set of data on an outburst ever collected.”

The data were sent to Earth only a few days after the outburst, but subsequent analysis has allowed a clear chain of events to be reconstructed, as described in a paper led by Eberhard Grün of the Max-Planck-Institute for Nuclear Physics, Heidelberg, accepted for publication in Monthly Notices of the Royal Astronomical Society.

A strong brightening of the comet’s dusty coma was seen by the OSIRIS wide-angle camera at 09:40 GMT, developing in a region of the comet that was initially in shadow.

Over the next two hours, Rosetta recorded outburst signatures that exceeded background levels in some instruments by factors of up to a hundred. For example, between about 10:00–11:00 GMT, ALICE saw the ultraviolet brightness of the sunlight reflected by the nucleus and the emitted dust increase by a factor of six, while ROSINA and RPC detected a significant increase in gas and plasma, respectively, around the spacecraft, by a factor of 1.5–2.5.

In addition, MIRO recorded a 30°C rise in temperature of the surrounding gas.

This diagram of Rosetta highlights (in red) the science instruments that were on and made detections of the 19 February 2016 outburst event, and that are presented in the study reporting the first analysis of the event. Credits: ESA/ATG medialab

Shortly after, Rosetta was blasted by dust: GIADA recorded a maximum hit count at around 11:15 GMT. Almost 200 particles were detected in the following three hours, compared with a typical rate of 3–10 collected on other days in the same month.

At the same time, OSIRIS narrow-angle camera images began registering dust grains emitted during the blast. Between 11:10 GMT and 11:40 GMT, a transition occurred from grains that were distant or slow enough to appear as points in the images, to those either close or fast enough to be captured as trails during the exposures.

In addition, the startrackers, which are used to navigate and help control Rosetta’s attitude, measured an increase in light scattered from dust particles as a result of the outburst.

The startrackers are mounted at 90° to the side of the spacecraft that hosts the majority of science instruments, so they offered a unique insight into the 3D structure and evolution of the outburst.

Astronomers on Earth also noted an increase in coma density in the days after the outburst.

By examining all of the available data, scientists believe they have identified the source of the outburst.

“From Rosetta’s observations, we believe the outburst originated from a steep slope on the comet’s large lobe, in the Atum region,” says Eberhard.

The source of the 19 February 2016 outburst was traced back to a location in the Atum region, on the comet’s large lobe. The inset image was taken a few hours after the outburst by Rosetta’s NavCam and shows the approximate source location. The image at left was taken on 21 March 2015 and is shown for context, and so there are some differences in shadowing/illumination as a result of the images being acquired at very different times. Credits: ESA/Rosetta/NavCam – CC BY-SA IGO 3.0

The fact that the outburst started when this area just emerged from shadow suggests that thermal stresses in the surface material may have triggered a landslide that exposed fresh water ice to direct solar illumination. The ice then immediately turned to gas, dragging surrounding dust with it to produce the debris cloud seen by OSIRIS.

“Combining the evidence from the OSIRIS images with the long duration of the GIADA dust impact phase leads us to believe that the dust cone was very broad,” says Eberhard.

“As a result, we think the outburst must have been triggered by a landslide at the surface, rather than a more focused jet bringing fresh material up from within the interior, for example.”

“We’ll continue to analyse the data not only to dig into the details of this particular event, but also to see if it can help us better understand the many other outbursts witnessed over the course of the mission,” adds Matt.

“It’s great to see the instrument teams working together on the important question of how cometary outbursts are triggered.”

“The 19 Feb. 2016 outburst of comet 67P/CG: A Rosetta multi-instrument study,” by E. Grün et al. is published in the Monthly Notices of the Royal Astronomical Society. doi: 10.1093/mnras/stw2088

• Irreal: Say What?!?

I don't have the words

• Ben Simon: Emacs + PHP - A Modern and (Far More Complete) Recipe

Recently, I had a chance to do a bit of emacs evangelism. Man, is that a soapbox I like to climb on! I hit all my favorite features from dired and dynamic abbreviation expansion to under appreciated gems like ispell. I talked about the power of having a fully programmable, self documenting editor at your fingertips. When I was done, I thought for sure I had managed to add another member to the tribe.

Then, yesterday, my possible convert came to me with a simple question: what mode do you use to edit PHP? Considering that most of the code I write is PHP, you'd think I would be ready to deliver a solid answer. Instead, I mumbled something about having a hacked bit of code I relied on, but really wouldn't recommend it for the general use. Yeah, not cool. I closed out the conversation with a promise: I'd see what the latest PHP options were and report back.

PHP is actually a fairly tricky language to build an editor for. That's because, depending on the style it's written in, it can range from disciplined C-like code to a gobbledygook of C-like code with HTML, CSS and JavaScript all mixed in. Add to that the classic emacs problem of having too many 85% solutions, and there's definitely room for frustration. Check out the emacs wiki to see what I mean. You'll find lots of promising options, but no one definitive solution.

After reading up on my choices and trying out some options, I do believe I have a new recipe for PHP + emacs success. Here goes.

Step 1: Setup Melpa

Melpa is a code repository that emacs can magically pull in packages from. Back in the day, adding packages to emacs meant downloading, untarring, and running make. Like all things emacs, the process has been both streamlined, and of course, is fully executable from within emacs. To add Melpa, you'll need to follow the instructions here. Assuming you have a modern version of emacs, this probably just means adding the following to your .emacs:

(add-to-list 'package-archives
             '("melpa" . "https://melpa.org/packages/"))


Step 2: Install the Melpa available PHP Mode

Emacs comes with a PHP mode out of the box, but it seems as though this one (also named php-mode) is more modern. I love that the README talks about PHP 7, showing just how up to date this mode is with respect to the latest language constructs.

Installing this mode, assuming Melpa is cooperating, couldn't be easier. Just run: M-x package-install and enter php-mode.

My preference for indentation is 2 spaces and no insertion of tab characters. Two other tweaks I was after were to turn off HTML support from within PHP and to enable subword-mode. All these tweaks are stored in a function and attached to the php-mode-hook. This is all standard .emacs stuff:

(defun bs-php-mode-hook ()
  (setq indent-tabs-mode nil)
  (setq c-basic-offset 2)
  (setq php-template-compatibility nil)
  (subword-mode 1))



Step 3: Install Web-Mode.el

php-mode above is ideal for code centric PHP files. But what about those pesky layout files that contain HTML, CSS and JavaScript? For that, web-mode.el looks especially promising. Installation and configuration mirrors php-mode. Here's the code I use to customize it:

(defun bs-web-mode-hook ()
  (local-set-key '[backtab] 'indent-relative)
  (setq indent-tabs-mode nil)
  (setq web-mode-markup-indent-offset 2
        web-mode-css-indent-offset 2
        web-mode-code-indent-offset 2))



Web-mode.el is quite impressive and who knows, it may actually fulfill all my PHP needs. If that's the case, I may choose to stop using the php-mode. However, for now, I like the idea of being able to switch between the two modes. Which brings me to the next step...

Step 4: Add a Quick Mode Toggle

Inspired by this tip on the EmacsWiki, I went ahead and setup a quick toggle between php-mode and web-mode. Here's that code:

(defun toggle-php-flavor-mode ()
  "Toggle mode between PHP & Web-Mode Helper modes."
  (interactive)
  (cond ((string= mode-name "PHP")
         (web-mode))
        ((string= mode-name "Web")
         (php-mode))))

(global-set-key [f5] 'toggle-php-flavor-mode)


Now I'm only an F5 keystroke away from two different editor strategies.

When I have a say in the matter, I tend to be pretty particular about which files in my source tree are pure code and which are pure markup. At some point, I could codify this so that files in the snippets directory, for example, always open in web-mode, whereas files loaded from under lib start off in php-mode. Such is the joy of having a fully programmable editor at your fingertips. You make up the rules!

Step 5: Bonus - Setup aggressive auto-completion

For bonus points, I decided to play with ac-php, a library that supports auto completion of function and class names. I followed the install here, updated my .emacs file as noted below, created the empty file named .ac-php-conf.json in my project's root and then ran M-x ac-php-remake-tags-all. Once that's done, emacs now shouts completions like crazy at me:

It's slick, I'll give you that. I think I may have to see if I can turn down the volume a bit. Here's what my .emacs now looks like to configure php-mode:

(defun bs-php-mode-hook ()
  (auto-complete-mode t)                 ;; «
  (require 'ac-php)                      ;; «
  (setq ac-sources '(ac-source-php))     ;; «
  (yas-global-mode 1)                    ;; «
  (setq indent-tabs-mode nil)
  (setq php-template-compatibility nil)
  (setq c-basic-offset 2))


Bye-bye hacked PHP code. Hello modern, feature filled, super easy to install, mega powerful code.

Update: updated the web-mode hook I'm using to make sure all code on the page, not just markup code, is indented 2 steps.

• Curiosity at Murray Buttes on Mars

What are these unusual lumps on Mars?

• Sols 1441-1442: Cruising through the Murray Buttes

Curiosity is making good progress through the Murray Buttes, and on Sol 1439 we drove another 34 m to the south.  Today’s two-sol plan fits our familiar routine: a pre-drive science block, drive, post-drive imaging for targeting, and an untargeted science block on the second sol.  The plan starts with Mastcam and ChemCam observations of the targets “Viana,” “Ukuma,” and “Waku Kungo” to assess the composition and sedimentary structures in the local bedrock.  We’ll also acquire a large Mastcam mosaic to document some of the buttes.  After the drive we’ll take some post-drive imaging for targeting and context, as well as an autonomously selected ChemCam target using AEGIS.  The second sol is mostly devoted to atmospheric monitoring, including a ChemCam passive sky activity, and Navcam observations to search for dust devils and clouds.  If we keep up this driving pace, we could be looking for our next drill target as early as next Wednesday!

By Lauren Edgar

Dates of planned rover activities described in these reports are subject to change due to a variety of factors related to the Martian environment, communication relays and rover status.

August 24, 2016

Search and rescue crews are using whatever they can to locate survivors of a magnitude 6.2 earthquake that reduced three central Italian towns to rubble early today. The death toll stood at 120 but will certainly rise, officials said. "The town isn't here anymore," said Sergio Pirozzi, the mayor of the hardest-hit town, Amatrice. -- By Lloyd Young

A man is rescued from the ruins following an earthquake in Amatrice, central Italy, on Aug. 2. (Remo Casilli/Reuters)

• Oil #5: Imagine A World Without Oil

Oil #5: Imagine A World Without Oil

• Transcript

On today's show, we follow the Planet Money oil to the end of the line.

And we ask: What would the world be like if fossil fuels did not exist? What if you dug down in the ground and there was nothing but dirt and rock?

Oil, coal and natural gas are this incredible store of energy, just sitting there in the ground waiting for us to dig them up. Amazing boon to humanity! But also: Climate change!

Would a world without oil be better? Worse? Or just different?

Copyright 2016 NPR. To see more, visit NPR.
• Language necessarily contains human biases, and so will machines trained on language corpora
I have a new draft paper with Aylin Caliskan-Islam and Joanna Bryson titled Semantics derived automatically from language corpora necessarily contain human biases. We show empirically that natural language necessarily contains human biases, and the paradigm of training machine learning on language corpora means that AI will inevitably imbibe these biases as well. Specifically, we look at […]
• Earth-Like Planet Discovered In Habitable Zone Of Nearest Star

Amazing news! Today, a team of European astronomers announced the discovery of a potentially habitable planet around Proxima Centauri, the nearest star beyond the sun. That planet, called Proxima b, is at least 1.3 times the mass of Earth and squarely located within the star’s habitable zone, where temperatures are clement enough for liquid water to pool on its surface. And where there’s water, life may follow.

I bet you thought Alpha Centauri was the closest star. Almost. Alpha has two companions: one bright and close, called Alpha Centauri B, and Proxima, a cool red dwarf too faint to be seen without a small telescope. Since Proxima revolves about the pair, its distance from Earth varies. For the past 32,000 years, and for the next 25,000, it has been and will be the closest star to ours at 4.2 light years, or 25.2 trillion miles. Around 27,000 A.D., Alpha Centauri will assume the role.

Of the more than 3,500 extrasolar planets discovered to date, Proxima b is the only one that combines potential habitability with being this close to home. But before we get too excited about flying there tomorrow, remember that the journey is still a very long one. With present technology it would take upward of 75,000 years to send a probe to explore the planet. Other kinds of propulsion will have to be developed before we can seriously consider launching spaceships to the stars.
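As a back-of-the-envelope check on that travel time (a sketch using my own numbers, not the article's: a Voyager-class probe speed of roughly 17 km/s):

```python
# Rough travel time to Proxima Centauri at chemical-rocket speeds.
LIGHT_YEAR_M = 9.4607e15            # metres in one light year
distance_m   = 4.2 * LIGHT_YEAR_M   # distance to Proxima Centauri
speed_m_s    = 17_000               # ~ Voyager 1's speed leaving the solar system

years = distance_m / speed_m_s / 3.156e7   # seconds per year
print(round(years))   # ~74000, in line with the article's "upward of 75,000 years"
```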

Finding the new planet took a lot of firepower. During the first half of 2016 Proxima Centauri was regularly observed with the HARPS spectrograph (used to find exoplanets) on the European Southern Observatory’s (ESO) 3.6-meter (141.7-inch) telescope in Chile and simultaneously monitored by other telescopes around the world as part of the Pale Red Dot campaign. The goal of the campaign was to look for the tiny back and forth wobble of the star that would be caused by the gravitational pull of a possible orbiting planet.

“The first hints of a possible planet were spotted back in 2013, but the detection was not convincing,” according to Guillem Anglada-Escudé, who headed up the initiative. “Since then we have worked hard to get further observations off the ground with help from ESO and others.”

When more extensive Pale Red Dot data were combined with earlier observations made at ESO observatories and elsewhere, they revealed clear signs of a wobble. Proxima Centauri is approaching Earth at a normal walking pace of about 3 miles per hour (5 kph) and then receding at the same speed. This regular pattern of changing to-and-fro velocities repeats with a period of 11.2 days. Careful analysis of the velocity shifts indicated the presence of a planet with a mass at least 1.3 times that of the Earth, orbiting about 4.3 million miles (7 million km) from Proxima Centauri.
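The quoted orbital distance can be sanity-checked with Kepler's third law. This sketch assumes a stellar mass of about 0.12 solar masses for Proxima Centauri (a commonly cited value, and my assumption, not a number from the article):

```python
import math

G      = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun  = 1.989e30         # kg
M_star = 0.12 * M_sun     # assumed mass of Proxima Centauri
P      = 11.2 * 86400     # orbital period from the wobble, in seconds

# Kepler's third law: a^3 = G * M * P^2 / (4 * pi^2)
a = (G * M_star * P ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
print(a / 1e9)   # ~7.2 million km, matching the reported ~7 million km
```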

Guillem Anglada-Escudé could barely contain his excitement:  “I kept checking the consistency of the signal every single day during the 60 nights of the Pale Red Dot campaign. The first 10 were promising, the first 20 were consistent with expectations, and at 30 days the result was pretty much definitive, so we started drafting the paper!”

Red dwarfs like Proxima Centauri can vary in ways that mimic the presence of a planet because they're prone to huge flares on scales that cause their light to fluctuate. Astronomers monitored the star closely and eliminated data taken during its flaring fits to get as "clean" a signal as possible.

Although Proxima b orbits much closer to its star than Mercury does to the Sun, the star itself is far fainter than the Sun. As a result, Proxima b lies well within the habitable zone around the star and has an estimated surface temperature that would allow the presence of liquid water. Despite Proxima b's temperate orbit, conditions on its surface may be strongly affected by ultraviolet and X-ray flares from the star that are far more intense than anything the Earth experiences from the Sun.

Still, liquid water on the planet today can't be ruled out, though it may be present only in the sunniest regions: either in the hemisphere of the planet facing the star or in a tropical belt, depending on whether the planet always presents the same face to its star (tidally locked, as the Moon is to the Earth) or not. Proxima b's rotation, the strong radiation from its star and the planet's formation history make its climate quite different from that of the Earth.

These and other questions will have to wait until larger instruments such as the European Extremely Large Telescope and space-based James Webb Space Telescope come online later this decade and next. Proxima b will likely be a prime target in the hunt for evidence of life beyond our own solar system and the first travel destination to another star system perhaps using this radical new method to get there.

For me, the takeaway from today’s news is this: If we can find an Earth analog right next door,  they might well be everywhere. Rocky terrestrial planets could be as common as red dwarf stars, with millions more waiting to be found.

Guillem Anglada-Escudé concludes: “Many exoplanets have been found and many more will be found, but searching for the closest potential Earth-analogue and succeeding has been the experience of a lifetime for all of us. The search for life on Proxima b comes next …”

This is the product of only about 5 minutes worth of thought, so take it with a grain of salt. When it comes to how to write maintainable, understandable code, there are as many opinions out there as there are developers. Personally I favour simple, understandable, even “boring” method bodies that don’t try to be flashy or use fancy language features. Method and class names should clearly signal intent and what the thing is or does. And, code should (IMHO) include good comments.

This last part is probably the area where I've seen the most dissent. For some reason people hate writing comments, and think that the code should be "self-documenting". I've rarely, perhaps never, seen this in practice. The intent may have been for the code to be self-documenting, but it never quite works out that way.

Recently (and this is related, I promise), I watched a lot of talks (one, in person) and read a lot about the Zalando engineering principles. They base their engineering organisation around three pillars of How, What and Why. I think the same thing can be said for how you should write code and document it:


class Widget
  def initialize
    @expires_at = Time.now + 86400
  end

  # Customer X was asking for the ability to expire   #  <--- Why
  # widgets, but some may not have an expiry date or
  # do not expire at all. This method handles these
  # edge cases safely.
  def is_expired?                                     #  <--- What
    !!@expires_at && Time.now > @expires_at           #  <--- How
  end
end


This very simple example shows what I mean (in Ruby, since it's flexible and lends itself well to artificial examples like this). The method body conveys the How. The method name conveys the What: what does this thing do? But the How and the What can never fully explain the history and reasoning behind their own existence, so I find it helpful to accompany them with the Why in a method comment (whether that comment sits above the method, inside it, or is distributed across it isn't really important).
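To make the “handles these edge cases safely” claim concrete, here is a hypothetical usage sketch. The keyword argument and the nil-expiry widget are my additions for illustration, not part of the original class:

```ruby
# Variant of the Widget above with an injectable expiry time, so the
# edge cases the "Why" comment mentions can be exercised directly.
class Widget
  def initialize(expires_at: Time.now + 86_400) # default: 24 hours out
    @expires_at = expires_at
  end

  # Safe even for widgets that never expire (@expires_at is nil):
  # !!nil short-circuits to false before the comparison runs.
  def is_expired?
    !!@expires_at && Time.now > @expires_at
  end
end

Widget.new.is_expired?                             # => false (expires tomorrow)
Widget.new(expires_at: nil).is_expired?            # => false (never expires)
Widget.new(expires_at: Time.now - 60).is_expired?  # => true  (expired a minute ago)
```

The `!!@expires_at` guard is exactly the kind of line whose What and How are plain from the code, while the Why (customers with non-expiring widgets) lives only in the comment.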

You could argue that the history and reasoning for having the method can be recovered from version control. But this turns coding from what should be a straightforward exercise into some bizarre trip through the Wheel of Time novels: cross-referencing back to earlier volumes to hunt down some obscure fact that may or may not actually exist, just so you can decode the reference in front of you. Why make the future maintainer of your code go through that? And it relies entirely on the original committer having left a comprehensive, thoughtful message that is also easy to find.

The other counter-argument is that no comments are better than out-of-date or incorrect comments. Again, I personally haven't run into this (or at least not nearly as often as comments that are missing completely). It's usually obvious when a comment no longer matches the code, and in that (hopefully rare) case you can go version-control diving to find out when they diverged. Assessing the code itself is usually far easier than hunting for the original commit message that introduced the method, so this should be the easier exercise.

Writing understandable code (and let's face it, most of the code written in the world is probably doing menial things like checking if statements, manipulating strings or adding/removing items from arrays) and comments is less fun than hacking out stuff that just works when you are feeling inspired, so no wonder we've invented an assortment of excuses to avoid doing it. So if you are one of the few actually doing this, thank you.

• Proxima Centauri b: Have we just found Earth’s cousin right on our doorstep?
What began as a tantalizing rumor has just become an astonishing fact. Today a group of thirty-one scientists announced the discovery of a terrestrial exoplanet orbiting Proxima Centauri. The discovery of this planet, Proxima Centauri b, is a huge breakthrough not just for astronomers but for all of us. Here’s why.
• Proxima Centauri Planet

A planet in the habitable zone around Proxima Centauri? The prospect dazzles the imagination, but then, I’ve been thinking about just that kind of planet for most of my life. Proxima Centauri is, after all, the closest star to our own, about 15000 AU from the primary Alpha Centauri stars (though thought to be moving with that system). A dim red dwarf, Proxima wasn’t discovered until 1915, but it quickly seized the imagination of science fiction writers who pondered what might exist around such a star. Murray Leinster’s story “Proxima Centauri” (1935) is a clanking, thudding tale but it still evokes a bit of the magic of one of the earliest fictional interstellar voyages.

Image: This wide-field image shows the Milky Way stretching across the southern sky. The beautiful Carina Nebula (NGC 3372) is seen at the right of the image glowing in red. It is within this spiral arm of our Milky Way that the bright star cluster NGC 3603 resides. At the centre of the image is the constellation of Crux (The Southern Cross). The bright yellow/white star at the left of the image is Alpha Centauri, in fact a system of three stars, at a distance of about 4.4 light-years from Earth. The star Alpha Centauri C, Proxima Centauri, is the closest star to the Solar System. Credit: A. Fujii.

More recently, Stephen Baxter pretty much nailed Proxima Centauri b in his depiction of a just over one Earth-mass planet in the habitable zone called Per Ardua — this was in Baxter’s 2015 novel Proxima. Baxter’s planet was at 0.04 AU, and a little more massive than Earth; the real thing is 0.05 AU and 1.3 Earth masses. I would call that very nice work. Baxter has also noted, with considerable justification, that if we find a truly habitable planet in the very next system to our own, the implication is that such planets are quite common.

Image: This image of the sky around the bright star Alpha Centauri AB also shows the much fainter red dwarf star, Proxima Centauri, the closest star to the Solar System. The picture was created from pictures forming part of the Digitized Sky Survey 2. The blue halo around Alpha Centauri AB is an artifact of the photographic process, the star is really pale yellow in colour like the Sun. Credit: Digitized Sky Survey 2. Acknowledgement: Davide De Martin/Mahdi Zamani.

Having been at the Breakthrough Starshot meetings all this week, I’m delighted to see that we now have a potential destination; i.e. an actual rather than assumed planet around one of the stars in the system nearest to us. Finding Proxima’s planet has been a long process, drilling down to the kind of measurements that can reveal its presence. Up until now we’ve been excluding larger planets in various kinds of orbits around Proxima, but the prospect of something Earth-sized in the habitable zone remained open. I hasten to add that Breakthrough Starshot has made no decisions about its target at this point, but it’s clear that Proxima b is going to be a prime contender.

I’m going to let Guillem Anglada-Escudé, head of the Pale Red Dot project, and his collaborators describe what his team has found. Noting that uneven sampling and the longer-term variability of the star are reasons why the signal could not be confirmed from the earlier data, the researchers go on to describe these key characteristics of the planet. From the paper:

The Doppler semi-amplitude of Proxima b (∼1.4 m s⁻¹) is not particularly small compared to other reported planet candidates. The uneven and sparse sampling combined with longer-term variability of the star seem to be the reasons why the signal could not be unambiguously confirmed with pre-2016 data, rather than the amount of data accumulated.

And here’s what we’ve been waiting to hear:

The corresponding minimum planet mass is ∼1.3 M⊕. With a semi-major axis of ∼0.05 AU, it lies squarely in the center of the classical habitable zone for Proxima. As mentioned earlier, the presence of another super-Earth mass planet cannot yet be ruled out at longer orbital periods and Doppler semi-amplitudes < 3 m s⁻¹. By numerical integration of some putative orbits, we verified that the presence of such an additional planet would not compromise the orbital stability of Proxima b.
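Those two numbers hang together: the minimum mass follows from the standard radial-velocity relation. This back-of-the-envelope sketch assumes a circular orbit, takes K ≈ 1.38 m/s and P ≈ 11.186 days from the paper, and adopts a commonly quoted stellar mass of 0.12 M☉ for Proxima (an assumed input, not a value from this article):

```ruby
# Back-of-the-envelope check of the minimum mass from the radial-velocity
# relation  m*sin(i) = K * M_star**(2/3) / (2*pi*G / P)**(1/3),
# valid for a circular orbit with planet mass << stellar mass.
G       = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN   = 1.989e30         # kg
M_EARTH = 5.972e24         # kg

k      = 1.38              # Doppler semi-amplitude, m/s
p_orb  = 11.186 * 86_400   # orbital period, s
m_star = 0.120 * M_SUN     # assumed mass of Proxima Centauri, kg

m_sin_i = k * m_star**(2.0 / 3) / (2 * Math::PI * G / p_orb)**(1.0 / 3)
puts m_sin_i / M_EARTH     # roughly 1.2 Earth masses
```

The result lands around 1.2 M⊕, consistent with the paper's quoted ∼1.3 M⊕ given the uncertainties in K and the stellar mass.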

Image: Guillem Anglada-Escudé, head of the Pale Red Dot project and lead author of the paper on the discovery of Proxima Centauri b.

And there we are, our first assessment of a planetary system around Proxima Centauri. The team’s analysis taps into previous Doppler measurements of Proxima Centauri coupled with the follow-up Pale Red Dot campaign of 2016. The Doppler data draws on the HARPS (High Accuracy Radial velocity Planet Searcher) spectrometer and UVES (the Ultraviolet and Visual Echelle Spectrograph). The search methods and signal assessment are thoroughly discussed in the paper (citation below). Key to the effort was what Anglada-Escudé and team call “[a] well isolated peak at ∼11.2 days” that appeared in the pre-2016 Doppler data. The HARPS Pale Red Dot campaign was created to confirm or refute this 11.2-day signal. And confirm it they did.

Image: This artist’s impression shows a view of the surface of the planet Proxima b orbiting the red dwarf star Proxima Centauri, the closest star to the Solar System. The double star Alpha Centauri AB also appears in the image to the upper-right of Proxima itself. Proxima b is a little more massive than the Earth and orbits in the habitable zone around Proxima Centauri, where the temperature is suitable for liquid water to exist on its surface. Credit: ESO/M. Kornmesser.

We have a long way to go before knowing whether a planet around a red dwarf like this can truly be habitable. Tidal locking is always an issue because a planet this close to its host (Proxima Centauri b is on an 11.2-day orbit) is probably going to have one side fixed facing the star, the other in permanent night. There are papers arguing, however, that tidal lock does not prevent a stable atmosphere with global circulation and heat distribution from occurring.

And what about Proxima’s magnetic field? The average global magnetic flux is high compared to the Sun’s (600±150 G versus the Sun’s 1 G). Couple this with flare activity and there are scenarios where a planet gradually has its atmosphere stripped away. A strong planetary magnetic field could, however, prevent this erosion. Nor would X-rays (400 times the flux the Earth receives) necessarily destroy the planet’s ability to keep an atmosphere.

Image: An angular size comparison of how Proxima will appear in the sky seen from Proxima b, compared to how the Sun appears in our sky on Earth. Proxima is much smaller than the Sun, but Proxima b lies very close to its star. Credit: ESO/G. Coleman.

And then there’s the matter of the planet’s origins, and how that could affect what is found there. From the paper:

…forming Proxima b from in-situ disk material is implausible because disk models for small stars would contain less than 1 M Earth of solids within the central AU. Instead, either 1) the planet migrated in via type I migration, 2) planetary embryos migrated in and coalesced at the current planet’s orbit, or 3) pebbles/small planetesimals migrated via aerodynamic drag and later coagulated into a larger body. While migrated planets and embryos originating beyond the ice-line would be volatile rich, pebble migration would produce much drier worlds.

We can now hope for further data on Proxima Centauri b through transit searches, direct imaging and further spectroscopy. Ultimately, of course, we can think about the prospects of robotic exploration, the sort of thing we’ve been discussing here on Centauri Dreams for the last twelve years. No star is closer, and few will reward follow-up study more than this one. I need to get into a meeting and will have to let that wrap this up, but you can be sure there will be a lot more to say about Proxima and the entire Alpha Centauri system as the analysis continues.

The paper is Anglada-Escudé et al., “A terrestrial planet candidate in a temperate orbit around Proxima Centauri,” Nature 536 (25 August 2016), 437-440 (abstract).

• Pragmatic Emacs: A tweak to elfeed filtering

I wrote recently about my enthusiasm for the elfeed feed reader. Here is a microscopic tweak to the way elfeed search filters work to better suit my use.

By default, if I switch to a bookmarked filter to view e.g. my feeds tagged with Emacs (as discussed in the previous post), and then hit s to run a live filter, I can type something like “Xah” to dynamically narrow the list of stories to those containing that string. The only problem is I actually have to type ” Xah”, i.e. with a space before the filter text, since it is appended to the filter that is already present “+unread +emacs” in this case.

Since life is too short to type extra spaces, I wrote a simple wrapper for the elfeed filter command:

;; insert space before elfeed filter
(defun bjm/elfeed-search-live-filter-space ()
  "Insert space when running elfeed filter."
  (interactive)
  (let ((elfeed-search-filter (concat elfeed-search-filter " ")))
    (elfeed-search-live-filter)))


I add this to the elfeed keybindings when I initialise the package:

(use-package elfeed
  :ensure t
  :bind (:map elfeed-search-mode-map
              ("A" . bjm/elfeed-show-all)
              ("E" . bjm/elfeed-show-emacs)
              ("D" . bjm/elfeed-show-daily)
              ("/" . bjm/elfeed-search-live-filter-space)
              ("q" . bjm/elfeed-save-db-and-bury)))


and now I can use / to filter my articles without needing the extra space.

• Irreal: Reading Code with Emacs

I have long believed that one of the best ways to move from journeyman to master coder is to read the code of the masters. I learned most of my advanced C techniques by reading the Unix source code. Other languages have different masters that you can learn from. Nathaniel Knight has a post that suggests some convenient methods of reading code with Emacs. These boil down to learning the marking and narrowing commands.

Once you learn to conveniently mark expressions and functions, for example, you can narrow to the region and look just at the code you're interested in. That may not seem like a huge win but as Knight explains, it often makes working with the code easier. Once you get in the habit of using narrowing, you'll want to take a look at Artur Malabarba's excellent post on narrow-or-widen-dwim. It's really great because you need only call a single command and it figures out what you want to do by context. It's a huge timesaver.

Knight also covers the little-known clone-indirect-buffer command. That's just what you need when you want to narrow to two (or more) separate areas at the same time. Again, the utility of doing this may not be obvious but it turns out to be tremendously useful. One common use case is where you have different types of code in the same buffer. You can clone the buffer, narrow to the desired code segments, and then work in the appropriate Emacs mode for each segment independently.

You can follow Knight's recipe for doing this but there's an easier way. The code discussed at the link is just a little bit of Elisp to automate the cloning and narrowing steps but it's a real time saver. Follow the link to Zane Ashby's post (at the easier way link) to see an example of the use case I mentioned.

• United Airlines Sets Minimum Bar on Security

United Airlines has rolled out a series of updates to its Web site that the company claims will help beef up the security of customer accounts. But at first glance, the core changes — moving from 4-digit PINs to passwords and requiring customers to pick five different security questions and answers — may seem like a security playbook copied from Yahoo.com, circa 2009. Here’s a closer look at what’s changed in how United authenticates customers, and hopefully a bit of insight into what the nation’s fourth-largest airline is trying to accomplish with its new system.

United, like many other carriers, has long relied on a frequent flyer account number and a 4-digit personal identification number (PIN) for authenticating customers at its Web site. This has left customer accounts ripe for takeover by crooks who specialize in hacking and draining loyalty accounts for cash.

Earlier this year, however, United began debuting new authentication systems wherein customers are asked to pick a strong password and to choose from five sets of security questions and pre-selected answers. Customers may be asked to provide the answers to two of these questions if they are logging in from a device United has never seen associated with that account, trying to reset a password, or interacting with United via phone.

Some of the questions and answers United came up with.

Yes, you read that right: The answers are pre-selected as well as the questions. For example, to the question “During what month did you first meet your spouse or significant other,” users may select only from one of…you guessed it — 12 answers (January through December).

The list of answers to another security question, “What’s your favorite pizza topping,” had me momentarily thinking I was using a pull-down menu at Dominos.com — waffling between “pepperoni” and “mashed potato.” (Fun fact: If you were previously unaware that mashed potatoes qualify as an actual pizza topping, United has you covered with an answer to this bit of trivia in its Frequently Asked Questions page on the security changes.)

I recorded a short video of some of these rather unique questions and answers.

United said it opted for pre-defined questions and answers because the company has found “the majority of security issues our customers face can be traced to computer viruses that record typing, and using predefined answers protects against this type of intrusion.”

This struck me as a dramatic oversimplification of the threat. I asked United why they stated this, given that any halfway decent piece of malware that is capable of keylogging is likely also doing what’s known as “form grabbing” — essentially snatching data submitted in forms — regardless of whether the victim types in this information or selects it from a pull-down menu.

Benjamin Vaughn, director of IT security intelligence at United, said the company was randomizing the questions to confound bot programs that seek to automate the submission of answers, and that security questions answered wrongly would be “locked” and not asked again. He added that multiple unsuccessful attempts at answering these questions could result in an account being locked, necessitating a call to customer service.

United said it plans to use these same questions and answers — no longer passwords or PINs — to authenticate those who call in to the company’s customer service hotline. When I went to step through United’s new security system, I discovered my account was locked for some reason. A call to United customer service unlocked it in less than two minutes. All the agent asked me for was my frequent flyer number and my name.

(Incidentally, United still somewhat relies on “security through obscurity” to protect the secrecy of customer usernames by very seldom communicating the full frequent flyer number in written and digital communications with customers. I first pointed this out in my story about the data that can be gleaned from a United boarding pass barcode, because while the full frequent flyer number is masked with “x’s” on the boarding pass, the full number is stored on the pass’s barcode).

Conventional wisdom dictates that what little additional value security questions add to the equation is nullified when the user is required to choose from a set of pre-selected answers. After all, the only sane and secure way to use secret questions if one must is to pick answers that are not only incorrect and/or irrelevant to the question, but that also can’t be guessed or gleaned by collecting facts about you from background checking sites or from your various social media presences online.
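The arithmetic behind that conventional wisdom is easy to sketch. Even assuming (generously) that users pick uniformly at random from the menus, each pre-selected answer carries only a few bits of guessing entropy. The menu sizes below are illustrative, using the 12-month question from the article:

```ruby
# Guessing entropy (in bits) of an answer chosen uniformly from an n-item menu.
# Uniform choice is a best case: real users cluster on popular answers
# (pepperoni...), which only lowers the effective entropy further.
def bits(n)
  Math.log2(n)
end

bits(12)       # ~3.6  bits: a 12-answer question like "what month did you meet?"
5 * bits(12)   # ~17.9 bits: five such questions combined
bits(26**8)    # ~37.6 bits: even a weak 8-character lowercase password
```

So even all five questions together offer far less search space than one mediocre password, which is the point the Google research below makes empirically.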

Google published some fascinating research last year that spoke to the efficacy and challenges of secret questions and answers, concluding that they are “neither secure nor reliable enough to be used as a standalone account recovery mechanism.”

Overall, the Google research team found the security answers are either somewhat secure or easy to remember—but rarely both. Put another way, easy answers aren’t secure, and hard answers aren’t as usable.

But wait, you say: United asks you to answer up to five security questions. So more security questions equals more layers for the bad guys to hack through, which equals more security, right? Well, not so fast, the Google security folks found.

“When users had to answer both together, the spread between the security and usability of secret questions becomes increasingly stark,” the researchers wrote. “The probability that an attacker could get both answers in ten guesses is 1%, but users will recall both answers only 59% of the time. Piling on more secret questions makes it more difficult for users to recover their accounts and is not a good solution, as a result.”

Vaughn said the beauty of United’s approach is that it uniquely addresses the problem identified by Google researchers — that so many people in the study had so much trouble remembering the answers — by providing users with a set of pre-selected answers from which to choose.

The security team at United reached out a few weeks back to highlight the new security changes, and in a conversation this week they asked what I thought about their plan. I replied that if United is getting pushback from security experts and tech publications about its approach, that’s probably because security people are techies/nerds at heart, and techies/nerds want options and stuff. Or at least the ability to add, enable or disable certain security features.

But the reality today is that almost any security system designed for use by tens of millions of people who aren’t techies is always going to cater to the least sophisticated user on the planet — and that’s about where the level of security for that system is bound to stay for a while.

So I told the United people that I was somewhat despondent about this reality, mainly because I end up having little choice but to fly United quite often.

“At the scale that United faces, we felt this approach was really optimal to fix this problem for our customers,” Vaughn said. “We have to start with something that is universally available to our customers. We can’t send a text message to you when you’re on an airplane or out of the country, we can’t rely on all of our customers to have a smart phone, and we didn’t feel it would be a great use of our customers’ time to send them in the mail 93 million secure ID tokens. We felt a powerful onus to do something, and the something we implemented we feel improves security greatly, especially for non-tech-savvy customers.”

Arlan McMillan, United’s chief information security officer, said the basic system that the company has just rolled out is built to accommodate additional security features going forward. McMillan said United has discussed rolling out some type of app-based time-based one-time password (TOTP) systems (Google Authenticator is one popular TOTP example).

“It is our intent to provide additional capabilities to our customers, and to even bring in additional security controls if [customers] choose to,” McMillan said. “We set the minimum bar here, and we think that’s a higher bar than you’re going to find at most of our competitors. And we’re going to do more, but we had to get this far first.”

Lest anyone accuse me of claiming that the thrust of this story is somehow newsy, allow me to recommend some related, earlier stories worth reading about United’s security changes:

TechCrunch: It’s Time to Publicly Shame United Airlines’ So-called Online Security

Slate: United Airlines Uses Multiple Choice Security Questions

• How the Mathematics of Algebraic Topology Is Revolutionizing Brain Science
Nobody understands the brain’s wiring diagram, but the tools of algebraic topology are beginning to tease it apart.
• An Earthquake in Central Italy Topples Buildings, Killing Dozens (26 photos)

Central Italy was struck by a powerful, shallow, 6.2-magnitude earthquake at 3:36 am local time, devastating several mountain villages, and resulting in at least 73 deaths so far. Buildings in towns close to the epicenter collapsed on top of each other, falling into the streets, trapping hundreds in enormous piles of rubble. The death toll is still expected to rise as rescue teams reach some of the more remote villages in the region.

• Interesting Internet-Based Investigative Techniques

In this article, detailing the Australian and then worldwide investigation of a particularly heinous child-abuse ring, there are a lot of details of the pedophile security practices and the police investigative techniques. The abusers had a detailed manual on how to scrub metadata and avoid detection, but not everyone was perfect. The police used information from a single camera to narrow down the suspects. They also tracked a particular phrase one person used to find him.

This story shows an increasing sophistication of the police using small technical clues combined with standard detective work to investigate crimes on the Internet. A highly painful read, but interesting nonetheless.