Today more than two thirds of the guns in America are owned by just 20 percent of gun owners. That's not always good for gunmakers.
(Image credit: NPR)
What technology risks are faced by people who experience intimate partner violence? How is the security community failing them, and what questions might we need to ask to make progress on social and technical interventions?
Speaking Tuesday at CITP was Thomas Ristenpart (@TomRistenpart), an associate professor at Cornell Tech and a member of the Department of Computer Science at Cornell University. Before joining Cornell Tech in 2015, Thomas was an assistant professor at the University of Wisconsin-Madison. His research spans a wide range of computer security topics, including digital privacy and safety in intimate partner violence, alongside work on cloud computing security, confidentiality and privacy in machine learning, and topics in applied and theoretical cryptography.
Throughout this talk, I found myself overwhelmed by the scope of the challenges faced by so many people, and inspired by the way that Thomas and his collaborators have taken thorough, meaningful steps on this vital issue.
Intimate partner violence (IPV) is a huge problem, says Thomas. 25% of women and 11% of men will experience rape, physical violence, and/or stalking by an intimate partner, according to the National Intimate Partner and Sexual Violence Survey. To put those rates in context for tech companies, they imply that roughly 360 million Facebook users and 252 million Android users will experience this kind of violence.
Prior research over the years has shown that abusers take advantage of technology to harm victims in a wide range of ways, including spyware, harassment, and non-consensual photography. In a team with Nicki Dell, Karen Levy, Damon McCoy, Rahul Chatterjee, Peri Doerfler, and Sam Havron, Thomas and his collaborators have been working with the New York City Mayor's Office to Combat Domestic Violence (NYC CDV).
To start, the researchers spent a year doing qualitative research with people who experience domestic violence. The research that Thomas is sharing today draws from that work.
The research team worked with the New York City Family Justice Centers, which offer a range of services for victims of domestic violence, sex trafficking, and elder abuse, from civil and legal services to access to shelters, counseling, and support from nonprofits. The centers were a crucial resource for the researchers, since they connect nonprofits, government actors, and survivors and victims. Over a year-long qualitative study, the researchers held 11 focus groups with 39 English- and Spanish-speaking women aged 18 to 65. Most of them are no longer with the abusive partner. They also held semi-structured interviews with 50 professionals working on IPV: case managers, social workers, attorneys/paralegals, and police officers. Together, this is the largest and most demographically diverse study of its kind to date on IPV.
The researchers spotted a range of common themes across clients of the NYC CDV. Clients talked about stalkers who accessed their phones and social media, installed spyware, took compromising images through the spyware, and then impersonated them, using the account to send compromising, intimate images to employers, family, and friends. Abusers take advantage of every possible technology to create problems through many modes. Overall, the researchers identified four kinds of common attacks.
In many of these cases, abusers are re-purposing ordinary software for harmful ends. For example, abusers use two-factor authentication to prevent victims from accessing, or recovering access to, their own accounts.
Thomas tells us that despite these risks, they didn’t find a single technologist in the network of support for people facing intimate partner violence. So it’s not surprising that these services don’t have any best practices for evaluating technology risks. On top of that, victims overwhelmingly report having insufficient technology understanding to deal with tech abuse.
Abusers are typically considered to be “more tech-savvy” than victims, and professionals overwhelmingly report having insufficient technology understanding to help with tech abuse. Many of them just google as they go.
Thomas also points out that the intersection of technology and intimate partner violence raises important legal and policy issues. First, digital abuse is usually not recognized as a form of abuse that warrants a protection order. When someone goes to a family court, they have to convince a judge to get a protection order, and judges often aren't convinced by digital harassment, even though a protection order can legally restrict an abuser from sending such messages. Second, when an abuser creates a fake account on a site like Tinder and posts "come rape me"-style ads, the abuser is technically the legal owner of the account, so it can be difficult to take down the ads, especially on smaller websites that don't respond to copyright takedown requests.
Abusers aren't the sophisticated cyber-operatives that people sometimes talk about at security conferences. Instead, the researchers saw two classes of attackers: (a) UI-bound adversaries, adversarial but authenticated users who interact with the system via the normal user interface, and (b) spyware adversaries, who install or repurpose commodity software for surveillance of the victim. Neither requires technical sophistication.
Why are these attacks so effective? Thomas says the reason is that the threat models and assumptions of the security world don't match these threats. For example, many systems are designed to protect against a stranger on the internet who doesn't know the victim personally and connects from elsewhere. In intimate partner violence, the attacker knows the victim personally, can guess or compel disclosure of credentials, may connect from the victim's computer or the same home, and may own the account or device being used. The abuser is often an earner who pays for accounts and devices.
The same problems apply to fake accounts and the detection of abusive content. Many fake social media profiles obviously belong to the abuser, but survivors are rarely able to prove it. When abusers send hurtful, abusive messages, someone who lacks the context may not be able to detect it. Outside of the context of IPV, a picture of a gun might be just a picture of a gun; in context, it can be very threatening.
Much of the common advice just won't work. Sometimes people are urged to delete their accounts, but you can't just shut off contact with an abuser; you might be legally obligated to communicate (shared custody of children, for example). You can't get new devices when the abuser pays for the phones, the family plan, and/or the children's devices (which are themselves a vector of surveillance). People can't necessarily get off social media, because they need it to reach their friends and family. On top of that, any of these actions could escalate abuse; victims are very worried that cutting off access or uninstalling spyware will provoke further violence from the abuser.
Next, Thomas tells us about intimate partner surveillance (IPS) from a new paper on How Intimate Partner Abusers Exploit Technology. Shelters and family justice centers have had problems where someone shows up with software on their phone that allowed the abuser to track them, kick down a door, and endanger the victim. No one could name a single product that was used by abusers, partly because our ability to diagnose spyware from a technical perspective is limited. On the other hand, if you google “track my girlfriend,” you will find a host of companies that are peddling spyware.
To study the range of spyware systems, Thomas and his colleagues used "snowball" searching, using search auto-complete to find other queries that people were searching for. From a set of roughly 27k URLs, they investigated 100 randomly sampled ones. They found that 60% were related to intimate partner surveillance: how-to blogs, Q&A forums, news articles, app websites, and links to apps on the Google Play Store and the Apple App Store. Many of the professional-grade spyware providers offer apps directly through app stores, as well as "off-store" apps. The researchers labeled a thousand of the apps they found and discovered that about 28% were potential IPS tools.
The researchers found overt intimate partner surveillance apps, as well as systems for safety, theft tracking, child tracking, and employee tracking that were repurposed for abuse. In many cases, it's hard to point to a single piece of software and say that it's bad. While apps sometimes purport to help parents track children, searches related to intimate partner surveillance also surface paid ads for products that don't directly claim to be for use against intimate partners. Ever since a ruling from the FTC, companies have worked to preserve plausible deniability.
In an audit study, the researchers emailed customer support for 11 apps (on-store and off-store) posing as an abuser. They received nine responses. Eight of them condoned intimate partner surveillance and gave advice on making the app hard to find. Only one indicated that such use could be illegal.
Many of these systems have rich capabilities: location tracking, texts, call recordings, media contents, app usage, internet activity logs, keylogging, geographic tracking. All of the off-store systems have covert features to hide the fact that the app is installed. Even some of the Google Play Store apps have features to make the apps covert.
What's the current state of the art? Right now, practitioners tell people that if their battery runs unusually low, they may be a victim of spyware – not very effective. Do spyware removal tools work? They had high but not perfect detection rates for off-store intimate partner surveillance systems. However, they did a poor job of detecting on-store spyware tools.
Thomas recaps what they learned from this study: there's a large ecosystem of spyware apps, the dual use of these apps creates a significant challenge, many developers condone intimate partner surveillance, and existing anti-spyware technologies are insufficient at detecting these tools.
Based on this work, Thomas and his collaborators are working with the NYC Mayor's office and the National Network to End Domestic Violence to develop ways to detect spyware, to develop new surveys of technology risks, and to find new kinds of interventions.
Thomas concludes with an appeal to companies and computer scientists that we pay more attention to the needs of the most vulnerable people affected by our work, volunteer for organizations that support victims, and develop new approaches to protect people in these all-too-common situations.
I saw this tweet:
— Thomas Mattacchione (@tmkjone) February 20, 2018
and recoiled in horror. It’s not that I think Mattacchione is doing something wrong or that I doubt he’s a good developer, it’s just that I know I couldn’t work that way.
I appreciate the idea of picking the best tool for the task at hand, and it may be that Mattacchione's choices are exactly that, but I keep thinking, "How in the world does he cope with muscle memory?" I've been using Emacs for a decade and before that I used Vi/Vim for even longer. Only recently have I stopped doing things like using Ctrl+k to scroll back a line. That's the result of using only two editors, not even at the same time, over several decades.
I’m sure all this says something uncomplimentary about my flexibility but it is, nevertheless, reality for me. I don’t know what I’d do if I were working in something like Java that requires—I’ve heard—an IDE to use productively.
Anyway, the tweet got me wondering about other people. Do you, like Mattacchione, use several different editors depending on what you’re doing or do you, like me, stick with one editor for all your text-based work? If you use more than one editor, do you find that muscle memory gets in the way? If you have opinions on the matter, leave a comment.
From Jac Goudsmit on Tindie:
Build a Replica of your favorite early 6502 Computer. Or Create Your Own.
A flour war in Greece, mountain hares in Scotland, a massive blue rooster in Washington, D.C., flying sparks in China, the Mach Loop in Wales, students march against guns in the U.S., curling and skicross in Pyeongchang, a soaring rocket above California, and much more.
At the last Tennessee Valley Interstellar Workshop, I was part of a session on biosignatures in exoplanet atmospheres that highlighted how careful we have to be before declaring we have found life. Given that, as Alex Tolley points out below, our own planet has been in its current state of oxygenation for a scant 12 percent of its existence, shouldn’t our methods include life detection in as wide a variety of atmospheres as possible? A Centauri Dreams regular, Alex addresses the question by looking at new work on chemical disequilibrium and its relation to biosignature detection. The author (with Brian McConnell) of A Design for a Reusable Water-Based Spacecraft Known as the Spacecoach (Springer, 2016), Alex is a lecturer in biology at the University of California. Just how close are we to an unambiguous biosignature detection, and on what kind of world will we find it?
by Alex Tolley
Image: Archaean or early Proterozoic Earth showing stromatolites in the foreground. Credit: Peter Sawyer / Smithsonian Institution.
The Kepler space telescope has established that exoplanets are abundant in our galaxy and that many stars have planets in their habitable zones (defined as having temperatures that potentially allow surface water). This has reinvigorated the quest to answer the age-old question "Are We Alone?". While SETI attempts to answer that question by detecting intelligent signals, the Drake equation frames intelligent life as emerging on only a subset of the planets where life of any kind has emerged. When we envisage such living worlds, the image that is often evoked is of a verdant paradise, with abundant plant life clothing the land and emitting oxygen to support respiring animals, much like our pre-space age visions of Venus.
Naturally, much of the search for biosignatures has focused on oxygen (O2), which on Earth is now primarily produced by photosynthesis. Unfortunately, O2 can also be produced abiotically via photolysis of water, and therefore alone is not a conclusive biosignature. What is needed is a mixture of gases in disequilibrium that can only be maintained by biotic and not abiotic processes. Abiotic processes, unless continually sustained, will tend towards equilibrium. For example, on Earth, if life completely disappeared today, our nitrogen-oxygen dominated atmosphere would reach equilibrium with the oxygen bound as nitrate in the ocean.
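As a concrete sketch of those two equilibration pathways (standard stoichiometry; the notation here is mine, not the paper's):

\[ \mathrm{N_2} + \tfrac{5}{2}\,\mathrm{O_2} + \mathrm{H_2O} \rightarrow 2\,\mathrm{H^+} + 2\,\mathrm{NO_3^-} \]
\[ \mathrm{CH_4} + 2\,\mathrm{O_2} \rightarrow \mathrm{CO_2} + 2\,\mathrm{H_2O} \]

Left to run to completion, these reactions would drain the atmosphere's O2 and CH4; only a sustained flux, such as biological production, keeps the mixture out of equilibrium.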
Image: Schematic of methodology for calculating atmosphere-ocean disequilibrium. We quantify the disequilibrium of the atmosphere-ocean system by calculating the difference in Gibbs energy between the initial and final states. The species in this particular example show the important reactions to produce equilibrium for the Phanerozoic atmosphere-ocean system, namely, the reaction of N2, O2, and liquid water to form nitric acid, and methane oxidation to CO2 and H2O. Red species denote gases that change when reacted to equilibrium, whereas green species are created by equilibration. Details of aqueous carbonate system speciation are not shown. Credit: Krissansen-Totton et al. (citation below).
Another issue with looking for O2 is that it assumes a terrestrial biology; other biologies may be different. However, environments with large, sustained chemical disequilibrium are more likely to be a product of biology.
A new paper digs into the issue. The work of Joshua Krissansen-Totton (University of Washington, Seattle), Stephanie Olson (UC-Riverside) and David C. Catling (UW-Seattle), it tackles a question the authors first addressed in an earlier paper:
“Chemical disequilibrium as a biosignature is appealing because unlike searching for biogenic gases specific to particular metabolisms, the chemical disequilibrium approach makes no assumptions about the underlying biochemistry. Instead, it is a generalized life-detection metric that rests only on the assumption that distinct metabolisms in a biosphere will produce waste gases that, with sufficient fluxes, will alter atmospheric composition and result in disequilibrium.”
This approach also opens up the possibility of detecting many more life-bearing worlds as the Earth’s highly oxygenated atmosphere has only been in this state for about 12% of the Earth’s existence.
Image: Heinrich D. Holland derivative work: Loudubewe (talk) – Oxygenation-atm.svg, CC BY-SA 3.0,
Given the absence of high partial pressures of O2 during the Precambrian, are there biogenic chemical disequilibrium conditions that can be discerned from the state of primordial atmospheres subject to purely abiotic equilibrium?
The new Krissansen-Totton et al paper attempts to do that for the Archaean (4 – 2.5 gya) and Proterozoic (2.5 – 0.54 gya) eons. Their approach is to calculate the Gibbs free energy (G), a metric of disequilibrium, for gases in an atmosphere-ocean environment. The authors use a range of gas mixtures from the geologic record and determine the disequilibrium they represent by comparing G for the observed concentrations of chemical species against the expected equilibrium concentrations.
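In rough terms (my paraphrase of the metric, not the authors' exact notation), the available Gibbs energy is the difference between the observed atmosphere-ocean state and the same system reacted to equilibrium at constant temperature and pressure:

\[ \Phi = G_{\mathrm{observed}}(T, P, \{n_i\}) \; - \; G_{\mathrm{equilibrium}}(T, P, \{n_i'\}) \]

The larger this difference, and the harder it is for abiotic processes to sustain, the stronger the case that biology is maintaining the disequilibrium.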
The authors note that almost all of the G is in the ocean compartment, arising from the nitrogen (N2) and O2 that have not reached equilibrium as ionic nitrate. A small but very important disequilibrium between methane (CH4) and O2 in the atmosphere is also considered a biosignature.
Using their approach, the authors look at the disequilibria in the atmosphere-ocean model in the earlier Archaean and Proterozoic eons. The geologic and model evidence suggests that the atmosphere was largely N2 and carbon dioxide (CO2), with a low concentration of O2 (2% or less partial pressure) in the Proterozoic.
In the Proterozoic, as today, the major disequilibrium is due to the lack of nitrate in the oceans and the correspondingly higher concentration of O2 in the atmosphere. There is also an excess concentration of CH4 that at equilibrium should quickly oxidize to CO2. In the Archaean, prior to the increase in O2 from photosynthesis, the N2, CO2, CH4 and liquid H2O equilibrium should consume the CH4 and increase the concentrations of ammonium (NH4+) and bicarbonate (HCO3-) ions in the ocean. The persistence of CH4 in both eons is primarily driven by methanogens.
Image: Atmosphere-ocean disequilibrium in the Archean. Blue bars denote assumed initial abundances from the literature, and green bars denote equilibrium abundances calculated using Gibbs free energy minimization. Subplots separate (A) atmospheric species and (B) ocean species. The most important contribution to Archean disequilibrium is the coexistence of atmospheric CH4, N2, CO2, and liquid water. These four species are lessened in abundance by reaction to equilibrium to form aqueous HCO3- and NH4+. Oxidation of CO and H2 also contributes to the overall Gibbs energy change. Credit: Krissansen-Totton et al.
Therefore a biosignature for an anoxic world at a stage similar to our Archaean eon would be an ocean coupled with N2, CO2 and CH4 in the atmosphere. There is, however, an argument that might make this biosignature ambiguous.
CH4 and carbon monoxide (CO) might be present due to impacts of bolides (Kasting). Similarly, under certain conditions, the mantle might be able to outgas CH4. In both cases, CO would also be present and indicative of an abiogenic process. On Earth, CO is consumed as a substrate by bacteria, so it should be largely absent on a living world, even if such outgassing or impacts occur. The issue of CH4 outgassing, at least on Earth, is countered by the known rates of outgassing compared to the concentration of CH4 in the atmosphere and ocean. The argument comes down to the rates of CH4 production by abiotic versus biotic processes. Supporting Kasting, the authors conclude that on Earth, abiotic rates of CH4 production fall far short of the observed levels.
Image: Probability distribution for maximum abiotic methane production from serpentinization on Earth-like planets. This distribution was generated by sampling generous ranges for crustal production rates, FeO wt %, maximum fractional conversion of FeO to H2, and maximum fractional conversion of H2 to CH4, and then calculating the resultant methane flux 1 million times (see the main text). The modern biological flux (58) and plausible biological Archean flux (59) far exceed the maximum possible abiotic flux. These results support the hypothesis that the co-detection of abundant CH4 and CO2 on a habitable exoplanet is a plausible biosignature. Credit: Krissansen-Totton et al.
The authors conclude that their biosignature should also exclude the presence of CO to confirm the observed gases as a biosignature:
“The CH4-N2-CO2-H2O disequilibrium is thus a potentially detectable biosignature for Earth-like exoplanets with anoxic atmospheres and microbial biospheres. The simultaneous detection of abundant CH4 and CO2 (and the absence of CO) on an ostensibly habitable exoplanet would be strongly suggestive of biology.”
Given these gases in the presence of an ocean, can we use them to detect life on exoplanets?
Astronomers have been able to detect CO2, H2O, CH4 and CO in the atmosphere of HD 189733b, which is not Earthlike, but rather a hot Jupiter with a temperature around 1,700°F, far too hot for life. So far these gases have not been detectable on rocky worlds. Some new ground-based telescopes and the upcoming James Webb Space Telescope should be capable of detecting these gases using transmission spectroscopy as these exoplanets transit their stars.
It is important to note that the presence of an ocean is necessary to create high values of G. The Earth's atmosphere alone has quite a low G, even compared to Mars. It is the presence of an ocean that results in a G orders of magnitude larger than that of the atmosphere alone. Such an ocean is likely to be detected first by glints or by the change in color of the planet as it rotates, exposing different fractions of land and ocean.
An interesting implication of this approach is that a waterworld or ocean exoplanet might not show these biosignatures, as the lack of weathering blocks the geologic carbon cycle and may preclude life's emergence or long-term survival. This idea might now be testable using spectroscopy and calculations of G.
This approach to biosignatures is also applicable to our own solar system. As mentioned, Mars' current atmospheric G is greater than that of Earth's atmosphere alone, due to the photochemical disequilibrium of CO and O2. The detection of CH4 in Mars' atmosphere, although at very low levels, would add to the calculation of Mars' atmospheric G. In the future, if the size of Mars' early ocean can be inferred and gases extracted from rocks, evidence for paleo-life might be inferred; fossil evidence of life would then confirm the approach.
Similarly, the composition of the plumes from Europa and Enceladus should also allow calculation of G for these icy moons and help to infer whether their subsurface oceans are abiotic or support life.
Within a decade, we may have convincing evidence of extraterrestrial life. If any of those worlds are not too distant, the possibility of studying that life directly in the future will be exciting.
The Secure Elections Act strikes a careful balance between state and federal action to secure American voting systems. The measure authorizes appropriation of grants to the states to take important and time-sensitive actions, including:
- Replacing insecure paperless voting systems with new equipment that will process a paper ballot;
- Implementing post-election audits of paper ballots or records to verify electronic tallies;
- Conducting "cyber hygiene" scans and "risk and vulnerability" assessments and supporting state efforts to remediate identified vulnerabilities.
The legislation would also create needed transparency and accountability in elections systems by establishing clear protocols for state and federal officials to communicate regarding security breaches and emerging threats.
This was one of those times where I made the comic, then realized like a day later that it was factually inaccurate, but I felt it was too funny (and, let's be honest, I had put too much work into it) to let it go. So I just slapped a disclaimer on the bottom of the comic and ran that puppy.
Also, I figured that the “fact” I had gotten wrong was about imaginary car robots, so how much sleep was I going to lose over that, really?
Multiple Chase.com customers have reported logging in to their bank accounts, only to be presented with another customer’s bank account details. Chase has acknowledged the incident, saying it was caused by an internal “glitch” Wednesday evening that did not involve any kind of hacking attempt or cyber attack.
Trish Wexler, director of communications for the retail side of JP Morgan Chase, said the incident happened Wednesday evening, for “a pretty limited number of customers” between 6:30 pm and 9 pm ET who “sporadically during that time while logged in to chase.com could see someone else’s account details.”
“We know for sure the glitch was on our end, not from a malicious actor,” Wexler said, noting that Chase is still trying to determine how many customers may have been affected. “We’re going through Tweets from customers and making sure that if anyone is calling us with issues we’re working one on one with customers. If you see suspicious activity you should give us a call.”
Wexler urged customers to “practice good security hygiene” by regularly reviewing their account statements, and promptly reporting any discrepancies. She said Chase is still working to determine the precise cause of the mix-up, and that there have been no reports of JPMC commercial customers seeing the account information of other customers.
“This was all on our side,” Wexler said. “I don’t know what did happen yet but I know what didn’t happen. What happened last night was 100 percent not the result of anything malicious.”
The account mix-up was documented on Wednesday by Fly & Dine, an online publication that chronicles the airline food industry. Fly & Dine included screenshots of one of its writers' spouses logged into the account of a fellow Chase customer with an Amazon and Chase card and a balance of more than $16,000.
Kenneth White, a security researcher and director of the Open Crypto Audit Project, said the reports he’s seen on Twitter and elsewhere suggested the screwup was somehow related to the bank’s mobile apps. He also said the Chase retail banking app offered an update first thing Thursday morning.
Chase says the oddity occurred for both chase.com and users of the Chase mobile app.
“We don’t have any evidence it was related to any update,” Wexler said.
“There’s only so many kind of logic errors where Kenn logs in and sees Brian’s account,” White said. “It can be a devil to track down because every single time someone logs in it’s a roll of the dice — maybe they get something in the warmed up cache or they get a new hit. It’s tricky to debug, but this is like as bad as it gets in terms of screwup of the app.”
White said the incident is reminiscent of a similar glitch at online game giant Steam, which caused many customers to see account information for other Steam users for a few hours. He said he suspects the problem was a configuration error someplace within Chase.com “caching servers,” which are designed to ease the load on a Web application by periodically storing some common graphical elements on the page — such as images, videos and GIFs.
“The images, the site banner, all that’s fine to be cached, but you never want to cache active content or raw data coming back,” White said. “If you’re CNN, you’re probably caching all the content on the homepage. But for a banking app that has access to live data, you never want that to be cached.”
“It’s fairly easy to fix once you identify the problem,” he added. “I can imagine just getting the basics of the core issue [for Chase] would be kind of tricky and might mean a lot of non techies calling your Tier 1 support people.”
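In concrete terms, the distinction White draws maps onto HTTP caching headers. A generic sketch (not Chase's actual configuration) might look like this:

# Static asset: safe for a shared cache to store and reuse
HTTP/1.1 200 OK
Cache-Control: public, max-age=86400

# Authenticated account data: must never be served from a shared cache
HTTP/1.1 200 OK
Cache-Control: private, no-store

A caching layer that ignores or mislabels the second case can end up replaying one logged-in customer's response to another.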
Update, 8:10 p.m. ET: Added comment from Chase about the incident affecting both mobile device and Web browser users.
BSDNow 234 is up, and has an interview with Benno Rice of FreeBSD. There’s also chatter about jails, Summer of Code, FreeBSD’s new Code of Conduct, libhijack, and so on.
This week I made a mistake that ultimately enlightened me about the nature of function objects in Emacs Lisp. There are three kinds of function objects, but they each behave very differently when evaluated as objects.
But before we get to that, let's talk about one of Emacs' embarrassing, old missteps: eval-after-load.
One of the long-standing issues with Emacs is that loading Emacs Lisp files (.el and .elc) is a slow process, even when those files have been byte compiled. There are a number of dirty hacks in place to deal with this issue, and the biggest and nastiest of them all is the dumper, also known as unexec.
The Emacs you routinely use throughout the day is actually a previous instance of Emacs that's been resurrected from the dead. Your undead Emacs was probably created months, if not years, earlier, back when it was originally compiled. The first stage of compiling Emacs is to compile a minimal C core called temacs. The second stage is loading a bunch of Emacs Lisp files, then dumping a memory image in an unportable, platform-dependent way. On Linux, this actually requires special hooks in glibc. The Emacs you know and love is this dumped image loaded back into memory, continuing from where it left off just after it was compiled. Regardless of your own feelings on the matter, you have to admit this is a very lispy thing to do.
There are two notable costs to Emacs’ dumper:
The dumped image contains hard-coded memory addresses. This means Emacs can't be a Position Independent Executable (PIE). It can't take advantage of a security feature called Address Space Layout Randomization (ASLR), which would increase the difficulty of exploiting some classes of bugs. This might be important to you if Emacs processes untrusted data, such as when it's used as a mail client or web server, or when it parses data downloaded across the network.
It's not possible to cross-compile Emacs since it can only be dumped by running temacs on its target platform. As an experiment I've attempted to dump the Windows version of Emacs on Linux using Wine, but was unsuccessful.
The good news is that there's a portable dumper in the works that makes this a lot less nasty. If you're adventurous, you can already disable dumping and run temacs directly by setting CANNOT_DUMP=yes at compile time. Be warned, though, that a non-dumped Emacs takes several seconds, or worse, to initialize before it even begins loading your own configuration. It's also somewhat buggy since it seems nobody ever runs it this way.
The other major way Emacs users have worked around slow loading is aggressive use of lazy loading, generally via autoloads. The major package interactive entry points are defined ahead of time as stub functions. These stubs, when invoked, load the full package, which overrides the stub definition, then finally the stub re-invokes the new definition with the same arguments.
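Here's a minimal sketch of that stub mechanism (the package and command names are hypothetical):

;; Stub registered ahead of time, e.g. by an autoload cookie:
(autoload 'my-package-command "my-package"
  "Stub for my-package's entry point." t)

;; Calling my-package-command now loads my-package.el, whose real
;; defun replaces this stub, and the call then proceeds with the
;; original arguments.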
To further assist with lazy loading, an evaluated defvar form will not override an existing global variable binding. This means you can, to a certain extent, configure a package before it's loaded. The package will not clobber any existing configuration when it loads.
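A quick illustration of that defvar behavior, using simple-httpd's httpd-port as the example variable (the values are made up):

(setq httpd-port 8080)    ; user configuration, evaluated first
(defvar httpd-port 8000)  ; package default, evaluated at load time
httpd-port
;; => 8080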
This also explains the bizarre interfaces for the various hook functions, such as add-hook and run-hooks. These accept symbols — the names of the variables — rather than values of those variables as would normally be the case. The add-to-list function does the same thing. It's all intended to cooperate with lazy loading, where the variable may not have been defined yet.
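For example, with a hypothetical my-mode-hook:

(add-hook 'my-mode-hook #'turn-on-flyspell) ; fine even before my-mode is loaded
(run-hooks 'my-mode-hook)                   ; the symbol's value is looked up at run time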
Sometimes this isn't enough and you need some configuration to take place after the package has been loaded, but without forcing it to load early. That is, you need to tell Emacs, "evaluate this code after this particular package loads." That's where eval-after-load comes into play, except for its fatal flaw: it takes the word "eval" far too literally.
The first argument to eval-after-load is the name of a package. Fair enough. The second argument is a form that will be passed to eval after that package is loaded. Now hold on a minute. The general rule of thumb is that if you're calling eval, you're probably doing something seriously wrong, and this function is no exception. This is completely the wrong mechanism for the task.
The second argument should have been a function — either a (sharp quoted) symbol or a function object. And then instead of eval, it would use something more sensible, like funcall. Perhaps this improved version would be named something like call-after-load.
The big problem with passing an s-expression is that it will be left uncompiled due to being quoted. I've talked before about the importance of evaluating your lambdas. eval-after-load not only encourages badly written Emacs Lisp, it demands it.
;;; BAD!
(eval-after-load 'simple-httpd
  '(push '("c" . "text/plain") httpd-mime-types))
This was all corrected in Emacs 25. If the second argument to eval-after-load is a function — meaning functionp returns non-nil — then it uses funcall. There's also a new macro, with-eval-after-load, to package it all up nicely.
;;; Better (Emacs >= 25 only)
(eval-after-load 'simple-httpd
  (lambda ()
    (push '("c" . "text/plain") httpd-mime-types)))

;;; Best (Emacs >= 25 only)
(with-eval-after-load 'simple-httpd
  (push '("c" . "text/plain") httpd-mime-types))
Though in both of these examples the compiler will likely warn about httpd-mime-types not being defined. That's a problem for another day.
But what if you need to use Emacs 24, as was the situation that sparked this article? What can we do with the bad version of eval-after-load? We could situate a lambda such that it's evaluated, but then smuggle the resulting function object into the form passed to eval-after-load, all using a backquote.
;;; Note: this is subtly broken
(eval-after-load 'simple-httpd
  `(funcall ,(lambda ()
               (push '("c" . "text/plain") httpd-mime-types))))
When everything is compiled, the backquoted form evaluates to this:
(funcall #[0 <bytecode> [httpd-mime-types ("c" . "text/plain")] 2])
Where the second value (#[...]) is a byte-code object. However, as the comment notes, this is subtly broken. A cleaner and correct way to solve all this is with a named function. The damage from eval-after-load will have been (mostly) minimized.
(defun my-simple-httpd-hook ()
  (push '("c" . "text/plain") httpd-mime-types))

(eval-after-load 'simple-httpd
  '(funcall #'my-simple-httpd-hook))
But, let’s go back to the anonymous function solution. What was broken about it? It all has to do with evaluating function objects.
So what happens when we evaluate an expression like the one above with eval? Here's what it looks like again:

(funcall #[0 <bytecode> [httpd-mime-types ("c" . "text/plain")] 2])

eval notices it's been given a non-empty list, so it's probably a function call. The first element is the name of the function to be called (funcall), and the remaining elements are its arguments. But each of these elements must be evaluated first, and the results of that evaluation become the arguments.
Any value that isn’t a list or a symbol is self-evaluating. That is, it evaluates to its own value:
(eval 10) ;; => 10
If the value is a symbol, it’s treated as a variable. If the value is a list, it goes through the function call process I’m describing (or one of a number of other special cases, such as macro expansion, lambda expressions, and special forms).
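For instance, a symbol evaluates to the value of the variable it names (x here is just a scratch variable):

(setq x 10)
(eval 'x)
;; => 10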
So eval recurses on the function object in the argument position. A byte-code function object is not a list or a symbol, so it's self-evaluating. No problem:
;; Byte-code objects are self-evaluating
(let ((x (byte-compile (lambda ()))))
  (eq x (eval x)))
;; => t
What if this code wasn’t compiled? Rather than a byte-code object, we’d have some other kind of function object for the interpreter. Let’s examine the dynamic scope (shudder) case. Here, a lambda appears to evaluate to itself, but appearances can be deceiving:
(eval (lambda ()))
;; => (lambda ())
However, this is not self-evaluation. Lambda expressions are not self-evaluating. It’s merely coincidence that the result of evaluating a lambda expression looks like the original expression. This is just how the Emacs Lisp interpreter is currently implemented and, strictly speaking, it’s an implementation detail that just so happens to be mostly compatible with byte-code objects being self-evaluating. It would be a mistake to rely on this.
Instead, dynamic scope lambda expression evaluation is idempotent. Applying eval to the result will return an equal, but not identical (eq), expression. In contrast, a self-evaluating value is also idempotent under evaluation, but with identity (eq) preserved:
;; Not self-evaluating:
(let ((x '(lambda ())))
  (eq x (eval x)))
;; => nil

;; Evaluation is idempotent:
(let ((x '(lambda ())))
  (equal x (eval x)))
;; => t

(let ((x '(lambda ())))
  (equal x (eval (eval x))))
;; => t
So, with dynamic scope, the subtly broken backquote example will still work, but only by sheer luck. Under lexical scope, the situation isn’t so lucky:
;;; -*- lexical-binding: t; -*-
(lambda ())
;; => (closure (t) nil)
These interpreted lambda functions are neither self-evaluating nor idempotent. Passing t as the second argument to eval tells it to use lexical scope, as shown below:
;; Not self-evaluating:
(let ((x '(lambda ())))
  (eq x (eval x t)))
;; => nil

;; Not idempotent:
(let ((x '(lambda ())))
  (equal x (eval x t)))
;; => nil

(let ((x '(lambda ())))
  (equal x (eval (eval x t) t)))
;; error: (void-function closure)
I can imagine an implementation of Emacs Lisp where dynamic scope lambda expressions are in the same boat, where they’re not even idempotent. For example:
;;; -*- lexical-binding: nil; -*-
(lambda ())
;; => (totally-not-a-closure ())
Most Emacs Lisp would work just fine under this change, and only code that makes some kind of logical mistake — where there’s nested evaluation of lambda expressions — would break. This essentially already happened when lots of code was quietly switched over to lexical scope after Emacs 24. Lambda idempotency was lost and well-written code didn’t notice.
There's a temptation here for Emacs to define a closure function or special form that would allow interpreter closure objects to be either self-evaluating or idempotent. This would be a mistake. It would only serve as a hack that covers up logical mistakes that lead to nested evaluation. Much better to catch those problems early.
So how do we fix the subtly broken example? With a strategically placed quote right before the comma.
(eval-after-load 'simple-httpd
  `(funcall ',(lambda ()
                (push '("c" . "text/plain") httpd-mime-types))))
So the form passed to eval-after-load evaluates to one of the following:

;; Compiled:
(funcall (quote #[...]))

;; Dynamic scope:
(funcall (quote (lambda () ...)))

;; Lexical scope:
(funcall (quote (closure (t) () ...)))
The quote prevents eval from evaluating the function object, which would be either needless or harmful. There's also an argument to be made that this is a perfect situation for a sharp-quote (#'), which exists to quote functions.
Imagine being able to work fewer hours during a difficult time in your life, without having to quit your job or interrupt your career. In Germany, for many workers, that's now a reality.
(Image credit: NPR)
Between Kepler and the ensuing K2 mission, we’ve had quite a haul of exoplanets. Kepler data have been used to confirm 2341 exoplanets, with NASA declaring 30 of these as being less than twice Earth-size and in the habitable zone. K2 has landed 307 confirmed worlds of its own. K2 offers a different viewing strategy than Kepler’s fixed view of over 150,000 stars. While the transit method is still at work, K2 pursues a series of observing campaigns, its fields of view distributed around the ecliptic plane, and with photometric precision approaching the original.
Why the relationship with the ecliptic? Remember that what turned Kepler into K2 was the failure of two reaction wheels, the second failing less than a year after the first. Working in the ecliptic plane minimizes the torque produced by solar wind pressure, thus minimizing pointing drift and allowing the spacecraft to be controlled by its thrusters and remaining two reaction wheels. Each K2 campaign is limited to about 80 days because of sun angle constraints.
Image: After detecting the first exoplanets in the 1990s, astronomers have learned that planets around other stars are the rule rather than the exception. There are likely hundreds of billions of exoplanets in the Milky Way alone. Credit: ESA/Hubble/ESO/M. Kornmesser.
More K2 planets have now turned up in an international study led by Andrew Mayo (National Space Institute, Technical University of Denmark). The research, underway since the first release of K2 data in 2014, uncovered 275 planet candidates, of which 149 were validated. 56 of the latter had not previously been detected, while 39 had already been identified as candidates, and 53 had already been validated, with one previously classed as a false positive.
Overall, the work increases the validated K2 planet sample by almost 50 percent, while increasing the K2 candidate sample by 20 percent. What stands out here is not so much the trove of new planets but the validation techniques brought to bear, which were applied to a large sample as part of a framework developed to increase validation speed. From the paper:
This research will also be useful even after the end of the K2 mission. The upcoming TESS mission (Ricker et al. 2015) is expected to yield more than 1500 total exoplanet discoveries, but it is also estimated that TESS will detect over 1000 false positive signals (Sullivan et al. 2015). Even so, one (out of three) of the level one baseline science requirements for TESS is to measure the masses of 50 planets with Rp < 4 R⊕. Therefore, there will need to be an extensive follow-up program to the primary photometric observations conducted by the spacecraft, including careful statistical validation to aid in the selection of follow-up targets. The work presented here will be extremely useful in that follow-up program, since only modest adjustments will allow for the validation of planet candidate systems identified by TESS rather than K2.
The work involves not only analysis of the K2 light curves but also follow-up spectroscopy and high-contrast imaging of candidate host stars from the ground. The processes of data reduction, candidate identification, and statistical validation described here, built around a statistical validation tool called vespa, clearly have application well beyond K2.
The paper is Mayo et al., “275 Candidates and 149 Validated Planets Orbiting Bright Stars in K2 Campaigns 0-10,” accepted at the Astronomical Journal (preprint).
I've written several posts that celebrate Org mode for its ability to document various tasks in a literate way, execute the code, and gather the results. The nice thing about this process is that you have everything together: the prose describing the task, the code that performs it, and the results it produces. They're all in a single file that makes it easy to share and reproduce.
My favorite example of this is Howard Abrams’ post and video on Literate DevOps. His presentation is specialized to DevOps but the techniques he uses are easily adaptable to other types of tasks.
Alex Bennée has a nice example of using these techniques for running benchmarks. The problems with running benchmarks are similar to those encountered in DevOps, so the same sort of "literate" solution can be applied. The post doesn't show the entire benchmarking file, but Bennée does provide a pointer to it. The file is on GitHub and therefore gets reformatted, so if you want to see what's really happening, click on the Raw button to see the actual Org file.
If you’ve been a developer for a while, your tendency—or at least mine—is to just run the necessary commands from the command line and be done with it. That’s quick and easy but now you have to remember exactly what you did to share the results or to repeat the task at a later date. I’ve been making a concerted effort to slow down and do these tasks in a literate fashion using Org mode. The extra time and effort pay off in the end and you can even link to the file from your lab notebook—you are keeping one, right?—so that everything is documented.
It has no equal. Sirius is the brightest star in the entire sky, twice as bright as its nearest competitor, Canopus, in the southern constellation Carina. As the most brilliant, it’s spectacularly easy to identify. A winter star for northern hemisphere skywatchers, it first appears in the late evening sky in November. Now in February, Sirius already stands two fists high in the southeastern sky in evening twilight.
It would be a fun observing project to see how soon after sunset you might see it. I suspect that once you're familiar with where it is in the sky, you might catch it before sunset this time of year. Sirius gets its name from the Greek word for "searing" or "scorching" and refers to its pre-dawn rising in August during the hottest time of the year. Its heated shimmer was thought to parch the land, cause lethargy and drive dogs mad, hence its nickname, the Dog Star.
But if the ancients knew how far Sirius lay from Earth, they might have seen this stellar diamond differently. At 8.7 light years (some 52.2 trillion miles) distant, the trickle of heat added by Sirius to the Earth is virtually nil. Yet as stars go, it's close to Earth, one of the main reasons it dazzles. Sirius is also whiter and hotter than the sun, with a surface temperature of 17,300°F (9,600°C) compared to the sun's 10,000°F (5,500°C). When you combine that extra heat with the star's girth — 1.75 times that of the sun — it's easy to understand why the star appears so bright in the night sky.
Sirius has a companion star called the Pup, a tiny but exceedingly dense white dwarf only about 7,000 miles (11,200 km) across (smaller than the Earth!) but with 98% the mass of the sun. With its matter squeezed so tightly, the Pup’s gravitational pull is 350,000 times greater than Earth’s. A 150 pound person standing on the star would weigh 50,000,000 pounds!
A white dwarf is the end of the road for average-size stars like the sun. In the distant future, when the sun runs out of nuclear fuel to "burn," its core will contract to form a white dwarf, while the solar atmosphere will poof off into space to form a temporary wreath of glowing gas called a planetary nebula. Someday, the same will happen to our brilliant pal Sirius, and instead of one tiny star orbiting a big one, twin white dwarfs will whirl about each other.
Stars and constellations only appear steady and unchangeable because we see them during a lifetime, a mere blink compared to all the time that came before and all that will come to be. Not only are all stars in motion but they’re also growing old and being born just like people.
Sirius heads up the constellation Canis Major the Greater Dog. With a little imagination and a reasonably dark sky, you can easily picture a dog jumping up on its hind legs to greet neighboring Orion the Hunter. Much of Canis Major lies within the fuzzy band of the Milky Way with lots of star clusters and nebulae visible in both binoculars and telescopes. Some dark night, when Sirius is twinkling, take out your binoculars, start at Dog Star and slowly sweep across and up and down the area.
Twinkling is what Sirius is most famous for. As the brightest star, it displays the ever-present turbulence in the atmosphere best. From mid-latitudes, the star spends a good amount of time in the lower part of the southern sky, and the lower a star is, the more atmosphere its light has to penetrate to reach our eyes. Various pockets of air at different temperatures “focus” starlight this way and that like small lenses. We see the shifts as twinkling. Since white light is made of every color of the rainbow, one pocket might send a bit of red our way, then blue, yellow or green in totally random order. That’s the reason that Sirius not only twinkles but does it in color.
On especially turbulent nights, the effect is mesmerizing. Clear skies!
For this week’s Hack Chat, we’re talking about trusting the autorouter. The autorouter is just a tool, and like any tool, it will do exactly what you tell it. The problem, therefore, is being smart enough to use the autorouter.
Our guest for this week’s Hack Chat is Ben Jordan, Director of Community Tools and Content at Altium. Ben is a Computer Systems engineer, with 25 years experience in board-level hardware and embedded systems design. He picked up a soldering iron at 8, and wrote some assembly at 12. He’s also an expert at using an autorouter successfully.
American Heritage Dictionary editor Steve Kleinedler was recently interviewed by Sarah Grey for Conscious Style Guide. They discuss pronoun usage, the use of die by suicide in place of commit suicide, and Latinx, among several other topics.
Thank you for visiting the American Heritage Dictionary at ahdictionary.com!
Here’s everything I know about it.
People are harassing women by delivering anonymous packages purchased from Amazon.
On the one hand, there is nothing new here. This could have happened decades ago, pre-Internet. But the Internet makes this easier, and the article points out that using prepaid gift cards makes this anonymous. I am curious how much these differences make a difference in kind, and what can be done about it.
(This article is written jointly with my colleague Kyle Jamieson, who specializes in wireless networks.)
[See also: The myth of the hacker-proof voting machine]
The ES&S model DS200 optical-scan voting machine has a cell-phone modem that it uses to upload election-night results from the voting machine to the “county central” canvassing computer. We know it’s a bad idea to connect voting machines (and canvassing computers) to the Internet, because this allows their vulnerabilities to be exploited by hackers anywhere in the world. (In fact, a judge in New Jersey ruled in 2009 that the state must not connect its voting machines and canvassing computers to the internet, for that very reason.) So the question is, does DS200’s cell-phone modem, in effect, connect the voting machine to the Internet?
The vendor (ES&S) and the counties that bought the machine say, “no, it’s an analog modem.” That’s not true; it appears to be a Multitech MTSMC-C2-N3-R.1 (Verizon C2 series modem), a fairly complex digital device. But maybe what they mean is “it’s just a phone call, not really the Internet.” So let’s review how phone calls work:
The voting machine calls the county-central computer: its cell-phone modem connects to the nearest tower, and the call is carried through Verizon's "Autonomous System" (AS), part of the packet-switched Internet, to a cell tower (or land-line station) near the canvassing computer.
Verizon attempts to control access to the routers internal to its own AS, using firewall rules on the border routers. Each border router runs (probably) millions of lines of software; as such, it is subject to bugs and vulnerabilities. If a hacker finds one of these vulnerabilities, he can modify messages as they transit the AS network.
Do border routers actually have vulnerabilities in practice? Of course they do! US-CERT has highlighted this as an issue of importance. It would be surprising if the Russian mafia or the FBI were not equipped to exploit such vulnerabilities.
Even easier than hacking through router bugs is just setting up an imposter cell-phone “tower” near the voting machine; one commonly used brand of these, used by many police departments, is called “Stingray.”
I've labelled the hacker as "MitM" for "man-in-the-middle." He is well positioned to alter vote totals as they are uploaded. Of course, he will do better to put his Stingray near the county-central canvassing computer, so he can hack all the voting machines in the county, not just one near his Stingray.
So, in summary: phone calls are not unconnected to the Internet; the hacking of phone calls is easy (police departments with Stingray devices do it all the time); and even between the cell-towers (or land-line stations), your calls go over parts of the Internet. If your state laws, or a court with jurisdiction, say not to connect your voting machines to the Internet, then you probably shouldn’t use telephone modems either.
My first fully shielded house light, with one more to go. I purchased this one at Lowe’s on Sunday, and put it up today. Threw away the old standard brass coach light which spewed light in your eyes when coming in the front door. You can actually see the house much better when driving up the street. All the light shines down in a nice concentrated beam. I need one more for my back door. The fixture is well made and was very simple to install….
Tosol we planned to finish up our suite of observations on the ‘Ogunquit Beach’ sand sample that was off-loaded from the rover over the weekend. As described in yesterday’s blog, the rover had dumped two piles of the Ogunquit Beach sample – a pre-sieved and post-sieved portion – on the ground in front of us. Because we spent yestersol observing the pre-sieved dump pile with MAHLI and APXS, we are now ready to blast that pile around with ChemCam‘s laser, which we will do in today’s plan!
We will also continue to collect observations to characterize the bedrock in front of us. We have planned several APXS and MAHLI observations on the area around ‘Lake Orcadie.’ The small offsets between these data and previous measurements will enable us to understand small scale variability in the chemistry of the target. We will also take an additional ChemCam LIBS observation and associated Mastcam documentation image of a bedrock target ‘Peterculler.’
Written by Abigail Fraeman, Planetary Geologist at NASA’s Jet Propulsion Laboratory
A week after 17 people were murdered in a mass shooting at Marjory Stoneman Douglas High School in Parkland, Florida, teenagers across South Florida, in areas near Washington, D.C., and in other parts of the United States walked out of their classrooms to stage protests against the horror of school shootings and to advocate for gun law reforms. Student survivors of the attack at Marjory Stoneman Douglas High School traveled to their state Capitol to attend a rally, meet with legislators, and urge them to do anything they can to make their lives safer. These teenagers are speaking clearly for themselves on social media, speaking loudly to the media, and they are speaking straight to those in power—challenging lawmakers to end the bloodshed with their “#NeverAgain” movement.
Our guest, Tyler Cowen, has insights into a ridiculously wide range of subjects. Our conversation touches everything from Afrofuturist flicks to the mouth-numbing qualities of the Sichuan peppercorn.
(Image credit: NPR)
While working on a major refactor of QEMU's softfloat code I've been doing a lot of benchmarking. It can be quite tedious work, as you need to be careful that you've run the correct steps on the correct binaries, and keeping notes is important. It is a task that cries out for scripting, but that in itself can be a compromise, as you end up stitching a pipeline of commands together in something like Perl. You may script it all in a language designed for this sort of thing, like R, but then find your final upload step is a pain to implement.
One solution to this is to use a literate programming workbook like this one. Literate programming is a style where you interleave your code with natural prose describing the steps you go through. This is different from simply having well-commented code in a source tree. For one thing, you do not have to leap around a large code base, as everything you need is in the file you are reading, from top to bottom. There are many solutions out there, including various Python-based examples. Of course, being a happy Emacs user, I use one of its stand-out features, org-mode, which comes with multi-language org-babel support. This allows me to document my benchmarking while scripting up the steps in a variety of "languages" depending on my needs at the time. Let's take a look at the first section:
1 Binaries To Test
Here we have several tables of binaries to test. We refer to the current benchmarking set from the next stage, Run Benchmark. For a final test we might compare the system QEMU with a reference build as well as our current build.
| Binary                                                                       | title            |
|------------------------------------------------------------------------------+------------------|
| /usr/bin/qemu-aarch64                                                        | system-2.5.log   |
| ~/lsrc/qemu/qemu-builddirs/arm-targets.build/aarch64-linux-user/qemu-aarch64 | master.log       |
| ~/lsrc/qemu/qemu.git/aarch64-linux-user/qemu-aarch64                         | softfloat-v4.log |
Well, that is fairly self-explanatory. These are named org-mode tables which can be referred to in other code snippets and passed in as variables. So the next job is to run the benchmark itself:
2 Run Benchmark
This runs the benchmark against each binary we have selected above.

import subprocess
import os

runs = []
for qemu, logname in files:
    cmd = "taskset -c 0 %s ./vector-benchmark -n %s | tee %s" % (qemu, tests, logname)
    subprocess.call(cmd, shell=True)
    runs.append(logname)
return runs
So why use python as the test runner? Well, the truth is that whenever I end up munging arrays in shell script I forget the syntax and end up jumping through all sorts of hoops, so it is easier just to have some simple python. I use python again later to read the data back into an org-table so I can pass it to the next step, graphing.
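As a rough sketch of what that read-back block can look like, here is one way to do it. The log format assumed here, one "test-name nanoseconds" pair per line, is purely an illustration, not the benchmark's actual output:

def logs_to_table(runs):
    # Collect "name value" pairs from each run log; one row per benchmark,
    # one column per log file, which org-babel renders as an org table.
    rows = {}
    for logname in runs:
        with open(logname) as log:
            for line in log:
                parts = line.split()
                if len(parts) == 2:
                    name, value = parts
                    rows.setdefault(name, []).append(float(value))
    return [[name] + values for name, values in sorted(rows.items())]

With the results in an org table, the graphing step is driven by GNU Plot: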
set title "Vector Benchmark Results (lower is better)"
set style data histograms
set style fill solid 1.0 border lt -1
set xtics rotate by 90 right
set yrange [:]
set xlabel noenhanced
set ylabel "nsecs/Kop" noenhanced
set xtics noenhanced
set ytics noenhanced
set boxwidth 1
set xtics format ""
set xtics scale 0
set grid ytics
set term pngcairo size 1200,500
plot for [i=2:5] data using i:xtic(1) title columnhead
This is a GNU Plot script which takes the data and plots an image from it. org-mode takes care of the details of marshalling the table data into GNU Plot so all this script is really concerned with is setting styles and titles. The language is capable of some fairly advanced stuff but I could always pre-process the data with something else if I needed to.
Finally I need to upload my graph to an image hosting service to share with my colleagues. This could be done with an elaborate curl command, but I have another trick at my disposal thanks to the excellent restclient-mode. This mode is actually designed for interactive debugging of REST APIs, but it is also easy to use from an org-mode source block. So the whole thing looks like an HTTP session:
:client_id = feedbeef

# Upload images to imgur
POST https://api.imgur.com/3/image
Authorization: Client-ID :client_id
Content-type: image/png

< benchmark.png
Finally, because the above dumps all the headers when run (which is very handy for debugging) and I usually only want the URL, I can extract it simply enough in elisp:
#+name: post-to-imgur
#+begin_src emacs-lisp :var json-string=upload-to-imgur()
  (when (string-match
         (rx "link" (one-or-more (any "\":" whitespace))
             (group (one-or-more (not (any "\"")))))
         json-string)
    (match-string 1 json-string))
#+end_src
The :var line calls the restclient-mode block automatically and passes its result to the elisp, which then extracts the final URL. Given a response containing something like "link": "https://i.imgur.com/….png", the rx pattern captures everything after the quoted key up to the closing quote.
And there you have it, my entire benchmarking workflow document in a single file which I can read through tweaking each step as I go. This isn’t the first time I’ve done this sort of thing. As I use org-mode extensively as a logbook to keep track of my upstream work I’ve slowly grown a series of scripts for common tasks. For example every patch series and pull request I post is done via org. I keep the whole thing in a git repository so each time I finish a sequence I can commit the results into the repository as a permanent record of what steps I ran.
If you want even more inspiration I suggest you look at John Kitchin’s scimax work. As a publishing scientist he makes extensive use of org-mode when writing his papers. He is able to combine the main prose with the code that plots the graphs and tables in a single source document, from which his camera-ready documents are generated. Should he ever need to reproduce any work, his exact steps are all there in the source document. Yet another example of why org-mode is awesome.
How do you reinvent something as simple as the wooden shipping pallet?
(Image credit: Elvert Barnes/Flickr)
Planet’s mission from the beginning was to disrupt space. But the rise of cloud computing in Silicon Valley six years ago gave us another opportunity on the data side: to build our whole ground segment in a cloud native manner. Everything Planet has built – from the ground stations and our data processing pipeline to web APIs and browser-based front-ends and now our analytic-derived data products – was designed first for the cloud.
Over the years, we have helped our customers and partners move toward a cloud-first consumption model. Through building out all those components and working with different partners, it’s become clear that there’s a great deal of systems thinking and engineering that makes our approach distinctive. Our internal architecture runs 30,000 virtual machines at once to process millions of images every day on the cloud. Making the same architecture available to customers and partners enables them to deliver value to their own customers at the same scale.
I recently wrapped up a seven-part series on Planet Stories that looks at the overall landscape and details Planet’s approach, which we call “Cloud Native Geospatial.” The core question that the first post asks is: “What would the geospatial landscape look like if we built everything from the ground up on the cloud?” Although most software vendors offer a cloud version of their software, the main workflows have remained fundamentally static the past couple of decades.
From a technology standpoint, doing everything on the cloud enables massive increases in scalability and speed, making analytics possible at a country and even global level. Even more interesting is how Cloud Native Geospatial enables more collaboration, greater ease in finding and accessing information, and an ability to process and understand information in near real time.
From there the series explores the Cloud Optimized GeoTIFF, a file format that Planet uses extensively to stream data to users and serves as the main building block of Cloud Native Geospatial Architectures. I argue that the Cloud Optimized GeoTIFF is the most important enabling technology for cloud native geospatial.
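To make that concrete, here is a minimal sketch (not Planet’s implementation) of the property that makes the format cloud friendly: because a Cloud Optimized GeoTIFF keeps its header and tile index at the front of the file, a client can fetch just the bytes it needs over plain HTTP range requests. The URL below is hypothetical.

import requests

url = "https://example.com/scene.tif"  # hypothetical COG location

# Fetch only the first 16 KB, which in a COG holds the TIFF header and
# tile index rather than pixel data.
head = requests.get(url, headers={"Range": "bytes=0-16383"})
print(head.status_code)  # 206 Partial Content if the server honors ranges

# A real reader would parse tile offsets out of that header, then issue
# further ranged GETs for just the tiles covering its area of interest.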
To help readers understand what Cloud Native Geospatial means in practical terms, the next posts offer a deep dive into Planet’s architecture as well as Open Aerial Map’s, and then places the main tenets of those architectures into a clear definition of cloud native geospatial. The series closes out with my thoughts on the opportunity to get metadata and data provenance right in the cloud, and presents a vision for a robust ecosystem of diverse but interoperable Cloud Native Geospatial architectures.
We hope this series gives insight into how we’ve been thinking about a cloud native approach and what we’ve been building. We look forward to more collaboration with others in the geospatial industry to usher in an age where people spend their time focused on the actual problems, instead of the pain and manual labor of dealing with large amounts of spatial data.
Read the full series, and share your thoughts or experiences with cloud-native geospatial.
A new DragonFly user found setting keybell=”visual” in rc.conf caused a crash on the next terminal beep. In this case, the user is Deaf and so prefers something other than an auditory bell. The issue is fixed in development and release, but I always like seeing variations on “many eyes make bugs shallow“.
Need 100 square inches or more?
Our 2 Layer Medium Run service is $1 per square inch and ships in 15 calendar days or fewer:
$1 per square inch, 100 square inch minimum. You can have as many different designs as you want, as long as each design is ordered in a multiple of 10 boards.
For example, if you had two different 5 square inch designs, you could order 10 of each for a total cost of $100.
100 square inches is just the minimum order. You can order as much as you’d like beyond that.
Fabrication time can vary for medium run orders, but boards will ship in 15 calendar days or fewer.
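As a quick sanity check of those pricing rules, here is a minimal sketch of the cost calculation (the function name and structure are ours, not OSH Park’s):

def medium_run_cost(designs):
    # designs: list of (square_inches, quantity) pairs.
    # Each design must be ordered in a multiple of 10 boards.
    assert all(qty % 10 == 0 for _, qty in designs)
    total_inches = sum(area * qty for area, qty in designs)
    # $1 per square inch, 100 square inch minimum order.
    assert total_inches >= 100, "below the 100 square inch minimum"
    return total_inches * 1.00

# The worked example above: two 5 sq in designs, 10 boards each = $100.
print(medium_run_cost([(5, 10), (5, 10)]))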
You can get a quote, approve a design, and pay for an order at OSH Park.
2 Layer PCB Specs
Mike Zamansky has posted another video in his Using Emacs Series. This video looks at playing music with MPD and Emacs. Of course, Emacs can’t play music on its own but it can be used to control music players, effectively moving music playing functions into Emacs.
MPD provides the server side of a music playing client/server system so you have to choose a client (the player) as well. There are lots of clients available and Zamansky considers three that leverage the MPC player and integrate well with Emacs. He tried but didn’t like the built-in Emacs MPC mode and Mingus. The video shows them both in action so you can decide for yourself if they work for you. Finally, Zamansky demonstrates a player that he did like: simple-mpc. It has a simple interface that is explicitly modeled after that of mu4e so it will be familiar to those using mu4e for their email.
All of this is really intended for Linux but as far as I can tell it’s possible to make it work with macOS as well. Apple, of course, has the iTunes player—which despite what many say, I find easy to use and flexible. I keep my music collection (mainly) on my iMac but AirPlay makes it possible to stream music to my laptop or other Apple device. All in all, I’m happy with the Apple solution.
Still, I like moving everything possible into Emacs so I’ve added a TODO to investigate getting MPD running on my Mac. I know that MPD can run on a different machine from the player so I could still have my centralized music collection. If anyone has any experience with MPD on Macs, please leave a comment.
The video runs just short of 15 minutes so you can probably watch it on a break. As usual with Zamansky’s videos, it’s full of information and definitely worth watching.
Frank Wilczek has used the neologism ‘quintelligence’ to refer to the kind of sentience that might grow out of artificial intelligence and neural networks using genetic algorithms. I seem to remember running across Wilczek’s term in one of Paul Davies’s books, though I can’t remember which. In any case, Davies himself has speculated about what such intelligences might look like, located in interstellar space and exploiting ultracool temperatures.
A SETI target? If so, how would we spot such a civilization?
Wilczek is someone I listen to carefully. Now at MIT, he’s a mathematician and theoretical physicist who was awarded the Nobel Prize in Physics in 2004, along with David Gross and David Politzer, for work on the strong interaction. He’s also the author of several books explicating modern physics to lay readers. I’ve read his The Lightness of Being: Mass, Ether, and the Unification of Forces (Basic Books, 2008) and found it densely packed but rewarding. I haven’t yet tackled 2015’s A Beautiful Question: Finding Nature’s Deep Design.
Perhaps you saw Wilczek’s recent piece in The Wall Street Journal, sent my way by Michael Michaud. Here we find the scientist going at the Fermi question that we have tackled so many times in these pages, always coming back to the issue that we have a sample of one when it comes to life in the universe, much less technological society, and our sample is right here on Earth. For the record, Wilczek doesn’t buy the idea that life is unusual; in fact, he states not only that he thinks life is common, but also makes the case for advanced civilizations:
Generalized intelligence, that produces technology, took a lot longer to develop, however, and the road from amoebas to hominids is littered with evolutionary accidents. So maybe we’re our galaxy’s only example. Maybe. But since evolution has supported many wild and sophisticated experiments, and because the experiment of intelligence brings spectacular adaptive success, I suspect the opposite. Plenty of older technological civilizations are out there.
Civilizations may, of course, develop and then, in Wilczek’s phrase, ‘flame out,’ just as Edward Gibbon would describe the fall of Rome as “the natural and inevitable effect of immoderate greatness. . . . The stupendous fabric yielded to the pressure of its own weight.” We can pile evidence onto that one, from the British and Spanish empires to the decline of numerous societies like the Aztec and the Mayan. Catastrophe is always a possible human outcome.
But is it an outcome for non-human technological societies? Wilczek doubts that, preferring to hark back to the idea with which we opened. The most advanced quantum computation — quintelligence — he believes, works best where it is cold and dark. And a civilization based on what we would today call artificial intelligence may be one that basically wants to be left alone.
Image: The Local Group of galaxies. Is the most likely place for advanced civilization to be found in the immensities between stars and galaxies? Credit: Andrew Z. Colvin.
All this is, like all Fermi question talk, no more than speculation, but it’s interesting speculation, for Wilczek goes on to discuss the notion that one outcome for a hyper-advanced civilization may be to embrace the small. After all, the speed of light is a limit to communications, and effective computation involves communications that are affected by that limit. The implication: Fast AI thinking works best when it occurs in relatively small spaces. Thus:
Consider a computer operating at a speed of 10 gigahertz, which is not far from what you can buy today. In the time between its computational steps, light can travel just over an inch. Accordingly, powerful thinking entities that obey the laws of physics, and which need to exchange up-to-date information, can’t be spaced much farther apart than that. Thinkers at the vanguard of a hyper-advanced technology, striving to be both quick-witted and coherent, would keep that technology small.
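The arithmetic behind that “just over an inch” figure is easy to check with a quick sketch:

# Distance light travels during one cycle of a 10 GHz clock.
c = 299_792_458       # speed of light, m/s
f = 10e9              # clock frequency, Hz
step = c / f          # ~0.03 m per computational step
print(step / 0.0254)  # ~1.18, i.e. just over an inch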
A civilization based, then, on information processing would achieve its highest gains by going small in search of the highest levels of speed and integration. We’re now back out in the interstellar wastelands, which may in this scenario actually contain advanced and utterly inconspicuous intelligences. As I mentioned earlier, it’s hard to see how SETI finds these.
Unstated in Wilczek’s article is a different issue. Let’s concede the possibility of all but invisible hyper-intelligence elsewhere in the cosmos. We don’t know how long it would take to develop such a civilization, moving presumably out of its initial biological state into the realm of computation and AI. Surely along the way, there would still be societies in biological form leaving detectable traces of themselves. Or should we assume that the Singularity really is near enough that even a culture like ours may be succeeded by AI in the cosmic blink of an eye?
Edmund Scientific was the company that spawned my interest in amateur astronomy, from the following books to my first serious telescope: an Edmund 4.25-inch EQ reflector, a Palomar Jr.
It’s really too bad that books like the ones pictured below are no longer available. It was the “Edmund Sky Guide” that taught me all about Sirius and its companion. However, it would be almost forty years before I was finally able to see that companion.
The mid-to-late ’60s through the ’70s were what I call the golden years of amateur astronomy. The days when 6-inch reflectors ruled the day (or night), and fortunate indeed was the amateur who owned an Edmund Scientific or Criterion 6-inch f/8 EQ reflector.
The days when the solitary observer spent many nights in their backyard. The days when every amateur wanted to see all of the Messier objects.
Being really young, always feeling great, no responsibilities, dreaming of a better telescope, or another Kellner eyepiece.
Now what more could anyone ask for…..
No situation is ever made better by yelling “Geez.” Just saying the word instantly robs you of any gravitas or illusion of having the upper hand.
Here, I’ll prove it. Picture Liam Neeson saying the following.
“I can tell you I don't have money... but what I do have are a very particular set of skills. Skills I have acquired over a very long career. Skills that make me a nightmare for people like you. I mean, geez!”
I think the problem is that everyone knows that “Geez” is the midway point between saying “Gee,” and shouting “Jesus!” Saying “geez” tells the listener that you really want to curse, but you can’t quite bring yourself to do it, and if you can’t bring yourself to say something drastic, what are the odds that you can bring yourself to do something drastic?
Of course, I could be wrong. Geez may have some deep meaning. For all I know, when you yell “Geez” you could be calling out to Saint Geez, the patron saint of ineffectual anger.
Over the weekend, Curiosity successfully off-loaded the sample she acquired previously, the ‘Ogunquit Beach’ sand sample, in preparation for what the science team hopes is acquisition of a new *drilled* rock sample very soon. Curiosity has a sophisticated sample handling and preparation system, known as the Sample Acquisition/Sample Processing and Handling (SA/SPaH, ‘saw-spa’) system. SA/SPaH has the ability to divide a drilled or scooped sample into different ranges of particle sizes. In the case of the Ogunquit Beach sample, the finest particle size range corresponded to the material that was delivered to both SAM and CheMin. It is known as the post-sieve sample. The larger particle size range material, which was just along for the ride within SA/SPaH, is known as the pre-sieve sample.
The first round of MAHLI imaging of both the pre-sieve and post-sieve samples, dumped into separate piles in the workspace, was successful over the weekend, as was the APXS analysis of the post-sieve pile. In today’s plan, MAHLI will return to both dump piles for closer approach images to better resolve the fine sand particles in each pile, and APXS will analyze the pre-sieve dump pile. ChemCam will get a turn at the dump piles, acquiring reflectance spectra from the pre-sieve dump pile and a raster over the post-sieve dump pile. ChemCam is kind enough to wait to shoot the dump piles until MAHLI and APXS look closely at them so the pile is not blasted away by the laser!
ChemCam will also acquire a raster over the dark gray pebble target ‘Black Cuillin,’ which is one of the larger pebble targets strewn among the bedrock in the workspace. Curiosity will squeeze in some looks skyward, measuring dust load in the atmosphere and acquiring movies to look for dust devils.
Written by Michelle Minitti, Planetary Geologist at Framework
Russia spent 73 million rubles a month to influence an American election. But what did they get for their money?
(Image credit: NPR)
Over the course of several recent months, Reuters photographer Stéphane Mahé visited and photographed a farmer named Jean-Bernard Huon on his farm in western France. Huon, now 70, grew up here, and deliberately lives a traditional, non-mechanized farm life, favoring ox teams over tractors. From a Reuters article: “When farm machinery revolutionized French agriculture in the years after World War II, a young Jean-Bernard Huon turned his back on the new technology. Half a century later, in a corner of southern Brittany on France’s west coast, Huon still uses oxen to plow his fields, determined to preserve an ancestral, peasant way of life.”
A few years ago, while managing the power management product line at work, I started an initiative with the development team to optimize new products by achieving ESE: Equations = Simulations = Experimentation. The idea centers on the engineering goal of verifying that the system design equations match the simulation results and, ultimately, the experimental results.
When these three items match, not only do you understand a system, but you have the best chance to optimize a better solution. I’ll have to say that in today’s mad dash to get new products out the door, achieving ESE is not always possible. But to break through the ordinary and have a chance for the extraordinary, I would say this is a requirement. Since this power supply is just a fun design for an upcoming nixie tube clock project of mine, I have the time to achieve ESE.
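As a tiny illustration of the “Equations” leg, here is a sketch assuming, purely for example, an ideal boost converter topology (the actual topology and values are whatever the design files specify):

# Ideal continuous-conduction boost converter: D = 1 - Vin/Vout.
# This design-equation prediction is what later gets compared against
# the simulation and the bench measurement to close the ESE loop.
v_in = 5.0     # volts, from a 5 V USB charger
v_out = 170.0  # volts, a typical nixie supply rail (assumed here)
duty = 1 - v_in / v_out
print(f"Predicted duty cycle: {duty:.3f}")  # about 0.971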
The updated schematic, BOM, KiCad layout, and design files are on GitHub:
surfncircuits has shared the board on OSH Park:
Here is a quick video showing six IN-4 nixie tubes being powered by a 5 V iPhone charger:
A Hubble project called Outer Planet Atmospheres Legacy (OPAL) has been producing long-term information about the four outer planets at ultraviolet wavelengths, a unique capability that has paid off in deepening our knowledge of Neptune. If you kept pace with Voyager 2 at Neptune, you’ll recall that the spacecraft found huge dark storms in the planet’s atmosphere. Neptune proved to be more atmospherically active than its distance from the Sun would have suggested, and Hubble found another two storms in the mid-1990s that later vanished.
Image: Neptune’s Great Dark Spot, a large anticyclonic storm similar to Jupiter’s Great Red Spot, observed by NASA’s Voyager 2 spacecraft in 1989. The image was shuttered 45 hours before closest approach at a distance of 2.8 million kilometers. The smallest structures that can be seen are of an order of 50 kilometers. The image shows feathery white clouds that overlie the boundary of the dark and light blue regions. Credit: NASA/JPL.
Now we have evidence of another storm, discovered by Hubble in 2015 and evidently vanishing before our eyes. This storm was once large enough to have spanned the Atlantic, and it is visible evidently because it is composed primarily of hydrogen sulfide drawn up from the deeper atmosphere. UC-Berkeley’s Joshua Tollefson, a co-author of the new paper on this work, notes that the storm’s darkness is relative: “The particles themselves are still highly reflective; they are just slightly darker than the particles in the surrounding atmosphere.”
Image: This series of Hubble Space Telescope images taken over 2 years tracks the demise of a giant dark vortex on the planet Neptune. The oval-shaped spot has shrunk from 5,000 kilometers across its long axis to 3,700 kilometers across, over the Hubble observation period. Immense dark storms on Neptune were first discovered in the late 1980s by the Voyager 2 spacecraft. Since then only Hubble has tracked these elusive features. Hubble found two dark storms that appeared in the mid-1990s and then vanished. This latest storm was first seen in 2015. The first images of the dark vortex are from the Outer Planet Atmospheres Legacy (OPAL) program, a long-term Hubble project that annually captures global maps of our solar system’s four outer planets. Credit: NASA, ESA, and M.H. Wong and A.I. Hsu (UC Berkeley).
The differences between Neptune’s storms and famous Jovian features like the Great Red Spot are interesting though not yet fully understood. The Great Red Spot has been a well described feature on Jupiter for more than two centuries, still robust though varying in size and color. A storm that once encompassed four Earth diameters had shrunk to twice Earth’s diameter in the Voyager 2 flyby of 1979, and has now dropped to perhaps 1.3. As to its heat sources, they are still under investigation, as we saw in 2016 (check Jupiter’s Great Red Spot as Heat Source).
Neptune is another story, with storms that seem to last but a few years. Thus the fading of the recent dark spot, which had been observed at mid-southern latitudes. Michael H. Wong (UC-Berkeley) is lead author of the paper:
“It looks like we’re capturing the demise of this dark vortex, and it’s different from what well-known studies led us to expect. Their dynamical simulations said that anticyclones under Neptune’s wind shear would probably drift toward the equator. We thought that once the vortex got too close to the equator, it would break up and perhaps create a spectacular outburst of cloud activity.”
But the storm drifted not toward the equator but the south pole, not constrained by the powerful alternating wind jets found on Jupiter. Moreover, we have no information on how these storms form or how fast they rotate. And as the new paper notes, the five Neptune dark spots we’ve thus far found have differed broadly in terms of size, shape, oscillatory behavior and companion cloud distribution. We have much to learn about their formation, behavior and dissipation.
When you think about flyby missions like Voyager 2 at Neptune, the value of getting that first look at a hitherto unknown object is obvious. But we are moving into an era when longer-term observations become paramount. The OPAL program with Hubble is an example of this, studying in the case of Neptune a phenomenon that seems to exist on a timescale that suits an annual series of observations. Hubble has been complemented by observations from other observatories, including not just Spitzer but, interestingly, Kepler K2. A robotic adaptive optics system tuned for planetary atmospheric science is being prepared for deployment in Hawaii, offering a way to scrutinize these worlds over even smaller periods.
From the paper:
Clearly, there is much room in the discovery space of solar system time domain science. There is room in this discovery space for exploration by a dedicated solar system space telescope, a network of ground facilities, and cadence programs at astrophysical observatories with advanced capabilities.
The paper is Wong et al., “A New Dark Vortex on Neptune,” Astronomical Journal Vol. 155, No. 3 (15 February 2018). Abstract.
Nick Holland is giving a talk tonight on “Scripting Tips & Hacks” at SemiBUG. “Nick has been scripting roughly since scripts were hard-coded into plugboards.” Go see it if you are near Michigan.
Oh, hey, that’s a nice thing to say. (via tuxillo on EFNet #dragonflybsd)
It's not a great solution, but it's something:
Postcards containing a specific code will be required for advertising that mentions a specific candidate running for a federal office, said Katie Harbath, Facebook's global director of policy programs. The requirement will not apply to issue-based political ads, she said.
"If you run an ad mentioning a candidate, we are going to mail you a postcard and you will have to use that code to prove you are in the United States," Harbath said at a weekend conference of the National Association of Secretaries of State, where executives from Twitter Inc and Alphabet Inc's Google also spoke.
"It won't solve everything," Harbath said in a brief interview with Reuters following her remarks.
But sending codes through old-fashioned mail was the most effective method the tech company could come up with to prevent Russians and other bad actors from purchasing ads while posing as someone else, Harbath said.
It does mean a delay of several days between purchasing an ad and seeing it run.
Patrick Reames had no idea why Amazon.com sent him a 1099 form saying he’d made almost $24,000 selling books via Createspace, the company’s on-demand publishing arm. That is, until he searched the site for his name and discovered someone has been using it to peddle a $555 book that’s full of nothing but gibberish.
Reames is a credited author on Amazon by way of several commodity industry books, although none of them made anywhere near the amount Amazon is reporting to the Internal Revenue Service. Nor does he have a personal account with Createspace.
But that didn’t stop someone from publishing a “novel” under his name. That word is in quotations because the publication appears to be little more than computer-generated text, almost like the gibberish one might find in a spam email.
“Based on what I could see from the ‘sneak peak’ function, the book was nothing more than a computer generated ‘story’ with no structure, chapters or paragraphs — only lines of text with a carriage return after each sentence,” Reames said in an interview with KrebsOnSecurity.
The impersonator priced the book at $555 and it was posted to multiple Amazon sites in different countries. The book — which has been removed from most Amazon country pages as of a few days ago — is titled “Lower Days Ahead,” and was published on Oct. 7, 2017.
Reames said he suspects someone has been buying the book using stolen credit and/or debit cards, and pocketing the 60 percent that Amazon gives to authors. At $555 a pop, it would only take approximately 70 sales over three months to rack up the earnings that Amazon said he made.
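A quick back-of-the-envelope check of that math:

# Author's cut per $555 sale, and the sales needed to reach the ~$24,000
# that Amazon reported on the 1099.
price = 555
author_share = 0.60 * price   # the 60 percent Amazon pays authors
print(24000 / author_share)   # ~72, i.e. roughly 70 sales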
“This book is very unlikely to ever sell on its own, much less sell enough copies in 12 weeks to generate that level of revenue,” Reames said. “As such, I assume it was used for money laundering, in addition to tax fraud/evasion by using my Social Security number. Amazon refuses to issue a corrected 1099 or provide me with any information I can use to determine where or how they were remitting the royalties.”
Reames said the books he has sold on Amazon under his name were done through his publisher, not directly via a personal account (the royalties for those books accrue to his former employer) so he’d never given Amazon his Social Security number. But the fraudster evidently had, and that was apparently enough to convince Amazon that the imposter was him.
Reames said after learning of the impersonation, he got curious enough to start looking for other examples of author oddities on Amazon’s Createspace platform.
“I have reviewed numerous Createspace titles and its clear to me that there may be hundreds if not thousands of similar fraudulent books on their site,” Reames said. “These books contain no real content, only dozens of pages of gibberish or computer generated text.”
For example, searching Amazon for the name Vyacheslav Grzhibovskiy turns up dozens of Kindle “books” that appear to be similar gibberish works — most of which have the words “quadrillion,” “trillion” or a similar word in their titles. Some retail for just one or two dollars, while others are inexplicably priced between $220 and $320.
“Its not hard to imagine how these books could be used to launder money using stolen credit cards or facilitating transactions for illicit materials or funding of illegal activities,” Reames said. “I can not believe Amazon is unaware of this and is unwilling to intercede to stop it. I also believe they are not properly vetting their new accounts to limit tax fraud via stolen identities.”
Reames said Amazon refuses to send him a corrected 1099, or to discuss anything about the identity thief.
“They say all they can do at this point is send me a letter acknowledging that I’m disputing ever having received the funds, because they said they couldn’t prove I didn’t receive the funds. So I told them, ‘If you’re saying you can’t say whether I did receive the funds, tell me where they went?’ And they said, ‘Oh, no, we can’t do that.’ So I can’t clear myself and they won’t clear me.”
Amazon said in a statement that the security of customer accounts is one of its highest priorities.
“We have policies and security measures in place to help protect them. Whenever we become aware of actions like the ones you describe, we take steps to stop them. If you’re concerned about your account, please contact Amazon customer service immediately using the help section on our website.”
Beware, however, if you plan to contact Amazon customer support via phone. Performing a simple online search for Amazon customer support phone numbers can turn up some dubious and outright fraudulent results.
Earlier this month, KrebsOnSecurity heard from a fraud investigator for a mid-sized bank who’d recently had several customers who got suckered into scams after searching for the customer support line for Amazon. She said most of these customers were seeking to cancel an Amazon Prime membership after the trial period ended and they were charged a $99 fee.
The fraud investigator said her customers ended up calling fake Amazon support numbers, which were answered by people with a foreign accent who proceeded to request all manner of personal data, including bank account and credit card information. In short order, the customers’ accounts were used to set up new Amazon accounts as well as accounts at Coinbase.com, a service that facilitates the purchase of virtual currencies like Bitcoin.
This Web site does a good job documenting the dozens of phony Amazon customer support numbers that are hoodwinking unsuspecting customers. Amazingly, many of these numbers seem to be heavily promoted using Amazon’s own online customer support discussion forums, in addition to third-party sites like Facebook.com.
Interestingly, clicking on the Customer Help Forum link from the Amazon Support Options and Contact Us page currently sends visitors to the page pictured below, which displays a “Sorry, We Couldn’t Find That Page” error. Perhaps the company is simply cleaning things up after being notified last week by KrebsOnSecurity about the bogus phone numbers being promoted on the forum.
In any case, it appears some of these fake Amazon support numbers are being pimped by a number of dubious-looking e-books for sale on Amazon that are all about — you guessed it — how to contact Amazon customer support.
If you wish to contact Amazon by phone, the only numbers you should use are:
Amazon’s main customer help page is here.
Update, 11:44 a.m. ET: Not sure when it happened exactly, but this notice says Amazon has closed its discussion boards.
Update, 4:02 p.m. ET: Amazon just shared the following statement, in addition to their statement released earlier urging people to visit a help page that didn’t exist (see above):
“Anyone who believes they’ve received an incorrect 1099 form or a 1099 form in error can contact email@example.com and we will investigate.”
“This is the general Amazon help page:”
Update, 4:01 p.m. ET: Reader zboot has some good stuff. What makes Amazon a great cashout method for cybercrooks, as opposed to, say, bitcoin cashouts, is that funds can be deposited directly into a bank account. He writes:
“It’s not that the darkweb is too slow, it’s that you still need to cash out at the end. Amazon lets you go from stolen funds directly to a bank account. If you’ve set it up with stolen credentials, that process may be faster than getting money out of a bitcoin exchange which tend to limit fiat withdraws to accounts created with the amount of information they managed to steal.”
Interesting history of the security of walls:
Dún Aonghasa presents early evidence of the same principles of redundant security measures at work in 13th century castles, 17th century star-shaped artillery fortifications, and even "defense in depth" security architecture promoted today by the National Institute of Standards and Technology, the Nuclear Regulatory Commission, and countless other security organizations world-wide.
Security advances throughout the centuries have been mostly technical adjustments in response to evolving weaponry. Fortification -- the art and science of protecting a place by imposing a barrier between you and an enemy -- is as ancient as humanity. From the standpoint of theory, however, there is very little about modern network or airport security that could not be learned from a 17th century artillery manual. That should trouble us more than it does.
Fortification depends on walls as a demarcation between attacker and defender. The very first priority action listed in the 2017 National Security Strategy states: "We will secure our borders through the construction of a border wall, the use of multilayered defenses and advanced technology, the employment of additional personnel, and other measures." The National Security Strategy, as well as the executive order just preceding it, are just formal language to describe the recurrent and popular idea of a grand border wall as a central tool of strategic security. There's been a lot said about the costs of the wall. But, as the American finger hovers over the Hadrian's Wall 2.0 button, whether or not a wall will actually improve national security depends a lot on how walls work, but moreso, how they fail.
Lots more at the link.
Hackaday comes together in Ireland on April 7th and we want you to be there. Get your free ticket right now for the Hackaday Dublin Unconference!
An Unconference is the best way to put your finger on the pulse of what is happening in the hardware world right now. Everyone who attends should be ready to stand and deliver a seven-minute talk on something that excites them right now — this means you. The easiest thing to do is grab your latest hack off the shelf and talk about that.
Talks may be about a prototype, project, or product currently in progress at your home, work, or university. It could also be an idea, concept, or skill that you’re now exploring. The point is to channel your excitement and pass it on to others in a friendly presentation environment where everyone will cheer as your story unfolds.
Hackaday hosted an excellent Unconference in London back in September to a packed house for dozens of amazing presentations on a huge range of topics. We heard about bicycle turn signals, laser enhancing NES zappers, telepresence robots with IKEA origin stories, tiny-pitch LED matrix design, driving flip-dot displays, not trusting hardware 2-factor, and much more.
All the tickets for that event were scooped up in a few hours, and a huge waitlist followed. Don’t wait to grab your ticket!
Mike Zamansky has another video up in his excellent Using Emacs Series. This time he looks at Git Gutter and Git Time Machine. These are a couple of small utilities that make working with Git files and repositories a bit easier.
I’ve used Git Time Machine for a while. It’s one of those things you probably aren’t going to use that often—unless you have a special use case like Zamansky—but when you want to see how a file has changed over time it’s just the thing. You can see how it works in the video.
I haven’t used the other utility, Git Gutter, but it looks interesting. What it does is mark the differences between your current file and what’s in the repository, making it easy to see what changes you’ve made. That can be useful, especially when you’re working on a file for an extended time. You can also stage or revert individual hunks of code right from the utility. Again, Zamansky demonstrates this in the video. After watching it, I’m going to install Git Gutter and see how it fits with my workflow.
The video is about 8 and a half minutes so it will fit nicely into a coffee break.
For Immediate Release
Vancouver – On Friday, February 16, 2018, the Attorney General of Canada filed a notice of appeal in a bid to overturn last month’s historic judgment that ordered an end to indefinite solitary confinement in prisons across Canada. The decision, which struck down the federal government’s administrative segregation regime as unconstitutional, was the result of a legal challenge brought by the BC Civil Liberties Association (BCCLA) and John Howard Society of Canada (JHSC).
Josh Paterson, Executive Director of the BCCLA, stated: “We find it shocking that our federal government has chosen to appeal this decision when the government came into office on a promise to put an end to indefinite solitary confinement. Instead, this appeal shows they intend to fight to save a system that breaks the law and makes our society less safe.”
Paterson pointed out that after the court win, the BCCLA and the Canadian Civil Liberties Association wrote to the federal ministers of Justice and Public Safety urging them to end court battles. He stated: “Having won in court, we extended a hand to the government to work together to fix this problem, to no avail. Despite us reaching out, to date, the federal government has given us no response other than filing this appeal.”
The B.C. Supreme Court issued judgment in favour of the BCCLA and JHSC on January 17, 2018. The Court held that the laws governing administrative segregation are unconstitutional in that they permit prolonged, indefinite solitary confinement, fail to provide independent review of segregation placements and deprive inmates of the right to counsel at segregation review hearings. The regime violates prisoners’ Charter section 7 rights because it places prisoners at increased risk of self-harm and suicide and causes psychological and physical harm. The Court further held that the laws were unconstitutional because they discriminate against the mentally ill and disabled, and against Indigenous prisoners.
Catherine Latimer, Executive Director of John Howard Society of Canada, states: “The problems with solitary confinement have been obvious for decades, with recommendations for reform coming from all quarters of society, including the Correctional Investigator of Canada and the United Nations Committee Against Torture. Now, the B.C. Supreme Court has recognized that the practice discriminated against Indigenous people and persons with mental illness. It is deeply unfortunate that, rather than accept that truth and work to correct it, the government wishes to ignore it.”
Caily DiPuma, Acting Litigation Director for the BCCLA: “We know that prisoners continue to spend weeks, months and even years in small cells without meaningful human contact. They continue to suffer from severe physical and psychological harm because of that isolation. And we know that some will be driven to end their lives there. The government’s decision to appeal is another example of justice deferred for prisoners – some of the most vulnerable and marginalized members of our Canadian society. We will not turn our backs to them, nor will Canadians. We will fight this appeal.”
Caily DiPuma, Acting Litigation Director, at firstname.lastname@example.org or 604-349-1423
Josh Paterson, Executive Director, at email@example.com or 778-829-8973
Is there life elsewhere in the universe? This is one of the questions we ask most often when looking up at the night sky. As if to give shape to our wondering, Leo the Lion thrusts a question mark in our faces every clear February evening.
Face east and look a little more than two fists above the horizon around 7 p.m. and you’ll spot it. It’s a backwards question mark but unmistakable. The brightest star, Regulus, marks the dot and five stars unfurl above outlining the stroke or curl.
The figure also reminded our more agrarian minded ancestors of a sickle, a short-handled tool with a curved blade used for cutting grain. And since Leo has represented a lion since ancient times, the curl still works as originally intended as the beast’s head. So now you have at least three ways of looking at it, a reminder that there’s always more than one way of looking at anything.
You can simply enjoy discovering the pattern for yourself and leave it at that. But there’s more here. Regulus is a quadruple star 79 light years from Earth. The bright one we see with the naked eye is 3.5 times as massive as the sun and rotates so rapidly — one spin takes just 15.9 hours — that it’s stretched into the shape of an egg. For comparison, the sun’s equatorial regions complete a rotation once every 24.5 days.
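For a sense of how extreme that is, the figures above put Regulus’s spin at roughly 37 times the Sun’s equatorial rate:

# Comparing rotation periods from the figures above.
regulus_day_hours = 15.9
sun_day_hours = 24.5 * 24   # the Sun's 24.5-day equatorial rotation
print(sun_day_hours / regulus_day_hours)  # ~37x faster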
Only one of Regulus’ companion stars is easily visible. It shines at magnitude 8.1 about 3 arc minutes (1/10th the diameter of a full moon) to the northwest of the main star. A pair of sharply-focused binoculars might just pick it up, but any telescope will show it near the blazing, blue-white primary.
Algieba (al-JEE-bah), the brightest and most colorful of Leo’s double stars, gleams just 8.5° northwest of Regulus. The orange primary star and its companion are separated by just 4 arc seconds (4″), so you’ll need a small scope at 75x magnification to split them apart. Algieba is bright at magnitude 2.0 and visible with the naked eye even from the suburbs.
The path taken by the sun, moon and planets across the sky, called the ecliptic, passes just south of Regulus. The ecliptic marks the plane of Earth’s orbit and is considered the plane of the solar system, since most of the planets orbit very close to that plane. Dust boiled off comets or resulting from asteroid collisions collects in a vast cloud along the ecliptic. Sunlight reflecting off the dust is visible every spring as a faint “thumbprint” of diffuse light, called the zodiacal light, sticking up from the horizon at dusk.
Directly opposite the sun, at the midnight position in the night sky, sunlight reflects directly off the dust, creating a weak patch of light along the ecliptic called the gegenschein. It’s very faint but visible from a dark, light-pollution-free sky. In mid-to-late February, you’ll find it just beside Regulus at the bottom of the question mark. The patch is oval-shaped and about 8° across. Look for it on moonless nights around midnight (1 a.m. when Daylight Saving Time kicks in again) now through March.
Looking in the direction of Leo, we gaze up and out of the thick disk of the Milky Way galaxy, where most of its stars are concentrated. With little between us and intergalactic space — and no galactic dust to block the view of what’s beyond — we can peer millions of light years into the distance to see countless other galaxies and clusters of galaxies. Leo is a rich hunting ground for those who like to travel into deep space with mind’s eye and telescope.
And that brings us back to the central question. With all those galaxies, each composed of billions of stars and billions of planets, what do you think the likelihood is of finding life beyond Earth?
Identity thieves who specialize in tax refund fraud have been busy of late hacking online accounts at multiple tax preparation firms, using them to file phony refund requests. Once the Internal Revenue Service processes the return and deposits money into bank accounts of the hacked firms’ clients, the crooks contact those clients posing as a collection agency and demand that the money be “returned.”
In one version of the scam, criminals are pretending to be debt collection agency officials acting on behalf of the IRS. They’ll call taxpayers who’ve had fraudulent tax refunds deposited into their bank accounts, claim the refund was deposited in error, and threaten recipients with criminal charges if they fail to forward the money to the collection agency.
This is exactly what happened to a number of customers at a half dozen banks in Oklahoma earlier this month. Elaine Dodd, executive vice president of the fraud division at the Oklahoma Bankers Association, said many financial institutions in the Oklahoma City area had “a good number of customers” who had large sums deposited into their bank accounts at the same time.
Dodd said the bank customers received hefty deposits into their accounts from the U.S. Treasury, and shortly thereafter were contacted by phone by someone claiming to be a collections agent for a firm calling itself DebtCredit and using the Web site name debtcredit[dot]us.
“We’re having customers getting refunds they have not applied for,” Dodd said, noting that the transfers were traced back to a local tax preparer who’d apparently gotten phished or hacked. Those banks are now working with affected customers to close the accounts and open new ones, Dodd said. “If the crooks have breached a tax preparer and can send money to the client, they can sure enough pull money out of those accounts, too.”
The domain debtcredit[dot]us hasn’t been active for some time, but an exact copy of the site to which the bank’s clients were referred by the phony collection agency can be found at jcdebt[dot]com — a domain that was registered less than a month ago. The site purports to be associated with a company in New Jersey called Debt & Credit Consulting Services, but according to a record (PDF) retrieved from the New Jersey Secretary of State’s office, that company’s business license was revoked in 2010.
“You may be puzzled by an erroneous payment from the Internal Revenue Service but in fact it is quite an ordinary situation,” reads the HTML page shared with people who received the fraudulent IRS refunds. It includes a video explaining the matter, and references a case number, the amount and date of the transaction, and provides a list of personal “data reported by the IRS,” including the recipient’s name, Social Security Number (SSN), address, bank name, bank routing number and account number.
All of these details no doubt are included to make the scheme look official; most recipients will never suspect that they received the bank transfer because their accounting firm got hacked.
The scammers even supposedly assign the recipients an individual “appointed debt collector,” complete with a picture of the employee, her name, telephone number and email address. However, emails sent to the domain used in the email address from the screenshot above (debtcredit[dot]com) bounced, and no one answers at the provided telephone number.
Along with the Web page listing the recipient’s personal and bank account information, each recipient is given a “transaction error correction letter” with IRS letterhead (see image below) that includes many of the same personal and financial details on the HTML page. It also gives the recipient instructions on the account number, ACH routing and wire number to which the wayward funds are to be wired.
Tax refund fraud affects hundreds of thousands, if not millions, of U.S. citizens annually. Victims usually first learn of the crime after having their returns rejected because scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually due a refund from the IRS.
On Feb. 2, 2018, the IRS issued a warning to tax preparers, urging them to step up their security in light of increased attacks. On Feb. 13, the IRS warned that phony refunds through hacked tax preparation accounts are a “quickly growing scam.”
“Thieves know it is more difficult to identify and halt fraudulent tax returns when they are using real client data such as income, dependents, credits and deductions,” the agency noted in the Feb. 2 alert. “Generally, criminals find alternative ways to get the fraudulent refunds delivered to themselves rather than the real taxpayers.”
The IRS says taxpayers who receive fraudulent transfers from the IRS should contact their financial institution, as the account may need to be closed (because the account details are clearly in the hands of cybercriminals). Taxpayers receiving erroneous refunds also should consider contacting their tax preparers immediately.
If you go to file your taxes electronically this year and the return is rejected, it may mean fraudsters have beat you to it. The IRS advises taxpayers in this situation to follow the steps outlined in the Taxpayer Guide to Identity Theft. Those unable to file electronically should mail a paper tax return along with Form 14039 (PDF) — the Identity Theft Affidavit — stating they were victims of a tax preparer data breach.
The launch of the first Falcon Heavy, developed and built by aerospace upstart SpaceX (founded in 2002 by entrepreneur Elon Musk), was accompanied by headlines that it was the most powerful launch vehicle currently flying – and rightfully so. The total thrust of its recoverable first stage and nearly identical pair of boosters (each sporting nine Merlin 1D engines) is rated at 22,819 kilonewtons at liftoff and, in combination with its upper stage, the Falcon Heavy is capable of placing 63,800 kilograms of payload into low Earth orbit (LEO). No other launch vehicle currently in production comes close. While the Falcon Heavy is without doubt the current holder of the “largest launch vehicle” title, it is just the latest in a long line of rockets to hold that title over the last six decades of the Space Age.
The first rocket to hold the title of “the largest launch vehicle” was the 8K71PS which launched the first satellite, Sputnik, on October 4, 1957 (see “Sputnik: The Launch of the Space Age”). Designed and built by OKB-1 (Experimental Design Bureau 1) headed by the legendary Chief Designer Sergei Korolev, this launch vehicle was based on the R-7 ICBM also known as the 8K71. This unique rocket used a parallel staging concept where all of the engines of the Blok A core and its four tapered boosters (called Blok B, V, G and D) ignited at liftoff to produce 3,904 kilonewtons of thrust. This was done to avoid the then-untried procedure of igniting large rocket engines at altitude during flight. The four boosters would drop away after they had exhausted their kerosene and liquid oxygen (LOX) propellants leaving the Blok A core to continue for the rest of the ascent. In its role as an ICBM, the R-7 could hurl a 5,400 kilogram warhead with a yield of five megatons over a range of 8,000 kilometers.
Originally, Korolev and his team had planned to launch their first satellite using a purpose-built variant of their ICBM designated the 8A91. Delays in its development prompted a decision to strip all nonessential systems from an 8K71 and modify it to become the 8K71PS satellite launch vehicle in order to orbit a small test satellite as soon as possible. While the 83-kilogram mass of Sputnik was far greater than the approximately ten-kilogram payload capability of America’s first satellite launch vehicles (see “Vanguard TV-3: America’s First Satellite Launch Attempt” and “Explorer 1: America’s First Satellite”), this mass hardly pushed the limits of the 8K71PS design. On November 3, 1957, the second 8K71PS launched the 508-kilogram Sputnik 2 into orbit carrying a dog (see “Sputnik 2: The First Animal in Orbit”). With the more capable 8A91 finally ready in early 1958, Korolev and his team were able to orbit the 1,327-kilogram Sputnik 3 on the second (and last) launch of their purpose-built satellite launch vehicle on May 15, 1958.
With the limits of the basic two-stage R-7 design reached with the 8A91, a new approach was needed to increase the usable orbital payload. The 8K71 Blok A core was adapted to carry a small upper stage designated Blok E which would ignite at altitude after core burnout. Called the 8K72, this rocket’s first mission was to launch the 170-kilogram E-1 lunar probes on a direct ascent trajectory to impact the Moon. With a liftoff thrust now increased to 3,998 kilonewtons, the first launch of the 8K72 on August 18, 1958 was unsuccessful, as were the following three launch attempts. The first fully successful launch of the 8K72 came on January 2, 1959 when Luna 1 was sent to the Moon.
With its greatly enhanced performance compared to the two-stage variants of the R-7, the 8K72 was quickly adapted to launch payloads of up to about 4,700 kilograms into LEO. The first orbital mission of the 8K72 launched a prototype of the Vostok manned spacecraft called Korabl Sputnik 1 on May 15, 1960. Additional upgrades to the rocket resulted in the 8K72K (better known as the “Vostok”) which had its first successful launch on March 9, 1961 when it sent Korabl Sputnik 4 into orbit. The 8K72K was later used to launch the Soviet Union’s manned Vostok missions through 1963.
In parallel with the development of the 8K72K, Korolev and his team at OKB-1 were busy working on still more powerful variants of the R-7 designed to launch probes to the planets. Engineers started with an improved version of their R-7 ICBM called the R-7A (or 8K74) whose core and four strap-on boosters now produced 4,020 kilonewtons at launch. To this they added a new third stage called Blok I, which was a significantly enlarged version of the second stage of the R-9A (or 8K75) ICBM also being developed at OKB-1. The first three stages of this new rocket, called the 8K78, were designed to place an escape stage, known as Blok L, and its payload into a temporary parking orbit before the stage would ignite to send its payload to the Moon or planets.
The first launch attempts of the 8K78 on October 10 and 14, 1960 failed to reach orbit with their Mars-bound 1M spacecraft (see “The First Mars Mission Attempts”). The third launch of the 8K78 on February 4, 1961 managed to place the Blok L escape stage with its 1VA Venus probe into orbit. With a total mass of 6,483 kilograms, this was the heaviest object ever launched into LEO up to this time. Although the Blok L escape stage failed to operate on this flight, the launch of Venera 1 on February 12 was completely successful (see “Venera 1: The First Venus Mission Attempt”). The 8K78 would continue to be improved over the following years and eventually be named “Molniya” after the series of Soviet communications satellites which regularly used this launch vehicle.
While the four-stage 8K78 was designed specifically to launch payloads beyond LEO, it was not long before a three-stage variant was required for LEO payloads which were too heavy for the 8K72K and its successors to carry. Called the 11A57, the new rocket was designed from the start at OKB-1 to be a “unified launcher” that could orbit a range of different unmanned and manned payloads with masses of 6,000 kilograms or more. The configuration of the 11A57 was similar to the first three stages of the 8K78 and borrowed heavily from the technology being incorporated into the improved 8K78M. While the 11A57, with a liftoff thrust of 4,054 kilonewtons, had a superficial resemblance to the 8K78, it employed many new systems which were designed and built to a set of strict requirements known as the “3KA Regulations” so that the launch vehicle was man-rated from the start.
The first launch of the 11A57 orbited a prototype of the Vostok-based Zenit-4 photoreconnaissance satellite called Kosmos 22 on November 16, 1963 (see “Vostok’s Legacy”). Later the 11A57 was used to launch the first manned Voskhod mission on October 12, 1964 (see “50 Years Ago Today: The Mission of Voskhod 1”), earning the rocket its informal name of “Voskhod”. The 11A57 would eventually evolve into the 11A511 which was used to launch the first Soyuz missions into Earth orbit (see “The Avoidable Tragedy of Soyuz 1”). This first incarnation of the Soyuz rocket would continue to be incrementally improved over the next half century, eventually becoming the 14A14 “Soyuz-2” launch vehicle still used today to launch crews to the International Space Station (ISS).
While the family of launch vehicles based on the R-7 ICBM gave the Soviet Union an early lead in payload capability to LEO and beyond, the US was quick to develop its own heavy lift capabilities to support increasingly ambitious national goals in space. Among the earliest American programs to produce heavy lift rockets was one started in 1957 by German-American rocket pioneer Wernher von Braun and his team at what would become NASA’s Marshall Space Flight Center (MSFC). The family of proposed launch vehicles, called Saturn, would support a large range of NASA’s heavy lift requirements, culminating in the Apollo lunar landing missions by the end of the 1960s.
The first of these new rockets to be developed at MSFC was designated the Saturn I. Like Korolev and his team at OKB-1, von Braun and his team relied on the cluster concept to produce a large launch vehicle quickly by incorporating as much proven technology as possible. The first stage structure of the Saturn I employed clusters of tanks originally used on the Redstone and Jupiter missiles, which had been developed in the 1950s by von Braun and his team when they were part of the Army Ballistic Missile Agency. While the availability of high-thrust engines was still some years off, the Saturn I used a cluster of eight H-1 engines which were improved versions of the earlier S-3D engine flown on the Jupiter IRBM. This gave the Saturn I an impressive liftoff thrust that eventually reached 6,690 kilonewtons.
The first stage of the Saturn I was flown successfully on four suborbital test flights with dummy upper stages between October 1961 and March 1963. With the flightworthiness of the Saturn I first stage and its cluster concept verified, it was ready for orbital test flights using a live second stage. Unlike the first stage which burned RP-1 grade kerosene and LOX, the second stage of the Saturn I (and subsequent Saturn rockets) used the high energy cryogenic combination of liquid hydrogen and LOX, which delivered half again as much impulse as a like mass of more conventional propellants (a vacuum specific impulse of roughly 430 seconds for the RL-10 versus around 300 seconds for contemporary kerosene-burning engines). Using a cluster of six RL-10 engines of the sort first flown on NASA’s Centaur upper stage (see “50 Years Ago: The Launch of Atlas-Centaur 5”), the two-stage Saturn I was capable of placing 9,000 kilograms into LEO. This capability was tested for the first time on January 29, 1964 when Saturn SA-5 placed its spent upper stage and a load of sand ballast into LEO (see “The Coolest Rocket Ever”). The subsequent five flights of the Saturn I over the next year and a half launched boilerplate models of Apollo hardware to gather vital flight test data to support that program (see “50 Years Ago Today: The First Apollo Orbital Test Flight”). After completing this series of test flights, the uprated Saturn I (known as the Saturn IB) would be called into service for orbital test flights for the Apollo program.
Before the improved Saturn IB would fly for the first time, a competing family of rockets would seize the title of the largest launch vehicle from NASA. But instead of coming from the Soviet Union, this rocket was built for the US Air Force (USAF). During the late 1950s and early 1960s, the USAF performed a series of studies on the feasibility of adapting its Titan II ICBM for use as a satellite launch vehicle. One of the fruits of this effort was NASA’s selection of the Titan II as the Gemini Launch Vehicle (GLV) in October 1961 for their follow-on to the Mercury program (see “50 Years Ago Today: The Launch of Gemini 1”). The eventual result of these studies into USAF launch needs was the Titan III family of rockets.
The Titan III launch vehicle concept was based on a modular approach that could lift payloads into a variety of orbits and included a heavy lift capability that was, for political reasons, independent of NASA’s Saturn family of launch vehicles. Literally at the core of all versions of the Titan III was a modified two-stage Titan II ICBM that was structurally reinforced to handle heavier payloads and extra stages. The initial heavy-lift version of this rocket, known as the Titan IIIC, strapped a pair of three-meter diameter solid rocket motors to the sides of the core. Consisting of five segments each and assembled near the launch pad, this pair of solid rocket motors made up “Stage 0” of the Titan IIIC and generated a total of 10,500 kilonewtons of thrust at liftoff, making the Titan IIIC the most powerful rocket flown at the time. Topping the core of the Titan IIIC was an upper stage known as the Transtage. Capable of multiple restarts, the Transtage could deliver payloads into a range of Earth orbits or even beyond.
Although the USAF would typically use the Titan IIIC to place more modest sized payloads into medium to high Earth orbits, on paper it was capable of lifting up to 13,000 kilograms into LEO. This capability was tested on the maiden flight of the Titan IIIC launched on June 18, 1965 when it placed 9,700 kilograms of ballast into LEO – the heaviest payload ever orbited up to this time (see “The First Missions of the Titan IIIC”). While future variants of the Titan III were planned to have improved payload capabilities to LEO and beyond, NASA would launch still larger rockets first in support of its Apollo program.
The title of the largest launch vehicle next went to NASA’s improved Saturn IB. Based on lessons learned from the Saturn I, the uprated first stage was lighter and employed eight uprated versions of the H-1 engine which initially produced a total of 7,100 kilonewtons of thrust at launch. A new, significantly enlarged second stage, nearly identical to the third stage of the still larger Saturn V moon rocket, boosted the LEO payload capability of the initial batch of five Saturn IB rockets to 17,000 kilograms – sufficient to launch either an Apollo Command-Service Module (CSM) or Lunar Module (LM) into LEO for initial test flights of this hardware.
The first flight of the Saturn IB was as part of the Apollo AS-201 mission which launched an Apollo CSM on an unmanned suborbital test flight on February 26, 1966 (see “The First Flight of the Apollo-Saturn IB”). The first orbital flight of the Saturn IB was the AS-203 mission launched on July 5, 1966. Since the objective of this mission was to test design features of its second stage, it did not carry a payload save for instrumentation and over eight metric tons of residual liquid hydrogen in its fuel tank (see “AS-203: NASA’s Odd Apollo Mission”). The first Saturn IB to orbit an actual spacecraft was the unmanned Apollo 5 mission launched on January 22, 1968 to test the first 14,300-kilogram LM flight article in LEO (see “Apollo 5: The First Flight of the Lunar Module”).
As the initial batch of Saturn IB rockets were making their first flights, the title of the largest launch vehicle briefly went back to the Soviet Union. But unlike the R-7-based launch vehicles developed at OKB-1 (whose name changed to TsKBEM in March 1966 – the Russian acronym for “Central Construction Bureau of Experimental Machine Building”), this time a rival Soviet design bureau took the lead. TsKBM (Central Design Bureau for Machine Building, known as OKB-52 before 1965) under Vladimir Chelomei actively competed with Korolev’s OKB-1 in the development of ballistic missiles and spacecraft during the early years of the Space Age. One of the larger members of their family of modular “universal rockets” was the UR-500, which originally had been proposed as a super-heavy ICBM. This two-stage rocket would have been capable of hurling a thermonuclear warhead with a mass of about 12 metric tons and a yield of 100 megatons over a range of about 12,000 kilometers.
Although the UR-500 was never adopted by the Soviet government for use as an ICBM, Chelomei did secure approval to develop this rocket as the basis for a heavy lift satellite launch vehicle. The first launch of the UR-500 on July 16, 1965 orbited the 12,200-kilogram Proton 1 cosmic ray observatory. Although the Titan IIIC theoretically had better performance (especially just beyond LEO), Proton 1 took the record for the most massive usable payload launched into orbit up until that time (as opposed to inert ballast or publicly released in-orbit mass totals inflated by the inclusion of a spent upper stage). Although the rocket was originally to be named “Hercules”, the name “Proton” was instead adopted for the family of launch vehicles based on the UR-500.
Even though the UR-500 was a capable launch vehicle, it was retired after only a year following four launches (of which three succeeded) in favor of a more capable design known as the UR-500K (or 8K82K), intended to launch heavy payloads into LEO as well as Chelomei’s proposed LK-1 manned circumlunar ship. The Proton-K, as it is popularly called today, retained the first stage of the UR-500 with its six RD-253 engines producing 10,500 kilonewtons of thrust at liftoff. The UR-500K now sported an enlarged second stage and included a new third stage to boost its initial LEO payload capability to 19,000 kilograms – slightly more than the first batch of Saturn IB rockets which flew from 1966 to 1968.
While the Proton-K would eventually be used to launch the first Soviet space stations and other manned spacecraft prototypes into LEO, its first use was for launching circumlunar spacecraft which would make a simple loop around the Moon and return to Earth. Much to Chelomei’s disappointment, the Proton-K would not launch his proposed circumlunar ship but the 7K-L1 originally designed by Korolev. Korolev’s circumlunar spacecraft consisted of a stripped-down 7K-OK Soyuz without an orbital module, combined with a Blok D escape stage borrowed from the N-1 Moon rocket being developed by OKB-1/TsKBEM. The design of the 19,000-kilogram payload was tailored specifically for the Proton-K. The first launch of the Proton-K lifted the 7K-L1P/Blok D prototype known as Kosmos 146 into LEO on March 10, 1967. While Kosmos 146 met its mission objectives after the Blok D boosted the spacecraft into an elongated orbit, problems with this upper stage during the subsequent Kosmos 154 flight launched on April 8 doomed that mission and foreshadowed the many problems the hurriedly developed Proton-K/D would suffer in the years to come as it worked through numerous growing pains.
The penultimate heavy launch vehicle of the early years of the Space Age was NASA’s Saturn V. Developed by von Braun and his team at MSFC, the Saturn V was designed from the start to send the 44,000-kilogram Apollo CSM/LM to the Moon – a mass that exceeded even the LEO capabilities of all earlier rockets. Like the previous Saturn rockets, the first stage of the Saturn V burned RP-1 and LOX. But with the use of five huge F-1 engines (each producing nearly 6,800 kilonewtons at sea level), the liftoff thrust of the Saturn V was an unparalleled 33,950 kilonewtons. The upper two stages used the high energy propellants liquid hydrogen and LOX for a set of five J-2 engines in the second stage and a single J-2 for the third (which had already been flown as the second stage of the Saturn IB).
During an Apollo lunar mission, the first two stages and a short burn of the third stage would place the last stage and the Apollo spacecraft into a temporary LEO. After reaching the proper position, the third stage would reignite to send the Apollo CSM/LM to the Moon. Used instead to launch a payload into LEO, the Saturn V could orbit about 118,000 kilograms. The first launch of the Saturn V took place on November 9, 1967 for the highly successful unmanned Apollo 4 mission (see “Apollo 4: The First Flight of the Saturn V”). Although some problems were encountered with the Saturn V during the unmanned Apollo 6 mission launched on April 4, 1968, the Saturn V went on to chalk up a string of successes during the actual manned Apollo lunar missions, leading to the first manned lunar landing on July 20, 1969 and beyond.
In order to increase the payload capability to the Moon to almost 47,000 kilograms to support the more capable (and heavier) J-series missions starting with Apollo 15, launched on July 26, 1971, a number of improvements were made to the Saturn V. The stages were lightened by removing unneeded equipment and improved engines were used which increased the liftoff thrust to 35,445 kilonewtons. After the completion of the final Apollo lunar mission in December 1972, a two-stage version of the Saturn V was used to launch the 77,000-kilogram Skylab space station on May 14, 1973. This would prove to be the final flight of the Saturn V as NASA turned its attention to the development of the Space Shuttle to support future heavy-lift requirements.
After the retirement of the Saturn V, the title of the largest launch vehicle fell back to NASA’s Saturn IB. Improvements made to the second batch of this rocket, which made its first flight launching Skylab’s SL-2 mission on May 25, 1973, boosted the liftoff thrust to 7,285 kilonewtons. With a LEO payload capability pushed up to 21,000 kilograms, it now just outperformed the Soviet Proton-K launch vehicle. But with the final launch of the Saturn IB in support of the Apollo-Soyuz Test Project (ASTP) on July 15, 1975, the last member of the Saturn family had been retired. As NASA waited for the availability of the Space Shuttle, it relied on the USAF Titan III family to meet its heavy lift requirements.
With the last of NASA’s Saturn rockets phased out of service, the largest launch vehicle in the world once again became the Soviet Proton-K. Although it was used to launch an increasing range of heavy payloads into LEO and beyond, the Proton-K continued to experience a fair number of failures as problems were encountered and resolved. It was not until its 60th launch on September 29, 1977, when a Proton-K orbited the 19,800-kilogram Salyut 6 space station, that this rocket officially completed its state trials and was deemed to be as reliable as other launch vehicles around the world. The Proton would be incrementally improved over the years and decades to come and, in its current Proton-M form, continues to be used to orbit payloads including Russian elements of the ISS.
With the long-delayed introduction of NASA’s Space Shuttle, the US would once again take the lead in LEO payload capability. Approved for development in 1972, the Space Shuttle was the first element of the Space Transportation System (STS) which promised to make missions to LEO and beyond more affordable and routine. The Space Shuttle consisted of a reusable, 80,000-kilogram space plane or Orbiter with a large cargo bay which could deploy payloads (and any additional rocket stages they needed) into LEO as well as retrieve payloads for return to Earth. The Orbiter included a trio of high-performance, reusable Space Shuttle Main Engines (SSME, known today as the RS-25) which together generated a nominal thrust of about 5,000 kilonewtons at sea level. The liquid hydrogen and LOX propellants for these engines were supplied by a large external tank (ET) attached to the underside of the Shuttle during launch and ascent. Unlike the other elements of the Space Shuttle, the ET was not recovered, a decision made for practical reasons.
While the RS-25 engines of the Space Shuttle supplied most of the energy to reach orbit, they had insufficient thrust to lift the Orbiter and its fully loaded ET off of the ground. Attached to either side of the ET were a pair of Solid Rocket Boosters (SRBs) which generated 14,680 kilonewtons of thrust each at liftoff. After about two minutes of flight, the SRBs were jettisoned and descended to the ocean below on parachutes, where they would be recovered, refurbished and reused. Combined with the Orbiter’s RS-25 engines, the typical liftoff thrust of the Space Shuttle with all of its propulsion systems throttled up was 34,677 kilonewtons.
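As a quick cross-check of these figures (my own arithmetic, not a number drawn from any official source): the two SRBs together contributed 2 × 14,680 = 29,360 kilonewtons, leaving 34,677 − 29,360 = 5,317 kilonewtons from the three RS-25s – slightly above the nominal sea level rating quoted earlier, consistent with the engines being throttled up for liftoff.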
While theoretically the Space Shuttle could lift as much as 29,500 kilograms of payload into LEO, practical considerations meant that this figure would not be reached in actual applications. The first spaceworthy Orbiter, OV-102, better known as Columbia, which made its maiden STS-1 flight on April 12, 1981, was capable of launching 21,104 kilograms of payload into a 204-kilometer orbit with an inclination of 28.5° assuming a five-person crew. Each additional kilometer of altitude reduced this total by 25 kilograms while each additional crew member cut 230 kilograms. With the introduction of four more Orbiters over the next decade, which included improvements in structures and systems to decrease mass and improve performance, this LEO capability eventually climbed to 24,950 kilograms.
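Stated as a simple linear model (my own back-of-the-envelope restatement of the figures above, not an official NASA performance equation), Columbia’s capability works out to roughly:

payload ≈ 21,104 kg − 25 kg × (altitude in km − 204) − 230 kg × (crew size − 5)

So, for example, a seven-person crew bound for a 300-kilometer orbit would have left about 21,104 − (25 × 96) − (230 × 2) ≈ 18,244 kilograms of payload capability.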
The original intent was for the Space Shuttle to replace all of America’s expendable launch vehicles (ELVs) in the belief that it would save launch costs and improve reliability. In the end this proved to be an unwise policy as it became apparent that the Space Shuttle was more expensive and took more effort to refurbish for reuse than had been originally anticipated. The loss of OV-099 Challenger, along with its crew of seven and payload, during the launch of the STS-51L mission on January 28, 1986 resulted in a 32-month stand down of the program as the causes of the accident were investigated and changes to the Shuttle design were made. In the wake of this tragedy, national launch policy was also officially changed, with renewed efforts made to expand and update America’s families of ELVs. After a backlog of Shuttle-specific payloads was launched following the resumption of flights in September 1988, all other government and commercial satellites were transitioned back to ELVs. As time wore on, Orbiters and crews would only be risked on missions which could make use of the Shuttle’s unique capabilities to meet the nation’s space objectives.
During this time, Soviet design bureaus were busy developing their own heavy lift capabilities to compete with the STS. After the failure of their program to develop the Soviet N-1 Moon rocket, the successor of TsKBEM, called RKK Energia, commenced the development of a new launch vehicle family in 1976 to meet future heavy lift needs for the Soviet Union. Originally designated 11K25 but better known by the name “Energia”, this new rocket would be the first Soviet launch vehicle to use the high energy cryogenic propellants liquid hydrogen and LOX. The large core stage of the Energia employed four RD-0120 engines to produce about 5,800 kilonewtons of thrust at sea level. In the baseline Energia design, four boosters using kerosene and LOX for propellants were strapped to the sides of the core. Each booster used a four-chamber RD-170 engine to produce about 7,250 kilonewtons at sea level. Combined with the core, whose engines would also ignite at launch, the Energia had a total liftoff thrust of 34,800 kilonewtons (the four boosters contributing 29,000 kilonewtons and the core the remaining 5,800). Although an additional propulsive maneuver (from an extra stage or the payload) was needed to reach orbit, this baseline Energia design was capable of placing on the order of 100,000 kilograms into LEO.
The first test flight of the Energia came on May 15, 1987. While the new launch vehicle successfully completed its task, a guidance software error in the 80,000-kilogram Polyus spacecraft prevented the payload from reaching orbit. Energia’s second launch on November 15, 1988 was likewise successful, orbiting the new Soviet Buran space shuttle on its first (and only) unmanned test flight in space. The 105,000-kilogram Buran OK-1K1 successfully completed two orbits before returning to an automated landing at the Baikonur Cosmodrome, from which it had been launched 3½ hours earlier.
Although Energia had much promise, the declining economic and political situation in the Soviet Union prevented the development of payloads and further launches. While there was some hope after the dissolution of the Soviet Union at the end of 1991 that Energia would fly again, perhaps in a modified form, there were simply no resources available in Russia to continue the program. Energia’s strap-on boosters, however, would serve as the basis of the Zenit family of Soviet (now Ukrainian) launch vehicles and its propulsion technology would be adapted into the two-chamber RD-180 used in the current incarnation of the American Atlas launch vehicle (see “A History of American Rocket Engine Development”).
After a change in American launch policy following the Challenger accident, American aerospace companies embarked on a program to revive ELV production and new development of the Titan, Atlas and Delta families of rockets. Among these were new heavy lift variants that would equal or exceed the payload capabilities of the Space Shuttle, which was no longer available to most customers. The ultimate example of this effort was the Delta IV Heavy.
Since its introduction in 1960, the Delta family of launch vehicles slowly evolved over time to include larger stages and increasing numbers of larger strap-on solid rocket motors to increase its payload performance significantly over the decades. The Delta IV series was a completely new design using a two-stage core employing liquid hydrogen and LOX as propellants. The first stage, known as the Common Booster Core (CBC), uses a single RS-68 engine to produce 3,140 kilonewtons of thrust at liftoff. Although not quite as efficient as the RS-25 used on the Space Shuttle, the RS-68 is a simpler design which produces higher thrust for much less cost. The upper stage of the Delta IV core uses a single RL-10B-2 engine whose design is based on the RL-10 engines used decades earlier, starting with the Centaur and the Saturn I second stage.
This basic core, known as the Delta IV Medium, can lift about 11,500 kilograms into LEO. With the addition of various strap-on boosters, the payload capability of the Delta IV can be increased to meet various customer needs. The first Delta IV flight, using a variant known as the Delta IV Medium+(4,2) which sported a pair of GEM-60 strap-on solid rocket motors to increase liftoff thrust, was launched from Cape Canaveral on November 20, 2002.
The largest member of this family, known as the Delta IV Heavy, uses the basic core with two RS-68-powered CBCs strapped to its sides. With a liftoff thrust of 9,420 kilonewtons, the Delta IV Heavy is capable of placing up to 28,800 kilograms into LEO from Cape Canaveral – somewhat more than the Space Shuttle, which was retired in 2011. The first launch of the Delta IV Heavy came on December 21, 2004, but bubbles in the LOX lines resulted in the early shutdown of the strap-on CBCs and the core, preventing the test payloads from reaching orbit. The first fully successful Delta IV Heavy launch came on November 11, 2007 when it orbited the USAF DSP-23 early warning satellite. Since then, the Delta IV Heavy has been used primarily to launch heavy defense payloads from Cape Canaveral and Vandenberg Air Force Base, as well as the first test flight of NASA’s Orion spacecraft on December 5, 2014 (see “From Apollo to Orion: Space Launch Complex 37”).
The launch of the first Falcon Heavy on February 6, 2018, with its impressive 63,800-kilogram LEO capability, now brings us to the present holder of the title of “the largest launch vehicle”. With heavy lift capabilities needed for future missions to the Moon, Mars and beyond, still larger rockets which will seize the title are already under development. One example which is well along in its development is NASA’s SLS (Space Launch System), which has adapted technology from the now-retired Space Shuttle to create a new heavy lift launch vehicle. With an initial LEO payload capability of about 70,000 kilograms, the SLS Block I is expected to make its first flight in support of NASA’s Orion program, probably in 2020. Later, the SLS Block II should increase the LEO payload capability of this design to 130,000 kilograms. Although it seems likely the SLS launch date will slip further in the months to come (and with other contenders under development), it is only a matter of time before there is a new title holder for “the largest launch vehicle”.
Follow Drew Ex Machina on Facebook.
Detailed articles on these and other rockets can be found on the Drew Ex Machina Rockets & Propulsion page.
I’ve always thought that it would be cool to have a themed wedding based on the end credits of The Adventures of Buckaroo Banzai Across the Eighth Dimension, which may just prove that my definition of the word “cool” is a bit off. Of course, you’d have to hold your wedding in a paved drainage canal, but then again, if your bride-to-be agrees to your best man wearing a jacket with no shirt underneath, she’ll probably agree to anything.
You might have better luck with a wedding themed after the end credits of The Life Aquatic with Steve Zissou instead, as it’s the same exact thing, except that it’s by the water, and they end up on a boat, which would be a great place to hold the reception.
Note from Missy: the only issue being that both of those end credits only have one woman in them, and the day there’s a wedding where the only woman involved is the bride is a day I’ll eat my hat. (Side note: dammit, Hollywood, can we get over the single-woman-in-a-sausage-fest-movie trope already?)