Moving a home through the streets of San Francisco, ski jumping in Germany, hiking the Great Wall in China, visiting a ski resort in Tehran, opening a “hug room” in Rome, taking a vaccination selfie in Spain, surfing in front of Mount Fuji, walking a snow maze in Manitoba, and much more.
A violent eruption spews ash more than a kilometer into the sky above Mount Etna in Sicily on February 23, 2021.
(Marco Restivo / Barcroft Media via Getty)
we have built a robot to send to another planet and programmed it to send radio signal indicating its planned descent and here we are, calling it a heartbeat.
“we have just received confirmation that Perseverance is alive and well” space travel is so full of love for robots but who am i to judge i am crying too
The U.S. Labor Department’s inspector general said this week that roughly $100 million in fraudulent unemployment insurance claims were paid in 2020 to criminals who are already in jail. That’s a tiny share of the estimated tens of billions of dollars in jobless benefits states have given to identity thieves in the past year. To help reverse that trend, many states are now turning to a little-known private company called ID.me. This post examines some of what that company is seeing in its efforts to stymie unemployment fraud.
These prisoners tried to apply for jobless benefits. Personal information from the inmate IDs has been redacted. Image: ID.me
A new report (PDF) from the Labor Department’s Office of Inspector General (OIG) found that from March through October of 2020, some $3.5 billion in fraudulent jobless benefits — nearly two-thirds of the phony claims it reviewed — was paid out to individuals with Social Security numbers filed in multiple states. Almost $100 million went to more than 13,000 ineligible people who are currently in prison.
The OIG acknowledges that the total losses across all states are likely to be in the tens of billions of dollars. Indeed, just one state — California — disclosed last month that hackers, identity thieves and overseas criminal rings stole more than $11 billion in jobless benefits from the state last year. That’s roughly 10 percent of all claims.
Bloomberg Law reports that in response to a flood of jobless claims that exploit the lack of information sharing among states, the Labor Dept. urged the states to use a federally funded hub designed to share applicant data and detect fraudulent claims filed in more than one state. But as the OIG report notes, participation in the hub is voluntary, and so far only 32 of 54 state or territory workforce agencies in the U.S. are using it.
Much of this fraud exploits weak authentication methods used by states that have long sought to verify applicants using static, widely available information such as Social Security numbers and birthdays. Many states also lacked the ability to tell when multiple payments were going to the same bank accounts.
To make matters worse, as the coronavirus pandemic took hold, a number of states dramatically pared back the amount of information required to successfully file a claim for jobless benefits.
77,000 NEW (AB)USERS EACH DAY
In response, 15 states have now allied with McLean, Va.-based ID.me to shore up their authentication efforts, with six more states under contract to use the service in the coming months. That’s a minor coup for a company launched in 2010 with the goal of helping e-commerce sites validate the identities of customers for the purposes of granting discounts for veterans, teachers, students, nurses and first responders.
ID.me says it now has more than 36 million people signed up for accounts, with roughly 77,000 new users signing up each day. Naturally, a big part of that growth has come from unemployed people seeking jobless benefits.
To screen out fraudsters, ID.me requires applicants to supply a great deal more information than previously requested by the states, such as images of their driver’s license or other government-issued ID, copies of utility or insurance bills, and details about their mobile phone service.
When an applicant doesn’t have one or more of the above — or if something about their application triggers potential fraud flags — ID.me may require a recorded, live video chat with the person applying for benefits.
This has led to some fairly amusing attempts to circumvent their verification processes, said ID.me founder and CEO Blake Hall. For example, it’s not uncommon for applicants appearing in the company’s video chat to don disguises. The Halloween mask worn by the applicant pictured below is just one example.
Image: ID.me
Hall said the company’s service is blocking a significant amount of “first party” fraud — someone using their own identity to file in multiple states where they aren’t eligible — as well as “third-party” fraud, where people are tricked into giving away identity data that thieves then use to apply for benefits.
“There’s literally every form of attack, from nation states and organized crime to prisoners,” Hall said. “It’s like the D-Day of fraud, this is Omaha Beach we’re on right now. The amount of fraud we are fighting is truly staggering.”
According to ID.me, a major driver of phony jobless claims comes from social engineering, where people have given away personal data in response to romance or sweepstakes scams, or after applying for what they thought was a legitimate work-from-home job.
“A lot of this is targeting the elderly,” Hall said. “We’ve seen [videos] of people in nursing homes, where folks off camera are speaking for them and holding up documents.”
“We had one video where the person applying said, ‘I’m here for the prize money,'” Hall continued. “Another elderly victim started weeping when they realized they weren’t getting a job and were the victim of a job scam. In general though, the job scam stuff hits younger people harder and the romance and prize money stuff hits elderly people harder.”
Many other phony claims are filed by people who’ve been approached by fraudsters promising them a cut of any unemployment claims granted in their names.
“That person is told to just claim that they had their identity stolen when and if law enforcement ever shows up,” Hall said.
REACTIONS FROM THE UNDERGROUND
Fraudsters involved in filing jobless benefit claims have definitely taken notice of ID.me’s efforts. Shortly after the company began working with California in December 2020, ID.me came under a series of denial-of-service (DDoS) attacks aimed at knocking the service offline.
“We have blocked at least five sustained, large-scale DDoS attacks originating from Nigeria trying to take our service down because we are blocking their fraud,” Hall said.
Mentions of id.me in cybercrime forums, Telegram channels throughout 2020. Source: Flashpoint-intel.com
Asked about the efficacy of those methods, Hall said while his service can’t stop all phony jobless claims, it can ensure that a single scammer can only file one fraudulent application.
“I’d say in this space it’s not about being perfect, but about being better,” he said.
That’s something of an understatement in an era when being able to limit each scammer to a single fraudulent claim can be considered progress. But Hall says one of the reasons we’re in this mess is that the states have for too long relied on data broker firms that sell authentication services based on static data that is far too easy for fraudsters to steal, buy or trick people into giving away.
“There’s been a real shift in the market from data-centric identity verification to verifying through something you have and something you are, like a phone or face or ID,” he said. “And those aren’t in the provenance of the incumbents, the data-centric brokers. When there have been so many data breaches that the toothpaste is basically out of the tube, you need a full orchestration platform.”
A BETTER MOUSETRAP?
Collecting and storing so much personal data on tens of millions of Americans can make one an attractive target for hackers and ID thieves. Hall says ID.me is certified against the NIST 800-63-3 digital identity guidelines, employs multiple layers of security, and fully segregates static consumer data tied to a validated identity from a token used to represent that identity.
“We take a defense-in-depth approach, with partitioned networks, and use very sophisticated encryption scheme so that when and if there is a breach, this stuff is firewalled,” he said. “You’d have to compromise the tokens at scale and not just the database. We encrypt all that stuff down to the file level with keys that rotate and expire every 24 hours. And once we’ve verified you we don’t need that data about you on an ongoing basis.”
With such a high percentage of jobless claims now being filed by identity thieves, many states have instituted new fraud filters that ended up rejecting or delaying millions of legitimate claims.
Jim Patterson, a Republican assemblyman from California, held a news conference in December charging that ID.me’s system “continually glitches and rejects legitimate forms of identification, forcing applicants to go through the manual verification process which takes months.”
ID.me says roughly eight users will pass through its automated self-serve flow for every one user who needs to use the video chat method to verify their identity.
“The majority of legitimate claimants pass our automated, self-serve identity verification process in less than five minutes,” Hall said. “For individuals who fail this process, we are the only company in the United States that offers a secure, video chat based method of identity verification to ensure that all users are able to prove their identity online.”
Hall says his company also exceeds the industry standard in terms of validating the identities of people with little or no credit history.
“If you just rely on credit bureaus or data brokers for this, it means anyone who doesn’t have a credit history doesn’t get through,” he said. “And that tends to have a disproportionate effect on those more likely to be less affluent, such as minority communities.”
A century ago, Russia was enduring a terrible famine, the Irish Free State was created, U.S. President Warren Harding was inaugurated, the Tulsa race massacre took place in Oklahoma, a new machine called a “dishwasher” was introduced, New York’s Madison Square Garden was home to “the world’s largest indoor swimming pool,” and much more. Please take a moment to look back at some of the events and sights from around the world 100 years ago.
A row of New York City policemen, photographed wearing sets of portable traffic lights. At the time, effective methods of traffic control were still being developed in big cities in the U.S.
(General Photographic Agency / Getty)
Really Simple Syndication - RSS use has been in steady decline since the death of Google Reader. These days even when a feed is provided, it’s often just a snippet that lets you read the first 50 or 100 words before you have to visit the site to finish what you started. I refuse to read or subscribe to sites that provide only partial feeds. Every word of the last ten posts is served up in the feed here. Since I have no advertisements or appeals for donations, I don’t care how or where you read my content. I use a now-defunct application to collect and read blog feeds, but you can use your favorite news reader. Enjoy!
Navigation Tip - I made a minor content adjustment on the site this week. All of the last 20 posts appear on the main page, but eventually these scroll off as new posts appear on top. The older posts are still available, though: click the archive link at the top of every page to reveal quick links to all of the content.
Twitter Purgatory - my twelve-hour banishment has ended and I have full access to my account again. I want to be clear that while I thought the action was silly, I understand it was the result of overzealous AI bots. I harbor no hard feelings, but you may have noticed that I almost never mention my Twitter account here. That’s because I’ve been on the fence for a long time about whether or not to close the account. I don’t want to create links here that won’t age well. This is my motivation to flee social media in all forms…
About - just an FYI that I have recently updated the About page with a few brief details about myself. It’s my least favorite thing to do as I hate writing about me. It’s not an easy thing to do and I cringe whenever I read a bio written in third person. “Jeff enjoys German beer and fly fishing” is an abomination that I won’t countenance. But a few details seemed necessary so I put something there and I probably won’t update it again anytime soon.
The SID chip is one of the most hallowed components in electronic equipment: housed inside the original Commodore 64, it is responsible for some of the most iconic chiptunes ever made. The Commodore 64 & 128 GOLD SID Sound Interface Device is a direct replacement for the original SID chip, keeping the rare and valuable original safe while accurately replicating its output and performance.
Installation involves desoldering the original chip, which requires some advanced soldering skills – but there are many tutorials online to help, and it can be done, however scary it may seem! The SID chip in the Commodore 64 came in two versions – the MOS 6581 and the 8580 – both of which can be replaced by this neat board.
In my never-ending quest to get more opensource stuff around rocketry going, I posted over at the FreeCAD forum about a concept for a Rocketry-themed workbench. It seemed like it would be a good fit and an interesting area for developers to work on. I explained how I'd often use Openrocket to design and simulate a rocket and then, quite often, find myself copying dimensions into FreeCAD or sometimes OpenSCAD to create 3D models of nosecones, centring rings, fins and more. I suggested that automating some of this would be a good starting point, but that the sky was the limit; ultimately it is *possible* to recreate something like Openrocket or Rocksim in FreeCAD, with the benefit that any design is then auto-magically in a CAD format ready for other techniques to be applied. FreeCAD already has a wealth of tools that can be leveraged for rocket parts, from 3D printing mesh tools, to CNC toolpath work, through snazzy computational fluid dynamics tools, FEA and more. Hell, there's even a glider workbench for designing gliding parachutes within the community, so the possibilities really are limitless!
What happened next was utterly brilliant and a testament to opensource culture! People got interested and one chap, Dave (AKA DavesRocketShop), got stuck in. Within a couple of weeks we had a rocketry workbench released with some useful tools already up and running. There is also a tantalising roadmap of exciting features to come.
The first release, when installed as a workbench, gives you some parametric tools to create rocket airframe components. Nosecones in all the common geometries are there: Haack, VK, Ogive, Parabolic series, Power series and so on. Transitions offer a similar range of choices, and there are tools to create body tubes, centring rings and bulkheads. It's really straightforward to begin assembling great parts that can be adorned and edited using the entire suite of tools FreeCAD has to offer. Fins, in the first release, are limited to trapezoidal designs, but there are a wealth of parameters to play with and it's trivial to auto-generate aerofoil geometries and the like. The rocket workbench is already available via the addon manager, which makes installation and updating a breeze, and on the forum thread the mighty Dave is occasionally releasing manual-install versions for those of us keen to try experimental features.
Dave is currently working on importing parts from the Openrocket databases. This is really interesting, as all the common manufacturers' parts are in there, so if you want a particular Estes replacement nosecone, or to upscale one for some crazy project, it will be but a click away! I'm most excited that there is a plan to allow the workbench to import Openrocket, Rocksim and RASAero files; whilst that may be some way off, it really opens up some astonishing opportunities for rocket designers. Also, having got to grips with the technical drawing workbench in FreeCAD, I am looking forward to being able to document my designs well, important if one day I go for L3 HPR certification.
Finally, if you are new to FreeCAD but want to give it a try, I'm writing a recurring series on FreeCAD for Hackspace Magazine, which started back in issue 37 here.
Viruses evolve. It’s what they do. That’s especially true for a pandemic virus like SARS-CoV-2, the one behind COVID-19. When a population lacks immunity and transmission is extensive, we expect viral mutations to appear frequently simply due to the number of viruses replicating in a short period of time. And the growing presence of immune individuals means that the viruses that can still transmit in these partially immune populations will be favored over the original version. Sure enough, that’s what we’ve been seeing, as news reports warn of the appearance of novel variants (viruses with several mutations, making them distinct from their ancestors) and strains (variants that are confirmed to behave differently from the original).
To be clear, mutations are random errors that occur when a virus reproduces. In the case of SARS-CoV-2, which has an RNA genome based on adenine, cytosine, guanine and uracil, sometimes mistakes happen. Maybe an adenine gets swapped with a uracil (a substitution mutation that could also occur with any of the other bases), or perhaps one or more bases get inserted or deleted. If a mutation actually changes the protein encoded by that part of the RNA sequence, it’s referred to as a non-synonymous mutation. Mutations that do not result in a protein change are referred to as synonymous, or silent, mutations.
Luckily, the mutation rate of coronaviruses generally is relatively slow, due to a proofreading ability in the virus that allows for some correction of replication mistakes. Typically SARS-CoV-2 will accumulate only two mutations per month among its genome’s 30,000 bases; that’s half the rate of an influenza virus, and a quarter of the rate of HIV. But with more than 100 million people infected to date, non-synonymous mutations are inevitable. The bigger issue is determining which mutations actually provide the virus enough of an advantage to increase its spread through the population.
Fortunately, at this point we have the knowledge to answer some of the most pressing questions.
When did different strains of the SARS-CoV-2 virus start appearing?
The first mutation we learned about was the D614G mutation, first reported in March 2020. When a mutation causes a change in the protein sequence, its name refers to the ancestral amino acid, its location and then the new amino acid. This mutation changed the amino acid aspartate (abbreviated as D) at the 614th position in the virus spike protein into glycine (G). Because the spike protein enables the virus to bind to host cells, the change is significant; mutations here could help it to bind more efficiently to the host receptor (called ACE2).
However, it’s not clear yet if that’s the case with D614G. The authors of a paper describing the mutation suggested that the rapid spread of variants carrying this mutation, combined with in vitro analyses of viral behavior and clinical data involving people infected with it, meant that D614G provided a selective advantage to these variants, and the mutation was therefore spreading. Others were not convinced, suggesting an alternative rationale for the dominance of the D614G mutation: the shift in the geographic focus of the epidemic, from China to Europe (especially Italy) to the U.S. In China, the original version of the virus, with aspartate (D) in the 614th position, was most prevalent; in Europe, and subsequently in the U.S., it was the new one, with glycine. With additional exported cases including the D614G mutation, this variant may have become the major lineage due merely to luck or the “founder effect”— meaning that the lineage dominated simply because it was the first one to populate that area — rather than a selective advantage. We’re still not sure.
Since September 2020, a number of other SARS-CoV-2 mutations have been identified around the globe. Some of the variants currently circulating in the population do seem to be more evolutionarily fit than their older counterparts, with improved transmission, lethality or both. Now that the virus has spread almost everywhere, when we see new variants overtake a population, it is much more likely to be due to selection — improved fitness — than the founder effect. This is supported by the fact that many of the variants show signs of convergent evolution: The viruses have independently landed on the same mutations that make them more transmissible, giving them an evolutionary advantage over preexisting strains.
What are some of the more notable strains?
The most well-known is probably variant B.1.1.7, first detected in the U.K. in September of 2020. Here the name derives from a system called Pango lineages, where A and B represent early lineages, and the numbers after the letter represent branches from those lineages. B.1.1.7 contains 23 mutations that differentiate it from its wild-type ancestor. A study suggested that the variant is 35%-45% more transmissible and that it was likely introduced into the U.S. via international travel at least eight times. While increased transmission — but not lethality — seems to be a hallmark of this variant, one group has reported that B.1.1.7 may also be associated with an increased risk of death.
Meanwhile, in December 2020, another variant dubbed B.1.351 was first identified in South Africa, and soon after a variant called P.1 was found in Manaus, Brazil, during a second surge of infections in that city. (Manaus had already been hard hit in April, and officials thought herd immunity had been reached.) Both of these variants also seem to make the virus easier to catch.
As they all appear to have a transmission advantage over the established lineages, we will likely see these variants continue to spread. Recent work predicted that the B.1.1.7 variant could become the dominant lineage and account for more than half of identified cases in the U.S. in mid-March.
How do these variants differ from the original virus?
As with D614G, many mutations involve changes to the spike protein. A key mutation in B.1.1.7 is called N501Y, which changes the residue of an amino acid named asparagine (N) to one named tyrosine (Y) at the 501st position along the spike protein. Exactly why this may make the virus more transmissible isn’t yet understood; perhaps it allows for better binding to host cells, higher amounts of the virus in the respiratory system, improved viral replication, a combination of these, or something else entirely. Experiments to figure this out are underway in labs around the globe.
B.1.351 and P.1 have the N501Y mutation and another one called E484K, which switches glutamic acid (E) for lysine (K) at spike protein position 484. This mutation is especially concerning as it seems to be better at escaping antibody-mediated immunity: It makes it more difficult for the body’s antibodies to bind to the spike protein and thus prevent the virus from entering cells.
In addition to these specific changes, the B.1.351 and P.1 lineages also have approximately 20 additional unique mutations each. If both variants are indeed better than older viruses at escaping immunity, this could explain some of the second surge in Manaus, and it may leave previously infected individuals at risk of reinfection by these variants. Indeed, several case reports in Brazil have already documented such reinfections with variants containing the E484K mutation.
Are vaccines still effective against these variants?
Yes, but perhaps not quite as effective.
In a pair of recent manuscripts, the developers of the Moderna and Pfizer-BioNTech vaccines examined whether antibodies from vaccinated individuals would neutralize (prevent from replicating) viruses containing mutated forms of the SARS-CoV-2 spike protein in cell culture. The antibodies functioned well against a virus carrying the B.1.1.7 mutations, but neutralization was reduced when the B.1.351 mutations were introduced. However, both companies expect the vaccines to work well even against this variant; the lower level of protective antibodies is still considered enough to prevent infection. The Novavax and Johnson & Johnson vaccines, not yet available in the U.S., also appeared to be less effective against the B.1.351 and P.1 variants in trials.
Boosters tailored to new variants may be required in the future, and many are already in development.
Where did these new versions of the virus come from?
We’re not sure. For the B.1.1.7 strain in the U.K., there don’t appear to be any clear “intermediate” viral variants to demonstrate that this strain evolved from the prior dominant strains, accumulating mutations slowly over time in a stepwise pattern.
Instead, scientists are beginning to think there may have been a massive evolutionary leap, which could have occurred in a known individual suffering a lingering infection. A case report from December 2020 describes a SARS-CoV-2 infection in a man who was severely immunocompromised. Over time, scientists found that the population of viruses he harbored underwent “accelerated viral evolution,” likely due to the inability of his immune system to keep the virus in check. When examining the specific mutations, doctors spotted both N501Y and E484K — also part of the B.1.351 and P.1 variants that showed up around the same time, even though the man didn’t have either variant himself.
Now imagine this process happening again and again, around the globe. It only takes one variant replicating in the right person and in the right setting to take off and spread in the population.
What are we doing in the U.S. to find and stop these variants?
Not as much as we need to do, but more than we were doing. As of February 7, 2021, the U.S. ranked 36th in the world in terms of sequencing our viral isolates, carrying out genomic analyses of only 0.36% of our confirmed cases. For comparison, the U.K. sequences approximately 10% of its cases, and Denmark 50%. The Biden administration has increased sequencing goals dramatically and earmarked additional funds for viral sequencing.
As far as stopping them is concerned, we must continue to do what we’ve been doing all along: wear masks, maintain social distancing, stay home, wash hands. We can now also add getting vaccinated as soon as a vaccine is available to you. This is important even if the variants reduce vaccine effectiveness somewhat, as at least the B.1.351 and P.1 variants seem to do — decreased effectiveness is still better than no effectiveness, and even a vaccine that is less effective at preventing infection can still protect against serious illness.
The key is to provide less tinder for the fire: Reduce susceptible hosts for the variants and stop their replication by following basic public health interventions and getting vaccinated. When the virus chances upon beneficial mutations, it’s as if it won the lottery; as virologist Angela Rasmussen suggests, we need to “stop selling it tickets.”
The planets orbiting the young star TOI 451 should be useful for astronomers working on the evolution of atmospheres on young planets. This is a TESS find, three planets tracked through their transits and backed by observations from the now retired Spitzer Space Telescope, with follow-ups as well from Las Cumbres and the Perth Exoplanet Survey Telescope. TOI 451 (also known as CD-38 1467) is about 400 light years out in Eridanus, a star with 95% of the Sun’s mass, some 12% smaller and rotating every 5.1 days.
That rotation is interesting, as it’s more than five times faster than our Sun rotates, a marker for a young star, and indeed, astronomers have ways of verifying that the star is only about 120 million years old. Here the Pisces-Eridanus stream, only discovered in 2019, becomes a helpful factor. A stream of stars forms out of gravitational interactions between our galaxy and a star cluster or dwarf galaxy, pulling stars out of their original orbits to form an elongated flow.
Named after the two constellations in which the bulk of its stars reside, the Pisces-Eridanus stream is actually some 1,300 light years in length and as seen from Earth extends across fourteen different constellations. And while Stefan Meingast (University of Vienna) and team, who discovered the stream, pegged its age as somewhat older, follow-up work by Jason Curtis at Columbia University (New York) determined that the stream was 120 million years old.
Stars of the same age with a common motion through space occur in several forms. A stellar association is a loose grouping of stars, with a common origin although now gravitationally unbound and moving together (I’m simplifying here, to be sure, because there are a number of sub-classifications of stellar associations). A moving group is still coherent, but now the stars are less obviously associated as the formation ages. The Ursa Major moving group is the closest one of these to Earth. A stellar stream like the Pisces-Eridanus stream has been stretched out by tidal forces, a remnant fragment of a dwarf galaxy now torn apart and gradually dispersing.
Image: The Pisces-Eridanus stream spans 1,300 light-years, sprawling across 14 constellations and one-third of the sky. Yellow dots show the locations of known or suspected members, with TOI 451 circled. TESS observations show that the stream is about 120 million years old, comparable to the famous Pleiades cluster in Taurus (upper left). Credit: NASA GSFC.
As with stellar moving groups we’ve looked at before, the Pisces-Eridanus stream seems to feature many stars that share common traits of age and metallicity. TESS comes into its own when studying a system like TOI 451 because its measurements of stars in the Pisces-Eridanus stream show strong evidence of starspots (rotating in and out of view and thus causing the kind of brightness variation TESS was made to measure). Starspots are prominent in younger stars, as is fast rotation. And all of that helps narrow down the possible age of the TOI 451 system.
The three planets around TOI 451 have a story of their own to tell. With temperatures ranging from 1200° C to 450° C, these are super-Earths, with orbits of 1.9 days, 9.2 days and 16 days. Despite the intense heat of the star, the researchers believe these worlds will have retained their atmospheres, making them laboratories for theories of how atmospheres evolve and what their properties should be. Already we know there is a strong infrared signature between 12 and 24 micrometers, which suggests the likely presence of a debris disk. The paper describes it this way, likening the age of stars in the Pisces-Eridanus stream to that found in the Pleiades:
The frequency of infrared excesses decreases with age, declining from tens of percent at ages less than a few hundred Myr to a few percent in the field (Meyer et al. 2008; Siegler et al. 2007; Carpenter et al. 2009). In the similarly-aged Pleiades cluster, Spitzer 24µm excesses are seen in 10% of FGK stars (Gorlova et al. 2006). This excess emission suggests the presence of a debris disk, in which planetesimals are continuously ground into dust…
And in this case we have a debris disk with a temperature near or somewhat less than 300 K.
Image: This illustration sketches out the main features of TOI 451, a triple-planet system located 400 light-years away in the constellation Eridanus. Credit: NASA’s Goddard Space Flight Center.
A comparatively close system like this one should help us piece together the chemical composition of the planetary atmospheres as well as evidence of clouds and other features, with follow-up studies through instruments like the James Webb Space Telescope using transmission spectroscopy. Adding to the interest of TOI 451 is the fact that there may be a distant companion star, TOI 451 B, identified based on Gaia data on what appears to be a faint star about two pixels away from TOI 451. Or perhaps this is a triple system, as the paper suggests:
We note that Rebull et al. (2016), in their analysis of the Pleiades, detect periods for 92% of the members, and suggest the remaining non-detections are due to non-astrophysical effects. We have suggested TOI 451 B is a binary, which we might expect to manifest as two periodicities in the lightcurve. We only detect one period in our lightcurve; however, a second signal could have been impacted by systematics removal or be present at smaller amplitude than the 1.64 day signal, and so we do not interpret the lack of a second period further.
The difficulty of data collection here is apparent:
TOI 451 and its companion(s) are only separated by 37 arcseconds, or about two TESS pixels, so the images of these two stars overlap substantially on the detector. The light curve of the companion TOI 451 B is clearly contaminated by the 14x brighter primary star.
The non-standard methods used to extract the light curve of the companion star(s) are explained in the paper, and I’ll send you there if interested in the details. Note, too, the useful synergy of the TESS and Gaia datasets, which allowed the age of this system to be constrained and also resulted in the discovery of the three planets. As always, rapid growth in our datasets and cross correlations between them trigger the prospect of continuing discovery.
In connection with this work, I should also mention another finding from THYME, the TESS Hunt for Young and Maturing Exoplanets, out of which grew the TOI 451 work. HD 110082 b is a Neptune-class world of approximately 3.2 Earth radii, assumed to be about 11 times as massive as the Earth in a 250 million year old stellar system, another useful find when it comes to examining planet formation and evolution. The F-class primary is about 343 light years away.
The paper is Newton et al., “TESS Hunt for Young and Maturing Exoplanets (THYME). IV. Three Small Planets Orbiting a 120 Myr Old Star in the Pisces–Eridanus Stream,” Astronomical Journal Vol. 161, No. 2 (14 January 2021). Abstract / Preprint. The paper on HD 110082 b is Tofflemire et al., “TESS Hunt for Young and Maturing Exoplanets (THYME) V: A Sub-Neptune Transiting a Young Star in a Newly Discovered 250 Myr Association,” accepted at the Astronomical Journal (preprint).
There’s not a whole lot on TLA+ technique out there: all the resources are either introductions or case studies. Good for people starting out, bad for people past that. I think we need to write more intermediate-level stuff, what Ben Kuhn calls Blub studies. Here’s an attempt at that.
Most TLA+ properties are invariants, properties that must be true for every state in the behavior. If we have a simple counter:
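One possible such counter is sketched below (the exact module, including the wrap from 3 back to 1, is an assumption for illustration rather than the post's original code):

---- MODULE counter ----
EXTENDS Integers
VARIABLE x

Init == x = 1
Next == x' = IF x = 3 THEN 1 ELSE x + 1

Spec == Init /\ [][Next]_x
====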
Then an invariant of this spec is x \in 1..3. One property that is not an invariant is that x is always increasing. Going from x = 2 to x = 1 would violate this, but those are both individually valid states. It’s only the transition that is invalid. This property on a transition is called an action property and is written [][x' > x]_x.
(x' > x) is the statement that the value of x in the next state (x') is greater than the value of x in the current state. [](x' > x) would say that this is always true: every state must have a greater value for x than the state before.
We cannot write that directly, since it would be violated by stutter steps. Since TLA+ is stutter-invariant, we should be able to insert a stutter anywhere without breaking the property. We instead want [](x' > x \/ UNCHANGED x), that either x is increasing or doesn’t change. TLA+ provides the shorthand [][A]_x syntax, finally giving us [][x' > x]_x.
In the Toolbox, action properties go in the “Temporal Properties” box. If you’re running from the command line, they’re PROPERTYs in the config file.
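For example, a command-line run could register the action property with a config along these lines (a hypothetical counter.cfg sketch; it assumes the module defines Prop == [][x' > x]_x):

\* counter.cfg (hypothetical)
SPECIFICATION Spec
PROPERTY Prop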
Use Cases
Conditional Properties
Using =>, we can make “conditional” properties that only must hold if the precondition is true. For example, if our system has a “kill switch”, we can say some value should not change while the system is disabled:[1]
[][disabled => UNCHANGED x]_<<disabled, x>>
Or we can say that certain things must change in lockstep:
\* If x changes, y must become the old value of x
[][x' /= x => y' = x]_<<x, y>>
We can also use actions as the preconditions for invariants, where we only need the invariant to hold under those specific actions. If we split our spec into Machine and World actions, we might want that only Machine actions maintain the invariant:
\* Next == Machine \/ World
\* Inv is some invariant
[][Machine => Inv]_vars
We can use actions on both sides of the condition, such as to say that if a certain change happens, it must have been “because of” a certain action:
\* owner \in [Credit -> User]
\* offers \in SUBSET (User \X User \X Credit)
Accept(from, to, credit) == \* (User, User, Credit)
  /\ <<from, to, credit>> \in offers
  /\ offers' = offers \ {<<from, to, credit>>}
  /\ owner' = [owner EXCEPT ![credit] = to]

\* If ownership changes from A to B
\* It's because B accepted an offer from A
ValidChange(credit) ==
  LET co == owner[credit]
  IN co /= co' => Accept(co, co', credit)

\* All changes in the system are valid changes
ChangeProp == [][\A c \in Credits: ValidChange(c)]_owner
State Transitions
A server has three states: Offline, Booting, and Online. We can say the server cannot go directly from Offline to Online:
\* status \in {Offline, Booting, Online}
[][status = Offline => status' /= Online]_status
If we have many different possible state transitions, we can abstract the valid transitions into an operator:
\* States == {A, B, C, D}
\* state \in States
Transitions == {
  <<A, B>>, <<A, C>>,
  <<B, D>>, <<C, A>>, <<C, B>>
}

[][<<state, state'>> \in Transitions]_state
We can combine transitions with conditional properties:
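For instance (a sketch reusing the status and disabled variables from the earlier examples, not the original post’s own example), we could require that the server only drops straight from Online to Offline while the kill switch is engaged:

[][(status = Online /\ status' = Offline) => disabled]_<<status, disabled>>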
In my experience, action properties are less common than invariants and more common than liveness properties. However, liveness properties are generally “more important” to a spec being valid than the action properties are.[3] Action properties are very useful, but not critical in the same way.
The x in [A]_x can be any state expression, not just a variable. Usually people add a helper operator vars == <<x, y, z, ...>> and then write [A]_vars. The only requirement is that all variables appearing in A must also appear in the subscript.
Action properties can only describe pairs of states. You can’t natively write a property that spans three states without adding an auxiliary variable.
There’s a second form of action syntax: <<A>>_x means A /\ x' /= x, or “A happens and x changes.” Just as we can check “eventually predicate P is true” with <>P, we can express “eventually action A happens” with <><<A>>_x. Unfortunately, TLC cannot check this. It can only check properties of the form []<><<A>>_x, or “A happens infinitely often”.
TLC can also check properties of the form <>[][A]_x, or “eventually, the action property [][A]_x always holds.” I used that to form a conditional property in this post:
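Reconstructed from the English reading below (rather than copied from that post), the property is roughly:

<>[][World => Safe]_vars => []<>Safe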
That property, in English, is “If it’s eventually the case that World actions only happen when Safe holds, then even if World breaks Safe, we will always eventually return to Safe holding.”
I don’t think I’ve actually used <>[][A]_x in a real world spec, though, just in examples.
Most specs are written as Init /\ [][Next]_vars. TLC can check [][Next]_vars just like any other action property! This is the launching point into refinement, or using an entire specification as the checked property of another spec. That’ll have to be the topic of another post.
Thanks to Andrew Helwer for feedback.
1. UNCHANGED x is syntactic sugar for x' = x.
2. I could have just written lock' = NULL instead of lock' \in {lock, NULL}. [P]_lock means P \/ UNCHANGED lock, which would have implicitly permitted the thread lock to not change. But that would “offload” spec logic to the syntax, which isn’t clear to readers.
3. Invariants and action properties only cover bad states and transitions. They don’t cover “the system actually does what you need it to do.” That’s liveness.
LibreSSL in DragonFly has had a minor update, from 3.2.3 to 3.2.4, thanks to Daniel Fojt. It’s a bugfix update, but I’m using it as a chance to remind everyone you can use LibreSSL for everything in dports, too.
Most people find two problems when it comes to flip-dot displays: where to buy them and how to drive them. If you’re [Pierre Muth] you level up and add the challenge of driving them fast enough to rival non-mechanical displays like LCDs. It was a success, resulting in a novel and fast way of controlling flip-dot displays.
Patti Grace Smith Fellowship — Empowering Black Excellence in Aerospace
First Mode is honored and delighted to be a part of the Patti Grace Smith Fellowship program. Created in 2020, the program is designed to combat the longstanding and well-quantified under-representation of Black and African-American employees in the US aerospace workforce.
Originally from Schenectady, NY, Niya Hope-Glenn is now a first-year student at Howard University studying Chemical Engineering.
Originally from Ethiopia by way of Peabody, MA, Hermon Kaysha is now a first-year student at the Massachusetts Institute of Technology studying Computer Science.
About the Patti Grace Smith Fellowship
Though the aerospace industry has made important strides since the days when African-Americans were legally barred from studying in many universities and holding many positions in the aerospace workforce, there is still a great deal of progress to be made. While African-Americans make up 13.4% of the US population and 15.3% of American undergraduate and graduate students, a recent study conducted by Aviation Week Network found that only 6% of US Aerospace and Defense workers and only 3% of aerospace executives are Black.
Patti Grace Smith Fellows each earn a challenging internship at one of the nation’s leading aerospace firms, a living wage, two hand-picked personal mentors, and a cash grant of approximately $2,000 to go towards professional or school expenses.
“The Patti Grace Smith Fellowship exists to serve extraordinarily talented students who possess everything that is needed to thrive in aerospace, but who come from a community where talent has long been overlooked by our industry,” said Col. B. Alvin Drew, Jr., (USAF, Ret.), a two-time Space Shuttle astronaut and a co-founder of the Fellowship. “These new Patti Grace Smith Fellows inspire us with their drive, their intellect, their work ethic, and their deep commitment to advancing the state of the aerospace industry – not only in terms of our science and engineering, but also in terms of how we cultivate and honor talent in our workforce. The level of interest we received from applicants, and the caliber of the students who’ve made it through three intense rounds of selection show beyond a shadow of a doubt the incredible impact that Black excellence can, has, and will make in aerospace.”
The program, which is based closely on the award-winning Brooke Owens Fellowship, was founded by Drew, undergraduate student and Brooke Owens Fellowship alumna Khristian Jones, aerospace engineer Tiffany Russell Lockett, and aerospace executive Will Pomerantz. The program’s name was chosen to honor a beloved aerospace industry leader who overcame a system of legalized racial segregation: as a young girl, Patti Grace Smith (then Patricia Jones) was one of a dozen Black students to integrate Tuskegee High School, and was a plaintiff in a landmark case that integrated the public schools in Alabama, as upheld by the Supreme Court of the United States. Her illustrious career was highlighted by her role leading the Federal Aviation Administration’s Office of Commercial Space Transportation in the early days of the nation’s space renaissance.
I was ordered to serve 12 hours in Twitter purgatory today for a tweet that violated its terms. I had posted what I thought was just a little funny, but I guess not everyone saw it that way.
I’m not complaining, I understand it’s their sandbox and they can do with it whatever they want. I understand even more that there is a lot of misinformation out there (about everything) and Twitter is just trying to scrub it up as best it can.
They told me that if I removed the tweet they would unlock my account for regular use again after 12 hours, so I did.
This hasn’t caused me to look deep inside to see why I’m such a terrible person, I still think what I posted was funny - and perhaps a backhanded slap at the crazy conspiracy theories embraced as “facts” by millions.
Still, I will forgo anything that even looks like humor on Twitter from here on out.
By the way, here is the message I received from Twitter and in it you can see the horrible thing I posted that got me locked out of my account.
But be warned, you might be shocked and for that, I apologize.
It often goes unmentioned that protons, the positively charged matter particles at the center of atoms, are part antimatter.
We learn in school that a proton is a bundle of three elementary particles called quarks — two “up” quarks and a “down” quark, whose electric charges (+2/3 and −1/3, respectively) combine to give the proton its charge of +1. But that simplistic picture glosses over a far stranger, as-yet-unresolved story.
In reality, the proton’s interior swirls with a fluctuating number of six kinds of quarks, their oppositely charged antimatter counterparts (antiquarks), and “gluon” particles that bind the others together, morph into them and readily multiply. Somehow, the roiling maelstrom winds up perfectly stable and superficially simple — mimicking, in certain respects, a trio of quarks. “How it all works out, that’s quite frankly something of a miracle,” said Donald Geesaman, a nuclear physicist at Argonne National Laboratory in Illinois.
Thirty years ago, researchers discovered a striking feature of this “proton sea.” Theorists had expected it to contain an even spread of different types of antimatter; instead, down antiquarks seemed to significantly outnumber up antiquarks. Then, a decade later, another group saw hints of puzzling variations in the down-to-up antiquark ratio. But the results were right on the edge of the experiment’s sensitivity.
So, 20 years ago, Geesaman and a colleague, Paul Reimer, embarked on a new experiment to investigate. That experiment, called SeaQuest, has finally finished, and the researchers report their findings today in the journal Nature. They measured the proton’s inner antimatter in more detail than ever before, finding that there are, on average, 1.4 down antiquarks for every up antiquark.
The data immediately favors two theoretical models of the proton sea. “This is the first real evidence backing up those models that has come out,” said Reimer.
One is the “pion cloud” model, a popular, decades-old approach that emphasizes the proton’s tendency to emit and reabsorb particles called pions, which belong to a group of particles known as mesons. The other model, the so-called statistical model, treats the proton like a container full of gas.
Planned future experiments will help researchers choose between the two pictures. But whichever model is right, SeaQuest’s hard data about the proton’s inner antimatter will be immediately useful, especially for physicists who smash protons together at nearly light speed in Europe’s Large Hadron Collider. When they know exactly what’s in the colliding objects, they can better piece through the collision debris looking for evidence of new particles or effects. Juan Rojo of VU University Amsterdam, who helps analyze LHC data, said the SeaQuest measurement “could have a big impact” on the search for new physics, which is currently “limited by our knowledge of the proton structure, in particular of its antimatter content.”
Three’s Company
For a brief period around half a century ago, physicists thought they had the proton sorted.
In 1964, Murray Gell-Mann and George Zweig independently proposed what became known as the quark model — the idea that protons, neutrons and related rarer particles are bundles of three quarks (as Gell-Mann dubbed them), while pions and other mesons are made of one quark and one antiquark. The scheme made sense of the cacophony of particles spraying from high-energy particle accelerators, since their spectrum of charges could all be constructed out of two- and three-part combos. Then, around 1970, researchers at Stanford’s SLAC accelerator seemed to triumphantly confirm the quark model when they shot high-speed electrons at protons and saw the electrons ricochet off objects inside.
But the picture soon grew murkier. “As we started trying to measure the properties of those three quarks more and more, we discovered that there were some additional things going on,” said Chuck Brown, an 80-year-old member of the SeaQuest team at the Fermi National Accelerator Laboratory who has worked on quark experiments since the 1970s.
Scrutiny of the three quarks’ momentum indicated that their masses accounted for a minor fraction of the proton’s total mass. Furthermore, when SLAC shot faster electrons at protons, researchers saw the electrons ping off of more things inside. The faster the electrons, the shorter their wavelengths, which made them sensitive to more fine-grained features of the proton, as if they’d cranked up the resolution of a microscope. More and more internal particles were revealed, seemingly without limit. There’s no highest resolution “that we know of,” Geesaman said.
The results began to make more sense as physicists worked out the true theory that the quark model only approximates: quantum chromodynamics, or QCD. Formulated in 1973, QCD describes the “strong force,” the strongest force of nature, in which particles called gluons connect bundles of quarks.
QCD predicts the very maelstrom that scattering experiments observed. The complications arise because gluons feel the very force that they carry. (They differ in this way from photons, which carry the simpler electromagnetic force.) This self-dealing creates a quagmire inside the proton, giving gluons free rein to arise, proliferate and split into short-lived quark-antiquark pairs. From afar, these closely spaced, oppositely charged quarks and antiquarks cancel out and go unnoticed. (Only three unbalanced “valence” quarks — two ups and a down — contribute to the proton’s overall charge.) But physicists realized that when they shot in faster electrons, they were hitting the small targets.
Yet the oddities continued.
Self-dealing gluons render the QCD equations generally unsolvable, so physicists couldn’t — and still can’t — calculate the theory’s precise predictions. But they had no reason to think gluons should split more often into one type of quark-antiquark pair — the down type — than the other. “We would expect equal amounts of both to be produced,” said Mary Alberg, a nuclear theorist at Seattle University, explaining the reasoning at the time.
Hence the shock when, in 1991, the New Muon Collaboration in Geneva scattered muons, the heavier siblings of electrons, off of protons and deuterons (consisting of one proton and one neutron), compared the results, and inferred that more down antiquarks than up antiquarks seemed to be splashing around in the proton sea.
Proton Parts
Theorists soon came out with a number of possible ways to explain the proton’s asymmetry.
One involves the pion. Since the 1940s, physicists have seen protons and neutrons passing pions back and forth inside atomic nuclei like teammates tossing basketballs to each other, an activity that helps link them together. In mulling over the proton, researchers realized that it can also toss a basketball to itself — that is, it can briefly emit and reabsorb a positively charged pion, turning into a neutron in the meantime. “If you’re doing an experiment and you think you’re looking at a proton, you’re fooling yourself, because some of the time that proton is going to fluctuate into this neutron-pion pair,” said Alberg.
Specifically, the proton morphs into a neutron and a pion made of one up quark and one down antiquark. Because this phantasmal pion has a down antiquark (a pion containing an up antiquark can’t materialize as easily), theorists such as Alberg, Gerald Miller and Tony Thomas argued that the pion cloud idea explains the proton’s measured down antiquark surplus.
Several other arguments emerged as well. Claude Bourrely and collaborators in France developed the statistical model, which treats the proton’s internal particles as if they’re gas molecules in a room, whipping about at a distribution of speeds that depend on whether they possess integer or half-integer amounts of angular momentum. When tuned to fit data from numerous scattering experiments, the model divined a down-antiquark excess.
The models did not make identical predictions. Much of the proton’s total mass comes from the energy of individual particles that burst in and out of the proton sea, and these particles carry a range of energies. Models made different predictions for how the ratio of down and up antiquarks should change as you count antiquarks that carry more energy. Physicists measure a related quantity called the antiquark’s momentum fraction.
When the “NuSea” experiment at Fermilab measured the down-to-up ratio as a function of antiquark momentum in 1999, their answer “just lit everybody up,” Alberg recalled. The data suggested that among antiquarks with ample momentum — so much, in fact, that they were right on the end of the apparatus’s range of detection — up antiquarks suddenly became more prevalent than downs. “Every theorist was saying, ‘Wait a minute,’” said Alberg. “Why, when those antiquarks get a bigger share of the momentum, should this curve start to turn over?”
As theorists scratched their heads, Geesaman and Reimer, who worked on NuSea and knew that the data on the edge sometimes isn’t trustworthy, set out to build an experiment that could comfortably explore a larger antiquark momentum range. They called it SeaQuest.
Junk Spawned
Long on questions about the proton but short on cash, they started assembling the experiment out of used parts. “Our motto was: Reduce, reuse, recycle,” Reimer said.
They acquired some old scintillators from a lab in Hamburg, leftover particle detectors from Los Alamos National Laboratory, and radiation-blocking iron slabs first used in a cyclotron at Columbia University in the 1950s. They could repurpose NuSea’s room-size magnet, and they could run their new experiment off of Fermilab’s existing proton accelerator. The Frankenstein assemblage was not without its charms. The beeper indicating when protons were flowing into their apparatus dated back five decades, said Brown, who helped find all the pieces. “When it beeps, it gives you a warm feeling in your tummy.”
Gradually they got it working. In the experiment, protons strike two targets: a vial of hydrogen, which is essentially protons, and a vial of deuterium — atoms with one proton and one neutron in the nucleus.
When a proton hits either target, one of its valence quarks sometimes annihilates with one of the antiquarks in the target proton or neutron. “When annihilation occurs, it has a unique signature,” Reimer said, yielding a muon and an antimuon. These particles, along with other “junk” produced in the collision, then encounter those old iron slabs. “The muons can go through; everything else stops,” he said. By detecting the muons on the other side and reconstructing their original paths and speeds, “you can work backwards to work out what momentum fraction the antiquarks carry.”
Because protons and neutrons mirror each other — each has up-type particles in place of the other’s down-type particles, and vice versa — comparing the data from the two vials directly indicates the ratio of down antiquarks to up antiquarks in the proton — directly, that is, after 20 years of work.
In 2019, Alberg and Miller calculated what SeaQuest should observe based on the pion cloud idea. Their prediction matches the new SeaQuest data well.
The new data — which shows a gradually rising, then plateauing, down-to-up ratio, not a sudden reversal — also agrees with Bourrely and company’s more flexible statistical model. Yet Miller calls this rival model “descriptive, rather than predictive,” since it’s tuned to fit data rather than to identify a physical mechanism behind the down antiquark excess. By contrast, “the thing I’m really proud of in our calculation is that it was a true prediction,” Alberg said. “We didn’t dial any parameters.”
In an email, Bourrely argued that “the statistical model is more powerful than that of Alberg and Miller,” since it accounts for scattering experiments in which particles both are and aren’t polarized. Miller vehemently disagreed, noting that pion clouds explain not only the proton’s antimatter content but various particles’ magnetic moments, charge distributions and decay times, as well as the “binding, and therefore existence, of all nuclei.” He added that the pion mechanism is “important in the broad sense of why do nuclei exist, why do we exist.”
In the ultimate quest to understand the proton, the deciding factor might be its spin, or intrinsic angular momentum. A muon scattering experiment in the late 1980s showed that the spins of the proton’s three valence quarks account for no more than 30% of the proton’s total spin. The “proton spin crisis” is: What contributes the other 70%? Once again, said Brown, the Fermilab old-timer, “something else must be going on.”
At Fermilab, and eventually at Brookhaven National Laboratory’s planned Electron-Ion Collider, experimenters will probe the spin of the proton sea. Already Alberg and Miller are working on calculations of the full “meson cloud” surrounding protons, which includes, along with pions, rarer “rho mesons.” Pions don’t possess spin, but rho mesons do, so they must contribute to the overall spin of the proton in a way Alberg and Miller hope to determine.
Fermilab’s SpinQuest experiment, involving many of the same people and parts as SeaQuest, is “almost ready to go,” Brown said. “With luck we’ll take data this spring; it will depend” — at least, partly — “on the progress of the vaccine against the virus. It’s sort of amusing that a question this deep and obscure inside the nucleus is depending on the response of this country to the COVID virus. We’re all interconnected, aren’t we?”
Since before the beginning of the Space Age, engineers have sought to develop increasingly efficient propulsion systems. Chemical propulsion systems that burn a fuel and oxidizer to produce thrust were the first to be developed. With their high thrust-to-mass ratios (i.e. a small size engine can produce a large amount of thrust), liquid fueled chemical rockets were the first to allow us to overcome the bonds of gravity and pass the threshold into space.
The Nuclear Option
The most efficient chemical propulsion systems today burn liquid hydrogen and oxygen and have an Isp of up to about 450 seconds. Called “specific impulse”, Isp is a measure of the efficiency of a propulsion system. It can be thought of as the amount of thrust you get from a unit mass of propellant. For those who like to work in Imperial units, with an Isp of 450 seconds, for example, burning one pound-mass (0.45 kilograms) of propellant each second yields a thrust of 450 pounds-force (2,000 newtons). Multiplying the Isp by the acceleration due to gravity also gives an engine’s exhaust velocity. As every rocket scientist knows, a higher exhaust velocity translates directly into a higher final velocity for a rocket of a given mass and propellant load. Alternatively, a higher exhaust velocity can mean a larger payload for a given rocket.
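A quick numerical check of that relationship, using the standard-gravity conversion constant and the Isp values quoted in this article:

```python
G0 = 9.81  # standard gravity in m/s^2, the conversion constant between Isp and velocity

def exhaust_velocity(isp_seconds):
    """Effective exhaust velocity (m/s) from specific impulse given in seconds."""
    return isp_seconds * G0

# 450 s is the hydrogen/oxygen figure above; ~1,000 s is the nuclear thermal figure discussed below.
for isp in (450, 1000):
    print(f"Isp = {isp:4d} s  ->  exhaust velocity ~ {exhaust_velocity(isp):,.0f} m/s")
```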
Higher exhaust velocities can be achieved by increasing an engine’s operating temperature, since the velocity of the exhaust products is proportional to the square root of temperature. But limitations in the strength of the available materials used in an engine’s combustion chamber restrict how high those temperatures can go. The best of today’s chemical propulsion systems are already close to the theoretical maximum Isp. Use of the most energetic chemical propellant combination, liquid hydrogen and fluorine, could provide a modest increase in engine Isp, but the engineering difficulties of using dangerously reactive liquid fluorine offset any performance advantages. Today, rocket engine developers are more concerned with maximizing an engine’s thrust-to-weight ratio and minimizing manufacturing costs. Significant new developments in engine efficiency lie elsewhere.
Another family of propulsion systems offering significantly higher Isp is based on ion or plasma technology. Here electromagnetic fields are used to accelerate an ionized working fluid to very high velocity (see “The First Ion Engine Test in Space”). Although such systems can have an Isp of thousands of seconds, they have minuscule thrust-to-mass ratios. Once the mass of the power generation system required to run these engines is added, they are capable of only tiny rates of acceleration. While these propulsion systems do have their applications, those seeking high acceleration rates combined with high Isp have to look elsewhere.
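To put “tiny rates of acceleration” in numbers, here is a rough sketch with assumed figures (a ~0.1 newton thruster on a 1,000 kilogram spacecraft is representative of a modern gridded ion engine, not any specific mission):

```python
thrust_n = 0.1                 # assumed ion engine thrust, newtons
spacecraft_mass_kg = 1000.0    # assumed spacecraft mass, including its power system

accel = thrust_n / spacecraft_mass_kg      # resulting acceleration, m/s^2
g = 9.81
seconds_per_km_s = 1000.0 / accel          # time to accumulate 1 km/s of delta-v

print(f"acceleration ~ {accel:.1e} m/s^2 (~{accel / g:.0e} g)")
print(f"time to gain 1 km/s ~ {seconds_per_km_s / 86400:.0f} days of continuous thrusting")
```

A chemical rocket, by contrast, must have a thrust-to-weight ratio greater than one just to leave the pad, so it accelerates its payload at a rate tens of thousands of times higher.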
A schematic of a nuclear thermal propulsion system. Click on image to enlarge. (LANL)
One of the most promising possibilities within the reach of our technology is nuclear thermal propulsion. Unlike a chemical rocket, which uses combustion to heat the reaction mass (in this case, the combustion products) that is expelled to generate thrust, a nuclear rocket uses a nuclear reactor to superheat a lightweight propellant – ideally hydrogen. Although chemical and nuclear engines share similar engineering limitations in terms of operating temperatures and pressures, the much lower molecular weight of hydrogen compared with the combustion products of a hydrogen-oxygen rocket engine (which are largely water vapor) results in much higher exhaust velocities for a given engine temperature and pressure. This yields an Isp that can be on the order of 1,000 seconds. But can such an engine be built?
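The molecular-weight argument can be made concrete. For an ideal gas the exhaust velocity scales roughly as √(T/M), so at the same chamber temperature a pure-hydrogen exhaust (M ≈ 2) beats a mostly-steam exhaust (M ≈ 18) by about a factor of three; in practice chemical engines run hotter and somewhat fuel-rich, which is why the real Isp gap is closer to a factor of two. A back-of-envelope sketch:

```python
import math

def relative_exhaust_velocity(temperature_k, molar_mass_g_mol):
    """Relative exhaust velocity ~ sqrt(T / M); the constant factors cancel in a ratio."""
    return math.sqrt(temperature_k / molar_mass_g_mol)

# Same notional chamber temperature, different exhaust molecular weights.
t_chamber = 3000.0                                       # kelvin, illustrative only
v_h2    = relative_exhaust_velocity(t_chamber, 2.0)      # pure hydrogen (nuclear thermal)
v_steam = relative_exhaust_velocity(t_chamber, 18.0)     # water vapor (hydrogen/oxygen)

print(f"hydrogen vs. steam exhaust velocity ratio: {v_h2 / v_steam:.1f}x")
```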
The Birth of Nuclear Rocketry
Not long after the first successful atomic bomb tests, scientists and engineers began to ponder the potential peaceful uses of this potent source of energy. As early as 1944, Stanislaw Ulam and Frederic de Hoffmann at the Los Alamos Scientific Laboratory (LASL – today the Los Alamos National Laboratory) considered how nuclear detonations might be used for space travel. While such a scheme was later studied in detail as part of ARPA’s (Advanced Research Project Agency) Project Orion and the British Interplanetary Society’s Project Daedalus, it was felt that a slower, controlled release of nuclear energy would be more suitable.
In July of 1946, North American Aviation and the Douglas Aircraft Company’s Project RAND each delivered secret reports on their internal nuclear propulsion studies to the USAF. These landmark reports identified the “heat transfer” nuclear rocket (where a reactor heats a working fluid which acts as the reaction mass) as the most promising form of nuclear propulsion. Such a propulsion system could, in principle, be incorporated into an ICBM to lob nuclear warheads across the globe. But despite the glowing reports and the promise of the technology, it was recognized that there were still many technical issues that needed to be resolved.
The American-educated Chinese scientist Hsue-Shen Tsien proposed a nuclear engine or “thermal jet” concept in 1948 during a lecture at MIT.
Not aware of the earlier secret studies, a group of engineers from the Applied Physics Laboratory at Johns Hopkins University openly published the results of their own independent studies in January 1947. In 1948 and 1949, two British space enthusiasts, A.V. Cleaver and L.R. Shepherd, also published a series of groundbreaking papers in the Journal of the British Interplanetary Society on the same topic. But even before this series of papers was published, an American-educated Chinese scientist named Hsue-Shen Tsien (or Qian Xuesen, using the more modern Chinese transliteration, who later went on to head the Chinese missile and space programs) gave a talk at the Massachusetts Institute of Technology about nuclear powered “thermal jets”. All of these studies concluded that nuclear propulsion seemed to be viable. And given the number of people who independently arrived at the same conclusions, it was clear that the USAF would not have a monopoly in nuclear propulsion studies.
But all this early enthusiasm for nuclear rockets was dampened by a subsequent technical report from North American Aviation, which concluded that nuclear-powered ICBMs were not practical. North American’s scientists felt that the reactor of a nuclear rocket would have to operate at the fantastically high temperature of 3,400 K – many times that of existing reactors. No known material could withstand such temperatures and maintain the strength required in a rocket engine. With this and other problems identified, interest in nuclear rockets faded noticeably as the 1950s began.
An Idea Resurrected
But not everyone agreed with the apparently bleak prospects for nuclear rockets. While development of nuclear rocket engines was largely abandoned after the North American report, work on nuclear-powered jet aircraft engines continued. In the early 1950s Robert W. Bussard, who had been working on these nuclear aircraft propulsion systems at the AEC’s (US Atomic Energy Commission) Oak Ridge National Laboratory in Tennessee, reexamined nuclear rockets. Based on his work, he concluded that the earlier reports were far too pessimistic and that nuclear rockets were probably practical after all. Bussard felt that they could effectively compete with chemical rockets, especially on long flights with heavy payloads. Based on Bussard’s calculations and salesmanship, the USAF decided in 1955 to reopen studies on the concept for possible use in ICBMs.
The work of Robert W. Bussard in the early 1950s showed that nuclear rocket propulsion was practical, leading the USAF to start work on the concept in 1955.
As part of the new AEC-USAF program, the Nuclear Propulsion Division headed by Raemer E. Schreiber was formed at LASL. A similar group was also formed at the AEC’s Lawrence Radiation Laboratory operated by the University of California. But budget cutbacks in June 1956 resulted in the elimination of duplicate efforts and a consolidation of the various nuclear propulsion groups. The result was Livermore taking on the task of developing a nuclear ramjet under the code name “Project Pluto”. The nuclear rocket program went to Los Alamos under the code name “Project Rover”.
Raemer E. Schreiber giving a briefing on the Kiwi-A reactor in 1959. (LANL)
A series of paper studies with fanciful names like “Dumbo” (an engine reactor design) and “Condor” (a proposed nuclear rocket) was carried out. Eventually a reactor design named “Kiwi” was selected as a first step toward a nuclear rocket engine. Like its flightless namesake from New Zealand, the Kiwi test reactors would not fly but were nonetheless essential to the development of a practical nuclear rocket engine.
Kiwi-A was a series of “battleship” test reactors that would use compressed hydrogen gas to perform ground-based studies of potential nuclear rocket engine components. In the first Kiwi reactor, 960 graphite fuel plates infused with uranium oxide (UO2, which was converted to uranium carbide during manufacture) were stacked along with 240 plain graphite plates inside a 43-centimeter-thick annular graphite reflector that could operate at temperatures as high as 3,000 K. Not only could the graphite withstand temperatures up to 3,300 K before beginning to weaken, it was also an excellent moderator that could slow fission-producing neutrons so they could maintain a nuclear chain reaction inside the core. The reactor core itself was 84 centimeters in diameter and 137 centimeters long. In the center of the reactor was a 46-centimeter-diameter “island” filled with heavy water (D2O), which not only served as a moderator (further reducing the mass of uranium-235 required to go critical) but also cooled the movable control rods located there.
A cutaway drawing of the Kiwi-A reactor. Click on image to enlarge. (LANL)
The reactor and its graphite reflector were housed inside an aluminum pressure vessel. The nozzle, manufactured by Rocketdyne, was a double-walled, water-cooled design made of nickel. This nozzle did not include a bell since the test objectives centered on the performance of the reactor itself. The first Kiwi-A reactor was intended to produce 70 megawatts of thermal power at a gaseous hydrogen flow rate of 3.2 kilograms per second for 300 seconds.
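Those design numbers hang together on a back-of-envelope basis: the reactor power divided by the hydrogen flow rate gives the energy added per kilogram of propellant, and dividing by hydrogen’s specific heat gives a rough temperature rise. The constant specific heat used below (about 14.3 kJ/kg·K) is the room-temperature value and is only an approximation, since hydrogen’s heat capacity grows considerably at high temperature, so treat the result as an order-of-magnitude check rather than a prediction of the exhaust temperature:

```python
power_w      = 70e6     # design thermal power, watts
flow_kg_s    = 3.2      # gaseous hydrogen flow rate, kg/s
cp_h2_j_kg_k = 14.3e3   # specific heat of H2 near room temperature (approximation)

energy_per_kg = power_w / flow_kg_s            # joules added per kilogram of propellant
delta_t       = energy_per_kg / cp_h2_j_kg_k   # rough temperature rise, kelvin

print(f"energy added ~ {energy_per_kg / 1e6:.1f} MJ/kg")
print(f"rough temperature rise ~ {delta_t:,.0f} K above the inlet temperature")
```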
The Kiwi-A Tests
But even before the first Kiwi-A was built, there were already changes in the wind. Towards the end of 1957 it had become apparent to USAF planners that the Atlas missile would provide the US with an ICBM capability without the need to resort to exotic technologies like nuclear rockets. The infant nuclear rocket program would have died for a second time were it not for the launch of Sputnik on October 4, 1957 (see “Sputnik: The Launch of the Space Age”). The competitive pressures produced by the new Space Race meant that advanced technologies like nuclear rockets would be aggressively developed to give the country an edge in space exploration.
With the formation of NASA on October 1, 1958, the joint AEC-USAF nuclear rocket program was transformed into a joint AEC-NASA activity. While no longer needed for defense, nuclear rockets were ideal for space applications. In August of 1960 the joint AEC-NASA Space Nuclear Propulsion Office (SNPO) was formed with Harold B. Finger (who seven years later would become the Associate Administrator of NASA) as its manager. The goal of SNPO was to develop nuclear rockets that would aid the country’s effort to beat the Soviet Union to the Moon and planets.
The test firing of the Kiwi-A reactor on July 1, 1959. The yellow color of the plume is caused by a methane burner igniting the hydrogen exhaust of the reactor. (LANL)
While all these administrative changes were taking place, engineers were busy preparing for the first actual hardware tests. The first Kiwi-A reactor firing took place on July 1, 1959 at the Nuclear Rocket Development Station in Jackass Flats, Nevada, about 150 kilometers outside of Las Vegas. It fired successfully for five minutes, producing 70 megawatts of thermal power. But the test was not without its problems. During the test, a graphite closure plate above the reactor’s central island shattered, with its debris being ejected from the engine. The damage caused by the incident altered the flow of gaseous hydrogen through the reactor, allowing temperatures to reach as high as 2,900 K. A post-mortem inspection of the engine showed extensive cracking in the structures holding the reactor components in place, caused by the unintended high radial thermal gradient inside the reactor. The graphite-rich fuel plates also experienced more hydrogen corrosion than expected. Despite the issues uncovered, the Kiwi-A test was considered a success with many practical lessons learned.
Diagram showing the modified Kiwi-A’ reactor. Click on image to enlarge. (NASA)
The next reactor, called Kiwi-A’ (pronounced Kiwi-A Prime), incorporated a number of improvements based on the experience with its predecessor. Instead of the fuel plates used earlier, the UO2 fuel was embedded in a graphite matrix that was extruded into long, rod-like cylinders, which were then coated with niobium carbide to help reduce hydrogen corrosion. Six of these 23-centimeter-long fuel elements were placed into each of the seven holes of a graphite module to produce a 137-centimeter-long fuel module, and these modules were placed inside the reactor core.
The Kiwi-A’ reactor in its test cell ready for its firing on July 8, 1960. (LANL)
The first attempt to start up the Kiwi-A’ was aborted when problems with the data channels resulted in an automatic reactor shutdown. A second attempt was also aborted when the methane flare system designed to ignite the hydrogen exiting the engine nozzle failed to operate. The next attempt on July 8, 1960 was successful, with the thermal power output reaching 88 megawatts and the hydrogen exiting the nozzle at an average temperature of 2,178 K during an almost six-minute run. But as with the Kiwi-A test, problems were encountered. Major power output perturbations were noted during the firing, with debris seen exiting the nozzle. A subsequent inspection of the reactor showed that while the majority of the fuel elements had survived the test with little or no damage, 2.5% of them showed moderate to severe thermal damage from graphite corrosion and blistering of the niobium carbide coatings. Four of the fuel modules also experienced transverse cracking and failed, accounting for the changes in power output and the ejected debris that were observed.
The Last Kiwi-A Test
A third and final Kiwi-A reactor, designated Kiwi-A3, was built to further refine the Kiwi-A’ reactor design with the objective of operating at a 92 megawatt power level for 250 seconds. As before, improvements were made to this reactor based on previous experience. The short, 23-centimeter fuel elements were replaced with longer 69-centimeter elements. Different types of graphite components using various manufacturing techniques were also employed to determine which would provide the best performance inside the reactor. The various fuel module components underwent more extensive inspections to help eliminate those with hidden flaws.
The first test firing was attempted on October 7, 1960 but was called off because the winds were blowing in the wrong direction for the fallout detectors deployed on the test range. The next attempt on October 10 started successfully, with the plan to fire at half power for 106 seconds before ramping up to full power for 250 seconds. During the half-power portion of the test, hydrogen exhaust temperatures reached 1,833 K, or 305 K hotter than intended. After 159 seconds at half power, the reactor output was ramped up to an expected 92 megawatt level for the full-power portion of the firing. Again, the hydrogen exhaust temperature was much higher than expected, and the gaseous hydrogen flow rate was increased to 3.8 kilograms per second in order to maintain an exhaust temperature of 2,173 K. During the full-power plateau, several swings in temperature and thermal power output with an amplitude of up to 13 megawatts were observed.
The Kiwi-A3 during its test firing on October 10, 1960. (LANL)
Afterwards, it was discovered that a neutron monitoring instrument calibration error had led to an underestimation of the reactor’s thermal power output in real time. Instead of running at the indicated 90 megawatt level for 259 seconds, the reactor was actually running at 112.5 megawatts – 122% of the engine’s power rating. As before, a post-mortem inspection showed damage to the reactor components, including the same type of damage to the fuel elements. But overall, the damage to Kiwi-A3 was not as extensive as that found in Kiwi-A’.
Despite the many problems uncovered during the Kiwi-A firings, the reactor tests largely met their objectives, demonstrating that a high-power-density nuclear reactor could be controlled and could heat hydrogen gas to high temperatures. With more design and engineering changes to come, efforts turned to the Kiwi-B series of reactors, which would use liquid instead of gaseous hydrogen as a coolant.
Here is a documentary film produced by the Los Alamos Scientific Laboratory in the early 1960s about Project Rover giving a primer on nuclear thermal propulsion with footage from some early Kiwi reactor tests.
Related Reading
“The First Nuclear Reactor in Orbit”, Drew Ex Machina, April 3, 2015 [Post]
“The First Ion Engine Test in Space”, Drew Ex Machina, July 20, 2014 [Post]
General References
William R. Corliss, Nuclear Propulsion for Space, AEC, 1967
J.L. Finseth, “Rover Nuclear Rocket Engine Program: Overview of Rover Engine Tests – Final Report”, Prepared for NASA MSFC by Sverdrup Technology, Inc., February 1991
Daniel R. Koenig, “Experience Gained from the Space Nuclear Rocket Engine Program (Rover)”, LA-10062-H, Los Alamos National Laboratory, May 1986
The perfect Father’s Day gift for this stay-at-home year has been an Ooni pizza oven.
A bit dusty on the top from errant flour and a little sooty below from flour dust combusted at high temperatures.
It conveniently hooks up to the natural gas line for the grill, so I never have to worry about exchanging propane tanks, and it gets blazing hot.
It’s a little hard to read, but that thermometer says 443°C (830°F). I wrote the desired temperature of the baking stone on the face in Sharpie because it’s so hot I kept second-guessing myself.
I use the classic pizza dough recipe from the Ooni app, scaled here for a 200 g dough ball, which is perfect for a single pizza. (I typically make dough for 4–5 pizzas.)
* This assumes instant dry yeast and a 4-hour proof at room temperature (70°F, 21°C). The aforementioned app adjusts for different yeasts, proof temps, and times.
I use weights not measures for multiple reasons: it’s more accurate, it’s easier to weigh ingredients than to properly measure, and it’s easier to scale whole numbers than fractions (English-units, amirite?).
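The Ooni recipe itself isn’t reproduced here, but the weigh-and-scale workflow is easy to illustrate. The baker’s percentages below are typical Neapolitan-style values I’ve assumed for this sketch, not the app’s exact numbers:

```python
# Scale a pizza dough by weight using baker's percentages (flour = 100%).
# These percentages are assumed, typical Neapolitan-style values, not the Ooni app's.
BAKERS_PERCENT = {
    "flour": 1.000,
    "water": 0.620,   # ~62% hydration
    "salt":  0.025,
    "yeast": 0.002,   # instant dry yeast, short room-temperature proof
}

def scale_dough(ball_grams=200, balls=4):
    """Return ingredient weights (grams) for the requested number of dough balls."""
    total = ball_grams * balls
    flour = total / sum(BAKERS_PERCENT.values())
    return {name: round(flour * pct, 1) for name, pct in BAKERS_PERCENT.items()}

print(scale_dough(ball_grams=200, balls=4))
# e.g. {'flour': 485.7, 'water': 301.1, 'salt': 12.1, 'yeast': 1.0}
```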
Directions
Measure 2/3 of the water into a bowl and get the other 1/3 boiling on the stove (with some margin for boil-off). Mixing boiling water with cold tap water in a 1:2 ratio generates the correct temperature for the yeast and avoids having to measure and adjust the water temp (a quick check of this ratio follows these directions).
Weigh the flour directly into the stand mixer’s bowl.
Whisk the salt and yeast into the lukewarm water.
Affix the dough hook attachment, get the mixer going on the lowest setting, slowly pour in the water mixture, and then let it knead on low speed for roughly 7 minutes. It should have a nice smooth ball shape, be not too sticky, and seem plenty stretchy.
Cover the dough in plastic wrap so it doesn’t dry out and set it somewhere to proof. (A light dusting of flour will keep the wrap from sticking.) A 4-hour proof is perfect for making dough at lunch to have for dinner.
When the dough is done proofing, it’s time to form dough balls. Divide the dough into 200-g balls. I cut it with a bench scraper and verify with a scale. Forming the dough balls means folding the sides back underneath while rotating it around, like this:
Put the finished dough balls on a cookie sheet, re-cover with plastic wrap, and let rise for at least another 30 minutes.
The video above also has good tips for hand-stretching the dough, but the bottom line is that it just takes practice. A key is having room-temperature dough. Early on, before I started making dough myself, I bought some from the grocery store, but I didn’t give it enough time to come up to temp, which made it too stiff and rubbery to properly stretch.
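For completeness, here is the quick arithmetic behind the 1:2 boiling-to-tap mix in the first direction above. The tap-water temperature is an assumption (about 15°C); the point is simply that the weighted average lands in lukewarm, yeast-friendly territory:

```python
def mixed_temperature(boiling_c=100.0, tap_c=15.0, ratio_boiling=1, ratio_tap=2):
    """Weighted-average temperature of mixing boiling and tap water by mass."""
    total = ratio_boiling + ratio_tap
    return (boiling_c * ratio_boiling + tap_c * ratio_tap) / total

print(f"~{mixed_temperature():.0f} °C")   # ~43 °C with 15 °C tap water
```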
Pizza Sauce
2 T olive oil
2 cloves crushed garlic
800 g San Marzano tomatoes (crushed or whole)
2 t sugar
1 t salt
2 t basil
1 t oregano
several twists of fresh-ground pepper
Directions
Fry the garlic in the oil over medium heat to release its aroma, but don’t brown.
Add the rest of the ingredients.
If using whole tomatoes, I use an immersion blender to break them up and mix the ingredients.
Simmer on low heat for at least 20 minutes. I also check the seasoning.
Pizza Topping and Prep
The pizzas cook quickly, but I find the dough can tend to stick if it’s formed and left to sit too long, so if I don’t have enough help to keep an assembly line going I tend to make them one-by-one. This also gives the oven temperature time to recover between bakes.
I start with a thin layer of sauce, then cheese, then any toppings. We’ve used everything from sliced buffalo mozzarella to pre-grated “pizza cheese”. It was all pretty good. For toppings we’ve tried pepperoni (natch), sausage (pre-cook it), prosciutto, pineapple, mushrooms (slice thin), olives, and more. It’s pretty hard to mess this step up other than by overloading things. This is a plain cheese, with sliced mozzarella, on a well-dusted peel, with an olive-oil brushed and salted edge.
I typically get the oven going just before I start forming the first pizza, to give it at least 10 minutes to warm up. I want the baking stone to be around 450°C (about 840°F). Then just before I slide the pizza in I turn it down to medium. This keeps the crust and cheese from burning in the radiant heat while the stone cooks and browns the crust from below.
I usually give the pizza 30-40 seconds before I start to rotate it, a quarter-turn at a time, using a pair of long tongs. It’s usually completely done cooking in just under two minutes.
I pull the pizza out and turn the oven back up to high so it can recover while I take the pizza inside to cool and slice and make or fetch another.
Using the peel takes a bit of practice but the keys are using enough flour so nothing sticks and being fluid and confident in the jerking motion that slides the pizza off into the oven or that pushes the peel under a finished pizza.
The results are delicious.
“00” flour has high protein content, is ground very fine, and isn’t stocked in my local grocery stores—not even Whole Foods. It was available on Amazon, and although it’s imported from Italy and more expensive than normal all-purpose flour, a dough ball still only contains 43¢ of flour. (I have no idea how the modern economy works to make that possible.) “00” flour is important and contributes to the je ne sais quoi of wood-fired-style pizza. The first time I ran out I used bread flour, which the packaging assured me was “good for pizza dough”. This is false; it was doughy and tough. ↩
We had a bit of a lengthy Council Meeting on Monday, with an Agenda heavy with Public Hearings and Opportunities to be Heard. It is telling about the unpredictability of City Council that we had two really big topics – the annual budget and a change to the Secondary Suites program that effectively impacts every single-family detached home in the City – and no-one came to speak to those, but we had three hours of delegations on the renovation of a butcher shop. This job is strange.
Financial Plan, 2021 – 2025
The 5-year financial plan bylaws have been sketched up, and we had an Opportunity to be Heard on the Bylaws. We received two pieces of correspondence asking questions, but nobody came to delegate or clarify points in the plan. I have a follow-up blog to go into a bit more detail about the feedback we received in the last month or so as the annual budget was finalized, but for now I just want to note it was a challenging year to do this work. I have to send kudos to our staff for getting this work done swiftly, for the extraordinary efforts they took to engage the public in the budget process, and for the quality of the presentations that Council received, which allowed us to understand the finances well enough not just to vote on the plan, but to explain to our constituents how and why we made the decisions we did.
Council gave the Bylaw three readings and adopted it. On to 2021!
We then had three Public Hearings:
Zoning Amendment Bylaw (Secondary Suite Requirements) No. 8154, 2021
We are updating and streamlining our Secondary Suite program. There are currently more than 800 authorized secondary suites in ostensibly “single family homes” in the City (likely more, as there are undoubtedly many unauthorized units in the City). Most new houses are built with a secondary suite, or to accommodate a secondary suite if the owner so decides. With changes in the BC Building Code in 2019, staff saw a need to revise our Zoning Bylaw to make compliance and approval easier. In short, these changes streamline and simplify approvals of secondary suites.
There are slightly different standards for secondary suites in buildings that already exist and for new secondary suites built after September 2021. Baseline life safety standards exist for both, but as we can be more prescriptive on new builds we are adding further livability requirements, such as limiting ventilation between the main and secondary living units and assuring tenants have control of their own utilities and heating.
In the end, this Zoning Bylaw amendment will replace more complicated documents and processes while assuring life safety and livability standards are met, and it avoids some redundancy with the BC Building Code. This also requires amendments to our enforcement bylaws.
We received a couple of pieces of correspondence about this, more inquiries about details than opposition or support, and no-one came to speak to the issue. Council voted to support the changes.
Zoning Amendment Bylaw (1135 Tanaka Court) No. 8250, 2021
There is a business that would like to start manufacturing food products that include cannabis ingredients in a Light Industrial property in Queensborough. If there wasn’t cannabis involved, this kind of use would be within zoning and no public hearing would occur, but Provincial Regulations and our own Bylaws around the devil’s cabbage are still peri-prohibitionist. So a zoning language amendment is required to permit this use.
There is no growing of cannabis, or retail sales of the product anticipated at the site. There shouldn’t be any specific odour or security concerns apart from any other light industrial manufacturing site, and their operations will be licensed by Health Canada.
We received one piece of correspondence (from the applicant, in favour), and no-one came to speak to us on this application. Council voted to support it.
Heritage Revitalization Agreement Bylaw (404 Second Street) No. 8235, 2020 and Heritage Designation Bylaw (404 Second Street) No. 8236, 2020
The owner of the little butcher shop that has been (in various forms) on the corner of Second Street and Fourth Ave in the middle of Queens Park for more than 100 years is looking to do some building improvements and expansion to make her business operate better. This includes a small expansion on the Fourth Ave side, the completion of a full basement, and some building restoration and rejuvenation to bring it back to its mid-century glory (but not its original 1920s form). As the current use is non-conforming with the Zoning Bylaw (because the current use is decades older than the Zoning Bylaw itself), this expansion required either a rezoning to formalize that use, or a Heritage Revitalization Agreement, which provides essentially the same function as a rezoning but adds permanent protection of the historic building. The application was for the latter.
We received 79 pieces of correspondence on this (about 55% in favour and 45% against) and had 24 speakers (about evenly split between for and against), with significant overlap between the two. I cannot go through all of the comments (you can watch the hearing if you really want to dig in), but I do want to paraphrase some of the major concerns I heard, with my own (editorializing?) responses.
This is a violation of the rules (OCP, zoning, HRA), or of the spirit of the rules: This application complies with the language and spirit of the Official Community Plan. It is indeed non-conforming with the Zoning Bylaw, but the Heritage Revitalization Agreement is a tool to bring it into compliance. This is the exact purpose of HRAs, and it is being used appropriately here, both under the Local Government Act and our local Bylaws.
Tripling the FSR is too much: The current FSR is 35%, and with the expansion planned the FSR above grade will be 50%. This is in line with building forms common in the neighbourhood. The full FSR will reach 100% only when the full basement is added, and the FSR below grade will have no impact on the street expression or massing of the building. Allowing a full basement for storage and preparation is a reasonable request to me considering it has no impact whatsoever on the neighbourhood or the form of the building.
Parking is insufficient: Yep, we are giving a parking relaxation, recognizing this business has operated at this site for almost 100 years, and in its current setting for about 70 years, and is not noted for creating parking chaos. Second Street is a quiet street with curbside parking almost always available, and this is a local small business that mostly serves local customers in a cycling- and walking-friendly neighbourhood. Parking can’t stop us having good things, or all we will ever have is parking.
Too many alternate uses possible: The current non-conforming retail use of the site is being formalized through the HRA, but the HRA does not fundamentally change the possible uses. As the current non-conforming use predates the zoning bylaw, another form of retail/commercial would be able to operate in the site (pending compliance with business bylaws, health code, etc.) if the butcher shut down even before the HRA. Notably, many people from the neighbourhood suggested a café or expanded retail might not be a bad thing for the community, but a welcome addition. That said, the owner has indicated they plan to operate a butcher and deli similar to the current operation for the foreseeable future.
Why 1950s and not 1920s: Many heritage advocates would prefer if this building were re-shaped to better reflect the pre-1940 fashion of the bulk of the preserved residential houses in the neighbourhood. The problem with this is that most of the building is not from the original 1920s structure but was added in the extensive 1951 renovation; that is the form that exists to be preserved. I was also compelled by some correspondence and the heritage statement that talked about architectural diversity, 1950s examples being an important complement to the pre-war heritage of the neighbourhood, and the importance of the retail strip that used to exist along this block of Second Street before the others were torn down to build modern residential houses. There is a mid-century story being told by this building that is as important from a heritage point of view as the aesthetics of the pre-war period.
What is the heritage win?: Some neighbours feel a lot was given to the landowner here (essentially FSR and reduced setbacks on two sides) for little community benefit. Aside from the community benefit of having this business operating in the community (the value of which many neighbours came to speak to), the HRA will assure permanent preservation of a mid-century small retail space, the last of its type after similar businesses were demolished along this block. Fundamentally, preservation of a building of demonstrated heritage value is the “heritage win” of any HRA.
There were other concerns raised, and indeed many came to speak in support of the project. The discussion even got a little philosophical at times as we delved into the balance between heritage as physical objects and heritage as intangibles (it is, most would agree, both). This was put into context by the final delegate, whose father used to operate the butcher shop in question, and who actually grew up in the attached residential unit.
In the end, I supported the land use question put before us, and was comfortable that the HRA was an appropriate tool appropriately applied in this case. I hope the butcher can continue to provide a valued local service in Queens Park. Council voted unanimously to support the application and gave Third Reading to the Bylaws.
There’s a thing flying around Facebook, and probably other social media sites, that purports to show a panorama of a starry night sky over the Perseverance rover. There are a couple of problems with it.
For one, it’s a fake. The landscape around Perseverance is real, and the sky is real, but it’s an Earth sky, not a Mars sky, and the two have been composited together. How do we know?
If it was dark enough to see all those stars, it would be waaaay too dark to see the ground in front of the rover.
So I checked the NASA website and found the original photo (link):
EDIT: to be perfectly clear, so there’s no confusion: the Martian landscape is a genuine photographic panorama from NASA, which is shown and linked below, and the night sky is a genuine photographic panorama taken from Earth, and the two have been misleadingly composited by a YouTuber, who I am not going to name or link to because I don’t want to promote his work. NASA didn’t fake anything here!
Not only is the composite a fake, it’s a particularly clumsy and hilarious fake. I realized that since the sky above the horizon at any one time is a hemisphere, there is a 50% chance that Mars would be in the sky in the panorama, and I thought that would be pretty hilarious. So I went looking, and I found it. Here’s the proof:
The ecliptic–the plane on which the sun, moon, and all of the planets appear to move across the sky as seen from Earth–goes right by Regulus. There is no bright star at the circled point, so it must be a planet. And I’m certain that the bright “star” in the image is Mars, because it’s red, and because Jupiter hasn’t been by there in a few years–it’s currently on the other side of the sky.
So the composite panorama has the amazing spectacle of Mars in the night sky above…Mars. That’s a pretty spectacular fail.
This composite thing is bogus and stupid. If it comes your way, don’t give it any likes, or any clicks. Put up a link to this post instead. It’s not like Mars isn’t amazing on its own! Reality doesn’t need any enhancement.
The SLS axiomatically cannot provide good value to the US taxpayer. In that regard it has already failed, regardless of whether it eventually manages to limp to orbit with a Falcon Heavy payload or two.
The question here is whether it is allowed to inflict humiliation and tragedy on the US public, who so richly deserve an actual legitimate launch program run by and for actual technical experts.
The best time to cancel SLS was 15 years ago. The second best time is now.
Oh yeah, the disclaimer. I do not speak for my employer. This blog should not be construed as an attack on the rank and file staff who have no control, or who are ideologically motivated, or believe that it’s better than nothing. I know and respect many people who will disagree with at least some of what’s in this blog. My usual fare is more constructive, forward looking essays but occasionally one has to do the dirty work.
Because everyone has a soundcloud now, a recording of this blog can be heard here:
Unlike the mainstream of my recent blog series on popular misconceptions in space journalism, the SLS is often covered accurately, that is to say, negatively, in the mainstream press, at least in recent years. Despite that, the only thing that ever seems to change is that the schedule moves to the right and the budget overruns go up. After the recent SLS Green Run test failure I went looking for an article that dug into the architectural and organizational issues at play, and didn’t find much. This blog, therefore, serves as an annotated index documenting a huge, complex, multileveled and ongoing failure.
How hath SLS offended me – let us count the ways. It is hard to know where to start. The SLS is such a monumental, epochal failure at every possible level that at any level it’s self-similar – a fractal. This post was initially intended to be lean and concise (like the thread it’s based on) but like its subject has ballooned in size and scope beyond all reasonable limits.
Brief history – necessarily incomplete
The Saturn V was the most powerful launch vehicle ever flown, and it took 12 people to the Moon’s surface between 1969 and 1972. It was the crowning pinnacle of a decade of frenzied development under the Apollo program, a monumental achievement and in many ways the best possible rocket to solve the given problem: men on the Moon, waste anything but time.
It did not, however, have a credible path to anything like a sustainable cost so it was summarily canceled and the search for a successor launch system began. Reusable vehicles then, as now, seemed like a good idea and, after the many various stakeholders had had their say, we got the Shuttle.
This alone should be reason for pause. Flying vehicles are by their nature marginal, and if well designed they can do one thing well. As the old saying goes, a turkey is a chicken designed by committee and the Shuttle, serving too many masters, turned out to be incurably problematic.
Awe inspiring, technically reusable, enormous cargo bay for delivering and retrieving satellites and space station parts. The Shuttle promised routine, inexpensive space launch and it never quite got there. The program took more than a decade to develop and by the time it ended 30 years later, none of the initial requirements had been achieved. Launch costs were a hundred times higher than projected, launch cadence 20 times less, and safety… we lost two crews. Far from the official value of “one in a million”, rigorously estimated Shuttle risks ranged from 1 in 10 for the early flights to about 1 in 100 for the later ones. A lot of time and effort went into that improvement but the consensus is that we were lucky to lose only two.
BASE jumping is safer. Much safer. This does not make a good slogan for routine space flight.
How could this be? Does spaceflight necessarily have to be unsafe? No. But aspects of the shuttle design baked in risk at the architectural level and, as a result, that tech stack could never progress to something more reliable than a science experiment.
Starting in the early 1990s, a variety of studies examined re-configuring Shuttle hardware to produce a more conventional launch vehicle, one with a big new upper stage and commensurately impressive launch capability. Remember, even though the Shuttle’s payload capacity was 25 T to LEO, the orbiter itself weighed about 80 T so the stack, even without an upper stage, placed 105 T in LEO, nearly as much as the Saturn V.
For Earth-launched rockets with conventional propellants, 2 or 3 stage rockets can deliver more mass to LEO than the Shuttle’s “stage and a half” design, so the addition of a second stage should see the performance increase to around 200 T to LEO.
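A toy rocket-equation comparison shows why bolting an upper stage onto the stack buys so much. The masses below are purely hypothetical (the same total propellant and structure, just split into one stage versus two), and the 4,400 m/s exhaust velocity is roughly hydrogen/oxygen class:

```python
import math

VE = 4400.0          # m/s, roughly hydrogen/oxygen-class exhaust velocity (assumed)
PAYLOAD = 50.0       # tonnes, held fixed in both cases (hypothetical)

def delta_v(m_initial, m_final, ve=VE):
    """Tsiolkovsky rocket equation."""
    return ve * math.log(m_initial / m_final)

# Single stage: 850 t propellant, 100 t structure, plus the payload.
single = delta_v(850.0 + 100.0 + PAYLOAD, 100.0 + PAYLOAD)

# Same totals split into two stages: 680 t + 80 t, then 170 t + 20 t.
stage2_stack = 170.0 + 20.0 + PAYLOAD                        # what remains after staging
burn1 = delta_v(680.0 + 80.0 + stage2_stack, 80.0 + stage2_stack)
burn2 = delta_v(stage2_stack, 20.0 + PAYLOAD)
two_stage = burn1 + burn2

print(f"single stage: {single:,.0f} m/s   two stages: {two_stage:,.0f} m/s")
```

The two-stage split picks up roughly 2 km/s of delta-v with the same propellant, margin that can instead be traded for a heavier payload, which is the sense in which adding a full second stage raises deliverable mass so sharply.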
This heavy lift launcher design formed the basis of Zubrin and Baker’s Mars Direct concept, where one launch per year could sustain a program of Mars surface exploration by 4-6 people at a time. The SLS can trace its conceptual ancestry, via the Ares V, to the Mars Direct Ares launch vehicle that grew out of the ashes of the 90 Day Report.
We’ve seen how SLS was chimeraed together out of politically valuable pieces of the Shuttle program, but what mission is it designed to serve?
Not every aspect of mission confusion can be placed at the feet of the SLS program. In the early 2000s, Congress switched human space exploration priorities with every new president, leading to a “Lucy and the Football” type situation. Moon? Mars? Asteroid? How about a program that can do none of them?
The problem is that high performance systems, including things that have to fly, can in general do at most one thing well. The Saturn V was pared down, brutally, in answer to the precise requirements of the mission at hand. As Houbolt said “Do we want to go to the Moon or not?” It is not clear that his ideological descendants have been either willing or able to inject such clarity into any subsequent NASA launch vehicle development program.
SLS, together with the Orion space capsule, have no destination. Together, they lack the performance to even get into low lunar orbit and return – something that crewed Soviet rockets could do in the early 70s, though in the event the only passengers were a couple of turtles.
I have participated in several mission development projects, of varying levels of seriousness, such as the Caltech Space Challenge. Even in these hypotheticals where SLS is notionally available to use, even though no-one wanted it, it’s still remarkably hard to shoehorn into any useful human exploration architecture for the Moon, Mars or deep space. While its payload capacity to LEO is marginally higher than Falcon Heavy, its low flight rate, unproven nature, uncertain schedule, and weird political attachments make it way too sporty for anything on the critical path. Even if SLS wasn’t architecturally unsafe, poorly managed, incredibly expensive, a technological dead end, obsolete, and cursed by a low production rate, it would still have nowhere to go.
And so the SLS has been developed for two decades without a destination that could actually drive requirements. This is no way to run a program, and early managers of the SLS program under Obama said as much. In public!
I’m 100% okay with Congress cutting generous checks to politically important constituencies to develop hardware for human space exploration, but maybe, just maybe we should have worked out where we were going and what we were going to do there first? Can we now act surprised that SLS is, at best, an incandescently expensive turkey that’s not much use to anyone?
Every decision was made to reduce cost and risk and had the exact opposite effect
Reconfiguring existing Shuttle hardware appears great on paper, but reality had other ideas. Take away the Shuttle orbiter and the proto-SLS, now called the Ares V as part of George W. Bush’s Constellation Program, was still hobbled by a hydrogen booster stage, giant solid boosters, finicky SSME engines, and a loss of institutional expertise. Reusing Shuttle hardware was supposed to retain the workforce and derisk development, but it’s important to remember that despite flying more than 100 missions, practically none of the Shuttle hardware was reliable enough, at an architectural level, to meet any kind of certification standard.
What does this mean?
If I want to modify a C-172 to fly at 300 kts that’s fine, but I won’t be able to do it without a much more slippery wing with far less forgiving flight characteristics. And if my new plane stalls dirty at more than 61 kts, the FAA simply won’t certify it as a normal single-engine airplane under Part 23 of the FARs. I can slap an experimental sticker on the side and cancel my life insurance, but I can never fool myself that my hotrod Cessna meets basic certification requirements.
The Shuttle hardware stack is no different. In pursuit of ultimate performance and top-down design, we have a case study in Conway’s Law with respect to the major aerospace contractors in 1970, and the tech to match. This was 50 years ago. Can we do better today? Yes, and not by a small amount. Multiple private rocket companies are shipping rocket motors today which are both more reliable and higher performance (ceteris paribus) than the SSME.
The Shuttle parts bin can never be low risk.
The SLS attempts to limit development risk by reusing bits of the Shuttle, which itself often drew from parts of the Saturn V. So the new best rocket will use tech that’s 50 years old. The Shuttle parts, however, are not exactly rock solid reliable. We lost two Shuttles but comprehensive FMEA found thousands of potentially fatal failure modes. Playing whack-a-mole with architecturally unsound design flaws, decades after the designers have retired, at a rate of one per lost mission is no way to run a program.
For example, Challenger was lost because hot gases from the solid boosters escaped through leaky O-rings and damaged the main fuel tank.
Why? Because it was too cold, engineering was ignored in favor of launch fever, and the O-rings weren’t flexible enough when cold.
Why? Because program management was incentivized to subscribe to a version of reality where the Shuttle was a reliable, dispatchable launch system, not a ticking time bomb.
Why? Because the solid boosters had to be integrated from multiple sections and sealed with rubber O-rings.
Why? Because previous launches had seen evidence of O-ring blow by and hadn’t crashed, even though they probably should have.
Why? Because humans normalize deviance.
Why? Because the design needed enormous and unstoppable solid boosters to compensate for the main engine’s low thrust at lift off. Just ask any Shuttle astronaut what they thought of the RTLS abort scenario.
Why? Because the design insisted on a SSTO-like architecture in the belief that it would reduce costs.
Why? Because the Germans had largely left the US rocket development program by this point.
Why? Because veterans of Vanguard thought it was time for Americans to lead the rocket development program.
Why? Because veterans of the airforce X-15 program thought that space planes were superior to conventional rockets.
Why? Because the result of this cascade of design decisions landed at the feet of the execution team and no-one was sufficiently empowered to say “clearly a terrible mistake has been made, this architecture can never achieve its promised performance, let’s start again.”
So they went forth and built the impossible. This is hubris.
For example, the SSME was the first staged combustion cycle engine developed in the US. It too had a series of terrifying “teething” issues, complete with combustion instabilities, cracking turbine blades, and leaking seals. At launch they were operated at up to 113% design power, with no engine-out capability until very late in the launch profile. At the design stage, this made sense – they would be no less reliable than a modern jet engine. Unfortunately, the reliability estimate was off by a factor of a thousand or so, so each re-usable engine had to be meticulously disassembled and reassembled and test fired before every flight, at enormous expense.
Richard Feynman was part of the Rogers Commission investigating the loss of Challenger and ended up publishing a sort of minority report. Due to his prominence and contrarianism, numerous members of the NASA rank-and-file approached him with otherwise obscure or obfuscated information leading him to some insights about the nature of the problem at the managerial level, which only made it into the report’s appendix when Feynman threatened to publicly repudiate the entire process if excluded.
Even at the time of the Rogers Commission in 1986, Feynman’s report pointed out that no SSME ever built had even got close to the design qualification requirement. Ordinarily, this would trigger a ground up redesign and, if necessary, modification or cancellation of the entire program. Instead, program managers routinely shifted the goalposts over objections from engineering and without any kind of rigorous analysis. For example, instead of working out why the engines showed signs of damage after a single full duration test firing, when they were designed to last for dozens of flights between inspections, they decided that anything short of a catastrophic failure after a single flight meant that they were safely within design margin.
The broader point from Feynman’s politically contentious minority report was that he accurately perceived that the Rogers Commission began with its conclusion already decided: namely, that with a year or two of frantic investment in risk mitigation, the Shuttle could be “fixed” and live up to its original promise of routine, affordable, safe space flight. Of course this was axiomatically impossible. Feynman (among thousands of others) recognized this, and recognized that aspect of the Commission for the sham it was. But anyone who has been around major development projects, public or private, knows that no Commission that reports such obvious facts is politically tenable. The reality in 1981, 1986, 2003, and today is that the most technically accurate summation of the Shuttle was that it was fundamentally ill-conceived and ill-executed, and not just irredeemably unsafe but downright dangerous.
There is precedent for testing surfacing such egregious safety issues that an entire aircraft development program is cancelled. Instead, the Rogers Commission buckled to perceived political pressure, kicked the can down the road, and sealed the fate of the Columbia 7. Then, as now, it was just a matter of time.
Not convinced? Let me quote a paragraph of Feynman’s Appendix directly, addressing the concept of “factor of safety”.
“In spite of these variations from case to case, officials behaved as if they understood it, giving apparently logical arguments to each other often depending on the “success” of previous flights. For example, in determining if flight 51-L was safe to fly in the face of ring erosion in flight 51-C, it was noted that the erosion depth was only one-third of the radius. It had been noted in an experiment cutting the ring that cutting it as deep as one radius was necessary before the ring failed. Instead of being very concerned that variations of poorly understood conditions might reasonably create a deeper erosion this time, it was asserted, there was “a safety factor of three.” This is a strange use of the engineer’s term, “safety factor.” If a bridge is built to withstand a certain load without the beams permanently deforming, cracking, or breaking, it may be designed for the materials used to actually stand up under three times the load. This “safety factor” is to allow for uncertain excesses of load, or unknown extra loads, or weaknesses in the material that might have unexpected flaws, etc. If now the expected load comes on to the new bridge and a crack appears in a beam, this is a failure of the design. There was no safety factor at all; even though the bridge did not actually collapse because the crack went only one-third of the way through the beam. The O-rings of the Solid Rocket Boosters were not designed to erode. Erosion was a clue that something was wrong. Erosion was not something from which safety can be inferred.”
I am far from a professional clipboard wielding box checker but I’ve done my share of checklist-oriented activities, including earning a PPL, skydiving, SCUBA diving, and mountaineering, and I lived to tell the tale. This description of outcome-oriented real time faking of design criteria has never failed to trigger internal screams. The Shuttle was riddled with systems where the design never met requirements, failed in all kinds of odd and scary ways, and were basically ignored in service of the larger goal. Namely, expediting the next round of Russian roulette. So it’s hardly a surprise to find that “flight proven” Shuttle hardware which never achieved remotely safe performance has continued to fail throughout the SLS development program, though apparently without ever triggering a now-40-year-overdue trashing of the entire tech stack.
In 1986 there were still enough Apollo veterans floating around that, had anyone involved had the courage to declare Shuttle a total loss and permanently ground it, there is every chance that a rocket in the style of the Falcon 9 could have been human rated and operational by 1990, representing a genuine path to steady improvements in reusability and cost, and a commitment to fact-based reality as the program’s guiding star.
Indeed this post isn’t intended to cheer endlessly for SpaceX but it is telling that with a tiny fraction of the time and money they, motivated by correct interrogation of the fundamental architectural questions, developed the F9 rocket and Dragon spacecraft, a launch system considerably less exotic than Shuttle and yet orders of magnitude cheaper, safer, and better performing.
Today, we have a few dozen SSMEs left in warehouses, exquisite examples of 1970s-era tech, every bit as wonderful as a Faberge Egg. Fit for a museum, not a modern rocket. Are they reliable enough? No. But are they expendable? No. But are they at least affordable, because we already have them? Also no. The contractors involved are providing them for the SLS at a cost of $150m per engine.
Let’s get this straight. We’re going to take these priceless antique reusable rocket engines and fly them once and drop them in the Atlantic Ocean. And the engines alone will cost us about the same as 10 Falcon 9 flights.
Let’s talk about hydrogen. Shuttle designers gravitated towards hydrogen as a fuel because its specific impulse is substantially higher than that of other fuels, and the Shuttle was not a design that could afford to leave performance on the table. The modern consensus is that once the cost, complexity, and additional mass required to deal with hydrogen are factored in, its performance is not favorable compared to more conventional fuels, such as RP-1 or methane, especially for launching to LEO. For example, hydrogen is a hard cryogen, requiring extra insulation and mass. It easily leaks through even sub-micron gaps in valves and seals, seeps through metals causing embrittlement, and is so cold that it restricts choices of materials. Further, Isp isn’t everything. At launch, a rocket is moving slowly and doesn’t particularly need high exhaust velocity and efficiency; it needs high thrust to punch through the atmosphere. Here hydrogen is next to useless, which is why the Shuttle needed to play Russian roulette with the giant solid boosters.
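The packaging penalty is easy to quantify. Liquid hydrogen is stored at about 20 K with a density of roughly 71 kg/m³, versus roughly 810 kg/m³ for RP-1 and about 420 kg/m³ for liquid methane, so a tonne of hydrogen needs on the order of ten times the tank volume of a tonne of kerosene, all of it insulated:

```python
# Approximate liquid propellant densities at storage conditions, kg/m^3.
DENSITIES = {
    "liquid hydrogen (~20 K)": 71.0,
    "liquid methane (~111 K)": 423.0,
    "RP-1 kerosene (ambient)": 810.0,
}

for fuel, rho in DENSITIES.items():
    volume_per_tonne = 1000.0 / rho   # cubic meters of tank per tonne of fuel
    print(f"{fuel:26s}: {volume_per_tonne:5.1f} m^3 per tonne")
```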
By comparison, the Falcon 9 has a similar payload capacity (~22 T expendable), uses kerosene and oxygen propellant, and costs 5% of what the Shuttle did on a per-kg-to-LEO basis. Not a 5% cost improvement. A 20x cost improvement. The ISS required about 35 Shuttle flights to assemble. If the modules had been flown at Falcon 9 prices, the entire thing could have been launched for less than the cost of two Shuttle flights, or a single year of the program’s operation, and it might not have taken 20 years to build.
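The "not 5% better, 20x better" arithmetic falls out of round numbers. The per-flight costs below are commonly cited approximations used only for illustration, not official figures (roughly $1.5B per Shuttle flight averaged over the program, and a roughly $60M Falcon 9 list price):

```python
# Rough, commonly cited cost and payload figures -- approximations, not official numbers.
shuttle_cost_per_flight = 1.5e9   # USD, program cost averaged per flight (assumed)
shuttle_payload_kg      = 25_000

falcon9_price           = 60e6    # USD, approximate list price (assumed)
falcon9_payload_kg      = 22_000  # expendable configuration

shuttle_per_kg = shuttle_cost_per_flight / shuttle_payload_kg
falcon9_per_kg = falcon9_price / falcon9_payload_kg

print(f"Shuttle : ~${shuttle_per_kg:,.0f}/kg")
print(f"Falcon 9: ~${falcon9_per_kg:,.0f}/kg  (~{shuttle_per_kg / falcon9_per_kg:.0f}x cheaper)")
```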
The SLS is about 30% bigger than the Shuttle. It has four engines, not three, bigger solid boosters, and more fuel. And yet its supposed LEO capacity is a mere 70 T, a whisker more than a well-motivated Falcon Heavy. The Shuttle could do 105 T including the mass of the orbiter. The SLS is the Shuttle stack simplified, straightened out, and increased in size, and it lost, relatively speaking, more than half of its overall performance. Adding a second stage should improve that, and the Exploration Upper Stage is planned to do just that. But engineering and building that stage has never been funded and is expected to take at least another decade (why so long? I have no idea), so the SLS cannot even be useful until 2035, more than six decades after its earliest flight-heritage components were first designed.
How can a rocket assembled from existing flight-tested components be the most expensive, the hardest to build, the worst-performing on schedule, and by far the least safe of any contemporary option? I've asked around, and no senior engineer at any major aerospace company could really explain how this was possible. The best guess was that, because engineers at MSFC hadn't built a new rocket system in a few decades (which may as well be an eternity), they had recursively added fudge factors until more than half of the rocket's potential performance had been thrown away.
Indeed, as long ago as 2015 when the schedule had barely begun to slip, Rand Simberg pointed out that the SLS program is, at best, a reconstitution of “Apolloism”, the idea that deep space exploration is only possible with a national mandate and huge expensive rocket, because that’s how it happened the first time. Simberg’s analysis includes deep discussion of the text of the acts of Congress that brought SLS into being.
My focus here is on risk mismanagement at the architectural and organizational level – the ultimate root cause of NASA’s previous tragedies and the lesson that is yet to be learned.
Normalization of deviance
NASA has made mistakes and lost astronauts. Like the FAA and the NTSB, NASA investigates accidents, studies the root causes, and publishes the reports so that everyone can learn from them and try to prevent them from happening again.
In total this is more than 1000 pages of expert commentary on a variety of root causes.
Here, we return to Feynman’s appendix within the Rogers Commission Report, which closes with the sentence:
For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.
It goes without saying, but if Feynman of all people is calling a deliberative process out for arrogance then there might be a real problem.
I've touched on aspects of Feynman's report earlier, but here my specific concern is the thread that runs through all four accidents. Not the specifics of system failure, though they are of course interesting, but the malfunctioning of the human management systems that are intended to prevent exactly this sort of problem and instead often become its root cause. Normalization of deviance. Groupthink. Selective re-interpretation of results. Acceptance of the way things are done, even if it lacks any justification in terms of first principles.
After Challenger, substantial effort went into redesigning parts of the solid boosters to avoid a repetition of that particular failure. In the process, dozens of other critical design flaws were discovered and rectified. Do you think they got them all? Of course not, not even close.
There are multiple ways to approach big development programs like SLS or Shuttle. One favors detailed analysis, careful design and review, and admittance testing of the finalized design. Another favors moving fast and routinely breaking things, but treating it as a learning experience and aggressively investing in the execution capability of the underlying team.
The Shuttle started out as the former but, in fixing Challenger-derived issues, jumped to the latter. Yet we can’t afford to crash 300 Shuttle orbiters to surface the 300 most critical design flaws. And so the Shuttle program continued to operate with thousands of architecture-level design flaws, any one of which could kill everyone on board.
I am writing this blog because, in every possible way, the SLS program embodies the logical union of all the organizational problems that have troubled NASA since its inception in 1958. This isn't bureaucratic inefficiency at the local DMV. This is a multi-billion-dollar, multi-decade national flagship project oriented towards launching living, breathing humans into deep space, and it's being run in a way that maximizes the odds of very public failure.
To a certain extent all organizations have areas of suboptimal performance. Building cutting edge flight systems is a known hard problem, which is precisely why it must jealously avoid adding marginal requirements. This is the too-many-cooks problem, warned about concisely by Kelly Johnson in his rules for operating the Skunk Works.
I don’t know anyone who has read about NASA’s major disasters and read about the SLS and has failed to join the dots. It’s not a huge leap of the imagination. It’s hard to imagine a way of making the system more likely to deliver national humiliation and tragedy.
Combine organizational hubris, political expediency, thoroughly characterized yet utterly obsolete and flaky hardware, questionable design methodology, piss-poor program management, unaccountable contractor behavior, normalized creative accounting, and routine denial of reality. Not a recipe for success.
We have seen how Shuttle and SLS came to exist, and we have seen how utterly unjustifiable the architecture is in terms of first principles. Not only is it axiomatically unsafe, it is incredibly expensive, a technological dead end, utterly irrelevant, and of terminally low economic value.
Wasn’t this already canceled before?
We have seen how utterly unfit the SLS is at every possible level, so it may not surprise the reader to learn that it was already canceled once, resurrected, and zombified. In 2009, the Augustine Commission found that the Ares V, an SLS predecessor with slightly better performance, could not fly before the late 2020s at then-current funding levels. This turned out to be remarkably prescient.
Obama agreed and tried to axe the entire Constellation program. SLS and the spacecraft Orion (also a bloated contractor cash cow, crippled by poor design and no reference mission, ripe for immediate cancellation, but otherwise beyond the scope of this post) were dragged kicking and screaming back out of the dumpster and re-animated to restore the money laundering mechanism required to satisfy the program’s key constituents: the NASA human space flight office and the science community.
Ha, I joke. I mean, Boeing with its army of well-connected lobbyists and key congressional districts in southern states, who have made out like bandits.
In return for acceding to the prolonged torturous confabulation of a Frankenstein’s Monster Rocket by pillaging parts of the dismembered Shuttle corpse from beyond the grave, Obama was able to kick off the commercial cargo and crew programs. Despite deliberate Congressional sabotage through withheld funds, these programs have delivered on their mission basically on time and budget, at something like 50 times the value for money that SLS could have achieved under its best case scenario.
The SLS was reconstituted but at the head of a non-existent program. Where would it go? What sort of mission would it serve? LEO? Space stations? Moon? Mars? Asteroids? Don’t know, don’t care, doesn’t matter. In the end, it went to none of them.
The entire damn Lunar Gateway only exists because SLS is too anaemic to launch the incredibly overweight Orion anywhere useful, so perhaps we should just drop the whole thing into the Atlantic Ocean and be done with it.
Are you not entertained? Let’s talk about an extensive list of fubared systems.
This section is indebted to NASAWatch for collating industry scuttlebutt over the decade or so that this zombie program has assiduously picked the public pocket and delivered negative value.
Due to the sheer quantity of major programmatic hiccups I must necessarily be brief and mention only the highlights. Of course, every project has problems. Problems cost time and money. And the more time and money is spent, the more money the cost-plus prime contractor makes. Can we really profess surprise, then, that SLS has cost $20b, taken more than a decade, and still hasn't flown?
The tank broke
Someone dropped an incredibly expensive tank dome, damaging it beyond repair. Of course, if this was a real production facility it wouldn’t be that hard to replace but because everything is meticulously handcrafted by elves with nail files, this caused a major delay.
I previously mentioned that despite 40 years of flight heritage, the once reusable SSMEs are still subject to a variety of technical issues and have never gotten close to the original certification criteria.
A cheaper, expendable hydrolox engine from the same contractor, the RS-68, was certified 20 years ago and originally slated to be used on the Ares V, until that program was canceled the first time. The RS-68 is routinely used on the Delta IV, which is now defunct due to highly non-competitive launch costs, and it would have required over 200 design changes to meet human-rating standards.
Between the fuel tank and the engines is the thrust structure plumbing. The SLS project managed to contaminate that plumbing with paraffin, and the contamination wasn't detected until after the plumbing had been completed. Foreign object debris (FOD) in tanks and engines is a surprisingly common cause of rocket crashes (such as the Antares), and for that reason everyone knows to look out for it and take precautionary measures. Well, nearly everyone.
The process of developing flight software is (usually) a very serious endeavor requiring experience and deep expertise to ensure that something like Ariane 5’s first launch doesn’t happen.
I think it’s safe to assume, given how confused everyone was over the TVC parameters in the Green Run test failure, that the software is still an utter shambles.
It is not well known anymore, but leading up to the first (and much delayed) Shuttle launch in 1981 the entire software stack and test system had to be rewritten from scratch. Having endured hundreds of design changes it was such a mess that tests couldn’t even be reliably started.
The booster test failed
Again, in these programs the tests are admittance tests. They are specifically designed to be boring. Nothing in this 50 year old tech stack is meant to break. And yet the mission critical, single point of failure, thrust vectoring nozzle on a closely related solid motor failed for no apparent reason. If this happened on a Shuttle or SLS flight, it is safe to assume total loss of crew, payload, and vehicle. Solids have no engine out capability. They have moderately common failure modes which are apparently still not understood and still not corrected. It is safe to assume that the “black swan” failure rates of solids are around 1 in 100, which is nowhere near good enough for a modern rocket.
A failure rate of 1 in 100 in any critical subsystem means that overall system performance will never be better than that. Almost infinite levels of engineering effort can continue to be expended on solid boosters but they will never be reliable enough for any kind of commercial flight certification, not even close. If we were talking about aircraft we’d be talking about catastrophic structural failure, and aircraft that occasionally shed wings cannot be certified.
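To make that ceiling concrete, here is a minimal sketch of what a 1-in-100 per-flight loss rate implies over a campaign of flights. The numbers are illustrative assumptions, not official reliability estimates, and the sketch treats any booster failure as loss of vehicle since solids have no shutdown or engine-out option.

```python
# What a 1-in-100 critical-subsystem failure rate means over repeated flights.
# Illustrative assumption only; any booster failure is counted as loss of vehicle.
p_fail_per_flight = 1 / 100

for n_flights in (1, 10, 20, 50):
    p_at_least_one_loss = 1 - (1 - p_fail_per_flight) ** n_flights
    print(f"{n_flights:>3} flights: P(at least one loss) ~ {p_at_least_one_loss:.0%}")

# Roughly 10% over 10 flights and 18% over 20: no amount of polish elsewhere in
# the stack can push overall reliability above the weakest critical subsystem.
```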
Scream it until their ears bleed: Solid-boosted hydrogen first stages are architecturally unsafe.
The political insistence that the Shuttle and SLS use solid boosters means that no amount of time or money will ever make them safer than a 1 in 100 chance of catastrophic failure. Why spend infinite money on an obsolete system that has failure baked in?
Orion’s PDU broke and needs a year of work to replace. Is this a joke?
I’m not focusing on Orion here but it’s worth mentioning that during some recent testing it was discovered that a redundant power/data unit (PDU) had failed, and what’s better, it’s an essentially permanent part of the spacecraft structure so replacing it will take the better part of a year. I guess if it takes 12 years to build a spacecraft and launch it, one of the ten thousand fancy widgets inside will exceed its shelf life and let out the magic smoke? Well and good, but don’t put stuff that might break inside the wall.
It’s also impossibly expensive. Who are the stakeholders?
Cui bono? Boeing is the prime contractor, and they've made far more money by screwing up SLS than they ever could have by managing it properly and delivering it on time. Obviously NASA employees and local contractors at MSFC and JSC have earned a livelihood. Boeing spends squillions every year on lobbying and other government relations, which supports a staggering number of lobbyists, fancy lunches, and political campaigns, but is also basically petty cash in the context of the contractor performance bonuses which, somehow, Boeing has mostly received.
The picture here is that all the key constituencies with power and money are very satisfied with the SLS, because it provides the operational cover necessary to move such vast sums of public wealth into the favored hands. Indeed, actually flying the rocket is pretty risky in terms of the existing scheme, because it might fail and thus end the good times.
Somehow, the human space exploration budget has been “favored” with this sort of unwinnable grift, with the result that yet another generation of idealistic engineers has aged into retirement, most of the Moon walkers have died of old age, and a basic Moon base or Mars base seems more out of reach than ever. It doesn’t have to be this way.
Every month I stare at the full Moon and it mocks us humans with our weak rockets and lame technology and cruddy organization. We should be up there playing Lunar tennis.
Well, I have an axe to grind and said so at the top. But don't take my word for it. Read for yourselves how greed can ruin a good thing.
Endless reports by toothless government and agency watchdogs pointing out the obvious – the rocket is expensive, unsafe, and will never work properly. One of them even found that NASA hid nearly $800m in costs to avoid a Congressionally mandated budget cap by spending money earmarked for future development programs. Just try that in your annual tax filing and see how far you get. (Actually, don’t.) There’s a word for this and it rhymes with broad.
Here are some articles on how literally everyone finds the SLS management to be worse than useless, and yet all the key players keep awarding themselves and each other performance bonuses. You'd think gaming the system was their only task; they certainly act like it is.
Can you imagine showing up for your day job and telling your boss that your salary is now a secret, but at least 3x higher than the day before, and that your work product was going to be a decade late? Even the people whose only job is to know exactly how much the SLS costs apparently do not know.
The only metrics that matter for big rockets and humans in space are $/T and T/year. By an unbelievably huge margin, the SLS has mismanaged itself onto the wrong end of the field on both of these axes, with a rocket that costs maybe 20x more per tonne and, due to its appallingly low flight rate, delivers less mass to orbit in a year than SpaceX can in a fortnight, in 2021.
The SpaceX Starship is designed to deliver on the order of a million tonnes to orbit per year, for about $100/kg. That's 15,000 times the stuff for 1/500th the cost. I have no doubt that the Starship development program will have its surprises and setbacks, but they've already flown to 12.5 km, roughly as high as the stack of $100 bills already spent on SLS would reach. Even if Starship comes in at 10x the design cost, it will still be 50x cheaper than the competition. Would you spend $20k on a car, or $1m on the same car? It's hard to even make meaningful comparisons here.
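For anyone who wants to check the comparison, here is a rough sketch of how the 15,000x and 1/500th figures can be backed out. The Starship inputs are the design targets quoted above; the SLS-side inputs (one ~70 T flight per year at roughly $3.5B per flight with development amortized) are my own assumptions for illustration.

```python
# Unpacking the "15,000x the stuff for 1/500th the cost" comparison.
starship_t_per_year  = 1_000_000   # design target (tonnes/year)
starship_cost_per_kg = 100         # design target ($/kg)

sls_t_per_year       = 70          # assumption: one ~70 T flight per year
sls_cost_per_flight  = 3.5e9       # assumption: per-flight cost with development amortized ($)
sls_cost_per_kg      = sls_cost_per_flight / (sls_t_per_year * 1000)   # ~$50,000/kg

print(f"Mass ratio: ~{starship_t_per_year / sls_t_per_year:,.0f}x")      # ~14,000x
print(f"Cost ratio: ~1/{sls_cost_per_kg / starship_cost_per_kg:.0f}th")  # ~1/500th
```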
Just days before President-Elect Biden was inaugurated, NASA conducted the crucial Green Run Test for SLS. This is basically a full duration static fire intended to show that the flight hardware on the test stand is ready and able to operate for the full 8 minute launch profile. A test at this stage of development is an admittance test. It’s meant to work properly, and not to surface any unexpected issues.
Full duration static fires are a standard part of any modern rocket’s development, and the test is designed to mimic the launch profile as closely as possible so that the rocket is actually subjected to flight-relevant conditions. Test as you fly, fly as you test.
Program management at one point suggested skipping the Green Run entirely to make up for some of the endless program delays, especially as the testing program would take many months to execute. The rocket should be safe enough to launch people (presumably someone else’s parents/spouses/children), first time, they said. Fortunately, cooler heads prevailed.
The timing of the test was important to raise the profile of the Artemis program within the new Biden administration. With that in mind, it appears that several key test parameters were altered to reduce the odds of test failure. This is not the same as avoiding catastrophic failure by shutting everything down before a major explosion. The use of “precautionarily conservative test parameters” is cooking the books, plain and simple.
Of course, it's impossible to say for sure, since the test parameters were nothing like the actual launch. What we know is that the SLS has at least one obscure failure mode that results in the engines shutting off for no immediately apparent reason. If this happened in flight, the axiomatically unsafe boosters would keep firing for a couple of minutes before the stack could be pulled apart and abort procedures carried out. So: basically guaranteed total loss of mission, payload, and anyone riding it.
The cadence and content of these releases mirrors, almost exactly, the statements and reporting after the Starliner test failure, also due to untested software failures.
I wrote this blog because this test failed in a way that reveals that exactly the same sorts of issues that directly caused previous catastrophes are still alive and well at NASA.
Schedule pressure and political expediency leads to cutting corners and running fake tests of extremely limited value, just like in Apollo 1.
Contractor-provided systems fail in obscure ways and a clear disconnect is apparent between the contractor’s engineers, NASA’s engineers, and management, just like Challenger. And Mars Climate Orbiter. And Mars Polar Lander.
Excuses are being made for failure modes that were not anticipated, and deviance is being normalized, just like Columbia. And Challenger. And half a dozen other near misses.
Is this unique to NASA? No, but it’s on brand for Boeing
Boeing was once synonymous with innovation, elite achievement, and flawless execution. By most accounts, the 777 was a masterclass in program management, with ~280 separate sub-system teams performing decentralized design and optimization to bring the plane into production on schedule.
With this enviable legacy built up over decades of hard work, Boeing jealously safeguarded the institutional knowledge that they had hard won. Right?
So naturally they installed this failed executive team at the top of their org, who then pushed out Boeing's existing management, moved HQ to Chicago for no reason, stovepiped the organization, pushed about 10,000 experienced engineers and technicians into early retirement, embarked on the enormously ambitious 787 project, pushed design work out to ~50 subcontractors, and acted surprised when this failed egregiously, in exactly the same way as it had for McDonnell before and Douglas Aircraft before them.
After blowing $50b on a $5b program, the 787 limped into service, only to suffer a series of agonizingly embarrassing failures. In every case, their sub (sub sub) contractors had banked profit in direct proportion to how much value they failed to deliver to the project integrator, whose outsourcing had spectacularly failed to diversify risk.
Boeing learned their lesson and rehired the sort of in house talent necessary to once again vertically integrate construction of advanced aircraft. Right? Of course not. Instead, they squandered billions in lobbying and regulatory capture, and then fumbled development on the 737 MAX (killing 346 people), F35 (not the prime), KC-46, Starliner, and SLS. In the case of Starliner, untested outsourced software was so abysmal that extremely public system tests failed in ways that turned out to be stupidly obvious. What sort of Prime contractor flies untested third party software on NASA’s commercial crew test flight?
Boeing has lost the plot. All orgs get to the point where their internal processes and systems are decaying faster than any sort of intervention can save them. Like leprosy or necrotizing fasciitis, the patient still lives but their days are numbered.
Is it any surprise, then, that Boeing runs the SLS program, bamboozles friendly NASA program management into giving them most, if not all, of their award fees, bribes (sorry, lobbies) the crap out of Congress to keep the gravy flowing, and delivers half-baked hardware six years late that can't even pass the sandbagged PR test firing in front of the new Presidential transition team?
America is a place where getting rich by doing good is celebrated. No-one objects to a contractor or private company making a tidy profit where they deliver exceptional service or generate and share incredible wealth. For example, compared to SLS even Microsoft is a family friendly American institution!
Where I object is when a politically-connected Prime makes bank at the expense of the ultimate customer, the US people. The SLS is not in anyone’s best interests, except the well-positioned few who are raking in the dough hand over fist. The SLS should be a machine for transporting stuff into space, not pumping public money into well-connected pockets. Shame!
The rest of NASA is not like this
At least, it tries not to be. The science programs are run with reference to decadal surveys, which represent the consensus of the scientific community and provide a mechanism for programmatic discipline. They drive steady progress on key questions at affordable prices, and generally they have a great track record.
Prominent exceptions, such as JWST, unsurprisingly bear the fingerprints of a major aerospace contractor (Northrop Grumman) who specializes in buying up smaller contractors with new government contracts and then bleeding them for all they’re worth.
Here’s a good business plan if you’re the third best major aerospace contractor, in a field of three. (Raytheon and GD don’t generally play in this space). Get more lawyers than the government. Execute hostile takeovers of big government contracts. Shaft engineering, obstruct product development and delivery. Demand more money. Get it. Spend some of that money on political contributions and lobbying. Keep the rest. Rinse repeat.
Human space flight should have a decadal survey. Do we wonder why it doesn’t?
Not even a jobs program
There is an argument that Space Coast senators using NASA funding as a personal piggy bank are kinda sorta legitimately investing public money in industrial development in the historically under-developed Deep South.
MSFC employs about 6,000 people and runs an annual budget of $2.8b, which works out to $466k per head, quite a bit higher than salaries alone. While the end of the Shuttle program otherwise implied a loss of livelihood for about 10,000 people (no wonder it was so expensive to run), keeping them all busy on a make-work Potemkin project doesn't provide job security. It just kicks the can down the road.
To illustrate this, let's say that in 2011, at the end of the Shuttle program, MSFC employees had been offered voluntary redundancy with a hefty bonus, equivalent to $100,000/year for ten years. A million-dollar payout to anyone (or everyone) who wanted it. If everyone took it, not only would this still cost less than a quarter of what was spent on the SLS, at less than $600m a year, but all of these talented and well-trained people would be free to invest that money in local industrial development that actually met a market need and generated economic value.
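To spell out the arithmetic, here is a minimal sketch of the buyout math, using the headcount and payout assumed above; the comparison point is the ~$20B SLS cost cited earlier, and total spending on the wider program (Orion, ground systems) is higher still.

```python
# The buyout thought experiment in numbers, using the figures assumed in the text.
headcount       = 6_000
payout_per_year = 100_000   # $ per person per year
years           = 10

annual_cost = headcount * payout_per_year    # $600M per year
total_cost  = annual_cost * years            # $6B over the decade

print(f"Annual cost: ${annual_cost/1e6:.0f}M")
print(f"Ten-year total: ${total_cost/1e9:.0f}B, versus the ~$20B+ already spent on SLS alone")
```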
Instead, these people are trapped in salaried positions building a nearly fake rocket so that Boeing can skim off 300% overhead and a handful of antique maintenance tools can be kept operational. Is any of the Shuttle-related tooling or processes relevant to any other modern market need? No.
Instead of every dollar being invested in actual productive industry, ~25c went to a local jobs program with no useful industrial development, and ~75c disappeared into “overhead”. This is significantly less useful than Universal Basic Income. Is it any wonder that despite a reliable river of public treasure (by mostly “small government” politicians no less) the South remains relatively unindustrialized?
Astronauts want to risk their lives
Astronauts are adults in a risky profession and it’s their call whether to risk their lives or not, ultimately. I respect and admire that. But let’s not pretend that that somehow erases the responsibility for their deaths.
Astronauts are willing and able to take enormous personal risks, just like the brave men and women in uniform. This doesn’t provide carte blanche for their superior officers to throw their lives away. This doesn’t justify purposely building unsafe launch systems to gratify political or contractor expediency. It certainly doesn’t build a legacy of steadily more routine, cheap, and safe access to space and a culture of exploration.
There will always be a few brave souls willing to get on any rocket that points up. But we cannot regard ourselves as a serious space faring nation if we run the space program like an exercise in Patagonian BASE jumping. Which, incidentally, is safer and much cheaper.
Seventeen NASA astronauts have died in spaceflight related accidents, and a few others in training accidents. Like the four cosmonaut deaths, which occurred due to similar organizational issues, not one single death happened for a good reason.
27 January 1967. Apollo 1. Virgil “Gus” Grissom, Ed White, Roger B. Chaffee. Launch fever, atypical testing situation, incomplete FMEA, architectural safety issues.
24 April 1967. Soyuz 1. Vladimir Komarov. Launch fever to meet political anniversary, in a woefully incomplete capsule.
30 June 1971. Soyuz 11. Georgy Dobrovolsky, Viktor Patsayev, Vladislav Volkov. Political demand to include three cosmonauts excluded pressure suits, loss of cabin pressure during re-entry.
28 January 1986. Challenger. Gregory Jarvis, Christa McAuliffe, Ronald McNair, Ellison Onizuka, Judith Resnik, Michael J. Smith, Dick Scobee. Launch fever, engineering concerns overridden, safety parameters out of bounds.
1 February 2003. Columbia. Rick D. Husband, William C. McCool, Michael P. Anderson, David M. Brown, Kalpana Chawla, Laurel Clark, Ilan Ramon. Normalization of deviance allowed known recurring problem to continue until luck ran out.
In every case, the loss of crew and of mission was for a stupid reason, usually well understood, anticipated, forewarned, and ignored. I believe it tarnishes the legacy of these victims to foist the language of posthumous heroism upon them, because it cynically obscures the culpability of the people responsible for these catastrophes. It is not something the loved ones of these people relish hearing, but their lives were wasted and their sacrifice achieved nothing.
I am a proud proponent of insanely ambitious space exploration policy. I want to see hundreds of people on the Moon and on Mars by the end of the decade. In no way is that vision impeded by the laws of physics – only by the so called “laws” of organizational inefficiency and trained myopia. I have no doubt whatsoever that, over a long enough time scale, many many more people will die unnatural deaths in space. If we are lucky, they will die pushing the boundaries of human experience, ambition, and imagination. Not in a tin-foil rocket that everyone knew was a death trap waiting to happen.
Culture of silence
There is a common (though not universal) line of thought within NASA which is congruent to a form of learned helplessness. SLS was forced on NASA by powerful senators. The decisions are made above my pay grade. My job is to execute and implement the grand plan specified by my political masters. I can’t see the big picture. I’m just watching the clock. I’ll retire before it flies. Don’t rock the boat. This is better than nothing, or the alternative. Kick the world, hurt your foot.
I reject this point of view. It is cowardice. It is malpractice. Civil engineers are certified, and many wear rings supposedly made from the steel of a bridge that collapsed. The ring touches the page as we work, a reminder that our work determines the safety and well-being of our fellow humans.
All it takes is one brave person to take a stand and say “no”.
It could be the NASA administrator or the tech about to take off the last red “remove before flight” tag, or anyone in between. All these people are hired to apply their professional judgment and skill in the execution of a collective noble endeavor and its success will often come down to the actions of the right person at the right time making the right judgment call. It can be difficult in a culture that expressly values conformity or punishes excessive displays of critical thinking, not to mention one that disposes of widely respected senior leadership such as Doug Loverro, or Bill Gerstenmaier before him, with scarcely a second thought. But in every case of launch vehicle failure I am aware of there was always someone on the ground who knew right away what had gone wrong. It is obviously not widely advertised but that person, or any person, can stop this crazy train and prevent disaster.
All it takes is one brave person to take a stand and say “no”.
No, the SLS is not safe enough for robots or humans, and nor can it be.
No, the SLS is not good value for NASA or the US taxpayer. It’s not even bad value. It is negative value.
No, I do not accept that the laws of physics are of secondary importance to fleeting political expediency.
No, I will not build a system I know has no path to future improvements in performance and safety – a dead end.
No, I will not build a system where tragedy and national humiliation is only a matter of time.
What is the point of ten thousand engineers devoting lifetimes to developing deep insight into the workings of the universe if these, the cream of the cream who run space flight at NASA, cannot be trusted to know what's wrong and what's right?
What sort of program do we deserve if we let non-technical political leaders force scientifically wrong decisions for decades? Lysenko? Great Leap Forward? Another Challenger? Another Columbia? Consequence-free profiteering at public expense by major aerospace contractors?
Not on my watch. Enough is enough.
Physics wins. Repudiate everything. Salt the Earth.
In America, Netflix has a series called Losers. There is an episode called “Stone Cold,” about a Canadian curler who thought of a perfectly legal, legitimate strategy that resulted in an almost guaranteed victory. The only reason nobody had ever used it before was that nobody seemed to have thought of it, even though it had been possible pretty much since the sport was invented. In the end, all of the other teams implemented his strategy and it rendered the game so unwatchable that one year, during the national championships, the crowd spontaneously chanted “boring.” In the end, they had to change the rules to eliminate his strategy and make the game fun to watch again.
It sounds unbelievable, but it is true. Simple, powerful, and sometimes painfully obvious things are out there just waiting to be noticed.
For example, Netflix doesn’t seem to have a way for someone to easily link to a specific episode of one of their shows, or even a trailer for that episode, to make it easier for someone with a blog to direct potential paying customers to that bit of content.
As always, thanks for using my Amazon Affiliate links (US, UK, Canada).
An ultra-class haul truck carries tons of ore. (Anglo American Photo)
GeekWire's Bryan Corliss interviewed Chris Voorhees about First Mode's work on a hydrogen power system that will go into a three-story-tall, million-pound truck! In just a few months, the First Mode team will take the generator being built in Seattle to South Africa, where it will be retrofitted into a massive, ultra-class haul truck for Anglo American.
Article excerpt:
Ultra-class haul trucks are a far different type of vehicle than the Mars rovers the First Mode teams have worked on in the past.
For starters, they’re as tall as a three-story building, and they weigh “about a million pounds,” Voorhees said. They’re built to carry up to 300 metric tons of ore from open-pit mines to nearby processing plants. Anglo American operates fleets of these dinosaur-sized Tonka trucks at its mines, each one burning thousands of gallons of diesel to fuel the 2-megawatt on-board generator that powers each vehicle.
“It’s a lot of diesel and you have dozens of machines running continuously on site,” he said. Taking just one of the haulers out of operation will eliminate about as much in carbon emissions as 10,000 diesel-powered automobiles.
First Mode is designing and building a hydrogen fuel cell generator to do just that.
Check out the full article for more details about what makes this technology different from anything that’s come before it and to learn how this project could make a difference long-term for the future of clean energy.
A few years ago my wife and I attended our first OzarkCon, an annual QRP Convention conducted by the Four State QRP Group. The event took place in Branson, Missouri and didn’t require much arm twisting to get my wife to come along. We both had a great time and made plans to come back again, but the virus made other arrangements. The event was canceled last year and won’t be conducted in real life this year either.
But now plans are set for Virtual OzarkCon 2021 that will take place on Saturday April 10th. Registration is open (over 200 have already signed up!) and proceeds from the sale of kits will fund the event so there’s no cost to attend. The agenda is in place and it looks like another great lineup of speakers.
Special event station K0N will be active daily from April 4th through April 10th, 2300 UTC to 0300 UTC, to celebrate the convention. Look for that action on 7.122, 3.564, or 14.061 MHz.
4SQRP is a friendly group who like to have a lot of fun with radio. It may be one of the last QRP clubs that still produce new and innovative kits for low-power enthusiasts. They maintain regular nets, publish a newsletter (the Ozark QRP Banner), organize events, and offer awards. I've been a member (#127) for quite a while; you can join the fun too!
Swap out your LDO for a switcher today, with these designs for a modern take on the TO-220-mounted LM1117 LDO and 78xx series linear regulators! This project is my take on a quick and easy replacement for the 3-pin linear regulator. The aim is to replace TO-220 linear regulators with a switching converter, in pursuit of higher efficiency and current capacity.
Using a Recom RPX series DC-DC module for its small size, and incorporating SMD feedback resistors and bulk capacitance on board, allows for a drop-in replacement for existing LDO designs while remaining in the same overall footprint as the linear counterpart.
As LM1117 LDOs have a different pinout from the 78xx series of regulators, I designed two versions of the layout.
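If you want to adapt a board like this for a different output voltage, the divider math is the usual adjustable-regulator relation, sketched below. The 0.8 V feedback voltage used here is only a placeholder assumption, so check the datasheet of the specific RPX part for the real value (the LM1117-ADJ, for comparison, regulates its feedback pin to 1.25 V).

```python
# Feedback divider sizing for an adjustable regulator or DC-DC module:
# Vout = Vfb * (1 + R_top / R_bottom).
def r_top_for(v_out: float, v_fb: float, r_bottom: float) -> float:
    """Return the required top resistor (ohms) for a target output voltage."""
    return r_bottom * (v_out / v_fb - 1)

v_fb = 0.8          # ASSUMED feedback reference voltage; confirm against the module datasheet
r_bottom = 10_000   # 10 kOhm lower divider resistor (a common choice)

for v_out in (3.3, 5.0):
    print(f"Vout = {v_out} V -> R_top ~ {r_top_for(v_out, v_fb, r_bottom)/1000:.1f} kOhm")
```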
Got the boards in from @oshpark and got them put together! I'm pretty happy with how well they work. It's crazy that the silk is so sharp when each letter is less than 1mm across!
Easily the most sophisticated skimming devices made for hacking terminals at retail self-checkout lanes are a new breed of PIN pad overlay combined with a flexible, paper-thin device that fits inside the terminal’s chip reader slot. What enables these skimmers to be so slim? They draw their power from the low-voltage current that gets triggered when a chip-based card is inserted. As a result, they do not require external batteries, and can remain in operation indefinitely.
A point-of-sale skimming device that consists of a PIN pad overlay (top) and a smart card skimmer (a.k.a. “shimmer”). The entire device folds onto itself, with the bottom end of the flexible card shimmer fed into the mouth of the chip card acceptance slot.
The overlay skimming device pictured above consists of two main components. The one on top is a regular PIN pad overlay designed to record keypresses when a customer enters their debit card PIN. The overlay includes a microcontroller and a small data storage unit (bottom left).
The second component, which is wired to the overlay skimmer, is a flexible card skimmer (often called a “shimmer”) that gets fed into the mouth of the chip card acceptance slot. You’ll notice neither device contains a battery, because there simply isn’t enough space to accommodate one.
Virtually all payment card terminals at self-checkout lanes now accept (if not also require) cards with a chip to be inserted into the machine. When a chip card is inserted, the terminal reads the data stored on the smart card by sending an electric current through the chip.
Incredibly, this skimming apparatus is able to siphon a small amount of that power (a few milliamps) to record any data transmitted by the payment terminal transaction and PIN pad presses. When the terminal is no longer in use, the skimming device remains dormant.
The skimmer pictured above does not stick out of the payment terminal at all when it’s been seated properly inside the machine. Here’s what the fake PIN pad overlay and card skimmer looks like when fully inserted into the card acceptance slot and viewed head-on:
The insert skimmer fully ensconced inside the compromised payment terminal. Image: KrebsOnSecurity.com
Would you detect an overlay skimmer like this? Here’s what it looks like when attached to a customer-facing payment terminal:
The PIN pad overlay and skimmer, fully seated on a payment terminal.
REALLY SMART CARDS
The fraud investigators I spoke with about this device (who did so on condition of anonymity) said initially they couldn’t figure out how the thieves who plant these devices go about retrieving the stolen data from the skimmer. Normally, overlay skimmers relay this data wirelessly using a built-in Bluetooth circuit board. But that also requires the device to have a substantial internal power supply, such as a somewhat bulky cell phone battery.
The investigators surmised that the crooks would retrieve the stolen data by periodically revisiting the compromised terminals with a specialized smart card that — when inserted — instructs the skimmer to dump all of the saved information onto the card. And indeed, this is exactly what investigators ultimately found was the case.
“Originally it was just speculation,” the source told KrebsOnSecurity. “But a [compromised] merchant found a couple of ‘white’ smartcards with no markings on them [that] were left at one of their stores. They informed us that they had a lab validate that this is how it worked.”
Some readers might reasonably be asking why it would be the case that the card acceptance slot on any chip-based payment terminal would be tall enough to accommodate both a chip card and a flexible skimming device such as this.
The answer, as with many aspects of security systems that decrease in effectiveness over time, has to do with allowances made for purposes of backward compatibility. Most modern chip-based cards are significantly thinner than the average payment card was just a few years ago, but the design specifications for these terminals state that they must be able to allow the use of older, taller cards — such as those that still include embossing (raised numbers and letters). Embossing is a practically stone-age throwback to the way credit cards were originally read, through the use of manual “knuckle-buster” card imprint machines and carbon-copy paper.
“The bad guys are taking advantage of that, because most smart cards are way thinner than the specs for these machines require,” the source explained. “In fact, these slots are so tall that you could fit two cards in there.”
IT’S ALL BACKWARDS
Backward compatibility is a major theme in enabling many types of card skimming, including devices made to compromise automated teller machines (ATMs). Virtually all chip-based cards (at least those issued in the United States) still have much of the same data that’s stored in the chip encoded on a magnetic stripe on the back of the card. This dual functionality also allows cardholders to swipe the stripe if for some reason the card’s chip or a merchant’s smartcard-enabled terminal has malfunctioned.
Chip-based credit and debit cards are designed to make it infeasible for skimming devices or malware to clone your card when you pay for something by dipping the chip instead of swiping the stripe. But thieves are adept at exploiting weaknesses in how certain financial institutions have implemented the technology to sidestep key chip card security features and effectively create usable, counterfeit cards.
Many people believe that skimmers are mainly a problem in the United States, where some ATMs still do not require more secure chip-based cards that are far more expensive and difficult for thieves to clone. However, it’s precisely because some U.S. ATMs lack this security requirement that skimming remains so prevalent in other parts of the world.
Mainly for reasons of backward compatibility to accommodate American tourists, a great number of ATMs outside the U.S. allow non-chip-based cards to be inserted into the cash machine. What’s more, many chip-based cards issued by American and European banks alike still have cardholder data encoded on a magnetic stripe in addition to the chip.
When thieves skim non-U.S. ATMs, they generally sell the stolen card and PIN data to fraudsters in Asia and North America. Those fraudsters in turn will encode the card data onto counterfeit cards and withdraw cash at older ATMs here in the United States and elsewhere.
Interestingly, even after most U.S. banks put in place fully chip-capable ATMs, the magnetic stripe will still be needed because it’s an integral part of the way ATMs work: Most ATMs in use today require a magnetic stripe for the card to be accepted into the machine. The main reason for this is to ensure that customers are putting the card into the slot correctly, as embossed letters and numbers running across odd spots in the card reader can take their toll on the machines over time.
Unsurprisingly, the past two decades have seen the emergence of organized gas theft gangs that take full advantage of the single weakest area of card security in the United States. These thieves use cloned cards to steal hundreds of gallons of gas at multiple filling stations. The gas is pumped into hollowed-out trucks and vans, which ferry the fuel to a giant tanker truck. The criminals then sell and deliver the gas at cut rate prices to shady and complicit fuel station owners and truck stops.
A great many people use debit cards for everyday purchases, but I've never been interested in assuming the added risk, and I pay for everything with cash or a credit card. Armed with your PIN and debit card data, thieves can clone the card and pull money out of your account at an ATM. Having your checking account emptied of cash while your bank sorts out the situation can be a huge hassle and create secondary problems (bounced checks, for instance).
The next skimmer post here will examine an inexpensive and ingenious analog device that helps retail workers quickly check whether their payment terminals have been tampered with by bad guys.
To climate scientists, clouds are powerful, pillowy paradoxes: They can simultaneously reflect away the sun’s heat but also trap it in the atmosphere; they can be products of warming temperatures but can also amplify their effects. Now, while studying the atmospheric chemistry that produces clouds, researchers have uncovered an unexpectedly potent natural process that seeds their growth. They further suggest that, as the Earth continues to warm from rising levels of greenhouse gases, this process could be a major new mechanism for accelerating the loss of sea ice at the poles — one that no global climate model currently incorporates.
This discovery emerged from studies of aerosols, the tiny particles suspended in air onto which water vapor condenses to form clouds. As described this month in a paper in Science, researchers have identified a powerful overlooked source of cloud-making aerosols in pristine, remote environments: iodine.
The full climate impact of this mechanism still needs to be assessed carefully, but tiny modifications in the behavior of aerosols, which are treated as an input in climate models, can have huge consequences, according to Andrew Gettelman, a senior scientist at the National Center for Atmospheric Research (NCAR) who helps run the organization’s climate models and who was not involved in the study. And one consequence “will definitely be to accelerate melting in the Arctic region,” said Jasper Kirkby, an experimental physicist at CERN who leads the Cosmics Leaving Outdoor Droplets (CLOUD) experiment and a coauthor of the new study.
Just as dew condenses onto blades of grass, water vapor in the atmosphere can condense around aerosols to create clouds. Two types of aerosols can act as cloud condensation nuclei (CCNs): primary aerosols, which can be tiny particles of almost any kind, such as bacteria, sand, soot or sea salt spray; and secondary aerosols, which are trace gases that participate in a process known as “new particle formation.” If the atmospheric conditions are right, sunlight and ozone can set off a chain reaction that causes secondary aerosols to clump together and rapidly snowball into a particle with more than a million molecules.
Yet details of which chemicals end up becoming CCNs and how exactly it happens have largely remained a mystery, even though the particles composed of secondary aerosols are thought to make up more than half of all CCNs. With more CCNs, clouds tend to be longer-lasting, wider and more reflective — characteristics that can tangibly change the Earth’s temperature but that have been notoriously difficult to include in climate models, according to Charles Brock, a research physicist at the National Oceanic and Atmospheric Administration.
Scientists have observed this new particle formation process with gases such as sulfuric acid, particularly over urban areas, where the chemical is abundant, and hypothesize that smog arises largely as a result of new particle formation. But recent measurements have found that this process is not limited to anthropogenic chemicals — it may also occur in the atmosphere over wilder, less inhabited locations. “Two-thirds of the world’s surface is ocean,” Brock said. “Most clouds form over the ocean, so you really need to understand these processes in remote areas to be able to understand climate.”
Researchers have observed in remote areas of Ireland, Greenland and Antarctica that iodine, which is released naturally from melting sea ice, algae and the ocean surface, may also be a significant driver of new particle formation. But researchers still wondered how molecular iodine grows into a CCN, and how efficiently it does so, compared with other secondary aerosols. “Even though these particles were known to exist, we weren’t able to link a measured concentration in the atmosphere to a predicted formation of particles,” Kirkby said.
For answers, they turned to the CLOUD chamber at CERN, a giant aerosol chamber 3 meters wide and nearly 4 meters tall that tries to recreate the Earth’s atmosphere with extreme precision. (The chamber was originally constructed to investigate the possible link between cloud formation and galactic cosmic rays.) For eight weeks straight, more than two dozen scientists worked in eight-hour shifts around the clock, tweaking the temperature and composition of the artificial atmosphere in the chamber and anxiously watching what happened when iodine was added to the mix. The scientists could watch the particles evolving in the chamber in real time: “Literally, we’re watching it minute by minute,” Kirkby said. “It’s really a return to old-fashioned physics experiments.”
The CERN scientists found that aerosol particles made of iodic acid could form very quickly, even more quickly than particles formed from sulfuric acid mixed with ammonia. In fact, the iodine was such an effective nucleator that the researchers had a difficult time scrubbing it from the sides of the chamber for subsequent experiments, which required a completely clean environment.
The findings are important for understanding the fundamental chemistry in the atmosphere that underlies cloud processes, Kirkby said, but also as a warning sign: Global iodine emissions have tripled over the past 70 years, and scientists predict that emissions will continue to accelerate as sea ice melts and surface ozone increases. Based on these results, an increase of molecular iodine could lead to more particles for water vapor to condense onto and spiral into a positive feedback loop. “The more the ice melts, the more sea surface is exposed, the more iodine is emitted, the more particles are made, the more clouds form, the faster it all goes,” Kirkby said.
The results could also help scientists understand how much the planet will warm on average when carbon dioxide levels double compared with pre-industrial levels. Estimates have put this number, called the equilibrium climate sensitivity, between 1.5 and 4.5 degrees Celsius (2.6 to 8.1 degrees Fahrenheit) of warming, a range of uncertainty that has remained stubbornly wide for decades. If Earth were no more complicated than a billiard ball flying through space, calculating this number would be easy: just under 1 degree C, Kirkby said. But that calculation doesn't account for amplifying feedback loops from natural systems, which introduce tremendous uncertainty into climate models.
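For the curious, Kirkby's "just under 1 degree C" is the standard no-feedback (Planck-only) estimate, which can be sketched in a few lines. The values below are textbook numbers used for illustration, not figures from the new paper.

```python
# No-feedback warming from doubled CO2: the "billiard ball" estimate, with no
# cloud, ice, or water vapor feedbacks included.
import math

sigma = 5.67e-8               # Stefan-Boltzmann constant, W m^-2 K^-4
T_e   = 255                   # Earth's effective emission temperature, K
dF    = 5.35 * math.log(2)    # radiative forcing of doubled CO2, ~3.7 W m^-2

planck_response = 4 * sigma * T_e**3   # ~3.8 W m^-2 per K of warming
dT = dF / planck_response

print(f"Forcing ~ {dF:.1f} W/m^2, Planck response ~ {planck_response:.1f} W/m^2/K")
print(f"No-feedback warming ~ {dT:.1f} K")   # ~1 K; feedbacks supply the rest (and the uncertainty)
```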
Aerosols' overall role in climate sensitivity remains unclear; estimates in the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report suggest a moderate cooling effect, but the error bars range from a net warming effect to a more significant cooling effect. Clouds generally cool the planet, as their white tops reflect sunlight back into space. But in polar regions, snowpack has a similar albedo, or reflectivity, to cloud tops, so an increase in clouds would reflect little additional sunlight. Instead, it would trap longwave radiation from the ground, creating a net warming effect.
Now atmospheric scientists can try to confirm whether what they observed in the CLOUD chamber occurs in nature. “What they’ve accomplished gives us a target to shoot for in the atmosphere, so now we know what instruments to take on our aircraft and what molecules to look for to see that these processes are actually occurring in the atmosphere,” Brock said.
To be sure, while these findings are a step in the right direction, Gettelman said, there are many other factors that remain large sources of uncertainty in global climate models, such as the structure and role of ice in cloud formation. In 2019, NCAR’s model projected a climate sensitivity well above IPCC’s average upper bound and 32% higher than its previous estimate — a warming of 5.3 degrees C (10.1 degrees F) if the global carbon dioxide is doubled — mostly as a result of the way that clouds and their interactions with aerosols are represented in their new model. “But we fix one problem and reveal another one,” Gettelman said.
Brock remains hopeful that future research into new particle formation will help to chip away at the uncertainty in climate sensitivity. “I think we’re gaining an appreciation for the complexity of these new particle sources,” he said.
The reaction to Avi Loeb’s new book Extraterrestrial (Houghton Mifflin Harcourt, 2021) has been quick in coming and dual in nature. I’m seeing a certain animus being directed at the author in social media venues frequented by scientists, not so much for suggesting the possibility that ‘Oumuamua is an extraterrestrial technological artifact, but for triggering a wave of misleading articles in the press. The latter, that second half of the dual reaction, has certainly been widespread and, I have to agree with the critics, often uninformed.
Image credit: Kris Snibbe/Harvard file photo.
But let’s try to untangle this. Because my various software Net-sweepers collect most everything that washes up on ‘Oumuamua, I’m seeing stark headlines such as “Why Are We So Afraid of Extraterrestrials,” or “When Will We Get Serious about ET?” I’m making those particular headlines up, but they catch the gist of many of the stories I’ve seen. I can see why some of the scientists who spend their working days digging into exoplanet research, investigate SETI in various ways or ponder how to build the spacecraft that are helping us understand the Solar System would be nonplussed.
We are, as a matter of fact, taking the hypothesis of extraterrestrial life, even intelligent extraterrestrial life, more seriously now than ever before, and this is true not just among the general public but also within the community of working scientists. But I don’t see Avi Loeb saying anything that discounts that work. What I do see him saying in Extraterrestrial is that in the case of ‘Oumuamua, scientists are reluctant to consider a hypothesis of extraterrestrial technology even though it stands up to scrutiny — as a hypothesis — and offers as good an explanation as others I’ve seen. Well actually, better, because as Loeb says, it checks off more of the needed boxes.
Invariably, critics quote Sagan: “Extraordinary claims require extraordinary evidence.” Loeb is not overly impressed with the formulation, saying “evidence is evidence, no?” And he goes on: “I do believe that extraordinary conservatism keeps us extraordinarily ignorant. Put differently, the field doesn’t need more cautious detectives.” Fighting words, those. A solid rhetorical strategy, perhaps, but then caution is also baked into the scientific method, as well it should be. So let’s talk about caution and ‘Oumuamua.
Loeb grew up on his family’s farm south of Tel Aviv, hoping at an early age to become a philosopher but delayed in the quest by his military service, where he likewise began to turn to physics. An early project was the use of electrical discharges to propel projectiles, a concept that wound up receiving funding from the US Strategic Defense Initiative during the latter era of the Cold War. He proceeded to do postgraduate work at the Institute for Advanced Study in Princeton, mixing with the likes of Freeman Dyson and John Bahcall, and moved on to become a tenured professor at Harvard. Long before ‘Oumuamua, his life had begun to revolve around the story told in data. He seems to have always believed that data would lead him to an audacious conclusion, and perhaps primed by his childhood even to expect such an outcome.
I also detect a trace of the mischief-maker, though a very deliberate one. To mix cultures outrageously, Loeb came out of Beit Hanan with a bit of Loki in him. And he’s shrewd: “You ask nature a series of questions and listen carefully to the answers from experiments,” he writes of that era, a credo which likewise informs his present work. Extraterrestrial is offered as a critique of the way we approach the unknown via our scientific institutions, and the reaction to the extraterrestrial hypothesis is displaying many of the points he’s trying to make.
Can we discuss this alien artifact hypothesis in a rational way? Loeb is not sure we can, at least in some venues, given the assumptions and accumulated inertia he sees plaguing the academic community. He describes pressure on young postdocs to choose career paths that will fit into accepted ideas. He asks whether what we might call the science ‘establishment’ is simply top-heavy, a victim of its own inertia, so that the safer course for new students is not to challenge older models.
These seem like rational questions to me, and Loeb uses ‘Oumuamua as the rhetorical church-key that pops open the bottle. So let’s look at what we know about ‘Oumuamua with that in mind. The things that trigger our interest and raise eyebrows arrive as a set of anomalies. They include the fact that the object’s brightness varied by a factor of ten every eight hours, from which astronomers could deduce an extreme shape, much longer than wide. And despite a trajectory that had taken it near the Sun, ‘Oumuamua did not produce an infrared signature detectable by the Spitzer Space Telescope, leading to the conclusion that it must be small, perhaps 100 yards long, if that.
‘Oumuamua seemed to be cigar-like in shape, or else flat, neither of these shapes having been observed at such extremes in naturally occurring objects in space. Loeb also notes that despite its small size and odd shape, the object was ten times more reflective than typical asteroids or comets in our system. Various theories spawned from all this try to explain its origins, but a slight deviation in trajectory as ‘Oumuamua moved away from the Sun stood out in our two weeks of data. The object had also been essentially at rest with respect to the local standard of rest, in itself an unusual state for it to have been in, until its encounter with our Sun changed its motion.
I don’t want to go over ground we’ve already covered in some detail here in the past — a search for ‘Oumuamua in the archives will turn up numerous articles, of which the most germane to this review is probably ‘Oumuamua, Thin Films and Lightsails. This deals with Loeb’s work with Shmuel Bialy on the non-gravitational acceleration, which occurred despite a lack of evidence for either a cometary tail or gas emission and absorption lines. All this despite an approach to the Sun of a tight 0.25 AU.
The fact that we do not see outgassing that could cause this acceleration is not the problem. According to Loeb’s calculations, such a process would have caused ‘Oumuamua to lose about a tenth of its mass, and he points out that this could have been missed by our telescopes. What is problematic is the fact that the space around the object showed no trace of water, dust or carbon-based gases, which makes the comet hypothesis harder to defend. Moreover, whatever the cause of the acceleration, it did not change the spin rate, as we would expect from asymmetrical, naturally occurring jets of material pushing a comet nucleus in various directions.
Extraterrestrial should be on your shelf for a number of reasons, one of which is that it encapsulates the subsequent explanations scientists have given for ‘Oumuamua’s trajectory, including the possibility that it was made entirely of hydrogen, or the possibility that it began to break up at perihelion, causing its outward path to deviate (again, our instruments found no evidence for this). And, of course, he makes the case for his hypothesis that sunlight bouncing off a thin sail would explain what we see, citing recent work on the likelihood that the object was disk-shaped.
So what do we do with such an object, beyond saying that none of our hypotheses can be validated by future observation since ‘Oumuamua is long gone (although do see the i4IS work on Project Lyra)? Now we’re at the heart of the book, for as we’ve seen, Extraterrestrial is less about ‘Oumuamua itself and more about how we do science, and what the author sees as an overly conservative approach fed by the demands of making a career. He’s compelled to ask: Shouldn’t the possibility of ‘Oumuamua being an extraterrestrial artifact, a technological object, be a bit less controversial than it appears to be, given the growth in our knowledge in recent decades? Let me quote the book:
Some of the resistance to the search for extraterrestrial intelligence boils down to conservatism, which many scientists adopt in order to minimize the number of mistakes they make during their careers. This is the path of least resistance, and it works; scientists who preserve their images in this way receive more honors, more awards, and more funding. Sadly, this also increases the force of their echo effect, for the funding establishes ever bigger research groups that parrot the same ideas. This can snowball; echo chambers amplify conservatism of thought, wringing the native curiosity out of young researchers, most of whom feel they must fall in line to secure a job. Unchecked, this trend could turn scientific consensus into a self-fulfilling prophecy.
Here I’m at sea. I’ve been writing about interstellar studies for the past twenty years and have made the acquaintance of many scientists both through digital interactions and conversations at conferences. I can’t say I’ve found many who are so conservative in their outlook as to resist the idea of other civilizations in the universe. I see ongoing SETI efforts like the privately funded Breakthrough Listen, which Loeb is connected to peripherally through his work with the Breakthrough Starshot initiative to send a probe to Proxima Centauri or other nearby stars. The book presents the background of Starshot by way of showing the public how sails, perhaps propelled by beamed energy as Starshot’s would be, might make sense as the best way to cross interstellar distances.
I also see active research on astrobiology, while the entire field of exoplanetary science is frothing with activity. To my eye as a writer who covers these matters rather than a scientist, I see a field that is more willing to accept the possibility of extraterrestrial intelligence than ever before. But I’m not working within the field as Loeb is, so his chastening of tribal-like patterns of behavior reflects, I’m sure, his own experience.
When I wrote the piece mentioned above, ‘Oumuamua, Thin Films and Lightsails, it was by way of presenting Loeb’s work on the deviation of the object’s trajectory as caused by sunlight, which he produced following what he describes in the book as “the same scientific tenet I had always followed — a hypothesis that satisfied all the data ought to be considered.” If nature wasn’t producing objects shaped like a lightsail that could apparently accelerate through the pressure of photons from a star, then an extraterrestrial intelligence was the exotic hypothesis that could explain it.
The key statement: “If radiation pressure is the accelerating force, then ‘Oumuamua represents a new class of thin interstellar material, either produced naturally…or is of an artificial origin.”
After this, Loeb goes on to say, “everything blew up.” Which is why on my neighborhood walks various friends popped up in short order asking: “So is it true? Is it ET?” I could only reply that I had no idea, and refer them to the discussion of Loeb’s paper on my site. Various headlines announcing that a Harvard astronomer had decided ‘Oumuamua was an alien craft have been all over the Internet. I can see why many in the field find this a nuisance, as they’re being besieged by people asking the same questions, and they have other work they’d presumably like to get on with.
So there are reasons why Extraterrestrial is, to some scientists, a needling, even cajoling book. I can see why some dislike the fact that it was written. But having to talk about one’s work is part of the job description, isn’t it? It was Ernest Rutherford who said that a good scientist should be able to explain his ideas to a barmaid. In these parlous times, we might change Rutherford’s dismissive ‘barmaid’ to a gender-neutral ‘blog writer’ or some such. But the point seems the same.
Isn’t communicating ideas part of the job description of anyone employed to do scientific research? So much of that research is funded by the public through their tax dollars, after all. If Loeb’s prickly book is forcing some scientists to take the time to explain why they think his hypothesis is unlikely, I cannot see that as a bad thing. Good for Avi Loeb, I’d say.
And whatever ‘Oumuamua is, we may all benefit from the discussion it has created. I enjoyed Loeb’s section on exotic theories within the physics community — he calls these “fashionable thought bubbles that currently hold sway in the field of astrophysics,” and in many quarters they seem comfortably accepted:
Despite the absence of experimental evidence, the mathematical ideas of supersymmetry, extra-spatial dimensions, string theory, Hawking radiation, and the multiverse are considered irrefutable and self-evident by the mainstream of theoretical physics. In the words of a prominent physicist at a conference that I attended: “These ideas must be true even without experimental tests to support them, because thousands of physicists believe in them and it is difficult to imagine that such a large community of mathematically gifted scientists could be wrong.”
That almost seems like a straw man argument, except that I don’t doubt someone actually said this — I’ve heard more or less the same sentiment voiced at conferences myself. Even so, I doubt many of the scientists I’ve gotten to know would go that far. But the broader point is sound. Remember, Loeb is all about data, and isn’t it true that multiverse ideas take us well beyond the realm of testable hypotheses? And yet many support them, as witness Leonard Susskind in his book The Black Hole War (2008):
“There is a philosophy that says that if something is unobservable — unobservable in principle — it is not part of science. If there is no way to falsify or confirm a hypothesis, it belongs to the realm of metaphysical speculation, together with astrology and spiritualism. By that standard, most of the universe has no scientific reality — it’s just a figment of our imaginations.”
So Loeb is engaging with a very charged issue, one that goes to the heart of what we mean by a hypothesis and the falsifiability of an idea. We know where he stands:
Getting data and comparing it to our theoretical ideas provides a reality check and tells us we are not hallucinating. What is more, it reconfirms what is central to the discipline. Physics is not a recreational activity to make us feel good about ourselves. Physics is a dialogue with nature, not a monologue.
You can see why Extraterrestrial is raising hackles in some quarters, and why Loeb is being attacked for declaring ‘Oumuamua a technology. But of course he hasn’t announced that ‘Oumuamua was an alien artifact. He has said that this is a hypothesis, not a statement of fact; that it fits what we currently know; and that it is plausible, perhaps the most plausible of the hypotheses on offer.
He goes on to call for deepening our commitment to Dysonian SETI, looking for signs of extraterrestrial intelligence through its artifacts, a field becoming known as astro-archaeology. And he considers what openness to the hypothesis could mean in terms of orienting our research and our imagination under the assumption that extraterrestrial intelligence is a likely outcome that should produce observables.
As I said above, Extraterrestrial should be on your shelf because it is above all else germane, with ‘Oumuamua being the tool for unlocking a discussion of how we do research and how we discuss the results. My hope is that it will give new public support to ongoing work that aims to answer the great question of whether we are alone in the universe. A great deal of that work continues even among many who find the ‘Oumuamua as technology hypothesis far-fetched and believe it over-reaches.
Is science too conservative to deal with a potentially alien artifact? I don’t think so, but I admire Avi Loeb for his willingness to shake things up and yank a few chains along the way. The debate makes for compelling drama and widens the sphere of discourse. He may well be right that by taking what he calls “‘Oumuamua’s Wager” (based on Pascal’s Wager, and advocating for taking the extraterrestrial technology hypothesis seriously) we would open up new research channels or revivify stagnant ones.
Some of those neighbors of mine that I’ve mentioned actually dug ‘Oumuamua material out of arXiv when I told them about that service and how to use it, an outcome Ernest Rutherford would have appreciated. I see Extraterrestrial as written primarily for people like them, but if it does rattle the cages of some in the physics community, I think the field will somehow muddle through. Add in the fact that Loeb is a compelling prose stylist and you’ll find your time reading him well spent.
i was curious what the other side of the moon looked like so i googled it and
i’m so glad we got the side we did the moon’s ass ugly
You’re so rude to Miss Moon the reason her ass is so fucked up is cuz she’s protecting us from meteors. Her face is beautiful so her ass can be disgusting and we can be safe.
Learn something new every day! I was just scrolling on Twitter when I saw my astronomy crush Will Gater tweet about using the raw black and white images from the Perseverance Rover on Mars to create colored images. Here's how to do it yourself, and a little about how it works.
My color processed image from Mars!
Any 24-bit color image is really made up of 3 individual 8-bit images - one for red, one for green, and one for blue (Wikipedia). These are referred to as RGB channel data. But how can a black and white image represent a color? The black and white images show how prominent the color is - the white represents the color and the black represents the absence of the color. So the brighter the image, the more of that color is present.
All you have to do to get a color photo out of the individual 8-bit images is to combine them. All the information is already right there; you just have to tell the computer which black and white image represents which color channel.
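If you'd rather skip Photoshop entirely, the same merge takes only a few lines of code. Here's a rough sketch using Python and the Pillow imaging library; the filenames are just placeholders for whichever three raw grayscale frames you download in the next step.

    # Merge three 8-bit grayscale frames into one RGB image (filenames are placeholders)
    from PIL import Image

    red   = Image.open("mars_red.png").convert("L")    # "L" = single 8-bit grayscale channel
    green = Image.open("mars_green.png").convert("L")
    blue  = Image.open("mars_blue.png").convert("L")

    # Tell the computer which grayscale frame is which channel, then combine them
    color = Image.merge("RGB", (red, green, blue))
    color.save("mars_color.png")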
Alright so the first thing you need is raw RGB channel images from Mars. Luckily NASA has tons of free images that you can use. After all, your taxes paid to get this rover to Mars in the first place. Here's a link to the page where you can download raw images: https://mars.nasa.gov/mars2020/multimedia/raw-images/
I just looked for 3 of the same image in the gallery, assuming the 3 nearly identical images go together to create one color photo. The next bit took me a second: which one is which color? It's a safe bet that in a photo of the red soil, the red channel should be the brightest image. I played around with it for a bit and then realized the photos are actually labeled in the file name. Here are the 3 raw images I used to make my photo: Red, Green, Blue
Next, assign the images to the RGB channel layers in Photoshop. This is actually really easy. I watched this helpful video, but I think I can just show you in screen shots.
First open all 3 images. Start with the red image (says NLR in file name) and then go to the Channel tab in the bottom right. All 3 channels are the same because this is a black and white image. All you have to do is copy the green image and paste it into the green tab, copy the blue image and paste it into the blue tab, and that's it!
Click the Channel tab then click on a layer to paste the image for that color. If you started with the red image, you only have to paste blue and green to complete it
You will see the color change after each layer you add, and the final image is all 3 channels combined
I didn't do much post processing at all, just adjusted the white balance to the body of the rover ever so slightly. How cool is this! This color photo hasn't even been released to the public yet but I created it myself from the raw images.
Show off your Doctor Who cred with a DIY badge you can solder yourself. Powered by an ATtiny85, it lights up and sounds like an 8-bit TARDIS. CR2032 battery not included.
Mount Etna has erupted four times in the past six days, sending lava down its slopes and showering nearby villages with ash. Etna, on the Italian island of Sicily, is one of the most active volcanoes in the world. No significant damage or injuries have been reported during this recent outburst, and officials have said they do not think there is immediate danger of escalation, but the views have been spectacular.
Lava flows down Mount Etna near Catania, Sicily, on February 16, 2021.
(
Salvatore Allegra / AP)
February 22, 2021 (Xʷməθkʷəy̓əm (Musqueam), Sḵwx̱wú7mesh (Squamish) and səlilwətaɬ (Tsleil-Waututh)/Vancouver, B.C.) – On Friday, February 19, Indigenous youth with the group Braided Warriors were violently removed and arrested by the Vancouver Police Department (VPD) after a peaceful sit-in in downtown Vancouver. For the past week, Braided Warriors has been peacefully occupying the offices of insurance companies to call on these companies to stop backing the Trans Mountain Expansion pipeline project.
On February 19, Braided Warriors occupied and engaged in ceremony in the BMO building which houses AIG Insurance. They were violently removed by approximately 25 VPD officers, who began violently assaulting and arresting the Indigenous youth. Four young people were arrested and, upon release, two youth had to receive medical treatment in the hospital.
Video footage of VPD violence and the injuries sustained can be found on @BraidedWarriors on Twitter and Instagram.
Braided Warriors is issuing the following media statement:
“We went to AIG offices in the BMO building of Downtown Vancouver to demand AIG stop insuring TMX and stop insuring genocide against Indigenous peoples. As Indigenous youth, we are taking a stand with all our Indigenous relatives in our collective fight for Land Back and an end to resource extraction on our lands and waters.
Approximately 70 VPD officers, many of whom were not wearing masks, all came in at once. We were given no warning or any time prior to being violently assaulted and removed from the property. The phone that we were livestreaming from was confiscated. We were violently thrown to the ground, dragged across floors and down stairways, pulled by the hair and braid, thrown to a surface covered in glass, strangled in a chokehold, or dragged face down on concrete. Our ceremonial items including drums, abalone shells, feathers, and red dresses to remember MMIW were desecrated, thrown, stepped on, and broken. Some of the VPD badge numbers that we collected were: 3314, 3241, 2993, 3330, and 3221.
Four of us were held in custody for over six hours and were initially denied access to lawyers. When we mentioned our bruises and injuries, we were told “She’s not going to the hospital, she’s going to jail,” and “You are lucky you are not in the States.” One of the youth arrested was refused urgent medical attention while in custody despite her head bleeding open, and had to receive emergency medical attention upon being released. One other youth who was not arrested also had to seek urgent medical care the same night, as well as another youth the following day. One person’s hand has severe soft tissue damage and requires a splint and physiotherapy. The person who was refused medical attention in-custody after having her head beaten open had to have her forehead glued and was diagnosed with a concussion.
After all this, we are the ones facing a number of criminal charges under this colonial law that has constantly tried to break and criminalize Indigenous resistance. But we will not be intimidated or silenced. We call on all people to make complaints to the Vancouver Police Department, the Vancouver Police Board, the Office of the Police Complaints Commissioner, and AIG Insurance to demand accountability for violence against Indigenous youth. Drop the charges! Stop insuring genocide! Canada must end the Trans Mountain Pipeline Expansion!”
From February 22-26, 2021, Kanahus Manuel and Mayuk Manuel, members of the Neskonlith Indian Band, are facing a political show trial for unjust charges stemming from their opposition to Trans Mountain Expansion near Blue River in Secwepemc territories. According to Kanahus Manuel of the Tiny House Warriors, “When Indigenous land defenders protect our waters and lands and insist on our right to affirm our laws and jurisdiction, we are arrested and jailed. We stand in full support of the Braided Warriors against Justin Trudeau’s Trans Mountain Pipeline and its private corporate backers like AIG Insurance. Neither the federal nor the provincial government has secured our consent for the Trans Mountain Expansion pipeline as required under the United Nations Declaration on the Rights of Indigenous Peoples, which Canada has signed. We will never allow the Trans Mountain Expansion to be built on Secwepemc land.”
In January 2021, the United Nations Committee on the Elimination of Racial Discrimination wrote to Canada regarding the violations of the rights of Indigenous peoples, in particular the absence of the free, prior and informed consent of the Secwepemc and Wet’suwet’en communities in relation to the approval of the Trans Mountain Pipeline Expansion project and the Coastal Gas Link Pipeline, as well as the development of Site C dam project.
“We are appalled at the VPD’s treatment of peaceful Indigenous youth protesting the Trans Mountain Pipeline,” says Kukpi7 Judy Wilson, Secretary-Treasurer of the Union of BC Indian Chiefs. “While the government claims to be taking action to end systemic racism and to seek reconciliation, Indigenous youth are being thrown on the ground, their hair is pulled, and they have to go to the hospital for police-inflicted injuries. Indigenous youth must not be criminalized and targeted for peacefully standing with Indigenous nations asserting their Title and Rights; this is in clear opposition to BC’s obligations under the Declaration on the Rights of Indigenous Peoples Act. We are inspired by the action of Indigenous youth and we raise our hands to them for their actions for all our future generations.”
States Harsha Walia, Executive Director of BC Civil Liberties Association, “The VPD’s horrifying treatment of Indigenous youth should outrage us all. The use of unbridled policing power is an attempt to silence and criminalize Indigenous people. With global movements decrying systemic racism in policing, this is a clear example of racist policing. The brutality meted out on Indigenous youth must be reviewed, the officers must be held accountable, and all charges must be dropped. We urgently need to transform the colonial disaster of policing.”
MEDIA CONTACTS:
Braided Warriors: braidedwarriors@gmail.com
Kanahus Manuel, Tiny House Warriors: 250-852-3924
Kukpi7 Judy Wilson, Union of BC Indian Chiefs: 604-812-5972
Harsha Walia, BC Civil Liberties Association: 778-885-0040
Well, that’s not exactly correct: you can mount more than one tmpfs, and you can mount multiples at the same spot, but I can’t think of a reason to do so. In fact, it could happen by accident, but there’s a fix for that in DragonFly, thanks to Aaron LI. Not a major problem, but mentioning it in case you saw it and were confused.
Joseph Silverman remembers when he began connecting the dots that would ultimately lead to a new branch of mathematics: April 25, 1992, at a conference at Union College in Schenectady, New York.
It happened by accident while he was at a talk by the decorated mathematician John Milnor. Milnor’s subject was a field called complex dynamics, which Silverman knew little about. But as Milnor introduced some basic ideas, Silverman started to see a striking resemblance to the field of number theory where he was an expert.
“If you just change a couple of the words, there’s an analogous sort of problem,” he remembers thinking to himself.
Silverman, a mathematician at Brown University, left the room inspired. He asked Milnor some follow-up questions over breakfast the next day and then set to work pursuing the analogy. His goal was to create a dictionary that would translate between dynamical systems and number theory.
At first glance, the two look like unrelated branches of mathematics. But Silverman recognized that they complement each other in a particular way. While number theory looks for patterns in sequences of numbers, dynamical systems actually produce sequences of numbers — like the sequence that defines a planet’s position in space at regular intervals of time. The two merge when mathematicians look for number-theoretic patterns hidden in those sequences.
In the decades since Silverman attended Milnor’s talk, mathematicians have dramatically expanded the connections between the two branches of math and built the foundations of an entirely new field: arithmetic dynamics.
The field’s reach continues to grow. In a paper published in Annals of Mathematics last year, a trio of mathematicians extended the analogy to one of the most ambitious and unexpected places yet. In doing so, they resolved part of a decades-old problem in number theory that didn’t previously seem to have any clear connection to dynamical systems at all.
The new proof quantifies the number of times that a type of curve can intersect special points in a surrounding space. Number theorists previously wondered if there is a cap on just how many intersections there can be. The authors of the proof used arithmetic dynamics to prove there is an upper limit for a particular collection of curves.
“We wanted to understand the number theory. We didn’t care if there was a dynamical system, but since there was one, we were able to use it as a tool,” said Laura DeMarco, a mathematician at Harvard University and co-author of the paper along with Holly Krieger of the University of Cambridge and Hexi Ye of Zhejiang University.
Moving on a Curve
In May 2010, a group of mathematicians gathered at a small research institute in Barbados where they spent sunny days discussing math just a few dozen feet from the beach. Even the lecture facilities — with no walls and simple wooden benches — left them as close to nature as possible.
“One evening when it was raining you couldn’t even hear people, because of the rain on the metal roof,” said Silverman.
The conference was a pivotal moment in the development of arithmetic dynamics. It brought together experts from number theory, like Silverman, and dynamical systems, like DeMarco and Krieger. Their goal was to expand the types of problems that could be addressed by combining the two perspectives.
Their starting point was one of the central objects in number theory: elliptic curves. Just like circles and lines, elliptic curves are both numbers and shapes. They are pairs of numbers, x and y, that serve as solutions to an algebraic equation like y² = x³ − 2x. The graph of those solutions creates a geometric shape that looks vaguely like a vertical line extruding a bubble.
Mathematicians have long been interested in quantifying and classifying various properties of these curves. The most prominent result to date is Andrew Wiles’ famed 1994 proof of Fermat’s Last Theorem, a question about which equations have solutions that are whole numbers. The proof relied heavily on the study of elliptic curves. In general, mathematicians focus on elliptic curves because they occupy the sweet spot of inquiry: They’re not easy enough to be trivial and not so hard that they’re impossible to study.
“Elliptic curves are still mysterious enough that they’re generating new math all the time,” said Matt Baker, a mathematician at the Georgia Institute of Technology.
Mathematicians are particularly interested in points on elliptic curves that act like a home base for a special way of moving around on the curves. On an elliptic curve, you can add points to each other using standard addition, but this approach is not very useful: the sum is unlikely to be another point on the curve.
But elliptic curves come packaged with a special internal structure that creates a different type of arithmetic. This structure is called a group, and the result of adding points together using its self-contained arithmetic rules is quite different.
If you add two points on an elliptic curve according to the group structure, the sum is always a third point on the curve. And if you continue this process by, for example, adding a point to itself over and over, the result is an infinite sequence of points that all lie along the elliptic curve.
Different starting points will result in different sequences. The “home base” points are starting points with a very unique property. If you repeatedly add one of these points to itself, it does not generate an infinite sequence of new points. Instead, it creates a loop that comes back to the point you started with.
These special starting values that create loops are called torsion points. They are of immediate interest to number theorists. They also have a striking correspondence to a specific type of point on dynamical systems — and it was this correspondence that really set arithmetic dynamics in motion.
“That’s truly the basis of why this field has become a field,” said Krieger.
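To make the looping behavior concrete, here is a small Python sketch of the chord-and-tangent addition law on an elliptic curve. The example is mine, not the researchers': the curve y² = x³ + 1 and the starting point (2, 3) are chosen because that point is a torsion point of order 6, so adding it to itself repeatedly returns to the group's identity (the "point at infinity") after six steps.

    # Group law on y^2 = x^3 + A*x + B over the rationals, via chord-and-tangent formulas
    from fractions import Fraction

    A, B = Fraction(0), Fraction(1)   # the curve y^2 = x^3 + 1
    O = None                          # the point at infinity (the group identity)

    def add(P, Q):
        """Add two points on the curve using the group law."""
        if P is O:
            return Q
        if Q is O:
            return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and y1 == -y2:            # P plus its mirror image is the identity
            return O
        if P == Q:
            s = (3 * x1 * x1 + A) / (2 * y1)  # slope of the tangent line (doubling)
        else:
            s = (y2 - y1) / (x2 - x1)         # slope of the chord through P and Q
        x3 = s * s - x1 - x2
        y3 = s * (x1 - x3) - y1
        return (x3, y3)

    P = (Fraction(2), Fraction(3))            # a torsion point on y^2 = x^3 + 1
    Q = P
    for n in range(1, 8):
        label = "O (the identity)" if Q is O else f"({Q[0]}, {Q[1]})"
        print(f"{n}P = {label}")              # walks (2,3), (0,1), (-1,0), (0,-1), (2,-3), then O
        if Q is O:
            break
        Q = add(Q, P)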
Repeating Patterns
Dynamical systems are often used to describe real-world phenomena that move forward in time according to a repeated rule, like the ricocheting of a billiard ball in accordance with Newton’s laws. You begin with a value, plug it into a function, and get an output that becomes your new input.
Some of the most interesting dynamical systems are driven by functions like f(x) = x² − 1, which are associated with intricate fractal pictures known as Julia sets. If you use complex numbers (numbers with a real part and an imaginary part) and apply the function over and over — feeding each output back into the function as the next input — you generate a sequence of points in the complex plane.
This is just one example of what’s called a quadratic polynomial, in which the variable is raised to the second power. Quadratic polynomials are the foundation of research in dynamical systems, just as elliptic curves are the focus of a lot of basic inquiry in number theory.
“Quadratic polynomials play a similar role as elliptic curves in number theory,” said Baker. “They’re the ground that we always seem to return to to try to actually prove something.”
Dynamical systems generate sequences of numbers as they evolve. Take for example that quadratic function f(x) = x² − 1. If you start with the value x = 2, you generate the infinite sequence 2, 3, 8, 63, and so on.
But not all starting values trigger a series that grows larger forever. If you begin with x = 0, that same function generates a very different type of sequence: 0, −1, 0, −1, 0, and so on. Instead of an infinite string of distinct numbers, you end up in a small, closed loop.
In the world of dynamical systems, starting points whose sequences eventually repeat are called finite orbit points. They are a direct analog of torsion points on elliptic curves. In both cases, you start with a value, apply the rules of the system or curve, and end up in a cycle. This is the analogy that the three mathematicians exploit in their new proof.
“This simple observation — that torsion points on the elliptic curve are the same as finite orbit points for a certain dynamical system — is what we use in our paper over and over and over again,” said DeMarco.
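Here is the same observation in miniature on the dynamical side, a sketch using the f(x) = x² − 1 example from above (the helper function and the step limit are my own choices): iterate the map and stop as soon as a value repeats.

    # Detect finite orbits of f(x) = x^2 - 1 by watching for a repeated value
    def orbit(x0, f, max_steps=12):
        """Return the orbit of x0 under f, and whether it entered a cycle."""
        seen, points = set(), []
        x = x0
        for _ in range(max_steps):
            if x in seen:                 # a value repeated: finite orbit
                return points, True
            seen.add(x)
            points.append(x)
            x = f(x)
        return points, False              # no repeat within max_steps

    f = lambda x: x * x - 1
    print(orbit(0, f))                    # ([0, -1], True): a finite orbit point
    pts, cycles = orbit(2, f)
    print(pts[:4], cycles)                # [2, 3, 8, 63] False: the orbit keeps growing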
Setting a Ceiling
Both Krieger and Ye received their doctorates from the University of Illinois, Chicago in 2013 under DeMarco’s supervision. The trio reconvened in August 2017 at the American Institute of Mathematics in San Jose, California, which hosts intensive, short-term research programs.
“We stayed in a room for five days. We needed to work through some questions,” said Ye.
During this period, they began to envision a way to extend the crucial analogy between torsion points of elliptic curves and finite orbit points of dynamical systems. They knew that they could transform a seemingly unrelated problem into one where the analogy was directly applicable. That problem arises out of something called the Manin-Mumford conjecture.
The Manin-Mumford conjecture is about curves that are more complicated than elliptic curves, such as y² = x⁶ + x⁴ + x² − 1. Each of these curves comes with an associated larger geometric object called a Jacobian, which mimics certain properties of the curve and is often easier for mathematicians to study than the curve itself. A curve sits inside its Jacobian the way a piece sits inside a jigsaw puzzle.
Unlike elliptic curves, these more complicated curves don’t have a group structure that enables adding points on a curve to get other points on the curve. But the associated Jacobians do. The Jacobians also have torsion points, just like elliptic curves, which circle back on themselves under repeated internal addition.
The Manin-Mumford conjecture has to do with how many times one of these complicated curves, nestled inside its Jacobian, intersects the torsion points of the Jacobian. It predicts that these intersections only occur finitely many times. The conjecture reflects the interrelationship between the algebraic nature of a curve (in the way that torsion points are special solutions to the equations defining the curve) and its life as a geometric object (reflecting how the curve is embedded inside its Jacobian, like one shape inside another). Torsion points are crowded in every region of the Jacobian. If you zoom in on any tiny part of it, you will find them. But the Manin-Mumford conjecture predicts that, surprisingly, the nestled curve still manages to miss all but a finite number of them.
In 1983 Michel Raynaud proved the conjecture true. Since then, mathematicians have been trying to upgrade his result. Instead of just knowing that the number of intersections is finite, they’d like to know it’s below some specific value.
“Now that you know that they have only finitely many points in common, then every mathematician you would meet would say, well, how many?” said Krieger.
But the effort to count the intersection points was impeded by the lack of a clear framework in which to think about the complex numbers that define those points. Arithmetic dynamics ended up providing one.
Translating the Problem
In their 2020 paper, DeMarco, Krieger and Ye established that there is an upper bound on the intersection number for a family of curves. A newer paper by another mathematician, Lars Kühne of the University of Copenhagen, presents a proof establishing an upper bound for all curves. That paper was posted in late January and has not been fully vetted.
Raynaud’s previous result proved simply that the number of intersections is finite — but it left room for that finite number to be as large as you could possibly want (in the sense that you can always make a larger finite number). The trio’s new proof establishes what’s called a uniform bound, a cap on how big that finite number of intersections can be. DeMarco, Krieger and Ye didn’t identify that cap exactly, but they proved it exists, and they also identified a long series of steps that future work could take to calculate the number.
Their proof relies on a unique property of the Jacobians associated to this special family of curves: They can be split apart into two elliptic curves.
The elliptic curves that make up the Jacobians take their solutions from the complex numbers, which gives their graphs a bulkier appearance than the graphs of elliptic curves whose solutions come from the real numbers. Instead of a wiggly line, they look like the surface of a doughnut. The specific family of curves that DeMarco, Krieger and Ye studied has Jacobians that look like two-holed doughnuts. They break apart nicely into two regular doughnuts, each of which is the graph of one of the two constituent elliptic curves.
The new work focuses on the torsion points of those elliptic curves. The three mathematicians knew that the number they were interested in — the number of intersection points between complicated curves and the torsion points of their Jacobians — could be reframed in terms of the number of times that torsion points from one of those elliptic curves overlap torsion points from the other. So, to put a bound on the Manin-Mumford conjecture, all the authors had to do was count the number of intersections between those torsion points.
They knew this could not be accomplished directly. The two elliptic curves and their torsion points could not be immediately compared because they do not necessarily overlap. The torsion points are sprinkled on the surfaces of the elliptic curves, but the two curves might have very different shapes. It’s like comparing points on the surface of a sphere to points on the surface of a cube — the points can have similar relative positions without actually overlapping.
“You can’t really compare the points on those elliptic curves, because they’re in different places; they’re living on different geometric objects,” said Krieger.
But while the torsion points don’t actually necessarily overlap, it’s possible to think of pairs of them as being in the same relative position on each doughnut. And pairs of torsion points that occupy the same relative position on their respective doughnuts can be thought of as intersecting.
In order to determine precisely where these intersections take place, the authors had to lift the torsion points off their respective curves and transpose them over each other — almost the way you’d fit a star chart to the night sky.
Mathematicians knew about these star charts, but they didn’t have a good perspective that allowed them to count the overlapping points. DeMarco, Krieger and Ye managed it using arithmetic dynamics. They translated the two elliptic curves into two different dynamical systems. The two dynamical systems generated points on the same actual space, the complex plane.
“It’s easier to think of one space with two separate dynamical systems, versus two separate spaces with one dynamical system,” said DeMarco.
The finite orbit points of the two dynamical systems corresponded to the torsion points of the underlying elliptic curves. Now, to put a bound on the Manin-Mumford conjecture, the mathematicians just needed to count the number of times these finite orbit points overlapped. They used techniques from dynamical systems to solve the problem.
Counting the Overlap
In order to count the number of overlaps, DeMarco, Krieger and Ye turned to a tool which measures how much the value of an initial point grows as it’s repeatedly added to itself.
The torsion points on elliptic curves have no growth or long-term change, since they circle back to themselves. Mathematicians measure this growth, or lack of it, using a “height function.” It equals zero when applied to the torsion points of elliptic curves. Similarly, it equals zero when applied to the finite orbit points of dynamical systems. Height functions are an essential tool in arithmetic dynamics because they can be used on either side of the divide between the two branches.
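For readers who want the formal version, a standard way to write the two zero-height criteria (this is textbook material rather than anything specific to the new paper; here h is the ordinary height of a rational point and d ≥ 2 is the degree of the map f):

    \hat{h}_E(P) = \lim_{n \to \infty} \frac{h(2^n P)}{4^n}, \qquad \hat{h}_E(P) = 0 \iff P \text{ is a torsion point}

    \hat{h}_f(x) = \lim_{n \to \infty} \frac{h(f^n(x))}{d^n}, \qquad \hat{h}_f(x) = 0 \iff x \text{ has a finite orbit}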
The authors studied how often points of zero height coincide for the dynamical systems representing the elliptic curves. They showed that these points are sufficiently scattered around the complex plane so that they are unlikely to coincide — so unlikely, in fact, that they can’t do it more than a specific number of times.
That number is difficult to compute, and it’s probably much larger than the actual number of coinciding points, but the authors proved that this hard ceiling does exist. They then translated the problem back into the language of number theory to determine a maximum number of shared torsion points on two elliptic curves — the key to their original question and a provocative demonstration of the power of arithmetic dynamics.
“They’re able to answer a specific question that already existed just within number theory and that nobody thought had anything to do with dynamical systems,” said Patrick Ingram of York University in Toronto. “That got a lot of attention.”
Shortly after DeMarco, Krieger and Ye first posted their proof of a uniform bound for the Manin-Mumford conjecture, they released a second, related paper. The follow-up work is about a question in dynamical systems, instead of number theory, but it uses similar methods. In that sense, the pair of papers is a quintessential product of the analogy Silverman noticed almost 30 years earlier.
“In some sense, it’s the same argument applied to two different families of examples,” said DeMarco.
The two papers synthesized many of the ideas that mathematicians working in arithmetic dynamics have developed over the last three decades while also adding wholly new techniques. But Silverman sees the papers as suggestive more than conclusive, hinting at an even wider influence for the new discipline.
“The specific theorems are special cases of what the big conjectures should be,” said Silverman. “But even those individual theorems are really, really beautiful.”
Correction: February 23, 2021 This article has been revised to avoid implying that Lars Kühne’s new work uses arithmetic dynamics.
In the last post in this series, I discussed the idea of large-scale depots for human spaceflight applications, which operate in fixed low orbits. While the final post in this series will investigate human spaceflight depots that operate in fixed higher orbits [1], in this post I want to talk about situations where you want to refuel with large amounts of cryogenic propellants in an orbit where a permanent depot doesn’t make sense — in temporary highly elliptical orbits on your way into/out of a deep gravity well like Earth’s. The background on these depots involves getting into the weeds on some orbital dynamics, but I’ll try to keep it as understandable as possible for the layperson [2].
Orbital Dynamics Background for Roving Depots
For an outbound interplanetary mission, there are two obvious places to do refueling — one at the first safe stopping point after leaving the planetary body [3], and the other in an orbit that’s just shy of leaving the planetary body’s gravity well [4]. For a high-thrust/low-Isp departure, like you’d have with a chemical or nuclear thermal rocket, this would be a highly elliptical orbit, with the periapsis as low as possible [5] and the apoapsis as high as practical [6]. There’s just one problem — in order to leave on an interplanetary trajectory from a highly elliptical orbit, that elliptical orbit’s periapsis has to be in the right place [7], going in the right direction [8], at the right time [9]. As you can probably guess, the odds of the specific highly elliptical orbit for one interplanetary departure trajectory lining up with another specific interplanetary departure trajectory at a specific point in the future are really, really low.
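To put rough numbers on why that highly elliptical staging orbit is so attractive (my own back-of-the-envelope figures, not anything from this post), here's a quick vis-viva comparison of the injection burn needed to reach a Mars-class hyperbolic excess speed from a 400 km circular orbit versus from the periapsis of a 400 km x 100,000 km ellipse:

    # Compare interplanetary injection delta-v from LEO vs. from an elliptical periapsis
    from math import sqrt

    MU = 398600.4418            # Earth's gravitational parameter, km^3/s^2
    R_EARTH = 6378.137          # km
    v_inf = 3.5                 # km/s, roughly a Mars-class hyperbolic excess speed (assumed)

    rp = R_EARTH + 400.0        # periapsis radius shared by both cases (400 km altitude)
    v_needed = sqrt(v_inf**2 + 2.0 * MU / rp)    # speed required at rp to achieve v_inf

    # Case 1: inject directly from a 400 km circular orbit
    v_circ = sqrt(MU / rp)
    dv_leo = v_needed - v_circ

    # Case 2: inject from the periapsis of a 400 km x 100,000 km elliptical orbit
    ra = R_EARTH + 100000.0
    a = (rp + ra) / 2.0
    v_peri = sqrt(MU * (2.0 / rp - 1.0 / a))     # vis-viva speed at periapsis of the ellipse
    dv_heo = v_needed - v_peri

    print(f"injection burn from the circular orbit: {dv_leo:.2f} km/s")        # ~3.7 km/s
    print(f"injection burn from the elliptical periapsis: {dv_heo:.2f} km/s")  # ~0.9 km/s

The catch, of course, is the alignment problem described above: that cheap periapsis burn only helps if the ellipse is pointed the right way at the right time.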
An illustration of hyperbolic departures, showing the locus of periapses/injection points on the opposite side of the earth from the desired departure asymptote (credit: Bate, Mueller, and White)
While I was working with Mike Loucks and John Carrico on our three-burn departure orbital dynamics papers (reviewed previously on Selenian Boondocks here, here, and here), I realized that there might be a way around this problem of elliptical orbits not lining up for future missions, especially if you had a reusable, mobile depot capable of both entering into and then returning from that highly elliptical orbit.
Illustration of the 3-Burn Departure method described in the two linked AAS papers. An interplanetary mission stack (1) starts in a low-orbit depot’s orbit after topping-up, (2) performs a burn to enter a highly-elliptical phasing orbit, (3) performs a plane change maneuver at apogee to align the departure plane, and (4) performs the final interplanetary departure burn when the spacecraft is back at perigee in the right place at the right time.
One or more such “roving depots” could work with reusable tankers [10] and a low-orbit depot to preposition propellants into highly-elliptical orbits for specific departure missions, with the mission stack for a given mission only having to travel out and rendezvous with the depot on the last occasion when the departure highly-elliptical orbit plane lines up with the low-orbit depot’s plane. Once the mission stack has refueled and left the roving depot (on its way to its final plane change and departure burn), the roving depot can return to the low-orbit depot the next time its plane lines up with the low-orbit depot’s, enabling it to be refueled and prepared for its next mission.
Conceptual Illustration of a reusable space tug. A roving depot would likely look similar, except possibly with more tankage and robotics capabilities. (Credit: Commercial Space Development Company)
Roving Depots
Application: Significantly enhancing the performance benefits of using a low-orbit depot, by providing one last chance to top off before heading out into interplanetary space. This can range from topping up a smallsat interplanetary stage to assembling and fueling a large interplanetary human mission or multi-ship convoy.
Location: As described earlier, roving depots are mobile depots that start a given mission at a low-orbit depot, maneuver into a highly elliptical orbit that is targeted for a specific interplanetary departure, and then returns to the low-orbit depot after the refueling operation is over.
Size: Depends strongly on what is being refueled. These could be as small as 100’s of kg for refueling a smallsat launcher interplanetary stage, up to 10’s to 100’s of mT for refueling larger human spaceflight missions.
Propellant Types: The propellant type for a roving depot will be driven by the propellant type used for the mission stack stage that is performing the interplanetary injection. For smallsat interplanetary stages, this is almost certainly storable, while for human spaceflight missions this would likely be LOX/LH2 or LOX/CH4.
Other Considerations
For roving depots and tankers operating around planets with an atmosphere, you’ll almost always want to use some form of aerobraking or aerocapture system [11]. Because you want your highly-elliptical orbit to have a relatively low periapsis for departure performance reasons, it only takes a little burn at apoapsis to lower the periapsis far enough for practical aerobraking/aerocapture.
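As a quick sanity check on how little that burn is (again, my own illustrative numbers): dropping the periapsis of a 400 km x 100,000 km phasing orbit down to a 60 km aerobraking altitude costs only about 16 m/s at apoapsis in this sketch.

    # Apoapsis burn needed to lower periapsis into the atmosphere for aerobraking
    from math import sqrt

    MU = 398600.4418            # Earth's gravitational parameter, km^3/s^2
    R_EARTH = 6378.137          # km

    def v_apoapsis(rp, ra):
        """Vis-viva speed at apoapsis of an ellipse with periapsis rp and apoapsis ra (radii, km)."""
        a = (rp + ra) / 2.0
        return sqrt(MU * (2.0 / ra - 1.0 / a))

    ra = R_EARTH + 100000.0     # apoapsis of the phasing orbit (100,000 km altitude)
    rp_high = R_EARTH + 400.0   # periapsis before the burn (400 km altitude)
    rp_low = R_EARTH + 60.0     # target aerobraking periapsis (60 km altitude)

    dv = v_apoapsis(rp_high, ra) - v_apoapsis(rp_low, ra)
    print(f"apoapsis burn to lower periapsis: {dv * 1000:.0f} m/s")   # roughly 16 m/s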
Something you may have noticed is that the distinction between a roving depot and a tanker is sort of blurry. As I see it, they exist in sort of a continuum with expendable tankers on one end and reusable roving depots on the other extreme. The key differences are that roving depots would be more likely to have more significant propellant cooling (active and passive) capabilities, likely be designed to handle a wider range of client vehicles, and likely carry a lot more rendezvous, prox-ops, and grappling hardware.
I think the tankers that top off a roving depot in its mission orbit would likely be just minimalist upper stages with an aerocapture system, with as much of the smarts and complexity offloaded to the roving depot as possible, minimizing the dry mass per unit of propellant hauled from the low-orbit depot to the roving depot.
I think the roving depots, since they move less, would strike a balance of complexity/robustness between the tankers and a fixed depot. You don’t want to go too crazy throwing dry mass at problems like RPOD [12] reliability and mission robotics flexibility, but you do want to make the tankers as dumb and lean as possible by offloading as many capabilities as you can to the roving depot.
This idea of roving depots can be combined with in-space assembly/manufacturing in the highly-elliptical orbit, sending up parts/materials/propellant for the mission every few months when the low-orbit depot lines up with the departure plane, assembling the overall stack, and then only sending the mission crew up on the last time the low-orbit depot’s plane aligns with the departure plane.
For chemical propulsion, travel to interplanetary destinations like Mars and Venus typically is only feasible once every 1.5-2yrs (depending on the synodic period). The use of roving depots can allow you to spread out the propellant launches for missions to a planet like that over a longer period of time. Instead of having to launch all the propellant and hardware for a given “launch season” all in the few months leading up to that season, you may be able to set up multiple roving depots, aligned for that departure opportunity, and then top them up over the course of a year or more. This allows a much smaller fixed low-orbit depot to support a lot more mission capacity than you would otherwise think, because the low-orbit depot wouldn’t need as much surge capacity, since you could likely plan things in advance.
This also suggests to me that for busy planetary systems like Earth, if roving depots take off as a concept, they’d likely significantly outnumber the low-orbit fixed depots.
One drawback to using roving depots and these highly elliptical parking orbits is that you end up putting a lot more Van Allen belt passes on your hardware than you would otherwise. While you typically won’t have your crew onboard until late in the process, your electronics, especially on your roving depot, will take a lot more radiation than they otherwise would.
On the plus side, you spend a lot less time in LEO, where MMOD [13] issues are worse, so your mission hardware doesn’t need as much MMOD shielding for the time it spends in orbit. Additionally, since you spend most of your orbit far away from the planet, these highly-elliptical phasing orbits tend to be much easier environments for long-term storage of cryogenic propellants.
One other consideration is that the longer you spend in the phasing orbit, the more orbit adjustment maneuvers you’ll need to perform. This may put some limitations on how long you can practically build things up in a phasing orbit for a given mission. If you’re just pre-staging propellant long in advance for a large convoy mission, you may be able to let your orbit drift a bit and only trim things up shortly before departure, but we’d need to run the numbers on how practical that is. This is more of an issue for orbits with very high apogees/long periods, especially in multi-body systems like the Earth/Moon system, where lunar perturbations become more of a problem with high-apogee phasing orbits. In theory, it may be possible to craft a mission profile that starts in a lower, shorter-period phasing orbit initially, but then boosts up to a higher phasing orbit shortly before the final refueling of the mission stack. Long story short, there are lots of knobs to twist on optimization [14].
One other problem I’m handwaving away as probably solvable is the complexity of rendezvousing with a roving depot in a highly-elliptical orbit. This will definitely be a lot different than the relative dynamics of rendezvous with objects in LEO. Fortunately, though, it should be pretty similar to rendezvousing with a facility in NRHO, so it’s something that has probably had a lot of recent thought put into it.
Roving depots around planets with lots of moons will definitely be more challenging from an orbital dynamics standpoint, but could be really enabling for missions to the gas giant planets. Especially if you want to do return trips, and don’t have something as cool as an Epstein Drive available yet. In some ways this reminds me of the idea of using base camps and/or storage depots when planning expeditions up mountains, or the early Antarctic expeditions. Once you have the right building blocks in place, there’s a lot you can do.
When you think about what you can do with the combination of low-orbit depots and roving depots on both ends of the mission, especially supported with ISRU and reusable launch, you can actually do very large and capable missions without needing super-heavy lift vehicles. It’s kind of amazing what you can do with a refuel early, refuel often, reusable space transportation architecture.
Next Up: An Updated Propellant Depot Taxonomy Part VII: Human Spaceflight Fixed Depots (High-Orbit)
If you want to hack around with the communication protocol that USB Power Delivery devices use to negotiate their power requirements with the upstream source, a tool like Google’s Twinkie really helps. With it you can sniff data off the line, analyze it, and even inject your own packets. Luckily for us, the search giant made the device open source so we can all have one of our own.
Unfortunately, as [dojoe] found out, the Twinkie isn’t particularly well suited for small-scale hobbyist manufacturing. So he came up with a revised design he calls Twonkie that replaces the six layer PCB with a much more reasonable four layer version that can be manufactured cheaply by OSHPark, and swaps out the BGA components with QFP alternatives you can hand solder.
Somewhere back in my long ago I got involved with the QRP world, a time when there was always another kit that needed to be built. One that caught my fancy was the Wilderness Sierra, a QRP multi-band CW transceiver whose appearance was precisely the way I imagined any real ham radio transceiver should look.
This transceiver features a low current drain superhet receiver and VFO controlled transmitter. It all fits into a 2.5”H x 6.2”W x 5.5”D box done up in two tone blue. It was designed by Wayne Burdick, N6KR and was kitted and beta tested by NorCal. The NorCal rig was such a hit that Bob Dyer, KD6VIO (now K6KK) formed Wilderness Radio to market the Sierra.
Best I recall (that was more than 20 years ago) I discovered the Wilderness Sierra around the same time that Elecraft was formed and announced the K2. I opted for the K2 and was not disappointed. I built #524 in 1999 and was absolutely captivated. I still have the K2, the only transceiver I’ve ever kept, would never sell, and intend to be buried with.
But I never forgot about the Wilderness Sierra and always counted it as one that got away as it was eventually discontinued.
I mentioned this in conversation with a friend who lives nearby who also happens to be a fellow CW and QRP enthusiast. Turns out that a friend of his had pulled up stakes and moved to Florida a few years ago and when he did, he left my buddy with a pile of old gear and a few unbuilt kits. He was pretty sure there was an unbuilt Wilderness Sierra in that pile. He said he would check and if so, I could have it.
If you have no friends like that I suggest you make some!
That conversation was a few weeks ago but yesterday he drove by my house and dropped a package off on the doorstep. It was the Wilderness Sierra kit, unbuilt and unopened. I opened the bag with the front panel to get a better look at it. It seems to all be there. Bags of unopened components all marked with the original packaging stickers…
I’ll first do a complete inventory to see if anything is missing of course, but my hopes are high at this point. It’s a rare find and a unique opportunity to turn back time and I have no intention of rushing through it.
Right now I’m trying to decide if the assembly should be accompanied by bourbon or a good wine, and if so, what wine? And what tobacco should be burning in my pipe while I’m winding so many toroids?
The workbench will be updated. This build deserves new tools and test equipment. I don’t know yet when the work will commence or how long it might take, but the journey to assemble this kit into a working transceiver will be a joy and I intend to savor every moment of it.
Besides, if it takes three months or a year, does it matter?
Like many self-confessed geeks, I’ve long been curious about 3d-printing. To me, it sounds like the romantic early days of home computing in the 70s, where expensive machines that easily broke and were used as toys gradually gave way to more reliable and useful devices that became mainstream twenty years later.
The combination of a few factors led me to want to give it a go: needing a hobby in lockdown; teenage kids who might take to it (and were definitely interested); a colleague who had more experience with it; and the continuing drop in prices and relative maturity of the machines.
Going into this, I knew nothing about the details or the initial difficulties, so I wanted to blog about it before I forget about them or think that they are ‘obvious’ to everyone else. Plenty wasn’t obvious to me…
Reading Up And Choosing A Printer
I started by trying to do research on what kind of printer I wanted, and quickly got lost in a sea of technical terms I didn’t understand, and conflicting advice on forums. The choices were utterly bewildering, so I turned to my colleague for advice. The gist of what he told me was: ‘Just pick one you can afford that seems popular and go for it. You will learn as you go. Be prepared for it to break and be a general PITA.’
So I took his advice. I read somewhere that resin printers were far more detailed, and got advice from another former colleague on the reputable brands, held my nose and dove in. I plumped for the Elegoo Mars 2, as it was one of the recommendations, and it arrived a few days later, along with a bottle of resin. Machine + resin was about £230.
Setup
I won’t say setup was a breeze, but I imagine it was a lot slicker than it was in the really early days of home 3d printing. I didn’t have to construct the entire thing, and the build quality looked good to me.
The major difficulties I had during setup were:
Not realising I needed IPA (isopropyl alcohol, 90%+) to wash the print, plus surgical gloves (washing-up gloves won’t cut it) and a mask. The people who inhabit 3d printing forums seemed to think it was trivial to get hold of gallons of IPA from local hardware stores, but all I could find was a surprisingly expensive 250ml bottle for £10 in a local hardware shop (the third I tried). Three pairs of gloves are supplied.
Cack-handedly dropping a screw into the resin vat (not recommended) and having to fish it out.
Not following the instructions on ‘levelling the plate’ (the print starts by sticking resin to the metal printing plate, so it has to be very accurately positioned) to the absolute letter. The instructions weren’t written by a native speaker and also weren’t clearly laid out (that’s my excuse).
I also wasn’t aware that 3d-printing liquid resin is an unsafe substance (hence the gloves and mask), and that the 3d printing process produces quite a strong smell. My wife wasn’t particularly happy about this news, so I then did a lot of research to work out how to ensure it was safe. This was also bewildering, as you get everything from health horror stories to “it’s fine” reassurance.
In the event it seems like it’s fine, as long as you keep a window open whenever the printing lid is off and for a decent time after (30 mins+). It helps if you don’t print all day every day. The smelliest thing is the IPA, which isn’t as toxic as the resin, so as long as you keep the lid on wherever possible any danger is significantly reduced. If you do the odd print every other day, it’s pretty safe as far as I can tell. (This is not medical advice: IANAD). A far greater risk, it seems, is getting resin on your hands.
Thankfully also, the smell is not that unpleasant. It’s apparently the same as a ‘new car’ smell (which, by the way, is apparently horrifyingly toxic – I’ll always be opening a window when I’m in a new car in future).
Unlike the early days of computing, we have YouTube, and I thoroughly recommend watching setup videos before embarking on it yourself.
Finally, resin disposal is something you should be careful about. It’s irresponsible to pour resin down the drain, so don’t do it. Resin hardens in UV light (that’s how the curing/hardening process works), so there’s plenty of advice on how to dispose of it safely.
First Print
The first prints (which come on the supplied USB stick) worked first time, which was a huge relief. (Again, online horror stories of failed machines abound.)
The prints were great little pieces in themselves: a so-called ‘torture test’ to put the printer through its paces. A pair of rooks with intricate staircases inside and minute but legible lettering. The kids claimed them as soon as I’d washed them in alcohol and water, before I had time to properly cure them.
I didn’t know what curing was at the time, and had just read that it was a required part of the process. I was confused because I’d read it was a UV process, but since the machine worked by UV I figured that the capability to cure came with the machine. Wrong! So I’d need a source of UV light, which I figured daylight would provide.
I tried leaving the pieces outside for a few hours, but I had no idea when they would be considered done, or even ‘over-cured’, which is apparently a thing. In the end I caved and bought a curing machine for £60 that gave me peace of mind.
From here I printed something for the kids. The first print proper:
Darth Buddha, First Print for my Kids
I’d decided to ‘hollow out’ this figure to save on resin. I think it was hollowed to 2mm, and it worked out pretty well. One downside was that the base came away slightly at the bottom, suggesting I’d hollowed it out too much. In any case, the final result has pride of place next to the Xbox.
More Prints
Next was one for me, an Escher print I particularly like (supposedly the figure in the reality/gallery world is Wittgenstein):
MC Escher’s ‘Print Gallery’ Etched in 3-D
You can see that there are whiter, chalkier bits. I think this is something to do with some kind of failure in my washing/curing process combined with the delicacy of the print, but I haven’t worked out what yet.
And one for my daughter (she’s into Death Note):
And another for me – a 3D map of the City of London:
A 3D Map of the City of London
The Paraphernalia Spreads…
Another echo of the golden age of home computing is the way the paraphernalia around the machine gradually grows. The ‘lab’ quickly started to look like this:
The Paraphernalia Spreads…
Alongside the machine itself, you can also see the tray, tissue paper, bottles (IPA and resin), curing station, gloves, masks, tools, various tupperware containers, and a USB stick.
It helps if you have a garage, or somewhere to spread out to that other people don’t use during the day.
Disaster One
After a failed print (an elephant phone holder for my mother), which sagged halfway through on the plate, the subsequent attempts to print were marked by what sounded like a grinding noise of the plate against the resin vat. It was as though the plate was trying to keep going through the vat to the floor of the machine.
I looked up this problem online, and found all sorts of potential causes, and no easy fix. Some fixes talked about attaching ‘spacers’ (?) to some obscure part of the machine. Others talked about upgrading the firmware, or even a ‘factory reset’. Frustrated with this, I left it alone for a couple of weeks. After re-levelling the plate a couple of times (a PITA, as the vat needed to be carefully removed, gloves and mask on etc), it occurred to me one morning that maybe some hardened material had fallen into the resin vat and that that was what the plate was ‘grinding’ on.
I drained the vat, which was a royal PITA the first time I did it, as my ineptitude resulted in spilled resin due to the mismatch between bottle size and resin filter (the supplied little resin jug is also way too small for the purpose). But it was successful, as there were bits caught in the filter, and after re-filling the vat I was happily printing again.
Disaster Two
Excited that I hadn’t spent well north of £200 on a white elephant, I went to print another few things. Now the prints were failing to attach to the plate, meaning that nothing was being printed at all. A little research again, and another draining of the vat later I realised the problem: the plate hadn’t attached to the print, but the base of the print had attached to the film at the bottom of the vat. This must be a common problem, as a plastic wedge is provided for exactly this purpose. It wasn’t too difficult to prise the flat hardened piece of resin off the floor of the vat and get going again.
Talking to my colleague I was told that ‘two early disasters overcome is pretty good going so far’ for 3d printing.
We’re Back
So I was back in business. And I could get back to my original intention to print architectural wonders (history of architecture is an interest of mine). Here’s a nice one of Notre Dame.
Conclusion
When 3d printing works, it’s a joy. There is something magical about creating something so refined out of a smelly liquid.
When it doesn’t work it’s very frustrating. Like speculating on shares, I would only spend money on it that you can afford to lose. And like any kind of building, don’t expect the spending to stop at the initial materials.
I think this is the closest I’ll get to the feeling of having one of these in 1975 (the year I was born).
The Altair 8800 Home PC
It’s also fun to speculate on what home 3d printing will look like in 45 years…
Linux draws a distinction between code running in kernel (kernel space) and applications running in userland (user space). This is enforced at the hardware level - in x86-speak[1], kernel space code runs in ring 0 and user space code runs in ring 3[2]. If you're running in ring 3 and you attempt to touch memory that's only accessible in ring 0, the hardware will raise a fault. No matter how privileged your ring 3 code, you don't get to touch ring 0.
Kind of. In theory. Traditionally this wasn't well enforced. At the most basic level, since root can load kernel modules, you could just build a kernel module that performed any kernel modifications you wanted and then have root load it. Technically user space code wasn't modifying kernel space code, but the difference was pretty semantic rather than useful. But it got worse - root could also map memory ranges belonging to PCI devices[3], and if the device could perform DMA you could just ask the device to overwrite bits of the kernel[4]. Or root could modify special CPU registers ("Model Specific Registers", or MSRs) that alter CPU behaviour via the /dev/msr interface, and compromise the kernel boundary that way.
It turns out that there were a number of ways root was effectively equivalent to ring 0, and the boundary was more about reliability (ie, a process running as root that ends up misbehaving should still only be able to crash itself rather than taking down the kernel with it) than security. After all, if you were root you could just replace the on-disk kernel with a backdoored one and reboot. Going deeper, you could replace the bootloader with one that automatically injected backdoors into a legitimate kernel image. We didn't have any way to prevent this sort of thing, so attempting to harden the root/kernel boundary wasn't especially interesting.
In 2012 Microsoft started requiring vendors ship systems with UEFI Secure Boot, a firmware feature that allowed[5] systems to refuse to boot anything without an appropriate signature. This not only enabled the creation of a system that drew a strong boundary between root and kernel, it arguably required one - what's the point of restricting what the firmware will stick in ring 0 if root can just throw more code in there afterwards? What ended up as the Lockdown Linux Security Module provides the tooling for this, blocking userspace interfaces that can be used to modify the kernel and enforcing that any modules have a trusted signature.
But that comes at something of a cost. Most of the features that Lockdown blocks are fairly niche, so the direct impact of having it enabled is small. Except that it also blocks hibernation[6], and it turns out some people were using that. The obvious question is "what does hibernation have to do with keeping root out of kernel space", and the answer is a little convoluted and is tied into how Linux implements hibernation. Basically, Linux saves system state into the swap partition and modifies the header to indicate that there's a hibernation image there instead of swap. On the next boot, the kernel sees the header indicating that it's a hibernation image, copies the contents of the swap partition back into RAM, and then jumps back into the old kernel code. What ensures that the hibernation image was actually written out by the kernel? Absolutely nothing, which means a motivated attacker with root access could turn off swap, write a hibernation image to the swap partition themselves, and then reboot. The kernel would happily resume into the attacker's image, giving the attacker control over what gets copied back into kernel space.
This is annoying, because normally when we think about attacks on swap we mitigate it by requiring an encrypted swap partition. But in this case, our attacker is root, and so already has access to the plaintext version of the swap partition. Disk encryption doesn't save us here. We need some way to verify that the hibernation image was written out by the kernel, not by root. And thankfully we have some tools for that.
Trusted Platform Modules (TPMs) are cryptographic coprocessors[7] capable of doing things like generating encryption keys and then encrypting things with them. You can ask a TPM to encrypt something with a key that's tied to that specific TPM - the OS has no access to the decryption key, and nor does any other TPM. So we can have the kernel generate an encryption key, encrypt part of the hibernation image with it, and then have the TPM encrypt it. We store the encrypted copy of the key in the hibernation image as well. On resume, the kernel reads the encrypted copy of the key, passes it to the TPM, gets the decrypted copy back and is able to verify the hibernation image.
That's great! Except root can do exactly the same thing. This tells us the hibernation image was generated on this machine, but doesn't tell us that it was done by the kernel. We need some way to be able to differentiate between keys that were generated in kernel and ones that were generated in userland. TPMs have the concept of "localities" (effectively privilege levels) that would be perfect for this. Userland is only able to access locality 0, so the kernel could simply use locality 1 to encrypt the key. Unfortunately, despite trying pretty hard, I've been unable to get localities to work. The motherboard chipset on my test machines simply doesn't forward any accesses to the TPM unless they're for locality 0. I needed another approach.
TPMs have a set of Platform Configuration Registers (PCRs), intended for keeping a record of system state. The OS isn't able to modify the PCRs directly. Instead, the OS provides a cryptographic hash of some material to the TPM. The TPM takes the existing PCR value, appends the new hash to that, and then stores the hash of the combination in the PCR - a process called "extension". This means that the new value of the PCR depends not only on the value of the new data, it depends on the previous value of the PCR - and, in turn, that previous value depended on its previous value, and so on. The only way to get to a specific PCR value is to either (a) break the hash algorithm, or (b) perform exactly the same sequence of writes. On system reset the PCRs go back to a known value, and the entire process starts again.
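To make the "extension" idea concrete, here's a minimal Python model of it (my own sketch, not the TPM wire protocol or the kernel code): each extend folds a digest of the new material into the running value, so reproducing a given PCR value requires replaying exactly the same sequence of measurements.

```python
import hashlib

def extend(pcr_value: bytes, measurement: bytes) -> bytes:
    """Toy model of PCR extension: new PCR = H(old PCR || H(measurement)).
    A real TPM_PCR_Extend is handed an already-computed digest; hashing the
    measurement here just keeps the example self-contained."""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr_value + digest).digest()

# PCRs start from a known value (all zeroes here) at reset.
pcr = bytes(32)
for event in (b"first measurement", b"second measurement"):
    pcr = extend(pcr, event)

# The only way to land on this exact value again is to replay the same writes.
print(pcr.hex())
```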
Some PCRs are different. PCR 23, for example, can be reset back to its original value without resetting the system. We can make use of that. The first thing we need to do is to prevent userland from being able to reset or extend PCR 23 itself. All TPM accesses go through the kernel, so this is a simple matter of parsing the write before it's sent to the TPM and returning an error if it's a sensitive command that would touch PCR 23. We now know that any change in PCR 23's state will be restricted to the kernel.
When we encrypt material with the TPM, we can ask it to record the PCR state. This is given back to us as metadata accompanying the encrypted secret. Along with the metadata is an additional signature created by the TPM, which can be used to prove that the metadata is both legitimate and associated with this specific encrypted data. In our case, that means we know what the value of PCR 23 was when we encrypted the key. That means that if we simply extend PCR 23 with a known value in-kernel before encrypting our key, we can look at the value of PCR 23 in the metadata. If it matches, the key was encrypted by the kernel - userland can create its own key, but it has no way to extend PCR 23 to the appropriate value first. We now know that the key was generated by the kernel.
But what if the attacker is able to gain access to the encrypted key? Let's say a kernel bug is hit that prevents hibernation from resuming, and you boot back up without wiping the hibernation image. Root can then read the key from the partition, ask the TPM to decrypt it, and then use that to create a new hibernation image. We probably want to prevent that as well. Fortunately, when you ask the TPM to encrypt something, you can ask that the TPM only decrypt it if the PCRs have specific values. "Sealing" material to the TPM in this way allows you to block decryption if the system isn't in the desired state. So, we define a policy that says that PCR 23 must have the same value at resume as it did on hibernation. On resume, the kernel resets PCR 23, extends it to the same value it did during hibernation, and then attempts to decrypt the key. Afterwards, it resets PCR 23 back to the initial value. Even if an attacker gains access to the encrypted copy of the key, the TPM will refuse to decrypt it.
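Putting those pieces together, here's a toy, self-contained Python model of the scheme described above. It's purely my own illustration: a real TPM returns ciphertext and enforces the policy in hardware, and the actual patchset does all of this in-kernel against a TPM 2 device. The point it demonstrates is that sealing records the PCR state that must be present again before the secret is released, and because userland can't reproduce the kernel's PCR 23 extension, it can't recover the hibernation key.

```python
import hashlib
import os

class ToyTPM:
    """Software toy of the PCR-policy idea (no real crypto binding):
    sealing remembers the PCR values that must be present before the
    secret is handed back."""

    def __init__(self):
        # PCR 7 stands in for the Secure Boot measurements; PCR 23 is the
        # resettable register the kernel reserves for itself.
        self.pcrs = {7: bytes(32), 23: bytes(32)}

    def pcr_extend(self, index, measurement):
        digest = hashlib.sha256(measurement).digest()
        self.pcrs[index] = hashlib.sha256(self.pcrs[index] + digest).digest()

    def pcr_reset(self, index):
        # Only some PCRs (like 23) can be reset without rebooting.
        self.pcrs[index] = bytes(32)

    def seal(self, secret, pcr_indices):
        policy = {i: self.pcrs[i] for i in pcr_indices}
        return {"secret": secret, "policy": policy}   # a real TPM returns ciphertext

    def unseal(self, blob):
        for i, expected in blob["policy"].items():
            if self.pcrs[i] != expected:
                raise PermissionError("PCR policy not satisfied")
        return blob["secret"]

# Kernel-side hibernation: extend PCR 23 with a value userland can never set
# (the kernel filters userland writes to PCR 23), then seal the image key.
tpm = ToyTPM()
image_key = os.urandom(32)
tpm.pcr_extend(23, b"kernel hibernation marker")
sealed = tpm.seal(image_key, pcr_indices=[7, 23])
tpm.pcr_reset(23)

# Attacker running as root: PCR 23 is back at its reset value, so unsealing fails.
try:
    tpm.unseal(sealed)
except PermissionError as err:
    print("attacker blocked:", err)

# Genuine resume path: the kernel recreates the same PCR 23 state first.
tpm.pcr_extend(23, b"kernel hibernation marker")
assert tpm.unseal(sealed) == image_key
```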
And that's what this patchset implements. There's one fairly significant flaw at the moment, which is simply that an attacker can just reboot into an older kernel that doesn't implement the PCR 23 blocking and set up state by hand. Fortunately, this can be avoided using another aspect of the boot process. When you boot something via UEFI Secure Boot, the signing key used to verify the booted code is measured into PCR 7 by the system firmware. In the Linux world, the Shim bootloader then measures any additional keys that are used. By either using a new key to tag kernels that have support for the PCR 23 restrictions, or by embedding some additional metadata in the kernel that indicates the presence of this feature and measuring that, we can have a PCR 7 value that verifies that the PCR 23 restrictions are present. We then seal the key to PCR 7 as well as PCR 23, and if an attacker boots into a kernel that doesn't have this feature the PCR 7 value will be different and the TPM will refuse to decrypt the secret.
While there's a whole bunch of complexity here, the process should be entirely transparent to the user. The current implementation requires a TPM 2, and I'm not certain whether TPM 1.2 provides all the features necessary to do this properly - if so, extending it shouldn't be hard, but also all systems shipped in the past few years should have a TPM 2, so that's going to depend on whether there's sufficient interest to justify the work. And we're also at the early days of review, so there's always the risk that I've missed something obvious and there are terrible holes in this. And, well, given that it took almost 8 years to get the Lockdown patchset into mainline, let's not assume that I'm good at landing security code.
[1] Other architectures use different terminology here, such as "supervisor" and "user" mode, but it's broadly equivalent
[2] In theory rings 1 and 2 would allow you to run drivers with privileges somewhere between full kernel access and userland applications, but in reality we just don't talk about them in polite company
[3] This is how graphics worked in Linux before kernel modesetting turned up. XFree86 would just map your GPU's registers into userland and poke them directly. This was not a huge win for stability
[4] IOMMUs can help you here, by restricting the memory PCI devices can DMA to or from. The kernel then gets to allocate ranges for device buffers and configure the IOMMU such that the device can't DMA to anything else. Except that region of memory may still contain sensitive material such as function pointers, and attacks like this can still cause you problems as a result.
[5] This describes why I'm using "allowed" rather than "required" here
[6] Saving the system state to disk and powering down the platform entirely - significantly slower than suspending the system while keeping state in RAM, but also resilient against the system losing power.
[7] With some handwaving around "coprocessor". TPMs can't be part of the OS or the system firmware, but they don't technically need to be an independent component. Intel have a TPM implementation that runs on the Management Engine, a separate processor built into the motherboard chipset. AMD have one that runs on the Platform Security Processor, a small ARM core built into their CPU. Various ARM implementations run a TPM in Trustzone, a special CPU mode that (in theory) is able to access resources that are entirely blocked off from anything running in the OS, kernel or otherwise.
I have continued with Parks On The Air this winter. It’s been a bit of a challenge due to weather, but pretty much every week there has been one day, sandwiched between snow or ice storms, that I have been able to get outside. At this point I have activated 20 of the 52 parks in Rhode Island.
[Attoparsec] has been building intriguing musical projects on his YouTube channel for a while and his latest is no exception. Dubbed simply as “Node Module”, it is a rack-mounted hardware-based Markov chain beat sequencer. Traditionally Markov chains are software state machines that transition between states with given probabilities, often learned from a training corpus. That same principle has been applied to hardware beat sequencing.
Each Node Module has a trigger input, four outputs each with a potentiometer, and a trigger out. [Attoparsec] has a wonderful explanation of all the different parts and theories that make up the module at the start of his video, but the basic operation is that a trigger input comes in and the potentiometers are read to determine the probabilities of each output. One is randomly selected and fired. As you can imagine, there are loops and even dead-end nodes and for some musical pieces there is a certain number of beats expected, so a clever reset signal can be sent to pull the chain back to the initial starting state at a regular interval. The results are interesting to listen to and even better to imagine all the possibilities.
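To get a feel for what a chain of these modules does, here's a small software sketch. It's a hypothetical Python stand-in for the hardware: the NodeModule class, the pot weights, and the drum names are all my own inventions, but the behaviour matches the idea described above, where each incoming trigger fires one output chosen with pot-set probabilities, and loops are allowed.

```python
import random

class NodeModule:
    """Software stand-in for one 'Node Module': a trigger comes in and one of
    its outputs fires, chosen with probabilities set by the pots."""

    def __init__(self, name, weights):
        self.name = name
        self.weights = weights              # pot positions, e.g. [3, 1]
        self.targets = [None] * len(weights)

    def connect(self, output_index, target):
        self.targets[output_index] = target  # target is another NodeModule

    def trigger(self):
        # Weighted random choice of which output fires on this trigger.
        choice = random.choices(range(len(self.weights)), weights=self.weights)[0]
        return self.name, self.targets[choice]

# A tiny two-node chain: kick usually leads to snare, and both can loop back.
kick = NodeModule("kick", [3, 1])
snare = NodeModule("snare", [2, 2])
kick.connect(0, snare); kick.connect(1, kick)
snare.connect(0, kick); snare.connect(1, snare)

node, pattern = kick, []
for beat in range(8):
    fired, node = node.trigger()
    pattern.append(fired)
    # A hardware reset pulse would simply force `node = kick` on a bar boundary.
print(pattern)   # e.g. ['kick', 'snare', 'kick', 'kick', ...]
```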
The NASA Perseverance team included a hidden message on the space frame that landed on Mars this week. An image of the Earth, Sun, and Mars in proper orbital relationship and inscribed on that glyph a message to the heavens written in Morse Code: “Explore as One”.
Interesting that Morse holds such noble standing in the human zeitgeist. And to think amateur radio is practically the sole remaining custodian of an ancient language that’s literally gone interplanetary. CW men and women are the druids of this present age. Keepers of ancient wisdom and eternal knowledge.
I don’t know if the dreamers at NASA had this in mind when they designed that plaque. Perhaps it came to them in a dream or from a notion buried deep within that they cannot explain. After all, had they wanted to appeal to popular culture they could have included a golden disk with a video recording of the popular television program Dancing With the Stars.
Of course that would provide an alien race with good reason to destroy us so…
I have a couple of new PCBs being made by OSHPark. They are awesome as usual and have kept up during this pandemic, which is a tough thing to do. The first PCB is my IOBuddy shrunk down to a smaller size. It’s called the AtomIO.
As you can see from the render it has 3 outputs (LED) and 2 inputs (BTN). It’s tiny as heck and will help keep breadboards less full. The LEDs are tied to GND through a resistor, so all you have to do is supply 2V to 6V to turn them on. The buttons are pulled low via a 10k to 22k ohm resistor, so when pressed they output a HIGH signal: whatever is on the power rail they are connected to.
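As a quick illustration of that wiring (active-high LEDs, externally pulled-down buttons), something along these lines should work on a MicroPython-capable MCU on the same breadboard. The pin numbers are placeholders, not anything defined by the AtomIO itself.

```python
# Minimal MicroPython sketch; pin numbers are hypothetical, adjust for your MCU.
# The AtomIO LEDs are active-high through their own resistors, and the buttons
# are pulled low on the board, so they read HIGH only while pressed.
import time
from machine import Pin

led = Pin(2, Pin.OUT)      # drive logic high to light the LED
btn = Pin(4, Pin.IN)       # external 10k-22k pull-down lives on the AtomIO

while True:
    led.value(btn.value())  # mirror the button state onto the LED
    time.sleep_ms(10)
```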
The next board I have on the way is more of an internal-use PCB, but I may sell it if there’s enough interest. It’s a simple breakout for those memory LCDs from Sharp, the LS027B7DH01 to be exact. I call it the AtomSharp.
Water plays a critical role in sustainable agriculture production and food security worldwide. However, with the world population expected to grow to over 10 billion people by 2050, coupled with climate change, competition for water resources is rapidly increasing, having a particular impact on agriculture. Manna Irrigation, an Ag-Tech company that provides growers around the world with irrigation recommendations, crop monitoring maps, and irrigation planning tools, is using satellite data from Planet to address this challenge.
According to the World Bank, agricultural production will need to expand by approximately 70 percent by 2050 in order to meet humanity’s basic food and fiber needs. Yet, to date, agriculture occupies nearly 40 percent of the Earth’s surface and accounts for approximately 70 percent of all freshwater withdrawals globally. Given this information, future demand on water will require 25 to 40 percent of water to be reallocated, which is expected to come from agriculture due to its high share of total water use.
While these numbers may seem daunting, there is space for improvement. The current efficiency of irrigation is low, with less than 65 percent of the water actually being used by the crops, offering room for more sustainable systems to make a difference.
To combat this growing issue, reallocations of water away from agriculture will need to be accompanied by solutions to improve water use efficiency for farmers and growers in order to meet the needs of a growing population. The magnitude and complexity of this problem accompanied by the arid nature of the Middle East are what led to the foundation of Manna Irrigation.
“Our solution is a sensor-free, software-only approach that leverages high-resolution, frequently refreshed satellite data and hyper-local weather information to deliver highly affordable and accessible solutions for site-specific irrigation recommendations,” said Hovav Lapidot, Director of Marketing and Sales at Manna.
The Manna Irrigation Intelligence solution delivers actionable information to farmers and growers around the globe so they can make better-informed, more confident irrigation decisions to help combat the global water and food crisis. To do so more effectively and efficiently, Manna will now incorporate our global PlanetScope monitoring and archive into their solution, providing better prediction services for irrigation of agricultural fields.
“We chose Planet because of their capability of getting updated imagery on a near-daily basis using the Area Under Management approach providing archive and monitoring data,” said Eyal Mor, CEO at Manna. “We believe every grower in the world should have direct access to personalized and affordable irrigation intelligence which allows optimized and efficient irrigation decisions. We are proud to take part in the efforts of helping growers to improve field performance and reduce water usage, supporting sustainable farming,” said Mor.
With the socio-economic pressures and demanding restrictions around water allocated to agriculture, we are thrilled to see how Manna leverages Planet data to increase agricultural water-use efficiency around the globe.
The leader of Mexico’s Green Party has been removed from office following allegations that he received money from a Romanian ATM skimmer gang that stole hundreds of millions of dollars from tourists visiting Mexico’s top tourist destinations over the past five years. The scandal is the latest fallout stemming from a three-part investigation into the organized crime group by KrebsOnSecurity in 2015.
One of the Bluetooth-enabled PIN pads pulled from a compromised ATM in Mexico. The two components on the left are legitimate parts of the machine. The fake PIN pad, made to be slipped under the legit PIN pad on the machine, is the orange component, top right. The Bluetooth and data storage chips are in the middle.
Jose de la Peña Ruiz de Chávez, who leads the Green Ecologist Party of Mexico (PVEM), was dismissed this month after it was revealed that his were among 79 bank accounts seized as part of an ongoing law enforcement investigation into a Romanian organized crime group that owned and operated an ATM network throughout the country.
In 2015, KrebsOnSecurity traveled to Mexico’s Yucatan Peninsula to follow up on reports about a massive spike in ATM skimming activity that appeared centered around some of the nation’s primary tourist areas.
That three-part series concluded that Intacash, an ATM provider owned and operated by a group of Romanian citizens, had been paying technicians working for other ATM companies to install sophisticated Bluetooth-based skimming devices inside cash machines throughout the Quintana Roo region of Mexico, which includes Cancun, Cozumel, Playa del Carmen and Tulum.
Unlike most skimmers — which can be detected by looking for out-of-place components attached to the exterior of a compromised cash machine — these skimmers were hooked to the internal electronics of ATMs operated by Intacash’s competitors by authorized personnel who’d reportedly been bribed or coerced by the gang.
But because the skimmers were Bluetooth-based — allowing thieves periodically to collect stolen data just by strolling up to a compromised machine with a mobile device — KrebsOnSecurity was able to detect which ATMs had been hacked using nothing more than a cheap smart phone.
In a series of posts on Twitter, De La Peña denied any association with the Romanian organized crime gang, and said he was cooperating with authorities.
But it is likely the scandal will ensnare a number of other important figures in Mexico. According to a report in the Mexican publication Expansion Politica, the official list of bank accounts frozen by the Mexican Ministry of Finance include those tied to the notary Naín Díaz Medina; the owner of the Quequi newspaper, José Alberto Gómez Álvarez; the former Secretary of Public Security of Cancun, José Luis Jonathan Yong; his father José Luis Yong Cruz; and former governors of Quintana Roo.
In May 2020, the Mexican daily Reforma reported that the skimming gang enjoyed legal protection from a top anti-corruption official in the Mexican attorney general’s office.
The following month, my reporting from 2015 emerged as the primary focus of a documentary published by the Organized Crime and Corruption Reporting Project (OCCRP) into Intacash and its erstwhile leader — 44-year-old Florian “The Shark” Tudor. The OCCRP’s series painted a vivid picture of a highly insular, often violent transnational organized crime ring (referred to as the “Riviera Maya Gang“) that controlled at least 10 percent of the $2 billion annual global market for skimmed cards.
It also details how the group laundered their ill-gotten gains, and is alleged to have built a human smuggling ring that helped members of the crime gang cross into the U.S. and ply their skimming trade against ATMs in the United States. Finally, the series highlights how the Riviera Maya gang operated with impunity for several years by exploiting relationships with powerful anti-corruption officials in Mexico.
According to prosecution documents, Marcu and The Shark spotted my reporting shortly after it was published in 2015, and discussed what to do next on a messaging app:
The Shark: Krebsonsecurity.com See this. See the video and everything. There are two episodes. They made a telenovela.
Marcu: I see. It’s bad.
The Shark: They destroyed us. That’s it. Fuck his mother. Close everything.
The intercepted communications indicate The Shark also wanted revenge on whoever was responsible for leaking information about their operations.
The Shark: Tell them that I am going to kill them.
Marcu: Okay, I can kill them. Any time, any hour.
The Shark: They are checking all the machines. Even at banks. They found over 20.
Marcu: Whaaaat?!? They found? Already??
Since the OCCRP published its investigation, KrebsOnSecurity has received multiple death threats. One was sent from an email address tied to a Romanian programmer and malware author who is active on several cybercrime forums. It read:
“Don’t worry.. you will be killed you and your wife.. all is matter of time amigo :)”
The Bussard ramjet is an idea whose attractions do not fade, especially given stunning science fiction treatments like Poul Anderson’s novel Tau Zero. Not long ago I heard from Peter Schattschneider, a physicist and writer who has been exploring the Bussard concept in a soon to be published novel. In the article below, Dr. Schattschneider explains the complications involved in designing a realistic ramjet for his novel, with an interesting nod to a follow-up piece I’ll publish as soon as it is available on the work of John Ford Fishback, whose ideas on magnetic field configurations we have discussed in these pages before.
The author is professor emeritus in solid state physics at Technische Universität Wien, but he has also worked for a private engineering company as well as the French CNRS, and has been director of the Vienna University Service Center for Electron Microscopy. With more than 300 research articles in peer-reviewed journals and several monographs on electron-matter interaction, Dr. Schattschneider’s current research focuses on electron vortex beams, which are exotic probes for solid state spectroscopy. He tells me that his interest in physics emerged from an early fascination with science fiction, leading to the publication of several SF novels in German and many short stories in SF anthologies, some of them translated into English and French. As we see below, so-called ‘hard’ science fiction, scrupulously faithful to physics, demands attention to detail while pushing into fruitful speculation about future discovery.
by Peter Schattschneider
When the news about the BLC1 signal from Proxima Centauri came in, I was just finishing a scientific novel about an expedition to our neighbour star. Good news, I thought – the hype would spur interest in space travel. Disappointment set in immediately: Should the signal turn out to be real, this kind of science fiction would land in the dustbin.
Image: Peter Schattschneider. Credit & copyright: Klaus Ranger Fotografie.
The space ship in the novel is a Bussard ramjet. Collecting interstellar hydrogen with some kind of electrostatic or magnetic funnel that would operate like a giant vacuum cleaner is a great idea promoted by Robert W. Bussard in 1960 [1]. Interstellar protons (and some other stuff) enter the funnel at the ship‘s speed without further ado. Fusion to helium will not pose a problem in a century or so (ITER is almost working), conversion of the energy gain into thrust would work as in existing thrusters, and there you go!
Some order-of-magnitude calculations show that it isn‘t as simple as that. But more on that later. Let us first look at the more mundane problems occurring on a journey to our neighbour. The values given below were taken from my upcoming The EXODUS Incident [2], calculated for a ship mass of 1500 tons, an efficiency of 85% of the fusion energy going into thrust, and an interstellar medium of density 1 hydrogen atom/cm3, completely ionized by means of electron strippers.
On the Way
Like existing ramjets the Bussard ramjet is an assisted take-off engine. In order to harvest fuel it needs a take-off speed, here 42 km/s, the escape velocity from the solar system. The faster a Bussard ramjet goes, the higher is the thrust, which means that one cannot assume a constant acceleration but must solve the dynamic rocket equation. The following table shows acceleration, speed and duration of the journey for different scoop radii.
At the midway point, the thrust is inverted to slow the ship down for arrival. To achieve an acceleration of the order of 1 g (as for instance in Poul Anderson’s celebrated novel Tau Zero [3]), the fusion drive must produce a thrust of 18 million Newton, about half the thrust of the Saturn-V. That doesn’t seem tremendous, but a short calculation reveals that one needs a scoop radius of about 3500 km to harvest enough fuel because the density of the interstellar medium is so low. Realizing magnetic or electric fields of this dimension is hardly imaginable, even for an advanced technology.
A perhaps more realistic funnel entrance of 200 km results in a time of flight of almost 500 years. Such a scenario would call for a generation starship. I thought that an acceleration of 0.1 g was perhaps a good compromise, avoiding both technical and social fantasizing. It stipulates a scoop radius of 1000 km, still enormous, but let us play the “what-if“ game: The journey would last 17.3 years, quite reasonable with future cryo-hibernation. The acceleration increases slowly, reaching a maximum of 0.1 g after 4 years. Interestingly, after that the acceleration decreases, although the speed and therefore the proton influx increases. This is because the relativistic mass of the ship increases with speed.
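For readers who want to play with these numbers, below is a deliberately crude Python integration of the acceleration phase. It is my own toy model, not the calculation behind the book's figures: it ignores radiation losses and drag and uses a simplified exhaust-energy balance, so it overestimates the acceleration somewhat. It does, however, reproduce the qualitative behaviour described above: thrust grows with speed as the funnel sweeps up more hydrogen, and then the ship's relativistic inertia flattens the acceleration.

```python
import math

c = 3.0e8                    # speed of light, m/s
m_p = 1.67e-27               # proton mass, kg
rho = 1.0e6                  # 1 hydrogen atom per cm^3, expressed per m^3
ship_mass = 1.5e6            # 1500 tons, in kg
r_scoop = 1.0e6              # 1000 km scoop radius, m
eta, eps = 0.85, 0.007       # thrust efficiency, H -> He mass-defect fraction

v = 42_000.0                 # assisted take-off at solar escape velocity, m/s
dt = 24 * 3600.0             # one-day timestep
day = 0
while v < 0.6 * c:           # acceleration phase only (deceleration mirrors it)
    mdot = rho * m_p * math.pi * r_scoop**2 * v          # scooped mass per second
    v_exh = math.sqrt(v**2 + 2 * eta * eps * c**2)       # crude exhaust-speed estimate
    thrust = mdot * (v_exh - v)
    gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)
    a = thrust / (gamma**3 * ship_mass)                  # longitudinal relativistic inertia
    v += a * dt
    day += 1
    if day % 365 == 0:
        print(f"year {day // 365}: v = {v / c:.3f} c, a = {a / 9.81:.3f} g")
print(f"0.6 c reached after roughly {day / 365:.1f} years in this toy model")
```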
Fusion Drive
It has been pointed out by several authors that the “standard” operation of a fusion reactor, burning deuterium (2D) into helium (3He), cannot work because the amount of 2D in interstellar space is too low. The proton-proton burning that would render p+p → 2D for the 2D → 3He reaction is 24 orders of magnitude (!) slower.
The interstellar ramjet seemed impossible until in 1975 Daniel Whitmire [4] proposed the Bethe-Weizsäcker or CNO cycle that operates in hot stars. Here, carbon, nitrogen and oxygen serve as catalysts. The reaction is fast enough for thrust production. The drawback is that it needs a very high core temperature of the plasma of several hundred million Kelvin. Reaction kinetics, cross sections and other gadgets stipulate a plasma volume of at least 6000 m3 which makes a spherical chamber of 11 m radius (for design aficionados a torus or – who knows? – a linear chamber of the same order of magnitude).
At this point, it should be noted that the results shown above were obtained without taking account of many limiting conditions (radiation losses, efficiency of the fusion process, drag, etc.) The numerical values are at best accurate to the first decimal. They should be understood as optimistic estimates, and not as input for the engineer.
Waste Heat
Radioactive high-energy by-products of the fusion process are blocked by a massive wall between the engine and the habitable section, made up of heavy elements. This is not the biggest problem because we already handle it in the experimental ITER design. The main problem is waste heat. The reactor produces 0.3 million GW. Assuming an efficiency of 85% going into thrust, the waste energy is still 47,000 GW in the form of neutrinos, high energy particles and thermal radiation. The habitable section should be at a considerable distance from the engine in order not to roast the crew. An optimistic estimate renders a distance of about 800 m, with several stacks of cooling fins in between. The surface temperature of the sternside hull would be at a comfortable 20-60 degrees Celsius. Without the shields, the hull would receive waste heat at a rate of 6 GW/m2, 5 million times more than the solar constant on earth.
Radiation shielding
An important aspect of the Bussard ramjet design is shielding from cosmic rays. At the maximum speed of 60% of light speed, interstellar hydrogen hits the bow with a kinetic energy of 200 MeV, dangerous for the crew. A.C. Clarke has proposed a protecting ice sheet at the bow of a starship in his novel The Songs of Distant Earth [5]. A similar solution is also known from modern proton cancer therapy. The penetration depth of such protons in tissue (or water, for that matter) is 26 cm. So it suffices to put a 26 cm thick water tank at the bow.
Artificial gravity
It is known that long periods of zero gravity are disastrous to the human body. It is therefore advised to have the ship rotate in order to create artificial gravity. In such an environment there are unusual phenomena, e.g. a different barometric height equation, or atmospheric turbulence caused by the Coriolis forces. Throwing an object in a rotating space ship has surprising consequences, exemplified in Fig. 1. Funny speculations about exquisite sporting activities are allowed.
Fig. 1: Freely falling objects in a rotating cylinder, thrown in different directions with the same starting speed. In this example, drawn from my novel, the cylinder has a radius of 45 m, rotating such that the artificial gravity on the inner hull is 0.3 g. The object is thrown with 40 km/h in different directions. Seen by an observer at rest, the cylinder rotates counterclockwise.
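Out of curiosity, the Fig. 1 scenario can be reconstructed in a few lines of Python. This is my own back-of-the-envelope version, not the author's code, and it assumes a release height of 1.8 m above the floor: in the inertial frame the ball flies in a straight line, so the whole effect comes from transforming back into the rotating frame. A spinward throw lands surprisingly short, while an antispinward throw at 40 km/h almost cancels the rim speed, so the ball hangs for minutes and comes down far behind the thrower.

```python
import math

R = 45.0                              # cylinder radius, m (as in Fig. 1)
omega = math.sqrt(0.3 * 9.81 / R)     # spin rate giving 0.3 g at the hull
v_throw = 40.0 / 3.6                  # 40 km/h throw, in m/s
h_release = 1.8                       # assumed hand height above the floor, m

def throw(direction):
    """Horizontal throw, spinward (+1) or antispinward (-1). Returns flight
    time and landing offset along the hull, measured in the spin direction
    from the point under the thrower (offsets beyond the ~283 m circumference
    mean the ball wraps around the cylinder before landing)."""
    r0 = R - h_release
    v_t = omega * r0 + direction * v_throw      # inertial tangential speed at release
    t = math.sqrt(R**2 - r0**2) / abs(v_t)      # straight line reaches the hull
    angle_ball = math.atan2(v_t * t, r0)        # where the ball hits, inertial frame
    angle_floor = omega * t                     # how far the floor has turned meanwhile
    return t, R * (angle_ball - angle_floor)

for d, label in ((+1, "spinward"), (-1, "antispinward")):
    t, s = throw(d)
    print(f"{label}: {t:6.1f} s of flight, lands {s:+9.1f} m along the hull")
```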
Scooping
The central question for scooping hydrogen is this: Which electric or magnetic field configuration allows us to collect a sufficient amount of interstellar hydrogen? There are solutions for manipulating charged particles: colliders use magnetic quadrupoles to keep the beam on track. The symmetry of the problem stipulates a cylindrical field configuration, such as ring coils or round electrostatic or magnetic lenses which are routinely used in electron microscopy. Such lenses are annular ferromagnetic yokes with a round bore hole of the order of a millimeter. They focus an incoming electron beam from a diameter of some microns to a nanometer spot.
Scaling the numbers up, one could dream of collecting incoming protons over tens of kilometers into a spot of less than 10 meters, good enough as input to a fusion chamber. This task is a formidable technological challenge. Anyway, it is prohibitive by the mere question of mass. Apart from that, one is still far away from the needed scoop radius of 1000 km.
The next best idea relates to the earth’s magnetic dipole field. It is known that charged particles follow the field lines over long distances, for instance causing aurora phenomena close to earth’s magnetic poles. So it seems that a simple ring coil producing a magnetic dipole is a promising device. Let’s have a closer look at the physics. In a magnetic field, charged particles obey the Lorentz force. Calculating the paths of the interstellar protons is then a simple matter of plugging the field into the force equation. The result for a dipole field is shown in Fig. 2.
Fig. 2: Some trajectories of protons starting at z=2R in the magnetic field of a ring coil of radius R that sits at the origin. Magnetic field lines (light blue) converge towards the loop hole. Only a small part of the protons would pass through the ring (red lines), spiralling down according to cyclotron gyration. The rest is deflected (black lines).
An important fact is seen here: the scoop radius is smaller than the coil radius. It turns out that it diminishes further when the starting point of the protons is set at higher z values. This starting point is defined where the coil field is as low as the galactic magnetic field (~1 nT). Taking a maximum field of a few Tesla at the origin and the 1/(z/R)³ decay of the dipole field, where R is the coil radius (10 m in the example), the charged particles begin to sense the scooping field at a distance of 10 km. The scoop radius at this distance is a ridiculously small 2 cm. All particles outside this radius are deflected, producing drag.
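The "10 km" figure is easy to check with the on-axis decay quoted above, B(z) ≈ B0 (R/z)³. Here is that arithmetic in a few lines of Python; the B0 values are my own readings of "a few Tesla".

```python
R = 10.0         # coil radius, m
B_gal = 1e-9     # galactic background field, tesla

# On-axis dipole decay: B(z) ~ B0 * (R/z)**3, so solve B(z) = B_gal for z.
for B0 in (1.0, 2.0, 5.0):       # "a few Tesla" at the coil centre
    z_sense = R * (B0 / B_gal) ** (1.0 / 3.0)
    print(f"B0 = {B0:.0f} T: protons begin to sense the field about "
          f"{z_sense / 1000:.0f} km out")
```

For 1 to 5 Tesla this gives roughly 10 to 17 km, consistent with the ~10 km quoted in the text.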
That said, loop coils are hopelessly inefficient for hydrogen scooping, but they are ideal braking devices for future deep space probes, and interestingly they may also serve as protection shields against cosmic radiation. On Proxima b, strong flares of the star create particle showers, largely protons of 10 to 50 MeV energy. A loop coil protects the crew as shown in Fig. 3.
Fig.3: Blue: Magnetic field lines from a horizontal superconducting current loop of radius R=30 cm. Red lines are radial trajectories of stellar flare protons of 10 MeV energy approaching from top. The loop and the mechanical protection plate (a 3 cm thick water reservoir colored in blue) are at z=0. It absorbs the few central impinging particles. The fast cyclotron motion of the protons creates a plasma aureole above the protective plate, drawn as a blue-green ring right above the coil. The field at the coil center is 6 Tesla, and 20 milliTesla at ground level.
After all this paraphernalia the central question remains: Can a sufficient amount of hydrogen be harvested? From the above it seems that magnetic dipole fields, or even a superposition of several dipole fields, cannot do the job. Surprisingly, this is not quite true. For it turns out that an arcane article from 1969 by a certain John Ford Fishback [6] gives us hope, but this is another story and will be narrated at a later time.
References
1. Robert W. Bussard: Galactic Matter and Interstellar Flight. Astronautica Acta 6 (1960), 1-14.
2. P. Schattschneider: The EXODUS Incident – A Scientific Novel. Springer Nature, Science and Fiction Series. May 2021, DOI: 10.1007/978-3-030-70019-5.
3. Poul Anderson: Tau Zero (1970).
4. Daniel P. Whitmire: Relativistic Spaceflight and the Catalytic Nuclear Ramjet. Acta Astronautica 2 (1975), 497-509.
5. Arthur C. Clarke: The Songs of Distant Earth (1986).
6. John F. Fishback: Relativistic Interstellar Space Flight. Astronautica Acta 15 (1969), 25-35.
Readers with an Amazon Prime Video account may be interested in this free viewing from a bygone era of amateur radio.
WWII Amateur Radio Films is a collection of what appears to be at least three separate ham radio related documentaries stitched together into a single video loop.
A collection of historical films documenting the importance of amateur ham radio communication during WWII. Also includes a look at the Military Affiliate Radio System during the 1970s.
The whole thing was interesting, but it might be especially so for anyone who appreciates vintage gear. The first segment details how a popular amateur radio transmitter was converted for military use by The Hallicrafters Company during World War II.
Sorry if you’ve already seen it, but I thought it a swell way for any amateur radio enthusiast with an appreciation for radio history to spend 78 minutes on a Friday evening. Enjoy!
Lava flows on Mount Etna, ski championships in Italy, scenes from the Australian Open, ice-skating in the Netherlands, an image from New York Fashion Week, freezing conditions in Texas, a monument to cosmonaut Yuri Gagarin, snowy scenes in Greece, and much more
Stefan Luitz of Germany is transported back up to the start gate on a tracked ATV during the FIS World Ski Championships Team Parallel Event on February 17, 2021, in Cortina d'Ampezzo, Italy.
(
Alexander Hassenstein / Getty)
I never drew a second pose for Rocket Hat. He just stood around as if he was wearing handcuffs, whether he was captured or not. In a way, it’s fitting that a character who never says anything was drawn by a cartoonist who never drew anything.
As always, thanks for using my Amazon Affiliate links (US, UK, Canada).
The nRF9160 Feather by Jared Wolff (aka Circuit Dojo LLC) is an electronics development board. It features the nRF9160 by Nordic Semiconductor. This part is capable of both CAT M1 LTE and NB-IoT for communication with the outside world. It’s compatible with the Zephyr RTOS, which is fully baked into Nordic’s nRF Connect SDK. Other toolchains and languages are coming soon to a Github repository near you.