# Planet Aardvark

## January 20, 2021

• How to Explain an Unpopular Opinion

I am excited for the new version of DUNE. That said, the film is doomed. DOOMED, I SAY!

It already had unattainable expectations back when they planned to release it in November of 2020. Adding a year of further anticipation is not going to help. It certainly didn't do any favors for Wonder Woman 1984.

Here's a question that I haven't heard anyone else ask. Why 1984? There are nine other years that fall within the '80s and aren't already the title of a well-known story, a story which is in no way related to, and is very different in tone from, the story of Wonder Woman 1984. It would be like making a third Babe movie, where he returns to the farm after his trip to the city, and calling it Babe 3: Animal Farm.

As always, thanks for using my Amazon Affiliate links (US, UK, Canada).

## January 19, 2021

• Planet Releases ArcGIS Add-In & QGIS Plugin V2.0

At Planet, we're committed to reducing the friction of using satellite imagery so our users can gain the greatest value from our data. We do this by partnering with leading GIS providers and delivering the tools and software needed to derive insights from our data. We're excited to announce the release of the new and improved Planet ArcGIS Add-In & Planet QGIS Plugin V2.0. The new integration products are designed to make it easier for GIS users to discover and apply imagery in their preferred mapping and analysis tools, enriching their applications and projects with more frequent satellite imagery.

The ArcGIS Add-In & QGIS Plugin V1.0 were released in December 2019 and enabled customers to search for, preview, and download Planet imagery & Basemaps directly within ArcGIS Pro and QGIS desktop. The update to 2.0 both strengthens these existing features and adds new capabilities to augment users' imagery workflows in their GIS.

More search options to find the imagery your application needs.

The Imagery Search Panel in ArcGIS & QGIS has new features and a new design. Users now have more options to define their areas of interest, more imagery filters on a better-designed filters page, the ability to view and customize the metadata displayed in their search results, and the ability to save searches within their GIS and access them across the Planet platform.

Experience the ability to view and customize metadata display in ArcGIS Add-in 2.0.

Find improved search options and imagery filters in ArcGIS Add-in 2.0.

More Basemaps visualization and download options to make the most of your Basemap subscriptions.

Unlike your typical GIS basemap, Planet Basemaps can be updated on a regular cadence and viewed with multispectral data, like near-infrared for Planet's Surface Reflectance (SR) Basemaps.

In Planet's ArcGIS Add-In & QGIS Plugin 2.0, users with a Basemaps subscription can visualize their basemaps as a time-series and, for SR Basemaps, view them in false color using one of six visualization indices, like color infrared. Users can also now bulk download Basemap quads for a single Basemap or across a time-series using the Planet Basemaps Panel.

Use QGIS to perform advanced geospatial analytics on top of Planet data.

The Planet Basemaps Panel in QGIS now allows bulk download of Basemap quads.

Pro Tip: The new Planet Inspector Panel enables you to identify the contributing imagery scene on any Planet Basemap, so that you can determine when any pixel on a Basemap was collected, along with other information about the source image.

Task for High-Resolution Imagery within your GIS workflows

Tasking customers will also be able to task for high-resolution SkySat imagery directly within their GIS, enabling analysts to take advantage of the premier mapping capabilities of GIS software and pinpoint the precise location they'd like to examine with high-resolution imagery.

Easily define your AOI for high-res tasking directly within your GIS

There are even more improvements and features waiting to be utilized in this release. To get started integrating Planet into your GIS workflows today, learn more at Planet's Developer Center or download the integrations directly from the QGIS Plugins Repository and the ArcGIS Marketplace.

• How to recover from a damaged fstab

If you edit /etc/fstab, and then later change something like the proc filesystem entry used by OpenJDK, you might not boot normally. Antonio Olivares has a solution for you.

• Meet the PyCorder

Joey Castillo is well-known for the awesome OpenBook e-reader project and has recently announced a new open source hardware project: the PyCorder!

It uses the microcontroller to sense a capacitive touch keyboard:

And has add-on sensors, like a moisture sensor to monitor soil:

And pulse oximetry:

• TOI-1259A: Implications of a White Dwarf Companion

TESS, our Transiting Exoplanet Survey Satellite, continues to roll up interesting planet candidates, with over 2450 TESS Objects of Interest (TOI) thus far identified. The one that catches my eye this morning showed up in the lightcurve of TOI-1259A, a K-dwarf some 385 light years away. The planet designated TOI-1259Ab is Jupiter-sized but some 56 percent less massive, with a 3.48 day orbit at 0.04 AU, and an equilibrium temperature of 963 K.

This system gets interesting, though, not so much for the planet but the other star, a white dwarf (TOI-1259B) in a wide orbit at 1648 AU from the K-dwarf. A team of astronomers led by David Martin (Ohio State University) finds an effective temperature of 6300 K, a radius of 0.013 solar radii and a mass of 0.56 solar masses, a set of characteristics that allow the team to estimate that the system is just over 4 billion years old.

Image: SDSS image of the planet host TOI-1259A and its bound white dwarf companion TOI-1259B. Credit: Martin et al., 2021.

Let's home in on the issue of white dwarfs, for they force a significant question: How does the evolution of a star affect the evolution of the planetary system around it? The system containing TOI-1259Ab may prove useful in helping us understand the processes at work.

But first, what about white dwarfs themselves? A star like the Sun will expand into a red giant perhaps five billion years from now, ultimately leaving a white dwarf remnant, with whatever planetary survivors still intact left transformed by the process.

So it's not surprising that about 50 percent of the white dwarfs studied show atmospheres polluted by heavy elements. That would be an indication of material from the surrounding system accreting onto the white dwarf. Still noteworthy, though, is the fact that you would expect such heavy elements to settle out in the presence of the white dwarf's high gravity. The process of accretion must, then, be relatively common, allowing the stellar atmosphere to be constantly replenished.

White dwarfs produce their own set of challenges when it comes to exoplanet discovery. Finding planets around them is relatively rare. From the paper on the TOI-1259Ab work, I learned that these stars show a lack of the kind of sharp spectral features that would allow precise characterization using radial velocity methods. The push and pull of orbiting worlds is less evident than it would be around other classes of star.

That seems to throw us back on transit methods, something both Kepler and TESS turned into an art, but here we're dealing with the problem that white dwarfs have a small radius. They're roughly the size of the Earth, which means that transit probabilities are reduced and so are transit durations. Moreover, according to Martin et al., white dwarfs are faint enough that their light curves are noisy, a problem for astrometric methods (think Gaia) as well.
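The geometry is easy to quantify: for a circular orbit, the probability that a randomly oriented planet transits is roughly $R_*/a$. A minimal comparison with round numbers (my own illustration, not figures from the paper):

```python
# Transit probability ~ R_star / a for a circular orbit, neglecting
# the planet's own radius. Compare a Sun-like host with a white dwarf
# of roughly Earth's radius at the same 0.04 AU separation.
R_SUN_KM = 6.957e5
R_WD_KM = 6.371e3      # Earth's radius, a typical white dwarf size
A_KM = 0.04 * 1.496e8  # 0.04 AU in km

print(f"Sun-like host: {R_SUN_KM / A_KM:.3f}")  # ~0.116, about 12%
print(f"White dwarf:   {R_WD_KM / A_KM:.4f}")   # ~0.0011, about 0.1%
```

The transit duration shrinks with the stellar radius as well, so the signals are not only rarer but briefer.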

So while we've found atmospheric pollutants at these stars, and have tagged transits of planetary debris, the first confirmed planet orbiting a white dwarf wasn't found until recently (for more on WD 1856+534, see On White Dwarf Planets as Biosignature Targets).

But back to TOI-1259Ab, which is not a white dwarf planet, but a tightly orbiting Jupiter-sized planet around a K-dwarf. Here the white dwarf is a distant but definitely bound second star. It turns out that even systems with planets and white dwarf companions are rare, as the paper notes:

Only a few bona fide planets have been discovered with degenerate outer companions (Table 2), the first being Gliese-86b (Queloz et al. 2000; Els et al. 2001; Lagrange et al. 2006). Mugrauer (2019) found 204 binary companions in a sample of roughly 1300 exoplanet hosts, of which eight of the companions were white dwarfs. Mugrauer & Michel (2020) found five white dwarf companions to TESS Objects of Interest, including TOI-1259, but without radial velocity data to confirm the TOIs as planets. Some of these planets were also in the El-Badry & Rix (2018) catalogue.

Stellar systems that include white dwarfs have much to teach us, and in the case of TOI-1259Ab, we have a world that has now been confirmed through radial velocity follow-up, and a white dwarf that influenced it. Driving this research forward will be the question of how systems with a degenerate outer companion object evolve, for there are implications here for planetary dynamics. This system should be an interesting target for the James Webb Space Telescope. Consider: The transit depth is 2.7 percent on the K-dwarf host star, which is 0.71 times the Sun's radius. Moreover, its location places the system near the TESS and JWST continuous viewing zones.
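As a sanity check on those numbers (my own first-order arithmetic, ignoring limb darkening): transit depth is approximately $(R_p/R_*)^2$, so a 2.7% transit of a 0.71 solar-radius star implies a planet somewhat larger than Jupiter, consistent with "Jupiter-sized."

```python
import math

# Invert depth = (R_planet / R_star)^2 to estimate the planet's size.
DEPTH = 0.027        # 2.7% transit depth
R_STAR = 0.71        # host radius in solar radii
R_SUN_KM = 6.957e5
R_JUP_KM = 7.1492e4  # Jupiter's equatorial radius

r_planet_km = math.sqrt(DEPTH) * R_STAR * R_SUN_KM
print(f"~{r_planet_km / R_JUP_KM:.2f} R_Jup")  # ~1.14 R_Jup
```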

The authors believe that the white dwarf in this system is far enough from the K-dwarf that it would not affect the formation of planets, but go on to point out that while it was on the main sequence, its progenitor star would have been both more massive and also closer, which would have made it a factor in orbital dynamics for TOI-1259Ab. The planet's tight orbit may thus be at least partially the result of migration forced by the now degenerate white dwarf companion.

On the matter of stellar age, it's worth noting that white dwarfs cool steadily as they age, which helps astronomers constrain the age of the star and the system around it using its temperature and luminosity. Let me quote the paper on this, because star age is so tricky to determine for other stellar types:

If the WD's mass is known, the initial mass of its progenitor star can be inferred through the initial-final mass relation (IFMR), and this initial mass constrains the pre-WD age of the WD progenitor. Therefore if we have a well-constrained distance to the WD then its total age, i.e. the sum of its main sequence lifetime and its cooling age, can be robustly measured from its spectral energy distribution (SED). Under the reasonable ansatz that the WD and K dwarf formed at the same time, we can then measure the total system age from the WD.

This is another useful insight offered by white dwarfs, whose study may help us explain unusual system architectures like this one, as well as informing us about outcomes as stars and their companions are transformed over time.

The paper is Martin et al., "TOI-1259Ab – a gas giant planet with 2.7% deep transits and a bound white dwarf companion," submitted to Monthly Notices of the Royal Astronomical Society (preprint).

• Photos: Preparation for Biden's Inauguration (22 photos)

As the final day of the Trump presidency passes, rehearsals and preparations are under way for the upcoming inaugural ceremony for President-elect Joe Biden and Vice President-elect Kamala Harris, set to take place on January 20. Amid unprecedented security concerns and an ongoing pandemic, the visual landscape of Washington is different from any previous inaugural ceremony. Roads have been closed, concrete barriers and security fencing have been placed, and more than 20,000 armed National Guard troops have been deployed to the nation's capital. In place of what would normally be a large crowd of onlookers, the National Mall is filled with thousands of national, state, and territory flags representing the American people who will be unable to attend.

• New Charges Derail COVID Release for Hacker Who Aided ISIS

A hacker serving a 20-year sentence for stealing personal data on 1,300 U.S. military and government employees and giving it to an Islamic State hacker group in 2015 has been charged once again with fraud and identity theft. The new charges have derailed plans to deport him under compassionate release because of the COVID-19 pandemic.

Ardit Ferizi, a 25-year-old citizen of Kosovo, was slated to be sent home earlier this month after a federal judge signed an order commuting his sentence to time served. The release was granted in part due to Ferizi's 2018 diagnosis of asthma, as well as a COVID outbreak at the facility where he was housed in 2020.

But while Ferizi was in quarantine awaiting deportation, the Justice Department unsealed new charges against him, saying he'd conspired from prison with associates on the outside to access stolen data and launder the bitcoin proceeds of his previous crimes.

In the years leading up to his arrest, Ferizi was the administrator of a cybercrime forum called Pentagon Crew. He also served as the leader of an ethnic Albanian group of hackers from Kosovo known as Kosova Hacker's Security (KHS), which focused on compromising government and private websites in Israel, Serbia, Greece, Ukraine and the United States.

The Pentagon Crew forum founded by Ferizi.

In December 2015, Ferizi was apprehended in Malaysia and extradited to the United States. In January 2016, Ferizi pleaded guilty to providing material support to a terrorist group and to unauthorized access. He admitted to hacking a U.S.-based e-commerce company, stealing personal and financial data on 1,300 government employees, and providing the data to an Islamic State hacking group.

Ferizi gave the purloined data to Junaid "Trick" Hussain, a 21-year-old hacker and recruiter for ISIS who published it in August 2015 as part of a directive that ISIS supporters kill the named U.S. military members and government employees. Later that month, Hussain was reportedly killed by a drone strike in Syria.

The government says Ferizi and his associates made money by hacking PayPal and other financial accounts, and through pornography sites he allegedly set up mainly to steal personal and financial data from visitors.

Junaid Hussain's Twitter profile photo.

Between 2015 and 2019, Ferizi was imprisoned at a facility in Illinois that housed several other notable convicts. For example, prosecutors allege that Ferizi was an associate of Mahmud "Red" Abouhalima, who was serving a 240-year sentence at the prison for his role in the 1993 World Trade Center bombing.

Another inmate incarcerated at the same facility was Shaun Bridges, a former U.S. Secret Service agent serving almost eight years for stealing $820,000 worth of bitcoin from online drug dealers while investigating the hidden underground website Silk Road. Prosecutors say Ferizi and Bridges discussed ways to hide their bitcoin.

The information about Ferizi's inmate friends came via a tip from another convict, who told the FBI that Ferizi was allegedly using his access to the prison's email system to share email and bitcoin account passwords with family members back home. The Justice Department said subpoenas served on Ferizi's email accounts and interviews with his associates show Ferizi's brother in Kosovo used the information to "liquidate the proceeds of Ferizi's previous criminal hacking activities."

[Side note: It may be little more than a coincidence, but my PayPal account was hacked in Dec. 2015 by criminals who social engineered PayPal employees over the phone into changing my password and bypassing multi-factor authentication. The hackers attempted to send my balance to an account tied to Hussain, but the transfer never went through.]

Ferizi is being tried in California, but has not yet had an initial appearance in court. He's charged with one count of aggravated identity theft and one count of wire fraud. If convicted of wire fraud, he faces a maximum penalty of 20 years in prison and a fine of $250,000. If convicted of aggravated identity theft, he faces a mandatory penalty of 2 years in prison in addition to the punishment imposed for a wire fraud conviction.

• The NASA Engineer Who's a Mathematician at Heart

Just before World War II, the American civil rights activist A. Philip Randolph persuaded President Roosevelt to end discrimination on the basis of race, color and national origin in defense-industry employment. Not long after, in 1941, Roosevelt issued Executive Order 8802, prompting several agencies, including the National Advisory Committee for Aeronautics (NACA, the precursor to NASA), to begin hiring Black workers.

That paved the way for Christine Darden, who earned a master's degree in mathematics at a historically Black university in 1967 and was hired into NASA's all-female pool of "human computers" at the Langley Research Center. However, she soon discovered that her role as a mathematician was limited to performing time-consuming calculations by hand. To do the creative mathematical work she craved, Darden needed to recast herself as an engineer.

Darden transferred to NASA's male-dominated engineering division and later earned an engineering doctorate. She went on to lead the Sonic Boom Group of NASA's High-Speed Research Program, though she never stopped thinking of herself as a mathematician. "Despite my doctorate, I probably have more of a mathematics background," she said. "I really enjoy the story of what these mathematical equations do in the physical world."

Her groundbreaking work laid the foundation for a new era of research on experimental planes (known as X-planes) that NASA launched in 2016. The goal has always been to accelerate the adoption of quieter, greener, safer, faster and more efficient planes, even supersonic ones, which travel faster than sound.

The fundamental problem she worked on, the sonic boom, begins when an airplane pushes air molecules out of the way as it flies. This creates an invisible, cone-shaped pressure field whose tip is on the aircraft's nose and whose sides surround the plane. The cone moves with the plane and emits a series of pressure waves that travel at the speed of sound. As the plane speeds up, these waves get closer together. Should the plane exceed the speed of sound, dubbed Mach 1, the waves coalesce into a potentially destructive shock wave called a sonic boom.
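The geometry behind the coalescence has a compact standard expression (textbook supersonic aerodynamics, not specific to Darden's papers): the merged wavefronts form a cone whose half-angle $\mu$ depends only on the Mach number $M$, via $\sin \mu = 1/M$. At Mach 2 the shock cone trails the aircraft at about 30 degrees, dragging the boom along the ground track beneath the flight path.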

âIt sounds like a sharp thunderclap,â said Darden, who published more than 50 papers on high-lift wing design in supersonic flow, flap design and sonic boom prediction and minimization.

Darden retired from NASA in 2007 after a 40-year career. She was featured in Margot Lee Shetterly's 2016 book Hidden Figures, alongside Katherine Johnson, Dorothy Vaughan and Mary Jackson, three Black women mathematicians at NASA who made significant contributions at pivotal moments in the space race. All four women were awarded Congressional Gold Medals in 2019 for their scientific contributions.

Quanta Magazine spoke with Darden recently about her experience working for NASA, how to make fast planes quieter, and her surreptitious visits to speak with schoolchildren and Girl Scouts. The interview has been condensed and edited for clarity.

### Have you always been interested in math and science?

My mother tells the story of giving me a talking doll when I was 5. She was disappointed because, instead of playing with the doll, I cut it open to see why it talked. I also helped my dad work on his car and change the oil. When girls were inside playing, I was in the street, bicycle riding, skating and racing with the boys.

We lived in Union County, North Carolina, right outside of Charlotte. My mother taught in a two-room school. When I turned 4, Mother took me to school with her. She said I could play outside, but who was I going to play with? I stayed and did the first grade work. Then she promoted me to second grade.

### When did you turn to space and aeronautics?

I was in high school on October 4, 1957, when Sputnik launched. I felt the country's excitement that Russia beat us into space. Also, I attended college in Hampton, Virginia, near NACA. John Glenn's parade rode by campus. So there was certainly that influence too.

### How did you first come to work at NASA?

After graduating with my master's degree in applied mathematics, I was hired as a data analyst in the high-speed aeronautics division. We were female mathematicians who helped the male engineers create documents about wing and airflow shapes for the military and airplane companies. The engineers had slide rules and mechanical calculators but didn't like doing calculations. So the head computer assigned young ladies to do the work.

It wasn't creative, though we drew figures. I still have some of the French curves I was given to draw smooth lines through my data points.

### What was it like? Did you ever talk with the male engineers in those early years?

Yes, I often did after getting an assignment. Once, an engineer asked me to complete his work by writing a computer program. It was an interesting assignment. When I finished, he said my program gave incorrect answers. I reviewed and ran it again. He laughed and said, "That's still not right."

I didn't like the laugh. My work wasn't wrong. I looked at the work he had done prior to giving me the assignment and found one sign error. When I corrected his mistake and ran the code again, the numbers looked good.

### Was he gracious?

No. But he didnât laugh anymore.

### Was that something of a wake-up call for you?

Well, I later asked a friend why all of the men were in engineering and all of the women were in computing. I thought it was because we had math degrees and they had engineering degrees. But she told me that some male engineers had math degrees.

I wanted to be in engineering. Men in engineering did research, gave talks, wrote and published papers, and got promoted. The women, on the other hand, followed the engineers' orders. Sometimes they didn't even know what they were working on. They didn't give talks, weren't recognized on papers even when they helped, and didn't get promoted.

### During that time, did you know the other "hidden figures," or their work?

Dorothy Vaughan lived down the street from me. She started in 1943, so she was 24 years ahead of me. Mary Jackson and Katherine Johnson were 15 years ahead of me. Katherine's daughter was my classmate. Katherine and I sang together in church for 50 years. I went by her office a couple of times. I met some of the men who she worked with. However, I never read anything at NASA about what she or the others did, as their work was really hidden. I learned about their work in Hidden Figures.

### So how did you end up escaping that smaller role in computing?

I asked for a transfer to engineering, which my supervisor said was impossible. So I went to the director and asked why males and females with the same background were assigned different jobs. He said, "You know, nobody ever asked me that question before." I said, "Well, I'm asking it now."

Three weeks later, I got promoted to engineering.

### Did other women follow you?

I talked to some of the ladies in the office, but they werenât interested. Maybe they werenât outgoing. Of course, Mary Jackson was outgoing. One of the engineers had suggested that she go to a segregated high school to get credentials so that NASA would let her work in the wind tunnel. She did that and worked in the wind tunnel, but never got promoted there.

### What was your first engineering assignment?Â

My supervisor asked me to program the equations from a paper on sonic boom minimization. The authors had assumed an isothermal atmosphere, but I put the real atmosphere into the code. Eventually, I published on the topic. I also started working on my mechanical engineering doctorate because I didn't want anybody saying I couldn't do that job.

Once I finished the computer program, we input variables such as the airplane's length, weight, altitude and Mach number. The output gave the equivalent area distribution. With that, we started designing planes.

### Did any working airplanes use your designs?

In the wind tunnel, the difference in pressure inside and outside of the pressure cone had been much smaller for our design than for the baseline plane. However, we needed to fly over people to get feedback about how people would tolerate the minimized boom. Boeing ran a test flight over Chicago and Oklahoma City. Once they started, people called to report damage to sheetrock, windows and the good china in their homes. After that, the U.S. canceled the program and outlawed commercial supersonic flights over land. That law is still there.

But supersonic transport remained very popular. Eventually, in the late 1980s, Congress offered money to address the supersonic boom's environmental concerns, which include noise and the boom itself, but also possible ozone destruction. They asked that I gather everybody in the U.S. researching the sonic boom for a two-day national meeting at Langley.

I led the design and operational plan of the research program. We did years of testing in our wind tunnels to show what worked. Then DARPA borrowed two F-5 supersonic Air Force planes for a test around 2002 over the Mojave Desert. They built and pasted panels onto one of the planes so that it matched the equivalent area distribution from our computer program. The other F-5 stayed the same. When the F-5 with no changes was flown, you could hear people in the control room shouting because of the loud boom. But the demonstrator plane, the one with the panels on it, had a much softer boom. It worked!

I retired shortly after that, but much later, in 2018, NASA gave a contract to Lockheed Martin's Skunk Works to build QueSST, a good, low-boom supersonic X-plane. They're working on it now. It has such a long nose that it uses external vision to land the plane. They expect the boom to sound like a thump.

### But what about the law prohibiting commercial supersonic flights?

They'll do flight tests and get feedback on the noise. Then NASA will present the data to the FAA to request a rule change. They're also talking to other world noise agencies to change laws so that supersonic planes can fly around the globe.

### Another big legacy of your work, the book Hidden Figures, also happened after you retired. How did that come about?

Margot's father and I worked together, and we'd bring our children to Langley's big spring picnic. So I first met Margot when she was a girl. Later, Margot was working on Wall Street but wanted to write; her mother taught English and had worked with Margot on her writing.

One day Margot and her husband visited her parents. They were riding to church when her dad said, "Oh, look, Margot, there's Miss So-and-so. She was a computer at Langley and your Sunday school teacher." Then her dad talked about the Langley computers. Margot's husband said, "Well, if the Langley computers did all that, how come I've never heard of them?" And Margot thought, "Maybe I should write that book."

That's when she called me. Soon, it got so that every time she came to town, we had lunch. Once, I mentioned The Warmth of Other Suns, by Isabel Wilkerson, a book about Black migration patterns to the North and West told through three people. It was a great story and great history book. And so Margot put both personal stories and history in her book.

### Now youâre a public figure, often speaking to student groups and Girl Scout troops. Did you do this before the book?

Yes, but when I first met Mary Jackson, she told me, "Do you know that I got a poor performance appraisal because my supervisor said I spent too much time visiting schools?" Remember, she never got promoted there.

When she told me that, I said, "OK, the next time I go, I'll tell nobody where I'm going. I'll just say I'm going out for an hour." Of course, within a few years they were giving awards for people who visited schools.

In recent years, I've been talking to students all over the country. Invariably, the young women come up and say, "We didn't know women did work like that!" Girls need to know that women do this work.

• The Public Face of Amateur Radio

That letter from the ARRL about the purpose of ham radio was brief and felt as though there was more to read between the lines. Of course, ten of us could read that terse message and come away with at least a dozen different interpretations. Here's mine.

For a century amateur radio has grown by the addition of those who marvel at the magic of radio with a desire to learn more about it. Our kind of communication has been practiced and perfected over decades. It's not a new thing, despite our willingness to explore new methods to advance the way we endlessly practice the art.

The result of all that effort yielded a robust radio service capable of spanning the globe with layers of redundancy.

These capabilities are often used for pure enjoyment, but they have also been employed in service of the public via many facets of emergency communications. Our history of standing in the gap "when all else fails" during hurricanes, floods, etc. is the stuff they used to make movies about. In fact, some believe it to be the only reason amateur radio still exists in these modern times.

The ham radio experience provides an incredibly rich environment for those who want to build, learn, and explore radio, science, technology, electronics, software, and communication techniques.

But it can also be a valuable tool for organizing militias, extremist groups, insurrectionists, terrorists, and an endless host of nefarious organizations, and it's apparent "something" like that took place during the insurrection at the Capitol building in Washington.

I don't know whether this "something" was first used on that day or if there's been a slow infiltration of our ranks by those determined to use the amateur service as their paramilitary radio network.

The crowd of right-wing groups who descended on DC that day didn't seem to grok cameras, facial recognition, social media, smartphone location sharing, IP tracing, or even the simple fact that nothing on the Internet is "private".

The swift identification of so many rioters, and the subsequent takedown of their favorite social media sites, may have started an exodus from one means of communication to another where detection is thought to be more difficult.

The possibility that the public might come to regard the amateur radio service as a tool for organizing chaos surely creates sleepless nights for those in Newington, and it should trigger a similar response in all who care about the hobby.

For those who doubt that ham radio could be switched off in the blink of an eye, I'd suggest you study the history of World War II and its impact on the amateur radio service. Betting that could never happen again is one of the surest ways of making certain that it will.

• How (not) to measure progress in science?

I'm a bit late to the party but I've been enjoying some Collison podcast backlog and realized I had more to say about the "diminishing returns of science" trope that does the rounds from time to time.

Simply stated, the thesis suggests that a variety of metrics employed to measure progress of science all seemingly concur that despite increasing numbers of PhDs and the net accumulation of knowledge, major new discoveries are few and far between, at least compared to science in prior ages.

For a process that's devoted to discovering knowledge, science is poorly understood by nearly everyone, including scientists. It may not be surprising to find that GAAP metrics don't neatly translate into an industry that has resolutely resisted market forces since the beginning of time, but there are less glib reasons why measuring progress in science is actually highly non-trivial.

I'm going to go into more detail later in this post, but the fundamental issue is that science is like Tetris, in that it's both cumulative and self-compressing. Unlike, say, philosophy, the measure of a field's maturity is the brevity of its textbooks and the unity of its underlying theoretical framework. With the benefit of hindsight, the discoveries of a century ago appear neatly ordered, with the most (contingently) significant results retaining salience while the rest have slipped away.

Where do we start?

I first became aware of this idea in the startup world around 2014, when Peter Thiel was giving talks such as this one.

I found that Thiel had somewhat limited insight here, though I recognize that a scientific background isn't necessarily a requirement to contribute to this question. Still, comparing the rate of financial return between venture capital and fundamental research is a bit gauche, particularly if one happens to be a VC. Of course, such comparisons will be made, but it's germane to begin with the requisite 45 minutes of throat clearing about Keynesian capital overabundance and selection bias.

That is, we don't typically get lectures from failed (unlucky) VCs about the genius of the market. My rule of thumb is that if 5 sigma is good enough for particle physics, it's good enough for VC as well. Anyone can get lucky once. Get lucky 5 times in a row and that's more interesting.

I was more interested when Patrick Collison and Michael Nielsen entered the arena in 2018, with an article in The Atlantic asking if science was stagnant.

I've met both men socially a couple of times and must begin by stating I have nothing but the highest respect for them personally, and for their intentions in this endeavor. I also think that their suggested program of attempting more detailed study of this area is a damn good idea.

However, getting traction is difficult if the foundations are awry, and there are aspects of the article that need further attention. Collison has collated other responses here, so it's gratifying that there is an ongoing conversation in this area. Certainly, science coming to a grinding halt is the stuff of civilizational nightmares.

Much of what I'll write here may be obvious to some readers, perhaps less so to others. It seemed less obvious to me despite being familiar with these ideas for years and working in science for a decade or so, so I'm writing them down.

In summary, the article leans heavily on statistics and surveys about Nobel Prize-winning discoveries to make the case that despite exponentially increasing PhDs, publications, and science funding, few major discoveries have been made very recently.

At the risk of appearing a bit snarky, I would ask a hypothetical question: Over time Stripe has hired exponentially greater numbers of talented software engineers and yet has continued to ship about one product a year. Why?

To be fair, I could ask this of any software startup. The answer, of course, is that it's complicated. Larger organizations move more slowly. Incremental growth in a market suffers diminishing returns. More ambitious products have more onerous compliance requirements. Competition. Technical debt.

To get a little deeper, like many ambitious companies Stripe has teams working on fairly fundamental research questions in cryptography and computer science. These team members are among the smartest, best resourced humans to have ever lived. How long until we get a Stripe publication that's as significant as the Church-Turing Thesis?

This is a silly question, intended to provoke more than illuminate. But I think it underscores that progress in science is not purely an organizational problem. Academia is undoubtedly riven with dozens of major inefficiencies, some old and some new. I even wrote a whole book about ways in which assimilating this way of life can challenge humans. See the chapter on leaving academia for more information along these lines. The point is that even private research outside academia, despite enormous leaps in capability, doesn't necessarily see itself as being in the Nobel Prize game.

And so, with that primer, we turn to the Nobel Prize. Awarded once a year in various fields for outstanding research, the prize itself has numerous well-documented limitations. Using it as a mechanism to measure progress in science, no matter how well intentioned, is unlikely to result in deep insight. Doing so rests on a variety of flawed assumptions, chief among them being that scientific progress is conventionally measurable and that the Nobel Prize performs that sort of measurement. Unfortunately, this is fairly far from the truth.

Before we tackle the salience of historical scientific research in general, it's worth enumerating known biases in the Nobel Prize alone:

- A cadence of once a year, to at most three living participants, doesn't scale with the increased number of scientists, increased lifespans, or the scale of scientific research.
- The Nobel Committee is notoriously, and increasingly, conservative, reliably shunning certain demographics.
- The Nobel Committee strongly prefers established results, meaning that in general old scientists get prizes for work they did when much younger, often work that may not have seemed that significant at the time.
- Scientists who die, leave the field, or leave science never get prizes.
- Women hardly ever win, particularly in physics.
- Standards and practices on the Committee have changed over time.
- Standards and practices of science outside the Committee have changed over time.

Earlier in the article, however, Collison and Nielsen talk about a survey which asked scientists to evaluate which of a pair of given Nobel Prize discoveries was more significant. This approach ameliorates some of the peculiarities of the Prize system; however, as designed it cannot give unequivocal evidence. Science is progressive, and later results build on earlier ones. The significance, and salience, of earlier discoveries is enhanced, or overshadowed, by later ones in the same area.

Judging the relative merit of the discovery of the neutron or the Higgs boson is a pointless exercise. The Higgs is the final page of a story that began with the neutron, a story that involves the contributions of at least tens of thousands of scientists, most unknown even to their own close families. If the Higgs had been discovered before the neutron, it would be more significant by far! So in some ways the question as posed asks "which of these discoveries was made first?" Are we so surprised to find that answers to this question, averaged and graphed, are biased to the left?

Collison and Nielsen's article follows this by talking about the lack of Earth-shaking discoveries, such as Einstein's final formulation of General Relativity in 1915. Today that seems like as good a date as any to bake a cake with equations on it, but the reality for working scientists is that the ways in which GR "radically changed our understanding of space, time, mass, energy, and gravity" began, in many ways, with Maxwell in the 1870s and continue to the present day. There were several other prominent mathematicians (Hilbert among them) also working on similar geometric formulations of gravity, while GR was not widely accepted in physics for decades. Many of the more interesting cosmological consequences were not appreciated until the universe was found to be expanding and the cosmic microwave background (CMB) discovered, and details are still being actively researched today. I worked in this field for five years and I am 100% certain that by 2050, 90% of what I learned will be utterly irrelevant; I just have no way of knowing which 90%.

The article discusses (and largely rejects) the idea that science is reaching a point of diminishing returns because all the easy stuff has been found and we're approaching a more-or-less complete knowledge of nature. The death of physics has been predicted in the past, just prior to the discovery of quantum mechanics. It's true that subfields of physics wax and wane depending on funding priorities and the somewhat stochastic fine-grained nature of discovery. Nuclear physics has run out of superpowers and high energy physics has certainly run out of big accelerators for the time being. But grad students and postdocs are highly fungible and may be counted on to reliably find the next big thing. Just because we haven't yet digested the significance of the body of knowledge produced by our own generation doesn't mean that it's intrinsically worthless.

The article closes with a brief discussion of the idea of productivity slowdown. Economists measuring nation-state level productivity find that gains in per-person productivity in the US and other developed nations have largely tailed off since their post-WW2 peaks in the 1950s-1970s. Obviously at this scale economic behavior is multifactorial (to say the least), but the specific mention of the Concorde as a false harbinger does highlight the omission of the single most glaring factor in economic changes in the last quarter of the 20th century: the loss of predictably cheap oil. If I'm right, the exploding capacity and plunging electricity costs currently occurring due to developments in photovoltaics and batteries will reverse this trend and enable supersonic air travel. We'll see before we're old!

Finally, let's talk about the single most troubling aspect of the science progress measurement problem. Science is axiomatically different from nearly every other human pursuit, in that it's cumulative and self-compressing. Scientists often joke that if they're lucky they'll get a Nature paper in a career: a single really solid bit of research that, years later, will be half a paragraph or a footnote in a textbook. Scientific progress often isn't measured in pages of text produced, but pages of text removed. A great insight will allow two previously disparate phenomena to be understood under the same concept, and thus, the knowledge is compressed.

So when we attempt to measure the salience of the net contribution of an individual historical scientist today, it's very difficult to propagate that distribution forwards or backwards in time, or to analyse it in isolation from other contemporary work. This is not simply a matter of bunging a bunch of weights into the Perron-Frobenius model and redoing PageRank. Everything is contingent.

I will, however, suggest a useful mental model. Imagine that the contribution of individual scientists can be modeled with a power law distribution. Landau actually tried to do something like this, for real.
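To make the mental model concrete, here is a toy simulation (purely illustrative parameters of my own, not Landau's scheme): draw contributions from a Pareto distribution and look at how much of the total the very top accounts for, since a once-a-year, at-most-three-laureates prize effectively samples only that extreme tail.

```python
import random

# Toy model: scientist "contributions" drawn from a Pareto (power law)
# distribution. Alpha is made up; smaller alpha means a heavier tail.
random.seed(0)
ALPHA = 1.5
N = 100_000
contributions = sorted((random.paretovariate(ALPHA) for _ in range(N)),
                       reverse=True)

total = sum(contributions)
top_3 = sum(contributions[:3])      # a "three laureates this year" slice
top_1pct = sum(contributions[:N // 100])

print(f"top 3:  {top_3 / total:.1%} of total contribution")
print(f"top 1%: {top_1pct / total:.1%}")
```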

Over time the rankings will shift and the absolute value will rise and fall but, generally speaking, salience falls with time, often for reasons beyond any individual's control and often for reasons unrelated to the intrinsic quality or utility of that person's work. Within someone's career, if their salience is consistently high and the gods are favorable, the Swedish Committee may bestow The Prize, but in many ways this is an inherently (and inconsistently) biased sample of an already biased probability distribution.

This all seems a bit handwavey, so I'll give a concrete example.

I studied physics as an undergrad in building A28 at the University of Sydney. Built in 1920, its striking facade is decorated with the debossed names of famous physicists of the time. Einstein is not listed. Indeed, 1920 predates almost all of quantum mechanics and subatomic physics. It is a fun exercise to read "Anathem" by Neal Stephenson and map all the renamed physics in there to relatively obscure physics arcana in our own universe.

As someone who has read a couple of undergraduate physics textbooks, I could associate each of the names with an effect or equation, but I would be surprised if non-physicists knew any of the names, which are: Archimedes, Roger Bacon, Copernicus, Kepler, Galileo, Newton, Huyghens, Dalton, Fresnel, Fourier, Carnot, Faraday, Maxwell, Helmholtz, Kelvin, Boltzmann, Roentgen, and Bessel. I challenge the interested reader to write a paragraph, from memory, on the key contributions of each of these people.

For the purposes of this blog, I decided to research who did what and when, and it turns out that nearly all these physicists made their discoveries between 1800 and 1850, which is to say, 70-120 years before the building was built. A more comprehensive list of physicists active during this period can be found here. Attentive readers will have noticed that Collison and Nielsen's primary thesis is that exciting discoveries in physics dried up by 1950, 70 years ago. Coincidence, or perhaps low confidence in one's ability to predict the long-term value of recent discoveries isn't a new phenomenon?

To take just one example, consider Thomas Young, who is best known today for the classic double slit experiment. A renowned polymath "who made notable contributions to the fields of vision, light, solid mechanics, energy, physiology, language, musical harmony, and Egyptology." Widely considered to be the smartest working scientist of his generation, though apparently not good enough for the facade of A28. Best known today for an experiment that can be repeated by a four-year-old with a cheap laser. And yet in his own lifetime, despite considerable talents, he struggled as well as anyone else to wrest knowledge from the unordered chaos of the universe.

Science is a largely artisanal endeavor whose discoveries are always made by a huge number of people working in parallel. Tiny pieces of the puzzle are worked out by people who, in many cases, remain unaware of one another's existence. The lucky few get an obscure equation named after them. Measuring the rate of equation naming is not a good way to understand the progress of science!

So how might we go about measuring the progress of science?

To take a utilitarian perspective, I think it's fairly widely agreed that the human condition, both individually and collectively, has improved markedly over the last century. Amongst many others, Human Progress has collated impressive datasets showing rapid, and accelerating, improvement in key indicators such as hunger, poverty, literacy, freedom, life expectancy, exposure to violence, and access to markets. For sure, much of the improvement can be attributed to wider implementation of existing technology, and in some cases rather antique technology at that.

Food scarcity was largely (and unexpectedly) solved in the 20th century with the invention of the Haber process (1913) and Borlaug's dwarf wheat (1950s), which were widely deployed within decades.

There is no doubt in my mind, however, that on a per-minute or per-smile basis, the material resources my contemporaries enjoy are overwhelmingly the result of new inventions, which is to say, new applications of relatively recently discovered science. The most transformative of these are personal computers and the internet, but I am convinced that we're not even halfway through chapter one of that story.

Manufacturing and automation are also salient examples. One could argue that thereâs no reason that, for example, Tesla cars couldnât have been built on a Ford production line in 1920 (perhaps without the autopilot and computer screen) but that would require overlooking the vast foundation of incremental knowledge gains necessary to make something as banal (and alien!) as a lithium battery cell for only $1.50 â cheaper than a loaf of bread. A more thorough accounting premised on applied utility will show, I believe, accelerating scientific knowledge generation, diffusion, and application for the improvement of the human condition in every corner of the globe. ## January 18, 2021 • Injecting a Backdoor into SolarWinds Orion Crowdstrike is reporting on a sophisticated piece of malware that was able to inject malware into the SolarWinds build process: Key Points • SUNSPOT is StellarParticleâs malware used to insert the SUNBURST backdoor into software builds of the SolarWinds Orion IT management product. • SUNSPOT monitors running processes for those involved in compilation of the Orion product and replaces one of the source files to include the SUNBURST backdoor code. • Several safeguards were added to SUNSPOT to avoid the Orion builds from failing, potentially alerting developers to the adversaryâs presence. Analysis of a SolarWinds software build server provided insights into how the process was hijacked by StellarParticle in order to insert SUNBURST into the update packages. The design of SUNSPOT suggests StellarParticle developers invested a lot of effort to ensure the code was properly inserted and remained undetected, and prioritized operational security to avoid revealing their presence in the build environment to SolarWinds developers. This, of course, reminds many of us of Ken Thompsonâs thought experiment from his 1984 Turing Award lecture, âReflections on Trusting Trust.â In that talk, he suggested that a malicious C compiler might add a backdoor into programs it compiles. The moral is obvious. You canât trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well-installed microcode bug will be almost impossible to detect. Thatâs all still true today. • Click Here to Kill Everybody Sale For a limited time, I am selling signed copies of Click Here to Kill Everybody in hardcover for just$6, plus shipping.

• Click Here to Kill Everybody Sale

For a limited time, I am selling signed copies of Click Here to Kill Everybody in hardcover for just $6, plus shipping.

Note that I have had occasional problems with international shipping. The book just disappears somewhere in the process. At this price, international orders are at the buyer's risk. Also, the USPS keeps reminding us that shipping, both US and international, may be delayed during the pandemic.

I have 500 copies of the book available. When they're gone, the sale is over and the price will revert to normal.

Order here.

EDITED TO ADD: I was able to get another 500 from the publisher, since the first 500 sold out so quickly.

Please be patient on delivery. There are already 550 orders, and that's a lot of work to sign and mail. I'm going to be doing them a few at a time over the next several weeks. So all of you people reading this paragraph before ordering, understand that there are a lot of people ahead of you in line.

EDITED TO ADD (1/16): I am sold out. If I can get more copies, I'll hold another sale after I sign and mail the 1,000 copies that you all purchased.

• Last minute sponsorships

Michael W. Lucas's ebook sponsorships and print sponsorships for "TLS Mastery" will close in the next 24-48 hours; get in there if you want to participate.

• Joker's Stash Carding Market to Call it Quits

Joker's Stash, by some accounts the largest underground shop for selling stolen credit card and identity data, says it's closing up shop effective mid-February 2021. The announcement came on the heels of a turbulent year for the major cybercrime store, and just weeks after U.S. and European authorities seized a number of its servers.

A farewell message posted by the Joker's Stash admin on Jan. 15, 2021.

The Russian and English language carding store first opened in October 2014, and quickly became a major source of "dumps": information stolen from compromised payment cards that thieves can buy and use to create physical counterfeit copies of the cards.

But 2020 turned out to be a tough year for Joker's Stash. As cyber intelligence firm Intel 471 notes, the curator of the store announced in October that he'd contracted COVID-19, spending a week in the hospital. Around that time, Intel 471 says many of Joker's loyal customers started complaining that the shop's payment card data quality was increasingly poor.

âThe condition impacted the siteâs forums, inventory replenishments and other operations,â Intel 471 said.

Image: Gemini Advisory

That COVID diagnosis may have affected the shop owner's ability to maintain fresh and valid inventory on his site. Gemini Advisory, a New York City-based company that monitors underground carding shops, tracked a "severe decline" in the volume of compromised payment card accounts for sale on Joker's Stash over the past six months.

âJokerâs Stash has received numerous user complaints alleging that card data validity is low, which even prompted the administrator to upload proof of validity through a card-testing service,â Gemini wrote in a blog post about the planned shutdown.

Image: Gemini Advisory

Then on Dec. 16, 2020, several of Joker's long-held domains began displaying notices that the sites had been seized by the U.S. Department of Justice and Interpol. The crime shop quickly recovered, moving to new infrastructure and assuring the underground community that it would continue to operate normally.

Gemini estimates that Joker's Stash generated more than a billion dollars in revenue over the past several years. Much of that revenue came from high-profile breaches, including tens of millions of payment card records stolen from major merchants including Saks Fifth Avenue, Lord and Taylor, Bebe Stores, Hilton Hotels, Jason's Deli, Whole Foods, Chipotle, Wawa, Sonic Drive-In, the Hy-Vee supermarket chain, Buca Di Beppo, and Dickey's BBQ.

Joker's Stash routinely teased big breaches days or weeks in advance of selling payment card records stolen from those companies, and periodically linked to this site and other media outlets as proof of his shop's prowess and authenticity.

Like many other top cybercrime bazaars, Joker's Stash was a frequent target of phishers looking to rip off unwary or unsophisticated thieves. In 2018, KrebsOnSecurity detailed a vast network of fake Joker's Stash sites set up to steal login credentials and bitcoin. The phony sites all traced back to the owners of a Pakistani web site design firm. Many of those fake sites are still active (e.g. jokersstash[.]su).

As noted here in 2016, Joker's Stash attracted an impressive number of customers who kept five- and six-digit balances at the shop, and who were granted early access to new breaches as well as steep discounts for bulk buys. Those "partner" customers will be given the opportunity to cash out their accounts. But the majority of Stash customers do not enjoy this status, and will have to spend their balances by Feb. 15 or forfeit those funds.

The dashboard for a Joker's Stash customer who's spent over $10,000 buying stolen credit cards from the site.

Gemini said another event that may have contributed to this threat actor shutting down their marketplace is the recent spike in the value of Bitcoin. A year ago, one bitcoin was worth about $9,000. Today a single bitcoin is valued at more than $35,000.

"JokerStash was an early advocate of Bitcoin and claims to keep all proceeds in this cryptocurrency," Gemini observed in a blog post. "This actor was already likely to be among the wealthiest cybercriminals, and the spike may have multiplied their fortune, earning them enough money to retire. However, the true reason behind this shutdown remains unclear."

If the bitcoin price theory holds, that would be fairly rich considering the parting lines in the closure notice posted to Joker's Stash.

"We are also want to wish all young and mature ones cyber-gangsters not to lose themselves in the pursuit of easy money," the site administrator(s) advised. "Remember, that even all the money in the world will never make you happy and that all the most truly valuable things in this life are free."

Regardless, the impending shutdown is unlikely to have much of an impact on the overall underground carding industry, Gemini notes.

"Given Joker's Stash's high profile, it relied on a robust network of criminal vendors who offered their stolen records on this marketplace, among others," the company wrote. "Gemini assesses with a high level of confidence that these vendors are very likely to fully transition to other large, top-tier dark web marketplaces."

• Recovering a Triple Star Planet with New Data

The Kepler mission produced so much data (2394 exoplanets, 2366 candidates by the end of spacecraft operations in 2018) that we might forget how quickly all this came about. The first Kepler results started being announced in 2010. One of these, the second candidate to emerge, was KOI-5Ab, which was a tough pick in the early going given its position within a triple star system. An ambiguous detection, it was soon left behind, its status uncertain, as the numbers of more definitive candidates swelled.

Caltech's David Ciardi, who discussed this elusive system at the recent virtual meeting of the American Astronomical Society, says this about it:

"KOI-5Ab got abandoned because it was complicated, and we had thousands of candidates. There were easier pickings than KOI-5Ab, and we were learning something new from Kepler every day, so that KOI-5 was mostly forgotten."

Ciardi, who is chief scientist at NASA's Exoplanet Science Institute, has now pulled KOI-5Ab back to vibrant life. Almost certainly a gas giant, the planet orbits one of three stars in the system in a misaligned orbit that calls into question the nature of its formation. Making original confirmation of the planet difficult was the inability to completely distinguish it from the possible effects of the other two stars in the system. Untangling the question would involve TESS data. The object appears in TESS terminology as TOI-1241b, showing a five-day orbit that matches the Kepler result.

Confirming such a candidate calls for more than a single backup asset on the ground. Working with colleagues in the California Planet Search, Ciardi used Keck data, as well as observations from Caltech's Palomar Observatory near San Diego and Gemini North in Hawaii, though the scientist credits TESS as the motivation for re-visiting the object.
We learn that KOI-5Ab orbits Star A, whose companion, Star B, is in a tight 30-year orbit with A. The third star orbits A and B every 400 years. Have a look at the image below to see the orbital dance.

Image: The KOI-5 star system consists of three stars, labeled A, B, and C in this diagram. Stars A and B orbit each other every 30 years. Star C orbits stars A and B every 400 years. The system hosts one known planet, called KOI-5Ab, which was discovered and characterized using data from NASA's Kepler and TESS (Transiting Exoplanet Survey Satellite) missions, as well as ground-based telescopes. KOI-5Ab is about half the mass of Saturn and orbits star A roughly every five days. Its orbit is tilted 50 degrees relative to the plane of stars A and B. Astronomers suspect that this misaligned orbit was caused by star B, which gravitationally kicked the planet during its development, skewing its orbit and causing it to migrate inward. Credit: Caltech/R. Hurt (IPAC).

The orbital mechanics in play as planets formed from a primordial disk could account for the misalignment, given the gravitational influence of the second star, causing the planet to migrate inward as well as skewing its inclination. There are similar cases of misaligned orbits (particularly GW Orionis) offering evidence of distortions caused by multiple star system evolution.

"We don't know of many planets that exist in triple-star systems, and this one is extra special because its orbit is skewed," adds Ciardi. "We still have a lot of questions about how and when planets can form in multiple-star systems and how their properties compare to planets in single-star systems. By studying this system in greater detail, perhaps we can gain insight into how the universe makes planets."

Image: This artist's concept shows the planet KOI-5Ab transiting across the face of a sun-like star, which is part of a triple-star system located 1,800 light-years away in the constellation Cygnus. Credit: Caltech/R. Hurt (IPAC).

• An Introduction to The Bicircular Restricted 4 Body Problem

### The Sun is BIG

Quick question: which exerts a greater force on the Moon, the Sun or the Earth? Newton's law of universal gravitation provides us with the following relation for calculating the force on an object

$F = \frac{G m_1 m_2}{r_{12}^2}$

where $G$ is the universal gravitational constant, $m_1$ and $m_2$ are the masses of the two objects, and $r_{12}$ is the distance between them. While the Sun is more massive than the Earth, its distance to the Moon is much greater (and force drops off with distance squared), so the answer isn't obvious.

Using NASA's DE430 dataset, we find that the Sun exerts approximately 2.2 times the Earth's force on the Moon. Over a five-year span, the relative positions of the three bodies cause that ratio to fluctuate between roughly 1.9 and slightly above 2.5. Because the Sun exerts so much force on the Moon, it will also strongly affect spacecraft trajectories in and around Lunar orbit. This is especially important as NASA looks to create a semi-permanent cis-Lunar presence as part of the Artemis missions, so in this post we'll look at a way to include the effects of the Sun in a reduced-order dynamical system, the Bicircular Restricted 4 Body Problem (BR4BP).
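Before moving on, here is a rough sanity check of that 2.2 ratio, using mean distances and standard gravitational parameters rather than the DE430 ephemeris (so the numbers are illustrative only):

```python
# Standard gravitational parameters (m^3/s^2) and mean distances (m);
# approximate values, not the DE430 ephemeris used in the post.
GM_SUN, GM_EARTH = 1.327e20, 3.986e14
R_SUN_MOON, R_EARTH_MOON = 1.496e11, 3.844e8   # ~1 AU; mean lunar distance

# Acceleration imparted on the Moon by each body: a = GM / r^2
a_sun = GM_SUN / R_SUN_MOON**2
a_earth = GM_EARTH / R_EARTH_MOON**2
print(f"Sun/Earth force ratio on the Moon: {a_sun / a_earth:.2f}")  # ~2.2
```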
### What is the BR4BP?

In the past, we've used the Circular Restricted 3 Body Problem (CR3BP) to uncover structures in the 3-body problem. There, two large bodies (like a planet and moon) circularly orbit about their combined barycenter, unaffected by the third body, which is much smaller than the first two (like a spacecraft or comet). In the Bicircular Restricted 4 Body Problem (BR4BP) we introduce a fourth body that moves in a circular orbit about the initial barycenter. The equations of motion for this system have been written about before, although I enjoy this master's thesis (starting on page 22).
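To make the setup concrete, here is a minimal sketch of the planar Sun-perturbed Earth-Moon equations of motion in the synodic frame. The nondimensional constants and the demo initial state are approximate illustrative values, not mission-grade numbers (see the thesis linked above for the full derivation):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Approximate nondimensional Earth-Moon BR4BP constants (illustrative)
MU  = 0.012151      # Moon mass fraction of the Earth-Moon system
M_S = 328900.56     # Sun mass in Earth+Moon masses
A_S = 388.81        # Sun distance in Earth-Moon distances
W_S = -0.92520      # Sun angular rate in the synodic frame

def br4bp_planar(t, s, theta0=0.0):
    x, y, vx, vy = s
    r1 = np.hypot(x + MU, y)        # spacecraft-Earth distance
    r2 = np.hypot(x - 1 + MU, y)    # spacecraft-Moon distance
    # Classical CR3BP terms (Coriolis, centrifugal, and both primaries)
    ax = x + 2*vy - (1 - MU)*(x + MU)/r1**3 - MU*(x - 1 + MU)/r2**3
    ay = y - 2*vx - (1 - MU)*y/r1**3 - MU*y/r2**3
    # The Sun's angle depends on time, making the system non-autonomous
    th = theta0 + W_S * t
    xs, ys = A_S*np.cos(th), A_S*np.sin(th)
    rs = np.hypot(x - xs, y - ys)
    # Direct solar attraction plus the indirect term from the
    # accelerating Earth-Moon barycenter
    ax -= M_S * ((x - xs)/rs**3 + xs/A_S**3)
    ay -= M_S * ((y - ys)/rs**3 + ys/A_S**3)
    return [vx, vy, ax, ay]

# Propagate an arbitrary demo state for one synodic month (2*pi units)
sol = solve_ivp(br4bp_planar, (0, 2*np.pi), [0.85, 0.0, 0.0, 0.18],
                rtol=1e-10, atol=1e-12)
```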
### What Changes

Now, thanks to the fourth body's changing position, the system is no longer autonomous. Additionally, instead of being points fixed in space, the Lagrange points are now closed orbits. Note how the L1 Lagrange point has a winding number of -2 (two clockwise turns to return to close).

Additionally, the formerly closed orbits of the CR3BP, like the Lyapunov orbits, are no longer closed. While correcting for the solar perturbation would be trivial, any correction would only close a single orbit. The period of the Lyapunov orbit and the solar orbit are not multiples of each other, so the Sun would be in a different location and the trajectory would diverge on the second orbit. Starting from the same initial conditions, but with the Sun at different locations, produces the following plot.

If we then plot the non-dimensional error for position and velocity at the end, it is clear that velocity dominates. Also, note the close, but not quite symmetrical, behavior. This likely comes from the fictitious force generated by being in the synodic frame imparting a bias towards counterclockwise rotations.

### Stable Orbits

Just because our formerly closed orbits are now open doesn't mean there aren't stable orbits. Take the following DRO: it is stable for approximately 100 years in the Sun-Earth-Moon BR4BP, making it ideal for long-term storage of a captured asteroid.

Another relatively stable family of BR4BP orbits is the NRHOs. These orbits, while less stable than the DROs, can be maintained for small amounts of fuel. Additionally, because the L2 Southern NRHO spends most of its time above the Lunar south pole, NASA is interested in placing a space station in one of these orbits to support the Artemis missions. (There are also other benefits to placing Gateway in an L2 Southern NRHO, like favorable geometry to avoid most eclipses.) Now, plotted together:

### Want More Gereshes?

If you want to receive new Gereshes blog posts directly to your email when they come out, you can sign up for that here! Don't want another email? That's ok, Gereshes also has a twitter account and subreddit!

The post An Introduction to The Bicircular Restricted 4 Body Problem appeared first on Gereshes.

• Disturbia

The low-light of the weekend had to be this widely distributed public notice from the FCC that amateur radio operators may not use radio equipment to commit or facilitate criminal acts. I don't recall anything like this in my forty-plus years in the hobby.

This followed rumors that ham radio equipment may have played some role in the storming of the United States Capitol and violent attack against the United States Congress on January 6, 2021, carried out by a mob of supporters of U.S. President Donald Trump in an attempt to overturn his defeat in the 2020 presidential election.

The ARRL then issued a somewhat vague statement about the purpose of ham radio that concluded with:

Amateur Radio is about development of communications and responsible public service. Its misuse is inconsistent with its history of service and its statutory charter. ARRL does not support its misuse for purposes inconsistent with these values and purposes.

It's good to know that the ARRL doesn't support misuse of our hobbyspace for the violent overthrow of our government, but also somewhat disturbing that they felt the need to proclaim it.

• Pulsar Analogy

• Yuri Gagarin, the hobbyist photographer

Yuri Gagarin, the hobbyist photographer, at home with his wife.

Yuri Gagarin being identified only as an amateur photographer and not literally the first human in space has me on the floor.

• How to Be the Bigger Person

If you start an apology by saying "it takes a big man," it is not a real apology, and you are not a big man.

As always, thanks for using my Amazon Affiliate links (US, UK, Canada).

• Are We Really Engineers?

I sat in front of Mat, idly chatting about tech and cuisine. Before now, I had known him mostly for his cooking pictures on Twitter, the kind that made me envious of suburbanites and their 75,000 BTU woks. But now he was the test subject for my new project, to see if it was going to be fruitful or a waste of time.

"What's your job?"

"Right now I'm working on microservices for a social media management platform."

"And before that?"

"Geological engineering. A lot of open pit mining, some amount of underground tunnel work. Hydropower work. Earth embankment dams because they come along with mines."

He told me a story about his old job. His firm was hired to analyze a block cave in British Columbia. Block caves are a kind of mining project where you dig tunnels underneath the deposit to destabilize it. The deposit slowly collapses and leaks material into the tunnels, and then "you just print money", as Mat called it. The big problem here? The block cave was a quarter mile under a rival company's toxic waste dump. "In the event of an earthquake, could the waste flood the mine and kill everyone?" He had to prove it was safe. A different kind of work than what he was doing now.

As for another difference? "My personal blog has better security than some $100 million mining projects."

This wasn't going to be a waste of time, after all.

Is software engineering "really" engineering? A lot of us call ourselves software engineers. Do we deserve that title? Are we mere pretenders to the idea of engineering? This is an important question, and like all important questions, it regularly sparks arguments online. On one hand you have the people who say we are not engineers because we do not live up to "engineering standards". These people point to things like The Coming Software Apocalypse as evidence that we don't have it together. They say that we need things like certification, licensing, and rigorous design if we want to earn the title engineer.

On the other end of the horseshoe, we have people like Pete McBreen and Paul Graham who say that we are not engineers because engineering cannot apply to our domain. Engineers work on predictable projects with a lot of upfront planning and rigorous requirements; software is dynamic, constantly changing, unpredictable. If we applied engineering practice to software, it would be 10 times as expensive and stuck in 1970.

For a long time I refused to call myself a software engineer and thought of other people with that title as poseurs. What made me question my core assumptions was reading a short story against engineering: A Bridge to Nowhere, one of the Codeless Code stories. The author argues that the techniques of bridge building don't apply to software, since software clients can change their requirements and software gravity can sometimes reverse itself.

The predictability of a true engineer's world is an enviable thing. But ours is a world always in flux, where the laws of physics change weekly. If we did not quickly adapt to the unforeseen, the only foreseeable event would be our own destruction.

That's when I realized something about everybody involved in all of these arguments.

They've never built a bridge.

Nobody I read in these arguments, not one single person, ever worked as a "real" engineer. At best they had some classical training in the classroom, but we all know that looks nothing like reality. Nobody in this debate had anything more than stereotypes to work with. The difference between the engineering in our heads and in reality has been noticed by others before, most visibly by Glenn Vanderburg. He read books on engineering to figure out the difference. But I wanted to go further.

If I wanted to know how software development compares and contrasts with "real" engineering, then I'd have to talk to "real" engineers. But thinking about that, I realized another problem: while "real" engineers could tell me what they did, they couldn't tell me how their work differed from mine. Just as we make sweeping statements about real engineering without any evidence, they could make sweeping statements about software without knowing anything about it either. Even if they use a bit of software in their day-to-day work, that's not the same as developing software as a job. Only a person who's done both software development and "real" engineering can truthfully speak to the differences between them.

So that's what I set out to find: people who used to be professional engineers and then became professional software developers. I call these people crossovers, hybrids between the two worlds. I interviewed 17 crossovers on common software misconceptions, how the two worlds relate to each other, whether we can truthfully call what we do engineering, and what the different fields can teach and learn from each other.

There's a lot I want to talk about here, much more than can comfortably fit in one blog post. I divided the write-up into three parts, dealing with my three core topics. Part one is about the term "engineering". Is what we do engineering, and can we honestly call ourselves engineers?

## Why is this a Question?

So first, some ground rules: "software engineering" is a real field. Most people would agree that the software running on spacecraft counts as "real engineering". The debate is about the rest of the field. Are the people who build websites engineers? What about embedded hardware? I found in my research that people defending the title "engineer" rarely define what an engineer is coherently, usually boiling it down to "engineers solve problems". The arguments against the title tend to be a little more developed, as are the arguments about transcending it.

The backlash against "software engineering" comes from two places. First, there are the gatekeepers. While at first we called ourselves programmers and "developers", after 1985 more and more people started using the title "software engineer". This "title bloat" is one of the things that led to the backlash: people using it as a title without trying to live up to some form of standards.

The other side of the backlash comes from the new software movements of the 90s and early 00s. Movements like Extreme Programming and Agile were rejections of applying standard project management techniques to software development. To them, engineering represented the old way of doing things, incompatible with the new way of software development. Software engineering was the past, a failed dead end, and artisanal software was the future.

Both of these backlashes come from the connotations of engineering. In the case of the gatekeepers, it's the positive connotations they believe we have not earned. In the case of artisanal craftsmanship, it's the negative baggage they believe we don't deserve. Both are agenda-driven viewpoints, arguments based on how their proponents want software to be. This is good for advocacy but doesn't help us figure out where software is right now. And their agendas are based on a reference point they don't understand themselves: none of the people arguing for or against software engineering as engineering have worked as engineers.

## Common Misconceptions

Let's start with the common arguments people make about whether to include software in engineering. Most people have an informal concept of what engineering is but not a strict definition. Like pornography, people know engineering when they see it. When I asked people to explicitly list the qualities that make something engineering, here are the most common responses:

1. Making something complex as part of a team.
2. Making something physical.
3. Making something according to a rigorous spec with strict principles.
4. Using mathematical principles in their design.
5. Working on situations with high consequences, like loss of life.
6. Doing work with a certification and license.

The first quality is too broad: almost all human professions involve making something complex in a team. It doesn't capture "engineering". The second quality is too restrictive: not all forms of engineering result in physical artifacts. In particular, industrial engineering rarely does. The third quality will be discussed in the next essay.

That leaves three claims to discuss here. All three are used to say software can't be engineering: "We don't use any math, unlike real engineers. Software doesn't matter, unlike real engineers. We aren't licensed, unlike real engineers." As we'll see, none of these quite hold up.

### Engineering is mathematical

This one I can address directly. The claim is that engineering involves a lot of hard math, while software involves very little math. The confusion here comes from our misunderstanding of mathematics. Much of the math that mechanical engineers use is continuous math. This is where we work over a continuous domain, like the real numbers. Things like calculus, trigonometry, and differential equations are in this category. This is what most people in the US learn in high school, codifying it as what they think of as "math".

In software, we don't use these things, leading to the conception that we don't use math. But we actually use discrete math, where we deal with discrete structures rather than continuous quantities. This includes things like graph theory, logic, and combinatorics. You might not realize that you are using these, but you do. They're just so internalized in software that we don't see them as math! In fact most of computer science is viewable as a branch of mathematics. Every time you simplify a conditional or work through the performance complexity of an algorithm, you are using math. Just because there are no integrals doesn't mean we are mathless.
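As a tiny illustration (with hypothetical function names, not an example from the essay): every routine cleanup of a guard clause is an application of De Morgan's laws, and you can verify the rewrite truth-table style over a small domain:

```python
# By De Morgan's law, not (A or B) == (not A) and (not B), so the
# "cleaned up" guard below is provably equivalent to the original.
def needs_retry_original(status: int, attempts: int) -> bool:
    return not (status == 200 or attempts >= 3)

def needs_retry_simplified(status: int, attempts: int) -> bool:
    return status != 200 and attempts < 3

# Exhaustive check over a small domain: a truth table in disguise.
assert all(
    needs_retry_original(s, a) == needs_retry_simplified(s, a)
    for s in (200, 404, 500)
    for a in range(6)
)
```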

This falls in line with the rest of engineering. Different branches use different kinds of math in different ways. Industrial engineers are concerned with very different things than mechanical engineers are. Just because we use a different branch of math doesn't mean we're not doing engineering.

### Engineering is high-consequence

I'm not sure what first disabused me of this notion. It might have been the conversation with Mike, an ex-mechanical engineer who now designs glaciology sensors. One of his last projects before the crossover was as part of a team working on a classified medical device. Most of it went smoothly, but then came the big blocker:

The one thing that gave us the most grief was the handle, which was a bent bit of wire with some plastic molded around it. Yeah, getting the wires to bend correctly and the plastic onto this shiny piece of anodized aluminium wire. […] That was the thing that took three months in the project.

Three months of the project dedicated to getting the handle working right.

This is not to say the work wasn't important. It does mean that the engineers spent a lot of time on something that's low-consequence or non-safety-critical. In retrospect, this shouldn't have surprised me. The world is vast and our industry is huge. Someone needs to design the bridges, yes, but someone also needs to design all of the little things we use in our day-to-day life. Much of the engineering there is low-stakes, low-consequence, just like much software is.

There's a huge difference between if you're making a cabinet for Bluetooth speakers, like the ones that are on my desk, or you're making an assembly for the landing gear of a 737. You're using some of the same tools, but your approach is wildly different. - Nathan (mechanical)

«But there are still some things that lead to loss of life!» Yes, and the same is true of software. An integer overflow in the Intrado software led to a several-hour 911 outage for millions of people. A biased algorithm unfairly sent people to jail. Just like with "real" engineering, there's also a wide swath of software that won't kill people but will lead to millions of dollars in damages. This can be everything from Amazon going down to a game being unplayable.

### Engineers are licensed

This is the claim I've heard most: licensure. There's a difference between doing engineering and being an engineer, just as people practice medicine at home without being doctors. Maybe software engineering is possible, but without the licenses we are not software engineers. In Canada you can't even call yourself a "Software Engineer" unless you're accredited!

At least that's what people in the United States say. Several Europeans I spoke to said the exact same thing about the US system. Everybody seems to think the worst about their own system and the best about others. And about half of the engineers I spoke to weren't licensed but were still considered professional engineers by their peers. They didn't have the "certification", but they had the skills.

In the US, you don't need a license to practice any kind of engineering. You need a license to be a "principal engineer", a.k.a. the person who formally signs off on plans as valid. But the engineers working under the principal engineer don't need to be accredited and often are not. In fact many of them don't even have formal training as engineers.

Here's the problem with deciding engineering based on licenses: licenses are a political and social construct, not a fact of nature. Societies adopt licensure for reasons that are as much political as technical. To see this, let's discuss some of the history of licensure in the United States.

As recently as 1906, not a single state in the US required licenses for any project. Wyoming was the first state to instate a licensure policy, entirely because irrigation projects kept blowing the budget. They theorized accredited engineers could give better estimates on project costs. It took 40 more years for all fifty states to get on board.

This all means that licensure isn't handled at the federal level: it's a requirement set by the states. Among other things, licenses aren't necessarily transferable between states. If you get a license in Texas, you have to go through a process, called "comity", to be able to practice in California.

While licensure originated in response to cost overruns, it expanded rapidly for an entirely different reason. Regulations are written in blood. Fields become regulated when the lack of regulation kills people. Until that happens on a wide scale with arbitrary programs, it's unlikely that we'll ever see the same licensure requirements for software. In the places where software has killed people, like Therac-25 and aircraft accidents, we see stricter regulations. Whether this leads to broader licensure requirements for software engineers remains to be seen.

You could argue that it's immoral for us to not be licensed. This is an argument I'm sympathetic to. But it's a normative argument, not a positive one. By saying "we should be licensed", you are saying something about how the world ought to be. You are trying to answer the question "should we be held to higher standards?" But that's not the question here. I don't care right now where we should be going; I just want to know where we are right now. Whether or not we are engineers is irrelevant to whether or not we are good engineers.

In conclusion: licenses exist because we are part of society and have legal requirements, not because they are essential to what it means to do engineering. So while you might want to make software more licensed, as it stands the licensing question doesn't change the essence of our work.

## The Truth

This leaves us back where we started: there aren't any qualities we can point to and say "this is engineering, this is not". It's a standard Wittgensteinian language-game problem: human constructs do not neatly fall into precise definitions. Rather, something like "engineering" is a family of related concepts, where we judge whether something belongs based on how much it resembles other things in the family. In other words, "engineering" is "what engineers do". In other other words, something becomes engineering if enough engineers say it's engineering.

Consider chemical engineering. Chemical engineering is unlike mechanical, civil, or electrical engineering. Chemical engineers create processes to manufacture products at scale, often using experimentation and iteration. But nobody would disagree that it is engineering. Chemical engineering started in the late 1800s, before states licensed engineers. If chemical engineering started now, people would refuse to call it engineering. And they'd be wrong to refuse.

Once I realized this, my interviewing process changed a little. Instead of asking how they felt about certain engineering topics, I just asked them point blank: "Do you consider software engineering actually engineering?"

Of the 17 crossovers I talked to, 16 said yes.

That's not the answer I expected going in. I assumed we weren't engineers, that we're actually very far from being engineers. But then again, I was never a "real" engineer. I don't know what it's like to be a "real" engineer, and so can't compare software engineering to other forms. I don't have the experience. These people did, and they considered software engineering real engineering.

Even the product owners and project managers are in a sense engineers […] everyone is kind of an engineer, an engineer of sorts. -Kate (chemical)

It makes no sense to use "real" engineering in contrast to "software" engineering. Going forward I will instead use the term "trad" engineering.

### Craft vs Engineering

I'm gonna respond in a slightly different way. Not every software developer is a software engineer, just like not every single person who works in construction is an engineer. An engineer is a very specific set of skills. […] Software [engineering] is skill plus understanding, all the processes and life cycle and the consequences and all the things that you should be aware of [and] avoid. -Dawn (Mechanical & Chemical)

That said, many of the crossovers also added an additional qualification: software engineering is real engineering, but a lot of people who write software aren't doing software engineering. This is not a problem with them, rather a problem with our field: we don't have a rich enough vocabulary to talk about what these developers do. Not everybody who works with electricity is going to be an electrical engineer; many will be electricians. And this is okay. Electrical engineering is a very narrow skill set in the broader field of electric professions, and plenty of people have other important skills in that space. But we use things like "programmer", "software engineer", and "software developer" interchangeably. What is the difference between a software engineer and a software developer?1

Some people propose the word "software craftsman". This term comes from the book Software Craftsmanship: The New Imperative, by Pete McBreen. In it he argues that software is not a kind of engineering, being much more free-form, creative, and flexible. We're not line workers but artisans, artists, people who take pride in the craft we do and the flexibility of our work. As he puts it:

Software development is all about the unknown. The production process for software is trivially easy: just copy a disk or CD. The software engineering metaphor fails because we understand production, a mechanical task, much better than we understand design, an intellectual task. - Software Craftsmanship

Many people have asked me why I care so much about this project. Why does it matter whether or not software is "really" engineering? Why can't we just say that "software is software"? It's because of these misconceptions. People have a stereotyped notion of what engineering looks like. Because software doesn't look like the stereotype, they assume that we are wholly unlike engineering. The engineering disciplines have nothing to teach us. We are breaking pristine ground and have no broader history to guide us.

In contrast, if we are doing engineering, then we have a landmark. We can meaningfully compare and contrast the work we do as software engineers with the work that others do as traditional engineers. We can adapt their insights and watch for their pitfalls. We can draw upon the extant knowledge of our society to make better software.

I believe everything McBreen said about software is fairly reasonable, about how hard it is to predict things and how it's intensely personally created. What he gets wrong is the assumption that engineering is not this way. Engineering is much richer, more creative, and more artistic than he thought. But of course he would have an imperfect view: he is, after all, not a traditional engineer.

Here's my final take. This is the belief I've settled on in synthesizing all the interviews I did and does not necessarily reflect how the crossovers think. I went into this thinking that software wasn't really engineering. Maybe there were a few people who could count themselves as that, but most of us were far below that threshold. I still believe that most of us are not engineers, because we're working in domains that people don't see as engineering. Most people don't consider a website "engineered". However, and this is a big however, there's a much smaller gap between "software development" and "software engineering" than there is between "electrician" and "electrical engineer", or between "trade" and "engineering" in all other fields. Most people can go between "software craft" and "software engineering" without significant retraining. We are separated from engineering by circumstance, not by essence, and we can choose to bridge that gap at will.

In the next essay, we'll talk about the similarities and differences between traditional engineering and software engineering, and how they're actually not all that different in the end.

Part two, We are not Special, will be posted on Wednesday. You can check back here, subscribe to RSS, or join my newsletter to be notified. You can also follow me on twitter.

Thanks to Glenn Vanderburg, Chelsea Troy, Will Craft, and Dan Luu for feedback, and to all of the engineers whom I interviewed.

## Appendix: Methodology

I only sought out people who had worked professionally for at least one year in each field. In practice, work experience in trad engineering ranged from a year and a half on the low end to over 15 years on the high end. Interviews ranged from half an hour to two hours, leading to a total of 24 hours of recorded interview. Two of the interviewees were not recorded, but I took notes. The breakdown of the specialties is:

• Civil engineers work on buildings. One person I talked to was a classic civil engineer, one specialized in designing mines, and two worked on oil rigs.
• Mechanical engineers design physical machines.
• Electrical engineers design circuits and electronics. Two worked on chipsets, while one was embedded in a submarine.
• Chemical engineers create processes to make chemicals at scale. They are generally not creating entirely new chemicals or chemical products, but work on how to produce extant ones. "Chemicals" is a broad category here, covering everything from clean water to toothpaste.
• Industrial Engineers create holistic systems and procedures in an industry. One designed data center layouts and the other worked on integrating disparate air traffic systems.

These are very broad summaries of the specialties. Most engineers work in a subdomain of a specialty, such as automotive engineering or circuit design. There were also some fields of engineering I didn't cover in my interviews. This includes aerospace and nuclear engineering. I suspect (suspect) that aerospace would be roughly similar to mechanical engineering and nuclear engineering would be roughly similar to civil engineering. But it remains a threat to validity nonetheless.2

The other two threats to validity are geographic location and crossover type. The majority of the interviewees were either in the US or the UK, with the rest being from the EU or Canada. I did not get a chance to interview anybody from Latin America, South America, Africa, or Asia. Everybody interviewed crossed from traditional engineering to software engineering. I was not able to find anyone who crossed the other way.

1. Sadly, the answer here is "prestige". People have a nasty tendency to look down on people who are not engineers, in all fields, much the same way that doctors tend to look down on nurses. This is a problem with our society. [return]
2. After a lot of searching I found two but wasn't able to schedule interviews with them. [return]

## January 16, 2021

• Circuit VR: Even More Op Amps

In the last Circuit VR we looked at some basic op amp circuits in a simulator, including the non-inverting amplifier. Sometimes you want an amplifier that inverts the signal. That is, a 5V input results in a -5V output (or -10V if the amplifier has a gain of 2). This corresponds to a 180 degree phase shift, which can be useful in amplifiers, filters, and other circuits. Let's take a look at an example circuit simulated with falstad.
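As a minimal sketch of that input-output relationship (with illustrative resistor values, not the ones in the linked simulation):

```python
# Ideal inverting amplifier: vout = -(Rf / Rin) * vin. The sign flip
# is the 180 degree phase shift described above.
def inverting_vout(vin: float, r_f: float, r_in: float) -> float:
    return -(r_f / r_in) * vin

print(inverting_vout(5.0, 10e3, 10e3))  # unity gain -> -5.0 V
print(inverting_vout(5.0, 20e3, 10e3))  # gain of 2  -> -10.0 V
```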

Last time I mentioned two made-up rules that are good shortcuts for analyzing op amp circuits…

Read more: Circuit VR: Even More Op Amps - Hackaday

• In Other BSDs for 2021/01/16

Vermaden, who I link to on the regular, has been doing an excellent job of posting BSD links to lobste.rs.

• Yippee-Ki-Yay

I've been out of town this week on a work-related mission. The company I work for still won't let us fly due to the virus. Instead, they sent me on a twelve-hundred-mile round-trip car ride across the banana republic formerly known as the United States because that was safer?

Following last week's events you can understand why every car and pick-up truck I saw along the way made me wonder if its occupants were headed to DC or perhaps a state capital to create a little more mayhem in the coming days.

There's a persistent rumor that ham radio may have played some part in the storming of the United States Capitol.

There are unsubstantiated reports that one or more FM repeaters were turned off in DC during the riots because some in the mob were seen carrying VHF/UHF handheld radios and may have used them to coordinate the attack - something that probably won't make the news in QST next month.

There are also some enthusiasts sympathetic to the cause of the insurrection suggesting that those removed from social media sites should take up ham radio as a place where they can safely share their views without reprisal. Just what we need. More prick-waving hams on 75 meters declaring themselves patriots and followers of Jesus with enough guns and ammo to keep Trump in the White House for a Third Reich. Yippee-Ki-Yay.

• Chat without servers

I always thought IRC was pretty decentralized, but I didn't realize talk(1) was designed to work machine-to-machine. That means in theory that if you have a talk(1) binary on your machine, you could chat directly to anyone else with the same binary, even on a different platform. Since 4.3BSD! Anyway, I only realized this because of this recent bugfix thanks to Dan Cross.

## January 15, 2021

• Open Hardware Needs Policy Attention Now

From the Journal of Open HW:

Open Hardware Needs Policy Attention Now

Increasing government attention to "open" agendas, complemented by growing community capacity, has laid the groundwork for driving policy attention towards open hardware. The COVID-19 pandemic spotlighted the ability of open hardware communities to mobilize for disaster response, including through the design and production of personal protective equipment (PPE) and other medical supplies when traditional supply chains failed. A new Administration offers an opportunity to build on lessons learned from this unforeseen and extensive experiment in scaling open collaboration on hardware and also to revisit what has worked in the past for related fields such as community science and open source software. A whole-of-government approach to elevating open hardware, including for scientific research and disaster response, feels both timely and necessary in order to amplify effective activities and provide scaffolding for an even more impactful future.

To better understand potential opportunities, researchers and practitioners from the Wilson Center, Open Environmental Data Project, and University of Cambridge convened a workshop on October 28, 2020 to bring together members of the open hardware community, such as those involved in GOSH and OSHWA. Beginning with the question "What are you most excited about in open science hardware right now?", the workshop focused on establishing a value proposition for open hardware as a matter of public policy as well as elucidating open challenges that might be addressed by policy interventions. One goal of the workshop was to develop high-level consensus around "key messages" for policy makers, and a list of eleven suggestions was subsequently ranked by participants. This exercise made it clear that to refine these further, more work was needed to understand specific accelerators and barriers to the adoption and use of open hardware, and to align perspectives between the policy community and diverse developers and users of open hardware from academia, industry and community organisations operating across a broad range of disciplines.

Read more…

• First Mode Named Again as One of Seattle's "Best Places to Work"

First Mode is proud to be named by BuiltIn Seattle as one of the "Best Places to Work" for two years running! This award follows our designation as one of Fast Company's 100 Best Workplaces for Innovators in 2020.

Our people, our challenging and meaningful work, and our values are what make First Mode great. We're growing, and we'd love for you to join us. Visit our Careers page to see our current open positions.

• Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

• I'm speaking (online) as part of Western Washington University's Internet Studies Lecture Series on January 20, 2021.
• I'm speaking (online) at ITU Denmark on February 2, 2021. Details to come.
• I'm being interviewed by Keith Cronin as part of The Center for Innovation, Security, and New Technology's CSINT Conversations series, February 10, 2021 from 11:00 AM - 11:30 AM CST.
• I'll be speaking at an Informa event on February 28, 2021. Details to come.

The list is maintained on this page.

• 1/100,000th Scale World
• Cell Phone Location Privacy

We all know that our cell phones constantly give our location away to our mobile network operators; that's how they work. A group of researchers has figured out a way to fix that. "Pretty Good Phone Privacy" (PGPP) protects both user identity and user location using the existing cellular networks. It protects users from fake cell phone towers (IMSI-catchers) and surveillance by cell providers.

It's a clever system. The players are the user, a traditional mobile network operator (MNO) like AT&T or Verizon, and a new mobile virtual network operator (MVNO). MVNOs aren't new. They're intermediaries like Cricket and Boost.

Here's how it works:

1. One-time setup: The user's phone gets a new SIM from the MVNO. All MVNO SIMs are identical.
2. Monthly: The user pays their bill to the MVNO (credit card or otherwise) and the phone gets anonymous authentication tokens (using Chaum blind signatures; see the sketch after this list) for each time slice (e.g., hour) in the coming month.
3. Ongoing: When the phone talks to a tower (run by the MNO), it sends a token for the current time slice. This is relayed to a MVNO backend server, which checks the Chaum blind signature of the token. If it's valid, the MVNO tells the MNO that the user is authenticated, and the user receives a temporary random ID and an IP address. (Again, this is how MVNOs like Boost already work.)
4. On demand: The user uses the phone normally.
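For intuition, here is a toy sketch of the Chaum-style RSA blind-signature flow behind step 2. The key size, token format, and hashing are illustrative demo choices, not the actual PGPP implementation:

```python
import hashlib
import secrets

# Toy Chaum-style RSA blind signature: the MVNO signs per-time-slice
# tokens without ever seeing their contents, so a presented token
# can't be linked back to the billing identity. Tiny demo primes;
# a real deployment would use proper RSA key sizes and a vetted library.
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))     # MVNO's private exponent

def h(msg: bytes) -> int:
    """Hash a token into the RSA group (toy full-domain hash)."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# User: blind a token for one time slice before requesting a signature.
token = b"timeslice:2021-01-20T14"     # hypothetical token format
r = secrets.randbelow(n - 2) + 2       # blinding factor (coprime to n whp)
blinded = (h(token) * pow(r, e, n)) % n

# MVNO: signs the blinded value at billing time; it never sees `token`.
blind_sig = pow(blinded, d, n)         # (m * r^e)^d = m^d * r  (mod n)

# User: unblinds, leaving an ordinary RSA signature on the token.
sig = (blind_sig * pow(r, -1, n)) % n

# Later: the backend verifies the token is MVNO-signed, with no link
# to whoever requested the signature.
assert pow(sig, e, n) == h(token)
```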

The MNO doesn't have to modify its system in any way. The PGPP MVNO implementation is in software. The user's traffic is sent to the MVNO gateway and then out onto the Internet, potentially even using a VPN.

All connectivity is data connectivity in cell networks today. The user can choose to be data-only (e.g., use Signal for voice), or use the MVNO or a third party for VoIP service that will look just like normal telephony.

The group prototyped and tested everything with real phones in the lab. Their approach adds essentially zero latency and doesn't introduce any new bottlenecks, so it doesn't have the performance/scalability problems of most anonymity networks. The service could handle tens of millions of users on a single server, because it only has to do infrequent authentication, though for resilience you'd probably run more.

The paper is here.

• BSD Now 385: Wireguard VPN mesh

A very straightforward title in this week's BSD Now; worth listening to for more information on Wireguard, the new hotness.

• How to Come to Terms with What You've Become

Panel 3 would make a great Father's Day card, if your father has the sense of humor to handle it... which means in most cases it would be a terrible Father's Day card.

As always, thanks for using my Amazon Affiliate links (US, UK, Canada).

## January 14, 2021

• Across the Brown Dwarf Palette

Something to note about the brown dwarfs we looked at yesterday: Our views on how they would appear to someone nearby in visible light are changing. It's an interesting issue because these brown dwarfs exist in more than a single type. If you'll have a look at the image below, you'll see a NASA artist conception of the three classes of brown dwarf, all of these being objects that lack the mass to burn with sustained fusion.

Image: This artist's conception illustrates what brown dwarfs of different types might look like to a hypothetical interstellar traveler who has flown a spaceship to each one. Brown dwarfs are like stars, but they aren't massive enough to fuse atoms steadily and shine with starlight, as our sun does so well. Our thoughts on how these objects appear are evolving quickly, as witness yesterday's discussion, and we're likely to need another visual rendering of brown dwarf classes soon. Credit: NASA/JPL-Caltech.

One thing should jump out to anyone who read yesterday's post on the appearance of Luhman 16 B: The artist here does not depict bands of clouds/weather on the object, but rather localized storms of the kind that some researchers believed would characterize brown dwarfs. We know that Luhman 16 A (33 times Jupiter's mass) is of spectral type L7.5, while Luhman 16 B is categorized as T0.5, putting it near the transition between types L and T. And Luhman 16 B shows strong evidence of banding.

That's according to Daniel Apai and team, as discussed yesterday, in an analysis based on data from TESS. Looking further at the image above, it's clear we're going to be re-working our depictions going forward as we analyze more brown dwarfs. If we should expect a banded object at the L-T transition, then at least the L dwarf and the T dwarf shown here will likely show the same atmospheric pattern (obviously, we'll need to confirm these speculations with hard data). That would leave the Y dwarf as yet undetermined, and for good reason, as these objects are vanishingly hard to see.

Atmospheric temperatures drop as we move across the types of brown dwarfs here, with the L dwarf being the brightest and hottest in the image; its typical temperatures are in the range of 1400 degrees Celsius. The magenta T dwarf takes us down to about 900 degrees Celsius, but the Y dwarf really drops the reading, with the coldest yet identified having a temperature of a mere 25 degrees Celsius. That's not all that far off what my thermostat is set to (72F) as I try to take the chill off this morning.

All three of the brown dwarfs shown above appear at the same size, a reminder that all types of this object have the same dimension, which is roughly that of Jupiter, despite wide variations in their mass. Same radius, major disparity in mass, in other words. My hopes that we would find one of these fascinating objects at no more than, say, 1 light year seem to have been dashed, although it's certainly true that Y dwarfs are so cool that finding them is going to be difficult even for the best infrared observatories.

As we keep looking, we can now refer to the updated map of L, T and Y dwarfs in the vicinity of the Solar System that the Backyard Worlds: Planet 9 project has produced. You'll recall from earlier posts here that Backyard Worlds: Planet 9 is funded by NASA as a collaboration between professional scientists and the public.

All those non-professional but often highly adept astronomers and volunteers have produced a map with a radius of about 65 light years. The work of 150,000 volunteers has been going on since 2017 using data from the WISE mission under its Near-Earth Object Wide-Field Infrared Survey Explorer (NEOWISE) incarnation. The study was presented at the ongoing virtual meeting of the American Astronomical Society.

Dozens of new brown dwarfs turned up in this work, which drew on data from the now retired Spitzer Space Telescope. Using the Backyard Worlds: Planet 9 results, astronomers consulted data from the space telescope to observe 361 local brown dwarfs of types L, T and Y and combined the results with previously known dwarfs, many of them catalogued by CatWise, the catalog of objects from WISE and NEOWISE.

The result: a 3D map of 525 brown dwarfs.

Image: In this artist's rendering, the small white orb represents a white dwarf (a remnant of a long-dead Sun-like star), while the purple foreground object is a newly discovered brown dwarf companion, confirmed by NASA's Spitzer Space Telescope. This faint brown dwarf was previously overlooked until being spotted by citizen scientists working with Backyard Worlds: Planet 9, a NASA-funded citizen science project. Credits: NOIRLab/NSF/AURA/P. Marenfeld/Acknowledgement: William Pendrill.

The galaxy's coldest known Y dwarf is a neighbor (not surprising, given that more distant dwarfs should be below the level of detection), but it turns out that it is comparatively rare, a bit of an anomaly given our expectations of brown dwarf distribution. Of the seven objects nearest to our Solar System, three are brown dwarfs. And the Sun's position within this cluster of nearby objects is a bit unusual as well, says Aaron Meisner (National Science Foundation NOIRLab), a co-author of the study:

âIf you were to put the Sun at a random place within our 3D map and you were to ask, âTypically, what do its neighbors look like?â We find that they would look very different from what our actual neighbors are.â

Again, we have to weigh this outcome against the difficulty in observing Y dwarfs, so conclusions shouldn't be drawn too hastily. With brown dwarfs having exoplanet dimensions but no companion main sequence star (in most cases), they become useful objects as we refine the tools of exoplanet characterization. The James Webb Space Telescope should be able to tell us more about nearby brown dwarfs, as will the upcoming SPHEREx mission, an all-sky infrared survey scheduled for a 2024 launch.

The paper is Marocco et al., "The CatWISE2020 Catalog," accepted for publication in the Astrophysical Journal Supplement Series (abstract/preprint).

• Helping Hands, Reinvented

[Nixie] was tired of using whatever happens to be around to hold things in place while soldering and testing. It was high time to obtain a helping hands of some kind, but [Nixie] was dismayed by commercial offerings - the plain old alligator clips and cast metal type leave a lot to be desired, and the cooling tube cephalopod type usually have the alligator clips just jammed into the standard tube ends with no thought given to fine control or the possibility of reducing cable count.

[Nixie] happened to have some unneeded cooling tube lying around and started designing a new type of helping hands from the ground plane up. Taking advantage of the fact that cooling tubes are hollow, [Nixie] routed silicone-jacketed wires through them for power and low speed signals. These are soldered to five banana jacks that are evenly spaced around an alligator clip.

Read more…

• Mathematicians Resurrect Hilbertâs 13th Problem

Success is rare in math. Just ask Benson Farb.

âThe hard part about math is that youâre failing 90% of the time, and you have to be the kind of person who can fail 90% of the time,â Farb once said at a dinner party. When another guest, also a mathematician, expressed amazement that he succeeded 10% of the time, he quickly admitted, âNo, no, no, I was exaggerating my success rate. Greatly.â

Farb, a topologist at the University of Chicago, couldn't be happier about his latest failure - though, to be fair, it isn't his alone. It revolves around a problem that, curiously, is both solved and unsolved, closed and open.

The problem was the 13th of 23 then-unsolved math problems that the German mathematician David Hilbert, at the turn of the 20th century, predicted would shape the future of the field. The problem asks a question about solving seventh-degree polynomial equations. The term "polynomial" means a string of mathematical terms - each composed of numerical coefficients and variables raised to powers - connected by means of addition and subtraction. "Seventh-degree" means that the largest exponent in the string is 7.

Mathematicians already have slick and efficient recipes for solving equations of second, third, and to an extent fourth degree. These formulas - like the familiar quadratic formula for degree 2 - involve algebraic operations, meaning only arithmetic and radicals (square roots, for example). But the higher the exponent, the thornier the equation becomes, and solving it approaches impossibility. Hilbert's 13th problem asks whether seventh-degree equations can be solved using a composition of addition, subtraction, multiplication and division plus algebraic functions of two variables, tops.
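For concreteness (stating the classical reduction as I understand it): a general seventh-degree equation can be reduced by radicals to a three-parameter form, and Hilbert asked whether its root, viewed as a function of the coefficients $a$, $b$, $c$, can be built from functions of at most two variables:

$x^7 + ax^3 + bx^2 + cx + 1 = 0$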

The answer is probably no. But to Farb, the question is not just about solving a complicated type of algebraic equation. Hilbert's 13th is one of the most fundamental open problems in math, he said, because it provokes deep questions: How complicated are polynomials, and how do we measure that? "A huge swath of modern mathematics was invented in order to understand the roots of polynomials," Farb said.

The problem has led him and the mathematician Jesse Wolfson of the University of California, Irvine into a mathematical rabbit hole, whose tunnels they're still exploring. They've also drafted Mark Kisin, a number theorist at Harvard University and an old friend of Farb's, to help them excavate.

They still haven't solved Hilbert's 13th problem and probably aren't even close, Farb admitted. But they have unearthed mathematical strategies that had practically disappeared, and they have explored connections between the problem and a variety of fields including complex analysis, topology, number theory, representation theory and algebraic geometry. In doing so, they've made inroads of their own, especially in connecting polynomials to geometry and narrowing the field of possible answers to Hilbert's question. Their work also suggests a way to classify polynomials using metrics of complexity - analogous to the complexity classes associated with the unsolved P vs. NP problem.

âTheyâve really managed to extract from the question a more interesting versionâ than ones previously studied, said Daniel Litt, a mathematician at the University of Georgia. âTheyâre making the mathematics community aware of many natural and interesting questions.â

## Open and Shut, and Open Again

Many mathematicians already thought the problem was solved. That's because a Soviet prodigy named Vladimir Arnold and his mentor, Andrey Nikolayevich Kolmogorov, published proofs of it in the late 1950s. For most mathematicians, the Arnold-Kolmogorov work closed the book. Even Wikipedia - not a definitive source, but a reasonable proxy for public knowledge - until recently declared the case closed.

But five years ago, Farb came across a few tantalizing lines in an essay by Arnold, in which the famous mathematician reflected on his work and career. Farb was surprised to see that Arnold described Hilbert's 13th problem as open and had actually spent four decades trying to solve the problem that he'd supposedly already conquered.

âThere are all these papers that would just literally repeat that it was solved. They clearly had no understanding of the actual problem,â Farb said. He was already working with Wolfson, then a postdoctoral researcher, on a topology project, and when he shared what heâd found in Arnoldâs paper, Wolfson jumped in. In 2017, during a seminar celebrating Farbâs 50th birthday, Kisin listened to Wolfsonâs talk and realized with surprise that their ideas about polynomials were related to questions in his own work in number theory. He joined the collaboration.

The reason for the confusion about the problem soon became clear: Kolmogorov and Arnold had solved only a variant of the problem. Their solution involved what mathematicians call continuous functions, which are functions without abrupt discontinuities, or cusps. They include familiar operations like sine, cosine and exponential functions, as well as more exotic ones.

But researchers disagree on whether Hilbert was interested in this approach. "Many mathematicians believe that Hilbert really meant algebraic functions, not continuous functions," said Zinovy Reichstein, a mathematician at the University of British Columbia. Farb and Wolfson have been working on the problem they believe Hilbert intended ever since their discovery.

Hilbert's 13th, Farb said, is a kaleidoscope. "You open this thing up, and the more you put into it, the more new directions and ideas you get," he said. "It cracks open the door to a whole array, this whole beautiful web of math."

## The Roots of the Matter

Mathematicians have been probing polynomials for as long as math has been around. Stone tablets carved more than 3,000 years ago show that ancient Babylonian mathematicians used a formula to solve polynomials of second degree - a cuneiform forebear of the same quadratic formula that algebra students learn today. That formula, $x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$, tells you how to find the roots, or the values of x that make an expression equal to zero, of the second-degree polynomial $ax^2 + bx + c$.

Over time, mathematicians naturally wondered if such clean formulas existed for higher-degree polynomials. "The multi-millennial history of this problem is to get back to something that powerful and simple and effective," said Wolfson.

The higher a polynomial's degree, the more unwieldy it becomes. In his 1545 book Ars Magna, the Italian polymath Gerolamo Cardano published formulas for finding the roots of cubic (third-degree) and quartic (fourth-degree) polynomials.

The roots of a cubic polynomial written $ax^3 + bx^2 + cx + d = 0$ can be found using this formula:
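In its most compact form, after substituting $x = t - \frac{b}{3a}$ to remove the quadratic term, Cardano's solution to the depressed cubic $t^3 + pt + q = 0$ gives one root as:

$t = \sqrt[3]{-\frac{q}{2} + \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}} + \sqrt[3]{-\frac{q}{2} - \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}}$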

The quartic formula is even worse.

âAs they go up in degree, they go up in complexity; they form a tower of complexities,â said Curt McMullen of Harvard. âHow can we capture that tower of complexities?â

The Italian mathematician Paolo Ruffini argued in 1799 that polynomials of degree 5 or higher couldn't be solved using arithmetic and radicals; the Norwegian Niels Henrik Abel proved it in 1824. In other words, there can be no similar "quintic formula." Fortunately, other ideas emerged that suggested ways forward for higher-degree polynomials, which could be simplified through substitution. For example, in 1786, a Swedish lawyer named Erland Bring showed that any quintic polynomial equation of the form $ax^5 + bx^4 + cx^3 + dx^2 + ex + f = 0$ could be retooled as $px^5 + qx + 1 = 0$ (where p and q are complex numbers determined by a, b, c, d, e and f). This pointed to new ways of approaching the inherent but hidden rules of polynomials.

In the 19th century, William Rowan Hamilton picked up where Bring and others had left off. He showed, among other things, that to find the roots of any sixth-degree polynomial equation, you only need the usual arithmetic operations, some square and cube roots, and an algebraic formula that depends on only two parameters.

In 1975, the American algebraist Richard Brauer at Harvard introduced the idea of "resolvent degree," which describes the smallest number of parameters needed to express the roots of a polynomial of a given degree. (Less than a year later, Arnold and Japanese number theorist Goro Shimura introduced nearly the same definition in another paper.)

In Brauer's framework, which represented the first attempt to codify the rules of such substitutions, Hilbert's 13th problem asks whether it's possible for seventh-degree polynomials to have a resolvent degree of less than 3; Hilbert later made similar conjectures about sixth- and eighth-degree polynomials.

But these questions also invoke a broader one: What's the smallest number of parameters you need to find the roots of any polynomial? How low can you go?

## Thinking Visually

A natural way to approach this question is to think about what polynomials look like. A polynomial can be written as a function, such as $latex{f(x)=x^2 - 3x + 1}$, and that function can be graphed. Finding the roots then becomes a matter of finding where the curve crosses the x-axis, since those are the points where the function's value is 0.
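For the example above, the quadratic formula from earlier pins those crossing points down exactly (here a = 1, b = -3, c = 1):

$latex{x=\frac{3 \pm \sqrt{(-3)^2 - 4(1)(1)}}{2(1)} = \frac{3 \pm \sqrt{5}}{2}}$

so the graph crosses the x-axis at roughly 0.38 and 2.62.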

Higher-degree polynomials give rise to more complicated figures. Third-degree polynomial functions with three variables, for example, produce smooth but twisty surfaces embedded in three dimensions. And again, by knowing where to look on these figures, mathematicians can learn more about their underlying polynomial structure.

As a result, many efforts to understand polynomials borrow from algebraic geometry and topology, mathematical fields that focus on what happens when shapes and figures are projected, deformed, squashed, stretched or otherwise transformed without breaking. "Henri Poincaré basically invented the field of topology, and he explicitly said he was doing it in order to understand algebraic functions," said Farb. "At the time, people were really wrestling with these fundamental connections."

Hilbert himself unearthed a particularly remarkable connection by applying geometry to the problem. By the time he enumerated his problems in 1900, mathematicians had a vast array of tricks to reduce polynomials, but they still couldn't make progress. In 1927, however, Hilbert described a new trick. He began by identifying all the possible ways to simplify ninth-degree polynomials, and he found within them a family of special cubic surfaces.

Hilbert already knew that every smooth cubic surface (a twisty shape defined by third-degree polynomials) contains exactly 27 straight lines, no matter how tangled it appears. (Those lines shift as the coefficients of the polynomials change.) He realized that if he knew one of those lines, he could simplify the ninth-degree polynomial to find its roots. The formula required only four parameters; in modern terms, that means the resolvent degree is at most 4.

âHilbertâs amazing insight was that this miracle of geometry â from a completely different world â could be leveraged to reduce the to 4,â Farb said.

## Toward a Web of Connections

As Kisin helped Farb and Wolfson connect the dots, they realized that the widespread assumption that Hilbert's 13th was solved had essentially closed off interest in a geometric approach to resolvent degree. In January 2020, Wolfson published a paper reviving the idea by extending Hilbert's geometric work on ninth-degree polynomials to a more general theory.

Hilbert had focused on cubic surfaces to solve ninth-degree polynomials in one variable. But what about higher-degree polynomials? To solve those in a similar way, Wolfson thought, you could replace that cubic surface with some higher-dimensional "hypersurface" formed by those higher-degree polynomials in many variables. The geometry of these is less understood, but in the last few decades mathematicians have proved that, in certain cases, these hypersurfaces always contain lines.

Hilbert's idea of using a line on a cubic surface to solve a ninth-degree polynomial can be extended to lines on these higher-dimensional hypersurfaces. Wolfson used this method to find new, simpler formulas for polynomials of certain degrees. That means that even if you can't visualize it, you can solve a 100-degree polynomial "simply" by finding a plane on a multidimensional cubic hypersurface (47 dimensions, in this case).

With this new method, Wolfson confirmed Hilbert's value of the resolvent degree for ninth-degree polynomials. And for other degrees of polynomials, especially those above degree 9, his method narrows down the possible values for the resolvent degree.

Thus, this isn't a direct attack on Hilbert's 13th, but rather on polynomials in general. "They kind of found some adjacent questions and made progress on those, some of them long-standing, in the hopes that that will shed light on the original question," McMullen said. And their work points to new ways of thinking about these mathematical constructions.

This general theory of resolvent degree also shows that Hilbert's conjectures about sixth-degree, seventh-degree and eighth-degree equations are equivalent to problems in other, seemingly unrelated fields of math. Resolvent degree, Farb said, offers a way to categorize these problems by a kind of algebraic complexity, rather like grouping optimization problems in complexity classes.

Even though the theory began with Hilbert's 13th, however, mathematicians are skeptical that it can actually settle the open question about seventh-degree polynomials. It speaks to big, unexplored mathematical landscapes in unimaginable dimensions, but it hits a brick wall at the lower numbers, and it can't determine their resolvent degrees.

For McMullen, the lack of headway, despite these signs of progress, is itself interesting, as it suggests that the problem holds secrets that modern math simply can't comprehend. "We haven't been able to address this fundamental problem; that means there's some dark area we haven't pushed into," he said.

"Solving it would require entirely new ideas," said Reichstein, who has developed his own new ideas about simplifying polynomials using a concept he calls essential dimension. "There is no way of knowing where they will come from."

But the trio is undeterred. "I'm not going to give up on this," Farb said. "It's definitely become kind of the white whale. What keeps me going is this web of connections, the mathematics surrounding it."

• Repairing for Thriftiness! Zoom H4n tiny repair!

I'm always repairing stuff and find it a really rewarding thing to do for all kinds of reasons. I've been thinking about how repairs have different things that drive the desire to repair. Many times I repair because I need to use a tool that I've managed to break and can't wait for a replacement; sometimes I'm repairing something I've made, a crashed rocket or ripped parachute. Sometimes repairs are emotional and nostalgic, repairing an irreplaceable item I have an attachment to. Sometimes I feel repairs are important in terms of heritage, particularly older tools, the lathe, etc. Anyway... today I was thinking, sometimes I repair because I am thrifty!

I particularly wanted one of these Zoom H4n audio recorders as I wanted to explore a particular feature they have that's slightly different to others. They are an older model now; they were about £250 originally but can now be picked up in good condition for about £100. However, you can often find them listed cheaper with a bit of wear and a few faults. This one was listed with a few cosmetic scrapes, an intermittent fault on a button, and the SD card slot cover missing... it was therefore much cheaper!

On arrival, a quick blow through the case with some canned air cured the button press issue, and it's always worth searching eBay for people selling scrap/spares/repair versions of your item. I managed to find an entire right-hand-side chassis component for £3.20, complete with the SD card slot. A quick swap and it's not quite as good as new, but it's 100% functional and ready to go! For the repairers out there, these are pretty easy to work on: no glues and standard screws!

• Extracting Personal Information from Large Language Models Like GPT-2

Researchers have been able to find all sorts of personal information within GPT-2. This information was part of the training data, and can be extracted with the right sorts of queries.

Paper: "Extracting Training Data from Large Language Models."

Abstract: It has become common to publish large (billion parameter) language models that have been trained on private datasets. This paper demonstrates that in such settings, an adversary can perform a training data extraction attack to recover individual training examples by querying the language model.

We demonstrate our attack on GPT-2, a language model trained on scrapes of the public Internet, and are able to extract hundreds of verbatim text sequences from the model's training data. These extracted examples include (public) personally identifiable information (names, phone numbers, and email addresses), IRC conversations, code, and 128-bit UUIDs. Our attack is possible even though each of the above sequences are included in just one document in the training data.

We comprehensively evaluate our extraction attack to understand the factors that contribute to its success. For example, we find that larger models are more vulnerable than smaller models. We conclude by drawing lessons and discussing possible safeguards for training large language models.

From a blog post:

We generated a total of 600,000 samples by querying GPT-2 with three different sampling strategies. Each sample contains 256 tokens, or roughly 200 words on average. Among these samples, we selected 1,800 samples with abnormally high likelihood for manual inspection. Out of the 1,800 samples, we found 604 that contain text which is reproduced verbatim from the training set.
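To make that filtering step concrete, here is a minimal sketch (not the authors' code; the sample count, length and sampling settings are toy-sized placeholders) that ranks GPT-2 samples by perplexity, the basic "abnormally high likelihood" signal the researchers used to pick candidates for inspection:

```python
# Minimal sketch of likelihood-based filtering: generate samples from GPT-2,
# then rank them by perplexity so that unusually likely (low-perplexity)
# outputs can be inspected by hand for memorized training text.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text):
    # Perplexity = exp(mean negative log-likelihood of the tokens).
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Generate a handful of unconditional samples (the paper used 600,000,
# drawn with three different sampling strategies).
start = torch.tensor([[tokenizer.bos_token_id]])
outputs = model.generate(start, do_sample=True, top_k=40, max_length=64,
                         num_return_sequences=20,
                         pad_token_id=tokenizer.eos_token_id)
samples = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

# The most "abnormally likely" samples are the candidates for memorization.
for text in sorted(samples, key=perplexity)[:5]:
    print(round(perplexity(text), 1), text[:80])
```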

The rest of the blog post discusses the types of data they found.

## January 13, 2021

• Normal schedule during Lunar New Year 2021

Happy Lunar New Year! We would like to let our customers know that all OSH Park boards are manufactured in the United States, and we will be operating on a normal schedule during Lunar New Year:

Shipping Information and Turnaround Times

There are two periods of time to think about when making your order:

• Fabrication time: between when you place your order and when we receive boards from the fab.
• Shipping time: between when we ship and when the post office delivers your order to you.

All PCBs ship from Lake Oswego, Oregon, and are fully manufactured in the United States.

• TeX Live 2021 DragonFlyBSD

The short answer is: works great. The version in dports lags, because it's based on what's in the FreeBSD package collection, and that's not updated as quickly.

This is technically the prerelease, since the official one is a few months off. TeX Live binaries can be downloaded directly for DragonFly.

• The Crooked Geometry of Round Trips

Have you ever wondered what life would be like if Earth weren't shaped like a sphere? We take for granted the smooth ride through the solar system and the seamless sunsets afforded by the planet's rotational symmetry. A round Earth also makes it easy to figure out the fastest way to get from point A to point B: Just travel along the circle that goes through those two points and cuts the sphere in half. We use these shortest paths, called geodesics, to plan airplane routes and satellite orbits.

But what if we lived on a cube instead? Our world would wobble more, our horizons would be crooked, and our shortest paths would be harder to find. You might not spend much time imagining life on a cube, but mathematicians do: They study what travel looks like on all kinds of different shapes. And a recent discovery about round trips on a dodecahedron has changed the way we view an object we've been looking at for thousands of years.

Finding the shortest round trip on a given shape might seem as simple as picking a direction and walking in a straight line. Eventually you'll end up back where you started, right? Well, it depends on the shape you're walking on. If it's a sphere, yes. (And, yes, we're ignoring the fact that the Earth isn't a perfect sphere and its surface isn't exactly smooth.) On a sphere, straight paths follow "great circles," which are geodesics like the equator. If you walk around the equator, after about 25,000 miles you'll come full circle and end up right back where you started.

On a cubic world, geodesics are less obvious. Finding a straight path on a single face is easy, since each face is flat. But if you were walking around a cubic world, how would you continue to go "straight" when you reached an edge?

There's a fun old math problem that illustrates the answer to our question. Imagine an ant on one corner of a cube who wants to get to the opposite corner. What's the shortest path on the surface of the cube to get from A to B?

You could imagine lots of different paths for the ant to take.

But which is the shortest? There's an ingenious technique for solving the problem. We flatten out the cube!

If the cube were made of paper, you could cut along the edges and flatten it out to get a "net" like this.

In this flat world, the shortest path from A to B is easy to find: Just draw a straight line between them.

To see what our cube-world geodesic looks like, just put the cube back together. Here's our shortest path.

Flattening out the cube works because each face of the cube is itself flat, so nothing gets distorted as we unfold along the edges. (A similar attempt to "unfold" a sphere like this wouldn't work, as we can't flatten out a sphere without distorting it.)

Now that we have a sense of what straight paths look like on a cube, let's revisit the question of whether we can walk along any straight path and eventually end up back where we started. Unlike on the sphere, not every straight path makes a round trip on a cube.

But round trips do exist, with a catch. Notice that the ant could continue along the path we mapped out above and end up back where it started. On a cube, coming full circle produces a path that looks more like a rhombus.

In following this round-trip path, the ant has to pass through another vertex (point B) before returning to its starting point. That's the catch: Every straight path that starts and ends on the same vertex must pass through another vertex of the cube.

This turns out to be true for four of the five Platonic solids. On the cube, tetrahedron, octahedron and icosahedron, any straight path that starts and ends on the same vertex must pass through some other vertex along the way. Mathematicians proved this five years ago, but the dodecahedron wasn't on their list. We'll return to that later.

To get a sense of why this fact about geodesics is true on four of the five Platonic solids, we'll take a "tumbling" approach to these paths, and we'll switch over to a tetrahedral world where tumbling paths are a little easier to study.

Imagine starting from a vertex of a tetrahedron and heading out on a straight path along a face. Let's orient our tetrahedron so that our path starts on the bottom face.

When we meet an edge, we will roll the tetrahedron over, so that our path continues on the face that ends up on the bottom:

This tumbling diagram gives us a way to track our path just as we did on the net of the cube:

The tumbling path above represents this path on the surface of the tetrahedron:

Here the five tumbles of the tetrahedron correspond to the additional five faces traversed by the path.

Now we can imagine any path on the surface of the tetrahedron as a path in this tumbling space. Let's call our starting point A and see where this point ends up after some tumbling.

As our path leaves from A, the tetrahedron tumbles over the opposite side. This lifts A off the ground.

Vertex A is temporarily suspended above our tumbling space. We wouldn't normally indicate A's location when creating our tumbling space, but here's where it would appear if we were looking down.

As the path continues, the tetrahedron tumbles again. There are two directions it could go, but either way A ends up back on the ground.

As we let the tetrahedron tumble away in every possible direction, we end up with a tumbling space that looks like this:

This creates a grid system because of the way the equilateral triangular faces of the tetrahedron fit together.

This grid system tells us two interesting things about our tumbling space. First, the points where vertices of the tetrahedron can land are all "lattice points," or points with integer coordinates. That's because one unit in our coordinate system is one edge length of our tetrahedron.

Second, take a look at where A can end up.

The coordinates of A are always even. Whenever A is on the ground, it will be back on the ground two tumbles later, so the possible landing spots for A are all spaced out by two edge lengths in each tumbling direction.

Now let's see what this says about geodesics. Recall that a path on the tetrahedron that starts and ends at A will be a straight line segment in the tumbling space starting at the A at (0,0) and ending at another A. And when the starting and ending points of the path are both A's, there's something quite interesting about the midpoint of the path.

Even in our crooked coordinate system the standard midpoint formula still works, so we can find the coordinates of our midpoint by averaging the coordinates of the endpoints. Since the coordinates of the starting point are both 0 and the coordinates of the ending point are both even, the coordinates of our midpoint are both integers. This makes the midpoint a lattice point, and as we observed above, it therefore corresponds to a vertex of the triangle in the tumbling space.
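In symbols: if a path runs from the A at (0, 0) to a copy of A at (2m, 2n), its midpoint is

$latex{\left(\frac{0 + 2m}{2}, \frac{0 + 2n}{2}\right) = (m, n)}$

a point with whole-number coordinates, and therefore a lattice point.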

For example, the path from (0,0) to (4,2) has midpoint (2,1), a lattice point in our grid.

That means that on the surface of the tetrahedron, this path from A to itself must pass through another vertex along the way.

Since every possible landing spot for A has even coordinates in this system, the midpoint of every geodesic path starting and ending at A will correspond to a lattice point. This shows that every geodesic from A to A on the surface of the tetrahedron must pass through another vertex.

This is a simple version of an argument that was made rigorous in 2015 by the mathematicians Diana Davis, Victor Dods, Cynthia Traub and Jed Yang. They used a similar but much more complicated argument to prove the same for the cube. Dmitry Fuchs proved the results for the octahedron and icosahedron the next year. Because of this, we know that for the tetrahedron, cube, octahedron and icosahedron, there are no straight paths going from a vertex back to itself that donât pass through another vertex.

But the existence of such paths on the surface of the dodecahedron remained an open question until 2019, when the mathematicians Jayadev Athreya, David Aulicino and Patrick Hooper proved it was actually possible. In fact, they found infinitely many straight paths on the surface of the dodecahedron that start and end on the same vertex without passing through any others.

Here's one shown on the net of the dodecahedron, hiding in plain sight.

For thousands of years the Platonic solids have been studied together because they have so much in common. But now we know something new about the dodecahedron that is decidedly different. This mysterious discovery shows that no matter how well we understand mathematical objects, there's always more to learn. It also shows that the path from problem to solution won't always look like a straight line.

## Exercises

1. If the cube has edge length 1, how long is the ant's shortest path from vertex to opposing vertex?

2. Explain why the diagram below could not be the tumbling path for a path on the cube.

3. One complication with tumbling paths of the cube is that point A doesn't have a unique end position associated with a given ending location of the cube. For example, even though the cube ends up in the same location tumbling along either the red path or the blue one, point A ends up in different positions. Determine where A ends up after tumbling along the red path and the blue path.

4. Hereâs a valid tumbling path for a path on the cube.

Sketch the path on the surface of a cube starting at A.

## Answers

Click for Answer 1:

The path is the hypotenuse of a right triangle with legs of length 1 and 2. According to the Pythagorean theorem, the length of AB is $latex{\sqrt{5}}$.

Click for Answer 2:

If a path forces the cube to initially tumble twice to the right, then its "slope" is at most 1 cube up per two cubes right. After the first tumble up, the highest the path could reach is halfway up the side, which would force the next tumble to be rightward. This gives some insight into why tumbling paths of the cube are more complicated than those of the tetrahedron.

Click for Answer 3:

Acting this out with a Rubik's cube or a die is helpful. Notice also that the blue route could not be the tumbling path for a path on the cube.

Click for Answer 4:

Correction: January 13, 2021

This column was revised to make clear that the five tumbles of the tetrahedron shown correspond to the five "additional" faces traversed by the path, as the path traverses six faces in total.

• What Does the Closest Brown Dwarf Look Like?

I keep hoping we'll find a brown dwarf closer to us than Alpha Centauri, but none have turned up yet despite the best efforts of missions like WISE (Wide-Field Infrared Survey Explorer). If there's something out there, it's dim indeed. Of course, I wouldn't be surprised at finding rogue planets between us and the nearest stars. Maybe some will be more massive than Jupiter, but evidently not massive enough to throw an infrared signature of the sort that defines a brown dwarf. Just what lies outside our system's edge always makes for interesting speculation.

The beauty of finding an actual brown dwarf as opposed to a rogue planet is that we might be dealing with a planetary system in miniature, a fine target in our own backyards. Lacking that, the closest brown dwarf we know is the Luhman 16 AB system, a binary in the southern constellation of Vela some 6.5 light years from the Sun (a little further than Barnard's Star, making this the third closest known system to the Sun). Here we have one dwarf about 34 times Jupiter's mass (Luhman 16 A), and another, Luhman 16 B, about 28 times more massive than Jupiter, and because both are brown dwarfs, both are hotter than the planet.

Luhman 16 AB is the subject of a new paper from Daniel Apai (University of Arizona / Lunar and Planetary Laboratory). Apai's team was intent on finding out what brown dwarfs look like, wondering whether they'd be marked by the kind of well-defined banding and belts we see on Jupiter or roiling with storms of the kind we've seen (thanks to Juno) at Jupiter's poles. The method: Using data from TESS (Transiting Exoplanet Survey Satellite), the researchers deployed in-house algorithms to measure brightness changes of the two brown dwarfs as they rotated. Brighter atmospheric features rotate in and out of view.
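The basic idea behind pulling rotation information out of such brightness measurements can be sketched in a few lines (a toy example, not the team's in-house pipeline; the 22-day baseline matches the TESS coverage discussed below, while the five-hour rotation period, cadence and noise level are illustrative assumptions):

```python
# Toy sketch: recover a rotation period from a brightness time series.
# Features rotating in and out of view modulate the lightcurve, so a
# periodogram peak marks the rotation period.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 22, 2000))        # ~22 days of observations (days)
p_rot = 5.0 / 24                             # assumed 5-hour rotation, in days
flux = (1 + 0.01 * np.sin(2 * np.pi * t / p_rot)
          + 0.002 * rng.standard_normal(t.size))  # 1% spot signal plus noise

freq, power = LombScargle(t, flux).autopower()
print("recovered period (hours):", 24 / freq[np.argmax(power)])
```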

What emerged was the most detailed look yet at a brown dwarf's atmospheric circulation, and that led to conclusions about the appearance of these objects. We now know that Jupiter is a good analogy for what we would see if we could look at Luhman 16 AB up close. The work created a model for Luhman 16 B's atmosphere showing that high-speed winds run parallel to the brown dwarf's equator. Also like Jupiter are the apparent vortices emerging in the polar regions. Here we need to pause to thank the late Adam Showman, also of the University of Arizona, whose models predicted this pattern.

Image: Using high-precision brightness measurements from NASA's TESS space telescope, astronomers found that the nearby brown dwarf Luhman 16 B's atmosphere is dominated by high-speed, global winds akin to Earth's jet stream system. This global circulation determines how clouds are distributed in the brown dwarf's atmosphere, giving it a striped appearance. Credit: Daniel Apai.

The lighter zones shown above are thought to be thin cloud decks illuminated by light from the hot interior, while the darker zones are where thicker cloud decks block interior light. The wind speeds are highest at the equator, dropping at the higher latitudes. The global wind pattern is lost at the poles, which are a region of enormous local storms, as on Jupiter. Most of Luhman 16 B, then, is dominated by global wind patterns rather than localized storms.

Something of a surprise to the team (the paper refers to the development as "a stunning feature") is the changeable, non-periodic nature of the Luhman 16 light curve. Here's how the authors describe this fact:

…we identify four properties that are shared between the visual lightcurve of this object and the infrared lightcurves of other objects: 1) The lightcurves remain variable over long periods (years); 2) The lightcurve shape evolves, yet it displays characteristic period, which is likely the rotational period of the object (as found in Apai et al. 2017); 3) In spite of the rapid evolution of the lightcurve, the amplitudes over rotational time-scales remain similar and characteristic to the object; 4) The lightcurves tend to be symmetric in the sense of similar amount of positive-negative features, in contrast to, for example, a situation in which a single positive feature appears periodically on an otherwise flat lightcurve, which would indicate a single bright spot in the atmosphere.

Image: This is Figure 16 from the paper. Caption: Sketch of the possible appearance of Luhman 16B, based on the emerging evidence. Zonal circulation models and comparison to Jupiter suggests that low-latitude regions are dominated by the fastest jets, and that wind speeds at mid-latitude are significantly lower. Circulation at the polar regions is likely to be vortex- and not jet-dominated. Cloud cover is likely to be correlated with the atmospheric circulation. Credit: Apai et al.

All this is drawn from TESS lightcurves of Luhman 16 AB covering 22 days and 100 rotations of the binary, allowing the researchers to conclude that both the brown dwarfs in this system show zonal circulation and fit the Jupiter model. It seems apparent that brown dwarfs can serve as more massive analogs of giant exoplanets and could thus help us develop techniques of atmospheric analysis that can be deployed even further from the Solar System. Says Apai:

âNo telescope is large enough to provide detailed images of planets or brown dwarfs. But by measuring how the brightness of these rotating objects changes over time, it is possible to create crude maps of their atmospheres â a technique that, in the future, could also be used to map Earthlike planets in other solar systems that might otherwise be hard to seeâ¦ Our study provides a template for future studies of similar objects on how to explore â and even map â the atmospheres of brown dwarfs and giant extrasolar planets without the need for telescopes powerful enough to resolve them visually.â

The paper is Apai et al. âTESS Observations of the Luhman 16 AB Brown Dwarf System: Rotational Periods, Lightcurve Evolution, and Zonal Circulation,â Astrophysical Journal Vol. 906, No. 1 (7 January 2021). Abstract / preprint.

• Changes in WhatsApp's Privacy Policy

If you're a WhatsApp user, pay attention to the changes in the privacy policy that you're being forced to agree with.

In 2016, WhatsApp gave users a one-time ability to opt out of having account data turned over to Facebook. Now, an updated privacy policy is changing that. Come next month, users will no longer have that choice. Some of the data that WhatsApp collects includes:

• User phone numbers
• Other people's phone numbers stored in address books
• Profile names
• Profile pictures
• Status messages, including when a user was last online
• Diagnostic data collected from app logs

Under the new terms, Facebook reserves the right to share collected data with its family of companies.

EDITED TO ADD (1/13): WhatsApp tries to explain.

• First Doses First

From No Learning Without Risk, by Alex Tabarrok at the excellent Marginal Revolution:

An important feature of First Doses First (FDF) and other policies such as fractional dosing is that they are reversible. In other words, FDF contains an option to switch back to Second Doses First (SDF). Options increase in value with uncertainty (Dixit and Pindyck 1994). Thus, contrary to many people's intuitions, the greater the uncertainty the greater the value of moving to First Doses First. Indeed, the value of the option can be so high that one might want to move to First Doses First even if it were worse in expectation. For example, if the expected efficacy of the first dose were just 45% then in expectation it would be worse than Second Doses First (95% efficacy), but if there were lots of uncertainty around the 45% expected efficacy it might still be better to switch to First Doses First. If there was a 75% chance that the efficacy of the first dose was 30%, for example, and a 25% chance that it was 90% (0.75 × 0.30 + 0.25 × 0.90 = 45%), then under reversibility one would still want to switch to First Doses First to learn whether the true efficacy was 30% or 90%.
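To make the option logic concrete, here is a minimal sketch using the numbers from the quote; the assumption that first-doses-first protects two people per pair of doses (the usual motivation for FDF) is mine, added for illustration:

```python
# Expected protection per pair of doses under each policy.
# SDF: one person fully vaccinated at 95% efficacy.
# FDF: two people get one dose each at the (uncertain) first-dose efficacy
#      (the two-people-per-pair coverage assumption is illustrative).
scenarios = {0.30: 0.75, 0.90: 0.25}  # first-dose efficacy -> probability
sdf = 0.95

# Committing to FDF: worse than SDF in expectation (0.90 < 0.95).
commit_fdf = sum(2 * eff * p for eff, p in scenarios.items())

# Reversible FDF: observe the true efficacy, then keep the better policy.
# The option to switch back caps the downside at SDF's 0.95.
reversible_fdf = sum(max(2 * eff, sdf) * p for eff, p in scenarios.items())

print(commit_fdf, reversible_fdf)  # 0.90 1.1625
```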

This sounds like a two-way door that might have treasure on the other side, so I'm glad that's where we're heading.

• 1/10,000th Scale World
• Also, dpkg update

This happened a little bit ago but I wanted to be able to post a solution to the pkg upgrade issue (yesterday) before mentioning it: there's a freshly built batch of packages for DragonFly, so now is a good time to upgrade with pkg.

• Microsoft Patch Tuesday, January 2021 Edition

Microsoft today released updates to plug more than 80 security holes in its Windows operating systems and other software, including one that is actively being exploited and another which was disclosed prior to today. Ten of the flaws earned Microsoft's most-dire "critical" rating, meaning they could be exploited by malware or miscreants to seize remote control over unpatched systems with little or no interaction from Windows users.

Most concerning of this month's batch is probably a critical bug (CVE-2021-1647) in Microsoft's default anti-malware suite, Windows Defender, that is seeing active exploitation. Microsoft recently stopped providing a great deal of detail in their vulnerability advisories, so it's not entirely clear how this is being exploited.

But Kevin Breen, director of research at Immersive Labs, says depending on the vector the flaw could be trivial to exploit.

âIt could be as simple as sending a file,â he said. âThe user doesnât need to interact with anything, as Defender will access it as soon as it is placed on the system.â

Fortunately, this bug is probably already patched by Microsoft on end-user systems, as the company continuously updates Defender outside of the normal monthly patch cycle.

Breen called attention to another critical vulnerability this month, CVE-2021-1660, which is a remote code execution flaw in nearly every version of Windows that earned a CVSS score of 8.8 (10 is the most dangerous).

âThey classify this vulnerability as âlowâ in complexity, meaning an attack could be easy to reproduce,â Breen said. âHowever, they also note that itâs âless likelyâ to be exploited, which seems counterintuitive. Without full context of this vulnerability, we have to rely on Microsoft to make the decision for us.â

CVE-2021-1660 is actually just one of five bugs in a core Microsoft service called Remote Procedure Call (RPC), which is responsible for a lot of heavy lifting in Windows. Some of the more memorable computer worms of the last decade spread automatically by exploiting RPC vulnerabilities.

Allan Liska, senior security architect at Recorded Future, said while it is concerning that so many vulnerabilities around the same component were released simultaneously, two previous vulnerabilities in RPC (CVE-2019-1409 and CVE-2018-8514) were not widely exploited.

The remaining 70 or so flaws patched this month earned Microsoft's less-dire "important" ratings, which is not to say they're much less of a security concern. Case in point: CVE-2021-1709, which is an "elevation of privilege" flaw in Windows 8 through 10 and Windows Server 2008 through 2019.

âUnfortunately, this type of vulnerability is often quickly exploited by attackers,â Liska said. âFor example, CVE-2019-1458 was announced on December 10th of 2019, and by December 19th an attacker was seen selling an exploit for the vulnerability on underground markets. So, while CVE-2021-1709 is only rated as [an information exposure flaw] by Microsoft it should be prioritized for patching.â

Trend Micro's Zero Day Initiative (ZDI) pointed out another flaw marked "important," CVE-2021-1648, an elevation of privilege bug in Windows 8, 10 and some Windows Server 2012 and 2019 versions that was publicly disclosed by ZDI prior to today.

âIt was also discovered by Google likely because this patch corrects a bug introduced by a previous patch,â ZDIâs Dustin Childs said. âThe previous CVE was being exploited in the wild, so itâs within reason to think this CVE will be actively exploited as well.â

Separately, Adobe released security updates to tackle at least eight vulnerabilities across a range of products, including Adobe Photoshop and Illustrator. There are no Flash Player updates because Adobe retired the browser plugin in December (hallelujah!), and Microsoft's update cycle from last month removed the program from Microsoft's browsers.

Windows 10 users should be aware that the operating system will download updates and install them all at once on its own schedule, closing out active programs and rebooting the system. If you wish to ensure Windows has been set to pause updating so you have ample opportunity to back up your files and/or system, see this guide.

Please back up your system before applying any of these updates. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once. You never know when a patch roll-up will bork your system or possibly damage important files. For those seeking more flexible and full-featured backup options (including incremental backups), Acronis and Macrium are two that I've used previously and are worth a look.

That said, there don't appear to be any major issues cropping up yet with this month's update batch. But before you apply updates consider paying a visit to AskWoody.com, which usually has the skinny on any reports about problematic patches.

As always, if you experience glitches or issues installing any of these patches this month, please consider leaving a comment about it below; there's a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

• How to Send Out Christmas Cards

Greetings cards are a weird concept. People try to express their unique feelings and personality by selecting a mass-produced card that was written by someone else and distributed to every grocery and drug store in the country.

Most of the cards could be replaced with a simple one that says, "You are experiencing an important event, and I have no idea what to say about it," because really, that's what every greeting card says.

As always, thanks for using my Amazon Affiliate links (US, UK, Canada).

## January 12, 2021

• SolarWinds: What Hit Us Could Hit Others

New research into the malware that set the stage for the megabreach at IT vendor SolarWinds shows the perpetrators spent months inside the company's software development labs honing their attack before inserting malicious code into updates that SolarWinds then shipped to thousands of customers. More worrisome, the research suggests the insidious methods used by the intruders to subvert the company's software development pipeline could be repurposed against many other major software providers.

In a blog post published Jan. 11, SolarWinds said the attackers first compromised its development environment on Sept. 4, 2019. Soon after, the attackers began testing code designed to surreptitiously inject backdoors into Orion, a suite of tools used by many Fortune 500 firms and a broad swath of the federal government to manage their internal networks.

Image: SolarWinds.

According to SolarWinds and a technical analysis from CrowdStrike, the intruders were trying to work out whether their "Sunspot" malware (designed specifically for use in undermining SolarWinds' software development process) could successfully insert their malicious "Sunburst" backdoor into Orion products without tripping any alarms or alerting Orion developers.

In October 2019, SolarWinds pushed an update to their Orion customers that contained the modified test code. By February 2020, the intruders had used Sunspot to inject the Sunburst backdoor into the Orion source code, which was then digitally signed by the company and propagated to customers via SolarWinds' software update process.

Crowdstrike said Sunspot was written to be able to detect when it was installed on a SolarWinds developer system, and to lie in wait until specific Orion source code files were accessed by developers. This allowed the intruders to "replace source code files during the build process, before compilation," Crowdstrike wrote.

The attackers also included safeguards to prevent the backdoor code lines from appearing in Orion software build logs, and checks to ensure that such tampering wouldn't cause build errors.

âThe design of SUNSPOT suggests [the malware] developers invested a lot of effort to ensure the code was properly inserted and remained undetected, and prioritized operational security to avoid revealing their presence in the build environment to SolarWinds developers,â CrowdStrike wrote.

A third malware strain â dubbed âTeardropâ by FireEye, the company that first disclosed the SolarWinds attack in December â was installed via the backdoored Orion updates on networks that the SolarWinds attackers wanted to plunder more deeply.

So far, the Teardrop malware has been found on several government networks, including the Commerce, Energy and Treasury departments, the Department of Justice and the Administrative Office of the U.S. Courts.

SolarWinds emphasized that while the Sunspot code was specifically designed to compromise the integrity of its software development process, that same process is likely common across the software industry.

âOur concern is that right now similar processes may exist in software development environments at other companies throughout the world,â said SolarWinds CEO Sudhakar Ramakrishna. âThe severity and complexity of this attack has taught us that more effectively combatting similar attacks in the future will require an industry-wide approach as well as public-private partnerships that leverage the skills, insight, knowledge, and resources of all constituents.â

• Morse Code The Office Segment

Morse is not outmoded, Jim! Morse code was featured in an episode of the hit show The Office. Husband and wife team Jim and Pam work together to drive Dwight crazy. Jim Halpert says to … Continue reading →

• Easy silkscreen labels in EAGLE and KiCad

Nick Pool has created an easy way to make silkscreen labels in EAGLE:

SparkFun Buzzard Label Generator

Greg Davill is now working on a version for KiCad:

Buzzard plugin for KiCad

• Military Cryptanalytics, Part III

The NSA has just declassified and released a redacted version of Military Cryptanalytics, Part III, by Lambros D. Callimahos, October 1977.

Parts I and II, by Lambros D. Callimahos and William F. Friedman, were released decades ago (I believe repeatedly, in increasingly unredacted form) and published by the late Wayne Griswold Barker's Aegean Park Press. I own them in hardcover.

Like Parts I and II, Part III is primarily concerned with pre-computer ciphers. At this point, the document only has historical interest. If there is any lesson for today, it's that modern cryptanalysis is possible primarily because people make mistakes.

The monograph took a while to become public. The cover page says that the initial FOIA request was made in July 2012: eight and a half years ago.

And there are more books to come. Page 1 starts off:

This text constitutes the third of six basic texts on the science of cryptanalytics. The first two texts together have covered most of the necessary fundamentals of cryptanalytics; this and the remaining three texts will be devoted to more specialized and more advanced aspects of the science.

Presumably, volumes IV, V, and VI are still hidden inside the classified libraries of the NSA.

And from page ii:

Chapters IV-XI are revisions of seven of my monographs in the NSA Technical Literature Series, viz: Monograph No. 19, "The Cryptanalysis of Ciphertext and Plaintext Autokey Systems"; Monograph No. 20, "The Analysis of Systems Employing Long or Continuous Keys"; Monograph No. 21, "The Analysis of Cylindrical Cipher Devices and Strip Cipher Systems"; Monograph No. 22, "The Analysis of Systems Employing Geared Disk Cryptomechanisms"; Monograph No. 23, "Fundamentals of Key Analysis"; Monograph No. 15, "An Introduction to Teleprinter Key Analysis"; and Monograph No. 18, "Ars Conjectandi: The Fundamentals of Cryptodiagnosis."

This points to a whole series of still-classified monographs whose titles we do not even know.

EDITED TO ADD: I have been informed by a reliable source that Parts 4 through 6 were never completed. There may be fragments and notes, but no finished works.

• A Prodigy Who Cracked Open the Cosmos

In 1972, Frank Wilczek and his thesis adviser, David Gross, discovered the basic theory of the strong force, the final pillar of the Standard Model of particle physics. Their work revealed the strange alchemy at work inside the nucleus of an atom. It also turned out to underpin almost all subsequent research into the early universe. Wilczek and Gross went on to share the 2004 Nobel Prize in Physics for the work. At the time it was done, Wilczek was just 21 years old.

His influence in the decades since has been profound. He predicted the existence of a hypothetical particle called the axion, which today is a leading candidate for dark matter. He published groundbreaking papers on the nature of the early universe. And just last year, his prediction of the "anyon," a strange type of particle that only shows up in two-dimensional systems, was experimentally confirmed.

Wilczek was raised in Queens, New York, the son of immigrants and the product of public schools. He finished high school in two years and college in three. He has taught at the Massachusetts Institute of Technology for 20 years, though pre-pandemic he spent much of his time flying around the world to his concurrent appointments at Arizona State University, Stockholm University and Shanghai Jiao Tong University, where he directs the Wilczek Quantum Center.

Wilczek's latest book, Fundamentals: Ten Keys to Reality, comes out today. Wilczek told Quanta that he was "as proud of this book as anything I've ever done in my life."

Quanta caught up with him twice at his home in Concord, Massachusetts, via Zoom in December. The interviews have been condensed and edited for clarity.

### Congratulations on Fundamentals. How did the idea for the book come to you?

Well, the original idea was rather different from what the book has turned out to be. I was going to write this little book that told some basic facts about the physical world. I thought of it as something that people could read and feel that they knew more than they did before, something that would give them material to talk about at cocktail parties.

But as I put together a draft, I wasn't happy with what I'd produced. There are already many books that do that, right? And then when I talked with my nonscientific friends (intelligent people, artists, people in literature), it didn't seem like anything they wanted to read.

So, I went back to my desk and I thought about when I was a teenager and when I was turning away from the Catholic faith I'd been raised in, what did I look for in a book about science and our place in the universe? What would have been helpful to me then? The book I ended up writing was that one.

Yes.

### There probably aren't many scientists at your level who'd go there.

Well, I happen to think these issues are too important for scientists to leave on the table.

You know, science actually has a lot to say about what God is or about what God did. Understanding his/her/its work: that's a direct way of informing yourself about what God is. There's a great tradition that goes back to Galileo, Kepler, Newton, Maxwell and Einstein of looking at religious questions. Most of those people were explicitly, deeply religiously Christian.

Einstein, of course, was not. He described himself as having an attitude, similar to that of Spinoza, of a kind of pantheism, a more abstract idea of the world as God.

When I was a teen questioning my family's Catholicism, I came upon this concept of complementarity that helped me a lot: There can be different ways of approaching the same question. Often there are different ways of describing the same thing. They can be valid each in their own terms, but they may sometimes be very difficult or even impossible to reconcile. So many of the conflicts between religion and science arise because they're answering different questions. There are many conflicts where religions say things that are just wrong, but there are many other domains in which they are just addressing different questions.

In science, you're broadly asking, "How does this work?" In religion, you're asking, "What does this mean and what should I do about it?"

### Are you religious?

Physics is my religious belief. In the sense that in physics we discover a fantastically wonderful world out there that's rich in potential, rich in realization, and that has ample scope for fantasy, because the laws are so strange and there's so much stuff out there to understand. And when you understand it, you understand how that could be.

I learn that I myself am very small. But I am also very large because I contain multitudes, as Walt Whitman said. I can process information. I can understand things. I can imagine. I can have fun. That's the essence of my religion. I learn my religion from the study of what the world is and how it works.

### Tell us a bit more about this teenage boy who was trying to figure out the universe and his place in it.

I grew up in New York City, in Queens. My parents were immigrants. My father from Poland. My mother from Italy.

I'd say we were somewhere between upper-lower class and lower-middle class. My father worked as a technician. My mother was a homemaker. Both were very intelligent, though they hadn't been to college. My father had to quit high school in order to help support his family during the Depression. But they were very invested in the idea that their children should have a better life than they did.

We were also very fortunate that we lived in New York and had access to the public school system that served me very well. Almost from the moment I entered school, I was given IQ tests. When I scored high, my parents were called in and told, "Frank is exceptional. You should do everything to help him."

### How did they respond to having a prodigy in the household?

Once the school started telling them how smart I was, it changed the dynamics within the family. I wasn't this little kid they could boss around anymore. Though they were themselves very intelligent, I think they were somewhat intimidated by me.

They allowed me to get toys that were clever, even though we couldn't afford them: mechanical toys and telescopes. We'd go to a special bookstore for science and philosophy books. I read a lot of Bertrand Russell, who meant a lot to me. I think my parents were kind of puzzled by me. When I wanted to do strange things, they'd let me do them. I think my mother sort of idolized me. It was assumed I'd become some kind of scientist. She always said, "You're going to cure cancer."

I'm going to say something very immodest here. The fact that in school I got this kind of validation that was very objective and not faked gave me enormous confidence. I did very well on various tests and competitions. I had a lot of confidence. That's stayed with me for the rest of my life. Scientists need that sort of confidence to take intellectual risks.

### Did you go to the Bronx High School of Science?

No. That was an hour's commute away. I went to a very large Queens public high school, Martin Van Buren, where I was on the nerd track with about 20 other kids. We all went to the same AP classes and hung out together. We're still in touch, to this day.

When I went off to the University of Chicago at 16 (I had skipped two grades), on the whole, the students there were not as good as my pals back at Van Buren. On the other hand, the professors were on a different plane. They wanted to understand things in a big way. At Chicago, at least from the faculty, I felt I was getting into contact with the cutting edge of human intellectual exploration. I had a lot of amorphous ambitions. Chicago was a good place to try to figure out what I wanted to do.

For instance, I quickly discovered that lab work wasn't for me. I saw it can be fussy work, repetitive. So I went on to explore computer science, physics, biology. I was in a holding pattern. Eventually I settled on mathematics because I love to do puzzles, and that merged with the idea of solving equations and manipulating data in the broad sense. I had these amorphous ideas like wanting to investigate the way the mind works using mathematical logic.

### Did math work out?

As a college major, it did. I finished in three years and then went off to do graduate work in mathematics at Princeton. There, I experienced a big shock.

Graduate school was not a matter of learning things that others had already figured out, but of producing something new. They gave me an office with a blackboard and said, "Do something interesting." Suddenly, I was being treated as an adult. I found this really difficult. I went into a kind of crisis.

But then I had a tremendous piece of luck. A miracle happened. I happened to attend some physics lectures, and that got me involved with this project of understanding how quantum field theory worked and whether it could be applied to the strong interaction. That was basically the first piece of serious research I'd ever tried to do. And it worked!

And it led directly to a Nobel Prize. I did hard calculations that nobody else could do. I was collaborating with David Gross, who was a much more experienced scientist. And so brilliant. Working with him, I felt that I was making a real contribution. In subsequent years I was able to spin off one successful application of those ideas after another, or do other creative things like axions, though that was much later.

I knew right away that if the theory we were proposing was correct, it was potentially a very big deal. I remember saying to David, "If the experiments bear this out, we'll get the Nobel Prize." Now, the prize came 30 years later, in 2004. But the personal validation was instantaneous. We did the calculations in the winter of 1972, and we published in the spring. That summer I went to my first conference. It was very small, at a place called the Downingtown Inn. But Feynman was there, the Feynman, and he talked about our work. He said, "This is really important." I was 21.

### You could have been crippled by such a remarkable early success. Many people are.

Not at all. I think for the kind of work I do, having self-confidence is pure gold. To break new ground, you need to be willing to go out on a limb and take risks. Because of my early successes, my mistakes, when I made them, were not as devastating as they could have been.

To be honest, I've made some. One was a mistake of lost opportunity. I feel I should have discovered inflation. I had all the necessary pieces; I was at the right place at the right time with the right kind of knowledge. I was just thinking about other things.

One of my big successes in recent years was in proposing the concept of a time crystal, which has kind of flourished into a whole field. But in the early paper, my main example turned out to be not very good. It was unstable technically. It was not scientifically sound. The general direction was good, but the detailed implementation was not.

### What do you tell yourself when you've made a mistake?

I don't dwell on it. Listen, I like working with imaginative ideas. Mistakes are sometimes the price of doing that.

### Let's talk about one of your imaginative ideas: axions. They are the hypothetical particle that may make up dark matter and thus may be a key to understanding the early universe. You've been theorizing and publishing about them for more than 40 years. How did you get started on axions?

One night in the summer of 1977 I took a long walk at Fermilab and decided to think about issues around the Higgs particle or particles. First I thought about how one might detect them. I basically cracked the problem, though that didn't become clear until much later. Then I thought about whether there might be more than one Higgs field, and if there was anything interesting one could say about that kind of possibility. The most intriguing idea that occurred to me was that with more than one Higgs field you could have symmetry patterns.

The next day I went to the library and looked up what people had done on all these questions. I found that Roberto Peccei and Helen Quinn had not only considered the Higgs symmetry, but showed that it could address a fundamental problem in physics. But Peccei and Quinn hadn't realized a very striking consequence of the symmetry, namely the existence of a very light particle: what we now call the axion. My first thought was that such a particle would surely have been observed if it existed, so I didn't work on it as urgently as I did on the other idea.

But as I considered it more closely, I gradually convinced myself that maybe the axion (which is what I called it, from the start) might possibly exist after all, because one after the other my attempts to rule it out failed. It's not every day that you stumble into new ways of asking and possibly answering such fundamental questions, so I got sucked in.

A few years later I got invited by Stephen Hawking to a now legendary conference, the 1982 Nuffield conference in Cambridge, where early-universe cosmology was the main subject. Stephen asked me to give the summary talk. I felt I should try to live up to the occasion and present some new ideas, so I raised the question of how the axion field settles down after the Big Bang, and whether it might leave an observable relic.

John Preskill, Mark Wise and I worked this out and discovered that in fact lots of axions get left over, and that they would have just the right properties to supply the "dark matter" that cosmologists seem to need.

### Given the fact that you and the others were heavy hitters, why were axions for so long the Rodney Dangerfields of the physics world? They didn't get much respect.

Well, axions were a very different kind of particle, quite different from anything before. Without an understanding of the deep theoretical ideas behind it, axions just seemed outlandish.

That's changed. The case for dark matter has become much stronger, and axions have become much more popular. Certainly, the case for dark matter in the universe is very strong now, almost universally accepted. That dark matter is probably some kind of new particle is almost universally accepted, too. It fits many, many independent lines of evidence. And it could be the axion. The axion has all the right properties, which is not trivial at all.

And also, the original fundamental problem that the axion was designed to address, this problem of why the laws look the same forward and backward in time, has been around for almost 50 years now, and axions are the only solution that stands up. So we need it. We need it. I think we can look forward to the discovery that the dark matter is in fact axions.
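
For context, the problem he is referring to is usually called the strong CP problem, and it can be stated in one line (a standard textbook sketch, not taken from the interview): QCD permits a term that would make the strong interactions look different run backward in time, yet experiment says its coefficient is absurdly small.

```latex
% The QCD theta-term: gauge invariant but T- and CP-violating (standard form).
\mathcal{L}_{\theta} = \bar{\theta}\, \frac{g_s^2}{32\pi^2}\, G^{a}_{\mu\nu} \tilde{G}^{a\,\mu\nu}
% A nonzero \bar{\theta} would give the neutron an electric dipole moment of order
% d_n \sim 10^{-16}\,\bar{\theta}\; e\cdot\mathrm{cm}, while measurements find
% |d_n| \lesssim 10^{-26}\; e\cdot\mathrm{cm}, forcing \bar{\theta} \lesssim 10^{-10}.
% The Peccei-Quinn mechanism promotes \bar{\theta} to a dynamical field a(x)/f_a whose
% potential relaxes it to the CP-conserving minimum; the axion is that field's quantum.
```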

### For a long time, the alternate leading candidate for dark matter was something called a WIMP. Has it fallen by the wayside?

I think so. It was supposed to emerge from supersymmetric unification theories, certainly an attractive idea.

But what's happened is that at the Large Hadron Collider, supersymmetry was not discovered, and then in a large number of other searches for WIMPs, they were not discovered, and so the empirical case against them mounted. By process of elimination, that left axions as kind of the last man standing. Now, it's not quite true that all modifications of the WIMP theory have been ruled out. But WIMPs are getting less plausible, whereas axions look better and better.

### If axions are proved to be the dark matter, will you have shaken up the physics world a second time?

Well, yes and no. If axions are it, it wouldn't shake everything up, because there's already a well-established body of theory about them. The basic concepts have been around for 40 years. There are thousands of papers. But still, it's one thing to have theoretical speculation and another to have the world confirm it.

I've enjoyed working on the axion; it's been fun. With others, I started in the early 1980s proposing that it might be a candidate for dark matter, and that opened up new ways to look for it. It was an interesting concept to work on compared to QCD, which had been solved. Today, many people are working on axions. That's inspiring, and it keeps me connected to younger people.

On the other hand, it's not yet the same thing as what David and I did on the strong interaction, which is part of the fabric of fundamental law.

### What are you working on now?

Several things: an eclectic mix. I'm very actively involved in trying to design axion antennas that will finally detect them. I'm having a lot of fun working with experimentalists and engineering-type people to do that.

I'm working on a kind of particle called anyons that I think could open up a whole new area of physics and could change computing. I have another bright idea on something related to quantum information theory.

I'm very interested to follow up on time crystals; I think that has flourished beyond my initial expectations. And it's gone in different directions.

### Have you had to put these projects on hold because of the pandemic?

Oh, no. No! They've been fostered because of it. I'm not schlepping around, going to conferences and traveling. I mean, I'm eating better. I've lost 15 pounds, taken up juggling, and started exercising. It's given me time to do creative thinking. In a way I've been going back to school, as if I were a graduate student. I want to learn more about machine learning. I'm going to be 70 in May, but I feel younger now than I've felt for many years.

You know, when I was a teen and trying to put it all together, I thought that life was a matter of figuring out the answers to questions and that was that. Now I'm learning that good answers lead to better questions, and that the cycle never ends.

• NASA Extends Agreement with Planet Under Its Commercial SmallSat Data Acquisition Program

All NASA-funded researchers now have continued access to PlanetScope and RapidEye imagery through September 2021 following an extension of our current agreement under NASA's Commercial SmallSat Data Acquisition (CSDA) Program. Through the NASA CSDA Program, scientists have used Planet imagery for a variety of research projects to date, and we're eager to see what innovative projects these researchers will pursue in the coming year.

In the last year, PlanetScope imagery was used to validate burned area models of wildfires, analyze the collapse of the last intact Arctic ice shelf due to the impacts of climate change, and assess landslide hazards in the Himalayas. NASA researchers also used PlanetScope imagery to aid farmers and herders in Africa through SERVIR, a joint initiative of NASA and the U.S. Agency for International Development (USAID). As watering holes become less reliable and predictable due to changes in rainfall in the region, SERVIR created a web-based tool to help nomadic herders in northern Senegal find watering holes for their herds of cattle, donkeys, and goats.

NASA researchers have also turned to satellite imagery to investigate the impact of the COVID-19 pandemic and its associated economic shutdowns. Planet data was incorporated into NASA's COVID-19 Dashboard to monitor changes in traffic and airports over time. The Togolese government approached NASA Harvest, NASA's food security and agriculture program, to create a country-wide cropland map with the goal of helping in aid distribution. The Harvest team provided the government with the map they needed within 10 days of receiving the request, allowing the government to mobilize quickly to ensure food security during the global pandemic.

We're thrilled to provide Planet data to all NASA-funded researchers in an effort to unlock even more insights and discoveries that can benefit our world. Visit our page to get more information on the Planet-NASA CSDA agreement and learn how to apply.

Are you a researcher looking for access to Planet imagery but not funded by NASA? Check out our other Education and Research options.

• Overview, talks and takeaways of the 4th iteration of the Open Source CubeSat Workshop

## The Open Source CubeSat Workshop 2020!

The Open Source CubeSat Workshop is a yearly event that brings together enthusiasts from the fields of space technology, engineering, CubeSats, mission control and analysis, and, of course, Open Source. Over its four years the event has enjoyed growing success and built a strong rapport among supporters of space and open-source technology. To maintain some stability in an unpredictable year, with a pandemic sweeping the globe, it was decided that the event would take place online; we found no better way to guarantee the safety of attendees and speakers alike than gathering from the comfort and safety of our homes. And so the Open Source CubeSat Workshop 2020 went ahead as an online edition, with a strong focus on sharing ideas and promoting collaboration, even while confined at home on different meridians of the planet.

## The Event

On the 12th and 13th of December, the Open Source CubeSat Workshop 2020 kicked off online! The event was streamed live on the Libre Space Foundation YouTube channel and brought together people from different continents, backgrounds, and disciplines. At the same time, conversations and insightful discussions took place in the YouTube chat and on dedicated Matrix channels (via element.io), where information provided by speakers (tips, source code, curated lists) was also shared. The result is a community buzzing with ideas for cooperation, one you can join at any time of the year to explore potential collaborations.

The event featured two rooms where presentations ran in parallel, alongside fascinating lightning talks and detailed tutorials. Overall there were 20 presentations, 12 lightning talks, and 5 tutorials; here is the playlist with all the videos from the event.

The Open Source CubeSat Workshop 2020 was a fun and informative event where knowledge and great ideas were shared openly. Fascinating discussions and Q&As provided insightful approaches and there were some interesting conclusions.

## Key takeaways

OSCW is a great occasion, every year, to appreciate the immense impact of the open-source projects that underpin the space industry by enabling access to key technologies for creating and supporting space missions. Every year we see gaps being filled. The DOCKS software suite, managed by Observatoire de Paris (CCERES), offers a complete set of tools for space mission profiling; thanks to feedback from the community, the project has been restructured around an easier interface and a clearer view of its different tools. The SatNOGS project, the Satellite Network of Ground Stations realized by volunteers across the globe, continues to expand: its growing coverage, well-scaling infrastructure, and ongoing evolution were presented, while the project maintains user-friendly interfaces for flight control teams to store, access, and view their spacecraft telemetry. SatNOGS also highlights key open challenges: satellite detection, identification, and tracking; easy deployment of SDR-based ground stations; and ground infrastructure. OpenSatCom, an activity of the European Space Agency managed by the Libre Space Foundation, was also presented at the event; the foundation recently produced a detailed report on open-source development methodology models for satellite communications.
The workshop also featured many tutorials covering what you need to know to create your own smallsat mission or manage an existing one: everything from open-source embedded software and the implementation of ECSS standards to the latest thinking on how to propagate an orbit and why you have to say goodbye to TLEs (sketched in code just below), as well as how to use machine learning to analyze your operations data.
There were plenty of other key projects presented, like MetaSat or the standardization of PC/104 connectors through the LibreCube initiative, all of which you can find in the contributions list.
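
To give a concrete flavor of that TLE discussion, here is a minimal sketch of the classic two-line-element workflow the talk critiqued, using the python-sgp4 package (my choice of tooling, not necessarily what the tutorials used):

```python
# Minimal TLE propagation sketch with python-sgp4 (pip install sgp4).
# The element set below is the illustrative ISS TLE from the sgp4 documentation;
# real TLEs go stale within days, which is part of the case against them.
from sgp4.api import Satrec, jday

l1 = "1 25544U 98067A   19343.69339541  .00001764  00000-0  40967-4 0  9991"
l2 = "2 25544  51.6439 211.2001 0007417  17.6667  85.6398 15.50103472202482"

sat = Satrec.twoline2rv(l1, l2)
jd, fr = jday(2019, 12, 9, 12, 0, 0)  # UTC time to propagate to
err, r, v = sat.sgp4(jd, fr)          # position in km, velocity in km/s (TEME frame)
if err == 0:
    print("position (km):  ", r)
    print("velocity (km/s):", v)
else:
    print("propagation error code:", err)
```

The accuracy of the result decays quickly as the requested time drifts away from the TLE epoch, which is exactly the staleness problem the talk argued we should leave behind.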

OSCW is also home to countless shared ideas, as every talk sparks the imagination. Lightning talks provide great inspiration. They are the perfect example of how quickly you can gather a team of contributors for a project like a ground station in a backpack. They can even inspire the birth of an idea that was not on the initial list at all and gets formed on the spot. By raising the interest of attendees during a talk, you inspire people who want to contribute to your project. That is why you will often hear somebody say "I've got something I can share with you and we can make it," as happened with the offer of air-bearing equipment for testing an ADCS during the astonishing talk on liquid-metal-based pico reaction "wheels".

## Acknowledgements

Though the event was different this year, having gone fully online, it was indeed a successful one: it managed to gather an online community of enthusiasts dedicated to Open Source, technology, and space, overcoming borders and a pandemic to let individuals from around the globe unite under a shared interest. For this, we would like to thank everyone: the attendees, for forming a friendly community of great diversity and knowledge; the speakers and presenters, for creating great interactions and sharing insights; the contributors, for enabling a smooth experience; and the OSCW committee, for putting everything together and overseeing the event. We would also like to thank the teams behind Indico (our conference management tool), BigBlueButton (amazingly smooth video conferencing, with multi-user whiteboarding and breakout sessions), and Element/Matrix for powering the conversations, links, and file sharing of this year's iteration of the Open Source CubeSat Workshop 2020!

Until we meet again next year, hopefully in person, feel free to enjoy the recordings of this year's event and go through the presentations and talks. They are a great source of knowledge, and you can browse through them and pick up great insights.

In the meantime, stay safe and connected!

• A fix for sudden Lua errors in pkg

If you upgrade pkg on your system, it may start erroring out. This is because the default config will confuse the newer version. To fix this, you can copy over a working config and the problem will go away. I expect this may only be a problem until the next release.

## January 11, 2021

• Ubiquiti: Change Your Password, Enable 2FA

Ubiquiti, a major vendor of cloud-enabled Internet of Things (IoT) devices such as routers, network video recorders, security cameras and access control systems, is urging customers to change their passwords and enable multi-factor authentication. The company says an incident at a third-party cloud provider may have exposed customer account information and credentials used to remotely manage Ubiquiti gear.

In an email sent to customers today, Ubiquiti Inc. [NYSE: UI] said it recently became aware of "unauthorized access to certain of our information technology systems hosted by a third party cloud provider," although it declined to name that provider.

The statement continues:

âWe are not currently aware of evidence of access to any databases that host user data, but we cannot be certain that user data has not been exposed. This data may include your name, email address, and the one-way encrypted password to your account (in technical terms, the passwords are hashed and salted). The data may also include your address and phone number if you have provided that to us.â

Ubiquiti has not yet responded to requests for more information, but the notice was confirmed as official in a post on the companyâs user support forum.

The warning from Ubiquiti carries particular significance because the company has made it fairly difficult for customers using the latest Ubiquiti firmware to interact with their devices without first authenticating through the companyâs cloud-based systems.

This has become a sticking point for many Ubiquiti customers, as evidenced by numerous threads on the topic in the companyâs user support forums over the past few months.

âWhile I and others do appreciate the convenience and option of using hosted accounts, this incident clearly highlights the problem with relying on your infrastructure for authenticating access to our devices,â wrote one Ubiquiti customer today whose sentiment was immediately echoed by other users. âA lot us cannot take your process for granted and need to keep our devices offline during setup and make direct connections by IP/Hostname using our Mobile Apps.â

To manage your security settings on a Ubiquiti device, visit https://account.ui.com and log in. Click on "Security" from the left-hand menu.

1. Change your password
2. Set a session timeout value
3. Enable 2FA


According to Ubiquiti's investment literature, the company has shipped more than 85 million devices that play a key role in networking infrastructure in over 200 countries and territories worldwide.

This is a developing story that may be updated throughout the day.

• Why Are Circuits on Boards?

From Zack Freedman of Fat Cat Labs:


Chips are tiny and phones are glass, so why are circuits still flat and green? The printed circuit board played a pivotal role in World War 2, and it's barely changed since then. Nearly every modern device has at least one circuit board; they're so ubiquitous, we just assume that electronics are flat rectangles. It wasn't always that way: once upon a time, terrifying globs of exposed connections and miles-long webs of wrapped wires lurked behind the wood veneer.

See how the literal foundation of technology is made, learn about the modern features that enable powerful electronics, catch a glimpse of the advanced future, and most importantly, discover why, after 80 years of progress, we still put all our circuits on boards.