# Planet Aardvark

## September 22, 2020

• Hydrogen & Mines Virtual Summit 2020, October 6-7

First Mode President, Chris Voorhees, will participate on a live panel to discuss hydrogen-powered mine fleets.

# How will hydrogen change the mining sector?

Join us for the Hydrogen and Mines Virtual Summit to find out!

Displacing diesel for mining vehicles is a critical part of mining companies' carbon strategies, and renewable hydrogen is one of the options mines are considering as a fuel replacement.

On October 7, First Mode President Chris Voorhees will join the Hydrogen and Mines Virtual Summit 2020 for a live-streamed panel discussion and Q&A, offering critical insight on the latest developments, pilot projects, business models, hurdles and milestones for hydrogen-powered mine fleets. Discussion will include:

• What are the main technical and commercial challenges with moving to hydrogen-powered fleets?

• What lessons can be drawn from current projects converting lighter vehicles to hydrogen fuel?

• When is hydrogen for mine fleets expected to be viable for large-scale deployment?

Panelists:

• Susan Grandone, Chief Operating Officer, Mining3

• Alfred Wong, Managing Director, Asia Pacific, Ballard Power Systems

• Craig Knight, Managing Director, Hyzon Motors Australia

• Luke Sandery, Package Manager, Power, West Musgrave & Carrapateena Expansion, OZ Minerals

• The Simple Math Problem We Still Can't Solve

This column comes with a warning: Do not try to solve this math problem.

You will be tempted. This problem is simply stated, easily understood, and all too inviting. Just pick a number, any number: If the number is even, cut it in half; if it's odd, triple it and add 1. Take that new number and repeat the process, again and again. If you keep this up, you'll eventually get stuck in a loop. At least, that's what we think will happen.

Take 10 for example: 10 is even, so we cut it in half to get 5. Since 5 is odd, we triple it and add 1. Now we have 16, which is even, so we halve it to get 8, then halve that to get 4, then halve it again to get 2, and once more to get 1. Since 1 is odd, we triple it and add 1. Now we're back at 4, and we know where this goes: 4 goes to 2 which goes to 1 which goes to 4, and so on. We're stuck in a loop.

Or try 11: It's odd, so we triple it and add 1. Now we have 34, which is even, so we halve it to get 17, triple that and add 1 to get 52, halve that to get 26 and again to get 13, triple that and add 1 to get 40, halve that to get 20, then 10, then 5, triple that and add 1 to get 16, and halve that to get 8, then 4, 2 and 1. And we're stuck in the loop again.

The infamous Collatz conjecture says that if you start with any positive integer, you'll always end up in this loop. And you'll probably ignore my warning about trying to solve it: It just seems too simple and too orderly to resist understanding. In fact, it would be hard to find a mathematician who hasn't played around with this problem.

I couldn't ignore it when I first learned of it in school. My friends and I spent days trading thrilling insights that never seemed to get us any closer to an answer. But the Collatz conjecture is infamous for a reason: Even though every number that's ever been tried ends up in that loop, we're still not sure it's always true. Despite all the attention, it's still just a conjecture.

Yet progress has been made. One of the world's greatest living mathematicians ignored all the warnings and took a crack at it, making the biggest strides on the problem in decades. Let's take a look at what makes this simple problem so very complicated.

$latex f(n) = \begin{cases} n/2 & \text{if } n \text{ is even} \\ 3n+1 & \text{if } n \text{ is odd} \end{cases}$

You might remember "piecewise" functions from school: The above function takes an input n and applies one of two rules to it, depending on whether the input is odd or even. This function f enacts the rules of the procedure we described above: For example, f(10) = 10/2 = 5 since 10 is even, and f(5) = 3 × 5 + 1 = 16 since 5 is odd. Because of the rule for odd inputs, the Collatz conjecture is also known as the 3n + 1 conjecture.
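The piecewise rule translates directly into code. A minimal sketch in Python (the name `f` follows the article's notation):

```python
def f(n: int) -> int:
    """One step of the Collatz function: halve evens, send odd n to 3n + 1."""
    if n % 2 == 0:
        return n // 2
    return 3 * n + 1

# The two examples from the text:
print(f(10))  # 10 is even, so f(10) = 10/2 = 5
print(f(5))   # 5 is odd, so f(5) = 3*5 + 1 = 16
```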

The Collatz conjecture deals with "orbits" of this function f. An orbit is what you get if you start with a number and apply a function repeatedly, taking each output and feeding it back into the function as a new input. We call this "iterating" the function. We've already started computing the orbit of 10 under f, so let's find the next few terms:

f(10) = 10/2 = 5
f(5) = 3 × 5 + 1 = 16
f(16) = 16/2 = 8
f(8) = 8/2 = 4

A convenient way to represent an orbit is as a sequence with arrows. Here's the orbit of 10 under f:

10 → 5 → 16 → 8 → 4 → 2 → 1 → 4 → 2 → 1 → …

At the end we see we are stuck in the loop 1 → 4 → 2 → 1 → ….

Similarly, the orbit for 11 under f can be represented as

11 → 34 → 17 → 52 → 26 → 13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1 → 4 → ….

Again we end up in that same loop. Try a few more examples and you'll see that the orbit always seems to stabilize in that 4 → 2 → 1 → … loop. The starting values of 9 and 19 are fun, and if you've got a few minutes to spare, try 27. If your arithmetic is right, you'll get there after 111 steps.
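You can check the 111-step claim for 27 (and trace any orbit) with a short loop; `orbit` here is an illustrative helper of my own, not something from the article:

```python
def orbit(n: int) -> list[int]:
    """The Collatz orbit of n, recorded until it first reaches 1."""
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

print(orbit(10))           # [10, 5, 16, 8, 4, 2, 1]
print(len(orbit(27)) - 1)  # number of steps from 27 down to 1: 111

# The conjecture, checked in miniature: every start up to 10,000 reaches 1.
assert all(orbit(n)[-1] == 1 for n in range(1, 10_001))
```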

The Collatz conjecture states that the orbit of every number under f eventually reaches 1. And while no one has proved the conjecture, it has been verified for every number less than 2^68. So if you're looking for a counterexample, you can start around 300 quintillion. (You were warned!)

It's easy to verify that the Collatz conjecture is true for any particular number: Just compute the orbit until you arrive at 1. But to see why it's hard to prove for every number, let's explore a slightly simpler function, g.

$latex g(n) = \begin{cases} n/2 & \text{if } n \text{ is even} \\ n+1 & \text{if } n \text{ is odd} \end{cases}$

The function g is similar to f, but for odd numbers it just adds 1 instead of tripling them first. Since g and f are different functions, numbers have different orbits under g than under f. For example, here are the orbits of 10 and 11 under g:

10 → 5 → 6 → 3 → 4 → 2 → 1 → 2 → 1 → 2 → …
11 → 12 → 6 → 3 → 4 → 2 → 1 → 2 → 1 → 2 → …

Notice that the orbit of 11 reaches 1 faster under g than under f. The orbit of 27 also reaches 1 much faster under g.

27 → 28 → 14 → 7 → 8 → 4 → 2 → 1 → 2 → …

In these examples, orbits under g appear to stabilize, just like orbits under f, but in a slightly simpler loop:

2 → 1 → 2 → 1 → ….

We might conjecture that orbits under g always get to 1. I'll call this the "Nollatz" conjecture, but we could also call it the n + 1 conjecture. We could explore this by testing more orbits, but knowing something is true for a bunch of numbers (even 2^68 of them) isn't a proof that it's true for every number. Fortunately, the Nollatz conjecture can actually be proved. Here's how.

First, we know that half of a positive integer is always less than the integer itself. So if n is even and positive, then g(n) = n/2 < n. In other words, when an orbit reaches an even number, the next number will always be smaller.

Now, if n is odd, then g(n) = n + 1, which is bigger than n. But since n is odd, n + 1 is even, and so we know where the orbit goes next: g will cut n + 1 in half. For an odd n the orbit will look like this:

… → n → n + 1 → $latex \frac{n+1}{2}$ → …

Notice that $latex \frac{n+1}{2}$ = $latex \frac{n}{2}$ + $latex \frac{1}{2}$. Since $latex \frac{n}{2}$ < n and $latex \frac{1}{2}$ is small, $latex \frac{n}{2}$ + $latex \frac{1}{2}$ is probably also less than n. In fact, some simple number theory can show us that as long as n > 1, it's always true that $latex \frac{n}{2}$ + $latex \frac{1}{2}$ < n.

This tells us that when an orbit under g reaches an odd number greater than 1, we'll always be at a smaller number two steps later. And now we can outline a proof of the Nollatz conjecture: Anywhere in our orbit, whether at an even or an odd number, we'll trend downward. The only exception is when we hit 1 at the bottom of our descent. But once we hit 1, we're trapped in the loop, just as we conjectured.
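The proved claim is also easy to confirm by machine for small cases. A minimal sketch: `g` mirrors the article's definition, while `reaches_one` (with its step cap) is a helper name of my own:

```python
def g(n: int) -> int:
    """One step of the 'Nollatz' function: halve evens, add 1 to odds."""
    return n // 2 if n % 2 == 0 else n + 1

def reaches_one(n: int, max_steps: int = 10_000) -> bool:
    """True if iterating g from n hits 1 within max_steps steps."""
    for _ in range(max_steps):
        if n == 1:
            return True
        n = g(n)
    return False

# Spot-check the Nollatz conjecture for the first 10,000 starting values:
print(all(reaches_one(n) for n in range(1, 10_001)))  # True
```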

Can a similar argument work for the Collatz conjecture? Let's go back to the original function.

$latex f(n) = \begin{cases} n/2 & \text{if } n \text{ is even} \\ 3n+1 & \text{if } n \text{ is odd} \end{cases}$

As with g, applying f to an even number makes it smaller. And as with g, applying f to an odd number produces an even number, which means we know what happens next: f will cut the new number in half. Here's what the orbit under f looks like when n is odd:

… → n → 3n + 1 → $latex \frac{3n+1}{2}$ → …

But here's where our argument falls apart. Unlike our example above, this number is bigger than n: $latex \frac{3n+1}{2}$ = $latex \frac{3n}{2}$ + $latex \frac{1}{2}$, and $latex \frac{3n}{2}$ = 1.5n, which is always bigger than n. The key to our proof of the Nollatz conjecture was that an odd number must end up smaller two steps later, but this isn't true in the Collatz case. Our argument won't work.

If you're like me and my friends back in school, you might now be excited about proving that the Collatz conjecture is false: After all, if the orbit keeps getting bigger, then how can it get down to 1? But that argument requires thinking about what happens next, and what happens next illuminates why the Collatz conjecture is so slippery: We can't be sure whether $latex \frac{3n+1}{2}$ is even or odd.

We know that 3n + 1 is even. If 3n + 1 is also divisible by 4, then $latex \frac{3n+1}{2}$ is also even, and the orbit will fall. But if 3n + 1 is not divisible by 4, then $latex \frac{3n+1}{2}$ is odd, and the orbit will rise. In general we can't predict which will be true, so our argument stalls out.

But this approach isn't completely useless. Since half of all positive integers are even, there's a 50% chance that $latex \frac{3n+1}{2}$ is even, which makes the next step in the orbit $latex \frac{3n+1}{4}$. For n > 1 this is less than n, so half the time an odd number should get lower after two steps. There's also a 50% chance that $latex \frac{3n+1}{4}$ is even, which means there's a 25% chance that an odd number will be reduced to less than half of where it started after three steps. And so on. The net result is that, in some average way, Collatz orbits decrease when they encounter an odd number. And since Collatz orbits always decrease at even numbers, this suggests that all Collatz sequences must decrease in the long run. This probabilistic argument is widely known, but no one has been able to extend it to a complete proof of the conjecture.
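The heuristic can be made concrete: take an odd n, apply 3n + 1, divide out every factor of 2, and look at the typical shrink factor. Since an odd step is followed by two halvings on average, the expected multiplier is about 3/4. A rough empirical check (helper name `odd_step_ratio` is my own):

```python
import math

def odd_step_ratio(n: int) -> float:
    """For odd n: apply 3n + 1, strip all factors of 2, return result / n."""
    m = 3 * n + 1
    while m % 2 == 0:
        m //= 2
    return m / n

# Geometric mean of the shrink factor over many odd starting values:
ratios = [odd_step_ratio(n) for n in range(1, 20_000, 2)]
gm = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
print(round(gm, 2))  # close to 3/4 = 0.75
```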

Yet several mathematicians have proved that the Collatz conjecture is "almost always" true. This means they've proved that, relative to the set of numbers they know lead to 1, the set of numbers they aren't sure about is negligible. In 1976 the Estonian American mathematician Riho Terras proved that, after repeated application of the Collatz function, almost all numbers eventually wind up lower than where they started. As we saw above, showing that the numbers in the orbit consistently get smaller is one path to showing that they eventually get to 1.

And in 2019, Terence Tao, one of the world's greatest living mathematicians, improved on this result. Where Terras proved that for almost all numbers the Collatz sequence of n ends up below n, Tao proved that for almost all numbers the Collatz sequence of n ends up much lower: below $latex \frac{n}{2}$, below $latex \sqrt{n}$, below $latex \ln n$ (the natural log of n), even below every f(n) where f(x) is any function that goes off to infinity, no matter how slowly. That is, for almost every number, we can guarantee that its Collatz sequence goes as low as we want. In a talk about the problem, Tao said this result is "about as close as one can get to the Collatz conjecture without actually solving it."

Even so, the conjecture will continue to attract mathematicians and enthusiasts. So pick a number, any number, and give it a go. Just remember, you've been warned: Don't get stuck in an endless loop.

## Exercises

1. Show that there are infinitely many numbers whose Collatz orbits pass through 1.

2. The âstopping timeâ of a number n is the smallest number of steps it takes for the Collatz orbit of n to reach 1. For example, the stopping time of 10 is 6, and the stopping time of 11 is 14. Find two numbers with stopping time 5.

3. In a recent talk on the Collatz conjecture, Terence Tao mentioned the following Collatz-like function:

$latex h(n) = \begin{cases} n/2 & \text{if } n \text{ is even} \\ 3n-1 & \text{if } n \text{ is odd} \end{cases}$

Tao points out that in addition to the 1 → 2 → 1 → 2 → 1 → … loop, two other loops appear. Can you find them?

## Solutions

1. Notice that every power of 2 has a simple orbit path to 1. For example,

2^4 → 2^3 → 2^2 → 2 → 1 → …

Since there are infinitely many powers of 2, there are infinitely many numbers whose Collatz orbits pass through 1.

2. Notice that 2^5 = 32 has stopping time 5, since 2^5 → 2^4 → 2^3 → 2^2 → 2 → 1. And since 2^4 = 16 has stopping time 4, any number that is one step away from 16 has stopping time 5. For example, 5 → 16 → 8 → 4 → 2 → 1. Could there be others?
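A quick search answers the "Could there be others?" question, at least up to a bound (`stopping_time` is my own helper name; to rule out larger examples you would work backward from 1, where the largest value reachable in 5 backward steps is 2^5 = 32):

```python
def stopping_time(n: int) -> int:
    """Number of Collatz steps needed for n to first reach 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# All starting values up to 10,000 with stopping time exactly 5:
print([n for n in range(1, 10_001) if stopping_time(n) == 5])  # [5, 32]
```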

3. The other loops are

5 → 14 → 7 → 20 → 10 → 5 → …

and

17 → 50 → 25 → 74 → 37 → 110 → 55 → 164 → 82 → 41 → 122 → 61 → 182 → 91 → 272 → 136 → 68 → 34 → 17 → ….
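Both loops can be recovered mechanically by iterating h and recording values until one repeats. A cycle-finding sketch (helper names are mine):

```python
def h(n: int) -> int:
    """The Collatz-like variant from exercise 3: halve evens, send odd n to 3n - 1."""
    return n // 2 if n % 2 == 0 else 3 * n - 1

def find_cycle(n: int) -> list[int]:
    """Iterate h from n and return the cycle the orbit eventually enters."""
    seen = {}   # value -> index at which it first appeared
    seq = []
    while n not in seen:
        seen[n] = len(seq)
        seq.append(n)
        n = h(n)
    return seq[seen[n]:]  # the repeating portion

print(find_cycle(1))        # [1, 2]
print(find_cycle(5))        # [5, 14, 7, 20, 10]
print(len(find_cycle(17)))  # 18 numbers in the loop through 17
```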

• Getting Started With The TPS22917 Load Switch

From Seth Kerr of Oak Dev Tech:

#### Getting Started With The TPS22917 Load Switch

Trying to figure out how to reduce the power consumption of a project with multiple peripherals can be tricky, especially if those peripherals operate off different power supplies than your main controller. This is where the easy-to-use TPS22917 power switch/load driver from Texas Instruments comes in. Packed into a tiny SOT23 package, the TPS22917 is compact enough to fit into any project, requiring just four total components including the main IC. Combined on a small breakout board, it becomes an easy-to-use form factor that you can integrate right into your project without the extra hassle.

In this tutorial we're going to show you how easy it is to use our TPS22917 breakout board in your project.

• Finding where a file came from

Here's a recommendation (and a usage lesson) on pkg-provides, a tool for matching a file to the installed pkg that brought it. It goes with the pkglocate article some weeks ago; it seems like this should be standard functionality. Thanks to Nelson H. F. Beebe.

• Autumn Equinox - Alban Elfed

The name for the festival of the Autumn Equinox in Druidry is Alban Elfed, which means "The Light of the Water".

The Wheel turns and the time of balance returns. Alban Elfed marks the balance of day and night before the darkness overtakes the light. It is also the time of the second harvest, usually of the fruit which has stayed on the trees and plants that have ripened under the summer sun.

It is this final harvest which can take the central theme of the Alban Elfed ceremony: thanking the Earth, in her full abundance as Mother and Giver, for the great harvest, as Autumn begins.

This Feast is known by many names to many people, for the Truth is reflected from many mirrors. It has been celebrated as Alban Elfed and Harvest. Our ancestors called it by names long forgotten, and our children will call it by names as yet unconceived.

At this time, our ancestors saw the Sun, for the first time in half a year, be unable to outshine the Dark. Although he still shines with strength, his strength grows weaker as the days grow shorter.

Today he holds the Darkness in equal measure to the Light, but he is struck in his season with the wound of Time, and from day to day the darkness will grow as the Lord of Light sinks into his Age, for the wound is grievous and will not heal. This is a time of farewell and gratitude for the summer that has been.

## September 21, 2020

• Matt Blaze on OTP Radio Stations

Matt Blaze discusses (also here) an interesting mystery about a Cuban one-time-pad radio station, and a random number generator error that probably helped arrest a pair of Russian spies in the US.

• On White Dwarf Planets as Biosignature Targets

So often a discovery sets off a follow-up study that strikes me as even more significant in practical terms. This is not for a moment to downplay the accomplishment of Andrew Vanderburg (University of Wisconsin–Madison) and team that discovered a planet in close orbit around a white dwarf. This is the first time we've found a planet that has survived its star's red giant phase and remains in orbit around the remnant, and quite a tight orbit at that. Previously, we've had good evidence only of atmospheric pollution in such stars, indicating infalling material from possible asteroids or other objects during the primary's cataclysmic re-configuration.

The white dwarf planet, found via data gathered from TESS (Transiting Exoplanet Survey Satellite) and the Spitzer Space Telescope, makes for quite a discovery. But coming out of this work, I also love the idea of studying such a world with tools we're likely to have soon, such as the James Webb Space Telescope. On that score, Lisa Kaltenegger (Carl Sagan Institute, Cornell University), working with Ryan MacDonald and including Vanderburg on the team, has shown us how JWST could identify chemical signatures in the atmospheres of possible Earth-like planets around white dwarf stars. Assuming we find such, and I suspect we will.

The planet at the white dwarf WD 1856+534 is anything but Earth-like. It's running around the star every 34 hours, which means it's on a pace 60 times faster than Mercury orbits the Sun. The planet here is also the size of Jupiter, and what a system we've uncovered: the new world orbits a star that is itself only 40 percent larger than Earth (imagine the transit depth possible with white dwarfs transited by a gas giant!). In this planetary system, the planet we've detected is nearly seven times larger than its primary. Says Vanderburg:

âWD 1856 b somehow got very close to its white dwarf and managed to stay in one piece. The white dwarf creation process destroys nearby planets, and anything that later gets too close is usually torn apart by the starâs immense gravity. We still have many questions about how WD 1856 b arrived at its current location without meeting one of those fates.â

Image: In this illustration, WD 1856 b, a potential Jupiter-size planet, orbits its dim white dwarf star every day-and-a-half. WD 1856 b is nearly seven times larger than the white dwarf it orbits. Astronomers discovered it using data from NASA's Transiting Exoplanet Survey Satellite (TESS) and now-retired Spitzer Space Telescope. Credit: NASA GSFC.

So on the immediate question of WD 1856 b, let's note that we have a serious issue with explaining how the planet got to be this close to the white dwarf in the first place. White dwarfs form when stars like the Sun swell into red giant status as they run out of fuel, a phase in which 80 percent of the star's mass is ejected, leaving a hot core (the white dwarf) behind. Anything on a relatively close orbit would presumably be swallowed up in the stellar expansion phase.

Which is why Vanderburg's team believes the planet probably formed fully 50 times farther away from its present location, later moving inward, perhaps through interactions with other large bodies close to the planet's original orbit, with its orbit circularizing as tidal forces dissipated. Such instabilities could bring a planet inward, as could other scenarios involving the red dwarfs G229-20 A and B in this triple star system, although the paper plays down this idea, as well as the notion of a rogue star acting as a perturber. Other Jupiter-like planets, presumably long gone, seem to be the best bet to explain this configuration.

From the paper:

…a more probable formation history is that WD 1856 b was a planet that underwent dynamical instability. It is well established that when stars evolve into white dwarfs, their previously stable planetary systems can undergo violent dynamical interactions that excite high orbital eccentricities. We have confirmed with our own simulations that WD 1856 b-like objects in multi-planet systems can be thrown onto orbits with very close periastron distances. If WD 1856 b were on such an orbit, the orbital energy would have rapidly dissipated, owing to tides raised on the planet by the white dwarf. The final state of minimum energy would be a circular, short-period orbit. The advanced age of WD 1856 (around 5.85 Gyr) gives plenty of time for these relatively slow (of the order of Gyr) dynamical processes to take place. In this case, it is no coincidence that WD 1856 is one of the oldest white dwarfs observed by TESS.

Did you catch that reference to the white dwarf's age? The 5.85-billion-year frame gives ample opportunity for such orbital adjustments to take place, winding up with the observed orbit. Or perhaps we're dealing with interactions with a debris disk around the star, as co-author Stephen Kane (UC Riverside, and a member of the TESS science team) hypothesizes:

âIn this case, itâs possible that a debris disc could have formed from ejected material as the star changed from red giant to white dwarf. Or, on a more cannibalistic note, the disc could have formed from the debris of other planets that were torn apart by powerful gravitational tides from the white dwarf. The disc itself may have long since dissipated.â

But back to Lisa Kaltenegger, lead author of a paper in Astrophysical Journal Letters probing whether an exposed stellar core (a white dwarf) would be workable as a target for JWST, in which case we would like to examine planetary atmospheres for possible biosignatures. Here the news is good, for Kaltenegger believes that such detections would be possible, assuming rocky planets exist around these stars. WD 1856 b gives hope that such a world could exist in the white dwarf's habitable zone for a period longer than the time it took for life to develop on Earth. The implications are intriguing:

âWhat if the death of the star is not the end for life?â Kaltenegger said. âCould life go on, even once our sun has died? Signs of life on planets orbiting white dwarfs would not only show the incredible tenacity of life, but perhaps also a glimpse into our future.â

Image: In newly published research, Cornell researchers show how NASA's upcoming James Webb Space Telescope could find signatures of life on Earth-like planets orbiting burned-out stars, known as white dwarfs. Credit: Jack Madden/Carl Sagan Institute.

The Kaltenegger team used methods developed to study gas giant atmospheres and combined them with computer models configured to apply the technique to small, rocky white dwarf planets. The researchers found that JWST, when observing an Earth-class planet around a white dwarf, could detect carbon dioxide and water with data from as few as 25 transits. According to co-lead author Ryan MacDonald, it would take a scant two days of observing time with JWST to probe for the classic biosignature gases ozone and methane. Adds MacDonald:

âWe know now that giant planets can exist around white dwarfs, and evidence stretches back over 100 years showing rocky material polluting light from white dwarfs. There are certainly small rocks in white dwarf systems. Itâs a logical leap to imagine a rocky planet like the Earth orbiting a white dwarf.â

So we have a possible target we'll want to add to the exoplanet mix when it comes to nearby white dwarf systems. WD 1856 is about 80 light years out in the direction of Draco. The white dwarf formed over 5 billion years ago, as noted in the paper, but the age of the original Sun-like star may take us back as much as 10 billion years. The post-red-giant phase allows plenty of time for orbital adjustment, drawing rocky worlds inward and circularizing their orbits. Will we find such planets in this setting in the near future? The hunt will surely intensify.

The paper is Vanderburg et al., "A giant planet candidate transiting a white dwarf," Nature 585 (16 September 2020), 363-367 (abstract). The Kaltenegger paper is "The White Dwarf Opportunity: Robust Detections of Molecules in Earth-like Exoplanet Atmospheres with the James Webb Space Telescope," Astrophysical Journal Letters Vol. 901, No. 1 (16 September 2020). Abstract.

• At the Math Olympiad, Computers Prepare to Go for the Gold

The 61st International Mathematical Olympiad, or IMO, begins today. It may go down in history for at least two reasons: Due to the COVID-19 pandemic it's the first time the event has been held remotely, and it may also be the last time that artificial intelligence doesn't compete.

Indeed, researchers view the IMO as the ideal proving ground for machines designed to think like humans. If an AI system can excel here, it will have matched an important dimension of human cognition.

âThe IMO, to me, represents the hardest class of problems that smart people can be taught to solve somewhat reliably,â said Daniel Selsam of Microsoft Research. Selsam is a founder of the IMO Grand Challenge, whose goal is to train an AI system to win a gold medal at the worldâs premier math competition.

Since 1959, the IMO has brought together the best pre-college math students in the world. On each of the competition's two days, participants have four and a half hours to answer three problems of increasing difficulty. They earn up to seven points per problem, and top scorers take home medals, just like at the Olympic Games. The most decorated IMO participants become legends in the mathematics community. Some have gone on to become superlative research mathematicians.

IMO problems are simple, but only in the sense that they don't require any advanced math (even calculus is considered beyond the scope of the competition). They're also fiendishly difficult. For example, here's the fifth problem from the 1987 competition in Cuba:

Let n be an integer greater than or equal to 3. Prove that there is a set of n points in the plane such that the distance between any two points is irrational and each set of three points determines a non-degenerate triangle with rational area.

Like many IMO problems, this one might appear impossible at first.

âYou read the questions and think, âI canât do that,ââ said Kevin Buzzard of Imperial College London, a member of the IMO Grand Challenge team and gold medalist at the 1987 IMO. âTheyâre extremely hard questions that are accessible to schoolchildren if they put together all the ideas they know in a brilliant way.â

Solving IMO problems often requires a flash of insight, a transcendent first step that today's AI finds hard, if not impossible.

For example, one of the oldest results in math is Euclid's proof from 300 BCE that there are infinitely many prime numbers. It begins with the recognition that you can always find a new prime: Multiply all the known primes together and add 1; the result is divisible by none of them, so any prime factor of it must be new. The proof that follows is simple, but coming up with the opening idea was an act of art.

âYou cannot get computers to get that idea,â said Buzzard. At least, not yet.

The IMO Grand Challenge team is using a software program called Lean, first launched in 2013 by a Microsoft researcher named Leonardo de Moura. Lean is a "proof assistant" that checks mathematicians' work and automates some of the tedious parts of writing a proof.
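To give a flavor of what "checking a proof" means in practice, here is a toy statement in Lean 4 syntax that the proof assistant can verify mechanically (a trivial illustration of my own, nowhere near IMO difficulty):

```lean
-- A tiny proof for Lean to check: addition of natural numbers is commutative.
-- Here the work is done by a lemma already in Lean's library.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Purely computational facts can be closed by `rfl` (reflexivity):
example : 2 + 2 = 4 := rfl
```

Lean accepts a file like this only if every step follows from its library and logic, which is exactly the checking role the article describes.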

De Moura and his colleagues want to use Lean as a "solver," capable of devising its own proofs of IMO problems. But at the moment, it cannot even understand the concepts involved in some of those problems. If it's going to do better, two things need to change.

First, Lean needs to learn more math. The program draws on a library of mathematics called mathlib, which is growing all the time. Today it contains almost everything a math major might know by the end of their second year of college, but with some elementary gaps that matter for the IMO.

The second, bigger challenge is teaching Lean what to do with the knowledge it has. The IMO Grand Challenge team wants to train Lean to approach a mathematical proof the way other AI systems already successfully approach complicated games like chess and Go: by following a decision tree until it finds the best move.

âIf we can get a computer to have that brilliant idea by simply having thousands and thousands of ideas and rejecting all of them until it stumbles on the right one, maybe we can do the IMO Grand Challenge,â said Buzzard.

But what are mathematical ideas? That's surprisingly hard to say. At a high level, a lot of what mathematicians do when they approach a new problem is ineffable.

âA key step in many IMO problems is to basically play around with it and look for patterns,â said Selsam. Of course, itâs not obvious how you tell a computer to âplay aroundâ with a problem.

At a low level, math proofs are just a series of very concrete, logical steps. The IMO researchers could try to train Lean by showing it the full details of previous IMO proofs. But at that granular level, individual proofs become too specialized to a given problem.

âThereâs nothing that works for the next problem,â said Selsam.

To help with this, the IMO Grand Challenge team needs mathematicians to write detailed formal proofs of previous IMO problems. The team will then take these proofs and try to distill the techniques, or strategies, that make them work. Then they'll train an AI system to search among those strategies for a "winning" combination that solves IMO problems it's never seen before. The trick, Selsam observes, is that winning in math is much harder than winning even the most complicated board games. In those games, at least you know the rules going in.

âMaybe in Go the goal is to find the best move, whereas in math the goal is to find the best game and then to find the best move in that game,â he said.

The IMO Grand Challenge is currently a moonshot. If Lean were participating in this year's competition, "we'd probably get a zero," said de Moura.

But the researchers have several benchmarks they're trying to hit before next year's event. They plan to fill in the holes in mathlib so that Lean can understand all of the questions. They also hope to have the detailed formal proofs of dozens of previous IMO problems, which will begin the process of providing Lean with a basic playbook to draw from.

At that point a gold medal may still be far out of reach, but at least Lean could line up for the race.

âRight now lots of things are happening, but thereâs nothing particularly concrete to point to,â said Selsam. â year it becomes a real endeavor.â

• Street Lighting Information and Data by Kelly Beatty

The plot (see below) shows a red-rich, blue-poor "peachy colored" spectrum of an older-style HPS light (high-pressure sodium) compared to a blue-rich LED (thick black line).

Also plotted are our eyes' photopic (daytime) sensitivity and our circadian ("melatonin-suppressing") sensitivity, shown as thin solid and dashed black lines, respectively. Notice the shift of the peaks.

Our eyes are much more sensitive to blue light at night than they are in daylight. This shift is called the "Purkinje effect."

Some utilities think they're doing the right thing by replacing older HPS lighting with LEDs of the same lumen output. But if you put HPS and LED fixtures of the same lumen output side by side, the eye will see the LED as much brighter.

Consequently, to get the equivalent perceived "brightness", you'd install an LED with a lower lumen output. Most existing HPS fixtures on residential streets are 50W (about 4000 lumens).

However, as Dr. Mario Motta, MD (AMA Trustee) has noted, most communities are installing 3000K LEDs on residential streets that are only in the 13W-15W range (at most 20W).

A 15W LED streetlight puts out ~2,000 lumens, but it will seem just as bright as a 4,000-lumen (50W) HPS streetlight.

Duke Energy seems to be using a watt-for-watt approach, which results in a much stronger LED light. A 50W LED streetlight, if that's what Duke is really installing, emits about 6000 lumens, far too high for residential settings (as you've seen).
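To make the two replacement strategies concrete, here is a small Python sketch. The efficacies are back-calculated from the round numbers quoted above (not vendor specs), and the 2x perceived-brightness ratio is an assumption drawn from the 2,000-vs-4,000-lumen example:

```python
# Rough comparison of HPS-to-LED replacement strategies, using the
# figures quoted in the text:
#   50 W HPS ~ 4000 lumens  ->  80 lm/W
#   50 W LED ~ 6000 lumens  -> 120 lm/W
HPS_LM_PER_W = 4000 / 50
LED_LM_PER_W = 6000 / 50

def watt_for_watt(hps_watts):
    """Lumens emitted if the utility swaps in an LED of the same wattage."""
    return hps_watts * LED_LM_PER_W

def perceived_equivalent(hps_watts, perceived_ratio=2.0):
    """LED lumens needed to *look* as bright as the HPS fixture at night,
    assuming the eye sees the blue-rich LED as roughly 2x brighter
    (the ratio implied by the 2000-vs-4000-lumen example)."""
    return hps_watts * HPS_LM_PER_W / perceived_ratio

print(watt_for_watt(50))         # 6000.0 lm: far too bright for a residential street
print(perceived_equivalent(50))  # 2000.0 lm: roughly the 13-20 W LEDs communities install
```

The watt-for-watt swap triples the perceived light level; matching perceived brightness instead calls for an LED with about half the HPS fixture's lumen output.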


Clear skies,

Kelly


• Jared Wolff talks Cellular IoT

#### The Amp Hour #509: Cellular IoT with Jared Wolff

Welcome Jared Wolff of Circuit Dojo!

Jared is a graduate of the Rochester Institute of Technology (which Chris also considered attending). He did co-ops while there, like we talked about on last week's episode.

While on co-op at Cisco, he was in the cable group and marveled at the techs doing repairs with magnet wire.

He is an east coast guy at heart, so he moved back to Connecticut eventually.

Jared worked at Apple for a while, but the lifestyle is difficult because of time requirements and stressful travel. He was also there when Steve Jobs was still around, and there was a bit of over-the-top hero worship.

Nordic's early Bluetooth chipset was the nRF8001, which was a transceiver over SPI (no micro).

Working for startups was interesting if you thrive on doing a lot of different things.

• Nine SuperDoves Scheduled for Flight on Rocket Labâs Electron

Planetâs next scheduled launch of Flock 4eâ, a total of 9 SuperDoves on Rocket Labâs Electron, is currently planned for the first half of October and will lift off from Rocket Lab Launch Complex 1 on the MÄhia Peninsula in New Zealand. Emerging from the unfortunate launch accident of Flock 4e on Electron in July, this next launch speaks to both Planet and Rocket Labâs resilience and agility to get back on the pad so quickly. These SuperDoves will be deployed into an approximate 500 km morning-crossing Sun Synchronous Orbit (SSO), joining the rest of the flock already providing unprecedented medium-resolution global coverage and revisit.

Planetâs SuperDoves will be integrated with Rocket Labâs Maxwell payload dispensers, seen here in the clean room at Rocket Labâs headquarters in Long Beach, California. Credit: Rocket Lab

This will be the 15th flight of Rocket Lab's Electron and the third time they'll be carrying Planet satellites. Rocket Lab named this mission "In Focus" in a nod to the Earth-imaging satellites onboard, and we couldn't be more excited to be launching from New Zealand again.

Follow along for launch updates on Planet's Pulse blog and Planet's Twitter, and if you haven't already, sign up for an Explorer Trial to see the great imagery and insights that our satellites can offer.

• Volcano Dinosaur
• New scope: Celestron NexStar 8SE

London looking through the scope the first evening, when I had it on the AZ-4. His 60mm Meade refractor waits in the background.

Welp, I finally did it. I've been low-key lusting after one of these scopes for a few years now. Between 2007 and now, I've owned reflectors from 70mm to 300mm, refractors from 50mm to 102mm, and Maks from 60mm to 127mm, but I've never had a Schmidt-Cassegrain, and I've never had a GoTo scope. I figured it was time to rectify both of those omissions. What tipped my hand was the planets: I've had great fun these last few weeks observing Jupiter and Saturn almost every evening, and Mars on many evenings, as we speed toward opposition with the Red Planet in mid-October. Yes, the Apex 127 and the XT10 both do great on planets, but after a while I get tired of nudging them along, especially at high power. Also, the XT10 weighs about 55 lbs all set up and kitted out, and some evenings I wuss out. It will be nice to have something between the 5-inch Mak and the 10-inch Dob for those times when I want a little more oomph and a little less hassle.

Whether the NexStar 8SE is actually less hassle remains to be seen. I'm new to computerized scopes, or indeed even to motorized scopes, and my first night getting the whole system set up was not without some frustration. But I'm getting ahead of myself.

The first point in this saga is that the NexStar 8SE, like almost all NexStar scopes, and like almost all computerized scopes, and in fact like almost all scopes period, is almost completely sold out right now, from sea to shining sea. This is apparently less about the pandemic disrupting supply lines and more about a completely bonkers demand for telescopes during the era of COVID. A lot of people are looking for hobbies while they are stuck at home, and sales of astronomical gear are, well, sky-high, at least according to the vendors I've heard from via email or on Cloudy Nights. So it took some doing to find one. I usually prefer to support friendly local and not-so-local telescope stores like Oceanside Photo and Telescope, Woodland Hills Camera & Telescopes, Astronomics, and Orion, but none of them had the scope in stock when I was looking. Turns out, Amazon had a few, so I put in an order. Aaaaand…nothing. More than a week after I placed the order, the scope still hadn't shipped, and there was no sign that it was going to do so anytime soon.

During the unboxing. Each big component is sandwiched in styrofoam or Ethafoam, inside its own box, and all of them are in two bigger boxes. The square vacuity at the lower right held the box for accessories. Note the ruler sitting on the OTA; this is a big scope, in a big package.

Frequent commenter, sometime observing buddy, and telescope-purchase instigator Doug Rennie came to the rescue, with an AmazonSmile link to NexStar 8SEs that were said to be shipping in just a few days. I canceled the original order and tried again using Doug's link (which is here; apparently the scope is still in stock). The scope arrived in just a couple of days, which is only surprising because the estimated delivery time was more like five days. It arrived in a big box: 3.5 feet x 2 feet x 12.5 inches.

On the day that the scope arrived, I had no way to power it. I had been planning to order a rechargeable battery pack (this one), but hadn't gotten around to it; we were out of suitable alkaline batteries at the house; Vicki had the car to work a forensic case, so I couldn't drive to the store; and London and I were still sunburned from a trip to the beach the previous weekend, so we didn't want to walk to a store. I took a page from Uncle Rod (this post and this one) and put the C8 OTA on the SkyWatcher-branded Synta AZ-4 alt-az mount I got back when. The result looks goofy as heck but it works. At 17″ long and 9″ in diameter, the C8 is a voluminous scope, but it's mostly air, and the OTA is not much heavier than the Apex 127/SV50 combo that I use on the AZ-4 all the time.

C8 OTA on the left, Apex 127 with rings on the right.

Here I hit a snag. The NexStar mount is left-handed and the AZ-4 is right-handed, so the C8 tube went on upside-down. That put the focuser knob above the visual back, diagonal, and eyepiece, rather than below, which was weird but not a deal-breaker. It also put the finder mount, a little Picatinny rail for the included red-dot finder, below the scope's equator instead of above it. (I had the same problem with the Apex 127 back in the day, as discussed in this post.) I figured heck with it, I'd get by just sighting down the tube. I do it all the time with my other scopes, and it works okay.

Correction: I do it all the time with my other non-Cassegrain scopes, and it works okay because they have short focal lengths and wide fields of view. The C8 has a focal length of 2032mm and a max field of view of a little less than 1 degree. Getting the scope pointed at anything without a finder involved a tremendous amount of faffing about, like 5 to 10 minutes per object. It would have been way simpler to just mount the RDF and crouch down to use it. But like a bloody-minded fool, I persevered without, and managed to observe the following objects the first night out:

• Jupiter: even at just 63x in not-great seeing, I caught the Great Red Spot easily in direct vision.
• Saturn: also at 63x, immediately got 4 moons. I'm sure more would be possible on a night with better seeing. I ran the magnification up a bit, but didn't see any more. That's how it goes when the seeing is bad.
• Moon: holy light-collecting area, Batman! At low power, with the entire just-past-full moon in the FOV, I heard a sizzling sound and a beam of moonlight shot out the back of my head. I ran the power up to 169x and saw subtle features in the maria that I'd never seen before, especially inside flooded craters on the margin of Mare Fecunditatis. Focus on the C8 was surprisingly snappy for a non-refractor: one second an object would be out of focus, then BAM, it was in, no question. I decided a star test was in order. But first, on the way to the pole:
• Mars: brilliant. Even at 81x with the included 25mm Plossl, I could see a wealth of detail on the surface, including the dark triangle of Syrtis Major.
• Polaris: used this for a star test. Happily, the collimation appears to be dead nuts on. The star test looks excellent. I hauled out a copy of Suiter's book, Star-Testing Astronomical Telescopes, which is on loan from a fellow club member. Any problems with the optics are below the threshold of my ability to diagnose. This is consistent with the fine details and low-contrast features I was picking up on other targets.
• Vega: I just used this to get on target at Epsilon Lyrae, but I was happy to see no chromatic aberration. I did catch just the faintest whiff of greenish-yellow on the limb of the moon, but I can't be sure that wasn't in the eyepiece. Long planetary and lunar sessions with the Apex 127 these past few weeks have shown me that eyepiece CA is real, and varies a lot between makes and models.
• Epsilon Lyrae: by now the seeing had turned to crap again, at least in the west. I only ran up to 169x and the stars were still too shimmery to "black-line" split, but I was happy to see that they were elongated into little 8s at 81x, which makes me think this scope will split the Double-Double below 100x on a still night. That's not any huge achievement, but it's nice to know the scope is performing within expectations.
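The magnification and field-of-view figures in this post follow from two simple ratios, sketched below in Python. The 52-degree apparent field assumed for the stock Plossl is my guess, not a published spec:

```python
# Magnification = telescope focal length / eyepiece focal length.
# True field of view ~= eyepiece apparent field / magnification.
SCOPE_FL_MM = 2032  # C8 focal length, from the text

def magnification(eyepiece_fl_mm):
    return SCOPE_FL_MM / eyepiece_fl_mm

def true_fov_deg(eyepiece_fl_mm, apparent_fov_deg=52):  # 52 deg assumed for a Plossl
    return apparent_fov_deg / magnification(eyepiece_fl_mm)

print(round(magnification(25)))    # 81x with the included 25 mm Plossl
print(round(true_fov_deg(25), 2))  # ~0.64 deg, under the ~1 deg max mentioned above
```

That sub-degree field is exactly why aiming a long-focal-length Cassegrain without a finder is such an exercise in frustration.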

In sum, the scope is optically great. I've been pretty lucky with most of my scopes, but I've had a couple of stinkers, so it's nice when they turn out better than expected, which this one certainly did.

In fact, it was a little anxiety-inducing. I really, really wanted the mount to work, too, so I'd have no reason to return the package and lose such a nice OTA. Yesterday (Thursday) I had to run some errands anyway, so I picked up some batteries. By this time I had a rechargeable external battery pack on the way, but not yet in. So I murdered some AAs to try out the mount.

The accessories that come with the NexStar 8SE. Clockwise from the upper right: a bubble level for leveling the tripod before you put on the mount and telescope; a 25mm Plossl (of course!); mirror star diagonal; and the hated red dot finder (RDF).

First thing: you really, really need a finder to get the scope aligned for GoTo. Which means the finder needs to be aligned to the scope, and I foolishly had not done that during the day. Have I ever said how much I hate, hate, hate red dot finders? My first accessory purchase for this scope, after the external battery pack, was a 9×50 RACI, again from Doug Rennie, who had gotten one for his NexStar 6SE but wasn't using it. Anyway, after some faffing about I got the RDF on and aligned. Did a rough alignment on some distant leaves, then got it dialed in on Capella.

I had just watched a video Doug had sent on the Auto 2-Star alignment (this one), so I did that, starting with Capella. The suggested second star was Vega, which was still visible in the west. Got the alignment dialed in on Vega, then I was off and running.

First object I tried was M27. I couldn't see a darned thing, BUT it was going down into the light dome over LA, and fighting the light of the nearly-full moon, so…who knows. After that I punched in Mars, and after the scope stopped slewing, Barsoom was in the eyepiece and looking good. Pleiades, ditto, although they spilled well beyond the sub-1-degree field of view. M34, ditto. Neptune, ditto, a tiny ball of blue floating out in the black. Then the moon, and like every one of the others, it was just about centered in the eyepiece. These objects were reasonably well-distributed over the sky, so I was pretty happy with the mount's ability to get the scope on target. I let the scope just track the moon for a few minutes while I took some notes.

One thing: I had left the tripod legs collapsed for max stability, but even sitting down that put the eyepiece about 7-8″ lower than it could have been, and punishingly low on some high targets. I figured I'd elevate the scope a little more in future sessions. To figure out how much I'd need to raise the tripod, I punched in Aldebaran to get a low-in-the-sky target, and the scope slewed right to it. I spent a few minutes using the hand control to drive around the Hyades, looking for double stars, then stopped to write some more notes. Have I mentioned that I'm including more double stars in my observing these days? Blame the Astronomical League's Double Star and Binocular Double Star observing programs, for acquainting me with so many fetching targets.

At this point the mount had been on for about an hour. I tried for the double star Eta Cassiopeiae, and the scope drove to Cass, but not to the star. I wondered if the batteries were dying (apparently GoTos lose their minds as the power runs out), so I punched in Polaris, hoping to get one last target, but the scope slewed off to the east, in completely the wrong direction, and then stopped moving entirely. I flipped the power switch off and put everything away. The scope ran for a little over an hour on the AAs, which is in line with what others have reported. And also a fairly expensive session!

The NexStar 8SE set up just inside the garage, looking south over the car for some alignment and tracking testing.

So, the OTA was optically great, and the mount worked, did GoTo, and tracked objects. The Talentcell battery pack (this one) arrived the day after the AA-powered session. What I wanted to do was set up in the driveway for a long planetary session, to see how the mount and battery pack work during extended tracking, and then take the whole rig up the mountain soon to see how it would work on a multiple-hour session under darker skies. Unfortunately, by this time ash from the wildfires was raining from the sky, and ash is hell on telescope optics, so both the driveway and Mount Baldy were out. Still, I was desperate to know how the whole rig would work together, so I set the scope up inside my garage, which has a south-facing door, and did some tests in the southern sky. After doing a two-star align on Fomalhaut and Nunki, the system was putting objects near the center of the FOV every time. I also tried a single-object solar system align on Jupiter, and that was good enough to put objects somewhere in the FOV of a low-power eyepiece, and to track for 20 minutes or so, but definitely not as good as the two-star align.

Why was I pushing to get this scope and mount tested when conditions were so crappy? That will be revealed in the next post.

• How to Explain Why You Like Something

Wolverine is haunted by things in his past he can't remember. I never thought about that before. How does that work?

âIâm haunted by horrible things I canât remember.â

âHow do you know they happened, or that they were horrible?â

âThatâs just the thing. I donât!â

As always, thanks for using my Amazon Affiliate links (US, UK, Canada).

## September 20, 2020

• Vote-by-mail meltdowns in 2020?

If your state is voting by mail, then you can't process all the ballot envelopes on November 3rd; it's just too labor-intensive.

The details vary by state, as every state has different laws, but (basically) for each mail-in ballot received by the county election clerk, they must:

• Sort the envelopes by "ballot style" (municipality or district) [CA and some other states don't need to sort]
• Look up the voter's information (written on the envelope) in the voter-registration database (to find the signature for comparison, and to record in the database that the voter has voted, so therefore can't vote twice)
• Compare the signature and accept or reject the envelope
• Remove identifying information from the envelope (to ensure the votes cannot be connected to the voter when the envelope is opened); in NJ it's on a tear-off perforated tab
• Open the envelope; check that the ballot type is right for the municipality or district
• If the ballot is deemed unscannable, remake (copy by hand) the ballot
• Flatten the ballot and put it in the batch for high-speed scanning+counting
• Run the batch through the optical scanner

States that (usually) vote in-person, with just a few absentee ballots per county, can do all this processing on election day.

States that vote mostly by mail need to do all the labor-intensive parts (that is, all but running through the scanner) well in advance of election day; it is many days of work. Running through the scanner can perhaps be saved for election day (or the days immediately before), because the scanners can process 75 to 300 ballots per minute.
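A quick back-of-the-envelope calculation shows why the scanning step is the comparatively easy part (the ballot count below is hypothetical):

```python
# Hours of machine time needed to scan a batch of ballots,
# given the 75-300 ballots-per-minute range quoted above.
def scan_hours(ballots, per_minute):
    return ballots / per_minute / 60

ballots = 500_000  # hypothetical county-sized batch
print(round(scan_hours(ballots, 75), 1))   # 111.1 hours on a slow scanner
print(round(scan_hours(ballots, 300), 1))  # 27.8 hours on a fast one
```

Even the slow case is a few scanner-days of work that can be parallelized across machines; the signature checking and envelope handling upstream of it is what takes weeks.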

Therefore, vote-by-mail (or mostly-vote-by-mail) states such as OR, UT, CO, HI, and WA have developed (over the years) procedures to process vote-by-mail envelopes in a timely way, as the ballots arrive in the weeks before the election. Some states that are mailing ballots to all voters just for the COVID-19 pandemic this year include NJ, CA, and NV; these states have adjusted their laws to allow processing the envelopes in the weeks before November 3rd. That makes sense.

Some states are sticking with in-person voting, but they allow processing of absentee ballots in a timely way (before November 2nd). That should be OK. These states include AK, AR, AZ, FL, GA, ID, IN, KS, ME, MN, MO, NC, NH, TN, TX, VT.

Some states are encouraging vote-by-mail (that is, they are mailing absentee-ballot request forms to every voter) while also planning for in-person voting. The states that are doing this (with timely processing of absentee ballot envelopes before November 2nd) are CT, DE, IA, IL, MD, MA, NE, OH, RI.

Signature cure: There's another advantage to processing ballot envelopes early. In many (but not all) of these states, if a voter's signature does not match (or is missing), there's time to contact the voter and let the voter fix the problem so the vote can count. If you process the signatures only on November 2nd or 3rd, that's not possible.

## Potential election meltdown states

Several states are sticking with in-person voting this year, and (as usual) planning to process all their absentee ballots on November 3rd or November 2nd. That will be OK, unless they experience a much greater rate of absentee ballots than usual. If a state is accustomed to 5% of the voters requesting (and returning) absentee ballots, and they get 40%, then it may take them several days after November 3rd to finish counting the votes. These states include AL, KY, LA, MS, NY, ND, PA, SC, SD, WV, WY.

Experts are particularly concerned about PA based on experience in the primary; and because of the late adoption of procedural changes, delayed by lawsuits that have only just been resolved.

Voters in these states should strongly consider taking this advice: vote in person.

## Probable election meltdown states

What would be really dysfunctional would be to encourage vote-by-mail, but then to wait until November 3rd (or November 2nd) to start processing those envelopes. That's a recipe for election meltdown. The states that are heading for this disaster are MI, VT, and WI. Michigan and Wisconsin are mailing absentee-ballot-request forms to every voter, but waiting until the last minute to process them after they're returned. Vermont is even worse: the state is mailing a ballot to every voter, but won't start processing the returned envelopes until November 2nd.

The Michigan State Senate recently approved a bill to start processing on November 2nd instead of November 3rd. If passed into law, that's better than nothing (it will certainly help), but it may turn out to be inadequate.

Voters in these states should strongly consider taking this advice: vote in person.

## Late ballot arrival states

Some states will count ballots postmarked by November 3rd, as long as they arrive by November 5th, or November 10th, etc. (depending on the state). (And the post office doesn't "postmark" prepaid "business reply mail", but it can provide other evidence of when it was mailed, so states should be careful about how they use the word "postmark.")

AK, CA, DC, IL, IA, KS, MD, MN, MS, NV, NJ, NY, NC, ND, OH, PA, TX, VA, WA, WV.

In these states, final election results cannot be known until several days after the election. If the late-arriving ballots are more for one candidate than another, this will cause an apparent shift in election results. That's a meltdown of a different kind.

## No-late-ballot-arrival states

Several states do not accept ballots that arrive after election day, even if postmarked before the deadline. If the postal service is unusually slow this year, then these states may disenfranchise many voters: AL, AZ, AK, CO, CT, DE, FL, GA, HI, ID, IN, KY, LA, ME, MA, MI, MN, MO, MT, NE, NH, NM, OK, OR, PA, RI, SC, SD, TN, VT, WI, WY.

Iâm not saying which is the right or wrong answer: accept ballots past November 3rd (if postmarked by the state-set deadline) and suffer from delays in reporting results; or stop accept ballots November 3rd and disenfranchise voters. Pick your poison. A compromise in the middle is to accept absentee ballots dropped off in drop boxes, vote centers, and polling places, as several (but not all) states do.

## Polling-place meltdown states

So far, Iâve just been talking about the processing of absentee ballots. But some states and cities, in the past, have experienced hours-long lines at polling places, because of (1) underprovisioning of touch-screen voting machines, or (2) voting machines (or e-pollbooks) failing to turn on in the morning, or both at once.

Famous examples include Cleveland in 2004 and Atlanta in 2020 (primary election). Not coincidentally, both of these cities used touch-screen voting machines; because those are expensive (and slow to vote on), this can lead to underprovisioning of machines compared to how many voters there are.

In contrast, states that use hand-marked paper ballots can (usually) provide enough pens and enough cardboard privacy screens for many voters. The challenge, this year, will be to do this while also social distancing.

Letâs hope that the in-person voting states, this year, can avoid meltdowns (long lines, or COVID transmission) at their polling places.

The data from this article came, in part, from the National Conference of State Legislatures (and this NCSL page too).

[edited 9/22/20: MN is a late-ballot-arrival state]

• Montana - Three Lakes Peak 27July2019
Summit: W7M/LO-002
Voice Cellular Coverage: Good, very usable
Data Cellular Coverage: Good, very usable
Cellular Provider: Verizon
APRS Coverage: Don't know

Three Lakes Peak is just within the Flathead Indian Reservation along the Reservation Divide. The summit has great views of the Confederated Salish and Kootenai Tribes' land, the Flathead River, three alpine lakes, the Mission Mountains, and the Ninemile Valley. A Flathead Reservation Use Permit is required.

Pictures:

• Watch The Scorpionâs Eye Blink! Moon Occults Acrab Sept. 21

This should be fun to watch. On Monday night, Sept. 21, the waxing crescent moon will briefly cover up the bright star Acrab in the head of Scorpius during evening twilight.

Acrab, also known as Beta Scorpii, will disappear along the "dark" edge of the moon (the part lit by sunlight reflected off the Earth) and reappear on the opposite, bright side. Astronomers call these stellar peek-a-boo events occultations. What makes this one extra special is that Acrab is a double star with a 5th-magnitude companion. It's an absolutely gorgeous sight in a small telescope, and I consider it one of the best binaries in the summer sky.

That means weâll see two occultations one right after the other. Very cool! Scorpius is low in the southwestern sky this time of year, so be sure to find a place where you can get a clear look at the moon without interference from buildings, trees or mountains.

The event will be visible during evening twilight for much of the U.S., Canada, Mexico, and Central America. Unfortunately, the moon sets along the East Coast before the occultation begins. Similarly, observers along the West Coast from about San Francisco north will miss the show because it wraps up before sunset. But if you're in the east-central to west-central U.S. and Canada, you'll see the star's dramatic disappearance along the upper left side of the moon.

Folks living in the western third of the U.S. and Canada will miss the disappearance but will see the star return to view along the moon's bright edge during evening twilight. South of Central America the moon narrowly misses the star; skywatchers there will see a pretty conjunction instead.

To find out the times of disappearance and/or reappearance for your location, consult this list of hundreds of cities prepared by IOTA, the International Occultation Timing Association. The times shown are Universal Time, or UT. To convert to Eastern, subtract 4 hours; 5 hours for Central; 6 hours for Mountain; and 7 hours for Pacific. For example, disappearance for Chicago occurs at 1:44:57 (rounded to 1:45) on Sept. 22. Back up 5 hours and that becomes 8:45 p.m. on Sept. 21.
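If you'd rather let code do the conversion, here is a minimal Python sketch reproducing the Chicago example (the -5 hour offset for Central Daylight Time is hard-coded for simplicity):

```python
from datetime import datetime, timedelta, timezone

# IOTA lists the Chicago disappearance at 1:45 UT on Sept. 22.
event_ut = datetime(2020, 9, 22, 1, 45, tzinfo=timezone.utc)

cdt = timezone(timedelta(hours=-5))  # Central Daylight Time, UTC-5
local = event_ut.astimezone(cdt)
print(local.strftime("%b %d, %I:%M %p"))  # Sep 21, 08:45 PM
```

The same pattern works for any of the IOTA times: just swap in the offset for your time zone (-4 Eastern, -6 Mountain, -7 Pacific).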

You should have no problem seeing Beta's disappearance with binoculars, but a telescope will split the star in two for an extra dimension of enjoyment. Be sure you're out at least 10 minutes before Acrab's farewell and return so you don't miss a thing. Minutes before it disappears, the brighter star will hover seemingly forever at the moon's edge … and then vanish in a breathtaking instant, followed by its companion.

Stars disappear and reappear along the lunar edge with such suddenness for two reasons: they're so far away that they're essentially points of light, and the moon lacks a substantial atmosphere. Without air to filter the star's light, it remains bright right up to the edge and then blinks off.

I hope you get to see this!

• How to Make Biomass Energy Sustainable Again

From the Neolithic to the beginning of the twentieth century, coppiced woodlands, pollarded trees, and hedgerows provided people with a sustainable supply of energy, materials, and food.

Pollarded trees in Germany. Image: René Schröder (CC BY-SA 4.0).

## How is Cutting Down Trees Sustainable?

Advocating for the use of biomass as a renewable source of energy, replacing fossil fuels, has become controversial among environmentalists. The comments on the previous article, which discussed thermoelectric stoves, illustrate this:

• âAs the recent film Planet of the Humans points out, biomass a.k.a. dead trees is not a renewable resource by any means, even though the EU classifies it as such.â
• âHow is cutting down trees sustainable?â
• âArticle fails to mention that a wood stove produces more CO2 than a coal power plant for every ton of wood/coal that is burned.â
• âThis is pure insanity. Burning trees to reduce our carbon footprint is oxymoronic.â
• âThe carbon footprint alone is just horrifying.â
• âThe biggest problem with burning anything is once it's burned, it's gone forever.â
• âThe only silly question I can add to to the silliness of this piece, is where is all the wood coming from?â

In contrast to what the comments suggest, the article does not advocate the expansion of biomass as an energy source. Instead, it argues that already-burning biomass fires, used by roughly 40% of today's global population, could also produce electricity as a by-product if they are outfitted with thermoelectric modules. Nevertheless, several commenters maintained their criticism after they read the article more carefully. One of them wrote: "We should aim to eliminate the burning of biomass globally, not make it more attractive."

Apparently, high-tech thinking has permeated the minds of (urban) environmentalists to such an extent that they view biomass as an inherently troublesome energy source, similar to fossil fuels. To be clear, critics are right to call out unsustainable practices in biomass production. However, these are the consequences of a relatively recent, "industrial" approach to forestry. When we look at historical forest management practices, it becomes clear that biomass is potentially one of the most sustainable energy sources on this planet.

## Coppicing: Harvesting Wood Without Killing Trees

Nowadays, most wood is harvested by killing trees. Before the Industrial Revolution, a lot of wood was harvested from living trees, which were coppiced. The principle of coppicing is based on the natural ability of many broad-leaved species to regrow from damaged stems or roots, whether the damage is caused by fire, wind, snow, animals, pathogens, or (on slopes) falling rocks. Coppice management involves cutting down trees close to ground level, after which the base, called the "stool", develops several new shoots, resulting in a multi-stemmed tree.

A coppice stool. Image: Geert Van der Linden.

A recently coppiced patch of oak forest. Image: Henk vD. (CC BY-SA 3.0)

Coppice stools in Surrey, England. Image: Martinvl (CC BY-SA 4.0)

When we think of a forest or a tree plantation, we imagine it as a landscape stacked with tall trees. However, until the beginning of the twentieth century, at least half of the forests in Europe were coppiced, giving them a more bush-like appearance. [1] The coppicing of trees can be dated back to the Stone Age, when people built pile dwellings and trackways crossing prehistoric fenlands using thousands of branches of equal size, a feat that can only be accomplished by coppicing. [2]

The approximate historical range of coppice forests in the Czech Republic (above, in red) and in Spain (below, in blue). Source: "Coppice forests in Europe", see [1].

Ever since then, the technique formed the standard approach to wood production, not just in Europe but almost all over the world. Coppicing expanded greatly during the eighteenth and nineteenth centuries, when population growth and the rise of industrial activity (glass, iron, tile, and lime manufacturing) put increasing pressure on wood reserves.

## Short Rotation Cycles

Because the young shoots of a coppiced tree can exploit an already well-developed root system, a coppiced tree produces wood faster than a tall tree. Or, to be more precise: although its photosynthetic efficiency is the same, a tall tree produces more biomass below ground (in the roots), while a coppiced tree produces more biomass above ground (in the shoots), which is clearly more practical for harvesting. [3] Partly because of this, coppicing was based on short rotation cycles, often of around two to four years, although both yearly rotations and rotations of up to 12 years or longer also occurred.

Coppice stools with different rotation cycles. Images: Geert Van der Linden.

Because of the short rotation cycles, a coppice forest was a very quick, regular and reliable supplier of firewood. Often, it was cut up into a number of equal compartments that corresponded to the number of years in the planned rotation. For example, if the shoots were harvested every three years, the forest was divided into three parts, and one of these was coppiced each year. Short rotation cycles also meant that it took only a few years before the carbon released by the burning of the wood was compensated by the carbon that was absorbed by new growth, making a coppice forest truly carbon neutral. In very short rotation cycles, new growth could even be ready for harvest by the time the old growth wood had dried enough to be burned.
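The compartment scheme described above amounts to a simple modular rotation. A minimal sketch in Python (the function name and the printed example are mine, purely illustrative):

```python
# Hypothetical sketch of the rotation described above: a coppice divided
# into n equal compartments, with one compartment cut each year, so every
# stool regrows undisturbed for a full rotation before its next cut.
def harvest_schedule(compartments: int, years: int) -> list[int]:
    """Return the 1-based compartment cut in each successive year."""
    return [(year % compartments) + 1 for year in range(years)]

# A three-year rotation over six years: each compartment is harvested
# every third year, as in the example in the text.
print(harvest_schedule(3, 6))  # [1, 2, 3, 1, 2, 3]
```

The same modular pattern works for any rotation length; a 12-year rotation simply means twelve compartments, each cut once every twelfth year.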

In some tree species, the stump sprouting ability decreases with age. After several rotations, these trees were either harvested in their entirety and replaced by new trees, or converted into a coppice with a longer rotation. Other tree species resprout well from stumps of all ages, and can provide shoots for centuries, especially on rich soils with a good water supply. Surviving coppice stools can be more than 1,000 years old.

## Biodiversity

A coppice can be called a 'coppice forest' or a 'coppice plantation', but in reality it was neither a forest nor a plantation – perhaps something in between. Although managed by humans, coppice forests were not environmentally destructive; on the contrary, harvesting wood from living trees instead of killing them is beneficial for the life forms that depend on them. Coppice forests can have a richer biodiversity than unmanaged forests, because they always contain areas with different stages of light and growth. None of this is true in industrial wood plantations, which support little or no plant and animal life, and which have longer rotation cycles (of at least twenty years).

Coppice stools in the Netherlands. Image: K. Vliet (CC BY-SA 4.0)

Sweet chestnut coppice at Flexham Park, Sussex, England. Image: Charlesdrakew, public domain.

Our forebears also cut down tall, standing trees with large-diameter stems – just not for firewood. Large trees were only 'killed' when large timber was required, for example for the construction of ships, buildings, bridges, and windmills. [4] Coppice forests could contain tall trees (a 'coppice-with-standards'), which were left to grow for decades while the surrounding trees were regularly pruned. However, even these standing trees could be partly coppiced, for example by harvesting their side branches while they were alive (shredding).

## Multipurpose Trees

The archetypal wood plantation promoted by the industrial world involves regularly spaced rows of trees in even-aged, monocultural stands, providing a single output – timber for construction, pulpwood for paper production, or fuelwood for power plants. In contrast, trees in pre-industrial coppice forests had multiple purposes. They provided firewood, but also construction materials and animal fodder.

The targeted wood dimensions, determined by the use of the shoots, set the rotation period of the coppice. Because not every type of wood was suited for every type of use, coppiced forests often consisted of a variety of tree species at different ages. Several age classes of stems could even be rotated on the same coppice stool ('selection coppice'), and the rotations could evolve over time according to the needs and priorities of the economic activities.

A small woodland with a diverse mix of coppiced, pollarded and standard trees. Image: Geert Van der Linden.

Coppiced wood was used to build almost anything that was needed in a community. [5] For example, young willow shoots, which are very flexible, were braided into baskets and crates, while sweet chestnut prunings, which do not expand or shrink after drying, were used to make all kinds of barrels. Ash and goat willow, which yield straight and sturdy wood, provided the material for making the handles of brooms, axes, shovels, rakes and other tools.

Young hazel shoots were split along the entire length, braided between the wooden beams of buildings, and then sealed with loam and cow manure – the so-called wattle-and-daub construction. Hazel shoots also kept thatched roofs together. Alder and willow, which have almost limitless life expectancy under water, were used as foundation piles and river bank reinforcements. The construction wood that was taken out of a coppice forest did not diminish its energy supply: because the artefacts were often used locally, at the end of their lives they could still be burned as firewood.

Harvesting leaf fodder in Leikanger kommune, Norway. Image: Leif Hauge. Source: [19]

Coppice forests also supplied food. On the one hand, they provided people with fruits, berries, truffles, nuts, mushrooms, herbs, honey, and game. On the other hand, they were an important source of winter fodder for farm animals. Before the Industrial Revolution, many sheep and goats were fed with so-called 'leaf fodder' or 'leaf hay' – leaves with or without twigs. [6]

Elm and ash were among the most nutritious species, but sheep also got birch, hazel, linden, bird cherry and even oak, while goats were also fed with alder. In mountainous regions, horses, cattle, pigs and silkworms could be given leaf hay too. Leaf fodder was grown in rotations of three to six years, when the branches provided the highest ratio of leaves to wood. Once the leaves had been eaten by the animals, the wood could still be burned.

## Pollards & Hedgerows

Coppice stools are vulnerable to grazing animals, especially when the shoots are young. Therefore, coppice forests were usually protected against animals by building a ditch, fence or hedge around them. In contrast, pollarding allowed animals and trees to be mixed on the same land. Pollarded trees were pruned like coppices, but to a height of at least two metres to keep the young shoots out of reach of grazing animals.

Pollarded trees in Segovia, Spain. Image: Ecologistas en Acción.

Wooded meadows and wood pastures – mosaics of pasture and forest – combined the grazing of animals with the production of fodder, firewood and/or construction wood from pollarded trees. 'Pannage' or 'mast feeding' was the method of sending pigs into pollarded oak forests during autumn, where they could feed on fallen acorns. The system formed the mainstay of pork production in Europe for centuries. [7] The 'meadow orchard' or 'grazed orchard' combined fruit cultivation and grazing – pollarded fruit trees offered shade to the animals, while the animals could not reach the fruit but fertilised the trees.

Forest or pasture? Something in between. A "dehesa" (pig forest farm) in Spain. Image by Basotxerri (CC BY-SA 4.0).

Cattle graze among pollarded trees in Huelva, Spain. (CC BY-SA 2.5)

A meadow orchard surrounded by a living hedge in Rijkhoven, Belgium. Image: Geert Van der Linden.

While agriculture and forestry are now strictly separated activities, in earlier times the farm was the forest and vice versa. It would make a lot of sense to bring them back together, because agriculture and livestock production – not wood production – are the main drivers of deforestation. If trees provide animal fodder, meat and dairy production should not lead to deforestation. If crops can be grown in fields with trees, agriculture should not lead to deforestation. Forest farms would also improve animal welfare, soil fertility and erosion control.

## Line Plantings

Extensive plantations could consist of coppiced or pollarded trees, and were often managed as a commons. However, coppicing and pollarding were not techniques seen only in large-scale forest management. Small woodlands between fields, or next to a rural house, managed by an individual household, were also coppiced or pollarded. A lot of wood was also grown as line plantings around farmyards, fields and meadows, near buildings, and along paths, roads and waterways. Here, lopped trees and shrubs could also appear in the form of hedgerows: thickly planted hedges. [8]

Hedge landscape in Normandy, France, around 1940. Image: W Wolny, public domain.

Line plantings in Flanders, Belgium. Detail from the Ferraris map, 1771-78.

Although line plantings are usually associated with the use of hedgerows in England, they were common in large parts of Europe. In 1804, English historian Abbé Mann expressed his surprise when he wrote about his trip to Flanders (today part of Belgium): "All fields are enclosed with hedges, and thick set with trees, insomuch that the whole face of the country, seen from a little height, seems one continued wood". Typical for the region was the large number of pollarded trees. [8]

Like coppice forests, line plantings were diverse and provided people with firewood, construction materials and leaf fodder. However, unlike coppice forests, they had extra functions because of their specific location. [9] One of these was plot separation: keeping farm animals in, and keeping wild animals or cattle grazing on common lands out. Various techniques existed to make hedgerows impenetrable, even for small animals such as rabbits. Around meadows, hedgerows or rows of very closely planted pollarded trees ('pollarded tree hedges') could stop large animals such as cows. If willow wicker was braided between them, such a line planting could also keep small animals out. [8]

Detail of a yew hedge. Image: Geert Van der Linden.

Hedgerow. Image: Geert Van der Linden.

Pollarded tree hedge in Nieuwekerken, Belgium. Image: Geert Van der Linden.

Coppice stools in a pasture. Image: Jan Bastiaens.

Trees and line plantings also offered protection against the weather. Line plantings protected fields, orchards and vegetable gardens against the wind, which could erode the soil and damage the crops. In warmer climates, trees could shield crops from the sun and fertilise the soil. Pollarded lime trees, which have very dense foliage, were often planted right next to wattle-and-daub buildings in order to protect them from wind, rain and sun. [10]

Dunghills were protected by one or more trees, preventing the valuable resource from drying out in the sun or wind. In the yard of a watermill, the wooden water wheel was shielded by a tree to prevent the wood from shrinking or expanding in times of drought or inactivity. [8]

A pollarded tree protects a water wheel. Image: Geert Van der Linden.

Pollarded lime trees protect a farm building in Nederbrakel, Belgium. Image: Geert Van der Linden.

## Location Matters

Along paths, roads and waterways, line plantings had many of the same location-specific functions as on farms. Cattle and pigs were herded along dedicated droveways lined with hedgerows, coppices and/or pollards. When the railroads appeared, line plantings prevented collisions with animals. They protected road travellers from the weather, and marked the route so that people and animals would not stray from the road in a snowy landscape. They prevented soil erosion at riverbanks and hollow roads.

All these functions of line plantings could also be fulfilled by dead wood fences, which can be moved more easily than hedgerows, take up less space, don't compete with crops for light and nutrients, and can be ready in a short time. [11] However, in times and places where wood was scarce, a living hedge was often preferred (and sometimes required) because it was a continuous wood producer, while a dead wood fence was a continuous wood consumer. A dead wood fence may save space and time on the spot, but it implies that the wood for its construction and maintenance is grown and harvested elsewhere in the surroundings.

Pollarded tree hedge in Belgium. Image: Geert Van der Linden.

Local use of wood resources was maximised. For example, the tree that was planted next to the waterwheel was not just any tree. It was red dogwood or elm, the wood best suited for constructing the interior gearwork of the mill. When a new part was needed for repairs, the wood could be harvested right next to the mill. Likewise, line plantings along dirt roads were used for the maintenance of those roads. The shoots were tied together in bundles and used as a foundation or to fill up holes. Because the trees were coppiced or pollarded and not cut down, no function was ever at the expense of another.

Nowadays, when people advocate for the planting of trees, targets are set in terms of forested area or the number of trees, and little attention is given to their location – which could even be on the other side of the world. However, as these examples show, planting trees close by and in the right location can significantly increase their usefulness.

## Shaped by Limits

Coppicing has largely disappeared in industrial societies, although pollarded trees can still be found along streets and in parks. Their prunings, which once sustained entire communities, are now considered waste products. If it worked so well, why was coppicing abandoned as a source of energy, materials and food? The answer is short: fossil fuels. Our forebears relied on coppice because they had no access to fossil fuels, and we don't rely on coppice because we have.

Our forebears relied on coppice because they had no access to fossil fuels, and we don't rely on coppice because we have

Most obviously, fossil fuels have replaced wood as a source of energy and materials. Coal, gas and oil took the place of firewood for cooking, space heating, water heating and industrial processes based on thermal energy. Metal, concrete and brick – materials that had been around for many centuries – only became widespread alternatives to wood after they could be made with fossil fuels, which also brought us plastics. Artificial fertilizers – products of fossil fuels – boosted the supply and the global trade of animal fodder, making leaf fodder obsolete. The mechanisation of agriculture – driven by fossil fuels – led to farming on much larger plots along with the elimination of trees and line plantings on farms.

Less obvious, but at least as important, is that fossil fuels have transformed forestry itself. Nowadays, the harvesting, processing and transporting of wood is heavily supported by the use of fossil fuels, while in earlier times they were entirely based on human and animal power – which themselves get their fuel from biomass. It was the limitations of these power sources that created and shaped coppice management all over the world.

Harvesting wood from pollarded trees in Belgium, 1947. Credit: Zeylemaker, Co., Nationaal Archief (CC0)

Transporting firewood in the Basque Country. Source: Notes on pollards: best practices' guide for pollarding. Gipuzkoako Foru Aldundia-Diputación Foral de Gipuzkoa, 2014.

Wood was harvested and processed by hand, using simple tools such as knives, machetes, billhooks, axes and (later) saws. Because the labour requirements of harvesting trees by hand increase with stem diameter, it was cheaper and more convenient to harvest many small branches instead of cutting down a few large trees. Furthermore, there was no need to split coppiced wood after it was harvested. Shoots were cut to a length of around one metre, and tied together in 'faggots', which were an easy size to handle manually.

It was the limitations of human and animal power that created and shaped coppice management all over the world

To transport firewood, our forebears relied on animal-drawn carts over often very bad roads. This meant that, unless it could be transported over water, firewood had to be harvested within a radius of at most 15-30 km from the place where it was used. [12] Beyond those distances, the animal power required for transporting the firewood was larger than its energy content, and it would have made more sense to grow firewood on the pasture that fed the draft animal. [13] There were some exceptions to this rule. Some industrial activities, like iron and potash production, could be moved to more distant forests – transporting iron or potash was more economical than transporting the firewood required for their production. However, in general, coppice forests (and of course also line plantings) were located in the immediate vicinity of the settlement where the wood was used.

In short, coppicing appeared in a context of limits. Because of its faster growth and versatile use of space, it maximised the local wood supply of a given area. Because of its use of small branches, it made manual harvesting and transporting as economical and convenient as possible.

## Can Coppicing be Mechanised?

From the twentieth century onwards, harvesting was done by motor saw, and since the 1980s, wood has increasingly been harvested by powerful vehicles that can fell entire trees and cut them up on the spot in a matter of minutes. Fossil fuels have also brought better transportation infrastructures, which have unlocked wood reserves that were inaccessible in earlier times. Consequently, firewood can now be grown on one side of the planet and consumed at the other.

The use of fossil fuels adds carbon emissions to what used to be a completely carbon neutral activity, but much more important is that it has pushed wood production to a larger – unsustainable – scale. [14] Fossil fueled transportation has destroyed the connection between supply and demand that governed local forestry. If the wood supply is limited, a community has no other choice than to make sure that the wood harvest rate and the wood renewal rate are in balance. Otherwise, it risks running out of fuelwood, craft wood and animal fodder, and would have to be abandoned.

Mechanically harvested willow coppice plantation. Shortly after coppicing (right); three-year-old growth (left). Image: Lignovis GmbH (CC BY-SA 4.0).

Likewise, fully mechanised harvesting has pushed forestry to a scale that is incompatible with sustainable forest management. Our forebears did not cut down large trees for firewood, because it was not economical. Today, the forest industry does exactly that because mechanisation makes it the most profitable thing to do. Compared to industrial forestry, where one worker can harvest up to 60 m³ of wood per hour, coppicing is extremely labour-intensive. Consequently, it cannot compete in an economic system that fosters the replacement of human labour with machines powered by fossil fuels.

Coppicing cannot compete in an economic system that fosters the replacement of human labour with machines powered by fossil fuels

Some scientists and engineers have tried to solve this by demonstrating coppice harvesting machines. [15] However, mechanisation is a slippery slope. The machines are only practical and economical on somewhat larger tracts of woodland (>1 ha) which contain coppiced trees of the same species and the same age, with only one purpose (often fuelwood for power generation). As we have seen, this excludes many older forms of coppice management, such as the use of multipurpose trees and line plantings. Add fossil fueled transportation to the mix, and the result is a type of industrial coppice management that brings few improvements.

Coppiced trees along a brook in 's Gravenvoeren, Belgium. Image: Geert Van der Linden.

Sustainable forest management is essentially local and manual. This doesn't mean that we need to copy the past to make biomass energy sustainable again. For example, the radius of the wood supply could be increased by low energy transport options, such as cargo bikes and aerial ropeways, which are much more efficient than horse- or ox-drawn carts over bad roads, and which could be operated without fossil fuels. Hand tools have also improved in terms of efficiency and ergonomics. We could even use motor saws that run on biofuels – a much more realistic application than their use in car engines. [16]

## The Past Lives On

This article has compared industrial biomass production with historical forms of forest management in Europe, but in fact there was no need to look to the past for inspiration. The 40% of the global population in poorer societies who still burn wood for cooking, water heating and/or space heating are not clients of industrial forestry. Instead, they obtain firewood in much the same way as we did in earlier times, although the tree species and the environmental conditions can be very different. [17]

A 2017 study calculated that wood consumption by people in 'developing' societies – accounting for 55% of the global wood harvest and 9-15% of total global energy consumption – causes only 2-8% of anthropogenic climate impacts. [18] Why so little? Because around two-thirds of the wood harvested in developing societies is harvested sustainably, write the scientists. People collect mainly dead wood, they grow a lot of wood outside the forest, they coppice and pollard trees, and they prefer multipurpose trees, which are too valuable to cut down. The motives are the same as those of our ancestors: people have no access to fossil fuels and are thus tied to a local wood supply, which needs to be harvested and transported manually.

African women carrying firewood. (CC BY-SA 4.0)

These numbers confirm that it is not biomass energy that's unsustainable. If the whole of humanity lived like the 40% that still burns biomass regularly, climate change would not be an issue. What is really unsustainable is a high energy lifestyle. Obviously, we cannot sustain a high-tech industrial society on coppice forests and line plantings alone. But the same is true for any other energy source, including uranium and fossil fuels.

Written by Kris De Decker. Proofread by Alice Essam.

* Support Low-tech Magazine via Paypal or Patreon.

## References

[1] Multiple references:

Unrau, Alicia, et al. Coppice forests in Europe. University of Freiburg, 2018.

Notes on pollards: best practices' guide for pollarding. Gipuzkoako Foru Aldundia-Diputación Foral de Gipuzkoa, 2014.

Aarden wallen in Europa, in "Tot hier en niet verder: historische wallen in het Nederlandse landschap", Henk Baas, Bert Groenewoudt, Pim Jungerius and Hans Renes, Rijksdienst voor het Cultureel Erfgoed, 2012.

[2] Logan, William Bryant. Sprout lands: tending the endless gift of trees. WW Norton & Company, 2019.

[3] Holišová, Petra, et al. "Comparison of assimilation parameters of coppiced and non-coppiced sessile oaks". Forest-Biogeosciences and Forestry 9.4 (2016): 553.

[4] Perlin, John. A forest journey: the story of wood and civilization. The Countryman Press, 2005.

[5] Most of this information comes from a Belgian publication (in Dutch): Handleiding voor het inventariseren van houten beplantingen met erfgoedwaarde. Geert Van der Linden, Nele Vanmaele, Koen Smets en Annelies Schepens, Agentschap Onroerend Erfgoed, 2020. For a good (but concise) reference in English, see Rotherham, Ian. Ancient Woodland: history, industry and crafts. Bloomsbury Publishing, 2013.

[6] While leaf fodder was used all over Europe, it was especially widespread in mountainous regions, such as Scandinavia, the Alps and the Pyrenees. For example, in Sweden in 1850, 1.3 million sheep and goats consumed a total of 190 million sheaves annually, for which at least 1 million hectares of deciduous woodland was exploited, often in the form of pollards. The harvest of leaf fodder predates the use of hay as winter fodder: branches could be cut with stone tools, while cutting grass requires bronze or iron tools. While most coppicing and pollarding was done in winter, harvesting leaf fodder logically happened in summer. Bundles of leaf fodder were often put in the pollarded trees to dry. References:

Logan, William Bryant. Sprout lands: tending the endless gift of trees. WW Norton & Company, 2019.

Slotte H., "Harvesting of leaf hay shaped the Swedish landscape", Landscape Ecology 16.8 (2001): 691-702.

[7] Wealleans, Alexandra L. "Such as pigs eat: the rise and fall of the pannage pig in the UK". Journal of the Science of Food and Agriculture 93.9 (2013): 2076-2083.

[8] This information is based on several Dutch language publications:

Handleiding voor het inventariseren van houten beplantingen met erfgoedwaarde. Geert Van der Linden, Nele Vanmaele, Koen Smets en Annelies Schepens, Agentschap Onroerend Erfgoed, 2020.

Handleiding voor het beheer van hagen en houtkanten met erfgoedwaarde. Thomas Van Driessche, Agentschap Onroerend Erfgoed, 2019.

Knotbomen, knoestige knapen: een praktische gids. Geert Van der Linden, Jos Schenk, Bert Geeraerts, Provincie Vlaams-Brabant, 2017.

Handleiding: Het beheer van historische dreven en wegbeplantingen. Thomas Van Driessche, Paul Van den Bremt and Koen Smets. Agentschap Onroerend Erfgoed, 2017.

Dirkmaat, Jaap. Nederland weer mooi: op weg naar een natuurlijk en idyllisch landschap. ANWB Media-Boeken & Gidsen, 2006.

For a good source in English, see: Müller, Georg. Europe's Field Boundaries: Hedged banks, hedgerows, field walls (stone walls, dry stone walls), dead brushwood hedges, bent hedges, woven hedges, wattle fences and traditional wooden fences. Neuer Kunstverlag, 2013.

If line plantings were mainly used for wood production, they were planted at some distance from each other, allowing more light and thus a higher wood production. If they were mainly used as plot boundaries, they were planted more closely together. This diminished the wood harvest but allowed for a thicker growth.

[9] In fact, coppice forests could also have a location-specific function: they could be placed around a city or settlement to form an impenetrable obstacle for attackers, either by foot or by horse. They could not easily be destroyed by shooting, in contrast to a wall. Source: [5]

[10] Lime trees were even used for fire prevention. They were planted right next to the baking house in order to stop the spread of sparks to wood piles, haystacks and thatched roofs. Source: [5]

[11] The fact that living hedges and trees are harder to move than dead wood fences and posts also had practical advantages. In Europe until the French era, there was no land register, and boundaries were physically indicated in the landscape. The surveyor's work was sealed with the planting of a tree, which is much harder to move on the sly than a pole or a fence. Source: [5]

[12] And, if it could be brought in over water from longer distances, the wood had to be harvested within 15-30 km of the river or coast.

[13] Sieferle, Rolf Peter. The Subterranean Forest: energy systems and the industrial revolution. White Horse Press, 2001.

Jalas, Mikko, and Jenny Rinkinen. "Stacking wood and staying warm: time, temporality and housework around domestic heating systems", Journal of Consumer Culture 16.1 (2016): 43-60.

Rinkinen, Jenny. "Demanding energy in everyday life: insights from wood heating into theories of social practice." (2015).

[15] Vanbeveren, S.P.P., et al. "Operational short rotation woody crop plantations: manual or mechanised harvesting?" Biomass and Bioenergy 72 (2015): 8-18.

[16] However, chainsaws can have adverse effects on some tree species, such as reduced growth or an increased risk of transferring disease.

[17] Multiple sources that refer to traditional forestry practices in Africa:

Leach, Gerald, and Robin Mearns. Beyond the woodfuel crisis: people, land and trees in Africa. Earthscan, 1988.

Leach, Melissa, and Robin Mearns. "The lie of the land: challenging received wisdom on the African environment." (1998)

Cline-Cole, Reginald A. "Political economy, fuelwood relations, and vegetation conservation: Kasar Kano, Northern Nigeria, 1850-1915." Forest & Conservation History 38.2 (1994): 67-78.

[18] Multiple references:

Bailis, Rob, et al. "Getting the numbers right: revisiting woodfuel sustainability in the developing world." Environmental Research Letters 12.11 (2017): 115002.

Masera, Omar R., et al. "Environmental burden of traditional bioenergy use." Annual Review of Environment and Resources 40 (2015): 121-150.

Study downgrades climate impact of wood burning, John Upton, Climate Central, 2015.

[19] Haustingsskog: [revidert] rettleiar for restaurering og skjøtsel. Garnås, Ingvill; Hauge, Leif; Svalheim, Ellen. NIBIO Rapport, vol. 4, nr. 150, 2018.

• QRP Afield 2020 Sleepy Tale

Yesterday I participated in the QRP Club of New England's annual QRP Afield contest. This low-intensity end-of-summer low-powered radio contest is always scheduled for the third Saturday in September. It turns out it coincided with several other contests. Here are … Continue reading →

The post QRP Afield 2020 Sleepy Tale first appeared on W3ATB.

• Nebraska: Images of the Cornhusker State (31 photos)

The state of Nebraska has a population of 1.9 million, and ranks 16th in area. It is largely a land of agriculture, with nearly 50,000 farms and ranches producing corn, beef, soybeans, and processed grain products. From the grasslands through the Sandhills, to the Missouri River, here are a few glimpses of the landscape of Nebraska and some of the wildlife and people calling it home.

This photo story is part of Fifty, a collection of images from each of the United States.

• bunker hill, oregon | september 2020
Summit:
W7O/NC-038
Voice Cellular Coverage:
Don't know
Data Cellular Coverage:
Spotty, may not work at all
Cellular Provider:
AT&T
APRS Coverage:
Good digi echos

Bunker Hill is an unremarkable summit that can be reached, cautiously, with a 4WD vehicle. After you leave the highway, the way up has a number of branch points; GPS or a map will be handy. When you get to the yellow gate, either take down your antennas and accept that your paint may get scratched, or park there and hike up. It is roughly a 300' climb over half a mile.

Here is a gaiagps map with driving and hiking information: W7O/NC-038 route.

• four things I think I learned about SOTA today

I made it to Bunker Hill yesterday and activated the summit. I will post a separate report, including driving info, as an activation report. This post is more about stuff I learned during this activation.

You wake up in the morning, and check Hackaday over breakfast. Then it's off to work or school, where you've already had to explain the Jolly Wrencher to your shoulder-surfing colleagues. And then to a hackspace or back to your home lab, stopping by the skull-and-cross-wrenches while commuting, naturally. You don't bleed red, but rather #F3BF10. It's time we talked.

The Hackaday writing crew goes to great lengths to cover all that is interesting to engineers and enthusiasts. We find ourselves stretched a bit thin and it's time to ask for help. Want to lend a hand while making some extra dough to plow back into your projects? We're looking for contributors to write a few articles per week and keep the Hackaday flame burning.

Contributors are hired as private contractors and paid for each article. You should have the technical expertise to understand the projects you write about, and a passion for the wide range of topics we feature. You'll have access to the Hackaday Tips Line, and we count on your judgement to help us find the juicy nuggets that you'd want to share with your hacker friends.

If you're interested, please email our jobs line (jobs at hackaday dot com) and include:

• One example article written in the voice of Hackaday. Include a banner image, at least 150 words, the link to the project, and any in-links to related and relevant Hackaday features. We need to know that you can write.
• Details about your background (education, employment, interests) that make you a valuable addition to the team. What do you like, and what do you do?
• Links to your blog/project posts/etc. that have been published on the Internet, if any.

• AMSAT BoD Election Results

The number of votes cast for each candidate is as follows:

• Mark Hammond, N8MH - 707
• Paul Stoetzer, N8HM - 703
• Bruce Paige, KK5DO - 667
• Howie DeFelice, AB2S - 550
• Bob McGwier, N4HY - 534
• Jeff Johns, WE4B - 429

Accordingly, pursuant to Article III, Section 4 of the Bylaws:

Mark Hammond, N8MH, Paul Stoetzer, N8HM, and Bruce Paige, KK5DO, have been elected as Directors of the Corporation for terms ending in 2022.

The members have spoken. Hopefully this will close the door on a particularly ugly campaign season, and we can finally quit arguing and get back to the primary mission: keeping ham radio in space.

## September 19, 2020

• Montana - Vermilion Peak 1July2019
Summit:
W7M/LO-056
Voice Cellular Coverage:
Decent, workable
Data Cellular Coverage:
Don't know
Cellular Provider:
Verizon
APRS Coverage:
Don't know

Vermilion Peak is accessed via a moderately climbing trail in the southern Cabinet Mountains. It also makes an easy SOTA double when teamed up with Mount Headley, W7M/LO-018. The summit has fine views of nearby summits and distant peaks, including those in the Cabinet Mountains Wilderness Area.

Pictures:

Summit:
W7M/LO-018
Voice Cellular Coverage:
Decent, workable
Data Cellular Coverage:
Don't know
Cellular Provider:
Verizon
APRS Coverage:
Don't know

Mount Headley is easily accessed via a moderately climbing trail in the southern Cabinet Mountains. It also makes an easy SOTA double when teamed up with Vermilion Peak, W7M/LO-056. The summit is the tallest in the area and has fine views of nearby summits, lakes, and distant peaks. Be sure to stop and view Graves Creek Falls.

Pictures:

• QRV New England QRP Afield
The setup.
The 20m kite antenna, modeled after the W6NBC magnetic slot.
An end-fed 40m dipole.
Added my handy dandy Tandy coax switch.
• The year that won't end

Pandemic, social unrest, political unrest, unbelievable fires, running out of names for Tropical Storms, even a plague of locusts, and there is still another quarter of 2020 to go! Nevertheless, I'm very grateful to have Amateur Radio as a hobby, as it provides many opportunities to keep my brain active and distract me from some terrible things going on in the world.

I set the following as goals for 2020:

• Teach a Technician Class
• Reach 275 Confirmed Countries in DXCC
• Reach 90 Confirmed Countries on 160-Meters
• Reach 1450 Band-Points in the DXCC Challenge
• Reach 40 confirmed states on satellite

The good news is that three of the five have already been accomplished. Not only did I teach a Technician class, but I'm partway through the second for this year. Both are historically large classes being taught online. They have led to historically large VE sessions and a large number of newly licensed hams. At this point I've taught between 15 and 20% of all licensed hams in the state!

I did finally reach 275 confirmed countries in DXCC. Almost everything left is dependent on future DXpeditions. That has brought me close to another goal, with only four band-points left to reach my goal of 1450. Two of those came about due to 6-meter openings.

After stalling at 89 confirmed countries on 160 over the noisy summer months, I got back on 160-meters yesterday. I worked two new countries (Iceland and Anguilla) and heard but didn't work three more in South America. By this morning, Iceland had confirmed (thank you TF2MSN), completing my goal of 90 confirmed countries on 160-meters for 2020.

My final uncompleted goal for 2020 is to work 40 states via satellite. I've been off the birds for months due to foliage, and my usual excursions to parks have been severely limited by the pandemic. I doubt I will be able to work 3 more states before the end of this year, so more than likely I'll finish the year with 4 out of 5 goals completed.


• Introducing Precursor

Precursor (pre·cur·sor | \ pri-ˈkər-sər \):
1. one that precedes or gives rise to; a predecessor; harbinger
2. a pocketable open development board

Precursor is a mobile, open source electronics platform. Similar to how a Raspberry Pi or an Arduino can be transformed into an IoT gadget with the addition of a couple breakout boards, some solder, and a bit of code, Precursor is a framework upon which you can assemble a wide variety of DIY mobile applications.

Precursor is unique in the open source electronics space in that it's designed from the ground up to be carried around in your pocket. It's not just a naked circuit board with connectors hanging off at random locations: it comes fully integrated, with a rechargeable battery, a display, and a keyboard, in a sleek, 7.2 mm (0.28-inch) aluminum case.

### Precursor ≠ Betrusted

Followers of my blog will recognize the case design from Betrusted, a secure-communication device. It's certainly no accident that Precursor looks like Betrusted, as the latter is built upon the former. Betrusted is a great example of the kind of thing that you (and we) might want to make using Precursor. Betrusted is a huge software project, however, and it will require several years to get right.

Precursor, on the other hand, is ready today. And it has all of the features you might need to validate and test a software stack like the one that will drive Betrusted. We are also using the FPGA in Precursor to validate our SoC design, which will eventually give us the confidence we need to tape out a full-custom Betrusted ASIC, thereby lowering production costs while raising the bar on hardware security.

In the meantime, Precursor gives us a prototyping platform that we can use to work through user-experience challenges, and it gives you a way to implement projects that demand a secure, portable, trustable communications platform but that might not require the same level of hardware tamper resistance that a full-custom ASIC solution could provide.

And for developers, the best part is that Betrusted is 100% open source. As we make progress on the Betrusted software stack, we will roll those improvements back into Precursor, so you can count on a constant stream of updates and patches to the platform.

### Hackable. In a Good Way.

Precursor is also unique in that you can hack many aspects of the hardware without a soldering iron. Instead of a traditional ARM or AVR "System on Chip" (SoC), Precursor is powered by the software-defined hardware of a Field Programmable Gate Array (FPGA). FPGAs are a sea of basic logic units that users can wire up using a "bitstream". Precursor comes pre-loaded with a bitstream that makes the FPGA behave like a RISC-V CPU, but you're free to load up (or code up) any CPU you like, be it a 6502, an lm32, an AVR, an ARM, or something else. It's entirely up to you.

This flexibility comes with its own set of trade-offs, of course. CPU speeds are limited to around 100 MHz, and complexity is limited to single-issue, in-order microarchitectures. It's faster than any Palm Pilot or Nintendo DS, but it's not looking to replace your smartphone.

### At Its Core

We describe bitstreams using a Python-based Fragmented Hardware Description Language (FHDL) called Migen, which powers the LiteX framework. (Migen is to LiteX as GNU is to Linux, hence we refer to the combination as Migen/LiteX.) The framework is flexible enough that we can incorporate Google's OpenTitan SHA and AES crypto-cores (written in SystemVerilog), yet powerful enough that we can natively describe a bespoke Curve25519 crypto engine.

If youâve ever wanted to customize your CPUâs instruction set, experiment with hardware accelerators, or make cycle-accurate simulations of retro-hardware, Precursor has you covered. And the best part is, thanks to Precursorâs highly integrated design philosophy, you can take all that hard work out of the lab and on the road.

### On the Inside

And if youâre itching for a excuse to break out your soldering iron or your 3D printer, Precursor is here to give you one. While its compact form factor might seem limiting at first, weâve observed that 80% of projects involve adding just one or two domain-specific sensors or hardware modules to a base platform. And most of those additions come on breakout boards that require only a handful of signal wires.

With eight GPIOs (configurable as three differential pairs and two single-ended lines) connected directly to the FPGA, Precursor's battery compartment is designed to accommodate breakout boards. It also provides multiple power rails. You will find any number of third-party breakout boards with sensors ranging from barometers to cameras, and radios ranging from BLE to LTE. Patch them in with a soldering iron, and you're all set. The main trade-off is that the more hardware you add, the less space you have left for your battery. Unless, of course, you build a bigger enclosure…

### On the Outside

If you need even more space or custom mounting hardware, the case is designed for easy fabrication using an aluminum CNC machine or a resin printer. Naturally, our case designs are open source, and the native Solidworks CAD files we provide are constructed such that the enclosureâs length and thickness are parameterized.

Furthermore, Precursor's bezel is a plain old FR-4 PCB, so if your application does not require a large display and a keyboard, you can simply remove them and replace the bezel with a full-sized circuit board. By way of example, removing the LCD and replacing it with a smaller OLED module would make room for a much larger battery while freeing up space for the custom hardware you might need to build, say, a portable, trustable, VPN-protected LTE hotspot.

### Come Have a Look!

If youâve ever wanted to hack on mobile hardware, Precursor was made for you. By combining an FPGA dev board, a battery, a case, a display, and a keyboard into a single thin, pocket-ready package, it makes it easier than ever to go from a concept to a road-ready piece of hardware.

Precursor will soon be crowdfunding on Crowd Supply. Learn more about its specifications on our pre-launch page, and sign up for our mailing list so that you can take advantage of early-bird pricing when the campaign goes live.

### Meta

Weâve decided to do an extended pre-launch phase for the Precursor campaign to gauge interest. After all, we are in the middle of an unprecedented global pandemic, and one of the worst economic downturns in recorded history. It might seem a little crazy to try and fund the project now, but itâs also crazy to try and build trustable hardware that can hold up against state-level adversaries. We make decisions not on what is practical, but what is right.

The fact is, the hardware is at a stage where we are comfortable producing more units and getting it into the hands of developers. The open question is whether developers have the time, interest, and money to participate in our campaign. Initial outreach indicates they might, but we'll only find out for sure in the coming months. Precursor is not cheap to produce; I am prepared to accept a failed funding campaign as a possible outcome.

Weâre also carefully considering alternate sources of funding, such as grants from organizations that share our values (such as NLNet) and commercial sponsors that will not attach conditions that compromise our integrity (youâll notice the Silicon Labs banner at Crowd Supply). This will hopefully make the hardware more accessible, especially to qualified developers in need, but please keep in mind we are not a big corporation. As individual humans like you, we need to put food on the table and keep a roof over our heads. Our current plan is to offer a limited number of early bird units at a low price â so if youâre like me and worried about making ends meet next year, subscribe to our mailing list so you can hopefully take advantage of the early bird pricing. And if youâre lucky enough to be in a stable situation, please consider backing the campaign at a higher pricing tier.

Over the coming months, I'll be mirroring some of the more relevant posts from the campaign onto my blog, sometimes with additional commentary like this. There's over two years of effort that have gone into building Precursor, and I look forward to sharing with you the insights and knowledge gained on my journey.

• In Other BSDs for 2020/09/19

No theme, just everything in a bucket.

• Morning Brew

I woke early this morning. I don't sleep well in hotels, and this morning I was wide awake at 5am in South Carolina. There's a Starbucks across the road from where I'm staying, and I was waiting outside when they opened. I haven't been to a coffee shop in six months due to the pandemic, and this won't become routine, but strong coffee was necessary at this hour.

Long before the run for coffee I was listening to traffic on QO-100 via the WebSDR at Goonhilly in Cornwall. The time difference makes listening in the evening (my time) less productive, but early mornings here, especially on the weekends, usually result in a busy transponder, and this Saturday morning was no exception.

There were many conversation streams to choose from and several were English speaking. One was particularly interesting because Lutz, PA/DL9DAN, was operating portable from Ameland Island (JO23VK) as he nears the end of his two-week IOTA holiday there. His portable station for the satellite consists of an RSP2pro SDR and an IC-7000 with an SQ-Lab transverter and amplifier, along with a 40cm dish.

I caught him working several stations and snipped a bit of the audio. Listen here. He expects to be on the island through September 25th if you need EU-038 and have access to QO-100.

Whenever I listen to this satellite I wish we had a similar resource available in North America, though it would be less interesting to me if it only covered the Americas. The geostationary orbit provides a stable platform and significantly reduces requirements on the ground, but nice as it is, the fixed footprint doesn't provide the global communication opportunities we had with the P3 birds.

• The State of OSHdata and Open Hardware

Thereâs a lot happening, so I want to take some time to update you on the OSHdata project and what we see happening next in the broader Open Source Hardware community as we look ahead toÂ Open Hardware MonthÂ in October.

OSHdata was started earlier this year by myself (Harris Kenny) and my friend and co-creator Steven Abadie. We worked together to create and publish the 2020 Report on the State of Open Source Hardware, which we released under an open CC BY-SA 4.0 International license.

Our report took a deep dive into how to price open source products, which licenses are being used by open source hardware projects, the growth of open hardware, and potential ways that we would modify OSHWA's certification application. In parallel, OSHWA has been working on developing an API to increase accessibility of their data, which means more and easier reporting in the future!

After publication, we co-authored articles on new open hardware for Make: magazine and received media coverage in a number of different places like Hackster.io from Gareth Halfacree, Fabbaloo from Sarah Goehrke, and the Makers on Tap podcast co-hosted by Aaron Peterson and Joe Spanier. In many ways, this project has already exceeded our expectations. But there's still more work to be done.

Our research has been read in over 40 countries around the world, on every continent except Antarctica. Our report helped increase awareness of the certification program and created a sense of friendly competition between some of the leading Open Source Hardware companies in the world. Since our report was first published, the program went from slightly over 400 certifications to now boasting over 1,000 certifications!

Posted three years ago, but definitely appropriate for the times.

## September 18, 2020

• Crescent Moon Returns With A Bellyful Of Earth

Tonight the crescent moon comes back to the evening sky. You'll see it low in the west starting about 20-30 minutes after sunset. By tomorrow it will stand further from the sun and higher, making it even easier to spot. On both nights, earthshine will add an extra layer of enjoyment. Use binoculars to see this twice-reflected light best.

The moon orbits the Earth every 27 days, moving eastward (to the left in the northern hemisphere) at about 13° a day, roughly one horizontal, balled fist held at arm's length against the sky. That's also the equivalent of one moon-diameter per hour. Its night-to-night motion is easy to see, but hour-to-hour not so much. For that you need a marker close by, like a bright planet or star.
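Those two figures, a bit over 13° per day and about one moon diameter per hour, are easy to verify. A quick sanity check, using the moon's ~27.3-day sidereal period and its roughly half-degree apparent diameter (standard values, not taken from the article):

```python
# Sanity-check the moon-motion figures quoted above.
sidereal_period_days = 27.3   # moon's orbital period relative to the stars
apparent_diameter_deg = 0.5   # moon's apparent size, about half a degree

deg_per_day = 360 / sidereal_period_days
deg_per_hour = deg_per_day / 24

print(f"{deg_per_day:.1f} degrees per day")                               # ~13.2
print(f"{deg_per_hour / apparent_diameter_deg:.1f} moon diameters/hour")  # ~1.1
```

So the "13° a day" and "one moon-diameter per hour" figures are mutually consistent.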

Fortunately, that happens during conjunctions, when the moon pairs up with a celestial companion. If you check on the moon hourly during one of these trysts you'll see it slowly move toward or away from the star or planet. No bright conjunctions are on the docket tonight or tomorrow night, but if you're patient a wonderful opportunity lies ahead.

On September 21, the last day of summer, the waxing crescent not only aligns with the modestly bright star Acrab (Beta Scorpii) in Scorpius, but from many locations it will temporarily occult, or hide, the star. This should be a very exciting event to observe with binoculars and small telescopes. I'll have more to say about it here later this weekend.

In the meantime, enjoy watching the moon work its way around the Earth while chewing on this thought. A titanic collision between a Mars-sized protoplanet and the Earth 4.4 to 4.45 billion years ago likely created the moon. The material first formed a ring around the Earth before coming together to mold the gray sphere that 4.5 billion years later puts a haunt in Halloween. The original energy of the impact and subsequent capture of the material by Earth's gravity continue to propel the moon around our planet at an average speed of 2,288 mph (3,683 km/h). That's very close to the speed of one of the fastest non-experimental, manned aircraft, the Lockheed SR-71 Blackbird, which could do 2,193 mph (3,530 km/h).
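The quoted metric figures are speeds (km/h, not km), and the conversions check out; a quick verification, using the exact 1.609344 km-per-mile factor:

```python
MPH_TO_KMH = 1.609344  # exact: 1 international mile = 1.609344 km

moon_mph = 2288   # moon's average orbital speed, as quoted
sr71_mph = 2193   # SR-71 Blackbird's speed, as quoted

print(round(moon_mph * MPH_TO_KMH))  # matches the ~3,683 km/h figure
print(round(sr71_mph * MPH_TO_KMH))  # matches the ~3,530 km/h figure
```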

• SETI and Altruism: A Dialogue with Keith Cooper

Keith Cooperâs The Contact Paradox is as thoroughgoing a look at the issues involved in SETI as I have seen in any one volume. After I finished it, I wrote to Keith, a Centauri Dreams contributor from way back, and we began a series of dialogues on SETI and other matters, the first of which ran here last February as Exploring the Contact Paradox. Below is a second installment of our exchanges, which were slowed by external factors at my end, but the correspondence continues. What can we infer from human traits about possible contact with an extraterrestrial culture? And how would we evaluate its level of intelligence? Keith is working on a new book involving both the Cosmic Microwave Background and quantum gravity, the research into which will likewise figure into our future musings that will include SETI but go even further afield.

Keith, in our last dialogue I mentioned a factor you singled out in your book The Contact Paradox as hugely significant in our consideration of SETI and possible contact scenarios. Let me quote you again: "Understanding altruism may ultimately be the single most significant factor in our quest to make contact with other intelligent life in the Universe."

I think this is exactly right, but the reasons may not be apparent unless we take the statement apart. So let's start today by talking about altruism before we explore the question of "deep time" and how our species sees itself in the cosmos. I think we have ramifications here for how we deal not only with extraterrestrial contact but issues within our own civilization.

Iâm puzzled by the seemingly ready acceptance of the notion that any extraterrestrial civilization will be altruistic or it could not have survived. Perhaps itâs true, but it seems anthropocentric given our lack of knowledge of any life beyond Earth. What, then, did you mean with your statement, and why is understanding altruism a key to our perception of contact?

• Keith Cooper

I think so much that is integral to SETI comes down to our assumptions about altruism. How often do we hear that an older extraterrestrial society will be altruistic, as though it's the end result of some kind of evolutionary trajectory. But there are several problems with this. One is that the person making such claims (usually an astrophysicist straying into areas outside their field of expertise) is often conflating "altruism" with "being nice".

And sure, maybe aliens are nice. I kind of get the logic, even though it's faulty. The argument is that if they are still around then they must have abandoned war long ago, otherwise they would have destroyed themselves by now, ergo they must be peaceful.

And itâs entirely possible, I suppose, that a civilisation may have developed in that direction. In The Better Angels of Our Nature, Steven Pinker attempted to argue that our civilization is becoming more peaceable over time, although Pinkerâs analysis and conclusions have been called into question by numerous academics.

• Paul Gilster

I hope so. I think the notion is facile at best.

• Keith Cooper

Itâs what human societies should always aim for, I truly believe that, but whether we can achieve it or not is another question. When it comes to SETI, we seem to home in on the most simplistic definitions of what an extraterrestrial society might be like â âtheyâve survived this long, they must be peacefulâ. A xenophobic civilization might be at peace with its own species, but malevolent towards life on other planets. A planet could be at peace, but that peace could be implemented by some 1984-style dystopian dictatorship where nobody is free. Neither of which is particularly âniceâ, and we could think of many other scenarios, too.

Nevertheless, this myth of wise, kindly aliens has grown up around SETI. That was the expectation, 60 years ago: that ET would be pouring resources into powerful beacons to make it easy for us to detect them. To transmit far and wide across the Galaxy, and to maintain those transmissions for centuries, millennia, maybe even millions of years, would require huge amounts of resources. When we consider that the aliens may not even know for sure whether they share the Universe with other life, it's a huge gamble on their part to sacrifice so much time and energy in trying to communicate with others in the Universe.

If we look at what altruism really is, and how that may play into the likelihood that ET will want to beam messages across the Galaxy given the cost in time and energy, then it poses a big problem for SETI. ET really needs to help us out, to display a remarkable degree of selfless altruism towards us, by plowing all those resources into transmitting signals that we'll be able to detect.

One of the forms that altruism can take in nature is kin selection. We can see how this has evolved: lifeforms want to ensure that their genes are passed on to later generations, so a parent will act to protect and give the greatest possible advantage to their child, or nieces and nephews. That's a form of altruism predicated on genes, not ethics. Unless some form of extreme panspermia has been at play, alien life would not be our kin, so they would be unlikely to show us altruistic behaviour of this type.

• Paul Gilster

But we havenât exhausted all the forms altruism might take. Is there an expectation of mutual benefit that points in that direction?

• Keith Cooper

Okay, so what about quid pro quo? That's a form of reciprocal altruism. Consider, though, the time and distance separating the stars. It could take centuries or millennia for a message to reach a destination, and there's no guarantee that anyone is going to hear that message, nor that they will send a reply. That's a long time to wait for a return on an investment, if there even is a return. Why plow so many resources into transmitting if that's the case? What's in it for them?

So if kin selection and reciprocal altruism are not really tailored for interstellar communication, then it seems more unlikely that we will hear from aliens. Of course, there is always the possibility of exceptions to the rule, one-off reasons why a society might wish to broadcast its existence. Maybe ET wants to transmit a religious gospel to the stars to convert us all. Maybe they are about to go extinct and want to send one last hurrah into the Universe. But these would not be global reasons, and we shouldn't expect alien societies to make it easy for us to discover them.

• Paul Gilster

Good point. Why indeed should they want us to discover them? I can think of reasons a society might decide to broadcast its existence to the stars, though I admit that it's a bit of a strain. But aliens are alien, right? So let's assume some may want to do this. I like your mention of reciprocal altruism, as it's conceivable that an urge to spread knowledge, for example, might result in a SETI beacon of some kind that points to an information resource, the fabled Encyclopedia Galactica. What a gorgeous dream that something like that might be out there.

Curiosity leads where curiosity leads. I wonder if it's a universal trait of intelligence?

• Keith Cooper

Itâs interesting that you describe the Encyclopedia Galactica as a âdreamâ, because I think thatâs exactly what it is, a fantasy that weâve imagined without any strong rationale other than falling back on this outdated idea that aliens are going to act with selfless altruism. As David Brin argues, if you pump all your knowledge into space freely, what do you have left to barter with? And yet it is expectations such as receiving an Encyclopedia Galactica that still drive SETI and influence the kinds of signals that we search for. I really do think SETI needs to move on from this quaint idea. But I digress.

• Paul Gilster

Itâs certainly worth keeping up the SETI effort just to see what happens, especially when itâs privately funded. But I want to circle back around. Iâve always had an interest in what the general publicâs reaction to the idea of extraterrestrial civilization really is. In the 16 years that Iâve been writing about this and talking to people, Iâve found a truly lopsided percentage that believe as a matter of course that an advanced civilization will be infinitely better than our own. This plays to a perceived disdain for human culture and a faith in a more beneficent alternative, even if it has to come from elsewhere to set right our fallen nature.

Put that way, it does sound a bit religious, but so what? I'm talking about how human beings react to an idea. Humans construct narratives, some of them scientific, some of them not.

Iâm also talking about the general public, not people in the interstellar community, or scientists actively working on these matters. As you would imagine with COVID about, Iâm not making many talks these days, but when I was fairly active, Iâd always ask audiences of lay people what they thought of intelligent aliens. The reaction was almost always along two lines: 1) The idea used to seem crazy, but now we know itâs not. And 2) it would be something like an European Renaissance all over again if we made contact, because they would have so much to teach us.

A golden age, with its Dantes and Shakespeares and Leonardos. Or think of the explosion of Chinese culture and innovation in the Tang Dynasty, or Meiji Japan, all this propelled by the infusion not of recovered ancient literature and teaching, as in the European example, but of materials discovered in the evidently limitless databanks of the Encyclopedia Galactica.

I ran into these audience reactions so frequently, in both talks to interested audiences and just conversations among neighbors and friends, that I had to ask what was propelling the Hollywood tradition of scary movies about alien invasion. What about Independence Day, with its monstrous ships crushing the life out of our planet? So I would ask: if you believe all this altruistic stuff, why do you keep going to these sensational movies of death and destruction?

The answer: Because people think they're fun. They're a good diversion, a comic book tale, a late night horror movie where getting scared is the point. Whole film franchises are built around the idea that fear is addictive when experienced within the cocoon of a home or theater. Thus the wave of horror fiction that has been so prominent in recent years. It's because people like being scared, and the reason for that goes a lot deeper into psychiatry than I would know how to go. I admit I may not believe in Cthulhu, but I love going to Dunwich with H. P. Lovecraft.

Keith, as we both know (and you, as the author of The Contact Paradox, would know a lot more about this than I do) there is an active lobby against messaging to the stars: METI. I've expressed my own opposition to METI on many an occasion in these pages, and the discussion has always been robust and contentious, with the evidently minority position being that we should hold back on such broadcasts unless we reach international consensus, and the majority position being that it doesn't matter because sufficiently intelligent aliens already know about us anyway.

I donât want to re-litigate any of that here. Rather, I just want to note that if the anti-METI position gets loud pushback in the interstellar community, it gets even louder pushback among the general public. In my talks, bringing up the dangers of METI invariably causes people to accuse me of taking films like Independence Day too seriously. From what I can see from my own experience, most people think ETI may be out there but assume that if it ever shows up on our doorstep, it will represent a refined, sophisticated, and peaceful culture.

I donât buy that idea, but Iâm so used to seeing it in print that I was startled to read this in James Trefil and Michael Summersâ recent book Imagined Life. The two first tell a tale:

Two hikers in the mountains encounter an obviously hungry grizzly bear. One of the hikers starts to shed his backpack. The other says, "What are you doing? You can't run faster than that bear."

âI donât have to run faster than the bear â I just have to run faster than you.â

Natural selection doesn't select for bonhomie or moral hair-splitting. The one whose genes will survive in the above encounter is the faster runner. Trefil and Summers go on:

So what does this tell us about the types of life forms that will develop on Goldilocks worlds? We're afraid that the answer isn't very encouraging, for the most likely outcome is that they will probably be no more gentle and kind than Homo sapiens. Looking at the history of our species and the disappearance of over 20 species of hominids that have been discovered in the fossil record, we cannot assume we will encounter an advanced technological species that is more peaceful than we are. Anyone we find out there will most likely be no more moral or less warlike than we are…

That doesnât mean any ETI we find will try to destroy us, but it does give me pause when contemplating the platitudes of the original The Day the Earth Stood Still movie, for example. Itâs so easy to point to our obvious flaws as humans, but the more likely encounter with ETI, if we ever meet them face to face, will probably be deeply enigmatic and perhaps never truly understood. I also argue that there is no reason to assume that individual members of a given species will not have as much variation between them as do individual humans.

Itâs a long way from Francis of Assisi to Joseph Goebbels, but both were human. So what happens, Keith, if we do get a SETI signal one day. And then, a few days later, another one that says, âDisregard that first message. The one you want to talk to is me?â

• Keith Cooper

Iâm hesitant to rely too closely on comparisons with ourselves and our own evolution, since ultimately we are just a sample of one, and we could be atypical for all we know. I see what Trefil and Summers are saying, but equally I could imagine a world, perhaps with a hostile environment, where species have to work together to survive. Instead of survival of the fittest, it becomes survival of those who cooperate. And suppose intelligent life evolves to be post-biological. What role do evolutionary hangovers play then?

I think the most we can say is that we don't know, but that for me is enough of a reason to be cautious both about the assumptions we make in SETI, and about the possible consequences of METI.

But youâre right about our flawed assumption that aliens will exist in a monolithic culture. Unless thereâs some kind of hive mind or network, there will likely be variation and dissonance, and different members of their species may have different reactions to us.

If we detected two beacons in the same system, I think that would be great! Why? Because it would give us more information about them than a single signal would. Since we will have no knowledge of their language, their culture, their history or their biology, being able to understand their message in even the most general sense is going to be exceptionally difficult.

So, if we detect a signal, we might not be able to decipher it or learn a great deal. But if we detect two different, competing beacons from the same planet, or planetary system, then we will know something about them that we couldn't know from just one unintelligible signal: that they are not necessarily a monolithic culture, and that their society may contain some dissonance, and this may influence how, and if, we respond to their messages.

For me, the name of the game is information. Learn as much about them as we can before we embark on making contact, because the more we know, the less likely we are to be surprised, or to fall into a misunderstanding that could be catastrophic.

• Paul Gilster

Just so. But there, you see, is the reason why I think we have to be a lot more judicious about METI. It's just conceivable that, to them as well as us, content matters.

But look, I see you're headed in a direction I wanted to go. If information is the name of the game, then information theory is going to play a mighty role in our investigations. So it's no surprise that you dwell on the matter in The Contact Paradox. Here we're in the domain of Claude Shannon at Bell Laboratories in the 1940s, but of course signal content analysis applies across the whole spectrum of information transmittal. Shannon entropy measures disorder in information, which is a way of saying that it lets us analyze communications quantitatively.

Do you know Stephen Baxter's story "Turing's Apple"? Here a brief signal is detected by a station on the far side of the Moon, no more than a second-long pulse that repeats roughly once a year. It comes from a source 6500 light years from Earth, and Baxter delightfully presents it as a "Benford beacon," after the work Jim and Greg Benford have done on the economics of extraterrestrial signaling. The insight there is that instead of a strong, continuous signal, we're more likely to find something like a lighthouse that sweeps its beam around the galaxy, in this case along the galactic plane where the bulk of the stars are to be found.

Baxterâs story sees the SETI detection as a confirmation rather than a shock, a point Iâm glad to see emerging, since I think the idea of extraterrestrial intelligence is widely understood. No great revolution in thought follows, but rather a deepening acceptance of the fact that weâre not alone.

Anyway, in the story, the signal is investigated, six pulses being gathered over six years, with the discovery that this ETI uses something like wavelength division multiplexing, dividing the signal into sections packed with data. Scientists turn to Zipf graphing to tackle the problem of interpretation – as you present this in your book, Keith, this means breaking the message into components and going to work on the relative frequency of appearance of these components. From this they deduce that the signal is packed with information, but what are its elements?

Shannon entropy analysis looks for the relationships between signal elements: how likely is it that a particular element will follow another particular element? Entropy levels can be deduced – how likely are not just pairs of elements to appear, but triples of elements? In English, for example, how likely is it that we might find a G following an I and an N? Dolphin languages get as high as fourth-order entropy by this analysis, as you know. Humans get up to eighth or ninth. Baxter's signal analysts come up with a Shannon entropy in the range of 30 for ETI.
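
To make the entropy orders concrete, here is a minimal sketch of how order-n Shannon entropy can be estimated from n-gram counts. Treating English characters as the "signal elements," and the sample text itself, are my own illustrative choices, not anything from Baxter's story:

```python
from collections import Counter
from math import log2

def order_entropy(seq, n):
    """Order-n Shannon entropy in bits: the average uncertainty of the
    next symbol given the previous n-1 symbols, from n-gram counts."""
    if n == 1:
        counts = Counter(seq)
        total = sum(counts.values())
        return -sum(c / total * log2(c / total) for c in counts.values())
    positions = range(len(seq) - n + 1)
    ngrams = Counter(tuple(seq[i:i + n]) for i in positions)
    contexts = Counter(tuple(seq[i:i + n - 1]) for i in positions)
    total = sum(ngrams.values())
    return -sum(c / total * log2(c / contexts[gram[:-1]])
                for gram, c in ngrams.items())

sample = "the quick brown fox jumps over the lazy dog " * 20
print(order_entropy(sample, 1))  # first-order: raw letter frequencies
print(order_entropy(sample, 2))  # second-order: lower, since structure aids prediction
```

For structured text the conditional entropy falls as n rises; a signal whose entropy stays rich out to order 30 would be staggeringly complex by this measure.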

Let me quote this bit, because I love the idea:

âThe entropy level breaks our assessment routinesâ¦ It is information, but much more complex than any human language. It might be like English sentences with a fantastically convoluted structure â triple or quadruple negatives, overlapping clauses, tense changesâ¦ Or triple entendres, or quadruples.â

Weâre in challenging territory here. In the story, ETI is a lot smarter than us, based on Shannon entropy. The presence of this kind of complexity in a signal, in Baxterâs scenario, is evidence that the detected message could not have been meant for us, because if it were, the broadcasting civilization would have âdumbed it downâ to make it accessible. Instead, humanity has found a signal that demonstrates the yawning gap between humanity and a culture that may be millions of years old. If we find something like this, itâs likely we would never be able to figure it out.

Would something like this be a message, or perhaps a program? If we did decode it, what would it mean? An even better question: what might it do? Baxter's story is so ingenious that I don't want to give away its ending, but suffice it to say that impersonal forces may fall well outside our conventional ideas of "friendly" vs. "hostile" when it comes to bringing meaning to the cosmos.

But letâs wrap back around to Shannon and Zipf, and the SETI Instituteâs Laurance Doyle, to whom you talked as you worked on The Contact Paradox. Doyle told you that communication complexity invariably tells us something about the cultural complexity of the beings that sent the message. And I think the great point that he makes is that the best way to approach a possible signal is by studying how communications systems work right here on Earth. Thus Claude Shannon, who started working out his theories during World War II, gets applied to the question of species intelligence (dolphins vs. humans) and now to hypothetical alien signals.

In a broader sense, we're exploring what intelligence is. Does intelligence mean technology, or are technological societies a subset of all the intelligent but non-tool-making cultures out there? SETI specifically targets technology, which may itself be a rarity even in a universe awash with forms of life with high Shannon entropy in communications they make only among themselves.

A great benefit of SETI is that it is teaching us just how much we don't know. Thus the recent Breakthrough Listen breakdown of their findings, which extends the data analysis to a catalog of stars some 220 times larger, all at various distances and all within the "field of view," so to speak, of the antennae at Green Bank and Parkes. Still more recent work at the Murchison Widefield Array tackles an even vaster starfield. Still no detections, but we're getting a sense of what is not there in terms of Arecibo-like signals aimed intentionally at us.

So how do you react to the idea that, in the absence of information to analyze from an actual technological signal, we will always be doing no more than collecting data about a continually frustrating "great silence"? Because SETI can't ever claim to have proven there is no one there.

• Keith Cooper

Thatâs one of my unspoken worries about SETI; how long do we give it before we start to suspect that weâre alone? People might say, well, weâve been searching for 60 years now â surely thatâs long enough? Of course, modern SETI may be 60 years old, but weâve certainly not accrued 60 yearsâ worth of detailed SETI searches. Weâve barely scratched the tip of the iceberg bobbing up above the cosmic waters.

So how long until we can safely say we've not only seen the tip of the iceberg, but have also taken a deep dive to the bottom of it? Maybe our limited human attention spans will come into play long before then, and we'll get bored and give up. I think we can also be too quick to assume that there's no one out there. Take the recent re-analysis of Breakthrough Listen data, which prompted one of the researchers, Bart Wlodarczyk-Sroka of the University of Manchester, to declare:

âWe now know that fewer than one in 1600 stars closer than about 330 light years host transmitters just a few times more powerful than the strongest radar we have here on Earth. Inhabited worlds with much more powerful transmitters than we can currently produce must be rarer still.â

Except that we donât know that at all. All we can say was that there was no one transmitting a radio signal during the brief time that Breakthrough was listening. We could have easily missed a Benford Beacon, for instance. Itâs a problem of expectation versus reality â we expect these powerful, omnipresent beacons, and when we donât find them we jump to the conclusion that ET must not exist, rather than the possibility that our expectation is flawed.

The Encyclopedia Galactica is a similar kind of expectation that isn't just a fanciful notion, but a concept that actively influences SETI – we expect ET to be blasting out this guide to the cosmos, so we tailor SETI to look for that kind of signal, rather than something like a Benford beacon. It also biases our thinking as to what we might gain from first contact – all this knowledge given to us by peaceful, selflessly altruistic beings. It would be lovely if true, but I think it's dangerous to expect it.

Case in point: Brian McConnell recently wrote on Centauri Dreams about his concept for an Interstellar Communication Relay – basically a way of disseminating the data detected within a received signal, giving everybody the chance to try and decipher it [see What If SETI Finds Something, Then What?]. He rightly points out that we need to start thinking about what happens after we detect a signal, and the relay is a nifty way of organising that, so that should we detect a signal tomorrow, we will already have procedures in hand.

I wonât comment too much on the technical aspects, other than to say that if a message contains a Shannon entropy of 30, then it probably wonât matter how many people try and make sense of the message, we wonât get close (A.I., on the other hand, may have a bit more luck).

The Interstellar Communication Relay is an effort to democratize SETI. My cynical side worries, however, about safeguards. The relay relies on people acting in good faith, and not concealing or misusing any information gleaned from a signal. McConnell proposes a "copyleft license", a bit like a Creative Commons license, that would put the data in the public domain while preventing people from commercialising it for their own gain. I can see how this makes sense in the Encyclopedia Galactica paradigm – McConnell refers to entrepreneurs being allowed to make "games and educational software" from what we may learn from the alien signal.

I worry about this. In The Contact Paradox, I wrote about how even something as innocent as the tulip, when introduced into seventeenth-century Dutch society, proved disruptive (https://en.wikipedia.org/wiki/Tulip_mania). The Internet, motor cars, nuclear power – they've all been disruptive, sometimes positively, other times negatively.

How do we manage the disruptive consequences of information from an extraterrestrial signal? Even if ET has the best of intentions for us, they can't foresee what the effects will be when facets of their culture or technology are introduced into human society, in which case the expectation that ET will be wise and "altruistic" is almost irrelevant. Heaven forbid they send us technology that could be turned into a weapon; we can't guarantee that bad actors – after being freely given that information – won't run off with it and use it for their own nefarious ends. A copyleft license surely isn't going to put them off.

My feeling is that fully deciphering a signal will take a long, long time, if it can be done at all, in which case we shouldn't worry quite so much. But suppose we are able to decipher it quickly, and it's more than just a simple "greetings". Yes, we have to think about what happens after we detect a signal, but it's not just the mechanics of processing that data we have to think about; we also have to plan how we manage the dissemination of potentially disruptive information into society in a safe way. It's a dilemma that the whole of SETI should be grappling with, I think, and nobody – certainly not me – has yet come up with a solution. But revising our assumptions, recasting our expectations, and casting aside the idea that ET will be selflessly altruistic and wise would be a good start.

• Paul Gilster

Well said. As I look back through our exchanges, I see I didn't get around to the Deep Time concept I wanted to explore, but maybe we can talk about that in our next dialogue, given your interest in the Cosmic Microwave Background, which is the very boundary of Deep Time. Let's plan on discussing how ideas of time and space have, in relatively short order, gone from a small, Earth-centered universe defined in mere thousands of years to today's awareness of a cosmos beyond measure that undergoes continuous accelerated expansion. All Fermi solutions emerge within this sense of the infinite and challenge previous human perspectives.

• Easy Sats

Iâve read some of the collected ignorance about how working the FM satellites is just like working another repeater. Of course thatâs silly, Iâve never had to track a repeaterâs location while continuously adjusting the frequency to compensate for Doppler, etc.

But that doesnât make for a snappy comeback.

This, printed on the back of an AMSAT T-shirt, does a better job of making the point:

I had a QSO on a repeater that is 4 inches square, traveling 17,000 MPH, 1,400 miles away, using 5 watts, in outer space. Easy.

Yeah. Thatâs much better.

• Update catchup
• First Pictures: Voyager 1 Portrait of the Earth & Moon – September 18, 1977

For space enthusiasts of a certain age like myself, the 1970s were a golden age of discovery, with missions encountering all five planets known to ancient astronomers. Surely the greatest of these were NASA's Voyager missions, which returned spectacular images of the outer planets and their satellites over the course of a decade starting in 1979. But the very first image returned by Voyager 1 and subsequently released to the public stuck with me and gave a foretaste of what was to come. That image was the first portrait of the Earth and Moon as seen from Voyager 1 as it left its home for good.

The launch of Voyager 1 from Cape Canaveral on September 5, 1977. (NASA)

Voyager 1 was launched from LC-41 at Cape Canaveral on September 5, 1977, some 16 days after its slower-moving twin, Voyager 2 (the spacecraft numbering was reversed from their launch order to avoid confusion later as the faster-moving Voyager 1 reached Jupiter and Saturn first). On September 11 and 13, Voyager 1 made a pair of trajectory correction maneuvers to fine-tune its trajectory and eliminate the minor errors remaining from launch. On September 18, Voyager 1 was instructed to acquire optical navigation images, including a sequence of 54 images in the direction of the Earth – a trio of narrow-angle images taken through different color filters at each of 18 aiming positions. At the time these images were taken, Voyager 1 was 11.66 million kilometers from the Earth, directly above Mt. Everest at 25° north latitude on the dark side of the planet. At this range, the image scale was about 110 kilometers per pixel.
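
That ~110 kilometer per pixel figure can be sanity-checked from the narrow-angle camera's 0.42° field of view spread across 800 pixels (numbers given later in this article); a small sketch:

```python
from math import radians, tan

def image_scale_km_per_px(range_km, fov_deg, pixels):
    """Approximate ground scale of one pixel for a narrow field of view."""
    return range_km * tan(radians(fov_deg)) / pixels

scale = image_scale_km_per_px(11.66e6, 0.42, 800)
print(scale)  # ~107 km/pixel, consistent with the ~110 km quoted
```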

Here is a color image showing the Earth and Moon together for the first time as seen from a departing interplanetary spacecraft. Voyager 1 took this image on September 18, 1977 from a range of 11.66 million kilometers; it was subsequently released to the public on January 10, 1978. Click on image to enlarge. (NASA/JPL)

Because the Earth was out of view of Voyagerâs high gain antenna during the first weeks of the mission, these images were not transmitted back to Earth until October 7 and 10. The images were subsequently sent to JPLâs Image Processing Laboratory (IPL) for processing and the creation of a near-true-color image. Because the Moon is so much darker than the Earth, the Moon was artificially brightened by a factor of three relative to the Earth by computer enhancement so that both worlds would show clearly in the same image. The resulting color image was released to the public by JPL on January 10, 1978.

Diagram of the major components and instruments of the Voyager spacecraft. Click on image to enlarge. (NASA)

Each Voyager spacecraft carried a pair of vidicon-based slow-scan television cameras attached to a pointable Science Scan Platform mounted on the end of one of the booms extending from the spacecraft's bus. This would be the last NASA planetary mission to use vidicon-based imagers, with improved solid-state cameras employed on future missions. Despite the comparatively primitive nature of these cameras, JPL engineers used their 13 years of experience flying vidicon cameras on earlier Mariner and Viking missions to create highly sensitive and stable cameras suitable for use in the dim reaches of the outer solar system.

Diagram showing the layout of Voyagerâs wide and narrow angle cameras. Click on image to enlarge. (NASA)

Each Voyager carried a wide- and a narrow-angle camera, which employed 200 mm and 1,500 mm focal length lenses, respectively, to focus images on an 11 mm square vidicon plate. This resulted in 3.2° and 0.42° fields of view for these cameras – the equivalent of using 400 mm and 3,200 mm lenses on an old-style 35 mm format camera. The images were read out in as little as 48 seconds after being sliced into 800 lines of 800 pixels each. Each pixel was digitized to 8 bits and could be transmitted live at a rate of up to 115,200 bits per second or saved on a magnetic tape recorder with a 100-image capacity (the equivalent of about 61 megabytes of data). Each camera had its own eight-position filter wheel, which included orange, green and blue filters that could be combined to create near-true-color images. Because of the low sensitivity of vidicon cameras at redder wavelengths, a wide-band orange filter was employed instead of the red filter that would normally be used to create true color images. Despite the comparatively primitive nature of these cameras, Voyager 1 returned tens of thousands of images over the following years, providing some of the most spectacular views of Jupiter and Saturn we have today.
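
Those numbers are self-consistent, as a quick back-of-envelope check shows:

```python
bits_per_image = 800 * 800 * 8               # 800x800 pixels at 8 bits each
readout_floor_s = bits_per_image / 115_200   # minimum time at the max live rate
tape_mib = 100 * bits_per_image / 8 / 2**20  # 100-image tape recorder capacity
print(readout_floor_s)  # ~44 s, consistent with the 48-second readout
print(tape_mib)         # ~61, matching the "about 61 megabytes"
```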

Voyager 1 took this color image of Jupiter and two of its satellites, Io and Europa, on February 13, 1979 at a range of 20 million kilometers, about three weeks before closest approach. Click on image to enlarge. (NASA/JPL)


âVoyager 1: The First Close Encounter with Titanâ, Drew Ex Machina, November 12, 2015 [Post]

âVoyager 2: The First Uranus Flybyâ, Drew Ex Machina, January 24, 2016 [Post]

âFinishing the Grand Tour: Voyager 2 at Neptuneâ, Drew Ex Machina, August 29, 2019 [Post]


• Heads Up – QRP Afield

If you havenât heard, the annual running of the QRP Afield contest is Saturday, September 19, 2020. This contest, sponsored by the QRP Club of New England, has been around for decades, and.Â

The contest runs from 1500Z – 2100Z. You can get all of the details from the QRP Club of New England's website.

QRP Afield is one of the contests I always add to my calendar each year. Unfortunately, family obligations will probably prevent me from participating this year. However, if you are so inclined, head out to the field and give it a go!

72, Craig WB3GCK

• Artistic PCB Design for Terrified Beginners workshop

Hardware hacker Kliment created a workshop for the recent HOPE conference:

From the Artistic PCB Design for Terrified Beginners workshop

If youâre not familiar with some of those concepts:

• PCBs stands for Printed Circuit Boards, the things that electronic devices are built on
• Art means people creating wonderful (or awful) things for other people to enjoy
• A terrified beginner is someone who we all either are or have been at some point

IâmÂ Kliment, and Iâm doing this workshop to achieve one or more of the following things:

1. Get art people to play with electronics
2. Get electronics people to play with art
3. Give the two groups above a common language so they can talk to each other
4. Get people to build something cute and have fun

Weâre going to be making a [SAO].Â  A [SAO] is a small circuit board that gets attached as a decorative or functional addition to one of the many badges that are so popular at hacker events. Weâre going to make one that doesnât do much (except light up). Except it wonât be shitty! It will be pretty! So letâs call it a prettyÂ addon!

• CHA MPAS Lite: Chameleon designs a new QRP compact portable antenna system
Many thanks to Don (W7SSB), who notes that Chameleon Antenna has just introduced the CHA MPAS Lite: a modular portable antenna system covering 6 through 160 meters. I know a number of participants in the Parks On The Air program who use the CHA MPAS antenna system; the MPAS Lite is the "little brother" of …
• An Updated Propellant Depot Taxonomy Part III: GEO Depots

Of the six depot types I'll be describing in this series, GEO Depots are probably the least fully-baked of the concepts for me, mostly because I've only had limited involvement in the traditional GEO satellite world. But I wanted to share a few preliminary thoughts about how one would do depots in support of activities in the Geostationary belt before moving on to talk about more traditional depots. And before getting into the specific characteristics I think GEO depots will likely have, I wanted to share some background thoughts on design drivers for GEO depots, to walk you through my logic on where I think things will go.

## Background on Geostationary Orbit

First off, for those of you newer to spaceflight, what is the Geostationary Belt? Basically, when people talk about GEO they're typically talking about a circular equatorial Earth orbit whose altitude (~35,786 km) is such that its orbital period is exactly one sidereal day, meaning that to an observer on Earth, the satellite always stays in the exact same position in the sky. Sir Arthur Clarke came up with the idea of using such orbits for telecommunications in the 1940s, and today there are hundreds of GEO satellites, providing not just communications, but also weather observation and other commercial, scientific, and military functions.
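
That ~35,786 km altitude falls straight out of Kepler's third law: it's the semi-major axis whose period matches one sidereal day. A quick sketch using standard Earth constants:

```python
from math import pi

MU_EARTH = 398_600.4418     # Earth's gravitational parameter, km^3/s^2
R_EQUATOR = 6_378.137       # Earth's equatorial radius, km
SIDEREAL_DAY = 86_164.0905  # seconds

# Kepler's third law, T = 2*pi*sqrt(a^3/mu), solved for the semi-major axis a
a_geo = (MU_EARTH * SIDEREAL_DAY**2 / (4 * pi**2)) ** (1 / 3)
print(a_geo - R_EQUATOR)    # ~35,786 km altitude above the equator
```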

Satellites in GEO experience various orbital perturbations that require stationkeeping maneuvers to prevent the satellite from slowly drifting out of position. These stationkeeping maneuvers amount to ~52 m/s per year of delta-V. Satellites are designed with enough propellant not only to get to GEO, but to perform stationkeeping for a certain amount of time and then, at end of life, boost themselves up to a "graveyard orbit" that's typically ~200 km above GEO.
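
To get a feel for what ~52 m/s per year costs, here's a rocket-equation sketch; the 15-year design life and a hydrazine-class Isp of ~220 s are assumed round numbers for illustration:

```python
from math import exp

def propellant_fraction(delta_v_ms, isp_s, g0=9.80665):
    """Tsiolkovsky rocket equation: fraction of initial mass burned."""
    return 1.0 - exp(-delta_v_ms / (isp_s * g0))

dv_total = 15 * 52.0                         # 15 years of stationkeeping
frac = propellant_fraction(dv_total, 220.0)  # hydrazine-class Isp (assumed)
print(frac)  # ~0.30: stationkeeping alone consumes roughly 30% of initial mass
```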

One nice thing about GEO compared to LEO is that pretty much everything is in the same plane, heading in the same direction, all moving at about the same velocity. That means moving between different GEO satellites never requires costly inclination changes. You basically raise or lower your orbit into one with a slightly different orbital period and either catch up with the new satellite, or let it catch up with you. But to give you an idea of scale, an object in the graveyard orbit, 200 km higher than GEO, will take ~10 min longer to complete an orbit, which means that over the course of ~140 days, a GEO satellite will "lap" a spacecraft in the graveyard orbit. I'll get into why this matters later.
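
Both of those numbers check out against the synodic-period math; a short sketch:

```python
from math import pi

MU = 398_600.4418   # km^3/s^2
A_GEO = 42_164.0    # GEO orbit radius, km

def period_s(a_km):
    """Circular orbit period from Kepler's third law."""
    return 2 * pi * (a_km**3 / MU) ** 0.5

t_geo = period_s(A_GEO)
t_grave = period_s(A_GEO + 200.0)  # graveyard orbit, ~200 km higher
extra_min = (t_grave - t_geo) / 60.0
lap_days = t_geo * t_grave / (t_grave - t_geo) / 86_400.0  # synodic period
print(extra_min)  # ~10 minutes longer per orbit
print(lap_days)   # ~140 days for GEO to lap the graveyard orbit
```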

Another characteristic of GEO is that most GEO satellites are pretty big. Some are bigger than a school bus, can weigh several tonnes, and cost hundreds of millions of dollars. There are some groups, like our friends at Astranis, that are trying to develop GEO smallsats (~350 kg in their case), but the vast majority of GEO satellites, whether commercial, civil, or military, are over 1 tonne.

Because satellites in GEO are so expensive, there has been a lot of interest in servicing them, primarily to extend their life. Most GEO birds are designed with ~15 yrs of propellant on board, and normally when they run out, they have to be boosted to a graveyard orbit to avoid becoming a "zombiesat" that endangers other GEO operators. However, in many cases, by the time the propellant starts running low, the satellite may still be economically useful. Maybe it's been transferred from a higher-value orbital slot to a lower-value one (one owned by a less populous or less well developed country), but in many cases the satellite can still be producing millions of dollars per year of revenue.

Because of these realities, several companies have proposed servicer vehicles that could fly up to a satellite that's almost out of fuel, dock with it, and then either refuel it or provide "jet pack" services where the servicer takes over stationkeeping maneuvers. Our friends at Northrop Grumman Space Logistics Services finally pulled off the world's first successful commercial satellite servicing mission just this year, with their MEV-1 vehicle. Other players in the field include our friends at Astroscale/Effective Space, as well as our friends at Maxar. Since GEO satellites aren't typically designed for servicing, these missions have focused on leveraging structural features on GEO satellites (liquid apogee motor nozzles, or the separation system hardware left on the satellites after they're released by their launch vehicles) to mechanically grapple the satellites. Initial servicing missions are focused on the "jet pack" services mentioned above, where after grappling the client satellite, the servicer takes over stationkeeping and other propulsion requirements. But most of the players have plans to move to fancier services, such as using a servicer to clip on a Mission Extension Pod to provide the jetpack services without tying up the more expensive servicer, or using robotic manipulators to fix stuck deployable structures, etc.

One final attribute of GEO is how things get there. Most GEO satellites are launched on a rocket into a Geostationary Transfer Orbit (GTO), which has a perigee down in LEO, and an apogee up at or above GEO. Occasionally, mostly for military GEO satellites, the rocket will also perform a second burn at apogee to circularize the orbit and drop the satellite off directly in GEO, but for most commercial satellites, the satellite itself provides the ~1.5 km/s worth of maneuvers to raise perigee, lower inclination, and circularize in GEO. Some satellites do this quickly, in a small number of chemical propulsion burns; others take months, slowly spiraling out using more efficient solar electric propulsion.
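
The ~1.5 km/s figure is easy to reproduce for the circularization alone; this sketch assumes a 200 km perigee GTO and ignores the inclination change, which adds more for higher-latitude launch sites:

```python
MU = 398_600.4418              # km^3/s^2
r_geo = 42_164.0               # GEO radius, km
r_perigee = 6_378.137 + 200.0  # 200 km LEO perigee (assumed)

a_gto = (r_perigee + r_geo) / 2.0
v_apogee = (MU * (2.0 / r_geo - 1.0 / a_gto)) ** 0.5  # vis-viva at GTO apogee
v_circular = (MU / r_geo) ** 0.5
dv_circ = v_circular - v_apogee
print(dv_circ)  # ~1.47 km/s just to circularize, before any plane change
```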

## What I think This Means for GEO Depots

My first opinion when I started looking at GEO depots was that, because it's relatively easy to get around GEO without using lots of propellant, maybe it would make sense to have a single aggregated depot, located in a circular orbit somewhat above GEO. But then several factors made me reconsider as I dug deeper:

• First, at 200 km above GEO, you're talking a very long time for the depot to make its lap around the GEO belt. Servicers are expensive, so you don't want to have to wait forever to replenish/resupply. You can cut into that time by having your depot higher above GEO (where the relative orbital periods mean that it takes less time to do a lap around the belt), but the round-trip delta-V requirements start growing pretty quickly on you. Including a rendezvous with the depot, you'd be at over 60 m/s of round-trip dV with a depot 800 km above the GEO belt, and it would still take over 1 month to do a lap, which means you could be waiting for weeks. If media reports that Intelsat is paying Northrop Grumman $13M/yr for MEV jetpack services are correct, waiting a month could be costing you ~$1M worth of revenue.
• Also, because most missions, and thus most rideshare opportunities, go to GTO, not GEO, you're going to have to use a decent amount of propulsion to get any propellant or supplies (spare extension pods, tools, etc.) up to the depot. Sending a tug down to pick things up in GTO and bring them back to GEO may be possible, but that's a lot of dV, and for low-thrust systems, potentially several months. I haven't done the detailed trades, so I can't be sure, but I think this suggests that in many cases you'll want the depot deliveries to be self-propelled to some extent.
• Also, because no GEO satellites are currently designed for refueling, most of what the depot(s) would be storing will be propellant, extension pods, tools, and other supplies for servicers themselves. That means that almost all depot customers in GEO will be satellite servicing vehicles with full rendezvous, proximity operations, and docking (RPOD) capabilities, and in many cases with sophisticated servicing robotics.
• Lastly, all GEO servicers I've seen use either storable chemical bipropellants or storable electric propulsion gases. Nobody tends to use cryogenic propellants or other things that need careful thermal design or active cooling.
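
The 800 km depot numbers in the first bullet can be reproduced with a coplanar Hohmann-transfer sketch (circular orbits assumed, no rendezvous margin included):

```python
from math import pi

MU = 398_600.4418  # km^3/s^2
r1 = 42_164.0      # GEO radius, km
r2 = r1 + 800.0    # depot orbit, 800 km above GEO

def vis_viva(r, a):
    """Orbital speed at radius r on an orbit with semi-major axis a."""
    return (MU * (2.0 / r - 1.0 / a)) ** 0.5

a_transfer = (r1 + r2) / 2.0
one_way_kms = ((vis_viva(r1, a_transfer) - vis_viva(r1, r1)) +
               (vis_viva(r2, r2) - vis_viva(r2, a_transfer)))
round_trip_ms = 2.0 * one_way_kms * 1000.0

t1 = 2.0 * pi * (r1**3 / MU) ** 0.5
t2 = 2.0 * pi * (r2**3 / MU) ** 0.5
lap_days = t1 * t2 / (t2 - t1) / 86_400.0  # synodic period of the two orbits
print(round_trip_ms)  # ~58 m/s, i.e. "over 60 m/s" once rendezvous is added
print(lap_days)       # ~36 days, i.e. "over 1 month" per lap of the belt
```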

To me, all of these things undermine the case for a unitary depot, and push me in the direction of a more distributed depot arrangement, as in LEO. Basically, you don't want to take a long time to get to/from a depot, so having more than one of them spread throughout GEO makes sense. Since you'll likely need on-board propulsion anyway to get your supplies into the depot orbit, there's less of a case for a dumb tanker that supplies a more sophisticated unitary storage facility. Since your customers all have to have RPOD capabilities, and most have robotics, there's less need for those on the depot side, so once again less benefit to a centralized depot that can amortize its robotics/RPOD capabilities over lots of customers. Lastly, since the propellant mostly doesn't need much conditioning, there's not a big advantage to storing it in larger quantities, as there would be if the propellant were cryogenic.

Anyhow, based on all of that, here's my best guess at what I think we'll see for GEO depots.

## GEO Depot Characteristics and Considerations

Application: Propellant for refueling GEO servicers and "extension pods"12, spare extension pods, and tools for servicers.

Location: In a circular orbit at moderately higher altitude than GEO (likely 100–400 km over GEO), spread out fairly evenly to minimize the wait time before a depot has drifted close enough to boost to and rendezvous with.

• Note that for propellants, one depot location is like another, but for specific tools or pods, it may be important to have the depot preposition itself in the right orbital position relative to the servicer, rather than just going to any old GEO depot.
• For payloads launched via PODS or similar GEO-insertion opportunities, where the material may not have its own propulsion, a servicer may need to capture it and drag it up to a safe operating altitude for temporary storage if it's not needed right away.

Depot Size: Likely ESPA class (180–400 kg). Most rideshare opportunities to GTO are via ESPAs or similar adapters. PODS-delivered options may be more in the 100–150 kg range.

• As described elsewhere, I see these depots as likely being one or more propellant tanks, attached to a propulsion system, with some basic spacecraft bus functionality. They likely wouldn't have RPOD, but would likely have grapple fixtures, and possibly some servicing ports for attaching tools or pods, transferring propellant, or attaching dumb payloads that need to be attached to something capable of stationkeeping.
• Itâs possible that the propulsion/bus system that delivers the payloads from GTO could be some sort of deployer tug like what Rocketlab is doing for Photon or what Spaceflight is doing with their SHERPA vehicle. Which means that in theory, the system might also be delivering some smaller GEO satellites along the way to getting into a depot parking orbit.

Propellant Types: Storable chemical propellants and EP propellants. Unlike LEO, there's a lot more standardization of propellant types in GEO. Most GEO satellites, once they're all the way in GEO, use either something in the hydrazine family as a monopropellant or xenon as an EP propellant.

• Though as with all other depot types, the propellant type chosen is going to be driven by what clients (in this case servicers) are looking for. Xenon seems like a likely first bet for most customers, though hydrazine or one of its variants (MMH or UDMH) might also come first.
• In addition to propellants, the depots would also likely be storing tools, "pods", and other hardware supplies.

Other Characteristics/Considerations:

• If PODS or other direct GEO insertion options result in significant numbers of deliveries of dumb cargo pallets or tanks, it could be beneficial to haul them up and attach them to one of the self-propelled depots, because on their own the PODS would lack the stationkeeping and other functionality you'd get from a self-propelled tanker.
• But given the tradeoff between lapping time and round-trip dV to depots, I think you'll still want to keep things fairly small overall, since you'll want a large number (maybe eventually more than a dozen) of smaller depots.
• Because GEO is so hard to reach from anywhere you could likely source propellant, I don't see reusable tankers being used to fill up a permanent depot tank; I really think you'll be talking about expendable tankage for the most part.
• However, the tankage is a lot more likely to last for more than one refueling: since there's a lot of variation in the sizes of servicers, most tanks will likely be involved in multiple refuelings. And if a tank starts getting relatively low, it might be worth transferring a little from one tank to another tanker/depot before retiring the empty tank.
• Because these depots would already be operating in a safe GEO graveyard orbit, they won't need further disposal delta-V. Though since the use of graveyard orbits may not always and forever be best practice for GEO disposal, having grapple fixtures on board could be a good idea to enable future disposal missions, but you'd probably want those anyway just to make refueling operations easier.

As I said up-front, of all the depot concepts, this is the one I feel least definite about. I'd love to hear other people's thoughts, but I tried to do my best to put together some logic and rationale for how I think things would turn out, and why.

Next Up: An Updated Propellant Depot Taxonomy Part IV: Smallsat Launcher Refueling Depots

• Photos of the Week: Tinside Lido, Log Climber, Dragon Temple (35 photos)

The Sagrada Familia in Barcelona, a hammock on Australian ski slopes, wildfire damage in Oregon, scorched wetlands in Brazil, flooding in Florida from Hurricane Sally, continued protests in Belarus, smoky skies over Seattle, scenes from the Crimean Fashion Week, and much more

• Voting
• How to Live on the Edge

I was all set to write a paragraph about how I should try to start a TV career by setting myself up as someone who explores the extreme edge of quiet, mellow, grown-up activities, sort of like the Bear Grylls of safety and complacency. Then I realized James May has beaten me to it.

I offer the following videos as evidence.

As always, thanks for using my Amazon Affiliate links (US, UK, Canada).

## September 17, 2020

• Chinese Antivirus Firm Was Part of APT41 "Supply Chain" Attack

The U.S. Justice Department this week indicted seven Chinese nationals for a decade-long hacking spree that targeted more than 100 high-tech and online gaming companies. The government alleges the men used malware-laced phishing emails and "supply chain" attacks to steal data from companies and their customers. One of the alleged hackers was first profiled here in 2012 as the owner of a Chinese antivirus firm.

Image: FBI

Charging documents say the seven men are part of a hacking group known variously as "APT41," "Barium," "Winnti," "Wicked Panda," and "Wicked Spider." Once inside a target organization, the hackers stole source code, software code-signing certificates, customer account data and other information they could use or resell.

APT41âs activities span from the mid-2000s to the present day. Earlier this year, for example, the group was tied to a particularly aggressive malware campaign that exploited recent vulnerabilities in widely-used networking products, including flaws in Cisco and D-Link routers, as well as Citrix and Pulse VPN appliances. Security firm FireEye dubbed that hacking blitz âone of the broadest campaigns by a Chinese cyber espionage actor we have observed in recent years.â

The government alleges the group monetized its illicit access by deploying ransomware and "cryptojacking" tools (using compromised systems to mine cryptocurrencies like Bitcoin). In addition, the gang targeted video game companies and their customers in a bid to steal digital items of value that could be resold, such as points, powers and other items that could be used to enhance the game-playing experience.

APT41 was known to hide its malware inside fake resumes that were sent to targets. It also deployed more complex supply chain attacks, in which it would hack a software company and modify the company's code with malware.

âThe victim software firm â unaware of the changes to its product, would subsequently distribute the modified software to its third-party customers, who were thereby defrauded into installing malicious software code on their own computers,â the indictments explain.

While the various charging documents released in this case do not mention it per se, it is clear that members of this group also favored another form of supply chain attack: hiding their malware inside commercial tools they created and advertised as legitimate security software and PC utilities.

One of the men indicted as part of APT41, now 35-year-old Tan DaiLin, was the subject of a 2012 KrebsOnSecurity story that sought to shed light on a Chinese antivirus product marketed as Anvisoft. At the time, the product had been "whitelisted," or marked as safe, by competing, more established antivirus vendors, although the company seemed unresponsive to user complaints and to questions about its leadership and origins.

Tan DaiLin, a.k.a. "Wicked Rose," in his younger years. Image: iDefense

Anvisoft claimed to be based in California and Canada, but a search on the company's brand name turned up trademark registration records that put Anvisoft in the high-tech zone of Chengdu in the Sichuan Province of China.

A review of Anvisoftâs website registration records showed the companyâs domain originally was created by Tan DaiLin, an infamous Chinese hacker who went by the aliases âWicked Roseâ and âWithered Rose.â At the time of story, DaiLin was 28 years old.

That story cited a 2007 report (PDF) from iDefense, which detailed DaiLin's role as the leader of a state-sponsored, four-man hacking team called NCPH (short for Network Crack Program Hacker). According to iDefense, in 2006 the group was responsible for crafting a rootkit that took advantage of a zero-day vulnerability in Microsoft Word, and was used in attacks on "a large DoD entity" within the USA.

âWicked Rose and the NCPH hacking group are implicated in multiple Office based attacks over a two year period,â the iDefense report stated.

When I first scanned Anvisoft at Virustotal.com back in 2012, none of the antivirus products detected it as suspicious or malicious. But in the days that followed, several antivirus products began flagging it for bundling at least two trojan horse programs designed to steal passwords from various online gaming platforms.

Security analysts and U.S. prosecutors say APT41 operated out of a Chinese enterprise called Chengdu 404 that purported to be a network technology company but served as a legal front for the hacking group's illegal activities, and that Chengdu 404 used its global network of compromised systems as a kind of dragnet for information that might be useful to the Chinese Communist Party.

Chengdu404âs offices in China. Image: DOJ.

âCHENGDU 404 developed a âbig dataâ product named âSonarX,â which was describedâ¦as an âInformation Risk Assessment System,'â the governmentâs indictment reads. âSonarX served as an easily searchable repository for social media data that previously had been obtained by CHENGDU 404.â

The group allegedly used SonarX to search for individuals linked to various Hong Kong democracy and independence movements, and to snoop on a U.S.-backed media outlet that ran stories examining the Chinese government's treatment of Uyghur people living in its Xinjiang region.

As noted by TechCrunch, after the indictments were filed prosecutors said they obtained warrants to seize websites, domains and servers associated with the group's operations, effectively shutting down the group's infrastructure and hindering its activities.

âThe alleged hackers are still believed to be in China, but the allegations serve as a âname and shameâ effort employed by the Justice Department in recent years against state-backed cyber attackers,â wrote TechCrunchâs Zack Whittaker.

• BSD Now 368: Changing OS roles

The theme of this week's BSD Now seems to be new roles for BSD, because there's talk about clustering and console changes.

• Production company aims to film space reality TV show, with the winner flying to orbit
Image: NASA

A US production company is planning a reality TV competition in which the winner will receive a trip to the International Space Station as the ultimate prize, Deadline reports. The plan is yet another way to capitalize on the newly developed private crew spacecraft from SpaceX and Boeing that are opening up ways for non-government astronauts to reach space.

The production company, dubbed Space Hero Inc., plans to put together a televised contest called Space Hero that would select contestants from around the world to train for space, according to Deadline. The winner of the contest would supposedly receive a 10-day trip to the space station that would be televised from launch to return to Earth.

• Amazon Delivery Drivers Hacking Scheduling System

Amazon drivers – all gig workers who don't work for the company – are hanging cell phones in trees near Amazon delivery stations, fooling the system into thinking that they are closer than they actually are:

The phones in trees seem to serve as master devices that dispatch routes to multiple nearby drivers in on the plot, according to drivers who have observed the process. They believe an unidentified person or entity is acting as an intermediary between Amazon and the drivers and charging drivers to secure more routes, which is against Amazon's policies.

The perpetrators likely dangle multiple phones in the trees to spread the work around to multiple Amazon Flex accounts and avoid detection by Amazon, said Chetan Sharma, a wireless industry consultant. If all the routes were fed through one device, it would be easy for Amazon to detect, he said.

âTheyâre gaming the system in a way that makes it harder for Amazon to figure it out,â Sharma said. âTheyâre just a step ahead of Amazonâs algorithm and its developers.â

• Engineering & Mining Journal: Futuristic Solutions Threaten Status Quo

Engineering & Mining Journal caught up with the leaders of Anglo American, First Mode, and other partners involved in developing the world's largest hydrogen-powered mining haul truck.

In December 2019, First Mode announced a $13.5 million agreement to develop technology for Anglo American's Future Smart Mining Program. A cornerstone of the project is the design and deployment of the world's largest hydrogen-fueled mining truck.

An effort of this magnitude requires the collaboration of many skilled partners. Engineering & Mining Journal shares thoughts from each group on their contributions, the impact of the technology on the industry, and the work involved to get the truck moving by the end of 2020. Here's an excerpt:

âAt the razor-sharp cutting edge of innovation in the truck-shovel mining solutions space are a handful of players rolling out solutions that, some say, could change the industry.

For example, Anglo American reported a prototype H2-fueled haul truck will be assembled at a working mine before 2021. If the prototype proves viable, it could be a game-changer and put Anglo American in the catbird seat in the push to decarbonize the mining industry.

In consideration of what they all offer in terms of production increases, cost savings and safety improvements, every one of these solutions could prove to be disruptive. And with sustainable mining a talking point among both industry critics and leaders, the timing of their release is near perfect.

First Mode said it was on schedule integrating the 2-MW hybrid power plant that will replace the diesel generator in the prototype truck. "Since the design was finalized in early 2020, First Mode has been busy integrating and testing the power module in Seattle, while simultaneously working to understand the existing truck to ensure a clean integration within existing systems," First Mode President Chris Voorhees said."

First Mode is proud to be involved in this effort, and we're looking forward to seeing the truck in action next year!

(Pages 60-63)

• As Fall Approaches, Space Station Brightens Twilight Skies

Just a quick heads up. The International Space Station (ISS) will be making evening passes now through early October for northern hemisphere skywatchers. With earlier sunsets we'll see the station during convenient evening viewing hours. If you have children and COVID restrictions force you to remain at home, herd them outside for a look. What better introduction to outer space and space travel than watching the space station and its current crew of three speed across the night sky?

You might even get lucky and see the ISS pass near the bright Saturn-Jupiter duo like I did last night. The station is always bright and unmistakable, but that brightness varies depending on its distance from an observer. Currently, the station orbits the Earth at an altitude of 261 miles (420 km). When it passes overhead, it's 261 miles away.

Overhead passes are the brightest because the ISS is closest. It easily exceeds Jupiter and nearly equals Venus, the most brilliant nighttime object besides the moon. The lower in the sky we view the station, the farther away it is, because you have to factor in its horizontal distance as well as its altitude to arrive at its true line-of-sight distance. The ISS may only be 261 miles up, but it can also be hundreds of miles away.

For instance, during last night's pass near Jupiter and Saturn, the station's line-of-sight distance was around 800 miles (1,300 km). That's why the ISS only reached magnitude -2.1, about as bright as Jupiter. No matter where you live, when the space station first "rises" in the western sky (it always travels west to east), its line-of-sight distance is more than 1,400 miles (2,300 km). Isn't it amazing you can see it from so far away? Its large size and high orbit are the reasons why.
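
That horizontal-plus-altitude geometry is easy to check yourself. Here's a small sketch (assuming a spherical Earth; the radius and altitude are round numbers of my choosing):

```python
import math

R_EARTH = 6371.0   # mean Earth radius, km
H_ISS = 420.0      # ISS altitude, km

def slant_range_km(elevation_deg, h=H_ISS):
    """Line-of-sight distance (km) to a satellite at altitude h that
    appears at the given elevation angle above the observer's horizon."""
    e = math.radians(elevation_deg)
    r = R_EARTH + h
    # Solve the observer-satellite triangle (law of cosines)
    return math.sqrt(r**2 - (R_EARTH * math.cos(e))**2) - R_EARTH * math.sin(e)

print(f"Overhead (90 deg): {slant_range_km(90):.0f} km")      # just the altitude
print(f"At the horizon (0 deg): {slant_range_km(0):.0f} km")  # ~2,350 km
```

Straight overhead the distance equals the altitude, and at the horizon it comes out near 2,350 km, matching the "more than 1,400 miles (2,300 km)" figure above.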

You can easily find the distance to the space station during any pass for your location by going to Heavens Above. Under Configuration on the left side of the page, click the observing location and settings link and add your city. Return to the home page and click the blue ISS link for a list of passes. Click on the next pass and a map and timeline of the station's path will pop up. The times shown are local.

Scroll down to the bottom and you'll see distances in kilometers at three different points of its track. To convert to miles click here.

Phone apps are another good way of keeping track of the space station. They also include distances for selected spots along its path. Here are two:

I wrote a short tutorial on how to use these apps here.
September and October are beautiful months to be outside. Cooler nights put an end to nuisance mosquitoes, and earlier sunsets mean you don't have to stay up so late to enjoy the stars. Wishing you clear skies!
• Friday Squid Blogging: Nano-Sized SQUIDS

SQUID news:

Physicists have developed a small, compact superconducting quantum interference device (SQUID) that can detect magnetic fields. The team focused on the instrument's core, which contains two parallel layers of graphene.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

• Delegation Appearing at Vancouver Police Board to Demand Ban on Police Street Checks; Over Ninety Organizations Call for Street Check Ban

FOR IMMEDIATE RELEASE

September 17, 2020, xʷməθkʷəy̓əm (Musqueam), Sḵwx̱wú7mesh (Squamish) and səlilwətaɬ (Tsleil-Waututh)/Vancouver, B.C. – A coalition of organizations is appearing remotely as a Delegation at the Vancouver Police Board meeting on September 17 at 1 pm.

Hoganâs Alley Society, Union of BC Indian Chiefs, WISH Drop-In Centre Society, Black Lives Matter-Vancouver, and the BC Civil Liberties Association will make a presentation to the Vancouver Police Board calling on them to immediately ban police street checks in the city. On July 22, Vancouver City Council unanimously passed a motion stating Councilâs priority is to end the practice of street checks in Vancouver.

Over 8,274 individuals have signed a petition calling for an immediate ban on police street checks. In an open letter to the Vancouver Police Board and the Province of BC, a total of 92 local and provincial community, environmental, faith, health, labour, legal, LGBTQ, student, and women's organizations are also calling on the Vancouver Police Board to implement an immediate ban on the racist and illegal practice of street checks.

Addressed to the Vancouver Police Board as well as the Government of BC, the organizations write, "We are writing you to take immediate action to address systemic discrimination in policing by ending all street checks in Vancouver and BC. Street checks are harmful and discriminatory for Indigenous, Black, and low-income communities. Street checks also have no basis in law, and you have the powers to ban them."

Open letter with 92 co-signatories is available here.

Petition is available here.

Vancouver Police Board meeting on September 17 at 1 pm can be viewed online here.

MEDIA CONTACTS

Latoya Farrell, Staff Lawyer, BC Civil Liberties Association: 780-716-4408; latoya@bccla.org
Lama Mugabo, Director, Hogan's Alley Society: 604-715-9565
Mebrat Beyene, Executive Director, WISH Drop-In Centre Society: 604-836-6464
Chief Don Tom, Vice President, Union of BC Indian Chiefs: 604-290-6083
Udokam Iroegbu: blacklivesmattervan@gmail.com
Harsha Walia, Executive Director, BC Civil Liberties Association: 778-885-0040

• Mars through the Smokey Haze…

The smoke from the fires in California has moved across the US and is currently creating very hazy skies here. Nothing like what they're seeing in the west, but certainly having an effect on astronomy. There were no other "stars" visible in the sky at the time these images were taken, other than Mars itself.

Here is Mars on 17 September, taken in moderately good seeing, but under extremely reduced transparency (1/5). In fact, the visibility in blue was so poor that I binned the blue channel (2×2) for this image. As a result of all this, I think the color balance is slightly affected by the smoke in the atmosphere, but the seeing was good enough to make up for it.

Visible in this image is the Amazonis region, with Olympus Mons to the lower left and the Tharsis volcanoes just coming into view above and to the left of Olympus.

Taken with a C14, ASI290m, and Astronomik RGB filters @ f/24 (blue binned 2×2).

• A review of the Par EndFedz EFT-MTR triband antenna
Note: the following post originally appeared on our sister site, the SWLing Post.

In July, I purchased a tiny QRP transceiver I've always wanted: the LnR Precision MTR-3B. It's a genius, purpose-built little radio and a lot of fun to operate in the field. It's also rather bare-bones, only including a specific feature set built …
• What the future of Venus exploration could look like following major discovery

With the recent discovery of a possible sign of life on Venus, it's possible we could see a new surge of missions headed to the cloudy world in the future. Those next-generation robotic explorers may need to take more precautions than Venusian explorers of the past.

Numerous missions to Venus have been proposed throughout the years, but few have actually materialized over the last three decades. That could change now, with perhaps the most compelling reason to visit the planet again. Astronomers found a gas, phosphine, in the sulfuric acid clouds of the planet, and they can't really explain why it's there. It's possible that phosphine is being produced by some tiny lifeforms, since we know that it's produced by microbes here on Earth, or...

• How Mathematical "Hocus-Pocus" Saved Particle Physics

In the 1940s, trailblazing physicists stumbled upon the next layer of reality. Particles were out, and fields – expansive, undulating entities that fill space like an ocean – were in. One ripple in a field would be an electron, another a photon, and interactions between them seemed to explain all electromagnetic events.

There was just one problem: The theory was glued together with hopes and prayers. Only by using a technique dubbed "renormalization," which involved carefully concealing infinite quantities, could researchers sidestep bogus predictions. The process worked, but even those developing the theory suspected it might be a house of cards resting on a tortured mathematical trick.

"It is what I would call a dippy process," Richard Feynman later wrote. "Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent."

Justification came decades later from a seemingly unrelated branch of physics. Researchers studying magnetization discovered that renormalization wasn't about infinities at all. Instead, it spoke to the universe's separation into kingdoms of independent sizes, a perspective that guides many corners of physics today.

Renormalization, writes David Tong, a theorist at the University of Cambridge, is "arguably the single most important advance in theoretical physics in the past 50 years."

## A Tale of Two Charges

By some measures, field theories are the most successful theories in all of science. The theory of quantum electrodynamics (QED), which forms one pillar of the Standard Model of particle physics, has made theoretical predictions that match up with experimental results to an accuracy of one part in a billion.

But in the 1930s and 1940s, the theory's future was far from assured. Approximating the complex behavior of fields often gave nonsensical, infinite answers that made some theorists think field theories might be a dead end.

Feynman and others sought whole new perspectives – perhaps even one that would return particles to center stage – but came back with a hack instead. The equations of QED made respectable predictions, they found, if patched with the inscrutable procedure of renormalization.

The exercise goes something like this. When a QED calculation leads to an infinite sum, cut it short. Stuff the part that wants to become infinite into a coefficient – a fixed number – in front of the sum. Replace that coefficient with a finite measurement from the lab. Finally, let the newly tamed sum go back to infinity.
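
The recipe can be written as a toy formula. Suppose (purely for illustration, not any particular QED diagram) a calculation produces a logarithmically divergent integral cut off at a scale $\Lambda$:

```latex
I(\Lambda) = \int_m^{\Lambda} \frac{dk}{k} = \ln\frac{\Lambda}{m},
\qquad
e_{\text{meas}} = e_0 + c\, I(\Lambda)
```

The divergent piece is absorbed into the bare coefficient $e_0$; swapping the combination $e_0 + c\ln(\Lambda/m)$ for the finite measured value $e_{\text{meas}}$ lets every prediction written in terms of $e_{\text{meas}}$ stay finite as $\Lambda \to \infty$.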

To some, the prescription felt like a shell game. "This is just not sensible mathematics," wrote Paul Dirac, a groundbreaking quantum theorist.

The core of the problem – and a seed of its eventual solution – can be seen in how physicists dealt with the charge of the electron.

In the scheme above, the electric charge comes from the coefficient – the value that swallows the infinity during the mathematical shuffling. To theorists puzzling over the physical meaning of renormalization, QED hinted that the electron had two charges: a theoretical charge, which was infinite, and the measured charge, which was not. Perhaps the core of the electron held infinite charge. But in practice, quantum field effects (which you might visualize as a virtual cloud of positive particles) cloaked the electron so that experimentalists measured only a modest net charge.

Two physicists, Murray Gell-Mann and Francis Low, fleshed out this idea in 1954. They connected the two electron charges with one "effective" charge that varied with distance. The closer you get (and the more you penetrate the electron's positive cloak), the more charge you see.
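
That distance-dependent effective charge can be made concrete with the standard one-loop running-coupling formula (my own illustration, electron loop only, not taken from the article):

```python
import math

ALPHA_0 = 1 / 137.035999  # fine-structure constant measured at long distance
M_E = 0.000511            # electron mass, GeV

def alpha_eff(q_gev):
    """One-loop QED effective coupling at momentum transfer q (GeV):
    the closer you probe (larger q), the more charge you see."""
    return ALPHA_0 / (1 - (2 * ALPHA_0 / (3 * math.pi)) * math.log(q_gev / M_E))

# Probing at the Z-boson scale reveals a noticeably stronger coupling
print(f"1/alpha at q = m_e:    {1 / alpha_eff(M_E):.1f}")   # 137.0
print(f"1/alpha at q = 91 GeV: {1 / alpha_eff(91.19):.1f}")
```

With only the electron loop included, 1/alpha drops from about 137 at everyday distances to roughly 134–135 at the Z-boson scale: a small but measurable strengthening as you penetrate the positive cloak.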

Their work was the first to link renormalization with the idea of scale. It hinted that quantum physicists had hit on the right answer to the wrong question. Rather than fretting about infinities, they should have focused on connecting tiny with huge.

Renormalization is "the mathematical version of a microscope," said Astrid Eichhorn, a physicist at the University of Southern Denmark who uses renormalization to search for theories of quantum gravity. "And conversely you can start with the microscopic system and zoom out. It's a combination of a microscope and a telescope."

## Magnets Save the Day

A second clue emerged from the world of condensed matter, where physicists were puzzling over how a rough magnet model managed to nail the fine details of certain transformations. The Ising model consisted of little more than a grid of atomic arrows that could each point only up or down, yet it predicted the behaviors of real-life magnets with improbable perfection.

At low temperatures, most atoms align, magnetizing the material. At high temperatures they grow disordered and the lattice demagnetizes. But at a critical transition point, islands of aligned atoms of all sizes coexist. Crucially, the ways in which certain quantities vary around this "critical point" appeared identical in the Ising model, in real magnets of varying materials, and even in unrelated systems such as a high-pressure transition where water becomes indistinguishable from steam. The discovery of this phenomenon, which theorists called universality, was as bizarre as finding that elephants and egrets move at precisely the same top speed.

Physicists don't usually deal with objects of different sizes at the same time. But the universal behavior around critical points forced them to reckon with all length scales at once.

Leo Kadanoff, a condensed matter researcher, figured out how to do so in 1966. He developed a "block spin" technique, breaking an Ising grid too complex to tackle head-on into modest blocks with a few arrows per side. He calculated the average orientation of a group of arrows and replaced the whole block with that value. Repeating the process, he smoothed the lattice's fine details, zooming out to grok the system's overall behavior.
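
Kadanoff's block-spin step is simple enough to sketch in a few lines of code (a majority-rule toy version; the 3×3 blocks and random starting grid are my choices):

```python
import numpy as np

def block_spin(lattice, b=3):
    """One Kadanoff block-spin step: replace each b-by-b block of +/-1
    spins with the sign of the block average (majority rule)."""
    n = (lattice.shape[0] // b) * b
    blocks = lattice[:n, :n].reshape(n // b, b, n // b, b)
    return np.sign(blocks.mean(axis=(1, 3))).astype(int)

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(81, 81))  # disordered high-temperature grid
for _ in range(3):                          # zoom out three times
    spins = block_spin(spins)
print(spins.shape)  # (3, 3): 81 -> 27 -> 9 -> 3
```

With an odd block size each block holds an odd number of ±1 spins, so the average is never zero and the majority rule always yields a definite arrow; three applications shrink an 81×81 grid down to a 3×3 summary of its large-scale behavior.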

Finally, Ken Wilson – a former graduate student of Gell-Mann with feet in the worlds of both particle physics and condensed matter – united the ideas of Gell-Mann and Low with those of Kadanoff. His "renormalization group," which he first described in 1971, justified QED's tortured calculations and supplied a ladder to climb the scales of universal systems. The work earned Wilson a Nobel Prize and changed physics forever.

The best way to conceptualize Wilson's renormalization group, said Paul Fendley, a condensed matter theorist at the University of Oxford, is as a "theory of theories" connecting the microscopic with the macroscopic.

Consider the magnetic grid. At the microscopic level, it's easy to write an equation linking two neighboring arrows. But taking that simple formula and extrapolating it to trillions of particles is effectively impossible. You're thinking at the wrong scale.

Wilson's renormalization group describes a transformation from a theory of building blocks into a theory of structures. You start with a theory of small pieces, say the atoms in a billiard ball. Turn Wilson's mathematical crank, and you get a related theory describing groups of those pieces – perhaps billiard ball molecules. As you keep cranking, you zoom out to increasingly larger groupings – clusters of billiard ball molecules, sectors of billiard balls, and so on. Eventually you'll be able to calculate something interesting, such as the path of a whole billiard ball.

This is the magic of the renormalization group: It helps identify which big-picture quantities are useful to measure and which convoluted microscopic details can be ignored. A surfer cares about wave heights, not the jostling of water molecules. Similarly, in subatomic physics, renormalization tells physicists when they can deal with a relatively simple proton as opposed to its tangle of interior quarks.

Wilsonâs renormalization group also suggested that the woes of Feynman and his contemporaries came from trying to understand the electron from infinitely close up. âWe donât expect to be valid down to arbitrarily small scales,â said James Fraser, a philosopher of physics at Durham University in the U.K. Mathematically cutting the sums short and shuffling the infinity around, physicists now understand, is the right way to do a calculation when your theory has a built-in minimum grid size. âThe cutoff is absorbing our ignorance of whatâs going onâ at lower levels, said Fraser.

In other words, QED and the Standard Model simply can't say what the bare charge of the electron is from zero nanometers away. They are what physicists call "effective" theories. They work best over well-defined distance ranges. Finding out exactly what happens when particles get even cozier is a major goal of high-energy physics.

## From Big to Small

Today, Feynman's "dippy process" has become as ubiquitous in physics as calculus, and its mechanics reveal the reasons for some of the discipline's greatest successes and its current challenges. During renormalization, complicated submicroscopic capers tend to just disappear. They may be real, but they don't affect the big picture. "Simplicity is a virtue," Fendley said. "There is a god in this."

That mathematical fact captures nature's tendency to sort itself into essentially independent worlds. When engineers design a skyscraper, they ignore individual molecules in the steel. Chemists analyze molecular bonds but remain blissfully ignorant of quarks and gluons. The separation of phenomena by length, as quantified by the renormalization group, has allowed scientists to move gradually from big to small over the centuries, rather than cracking all scales at once.

Yet at the same time, renormalization's hostility to microscopic details works against the efforts of modern physicists who are hungry for signs of the next realm down. The separation of scales suggests they'll need to dig deep to overcome nature's fondness for concealing its finer points from curious giants like us.

âRenormalization helps us simplify the problem,â said Nathan Seiberg, a theoretical physicist at the Institute for Advanced Study in Princeton, New Jersey. But âit also hides what happens at short distances. You canât have it both ways.â

• History Brief: NASA Selects the "New Nine", September 17, 1962

As the year 1962 unfolded, NASA was beginning its series of crewed orbital spaceflights as part of the Mercury program. But with the crewed Gemini and Apollo programs now approved, a new group of astronauts was required to supplement the original seven chosen three years earlier, dubbed the "Mercury 7" (see "Project Mercury: Choosing the Astronauts and Their Machine"). On April 18, 1962, NASA formally announced that it was accepting applications for a new group of astronauts. Unlike the secret selection process for the Mercury 7, this selection was widely advertised, with public announcements and the minimum standards communicated to aircraft companies, government agencies and the Society of Experimental Test Pilots.

An early concept drawing comparing NASA's manned spacecraft: Apollo (top), Gemini (middle) and Mercury (bottom). In the lower left is a comparison of these programs' launch vehicles: (left to right) The Saturn V, Titan II and Atlas. (NASA)

The criteria for the selection process were broadly similar to those three years earlier: High-performance jet pilots with a minimum of 1,500 hours of test pilot flight experience who had earned degrees in engineering or science. The maximum age was lowered from 40 to 35 years old since the selected candidates could be flying through the decade during the Apollo program. Because the Gemini and Apollo spacecraft would be less cramped than the small Mercury capsule, the height limit was relaxed somewhat to six feet (1.83 meters).

In total, 253 applications were received by the June 1, 1962 deadline, plus that of Neil Armstrong, which arrived a week late but was added at the insistence of the associate director of the Space Task Group, Walter C. Williams, who wanted a NASA test pilot included. After a series of examinations and tests during the summer of 1962, a total of nine candidates were eventually selected. Deke Slayton, who was one of the Mercury 7 and about to become the assistant director of Flight Crew Operations, telephoned the pilots on September 14. On September 17, 1962, NASA's newest astronauts were revealed during an official announcement at the Cullen Auditorium at the University of Houston and dubbed "The New Nine" by the press.

Neil A. Armstrong (NASA)

Neil A. Armstrong, who was 32 years old at the time, was a civilian test pilot for NASA before being selected as part of NASA's second group of astronauts. Originally, Armstrong had been selected in June 1958 as an astronaut for the USAF Man In Space Soonest (MISS) program, which was cancelled a couple of months later in favor of NASA's Project Mercury. During his career as a NASA test pilot, Armstrong flew the X-15 rocket plane a total of seven times between December 1960 and July 1962 but never flew high enough to qualify for astronaut wings. Armstrong was once again named part of an astronaut team in March 1962, this time for the USAF X-20 Dyna Soar program, before opting to join the NASA astronaut corps six months later. Armstrong's first space mission was as the command pilot of the Gemini 8 mission in March 1966. He would later go on to command the Apollo 11 mission and become the first person to set foot on the Moon in July 1969.

Frank F. Borman II (NASA)

USAF Major Frank F. Borman II, 34 years old, was a West Point graduate with extensive experience as a pilot and as an instructor in thermodynamics and fluid mechanics, as well as in flight and spacecraft testing, before becoming part of NASA's astronaut corps. Borman's first space mission was as the command pilot of the Gemini 7 long-duration mission flown in December 1965. He would go on to be the Commander of the historic Apollo 8 mission to orbit the Moon in December 1968.

USN Lieutenant Charles "Pete" Conrad, Jr., 32 years old, had been a Navy test pilot before being selected as part of NASA's second group of astronauts. Back in 1959, Conrad had been in the group of pilots being considered for NASA's first group of astronauts but did not make the final cut. He flew as the pilot on the Gemini 5 long-duration mission in August 1965, then went on to be the command pilot for the Gemini 11 mission 13 months later. During the Apollo program, he was the Commander of the Apollo 12 mission in November 1969, becoming the third person to walk on the Moon, then reprised his role as Commander for NASA's Skylab 2 mission launched in May 1973.

James A. Lovell, Jr. (NASA)

USN Lt. Commander James A. Lovell, Jr., 34 years old, was an Annapolis graduate with an impressive military career as a pilot and instructor before joining NASA's new group of astronauts. Like Conrad, he had been part of the group competing for a slot in NASA's first group of astronauts but was not selected. His first assignment was as the pilot on the Gemini 7 long-duration mission in December 1965. He then went on to be the command pilot of the Gemini 12 mission nine months later. During the Apollo program, he served as the Command Module Pilot of the historic Apollo 8 mission in December 1968 and then went on to be the Commander of the ill-fated Apollo 13 mission in April 1970.

James A. McDivitt (NASA)

USAF Major James A. McDivitt, 33 years old, was a veteran of the Korean War, flying 145 combat missions, and had extensive experience flying experimental aircraft before joining NASA. His first spaceflight was as the command pilot on the Gemini 4 mission in June 1965 which included the first American EVA. He was subsequently the Commander of the Apollo 9 mission in March 1969 to test the Lunar Module in Earth orbit. He later became Manager of Lunar Landing Operations and was the Apollo Spacecraft Program Manager from 1969 to 1972.

Elliot M. See, Jr. (NASA)

Elliot M. See, Jr., 35 years old, was a civilian test pilot for General Electric, where he was involved in flight testing of jet engines for various high-performance aircraft, before joining NASA. He had been assigned to be the command pilot for the Gemini 9 mission but was killed, along with his crewmate Charles Bassett, on February 28, 1966 when the T-38 aircraft they were flying crashed in bad weather in St. Louis as they were on their way to McDonnell to inspect their spacecraft.

Thomas P. Stafford (NASA)

USAF Major Thomas P. Stafford, who turned 32 the day his selection was publicly announced, was a graduate of the US Naval Academy and a pilot who served as the chief of the Performance Branch of the USAF Aerospace Research Pilot School at Edwards Air Force Base before he was selected as an astronaut. Stafford's first assignment was as the pilot for the Gemini 6 mission launched in December 1965. After See's death, Stafford became the command pilot of the Gemini 9 mission launched in June 1966, making him the first person to fly into orbit twice in less than six months. During the Apollo program, Stafford was the Commander of the Apollo 10 mission in May 1969, which performed a dress rehearsal for the Apollo 11 lunar landing. Stafford then went on to be the Commander of the Apollo component of the cooperative Apollo-Soyuz Test Project flown in July 1975.

Edward H. White II (NASA)

USAF Major Edward H. White II, 31 years old, was an experienced pilot who earned his credentials as a USAF test pilot in 1959 specifically to improve his chances of becoming an astronaut. He served as the pilot on the Gemini 4 mission to become the first American to perform an EVA in June 1965. His life was tragically cut short on January 27, 1967 in the Apollo 1 fire, during what was supposed to be a routine dress rehearsal for the upcoming launch of that mission.

John W. Young (NASA)

USN Lt. Commander John W. Young, who was about to turn 32 when selected, was a naval aviator who had previously done work to support the development of weapons systems for the F-4 Phantom II jet fighter before starting his long career as a NASA astronaut. His first assignment was as the pilot on the Gemini 3 mission in March 1965. He then went on to be the command pilot for the Gemini 10 mission in July 1966. During the Apollo program, Young served as the Command Module Pilot of the Apollo 10 mission in May 1969 to rehearse the first Moon landing attempt and then went on to command the Apollo 16 lunar landing mission in April 1972. After Apollo, he served as Chief of the Astronaut Office from 1974 to 1987 and commanded the STS-1 and STS-9 missions in the Space Shuttle program, launched in April 1981 and November 1983, respectively.

A portrait of NASA's astronaut corps after the addition of The New Nine. (NASA)

âProject Mercury: Choosing the Astronauts and Their Machineâ, Drew Ex Machina, April 9, 2019 [Post]

• Odds and Ends on the Clouds of Venus

James Gunn may have been the first science fiction author to anticipate the "new Venus," i.e., the one we later discovered thanks to observations and Soviet landings on the planet that revealed what its surface was really like. His 1955 tale "The Naked Sky" described "unbearable pressures and burning temperatures" when it ran in Startling Stories for the fall of that year. Gunn was guessing, but we soon learned Venus really did live up to that depiction.

I think Larry Niven came up with the best title among SF stories set on the Venus we found in our data. "Becalmed in Hell" is a 1965 tale in Niven's "Known Space" sequence that deals with clouds of carbon dioxide, hydrochloric and hydrofluoric acids. No longer a tropical paradise, this Venus was a serious do-over of Venus as a story environment, and the more we learned about the planet, the worse the scenario got.

But when it comes to life in the Venusian clouds (human, no less), I always think of Geoffrey Landis, not only because of his wonderful novella "The Sultan of the Clouds," but also because of his earlier work on how the planet might be terraformed, and what might be possible within its atmosphere. For a taste of his ideas on terraforming, a formidable task to say the least, see his "Terraforming Venus: A Challenging Project for Future Colonization," from the AIAA SPACE 2011 Conference & Exposition, available here. But really, read "The Sultan of the Clouds," where human cities float atop the maelstrom:

âA hundred and fifty million square kilometers of clouds, a billion cubic kilometers of clouds. In the ocean of clouds the floating cities of Venus are not limited, like terrestrial cities, to two dimensions only, but can float up and down at the whim of the city masters, higher into the bright cold sunlight, downward to the edges of the hot murky depthsâ¦ The barque sailed over cloud-cathedrals and over cloud-mountains, edges recomplicated with cauliflower fractals. We sailed past lairs filled with cloud-monsters a kilometer tall, with arched necks of cloud stretching forward, threatening and blustering with cloud-teeth, cloud-muscled bodies with clawed feet of flickering lightning.â

Published originally in Asimov's (September 2010) and reprinted in the Dozois Year's Best Science Fiction: Twenty-Eighth Annual Collection, the story depicts a vast human presence in aerostats floating at the temperate levels. Landis has explored a variety of Venus exploration technologies including balloons, aircraft and land devices, all of which might eventually be used in building a Venusian infrastructure that would support humans.

Weâve already seen that Carl Sagan had written about possible life in the Venusian atmosphere, and an even more ambitious Paul Burch considered using huge mirrors in space to deflect sunlight, generate power, and cool down the planet. Closer to our time, an internal NASA study called HAVOC, a High Altitude Venus Operational Concept based on balloons, was active, though my understanding is that the project, in the hands of Dale Arney and Chris Jones at NASA Langley, has been abandoned. Maybe the phosphine news will give it impetus for renewal. The Landis aerostats would be far larger, of course, carrying huge populations. I have to wonder what ideas might emerge or be reexamined given the recent developments.

Image: Artist's rendering of a NASA crewed floating outpost on Venus.

With Venus so suddenly in the news, I see that Breakthrough Initiatives has moved swiftly to fund a research study looking into the possibility of primitive life in the Venusian clouds. The funding goes to Sara Seager (MIT) and a group that includes Janusz Petkowski (MIT), Chris Carr (Georgia Tech), Bethany Ehlmann (Caltech), David Grinspoon (Planetary Science Institute) and Pete Klupar (Breakthrough Initiatives). The group will go to work with the phosphine findings firmly in mind. Pete Worden is executive director of Breakthrough Initiatives:

âThe discovery of phosphine is an exciting development. We have what could be a biosignature, and a plausible story about how it got there. The next step is to do the basic science needed to thoroughly investigate the evidence and consider how best to confirm and expand on the possibility of life.â

Phosphine has been detected elsewhere in the Solar System in the atmospheres of Jupiter and Saturn, with formation deep below the cloud tops and later transport to the upper atmosphere by the strong circulation on those worlds. Given the rocky nature of Venus, we're presumably looking at far different chemistry as we try to sort out what the ALMA and JCMT findings portend, with exotic and hitherto unknown natural processes still possible. On that matter, I'll quote Hideo Sagawa (Kyoto Sangyo University, Japan), who was a member of the science team led by Jane Greaves that produced the recent paper:

âAlthough we concluded that known chemical processes cannot produce enough phosphine, there remains the possibility that some hitherto unknown abiotic process exists on Venus. We have a lot of homework to do before reaching an exotic conclusion, including re-observation of Venus to verify the present result itself.â

Image: ALMA image of Venus, superimposed with spectra of phosphine observed with ALMA (in white) and JCMT (in grey). As molecules of phosphine float in the high clouds of Venus, they absorb some of the millimeter waves that are produced at lower altitudes. When observing the planet in the millimeter wavelength range, astronomers can pick up this phosphine absorption signature in their data as a dip in the light from the planet. Credit: ALMA (ESO/NAOJ/NRAO), Greaves et al. & JCMT (East Asian Observatory).

Iâll close with the interesting note that the BepiColombo mission, carrying the Mercury Planetary Orbiter (MPO) and Mio (Mercury Magnetospheric Orbiter, MMO), will be using Venus flybys to brake for destination, one on October 15, the other next year on August 10. It has yet to be determined whether the onboard MERTIS (MErcury Radiometer and Thermal Infrared Spectrometer) could detect phosphine at the distance of the first flyby â about 10,000 kilometers â but the second is to close to 550 kilometers, a far more promising prospect. You never know when a spacecraft asset is going to suddenly find a secondary purpose.

Image: A sequence taken by one of the MCAM selfie cameras on board the European-Japanese Mercury mission BepiColombo as the spacecraft zoomed past the planet during its first and only Earth flyby. Images in the sequence were taken in intervals of a few minutes from 03:03 UTC until 04:15 UTC on 10 April 2020, shortly before the closest approach. The distance to Earth diminished from around 26,700 km to 12,800 km during the time the sequence was captured. In these images, Earth appears in the upper right corner, behind the spacecraft structure and its magnetometer boom, and moves slowly towards the upper left of the image, where the medium-gain antenna is also visible. Credit: ESA/BepiColombo/MTM, CC BY-SA IGO 3.0.

And keep your eye on the possibility of a Venus mission from Rocket Lab, a privately owned aerospace manufacturer and launch service, which could involve a Venus atmospheric entry probe using its Electron rocket and Photon spacecraft platform. According to this lengthy article in Spaceflight Now, Rocket Lab founder Peter Beck has already been talking with MITâs Sara Seager about the possibility. Launch could be as early as 2023, a prospect weâll obviously follow with interest.

A final interesting reference re life in the clouds, one I haven't had time to get to yet, is Limaye et al., "Venus' Spectral Signatures and the Potential for Life in the Clouds," Astrobiology Vol. 18, No. 9 (2 September 2018). Full text.