What governs the size of a newly forming star as it emerges from the molecular cloud around it? The answer depends upon the ability of gravity to overcome internal pressure within the cloud, and that in turn depends upon exceeding what is known as the Jeans Mass, whose value will vary with the density of the gas and its temperature. Exceed the Jeans mass and runaway contraction begins, forming a star whose own processes of fusion will arrest the contraction.
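For readers who want the quantitative version, the classical Jeans mass for a cloud of temperature T and density ρ can be written in its standard textbook form (with μ the mean molecular weight and m_H the hydrogen mass):

M_J = \left( \frac{5 k_B T}{G \mu m_H} \right)^{3/2} \left( \frac{3}{4 \pi \rho} \right)^{1/2}

So M_J scales as T^{3/2} ρ^{-1/2}: hotter gas resists collapse, while denser gas succumbs to it.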
At the Max Planck Institute for Astronomy (Heidelberg), Hubert Klahr and colleagues have been working on a different kind of contraction, the processes within the protoplanetary disk around such young stars. Along with postdoc Andreas Schreiber, Klahr has come up with a type of Jeans Mass that can be applied to the formation of planetesimals. While stars in formation must overcome the pressure of their gas cloud, planetesimals work against turbulence within the gas and dust of a disk — a critical mass is needed for the clump to overcome turbulent diffusion. The new paper on this work extends the findings reported in a 2020 paper by the same authors.
The problem this work is trying to solve has to do with the size distribution of asteroids; specifically, why do primordial asteroids — planetesimals that have survived intact since the formation of the Solar System, without the collisions that have fragmented so many such objects — tend to be found with a diameter in the range of 100 kilometers? To answer the question, the researchers look at the behavior of pebbles in the disk, small clumps in the range of a few millimeters to a few centimeters in size. Here the plot thickens.
Pebbles tend to drift inward toward the star, meaning that if these pebbles accrete slowly, they would fall into the star before reaching the needed size. Klahr and Schreiber argue that this drift can be overcome thanks to turbulence within the gas of the protoplanetary disk, creating traps where the pebbles can accumulate enough mass for them to become bound together by gravity. Turbulence is a local effect, to be sure, but it can also vary as the disk changes with increased distance from the star. A fast enough drop in gas pressure with distance can create a streaming instability within the disk, so that pebbles move chaotically and churn the gas around them.
Image: The solar primordial nebula of gas and dust surrounds the sun in the form of a flat disk. In it are vortices in which the small pebbles accumulate. These pebbles can grow up to asteroid size by gravitational collapse, if this is not hindered by turbulent diffusion. Credit: Hubert Klahr, Andreas Schreiber, MPIA / MPIA Graphics Department, Judith Neidel.
So on the one hand, turbulence helps create the pebble clusters that can aggregate into larger objects, but to grow larger still, such clouds need to reach a certain mass or they will be dispersed by these instabilities. Klahr and Schreiber’s calculations of the necessary mass show that objects would need to reach 100 kilometers in size to overcome the turbulence pressure for most regions within our own Solar System, matching observation. Thus we have a limiting mass for planetesimal formation.
All of this grows out of simulations and the comparison of their results with telescopic observations of protoplanetary disks around young stars. In the numerical models the authors used, planetesimals grow quickly from the projected pebble traps. Low mass pebble clouds below this turbulence-based Jeans Mass are less likely to grow through accretion, while larger clouds would collapse as they exceeded the critical mass.
Image: In this schematic the sequence of planetesimal formation starts with particles growing to pebble size. These pebbles then accumulate at the midplane and drift toward the star to get temporarily trapped in a zonal flow or vortex. Here the density of pebbles in the midplane is eventually sufficient to trigger the streaming instability. This instability both concentrates and diffuses pebbles, leading to clumps that reach the Hill density, at which tidal forces from the star can no longer shear the clumps away. If the turbulent diffusion is then also weak enough to let the pebble cloud collapse, planetesimal formation will occur. Credit: Hubert Klahr, Andreas Schreiber, MPIA / MPIA Graphics Department, Judith Neidel.
Planetesimal size does vary at larger distances, such that objects forming in the outer regions of the disk cluster around 10 kilometers at 100 AU. The authors suggest that the characteristic size of Kuiper Belt Objects will be found to decrease with increasing distance from the Sun, something that future missions into that region can explore. The 100 kilometer diameter seems to hold out to about 60 AU, beyond which smaller objects form from a depleted disk.
New Horizons comes to mind as the only spacecraft to study such an object, the intriguing 486958 Arrokoth, which at 45 AU is the most distant primordial object visited by a space probe. Future missions will give us more information about the size distribution of Kuiper Belt Objects. We can also think about the Lucy mission, whose solar panels we looked at yesterday. The target asteroids orbit at the Jupiter Lagrange points 4 and 5, one group ahead of the planet in its orbit, one group behind. Lucy is to visit six Trojans, objects which evidently migrated from different regions in the Solar System and may include a population of primordial planetesimals.
Klahr and Schreiber have come up with a model that can make predictions about how planetesimals form within the protoplanetary disk and what their likely sizes will be. Whether these predictions are borne out by subsequent observation will help us choose between this new Jeans Mass model for planetesimals and other formation processes, including the possibility that ice allows pebbles to stick together, or that aggregates of silicate flakes are the answer. Klahr is not convinced that either of these models can withstand scrutiny:
“Even if collisions were to lead to growth up to 100 km without eventually switching to a gravitational collapse, this method would predict too large a number of asteroids smaller than 100 km. It would also fail to describe the high frequency of binary objects in the Edgeworth-Kuiper belt. Both properties of our Solar System are easily reconcilable with the gravitational pebble cloud collapse.”
The effects of turbulence in other stellar systems will likewise rely upon the concentration of mass within the disk, with the ability for larger planetesimals to form decreasing as the gas within the disk is depleted, either by planet formation or by absorption by the host star. If Klahr and Schreiber’s model of planetesimal formation as a function of gas pressure is applicable to what we are seeing in protoplanetary disks around other stars, then it should prove useful for scientists building population synthesis models to incorporate planetesimals into the mix.
The previously published paper is Klahr & Schreiber, “Turbulence Sets the Length Scale for Planetesimal Formation: Local 2D Simulations of Streaming Instability and Planetesimal Formation,” Astrophysical Journal Volume 901, Issue 1, id.54, (September, 2020). Abstract. The upcoming paper is “Testing the Jeans, Toomre and Bonnor-Ebert concepts for planetesimal formation: 3D streaming instability simulations of diffusion regulated formation of planetesimals,” in process at the Astrophysical Journal (preprint).
I decided to sign up for the CWops CW Academy. They have small, instructor-led classes that meet for an hour twice a week, for eight weeks. In addition, there is an expectation of an hour of practice sending/receiving each day, so it is pretty intense, but a lot of fun. My instructor, Joe KK5NA, makes it enjoyable and fresh.
With all that CW practice, I have radios and keyers all over the house now (bedroom, den, and of course shack). I’ve started to run out of paddles. I broke into a box that I received 7 years ago – a 100th Anniversary ARRL Commemorative Vibroplex paddle to hook up to my Flex-6600M.
Unknown hackers attempted to add a backdoor to the PHP source code. There were two malicious commits, with the subject “fix typo” and the names of known PHP developers and maintainers. They were discovered and removed before being pushed out to any users. But since 79% of the Internet’s websites use PHP, it’s scary.
Developers have moved PHP to GitHub, which has better authentication. Hopefully it will be enough — PHP is a juicy target.
CAS-9, also named Hope-3 (XW-3), is a 6U CubeSat carrying a VHF uplink and UHF downlink linear transponder with a bandwidth of 30 kHz. The transponder will operate continuously throughout the satellite's lifetime, and amateur radio enthusiasts around the world can use it for two-way radio relay communications. The proposed links are:
VHF/UHF - V/U Mode Linear Transponder
UHF - CW Telemetry Beacon
UHF - AX.25 4.8k/9.6kbps GMSK Telemetry
The launch is planned from Jiuquan on December 15, 2021, into a 770 km circular orbit with a 98.58 degree inclination. The CW beacon uses Morse code to send satellite telemetry data, a feature widely welcomed by amateur radio enthusiasts.
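As a rough sanity check on that orbit (a sketch, not from the announcement: it assumes a spherical Earth and the standard gravitational parameter), a 770 km circular orbit works out to roughly a 100-minute period:

import math

MU_EARTH = 398600.4418   # km^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6371.0         # km, mean Earth radius (assumed spherical)

def circular_period_minutes(altitude_km):
    # Period of a circular orbit at the given altitude: T = 2*pi*sqrt(a^3/mu)
    a = R_EARTH + altitude_km
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

print(f"{circular_period_minutes(770):.1f} minutes per orbit")   # ~100 minutes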
The hits just keep coming. This one will be highly anticipated and a special Christmas present for all amateur satellite enthusiasts!
More recently I’ve been exploring in detail how Starship changes the game with Lunar exploration and began to wonder what “The Martian” would look like with a Starship-based architecture, rather than the rather more expensive and complex architecture used in the book.
If you haven’t read the book, do so now. Spoilers. Duh.
The book is not entirely self-consistent in its architecture, but to recap, about 25 launches of an Atlas V-like booster are used to assemble the parts and supplies for a 6 person trip to Mars. Each trip takes 4 years and provides a 30 day stay on the surface. Amortizing the Hermes cost over 5 missions, each mission works out to be about $30b, or $166m per person per day on Mars. This is a conservative estimate – in Chapter 12 Lewis says “Uncle Sam paid a hundred thousand dollars for every second we’ll be here” which works out to be about $260b for the whole program, or $65b/year. Either total is quite ambitious in the context of NASA’s current budget!
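Sketching that arithmetic (my numbers, derived from the figures quoted above rather than anything stated in the book):

# Rough cost arithmetic for the book's Ares architecture.
crew = 6
surface_days = 30
mission_cost = 30e9                      # $ per mission, Hermes amortized over 5 flights
per_person_day = mission_cost / (crew * surface_days)
print(f"${per_person_day / 1e6:.0f}M per person per surface day")   # ~$167M

# Lewis's "a hundred thousand dollars for every second we'll be here":
cost_per_second = 100_000
seconds_on_mars = surface_days * 24 * 3600
program_cost = cost_per_second * seconds_on_mars
print(f"${program_cost / 1e9:.0f}B total, ${program_cost / 4 / 1e9:.0f}B per year")   # ~$260B, ~$65B/yr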
In contrast, a Starship-based mission is optimized for high volume cargo delivery to the Martian surface. With cargo costs around $1000/kg, extended crewed missions and even city building become the most natural activity, and personnel costs drop from O($1000/s) to O($1000/day). This makes a big difference!
Unlike the Martian, there is only one new space vehicle, instead of five or six. The marginal unit cost is perhaps $10m, instead of $5b. It’s rapidly reusable, and delivers cargo in 100 T lots to the surface. Unfortunately for “The Martian”, Starship edition, the loss of programmatic brittleness and overwhelming system complexity means that the plot can’t rely on steadily ricocheting between various MacGuffins and science-based exposition. Or at least, not in the same way.
Instead, it has to contend with the somewhat less familiar trope of a lonely human in a post-scarcity landscape. In some ways, this may enable a more thorough examination of the human condition but it also presents challenges for plot motivation.
For completeness, here’s the Mars-Earth and Earth-Mars “porkchop” plot for the 2020s. Broadly speaking, launches are only advisable in the blue regions.
It can no longer be denied. This blog lacks fanfic. Until now.
I’m pretty much fucked.
That’s my considered opinion. Fucked.
Six days into what should be the greatest month of my life, and it’s turned into a nightmare.
I got loaded up at the departure night celebration. 550 days on Mars, time to head home. Took the rover for a spin around the base to get one last look at the Martian horizon, homed in on the Starship that will take us home, crawled into my bunk, and went to sleep.
Next morning I woke up and realized that I’m not in space. Well, I am in space, in the sense that everything is in space somewhere, but I’m still in a Starship that’s parked on Mars. Did we delay our launch for some reason? No.
No we didn’t. I looked out the window and on the other side of the main drag saw a blackened spot where the other ERV Starship launched, oh, 6 hours before. It didn’t wake me.
It takes a while to make the 1200 T of fuel needed to fuel up one of these bad boys and fly home, so in the meantime I’m apparently stranded here on a planet with nothing but 5000 T of cargo for company and a 300 day wait until the next crew arrives.
But that’s not the funny part. The funny part is that the cargo manifest database has been corrupted. Within 2 miles of my current position there are a few dozen Starships still loaded to the gills with everything a growing Mars base could possibly need. Pallets of light bulbs. Container loads of robots. Several dozen robotic machines capable of basically any action given the right instructions.
I can search the database and verify that I have something like 10,000 days of food supply – surely enough to see me out – but without unloading every last Starship my odds of finding any of it are vanishingly small. Just the thought of all those lonely sandwiches and croissants made my stomach pinch a little.
Here I am, a tiny dragon on an unbelievable hoard, and I don’t know where anything is.
The Hab shook in the roaring wind as the astronauts huddled in the center. All six of them now wore their flight space suits, in case they had to scramble for an emergency takeoff in the MAV. Johanssen watched her laptop while the rest watched her.
“Sustained winds over one hundred kph now,” she said. “Gusting to one twenty-five.”
“Jesus, we’re gonna end up in Oz,” Watney said. “What’s the abort wind speed?”
“Technically one fifty kph,” Martinez said. “Any more than that and the Starship’s in danger of tipping.”
“Martinez!” Lewis spoke. “That’s enough.”
“Sorry, commander,” Martinez conceded. “It’s going to be okay, Watney. This is Mars. The low atmospheric density means that 150 kph wind is barely perceptible. Nothing is going to tip the Starship.”
“I guess I look pretty silly wearing a flight suit 12 months before we’re due to launch?” Watney asked.
“You should have seen your face!” Johanssen crowed. “Classic.”
I’d better run diagnostics on the failed communications system. It sure looks like that latest software update bricked it. Seems unlikely that NASA would ship a patch that broke stuff permanently, but then again, who really understands Linux these days anyway?
Now I’m faced with a dilemma. Hunt through various bins of contraband USB drives looking for an earlier form of the software so I can manually reflash the ROM, or hack literally any of the thousands of radios within easy reach.
Thousands of radios? Well, yes. Bluetooth? Radio. Wifi? Radio. GSM? Radio. 5G? Radio. Zigbee? Radio. Asset trackers? Radio. Radio radio radio. But in this brave new world of ours, all our Internet of Shit devices come with radios built in. It’s a security nightmare. But only the USB drives are forbidden…
Of course, a 2 W cellular transmitter is hardly capable of streaming live video all the way back to Earth, but the Shannon-Hartley theorem shows you can still transmit data over a very noisy channel. I’ll put a sacrificial IoT radio in some kind of cantenna, load up the WSPR error correcting software, and start transmitting a status beacon. Back of the envelope math suggests I can send and receive a few thousand bits a day, which when combined with a submarine code book I have handy will be adequate to keep the lights on until I can locate that pallet of spare Starlink receivers.
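Roughly what that back-of-the-envelope looks like (with a made-up received signal level of -16 dB-Hz, picked to illustrate the regime rather than taken from any real link budget):

import math

def shannon_bits_per_day(cn0_dbhz, bandwidth_hz):
    # Shannon-Hartley capacity over a day, for an assumed received
    # carrier-to-noise-density ratio (C/N0) and receiver bandwidth.
    cn0 = 10 ** (cn0_dbhz / 10)        # linear C/N0, in Hz
    snr = cn0 / bandwidth_hz           # signal-to-noise ratio in that bandwidth
    return bandwidth_hz * math.log2(1 + snr) * 86400

# Assumed: a very weak signal (-16 dB-Hz) heard in a 2.5 kHz WSPR-style slice.
print(f"{shannon_bits_per_day(-16, 2500):,.0f} bits/day")   # a few thousand bits per day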
And of course it’s trivially easy to network all the local radios, so I don’t have to do an EVA to a nearby rover to check my email. What is this, 1982?
Let’s think about this methodically. Today I suited up, picked a cargo Starship at random, drove to its base, activated the external lift, and was gently lifted up into its cavernous cargo hold. Except that it was so full I could barely see in, let alone climb around, and every cargo pallet was wrapped in an opaque white pressure sleeve to prevent vacuum exposure. Naturally, every cargo pallet was also annotated with a giant QR code describing the contents, which I am unable to decode because my eyes are not pixelated. Useful.
I unloaded a pallet onto the tray of my rover, drove it back to the Hab and into the airlock, then broke through the seal to find … bearings. I now have 10,000 bearings of various sizes. And zero nutritional content.
Let’s do the math. I can unload 4 pallets a day, and my Hab can contain perhaps 20 before it is too full to move. Each Starship contains 200 pallets, so… this approach does not scale. My odds of finding a pallet I can metabolize before I die of old age are looking pretty slim.
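Spelled out (calling it three dozen ships, since "a few dozen" was never exactly counted):

# Why brute-force unloading doesn't scale, using the numbers above.
pallets_per_day = 4
pallets_per_ship = 200
starships = 36                  # "a few dozen" -- assumed

days_per_ship = pallets_per_ship / pallets_per_day       # 50 days to empty one Starship
days_total = days_per_ship * starships                    # ~1,800 days for the whole fleet
print(f"{days_per_ship:.0f} days per ship, {days_total:.0f} days total (~{days_total / 365:.1f} years)")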
Time to think bigger. Somewhere in those Starships are the materials to build a large pressurized tent easily big enough to contain all the cargo. Originally, we were meant to do some prospecting and the next crew would deploy the base materials, so that the crew up here was never material constrained.
Well, now I’m material constrained.
I cracked open my laptop and started hacking. First, there’s no reason I have to physically be present at a Starship to unload it. Obviously something needs to collect each pallet of cargo or else they’ll pile up at the base, but I dropped some code on each rover enabling it to coordinate with the Starship cargo crane to synchronize position and procure a pallet.
Because it’s the future, everything is covered in cameras and I can easily oversee and tweak the process as it goes along. My fleet of rovers is already in the process of ransacking every Starship in sight, driving their cargo to a selected location, and dropping it on the ground.
Which means that, in my quasi-isolated state, I’m deciding where the first Mars base is actually going to be built. So I’m picking the spot with a good view.
After a week or so of continuous operations, all 5000 T of cargo is now knolled in neat rows where I can drive back and forth on the open rover.
In my hand as I drive is an X-Ray Fluorescence gun, designed to perform contactless assays of near-surface minerals for prospecting, and, in this case, unmarked cargo bales. After 45 minutes of driving up and down the rows mindlessly irradiating this already irradiated frozen desert hellscape, I finally got what I’d been looking for – a strong fluorine signature.
Fluorine is ordinarily a rather antisocial little atom but substituting for hydrogen in polymers makes a very UV and chemical resistant substance. I cut through the packaging and found …
Teflon cookware. Okay, I’m on the right track, but I’m not opening a shop today. Back to driving.
The sun was setting in the west when I got my second hit. This time, I hit pay dirt. Reinforced PTFE tent material. Acres of the stuff. I forked the package to the center of the cargo midden and turned in for the night.
Before I went to sleep I tasked a rover to fit the entrenching tool and excavate the perimeter of the new enclosure.
Next morning I drove back to the site and found my path blocked by a trench. Duh. I filled in a bit, then XRF gun in hand I found the wall anchor pallets, buried them in the trench, then pulled the tent materials out from the center to the edges where it zipped together.
I’m making it sound easy. It would be impossible, except I’m the sole planetary heir to a boatload of stuff intended to replicate the entire industrial revolution in a year or two. So it was merely a pain in the ass. At times the effort of moving a joystick to drive the enormous and powerful vehicles around caused me to mist up my helmet.
See instruction manual steps 12-20 for a riveting account of what happened next: All my cargo was now under a 20 acre inflatable enclosure. Inside, the greenhouse effect began to warm things up a bit so they were no longer incredibly cold – merely ordinarily cold.
My jury-rigged inflation setup provided enough pressure to protect the cargo so I proceeded to open every last pallet. This I did by hand. With a chain saw. I’m not a barbarian, and there are like a thousand pallets to open. But a rover-mounted robot chainsaw is too metal, even for me.
When all was said and done, I basically had the world’s loneliest and most poorly organized open-air bazaar full of stuff that was mostly, though not entirely, recognizable by sight. I unpacked the life support physical plant gear, got it set up, and when the interior was shirt sleeves compatible, rebuilt my cargo manifest by hand.
I worked in a supermarket as a kid. This was like a stock take, except that making it up before the shift ended wasn’t a viable strategy anymore.
I had spent ten weeks getting everything unpacked and when I was done, I got a reply to my beacon. NASA thoughtfully sent me the manifest database decryption passcode in plaintext. It was “cargo_manifest_123”. So now I know where everything was on the now empty Starships. Great.
I was pleasantly surprised that so much could be done so quickly with so few people. Soon they’ll be building new Mars cities without bothering to send any astronauts at all. Even better.
Mindy Park moved on to perusing the rest of the image. The Hab was intact of course, why wouldn’t it be? Aliens? Dr. Kapoor would be happy to see that anyway.
She brought the coffee mug to her lips, then froze.
“Um…,” she mumbled to herself. “Uhhh…”
She composed an email to Venkat.
“Dr Kapoor, not urgent but I noticed that activity at the base has been continuing since the crew left, are the robotics teams proceeding? If so, why are they siting what seem to be cargo pallets in the backup city site location? And in the shape of a giant …”
Teddy looked to Mitch. “Mitch, your e-mail said you had something urgent?”
“Yeah,” Mitch said. “How long are we gonna keep this from the Ares 3 crew? They all think Watney’s dead. It’s a huge drain on morale.”
Teddy looked to Venkat.
“Mitch,” Venkat said. “We discussed this—”
“No, you discussed it,” Mitch interrupted. “They think they lost a crewmate. They’re devastated.”
“And when they find out they abandoned a crewmate?” Venkat asked. “Will they feel better then?”
Mitch poked the table with his finger. “They deserve to know. You think Commander Lewis can’t handle the truth?”
“It’s a matter of morale,” Venkat said. “They can concentrate on getting home—”
“I make that call,” Mitch said. “I’m the one who decides what’s best for the crew. And I say we bring them up to speed.”
After a few moments of silence, all eyes turned to Teddy. Annie coughed slightly.
“What’s that, Annie?” Teddy said.
“You know they have access to social media on the Hermes, right?”
Venkat looked at Mitch. “I thought we controlled all the data flowing to and from the Hermes?”
“They’re elite technical experts,” Mitch replied. “They can set up a VPN. Their browser traffic is encrypted. It’s not hard. My 8-year-old does it to stream Netflix shows from Europe.”
“So, what are they saying?” Teddy asked Annie.
“See for yourself.”
Annie swiped her practically incandescent Twitter feed to the main screen.
Mindy Park whispered. “I’ve seen worse…”
Lewis: Someone *cough @AstroJohannsen cough* said she verified Mark was in his bunk before we launched. Looks like we left him behind. Whoops. Sorry @NASA.
Bruce Ng called Venkat from Pasadena.
“Venkat, we may have a problem. I manifested an additional 100 kg of pasta for the early resupply.”
“That’s great, Bruce. Watney loves the stuff.”
“Yes, but I clicked the wrong button and …”
“You sent 100 T?”
“It was an accident. How did you know?”
“Classic error. We’ve all done it. And would you believe, Amazon Prime’s return policy from Mars isn’t much good.”
“Does this cause problems for your team?”
“Hardly! This isn’t the 1900s with their primitive expendable rockets. Logistics is practically below the API nowadays. And NASA never complains about getting the extra miles!”
“I remember those days. We drilled holes in everything to save weight, even the spaghetti!”
“Not the astronauts, though, they always squealed when we tried to remove redundant anatomical systems to save weight.”
“Well 100 T of pasta is a big mistake, but at least you didn’t accidentally miscount your crew before pressing the big red button.”
“Do you think they’ll let Lewis fly again?”
“Do you think they’ll let her land?”
“Good point, why not send her back out with the next crew? I’ll run it by Mitch. He’ll love it.”
“Venkat, thanks for the suggestion re long term crew disposition.”
“No problem Mitch, I thought you’d like it.”
“Yes, I think we can retcon a regulation stipulating that crew members may only re-enter Earth’s atmosphere provided all living members are physically present.”
“One other thing. The Chinese have been offering us their booster again.”
“The Taiyang Shen?”
“Yes. I appreciate the sacrifice but can you imagine NASA would have ever begun a crewed Mars campaign without the ability to lift a million tonnes a year to orbit? Including oddly large orders of pasta?”
“I think they want to participate, but they would have to sacrifice their probe.”
“Why? NASA, Russia, ESA, JAXA, hell, even the Australian Space Agency could launch it for them. When is the next Starship flight in their direction?”
“Current policy is no fewer than one Starship per planet per launch window, so we have half a dozen opportunities in the next year. Can we pencil them in?”
“Do it. And on their booster?”
“I dunno. More pasta?”
I’ve had some time on my hands. Well, that’s one way of putting it. Now that NASA’s communications system is back up, they won’t stop driving every robot they can get their hands on to do stuff. Do you know how annoying it is to find your rover drove off in the middle of the night and insists it can’t return until it’s torqued 13,000 bolts on some obscure production line?
Can you believe how failure prone that rover’s radio turned out to be? Rover antenna in hand, I drove my Mars car around my growing base and allowed my mind to wander.
So here I am. An overproductive fuel plant, a bunch of robotic robot factories, a small decorative lake that is yet to be stocked with fish, and a few dozen empty Starships waiting in a bone yard to be scrapped for parts some day.
Each has 8 little legs that fold out under the skirt and can adjust position to ensure stable landings on uneven ground. I found that the relevant accelerometer was not as well bolted down as it should be, and devised a cunning plan.
Standing on the Starship’s flight deck, I activated landing mode, put the accelerometer in my hand, turned on some sweet beats, and started wiggling that little sensor around.
The Starship computer thought the ground was doing something very funny and moved the legs to compensate. And slowly, slowly, the entire spaceship began to shuffle across the landscape. I was halfway to the launching pad before the beat even dropped.
Once there I dumped the fuel plant tanks into the Starship, climbed aboard, and pressed the big red button. Sitting atop 9.5 km/s of dispatchable delta-V, where was I going to go?
8 minutes later I was in a nice polar Low Mars Orbit. After this it was an exercise in Kerbal Space Program. I boosted out, changed planes, aerobraked back in, orbited Phobos, landed for a couple of days, walked around, flew to Deimos, same deal, then went back to LMO. I still had enough fuel practically to fly back to Earth but the launch window wasn’t quite right. So I dumped a few hundred tons of prop and re-entered, landing back on the launch pad. I unstowed my safely hijacked rover from the skirt cargo bin and drove back into town.
Soon enough the next window opened up and the next lot of cargo began to rain down from above. I drove out to a nearby hill so I could watch them come in in batches of 6 or 7, every morning just before lunch time. Soon the dusty old bone yard was joined by a few dozen more Starships, this time with their manifests intact.
Some of them brought people too. That’s gonna be weird.
The numbers didn’t make any sense at first. They just seemed to be a random jumble of noise. But the pulses were so perfectly uniform, and on a frequency that was always so silent; they had to come from an artificial source. I looked over the transmission again, and my heart skipped a beat.
And the picture-perfect penultimate line:
As I finish piecing together the message, my stomach sinks like an anchor. The words before me answer everything.
Ballet in an empty Syrian market, a forest fire in California, releasing turtles in Israel, a briefing by the Easter Bunny in the White House, riots in Northern Ireland, a giant sand dune in France, a wheat harvest in India, sunny weather in New York City, and much more.
Sky Pond State Forest – Parks on the Air K-4963. Today was a delicious early spring day. It was brilliant sunshine, the temperature was 65 F but the infrared rays washing over me felt like it was 85 F.
ARISS-USA, a Maryland nonprofit corporation, has earned recognition from the US Internal Revenue Service (IRS) as a Section 501(c)(3) charitable, scientific, and educational organization. ARISS-USA is the US segment of the Amateur Radio on the International Space Station (ARISS) international working group. With this IRS determination, donations to ARISS-USA become tax-deductible in the US, retroactive to May 21, 2020. This status allows the organization to solicit donations and grants. (via ARRL News)
Nice to hear that ARISS has finally taken flight and has become its own non-profit organization. This will help with funding and provide strategic direction for its role in keeping ham radio in space. My only question is why they didn’t take the opportunity to change the name of the organization? With the ISS now at 200% of its planned life and with ARISS planning future lunar projects, a name change in the not too distant future seems inevitable.
In any event, congratulations to ARISS on this bold move!
In 2008, while the medical resident Sam Behjati was doing his usual rounds in a hospital maternity ward, a colleague urgently pulled him into a patient’s room, where he saw a mother beaming with joy and swaddling a perfectly healthy newborn. Behjati’s jaw dropped. Only a few months earlier, doctors had given this mother the devastating news that a routine prenatal test — which had analyzed a sample from her placenta — showed that her baby had an extra copy of chromosome 13, a condition typically fatal for newborns. Yet postnatal tests showed that the baby had 23 normal pairs of chromosomes. “I walked away from the room thinking, ‘How can that possibly be?’” said Behjati, who is now a geneticist at the Wellcome Sanger Institute.
Behjati had uncovered a case of confined placental mosaicism (CPM), a condition in which patches of the placenta have genomes that don’t match up with that of the fetus — a strange phenomenon given that the placenta and fetus derive from the same fertilized egg. Scientists have known about CPM for decades, and they estimated that it occurs in less than 2% of pregnancies.
But according to a recent study by Behjati and his colleagues in Nature, human placentas routinely consist of a quilt of different genotypes, and this strange heterogeneity may actually play a role in protecting the fetus from genetic harm. The discovery illuminates not only several mysteries about the placenta itself but also some underlying connections to cancer.
The study painted the clearest picture yet of the genomic landscape of the placenta — and it’s unlike that of any other human tissue ever seen by Behjati, who calls it the “wild west of the human genome.” When they sequenced the DNA of 86 samples from 37 placentas, each set of cells was found to be genetically distinct and chock-full of genetic aberrations typically seen only in aggressive childhood cancers.
“Every placenta is organized as these big chunks of clones that sit next to each other,” he explained. “It’s like a cobblestone pattern of lots of different tumors that together form the placenta, and that is completely astonishing.”
The findings further confirm that the placenta is a biological oddity. Even its origin is peculiar: Placentas are thought to have emerged more than 90 million years ago, when a series of symbiotic retroviruses infiltrated ancient mammals’ genomes and over many generations led to the organ’s formation.
“It’s this strange organ because mothers invest a huge amount of resources into generating the whole placenta, which they then throw away,” said Steve Charnock-Jones, a reproductive biologist at the University of Cambridge and a co-author of the new study. For decades, biologists have puzzled over the apparent wastefulness of this arrangement: Why would natural selection allow a crucial, resource-intensive feature of mammalian life to be so seemingly inefficient?
A Genetic ‘Dumping Ground’
To try to answer this question, Behjati and his colleagues retraced when and where placental cells originate by comparing patterns of mutations in placental samples with samples from corresponding umbilical cords, which develop from fetal cells. The researchers found that cells separate into fetal and placental lineages earlier than anticipated — in some cases, within the first few cell divisions of the zygote. These findings show that the placenta charts its own path separate from that of the fetus early in pregnancy, explained Derek Wildman, an evolutionary biologist at the University of South Florida who was not involved in the study.
But during those crucial first weeks, when a single genetic defect could derail the pregnancy, the placenta may also act as a “dumping ground” for aberrations. During early development, when some of the dividing cells randomly develop genetic abnormalities, they might get earmarked for the placenta instead of the fetus, Behjati reasoned. His team found evidence for this theory: In one of the biopsies, the researchers observed placental cells with three copies of chromosome 10 — two from the mother and one from the father. But cells in the rest of the placenta and fetus had two copies of the chromosome (both from the mother), which suggested that the error started in the fertilized egg but was later corrected.
Patches of the placenta continue to carry on these early mutations — a living archive of genetic defects from the first days of pregnancy — while the fetus remains unharmed. But that’s no problem for the placenta, Wildman hypothesizes, because “it’s not constrained by the necessity to successfully produce an organ that’s going to live for 85 years.” The placenta may not have the same genetic checks and balances that other human cells do because of its inherent transience, he said.
Another possible explanation for these mutations, Behjati said, is that the placenta must outpace the growth of the fetus for the first 16 weeks of human pregnancy, so it may be worth racking up mutations as it balloons inside the uterus. It can “live fast and die young,” as Behjati put it.
Wendy Robinson, a medical geneticist at the University of British Columbia who studies early human development, said that it’s an interesting theory, but she disagreed with the notion that the placenta is merely the genetic garbage pail for the fetus. “There’s very rapid cell divisions that occur early in pregnancy, and that probably imposes a strong selection against cells that just can’t keep up, and so only the good cells will contribute,” she said. “So, it’s not that you’re shunting the bad cells to the placenta — and I know it’s semantics — but it’s that you’re selecting for the good cells in the baby and leaving everything else behind.”
The Normal Abnormal Cells
Regardless of the placenta’s role, the newly uncovered heterogeneity underscores just how miraculous it is that the placenta can evade detection and destruction by the maternal immune system, said Wildman. “You would think that the variant genes, which differ from the maternal genome, would be recognized by the maternal immune system,” he said.
Researchers have long noted similarities between cancers and the placenta — in how they evade the immune system, their invasion tactics and the set of chemical tags on their cells’ DNA that direct the activity of their genes. The two behave alike, too, Robinson said: For a successful pregnancy, the placenta must invade the uterine lining of the mother, tap into the mother’s blood supply and create its own network of blood vessels — all of which cancerous cells do as well.
Given these similarities, “people who study cancer should be very interested in this study,” said Yoel Sadovsky, the executive director of the Magee-Womens Research Institute in Pittsburgh who studies placental genetics. “It may suggest that childhood cancers have the same primitive things as the placenta that allow abnormal cells to propagate, but not in the normal embryonic tissue.”
Although the placenta and cancers are both invasive, there is a crucial difference: The placenta normally knows when to stop growing. (A very rare condition known as placenta accreta occurs if the placenta continues to invade the uterine muscle or nearby organs like the bladder.) While fetal growth accelerates rapidly during the third trimester, the most intense growth for the placenta takes place in the first trimester, said Charnock-Jones, adding that it would be problematic if the placenta continued to act as a tumor and drained valuable resources from the fetus during the third trimester.
“Not only did we find cancer-causing mutations, but we also actually found something that genomically looks like a perfectly normal cancer with very odd genetic signatures and copy number changes,” Behjati said.
“A cell can be normal despite all of that, and I find that quite incredible.”
I received the 2nd (Moderna) VAX jab this morning and am now considered fully vaccinated. Good for me, but it comes at a time when the virus is beginning to surge again. It wouldn’t be terribly surprising to discover this newest surge is related to the too-soon removal of the safety protocols mostly in the red states.
Spiking the football on the ten-yard line as it were.
Here in deep-red Indiana our governor (who has done an admirable job with the pandemic until now) canceled the mask mandate a few days ago. Even though businesses can continue to require it and masks still must be worn in all government buildings.
When asked, our governor said he would continue wearing a mask whenever he is in public and recommended that everyone else do the same. He further said he only lifted the mask mandate because it “made his Republican colleagues feel better” which is about as lame-ass an excuse as I’ve ever heard for anything.
Who are these right wing snowflakes with such tender feelings being assaulted by a mask mandate that it would make them feel better if lifted, even if they still intend to wear them?
I don’t get it, but since 2016 the GOP is only the shadow of a clown of its former self and it’s difficult to imagine anyone taking them seriously as a governing party again. Which is a problem given the alternative is only slightly less odious.
America is hosed and COVID is surging into yet another wave. Meanwhile, I’m fully vaccinated and waiting to see if there will be any overnight side-effects from taking the juice.
According to Wired, Signal is adding support for the cryptocurrency MobileCoin, “a form of digital cash designed to work efficiently on mobile devices while protecting users’ privacy and even their anonymity.”
Moxie Marlinspike, the creator of Signal and CEO of the nonprofit that runs it, describes the new payments feature as an attempt to extend Signal’s privacy protections to payments with the same seamless experience that Signal has offered for encrypted conversations. “There’s a palpable difference in the feeling of what it’s like to communicate over Signal, knowing you’re not being watched or listened to, versus other communication platforms,” Marlinspike told WIRED in an interview. “I would like to get to a world where not only can you feel that when you talk to your therapist over Signal, but also when you pay your therapist for the session over Signal.”
I think this is an incredibly bad idea. It’s not just the bloating of what was a clean secure communications app. It’s not just that blockchain is just plain stupid. It’s not even that Signal is choosing to tie itself to a specific blockchain currency. It’s that adding a cryptocurrency to an end-to-end encrypted app muddies the morality of the product, and invites all sorts of government investigative and regulatory meddling: by the IRS, the SEC, FinCEN, and probably the FBI.
And I see no good reason to do this. Secure communications and secure transactions can be separate apps, even separate apps from the same organization. End-to-end encryption is already at risk. Signal is the best app we have out there. Combining it with a cryptocurrency means that the whole system dies if any part dies.
I think I speak for many technologists when I say that any bolted-on cryptocurrency monetization scheme smells like a giant pile of rubbish and feels enormously user-exploitative. We’ve seen this before; after all, Telegram tried the same thing in an ICO that imploded when the SEC shut them down, and Facebook famously tried and failed to monetize WhatsApp through their decentralized-but-not-really digital money market fund project.
Signal is still a great piece of software. Just do one thing and do it well: be the trusted de facto platform for private messaging that empowers dissidents, journalists and grandma all to communicate freely with the same guarantees of privacy. Don’t become a dodgy money transmitter business. This is not the way.
Here’s a collection of some of the sights and events taking place in and around Boston from 1970 to 1979. Below, images of the blizzard of 1978, a victory parade for the Bruins after they won the 1970 Stanley Cup, enforcement and opposition to school segregation by busing, a Celtics game in Boston Garden, urban renewals and restorations, a St. Patrick’s Day parade in South Boston, anti-war protests, charm-school lessons, and much more.
It’s easy to forget how large our space probes have been. A replica of the Galileo probe at the Jet Propulsion Laboratory can startle at first glance. The spacecraft was 5.3 meters high (17 feet), but an extended magnetometer boom telescoped out to 11 meters (36 feet). Not exactly the starship Enterprise, of course, but striking when you’re standing there looking up at the probe and pondering what it took to deliver this entire package to Jupiter orbit in the 1990s.
The same feeling settles in this morning with news out of Lockheed Martin Space, where in both December 2020 and February of 2021 final deployment tests were conducted on the solar arrays that will fly aboard the Lucy mission. Scheduled for launch this fall (the launch window opens on October 16), Lucy is to make a 12-year reconnaissance of the Trojan asteroids of Jupiter. Given that energy from the Sun is inversely proportional to the square of the distance, Lucy in Jupiter space will receive only 1/27th of the energy available at Earth orbit.
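That 1/27 figure is just the inverse-square law evaluated at Jupiter's mean distance of about 5.2 AU:

\frac{F_{\text{Jupiter}}}{F_{\text{Earth}}} = \left( \frac{1\ \text{AU}}{5.2\ \text{AU}} \right)^{2} \approx \frac{1}{27}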
Image: Seen here partially unfurled, the Lucy spacecraft’s massive solar arrays completed their first deployment tests in January 2021 inside the thermal vacuum chamber at Lockheed Martin Space in Denver, Colorado. To ensure no extra strain was placed on the solar arrays during testing in Earth’s gravity environment, the team designed a special mesh wire harness to support the arrays during deployment. Credit: Lockheed Martin
“The success of Lucy’s final solar array deployment test marked the end of a long road of development. With dedication and excellent attention to detail, the team overcame every obstacle to ready these solar panels.”
Those are the words of Matt Cox, Lockheed Martin’s Lucy program manager, in Littleton, Colorado, who added:
“Lucy will travel farther from the Sun than any previous solar-powered Discovery-class mission, and one reason we can do that is the technology in these solar arrays.”
Lucy will actually move beyond Jupiter’s orbit in its study of the Trojans, up to 853 million kilometers out, so we need big panels. With the panels attached and fully extended, they could cover a five-story building. They’ll need to supply about 500 watts to the spacecraft and its instruments. Manufactured by Northrop Grumman, each solar panel when folded is no more than 4 centimeters thick, but when expanded, will have a diameter of almost 7.3 meters.
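Those numbers hang together in a back-of-the-envelope sense. Here is a sketch; the end-to-end conversion efficiency of roughly 12% is my assumption rather than a published figure:

import math

SOLAR_CONSTANT = 1361.0   # W/m^2 at 1 AU
JUPITER_AU = 5.2          # mean Jupiter distance, AU
EFFICIENCY = 0.12         # assumed end-to-end conversion efficiency (cells, pointing, losses)

diameter_m = 7.3
area_m2 = 2 * math.pi * (diameter_m / 2) ** 2    # two circular arrays
flux = SOLAR_CONSTANT / JUPITER_AU ** 2          # ~50 W/m^2 out at Jupiter
print(f"{area_m2 * flux * EFFICIENCY:.0f} W")    # ~500 W, in line with the stated requirement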
The good news is that with the help of weight offloading devices for needed support under gravity, the deployment tests, conducted in a thermal vacuum chamber at Lockheed Martin Space in Denver, went flawlessly. Hal Levison (Southwest Research Institute) points to the critical nature of the process:
“At about one hour after the spacecraft launches, the solar panels will need to deploy flawlessly in order to assure that we have enough energy to power the spacecraft throughout the mission. These 20 minutes will determine if the rest of the 12 year mission will be a success. Mars landers have their seven minutes of terror, we have this.”
Which takes me back momentarily to Galileo, where the ‘seven minutes of terror’ involved about 165 seconds, the time it was believed it would take the spacecraft’s actuators to deploy its high-gain antenna. The failure to deploy properly ultimately forced controllers to use a low-gain antenna to return data, a workaround that produced great science but at a much reduced rate. We know what happened here, which involved events during the 4.5 years Galileo spent in storage following the explosion of the Challenger shuttle, and we can hope that the thorough deployment tests at Lockheed Martin Space will ensure a successful deployment for Lucy.
Image: At 24 feet (7.3 meters) across each, Lucy’s two solar panels underwent initial deployment tests in January 2021. In this photo, a technician at Lockheed Martin Space in Denver, Colorado, inspects one of Lucy’s arrays during its first deployment. These massive solar arrays will power the Lucy spacecraft throughout its entire 4-billion-mile, 12-year journey as it heads out to explore Jupiter’s elusive Trojan asteroids. Credit: Lockheed Martin.
Twenty years after an apparent anomaly in the behavior of elementary particles raised hopes of a major physics breakthrough, a new measurement has solidified them: Physicists at Fermi National Accelerator Laboratory near Chicago announced today that muons — elementary particles similar to electrons — wobbled more than expected while whipping around a magnetized ring.
The widely anticipated new measurement confirms the decades-old result, which made headlines around the world. Both measurements of the muon’s wobbliness, or magnetic moment, significantly overshoot the theoretical prediction, as calculated last year by an international consortium of 132 theoretical physicists. The Fermilab researchers estimate that the difference has grown to a level quantified as “4.2 sigma,” well on the way to the stringent five-sigma level that physicists need to claim a discovery.
Taken at face value, the discrepancy strongly suggests that unknown particles of nature are giving muons an extra push. Such a discovery would at long last herald the breakdown of the 50-year-old Standard Model of particle physics — the set of equations describing the known elementary particles and interactions.
“Today is an extraordinary day, long awaited not only by us but by the whole international physics community,” Graziano Venanzoni, one of the leaders of the Fermilab Muon g-2 experiment and a physicist at the Italian National Institute for Nuclear Physics, told the press.
However, even as many particle physicists are likely to be celebrating — and racing to propose new ideas that could explain the discrepancy — a paper published today in the journal Nature casts the new muon measurement in a dramatically duller light.
The paper, which appeared just as the Fermilab team unveiled its new measurement, suggests that the muon’s measured wobbliness is exactly what the Standard Model predicts.
In the paper, a team of theorists known as BMW present a state-of-the-art supercomputer calculation of the most uncertain term that goes into the Standard Model prediction of the muon’s magnetic moment. BMW calculates this term to be considerably larger than the value adopted last year by the consortium, a group known as the Theory Initiative. BMW’s larger term leads to a larger overall predicted value of the muon’s magnetic moment, bringing the prediction in line with the measurements.
If the new calculation is correct, then physicists may have spent 20 years chasing a ghost. But the Theory Initiative’s prediction relied on a different calculational approach that has been honed over decades, and it could well be right. In that case, Fermilab’s new measurement constitutes the most exciting result in particle physics in years.
“This is a very sensitive and interesting situation,” said Zoltan Fodor, a theoretical particle physicist at Pennsylvania State University who is part of the BMW team.
BMW’s calculation itself is not breaking news; the paper first appeared as a preprint last year. Aida El-Khadra, a particle theorist at the University of Illinois who co-organized the Theory Initiative, explained that the BMW calculation should be taken seriously, but that it wasn’t factored into the Theory Initiative’s overall prediction because it still needed vetting. If other groups independently verify BMW’s calculation, the Theory Initiative will integrate it into its next assessment.
Dominik Stöckinger, a theorist at the Technical University of Dresden who participated in the Theory Initiative and is a member of the Fermilab Muon g-2 team, said the BMW result creates “an unclear status.” Physicists can’t say whether exotic new particles are pushing on muons until they agree about the effects of the 17 Standard Model particles they already know about.
Regardless, there’s plenty of reason for optimism: Researchers emphasize that even if BMW is right, the puzzling gulf between the two calculations could itself point to new physics. But for the moment, the past 20 years of conflict between theory and experiment appear to have been replaced by something even more unexpected: a battle of theory versus theory.
The reason physicists have eagerly awaited Fermilab’s new measurement is that the muon’s magnetic moment — essentially the strength of its intrinsic magnetism — encodes a huge amount of information about the universe.
A century ago, physicists assumed that the magnetic moments of elementary particles would follow the same formula as larger objects. Instead they found that electrons rotate in magnetic fields twice as much as expected. Their “gyromagnetic ratio,” or “g-factor” — the number relating their magnetic moment to their other properties — seemed to be 2, not 1, a surprise discovery later explained by the fact that electrons are “spin-1/2” particles, which return to the same state after making two full turns rather than one.
For years, both electrons and muons were thought to have g-factors of exactly 2. But then in 1947, Polykarp Kusch and Henry Foley measured the electron’s g-factor to be 2.00232. The theoretical physicist Julian Schwinger almost immediately explained the extra bits: He showed that the small corrections come from an electron’s tendency to momentarily emit and reabsorb a photon as it moves through space.
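The size of that correction is easy to check against the number quoted above. Schwinger's result, written with the familiar fine-structure constant α ≈ 1/137.04:

a_e = \frac{g-2}{2} \approx \frac{\alpha}{2\pi} \approx 0.00116,
\qquad g \approx 2\,(1 + 0.00116) = 2.00232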
Many other fleeting quantum fluctuations happen as well. An electron or muon might emit and reabsorb two photons, or a photon that briefly becomes an electron and a positron, among countless other possibilities that the Standard Model allows. These temporary manifestations travel around with an electron or muon like an entourage, and all of them contribute to its magnetic properties. “The particle you thought was a bare muon is actually a muon plus a cloud of other things that appear spontaneously,” said Chris Polly, another leader of the Fermilab Muon g-2 experiment. “They change the magnetic moment.”
The rarer a quantum fluctuation, the less it contributes to the electron or muon’s g-factor. “As you go further into the decimal places you can see where suddenly the quarks start to appear for the first time,” said Polly. Further along are particles called W and Z bosons, and so on. Because muons are 207 times heavier than electrons, they’re about 207² (or 43,000) times more likely to conjure up heavy particles in their entourage; these particles therefore alter the muon’s g-factor far more than an electron’s. “So if you’re looking for particles that could explain the missing mass of the universe — dark matter — or you’re looking for particles of a theory called supersymmetry,” Polly said, “that’s where the muon has a unique role.”
For decades, theorists have strived to calculate contributions to the muon’s g-factor coming from increasingly unlikely iterations of known particles from the Standard Model, while experimentalists measured the g-factor with ever-increasing precision. If the measurement were to outstrip the expectation, this would betray the presence of strangers in the muon’s entourage: fleeting appearances of particles beyond the Standard Model.
Muon magnetic moment measurements began at Columbia University in the 1950s and were picked up a decade later at CERN, Europe’s particle physics laboratory. There, researchers pioneered the measurement technique still used at Fermilab today.
High-speed muons are shot into a magnetized ring. As a muon whips around the ring, passing through its powerful magnetic field, the particle’s spin axis (which can be pictured as a little arrow) gradually rotates. Millionths of a second later, typically after speeding around the ring a few hundred times, the muon decays, producing an electron that flies into one of the surrounding detectors. The varying energies of electrons emanating from the ring at different times reveal how quickly the muon spins are rotating.
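The quantity extracted from those decay electrons is the anomalous precession frequency, the difference between how fast the spin turns and how fast the muon itself circles the ring. At the experiment's "magic" muon momentum, where the electric fields of the focusing quadrupoles drop out to first order, it reduces to the textbook relation:

\omega_a = \omega_{\text{spin}} - \omega_{\text{cyclotron}} = a_\mu\,\frac{eB}{m_\mu},
\qquad a_\mu \equiv \frac{g-2}{2}

So measuring ω_a together with the magnetic field B pins down a_μ, and with it the g-factor.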
In the 1990s, a team at Brookhaven National Laboratory on Long Island built a 50-foot-wide ring to fling muons around and began collecting data. In 2001, the researchers announced their first results, reporting 2.0023318404 for the muon’s g-factor, with some uncertainty in the final two digits. Meanwhile, the most comprehensive Standard Model prediction at the time gave the significantly lower value of 2.0023318319.
It instantly became the world’s most famous eighth-decimal-place discrepancy.
“Hundreds of newspapers covered it,” said Polly, who was a graduate student with the experiment at the time.
Brookhaven’s measurement overshot the prediction by nearly three times its supposed margin of error, known as a three-sigma deviation. A three-sigma gap is significant, unlikely to be caused by random noise or an unlucky accumulation of small errors. It strongly suggested that something was missing from the theoretical calculation, something like a dark matter particle or an extra force-carrying boson.
But unlikely sequences of events sometimes happen, so physicists require a five-sigma deviation between a prediction and a measurement to definitively claim a discovery.
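For concreteness, here is what those sigma levels mean as one-sided tail probabilities of a normal distribution, computed with only the Python standard library:

import math

def one_sided_p(sigma):
    # Probability of a Gaussian fluctuation at least this many sigma high.
    return 0.5 * math.erfc(sigma / math.sqrt(2))

for s in (3.0, 3.7, 4.2, 5.0):
    print(f"{s} sigma -> p ~ {one_sided_p(s):.2g}")
# 3 sigma   -> ~1 in 740
# 4.2 sigma -> ~1 in 75,000
# 5 sigma   -> ~1 in 3.5 million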
Trouble With Hadrons
A year after Brookhaven’s headline-making measurement, theorists spotted a mistake in the prediction. A formula representing one group of the tens of thousands of quantum fluctuations that muons can engage in contained a rogue minus sign; fixing it in the calculation reduced the difference between theory and experiment to just two sigma. That’s nothing to get excited about.
But as the Brookhaven team accrued 10 times more data, their measurement of the muon’s g-factor stayed the same while the error bars around the measurement shrank. The discrepancy with theory grew back to three sigma by the time of the experiment’s final report in 2006. And it continued to grow, as theorists kept honing the Standard Model prediction for the g-factor without seeing the value drift upward toward the measurement.
The Brookhaven anomaly loomed ever larger in physicists’ psyches as other searches for new particles failed. Throughout the 2010s, the $20 billion Large Hadron Collider in Europe slammed protons together in hopes of conjuring up dozens of new particles that might complete the pattern of nature’s building blocks. But the collider found only the Higgs boson — the last missing piece of the Standard Model. Meanwhile, a slew of experimental searches for dark matter found nothing. Hopes for new physics increasingly rode on wobbly muons. “I don’t know if it is the last great hope for new physics, but it certainly is a major one,” Matthew Buckley, a particle physicist at Rutgers University, told me.
Everyone knew that in order to cross the threshold of discovery, they would need to measure the muon’s gyromagnetic ratio again, and more precisely. So plans for a follow-up experiment got underway. In 2013, the giant magnet used at Brookhaven was loaded onto a barge off Long Island and shipped down the Atlantic Coast and up the Mississippi and Illinois rivers to Fermilab, where the lab’s powerful muon beam would let data accrue much faster than before. That and other improvements would allow the Fermilab team to measure the muon’s g-factor four times more accurately than Brookhaven had.
In 2016, El-Khadra and others started organizing the Theory Initiative, seeking to iron out any disagreements and arrive at a consensus Standard Model prediction of the g-factor before the Fermilab data rolled in. “For the impact of such an exquisite experimental measurement to be maximized, theory needs to get its act together, basically,” she said, explaining the reasoning at the time. The theorists compared and combined calculations of different quantum bits and pieces that contribute to the muon’s g-factor and arrived at an overall prediction last summer of 2.0023318362. That fell a hearty 3.7 sigma below Brookhaven’s final measurement of 2.0023318416.
But the Theory Initiative’s report was not the final word.
Uncertainty about what the Standard Model predicts for the muon’s magnetic moment stems entirely from the presence in its entourage of “hadrons”: particles made of quarks. Quarks feel the strong force (one of the three forces of the Standard Model), which is so strong it’s as if quarks are swimming in glue, and that glue is endlessly dense with other particles. The equation describing the strong force (and thus, ultimately, the behavior of hadrons) can’t be exactly solved.
That makes it hard to gauge how often hadrons pop up in the muon’s midst. The dominant scenario is the following: The muon, as it travels along, momentarily emits a photon, which morphs into a hadron and an antihadron; the hadron-antihadron pair quickly annihilate back into a photon, which the muon then reabsorbs. This process, called hadronic vacuum polarization, contributes a small correction to the muon’s gyromagnetic ratio starting in the seventh decimal place. Calculating this correction involves solving a complicated mathematical sum for each hadron-antihadron pair that can arise.
Uncertainty about this hadronic vacuum polarization term is the primary source of overall uncertainty about the g-factor. A small increase in this term can completely erase the difference between theory and experiment. Physicists have two ways to calculate it.
With the first method, researchers don’t even try to calculate the hadrons’ behavior. Instead, they simply translate data from other particle collision experiments into an expectation for the hadronic vacuum polarization term. “The data-driven approach has been refined and optimized over decades, and several competing groups using different details in their approaches have confirmed each other,” said Stöckinger. The Theory Initiative used this data-driven approach.
But in recent years, a purely computational method has been steadily improving. In this approach, researchers use supercomputers to solve the equations of the strong force at discrete points on a lattice instead of everywhere in space, turning the infinitely detailed problem into a finite one. This way of coarse-graining the quark quagmire to predict the behavior of hadrons “is similar to a weather forecast or meteorology,” Fodor explained. The calculation can be made ultra-precise by putting lattice points very close together, but this also pushes computers to their limits.
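A toy calculation makes the cost problem vivid. In four spacetime dimensions, halving the lattice spacing at fixed box size multiplies the number of lattice sites by sixteen, and real solvers slow down further as the spacing shrinks. The sketch below uses made-up constants purely to show that scaling; it bears no relation to BMW's actual code.

```python
# Toy scaling estimate for a 4D lattice calculation: the box size is held
# fixed while the lattice spacing shrinks. The cost model and its exponent
# are invented placeholders; only the 1/a**4 growth in site count is generic.
box_fm = 6.0  # physical box size per dimension, femtometres (assumed)

for spacing_fm in (0.15, 0.10, 0.075, 0.05):
    sites_per_dim = box_fm / spacing_fm
    total_sites = sites_per_dim ** 4
    # Pretend cost ~ sites / spacing, mimicking solvers that also slow
    # down on finer lattices (placeholder exponent).
    relative_cost = total_sites / spacing_fm
    print(f"a = {spacing_fm:.3f} fm : {total_sites:,.0f} sites, "
          f"relative cost ~ {relative_cost:,.0f}")
```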
The 14-person BMW team — named after Budapest, Marseille and Wuppertal, the three European cities where most team members were originally based — used this approach. They made four chief innovations. First they reduced random noise. They also devised a way of very precisely determining scale in their lattice. At the same time, they more than doubled their lattice’s size compared to earlier efforts, so that they could study hadrons’ behavior near the center of the lattice without worrying about edge effects. Finally, they included in the calculation a family of complicating details that are often neglected, like mass differences between types of quarks. “All four needed a lot of computing power,” said Fodor.
The researchers then commandeered supercomputers in Jülich, Munich, Stuttgart, Orsay, Rome, Wuppertal and Budapest and put them to work on a new and better calculation. After several hundred million core hours of crunching, the supercomputers spat out a value for the hadronic vacuum polarization term. Their total, when combined with all other quantum contributions to the muon’s g-factor, yielded 2.00233183908. This is “in fairly good agreement” with the Brookhaven experiment, Fodor said. “We cross-checked it a million times because we were very much surprised.” In February 2020, they posted their work on the arxiv.org preprint server.
The Theory Initiative decided not to include BMW’s value in their official estimate for a few reasons. The data-driven approach has a slightly smaller error bar, and three different research groups independently calculated the same thing. In contrast, BMW’s lattice calculation was unpublished as of last summer. And although the result agrees well with earlier, less precise lattice calculations that also came out high, it hasn’t been independently replicated by another group to the same precision.
The Theory Initiative’s decision meant that the official theoretical value of the muon’s magnetic moment had a 3.7-sigma difference with Brookhaven’s experimental measurement. It set the stage for what has become the most anticipated reveal in particle physics since the Higgs boson in 2012.
A month ago, the Fermilab Muon g-2 team announced that they would present their first results today. Particle physicists were ecstatic. Laura Baudis, a physicist at the University of Zurich, said she was “counting the days until April 7,” after anticipating the result for 20 years. “If the Brookhaven results are confirmed by the new experiment at Fermilab,” she said, “this would be an enormous achievement.”
And if not — if the anomaly were to disappear — some in the particle physics community feared nothing less than “the end of particle physics,” said Stöckinger. The Fermilab g-2 experiment is “our last hope of an experiment which really proves the existence of physics beyond the Standard Model,” he said. If it failed to do so, many researchers might feel that “we now give up and we have to do something else instead of researching physics beyond the Standard Model.” He added, “Honestly speaking, it might be my own reaction.”
The 200-person Fermilab team revealed the result to themselves only six weeks ago in an unveiling ceremony over Zoom. Tammy Walton, a scientist on the team, rushed home to catch the show after working the night shift on the experiment, which is currently in its fourth run. (The new analysis covers data from the first run, which makes up 6% of what the experiment will eventually accrue.) When the all-important number appeared on the screen, plotted along with the Theory Initiative’s prediction and the Brookhaven measurement, Walton was thrilled to see it land higher than the former and pretty much smack dab on top of the latter. “People are going to be crazy excited,” she said.
Papers proposing various ideas for new physics are expected to flood the arxiv in the coming days. Yet beyond that, the future is unclear. What was once an illuminating breach between theory and experiment has been clouded by a far foggier clash of calculations.
It’s possible that the supercomputer calculation will turn out to be wrong — that BMW overlooked some source of error. “We need to have a close look at the calculation,” El-Khadra said, stressing that it’s too early to draw firm conclusions. “It is pushing on the methods to get that precision, and we need to understand if the way they pushed on the methods broke them.”
That would be good news for fans of new physics.
Interestingly, though, even if the data-driven method is the approach with an unidentified problem under the hood, theorists have a hard time understanding what the problem could be other than unaccounted-for new physics. “The need for new physics would only shift elsewhere,” said Martin Hoferichter of the University of Bern, a leading member of the Theory Initiative.
Researchers who have been exploring possible problems with the data-driven method over the past year say the data itself is unlikely to be wrong. It comes from decades of ultraprecise measurements of 35 hadronic processes. But “it could be that the data, or the way it is interpreted, is misleading,” said Andreas Crivellin of CERN and other institutions, a coauthor (along with Hoferichter) of one paper studying this possibility.
It’s possible, he explained, that destructive interference happens to reduce the likelihood of the hadronic processes arising in certain electron-positron collisions, without affecting hadronic vacuum polarization near muons; then the data-driven extrapolation from one to the other doesn’t quite work. In that case, though, another Standard Model calculation that’s sensitive to the same hadronic processes gets thrown off, creating a different tension between the theory and data. And this tension would itself suggest new physics.
It’s tricky to resolve this other tension while keeping the new physics “elusive enough to not have been observed elsewhere,” as El-Khadra put it, yet it’s possible — for instance, by introducing the effects of hypothetical particles called vector-like leptons.
Thus the mystery swirling around muons might lead the way past the Standard Model to a more complete account of the universe after all. However things turn out, it’s safe to say that today’s news — both the result from Fermilab, as well as the publication of the BMW calculation in Nature — is not the end for particle physics.
Google’s Project Zero discovered, and caused to be patched, eleven zero-day exploits against Chrome, Safari, Microsoft Windows, and iOS. The flaws seem to have been exploited by “Western government operatives actively conducting a counterterrorism operation”:
The exploits, which went back to early 2020 and used never-before-seen techniques, were “watering hole” attacks that used infected websites to deliver malware to visitors. They caught the attention of cybersecurity experts thanks to their scale, sophistication, and speed.
It’s true that Project Zero does not formally attribute hacking to specific groups. But the Threat Analysis Group, which also worked on the project, does perform attribution. Google omitted many more details than just the name of the government behind the hacks, and through that information, the teams knew internally who the hacker and targets were. It is not clear whether Google gave advance notice to government officials that they would be publicizing and shutting down the method of attack.
No, she doesn’t mind discussing it openly, in front of her sons. But, and I cannot stress this point enough, instead of calling him a “sexy, sexy man,” as she does in this comic, she would swivel her hips and say he was “sexy-poo.”
Did you know that it’s possible for a teenage boy to cringe violently enough to throw his back out of alignment?
As always, thanks for using my Amazon Affiliate links (US, UK, Canada).
William Thomas State Forest NH – POTA and WW-FF Ham Radio
Today I did a last-minute amateur radio adventure with my very good friend Jim Cluett, W1PID. We decided to go to the William Thomas State Forest just north of …
As the pandemic edges further into its second year, the tedium of life under lockdown is taking its toll. We may be fighting the spread of infection by staying home and having our meetings over video conferencing software, but it’s hellishly boring! What we wouldn’t do for our hackerspaces to be open, and for the chance to hang out and chew the fat about our lockdown projects!
Here at Hackaday we can bring some needed relief in the form of the Hackaday Remote: Bring-A-Hack held via Zoom on Thursday, April 8th, at 1pm Pacific time. We know you’ve been working hard over the last year, and since you’ve been denied the chance to share those projects in person, we know you just can’t wait to sign up. Last year’s Remoticon showed us the value of community get-togethers online, with both the team soldering challenge rounds and the bring-a-hack being particular event highlights, so it’s time for a fresh dose to keep up our spirits.
It doesn’t matter how large or small your project is; if it interests you, other readers will also want to see it. Be prepared to tell the world how you made it, what problems you solved, and a bit about yourself, and then step back, take a bow, and be showered with virtual roses from the adoring masses. There’s a sign-up link if you have a project to show off. Don’t hold back if you’re worried it’s not impressive enough; a certain Hackaday scribe has submitted an OpenSCAD library she’s working on.
Not sure if this is a blog post or just a cautionary tale, but anyway: DON'T BUY THIS SOLDERING EXTRACTOR LAMP! With that said... here is my tale. I wanted a small solder extractor, and I saw these extractors built into an angled lamp, running off USB power. They piqued my interest and I handed over £40 for one online. On arrival I unpacked it and put it into use. It was OK, not the strongest extractor by any standard, but my solder area is under a Velux window and it has enough of a draw to move the solder fumes away from me and out of the window. On the cable to the USB plug it had a controller with 4 momentary press buttons. You could switch between 3 "colours" of LED lamp, all shades of white: a "cool white", a "warm white" and one that was a combination of both. You could ramp the light level up and down in steps, but I always had them on full power. It also had a fan control to set the "power"; this was always on full, and you could tell that when the fan was on it was starved of current a little, as the light would dim. If you turned the lamp off the fan would speed up!
Anyway, the control system lasted about 2 weeks! I came to it one day and it had failed: I couldn't turn the lamp on, and although I could run the fan I couldn't adjust it, and an LED in the controller was randomly stuck on. I did really like it though. I liked being able to clamp it to my workbench and drag the lamp and extractor really close to my work, so I decided to try and fix it.
Pulling the controller apart and doing a bit of probing with a multimeter, I realised that the control system basically made or broke the negative power rail, and when the microcontroller was operational it was probably PWMing the power to each circuit. I realised that I could quite simply wire in a switch for the fan and one for the two LED light rings; I wasn't bothered about having both the LED lamps on at the same time. So a quick bit of soldering and a small 3D print later, and I have a simple but working "controller" back on the lamp. I reattached the USB cable, as I have lots of USB outlets, and it's also nice that it can run off a battery pack if and when we get back to maker events or rocketry in fields!
Let’s take a look at how Earth’s carbon came to be here, through the medium of two new papers. This is a process most scientists have assumed involved molecules in the original solar nebula that wound up on our world through accretion as the gases cooled and the carbon molecules precipitated. But the first of the papers (both by the same team, though with different lead authors) points out that gas molecules carrying carbon won’t do the trick. When carbon vaporizes, it does not condense back into a solid, and that calls for some explanation.
University of Michigan scientist Jie Li is lead author of the first paper, which appears in Science Advances. The analysis here says that carbon in the form of organic molecules produces much more volatile species when it is vaporized, and demands low temperatures to form solids. Moreover, says Li, it does not condense back into organic form.
“The condensation model has been widely used for decades. It assumes that during the formation of the sun, all of the planet’s elements got vaporized, and as the disk cooled, some of these gases condensed and supplied chemical ingredients to solid bodies. But that doesn’t work for carbon.”
Most of Earth’s carbon, the researchers believe, accumulated directly from the interstellar medium well after the protoplanetary disk had formed and warmed; it was never vaporized in the way the condensation model suggests. Interesting concepts come into play here, among them the cleverly titled ‘soot line,’ in analogy to the ‘snow line’ in planetary systems. This marker has a lot to do with how carbon behaves. The authors use astronomical observations and modeling to explore the concept. From the paper — watch what happens as the disk warms:
Astronomical observations show that approximately half of the cosmically available carbon entered the protoplanetary disk as volatile ices and the other half as carbonaceous organic solids. As the disk warms up from 20 K, all the volatile carbon carriers sublimate by 120 K, followed by the conversion of major refractory carbon carriers into CO and other gases near a characteristic temperature of ~500 K… The sublimation sequence of carbon exhibits a “cliff” where dust grains in an accreting disk lose most of their carbon to gas within a narrow temperature range near 500 K.
The ‘cliff’ is another good analogy. It’s the edge of the soot line:
The division between the stability fields of solid and gas carbon carriers corresponds to the “soot line,” a term coined to describe the location where the irreversible destruction of presolar polycyclic hydrocarbons via thermally driven reactions in the planet-forming region of disks occurred.
Modeling the sublimation process and loss of carbon in the solar nebula, the authors chart the soot line as it migrates with time as the system matures and the pressure and temperature of the disk evolve. Shortly after the birth of the Sun, the soot line might have extended out to tens of AU, but as the accretion rate diminished, it would have migrated inward. A carbon-poor early Earth, then, would be the result of formation during the period when the soot line was well beyond Earth’s orbit, during the first million years, when accretion rates were high.
Image: This is Figure 2 from the paper. Caption: Fig. 2 Schematic illustration of the soot line in a protoplanetary disk: The soot line (red parabola) delineates the phase boundary between solid and gaseous carbon carriers. In the accretion-dominated disk phase, it is located far from the proto-Sun and divides carbon-poor dust and pebbles (green dots) from carbon-rich ones (dark blue dots). Within 1 Ma, as a result of the transition to a radiation-dominated, or passive, disk phase, the soot line migrates inside Earth’s current orbit. Note that the Si-rich and C-rich solids do not represent distinct reservoirs because carbonaceous material is likely associated with silicates. They are provided for ease in illustration. Credit: Li et al.
Drawing again from the paper:
If the bulk carbon content of Earth is low, then most of its source materials must have lost carbon through sublimation early in the nebula’s history or by additional processes such as planetesimal differentiation. Constraining the fraction of carbon-depleted source material accreted by Earth requires us to constrain the maximum amount of carbon in the bulk Earth.
Which the authors do by determining the maximum amount of carbon the Earth’s core could contain — after all, they mention planetary differentiation — a figure that turns out to be less than half a percent of Earth’s mass. Says Li’s colleague Edwin Bergin (University of Michigan):
“We asked how much carbon could you stuff in the Earth’s core and still be consistent with all the constraints. There’s uncertainty here. Let’s embrace the uncertainty to ask what are the true upper bounds for how much carbon is very deep in the Earth, and that will tell us the true landscape we’re within.”
The paper points to a severe carbon deficit in the newly formed Earth, and suggests still more about the environment producing it. Centimeter-to-meter sized pebbles delivering mass as they drift inward from the outer Solar System would carry both water and carbon. Simulations of their movement show that a giant planet core in the disk would create a pressure bump where drifting pebbles would pile up, diminishing the supply of carbon for the emerging inner system. The carbon-poor composition of iron meteorites is cited as evidence of this early carbon depletion.
In the second paper, the same group of researchers examined these iron meteorites, which represent the metallic cores of planetesimals, looking at how they retained carbon in their early formation. Here melting and loss of carbon is apparent. Marc Hirschmann (University of Minnesota) led the second study, which included the same co-authors along with Li:
“Most models have the carbon and other life-essential materials such as water and nitrogen going from the nebula into primitive rocky bodies, and these are then delivered to growing planets such as Earth or Mars. But this skips a key step, in which the planetesimals lose much of their carbon before they accrete to the planets.”
Thus we see two different aspects of carbon loss, highlighting the delicate nature of carbon, so necessary for climate regulation but capable of creating Venus-like conditions when found in excess. The loss of carbon in the early Earth may play an essential role in our planet’s habitability. How carbon loss occurs in other planetary systems is a topic that will require a multidisciplinary approach involving both astronomy and geochemistry. As the second paper suggests, it’s a topic that could be vital to life’s chances elsewhere:
…the volatile-depleted character of parent body cores reflects processes that affected whole planetesimals. As the parent bodies of iron meteorites formed early in solar system history and likely represent survivors of a planetesimal population that was mostly consumed during planet formation, they are potentially good analogs for the compositions of planetesimals and embryos accreted to terrestrial planets. Less depleted chondritic bodies, which formed later and did not experience such significant devolatilization, are possibly less apt models for the building blocks of terrestrial planets. More globally, the process of terrestrial planet formation appears to be dominated by volatile carbon loss at all stages, making the journey of carbon-dominated interstellar precursors (C/Si > 1) to carbon-poor worlds inevitable.
Thus sublimation, not condensation, tells the tale of carbon abundance on Earth, with presumably the same processes at work elsewhere in the galaxy.
The Li paper is “Earth’s carbon deficit caused by early loss through irreversible sublimation,” Science Advances Vol. 7, No. 14 (2 April 2021). Full text. The Hirschmann paper is “Early volatile depletion on planetesimals inferred from C–S systematics of iron meteorite parent bodies,” Proceedings of the National Academy of Sciences Vol. 118 (30 March 2021). Full text.
Ne’er-do-wells leaked personal data — including phone numbers — for some 533 million Facebook users this week. Facebook says the data was collected before 2020, when it changed things to prevent such information from being scraped from profiles. To my mind, this just reinforces the need to remove mobile phone numbers from all of your online accounts wherever feasible. Meanwhile, if you’re a Facebook product user and want to learn if your data was leaked, there are easy ways to find out.
The HaveIBeenPwned project, which collects and analyzes hundreds of database dumps containing information about billions of leaked accounts, has incorporated the data into its service. Facebook users can enter the mobile number (in international format) associated with their account and see if those digits were exposed in the new data dump (HIBP doesn’t show you any data, just gives you a yes/no on whether your data shows up).
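For most people the web form is the whole story, but the lookup can also be scripted. A minimal sketch, assuming HIBP's documented v3 breachedaccount endpoint (which requires a paid API key and a user-agent header) accepts a phone number in international format the same way the web search does:

```python
# Minimal HIBP lookup sketch. Assumes the public v3 "breachedaccount"
# endpoint accepts a phone number in international format as the account
# identifier, as the web search does for this breach. Requires an API key.
from urllib.parse import quote
import requests

API_KEY = "your-hibp-api-key"    # placeholder
ACCOUNT = "+15551234567"         # phone number, international format (placeholder)

resp = requests.get(
    "https://haveibeenpwned.com/api/v3/breachedaccount/" + quote(ACCOUNT),
    headers={"hibp-api-key": API_KEY, "user-agent": "breach-check-sketch"},
    timeout=10,
)

if resp.status_code == 404:
    print("No breach found for that identifier.")
elif resp.ok:
    for breach in resp.json():
        print("Found in:", breach.get("Name"))
else:
    resp.raise_for_status()
```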
The phone number associated with my late Facebook account (which I deleted in Jan. 2020) was not in HaveIBeenPwned, but then again Facebook claims to have more than 2.7 billion active monthly users.
It appears much of this database has been kicking around the cybercrime underground in one form or another since last summer at least. According to a Jan. 14, 2021 Twitter post from Under the Breach’s Alon Gal, the 533 million Facebook accounts database was first put up for sale back in June 2020, offering Facebook profile data from 100 countries, including name, mobile number, gender, occupation, city, country, and marital status.
Under The Breach also said back in January that someone had created a Telegram bot allowing users to query the database for a low fee, and enabling people to find the phone numbers linked to a large number of Facebook accounts.
A cybercrime forum ad from June 2020 selling a database of 533 million Facebook users. Image: @UnderTheBreach
Many people may not consider their mobile phone number to be private information, but there is a world of misery that bad guys, stalkers and creeps can visit on your life just by knowing your mobile number. Sure, they could call you and harass you that way, but more likely they will see how many of your other accounts — at major email providers and social networking sites like Facebook, Twitter and Instagram, for example — rely on that number for password resets.
From there, the target is primed for a SIM-swapping attack, where thieves trick or bribe employees at mobile phone stores into transferring ownership of the target’s phone number to a mobile device controlled by the attackers. From there, the bad guys can reset the password of any account to which that mobile number is tied, and of course intercept any one-time tokens sent to that number for the purposes of multi-factor authentication.
My advice has long been to remove phone numbers from your online accounts wherever you can, and avoid selecting SMS or phone calls for second factor or one-time codes. Phone numbers were never designed to be identity documents, but that’s effectively what they’ve become. It’s time we stopped letting everyone treat them that way.
Any online accounts that you value should be secured with a unique and strong password, as well as the most robust form of multi-factor authentication available. Usually, this is a mobile app like Authy or Google Authenticator that generates a one-time code. Some sites like Twitter and Facebook now support even more robust options — such as physical security keys.
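Those one-time codes are time-based one-time passwords (TOTP, RFC 6238): an HMAC of the current 30-second interval, keyed with a secret shared between you and the site when you scan the setup QR code. A minimal sketch of the standard algorithm, purely illustrative (real authenticator apps add secret storage, clock-drift handling and so on):

```python
# Minimal TOTP (RFC 6238) generator: HMAC-SHA1 over the current 30-second
# counter, dynamically truncated to 6 digits. Apps like Authy or Google
# Authenticator implement this same core algorithm with proper secret
# handling around it.
import base64
import hashlib
import hmac
import struct
import time

def totp(base32_secret: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(base32_secret.upper())
    counter = int(time.time()) // period
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Example with a throwaway demo secret (never paste a real one into scripts).
print(totp("JBSWY3DPEHPK3PXP"))
```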
Removing your phone number may be even more important for any email accounts you may have. Sign up with any service online, and it will almost certainly require you to supply an email address. In nearly all cases, the person who is in control of that address can reset the password of any associated services or accounts, merely by requesting a password reset email.
Unfortunately, many email providers still let users reset their account passwords by having a link sent via text to the phone number on file for the account. So remove the phone number as a backup for your email account, and ensure a more robust second factor is selected for all available account recovery options.
Here’s the thing: Most online services require users to supply a mobile phone number when setting up the account, but do not require the number to remain associated with the account after it is established. I advise readers to remove their phone numbers from accounts wherever possible, and to take advantage of a mobile app to generate any one-time codes for multifactor authentication.
Why did KrebsOnSecurity delete its Facebook account early last year? Sure, it might have had something to do with the incessant stream of breaches, leaks and privacy betrayals by Facebook over the years. But what really bothered me was the number of people who felt comfortable sharing extraordinarily sensitive information with me on things like Facebook Messenger, all the while expecting that I could vouch for the privacy and security of that message just by virtue of my presence on the platform.
In case readers want to get in touch for any reason, my email here is krebsonsecurity at gmail dot com, or krebsonsecurity at protonmail.com. I also respond at Krebswickr on the encrypted messaging platform Wickr.
The Northern California DX Foundation is pleased to announce it will be the Lead Sponsor ($100,000) to the Intrepid DX Group’s 3YØJ Bouvet Island DXpedition planned for January/February 2023. Bouvet Island is currently ranked #2 globally on the Club Log Most Wanted List.
As a long-time supporter of NCDXF, I’m pleased to see this grant for a sorely needed new DXpedition. It’s going to be an expensive operation but the payoff is the hope this provides that things are getting back to normal. We will be further along in Cycle 25 by 2023 and having a chance to work the #2 Most Wanted entity isn’t chopped liver either.
After 15 months of increasingly intense and disruptive earthquakes on Iceland’s Reykjanes Peninsula, the region finally let off some pressure. On March 19, lava roared out of the ground in the uninhabited valley of Geldingadalur, marking the first time in 800 years that this southwesterly strip of land has been rocked by an eruption.
Volcanologists are thrilled, but this spectacle isn’t just an opportunity to explore Iceland’s fiery underworld. It’s also a window into another world entirely. “The eruption is, in my view, a fantastic analogue for Mars,” said Christopher Hamilton, a planetary scientist at the University of Arizona.
Mars is an unquestionably volcanic planet. In its earliest eras, it built volcanoes so immense that their formation deformed its surface. At one point, they caused the entire planet to tip over by 20 degrees. Its volcanic output gradually slowed, but Mars continued to make small volcanoes and spill lava for most of its lifetime. It may even be volcanically active today, with magma still gurgling below ground, perhaps gearing up for a future eruption.
Yet little is concretely understood about the origins, evolution and behavior of Mars’ volcanism, which is one reason why researchers are excited to use the Geldingadalur eruption as an analogue.
Over the course of Martian history, as giant volcanic provinces formed and the world cooled and shrank, parts of its surface were squashed together while others were pulled apart. And in places where the crust split, Martian magma erupted out of fissures.
The Geldingadalur eruption also began along a fissure. Lava has been bubbling out of an oddly shaped cone at a steady rate for the past two weeks, slowly filling up the valley. As is typical for this sort of eruption, both on Earth and Mars, a second fissure — perhaps one of many to come — opened up just north of the first on April 5, creating its own fountains and rivers of lava.
This pint-size paroxysm was made possible by the peninsula’s location: It sits atop the Mid-Atlantic Ridge, the volcanic rift that pushes the Americas away from Europe and Africa. This tectonic divorce normally happens at the rate your toenails grow, but the furious tectonic temblors preceding the eruption suggest that it gained a temporary speed boost, allowing magma to make incursions into the new vacancies within the shallow crust.
If the conduit to the magma below remains open, the Geldingadalur eruption could end up building a small shield volcano, an extremely broad magmatic mound with a very low-angle slope. Such an edifice would be comparable to those found all over Mars, said Hamilton, which means this eruption could allow scientists to watch one grow on Earth in real time.
In another similarity to Mars, the eruption involves basalt, a magma with a honeylike viscosity. Basalt is common to plenty of volcanoes, but the Geldingadalur eruption’s magma has an especially runny consistency. Its magma is coming up straight from the mantle and speeding right through the crust. It “didn’t really stall too much on its way to the surface,” said Tracy Gregg, a planetary volcanologist at the University at Buffalo. “It formed, and then it had places to go and things to do.”
Martian eruptions often involved an exceedingly fluid version of basalt, suggesting that its volcanoes may sometimes have had Icelandic-style plumbing.
But perhaps the most exciting correlation between Mars and Iceland is how their volcanic activity may influence biology.
Mars had a lot of eruptions where magma met ice. And if you’re interested in what happens when magma meets ice, Iceland is the place to be. “Iceland is an incredible place and one of the best analogues for Mars we have on Earth,” said Arola Moreras Marti, a researcher at the University of St. Andrews who studies biosignatures on Mars.
Iceland’s expansive Vatnajökull glacier, for example, sits atop various volcanoes. Magma inside these volcanoes powers hot hydrothermal pools at the surface, where Moreras Marti goes looking for microbial life. “In them you find microbes,” she said, “the most insane microbes.”
As magma from the Geldingadalur eruption meets groundwater, it sets up new hydrothermal pools at the surface as well as blisteringly hot ponds and streams underground. This happened all the time in Mars’ past — it may even be happening now — when magma cooked rocks and released hydrogen, carbon dioxide, methane, iron, sulfates and other compounds that microbes can use to sustain themselves. These sorts of compounds are also being transported around Geldingadalur’s soggy subsurface, setting up chemical imbalances that microbes thrive on.
Magma may be profoundly lethal to us, but the microbes under Geldingadalur are “probably loving it,” said Moreras Marti.
The Martian surface has long been an irradiated desert unsuitable for life. But what is happening at Geldingadalur today happened, and may still be happening, on Mars. At any point during the past 4.5 billion years, whenever hot rock met water, underground hydrothermal networks would crop up. Being shielded from the deadly radiation bombarding its surface, these subsurface hideaways would be relatively habitable environments.
Mars was also considerably wetter when it was young, with a thicker radiation-blocking atmosphere. Long ago, microbes could have existed on the surface.
There, they would have been periodically cooked by sterilizing lava flows, which is exactly what’s happening at Geldingadalur today. The lava is killing off preexisting microbial life in the soil. Over the coming months and years, a brand-new ecosystem will rise out of the ashes of the old.
That’s one of the reasons why Hamilton is there. When signs pointed toward an upcoming eruption on the peninsula, Hamilton and his colleagues sprang into action. “As soon as we saw the first earthquake swarms, we basically went out sampling all of the different areas to get an idea of what the baseline microbial ecology would be,” he said.
Hamilton and others will continue to take microbial samples from the soil and also from the air, as invading microbes may prefer gliding in as opposed to swimming. (Samples may need to be taken from people’s boots too, said Hamilton, to try and spot any stowaways — a distinctly earthly, not Martian, issue.)
Scientists will also sample the lava itself. Mere weeks after Iceland’s Eyjafjallajökull eruption in 2010, scientists found not only that microbes had colonized the surrounding soil, but that they were living on the new lava flows themselves. “The lava was still hot,” said Mario Toubes-Rodrigo, a microbiologist at the Open University, who explained that the visiting scientists had to be extremely careful. “I think a couple of their boots also melted.”
But perhaps most important, the researchers will be able to trace the evolution of underground ecologies from the very moment a new habitat appears. That makes the subsurface shenanigans at Geldingadalur a rare and nearly ideal biological simulacrum of what may have once happened, or may still be happening, on Mars.
The eruption could fizzle out in the coming days or weeks. Conversely, it could keep erupting for years, perhaps decades, much like the 35-year Pu‘u ‘Ō‘ō eruption on the flanks of Hawai‘i’s Kīlauea volcano. If so, this site will become a draw for planetary scientists and astrobiologists alike: a long-lived, safe and easily accessible natural laboratory in which to better understand two planets for the price of one eruption.
There is one critical difference, however: the scale of the events. Mars’ lava flows were jaw-droppingly prolific, often producing enough lava to bury a landmass the size of the United Kingdom in a matter of weeks. That makes the Geldingadalur eruption a “model-scale lava field,” said Tobias Dürig, a volcanologist at the University of Iceland. It’s a Martian eruption in miniature.
All things considered, that’s probably for the best.
Another great news story coming out of our Council meeting last week (and in contrast to my generally sour recent social media persona, because there is a lot to be frustrated by out there right now) was an update on the City’s Bold Steps Work Plan for 2021.
Like some other jurisdictions, the City of New Westminster declared a Climate Emergency. Like a sub-set of those jurisdictions, we are taking concrete actions in addressing that Climate Emergency, in practice and in policy. Far from being an empty declaration, it was immediately followed by Council asking staff to come up with an actionable plan and viable targets – 2050 targets to meet the IPCC goal that our Country agreed to, and more important 2030 targets that require immediate action to achieve.
I feel strongly that those shorter-term targets are important because they require us to act now, to put the necessary changes into our work plans and budgets in 2021 if we hope to get there. It will be hard to hold me and my Council cohort accountable for a 2050 climate target missed (as a Mayor entering his 7th term, I’ll be untouchable!), but we will know if we are on track for 2030 in the next couple of years, and will know if our actions today will get us there.
We have talked quite a bit already about the 7 Bold Steps the City has put forward, but there is a nuance in how they exist within two overlapping magisteria (h/t Stephen J Gould) known as the Corporate Energy and Emissions Reduction Strategy (CEERS – what the City does with its own operations) and a Community Energy and Emissions Plan (CEEP – what the residents and businesses in town do). If we have 90% control over the former, we only have 10% control over the latter, and it is the much bigger nut to crack. That said, working with senior governments, we can create the right conditions for the entire community to adapt to a low-GHG economy.
The report we were provided outlines the many actions our Climate Action team and other City Departments will be undertaking in 2021. I’ll take the opportunity here to share some brief highlights from each of the 7 Bold Steps:
Carbon Free Corporation. Obviously, there are two big parts of this: our fleet and our buildings. We are replacing the CGP (our highest-emission building) and are shooting for a Zero Carbon standard for the replacement, while prioritization of retrofits and upgrades for the rest of the building stock is an ongoing project. The Green Fleet roadmap will allow us to shift to GHG-free vehicles as they become available, and assure we have the infrastructure to support them across our organization.
Car Light Community. The biggest part of this work will be shifting more spending to support Active Transportation (pedestrian safety improvements, transit support, and greenways), but it also means updating our development planning to assure we are building communities where active transportation is a viable option for more people.
Carbon Free Homes and Buildings. Two ways we can support lower-emission buildings in the City are updating or accelerating our Step Code implementation to require that new buildings meet higher standards, and continuing to support the great work of Energy Save New West (did you know ESNW is one of the longest-running and most comprehensive community energy efficiency and GHG-reduction programs in Canada?) in helping residents and businesses upgrade their own buildings and save money on energy. We are also supporting the Help Cities Lead campaign, asking the Provincial Government to give local government more tools to encourage and support a more efficient building stock.
Pollution Free Vehicles. Our biggest role here will be to support as best we can adoption of electric vehicles (e-cars, e-bikes, e-whatever comes next) by making sure we have adequate public charging, and support the installation of chargers in all new buildings.
Carbon-Free Energy. The inevitable shift from GHG-intensive energy sources to low-carbon electric power puts the city in a unique situation, with our own electrical utility. We need to update our electrical infrastructure to facilitate that, starting with our Advanced Metering Infrastructure project.
Robust Urban Forest. You may have noticed boulevard trees popping up, especially across the Brow of the Hill neighbourhood. We are going to keep moving ahead on that commitment, along with trying to find more opportunities to protect trees through development.
Quality Public Realm. This is one aspect of the Climate Action plan that includes adaptation to the climate change that is already inevitable even if we globally meet our 2050 goals. We will be doing climate risk mapping to inform that adaptation, along with other programs that may not seem like climate action (like improving road safety around schools) but are actually climate action (because they make it more likely people won’t drive to school).
There is other work that spans all 7 Bold Steps, and indeed many of the things above overlap between steps. It is important that we have included these actions in our 5-year financial plan, which means our budget matches our priorities. But even more important, every department in the City has a role, and knows its role. The next 10 years are going to be transformational and will require a culture change in how the City operates. Having everyone on board and paddling in the same direction is the only way we will succeed.
A 2018 paper by Bar-On, Phillips, and Milo in PNAS contains a fascinating figure (Figure 1) that bears staring at for some time. It shows the dry carbon biomass distribution of various forms of life on Earth. Plants account for …
Every year, wildfires consume millions of acres of land around the globe. Such fires, triggered by a combination of electrical networks and extreme weather, include the 100,277-hectare Lutz Creek fire in British Columbia, “Black Saturday” in Victoria, Australia, and the Camp Fire in California in November 2018, which burned over 150,000 acres, destroyed 13,972 residences, 528 commercial structures, and 4,293 other buildings, and took the lives of 86 people.
While a variety of factors contribute to wildfire ignition, the worsening climate crisis coupled with electric assets has led to some of the most damaging wildfires globally. Wildfires are also damaging to electric utilities themselves, leading to financial distress. One tool to combat wildfire ignitions is the preventative power shutoff, used especially in high-risk weather conditions to reduce the chance that electrical equipment starts a fire. But these preventative blackouts have significant health, safety, and economic repercussions of their own.
One company seeking to make a tangible impact is Overstory.
Overstory aims to help solve the climate crisis by providing real-time intelligence about the planet’s vegetation powered by artificial intelligence (AI) and satellite data. Today, Overstory helps energy companies to mitigate wildfires and power outages by improving the safety and reliability of the transmission and distribution system and reducing grid operation costs.
“We focus on electric utilities because it’s a clear case where we can help. We see climate change as the biggest existential crisis we’ve ever faced and we believe trees and plants are critical to mitigating climate change and also adapting to it,” said Indra den Bakker, CEO and Co-Founder of Overstory.
Using machine learning to interpret satellite imagery and climate data, Overstory extracts insights from Planet data, providing real-time information to its utility customers based on high spatial and temporal resolution satellite data, including multi- and hyperspectral imagery, SAR, and video. Moreover, by applying AI algorithms specialized in trees and vegetation, Overstory helps customers predict grow-in and fall-in risks based on species, growth, weather and climate, and vitality.
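As a concrete, deliberately simplified example of the kind of per-pixel signal multispectral imagery carries, the textbook NDVI index contrasts near-infrared and red reflectance to flag healthy vegetation. The sketch below is generic remote sensing, not a claim about Overstory's proprietary models:

```python
# Generic vegetation index from multispectral bands. NDVI is a standard
# remote-sensing metric, shown only to illustrate the kind of per-pixel
# signal vegetation analytics start from.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    # Avoid division by zero over water/shadow pixels.
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1.0, denom))

# Tiny synthetic scene: healthy canopy reflects strongly in the near infrared.
nir_band = np.array([[0.45, 0.50], [0.20, 0.05]])
red_band = np.array([[0.08, 0.06], [0.15, 0.04]])
print(ndvi(nir_band, red_band))   # values near +1 indicate dense, healthy vegetation
```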
Current vegetation management is limited by outdated information about the status of vegetation. As vegetation is continuously changing, growing, and being affected by weather, utility companies need current and accurate information to plan and manage their resources further in advance and to anticipate future risks, as well as a way to verify that work by third-party contractors has been completed successfully. Without proper verification, the work may not be completed, creating greater wildfire risk.
“Our mission is to help solve the climate crisis by providing real-time information about the Earth’s vegetation. The availability of the combination of PlanetScope and SkySat images makes it a perfect fit for us and how we envision the future to have more data to analyze and improve decision-making. It has the frequency, level of detail, and scalability we need,” said den Bakker.
The costly economic and human toll makes vegetation management around utility infrastructure an immediate need, especially in fire-prone areas. To reduce the risk of power outages and wildfires, proactive vegetation management requires predictive insights and up-to-date information about vegetation.
“The problem is that decision-makers don’t have access to accurate real-time data that’s up to date, they are not able to do the analysis or have trustable predictions for the future and this is where Overstory comes in. We are on a mission to provide that information for decision-making about our planet,” said den Bakker.
The fusion of diverse spatial, spectral and temporal resolutions powered by Planet imagery allows for real-time and predictive insights for informed vegetation management. Overall, the technology gives utility companies data that helps them understand vegetation features: tree height, proximity to power lines, tree species, verification of trimming, tree health, and damage.
“We want to play our part in creating a more sustainable planet, with a quantified impact on forest and climate. With Planet data we can improve decision making and expand our reach,” said den Bakker.
Download the case study to learn more about how Overstory is able to provide real-time information to its utility customers.
Some of the top ransomware gangs are deploying a new pressure tactic to push more victim organizations into paying an extortion demand: Emailing the victim’s customers and partners directly, warning that their data will be leaked to the dark web unless they can convince the victim firm to pay up.
This letter is from the Clop ransomware gang, putting pressure on a recent victim named on Clop’s dark web shaming site.
“Good day! If you received this letter, you are a customer, buyer, partner or employee of [victim],” the missive reads. “The company has been hacked, data has been stolen and will soon be released as the company refuses to protect its peoples’ data.”
“We inform you that information about you will be published on the darknet [link to dark web victim shaming page] if the company does not contact us,” the message concludes. “Call or write to this store and ask to protect your privacy!!!!”
The message above was sent to a customer of RaceTrac Petroleum, an Atlanta company that operates more than 650 retail gasoline convenience stores in 12 southeastern states. The person who shared that screenshot above isn’t a distributor or partner of RaceTrac, but they said they are a RaceTrac rewards member, so the company definitely has their email address and other information.
Several gigabytes of the company’s files — including employee tax and financial records — have been posted to the victim shaming site for the Clop ransomware gang.
In response to questions from KrebsOnSecurity, RaceTrac said it was recently impacted by a security incident affecting one of its third-party service providers, Accellion Inc.
“By exploiting a previously undetected software vulnerability, unauthorized parties were able to access a subset of RaceTrac data stored in the Accellion File Transfer Service, including email addresses and first names of some of our RaceTrac Rewards Loyalty users,” the company wrote. “This incident was limited to the aforementioned Accellion services and did not impact RaceTrac’s corporate network. The systems used for processing guest credit, debit and RaceTrac Rewards transactions were not impacted.”
The same extortion pressure email has been going out to people associated with the University of California, which was one of several large U.S. universities that got hit with Clop ransomware recently. Most of those university ransomware incidents appeared to be tied to attacks on the same Accellion vulnerability, and the company has acknowledged that roughly a third of its customers on that appliance got compromised as a result.
Clop is one of several ransom gangs that will demand two ransoms: One for a digital key needed to unlock computers and data from file encryption, and a second to avoid having stolen data published or sold online. That means even victims who opt not to pay to get their files and servers back still have to decide whether to pay the second ransom to protect the privacy of their customers.
As I noted in Why Paying to Delete Stolen Data is Bonkers, leaving aside the notion that victims might have any real expectation the attackers will actually destroy the stolen data, new research suggests a fair number of victims who do pay up may see some or all of the stolen data published anyway.
The email in the screenshot above differs slightly from those covered last week by Bleeping Computer, which was the first to spot the new victim notification wrinkle. Those emails say that the recipient is being contacted as they are a customer of the store, and their personal data, including phone numbers, email addresses, and credit card information, will soon be published if the store does not pay a ransom, writes Lawrence Abrams.
“Perhaps you bought something there and left your personal data. Such as phone, email, address, credit card information and social security number,” the Clop gang states in the email.
Fabian Wosar, chief technology officer at security firm Emsisoft, said Clop isn’t the only ransomware gang emailing victim customers.
“Clop likes to do it and I think REvil started as well,” Wosar said.
Earlier this month, Bleeping Computer reported that the REvil ransomware operation was planning to launch crippling distributed denial of service (DDoS) attacks against victims, or to make VOIP calls to victims’ customers to apply further pressure.
“Sadly, regardless of whether a ransom is paid, consumers whose data has been stolen are still at risk as there is no way of knowing if ransomware gangs delete the data as they promise,” Abrams wrote.
Spring started about two weeks ago, and the Northern Hemisphere has begun to warm, with flowers and trees in bloom. Gathered here today, a small collection of images from the past few weeks from North America, Asia, and Europe, of tulips, sunshine, and cherry blossoms, surely signs of warmer days to come.
I’ve always been rather ham-fisted when it comes to sending Morse Code. Not so much that I squeeze the paddles hard enough to bend them, but rather that I knock the paddle all over the table. It is hard to send a “5” when the paddle is moving! Some of that is age and some of it is lack of practice. After acquiring a UR5CDX CT73 MB-L “travel” paddle, the problem was amplified due to its light weight.
A friend of mine, Bruce NJ3K, had discovered DXengineering’s “Paddle Pad” and suggested I try one out. They come in two sizes (the photo below is the larger size). Both the bottom and top of the pad are sort of sticky (but not in a messy way). Sure enough, when I put the paddle on the pad, it no longer moves. Thank you Bruce!
The company that makes the sticky pad for DXengineering is HandStands, which sells it through Amazon and other dealers for about the same price (without the pretty DXengineering logo, of course).
Our “Meet the Team” series profiles the people of First Mode. We are driven to find purposeful technology solutions to the world’s most important challenges. We take our work seriously but ourselves not too seriously.
“The overall feeling when I talk about First Mode is optimism. We feel like we can do anything once we put our minds to it. It’s pretty amazing.” - Aaron
What do you do at First Mode?
I am a mechanical engineer in the interconnect group for our hydrogen power module project. Anything to do with an electrical box, wire harness, routed system, or an electrical connector, I do that.
What drew you to First Mode originally?
I’m a hands-on engineer. I like to work toward an outcome and am action oriented. I also like working with people who enjoy not only designing but building hardware too. There are many people here with that type of experience, and we have a nice community of designers and builders.
I have learned more here since March 2020 than I did in the prior eight years. The team’s intellectual deliverables are emphasized as much as the hardware deliverables. Our thought process is important.
First Mode is designed for people to be collaborative. We can throw out crazy ideas and not feel stifled.
What gets you out of bed in the morning?
I’m excited to work with a very intelligent group of people. I have the opportunity to learn from them, and they from me. Being exposed to the new technologies we are building has been very fulfilling.
The overall feeling when I talk about First Mode is optimism. We feel like anything is possible once we put our minds to it. It’s pretty amazing.
How did your passion for engineering and tech begin?
I have known I would be an engineer since I was six years old, as I was always tinkering in the garage. My birthday and Christmas presents growing up consisted of tools I would circle in whatever tool catalog that was lying around. Growing up, my dad would bring home broken industrial equipment and we’d spend time together putting it back together.
My dad was so influential: he was always building and fixing things, most of the time turning something that no one wanted into something great.
Do you have a motto?
I have two: 1) If it was easy everyone would do it. 2) Anything worth having is worth working for.
What do you think is the most significant discovery or human endeavor of the last few years?
Commercialized space flight. So many inventions and technologies have come from solving the huge challenges related to space travel. Another one would be advancements in manufacturing techniques: things like new composites and 3D printing. Taking an idea to reality is much faster now than it was 10 years ago.
Why does it matter that we keep inventing, testing, creating?
It’s important that we continue to try and improve the human condition, but with new emphasis on the resources we are using to do so.
I encourage my kids to consider their impact on others and encourage them to extract something great every day. Find some enjoyment from what you do so that you won’t regret your career choices, and don’t neglect building relationships with people you work with.
What does your typical day look like?
I’m up by 6:30 a.m. with the kids, we have breakfast, and I’m usually at my computer by 8 a.m. getting my day kicked off before meetings begin. I try to focus on personal goals at lunch (which sometimes means mowing the lawn, quite honestly). After 5 p.m. it’s back to time with kids until their bedtime, then I’ll have some intellectual time to think about work or spend time in the shop. I’m a consummate tinkerer.
What are your hobbies and interests outside of work?
I’m pretty social within our neighborhood. Before the pandemic, my wife and I enjoyed hosting events like our fall cider press party. I’m also involved with our HOA and our kids’ schools. We love the outdoors, camping, and boating. We’re looking forward to getting back on the water once the weather gets nice again.
A newspaper in Malaysia is reporting on a cell phone cloning scam. The scammer convinces the victim to lend them their cell phone, and the scammer quickly clones it. What’s clever about this scam is that the victim is an Uber driver and the scammer is the passenger, so the driver is naturally busy and can’t see what the scammer is doing.
In the fall of 1972, Vance Faber was a new professor at the University of Colorado. When two influential mathematicians, Paul Erdős and László Lovász, came for a visit, Faber decided to host a tea party. Erdős in particular had an international reputation as an eccentric and energetic researcher, and Faber’s colleagues were eager to meet him.
“While we were there, like at so many of these tea parties, Erdős would sit in a corner, surrounded by his fans,” said Faber. “He’d be carrying on simultaneous discussions, often in several languages about different things.”
Erdős, Faber and Lovász focused their conversation on hypergraphs, a promising new idea in graph theory at the time. After some debate they arrived at a single question, later known as the Erdős-Faber-Lovász conjecture. It concerns the minimum number of colors needed to color the edges of hypergraphs within certain constraints.
“[It] was the simplest possible thing we could come up with,” said Faber, now a mathematician at the Institute for Defense Analyses’ Center for Computing Sciences. “We worked on it a bit during the party and said, ‘Oh well, we’ll finish this up tomorrow.’ That never happened.”
The problem turned out to be much harder than expected. Erdős frequently advertised it as one of his three favorite conjectures, and he offered a reward for the solution, which increased to $500 as mathematicians realized the difficulty. The problem was well known in graph theory circles and attracted many attempts to solve it, none of which were successful.
But now, nearly 50 years later, a team of five mathematicians has finally proved the tea-party musing true. In a preprint posted in January, they place a limit on the number of colors that could ever be needed to shade the edges of certain hypergraphs so that no overlapping edges have the same color. They prove that the number of colors is never greater than the number of vertices in the hypergraph.
The approach involves carefully setting aside some edges of a graph and randomly coloring others, a combination of ideas that researchers have wielded in recent years to settle a number of long-standing open problems. It wasn’t available to Erdős, Faber and Lovász when they dreamed up the problem. But now, staring at its resolution, the two surviving mathematicians from the original trio can take pleasure in the mathematical innovations their curiosity provoked.
“It’s a beautiful work,” said Lovász, of Eötvös Loránd University. “I was really pleased to see this progress.”
Just Enough Colors
As Erdős, Faber and Lovász sipped tea and talked math, they had a new graph-like structure on their minds. Ordinary graphs are built from points, called vertices, linked by lines, called edges. Each edge joins exactly two vertices. But the hypergraphs Erdős, Faber and Lovász considered are less restrictive: Their edges can corral any number of vertices.
This more expansive notion of an edge makes hypergraphs more versatile than their hub-and-spoke cousins. Standard graphs can only express relationships between pairs of things, like two friends in a social network (where each person is represented by a vertex). But to express a relationship between more than two people — like shared membership in a group — each edge needs to encompass more than two people, which hypergraphs allow.
However, this versatility comes at a price: It’s harder to prove universal characteristics for hypergraphs than for ordinary graphs.
“Many of the miracles either vanish or things become much harder when you move to hypergraphs,” said Gil Kalai of IDC Herzliya and the Hebrew University of Jerusalem.
For instance, edge-coloring problems become harder with hypergraphs. In these scenarios, the goal is to color all the edges of a graph (or hypergraph) so that no two edges that meet at a vertex have the same color. The minimum number of colors needed to do this is known as the chromatic index of the graph.
The Erdős-Faber-Lovász conjecture is a coloring question about a specific type of hypergraph where the edges overlap minimally. In these structures, known as linear hypergraphs, no two edges are allowed to overlap at more than one vertex. The conjecture predicts that the chromatic index of a linear hypergraph is never more than its number of vertices. In other words, if a linear hypergraph has nine vertices, its edges can be colored with no more than nine colors, regardless of how you draw them.
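To make the definitions concrete, here is a small Python sketch (not taken from the new paper) that computes the chromatic index of a toy linear hypergraph by brute force and checks it against the number of vertices:

```python
from itertools import product

def chromatic_index(edges):
    """Smallest number of colors needed so that no two hyperedges sharing a
    vertex receive the same color (brute force; fine for tiny examples)."""
    conflicts = [(i, j) for i in range(len(edges))
                 for j in range(i + 1, len(edges)) if edges[i] & edges[j]]
    for k in range(1, len(edges) + 1):
        for coloring in product(range(k), repeat=len(edges)):
            if all(coloring[i] != coloring[j] for i, j in conflicts):
                return k

# A linear hypergraph on 5 vertices: any two edges overlap in at most one vertex.
edges = [frozenset(e) for e in ({1, 2, 3}, {3, 4, 5}, {1, 4}, {2, 5})]
n_vertices = len(set().union(*edges))
print(chromatic_index(edges), "colors suffice;", n_vertices, "vertices")  # 3 ... 5
```

Here three colors suffice for a hypergraph on five vertices, comfortably within the conjectured bound; the hard part, of course, is showing that no linear hypergraph ever needs more colors than it has vertices.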
The extreme generality of the Erdős-Faber-Lovász conjecture makes it challenging to prove. As you move to hypergraphs with more and more vertices, the ways of arranging their looping edges multiply as well. With all these possibilities, it might seem likely that there is some configuration of edges that requires more colors than it has vertices.
Proving this surprising prediction meant confronting several types of hypergraphs that are particularly challenging to color — and establishing that there are no other examples that are even harder.
Three Extreme Hypergraphs
If you’re doodling on a page and you draw a linear hypergraph, its chromatic index will probably be far less than its number of vertices. But there are three types of extreme hypergraphs that push the limit.
In the first one, each edge connects just two vertices. It’s usually called a complete graph, because every pair of vertices is connected by an edge. Complete graphs with an odd number of vertices have the maximum chromatic index allowed by the Erdős-Faber-Lovász conjecture.
The second extreme example is, in a sense, the opposite of a complete graph. Where edges in a complete graph only connect a small number of vertices (two), all edges in this type of graph connect a large number of vertices (as the number of total vertices grows, so does the number encompassed by each edge). It is called the finite projective plane, and, like the complete graph, it has the maximum chromatic index.
The third extreme falls in the middle of the spectrum — with small edges that join just two vertices and large edges that join many vertices. In this type of graph you often have one special vertex connected to each of the others by lone edges, then a single large edge that scoops up all the others.
If you slightly modify one of the three extreme hypergraphs, the result will typically also have the maximum chromatic index. So each of the three examples represents a broader family of challenging hypergraphs, which for years have held back mathematicians’ efforts to prove the Erdős-Faber-Lovász conjecture.
When a mathematician first encounters the conjecture, they may attempt to prove it by generating a simple algorithm or an easy procedure that specifies a color to assign to each edge. Such an algorithm would need to work for all hypergraphs and only use as many colors as there are vertices.
But the three families of extreme hypergraphs have very different shapes. So any technique for proving that it’s possible to color one of the families typically fails for hypergraphs in the other two families.
“It is quite difficult to have a common theorem to incorporate all the extremal cases,” said Dong Yeap Kang, one of the five authors of the new proof.
While Erdős, Faber and Lovász knew about these three extreme hypergraphs, they didn’t know if there were any others that also have the maximum chromatic index. The new proof takes this next step. It demonstrates that any hypergraph that is significantly different from these three examples requires fewer colors than its number of vertices. In other words, it establishes that hypergraphs that resemble these three are as tough as it gets.
“If you exclude these three families, we kind of show that there are not more bad examples,” said Deryk Osthus, another of the five. “If you’re not too close to any of these, then you can use significantly less colors.”
The new proof builds on progress by Jeff Kahn of Rutgers University, who proved an approximate version of the Erdős-Faber-Lovász conjecture in 1992. Last November, Daniela Kühn and Osthus (both senior mathematicians) and their team of three postdocs — Kang, Tom Kelly and Abhishek Methuku — set out to improve Kahn’s result, even if they didn’t solve the full conjecture.
But their ideas were more powerful than they expected. As they set to work, they started to realize that they might be able to prove the conjecture exactly.
“It was all kind of magic,” said Osthus. “It was very lucky that somehow the team we had fit it exactly.”
They started by sorting the edges of a given hypergraph into several different categories based on edge size (the number of vertices an edge connects).
After this sorting they turned to the hardest-to-color edges first: edges with many vertices. Their strategy for coloring the large edges relied on a simplification. They reconfigured these edges as the vertices of an ordinary graph (where each edge only connects two vertices). They colored them using established results from standard graph theory and then transported that coloring back to the original hypergraph.
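As a toy illustration of that reconfiguration step (and only that; the actual proof relies on far stronger coloring theorems than the greedy rule used here), one can build an ordinary “conflict graph” whose vertices are the hyperedges, join two of them whenever the hyperedges share a vertex, and color that graph instead:

```python
def conflict_graph(hyperedges):
    """Ordinary graph whose vertices are the hyperedges; two are adjacent
    exactly when the corresponding hyperedges share a vertex."""
    adj = {i: set() for i in range(len(hyperedges))}
    for i in range(len(hyperedges)):
        for j in range(i + 1, len(hyperedges)):
            if hyperedges[i] & hyperedges[j]:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def greedy_vertex_coloring(adj):
    """Greedy (not necessarily optimal) proper coloring of the conflict graph."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = min(c for c in range(len(adj) + 1) if c not in used)
    return color

hyperedges = [frozenset(e) for e in ({1, 2, 3, 4}, {4, 5, 6}, {1, 5, 7}, {2, 6, 7})]
print(greedy_vertex_coloring(conflict_graph(hyperedges)))  # {0: 0, 1: 1, 2: 2, 3: 3}
```

Any proper vertex coloring of the conflict graph translates directly back into a proper edge coloring of the hypergraph, which is the sense in which the large edges can be handled with standard graph-theoretic tools.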
“They’re pulling in all kinds of stuff that they and other people have been developing over decades,” said Kahn.
After coloring the largest edges, they worked their way downward, saving the smallest edges of a graph for last. Since small edges touch fewer vertices, they’re easier to color. But saving them for last also makes the coloring harder in one way: By the time the authors got to the small edges, many of the available colors had already been used on other adjacent edges.
To address this, the authors took advantage of a new technique in combinatorics called absorption that they and others have been using recently to settle a range of questions.
“Daniela and Deryk have a lot of results looking at other famous questions using it. Now their group managed to prove the theorem using this method,” said Kalai.
The authors use absorption as a way of gradually adding edges into a coloring while ensuring along the way that the coloring always maintains the right properties. It’s especially useful for coloring the places where many small edges converge on a single vertex, like the special vertex connected to all the others in the third extreme hypergraph. Clusters like these use almost all the available colors and need to be colored carefully.
To do so, the authors created a reservoir of small edges, pulled from these tricky clusters. Then they applied a random coloring procedure to many of the small edges that remained (basically, rolling a die to decide which color to apply to a given edge). As the coloring proceeded, the authors strategically chose from the unused colors and applied them in a carefully chosen way to the reserved edges, “absorbing” them into the colorings.
Absorption improves the efficiency of the random coloring procedure. Coloring edges randomly is a nice basis for a very general procedure, but it’s also wasteful — if applied to all edges, it’s unlikely to produce the optimal configuration of colors. But the recent proof tempers the flexibility of random colorings by complementing it with absorption, which is a more exact method.
In the end — having colored the largest edges of a graph with one technique and then the smaller edges using absorption and other methods — the authors were able to prove that the number of colors required to color the edges of any linear hypergraph is never more than the number of vertices. This proves that the Erdős-Faber-Lovász conjecture is true.
Because of the probabilistic elements, their proof only works for large enough hypergraphs — those with more than a specific number of vertices. This approach is common in combinatorics, and mathematicians consider it a nearly complete proof since it only omits a finite number of hypergraphs.
“There is still the assumption in the paper that the number of nodes must be very large, but that’s probably just some additional work,” said Lovász. “Essentially, the conjecture is now proved.”
The Erdős-Faber-Lovász conjecture started as a question that seemed as if it could be asked and answered within the span of a single party. In the years that followed, mathematicians realized the conjecture was not as simple as it sounded, which is maybe what the three mathematicians would have wanted anyway. One of the only things better than solving a math problem over tea is coming up with one that ends up inspiring decades of mathematical innovation on the way to its final resolution.
“Efforts to prove it forced the discovery of techniques that have more general application,” said Kahn. “This is kind of the way Erdős got at mathematics.”
Planet is looking for creative submissions for its space-based exhibit as part of our Art at Planet program. Each year, Planet curates the largest art exhibit in space, selected from dozens of submissions of artwork by famous artists and members of the public alike. Planet laser-etches the selected works onto the side panels of our Dove satellites, which are then launched into orbit.
This year, we’re reserving up to 50 satellite panels for artwork from artists around the world. The theme of these panels will be “Caring for a Changing Planet,” and we are looking especially for artwork that highlights the issues of climate action, sustainability, the UN Sustainable Development Goals, and the preservation of nature and biodiversity – as well as the people who champion these ideas.
Here’s how you can participate:
First, download our art submission guideline – this will contain a template of the panels. Note that the panels are quite small, because our satellites are quite small, and only about 25% of the image can be “filled in” — so design carefully!
Next, add your own unique art. Celebrate caring for the planet in your own voice and your own style. Submissions can be serious, urgent, playful, beautiful, poetic, funny, or weird – or all of those things! We’re looking for a mix of contributors, themes, styles, sensibilities and geographic regions.
Finally, submit your completed artwork, accompanying description(s) and contact information via this form by May 1, 2021. You may submit up to three entries per person or organization. If you’re under 13, be sure to have the permission of a parent or guardian. Read the full terms and conditions here.
Planet recognizes that we are stewards of the Earth for future generations. In honor of Earth Day on April 22, 2021, Planet would like to see children get creative and submit their own works of art about Caring for a Changing Planet! As a bit of inspiration, we hope you all will visit our gallery to explore the many wonders that our satellites capture each day.
Create your masterpiece! A line drawing (ink pen) or stylized quote work best. The final etching will be in black & white, but it is possible to achieve some tonal effects. This can be a combination of drawings and/or quotes – whatever signifies how important our earth is to you!
Ask an adult to submit your artwork to Planet here.
Use EARTHDAY21 as the project code
Planet will review submissions and, in mid-2021, notify the participants whose work is selected. Unfortunately, we cannot respond to everyone who submits artwork. Participants whose work is selected for inclusion will receive images of their artwork on our spacecraft, satellite launch information so you can watch your art on its ride to space, and any other goodies we may think of.
This is a great chance to leave your own mark in space – we look forward to seeing what you create!
As we completed our successful VP8STI (South Sandwich) and VP8SGI (South Georgia) DXpeditions in 2016, we began to plan for our next DXpedition. Our target is the Norwegian island of Bouvet, the #2 most-wanted DXCC entity.
At this time, it gives us great pleasure to announce that we have joined forces with intrepid Norwegian DXpeditioner Ken Opskar, LA7GIA, in our quest to activate Bouvet.
Together, in January 2023, 14 men will board the Braveheart in Cape Town and make the treacherous voyage to Bouvet. We plan to spend twenty days at Bouvet and, weather permitting, to have 14 to 16 good days of radio activity.
This announcement is certainly a BIG one and will perhaps kick-start an avalanche of DXpeditions, which have virtually disappeared since the pandemic. There is lots of time to prepare, and that’s a good thing, because it will provide plenty of time for some sorely needed DX hope to really sink in.
The Nancy Grace Roman Space Telescope is the instrument until recently known as WFIRST (Wide-Field Infrared Survey Telescope), a fact I’ll mention here for the last time just because there are so many articles about WFIRST in the archives. From now on, I’ll just refer to the Roman Space Telescope, or RST. Given our focus on exoplanet research, we should keep in mind that the project’s history has been heavily influenced by concepts for studying dark energy and the expansion history of the cosmos. The exoplanet component has grown, however, into a vital part of the mission, and now includes both gravitational microlensing and transit studies.
We’ve discussed both methods frequently in these pages, so I’ll just note that microlensing relies on the movement of a star and its accompanying planetary system in front of a background star, allowing the detection because of the resultant brightening of the background star’s light. We’re seeing the effects of the warping of spacetime caused by the nearer objects, with the brightening carrying the data on one or more planets around the central star. You can see why a target-rich environment is needed here — such occultations are random and lots of stars are needed to cull out a few. Thus RST looks toward galactic center.
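For those who want the quantitative picture, the standard point-lens relations (textbook results, not taken from the Montet paper) are the Einstein angle, which sets the angular scale of an event for a lens of mass M at distance D_L in front of a source at distance D_S, and the magnification as a function of the source-lens separation u measured in units of that angle:

```latex
% Point-lens microlensing, standard textbook relations (not from the mission papers):
% M is the lens mass, D_L and D_S the distances to lens and source, u the separation
% of source and lens in units of the Einstein angle.
\theta_E = \sqrt{\frac{4GM}{c^{2}}\,\frac{D_S - D_L}{D_L\,D_S}}
\qquad
A(u) = \frac{u^{2} + 2}{u\,\sqrt{u^{2} + 4}}
```

A planet accompanying the lens star perturbs this otherwise smooth light curve for a matter of hours to days, which is the signature a wide-field survey like this one is designed to catch.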
Assuming a successful launch and operations beginning in the mid-2020s, the RST should be a prolific planet-hunter indeed. NASA is now projecting, based on a 2017 paper from Benjamin Montet (now at the University of New South Wales) and colleagues, that the anticipated light curves for millions of stars should turn up as many as 100,000 planets. Montet et al. actually cite as many as 150,000, noting that the WFIRST field is more metal rich than the main Kepler field. All of these systems should have measured parallaxes, though most will be too faint for follow-ups.
Even so, the Montet paper finds ways to confirm some of these:
We find that secondary eclipse depth measurements can be used to confirm as many as 2900 giant planets, which can be detected at distances of > 8 kpc. From these confirmed WFIRST planets, we will be able to measure the variation in the occurrence rate of short-period giant planets. Furthermore, we show that WFIRST is capable of detecting TTVs which can be used to confirm the planetary nature of some systems, especially those with smaller planets.
This is particularly fruitful — consider that the transiting planets that RST finds will in most cases not be found around the same host stars as the planets found through microlensing. The two methods enable separate probes of the same population of planets but at different separations, with transits being most detectable for close-in planets and microlensing being the method to detect planets much further out in the system. We go from star-hugging hot Jupiters to planets beyond 10 AU. Microlensing should also turn up free-floating ‘rogue’ planets.
The Kepler mission studied stars in an area encompassing parts of Lyra, Cygnus and Draco, most of them ranging from 600 to 3,000 light years away. With RST, we will be going from Kepler’s 115 square degree field of view of relatively nearby stars to a 3 square degree field that, because it is toward galactic center, will track up to 200 million stars. The average distance of stars in this field will be in the range of 10,000 light years, in what will be the first space-based microlensing survey, looking at planets across a wide range of distances from their host stars.
Image: This graphic highlights the search areas of three planet-hunting missions: the upcoming Nancy Grace Roman Space Telescope, the Transiting Exoplanet Survey Satellite (TESS), and the retired Kepler Space Telescope. Astronomers expect Roman to discover roughly 100,000 transiting planets, worlds that periodically dim the light of their stars as they cross in front of them. While other missions, including Kepler’s extended K2 survey (not pictured in this graphic), have unveiled relatively nearby planets, Roman will reveal a wealth of worlds much farther from home. Credit: NASA’s Goddard Space Flight Center.
The synergy between microlensing and transit work is clear. Jennifer Yee, an astrophysicist at the Center for Astrophysics | Harvard & Smithsonian, notes that thousands of transiting planets are going to turn up within the microlensing data. “It’s free science,” says Yee. Benjamin Montet agrees:
“Microlensing events are rare and occur quickly, so you need to look at a lot of stars repeatedly and precisely measure brightness changes to detect them. Those are exactly the same things you need to do to find transiting planets, so by creating a robust microlensing survey, Roman will produce a nice transit survey as well.”
So we’ve gone from the relatively close — Kepler with stars at an average distance of 2,000 light years — to the much closer — TESS, with its scans of the entire sky focusing in particular on stars in the range of 150 light years — and now to RST, which backs out all the way to galactic center, a field encompassing stars as far as 26,000 light years away. NASA estimates that three-quarters of the transits RST detects will be gas or ice giants, with most of the rest being planets between four and eight times as massive as Earth, the intriguing ‘mini-Neptunes.’ For its part, microlensing takes us down to rocky planets smaller than Mars and up to gas giant size.
The Montet paper is “Measuring the Galactic Distribution of Transiting Planets with WFIRST,” Publications of the Astronomical Society of the Pacific Vol. 129, No. 974 (24 February 2017). Abstract.
Preliminary unofficial election results, posted at 4am after the November 3rd, 2020 election by election administrators in Antrim County, Michigan, were incorrect by thousands of votes – in the Presidential race and in local races. Within days, Antrim County election administrators corrected the error, as confirmed by a full hand recount of the ballots, but everyone wondered: what went wrong? Were the voting machines hacked?
The Michigan Secretary of State and the Michigan Attorney General commissioned an expert to conduct a forensic examination. Fortunately for Michigan, one of the world’s leading experts on voting machines and election cybersecurity is a professor at the University of Michigan: J. Alex Halderman. Professor Halderman submitted his report to the State on March 26, 2021 and the State has released the report.
And here’s what Professor Halderman found: “In October, Antrim changed three ballot designs to correct local contests after the initial designs had already been loaded onto the memory cards that configure the ballot scanners. … [A]ll memory cards should have been updated following the changes. Antrim used the new designs in its election management system and updated the memory cards for one affected township, but it did not update the memory cards for any other scanners.”
Here’s what that means: Optical-scan voting machines don’t (generally) read the text of the candidates’ names; they look for an oval filled in at a specific position on the page. The Ballot Definition File tells the voting machine which name corresponds to which position. It also tells the election-management system (EMS) that runs on the county’s election management computers how to interpret the memory cards that transfer results from the voting machines to the central computers.
Shown here at left is the original ballot layout, and at right is the new ballot layout. I have added the blue rectangles to explain Professor Halderman’s report.
Now, if the voting machine is loaded with a memory card with the ballot definition at left, but fed ballots in the format at right, what will happen?
A voter’s mark next to the name “Melanie Eckhart” will be interpreted as a vote for “Mark Edward Groenink”. That is, in the first blue rectangle, you can see that the oval at that same ballot position is interpreted differently, in the two different ballot layouts.
A voter’s mark next to “Yes” in Proposal 20-1 will be interpreted as “No” (as you can see by looking at the second blue rectangle).
We’d expect that problem with any bubble-ballot voting system (though there are ways of preventing it, see below). But the Dominion’s results-file format makes the problem far worse.
In Dominion’s file format for storing the results, every oval on the paper is given a sequential ID number, cumulative across all ballot styles used in the county. Now look at the figure above, just below the first blue rectangle. You’ll see that in the original “Local School District” race (at left) there are two write-in bubbles, but in the revised “Local School District” race (at right) there are three write-in bubbles. That means the ID numbers of every subsequent bubble, on this ballot and in all the ballot styles that come after it in this county, will be off by one. Figure 2 of the report illustrates this.
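To see how both effects play out, here is a deliberately tiny simulation; the layouts below are simplified, hypothetical stand-ins, not the actual Antrim ballots. The scanner reports only a bubble’s position, and the stale definition turns that position into a name, so a single extra write-in bubble shifts every later interpretation by one:

```python
# Hypothetical, simplified ballot layouts; the real Antrim layouts had many more positions.
old_layout = [                       # what the scanner's (stale) memory card expects
    "School board: Mark Edward Groenink",
    "School board: write-in 1",
    "School board: write-in 2",
    "Proposal 20-1: Yes",
    "Proposal 20-1: No",
]
new_layout = [                       # what is actually printed on the paper ballot
    "School board: Melanie Eckhart",
    "School board: write-in 1",
    "School board: write-in 2",
    "School board: write-in 3",      # the extra bubble that shifts every later ID
    "Proposal 20-1: Yes",
]

def tally(marked_choice):
    """The voter marks a bubble on the *new* ballot; the scanner reports only the
    bubble's position, and the stale definition turns that position into a name."""
    position = new_layout.index(marked_choice)
    return old_layout[position] if position < len(old_layout) else "(unknown bubble)"

print(tally("School board: Melanie Eckhart"))   # -> School board: Mark Edward Groenink
print(tally("Proposal 20-1: Yes"))              # -> Proposal 20-1: No  (off by one)
```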
Within three days, Antrim County officials had basically figured out what went wrong, and corrected most of the errors before publishing and certifying official election results on November 6th. By November 21, Antrim County had corrected almost all of the errors in its official restatement of its election results.
How do we know that the original results were wrong and the new results are right? That is, how do we know that the “corrected” results are true, and not fraudulent? We have two ways of knowing:
Hand-marked paper ballots speak for themselves. The contest for President of the U.S. was recounted by hand in Antrim County. Those results–from what bipartisan workers and witnesses could see with their own eyes–matched the results from scanning the paper ballots using the ballot-definition file that matches the layout of the paper ballot.
A careful forensic examination by a qualified expert can explain what happened, and that is why Professor Halderman’s report is so valuable–it explains things step by step.
But not every contest was recounted by hand. The expert analysis finds a few contests where the reported vote totals are still incorrect; and in one of those contests (a marijuana ballot question) the outcome of the election was affected.
In the court case of Bailey v. Antrim, plaintiffs had submitted a report (December 13, 2020) from one Russell J. Ramsland making many claims about the Dominion voting machines and their use in Antrim County: adjudication, error rates, log entries, software updates, Venezuela. Section 5 of Professor Halderman’s report addresses all of these claims and finds them unsupported by the evidence.
What can we learn from all of this?
Although the unofficial reports posted at 4am on November 4th showed Joseph R. Biden getting more votes in Antrim County than Donald J. Trump, the results posted November 6th show correctly that, in Antrim County, Mr. Trump got more votes.
Regarding the presidential contest, election administrators figured this out for themselves without needing any experts.
In other contests, where no recount was done, most of the errors got corrected, but not all.
There is no evidence that Dominion voting systems used in Antrim County were hacked.
And what can we learn about election administration in general?
Hand marked paper ballots are extremely useful as a source of “ground truth”.
“Unforced error:” Dominion’s election-management system (EMS) software doesn’t check candidate names. The EMS computer has a file mapping ballot-position numbers to candidate names; and the memory card uploaded from the voting machine has its own file mapping ballot-position numbers to candidate names. If only the EMS software had checked that these files agreed, then the problem would have been detected on election night, during the upload process.
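Here is a minimal sketch of the kind of cross-check being described, with hypothetical position-to-name mappings; nothing in the sketch corresponds to actual Dominion software, which is exactly the point:

```python
def check_definitions_match(ems_map, card_map):
    """Refuse to import results when the EMS's ballot definition and the one on the
    scanner's memory card disagree about any ballot position. Hypothetical check;
    per the report, the actual EMS performs no such comparison."""
    mismatches = [
        pos for pos in set(ems_map) | set(card_map)
        if ems_map.get(pos) != card_map.get(pos)
    ]
    if mismatches:
        raise ValueError(f"Ballot definitions disagree at positions {sorted(mismatches)}")

ems_map  = {0: "Melanie Eckhart", 1: "write-in 1", 2: "write-in 2", 3: "write-in 3"}
card_map = {0: "Mark Edward Groenink", 1: "write-in 1", 2: "write-in 2"}
check_definitions_match(ems_map, card_map)   # raises: positions 0 and 3 disagree
```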
Even without that built-in checking, to catch mistakes like this before the election, officials should do the kind of end-to-end pre-election logic-and-accuracy testing described in Professor Halderman’s report.
Risk-Limiting Audits (RLAs) could have detected and corrected this error, if they had been used systematically in the State of Michigan. RLAs are good protection not only against hacking, but also against mistakes and bugs.
There was roadwork about two hundred yards from our home. They used a pile driver every night from about midnight to four AM. We contacted the D.O.T. to ask “Why?!” It wasn’t actually a traffic issue. They said that if they used the pile driver during the day, it disturbed people.
That lasted a couple of weeks. Then, later that year, we went on a brief vacation, and found that our hotel was across the street from a construction site where they were using a pile driver during the night.
I like to think that someday we will laugh about it. We aren’t there yet, even though it’s been a decade or so, but we will get there.
For four days this past week, Internet-of-Things giant Ubiquiti did not respond to requests for comment on a whistleblower’s allegations that the company had massively downplayed a “catastrophic” two-month breach ending in January to save its stock price, and that Ubiquiti’s insinuation that a third party was to blame was a fabrication. I was happy to add their eventual public response to the top of Tuesday’s story on the whistleblower’s claims, but their statement deserves a post of its own because it actually confirms and reinforces those claims.
Ubiquiti’s IoT gear includes things like WiFi routers, security cameras, and network video recorders. Their products have long been popular with security nerds and DIY types because they make it easy for users to build their own internal IoT networks without spending many thousands of dollars.
But some of that shine started to come off recently for Ubiquiti’s more security-conscious customers after the company began pushing everyone to use a unified authentication and access solution that makes it difficult to administer these devices without first authenticating to Ubiquiti’s cloud infrastructure.
All of a sudden, local-only networks were being connected to Ubiquiti’s cloud, giving rise to countless discussion threads on Ubiquiti’s user forums from customers upset over the potential for introducing new security risks.
And on Jan. 11, Ubiquiti gave weight to that angst: It told customers to reset their passwords and enable multifactor authentication, saying a breach involving a third-party cloud provider might have exposed user account data. Ubiquiti told customers they were “not currently aware of evidence of access to any databases that host user data, but we cannot be certain that user data has not been exposed.”
Ubiquiti’s notice on Jan. 12, 2021.
On Tuesday, KrebsOnSecurity reported that a source who participated in the response to the breach said Ubiquiti should have immediately invalidated all credentials because all of the company’s key administrator passwords had been compromised as well. The whistleblower also said Ubiquiti never kept any logs of who was accessing its databases.
The whistleblower, “Adam,” spoke on condition of anonymity for fear of reprisals from Ubiquiti. Adam said the place where those key administrator credentials were compromised — Ubiquiti’s presence on Amazon’s Web Services (AWS) cloud services — was in fact the “third party” blamed for the hack.
From Tuesday’s piece:
“In reality, Adam said, the attackers had gained administrative access to Ubiquiti’s servers at Amazon’s cloud service, which secures the underlying server hardware and software but requires the cloud tenant (client) to secure access to any data stored there.
“They were able to get cryptographic secrets for single sign-on cookies and remote access, full source code control contents, and signing keys exfiltration,” Adam said.
Adam says the attacker(s) had access to privileged credentials that were previously stored in the LastPass account of a Ubiquiti IT employee, and gained root administrator access to all Ubiquiti AWS accounts, including all S3 data buckets, all application logs, all databases, all user database credentials, and secrets required to forge single sign-on (SSO) cookies.
Such access could have allowed the intruders to remotely authenticate to countless Ubiquiti cloud-based devices around the world. According to its website, Ubiquiti has shipped more than 85 million devices that play a key role in networking infrastructure in over 200 countries and territories worldwide.
In the statement it posted to its user forum this week, Ubiquiti said:
“Nothing has changed with respect to our analysis of customer data and the security of our products since our notification on January 11. In response to this incident, we leveraged external incident response experts to conduct a thorough investigation to ensure the attacker was locked out of our systems.”
“These experts identified no evidence that customer information was accessed, or even targeted. The attacker, who unsuccessfully attempted to extort the company by threatening to release stolen source code and specific IT credentials, never claimed to have accessed any customer information. This, along with other evidence, is why we believe that customer data was not the target of, or otherwise accessed in connection with, the incident.”
Ubiquiti’s response this week on its user forum.
Ubiquiti also hinted it had an idea of who was behind the attack, saying it has “well-developed evidence that the perpetrator is an individual with intricate knowledge of our cloud infrastructure. As we are cooperating with law enforcement in an ongoing investigation, we cannot comment further.”
Ubiquiti’s statement largely confirmed the reporting here by not disputing any of the facts raised in the piece. And while it may seem that Ubiquiti is quibbling over whether data was in fact stolen, Adam said Ubiquiti can say there is no evidence that customer information was accessed because Ubiquiti failed to keep logs of who was accessing its databases.
“Ubiquiti had negligent logging (no access logging on databases) so it was unable to prove or disprove what they accessed, but the attacker targeted the credentials to the databases, and created Linux instances with networking connectivity to said databases,” Adam wrote in a whistleblower letter to European privacy regulators last month. “Legal overrode the repeated requests to force rotation of all customer credentials, and to revert any device access permission changes within the relevant period.”
It appears investors noticed the incongruity as well. Ubiquiti’s share price hardly blinked at the January breach disclosure. On the contrary, from Jan. 13 to Tuesday’s story its stock had soared from $243 to $370. By the end of trading day Mar. 30, UI had slipped to $349. By close of trading on Thursday (markets were closed Friday) the stock had fallen to $289.
We had a full agenda in Council last week, so we didn’t spend a lot of time going through the reports that arrived early in the meeting. There were two reports from staff that are pretty big deals for the City, so it is worth expanding a bit on them here. The first was a project update on the still-acronymically-named NWACC, but more commonly known as the Canada Games Pool replacement.
The big news, I guess, is that we have tendered the main construction works, which means we are really doing this thing. We put the tender process on hold a little less than a year ago as there was so much uncertainty in both municipal finances and the global economy during the unfolding of the pandemic. The regional construction market has adapted and many projects are moving ahead across the region, so Council and our project team were confident that we could understand pricing and meet our budget objectives; at this point, waiting further only creates more uncertainty.
The report says we are within budget, though it’s not as simple as it sounds. This is a big piece of infrastructure, and you can’t just go to Amazon and click on “new pool” and pop $106M on your credit card. The cost of construction materials is way up over the two years since we began this procurement (lumber has almost doubled, steel is up 50%), and trades are in major demand right now, which means some parts of the construction cost are also up. Our project team was able to “value engineer” some aspects of the project, which means going through the design and assumptions and finding ways that less expensive techniques or materials can be creatively applied. We have also eaten a bit into the contingency budget that was included as part of the overall budget planning. So we are on budget, but pushing the top part of it, and we need to be cognizant of that as the project moves along.
There are also parts of the project that we have not yet procured, like construction of the outdoor playscapes. Fortunately, there are aspects of those components that may still apply for senior government funding support, so we will continue to seek ICIP grants and funding support to reduce the overall finance load of the project.
The NWACC was designed over more than two years, and involved one of the most comprehensive public engagements the City has ever undertaken. There were a lot of ideas and desires for this facility, and it was a big challenge to prioritize and assemble a program that met most needs, fit on the footprint available, and was within the budget of a City of 80,000 people. I am really excited about the result.
The program includes a 50m pool with 8 full competition-width lanes, two baffles, and a partially mobile floor to provide greater flexibility of space for everything from competitive swimming to aquafit. There is also a second leisure pool that has shorter warm-up swim lanes to support competitions, along with all of the leisure uses that people expect in a community pool. Having two pools also allows a cooler competition temperature in the big tank, while the leisure tank can be warmer and more comfortable for leisure users. There are also expanded hot tub and sauna options, greater accessibility to all tanks, much larger change room areas (with ample gender-neutral changing areas), and more deck space and storage.
The exercise space will be greater than what the current CGP and Centennial Community Centre offer. Final details on equipment are to come, but the plan is for a larger free-weight space on the main deck floor (no more dropping barbells on wood floors!) and a large fitness equipment space overlooking the main pool. There will be a dedicated spin class space, and rooms for dance, yoga, and other assorted uses. Add to this a full-sized multi-purpose gymnasium and a compact, more versatile gymnasium space that opens to the outside. There will be a cafeteria, space for sports medicine practitioners, a significant childcare and childminding space, and multi-purpose rooms for community meetings and arts programming.
Perhaps informed by the Canada Games Pool experience, the new complex is going to emphasize natural light. The entire complex was designed to align better with the sun, there will be lots of window space between sections to let light pass through, and the main gym and pool will have saw-tooth roof designs with clerestory windows facing north to allow ample indirect sunlight to fill the rooms.
Finally, the NWACC is going to help the City meet its aggressive climate goals. The current pool is the City’s single largest GHG emitter; the new complex will not only use electricity for air and water heating, it will also generate some power onsite, and it is anticipated to be the first aquatic centre in Canada to be certified as a Zero Carbon Building. The building systems for energy recovery, air management and pool filtration will be cutting edge, making this likely the most technologically advanced pool in Canada when it is done. We are building it right so it saves us money in the long run.
So, it is all exciting. But there will be some hassles between now and then. As we committed early on to keeping the existing pool and recreation centre operating during the 2+ year construction process, we are really tight on space over the existing lot.
This means inevitable parking hassles for the users and adjacent neighbourhoods, starting with official groundbreaking next week when the fences will go up and the site will start to look very different. I hope people will be patient and understand the long-term goal here (and, yah, I’m looking at you, my Royal City Curling Club cohort!). We should be doing a grand opening towards the end of 2023, which is about a year later than we probably hoped when we started this planning process back in 2016, but the end result is going to be great.
The seventh part of this series is all about the sensor board. It hosts the position sensor and four fill sensors. Talking about “sensors” sounds complex, but these are just pairs of IR LEDs and phototransistors. All design files for the board are in the GitHub repository, and if you missed one of the previous parts, take a look at the overview page.
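To give a flavor of how such an LED/phototransistor pair might be read, here is a minimal MicroPython-style sketch. The pin numbers, the threshold, and the assumption that the pairs work as break-beam sensors are all hypothetical and not taken from the project files:

```python
# Minimal sketch of the sensing principle, assuming a MicroPython-capable MCU;
# pin assignments and the threshold are hypothetical, not from the actual board.
from machine import ADC, Pin
import time

ir_led = Pin(15, Pin.OUT)        # drives one IR LED
photo = ADC(Pin(26))             # reads the facing phototransistor

def beam_blocked(threshold=20_000):
    """Pulse the LED and compare the lit reading with the ambient level.
    A small difference means the light path is interrupted (e.g. a fill sensor
    reporting that the hopper is full)."""
    ambient = photo.read_u16()
    ir_led.on()
    time.sleep_ms(2)
    lit = photo.read_u16()
    ir_led.off()
    return (lit - ambient) < threshold

print("fill sensor:", "blocked" if beam_blocked() else "clear")
```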