in real space news,
No theme this week; just catching up with all the links I didn’t get to last week.
Windows has a built-in automatic time-sync function, but it doesn’t work as one might expect: unless the difference between the NTP time server and the local clock is large enough, neither the manual nor the automatic sync will update the local clock.
A very simple way to get around this is to right-click on the clock in the taskbar, select “Adjust date/time”, then turn “Set time automatically” off. Click the “Change” button under “Set the date and time manually” and adjust the minute either 2 minutes ahead or behind. Finally, turn “Set time automatically” back on. This will immediately sync the local clock.
There are plenty of ways to do this automatically, but sometimes I like to avoid installing new software (I’ve been bitten by serious bugs on more than one occasion).
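If you’d rather not click through Settings each time, the built-in w32tm tool can do the same job without installing anything. A sketch, run from an elevated prompt with the Windows Time service running (behaviour varies a bit between Windows versions, so verify with the stripchart check afterwards):

```powershell
# Show the offset between the local clock and an NTP server (read-only check)
w32tm /stripchart /computer:time.windows.com /samples:3 /dataonly

# Ask the Windows Time service to resync, rediscovering time sources first
w32tm /resync /rediscover
```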
Milan M. Ćirković’s work has been frequently discussed on Centauri Dreams, as a glance in the archives will show. My own fascination with SETI and the implications of what has been called ‘the Fermi question’ led me early on to his papers, which explore the theoretical, cultural and philosophical space in which SETI proceeds. And there are few books in which I have put more annotations than his 2018 title The Great Silence: The Science and Philosophy of Fermi’s Paradox (Oxford University Press). Today Dr. Ćirković celebrates Stanislaw Lem, an author I first discovered way back in grad school and continue to admire today. A research professor at the Astronomical Observatory of Belgrade (Serbia), Ćirković obtained his PhD at the Department of Physics of the State University of New York at Stony Brook in 2000, with a thesis in astrophysical cosmology. He tells me his primary research interests are in the fields of astrobiology (habitable zones, habitability of galaxies, SETI studies), philosophy of science (futures studies, philosophy of cosmology), and risk analysis (global catastrophes, observation selection effects and the epistemology of risk). He co-edited the widely-cited anthology Global Catastrophic Risks (Oxford University Press, 2008) with Nick Bostrom, has published three research monographs and four popular science/general nonfiction books, and has authored about 200 research and professional papers.
by Milan Ćirković
This year we celebrate a centennial of the birth of a truly great author and thinker who is still, unfortunately, insufficiently well-known and read. Stanislaw Lem was born in 1921 in what was then Lwów, Poland (now Lviv, Ukraine). That was the year Čapek’s revolutionary drama R.U.R. premiered in Prague’s National Theatre and defined the word “robot”, Albert Einstein was awarded the Nobel Prize in Physics for his work on the photoelectric effect, in the course of which he effectively discovered photons, and one Adolf Hitler became the leader of a small far-right political party in Weimar Germany.
All three of these central-European developments exerted a strong influence on Lem’s life and career. His studies of medicine, inspired by both his father’s distinguished medical career and his early-acquired mechanistic view of human beings, were interrupted three times by the chaos of WW2 and post-war changes. He narrowly escaped being executed by German authorities during the war for his resistance work. Finally, when he was on the verge of acquiring a diploma at the famous Jagiellonian University of Krakow, in 1949, he abandoned the pursuit in order to avoid the compulsory draft to which physicians were subject in the new communist Poland. He did some practical medical work in a maternity ward, but very quickly left medicine for good and became a full-time writer.
The apex of Lem’s creative career spans about three decades, from The Investigation published in 1958, to the publication of Fiasco and Peace on Earth in 1987. During that period, he published his greatest novels, in particular Solaris (1961), The Invincible (1966), His Master’s Voice (1968), and The Chain of Chance (1976), along with numerous short story anthologies, the most important being The Cyberiad (1965), as well as the Ijon Tichy and Pilot Pirx story cycles.
Image: Polish science fiction writer Stanislaw Lem. Credit: Wojciech Zemek.
Finally, there are several works in the Borgesian meta-genre of imaginary forewords, introductions, and book reviews, notably A Perfect Vacuum of 1971. All this fiction was complemented by very extensive non-fiction writing, mainly in philosophy of science, futures studies, and literary criticism. The last two decades of Lem’s life were characterized by essayistic and publishing activity, as well as innumerable prizes and awards, but no original fiction. Lem passed away peacefully on March 27, 2006, at the age of 84 in his home in Krakow.
Lem was obsessed by the theme of Contact: from his very first science-fiction novel, The Astronauts of 1951 (which he himself denounced as “childish”), to the last, great and deeply disturbing Fiasco, which is a kind of literary and philosophical testament. Nowhere, however, is his thought more in touch with the practical aspects of our SETI/search-for-technosignatures projects than in His Master’s Voice (originally published in 1968, that is, only 8 years after the original Ozma Project! It was translated into English by Michael Kandel only in 1983).
It is a brilliant work, perhaps the best novel ever written about SETI, but also a dense tract indeed. So, instead of many examples, I shall concentrate upon this one as a case study for the tremendous usefulness of reading Lem for anyone interested in astrobiology/SETI studies.
The study of the motives and ideas relevant for these fields would require a book-length treatment, as is obvious from the list of auxiliary topics Lem masterfully weaves into the narrative: from the ontological status of mathematical objects to the psyche of the Holocaust survivors, from preconditions for abiogenesis to the origin of the arrow of time. It is a challenging text in more than one sense; there is almost no dialogue and no manifest action beyond the recounting of a SETI project that not only failed but was never truly comprehended in the first place.
Image: A 1983 English edition of His Master’s Voice from Harcourt Brace Jovanovich, one of many editions available worldwide.
And this is a book whose plot should not be spoilt, since it is not as widely read as it should be half a century later. Without revealing too much, His Master’s Voice is set at a time when neutrino astrophysics is advanced enough to detect possible modulations (imagined to have occurred near the end of the 20th century in a continued Cold War world). A neutrino signal repeating every 416 hours is discovered from a point in the sky within 1.5° of Alpha Canis Minoris. An eponymous top-secret project is then formed in order to decrypt the extraterrestrial signal, burdened by all the Cold-War paranoia and heavy-handed bureaucracy of the second half of the twentieth century. The project has its ups and downs, including some quite dramatic episodes that threaten the very survival of human civilization, but it is—obviously—mostly unsuccessful. The protagonist, a mathematical genius and cynic named Peter Hogarth, is neither a hero nor a villain; the SETI plot ends in anticlimactic uncertainty.
An intriguing consequence of Lem’s scenario is the realization that, while detectability generally increases with the progress of our astronomical detector technology, it does so very unevenly, in jumps or bursts. Although the powerful source of the “message” in the novel (presumably an alien beacon) had been present for a billion years or more, it became detectable only after sophisticated neutrino-detecting hardware was developed. And even then, the detection of the signal happened serendipitously. Thus, in a rational approach to SETI—not often followed in practice, alas—the issue of detectability should be entirely decoupled from the issue of synchronization (the extent to which other intelligent species are contemporary to us).
Fermi’s paradox does not figure explicitly in His Master’s Voice (in contrast to many other of Lem’s works, especially his late and in my opinion equally magnificent Fiasco), and for an apparently obvious reason: “the starry letter” has always been here, or at least long enough on geological timescales. Detectability is, at least in part, a function of historical human development.
And there is a very real possibility, in the context of the plot, that “the letter” does not originate with intentional beings at all. The fulcrum of the book is reached when three radical hypotheses are presented to the weary researchers, including one attributing the signal to purely natural astrophysical processes! But even in this revisionist case, there are other problems, especially in light of the fact that the signal manifests “biophilic” properties: it assists complex biochemical reactions, and scientists in the novel speculate about whether it helped abiogenesis on Earth. If it did so, the same necessarily occurred on many other planets in the Galaxy, so even if we set aside the mysterious Senders, it is natural to ask: where are our peers? This leads to more severe versions of Fermi’s paradox. At the same time, it makes us think about the various forms directed panspermia could, in fact, take once we reject our anthropocentric thinking.
There is another key lesson. While the discovery of even a single extraterrestrial artefact (and Lem’s neutrino message can surely be regarded as an artefact in the sense of the contemporary search for technosignatures) would be a great step forward, it would not, at least not immediately, resolve the problem. If one could conclude, as some of the protagonists of His Master’s Voice do, that there exist just two civilizations in the Galaxy, us and the mysterious Senders, that would still require explanation. Two is, in this particular context, sufficiently close if not equal to one.
And this shows, finally, the true gift of Lem’s thought to astrobiology and SETI studies: a capacity to go one step beyond in strangeness, to kick us sufficiently strongly out of the grooves of conventional thinking, to disturb us—and offend us, if necessary—and make us reject the comfortable and usual and mundane. In a general sense, all philosophy should do the same for us; that it usually does not is indeed discouraging and depressing. From time to time, however, a thinker passes with a bright torch illuminating the path and indicating how clueless we in fact are.
Lem was just such a figure. Reading him is indeed the highest form of celebration of reason and wisdom.
Here’s something I just learned: if you are running dma(8), /etc/dma.conf will contain MAILNAME. If your email server is somewhere else but you set MAILNAME to your bare domain, dma will deliver locally.
I had /etc/dma.conf set with MAILNAME shiningsilence.com, so dma kept delivering overnight periodic results to root, which was aliased to email@example.com in /etc/mail/aliases, and so it was delivered to ‘justin’ locally on the machine.
Changing MAILNAME to www.shiningsilence.com – the host you are reading right now – fixed the problem. Now, whether this was an automatically set config or something I misconfigured some years ago… I can’t tell.
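For reference, the relevant knob looks like this (values mirror the hosts above; adjust for your own machine). As described above, dma treats mail addressed to the MAILNAME domain as local:

```
# /etc/dma.conf
#
# With the bare domain, dma treated mail for that domain as local:
#   MAILNAME shiningsilence.com    # root@shiningsilence.com stayed on this box
#
# With the full hostname, mail for the bare domain is relayed as intended:
MAILNAME www.shiningsilence.com
```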
As of today, we know of more than a million asteroids. This number increases by hundreds of thousands every few years. However, studying asteroids close up is much harder than discovering them. Thus far, fewer than 20 asteroids have been visited by spacecraft. Only six of these were studied in detail by dedicated missions — the rest were ‘bonus’ flybys by spacecraft like Galileo and Rosetta heading to their primary targets.
While it is reasonable to assume that most asteroids are alike, there are significant variations between different groups depending on their origin, composition, history, etc.
Asteroids are categorised into a dozen spectral types, and some belong to families believed to originate from a single large body. Active asteroids and main-belt comets exhibit mass-loss activity, which in some cases is poorly understood. Asteroids that fly by the Earth at less than 300,000 km are potentially hazardous to us. To better understand these characteristics, which would tell us a lot about the early Solar System and its evolution, it is necessary to study a larger number of asteroids more closely, and sooner rather than later. Studying a statistically meaningful population of asteroids with monolithic spacecraft (one large spacecraft visiting a few asteroids) is economically infeasible, as the cost of such missions can reach €1 billion and the surveys would take decades. With the Multi-Asteroid Touring (MAT) mission concept, we are striving to image hundreds of asteroids with tens of electric-sail (E-sail) propelled, toaster-sized probes over a period of 5 to 7 years.
The motivations to study asteroids range from understanding the early evolution of the Solar System to plans for in-space resource utilisation. As remnants from the Solar System’s formation, they are studied to understand the delivery of water and organics to the Earth and other planets, and to determine the evolution and composition of the asteroids themselves, of the planets, and of the Solar System as a whole. As the largest population of Solar System objects, asteroids and meteoroids regularly impact the Earth and its atmosphere, causing meteor showers, scientifically valuable meteorite impacts and devastating asteroid impacts. As rich and potentially accessible resources, asteroids could be exploited for water extraction, and for mining of platinum-group metals and building materials.
Asteroid families and primordial asteroids, as leftovers from planetary formation, are key subpopulations for understanding the history of the main asteroid belt (i.e. the region between the orbits of Mars and Jupiter in which the majority of asteroids have their orbit), as well as the composition and structure of planetesimals, small objects from which planets, like the Earth, could form. Therefore, we should have a reliable statistical view of the size and compositional distributions of at least the largest asteroid families. Asteroids are grouped into families based on similar orbital parameters and spectra, and members of the same family are thought to correspond to collisional fragments of a single original parent body.
There are two principal ways that asteroids are currently studied. First, via distant, typically point-source observations with ground-based and space telescopes. Second, via space missions with close-range observation, in situ measurements and sample return. The first method allows the study of thousands of objects using a single instrument at a time; by doing this, we gain overall knowledge about orbits, albedos (how well a surface reflects solar energy), colours, spectra and sizes of asteroids. Space missions can provide more information about asteroids, including maps of albedos, colours, spectra and morphological features (craters, faults, fractures, boulders, etc.); 3D models; precise mass and density estimates; overviews of physical properties and chemical composition of the surface materials; as well as hints regarding an asteroid’s internal composition. A single spacecraft can study only one or, in some cases, just a few objects in great detail with a range of instruments. However, monolithic missions are ill-suited for the study of tens or hundreds of asteroids.
A fleet of small spacecraft can tour multiple asteroids and gather remote measurements of a much larger number of objects than have thus far been studied at close range. We can visit asteroids belonging to different families, spectral types, sizes and other classes. Each spacecraft is equipped with a small E-sail tether to give it a large (in principle unlimited) manoeuvring ‘Delta-v’ capability, so that it can tour the asteroid belt indefinitely (in practice limited by the mission and spacecraft lifetime) and return data to the ground during one or more Earth flybys. The scientific payload on board each spacecraft is a lightweight camera taking high-resolution images in the ultraviolet, visual and near-infrared spectral ranges.
The mission makes a unique contribution to closing the knowledge gap between the large number of surveyed asteroids and the handful of closely studied ones by performing flybys of tens of primary targets (two flybys per target) and of hundreds of secondary targets. By simply taking (spectral) images during flybys, observations can be obtained of asteroids across all spectral types and of many families, including all known active asteroids and dozens of potentially hazardous objects.
The MAT concept consists of a fleet of nanospacecraft, each weighing less than 10 kg and equipped with a single-tether E-sail propulsion system. Each spacecraft of the fleet can make flybys of several (typically 5–7) asteroids.
The mission is scalable by the size of the fleet, the number of targets and the number of spacecraft per target. For instance, two spacecraft are able to image most of a target’s surface.
A small launch vehicle such as PSLV (India’s Polar Satellite Launch Vehicle) can deliver a payload of up to 500 kg to marginal-escape orbit (the E-sail operates in the solar wind, outside the Earth’s magnetosphere). The PSLV payload mass corresponds to a fleet size of about 50 spacecraft, hence enabling the study of about 300 different asteroids. To keep the telemetry costs down, data can be stored in flash memory during the mission and downlinked during the Earth flyby. Hence, increasing the size of the fleet incurs an extra production and launch cost, but only a marginal telemetry cost: the Deep-Space Network (DSN) time needed per spacecraft is only around 20 hours. In conventional interplanetary missions, the DSN is also used for navigation (localisation) via ranging. Since the DSN would not be used during most of the mission, autonomous optical navigation would be required to provide both the attitude and position of the nanospacecraft. Due to the large size of the fleet and the need to minimise mission cost, the spacecraft should be designed to be autonomous and send occasional status updates via low data-rate telemetry, which can also be used for two-way emergency communications. Launching is flexible because the only launch requirement is delivery to marginal-escape or higher orbit. Since each spacecraft works independently of the others, simultaneous launching is not mandatory; the launch can be dedicated, piggyback (hitching a ride with another mission), or a combination of both.
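The fleet-sizing arithmetic above is easy to check. A quick sketch (the per-spacecraft flyby count of 6 is the midpoint of the “typically 5–7” figure given below):

```python
# Back-of-envelope check of the MAT fleet sizing figures quoted above.
payload_to_escape_kg = 500      # PSLV capacity to marginal-escape orbit
spacecraft_mass_kg = 10         # upper bound per nanospacecraft
flybys_per_spacecraft = 6       # midpoint of "typically 5-7" targets each
dsn_hours_per_spacecraft = 20   # downlink time needed per Earth flyby

fleet_size = payload_to_escape_kg // spacecraft_mass_kg
asteroids_visited = fleet_size * flybys_per_spacecraft
total_dsn_hours = fleet_size * dsn_hours_per_spacecraft

print(fleet_size, asteroids_visited, total_dsn_hours)  # 50 300 1000
```

So even a fully loaded 50-craft fleet needs only about 1000 DSN hours over the whole mission, which is why the telemetry cost scales so gently with fleet size.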
The mission of each spacecraft is divided into several phases: launch; tether deployment; acceleration to the main belt (or another asteroid region); approach, requiring minor trajectory changes; flyby and imaging of asteroids; cruise back to Earth’s proximity; and data transmission during an Earth flyby.
The baseline trajectory of a MAT spacecraft can reach the main belt and return to the Earth in a minimum of 3.2 years. The first year is spent accelerating, some 1.5 years are spent performing flybys in the main belt, and, during the rest of the mission, the spacecraft cruises back to perform an Earth flyby and transmit asteroid images. Other target groups can be considered if a longer mission lifetime is permitted and the solar cells can produce enough power: for example, Hilda asteroids can be reached in 4.3 years and Jupiter Trojans in 8.3 years. If the mission lifetime is longer, slower flybys can be performed: 5 years to the inner belt and 8.4 years to the outer belt. Slower flybys provide the benefit of imaging asteroids for a longer time with less motion blur.
A key component of every science mission is its payload. In the case of MAT flybys, an imager is the only viable instrument, because all other instruments would require the spacecraft to remain in the target’s proximity for an extended period of time. Human vision has been optimised (via evolution) for the frequency range in which the intensity of solar radiation is highest. Imaging in the visible and nearby frequency ranges is fundamentally the most efficient approach, because sunlight’s energy there dominates over more exotic wavelength ranges such as microwaves and X-rays.
By employing recent advances in imaging technology, the instrument can be as small as one CubeSat unit (a 10 cm cube, i.e. 1 litre, or roughly two beers), including multiple sensors and filters for image acquisition at wavelengths ranging from ultraviolet to infrared, with the visible spectrum in between. Reflective optics is preferred because it reflects the whole frequency range equally well; glass and other refractive lens materials are heavier, are optimised for specific wavelengths, and occupy more volume and mass.
The primary challenge of the MAT nanospacecraft is the mass limitation from a thrust point of view. The acceleration of a space vehicle driven by the E-sail strongly correlates with the number and length of tethers, applied voltage and mass of the spacecraft. In the case of the MAT concept, its E-sail features a single tether with a maximum length of 20 km. The only variable parameter left is the mass. To reach the main asteroid belt within one heliocentric orbit, it is necessary to limit the mass of the spacecraft to 6 kg.
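For a rough feel of why the 6 kg cap matters, here is a back-of-envelope sketch. The thrust-per-length figure is my assumption (a commonly quoted E-sail ballpark of about 0.5 mN per km of tether at 1 au), not a number from this article, and the delta-v estimate ignores the fall-off of E-sail thrust with solar distance, so treat it as an optimistic upper bound:

```python
# Rough E-sail acceleration sketch. The thrust-per-length value is an assumed
# ballpark (~0.5 mN per km of tether at 1 au), not a figure from the article.
tether_length_km = 20        # maximum tether length quoted in the text
thrust_per_km_N = 0.5e-3     # assumption: E-sail thrust density at 1 au
spacecraft_mass_kg = 6       # mass cap quoted in the text

thrust_N = thrust_per_km_N * tether_length_km        # 0.01 N total
accel_mm_s2 = thrust_N / spacecraft_mass_kg * 1e3    # ~1.7 mm/s^2

# Ignoring the fall-off of thrust with solar distance, a year of continuous
# thrusting at this level corresponds to tens of km/s of delta-v:
delta_v_km_s = accel_mm_s2 * 1e-3 * 365.25 * 86400 / 1e3
print(round(accel_mm_s2, 2), round(delta_v_km_s))
```

Millimetre-per-second-squared accelerations look tiny, but integrated over a year they are what makes the “in principle unlimited Delta-v” claim plausible, and they degrade quickly if the spacecraft mass creeps above the cap.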
The nanospacecraft design is based on a three-unit CubeSat. The spacecraft consists of the following main subsystems: spacecraft bus, attitude-control module, the imager and an independently operated remote unit. The spacecraft bus is a partially reused version of the ESTCube-2 bus with a larger battery pack.
The attitude-control unit is a complex one: it includes auxiliary propulsion to assist the manoeuvres required for E-sail deployment and operation.
The deployment is performed by spinning the spacecraft, so that centrifugal force acts at the tether’s end, where the remote unit is attached. The attitude control module is designed to perform such a manoeuvre. The attitude control system consists of three reaction wheels and a novel miniature propulsion system. One of two systems, closely matched in form factor and performance, could be used in the attitude control system: i) the electrospray propulsion system TILE, or ii) the Field Emission Electric Propulsion (FEEP) system NanoFEEP.
The remote unit operates independently for spin-plane control, which is used to steer the entire system in the desired direction. This unit has an electrospray propulsion system, attitude-determination sensors, batteries and means to communicate with the main spacecraft. More importantly, the remote unit fits inside the main spacecraft during the launch and it is deployed once the desired trajectory is reached.
One reason why all previous and ongoing asteroid missions have used expensive monolithic spacecraft is the conservative approach of reusing previously flown technologies. The MAT concept, by contrast, includes several technologies not yet demonstrated in orbit. The E-sail propulsion system is being developed for demonstration on several missions to be launched in 2021/2022: ESTCube-2, FORESAIL-1 and AuroraSat-1. Since access to Deep-Space Networks is expensive and limited to space agency missions, alternative communications, navigation and autonomy solutions are deemed necessary. All of these must be developed and tested on a small scale to enable MAT and other small, independent interplanetary missions. If spacecraft can operate mostly autonomously, then communications can be limited to the transmission of housekeeping data and emergency operations, with the scientific data delivered back to Earth in a relatively short burst. For navigation, the E-sail spin can be used to sweep the celestial sphere and look for the Sun, planets and asteroids, which can be used to triangulate the spacecraft’s location (i.e. to calculate its position from the positions of three or more known celestial objects). An ideal candidate for such a demonstration mission would be a nanospacecraft transferring from lunar orbit to a near-Earth asteroid, taking images during a flyby and returning them to Earth.
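The triangulation idea can be illustrated with a toy two-dimensional sketch: given the ephemeris positions of two bodies and the measured directions toward them, the spacecraft position is the intersection of the two sight lines. A real system would use three or more bodies and a least-squares fit; all numbers here are invented for the demo:

```python
# Toy 2-D version of the optical-navigation idea described above: recover the
# observer position from known body positions plus measured unit directions.
import math

def triangulate(body_a, dir_a, body_b, dir_b):
    """Solve body_a - t*dir_a == body_b - s*dir_b for the observer position."""
    # Rearranged: t*dir_a - s*dir_b == body_a - body_b  (2 equations, 2 unknowns)
    ax, ay = dir_a
    bx, by = dir_b
    rx, ry = body_a[0] - body_b[0], body_a[1] - body_b[1]
    det = ax * (-by) - (-bx) * ay
    t = (rx * (-by) - (-bx) * ry) / det   # Cramer's rule for t
    return (body_a[0] - t * ax, body_a[1] - t * ay)

def unit(frm, to):
    """Unit direction vector from point frm toward point to."""
    dx, dy = to[0] - frm[0], to[1] - frm[1]
    n = math.hypot(dx, dy)
    return (dx / n, dy / n)

# Spacecraft secretly at (3, 4); bodies at known ephemeris positions.
true_pos = (3.0, 4.0)
body_a, body_b = (10.0, 4.0), (3.0, 20.0)
est = triangulate(body_a, unit(true_pos, body_a), body_b, unit(true_pos, body_b))
print(est)  # (3.0, 4.0)
```

In practice the measured directions are noisy, which is exactly why more than the minimum number of reference bodies would be observed.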
This article is based on the MAT mission design, MAT instrument and its performance characterisation, and a detailed study presenting the MAT architecture, operation of deployable mechanics and thermal analysis. All co-authors of the relevant scientific articles have contributed to this blog post. The Economist has also publicised the concept.
Multi-asteroid touring mission concept for studying hundreds of asteroids was originally published in Space Travel Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.
A Catholic priest was outed through commercially available surveillance data. Vice has a good analysis:
The news starkly demonstrates not only the inherent power of location data, but how the chance to wield that power has trickled down from corporations and intelligence agencies to essentially any sort of disgruntled, unscrupulous, or dangerous individual. A growing market of data brokers that collect and sell data from countless apps has made it so that anyone with a bit of cash and effort can figure out which phone in a so-called anonymized dataset belongs to a target, and abuse that information.
There is a whole industry devoted to re-identifying anonymized data. This was something that Snowden showed that the NSA could do. Now it’s available to everyone.
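A toy sketch of how such re-identification works in principle, with entirely synthetic data: the dataset is “anonymized” by replacing names with random IDs, but a single known fact about the target (here, a home location) is enough to single out their ID:

```python
# Minimal illustration of re-identifying an "anonymized" location dataset.
# Names are replaced with pseudonyms, but locations and times are kept,
# so one known location-time pattern picks out the target. Data is synthetic.
pings = [
    # (pseudonym, location, hour-of-day)
    ("u1", "gym", 7), ("u1", "office_a", 10), ("u1", "home_x", 23),
    ("u2", "office_b", 9), ("u2", "home_y", 22), ("u2", "home_y", 23),
    ("u3", "office_a", 10), ("u3", "home_z", 23),
]

def reidentify(pings, known_location, night=(21, 24)):
    """Return pseudonyms whose night-time pings match a known home address."""
    lo, hi = night
    return {p for p, loc, h in pings if loc == known_location and lo <= h < hi}

# The attacker only knows the target lives at "home_y":
print(reidentify(pings, "home_y"))  # {'u2'}
```

Once the pseudonym is pinned down, every other ping under that ID — workplaces, clinics, bars, churches — is effectively de-anonymized too.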
Researchers have released technical details on a high-severity privilege-escalation flaw in HP printer drivers (also used by Samsung and Xerox), which impacts hundreds of millions of Windows machines.
If exploited, cyberattackers could bypass security products; install programs; view, change, encrypt or delete data; or create new accounts with more extensive user rights.
The bug (CVE-2021-3438) has lurked in systems for 16 years, researchers at SentinelOne said, but was only uncovered this year. It carries an 8.8 out of 10 rating on the CVSS scale, making it high-severity.
Look for your printer here, and download the patch if there is one.
The dog days of summer seem a natural breakpoint in the calendar, because not much will happen over the next six weeks other than lying low and hiding out from the ill effects of the season:
Dog days of summer are the hot, sultry days of summer. They were historically the period following the heliacal rising of the star system Sirius (known colloquially as the “Dog Star”), which Hellenistic astrology connected with heat, drought, sudden thunderstorms, lethargy, fever, mad dogs, and bad luck. They are now taken to be the hottest, most uncomfortable part of summer in the Northern Hemisphere.
We’ve got some vacation planned over the coming days, a getaway to Lake Michigan. Other than that and catching up on a backlog of reading, I have nothing better to do than sit in the shade and sip sweet tea for the next sixty days while waiting on the autumnal equinox.
So I’ve decided to take a break here too. Power down and reboot cycles are necessary from time to time and this seemed a good time for it. Things will resume here again after a refreshing pause and I’ll see you all again soon.
As always, thanks for reading and for sharing, stay well.
Preparation for the Tokyo Olympics, flood damage in Germany, Eid al-Adha prayers in Senegal, a manhunt in France, a crowded nightclub in London, a mural in Bangkok, a windy day in Montevideo, a grazing elephant in Kenya, and much more
One of my favorite sketch shows of all time is The Kids in the Hall.
My favorite kid in said Hall was Kevin McDonald.
This is a sketch he wrote and starred in about a guy with a to do list.
I’ve always enjoyed that sketch, but I appreciated it far more after I learned that it was inspired by a conversation he had with Dave Foley in which Foley made fun of McDonald after seeing that he had a beat up, tattered to do list, the final item on which was “Make new list.”
This week’s BSD Now covers different topics – you may think from the headline it’s a “tips and tricks” link, but no, it’s about confidential info.
Hey, in this issue: DeepMind open sources AlphaFold code, DARPA releases “Common Sense AI” dataset, why 90% of machine learning models never hit the market, how video game artificial intelligence is evolving, a library of self-supervised methods for visual representation learning, and more.
Blender Bot 2.0: An open-source chatbot that builds long-term memory and searches the internet
[Project] solo-learn: a library of self-supervised methods for visual representation learning
Google released a new physics simulator, Brax; it will help speed up reinforcement learning
Think, fight, feel: how video game artificial intelligence is evolving
Why 90% of machine learning models never hit the market
Scaling AI and data science – 10 smart ways to move from pilot to production
Enjoy the newsletter? Help us make it bigger and better by sharing it with your colleagues and friends.
Have a nice weekend! See you next week. — Andriy
Next week will mark the 50th anniversary of the launch of Apollo 15—the fourth crewed mission to reach the moon. Launched on July 26, 1971, Apollo 15 became the first Apollo mission to carry a lunar roving vehicle (LRV) to the lunar surface. While the command module pilot, Alfred Worden, remained in orbit around the moon, the commander, David Scott, and the lunar-module pilot, James Irwin, set down on the Hadley-Apennine landing site. The two astronauts later unfolded and deployed the 460-pound LRV (among other gear and experiments), and over the next three days they drove it about 17 miles (28 kilometers) across the lunar landscape. When they were done, they parked the “moon buggy” a short distance from the lunar module, where it still sits today—the first of three rovers left on the moon by Apollo missions. Gathered here are images of the development, training, and deployment of the first vehicle driven by humans on the surface of another world.
Yes, we’re still in a pandemic and yes, these types of events are still happening over videoconference and not in meat space. But you know what? That means that so many more people have the opportunity to show up and show off their hacks! As long as 1 PM PDT is within your personal uptime, that is. Maybe you can make an exception if not?
Here is your link: the summer edition of Bring a Hack with Tindie and Hackaday will take place on Thursday, August 5th at 1:00 PM Pacific Daylight time (that’s 4pm EDT | 9pm BST/CET). Choose your gnarliest hack of late and go register for the event, which will be held on the Crowdcast video chat platform this time around.
The remote Bring-A-Hack held way back in April was packed with awesome people. Now is your chance to join in! You all have awesome projects from the last few months (we’ve seen a lot of them on these very pages), so come show them off to the hacker elite from around the globe. You know the deal: it really doesn’t matter what level your project is on, so don’t worry about that. As long as you’re passionate about it, we’d love to see it and hear all about the problems you had to overcome and yes, even the mistakes you made. You never know what knowledge you might have that can push someone else’s project over the finish line.
The NEID spectrograph has passed the Operational Readiness Review necessary for final acceptance and regular operations. Developed by NASA and the National Science Foundation’s NN-EXPLORE exoplanet science program, it has been put through a lengthy commissioning process in the five years since the radial velocity planet hunter design was selected. NEID is mounted on the WIYN 3.5m telescope at Kitt Peak National Observatory in Arizona, and we now have word that its scientific mission has begun.
Image: Sunset over Kitt Peak National Observatory during NEID commissioning in January 2020. Credit: Paul Robertson.
As a radial velocity instrument, NEID is all about the tugs one or more planets exert on the host star, as measured radially — toward Earth, then away from it — during the planets’ orbits. The Doppler shift in the star’s light contains the information. That these are exquisitely tiny measurements should be obvious. Jupiter induces a 13 meter per second wobble on our star, but the Earth only manages to induce a wobble of 9 centimeters per second. NEID’s single measurement precision is already better than 25 centimeters per second, making it an excellent addition to the toolkit for finding new worlds.
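Those two figures are easy to sanity-check. Here is a quick back-of-the-envelope calculation — my own sketch, assuming a circular, edge-on orbit with the planet mass much smaller than the stellar mass; nothing here comes from the NEID pipeline:

```python
import math

# Radial-velocity semi-amplitude K for a circular, edge-on orbit
# with m_planet << M_star:
#   K = (2*pi*G / P)**(1/3) * m_planet / M_star**(2/3)

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
YEAR = 3.156e7     # one year in seconds

def rv_semi_amplitude(m_planet_kg, period_s, m_star_kg=M_SUN):
    """Stellar reflex velocity in m/s for a circular, edge-on orbit."""
    return (2 * math.pi * G / period_s) ** (1 / 3) * m_planet_kg / m_star_kg ** (2 / 3)

k_jupiter = rv_semi_amplitude(1.898e27, 11.86 * YEAR)  # roughly 12.5 m/s
k_earth = rv_semi_amplitude(5.972e24, 1.0 * YEAR)      # roughly 0.09 m/s, i.e. ~9 cm/s

print(f"Jupiter: {k_jupiter:.1f} m/s, Earth: {k_earth * 100:.0f} cm/s")
```

The simplified formula lands just under the quoted 13 meters per second for Jupiter and right at 9 centimeters per second for Earth, which is all a rough check like this can promise.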
Image: A schematic of the Doppler effect: as the star wobbles under the gravitational influence of its planets, NEID measures the resulting wavelength shifts in its spectrum. Credit: NEID team.
It’s a tribute to the team behind NEID that they worked through COVID-related shutdowns, which in essence forced them to start the commissioning process over, and, as the NEID blog notes, for two winters in a row they had to endure 12-hour nights of observing for up to a week at a time to get the job done. Nice work!
From December of 2020 to April of 2021, a series of experiments tested the reliability of the instrument, its precision and its limitations, making measurements of Doppler-stable stars to analyze the spectrometer’s limiting velocity measurement precision, as recounted in this blog entry by team members, from which this:
What we learned is that across a wide variety of targets, and in a wide variety of conditions, NEID offers radial velocity measurement precision that rivals the best facilities in the world. Our measurements of stable stars consistently show variability less than 1 meter per second. This on-sky stability reflects a combination of noise sources, including the instrument, statistical fluctuations (so-called “photon noise”), and the star’s inherent atmospheric variability. Thus, while it is hard to pin down an exact number, we are assured that NEID’s instrument-limited measurement precision is significantly better than 1 meter per second.
NEID seems on course to complement other high precision spectrographs like HARPS (High Accuracy Radial Velocity Planet Searcher), which is installed at the European Southern Observatory’s 3.6m telescope at La Silla Observatory in Chile, and its successor ESPRESSO (Echelle SPectrograph for Rocky Exoplanets and Stable Spectroscopic Observations). How far can these instruments push into the centimeters-per-second range, so crucial for finding Earth-class planets?
Image: NEID radial velocity measurements of the quiet star tau Ceti. Our on-sky measurements are stable to better than 50 centimeters per second, which indicates the instrument itself is even more stable. Credit: NEID Team.
The NEID team also plans to pair NEID with a smaller solar telescope during the day, with the express purpose of gathering data to train better machine learning algorithms for separating the signal of planets from ‘starspots’ on the target star. These can confound detection efforts by mimicking a planet’s signature. The solar observations will be released publicly to help scientists address the problem, with data processing coordinated by the NASA Exoplanet Science Institute (NExScI) at Caltech/IPAC. The data will be made available through the NEID science archive.
For one direction NEID goes next, we can turn to Andrea Lin (Penn State), who designed and built the solar telescope. Lin explains her own choice of targets:
“The solar telescope was a fun project to work on. I look forward to using NEID for my doctoral dissertation research. One of my planned projects with NEID is to look for planets around K-dwarfs. These stars line up incredibly well with NEID’s capabilities, and the radial velocity method in general, so I’m hoping to discover some small—hopefully terrestrial!—planets around nearby K-stars.”
For further background on NEID, see the Centauri Dreams article New Entry in High Precision Spectroscopy.
You’re sitting at the end of a long conference table, interviewing for your dream job. You’ve made it this far, but there’s just one more question you have to answer. “Is it possible for a line that passes through the origin to pass through no other rational points?” Five pairs of intense eyes watch you, waiting for your response. Do you get the job? You might think this only happens in story...
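(A spoiler for the curious: the answer to the quoted question is yes — any line through the origin with irrational slope works. A two-line sketch, in my own notation:)

```latex
\textbf{Claim.} The line $y = \sqrt{2}\,x$ passes through no rational point
other than the origin.

\textbf{Sketch.} Suppose $(a,b) \in \mathbb{Q}^2$ lies on the line with
$a \neq 0$. Then
\[
  \sqrt{2} \;=\; \frac{b}{a} \;\in\; \mathbb{Q},
\]
contradicting the irrationality of $\sqrt{2}$.
```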
We’re excited to announce that NASA has expanded our contract with their Commercial SmallSat Data Acquisition (CSDA) Program to provide access to PlanetScope imagery for scientific research use for all U.S. Federal Civilian researchers and National Science Foundation funded researchers, including their contractors and grantees – roughly 280,000 eligible users. This expands access on the existing contract that currently supports NASA and NASA funded researchers. Since our first contract with NASA in 2019, scientists have leveraged Planet imagery for a variety of research projects focused on climate change, biodiversity loss, and complex sustainability problems. We are eager to see what projects this expanded pool of researchers will pursue, as it will enable more strategic information sharing across research groups and facilitate greater scientific use. Earlier this month, Planet entered into a definitive merger agreement with dMY Technology Group, Inc. IV (NYSE:DMYQ), a special purpose acquisition company, to become a publicly-traded company.
Earth is in the midst of a climate and biodiversity crisis—including rapidly changing forests, high-risk agricultural practices, and melting polar ecosystems—caused by widespread and endless habitat destruction, and a global economy still reeling from the worst pandemic in a century. Planet’s high-cadence and high-resolution data allows researchers and scientists across the globe to better understand and monitor our dynamic planet. In just the first half of 2021, researchers utilizing Planet imagery via NASA CSDA have monitored the melting summer ice in Greenland, evaluated corn and soybean yields at the sub-field scale, mapped snow-covered areas via machine learning, and investigated the causes behind the massive Chamoli landslide in India.
“Responding to today’s climate crisis and the loss of biological diversity is urgently important, and the Earth observation community plays a critical role,” said Planet co-founder and CSO, Robbie Schingler. “It is imperative that researchers have access to the best tools that allow them to gain a deeper understanding of our changing planet. We are excited to deliver Planet’s high cadence data into the hands of even more users in this research community so they can highlight facts, discover trends, and prototype new solutions that accelerate scientific understanding to power climate action.”
We’re thrilled to provide Planet data to researchers in an effort to unlock even greater insights and discoveries that can benefit our world. Visit Planet’s NASA page to get more information on the Planet-NASA CSDA agreement and learn how to apply. Additionally, we’ll be hosting a free live webinar to share more about Planet data access through the NASA CSDA Program on Friday, August 6, 2021 – register here!
When I mentioned wanting a small propane stove to make coffee on a few days ago I had no idea that there was so much to choose from or that so many radio hams really know their hiking gear.
In retrospect, I should have known!
What I want to do is drive out a few early mornings a week into a park or forest area for some low-power radio adventure. Set up the antenna and equipment, make a few CW contacts, then make a fresh cup of Joe before heading back home to start the day right.
Sure, I could carry coffee from home but where’s the fun in that? I want to brew it fresh, right there in the field.
Nineteen different readers sent along recommendations (thank you very much!) and while a few different systems were suggested, there was a clear consensus that what best matched my requirements was one of these:
Blistering boil times come standard on the Flash Java Kit personal cooking system, which includes accessories for making a fresh cup of coffee on the trail. Optimized for efficiency, the Flash boils water in a lightning-quick 100 seconds, making it the fastest Jetboil ever. Jetboil’s 1-liter FluxRing cooking cup with insulating cozy makes boiling water—and keeping it warm—a breeze. The kit’s ultra-stowable Silicone Coffee Press stores perfectly inside the cooking vessel, so you can make French press coffee without hauling extra gear. Start heating instantly with the convenient, reliable, pushbutton igniter, and verify that the water’s ready with the thermochromatic color-change heat indicator. Bottom cup doubles as a measuring cup and a bowl and is easy to pack and carry at only 13.1 ounces. Check out our selection of Hikers Brew Coffee for the perfect Java kit coffee companion.
Message received 599, it’s already on the way!
Gus after Liberty Bell 7, July 1961
Gus and Betty at Patrick AFB after his Liberty Bell 7 flight, July 1961
Alan, Gordo, John, Scott, and the Grissom family entering Gus’ post-flight press conference, July 1961
Gus Grissom at his post-flight press conference, 1961. Beautiful footage from the new documentary The Real Right Stuff (2020).
An 18-year-old Tennessee man who helped set in motion a fraudulent distress call to police that led to the death of a 60-year-old grandfather in 2020 was sentenced to 60 months in prison today.
Shane Sonderman, of Lauderdale County, Tenn. admitted to conspiring with a group of criminals that’s been “swatting” and harassing people for months in a bid to coerce targets into giving up their valuable Twitter and Instagram usernames.
At Sonderman’s sentencing hearing today, prosecutors told the court the defendant and his co-conspirators would text and call targets and their families, posting their personal information online and sending them pizzas and other deliveries of food as a harassment technique.
Other victims of the group told prosecutors their tormentors further harassed them by making false reports of child abuse to social services local to the target’s area, and false reports in the target’s name to local suicide prevention hotlines.
Eventually, when subjects of their harassment refused to sell or give up their Twitter and Instagram usernames, Sonderman and others would swat their targets — or make a false report to authorities in the target’s name with the intention of sending a heavily armed police response to that person’s address.
For weeks throughout March and April 2020, 60-year-old Mark Herring of Bethpage, Tenn. was inundated with text messages asking him to give up his @Tennessee Twitter handle. When he ignored the requests, Sonderman and his buddies began having food delivered to Herring’s home via cash on delivery.
At one point, Sonderman posted Herring’s home address in a Discord chat room used by the group, and a minor in the United Kingdom quickly followed up by directing a swatting attack on Herring’s home.
Ann Billings was dating Mr. Herring and was present when the police surrounded his home. She recalled for the Tennessee court today how her friend died shortly thereafter of a heart attack.
Billings said she first learned of the swatting when a neighbor called and asked why the street was lined with police cars. When Mr. Herring stepped out on the back porch to investigate, police told him to put his hands up and to come to the street.
Unable to disengage a lock on his back fence, Herring was instructed to somehow climb over the fence with his hands up.
“He was starting to get more upset,” Billings recalled. “He said, ‘I’m a 60-year-old fat man and I can’t do that.'”
Billings said Mr. Herring then offered to crawl under a gap in the fence, but when he did so and stood up, he collapsed of a heart attack. Herring died at a nearby hospital soon after.
Mary Frances Herring, who was married to Mr. Herring for 28 years, said her late husband was something of a computer whiz in his early years who secured the @Tennessee Twitter handle shortly after Twitter came online. Internet archivist Jason Scott says Herring was the creator of the successful software products Sparkware and QWIKMail; Scott has two hours’ worth of interviews with Herring from 20 years ago here.
Perhaps the most poignant testimony today came when Ms. Herring said her husband — who was killed by people who wanted to steal his account — had a habit of registering new Instagram usernames as presents for friends and family members who’d just had children.
“If someone was having a baby, he would ask them, ‘What are you naming the baby?’,” Ms. Herring said. “And he would get them that Instagram name and give it to them as a gift.”
Valerie Dozono also was an early adopter of Instagram, securing the two-letter username “VD” for her initials. When Dozono ignored multiple unsolicited offers to buy the account, she and many family and friends started getting unrequested pizza deliveries at all hours.
When Dozono continued to ignore her tormentors, Sonderman and others targeted her with a “SIM-swapping attack,” a scheme in which fraudsters trick or bribe employees at wireless phone companies into redirecting the target’s text messages and phone calls to a device they control. From there, the attackers can reset the password for any online account that allows password resets via SMS.
But it wasn’t the subsequent bomb threat that Sonderman and friends called in to her home that bothered Dozono most. It was the home invasion that was ordered at her address using strangers on social media.
Dozono said Sonderman created an account on Grindr — the location-based social networking and dating app for gay, bi, trans and queer people — and set up a rendezvous at her address with an unsuspecting Grindr user who was instructed to waltz into her home as if he was invited.
“This gentleman was sent to my home thinking someone was there, and he was given instructions to walk into my home,” Dozono said.
The court heard from multiple other victims targeted by Sonderman and friends over a two-year period, including Shane Glass, who started getting harassed in 2019 over his @Shane Instagram handle. Glass told the court that endless pizza deliveries, as well as SIM swapping and swatting attacks, left him paranoid for months that his assailant could be someone stalking him nearby.
Judge Mark Norris said Sonderman’s agreement to plead to one count of extortion by threat of serious injury or damage carries with it a recommended sentence of 27 to 33 months in prison. However, the judge said other actions by the defendant warranted up to 60 months (5 years) in prison.
Sonderman might have been eligible to knock a few months off his sentence had he cooperated with investigators and refrained from committing further crimes while out on bond.
But prosecutors said that shortly after his release, Sonderman went right back to doing what he was doing when he got caught. Investigators who subpoenaed his online communications found he’d logged into the Instagram account “FreeTheSoldiers,” which was known to have been used by the group to harass people for their social media handles.
Sonderman was promptly re-arrested for violating the terms of his release, and prosecutors played for the court today a recording of a phone call Sonderman made from jail in which he brags to a female acquaintance that he wiped his mobile phone two days before investigators served another search warrant on his home.
Sonderman himself read a lengthy statement in which he apologized for his actions, blaming his “addiction” on several psychiatric conditions — including bipolar disorder. While his recitation was initially monotone and practically devoid of emotion, Sonderman eventually broke down in tears that made the rest of his statement difficult to hear over the phone-based conference system the court made available to reporters.
The bipolar diagnosis was confirmed by his mother, who sobbed as she simultaneously begged the court for mercy while saying her son didn’t deserve any.
Judge Norris said he was giving Sonderman the maximum sentence allowed by law under the statute — 60 months in prison followed by three years of supervised release — but implied that his sentence would be far harsher if the law permitted.
“Although it may seem inadequate, the law is the law,” Norris said. “The harm it caused, the death and destruction… it’s almost unspeakable. This is not like cases we frequently have that involve guns and carjacking and drugs. This is a whole different level of insidious criminal behavior here.”
Sonderman’s sentence pales in comparison to the 20-year prison time handed down in 2019 to serial swatter Tyler Barriss, a California man who admitted making a phony emergency call to police in late 2017 that led to the shooting death of an innocent Kansas resident.
From the Open Source Hardware Assocation:
Today OSHWA, in collaboration with the Engelberg Center on Innovation Law & Policy at NYU Law, is excited to launch The 2021 State of Open Source Hardware. This graphic report builds on data from OSHWA’s Open Hardware Certification Program, the annual Open Hardware Summit, and the annual Open Hardware Community Survey.
The state of open source hardware is strong. In the eleven years since the first Open Hardware Summit we have seen open hardware grow, with new communities creating new hardware for new uses around the world. Hundreds of pieces of open source hardware have been certified as compliant with the Open Hardware Definition from countries on every continent except Antarctica.
A wide range of companies have been built and grown on the foundation of open source hardware. Dozens of Ada Lovelace Fellows have helped to diversify the open hardware community. Nonprofit organizations in academia, conservation, science, medical, and more have helped to broaden the impact of open hardware in innumerable ways.
The results of the community survey make it clear that people come to open hardware for a range of reasons and use open hardware to address a range of needs. However they come to open hardware, once they start using it they are hooked. Community members study designs, adapt them, and build upon existing designs in order to achieve their goals. Open hardware is used in teaching, the development of commercial products, and everything in between.
What are you waiting for? Click over, check it out, and let us know what you think. While the state of open source hardware is strong in 2021, we think it may get even stronger in the future.
The version of qemu in dports is not set up to support this, yet. Until then, you can download a prebuilt version.
When the microbiologist Jillian Banfield and her colleagues started combing through samples of mud from wetland environments three years ago, they had a specific goal in mind: to recover and analyze fragments of DNA from large bacteria-killing viruses. And they did. But they also found something unexpected. Some of the DNA in those samples wasn’t immediately recognizable as coming from viruses...
Explore 2021 is all about building connections across sectors and geographies to find solutions to pressing global challenges, and this year you will hear straight from the leaders and organizations reimagining how we drive economic growth, improve human livelihoods, and steward the environment.
Introducing the first three members of our keynote speaker lineup!
Dr. Catherine Nakalembe, Africa Program Director of NASA Harvest and 2020 Africa Food Prize Laureate, is using satellite imagery to improve the lives of smallholder farmers across Africa by facilitating data-driven agricultural decision-making.
Andreas Dahl-Jørgensen is the Managing Director of Norway’s International Climate and Forests Initiative (NICFI), overseeing ambitious projects that enable researchers and policy makers to access satellite data for tropical forest research and conservation programs. Join us for Andreas’ talk to discover how NICFI is bringing high-resolution data to the masses to uplevel tropical forest conservation projects.
Tune in to a fireside chat with Bayer SVP of Global Public Affairs Dr. Sara Boettiger for a deeper look into the digital transformation in agriculture, and the next-generation, sustainable practices that are benefiting growers, consumers, and the environment.
In the spirit of Global Connection, these speakers are coming together to kick off the broader conversation with the afternoon breakout sessions across science, agriculture, sustainability, and government, ranging from designing more sustainable agriculture policy, to preparing for climate risks, to monitoring supply chains. Full agenda details are coming soon!
In addition to the thought leaders above, you’ll hear directly from Planet executives Will Marshall, CEO; Kevin Weil, President, Product and Business; Ashley Johnson, Chief Financial and Operating Officer; and James Mason, SVP of Space Systems. These leaders and more will dive deeper into the future of the company, the product vision, and the next big milestones in our mission toward Queryable Earth.
Explore 2021 will be convened virtually on October 12th-13th, and will feature a range of breakout sessions, hands-on workshops, and networking opportunities. Register for free and learn more at explore21.planet.com.
If we ever find life on a planet orbiting a white dwarf star, it will be life that has emerged only after the red giant phase has passed and the white dwarf has emerged as a stellar relic. That’s the conclusion of a study being discussed today at the National Astronomy Meeting of Britain’s Royal Astronomical Society, which convened online due to COVID concerns. The work is also recently published in Monthly Notices of the Royal Astronomical Society.
At issue is the damage caused by powerful stellar winds that occur as a star makes the transition from red giant to white dwarf. This is the scenario that awaits our own Sun, which should swell to red giant status in roughly five billion years, eventually becoming a dense white dwarf about the size of the Earth. We’ve speculated in these pages about life surviving this phase of stellar evolution, but the study, led by Dimitri Veras (University of Warwick), concludes that this is all but impossible.
We know that the Earth is protected by a magnetosphere that deflects harmful charged particles from the solar wind along magnetic field lines. You would think that a magnetosphere would ease atmospheric erosion for those planets that have one (Mars, for example, does not), but the stellar winds of the evolving star will be far stronger than the Sun’s today. The authors modeled the winds from eleven different kinds of stars in a range of masses. They find this:
The plot shows that an exo-Jovian analogue would just reach the threshold for hosting a magnetopause at some point during giant branch evolution. However, much higher fields would be required to maintain any magnetopause throughout these giant branch phases. For terrestrial and potentially habitable planets, any protection previously afforded by the magnetosphere would effectively disappear. This lack of protection, compounded with orbital expansion and varying stellar luminosities, suggest that life would be challenged to survive throughout the giant branch phases of stellar evolution.
Could scenarios emerge in which moons around the gas giants maintain life under an ice crust? It’s hard to see how. Veras, working with Aline Vidotto (Trinity College, Dublin) points out that a habitable zone supporting liquid water would move from some 150 million kilometers from the Sun to up to 6 billion kilometers, pushing it beyond the orbit of Neptune. Planets can migrate during this phase, but the paper argues that the habitable zone moves outward faster than the planet, a likely fatal threat. Thus life around a white dwarf will need to start over.
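The scale of that habitable-zone migration follows from a simple flux argument: the distance at which a planet receives Earth’s present-day insolation grows as the square root of the stellar luminosity. A rough illustration — the 1,600-solar-luminosity red giant below is my own round-number assumption, not a figure from the paper:

```python
import math

AU_KM = 1.496e8  # kilometers per astronomical unit

def earth_flux_distance_au(luminosity_solar):
    """Distance (in AU) at which a planet receives the same stellar flux
    as Earth does today: d = sqrt(L / L_sun) * 1 AU."""
    return math.sqrt(luminosity_solar)

# The Sun today: Earth-like flux at ~1 AU, about 150 million km.
d_now = earth_flux_distance_au(1.0) * AU_KM

# A red giant at an assumed ~1600 L_sun: sqrt(1600) = 40 AU,
# roughly 6 billion km -- beyond Neptune's ~30 AU orbit.
d_giant = earth_flux_distance_au(1600.0) * AU_KM

print(f"{d_now:.3g} km now vs {d_giant:.3g} km during the giant phase")
```

With that assumed luminosity the Earth-flux distance lands at roughly 40 AU, consistent with a habitable zone pushed out to billions of kilometers, past Neptune.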
Image: An illustration of material being ejected from the Sun (left) interacting with the magnetosphere of the Earth (right). When the Sun evolves to become a red giant star, the Earth may be swallowed by our star’s atmosphere, and with a much more unstable solar wind, even the resilient and protective magnetospheres of the giant outer planets may be stripped away. MSFC / NASA. Licence type Attribution (CC BY 4.0).
Thus the movement of the habitable zone outward and the difficulty in maintaining a magnetosphere throughout this phase of stellar evolution make preserving habitability extremely unlikely. The authors’ model shows that the strong stellar wind combines with the expanding orbits of surviving planets to first shrink and then expand the magnetosphere of a planet over time. It would take a magnetic field 100 times stronger than Jupiter’s to maintain a stable magnetosphere all the way through the transition of red giant to white dwarf:
“We find that a planetary magnetosphere will always be quashed at some point during the giant branch phases, unless the planet’s magnetic field strength is at least two orders of magnitude higher than Jupiter’s current value.”
And afterwards? White dwarfs do not emit stellar winds, so that threat disappears. Any life we find around a white dwarf will doubtless have developed during the white dwarf phase. If such exists, we may be able to detect its biomarkers through future space missions — recall that white dwarfs are roughly the size of the Earth, so a transiting planet would produce a very deep transit and would seem an ideal target for transmission spectroscopy, in which we analyze the components of a planetary atmosphere as starlight passes through it.
Most of the exoplanets we know about orbit main sequence stars, but about 100 are known to orbit red giants, and at least four have been found orbiting white dwarf stars. These worlds are survivors of stellar evolution and thus useful as benchmarks in tracing the lifetime of their systems. Two of the white dwarf planets, says Veras, are close to their star’s habitable zone, an indication of planet migration showing that an Earth-sized planet could exist in such an orbit. And he adds:
“These examples show that giant planets can approach very close to the habitable zone. The habitable zone for a white dwarf is very close to the star because they emit much less light than a Sun-like star. However, white dwarfs are also very steady stars as they have no winds. A planet that’s parked in the white dwarf habitable zone could remain there for billions of years, allowing time for life to develop provided that the conditions are suitable.”
The paper is Veras & Vidotto, “Planetary magnetosphere evolution around post-main-sequence stars,” Monthly Notices of the Royal Astronomical Society Vol. 506, Issue 2 (September 2021), pp. 1697-1703. Abstract / Preprint.
This week in deep learning, we bring you Facebook's open-source chat bot that builds long-term memory and searches the internet, an AI-powered grocery experience, a curated benchmark for distribution shifts and Google's novel approach to image synthesis using diffusion models.
You may also enjoy Samsung's robot vacuum with world-class object recognition, AI voice actors that sound more human than ever, a paper on multimodal representation for neural code search, a paper on searching for object detection architectures for mobile accelerators, and more!
As always, happy reading and hacking. If you have something you think should be in next week’s issue, find us on Twitter: @dl_weekly.
Until next week!
Facebook AI Research built BlenderBot 2.0, the first chatbot that can simultaneously build long-term memory it can continually access, search the internet for timely information, and have sophisticated conversations on nearly any topic
Yoshua Bengio, Geoffrey Hinton, and Yann LeCun explain the current challenges of deep learning and what the future might hold.
A Seattle-based startup that offers AI voices for corporate e-learning videos and other areas.
New starter kits make it easier to create your model as a microservice, deploy on RedHat OpenShift, and generally get your machine learning apps to production in a cloud-native environment.
Hungryroot, a delivery service that makes use of collaborative filtering, hopes to be the Netflix equivalent for online groceries in the United States.
Cat Face is the latest SnapML-powered Lens built by the team at Fritz AI, performing image-to-image translation to transform outlines of cat faces into realistic cats with a model 95% smaller than Pix2Pix.
Samsung announces the launch of Jet Bot AI+, the world’s first robot vacuum that comes equipped with a 3D active stereo-type sensor and world-class object recognition.
A comprehensive tutorial on setting up VoiceTurn using an Arduino Nano 33 BLE Sense and Edge Impulse.
A technical introduction to the manufacturing use-cases of Coral Edge TPU, a complete platform for accelerating neural networks on embedded devices.
A brief blog on the use of some synthetic data, a digital twin, and Fleet Command, a hybrid-cloud platform for deploying and managing AI models at the edge.
WILDS, a curated benchmark of 10 datasets that reflect natural distribution shifts arising from different cameras, hospitals, molecular scaffolds, experiments, demographics, countries, time periods, users, and codebases.
Google presents two connected approaches that push the boundaries of the image synthesis quality for diffusion models — Super-Resolution via Repeated Refinements (SR3) and a model for class-conditioned synthesis, called Cascaded Diffusion Models (CDM).
A comprehensive article detailing the inefficiencies of labeling data by hand.
A blog that thoroughly describes a new method for collaborative distributed training that can adapt itself to the network and hardware constraints of participants.
The Netflix Data Explorer tool allows users to explore data stored in several popular datastores (currently Cassandra, Dynomite, and Redis).
A library for generating massive amounts of fake data on the browser and on Node.js.
A framework to simplify the integration, scaling and acceleration of complex multi-step analytics and machine learning pipelines on the cloud.
Semantic code search is about finding semantically relevant code snippets for a given natural language query. In the state-of-the-art approaches, the semantic similarity between code and query is quantified as the distance of their representation in the shared vector space. In this paper, to improve the vector space, we introduce tree-serialization methods on a simplified form of AST and build the multimodal representation for the code data. We conduct extensive experiments using a single corpus that is large-scale and multi-language: CodeSearchNet. Our results show that both our tree-serialized representations and multimodal learning model improve the performance of neural code search. Last, we define two intuitive quantification metrics oriented to the completeness of semantic and syntactic information of the code data.
Many reinforcement learning (RL) agents require a large amount of experience to solve tasks. We propose Contrastive BERT for RL (CoBERL), an agent that combines a new contrastive loss and a hybrid LSTM-transformer architecture to tackle the challenge of improving data efficiency. CoBERL enables efficient, robust learning from pixels across a wide range of domains. We use bidirectional masked prediction in combination with a generalization of recent contrastive methods to learn better representations for transformers in RL, without the need of hand engineered data augmentations. We find that CoBERL consistently improves performance across the full Atari suite, a set of control tasks and a challenging 3D environment.
Inverted bottleneck layers, which are built upon depthwise convolutions, have been the predominant building blocks in state-of-the-art object detection models on mobile devices. In this work, we investigate the optimality of this design pattern over a broad range of mobile accelerators by revisiting the usefulness of regular convolutions. We discover that regular convolutions are a potent component to boost the latency-accuracy trade-off for object detection on accelerators, provided that they are placed strategically in the network via neural architecture search. By incorporating regular convolutions in the search space and directly optimizing the network architectures for object detection, we obtain a family of object detection models, MobileDets, that achieve state-of-the-art results across mobile accelerators. On the COCO object detection task, MobileDets outperform MobileNetV3+SSDLite by 1.7 mAP at comparable mobile CPU inference latencies. MobileDets also outperform MobileNetV2+SSDLite by 1.9 mAP on mobile CPUs, 3.7 mAP on Google EdgeTPU, 3.4 mAP on Qualcomm Hexagon DSP and 2.7 mAP on Nvidia Jetson GPU without increasing latency. Moreover, MobileDets are comparable with the state-of-the-art MnasFPN on mobile CPUs even without using the feature pyramid, and achieve better mAP scores on both EdgeTPUs and DSPs with up to 2x speedup.
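The trade-off the paper revisits is easy to see in a back-of-envelope parameter count (our own illustration, not taken from the paper): a regular convolution costs far more parameters than a depthwise-separable one, which is why it was abandoned on mobile, but it maps better onto some accelerators.

```python
# Parameter counts for a regular k x k convolution vs. a depthwise-separable
# one (depthwise k x k per channel followed by a 1x1 pointwise projection).
def regular_conv_params(c_in, c_out, k=3):
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k=3):
    return k * k * c_in + c_in * c_out

c_in, c_out = 64, 128
print(regular_conv_params(c_in, c_out))         # 73728
print(depthwise_separable_params(c_in, c_out))  # 8768
```

The roughly 8x parameter gap explains why placement matters: regular convolutions only pay off where an accelerator can execute the dense computation efficiently, which is what the neural architecture search discovers.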
I’ve been as curious as anyone about the Lab599 TX-500 ultra compact all-mode 10W HF/50MHz SDR QRP transceiver. I’ve watched some of the many videos created about it since it was first announced and have been intrigued with its unique form-factor and ruggedness for field work.
But sleek styling and gushing fan-boy videos aren’t really sufficient for making a purchase decision like this, so I waited for the QST review (August 2021) before placing an order for the TX-500. Ham Radio Outlet is the US distributor for the transceiver, though supplies are very limited and these were “out of stock” when I placed my order.
I’m told mine will be shipped with the “next group” and while there are no guarantees, that could be in the next few weeks. In the meantime, I wait and wonder what it will be like to own amateur radio equipment that was made in Russia - certainly a first for me.
Apparently there is a form-fitting case for batteries that bolts on the back, but I’m not sure how interested I am in that option. I am disappointed there is no built-in auto-tuner (where would they put it?) and yes, I did read the comment in the QST review about the level of spurious emissions and the suggestion to not use an RF amplifier with this transceiver.
My use case for the TX-500 is for portable CW work only and I don’t plan to tote along an amplifier, though I am keeping a close eye on this development, a 60W amp with an auto-tuner in a form factor that matches the transceiver and bolts on the back…
Using open source data and easily customizable software, Azavea worked with PeopleForBikes to improve city transit networks.
The post Improving City Transit Networks through Data and Public Policy appeared first on Azavea.
We learned another new term this year. High-amplitude waves in the Jetstream, Rex Blocks, compressed high pressure zones: the details are complicated to us common folk, but well understood by atmospheric scientists. One thing is clear – this Heat Dome anomaly is one we are likely to see more often as anthropogenic climate change becomes anthropogenic climate disruption.
But I don’t want to talk about the causes, I want to talk about what happened, and what we do now.
Here in New Westminster, dozens of people died. We don’t have a complete accounting yet, and the coroner will no doubt report out in a few months when the horror of the situation has passed, but there may have been more than 40 “excess deaths” in New Westminster in the 4-day period of highest heat. Neighbours of ours, residents in our community. People who died in their home because it was too hot for their body to cope, and because they couldn’t get to help, or didn’t know they needed help. Or (alas) help was not available.
There were a few stories in the news, but aside from the horrific loss of Lytton, the news cycle around the Heat Dome has already begun to pass, which frightens me. More disasters are pushing it out of our minds. Is this the “New Normal” of living through COVID and an ongoing poisoned drug supply crisis – us becoming desensitized to mass death stories? With 900+ COVID deaths in our Health Region, 100+ opioid deaths in our community, does a few dozen avoidable heat-related deaths register? Do we even know how to get angry about this?
We talked about this in Council last week, and it came up in the UBCM executive meeting with the Minister of Municipal Affairs I attended on Friday. As we are contemplating the immediate impacts of wildfires, and the further-reaching effects of wildfire smoke, the conversation about what went wrong during the heat emergency is getting lost in dealing with this week’s emergency, which will in turn lose time to next week’s emergency. I lament that this is what climate disruption looks like in practice.
We will get a report in Council on how we can update our emergency planning, and the Coroner will likely issue a report on provincial and regional responses, but I want to concentrate for now (and sorry, it has taken me some time to think about how to write this) on what happened here, in our community, especially in my neighbourhood of the Brow of the Hill, where many “sudden deaths” occurred.
New West Fire and Rescue and New Westminster Police responded to an unprecedented number of health emergencies and sudden death calls. We know the ambulance service failed – they simply could not dispatch people to calls fast enough, as there were not enough ambulances and crews available. This meant people at 911 couldn’t leave calls, and lines got backed up, causing E-Comm to fail. Firefighters were challenged to keep up, as they could not pass medical calls over to ambulances that were not arriving. As our fire trucks are not medical transports, Fire crews took the unprecedented step of calling taxis and having a member accompany patients to Hospital in that cab, so crews and equipment could move on to the next call, leaving our Firefighters under-staffed as many had to wait in the hospital for patients to be admitted, because the emergency room was slammed. Even as fire and police struggled to keep up and attend to “sudden death” calls, the coroner service phone lines were overwhelmed and at one point stopped responding.
It was a cascading failure, a demonstration we were simply not ready, as a City and as a Province. People died, leaving behind families and neighbours traumatized by the lack of response. I am afraid first responders were equally traumatized, as they had to operate in a broken and failing system that didn’t allow them to do the work they are trained for and dedicated to doing – protect and comfort the residents they serve. Instead, they spent three days in the stifling heat surrounded by the suffering and death of people they wanted to help. I cannot imagine, but once again, they deserve not just our recognition and gratitude, but a response – a way to fix this so they don’t have to go through it again.
Like many of you, I heard anecdotes about people who were in dangerous situations, and people who helped them out. A community member encountering an elderly man on the street who was disoriented after shopping for himself and his house-bound wife, with no access to cooling centre support because information was not available in his native language. A neighbour who saw a hyperthermic woman sitting in the driver’s seat of a car parked in front of his house, and took her in to cool in his basement overnight because she didn’t know of anywhere else to go and her apartment was unlivable in the heat. Every neighbour-helping-neighbour story reminds us of the importance of community and compassion, but overshadows the story of the many people who surely fell through the cracks and were not lucky enough to have a good Samaritan help them through.
The City has a Heat Emergency plan, and it was invoked. Cooling centres were opened, communications around how to recognize and address heat stress and hyperthermia were distributed in the traditional way, outreach to impacted communities was initiated. City staff in community centres and first responders were prepared to operationalize the plan, carried water and ice and expected to be helping people. It turned out to not be nearly enough. I can be critical of the 911, Ambulance, and Coroner service failures and ask the Province to get this shit figured out right away, but we need to recognize at the same time the failures here at the local level.
First off, we learned (much like the rest of the Lower Mainland) that a plan that works for 32 degrees does not work for 40 degrees. This Heat Dome event was exacerbated by the high overnight lows – for a couple of days, temps never got below 25 degrees at night, so there was no opportunity for apartments to cool down or for people to get a comfortable sleep and build resilience. Cooling Centres that operate from 10:00am to 8:00pm are simply not enough in this situation. We have to figure out how to provide 24-hour centres, and how to staff them. We can also expand the opportunity for outdoor cooling with fountains, misters, and tents, and work out the logistics of making them safe and accessible.
We also were not as effective as we need to be at communicating the seriousness of the heat situation. This was not a “regular” heat emergency, it was something different, and we should have seen that coming and taken measures to tell the community. There is a language barrier (several, actually) we need to overcome, but there are also physical barriers to getting information to the front doors of people who live in apartments, to getting information to people in the Uptown and Downtown commercial areas, and encouraging people to connect with their neighbours and the people in their buildings. Indeed, we may even want to regulate that building managers check in with every tenant at least once a day during a heat emergency, and provide resources to residents. This may be as lifesaving as regulating fire alarms.
This is so much our climate chickens coming home to roost. Our Emergency Planning (and this is reflected in the Emergency Response exercises performed in the region, where these plans are tested and refined) has traditionally centred around floods and earthquakes. The SARS outbreak added pandemic planning to that suite (which we were fortunate to have as we began our response to COVID) and Lac-Mégantic caused us to update our rail hazardous incident planning. We have cold temperature and warm temperature response plans, but the current scale of climate disruption is clearly going to lead us to re-think what a regional emergency is. Heat Domes and smoke events like last summer are going to need a new approach.
It is hard for government to admit we failed, but there is no doubt we did here, as a City and as a Province. We should have been better prepared, and we need to be better prepared. We need to communicate better and differently, and we need to assure First Responders are resourced to do the job of supporting people in dangerous times. We have work to do.
When I was a kid, I heard the term “hiding your light under a bushel” and couldn’t figure out what it meant. I asked my mom, who explained the concept.
When she was done, I believe the three primary questions I asked were:
Why would somebody do that?
A bushel of what?
Wouldn’t the bushel catch fire when the lightbulb gets hot?
I was a very literal-minded child.
A federal judge in Connecticut today handed down a sentence of time served to spam kingpin Peter “Severa” Levashov, a prolific purveyor of malicious and junk email, and the creator of malware strains that infected millions of Microsoft computers globally. Levashov has been in federal custody since his extradition to the United States and guilty plea in 2018, and was facing up to 12 more years in prison. Instead, he will go free under three years of supervised release and a possible fine.
A native of St. Petersburg, Russia, the 40-year-old Levashov operated under the hacker handle “Severa.” Over the course of his 15-year cybercriminal career, Severa would emerge as a pivotal figure in the cybercrime underground, serving as the primary moderator of a spam community that spanned multiple top Russian cybercrime forums.
Severa created and then leased out to others some of the nastiest cybercrime engines in history — including the Storm worm, and the Waledac and Kelihos spam botnets. His central role in the spam forums gave Severa a prime spot to advertise the services tied to his various botnets, while allowing him to keep tabs on the activities of other spammers.
Severa rented out segments of his Waledac botnet to anyone seeking a vehicle for sending spam. For $200, vetted users could hire his botnet to blast one million emails containing malware or ads for male enhancement drugs. Junk email campaigns touting employment or “money mule” scams cost $300 per million, and phishing emails could be blasted out through Severa’s botnet for the bargain price of $500 per million.

Early in his career, Severa worked very closely with two major purveyors of spam. One was Alan Ralsky, an American spammer who was convicted in 2009 of paying Severa and other spammers to promote pump-and-dump stock scams.
The other was a major spammer who went by the nickname “Cosma,” the cybercriminal thought to be responsible for managing the Rustock botnet (so named because it was a Russian botnet frequently used to send pump-and-dump stock spam). Microsoft, which has battled to scrub botnets like Rustock off of millions of PCs, later offered a still-unclaimed $250,000 reward for information leading to the arrest and conviction of the Rustock author.
Severa ran several affiliate programs that paid cybercriminals to trick people into installing fake antivirus software. In 2011, KrebsOnSecurity dissected “SevAntivir” — Severa’s eponymous fake antivirus affiliate program — showing it was used to deploy new copies of the Kelihos spam botnet.
In 2010, Microsoft — in tandem with a number of security researchers — launched a combined technical and legal sneak attack on the Waledac botnet, successfully dismantling it. The company would later do the same to the Kelihos botnet, a global spam machine which shared a great deal of code with Waledac and infected more than 110,000 Microsoft Windows PCs.
Levashov was arrested in 2017 while in Barcelona, Spain with his family. According to a lengthy April 2017 story in Wired.com, he got caught because he violated a basic security no-no: He used the same log-in credentials to both run his criminal enterprise and log into sites like iTunes.
In fighting his extradition to the United States, Levashov famously told the media, “If I go to the U.S., I will die in a year.” But a few months after his extradition, Levashov would plead guilty to four felony counts, including intentional damage to protected computers, conspiracy, wire fraud and aggravated identity theft.
At his sentencing hearing today, Levashov thanked his wife, attorney and the large number of people who wrote the court in support of his character, but otherwise declined to make a statement. His attorney read a lengthy statement explaining that Levashov got into spamming as a way to provide for his family, and that over a period of many years that business saw him supporting countless cybercrime operations.
The plea agreement Levashov approved in 2018 gave Judge Robert Chatigny broad latitude to impose a harsh prison sentence. The government argued that under U.S. federal sentencing guidelines, Levashov’s crimes deserved an “offense level” of 32, which for a first-time offender means a sentence of anywhere from 121 to 151 months (10 to 12 years).
But Judge Chatigny said he had concerns that “the total offense level does overstate the seriousness of Mr. Levashov’s crimes and his criminal culpability,” and said he believed Levashov was unlikely to offend again.
“33 months is a long time and I’m sure it was especially difficult for you considering that you were away from your wife and child and home,” Chatigny told the defendant. “I believe you have a lot to offer and hope that you will do your best to be a positive and contributing member of society.”
Mark Rasch, a former federal prosecutor with the U.S. Justice Department, said the sentencing guidelines are no longer mandatory, but they do reflect the position of Congress, the U.S. Sentencing Commission, and the Administrative Office of the U.S. Courts about the seriousness of the offenses.
“One of the problems you have here is it’s hard enough to catch and prosecute and convict cybercriminals, but at the end of the day the courts often don’t take these offenses seriously,” Rasch said. “On the one hand, sentences like these do tend to diminish the deterrent effect, but also I doubt there are any hackers in St. Petersburg right now who are watching this case and going, ‘Okay, great now I can keep doing what I’m doing.'”
Judge Chatigny deferred ruling on what — if any — financial damages Levashov may have to pay as a result of the plea.
The government acknowledged that it was difficult to come to an accurate accounting of how much Levashov’s various botnets cost companies and consumers. But the plea agreement states a figure of approximately $7 million — which prosecutors say represents a mix of actual damages and ill-gotten gains.
However, the judge delayed ruling on whether to impose a fine because prosecutors had yet to supply a document to back up the defendant’s alleged profit/loss figures. The judge also ordered Levashov to submit to three years of supervised release, which includes constant monitoring of his online communications.
This week a group of global newspapers is running a series of articles detailing abuses of NSO Group’s Pegasus spyware. If you haven’t seen any of these articles, they’re worth reading — and likely will continue to be so as more revelations leak out. The impetus for the stories is a leak comprising more than 50,000 phone numbers that are allegedly the targets of NSO’s advanced iPhone/Android malware.
Notably, these targets include journalists and members of various nations’ political opposition parties — in other words, precisely the people who every thinking person worried would be the target of the mass-exploitation software that NSO sells. And indeed, that should be the biggest lesson of these stories: the bad thing everyone said would happen now has.
This is a technical blog, so I won’t advocate for, say, sanctioning NSO Group or demanding answers from the luminaries on NSO’s “governance and compliance” committee. Instead I want to talk a bit about some of the technical lessons we’ve learned from these leaks — and even more at a high level, precisely what’s wrong with shrugging these attacks away.
A perverse reaction I’ve seen from some security experts is to shrug and say “there’s no such thing as perfect security.” More concretely, some folks argue, this kind of well-resourced targeted attack is fundamentally impossible to prevent — no matter how much effort companies like Apple put into stopping it.
And at the extremes, this argument is not wrong. NSO isn’t some script-kiddy toy. Deploying it costs hundreds of thousands of dollars, and fighting attackers with that level of resources is always difficult. Plus, the argument goes, even if we raise the bar for NSO then someone with even more resources will find their way into the gap — perhaps charging an even more absurd price. So let’s stop crapping on Apple, a company that works hard to improve the baseline security of their products, just because they’re failing to solve an impossible problem.
Still that doesn’t mean today’s version of those products are doing everything they could be to stop attacks. There is certainly more that corporations like Apple and Google could be doing to protect their users. However, the only way we’re going to get those changes is if we demand them.
Because spyware is hard to capture, we don’t know precisely how Pegasus works. The forensic details we do have come from an extensive investigation conducted by Amnesty International’s technology group. They describe a sophisticated infection process that proved capable of infecting a fully-patched iPhone 12 running the latest version of Apple’s iOS (14.6).
Many attacks used “network injection” to redirect the victim to a malicious website. That technique requires some control of the local network, which makes it hard to deploy to remote users in other countries. A more worrying set of attacks appear to use Apple’s iMessage to perform “0-click” exploitation of iOS devices. Using this vector, NSO simply “throws” a targeted exploit payload at some Apple ID such as your phone number, and then sits back and waits for your zombie phone to contact its infrastructure.
This is really bad. While cynics are probably correct (for now) that we can’t shut down every avenue for compromise, there’s good reason to believe we can close down a vector for 0-interaction compromise. And we should try to do that.
What we know is that these attacks take advantage of fundamental weaknesses in Apple iMessage: most critically, the fact that iMessage will gleefully parse all sorts of complex data received from random strangers, and will do that parsing using crappy libraries written in memory-unsafe languages. These issues are hard to fix, since iMessage can accept so many data formats and has been allowed to sprout so much complexity over the past few years.
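To make the parsing problem concrete, here is a generic illustration (our own sketch, nothing to do with Apple's actual code or formats) of the defensive pattern a safe parser applies to untrusted input: validate every length field before reading, and fail closed on malformed data. Exploits like these typically live in parsers that skip exactly these checks.

```python
# Defensive parsing of untrusted binary input: a toy type-length-value format.
# Every length field is validated against the remaining input before use.
import struct

def parse_tlv(data: bytes) -> list[tuple[int, bytes]]:
    """Parse TLV records (1-byte tag, 2-byte big-endian length, value)."""
    records, offset = [], 0
    while offset < len(data):
        if len(data) - offset < 3:
            raise ValueError("truncated record header")
        tag, length = struct.unpack_from(">BH", data, offset)
        offset += 3
        if length > len(data) - offset:
            raise ValueError("length field exceeds remaining input")
        records.append((tag, data[offset:offset + length]))
        offset += length
    return records

print(parse_tlv(b"\x01\x00\x02hi"))  # [(1, b'hi')]
```

In a memory-safe language a missing check raises an exception; in C, the same bug can become an attacker-controlled out-of-bounds read or write, which is why the memory-safety of the parsing libraries matters so much here.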
There is good evidence that Apple realizes the bind they’re in, since they tried to fix iMessage by barricading it behind a specialized “firewall” called BlastDoor. But firewalls haven’t been particularly successful at preventing targeted network attacks, and there’s no reason to think that BlastDoor will do much better. (Indeed, we know it’s probably not doing its job now.)
Adding a firewall is the cheap solution to the problem, and this is probably why Apple chose this as their first line of defense. But actually closing this security hole is going to require a lot more. Apple will have to re-write most of the iMessage codebase in some memory-safe language, along with many system libraries that handle data parsing. They’ll also need to widely deploy ARM mitigations like PAC and MTE in order to make exploitation harder. All of this work has costs and (more importantly) risks associated with it — since activating these features can break all sorts of things, and people with a billion devices can’t afford to have .001% of them crashing every day.
An entirely separate area is surveillance and detection: Apple already performs some remote telemetry to detect processes doing weird things. This kind of telemetry could be expanded as much as possible while not destroying user privacy. While this wouldn’t necessarily stop NSO, it would make the cost of throwing these exploits quite a bit higher — and make them think twice before pushing them out to every random authoritarian government.
Critics are correct that fixing these issues won’t stop exploits. The problem that companies like Apple need to solve is not preventing exploits forever, but a much simpler one: they need to screw up the economics of NSO-style mass exploitation.
Targeted exploits have been around forever. What makes NSO special is not that they have some exploits. Rather: NSO’s genius is that they’ve done something that attackers were never incentivized to do in the past: democratize access to exploit technology. In other words, they’ve done precisely what every “smart” tech business is supposed to do: take something difficult and very expensive, and make it more accessible by applying the magic of scale. NSO is basically the SpaceX of surveillance.
But this scalability is not inevitable.
NSO can afford to maintain a 50,000 number target list because the exploits they use hit a particular “sweet spot” where the risk of losing an exploit chain — combined with the cost of developing new ones — is low enough that they can deploy them at scale. That’s why they’re willing to hand out exploitation to every idiot dictator — because right now they think they can keep the business going even if Amnesty International or CitizenLab occasionally catches them targeting some human rights lawyer.
But companies like Apple and Google can raise both the cost and risk of exploitation — not just everywhere, but at least on specific channels like iMessage. This could make NSO’s scaling model much harder to maintain. A world where only a handful of very rich governments can launch exploits (under very careful vetting and controlled circumstances) isn’t a great world, but it’s better than a world where any tin-pot authoritarian can cut a check to NSO and surveil their political opposition or some random journalist.
In a perfect world, US and European governments would wake up and realize that arming authoritarianism really is bad for democracy – and that whatever trivial benefit they get from NSO is vastly outweighed by the very real damage this technology is doing to journalism and democratic governance worldwide.
But I’m not holding my breath for that to happen.
In the world I inhabit, I’m hoping that Ivan Krstić wakes up tomorrow and tells his bosses he wants to put NSO out of business. And I’m hoping that his bosses say “great: here’s a blank check.” Maybe they’ll succeed and maybe they’ll fail, but I’ll bet they can at least make NSO’s life interesting.
But Apple isn’t going to do any of this if they don’t think they have to, and they won’t think they have to if people aren’t calling for their heads. The only people who can fix Apple devices are Apple (very much by their own design) and that means Apple has to feel responsible each time an innocent victim gets pwned while using an Apple device. If we simply pat Apple on the head and say “gosh, targeted attacks are hard, it’s not your fault” then this is exactly the level of security we should expect to get — and we’ll deserve it.
NSO Group, the Israeli cyberweapons arms manufacturer behind the Pegasus spyware — used by authoritarian regimes around the world to spy on dissidents, journalists, human rights workers, and others — was hacked. Or, at least, an enormous trove of documents was leaked to journalists.
Most interesting is a list of over 50,000 phone numbers that were being spied on by NSO Group’s software. Why does NSO Group have that list? The obvious answer is that NSO Group provides spyware-as-a-service, and centralizes operations somehow. Nicholas Weaver postulates that “part of the reason that NSO keeps a master list of targeting…is they hand it off to Israeli intelligence.”
This isn’t the first time NSO Group has been in the news. Citizen Lab has been researching and reporting on its actions since 2016. It’s been linked to the Saudi murder of Jamal Khashoggi. It is extensively used by Mexico to spy on — among others — supporters of that country’s soda tax.
NSO Group seems to be a completely deplorable company, so it’s hard to have any sympathy for it. As I previously wrote about another hack of another cyberweapons arms manufacturer: “It’s one thing to have dissatisfied customers. It’s another to have dissatisfied customers with death squads.” I’d like to say that I don’t know how the company will survive this, but — sadly — I think it will.
Finally: here’s a tool that you can use to test if your iPhone or Android is infected with Pegasus. (Note: it’s not easy to use.)
Our “Meet the Team” series profiles the creative and curious people of First Mode. We are driven to find purposeful technology solutions to the world’s most important challenges. We take our work seriously but ourselves not too seriously. Want to work with us? View our open positions here.
What do you do at First Mode?
I am a Principal Software Architect and the co-Director of Software Engineering in the Perth office of First Mode. When people ask what that means, I say “computers.”
What are you working on right now?
We are currently designing a large, underground diesel-electric haul truck from the ground up. I love reimagining all the things that go into making a vehicle, and the challenge of having to make the design trades along the way.
I also spend time mentoring and coaching our software team, as well as working on growing our team and evolving our processes.
Why is this important?
I am a very strong believer that engineering is a craft, and that requires all of us to invest in ongoing coaching and professional development.
My career has been shaped by the teaching, support, and influence of other engineers that I have been lucky enough to have worked with. Nowadays, some of my proudest moments are seeing people that I have had a small part in mentoring or coaching have success in their own careers, whether by launching products or becoming senior leaders.
What drew you to First Mode originally?
I can remember the day quite clearly; I met Jan, the General Manager of First Mode’s Perth office, for a coffee when we had just come out of lockdown, and he asked me flat out if I wanted to build something awesome. Who would say no to that?
After my first call with Maggie and Krunal, I knew that First Mode was where I needed to be. I believe I even told my partner after that first call that I had “found my people.”
How did your passion for engineering and tech begin?
From a very young age there were always computers in my home, and I was always fascinated with them—learning how they worked, programming them, pulling them apart and not always putting them back together. I also loved building contraptions out of Capsela, Meccano, or the go-to for many people at First Mode—Lego.
What gets you out of bed in the morning?
It sounds clichéd, but getting to work with the First Mode team. We have a great group of people who are incredibly intelligent and capable, but more importantly are supportive and fun. They’re also fully prepared to indulge my odd quirks, like the entire team wearing pink shirts on Wednesdays, listening to me talk endlessly about Cinnamon Scrolls, or supporting superstitions like never catching lift L first thing in the morning.
What does your typical day look like?
Working on the opposite side of the world to much of my team means my day is split into “Seattle Time” and “Perth Time”; Seattle Time is catching up on everything that is happening on our projects on the other side of the ocean, and working together with that part of the team. Perth Time is when I get to sit down and work through my own individual contributions to our projects, whether that is writing code, reviewing documentation, talking to vendors, or hassling my colleagues to go and get coffee (from very specific coffee shops).
Do you have a mantra, a motto, or a mission statement?
“Be curious.” There is a world of amazing stuff out there, and so many people you meet have had such incredible experiences—so always ask questions, always test your assumptions, always wonder “what if.”
What is great about software engineering?
I love software engineering because it is a career that is creative, dynamic, and highly collaborative; it is not the classic stereotype of a loner in a basement. It’s thrilling watching something you have made from the ground up work in the real world for the first time, or seeing people using something that you have built.
At the end of the day, I think good software engineering is an art, and there are a near-infinite number of ways you can solve a problem. I love learning about the different ways people take a problem and come up with a solution.
Could you point to a project that you are most proud of?
Early in my career, I was part of a small team that built a fleet of autonomous robots as part of a DSTO (Australia's Defence Science & Technology Organization) competition. In the space of a year, and over many sleepless nights, we went from knowing nothing about building a robot to watching our small fleet of vehicles drive around the Adelaide Showground.
I am still incredibly proud of what that small team achieved. The rapid exploration, prototyping, creativity, and bootstrapping, all with a group of good mates, is something that opened up an incredible path for me and has defined my career.
What do you think is the most significant discovery or human endeavor of the last few years?
The development and deployment of the COVID-19 vaccines. It shows that when funding is made available and collaboration is encouraged, humanity can take great steps to rapidly respond to enormous and dire threats. It gives me hope that collectively we do have the ingenuity and drive to solve current and future threats to our communities and planet.
Why does it matter that we keep inventing, testing, and creating?
To explore, learn, and create is human. It reminds us that collectively and thoughtfully we can build a better future.
What are your hobbies and interests outside of work?
I am one of those people that cannot stick to a hobby, so I tend to jump around between things. Recently I have been getting into riding motorbikes, ham radio, competing in the Highland Games, and woodworking. This year I have decided to get my private pilot’s license. Lego is also still something I love.
Have you learned anything especially great in the last year?
At First Mode we have a “fun fact” section in our interview process, through which I have learned so much about the incredible experiences and achievements of the First Mode team.
Watching the Perseverance landing at 3:00 a.m. Perth time while getting to ask questions of the First Mode team via Slack in real time has to be one of the coolest things I’ve experienced as an engineer. To have access to that knowledge and experience, and answers to the questions the space-obsessed kid inside me wanted to ask, was just incredible, and to be a part of a team where knowledge is so generously shared reminds me of why I am an engineer.
Our recent discussion about Europa (Europa: Below the Impact Zone) has me thinking about those tempting Galilean moons and the problems they present for exploration. With a magnetic field 20,000 times stronger than Earth’s, Jupiter is a radiation generator. Worlds like Europa may well have a sanctuary for life beneath the ice, but exploring the surface will demand powerful radiation shielding for sensitive equipment, not to mention the problem of trying to protect a fragile human in that environment.
Radiation at Europa’s surface is about 5.4 Sv (540 rem), though it varies across the moon, with the highest-radiation areas found near the equator, lessening toward the poles. In human terms, that’s 1800 times the average annual sea-level dose. Europa is clearly a place for robotic exploration rather than astronaut boots on the ground.
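That "1800 times" figure is easy to sanity-check. The sketch below assumes an average annual sea-level background dose of about 3 mSv, a commonly cited round number (the exact value varies by source), and compares it with the surface dose quoted above.

```python
# Back-of-envelope check of the dose comparison in the text.
# Assumption: ~3 mSv average annual sea-level background dose (a commonly
# cited figure; the exact value varies by source and location).
europa_surface_dose_sv = 5.4   # Sv (540 rem), from the text
annual_background_sv = 0.003   # 3 mSv, assumed

ratio = europa_surface_dose_sv / annual_background_sv
print(f"{ratio:.0f}x the average annual sea-level dose")  # -> 1800x
```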
Jupiter offers up an environment where the solar wind, hurling electrically charged particles at ever-shifting velocities, interacts with the powerful magnetosphere, stretching its tail out hundreds of millions of kilometers away from the Sun. It’s essential that we learn more about the behavior of the magnetic fields generated by gas giants, and on that score new work out of Goddard Space Flight Center offers some insight. GSFC’s Yasmina Martos and team have been using the inner Galilean moon, Io, as a probe, studying what sets off one type of radio emission known to emanate from Jupiter.
Io’s volcanoes have been observed since the first Voyager flyby, driven by internal heat as the moon experiences the gravitational pull not only of Jupiter but neighboring large moons. Gas and particles released by this activity are ionized and swiftly captured by Jupiter’s magnetic field, being accelerated along the field toward the Jovian poles.
Out of this we get decametric radio emissions (DAM) as electrons spiral in the magnetic field, waves that Juno’s Waves instrument has been detecting. Jupiter also produces radio waves at centimeter and decimeter wavelengths, caused by atmospheric phenomena as well as activity in the magnetosphere apart from the Io interactions. The planet is, in fact, the noisiest radio emitter in the Solar System apart from the Sun. Homing in on the Io emissions, the GSFC work deploys a new magnetic field model with higher accuracy near the moon and targets the particular geometric configurations of planet and moon needed for Juno to detect the emissions.
Studying the radio emissions mediated by Io doesn’t help us cope with the radiation problem, but it does offer clues about this particular magnetosphere, a phenomenon we’ll come to know much better as future missions arrive. The researchers, reporting in the Journal of Geophysical Research: Planets, found that the decameter radio waves are controlled by not just the strength but the shape of Jupiter’s magnetic field. They emerge from a cone-like space thus formed, so that the spacecraft can only receive the radio signal when Jupiter’s rotation moves that cone across the instrument. The effect is similar to a lighthouse beacon sweeping out to sea.
Image: The multicolored lines in this conceptual image represent the magnetic field lines that link Io’s orbit with Jupiter’s atmosphere. Radio waves emerge from the source and propagate along the walls of a hollow cone (gray area). Juno, its orbit represented by the white line crossing the cone, receives the signal when Jupiter’s rotation sweeps that cone over the spacecraft. Credit: NASA/GSFC/Jay Friedlander.
I can remember trying to pick up radio emissions from Jupiter with my first shortwave receiving set — they’ve been a known phenomenon since 1955, detectable from Earth at between 10 and 40 MHz. What we get thanks to the Juno observations is a clarification about why decametric radio waves originating in the northern hemisphere seem more abundant than those from the southern. The paper explains for the first time the particular geometric configurations producing these effects. The authors offer up a plain language summary to go along with the paper’s abstract, from which this:
Thanks to Juno, the geometry of the magnetic field has been better constrained as waves and magnetic field data have been continuously collected within the Jovian environment since July 2016. In this study, we estimate where the radio waves generate and the energy of the electrons that generate these waves, which is up to 23 times higher than previously proposed. We ultimately demonstrate that the geometry of Jupiter’s magnetic field is a primary controller for the higher observation likelihood of radio wave groups originating in the northern hemisphere relative to those originating in the southern hemisphere.
Video: The decametric radio emissions triggered by the interaction of Io with Jupiter’s magnetic field. The Waves instrument on Juno detects radio signals whenever Juno’s trajectory crosses into the beam, which is a cone-shaped pattern. The beam is similar to a flashlight emitting only a ring of light rather than a full beam. Juno scientists then translate the detected radio emission to a frequency within the audible range of the human ear. Credit: University of Iowa/SwRI/NASA.
What a fascinating place the Jovian system is. Learning the precise locations within the magnetosphere where the decametric emissions originate helps to pin down the needed magnetic field strength and electron density to fit the Juno data. “The radio emission is likely constant,” says Martos, “but Juno has to be in the right spot to listen.”
The paper is Martos et al, “Juno Reveals New Insights Into Io‐Related Decameter Radio Emissions,” Journal of Geophysical Research: Planets (18 June 2020). Abstract.
We take for granted that an event in one part of the world cannot instantly affect what happens far away. This principle, which physicists call locality, was long regarded as a bedrock assumption about the laws of physics. So when Albert Einstein and two colleagues showed in 1935 that quantum mechanics permits “spooky action at a distance,” as Einstein put it, this feature of the theory seemed...
At Libre Space Foundation, we are dedicated to developing and supporting open-source space technology and projects that promote knowledge and improve space operations. Polaris is a project developed with the support of Libre Space Foundation (LSF). It brings together developers, engineers and university students from around the world: a diverse group of people with a shared interest in space and open-source technology. They are the ones working hard on designing, developing and optimising Polaris, a Python-based machine-learning (ML) tool, developed in an open-source, collaborative way, aimed at applying machine learning to satellite telemetry.
The challenge: During a mission, satellite operators have to tackle a daunting yet critical task: monitoring numerous telemetry parameters to maintain a clear picture of the behaviour of their satellite. They need to understand how these parameters interact with each other and estimate their impact accurately.
The Solution: This is where Polaris ML gets into the picture. It is a command-line based, machine-learning tool providing a satellite-telemetry analysis that can be of great help to satellite operators. The data collected are turned into comprehensive graph visualisations, using machine-learning models to understand and predict a satellite’s behaviour. Other data sources are also converted into valuable information available to spacecraft operators.
Note: Before examining Polaris ML in more detail, allow us to describe briefly how satellite operators handle these issues at the moment. Though satellites are built to be more self-aware nowadays, operators are still required to step in and evaluate the situation by setting manual limits. This is called the Out-Of-Limit (OOL) technique, which evaluates the behaviour of a satellite by collecting data and raising out-of-limit alarms about its state: a ‘soft out-of-limit’ alarm indicates a dangerous trend, while a ‘hard out-of-limit’ alarm indicates a failure occurring. At this point, let us clarify that Polaris ML does not seek to replace operators or the techniques already applied. Instead, it aims to assist and augment the process, becoming a reliable, efficient and complementary tool for spacecraft operators.
Before analysing the project, let us point out that this is a tool under active development, so the interface is very likely to change. Allow us now to delve into the different components of Polaris ML.
Polaris ML makes use of the XGBoost algorithm to analyse the relationships and inter-dependencies between telemetry parameters. The XGBoost library implements eXtreme Gradient Boosting, an approach in which each new, better-informed model is trained to predict the errors of the previous models; the models are then summed to deliver a final prediction. Using this approach, Polaris ML predicts every telemetry parameter in a satellite from the others. It then provides a graph illustrating the interdependence between the parameters, depicting the degree to which one parameter affects another. The importance of telemetry parameters and how they are linked is presented as a web-based, interactive 3D graph; 3d-force-graph is the component used for the graph output.
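The per-target approach described above can be sketched in a few lines. This is illustrative only, not Polaris’s actual code: each parameter is predicted from all the others, and the model’s feature importances become edge weights in a dependency graph. Polaris uses XGBoost; scikit-learn’s GradientBoostingRegressor stands in here so the sketch runs without extra dependencies, and the telemetry channels are synthetic.

```python
# Illustrative sketch (not Polaris's actual code): predict each telemetry
# parameter from all the others and read feature importances as edge weights.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500
# Three fake telemetry channels: bus voltage drives temperature; current is noise.
voltage = rng.normal(8.0, 0.5, n)
temperature = 20 + 3 * voltage + rng.normal(0, 0.2, n)
current = rng.normal(1.0, 0.1, n)
telemetry = {"voltage": voltage, "temperature": temperature, "current": current}

names = list(telemetry)
data = np.column_stack([telemetry[k] for k in names])

edges = {}  # (source, target) -> importance of source when predicting target
for j, target in enumerate(names):
    X = np.delete(data, j, axis=1)           # all other parameters as features
    model = GradientBoostingRegressor(n_estimators=50, random_state=0)
    model.fit(X, data[:, j])
    sources = [k for k in names if k != target]
    for src, w in zip(sources, model.feature_importances_):
        edges[(src, target)] = w

# Voltage should dominate the prediction of temperature in this synthetic data.
print(edges[("voltage", "temperature")] > edges[("current", "temperature")])
```

In the real tool, these edge weights would then be handed to the 3d-force-graph front end for interactive display.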
Polaris ML is made up of these distinct parts:
Vinvelivaanilai is a Python module that fetches space weather data from SWPC/NOAA servers. It then stores the data in text files or in InfluxDB. It also includes functions for TLE and OMM parsing (any GP data) and for propagating the orbit of a satellite to locate both its position and its velocity at any given time.
Vinvelivaanilai is the word for space weather in Tamil.
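To illustrate the last capability listed above, propagating an orbit to get position and velocity at a given time, here is a toy sketch. Vinvelivaanilai parses real TLE/OMM data and uses the SGP4 model; this stand-in instead integrates simple two-body motion with RK4 so it runs with numpy alone, and the initial state is a made-up circular low-Earth orbit.

```python
# Toy two-body orbit propagator (NOT the SGP4 model the real module uses).
import numpy as np

MU = 398600.4418  # km^3/s^2, Earth's gravitational parameter

def deriv(state):
    """Time derivative of the state vector [x, y, z, vx, vy, vz]."""
    r, v = state[:3], state[3:]
    return np.concatenate([v, -MU * r / np.linalg.norm(r) ** 3])

def propagate(state, t, dt=10.0):
    """RK4-integrate the state forward t seconds in steps of dt."""
    for _ in range(int(t / dt)):
        k1 = deriv(state)
        k2 = deriv(state + 0.5 * dt * k1)
        k3 = deriv(state + 0.5 * dt * k2)
        k4 = deriv(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

r0 = np.array([6778.0, 0.0, 0.0])                # km, ~400 km altitude
v0 = np.array([0.0, np.sqrt(MU / 6778.0), 0.0])  # km/s, circular orbit speed
state = propagate(np.concatenate([r0, v0]), t=2700.0)  # ~half an orbit later
print(np.linalg.norm(state[:3]))  # radius stays ~6778 km on a circular orbit
```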
BETSI stands for “Behaviour Extraction for Time-Series Investigation”, and it aims to implement a state-of-the-art model that detects anomalies automatically, without any manual intervention. The model that creates the concise representation is an autoencoder, which uses deep-learning techniques to detect anomalies within the telemetry data.
Anomalies are events that fall outside the nominal behaviour of a system, generating large shifts in the values of one or more parameters. Their impact on a satellite can be catastrophic, since they range from a simple change in orientation to a massive explosion. (One can only imagine the impact.)
Polaris is an open-source, Python-based solution developed to facilitate space diagnostics and help spacecraft operators investigate anomalies. It aims to support satellite operators and improve their processes by helping them characterise spacecraft behaviour and drive their investigations when anomalies occur.
As we plan to take this project to a broader scale and see it developing further and expanding, we have created a list of resources that can help you familiarise yourself with Polaris ML. To start with, these are the Polaris repositories on GitLab. Then, you can check the detailed documentation that is available online as it walks users through the Polaris ML project. There are also several exciting talks and presentations by some of the members of the Project.
Polaris ML is backed by a diverse, international team, welcoming people from different continents, cultures and backgrounds. The project is developed in a collaborative, open-source way and is fostered by an inclusive community. If you want to join, you can contact the team at the dedicated Polaris ML Element/Matrix channel: https://app.element.io/#/room/#polaris:matrix.org. You are welcome to join the channel, introduce yourself and contribute to the discussions there.
The Polaris ML team welcomes students from all over the world, and the project is popular among students applying for the Google Summer of Code program; Polaris ML is participating in the programme for the third year in a row. The entire team has had great experiences with GSoC, and the project itself has expanded thanks to the support and contributions of GSoC students. Adithya Venkateswaran, who created the Vinvelivaanilai library, has been a star GSoC student and is an integral member of the Polaris team.
Polaris ML is an ambitious project with great potential; it aims to optimise space diagnostics, reduce operations workload and enhance autonomous spacecraft operations. With the help of machine learning, Polaris ML aims to become a reliable, scalable solution for space operations. The project is still developing, being optimised to deliver even more intuitive graph models and valuable, dynamic anomaly reports. Polaris ML is seeking opportunities to implement and test this solution in future satellite operations. We have already worked with BOBCAT-1 and the QUBIK pair of satellites, but as we wish to improve and expand further, collaborations with satellite operators for upcoming missions are welcome, especially since we want to further test and optimise the solution and its key components.
The post Polaris: Applying Machine Learning to Satellite Operations appeared first on Libre Space Foundation.
ChiBUG meeting is at 6 PM at the normal place, which means you should go if you are near, and vaccinated.
honestly it’s kind of ridiculous that wally funk was placed on new shepard when they’d know that bezos would overshadow the beauty of a mercury 13 woman, when so many of them have already passed away unable to experience space, finally going past the kármán line and participating in a space flight. so many people don’t know about mercury 13 and it makes me so so sad that this wonderful woman will be overshadowed by a man who, quite honestly, doesn’t deserve to go into space
Beginning right now, the 2021 Hackaday Prize challenges you to Reimagine Supportive Tech. Quite frankly, this is all about shortcuts to success. Can we make it easier for people to learn about science and technology? Can we break down some barriers that keep people from taking up DIY as a hobby (or way of life)? What can we do to build on the experience and skill of one another?
For instance, to get into building your own electronics, you need a huge dedicated electronics lab, right? Of course that’s nonsense, but we only know that because we’ve already been elbow-deep into soldering stations and vacuum tweezers. To the outsider, this looks like an unclimbable mountain. What if I told you that you could build electronics at any desk, and make it easy to store everything away in between hacking sessions? That sounds like a job for [M.Hehr’s] portable workbench & mini lab project. Here’s a blueprint that can take a beginner from zero to solder smoke while having fun along the way.
Larry, W2LJ, recently reported on having purchased a mast support that is held in place by driving a car tire onto it. I’ve seen such mounts deployed in the field and always thought them a good idea. I’ve got a couple of 31-foot fiberglass push-up masts that aren’t seeing enough use, so I ordered one of these mast holders too and hope it arrives in time to take on vacation next week.
When I use one of the 31-foot masts I usually strap it (temporarily) to the leg of a picnic table, a bench, or whatever is handy and use it to support a wire antenna. Having it right beside my vehicle (and not having to look for something to lash it to) means operating out of the back end of the Jeep with the hatch popped open, which should make field deployment easy - and comfortable.
Given my intention to begin making regular trips into the field once I’m retired, this is just another step in that process. I really like the idea of having my portable station in the back of the Jeep, ready to go to one of the many parks in this area at a moment’s notice. I imagine these mostly being early-morning adventures, and ritual is important to me: greeting the dawn with the songbirds and a little CW while smoking my pipe and sipping a hot cup of coffee.
Now I need a compact propane stove for making coffee…
Without a doubt, the most memorable live event I had witnessed as a teenage space enthusiast was the landing of Viking 1 on the surface of Mars on the morning of July 20, 1976 when I was 14 years old – seven years to the day after the Apollo 11 Moon landing. While I had been increasingly aware of NASA’s Mars landing plans as the early 1970s unfolded, my interest in Viking and its search for life on Mars was supercharged in 1974 when I watched a summer rerun of an episode of the then-new PBS science show, Nova, entitled “The Search for Life” (see “Growing Up in the Space Age: Summer Vacations in the ‘70s”). Over the next couple of years, my interest in Viking was further fueled by literature I had received by mail from NASA outlining the Viking project and its experiments. With the encouragement of my 8th grade Earth science teacher, Mr. L’Herault, I even performed some of the student experiments outlined in the Viking educational literature I had obtained during that school year preceding the landing.
As Americans were engaged in celebrations of the nation’s bicentennial as July 1976 began, I was also busy keeping track of every aspect of the upcoming Viking 1 landing attempt in this pre-internet age by watching news coverage on TV as well as reading the local daily newspaper and the latest issues from my new subscription to Sky & Telescope magazine. Armed with the literature I received from NASA as well as a map of Mars from the February 1973 issue of National Geographic, I was well prepared for this historic event.
I got up at about 6:00 AM EDT on the morning of Tuesday, July 20 to watch the live coverage of the Viking landing on TV – a tough thing for a 14-year-old night owl to do in the middle of a summer school vacation. As I listened intently to the announcements of each milestone reached during descent, Viking 1 finally came down for a soft landing 17 seconds later than scheduled at 11:53:06 GMT at 22.27° N, 47.95° W in Chryse Planitia about 28 kilometers from its target point. Because of the time needed for the signals from the Viking lander to travel back to Earth at the speed of light, the news of the successful landing finally arrived here on Earth 19 minutes later at 8:12 AM EDT. Seconds after its soft landing, the Viking 1 lander started a preprogrammed sequence of events including the acquisition of its first images of the Martian surface.
The Viking landers each sported a pair of 7.3-kilogram cameras on their upper deck mounted 0.822 meters apart to provide stereo views of the landscape. Unlike the vidicon-based cameras used on NASA’s robotic Surveyor lunar landers a decade earlier which returned individual frames of the scene (see “Surveyor 1: America’s First Moon Lander”), the Viking Lander cameras used a scanning mirror to reflect the scene onto a set of a dozen light-sensitive photodiodes. The nodding motion of the mirror allowed one column of the scene to be scanned before the camera turret rotated stepwise in azimuth to allow the adjacent columns to be scanned one at a time. Each camera could scan up to 342.5° in azimuth and from 40° above to 60° below the horizon. Earlier Soviet Luna, Mars and Venera landers used devices the Soviets called “telephotometers” which operated on a similar principle (see “Luna 9: The First Lunar Landing” and “Venera 9 & 10 to Venus“).
The array of a dozen detectors allowed the scene to be scanned in six spectral bands for color and near infrared imaging at a scale of 0.12° per pixel or black and white images with a finer image scale of 0.04° per pixel with four different focus steps ranging from 1.9 to 13.3 meters. Each image column scan was broken up into 512 pixels digitized to 6 bits. The scanning rate was synchronized with the 16,000 bits per second transmission rate using the Viking Orbiters as a relay (as was being done for these first images from Viking 1) or the 250 bits per second rate for direct transmission to Earth via the Lander’s dish-shaped, high gain antenna. The images could also be stored on a 40 megabit tape recorder for later transmission.
Acquisition of the first image was started 25 seconds after landing using Camera 2 on the Viking 1 lander. It was a high-resolution, black and white image of a 70° by 20° strip immediately in front of the lander covering a distance of 1.5 to 2.0 meters from the camera and included a view of Footpad 3 on the right side. Although the image was scanned in five minutes, it took 20 minutes for the data to be relayed back to Earth via the Viking 1 orbiter. Back on Earth, we watched as the image on TV was refreshed a few pixel columns at a time from left to right. The first features readily identifiable in the image were a tiny dune-like feature and a rock. As the scene was filled in, numerous small rocks up to ten centimeters across came into view in the dusty soil.
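The quoted figures hang together nicely. A quick back-of-the-envelope check, using the camera and data-rate numbers given above and ignoring any sync or overhead bits (so the result is only approximate), recovers the roughly five-minute scan time:

```python
# Back-of-the-envelope check of the first-image scan time from the numbers
# quoted in the text (sync/overhead bits ignored, so this is approximate).
image_width_deg = 70     # first image: a 70-degree-wide strip
px_scale_deg = 0.04      # high-resolution B&W scale, degrees per pixel
px_per_column = 512      # pixels per column scan
bits_per_px = 6          # digitization depth
relay_rate_bps = 16_000  # orbiter-relay rate the scan was synchronized to

columns = image_width_deg / px_scale_deg
scan_seconds = columns * px_per_column * bits_per_px / relay_rate_bps
print(f"{columns:.0f} columns, ~{scan_seconds / 60:.1f} minutes")
```

That works out to about 5.6 minutes, consistent with the five-minute scan and the far longer relay time once orbiter overhead and the relay geometry are added.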
As we watched the first image coming in on Earth, Viking 1 had already begun to take a lower-resolution 300° panorama using Camera 2 providing a fuller view of the landing site out to the horizon. Taken at the equivalent of 4:17 PM local time on Mars, the scan was completed in nine minutes and proceeded to be relayed back to Earth. The rock-strewn landscape included dusty deposits with a surprisingly bright sky caused by the dust suspended in the thin Martian atmosphere.
In the days that followed, Viking transmitted more images and data about the conditions on the surface of the Red Planet. It would be the Viking 1 lander’s second Sol on the Martian surface before it took its first color image. After being transmitted back to Earth, initial publicly released versions of the image showed a blue sky and a salmon colored surface. It was only in later versions, which had been properly calibrated and color balanced, that the true rust red color of the surface was revealed along with a dust-laden red sky instead of the expected blue color. Over the coming weeks, Viking 1 performed its life detection experiments but yielded ambiguous results because of what are now recognized as faulty assumptions made in the experiments’ design (see “Viking and The Question of Life on Mars, Part 2”). Contact with the Viking 1 lander would continue until November 11, 1982 when a faulty command sent by a skeleton crew of ground controllers resulted in loss of contact after 2,600 Sols on the Martian surface. After the landing of Viking 2 on September 3, 1976 (see “First Pictures: Viking 2 on Mars – September 3, 1976”), it would be almost 21 years before NASA would land on Mars again.
Follow Drew Ex Machina on Facebook.
“Growing Up in the Space Age: Summer Vacations in the ‘70s”, Drew Ex Machina, July 22, 2019 [Post]
“First Pictures: Viking 2 on Mars – September 3, 1976”, Drew Ex Machina, September 3, 2020 [Post]
“Viking & The First Seismometers on Mars”, Drew Ex Machina, November 21, 2018 [Post]
“Viking and The Question of Life on Mars, Part 1”, SETIQuest, Vol. 3, No. 3, pp. 1-6, Third Quarter 1997 [Article]
“Viking and The Question of Life on Mars, Part 2”, SETIQuest, Vol. 3, No. 4, pp. 1-7, Fourth Quarter 1997 [Article]
Michael M. Mirabito, The Exploration of Outer Space with Cameras, McFarland, 1983
Andrew Wilson, Solar System Log, Jane’s Publishing, 1987
The Martian Landscape, NASA SP-425, 1978
Working from home creates an uncomfortable situation where I’m forced to decide every single day whether to shave, shower, or even get dressed. It’s a brave new world and the many signs that things were getting back to normal that appeared over the last month or two are suddenly becoming difficult to see again.
But being home all day affords me the opportunity to do things not previously possible, important things, like monitoring the cluster for needed spots. Earlier today I was alerted when the program trilled for a 20 meter CW spot I’ve been waiting to find. It was Mort, SV5/G2JL on Lipsi where he spends about six months a year.
I don’t even know how to pronounce “Dodecanese” and can’t easily point it out on a map. Besides, it’s already in my log. Number 252 on the Most Wanted List is not particularly rare. I don’t need the entity, I’ve just wanted to work Mort for a long time and have yet to do that.
I didn’t copy him today either.
I first heard G2JL about six months ago but he didn’t hear me when I called then. That near-miss led me to look up his QRZ bio page, and I found it incredibly interesting and entertaining. He’s got serious DX chops, “DXCC (300+), DUF4, OHA, WASM2, WAE, WAZ, WAS (well inside 20 years; WAZ came before WAS !)” and says:
“WAC on AM ‘phone; more that 50 QSOs on ‘phone, in fact; two on RTTY, 23 countries on 50 MHz CW - well over 50 QSOs on VHF; the rest on HF using the Proper Mode for Real Grown-up Hams. I eschew these MDMs [Mindless Data Modes]. Unless ear & brain (if any) are involved, it’s not ham radio”.
He’s an 88 year-old radioman who says what he thinks and his story makes for enjoyable reading. Don’t believe me? Visit his QRZ bio page and dig in, unless you’re easily offended, which is fairly common these days (In that case, you should avoid that link and follow this instead).
If you happen to work Mort, tell him I’m looking for him. I need to know how to pronounce Dodecanese - among other things.
Browse the comments on virtually any story about a ransomware attack and you will almost surely encounter the view that the victim organization could have avoided paying their extortionists if only they’d had proper data backups. But the ugly truth is there are many non-obvious reasons why victims end up paying even when they have done nearly everything right from a data backup perspective.
This story isn’t about what organizations do in response to cybercriminals holding their data hostage, which has become something of a best practice among most of the top ransomware crime groups today. Rather, it’s about why victims still pay for a key needed to decrypt their systems even when they have the means to restore everything from backups on their own.
Experts say the biggest reason ransomware targets and/or their insurance providers still pay when they already have reliable backups is that nobody at the victim organization bothered to test in advance how long this data restoration process might take.
“In a lot of cases, companies do have backups, but they never actually tried to restore their network from backups before, so they have no idea how long it’s going to take,” said Fabian Wosar, chief technology officer at Emsisoft. “Suddenly the victim notices they have a couple of petabytes of data to restore over the Internet, and they realize that even with their fast connections it’s going to take three months to download all these backup files. A lot of IT teams never actually make even a back-of-the-napkin calculation of how long it would take them to restore from a data rate perspective.”
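The back-of-the-napkin calculation Wosar describes takes seconds to do in advance. The figures below are purely illustrative (2 PB of backups over a 1 Gbit/s link), and the sketch assumes the link runs flat out with no protocol overhead, so real restores would be slower still:

```python
# The back-of-the-napkin restore-time calculation described above.
# All figures are illustrative assumptions, and the link is assumed to run
# at full speed with no protocol overhead (real restores are slower).
backup_bytes = 2 * 10**15  # 2 petabytes of backups
link_bps = 1 * 10**9       # a 1 gigabit-per-second connection

seconds = backup_bytes * 8 / link_bps
print(f"~{seconds / 86_400:.0f} days")  # roughly half a year at best
```

At those numbers the restore takes about 185 days, which is exactly the kind of surprise a tabletop exercise is meant to surface before the incident rather than during it.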
Wosar said the next most-common scenario involves victims that have off-site, encrypted backups of their data but discover that the digital key needed to decrypt their backups was stored on the same local file-sharing network that got encrypted by the ransomware.
The third most-common impediment to victim organizations being able to rely on their backups is that the ransomware purveyors manage to corrupt the backups as well.
“That is still somewhat rare,” Wosar said. “It does happen but it’s more the exception than the rule. Unfortunately, it is still quite common to end up having backups in some form and one of these three reasons prevents them from being useful.”
Bill Siegel, CEO and co-founder of Coveware, a company that negotiates ransomware payments for victims, said most companies that pay either don’t have properly configured backups, or they haven’t tested their resiliency or the ability to recover their backups against the ransomware scenario.
“It can be [that they] have 50 petabytes of backups … but it’s in a … facility 30 miles away.… And then they start [restoring over a copper wire from those remote backups] and it’s going really slow … and someone pulls out a calculator and realizes it’s going to take 69 years [to restore what they need],” Siegel told Kim Zetter, a veteran Wired reporter who recently launched a cybersecurity newsletter on Substack.
“Or there’s lots of software applications that you actually use to do a restore, and some of these applications are in your network [that got] encrypted,” Siegel continued. “So you’re like, ‘Oh great. We have backups, the data is there, but the application to actually do the restoration is encrypted.’ So there’s all these little things that can trip you up, that prevent you from doing a restore when you don’t practice.”
Wosar said all organizations need to both test their backups and develop a plan for prioritizing the restoration of critical systems needed to rebuild their network.
“In a lot of cases, companies don’t even know their various network dependencies, and so they don’t know in which order they should restore systems,” he said. “They don’t know in advance, ‘Hey if we get hit and everything goes down, these are the services and systems that are priorities for a basic network that we can build off of.'”
Wosar said it’s essential that organizations drill their breach response plans in periodic tabletop exercises, and that it is in these exercises that companies can start to refine their plans. For example, he said, if the organization has physical access to their remote backup data center, it might make more sense to develop processes for physically shipping the backups to the restoration location.
“Many victims see themselves confronted with having to rebuild their network in a way they didn’t anticipate. And that’s usually not the best time to have to come up with these sorts of plans. That’s why tabletop exercises are incredibly important. We recommend creating an entire playbook so you know what you need to do to recover from a ransomware attack.”
Roger Ivester, North Carolina
Sue French, New York
NGC 6572, Planetary Nebula in Ophiuchus
Sharing Observations and Bringing Amateur Astronomers Together
The purpose of the Observer’s Challenge is to encourage the pursuit of visual observing. It’s open to everyone who’s interested, and if you’re able to contribute notes and/or drawings, we’ll be happy to include them in our monthly summary. Visual astronomy depends on what’s seen through the eyepiece. Not only does it satisfy an innate curiosity, but it allows the visual observer to discover the beauty and the wonderment of the night sky. Before photography, all observations depended on what astronomers saw in the eyepiece, and how they recorded their observations. This was done through notes and drawings, and that’s the tradition we’re stressing in the Observer’s Challenge. And for folks with an interest in astrophotography, your digital images and notes are just as welcome. The hope is that you’ll read through these reports and become inspired to take more time at the eyepiece, study each object, and look for those subtle details that you might never have noticed before.
This month’s target
Our object for the 150th monthly edition of the Observer’s Challenge is the tiny, but bright, planetary nebula NGC 6572, variously nicknamed the Emerald Nebula, the Blue Racquetball, and the Turquoise Orb. These names highlight the range of hues perceived by different observers. The nebula is young, perhaps only a few thousand years old. Its diminutive size led to its inclusion in some early star catalogs. NGC 6572 has a visual magnitude of 7.3, as determined by Stephen O’Meara, while its central star dimly shines at 13th magnitude. As with many planetary nebulae, published distances vary wildly; values in the vicinity of 5000 light-years seem most likely. This pretty little gem was discovered in 1825 by Wilhelm Struve.
NGC 6572 displays bipolar outflows in deep images. There’s evidence of interaction between the collimated outflows and the nebula’s elliptical shell: the interaction has broken up the shell such that parts of it have been accelerated, while the outflow has been slowed and/or deflected. This supports the idea that such outflows are common in planetary nebulae and may play an important role in shaping nebular shells. https://ui.adsabs.harvard.edu/abs/1999ApJ...520..714M/abstract
Mike McCabe: Observer from Massachusetts
The Observer’s Challenge object for July 2021 was NGC 6572, a bright planetary nebula in the constellation Ophiuchus, and I feel lucky to have gotten an observation of it at all this month. Here in the northeast U.S. we’ve been experiencing record rains for this time of year, and clear nights have felt like a precious commodity. As luck would have it, on the exact night of the new moon we got a break and had a clear enough sky to do some observing. I put a scope out before dinner and waited for darkness.
The conditions were fairly typical of an early summer evening, with high moisture content in the air making for just fair transparency, though those same conditions also provided a somewhat steady state of seeing. On my sketch I noted the seeing as 2/5 early on, but further assessment as the evening wore on resulted in a rating of at least 3/5. And even though the target was quite bright by Observer’s Challenge standards and I probably could have easily gotten away with a medium-sized refractor, I chose the 10” F/5 Newtonian regardless.
The target was a relatively easy star hop from the 3rd magnitude star Cebalrai, including a quick drop-in on IC 4665, the “Summer Beehive” cluster, just because it was there. At low power the planetary nebula is virtually stellar, with very little evidence that you’re not looking at just a plain old star. Boosting the power, though, begins to reveal the fuzzy, disc-like nature of the object, and true to form for many planetary nebulae, magnification doesn’t seem to dim it excessively. Even though the dimensions listed in various resources show the object to be somewhat elongated, it never appeared anything other than round to me.
With a variety of nicknames, including Blue Racquetball, Emerald Nebula, Green Nebula, and Turquoise Orb, I figured one more couldn’t hurt. I hereby dub thee “Fuzzy Blue Star”, because that’s exactly what it looked like to me. It’s well known that color interpretation is a highly individual thing, and greens are my weakest area in color vision. I feel a little like I’m missing out on something in all these astronomy observations, like the green in the coma of a comet, the green flash on the Sun, and now the emerald in this planetary nebula. But I fret not. I’m happy with the colors I see, even if it gives my wife fits when I tell her something is green, quite to the contrary of popular interpretation. She calls me colorblind. I tell her I see colors just fine. What does she know, anyway?
Glenn Chaple: Observer from Massachusetts
NGC 6572 – Planetary Nebula in Ophiuchus (Mag: 8.1, Size: 16” X 13”)
The visual observer is all too aware that, with the exception of double stars like gold and yellow Albireo and ruby-red carbon stars like R Leporis, the deep sky is a pretty colorless place. Bright planetary nebulae like this month’s Observer’s Challenge, NGC 6572 in Ophiuchus, are a notable exception.
NGC 6572 was discovered by the Russian-German astronomer Friedrich Georg Wilhelm von Struve in 1825. Struve was in the midst of a survey to catalog double stars when he came upon “a star surrounded by bright green ellipse of fuzzy light.” At the time, astronomers were unaware of the true nature of such a curiosity. Today we know that NGC 6572 is a planetary nebula – an expanding luminous shell of gas ejected by an aging star. It’s relatively young as planetary nebulae go, perhaps no more than 2,600 years old.
The 2000.0 coordinates for NGC 6572 are: R.A. 18h 12m 06.6s, Dec. +6° 51’ 13”. I star-hopped there by starting at the 5th magnitude star 71 Ophiuchi, the unlabeled star just south of 72 Ophiuchi on Finder Chart A. Finder Chart B shows an 8th magnitude star, SAO 123133, just northwest of 71 Ophiuchi. A line from this star through 71 Ophiuchi, extended 1.3°, brought me to a triangle of 8th magnitude stars; NGC 6572 was a little less than a degree SSE of the southernmost star in the triangle.
At 39X in my 10-inch f/5 reflector, NGC 6572 appeared stellar. At 208X, it was definitely non-stellar when compared to a pair of stars immediately to its east. It seemed slightly elongated in a north-south orientation and was decidedly pale blue. I was unable to detect the central star, which is said to be 13th magnitude.
NGC 6572 is approximately 5000 light-years away. Combined with its apparent size, this translates to an actual diameter of about ⅓ light-year.
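That ⅓-light-year figure follows from the small-angle approximation (linear size ≈ distance × angular size in radians); a quick Python check using the ~13″–16″ apparent extent quoted earlier:

```python
import math

def physical_size_ly(angular_arcsec: float, distance_ly: float) -> float:
    """Small-angle approximation: linear size = distance * angle (in radians)."""
    return distance_ly * math.radians(angular_arcsec / 3600)

# NGC 6572's apparent extent (~13-16 arcsec) at ~5000 light-years:
print(physical_size_ly(13, 5000))  # ≈ 0.32 ly (the "⅓ light year")
print(physical_size_ly(16, 5000))  # ≈ 0.39 ly
```

The minor axis reproduces the ⅓-light-year estimate; the major axis comes out slightly larger, and the true size scales directly with whichever of the widely varying published distances one adopts.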
Uwe Glahn: Observer from Germany
Jaakko Saloranta: Observer from Finland
July: NGC 6572 – Planetary Nebula –Ophiuchus
NGC 6572 with 10 inch GSO @ 625x (5′)
The observation and sketch were made from a suburban observing site with a naked-eye limiting magnitude of ~5.8 near zenith and an SQM-L reading of 19.80 from the same region. The telescope was a 10-inch GSO, used at multiple magnifications. Weather was fairly average: temperature in the low 50s, rising humidity towards midnight, and some cirrus clouds starting to emerge at around 11:30 pm.
I felt I couldn’t get a clear view of the object – probably due to the average seeing conditions and the somewhat low altitude of the object (32 degrees above the horizon). What I did manage to see was a “barely N-S elongated planetary with some structure in the middle. From time to time ring structure seems visible with some additional detail around it. No central star.” I could not discern any color from the object, although I am familiar with the several quite colorful nicknames for this object.
Gregory Brannon: Observer from North Carolina
I went observing on the 11th. My cousin Quinn was with me, and I showed her M92 and M13, before deciding to try to find two planetary nebulae, Minkowski’s Footprint, and the Observer’s Challenge object for this month, NGC 6572. I was unable to find Minkowski’s Footprint after a while of trying, so I gave up for the time being and went for 6572.
By now, the clouds were coming across the sky in bands, obscuring and then revealing the part of the sky I was using in about equal measure. It took some time to star hop, using a screenshot from Stellarium Mobile as a reference, both mirrored and unmirrored. It became easier when I realized there was actually a dim but visible star right next to the object. At that point it was a race against time, as a band of clouds, a band of clear, and a final never-ending haze were approaching. The nebula appeared as a bright blue star at mid power (80x), nearly a point source, but with the odd averted-vision behavior consistent with my experience of planetary nebulae. I scrambled to put a higher power in (I had already prepared the UHC filter), but in my haste I was a little too greedy with the magnification. At 428x it was not in the field of view. I pulled the Barlow out and put just the 7mm in (171x), and by then the cloud band was coming through. I lost it by the time the cloud band departed. Back at 80x, I located it again, switched to the 7mm, and confirmed that it was a nebula. Very surface-bright and very noticeably pale blue, very elongated (easily 2:1), and very small. I tried to scramble to the higher power (at that surface brightness I felt I could), but then the never-ending haze arrived. I settled for a sketch from memory.
2021-July-12, 10:40 PM EDT
NGC 6572 – Planetary Nebula in Ophiuchus
10″ f/5 Dobsonian, 2.5x Barlow, 7mm 58° eyepiece, 429x, UHC filter
The planetary nebula is a brilliant pale blue at low power, its color and peculiar behavior under averted vision being the only things that reveal its non-stellar nature. At 171x, it is noticeably elongated and yet retains significant surface brightness at a small size. It is almost stellar at first glance. At 429x, with the UHC filter, the blue coloration is even more dramatic and the nebula becomes a bright blue ball with a slightly dimmer elliptical fringe elongated N/S, and slightly more sharply elongated in the north.
I also observed the object with the CPC800 (8″, 250x, no filter) belonging to the Cline Observatory, during a practice session on the Thursday before our re-opening to the public. It almost seemed to show a little better contrast though less saturation, and I felt it had slightly sharper edges and pointier ends. But this is the sort of thing your brain imagines. I got a brief impression from observatory director Tom English, something to the effect of “nice color.” (Later: but not large or obvious enough to show during public nights.) Another observatory volunteer mistook it for a star at first.
Mario Motta: Observer from Massachusetts
NGC 6572 is a very tiny object (16×12 arcseconds). I got this last week, on a poor night with some turbulence, with H-alpha, OIII, and SII filters. Very short exposures, as it is very bright. Visually, a small “blue spot”.
Image attached, about 20 minutes per filter; OIII dominated, thus very blue. No detail that I can see. The only good image online that I found is from Hubble, but I can’t match that one! Nevertheless, a nice object.