These days we take in data at such a clip that a mission like New Horizons will generate papers for decades. The same holds true for our burgeoning databanks of astronomical objects observed from the ground. So it only makes sense that we begin to recover older datasets, in this case the abundant imagery — photographs, radio maps, telescopic observations — collected in the pre-digital archives of scientific journals. The citizen science project goes by the name Astronomy Rewind, and it’s actively resurrecting older images for comparison with new data.
Launched in 2017, Astronomy Rewind originally classified scans in three categories: 1) single images with coordinate axes; 2) multiple images with such axes; and 3) single or multiple images without such axes. On October 9, the next phase of the project launched, in which visitors to the site can use available coordinate axes or other cues (arrows, captions, rulers) to work out the precise location of each image on the sky and fix its angular scale and orientation.
Image: Astronomer E. E. Barnard photographed the Rho Ophiuchi nebula near the border of Scorpius in 1905 through a 10-inch refractor. When he published the image in the Astrophysical Journal five years later, he discussed the possibility — then fiercely debated — that bright nebulae are partially transparent and dark nebulae are opaque, hiding material farther away. Other researchers argued that dark nebulae are simply regions where stars and gas are absent. Credit: American Astronomical Society, NASA/SAO Astrophysics Data System, and WorldWide Telescope.
We have over a century of images to work with, some 30,000 at present drawn from the American Astronomical Society journals: the Astronomical Journal (AJ), the Astrophysical Journal (ApJ), ApJ Letters, and the ApJ Supplement Series. These images were provided through the Astrophysics Data System (ADS), which draws on NASA funding and provides bibliographical and archival services at the Smithsonian Astrophysical Observatory (SAO), part of the Harvard-Smithsonian Center for Astrophysics.
What’s next for the initial round of imagery is inclusion in the WorldWide Telescope. Originally a Microsoft project, the WWT is now managed by the American Astronomical Society, and serves as what the AAS calls a ‘virtual sky explorer that doubles as a portal to the peer-reviewed literature and to archival images from the world’s major observatories.’ Some 10,000 images (those with coordinate axes) are to be placed in the WWT within a few months, while volunteers proceed to identify where the remaining 20,000 belong on the sky.
Image: Barnard’s photo has been placed on the sky in its proper position and orientation and is displayed in WorldWide Telescope (WWT) superimposed on a false-color background image from NASA’s Wide-field Infrared Survey Explorer (WISE). Credit: American Astronomical Society, NASA/SAO Astrophysics Data System, and WorldWide Telescope.
But these images are not the only ones arriving for inclusion into the growing database. Results from the related ADS All Sky Survey are also going into the WorldWide Telescope, along with a European image display tool called Aladin, developed at the Centre de Données astronomiques (CDS), Strasbourg Observatory, France. The software highlights the effectiveness of the concept, for with Aladin, users will be able to click on any image that originally appeared in one of the AAS journals and call up the corresponding research paper. Alyssa Goodman, one of the project’s leaders at the Harvard-Smithsonian Center for Astrophysics (CfA), comments:
“Without Astronomy Rewind, astronomers would be unlikely to make the effort to extract an image from an old article, place it on the sky, and find related images at other wavelengths for comparison. Once our revivified pictures are incorporated into WorldWide Telescope, which includes images and catalogs from across the electromagnetic spectrum, contextualization will take only seconds, making it easy to compare observations from a century ago with modern data to see how celestial objects have moved or changed.”
Image: In these two figures, Barnard’s photo has been made partially and fully transparent, respectively, to reveal it in context. In the visible-light photo, gas glows brightly while dust appears in silhouette. In infrared light, as seen by WISE, dust glows brightly where in visible light there was nothing but blackness. Barnard was right! Credit: American Astronomical Society, NASA/SAO Astrophysics Data System, and WorldWide Telescope.
As Centauri Dreams readers know, I’ve often enthused about the potential of citizen science projects, both for their effectiveness at identifying and cataloging astronomical phenomena and for the opportunity they present for non-professionals to contribute to fields ranging from deep sky objects to exoplanets and our own Solar System. Astronomy Rewind is clearly keeping the momentum of such efforts going. As it moves into the more challenging phase of confirming the position, scale, and orientation of decades-old astronomical images, the project will offer help features run by astronomy graduate students.
Thus we revive observations going back to the 19th century and link them to the papers discussing them, with all journal images contextualized on the sky. That’s quite a goal, and it invariably reminds me of the debate over Boyajian’s Star (KIC 8462852, more familiarly known as Tabby’s Star), in which the question of long-term dimming was addressed by a study of 500,000 photographs in the archives of Harvard College Observatory, over a century’s worth of images being digitized through the Digital Access to a Sky Century@Harvard (DASCH) project.
Projects like these are massive in scope, and their efforts constitute a heartening work in progress. Ultimately, every astronomical image available in any scientific journal or academic or observatory collection will be catalogued, giving us a way to study the sky over periods of time that are lengthy in comparison to a human lifetime but tiny at the astronomical scale. KIC 8462852 has already shown us how an unexpected need to examine old data can propel a scientific debate and flesh out information about a newly discovered mystery.
Well, it happened… Since last Saturday, for the seventh time since 2012, Sassy is again on her trailer and in the dry, ready for another Canadian winter.
She was brought to Canada in March 2010 and stayed in “dry dock” that year and the next. However, every season since 2012 she has been occupying slip 16 in the X-dock at the marina of the Nepean Sailing Club. Here are some statistics:
Estimating an outing every 2-3 weeks, Sassy may have made close to 8 outings each season, for a total of 50-60 so far. Estimating each outing at 10 nautical miles (NM), she would have been underway 80 NM each year, for a total so far of 500-600 NM. To these one would have to add her week-long cruising and gunkholing adventures in Georgian Bay and the upper Ottawa River in 2010-2015, which would add some 300-400 more miles for a total of 800-1000 NM already under her hull (belt?).
So far, thanks to the very able care of Alex C. (seen wearing rubber boots in the first picture in the composite above) and his colleagues at the J&S Service Station in Blackburn Hamlet, Sassy’s road auxiliary (the 2006 CRD Jeep Liberty) is still going strong at over 150,000 km. And (knock wood) so is Sassy’s skipper.
Looking forward to re-launching Sassy in 2019.
Here’s your reminder: MeetBSD is happening October 19-20 in Santa Clara, CA. That’s the end of this week. Go, if you are near.
If you're an American of European descent, there's a 60% chance you can be uniquely identified through public information in DNA databases. This is not information that you have made public; this is information your relatives have made public.
"Identity inference of genomic data using long-range familial searches."
Abstract: Consumer genomics databases have reached the scale of millions of individuals. Recently, law enforcement authorities have exploited some of these databases to identify suspects via distant familial relatives. Using genomic data of 1.28 million individuals tested with consumer genomics, we investigated the power of this technique. We project that about 60% of the searches for individuals of European descent will result in a third cousin or closer match, which can allow their identification using demographic identifiers. Moreover, the technique could implicate nearly any US individual of European descent in the near future. We demonstrate that the technique can also identify research participants of a public sequencing project. Based on these results, we propose a potential mitigation strategy and policy implications for human subject research.
A good news article.
This is a current list of where and when I am scheduled to speak:
The list is maintained on this page.
When I was fifteen, the three coolest cars I could think of were the Camaro Berlinetta, the Subaru Brat, and the Pontiac Fiero. The Camaro’s stereo was mounted on a swiveling stalk that came out of the center console. The Brat had two hard plastic seats in the truck bed that had handles to hold on to instead of seatbelts. The Fiero had speakers built into the headrests.
I look at the cars for sale now, and I don’t see any of those features being offered. It’s almost as if fifteen-year-old me wanted things that weren’t actually good ideas.
I believe that, somewhere, there is a highly qualified security person who has had enough of corporate life and wants instead to make a difference in the world. If that's you, please consider applying.
Last month, I wrote about clean screen writing where all the extraneous information is removed from the screen and the writer is presented with a “blank page.” In the same vein, Abhinav Tushar has an interesting post on making Org mode itself look like a word processor. By that he means that Org looks like a word processor as you enter text.
Mostly it’s a matter of picking a nice font, a nice background, adjusting the spacing, and getting rid of everything but the text you’re editing. That includes hiding all the Org markup and #+ commands. The result, I must admit, looks gorgeous. Still, as I said in the blank page post, it’s not for me. When I’m writing, I like to see the markup because, among other things, it makes it easier to edit the text.
Not everyone will agree, of course, so you should take a look at his results to see if they’re something you’d like. He links to his configuration, so it shouldn’t be hard to recreate his setup or use it as a jumping-off point for your own. Even if you don’t want his setup, you might like using a proportional font in Org. He uses ET Book, which looks very nice. I am impressed at what he was able to do and how nice it looks.
As Douglas Adams explained in The Hitchhiker’s Guide to the Galaxy, digital watches are “pretty neat” to us primitive life forms. Something about the marriage of practicality and sheer nerdiness gets me oddly excited. Somewhere in my fascination I asked myself, “Can I make a digital watch entirely of my own design?” I did! And it taught me a lot about PCB fabrication, low-power programming, and shift registers.
Probably the most important function of a watch is that it keeps time. While you could use your microcontroller to count the seconds and save on parts, there are some major downsides to this. For one, the microcontroller is much worse at keeping time than a dedicated RTC (Real Time Clock) IC; the time would drift significantly with temperature and battery voltage. Another serious problem is that it would require the microcontroller to always be on, keeping track of the time. This would consume much more current than an RTC IC, draining the battery significantly faster. Thus we employ a DS3231 to casually sit in the background, consuming mere microamps from its own back-up battery (which, at a draw of about 2 µA, would take 12.56 years to drain).
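To make the division of labor concrete, here is a minimal Arduino-style C++ sketch (an illustration only, not this watch’s actual firmware) showing how a microcontroller might poll a DS3231 over I2C. The register map and BCD encoding come from the DS3231 datasheet; the wiring and the use of the Wire library are assumptions.

    // Minimal DS3231 read sketch (assumed setup: DS3231 on the I2C bus).
    #include <Wire.h>

    const uint8_t DS3231_ADDR = 0x68;  // fixed 7-bit I2C address per datasheet

    // The DS3231 stores time in BCD: high nibble = tens, low nibble = ones.
    uint8_t bcdToDec(uint8_t bcd) {
      return (bcd >> 4) * 10 + (bcd & 0x0F);
    }

    void setup() {
      Serial.begin(9600);
      Wire.begin();
    }

    void loop() {
      // Point the register pointer at 0x00 (seconds), then read three
      // bytes: seconds, minutes, hours.
      Wire.beginTransmission(DS3231_ADDR);
      Wire.write(0x00);
      Wire.endTransmission();
      Wire.requestFrom(DS3231_ADDR, (uint8_t)3);

      uint8_t sec  = bcdToDec(Wire.read() & 0x7F);  // mask unused top bit
      uint8_t min  = bcdToDec(Wire.read() & 0x7F);
      uint8_t hour = bcdToDec(Wire.read() & 0x3F);  // assumes 24-hour mode

      char buf[9];
      snprintf(buf, sizeof(buf), "%02d:%02d:%02d", hour, min, sec);
      Serial.println(buf);

      delay(1000);  // a real watch would sleep and wake on the RTC alarm pin
    }

In a design like this the microcontroller can spend nearly all of its time asleep and simply ask the DS3231 for the time when the display needs updating, which is exactly why the RTC’s microamp-level draw is the figure that matters.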
A good, oddball week.
Your unrelated comics link of the week: Draculagate, a book funded by Kickstarter. Watch the video.
I’ve seen a lot of discussion on the Internet lately about the FT-817’s less-than-robust DC power connector. This miniature coaxial connector has long been recognized as a failure waiting to happen. I thought I’d chime in with my crude little hack.
Over the years, users have come up with a variety of ways of dealing with the FT-817’s power connector. If you’re brave enough, you can just hard-wire the power cord directly to the FT-817’s main circuit board and eliminate the connector altogether. You can also buy a really slick adapter that gives you an Anderson Powerpole connector on your FT-817.
When I bought my FT-817 almost 15 years ago, I was immediately leery of the little 4.0 x 1.7 mm power connector; there was no way it was going to hold up in the field. I didn’t know of any commercial options at the time, so I raided my junk box to come up with a solution, albeit a crude one.
I merely attached a small right-angle lug to the FT-817’s ground screw. Then I used a couple of small nylon cable ties to secure the power cable to the lug and provide some strain relief. I installed Powerpole connectors on the other end of the cable. It’s not pretty, but it has served the purpose.
Although my FT-817 doesn’t see as much field use as it used to, my stupid-simple hack is still going strong after 15 years. It doesn’t eliminate the FT-817’s little DC connector, but it has (so far) survived many years of portable use in the field.
72, Craig WB3GCK
Some people see a paper boat, others a goat, some a sloppy triangle and others the mischievous grin of the Cheshire Cat. Whatever we behold in the curious constellation of Capricornus, I was just happy for clear skies to see it last night. Two weeks of cloudy nights have been too much for this star-craved human to bear.
With the return of the stars, I noticed that Mars has been on the move east and now shines from the center of Capricornus. The Red Planet hasn’t lost its visual magnetism despite having faded to magnitude –1.0, in part because it has no competition: Capricornus is one of the fainter constellations, with no star brighter than third magnitude. Mars still bests Canopus, the night sky’s second-brightest star, and stands only half a magnitude behind Sirius, the brightest.
Mars won’t be locked inside its Capricornus cage for very long. Each passing night it edges eastward (to the left as seen from the northern hemisphere) and farther north as it orbits the sun. If you watch closely and compare the planet’s night-by-night position with the stars in the figure, you can actually see Mars move in 24 hours. Don’t wait too long — by Halloween, the planet will have fled the constellation.
But first you have to find Capricornus, and it’s not a bright constellation. Good thing Mars is there. Starting at the planet, look a little more than one outstretched fist to the upper right to find a pair of 3rd magnitude stars, Alpha and Beta Capricorni. Now, return to Mars and look a little more than a fist to the upper left to spot Delta Cap (short for Capricorni). Delta is the brightest star in the constellation by a hair. Second brightest is Beta and third brightest, Alpha.
Usually, a constellation’s brightest star is Alpha, followed by Beta, then Gamma and so on. Usually. But to the ancients, stars had important qualities besides brightness, and sometimes a star’s position in the sky mattered more than splitting hairs over which was brighter.
That’s the case with Alpha Cap, which got the designation because it’s the westernmost bright star in the figure. While it sets before the others, it also is the first to rise, lending it a certain distinction.
Beta really is Capricornus’ second brightest star, and having it right next to Alpha makes for a convenient, easy-to-remember Alpha-Beta pairing.
Astronomer James Kaler once wrote: “‘Omega stars,’ those named with the last letter of the Greek alphabet, get little respect.” That may be true, but Capricornus’ Omega takes a prominent position in the constellation because it marks the bottom apex of a triangle formed by Delta and the Alpha-Beta duo. If you can find the basic triangular outline, with or without the fainter “filler” stars, congratulations!
Capricornus is one of the most ancient constellations and represents a unique figure among the 88 groups, a creature that’s half goat and half fish. The myth comes to us from Mesopotamia some 5,000 years ago. Then, the figure represented the boat of Enki, the Sumerian god of water, wisdom and creation. Enki means “Lord of the Earth,” and his symbols were the fish and the goat, both representations of fertility and identified with the constellation to this day.
Because all stars except the sun are so far away, their light takes from several to tens of thousands of years to reach our eyes. To look at a star is to look back in time. Capricornus telescopes time, taking us back across dozens of centuries to reconnect with one of the great cultures of the ancient world.
The US Government Accountability Office just published a new report: "Weapon Systems Cybersecurity: DOD Just Beginning to Grapple with Scale of Vulnerabilities" (summary here). The upshot won't be a surprise to any of my regular readers: they're vulnerable.
From the summary:
Automation and connectivity are fundamental enablers of DOD's modern military capabilities. However, they make weapon systems more vulnerable to cyber attacks. Although GAO and others have warned of cyber risks for decades, until recently, DOD did not prioritize weapon systems cybersecurity. Finally, DOD is still determining how best to address weapon systems cybersecurity.
In operational testing, DOD routinely found mission-critical cyber vulnerabilities in systems that were under development, yet program officials GAO met with believed their systems were secure and discounted some test results as unrealistic. Using relatively simple tools and techniques, testers were able to take control of systems and largely operate undetected, due in part to basic issues such as poor password management and unencrypted communications. In addition, vulnerabilities that DOD is aware of likely represent a fraction of total vulnerabilities due to testing limitations. For example, not all programs have been tested and tests do not reflect the full range of threats.
It is definitely easier, and cheaper, to ignore the problem or pretend it isn't a big deal. But that's probably a mistake in the long run.
It's no secret that computers are insecure. Stories like the recent Facebook hack, the Equifax hack and the hacking of government agencies are remarkable for how unremarkable they really are. They might make headlines for a few days, but they're just the newsworthy tip of a very large iceberg.
The risks are about to get worse, because computers are being embedded into physical devices and will affect lives, not just our data. Security is not a problem the market will solve. The government needs to step in and regulate this increasingly dangerous space.
The primary reason computers are insecure is that most buyers aren't willing to pay -- in money, features, or time to market -- for security to be built into the products and services they want. As a result, we are stuck with hackable internet protocols, computers that are riddled with vulnerabilities and networks that are easily penetrated.
We have accepted this tenuous situation because, for a very long time, computer security has mostly been about data. Banking data stored by financial institutions might be important, but nobody dies when it's stolen. Facebook account data might be important, but again, nobody dies when it's stolen. Regardless of how bad these hacks are, it has historically been cheaper to accept the results than to fix the problems. But the nature of how we use computers is changing, and that comes with greater security risks.
Many of today's new computers are not just screens that we stare at, but objects in our world with which we interact. A refrigerator is now a computer that keeps things cold; a car is now a computer with four wheels and an engine. These computers sense us and our environment, and they affect us and our environment. They talk to each other over networks, they are autonomous, and they have physical agency. They drive our cars, pilot our planes, and run our power plants. They control traffic, administer drugs into our bodies, and dispatch emergency services. These connected computers and the network that connects them -- collectively known as "the internet of things" -- affect the world in a direct physical manner.
We've already seen hacks against robot vacuum cleaners, ransomware that shut down hospitals and denied care to patients, and malware that shut down cars and power plants. These attacks will become more common, and more catastrophic. Computers fail differently than most other machines: It's not just that they can be attacked remotely -- they can be attacked all at once. It's impossible to take an old refrigerator and infect it with a virus or recruit it into a denial-of-service botnet, and a car without an internet connection simply can't be hacked remotely. But that computer with four wheels and an engine? It -- along with all other cars of the same make and model -- can be made to run off the road, all at the same time.
As the threats increase, our longstanding assumptions about security no longer work. The practice of patching a security vulnerability is a good example of this. Traditionally, we respond to the never-ending stream of computer vulnerabilities by regularly patching our systems, applying updates that fix the insecurities. This fails in low-cost devices, whose manufacturers don't have security teams to write the patches: if you want to update your DVR or webcam for security reasons, you have to throw your old one away and buy a new one. Patching also fails in more expensive devices, and can be quite dangerous. Do we want to allow vulnerable automobiles on the streets and highways during the weeks before a new security patch is written, tested, and distributed?
Another failing assumption is the security of our supply chains. We've started to see political battles about government-placed vulnerabilities in computers and software from Russia and China. But supply chain security is about more than where the suspect company is located: we need to be concerned about where the chips are made, where the software is written, who the programmers are, and everything else.
Last week, Bloomberg reported that China inserted eavesdropping chips into hardware made for American companies like Amazon and Apple. The tech companies all denied the accuracy of this report, which precisely illustrates the problem. Everyone involved in the production of a computer must be trusted, because any one of them can subvert the security. As everything becomes a computer and those computers become embedded in national-security applications, supply-chain corruption will be impossible to ignore.
These are problems that the market will not fix. Buyers can't differentiate between secure and insecure products, so sellers prefer to spend their money on features that buyers can see. The complexity of the internet and of our supply chains makes it difficult to trace a particular vulnerability to a corresponding harm. The courts have traditionally not held software manufacturers liable for vulnerabilities. And, for most companies, it has generally been good business to skimp on security, rather than sell a product that costs more, does less, and is on the market a year later.
The solution is complicated, and it's one I devoted my latest book to answering. There are technological challenges, but they're not insurmountable -- the policy issues are far more difficult. We must engage with the future of internet security as a policy issue. Doing so requires a multifaceted approach, one that requires government involvement at every step.
First, we need standards to ensure that unsafe products don't harm others. We need to accept that the internet is global and regulations are local, and design accordingly. These standards will include some prescriptive rules for minimal acceptable security. California just enacted an Internet of Things security law that prohibits default passwords. This is just one of many security holes that need to be closed, but it's a good start.
We also need our standards to be flexible and easy to adapt to the needs of various companies, organizations, and industries. The National Institute of Standards and Technology's Cybersecurity Framework is an excellent example of this, because its recommendations can be tailored to suit the individual needs and risks of organizations. The Cybersecurity Framework -- which contains guidance on how to identify, prevent, respond to, and recover from security risks -- is voluntary at this point, which means nobody follows it. Making it mandatory for critical industries would be a great first step. An appropriate next step would be to implement more specific standards for industries like automobiles, medical devices, consumer goods, and critical infrastructure.
Second, we need regulatory agencies to penalize companies with bad security, and a robust liability regime. The Federal Trade Commission is starting to do this, but it can do much more. It needs to make the cost of insecurity greater than the cost of security, which means that fines have to be substantial. The European Union is leading the way in this regard: they've passed a comprehensive privacy law, and are now turning to security and safety. The United States can and should do the same.
We need to ensure that companies are held accountable for their products and services, and that those affected by insecurity can recover damages. Traditionally, United States courts have declined to enforce liabilities for software vulnerabilities, and those affected by data breaches have been unable to prove specific harm. Here, we need statutory damages -- harms spelled out in the law that don't require any further proof.
Finally, we need to make it an overarching policy that security takes precedence over everything else. The internet is used globally, by everyone, and any improvements we make to security will necessarily help those we might prefer remain insecure: criminals, terrorists, rival governments. Here, we have no choice. The security we gain from making our computers less vulnerable far outweighs any security we might gain from leaving insecurities that we can exploit.
Regulation is inevitable. Our choice is no longer between government regulation and no government regulation, but between smart government regulation and ill-advised government regulation. Government regulation is not something to fear. Regulation doesn't stifle innovation, and I suspect that well-written regulation will spur innovation by creating a market for security technologies.
No industry has significantly improved the security or safety of its products without the government stepping in to help. Cars, airplanes, pharmaceuticals, consumer goods, food, medical devices, workplaces, restaurants, and, most recently, financial products -- all needed government regulation in order to become safe and secure.
Getting internet safety and security right will depend on people: people who are willing to take the time and expense to do the right things; people who are determined to put the best possible law and policy into place. The internet is constantly growing and evolving; we still have time for our security to adapt, but we need to act quickly, before the next disaster strikes. It's time for the government to jump in and help. Not tomorrow, not next week, not next year, not when the next big technology company or government agency is hacked, but now.
Authorities are now saying at least 18 deaths have been caused by Hurricane Michael across four states. Michael crashed into the Florida Panhandle on October 10 as a powerful Category 4 storm, with sustained winds of 155 mph. One of the hardest-hit towns was Florida’s Mexico Beach, southeast of Panama City, where entire neighborhoods appear to have been erased by the ferocious winds—the debris of the structures scattered far inland among boats and shattered trees. Below, recent photographs from Mexico Beach, Panama City, and neighboring towns, as the full extent of the damage wrought by Michael becomes clearer.
Reverse engineering silicon is a dark art, and when you’re just starting off it’s best to stick to the lesser incantations, curses, and hexes. Hackaday caught up with Ken Shirriff at last year’s Supercon for a chat about the chip decapping and reverse engineering scene.
This season continues to be a busy one for hurricanes and typhoons in the northern hemisphere. And with all this activity, there have been lots of opportunities to observe these powerful storms from the ISS in support of the CyMISS (tropical Cyclone intensity Measurements from the ISS) project. Photography sessions of Hurricane Florence from the ISS were requested by our team during the mornings of September 13 and 14, 2018. October’s Image of the Month is a view of Hurricane Florence from supplemental photographs taken by the crew of the ISS at 11:41:15 GMT (7:41:15 AM EDT) on September 14, 2018, as the Sun was rising over the storm. At this time, the center of Hurricane Florence was located at about 34.2° N, 77.8° W and was just making landfall over North Carolina. Although Florence had been weakening over the previous two days as it approached the US, it was still rated as a Category 1 storm with sustained winds of about 145 kph (90 mph). The original JPEG image of ISS056-E-162816 from the Earth Science and Remote Sensing Unit at NASA Johnson Space Center is shown below.
This JPEG image was created as part of the automated image pipeline that processes the raw electronic files from the Nikon D5 cameras used on the ISS into image products shared with the public. The color balancing algorithm of this system, combined with the large amount of Rayleigh scattering in this oblique view of the storm, gives the image an unnatural-looking bluish cast. To create the Image of the Month, the original raw camera file was used and the color rebalanced using the cloud tops near the eye of the storm as a white standard. The resulting image was then cropped to remove parts of the ISS visible in the near field, leaving an unobstructed view of the storm about 50 minutes after the Sun had risen over its eye. The reddish glow of the rising Sun filtering through the clouds is evident in the foreground. Beyond the eye of the storm, a bright solar glint is visible. This is created by the specular reflection of sunlight off the flat ice crystals in the cirrus cloud layer that blankets this part of the storm.
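For readers curious what using the cloud tops as a “white standard” means in practice, here is a small C++ sketch of the white-patch idea: average each channel over a reference region assumed to be neutral, then scale the channels so that region comes out gray. The function and pixel layout are invented for illustration; the actual rebalance was done on the raw camera file with proper raw-development tools, not code like this.

    // White-patch color balance: scale each channel so that a reference
    // region believed to be neutral (e.g., bright cloud tops) averages gray.
    #include <algorithm>
    #include <array>
    #include <cstdint>
    #include <vector>

    using Pixel = std::array<uint8_t, 3>;  // R, G, B

    void whitePatchBalance(std::vector<Pixel>& image,
                           const std::vector<Pixel>& patch) {
      // Average each channel over the reference patch.
      double avg[3] = {0.0, 0.0, 0.0};
      for (const Pixel& p : patch)
        for (int c = 0; c < 3; ++c) avg[c] += p[c];
      for (int c = 0; c < 3; ++c) avg[c] /= patch.size();

      // The brightest channel sets the target, so all gains are >= 1;
      // assumes the patch is not black in any channel.
      const double target = std::max({avg[0], avg[1], avg[2]});

      // Apply the per-channel gains to every pixel, clamping to 8 bits.
      for (Pixel& p : image)
        for (int c = 0; c < 3; ++c) {
          double v = p[c] * (target / avg[c]);
          p[c] = static_cast<uint8_t>(std::min(v, 255.0));
        }
    }

If the reference patch reads bluish, blue gets a gain of 1 while red and green are boosted, pulling the whole frame back toward neutral.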
This hand-processed color image was remapped to create an image that approximates an overhead view of Hurricane Florence. Shown below, this image covers an area approximately 1,000 kilometers on a side and clearly shows the spiral structure of this historic storm just as it was hitting the Atlantic coast.
The goal of the CyMISS (tropical Cyclone intensity Measurements from the ISS) project is to acquire image sequences of intense tropical cyclones (TCs), such as hurricanes, to support the development of an improved remote sensing method to determine more accurately the strength of these destructive storms using stereoscopy. The CyMISS team at Visidyne would like to thank the crew of the ISS as well as the staff at NASA’s Marshall Space Flight Center and Johnson Space Center for their efforts. The original images are courtesy of the Earth Science and Remote Sensing Unit at NASA Johnson Space Center. The work presented here was supported in part under CASIS Grant GA-2018-272.
Follow Drew Ex Machina on Facebook.
See earlier articles on the CyMISS program here.
Still playing catchup with links.
Earlier this month I spoke at a cybersecurity conference in Albany, N.Y., alongside Tony Sager, senior vice president and chief evangelist at the Center for Internet Security and a former bug hunter at the U.S. National Security Agency. We talked at length about many issues, including supply chain security, and I asked Sager whether he’d heard anything about rumors that Supermicro — a high tech firm in San Jose, Calif. — had allegedly inserted hardware backdoors in technology sold to a number of American companies.
The event Sager and I spoke at was prior to the publication of Bloomberg Businessweek‘s controversial story alleging that Supermicro had duped almost 30 companies into buying backdoored hardware. Sager said he hadn’t heard anything about Supermicro specifically, but we chatted at length about the challenges of policing the technology supply chain.
Below are some excerpts from our conversation. I learned quite a bit, and I hope you will, too.
Brian Krebs (BK): Do you think Uncle Sam spends enough time focusing on the supply chain security problem? It seems like a pretty big threat, but also one that is really hard to counter.
Tony Sager (TS): The federal government has been worrying about this kind of problem for decades. In the 70s and 80s, the government was more dominant in the technology industry and didn’t have this massive internationalization of the technology supply chain.
But even then there were people who saw where this was all going, and there were some pretty big government programs to look into it.
BK: Right, the Trusted Foundry program I guess is a good example.
TS: Exactly. That was an attempt to help support a U.S.-based technology industry so that we had an indigenous place to work with, and where we have only cleared people and total control over the processes and parts.
BK: Why do you think more companies aren’t insisting on producing stuff through code and hardware foundries here in the U.S.?
TS: Like a lot of things in security, the economics always win. And eventually the cost differential for offshoring parts and labor overwhelmed attempts at managing that challenge.
BK: But certainly there are some areas of computer hardware and network design where you absolutely must have far greater integrity assurance?
TS: Right, and this is how they approach things at Sandia National Laboratories [one of three national nuclear security research and development laboratories]. One of the things they’ve looked at is this whole business of whether someone might sneak something into the design of a nuclear weapon.
The basic design principle has been to assume that one person in the process may have been subverted somehow, and the whole design philosophy is built around making sure that no one person gets to sign off on what goes into a particular process, and that there is never unobserved control over any one aspect of the system. So, there are a lot of technical and procedural controls there.
But the bottom line is that doing this is really much harder [for non-nuclear electronic components] because of all the offshoring now of electronic parts, as well as the software that runs on top of that hardware.
BK: So is the government basically only interested in supply chain security so long as it affects stuff they want to buy and use?
TS: The government still has regular meetings on supply chain risk management, but there are no easy answers to this problem. The technical ability to detect something wrong has outpaced the ability to do something about it.
TS: Suppose a nation state dominates a piece of technology and in theory could plant something inside of it. The attacker in this case has a risk model, too. Yes, he could put something in the circuitry or design, but his risk of exposure also goes up.
Could I as an attacker control components that go into certain designs or products? Sure, but it’s often not very clear what the target is for that product, or how you will guarantee it gets used by your target. And there are still a limited set of bad guys who can pull that stuff off. In the past, it’s been much more lucrative for the attacker to attack the supply chain on the distribution side, to go after targeted machines in targeted markets to lessen the exposure of this activity.
BK: So targeting your attack becomes problematic if you’re not really limiting the scope of targets that get hit with compromised hardware.
TS: Yes, you can put something into everything, but all of a sudden you have this massive big data collection problem on the back end where you as the attacker have created a different kind of analysis problem. Of course, some nations have more capability than others to sift through huge amounts of data they’re collecting.
BK: Can you talk about some of the things the government has typically done to figure out whether a given technology supplier might be trying to slip in a few compromised devices among an order of many?
TS: There’s this concept of the “blind buy,” where if you think the threat vector is someone gets into my supply chain and subverts the security of individual machines or groups of machines, the government figures out a way to purchase specific systems so that no one can target them. In other words, the seller doesn’t know it’s the government who’s buying it. This is a pretty standard technique to get past this, but it’s an ongoing cat and mouse game to be sure.
BK: I know you said before this interview that you weren’t prepared to comment on the specific claims in the recent Bloomberg article, but it does seem that supply chain attacks targeting cloud providers could be very attractive for an attacker. Can you talk about how the big cloud providers could mitigate the threat of incorporating factory-compromised hardware into their operations?
TS: It’s certainly a natural place to attack, but it’s also a complicated place to attack — particularly the very nature of the cloud, which is many tenants on one machine. If you’re attacking a target with on-premise technology, that’s pretty simple. But the purpose of the cloud is to abstract machines and make more efficient use of the same resources, so that there could be many users on a given machine. So how do you target that in a supply chain attack?
BK: Is there anything about the way these cloud-based companies operate….maybe just sheer scale…that makes them perhaps uniquely more resilient to supply chain attacks vis-a-vis companies in other industries?
TS: That’s a great question. The countervailing positive trend is that in order to get the kind of speed and scale that the Googles and Amazons and Microsofts of the world want and need, these companies are far less inclined now to just take off-the-shelf hardware and are actually more inclined to build their own.
BK: Can you give some examples?
TS: There’s a fair amount of discussion among these cloud providers about commonalities — what parts of design could they cooperate on so there’s a marketplace for all of them to draw upon. And so we’re starting to see a real shift from off-the-shelf components to things that the service provider is either designing or pretty closely involved in the design, and so they can also build in security controls for that hardware. Now, if you’re counting on people to exactly implement designs, you have a different problem. But these are really complex technologies, so it’s non-trivial to insert backdoors. It gets harder and harder to hide those kinds of things.
BK: That’s interesting, given how much each of us have tied up in various cloud platforms. Are there other examples of how the cloud providers can make it harder for attackers who might seek to subvert their services through supply chain shenanigans?
TS: One factor is they’re rolling this technology out fairly regularly, and on top of that the shelf life of technology for these cloud providers is now a very small number of years. They all want faster, more efficient, powerful hardware, and a dynamic environment is much harder to attack. This actually turns out to be a very expensive problem for the attacker because it might have taken them a year to get that foothold, but in a lot of cases the short shelf life of this technology [with the cloud providers] is really raising the costs for the attackers.
When I looked at what Amazon and Google and Microsoft are pushing for, it’s really a lot of horsepower going into the architecture and designs that support that service model, including the building in of more and more security right up front. Yes, they’re still making lots of use of non-U.S.-made parts, but they’re really aware of that when they do. That doesn’t mean these kinds of supply chain attacks are impossible to pull off, but by the same token they don’t get easier with time.
BK: It seems to me that the majority of the government’s efforts to help secure the tech supply chain come in the form of looking for counterfeit products that might somehow wind up in tanks and ships and planes and cause problems there — as opposed to using that microscope to look at commercial technology. Do you think that’s accurate?
TS: I think that’s a fair characterization. It’s a logistical issue. This problem of counterfeits is a related problem. Transparency is one general design philosophy. Another is accountability and traceability back to a source. There’s this buzzphrase that if you can’t build in security then build in accountability. Basically the notion there was you often can’t build in the best or perfect security, but if you can build in accountability and traceability, that’s a pretty powerful deterrent as well as a necessary aid.
BK: For example….?
TS: Well, there’s this emphasis on high-quality and unchangeable logging. If you can build strong accountability, such that when something goes wrong I can trace it back to who caused it, and trace it back far enough, you make the problem more technically difficult for the attacker. Once I know I can trace back the construction of a computer board to a certain place, you’ve built a different kind of security challenge for the attacker. So the notion there is while you may not be able to prevent every attack, this causes the attacker different kinds of difficulties, which is good news for the defense.
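Sager’s notion of a tamper-evident trail can be sketched in a few lines of code. Below is a toy hash-chain log in C++, where each entry’s digest folds in the previous digest, so altering any past entry breaks every later link. It uses std::hash purely as a stand-in for a real cryptographic hash such as SHA-256, and the log messages are invented for illustration.

    // Tamper-evident hash-chain log sketch: each entry's digest covers the
    // previous digest plus the message, so editing any entry breaks every
    // later link. std::hash is a placeholder for a cryptographic hash.
    #include <cstddef>
    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    struct Entry {
      std::string message;
      std::size_t digest;  // hash of (previous digest + message)
    };

    class ChainedLog {
      std::vector<Entry> entries_;
      std::size_t last_ = 0;

     public:
      void append(const std::string& msg) {
        last_ = std::hash<std::string>{}(std::to_string(last_) + msg);
        entries_.push_back({msg, last_});
      }

      // Recompute the chain; any edited entry changes all later digests.
      bool verify() const {
        std::size_t prev = 0;
        for (const Entry& e : entries_) {
          prev = std::hash<std::string>{}(std::to_string(prev) + e.message);
          if (prev != e.digest) return false;
        }
        return true;
      }
    };

    int main() {
      ChainedLog log;
      log.append("board 7731 assembled at plant A");
      log.append("board 7731 flashed with firmware 1.0.3");
      std::cout << (log.verify() ? "log intact" : "log tampered") << "\n";
    }

The point is the traceability Sager describes: once the digests have been published or mirrored, quietly rewriting one step in a board’s history becomes detectable.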
BK: So is supply chain security more of a physical security or cybersecurity problem?
TS: We like to think of this as we’re fighting in cyber all the time, but often that’s not true. If you can force attackers to subvert your supply chain, then you first off take away the mid-level criminal elements and you force the attackers to do things that are outside the cyber domain, such as set up front companies, bribe humans, etc. And in those domains — particularly the human dimension — we have other mechanisms that are detectors of activity there.
BK: What role does network monitoring play here? I’m hearing a lot right now from tech experts who say organizations should be able to detect supply chain compromises because at some point they should be able to see truckloads of data leaving their networks if they’re doing network monitoring right. What do you think about the role of effective network monitoring in fighting potential supply chain attacks?
TS: I’m not so optimistic about that. It’s too easy to hide. Monitoring is about finding anomalies, either in the volume or type of traffic you’d expect to see. It’s a hard problem category. For the US government, with perimeter monitoring there’s always a trade-off between the ability to monitor traffic and the natural movement of the entire Internet towards encryption by default. So a lot of things we don’t get to touch because of tunneling and encryption, and the Department of Defense in particular has really struggled with this.
Now obviously what you can do is man-in-the-middle traffic with proxies and inspect everything there, and the perimeter of the network is ideally where you’d like to do that, but the speed and volume of the traffic is often just too great.
BK: Isn’t the government already doing this with the “trusted internet connections” or Einstein program, where they consolidate all this traffic at the gateways and try to inspect what’s going in and out?
TS: Yes, so they’re creating a highest-volume, highest-speed problem. To monitor that and not interrupt traffic, you have to have bleeding-edge technology, and then handle a ton of it that is already encrypted. If you’re going to try to proxy that, break it out, do the inspection and then re-encrypt the data, a lot of times that’s hard to keep up with technically and speed-wise.
BK: Does that mean it’s a waste of time to do this monitoring at the perimeter?
TS: No. The initial foothold by the attacker could have easily been via a legitimate tunnel after someone took over an account inside the enterprise. You may not know the real meaning of a particular stream of packets coming through the perimeter until that thing gets through and executes. So you can’t solve every problem at the perimeter. Some things only become obvious, and only make sense to catch, when they open up at the desktop.
BK: Do you see any parallels between the challenges of securing the supply chain and the challenges of getting companies to secure Internet of Things (IoT) devices so that they don’t continue to become a national security threat for just about any critical infrastructure, such as with DDoS attacks like we’ve seen over the past few years?
TS: Absolutely, and again the economics of security are so compelling. With IoT we have the cheapest possible parts and devices with a relatively short life span, and it’s interesting to hear people talking about regulation around IoT. But a lot of the discussion I’ve heard recently does not revolve around top-down solutions but more around how we might learn from places like the Food and Drug Administration about certification of medical devices. In other words, are there known characteristics we would like to see these devices demonstrate before they can be considered, in some generic sense, safe?
BK: How much of addressing the IoT and supply chain problems is about being able to look at the code that powers the hardware and finding the vulnerabilities there? Where does accountability come in?
TS: I used to look at other peoples’ software for a living and find zero-day bugs. What I realized was that our ability to find things as human beings with limited technology was never going to solve the problem. The deterrent effect that people believed someone was inspecting their software usually got more positive results than the actual looking. If they were going to make a mistake – deliberately or otherwise — they would have to work hard at it and if there was some method of transparency, us finding the one or two and making a big deal of it when we did was often enough of a deterrent.
BK: Sounds like an approach that would work well to help us feel better about the security and code inside of these election machines that have become the subject of so much intense scrutiny of late.
TS: We’re definitely going through this now in thinking about the election devices. We’re kind of going through this classic argument where hackers are carrying the noble flag of truth and vendors are hunkering down on liability. So some of the vendors seem willing to do something different, but at the same time they’re kind of trapped now by the good intentions of the open vulnerability community.
The question is, how do we bring some level of transparency to the process, while probably stopping short of vendors exposing their trade secrets and their code to the world? What can they demonstrate in terms of the cost effectiveness of development practices that scrub out some of the problems before they get out there? This is important, because elections need one thing above all: public confidence in the outcome. And of course, one way to achieve that is through greater transparency.
BK: What, if anything, are the takeaways for the average user here? With the proliferation of IoT devices in consumer homes, is there any hope that we’ll see more tools that help people gain more control over how these systems are behaving on the local network?
TS: Most of [the supply chain problem] is outside the individual’s ability to do anything about, and beyond the ability of small businesses to grapple with. It’s in fact outside the autonomy of the average company to figure out. We do need more national focus on the problem.
It’s now almost impossible for consumers to buy electronics that aren’t Internet-connected. The chipsets are so cheap, and the ability for every device to have its own Wi-Fi chip built in means that [manufacturers] are adding them whether it makes sense or not. I think we’ll see more security coming into the marketplace to manage devices. So, for example, you might define rules that say appliances can talk to the manufacturer only.
We’re going to see more easy-to-use tools available to consumers to help manage all these devices. We’re starting to see the fight for dominance in this space already at the home gateway and network management level. As these devices get more numerous and complicated, there will be more consumer oriented ways to manage them. Some of the broadband providers already offer services that will tell what devices are operating in your home and let users control when those various devices are allowed to talk to the Internet.
Since Bloomberg’s story broke, the U.S. Department of Homeland Security and the National Cyber Security Centre, a unit of Britain’s eavesdropping agency, GCHQ, have both come out with statements saying they had no reason to doubt vehement denials by Amazon and Apple that they were affected by any incidents involving Supermicro’s supply chain security. Apple also penned a strongly-worded letter to lawmakers denying claims in the story.
Meanwhile, Bloomberg reporters published a follow-up story citing new, on-the-record evidence to back up claims made in their original story.
Scoping a problem, clearly defining a possible solution, and broadly communicating both are essential skills for all engineers. After all, we do not build systems in isolation. The things we create are for other people, and the choices we make when we build those things have trade-offs and consequences. Further, decision-making is often a murky process fraught with assumptions and misconceptions. To aid in this process, organizations should implement some form of peer-driven architecture review.
What follows is an adaptation of a document I drafted describing an architecture review process that I will be implementing at Bloomberg. I expect this document to be updated periodically. If you would like to contribute, you can find this and other documents relating to software engineering at my engineering repo where I’m collecting my ideas.
To start, consider these quotes regarding engineers as agents of change and the impact of their choices:
“Engineers cause change… We immediately run into three practical difficulties when we consider the engineer’s change: the engineer doesn’t know where he is going, how he is going to get there or if anyone will care when he does.”
We perform architecture reviews to surface potential changes, solicit feedback, develop confidence, and build consensus on new software designs or operational approaches (departures), even (especially) if we are uncertain what the final outcomes will be. These discussions are made up of peers, stakeholders, and anyone who is curious. All attendees are expected to be open-minded, fair, and even-handed. Participation is intended to be broad.
If your first inclination is to begin building something, it’s very likely you’ve not thought through the problem you need to solve. Think deeply about the work and seek first to use existing technology/patterns. If you do not find an existing solution that meets your needs, it may still be out there. Reach out to fellow engineers and solicit feedback. Only afterwards, if you still believe you must depart from current solutions or approaches, initiate an architecture review.
First, describe the problem that is being addressed. If the work is intended to replace an existing solution, describe what is currently in place and why it is believed that a change is necessary.
Provide relevant content from documentation, engineer anecdotes, diagrams or other visual aids, etc. An RFC or design document is an excellent place to consolidate this content. Generally, a design document should include some, or all, of the following elements:
It is not required, though it is strongly recommended, that the proposal be reviewed by the Architecture Review Working Group to help refine it ahead of a scheduled architecture review, where a broader audience is expected to attend.
It is always informative to spin up some VMs/containers and write a few lines of code to ascertain the rough edges of a solution. By all means, do so. Be cautious, however, that you do not pursue a proof-of-concept too deeply before performing an architecture review; your time is valuable and is likely to bear more fruit when spent in a review.
An Architecture Review (AR) is a dialogue where an idea is put forward and reviewed. It is not a presentation to persuade or a sales pitch; such events tend to be one-sided and presuppose a decision has been made. Rather, the anticipated benefits and costs associated with the departure are discussed. Most important to this process is attempting to reduce ambiguity so that the operation of our systems may be easier to reason about. We can never eliminate complexity, but when we appreciate that “[c]omplex systems are intrinsically hazardous systems” we will come to see ARs as a means of defense against future failure. To that end, during a review, questions like the following should be raised:
In general, the AR is not intended as an approval mechanism. This does not mean that participants cannot have dissenting views. The expectation should be that everyone involved has the right to voice their views in a professional and supportive manner. For example, if someone identifies an issue with a proposal, the discussion does not end at the point of its discovery; rather, a conversation should help everyone come to a shared understanding of the shortcoming and provide recommendations for addressing it.
Some discussions may result in an agreement that no work will proceed (i.e. a better solution already exists or the problem no longer needs to be solved) but by and large, “[e]ngineers (both presenting and participating as audience) need to understand the purpose of the architecture review is to develop better outcomes.”
Typically those with direct knowledge of the system and those that may consume it will be in attendance. These people, among others, will include:
While an AR should not be viewed as an approval mechanism, it is expected that the process is managed by a trusted cohort of engineers whose combined experience and influence can be used to effectively drive the direction and execution of work. To aid in the decision-making process and to provide a means for transparently communicating the process, each AR should output the following:
An Architecture Review Working Group is a cohort of engineers who bring a range of experience and perspectives to these discussions and are willing to put in the time and energy to make architecture reviews successful.
A seal pup in Wales, a luxury hotel in a quarry pit in Shanghai, horse racing in Cambodia, space-suit testing in a cave on a tropical island, dancers in Tanzania, damage from Hurricane Michael in Florida, human towers in Catalonia, Swiss fighter aircraft in the Alps, and much more.
As we continue to track the Voyagers into interstellar space, the spacecraft have become the subject of a new documentary. Associate editor Larry Klaes, a long-time Centauri Dreams essayist and commentator, here looks at The Farthest: Voyager in Space, a compelling film released last year. Larry’s deep knowledge of the Voyager mission helps him spot the occasional omission (why no mention of serious problems on the way to Jupiter, or of the historic Voyager 1 photo of Earth and Moon early in the mission?), but he’s taken with the interviews, the special effects and, more often than not, with the spirit of the production. That spirit sometimes downplays science but does give the Golden Record plenty of air-time, including much that was new to me, such as the origin of the “Send more Chuck Berry!” quip, John Lennon’s role, NASA’s ambivalence, and an odd and insulting choice of venue for a key news conference. Read on for what you’ll see and what you won’t in this film about our longest and most distant mission to date.
By Lawrence Klaes
Can one properly represent humanity to the rest of the Milky Way galaxy with just two identical space vessels, each no bigger than a small school bus, and two identical copies of a golden metallic long-playing (LP) record attached to their hulls, carrying in their grooves sample images, sounds, languages, and music of their makers and their world?
Our species can only hope so at this point, since the objects in question left Earth over four decades ago and are now tens of billions of miles into deep space, heading in different directions through the galaxy. Although primarily planned and built to explore the planets, moons, and rings of the outer Sol system, these vessels were ultimately given another purpose, a destiny preordained by their encounters with the places they were sent to. Ironically, that destiny relies on beings for whose existence among the stars we as yet have no evidence.
These records, their carrying vessels, and their missions are the focus of the documentary The Farthest: Voyager in Space, which premiered in 2017. Written and directed by Emer Reynolds and produced by John Murray and Clare Stronge for the Irish documentary company Crossing the Line Productions, The Farthest does a masterful artistic job of introducing two real space probes, on actual missions to alien worlds on a scale never attempted before, to generations who were either too young to remember them or not yet born.
The “stars” of The Farthest were originally designated Mariner 11 and 12 by their creators at the National Aeronautics and Space Administration (NASA). They were the descendants of a successful lineage of American deep space probes whose ancestry included the first visitors to the planets Venus and Mars. However, the space agency wanted to “spice up” its public image and recast these newest members of the Mariner clan as Voyager 1 and 2.
The renaming was appropriate, for these sailors of the interplanetary seas were also descended from what remained of the original Grand Tour plan of the 1960s, which would have examined the outer worlds of the Sol system from Jupiter to Pluto with four nuclear-powered space probes, returning unprecedented quantities of data about them across a range of wavelengths.
Launched from Earth in the late summer of 1977, the Voyagers followed interplanetary trajectories past the gas giant worlds that would eventually fling them fast enough to permanently escape the gravitational influence of our yellow dwarf star. This would make them only the third and fourth space vessels made by humanity to head towards the interstellar realm, after Pioneer 10 and 11, members of another line of automated American space probes with an impressive exploration pedigree, which had celestially paved the way for their more sophisticated brethren just a few years earlier.
The nearly twin Pioneer probes were not only the first spacecraft to visit the planets Jupiter and Saturn between 1973 and 1979, they were also the first to be the recipients of gravitational “slingshots” by those gas giants that sent them towards interstellar space. In case either or both of the probes might one day be found drifting between the stars by sophisticated alien intelligences, each Pioneer carries a small golden plaque attached to their antenna struts. A scientific greeting, the plaques are engraved with basic information on who made these vehicles, what their missions would be, where the probes came from, and when they were lofted into the void.
Recognizing the Voyager missions as two more rare opportunities to preserve and present selected aspects of their species to the wider Milky Way, a small group of professionals from a variety of fields (several of whom were involved with designing the Pioneer Plaques), thinking far ahead, planned and put together a more intricate and detailed “gift package” in the form of a gold-plated copper record. Protected by a thin aluminum cover inscribed with pictogram instructions on how to play it, the Golden Record (as the Voyager Interstellar Record is most often called) contains as much information as could be reasonably etched into the spiraling groove of the 12-inch disc, with over half of the data about the human race being selections of global music.
It has been conservatively estimated that the side of the Golden Records facing outwards (they are bolted to the exteriors of the probes’ main bus) will survive in playable form for at least one billion years in deep space, barring any remote chance of a large cosmic collision or other accident. Hopefully this will be more than enough time for someone to come upon the vessels and their priceless cargo before the galactic environment wears them away to join the rest of its voluminous interstellar dust.
Although the Golden Records were certainly not the main reason for the Voyager missions, they have long since become the prime focus in the public’s mind. After all, the probes’ primary missions ended when Voyager 2 flew through the Neptune system in 1989, and even though both craft are still functioning and collecting in situ science data on the interstellar medium, that current mission will end around 2030, when their nuclear power supplies can no longer generate enough energy to run any of the instruments. This will leave the Voyagers with their final, singular purpose: to carry that shining circular gift among the stars for eons, with the slim but still hopeful possibility that some day another mind in the galaxy will discover it and learn about us.
The Farthest is a beautifully crafted documentary on a subject that has been waiting a long time for a proper and respectful treatment. Emer Reynolds’ love for the subject is evident, inspired by childhood visits to her uncle’s farm in Ireland, with its night skies free of light pollution, as she described in this interview:
“A fascination with space began on those farm stays. ‘Mohober House’s sky at night was the opposite to the skies over our home in Dublin. We would drive to Tipperary in our old Hillman Hunter, my dad doing maths puzzles with us all the way, just for fun.
“As night fell, the dark, dark skies overhead would reveal the sparkling cosmos. I was dazzled and in awe of the visible smudge of our Milky Way Galaxy overhead.
“I would spend hours lying on the grass, staring into the blackness. I was dreaming of tumbling through space, hurtling along at 67,000 mph, clutching onto a fragile blue planet.
“Aliens, Horse Head Nebula, star nurseries and time-travel, and exotic distant worlds filled my head as a child. They still do in fact.
“The film is a love story to that awe and wonder I first felt as a child in Tipperary,” Reynolds said.
With a running time of just under two hours, The Farthest manages quite the balancing act in describing the forty-plus-year history of how the Voyagers came to be and what they accomplished. The presentation includes interviews with nearly two dozen people, most of whom were directly involved either with the development of the space probes and their science missions or with the creation of the Golden Record (and sometimes both). The documentary moves back and forth between the birth, development, launch, and missions of the Voyagers to the four gas and ice giant worlds dominating the outer realm of our celestial neighborhood (Jupiter, Saturn, Uranus, and Neptune) and the parallel development and contents of the Golden Record.
Had Reynolds and her team had the time and resources, it would have been fascinating to see The Farthest become a multi-part documentary. For although the production succeeds in giving an excellent introduction to the overall mission, accomplishments, and people of the Voyager expeditions, each aspect of these deep space probes could have been given its own episode for a truly in-depth treatment.
As someone who has followed the Voyager probes since their conception in the Grand Tour plan, I was keenly aware of various important moments in Voyager history that were left out, no doubt due in large part to time constraints. Each world system visited and revealed by our intrepid robot explorers could easily have been given its own dedicated segment.
Here is just one example: while the documentary did talk a bit about the true nature of Jupiter’s Galilean moon Europa, first made known to humanity by the Voyagers in 1979, it literally just skimmed the surface of this alien world. Europa has a global ocean of liquid water holding twice the volume of all the water found on Earth, perhaps sixty miles deep and hidden beneath a crust of ice covered in many long lines and cracks and comparatively few impact craters. It has been speculated that the ruddy material which permeates these fractures in the ice is organic matter churned up from the ocean below, and perhaps even includes the remains of native aquatic life forms.
As for the technical aspects of the Voyager probes themselves: considering how much The Farthest dwelled, for extra dramatic effect, on the troubles the twin Voyagers had when they were launched into the Final Frontier, I was disappointed to see no mention of the serious problems Voyager 2 had with its radio receivers well before its one and only encounter with Jupiter.
In April of 1978, the space probe’s main radio receiver permanently failed after an unexpected power surge blew its fuses. Thankfully Voyager’s designers had installed a backup receiver which did activate automatically, although it took an entire week for this to happen as mission controllers had to wait for Voyager 2’s onboard computers to recognize that the probe was not receiving any commands from Earth.
Then the team discovered that the backup receiver was having issues of its own: it could not detect changes in radio signal frequencies, due to the failure of its “tracking loop capacitor.” As a result, the probe’s human handlers had to determine which frequency Voyager 2 was listening to and then send commands on the available channel. This situation has persisted through every one of its planetary encounters and right up to the present day.
Had both radio systems failed, Voyager 2’s mission would have been effectively over before it had really begun. Both probes were sophisticated enough to conduct a basic science mission on their own in the event they lost contact with their human controllers, but without a way to relay their precious data back to Earth, no one would ever know what the Voyagers had found out there. In Voyager 2’s case, not only would we have lost its much closer examination of Europa, but also humanity’s first encounters with Uranus and Neptune.
Seeing as we have yet to follow up Voyager 2’s singular flybys of those ice giants with any new probe missions in over three decades, the loss to planetary science had the robot explorer gone permanently silent while sailing through the Main Planetoid Belt cannot be overstated.
I was also surprised that the documentary made no mention of, let alone showed, one of the first historic acts of the mission: just thirteen days after being launched from Cape Canaveral, Florida, Voyager 1 aimed its cameras at the world it had just left and returned the first image of Earth and its moon captured in a single frame, taken from 7.25 million miles away.
This is probably among the more famous images taken by the two deep space probes, and that is saying something. I wish the documentary makers had shown Earth and the Moon as Voyager 1 saw them when the robotic explorer was just underway on its long journey, and then juxtaposed that view with the later segment on the probe’s last image, taken in February of 1990: the very famous photograph of our planet as a Pale Blue Dot, seen from the edge of our Sol system. This would have made for a nice counterpoint, bringing home just how far the Voyagers had gone, not only in distance but in how much they had revealed to and enlightened the species that built them for this adventure, living back there on that blue globe now so far away in space and time.
The special effects in The Farthest are very nicely done. One standout involved Voyager 1 sailing in front of Jupiter and its Great Red Spot while the eerie, screeching radio emissions from the giant planet played. These literally otherworldly sounds turned out to come from immense electrical storms far bigger and more powerful than anything generated on Earth.
I also liked how the segment for each new planet was introduced as the Voyagers approached it for the first time: numerous still frames taken by the probes on approach were combined into a video, with subdued music playing in the background. This gave the viewer a sense of what the mission team felt as the alien globes were slowly revealed in increasing detail by the probes’ electronic eyes.
Another highlight of The Farthest is the interviews with the people who played both direct and indirect roles in the Voyagers and their Golden Records. Among the many standouts is Frank Locatell, Voyager’s Project Engineer for Mechanical Systems, whose expressive and amusing description of how the team had to surreptitiously wrap all of the probes’ external cables in aluminum foil bought from a local grocery store, so that Jupiter’s immense and powerful magnetic field wouldn’t fry the machines into electronic oblivion, is alone worth watching the documentary for.
Nick Sagan, the third son of Carl Sagan, the famous astronomer and science popularizer who did much to make the Pioneer Plaques and Voyager Records a reality, shared his experiences and thoughts as a young boy who was asked to participate in the creation of the Golden Record. Among the sound sections on the LP were samples of 55 human languages greeting whoever would find the Voyagers and their gifts some day. Nick represented all those people who speak English with the phrase: “Hello from the children of planet Earth.”
The documentary made a point of mentioning throughout that the Voyager probes were over forty years old, meaning their technology was from the 1970s: The year 1972 to be precise, as the spacecraft designs had to be “frozen” five years before the probes’ launch dates. Perhaps to younger ears this seems positively ancient, leaving them to wonder how anyone back then, even NASA, could send automated spaceships billions of miles across hostile space to explore alien planets and moons in working order for more than a decade.
Rich Terrile, Voyager Imaging Science, visually brought home just how wide the technological gulf had become over the past four decades by producing a modern key fob and saying that the processing power of the computer chip inside this little device was comparable to the most advanced – and much larger – artificial brains of that earlier era.
As a nice counterpoint, this demonstration was immediately followed by a clip with Voyager Project Manager John Casani, who asked: “What’s wrong with ‘70s technology? I mean, you look at me, I’m 30s technology!” Casani added that he makes no apologies for the “limitations that we were working with at the time. We milked the technology for what we could get from it.” Seeing how the Voyagers have lasted well beyond their initial planned encounters with Jupiter and Saturn from 1979 to 1981, with every intention of recording and returning scientific data on the interstellar medium for perhaps two decades more, no apologies are necessary, indeed.
Apparently the interviews with the Voyager team members and some of the others shown in The Farthest averaged about three hours each. This is yet another argument for an entire documentary series on the Voyager probes, so that those who want to can hear everything these space pioneers accomplished beyond the tantalizing clips. I also hope that the documentary producers have archived, or will archive, these valuable interviews for the benefit of our historical record of the early Space Age.
One surprise gleaned from various interviews and news items regarding The Farthest was the documentary makers’ desire not to get too “space-y science-y” with their presentation. Here is one example of this attitude, straight from Emer Reynolds herself:
“We were trying to find people that would be prepared to talk to us, but more than that – because we wanted to make a film that was very human and tapped into the human side as opposed to dry science,” says Reynolds.
The film, she assures, is not aimed at the “space-y science-y types” – it’s for everyone. It is, at its heart, a human story. “It goes into the heart of what makes us human, the great mysteries that define our existence,” says Reynolds.
Not only are “space-y” and “science-y” not actual words in the English language (plus referring to those who do like such topics as “types” is a bit insensitive and ostracizing), but this flies in the face of the fact that the Voyager missions were all about science, not to mention made possible because of science!
I understand to a degree what the documentary makers were trying to say here. Science can be, and has been, presented by its practitioners in a fashion that is less than palatable to those not versed in the various fields of knowledge, even on something as naturally exciting and wondrous as outer space. However, such a statement (and a grammatically poor one at that) does not speak well of the makers’ perception of their audience, or of the knowledge and interest levels of the audiences themselves.
I also understand that they wanted to be inclusive of their viewing audience, but to throw science under the bus like that is an offense not only to the general public but especially to those who would be drawn to the documentary’s subject matter precisely because it concerns both a historic era in space science and communication with intelligent extraterrestrial life.
Setting aside my wish that they would turn this documentary into a series to expand upon the various aspects of the Voyager missions, I thought they did a fairly good job of presenting the space science of each world visited. They even discussed the wild magnetic field of Uranus, a topic that could easily have become bogged down in someone having to explain its physics to a lay audience.
On the other hand, this may explain why the documentary’s discussions of ETI were fairly basic, sticking to standard viewpoints and ideas about alien life and its possible behaviors. I cannot say I was entirely pleased with the use of children’s drawings of aliens as background effects. They tended to portray extraterrestrials as literal bug-eyed monsters or the big-headed, thin-bodied types of innumerable UFO reports. Such depictions only serve to infantilize the subject and keep it from being taken seriously by the scientific community and others, and such stereotypical, immature presentations of aliens certainly do not help the case for the Golden Records.
It is just sad that our culture is so “afraid” of science, or of anything that might go over our heads, as if audiences would get lost and confused rather than learn something new to expand and enlighten their worldviews. After all, isn’t that what space exploration is all about? The Voyagers did that, big time. And if we want future generations of those “space-y science-y types” to forge new missions to other worlds in our vast Cosmos, then documentarians and educators need to spark their interest and destinies as much as the casually curious viewer’s, if not more so.
The makers of The Farthest worried that too much impersonal science in their documentary would turn away potential viewers, whom they perceived as uncomfortable with the subject matter. In an ironic contrast, a number of the makers and practitioners of the Voyager probes were even more concerned that one particular item being added to the twin vessels was of no scientific value at all.
Jared Lipworth, a consulting producer with HHMI Tangled Bank Studios, which collaborated with the production company Crossing The Line to present The Farthest, stated the situation succinctly in this interview:
“The scientists at the time, a lot of them, did not want to have anything to do with it,” Lipworth said. “They did not want it on [the spacecraft] and didn’t like it was getting all the attention. But that says a lot about humanity. It was as much for us as it was for any alien that might find it.”
The Farthest provides a good deal of evidence for these reactions to the Golden Records from the scientific and engineering quarters of the Voyager team. Several members confirm them in their interviews, including Frank Drake, the famed SETI pioneer who was also heavily involved in designing both the Pioneer Plaques and the Voyager Records. NASA, too, showed its ambivalence towards the records: early press releases with photographs and diagrams of the probes often showed the side that did not carry the familiar disc, as if by hiding the record the space agency could avoid having to discuss it. Since the record was perhaps the most relatable and intriguing aspect of the robotic vessels to the general public, this ploy naturally failed.
The documentary also revealed that the official NASA press conference for the Golden Record, held just days after Voyager 2 was lofted skyward, was shunted off to a nearby second-rate motel. Record team member Timothy Ferris relayed how he and his fellow collaborators had to compete with the music and general noise of a Polish wedding going on nearby, separated only by a large partition. The scenario is amusing at first, until you grasp just how insulting the whole production was to the record team and the concept. However, as with NASA’s attempts to hide the pesky disc in its media documentation, these efforts at obnubilation were ultimately futile.
The space agency did make one last attempt to physically remove the Golden Record from Voyager. Ferris, who was in charge of procuring the music selections, wanted to include a short engraved dedication in the blank space of the record’s run-out groove: “To the makers of music – all worlds, all times.” Ferris was inspired to do this by John Lennon of The Beatles, who in turn recommended his studio engineer, Jimmy Iovine, to help with the production of the records.
This seemingly harmless act did not sit well with NASA, as the words were not included in the very detailed specifications for the Voyager probes, and the agency was ready to replace the records with blank discs. Only the intervention of Carl Sagan with the NASA Administrator kept the Golden Records attached to their spacecraft. Using a bit of poetry, Sagan pointed out that the dedication was the only example of human handwriting aboard the Voyagers. It would not surprise me, though, if a few folks snuck in their signatures and perhaps even a note or two during the assembly process; this was and is a common behavior in the history of lunar and planetary exploration.
The Golden Records do indeed have scientific value: certainly to those who may find them one day, but also for humanity itself. They provide a sociological and psychological study of how we may present ourselves to others, in particular others who are highly intelligent but not necessarily human. Then there was the need to develop technologies and sciences to make these presentations viable aboard a spaceship that has to last a very long time drifting in the interstellar medium while remaining readable by non-human beings. The records did not need a scientific reason to be part of the Voyager missions, despite the protests, but there one is, along with several others, no doubt.
For someone like me, who has spent a lifetime moving about the worlds of science and the humanities and arts, it is hard to imagine how those who can dream of exploring the stars and all their unknowns, and actually do something to make those dreams a reality, could simultaneously dismiss and even feel embarrassed by and hostile to the idea of life elsewhere, including life capable of finding the probe and its shiny metal gift. However, as consulting producer Jared Lipworth said, this “says a lot about humanity,” which is the whole point of the Golden Record’s existence.
On the plus side, it was nice to see several Voyager team members defending what was on the Golden Records, in particular how certain images were not offensive despite public opinion and NASA’s worries about offending taxpayers. This concern stemmed from the Golden Records’ predecessors, the Pioneer Plaques, which depicted a representation of a male and female human without any clothing. People complained that the space agency was sending “smut” into space, among other issues, such as the positions of the man’s appendages compared to those of the woman, who just seemed to be standing there passively.
Having much more room to work with than the plaques, the Golden Records used this extra space to show any recipients how humans reproduce, within limits, of course. One photograph initially chosen showed a man and woman, nude again, holding hands and smiling at each other. The woman was visibly pregnant, or at least her condition would be apparent to any adult human. NASA rejected this image out of fear of more public backlash, so Voyager Record team artist Jon Lomberg had to replace the tasteful photograph with a silhouette of the couple. The replacement at least had the advantage of showing the fetus developing in the woman’s womb as part of the reproduction story.
In a bit of irony, while the original photograph of the expectant couple was shown in the theatrical release of The Farthest, when the documentary arrived on PBS Television and elsewhere, someone had replaced the nude man and woman with the Lomberg silhouette! If you must see the original image, it may be found in the official book on the Golden Records titled Murmurs of Earth: The Voyager Interstellar Record (Random House, 1978), authored by all of the main team members. This is a work that anyone interested in communicating with ETI and Space Age history must have in their library. It would be quite interesting to see how our contemporary prudery might affect the understanding of any record recipients when it came to explaining how the beings who made this by-then ancient artifact and its means of transportation also made copies of themselves.
Speaking of changes to The Farthest in its transition from the cinema to the television screen: in the segment on the Pale Blue Dot, a vignette depicting various scenes of human activity on Earth was inexplicably removed from the documentary’s presentation on Netflix (it remained intact in the PBS incarnation).
The scene was a nice visual accompaniment to Carl Sagan’s wonderful explanation, during a NASA press conference in 1990, of the last image Voyager 1 ever took, less than a year after its twin had successfully flown through the Neptune system. A celebration of human life on Earth in a series of wordless images set to music, it shows what we are, the good and the bad, without becoming too graphic in either direction.
This is somewhat ironic, as the Voyager Record team decided early on only to present the Cosmos with our best face forward so as not to inadvertently frighten or offend any recipients with our less sanguine qualities. Only some of the music pieces give hints to the records’ listeners that we are less than ideal, perfect creatures – though they may be able to figure this out anyway just by examining the Voyagers, which will undoubtedly seem incredibly primitive to them, since we assume those who find the space probes will be experts at interstellar travel and detecting small inert alien vessels in the dark and cold of deep space.
As for this unexpected edit to The Farthest, I must wonder if anyone at Netflix consulted with the documentary makers before making this cut, or if they just decided to second-guess the artists’ decision for reasons I do not readily see, either practical or aesthetic. Removing this celebration of the Pale Blue Dot may have done no permanent damage to The Farthest in terms of conveying information about the Voyager missions, but it did diminish a bit what made this documentary a cut above those works which are in essence a dry recitation of facts.
Having pointed out what was removed from The Farthest, this is a good time and place to state my desire for much more of and about the contents of the Golden Record in the documentary. Naturally, samples of the record’s actual sounds, voices, images, and music appear throughout the film: the very first scene displays a reproduction of part of the letter written by then U.S. President Jimmy Carter specifically for the Golden Record and its recipients, and the interview with Nick Sagan gives an illuminating background on what it was like putting the language segments together, at least in one particular case.
It was also fun to see the Saturday Night Live origin of the joke that any aliens who listened to the record would respond with “Send more Chuck Berry!” given by comedian Steve Martin. Later on we were treated to the amazing moment when, during a celebration of the wildly successful Voyager missions in 1989, both Chuck Berry and Carl Sagan got up together on stage at the Jet Propulsion Laboratory (JPL) in Pasadena, California, singing and dancing to the musician’s hit single, “Johnny B. Goode” – the only representation of rock-and-roll music among the Golden Record’s 27 international songs.
With the positive aspects of the Golden Record as shown in The Farthest duly noted, this documentary made me realize just how much I wanted to see far more about that grooved disc and its contents. Just as its story could and did fill an entire book, along with countless articles in the intervening years, the Voyager Interstellar Record deserves its own documentary to do it proper justice in presenting itself to the vast majority of humanity. After all, it was explicitly designed to be our representative to the rest of the Milky Way galaxy, so I say the species it speaks for should be given the chance to really see the Golden Record, should they so desire.
Of the two hours of samples from humanity and our world, over ninety minutes of the record’s offerings are devoted to global music. The Farthest gave most of its musical attention to Chuck Berry’s landmark song, and certainly, at least for Western audiences, “Johnny B. Goode” was probably among the best known of all the record pieces. It also had the benefit of very relevant and entertaining visuals to go with it.
Berry’s pioneering rock-and-roll number was not the only well-known song on the Golden Record, however. Even those who may know little about Ludwig van Beethoven know at least the first few bars of his Fifth Symphony, which are now preserved for eons in deep space. As for my earlier mention of how the record music contains the few indications that humanity is an imperfect species, “Dark Was the Night, Cold Was the Ground” by blues musician Blind Willie Johnson was chosen, in the words of Timothy Ferris in Murmurs of Earth:
“Johnson’s song concerns a situation he faced many times: Nightfall with no place to sleep. Since humans appeared on Earth, the shroud of night has yet to fall without touching a man or woman in the same plight.”
The final song on the Golden Record, which immediately follows Johnson’s number, was another Beethoven piece: the String Quartet No. 13 in B flat, Opus 130, Cavatina. It is a melancholy number written at a particularly dark time in the fading years of the composer’s life. However, this piece of music, like the being and species who made it, is neither simple nor does it offer simple answers to its meaning, as I quote Ferris once again from Murmurs of Earth:
“But sadness alone can’t define the Cavatina. Strains of hope run through it as well, and something of the serenity of a man who has endured suffering and come to terms with existence perceived without illusion.
“It may be that these ambiguities make for an appropriate conclusion to the Voyager record. We who are living the drama of human life on Earth do not know what measure of sadness or hope is appropriate to our existence. We do not know whether we are living a tragedy or a comedy or a great adventure. The dying Beethoven had no answers to these questions, and knew he had no answers, and had learned to live without them. In the Cavatina, he invites us to stare that situation in the face.”
There was also a recording of the thoughts of Voyager Record team member Ann Druyan, taken with an EEG machine as she consciously pondered various historical ideas, events, and persons for one hour, along with “a couple of irrepressible facts” from her life. This part of the record’s development did not appear in The Farthest either, and Druyan herself was conspicuously absent. With both Druyan and Carl Sagan largely removed from having a direct influence on the documentary (Sagan passed away in December of 1996 from complications of myelodysplasia, although clips of earlier interviews with him permeate the film and provide some measure of his presence), I was left to wonder how the Golden Record might have been presented had the couple been around for their input, especially together.
The makers of The Farthest emphasized multiple times how much they wanted to bring out the human aspects of the Voyager missions. They accomplished this, but as my examples have shown, there is so much more left untapped by this documentary that needs to be shared with the widest audiences possible. When you are trying to describe a complex and incredibly productive space expedition half a century in the making and undertaking, in less than two hours, to an audience that is curious but not expected to be very knowledgeable about the subject matter, the best you can achieve is a sampler of the Voyager missions.
For those who care, The Farthest can only serve to whet one’s appetite for more. That a full-on documentary series or even a historical film (or mini-series) about the Golden Records has yet to be made after all this time is surprising, given all the depth, drama, and excitement that went into their realization.
I do not count John Carpenter’s 1984 science fiction film Starman as a proper answer to my request. Although it is a well-done film, the Golden Record serves only as the catalyst for the rest of the story. Besides, the film presents The Rolling Stones’ 1965 song “(I Can’t Get No) Satisfaction” as part of the music selection on the record, which is patently false. Plus, if the ETI captured Voyager 2 in space in 1984, this would presumably have disrupted its mission to the planet Uranus two years later, and to Neptune three years after that. We never see whether the aliens released the space probe back onto its flight path, but the visiting ETI’s scout ship was later found on Earth with the Golden Record inside it, thus denying other potential recipients of the Voyager the chance to encounter the disc and its contents, presuming the robotic craft was allowed to continue on its cosmic journey. The aliens interpreted the record’s messages as a peaceful invitation to our world, which they accepted by sending a representative of their advanced species, only to have their scout ship and its lone occupant shot down by a missile.
What especially needs to be driven home is the fact that Voyager 1 and 2 are now on their final, ultimate mission as ambassadors of Earth and humanity to the rest of the galaxy. Yes, they are still functioning and taking important measurements of the interstellar medium, but the reasons they were sent into space in the first place are now long behind them. What the Voyagers did was both extraordinary and ground-breaking for planetary science, of course, but other deep space probes have since followed in their paths, carrying more sophisticated instrumentation and returning data that surpass what their mechanical ancestors could gather. That is the evolution of planetary exploration.
Humanity is a short-lived species, both individually and culturally. Human life spans average about eighty years at present, while we have only had any semblance of a civilized society for roughly six thousand years – mere drops in the cosmic bucket as the phrase goes. The Voyager Records will last for at least one billion years if not much longer even by conservative estimates of their survivability in interstellar space.
To compare: one billion years ago, multicellular life was only beginning to emerge in Earth’s oceans, and plants would not move onto dry land for roughly another half-billion years. The evolution of more sophisticated terrestrial creatures was still further in the future. The Voyagers may even survive the demise of Sol and Earth, assuming our planetary system has not already undergone some radical transformation, natural or otherwise, before then.
This is why the actions of a relatively small collection of humans forty years ago, who boldly charged into the void of space and simultaneously dared to imagine communicating with unknown alien intelligences in some unimaginable far-future time, all with technologies that were becoming obsolete before the first mission milestones were even complete, are an incredible and deeply important story that must be told, now and for the edification of our descendants. Sharing our past with our future should and must be done the way the Voyager missions and the Golden Records were put together: as a complementary fusion of science and artistry.
The Farthest was a very good start, but the tale has hardly even begun.
To see more about this documentary along with the Voyager space probes and the Golden Records, visit the official PBS Television Web site.
I’m happy many of my friends from the 1970s are still around. I feel the same way about the twin space probes Voyager 1 and 2, launched by NASA in 1977 and still kicking to this day. Both went on to explore Jupiter and Saturn, with Voyager 2 continuing to Uranus and Neptune. On August 25, 2012, Voyager 1 became the first spacecraft to cross the heliopause, the boundary where the pressure of the solar wind balances the winds blowing from the stars. The heliopause is considered the transition zone between solar and interstellar space. Now Voyager 2 appears poised to enter interstellar space, too.
Voyager 2 is a little less than 11 billion miles (~17.7 billion km) from Earth at the moment, or more than 118 times the distance from Earth to the Sun. Since 2007 it’s been traversing the outermost layer of the heliosphere, a giant bubble surrounding the sun and planets dominated by streams of subatomic particles and magnetic fields from the sun called the solar wind. Within the bubble, we’re somewhat shielded from cosmic rays — high-speed protons bounding about the galaxy that can damage satellite electronics or life outside the protection of the atmosphere and Earth’s magnetic field. But outside of it, their number ticks up and space travel becomes more hazardous.
Since late August, the Cosmic Ray Subsystem instrument on Voyager 2 has measured about a 5% increase in the rate of cosmic rays hitting the spacecraft compared to early August. Another instrument has detected a similar increase in higher-energy cosmic rays. Both are signs we’re approaching the edge of the sun’s magnetic influence.
In May 2012, Voyager 1 experienced an increase in the rate of cosmic rays similar to what Voyager 2 is now detecting. That was about three months before Voyager 1 crossed the heliopause and entered interstellar space. That doesn’t necessarily mean Voyager 2 will break free come December; the spacecraft is in a different location and moving along a different path.
We also know that the sun’s domain expands and contracts in sync with the 11-year sunspot cycle. At cycle maximum, the solar wind blows faster and the heliosphere expands; at minimum it contracts. Voyager 2 is making the crossing very close to solar minimum, in a shrinking heliosphere, so you’d think it would already be in interstellar space by now. My hunch is that the sun’s house is far from an idealized, perfect bubble!
Whatever the timeline, astronomers expect to know soon. Interestingly, just because interstellar space begins doesn’t mean the solar system ends! A vast, roughly spherical cloud of comets called the Oort Cloud may be influenced by interstellar winds and even other stars, but it’s still dominated by the sun’s gravity and is a proper part of the solar system. It is incredibly far away: its inner edge is estimated to be 186 billion miles (300 billion km) from the sun, 50 times farther than Pluto. It’s estimated that Voyager 2 won’t reach the Cloud for another 300 years.
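To put those numbers together (my arithmetic, and assuming Voyager 2 holds its roughly 15 km/s heliocentric speed, which is not stated above):

$$ t = \frac{3\times10^{11}\ \mathrm{km}}{15\ \mathrm{km/s}} \approx 2\times10^{10}\ \mathrm{s} \approx 630\ \mathrm{yr}, $$

so the oft-quoted ~300-year figure corresponds to an inner edge closer to 1,000 AU; published estimates of where the cloud begins vary by a factor of a few.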
That’s so remote that passing stars and big clouds of gas and dust can wiggle one or more comets loose and send them into the inner solar system. Every year, astronomers trace a subset of new comet discoveries back to the Oort Cloud.
Pencil sketch from the eyepiece, using a 5 x 8 blank notecard and a 6-inch f/6 reflector @ 129x. Color noted with averted vision.
I’ve been happily Windows-free for more than 20 years, but it’s a sad fact that some of our brethren are forced, for reasons beyond their control, to work in the Windows environment. That’s hard on any Unix-head, but especially so for Emacs users: Emacs just doesn’t run very well on Windows, and setting up the environment is harder than it should be.
Adrien Brochard recently gave a talk to the New York Emacs Meetup on his solution to this problem. He’s a Linux guy whose current job requires him to work on a Windows machine. His answer is to virtualize Emacs: he runs Emacs on Arch Linux, which in turn runs in a VirtualBox instance. As Brochard points out, this works even on locked-down machines to which the developer doesn’t have administrative rights.
One of the nice things about this solution is how easy it is. Brochard has some scripts he uses to automate the installation of Linux and Emacs. During the presentation, he builds the entire environment, including VirtualBox, from scratch. It takes about 25 minutes but most of it is automated so that once he starts it, he can go on with his presentation. He says that even if you get something wrong you can simply blow away the VirtualBox instance and start over.
There is, of course, some overhead, and Brochard does a good job of discussing that aspect too. All things considered, though, he believes that it’s the best solution for running Emacs on Windows.
The video is about 33 minutes so plan accordingly. It’s an excellent presentation and interesting even if you aren’t faced with running Emacs on Windows.
As a final note, I’ve discovered that I’m more partisan about editors than I thought. Back in 2016 I was outraged when someone did something similar to run Notepad++ under Ubuntu. In retrospect, it seems it was the idea of using Notepad++ that outraged me, not the use of a virtual environment to run it in.
Maker Faire Denver is entering its second year as the only Featured Maker Faire in the Rocky Mountains and surrounding states! It features awe-inspiring maker creations, hands-on activities for makers of all ages, presentations, and competitions. We are also dedicating ourselves to growing our engagement with social impact makers.
Look for our Drew Fustini (@pdp7) in purple!
Bloomberg has another story about hardware surveillance implants in equipment made in China. This implant is different from the one Bloomberg reported on last week. That story has been denied by pretty much everyone else, but Bloomberg is sticking by its story and its sources. (I linked to other commentary and analysis here.)
Again, I have no idea what's true. The story is plausible. The denials are about what you'd expect. My lone hesitation to believing this is not seeing a photo of the hardware implant. If these things were in servers all over the US, you'd think someone would have come up with a photograph by now.
In the hands of a pre-teen boy, everything becomes a weapon. I had a toy Saturn 5 rocket that had a little spring-loaded crew capsule on the end. I launched Buzz Aldrin and Neil Armstrong on many a dangerous mission to one of my brothers’ heads.
Nikon just announced the winners of the 2018 Small World Photomicrography Competition, and it’s shared some of the winning and honored images with us. The contest invites photographers and scientists to submit images of all things visible under a microscope. Nearly 2,500 entries were received from 89 countries in 2018, the 44th year of the competition.
If you’re like me, you automatically think of the Org mode table editor (or Orgtbl minor mode) when you think of tables in Emacs. It’s hard to beat that functionality, and Orgtbl mode makes it available everywhere in Emacs, even if you’re not in an Org buffer. Sometimes, though, you’d like to have special formatting for some or all of the table. That’s where delim-col comes in.
Delim-col is built-in Emacs functionality that allows you to do things like adjust what string separates the columns, add a beginning or ending string to each item, add an ending string for each row, and adjust the padding in the table. It can be really handy for copying and pasting and then reformatting tables from an external source.
I didn’t know about delim-col until I read about it over at Emacs Notes, where you’ll find a good explanation of the facility and what it can do. The Emacs Notes post also offers a bit of Elisp to make choosing the strings and delimiters a bit easier. By default you have to set them using a series of setq statements if you want something different from the built-in choices; the Emacs Notes code arranges for you to be prompted for the values.
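To make that concrete, here’s a minimal sketch of the setq-based approach. The variable names come from the built-in delim-col library; the particular values are purely illustrative:

(require 'delim-col)

;; Wrap each row in brackets, separate the columns with ", ", and pad
;; the columns so the separators line up. Values here are just examples.
(setq delimit-columns-str-before "[ "     ; string inserted at the start of each row
      delimit-columns-str-after " ]"      ; string appended at the end of each row
      delimit-columns-str-separator ", "  ; string placed between columns
      delimit-columns-format 'separator)  ; pad the columns so separators align

;; Then mark the table and run M-x delimit-columns-region to reformat it.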
You probably won’t need the delim-col functionality very often, but when you do, it’s much easier than using something like a keyboard macro. Take a look at the post and see if you don’t agree.
A set-and-forget I2C/digital datalogger from Jan on Hackaday.io
This is a logical development from my first and second logger projects. The idea is simple: The logger needs to be small enough to fit inside small spaces, e.g. a bee hive.
With the press of a button it starts/stops logging. The casing can be as simple as shrink tubing!
With a robotic presence at Ryugu, JAXA’s Hayabusa2 mission is showing what can be done as we subject near-Earth asteroids to scrutiny. We’ll doubtless learn a lot about asteroid composition, all of which can factor into, among other things, the question of how we would approach changing the trajectory of any object that looked like it might come too close to Earth. The case for studying near-Earth asteroids likewise extends to learning more about the evolution of the Solar System.
NASA’s first asteroid sample return mission arrives at near-Earth asteroid Bennu on December 3, with a suite of instruments including the OCAMS camera suite (PolyCam, MapCam, and SamCam), the OTES thermal spectrometer, the OVIRS visible and infrared spectrometer, the OLA laser altimeter, and the REXIS x-ray spectrometer. Like Hayabusa2, the OSIRIS-REx mission is designed to collect a surface sample and return it to Earth.
And while Hayabusa2 has commanded the asteroid headlines in recent days, OSIRIS-REx has been active in adjusting its course for the December arrival. The first of four asteroid approach maneuvers (AAM-1) took place on October 1, with the main engine thrusters braking the craft relative to Bennu, slowing the approach speed by 351.298 meters per second. The current speed is 140 m/sec after a burn that consumed on the order of 240 kilograms of fuel.
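As a rough plausibility check on those figures (my own arithmetic, not the mission team’s), the ideal rocket equation ties the reported delta-v to the propellant consumed. Assuming a hydrazine-class specific impulse of about 230 s and a spacecraft mass of roughly 1,650 kg at the time of the burn (both values are assumptions on my part):

$$ m_p = m_0\left(1 - e^{-\Delta v/(I_{\mathrm{sp}}\,g_0)}\right) \approx 1650\ \mathrm{kg}\times\left(1 - e^{-351.3/(230\times9.81)}\right) \approx 240\ \mathrm{kg}, $$

which is consistent with the reported fuel usage.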
Science operations began on August 17, with PolyCam taking optical navigation images of Bennu on a Monday, Wednesday, and Friday cadence until the AAM-1 burn, then moving to daily ‘OpNavs’ afterwards. The MapCam camera is also taking images that track changes in the light reflected from Bennu’s surface as sunlight strikes it at a variety of angles, helping to determine the asteroid’s albedo.
The AAM-1 maneuver is, as I mentioned above, the first in a series of four designed to slow the spacecraft to match Bennu’s orbit. Asteroid Approach Maneuver-2 is to occur on October 15. As the approach phase continues, OSIRIS-REx has three other high-priority tasks:
It was in August that the PolyCam camera obtained its first image of Bennu, from 2.2 million kilometers out. The MapCam image below followed.
Image: This MapCam image of the space surrounding asteroid Bennu was taken on Sept. 12, 2018, during the OSIRIS-REx mission’s Dust Plume Search observation campaign. Bennu, circled in green, is approximately 1 million km from the spacecraft. The image was created by co-adding 64 ten-second exposures. Credit: NASA/Goddard/University of Arizona.
Bennu shows up here as little more than a dot in an image that is part of OSIRIS-REx’s search for dust and gas plumes on Bennu’s surface. These could present problems during close operations around the object, and could also provide clues about possible cometary activity. The search did not turn up any dust plumes from Bennu, but a second search is planned once the spacecraft arrives.
The first month at the asteroid will be taken up with flybys of Bennu’s north pole, equator and south pole at distances between 19 and 7 kilometers, allowing for direct measurements of its mass as well as close observation of the surface. The surface surveys will allow controllers to identify two possible landing sites for the sample collection, which is scheduled for early July, 2020. OSIRIS-REx will then return to Earth, ejecting the Sample Return Capsule for landing in Utah in September of 2023.
The latest image offered up by the OSIRIS-REx team is an animation showing Bennu brightening during the approach from mid-August to the beginning of October.
Image: This processed and cropped set of images shows Bennu (in the center of the frame) from the perspective of the OSIRIS-REx spacecraft as it approaches the asteroid. During the period between August 17 and October 1, the spacecraft’s PolyCam imager obtained this series of 20 four-second exposures every Monday, Wednesday, and Friday as part of the mission’s optical navigation campaign. From the first to the last image, the spacecraft’s range to Bennu decreased from 2.2 million km to 192,000 km, and Bennu brightened from approximately magnitude 13 to magnitude 8.8 from the spacecraft’s perspective. Date Taken: Aug. 17 – Oct. 1, 2018. Credit: NASA/Goddard/University of Arizona.
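The quoted brightening is roughly what the inverse-square law predicts. As a quick sanity check (my arithmetic, not NASA’s), a change in range from $d_1$ to $d_2$ changes the apparent magnitude by

$$ \Delta m = 5\log_{10}\!\left(\frac{d_1}{d_2}\right) = 5\log_{10}\!\left(\frac{2.2\times10^{6}\ \mathrm{km}}{1.92\times10^{5}\ \mathrm{km}}\right) \approx 5.3, $$

a bit more than the reported change of 13 − 8.8 = 4.2 magnitudes; the gap is plausibly due to changing viewing geometry and the roughness of the magnitude estimates.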
Of all the OSIRIS-REx images I’ve seen so far, I think the one below is the prize. But then, I always did like taking the long view.
Image: On July 16, 2018, NASA’s OSIRIS-REx spacecraft obtained this image of the Milky Way near the star Gamma2 Sagittarii during a routine spacecraft systems check. The image is a 10 second exposure acquired using the panchromatic filter of the spacecraft’s MapCam camera. The bright star in the lower center of the image is Gamma2 Sagittarii, which marks the tip of the spout of Sagittarius’ teapot near the center of the galaxy. The image is roughly centered on Baade’s Window, one of the brightest patches of the Milky Way, which fills approximately one fourth of the field of view. Relatively low amounts of interstellar dust in this region make it possible to view a part of the galaxy that is usually obscured. By contrast, the dark region near the top of the image, the Ink Spot Nebula, is a dense cloud made up of small dust grains that block the light of stars in the background. MapCam is part of the OSIRIS-REx Camera Suite (OCAMS) operated by the University of Arizona. Date Taken: July 16, 2018. Credit: NASA/GSFC/University of Arizona.
We’re 53 days from arrival at Bennu. You can follow news of OSIRIS-REx at its NASA page or via Twitter at @OSIRISREx. Dante Lauretta (Lunar and Planetary Laboratory) is principal investigator on the mission; the University of Arizona OSIRIS-REx page is here. And be sure to check Emily Lakdawalla’s excellent overview of operations at Bennu in the runup to arrival.
Hurricane Michael crashed into Florida on October 10 as a powerful Category 4 storm—the third-most powerful hurricane ever to strike the U.S. mainland. With sustained winds of 155 mph, Michael is the strongest storm to hit Florida in 80 years, and the most powerful to ever strike its panhandle region. Trees were stripped bare, toppled, and splintered; railroad cars were blown off the tracks; houses and buildings were torn and battered; and neighborhoods were left flooded as the storm passed through. Florida residents up and down the coast near Panama City are now assessing the damage, as Michael, now a tropical storm, pushes north into Georgia and the Carolinas.
BSDNow 267 is posted a bit early this week, with an interview of Michael W. Lucas, about his upcoming Absolute FreeBSD 3rd edition and local BUG.
(use-package git-gutter
  ;; Note: only the keyword arguments below appeared in the excerpt;
  ;; the enclosing (use-package git-gutter ...) form is assumed.
  :config (global-git-gutter-mode t)
  :bind (("C-x v s" . git-gutter:stage-hunk)
         ("C-x v n" . git-gutter:next-hunk)
         ("C-x v p" . git-gutter:previous-hunk)))
Microsoft this week released software updates to fix roughly 50 security problems with various versions of its Windows operating system and related software, including one flaw that is already being exploited and another for which exploit code is publicly available.
The zero-day bug — CVE-2018-8453 — affects Windows versions 7, 8.1, 10 and Server 2008, 2012, 2016 and 2019. According to security firm Ivanti, an attacker first needs to log into the operating system, but then can exploit this vulnerability to gain administrator privileges.
Another vulnerability patched on Tuesday — CVE-2018-8423 — was publicly disclosed last month along with sample exploit code. This flaw involves a component shipped on all Windows machines and used by a number of programs, and could be exploited by getting a user to open a specially-crafted file — such as a booby-trapped Microsoft Office document.
KrebsOnSecurity has frequently suggested that Windows users wait a day or two after Microsoft releases monthly security updates before installing the fixes, with the rationale that occasionally buggy patches can cause serious headaches for users who install them before all the kinks are worked out.
This month, Microsoft briefly paused updates for Windows 10 users after many users reported losing all of the files in their “My Documents” folder. The worst part? Rolling back to previous saved versions of Windows prior to the update did not restore the files.
Microsoft appears to have since fixed the issue, but these kinds of incidents illustrate the value of not only waiting a day or two to install updates but also manually backing up your data prior to installing patches (i.e., not just simply counting on Microsoft’s System Restore feature to save the day should things go haywire).
Mercifully, Adobe has spared us an update this month for its Flash Player software, although it has shipped a non-security update for Flash.
As always, if you experience any issues installing any of these patches this month, please feel free to leave a comment about it below; there’s a good chance other readers have experienced the same and may even chime in here with some helpful tips. My apologies for the tardiness of this post; I have been traveling in Australia this past week with only sporadic access to the Internet.
Only two days old, the evening crescent’s back. You can catch it tonight 20 minutes to a half-hour after sunset low in the southwestern sky looking about as thin as a bread crust. A little more than one outstretched fist to its left, look for Jupiter in the twilight sky. Tomorrow night, the moon will stand just shy of 3° due north of the planet, when they’ll be in conjunction.
The solar system’s biggest planet, and the one with by far the most interesting clouds, will only be with us for a few more weeks. Earth’s orbital movement around the sun makes Jupiter appear to slide about a degree to the west each night, toward the solar glare. Jupiter puts up a good fight, struggling to outpace the swifter Earth as it moves east in its orbit, but it’s a losing battle. Sorry, Charlie. By early November the planet will be lost in the glare, reaching conjunction with the sun on Nov. 26. It returns to the morning sky in late December in a wonderful, close conjunction with Mercury on the morning of the solstice.
While we’re on the moon, so to speak, the International Astronomical Union (IAU) today officially approved the naming of two lunar craters to commemorate the 50th anniversary of the Apollo 8 mission — Anders’ Earthrise and 8 Homeward. Anders’ crater is 25 miles (40 km) wide and was previously called Pasteur “T,” an outlier of the much larger Pasteur Crater. 8 Homeward is about 8 miles (12.5 km) across and was originally called “Ganskiy M.” It represents wishes for a safe journey home for the Apollo 8 crew. Both craters lie on the far side of the moon.
Appropriately, the newly named craters appear in the foreground of the famous Earthrise photograph taken by astronaut Bill Anders on Dec. 24, 1968. The image became iconic and has even been credited with starting the environmental movement. And why not? Look how lovely and habitable Earth is next to the barren lunar landscape. Anders summed up the photo and mission best:
“We came all this way to explore the moon, and the most important thing is that we discovered the Earth.”
|Timeline of the Universe, showing recombination, the dark ages (not even labelled because that epoch just isn't interesting enough, apparently), reionization and the age of galaxies. Source. Credit: Bryan Christie Design.|
|Illustration of the transition between the cosmic fireball and the post-recombination Universe. Red spheres are protons, green spheres are neutrons, blue dots are electrons and yellow smudges are photons. The color bar on the bottom represents the average temperature (or energy) of the Universe at that epoch. Source.|
|Map of the cosmic microwave background, the radiation leftover from the primordial cosmic fireball. Tiny fluctuations in the temperature of microwave radiation coming to us from all directions give us clues about how matter was distributed at the earliest times in the Universe. In this rendering, we would be at a tiny dot in the center of the sphere. Source.|
|Opacity and transparency. The primordial fireball was opaque like fire is opaque: energetic particles couple with photons and keep them from free-streaming away. The edge of the wall of flame is like the last-scattering surface, where the light is finally free to escape. The dark ages were opaque like fog is opaque: the light was absorbed and scattered and attenuated. Once reionization cleared the "fog" of the dark ages, light was able to travel unimpeded. Photo sources: here, here and here (but really here).|
|Artist's conception of bubbles of ionized gas percolating through the intergalactic medium (IGM) during the dark ages. The CMB is at the far left, and the right is the present-day Universe. Source: illustration from a Scientific American article by Avi Loeb, which can be found here.|
|The origin of 21 cm radiation. In the higher-energy state, the spins of the hydrogen atom's electron and proton are aligned. If one flips, the atom drops to a lower-energy state and a 21 cm photon is produced. Source.|
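For reference, the numbers behind that transition are standard (they are not in the original captions): the hyperfine splitting corresponds to a frequency and wavelength of

$$\nu_{21} \approx 1420.4\ \mathrm{MHz}, \qquad \lambda_{21} = \frac{c}{\nu_{21}} \approx \frac{3.00\times10^{10}\ \mathrm{cm\,s^{-1}}}{1.4204\times10^{9}\ \mathrm{Hz}} \approx 21.1\ \mathrm{cm}.$$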
|Is it here yet? (Photo by Mike Dodds)|
|Galactic synchrotron radiation at 408 MHz -- the emission grows stronger at lower frequencies. The color scale here gives the brightness temperature (a measure of the intensity of the signal) in Kelvins. For comparison, the 21 cm reionization signal would be around 10 mK. Source.|
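To see why this foreground is such a challenge, note that the reionization-era signal is redshifted (a standard relation, included here for context):

$$\nu_{\mathrm{obs}} = \frac{1420.4\ \mathrm{MHz}}{1+z}, \qquad \text{so } z \approx 7 \;\Rightarrow\; \nu_{\mathrm{obs}} \approx 178\ \mathrm{MHz}.$$

Since the synchrotron brightness temperature rises toward lower frequencies (roughly as $\nu^{-2.5}$), the foreground at those frequencies is even brighter than in the 408 MHz map shown here: of order a hundred Kelvins or more against a ~10 mK signal, a contrast of some four orders of magnitude.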
|Simulation of reionization. Source.|
|Schematic of how much of the Universe we're seeing with different kinds of observations. Red, yellow and green are optical. The black circle around the edge is the CMB. Everything in blue can be observed with 21 cm radio signals. Source: Tegmark & Zaldarriaga 2009.|
|Credit: SPDO/TDP/DRAO/Swinburne Astronomy Productions.|
There’s a certain minimum set of stuff the typical Hackaday reader is likely to have within arm’s reach any time he or she is in the shop. Soldering station? Probably. Oscilloscope? Maybe. Multimeter? Quite likely. But there’s one thing so basic, something without which countless projects would be much more difficult to complete,…
The French street-theatre company Royal de Luxe has presented multiday outdoor performances featuring their giant marionettes for millions of people around the world for more than 20 years. Their current cast of puppets—Big Giant, Little Giantess, Xolo the Dog, Giant Grandmother, and Little Boy Giant—have just been retired, following their final performance last week in Liverpool, England. The BBC reports that the Royal de Luxe artistic director Jean-Luc Courcoult has decided to “end the saga of the Giants,” but he says the group has plans for a new show “involving a silverback gorilla.” Gathered here are images of the giants in Royal de Luxe performances over the years, from England, Mexico, Germany, Chile, and France.
To me, the image below is emblematic of space exploration. We look out at vistas that have never before been seen by human eye, contextualized by the banks of equipment that connect us to our probes on distant worlds. The fact that we can then sling these images globally through the Internet, opening them up to anyone with a computer at hand, gives them additional weight. Through such technologies we may eventually recover what we used to take for granted in the days of the Moon race, a sense of global participation and engagement.
We’re looking at the MASCOT Control Centre at the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR) in Cologne, where the MASCOT lander was followed through its separation from the Japanese Hayabusa2 probe on October 3, its landing on asteroid Ryugu, and the end of the mission, some 17 hours later.
Image: In the foreground is MASCOT project manager Tra-Mi Ho from the DLR Institute of Space Systems in Bremen at the MASCOT Control Centre of the DLR Microgravity User Support Centre in Cologne. In the background is Ralf Jaumann, scientific director of MASCOT, presenting some of the 120 images taken with the DLR camera MASCAM. Credit: DLR.
As scientists from Japan, Germany and France looked on, MASCOT (Mobile Asteroid Surface Scout) successfully acquired data about the surface of the asteroid at several locations and safely returned its data to Hayabusa2 before its battery became depleted. A full 17 hours of battery life allowed an extra hour of operations, data collection, image acquisition and movement to various surface locations.
MASCOT is a mobile device, capable of using its swing-arm to reposition itself as needed. Attitude changes can keep the top antenna directed upward while the spectroscopic microscope faces downwards, a fact controllers put to good use. Says MASCOT operations manager Christian Krause (DLR):
“After a first automated reorientation hop, it ended up in an unfavourable position. With another manually commanded hopping manoeuvre, we were able to place MASCOT in another favourable position thanks to the very precisely controlled swing arm.”
MASCOT moved several meters to its early measuring points, with a longer move at the last as controllers took advantage of the remaining battery life. The lander’s operations spanned three asteroid days and two asteroid nights; with a day-night cycle lasting 7 hours and 36 minutes, its roughly 17 hours on the surface amounted to a bit more than two full rotations of the asteroid.
Image: The images acquired with the MASCAM camera on the MASCOT lander during the descent show an extremely rugged surface covered with numerous angular rocks. Ryugu, a four-and-a-half-billion-year-old C-type asteroid, has shown the scientists something they had not expected, even though more than a dozen asteroids have been explored up close by space probes. In this close-up, there are no areas covered with dust — the regolith that results from the fragmentation of rocks due to exposure to micrometeorite impacts and high-energy cosmic particles over billions of years. The image from the rotating MASCOT lander was taken at a height of about 10 to 20 meters. Credit: MASCOT/DLR/JAXA.
The dark surface of Ryugu reflects only about 2.5 percent of incoming sunlight, so the area shown in the image below is as dark as asphalt. According to DLR, the details of the terrain can still be captured because the photosensitive elements of the 1000 by 1000 pixel CMOS (complementary metal-oxide semiconductor) camera sensor can amplify low light signals and produce usable image data.
Image: DLR’s MASCAM camera took 20 images during MASCOT’s 20-minute fall to Ryugu, following its separation from Hayabusa2, which took place 51 meters above the asteroid’s surface. This image shows the landscape near the first touchdown location on Ryugu from a height of about 25 to 10 meters. Light reflected from the frame structure of the camera body scatters into MASCAM’s field of view (bottom right), an artifact of the Sun backlighting Ryugu. Credit: MASCOT/DLR/JAXA.
Now the data are in hand and the work of evaluating the results of MASCOT’s foray begins. The small lander had a short life, but it seems to have delivered on every expectation. As Hayabusa2 operations continue at Ryugu, we’ll learn a great deal about the early history of the Solar System and the composition of near-Earth asteroids like this one, all of which we’ll be able to weigh against what we find at asteroid Bennu when the OSIRIS-REx mission reaches its target in December.
Image: MASCOT as photographed by the ONC-W2 immediately after separation. MASCOT was captured in three consecutive images, taken between 10:57:54 JST and 10:58:14 JST on October 3. Since separation occurred at 10:57:20 JST, these frames were captured immediately afterwards. The ONC-W2 is a camera attached to the side of the spacecraft that shoots diagonally downward from Hayabusa2, giving an image of MASCOT descending with the surface of Ryugu in the background. Credit: JAXA, University of Tokyo, Kochi University, Rikkyo University, Nagoya University, Chiba Institute of Technology, Meiji University, University of Aizu, AIST.
Bear in mind that the 50th annual meeting of the Division for Planetary Sciences (DPS) of the American Astronomical Society (AAS) is coming up in Knoxville in late October. Among the press conferences scheduled are one covering Hayabusa2 developments and another covering the latest from New Horizons. The coming weeks will be a busy time for Solar System exploration.
From the staff to the Board of Directors, through the program committees, to our members and conference attendees, I have had overwhelmingly positive and thoughtful interactions with all those whom I’ve encountered.
A strong, committed community like this has valuable opinions to offer us, and so we have put together a survey to solicit those opinions.
I’ve long since moved all my writing—at least all my new writing—to Org mode. It does everything I need and lets me stay in Emacs. Different folks favor different strokes, of course, so it’s always interesting to see what they are and especially how they use Emacs to scratch their itches.
Azer Koçulu recently moved to Linux from macOS and wanted to bring his writing workflow with him. He had been using iA Writer on his Mac, but it isn’t available on Linux, so he decided to see if he could recreate the iA Writer experience in Emacs. He was happy with the result and documented what he did in a blog post. It turns out that it’s pretty easy to set up an iA Writer-like environment in Emacs. I’m not an iA Writer user, so I can’t say how faithfully it recreates the experience, but Koçulu finds it a good match.
There’s probably a larger lesson here. If you really like some writing tool but need to move to an environment that doesn’t support it, or simply want to bring your writing into Emacs for some other reason, Emacs probably has you covered. Koçulu shows you how to do it for iA Writer, and here’s a video showing how to do it for Scrivener. Doubtless solutions exist for other tools too.
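For a flavor of what such a setup involves, here is a minimal sketch of an iA Writer-style environment. The package choices (markdown-mode, olivetti) and the settings are illustrative assumptions on my part, not Koçulu’s actual configuration:

;; A minimal iA Writer-style environment: Markdown editing in a single
;; centered column with a proportional font. Packages and settings here
;; are illustrative; adjust to taste.
(use-package markdown-mode
  :ensure t
  :mode "\\.md\\'")

(use-package olivetti
  :ensure t
  :hook (markdown-mode . olivetti-mode)  ; center and narrow the text column
  :config (setq olivetti-body-width 80))

;; Proportional font while writing prose.
(add-hook 'markdown-mode-hook #'variable-pitch-mode)

From there it’s mostly a matter of picking a font and theme you like; the distraction-free feel comes almost entirely from the narrowed, centered column.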
I’m posting a day early because of the time zone difference: there’s a meeting of the Polish BSD User Group tomorrow.