This seems so minor, but such a good idea: a regular check to make sure kernel and userland are in sync.
NASA shared astonishing images of its OSIRIS-REx spacecraft touching an asteroid yesterday, revealing how the vehicle stirred up rocks and debris on the object's surface when it made contact. The goal of the tap was to collect a sample of material from the asteroid, but the engineers behind the spacecraft say they won't know for sure if they collected anything until this weekend, when they spin the vehicle and measure how much material is inside.
However, the OSIRIS-REx team feels confident that they got something. "Bottom line is, from analysis of the images that we've gotten down so far, the sampling event went really well, as good as we could have imagined it...
In Thailand, demonstrations against the military-backed government and Prime Minister Prayut Chan-o-cha have taken place, off and on, since February, interrupted by COVID-19 lockdowns until late July. On October 14, thousands of anti-government protesters rallied near Government House on the anniversary of a 1973 student uprising, calling for the resignation of Prayut Chan-o-cha and for reform of the monarchy. The following day a state of emergency was declared and mass gatherings were prohibited, but thousands of protesters still came out to march. Attempts by the Thai government to disrupt the rallies, by shutting down public transportation systems and trying to block social media channels, have had little effect on the recent string of protests, now in their eighth day.
James Lewis writes on Hackster about how this new initiative from SparkFun will allow people to mix and match multiple 32-bit microcontrollers with a vast array of peripherals:
SparkFun has just announced a new modular ecosystem called MicroMod. Targeting rapid embedded development, MicroMod consists of two pieces: a microcontroller board and a carrier board. The interconnect between the two is the PC industry's M.2 connector.
Look at any embedded device's block diagram, and you'll see a microcontroller in the middle with a bunch of stuff surrounding it. That model is probably why the processor gets picked early in development. But what happens when the design needs a microprocessor with a different architecture? Or when an unexpected capability, like WiFi, creeps into the requirements? In the past, it would take significant effort to change either the processor or, worst case, the rest of the embedded system. With MicroMod's approach, the hardware change is as simple as swapping modules. "The processor you start with is not always the one you end with … MicroMod makes exploring different microcontrollers easy." - Nathan Seidle, SparkFun Founder
The most striking physical feature of MicroMod's processor modules is their size. Their widths are similar to M.2 devices, but their lengths are much shorter. Each processor board contains very few components. For example, the ESP32 board has the SoC, an antenna, a flash memory, and the USB-to-serial chip. That is it! The carrier board contains extras like a reset switch, voltage regulator, USB connector, and in-circuit programming header. With so much pushed to the carrier boards, it is no wonder SparkFun opted for a high-density, high-pin-count, high-speed connector like M.2!
To be clear, while mechanically compatible with the M.2, MicroMod is not electrically compatible. Fortunately, SparkFun has open-sourced the pinout. That step makes it easy to use the pre-made modules or to design your own.
With today's launch, there are three processor boards and four carrier boards available.
This evening, three astronauts who have been living on board the International Space Station for the last six months will return to Earth in a Russian Soyuz capsule, landing in the middle of the Kazakhstan desert. The trio have lived in space for nearly the entirety of the COVID-19 pandemic and are now returning as cases are rising again across the world.
NASA astronaut Chris Cassidy and Russian cosmonauts Ivan Vagner and Anatoly Ivanishin launched to the ISS on the Soyuz back in early April, nearly a month after the World Health Organization had declared that the COVID-19 outbreak had become a pandemic. Thanks to the outbreak, the crew underwent a stricter quarantine than usual before they took off. Cassidy said he remained mostly...
This post describes ongoing research by Kenny Peng, Arunesh Mathur, and Arvind Narayanan. We are grateful to Marshini Chetty for useful feedback.
Computer vision research datasets have been criticized for violating subjects' privacy, reinforcing cultural biases, and enabling questionable applications. But regulating their use is hard.
For example, although the DukeMTMC dataset of videos recorded on Duke's campus was taken down in June 2019 due to a backlash, the data continues to be used by other researchers. We found at least 135 papers that use this data and were published after this date, many of them in the field's most prestigious conferences. Worse, we found that at least 116 of these papers used "derived" datasets, datasets that reuse data from the original source. In particular, the DukeMTMC-ReID dataset remains popular in the field of person re-identification and continues to be free for anyone to download.
The case of DukeMTMC illustrates the challenges of regulating a dataset's usage in light of ethical concerns, especially when the data is separately available in derived datasets. In this post, we show that these problems are endemic, not isolated to this dataset.
Background: Why was DukeMTMC criticized?
DukeMTMC received criticism on two fronts following investigations by MegaPixels and The Financial Times. First, the data collection deviated from IRB guidelines in two respects: the recordings were made outdoors, and the data was made available without protections. Second, the dataset was being used in research with applications to surveillance, an area that has drawn increased scrutiny in recent years.
The backlash toward DukeMTMC was part of growing concerns that the faces of ordinary people were being used without permission to serve questionable ends.
Following its takedown, data from DukeMTMC continues to be used
In response to the backlash, the author of DukeMTMC issued an apology and took down the dataset. It is one of several datasets that have been removed or modified due to ethical concerns. But the story doesn't end there. In the case of DukeMTMC, the data had already been copied into other derived datasets, which use data from the original with some modifications. These include DukeMTMC-SI-Tracklet, DukeMTMC-VideoReID, and DukeMTMC-ReID. Although some of these derived datasets were also taken down, others, like DukeMTMC-ReID, remain freely available.
Yet the data isn't just available; it continues to be used prominently in academic research. We found 135 papers that use DukeMTMC or its derived datasets. These papers were published in venues such as CVPR, AAAI, and BMVC, some of the most prestigious conferences in the field. Furthermore, at least 116 of these used data from derived datasets, showing that regulating a given dataset also requires regulating its derived counterparts.
Together, the availability of the data and the willingness of researchers and reviewers to allow its use have made the removal of DukeMTMC only a cosmetic response to ethical concerns.
This set of circumstances is not unique to DukeMTMC. We found the same result for the MS-Celeb-1M dataset, which was removed by Microsoft in 2019 after receiving criticism. The dataset lives on through several derived datasets, including MS1M-IBUG, MS1M-ArcFace, and MS1M-RetinaFace, each publicly available for download. The original dataset is also available via Academic Torrents. We also found that, like DukeMTMC, this data remains widely used in academic research.
Derived datasets can enable unintended and unethical research
In the case of DukeMTMC, the most obvious ethical concern may have been that the data was collected unethically. However, a second concern, that DukeMTMC was being used for ethically questionable research, namely surveillance, is also relevant to datasets that are collected responsibly.
Even if a dataset was created for benign purposes, it may have uses in more questionable areas. Often, these uses are enabled by a derived dataset. This was the case for DukeMTMC. The authors of the DukeMTMC dataset note that they have never conducted research in facial recognition, and that the dataset was not intended for this purpose. However, the dataset turned out to be particularly popular for the person re-identification problem, which has drawn criticism for its applications to surveillance. This usage was enabled by datasets like DukeMTMC-ReID, which tailored the original dataset specifically to this problem.
Also consider the SMFRD dataset, which was released soon after the COVID-19 pandemic took hold. The dataset contains masked faces, including those in the popular Labeled Faces in the Wild (LFW) dataset with facemasks superimposed. The ethics of masked face recognition is a question for another day, but we point to SMFRD as evidence of the difficulty of anticipating future uses of a dataset. Released more than 12 years after LFW, SMFRD was created in a very different societal context.
It is difficult for a dataset's author to anticipate harmful uses of their dataset, especially those that may arise in the future. However, we suggest that a dataset's author can reasonably anticipate that their dataset has the potential to contribute to unethical research, and accordingly think about how they might restrict their dataset upon release.
Derived datasets are widespread and unregulated
In the few years that DukeMTMC was available, it spawned several derived datasets. MS-Celeb-1M has also been used in several derived datasets.
More popular datasets can spawn even more derived counterparts. For instance, we found that LFW has been used in at least 14 derived datasets, 7 of which make their data freely available for download. These datasets were found through a semi-manual analysis of papers citing LFW. We suspect that many more derived datasets of LFW exist.
Before one can even think about regulating derived datasets, there is a more basic problem: under present circumstances, it is hard to know what derived datasets exist at all.
For both DukeMTMC and LFW, the authors lack control over these derived datasets. Neither requires users to give any information to the authors before using the data, as some other datasets do. The authors also lack control via licensing: DukeMTMC was released under the CC BY-NC-SA 4.0 license, which allows sharing and adapting the dataset as long as the use is non-commercial and attribution is given, and the LFW dataset was released without any license at all.
Though regulating data is notoriously difficult, we suggest steps that the academic community can take in response to the concerns outlined above.
In light of ethical concerns, taking down a dataset is often an inadequate way to prevent its further use. Derived datasets should be identified and taken down as well. Even more importantly, researchers should subsequently not use these datasets, and journals should state that they will not accept papers using them. Just as NeurIPS now requires a broader impact statement, we suggest requiring a statement listing and justifying any datasets used in a paper.
At the same time, more effort should be made to regulate dataset usage from the outset, particularly with respect to the creation of derived datasets. There is a need to keep track of where a dataset's data is available, as well as to regulate the creation of derived datasets that enable unethical research. We suggest that authors consider more restrictive licenses and distribution practices when releasing their datasets.
The NSA released an advisory listing the top twenty-five known vulnerabilities currently being exploited by Chinese nation-state attackers.
This advisory provides Common Vulnerabilities and Exposures (CVEs) known to be recently leveraged, or scanned-for, by Chinese state-sponsored cyber actors to enable successful hacking operations against a multitude of victim networks. Most of the vulnerabilities listed below can be exploited to gain initial access to victim networks using products that are directly accessible from the Internet and act as gateways to internal networks. The majority of the products are either for remote access (T1133) or for external web services (T1190), and should be prioritized for immediate patching.
Space exploration is hard, not least because of how difficult it is to communicate. Astronauts need to talk to mission control, ideally by video, and space vehicles need to send back the data they gather, preferably at high speed and with as little delay as possible. At first, space missions designed and carried their own distinct communications systems; that worked well enough, but it wasn't exactly a paragon of efficiency. Then one day in 1998, the internet pioneer Vinton Cerf imagined a network that could offer a richer capacity to serve the growing number of people and vehicles in space. The dream of an interplanetary internet was born.
But extending the internet to space isn't just a matter of installing Wi-Fi on rockets. Scientists have novel obstacles to contend with: The distances involved are astronomical, and planets move around, potentially blocking signals. Anyone on Earth who wants to send a message to someone or something on another planet must contend with often-disrupted communication paths.
"We started doing the math for the [protocols], which had worked perfectly well here on Earth. However, the speed of light was too slow," Cerf said of his early work with colleagues in the InterPlanetary Networking Special Interest Group. Overcoming that problem would be a major undertaking, but this American computer scientist and former Stanford professor is used to helping make big things happen.
Decades ago, Cerf and Robert Kahn, the "fathers of the internet," developed the architecture and protocol suite for the terrestrial internet known as Transmission Control Protocol/Internet Protocol (TCP/IP). Anyone who has ever surfed the web, sent an email or downloaded an app has them to thank, though Cerf is quick to push back on the fancy title. "A lot of people contributed to the creation of the internet," he said in his usual measured voice.
To transfer data on Earth's internet, TCP/IP requires a complete end-to-end path of routers that forward packets of information through links such as copper or fiber optic cables, or cellular data networks. Cerf and Kahn did not design the internet to store data, partly because memory was too expensive in the early 1970s. So if a link along a path breaks, a router discards the packet, and the source subsequently resends it. This works well in Earth's low-delay and high-connectivity environment. However, networks in space are more prone to disruptions, requiring a different approach.
"TCP/IP doesn't work at interplanetary distances," Cerf said. "So we designed a set of protocols that do."
In 2003, Cerf and a small team of researchers introduced bundle protocols. Bundling is a disruption/delay-tolerant networking (DTN) protocol with the ability to take the internet (literally) out of this world. Like the protocols that underlie Earth's internet, bundling is packet-switched. This means that packets of data travel from source to destination by way of routers that switch the direction in which the data moves along the network's path. However, bundling has properties the terrestrial internet does not have, such as nodes that can store information.
A data packet traveling from Earth to Jupiter might, for example, go through a relay on Mars, Cerf explained. However, when the packet arrives at the relay, some 40 million miles into the 400-million-mile journey, Mars may not be oriented properly to send the packet on to Jupiter. "Why throw the information away, instead of hanging on to it until Jupiter shows up?" Cerf said. This store-and-forward feature allows bundles to navigate toward their destinations one hop at a time, despite large disruptions and delays. His most recent paper on the subject highlights the applicability of Loon SDN, technology capable of managing a network that moves around in the sky, to NASA's next-generation space communications architecture.
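The store-and-forward behavior Cerf describes can be sketched in a few lines of Python. This is a toy model, not the real Bundle Protocol: the node names and the idea of a single "contact" method are invented for illustration. The key point it captures is that a node holds on to a bundle when the next link is down, rather than discarding it as a TCP/IP router would.

```python
from collections import deque

class BundleNode:
    """Toy DTN node: bundles are stored until a link to the
    next hop becomes available, instead of being dropped."""
    def __init__(self, name):
        self.name = name
        self.stored = deque()  # bundles held in custody, awaiting a contact

    def receive(self, bundle):
        self.stored.append(bundle)

    def contact(self, next_hop):
        # A contact window opened: forward everything we are holding.
        while self.stored:
            next_hop.receive(self.stored.popleft())

earth = BundleNode("Earth")
mars = BundleNode("Mars relay")
jupiter = BundleNode("Jupiter orbiter")

earth.receive("science-command-1")
earth.contact(mars)        # Earth-Mars link is up; the bundle moves one hop
# Mars-Jupiter link is down: the bundle simply waits at the relay
assert len(mars.stored) == 1
mars.contact(jupiter)      # link comes up later; the bundle completes the trip
assert list(jupiter.stored) == ["science-command-1"]
```

Each hop succeeds independently, so a disruption anywhere along the path delays delivery instead of forcing an end-to-end retransmission from the source.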
Beyond the interplanetary internet, Cerf, now in his 70s, also focuses on his day job as chief internet evangelist for Google. This is a fancy title he embraces, brimming with a preacher's eagerness to spread the internet, via global policy development, to the billions of people around the world without it. He is at once ambitious with serious ideas, while maintaining a playful side. Even though he typically sports a well-trimmed beard and three-piece suit (some say he's the inspiration for the god-like Architect in the Matrix movies), he once started a keynote speech by unbuttoning his jacket and shirt, Superman-style, to reveal a T-shirt that read: "I P ON EVERYTHING!"
Quanta Magazine caught up with Cerf shortly after he recovered from COVID-19 and just before his participation in the Virtual Heidelberg Laureate Forum. The interview has been condensed and edited for clarity.
In the spring of 1998, nine of us got together at the Jet Propulsion Laboratory to ask: What should we do in anticipation of what we might need for space exploration 25 years from now? Adrian Hooke, who was at JPL and then also served at NASA headquarters, was the guy who really got behind this and pushed. He passed away a few years ago, but he held this team together.
We'd been exploring the solar system for decades, but the exploration, both manned and robotic, has typically involved radio communication, either direct point to point or through what's called a bent pipe: a radio relay that picks up the signal and rebroadcasts it to improve the likelihood that it reaches Earth.
Our group asked: Could we do better? Could we use the internet's technology to improve space communication, especially as the number of spacecraft increases over time, or as we start putting settlements on the moon or Mars?
We don't have to build the whole thing and then hope somebody uses it. We sought to get standards in place, as we have for the internet; offer those standards freely; and then achieve interoperability so that the various spacefaring nations could help each other.
We're taking the next obvious step for multi-mission infrastructure: designing the capability for an interplanetary backbone network. You build what's needed for the next mission. As spacecraft get built and deployed, they carry the standard protocols that become part of the interplanetary backbone. Then, when they finish their primary scientific mission, they get repurposed as nodes in the backbone network. We accrete an interplanetary backbone over time.
In 2004, the Mars rovers were supposed to transmit data back to Earth directly through the deep space network: three big 70-meter antennas in Australia, Spain and California. However, the channel's available data rate was 28 kilobits per second, which isn't much. When they turned the radios on, they overheated. They had to back off, which meant less data would come back. That made the scientists grumpy.
One of the JPL engineers used prototype software (this is so cool!) to reprogram the rovers and orbiters from hundreds of millions of miles away. We built a small store-and-forward interplanetary internet with essentially three nodes: the rovers on the surface of Mars, the orbiters and the deep space network on Earth. That's been running ever since.
We've been refining the design of those protocols, implementing and testing them. The latest protocols are running back-and-forth relays between Earth and the International Space Station. We've done some other really cool tests. One spacecraft, EPOXI, that was off to visit a couple of comets was about 81 light-seconds away from Earth when we were told, "It's OK with us if you upload your protocols and test them on that spacecraft." So we did that too.
We did another test at the ISS where the astronauts were controlling a little robot vehicle in Germany. Normally, you wouldn't do that: If you're trying to steer a vehicle on Mars and it takes 20 minutes for your signal to get there, you might turn the wheel and, 20 minutes later, the car turns and goes over a cliff. Then, 20 minutes after that, you discover that you just lost your $6 billion vehicle. It worked between the ISS and Earth because it's only a few hundred miles. It's not totally crazy to imagine a mission where the astronauts don't actually land on the planet. They simply orbit around it and deploy remote equipment on the surface in real time.
It doesn't take long before you're no longer in interactive mode. You're either in over-and-out mode, or you're in hi-this-is-a-nice-video-recording-I-recorded-several-hours-ago mode, which is like email. The protocols were oriented around the recognition that the delay removes the possibility of interaction. That puts constraints on the protocol designs.
It's one thing to get agreement on the technical design and to implement the protocols. It's something else to get them in use where they're needed. There's a lot of resistance to doing something new because "new" means: "That might be risky!" and "Show me that it works!" You can't show that unless you take the risk.
We're working hard to convince the people designing space missions that the stuff is adequately tested. That's been an uphill battle, and there's still much to be done. We have to get the commercial companies that support space exploration to have off-the-shelf capability. And we have to get scientists who design missions to say: "This is what we're capable of now."
That way, you can be more ambitious and take advantage of the assumption that we have an interplanetary backbone. If you start out on the presumption that you don't, then you'll design a mission with limited communications capability.
The interplanetary internet is an infrastructure that's intended to support interplanetary activity, which could be research but someday could also be commercialization. It's an infrastructure in the same way that the internet is an infrastructure. The internet doesn't invent or discover anything. It's simply the medium through which people can do collaborative work and can discover new things.
An engineer in Sweden had the idea of trying out DTN protocols to track reindeer in Lapland, where the Sami have been herding for 8,000 years. Reindeer wander around and are in and out of radio contact. It's an unpredictable environment, which is very different from an environment in which you can compute orbital mechanics and predict the likely contacts that might occur. This opportunistic capability, when communications are less predictable, is what's being tested in Lapland.
Also, in oceanic research, you have instruments generating and accumulating data on the ocean's surface or the sea floor, but you don't necessarily have continuous connectivity to them. For Earth observations, sensors scattered through a forest could report intermittently rather than broadcast continuously. In a fully connected internet environment, you get rid of the data as you produce it. But that won't work in an environment with intermittent connectivity. You need a protocol that says: "Don't panic! It's OK, just hang onto it." Also, battery-driven devices should not transmit constantly when they could be more efficient.
Store-and-forward, intermittent capability is quite useful terrestrially, especially after a major disaster when you may not have much communications capability. One could use DTN in a rapid recovery mode where resources are not sufficient to provide TCP/IP-style coverage.
We've shown that you can make the DTN protocols work at high speed, even though they have more overhead than the traditional TCP/IP protocols. But it would be very hard to introduce DTN everywhere, because look at how much TCP/IP there is. There's also been an evolution on the internet side to another set of protocols called QUIC that achieves not only faster data rates but also faster recovery from failures or disconnects. However, this evolution is not in the direction of DTN.
On the other hand, for mobiles, where connectivity is still iffy, the DTN functionality might be pretty good. We're now looking at implementation of DTN in the mobile environment.
The abuse of the internet. Misinformation and malware. Harmful attacks. Phishing attacks. Ransomware. It's painful and distressing to realize that people will take an infrastructure like this, which has so much promise and has delivered so much, and do bad things with it. Unfortunately, that's human nature.
We need additional governance in the online environment, but that's hard. The Chinese built a big firewall and then put all their people inside it under surveillance. That's not necessarily a society that the rest of us want to live in, and yet we still have to cope with the problem. How do we cope without going to that extreme? I don't have a good answer. I wish I did.
That's very much on my mind right now. In fact, the interplanetary stuff is a refreshing shift away from that because there we're thinking almost purely about scientific results.
For Immediate Release
October 21, 2020, xʷməθkʷəy̓əm (Musqueam), Sḵwx̱wú7mesh (Squamish) and səl̓ilwətaɬ/Selilwitulh (Tsleil-Waututh)/Vancouver, BC: Ten years after the death of his 22-year-old son, father Al Wright is speaking out against ongoing police violence in the province and calling on all provincial party leaders to prioritize immediate police reform.
Al Wright is the father of Alvin Wright, who was shot and tragically killed by the Langley RCMP on August 7, 2010. According to Al Wright, "It has been ten years since my son's death. I am angered and horrified that police killings are still rampant and nothing is being done about it. Nothing has changed. I thought we would hear more from political leaders about the crisis of policing in our province, but that has not been the case during this election. I expect every provincial party leader to prioritize immediate police reform, including meaningful avenues to hold police accountable and a comprehensive review of the Police Act."
Continues Wright, "Ten years ago, the RCMP killed my son. The RCMP entered my son Alvin's house, went upstairs to his bedroom for a wellness check, failed to announce themselves, confronted him with guns drawn, and killed him. I have been seeking justice for my son for ten years, and now so many more families are enduring the same pain and anguish my family and I have suffered. British Columbia has the highest number of police deaths per capita in Canada. The police operate with impunity; they get away with crimes without being charged in ways no civilians ever would; and they basically still investigate themselves. Fifty percent of IIO investigators are former police officers. I just do not understand how this is being allowed to continue."
The BC Civil Liberties Association has worked with Al Wright and other families affected by police violence for several decades. According to Harsha Walia, Executive Director of the BC Civil Liberties Association, "How many more families need to pour out their pain or endure tragedies for immediate action to be taken on the crisis of policing in this province and around the country? There is rising public momentum calling for immediate action from all levels of government to end the harms of policing, especially as it affects Indigenous and Black communities and people in mental health distress. This means no more band-aid solutions and no more funds poured into an unjust system. Family members, such as Mr. Wright, deserve better. All political leaders must prioritize policing issues and ending the harms of policing."
Al Wright: 604-727-6971; Harsha Walia: 778-885-0040
A spacecraft about the size of an SUV continues operations at an asteroid the size of a mountain. The spacecraft is OSIRIS-REx, the asteroid Bennu, and yesterday's successful touchdown and sample collection attempt elicits nothing but admiration for the science team that offered up the SUV comparison. They're collecting materials with a robotic device 321 million kilometers from home. Yesterday's operations seem to have gone off without a hitch, the only lingering question being whether the sample is sufficient, or whether further sampling in January will be needed.
Preliminary data show that today's sample collection event went as planned. More details to come once all the data from the event are downlinked to Earth. Thanks, everybody, for following along as we journey #ToBennuAndBack!
Next stop: Earth 2023! pic.twitter.com/fP7xdOEeOs
- NASA's OSIRIS-REx (@OSIRISREx) October 20, 2020
If all goes well, we will acquire the largest surface sample from another world since Apollo. TAGSAM is the Touch-And-Go Sample Acquisition Mechanism aboard the craft, a 3.35-meter sampling arm extended from the spacecraft as OSIRIS-REx descended roughly 800 meters to the surface. The 'Checkpoint' burn occurred at 125 meters as the craft maneuvered to reach the sample collection site, dubbed 'Nightingale.' The 'Matchpoint' burn followed 'Checkpoint' by 10 minutes to match Bennu's rotation at the point of contact. A coast past the 'Mount Doom' boulder was followed by touchdown in a crater relatively free of rocks.
This is dramatic stuff. The image below is actually from August, during a rehearsal for the sample collection (images of yesterday's touchdown are to be downlinked to Earth later today), but it's an animated view that gets across the excitement of the event. Mission principal investigator Dante Lauretta (University of Arizona, Tucson) had plenty of good things to say about the result:
"After over a decade of planning, the team is overjoyed at the success of today's sampling attempt. Even though we have some work ahead of us to determine the outcome of the event, the successful contact, the TAGSAM gas firing, and back-away from Bennu are major accomplishments for the team. I look forward to analyzing the data to determine the mass of sample collected."
Image: Captured on Aug. 11, 2020 during the second rehearsal of the OSIRIS-REx mission's sample collection event, this series of images shows the SamCam imager's field of view as the NASA spacecraft approaches asteroid Bennu's surface. The rehearsal brought the spacecraft through the first three maneuvers of the sampling sequence to a point approximately 40 meters above the surface, after which the spacecraft performed a back-away burn. Credit: NASA/Goddard/University of Arizona.
The goal is 60 grams of material. The first indication of sample size will come from new images of the surface, showing how much material was disturbed by the TAGSAM activities. Michael Moreau (NASA GSFC) is OSIRIS-REx deputy project manager:
"Our first indication of whether we were successful in collecting a sample will come on October 21 when we downlink the back-away movie from the spacecraft. If TAG made a significant disturbance of the surface, we likely collected a lot of material."
Images of the TAGSAM head, taken with the camera known as SamCam, should provide evidence of dust and rock in the collector, with some possibility of seeing inside the head to look for evidence of the sample within. Beyond imagery, controllers will try to determine the spacecraft's moment of inertia by extending the TAGSAM arm and spinning the spacecraft about an axis perpendicular to the arm. Comparison to data from a similar maneuver before the sampling should allow engineers to measure the change in the mass of the collection head.
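The spin maneuver amounts to a simple physics measurement. As a rough sketch, treating the collected sample as a point mass at the end of the extended arm, the mass follows from the change in moment of inertia between the before and after spins. All numbers below are invented for illustration; they are not mission data, and the real analysis accounts for far more than a point-mass model.

```python
# Illustrative point-mass estimate of sample mass from the spin maneuver.
# The 3.35-meter arm length comes from the article; the inertia values
# are made up to show the arithmetic.

ARM_LENGTH_M = 3.35

def sample_mass_kg(inertia_before, inertia_after, r=ARM_LENGTH_M):
    """Treat the sample as a point mass at distance r from the spin axis:
    delta_I = m * r**2, so m = delta_I / r**2 (inertias in kg*m^2)."""
    return (inertia_after - inertia_before) / r**2

# A 0.73 kg*m^2 increase in moment of inertia would imply roughly 65 grams,
# comfortably above the 60-gram goal.
m = sample_mass_kg(1000.00, 1000.73)
print(round(m * 1000, 1), "grams")  # prints: 65.0 grams
```

The attraction of this technique is that it needs no new hardware: the spacecraft already measures its own rotation, so a small mass at the end of a long arm produces a measurable inertia change.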
Between the imagery and the mass measurement, we should learn whether at least 60 grams of surface material have been collected. Once this has been verified, the sample collector head can be placed into the Sample Return Capsule (SRC) and the sample arm retracted as controllers look to a departure from Bennu in March of 2021. If necessary, a second maneuver, at the landing backup site called 'Osprey,' could take place on January 12, 2021.
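The moment-of-inertia check described above amounts to simple rigid-body arithmetic: treating the collector head as a point mass at the end of the extended arm, any collected material adds Δm·r² to the spacecraft's moment of inertia about the spin axis. A minimal sketch (all numbers here are hypothetical illustrations, not actual OSIRIS-REx values):

```python
def sample_mass_kg(inertia_before, inertia_after, arm_length_m):
    """Infer collected mass from the change in moment of inertia,
    modeling the collector head as a point mass at the arm's tip."""
    return (inertia_after - inertia_before) / arm_length_m**2

# Hypothetical numbers: with the head 3.35 m from the spin axis, an
# inertia increase of about 0.67 kg*m^2 corresponds to the 60 g goal.
collected = sample_mass_kg(1525.0, 1525.673, 3.35)
print(round(collected * 1000, 1), "grams")
```

The real measurement is harder than this one-liner suggests (the spacecraft is not rigid and the sample is not a point mass), but the principle is the same: compare spin data before and after collection.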
Image: These images show the OSIRIS-REx Touch-and-Go Sample Acquisition Mechanism (TAGSAM) sampling head extended from the spacecraft at the end of the TAGSAM arm. The spacecraft's SamCam camera captured the images on Nov. 14, 2018 as part of a visual checkout of the TAGSAM system, which was developed by Lockheed Martin Space to acquire a sample of asteroid material in a low-gravity environment. The imaging was a rehearsal for a series of observations that will be taken at Bennu directly after sample collection. Credit: NASA/Goddard/University of Arizona.
Sample return is scheduled for September 24, 2023, with the Sample Return Capsule descending by parachute into the western desert of Utah. So far so good, and congratulations all around to the OSIRIS-REx team!
The annual Hackaday Supercon is taking place as Remoticon this year from November 6th to 8th. The talented Thomas Flummer has designed a PCB badge based on the SMD challenge that can be further customized in KiCad.
NOTE: make sure to check "After Dark" in the cart
A long time ago, I worked at a video store. One of my coworkers was a young woman who was trying to quit smoking. She asked me to keep an eye on her, and if she seemed like she was considering having a cigarette, to talk her out of it.
Two days later, another coworker, her "best friend," asked her to go out back and have a cigarette with her. I told her that I'd promised to try to talk her out of it. She said she remembered, and chose not to go smoke. Later her friend scolded me, saying that I didn't know how hard it is to quit and that I should be more supportive. I explained that I thought I was being supportive by helping her be strong. The friend disagreed and maintained that the kind thing was to let her have "one lousy cigarette" with her friends to make quitting easier.

The one who was trying to quit smoking and her best friend didn't work at the video store much longer. They were both fired, and prosecuted. The "best friend" talked her into helping her steal a customer's credit card number and use it to buy an extremely elaborate bong. I'm certain she was extremely supportive through the entire ordeal.
This afternoon, a NASA spacecraft more than 200 million miles from Earth successfully touched the surface of an asteroid, in an attempt to grab a handful of pebbles and dust from the space rock. Data from the spacecraft confirmed that the vehicle did indeed touch the asteroid today, but NASA won't know until tomorrow if it actually snagged a sample of material.

"Touchdown declared," a mission controller announced when the team received data confirming the maneuver. "Sampling in process." The news of the success was met with cheers and applause from engineers following along with the procedure.
The spacecraft that just tapped the asteroid is OSIRIS-REx, and this sampling maneuver has been years in the making. The main purpose of the...
Daniel Fojt updated libedit in DragonFly; not huge, but I mention it because I've seen the very first bug fixed in the commit listing: garbled history.
Kerby Peak is a summit that can be reached on a very well maintained trail with good instructions and descriptions already on the web:
The Hike: https://www.oregonhikers.org/field_guide/Kerby_Peak_Hike
The Trailhead: https://www.oregonhikers.org/field_guide/Kerby_Peak_Trailhead
It's rocky on top so I suggest bringing your own antenna support.
Brandy Peak is a short hike on a good trail. It is accessible in any vehicle. It's not close to a population center. You knew there was a catch. The views from the top are spectacular and 360 degrees. It might be on your shuttle route if you are running the Wild and Scenic section of the Rogue or the Illinois Rivers. The road is closed in winter.
Bear Camp Summit is a name I have given this peak, which is just off of Bear Camp Road (NF-23). Bear Camp Road is famous to river runners as the shuttle road when doing the Wild and Scenic section of the Rogue river. It is infamous for the Kim family tragedy. Don't attempt this activation in winter.
Autumn is definitely the best season. The autumnal equinox took place a few weeks ago, marking the end of summer and the start of fall across the Northern Hemisphere. Once again it is the season of harvests, festivals, migrations, winter preparations, and, of course, spectacular fall foliage. Across the North, people are beginning to feel a crisp chill in the evening air, leaves are splashing mountainsides with bright color, apples and pumpkins are being gathered, and animals are on the move. Collected here are some early images from this year, with maybe more to follow in the weeks to come.
This afternoon, NASA's OSIRIS-REx spacecraft will grab a small sample of rocks from the surface of an asteroid named Bennu zooming through space more than 200 million miles from Earth. It's an ambitious task, but if it works, OSIRIS-REx may eventually return to Earth with the largest sample of material from another space body since NASA's Apollo missions to the Moon.
The OSIRIS-REx spacecraft has been circling Bennu for the last two years, mapping its surface and hunting for the right spot to snag these rocks. After an intense amount of planning from the mission team, the engineers have a target all picked out on Bennu and are ready to send their spacecraft down to the surface. OSIRIS-REx will lightly touch the surface of Bennu with an...
Cedar Springs Mountain is a peak near Grants Pass that is an easy walk from a locked gate, 1.7 miles round trip with 500 feet of elevation gain. There is communications equipment on the summit, but I suffered no RFI because of it. It has a nice operating point with lots of options for stringing antennas including large and small trees and the communications area fence posts. I found an all natural pole support. View is nice, but is blocked a bit by the trees so a bit of walking around is needed to see it all.
Poly Top Butte is a good second summit to add on a trip to China Hat W7O/CE-053. Here are a few notes from my activation:
No sooner had the radical equations of quantum mechanics been discovered than physicists identified one of the strangest phenomena the theory allows.
"Quantum tunneling" shows how profoundly particles such as electrons differ from bigger things. Throw a ball at the wall and it bounces backward; let it roll to the bottom of a valley and it stays there. But a particle will occasionally hop through the wall. It has a chance of "slipping through the mountain and escaping from the valley," as two physicists wrote in Nature in 1928, in one of the earliest descriptions of tunneling.

Physicists quickly saw that particles' ability to tunnel through barriers solved many mysteries. It explained various chemical bonds and radioactive decays and how hydrogen nuclei in the sun are able to overcome their mutual repulsion and fuse, producing sunlight.

But physicists became curious, mildly at first, then morbidly so. How long, they wondered, does it take for a particle to tunnel through a barrier?

The trouble was that the answer didn't make sense.

The first tentative calculation of tunneling time appeared in print in 1932. Even earlier stabs might have been made in private, but "when you get an answer you can't make sense of, you don't publish it," noted Aephraim Steinberg, a physicist at the University of Toronto.

It wasn't until 1962 that a semiconductor engineer at Texas Instruments named Thomas Hartman wrote a paper that explicitly embraced the shocking implications of the math.

Hartman found that a barrier seemed to act as a shortcut. When a particle tunnels, the trip takes less time than if the barrier weren't there. Even more astonishing, he calculated that thickening a barrier hardly increases the time it takes for a particle to tunnel across it. This means that with a sufficiently thick barrier, particles could hop from one side to the other faster than light traveling the same distance through empty space.

In short, quantum tunneling seemed to allow faster-than-light travel, a supposed physical impossibility.

"After the Hartman effect, that's when people started to worry," said Steinberg.

The discussion spiraled for decades, in part because the tunneling-time question seemed to scratch at some of the most enigmatic aspects of quantum mechanics. "It's part of the general problem of what is time, and how do we measure time in quantum mechanics, and what is its meaning," said Eli Pollak, a theoretical physicist at the Weizmann Institute of Science in Israel. Physicists eventually derived at least 10 alternative mathematical expressions for tunneling time, each reflecting a different perspective on the tunneling process. None settled the issue.
But the tunneling-time question is making a comeback, fueled by a series of virtuoso experiments that have precisely measured tunneling time in the lab.
In the most highly praised measurement yet, reported in Nature in July, Steinberg's group in Toronto used what's called the Larmor clock method to gauge how long rubidium atoms took to tunnel through a repulsive laser field.

"The Larmor clock is the best and most intuitive way to measure tunneling time, and the experiment was the first to very nicely measure it," said Igor Litvinyuk, a physicist at Griffith University in Australia who reported a different measurement of tunneling time in Nature last year.

Luiz Manzoni, a theoretical physicist at Concordia College in Minnesota, also finds the Larmor clock measurement convincing. "What they measure is really the tunneling time," he said.

The recent experiments are bringing new attention to an unresolved issue. In the six decades since Hartman's paper, no matter how carefully physicists have redefined tunneling time or how precisely they've measured it in the lab, they've found that quantum tunneling invariably exhibits the Hartman effect. Tunneling seems to be incurably, robustly superluminal.

"How is it possible for it to travel faster than light?" Litvinyuk said. "It was purely theoretical until the measurements were made."
Tunneling time is hard to pin down because reality itself is.
At the macroscopic scale, how long an object takes to go from A to B is simply the distance divided by the object's speed. But quantum theory teaches us that precise knowledge of both distance and speed is forbidden.
In quantum theory, a particle has a range of possible locations and speeds. From among these options, definite properties somehow crystallize at the moment of measurement. How this happens is one of the deepest questions.
The upshot is that until a particle strikes a detector, it's everywhere and nowhere in particular. This makes it really hard to say how long the particle previously spent somewhere, such as inside a barrier. "You cannot say what time it spends there," Litvinyuk said, "because it can be simultaneously in two places at the same time."
To understand the problem in the context of tunneling, picture a bell curve representing the possible locations of a particle. This bell curve, called a wave packet, is centered at position A. Now picture the wave packet traveling, tsunami-like, toward a barrier. The equations of quantum mechanics describe how the wave packet splits in two upon hitting the obstacle. Most of it reflects, heading back toward A. But a smaller peak of probability slips through the barrier and keeps going toward B. Thus the particle has a chance of registering in a detector there.
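The odds that the smaller transmitted peak exists at all can be made quantitative. For the textbook case of a rectangular barrier, the WKB approximation gives a transmission probability that falls off exponentially with barrier width, T ≈ exp(−2κL) with κ = √(2m(V−E))/ħ. A quick sketch of that standard formula (the 5 eV electron and 10 eV barrier are illustrative numbers, not tied to any experiment in the article):

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E  = 9.1093837015e-31  # electron mass, kg
EV   = 1.602176634e-19   # joules per electronvolt

def transmission_wkb(E_eV, V_eV, width_m, mass=M_E):
    """WKB estimate of the probability that a particle of energy E
    tunnels through a rectangular barrier of height V and given width."""
    if E_eV >= V_eV:
        return 1.0  # classically allowed; no tunneling needed
    kappa = math.sqrt(2 * mass * (V_eV - E_eV) * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# A 5 eV electron hitting a 10 eV barrier: the odds fall off
# exponentially as the barrier thickens.
for w in (0.1e-9, 0.5e-9, 1.0e-9):
    print(f"{w*1e9:.1f} nm barrier: T = {transmission_wkb(5.0, 10.0, w):.3e}")
```

This exponential suppression is why tunneling matters for electrons and nuclei but never for baseballs: the exponent scales with the square root of the mass.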
But when a particle arrives at B, what can be said about its journey, or its time in the barrier? Before it suddenly showed up, the particle was a two-part probability wave, both reflected and transmitted. It both entered the barrier and didn't. The meaning of "tunneling time" becomes unclear.

And yet any particle that starts at A and ends at B undeniably interacts with the barrier in between, and this interaction "is something in time," as Pollak put it. The question is, what time is that?

Steinberg, who has had "a seeming obsession" with the tunneling-time question since he was a graduate student in the 1990s, explained that the trouble stems from the peculiar nature of time. Objects have certain characteristics, like mass or location. But they don't have an intrinsic "time" that we can measure directly. "I can ask you, 'What is the position of the baseball?' but it makes no sense to ask, 'What is the time of the baseball?'" Steinberg said. "The time is not a property any particle possesses." Instead, we track other changes in the world, such as ticks of clocks (which are ultimately changes in position), and call these increments of time.

But in the tunneling scenario, there's no clock inside the particle itself. So what changes should be tracked? Physicists have found no end of possible proxies for tunneling time.
Hartman (and LeRoy Archibald MacColl before him in 1932) took the simplest approach to gauging how long tunneling takes. Hartman calculated the difference in the most likely arrival time of a particle traveling from A to B in free space versus a particle that has to cross a barrier. He did this by considering how the barrier shifts the position of the peak of the transmitted wave packet.
But this approach has a problem, aside from its weird suggestion that barriers speed particles up. You can't simply compare the initial and final peaks of a particle's wave packet. Clocking the difference between a particle's most likely departure time (when the peak of the bell curve is located at A) and its most likely arrival time (when the peak reaches B) doesn't tell you any individual particle's time of flight, because a particle detected at B didn't necessarily start at A. It was anywhere and everywhere in the initial probability distribution, including its front tail, which was much closer to the barrier. This gave it a chance to reach B quickly.
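Hartman's peak-tracking delay is usually formalized as the Wigner phase time, the energy derivative of the transmitted wave's phase. For a rectangular barrier the transmission amplitude is known in closed form, and numerically differentiating its phase exhibits the Hartman effect directly: the delay stops growing once the barrier is thick. A sketch using the textbook formulas (illustrative parameters, not any specific experiment):

```python
import cmath
import math

HBAR = 1.054571817e-34   # J*s
M_E  = 9.1093837015e-31  # electron mass, kg
EV   = 1.602176634e-19   # joules per electronvolt

def trans_amp(E_eV, V_eV, L):
    """Exact transmission amplitude through a rectangular barrier (E < V)."""
    k = math.sqrt(2 * M_E * E_eV * EV) / HBAR
    kap = math.sqrt(2 * M_E * (V_eV - E_eV) * EV) / HBAR
    denom = (cmath.cosh(kap * L)
             + 1j * ((kap**2 - k**2) / (2 * k * kap)) * cmath.sinh(kap * L))
    return cmath.exp(-1j * k * L) / denom

def phase_time(E_eV, V_eV, L, dE=1e-4):
    """Wigner phase time: hbar times the energy derivative of the phase
    picked up by the transmitted wave (numerical central difference)."""
    def phase(e):
        k = math.sqrt(2 * M_E * e * EV) / HBAR
        return cmath.phase(trans_amp(e, V_eV, L) * cmath.exp(1j * k * L))
    # wrap the phase difference into (-pi, pi] to dodge branch cuts
    dphi = (phase(E_eV + dE) - phase(E_eV - dE) + math.pi) % (2 * math.pi) - math.pi
    return HBAR * dphi / (2 * dE * EV)

# The delay saturates with thickness (the Hartman effect), so a wide
# enough barrier is crossed "faster" than free flight over that distance.
for L in (0.1e-9, 0.5e-9, 1e-9, 4e-9):
    print(f"{L*1e9:.1f} nm: phase time = {phase_time(5.0, 10.0, L):.3e} s")
```

For these parameters the phase time levels off around a tenth of a femtosecond while a free 5 eV electron needs several femtoseconds to cover 4 nm, which is exactly the paradoxical speedup Hartman described.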
Since particles' exact trajectories are unknowable, researchers sought a more probabilistic approach. They considered the fact that after a wave packet hits a barrier, at each instant there's some probability that the particle is inside the barrier (and some probability that it's not). Physicists then sum up the probabilities at every instant to derive the average tunneling time.

As for how to measure the probabilities, various thought experiments were conceived starting in the late 1960s in which "clocks" could be attached to the particles themselves. If each particle's clock only ticks while it's in the barrier, and you read the clocks of many transmitted particles, they'll show a range of different times. But the average gives the tunneling time.

All of this was easier said than done, of course. "They were just coming up with crazy ideas of how to measure this time and thought it would never happen," said Ramón Ramos, the lead author of the recent Nature paper. "Now the science has advanced, and we were happy to make this experiment real."
Although physicists have gauged tunneling times since the 1980s, the recent rise of ultraprecise measurements began in 2014 in Ursula Keller's lab at the Swiss Federal Institute of Technology Zurich. Her team measured tunneling time using what's called an attoclock. In Keller's attoclock, electrons from helium atoms encounter a barrier, which rotates in place like the hands of a clock. Electrons tunnel most often when the barrier is in a certain orientation, call it noon on the attoclock. Then, when electrons emerge from the barrier, they get kicked in a direction that depends on the barrier's alignment at that moment. To gauge the tunneling time, Keller's team measured the angular difference between noon, when most tunneling events began, and the angle of most outgoing electrons. They measured a difference of 50 attoseconds, or billionths of a billionth of a second.
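The attoclock readout reduces to a proportion: the rotating field sweeps through 360 degrees once per optical period, so an angular offset maps directly onto a time. A sketch of that conversion (the 800 nm wavelength and 6.7-degree offset are illustrative numbers I picked to land near 50 attoseconds, not Keller's actual parameters):

```python
def attoclock_time_s(angle_deg, wavelength_m):
    """Map an attoclock offset angle to a time: circularly polarized
    light sweeps 360 degrees in one optical period T = wavelength / c."""
    C = 299792458.0  # speed of light, m/s
    period = wavelength_m / C
    return (angle_deg / 360.0) * period

# Hypothetical: a 6.7-degree streaking offset with 800 nm light
# corresponds to roughly 50 attoseconds.
print(attoclock_time_s(6.7, 800e-9))
```

The hard part of the real experiment is not this arithmetic but disentangling the tunneling delay from the other forces acting on the outgoing electron.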
Then in work reported in 2019, Litvinyuk's group improved on Keller's attoclock experiment by switching from helium to simpler hydrogen atoms. They measured an even shorter time of at most two attoseconds, suggesting that tunneling happens almost instantaneously.

But some experts have since concluded that the duration the attoclock measures is not a good proxy for tunneling time. Manzoni, who published an analysis of the measurement last year, said the approach is flawed in a similar way to Hartman's tunneling-time definition: Electrons that tunnel out of the barrier almost instantly can be said, in hindsight, to have had a head start.
Meanwhile, Steinberg, Ramos and their Toronto colleagues David Spierings and Isabelle Racicot pursued an experiment that has been more convincing.
This alternative approach utilizes the fact that many particles possess an intrinsic magnetic property called spin. Spin is like an arrow that is only ever measured pointing up or down. But before a measurement, it can point in any direction. As the Irish physicist Joseph Larmor discovered in 1897, the angle of the spin rotates, or "precesses," when the particle is in a magnetic field. The Toronto team used this precession to act as the hands of a clock, called a Larmor clock.

The researchers used a laser beam as their barrier and turned on a magnetic field inside it. They then prepared rubidium atoms with spins aligned in a particular direction, and sent the atoms drifting toward the barrier. Next, they measured the spin of the atoms that came out the other side. Measuring any individual atom's spin always returns an unilluminating answer of "up" or "down." But do the measurement over and over again, and the collected measurements will reveal how much the angle of the spins precessed, on average, while the atoms were inside the barrier, and thus how long they typically spent there.
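The Larmor-clock readout is the same kind of conversion as the attoclock's: spins precess at the Larmor frequency set by the field strength, so the average precession angle of the transmitted atoms divided by that frequency gives the average time inside the barrier. A sketch with made-up numbers (the actual field strength and angles are in the paper, not reproduced here):

```python
import math

def larmor_time_s(avg_precession_rad, larmor_freq_hz):
    """Average time spent in the field region, from the mean precession
    angle of transmitted atoms and the Larmor precession frequency."""
    return avg_precession_rad / (2 * math.pi * larmor_freq_hz)

# Hypothetical: ~3.83 rad of average precession at a 1 kHz Larmor
# frequency would imply roughly 0.61 ms inside the barrier.
print(larmor_time_s(3.83, 1000.0))
```

The averaging is essential: any single atom reads only "up" or "down," and only the ensemble statistics recover the precession angle.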
The researchers reported that the rubidium atoms spent, on average, 0.61 milliseconds inside the barrier, in line with Larmor clock times theoretically predicted in the 1980s. That's less time than the atoms would have taken to travel through free space. Therefore, the calculations indicate that if you made the barrier really thick, Steinberg said, the speedup would let atoms tunnel from one side to the other faster than light.

In 1907, Albert Einstein realized that his brand-new theory of relativity must render faster-than-light communication impossible. Imagine two people, Alice and Bob, moving apart at high speed. Because of relativity, their clocks tell different times. One consequence is that if Alice sends a faster-than-light signal to Bob, who immediately sends a superluminal reply to Alice, Bob's reply could reach Alice before she sent her initial message. "The achieved effect would precede the cause," Einstein wrote.

Experts generally feel confident that tunneling doesn't really break causality, but there's no consensus on the precise reasons why not. "I don't feel like we have a completely unified way of thinking about it," Steinberg said. "There's a mystery there, not a paradox."

Some good guesses are wrong. Manzoni, on hearing about the superluminal tunneling issue in the early 2000s, worked with a colleague to redo the calculations. They thought they would see tunneling drop to subluminal speeds if they accounted for relativistic effects (where time slows down for fast-moving particles). "To our surprise, it was possible to have superluminal tunneling there too," Manzoni said. "In fact, the problem was even more drastic in relativistic quantum mechanics."

Researchers stress that superluminal tunneling is not a problem as long as it doesn't allow superluminal signaling. It's similar in this way to the "spooky action at a distance" that so bothered Einstein. Spooky action refers to the ability of far-apart particles to be "entangled," so that a measurement of one instantly determines the properties of both. This instant connection between distant particles doesn't cause paradoxes because it can't be used to signal from one to the other.

Considering the amount of hand-wringing over spooky action at a distance, though, surprisingly little fuss has been made about superluminal tunneling. "With tunneling, you're not dealing with two systems that are separate, whose states are linked in this spooky way," said Grace Field, who studies the tunneling-time issue at the University of Cambridge. "You're dealing with a single system that's traveling through space. In that way it almost seems weirder than entanglement."
In a paper published in the New Journal of Physics in September, Pollak and two colleagues argued that superluminal tunneling doesn't allow superluminal signaling for a statistical reason: Even though tunneling through an extremely thick barrier happens very fast, the chance of a tunneling event happening through such a barrier is extraordinarily low. A signaler would always prefer to send the signal through free space.
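The statistical point can be illustrated with the textbook WKB transmission probability: a barrier thick enough to offer a meaningful superluminal head start transmits so few particles that the expected wait for even one dwarfs any light-crossing time. A rough sketch with illustrative numbers (a 5 eV electron, a 10 eV barrier, and an arbitrary incoming flux; none of these come from Pollak's paper):

```python
import math

HBAR = 1.054571817e-34   # J*s
M_E  = 9.1093837015e-31  # electron mass, kg
EV   = 1.602176634e-19   # joules per electronvolt
C    = 299792458.0       # speed of light, m/s

width = 5e-9  # a 5 nm barrier, thick by tunneling standards
kappa = math.sqrt(2 * M_E * 5.0 * EV) / HBAR  # 5 eV below a 10 eV barrier
T = math.exp(-2 * kappa * width)              # WKB transmission probability

rate_in = 1e12              # particles per second thrown at the barrier
wait = 1.0 / (T * rate_in)  # mean wait for a single transmitted particle
light = width / C           # light-crossing time of the barrier

print(f"T = {T:.2e}, mean wait = {wait:.2e} s, light crossing = {light:.2e} s")
```

Even at a trillion attempts per second, the expected wait for one tunneled particle here exceeds the age of the universe, while light crosses the barrier in a few hundredths of an attosecond: no usable signal ever beats free space.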
Why, though, couldn't you blast tons of particles at the ultra-thick barrier in the hopes that one will make it through superluminally? Wouldn't just one particle be enough to convey your message and break physics? Steinberg, who agrees with the statistical view of the situation, argues that a single tunneled particle can't convey information. A signal requires detail and structure, and any attempt to send a detailed signal will always travel faster through the air than through an unreliable barrier.

Pollak said these questions are the subject of future study. "I believe the experiments of Steinberg are going to be an impetus for more theory. Where that leads, I don't know."

The pondering will occur alongside more experiments, including the next on Steinberg's list. By localizing the magnetic field within different regions in the barrier, he and his team plan to probe "not only how long the particle spends in the barrier, but where within the barrier it spends that time," he said. Theoretical calculations predict that the rubidium atoms spend most of their time near the barrier's entrance and exit, but very little time in the middle. "It's kind of surprising and not intuitive at all," Ramos said.

By probing the average experience of many tunneling particles, the researchers are painting a more vivid picture of what goes on "inside the mountain" than the pioneers of quantum mechanics ever expected a century ago. In Steinberg's view, the developments drive home the point that despite quantum mechanics' strange reputation, "when you see where a particle ends up, that does give you more information about what it was doing before."
What: BCCLA at Alberta Court of Appeal to intervene in A.C. and J.F. v. Her Majesty the Queen in Right of Alberta to protect benefits for young adults leaving government care.
When: October 22, 2020, at 10:00 a.m. MST
Where: Alberta Court of Appeal (Edmonton, AB)
Edmonton, AB (Treaty 6 Territory) – On Thursday, October 22, 2020, the BC Civil Liberties Association (BCCLA) will make oral arguments at the Alberta Court of Appeal in A.C. and J.F. v. Her Majesty the Queen in Right of Alberta. This case is about an Alberta law that puts young adults raised in government care at risk of losing financial and emotional benefits to help them transition to independence. A.C., one of the young adults who brought this case, fears that the loss of these benefits will force her to return to sex work and may lead her to engage in substance abuse and contemplate suicide. The Court of Appeal will determine whether it should temporarily suspend the operation of this law due to constitutional concerns.

The BCCLA will argue that the law should be suspended because it violates the rights to life, liberty, and security of the person under s. 7 of the Charter. Section 7 can protect positive socio-economic rights. The protection of positive rights is consistent with Canada's international obligations. The Court must protect positive rights in this case because the challenged law will cause serious hardship and the young adults who will lose benefits are extremely vulnerable.
The BCCLA is represented by Joe Arvay, OC, OBC, QC of Arvay Finlay LLP and Jessica Magonet of the BCCLA.
The SAINT-EX telescope, operated by NCCR PlanetS, produces a nice resonance as I write this morning. The latter acronym stands for the National Centre of Competence in Research PlanetS, operated jointly by the University of Bern and the University of Geneva. The former, SAINT-EX, identifies a project called Search And characterIsatioN of Transiting EXoplanets, and the team involved explicitly states that they shaped their acronym to invoke Antoine de Saint-Exupéry, legendary aviator and author of, among others, Wind, Sand and Stars (1939), Night Flight (1931) and Flight to Arras (1942).

I've talked about Saint-Exupéry now and again throughout the history of Centauri Dreams, not only because he was an inspiration for my own foray into flying, but also because for our interstellar purposes he is credited with this inspirational thought:

"If you want to build a ship, don't drum up the men to gather wood, divide the work and give orders. Instead, teach them to yearn for the vast and endless sea."

I say Saint-Exupéry is "credited" with this because it seems to be a condensation of a longer passage that appeared in his 1948 title La Citadelle, and I've never been able to track the oft-quoted short version down to an original in this specific form. No matter, it's a grand thought and it plays to our passions as explorers and mappers of new worlds.
So good for NCCR PlanetS for the nod to a favorite writer, and thanks for its recent work at TOI-1266, a red dwarf now known to host at least two planets. The confirmation was achieved with the SAINT-EX robotic 1-meter telescope that the project operates in Mexico; the paper was recently published in Astronomy & Astrophysics. TOI identifies a TESS Object of Interest, meaning the object had been considered promising for follow-up work to validate the candidate signatures as planets. The paper on TOI-1266 provides the needed confirmation.
Here's what we know: TOI-1266 b is an apparent sub-Neptune about two and a half times the diameter of the Earth that orbits the primary in 11 days. TOI-1266 c is closer in size to a "super-Earth" in being about one and a half times the size of our planet in a 19-day orbit.
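Those periods pin down the orbital distances via Kepler's third law in solar units (a³ = M·P², with a in AU, P in years, and M in solar masses). Assuming a stellar mass of roughly 0.45 solar masses for this M dwarf (an assumption on my part; the discovery paper has the precise value), both orbits land well inside Mercury's:

```python
def semimajor_axis_au(period_days, stellar_mass_msun):
    """Kepler's third law in solar units: a^3 = M * P^2,
    with a in AU, P in years, and M in solar masses."""
    period_years = period_days / 365.25
    return (stellar_mass_msun * period_years**2) ** (1.0 / 3.0)

# Assumed stellar mass of ~0.45 solar masses for TOI-1266:
print(semimajor_axis_au(11.0, 0.45))  # TOI-1266 b: ~0.07 AU
print(semimajor_axis_au(19.0, 0.45))  # TOI-1266 c: ~0.1 AU
```

Both values are far inside Mercury's 0.39 AU, which is why these warm, close-in orbits are typical of transit detections around small stars.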
The orbits are circular, co-planar and stable. Referring to the more than 3000 transiting planets thus far identified, which includes 499 with more than a single transiting world, the authors discuss distinctive planetary populations. Both planets at TOI-1266 are found at what lead author Brice-Olivier Demory calls the "radius valley," for reasons explained in the paper:

This large sample of transiting exoplanets allows for in-depth exploration of the distinct exoplanet populations. One such study by Fulton et al. (2017) identified a bi-modal distribution for the sizes of super-Earth and sub-Neptune Kepler exoplanets, with a peak at ~1.3 R⊕ and another at ~2.4 R⊕. The interval between the two peaks is called the radius valley and it is typically attributed to the stellar irradiation received by the planets, with more irradiated planets being smaller due to the loss of their gaseous envelopes.
Image: Size of the TOI-1266 system compared to the inner solar system at a scale of one astronomical unit, the distance between the Earth and the Sun. The orbital distances of the two exoplanets discovered around the star TOI-1266, which is half the size of the Sun, are smaller than Mercury's orbital distance. TOI-1266 b, the closest planet to the star at a distance of 0.07 astronomical units, has a diameter 2.37 times that of Earth and is therefore considered a sub-Neptune. TOI-1266 c, at 0.1 astronomical units from its star and with 1.56 times the Earth's diameter, is considered a super-Earth. For each planetary system, the star's diameter and the orbital distances to its planets are shown to scale. The relative diameters of all planets of both systems are to scale, with TOI-1266 b the largest planet and Mercury the smallest. The zoom-in to TOI-1266, in the lower part of the image, shows that the irradiation received by TOI-1266 c from its star is 21% larger than the irradiation received by Venus from the Sun in the upper part of the image. © Institute of Astronomy, UNAM / Juan Carlos Yustis.
It's useful to have a super-Earth and a sub-Neptune in the same system because this allows for tighter constraints on formation models. The paper argues that this will be "a key system to better understand the nature of the radius valley around early to mid-M dwarfs."

It's also interesting that TOI-1266's small size and relative proximity (about 117 light years) make it a possible target for atmospheric studies with future instruments. The authors show that the James Webb Space Telescope should be able to operate well above the observatory's noise floor level in performing transmission spectroscopy here (i.e., viewing changes in the star's light as either planet moves first in front of the star and then is eclipsed by it). The TOI-1266 planets compare favorably to TRAPPIST-1b on this score, and the host star shows no evidence of significant starspots that might distort the signal of the exoplanet atmospheres.

Image: Direct comparison of the planets of the TOI-1266 system with the inner planets of the solar system. © Institute of Astronomy, UNAM / Juan Carlos Yustis.

The paper is Brice-Olivier Demory et al., "A super-Earth and a sub-Neptune orbiting the bright, quiet M3 dwarf TOI-1266," Astronomy & Astrophysics Volume 642 (2 October 2020). Abstract.
The COVID-19 pandemic has affected not only human health but also economies around the world, owing to the severe disruption of daily life. Individuals in regions where smallholder farms make up most of the agricultural production sector, such as many sub-Saharan African countries, are at greater risk of economic and food insecurity due to disruptions to supply chains and local food markets. However, up-to-date, high-resolution spatial information about croplands (often called cropland or crop type maps) is often lacking in these regions, making it difficult for governments to locate farmers and assess food security and production.
Planet, in partnership with NASA Harvest, NASA's Food Security and Agriculture Program, run out of the University of Maryland (UMD), aims to enable and advance the adoption of satellite Earth observations to benefit food security, agriculture, and human and environmental resiliency, which has become increasingly necessary in the midst of COVID-19.

NASA Harvest scientists recently supported the Government of Togo in its innovative COVID-19-related food security relief efforts by creating a cropland map, at 10-meter resolution, of the entire country. One of the West African nation's efforts is "YOLIM," an interest-free digital loan program designed to boost food production across smallholder farms by funding the cost of farming essentials and providing access to an e-wallet that qualified growers can use to withdraw funds for fertilizers, pesticides, or renting tractors. The Togolese Government is also experimenting with ways of utilizing the maps within the context of its flagship social protection program "NOVISSI," a digital cash transfer scheme aimed at providing a social safety net for vulnerable groups affected by the COVID-19 pandemic. The aim here is to understand which communities have significant concentrations of smallholder farming activity for critical crops. These areas could then be prioritized for cash transfers under the program or included in a dedicated social protection campaign to protect farmers from pandemic-triggered shocks that could negatively impact national food security.
This is where satellite data can help fill in the gaps. NASA Harvest maps derived from satellite data, combined with poverty and census data, provide the more complete picture needed to identify rapidly and effectively the priority areas where programs like YOLIM and NOVISSI will have the most impact.
The map produced by NASA Harvest "provides unmatched clarity into the nature and distribution of agricultural land nationwide," states Cina Lawson, Minister of Posts, Digital Economy, and Technological Innovation of Togo. "On top of this map, we are overlaying data from poverty maps that we have developed in collaboration with UC Berkeley's Data-Intensive Development Lab and Innovations for Poverty Action (IPA). Together, they provide decisive knowledge being used to design social protection policies aimed at improving the livelihoods of agrarian rural communities."
"When rapid action was needed and mobility across the country was limited due to the COVID-19 outbreak, satellite data offered an effective and accelerated means to map the country's distribution of croplands and characterize the nature of agricultural fields during the pandemic," adds Dr. Inbal Becker-Reshef, NASA Harvest Program Director.
Earth observations can reveal where croplands are located as well as provide insight into crop conditions, yields, and production forecasts, giving policymakers and farmers alike early warning of impending food shortages. In Togo, and across sub-Saharan Africa, many smallholder farms are less than one hectare in size, making them difficult or impossible to resolve in most satellite images. Because of this, cropland mapping has traditionally required a wealth of ground-truth data for training and verifying machine learning classifiers. However, publicly available ground-truth data is often sparse in smallholder-dominated regions, and this scarcity is the main barrier to developing machine learning methods that support agricultural monitoring there. The Togolese Ministry of Agriculture was progressive and proactive in the face of the pandemic, looking to satellite imagery to meet unforeseen challenges and bolster food security throughout the country.
NASA Harvest created a new method for rapidly generating crop maps over a large, heterogeneous area by harnessing machine learning, high-resolution SkySat imagery from Planet, and data from the European Space Agency's Copernicus Sentinel-2 and the NASA-USGS Landsat satellites to map Togo's croplands without the need for ground-truth data, all within 10 days of the initial request for the map. The generated cropland map, alongside poverty and census information, enabled the Togolese Government team to identify priority areas rapidly and effectively for its relief programs. The cropland map is now also featured in the joint NASA, European Space Agency, and Japan Aerospace Exploration Agency COVID-19 impact-monitoring dashboard as a demonstration of the utility of Earth observations for food security applications.
With the cropland maps, Togolese government officials had trustworthy information on the physical size and geographic location of agricultural lands that census data might have missed. This collaborative effort not only emphasizes the utility of satellite-derived information when ground access is limited and information is needed quickly, as during the ongoing pandemic, but also illustrates the benefits of public and private institutions working together toward a common goal, enabling timelier responses in support of agricultural policy decisions.
"Cropland maps are critical in times of crisis, when decision makers need to rapidly design and enact agriculture-related policies and mitigation strategies, including providing humanitarian assistance, disbursing targeted aid, or boosting productivity for farmers," says Dr. Hannah Kerner, Assistant Research Professor at UMD and lead on the Togo mapping project for NASA Harvest. For these maps to be of maximum utility for aid in response to COVID-19, they needed to be created using up-to-date imagery. Previously available maps were at least two years old and created from lower-resolution imagery that could not sufficiently detect smallholder farms in the country. Dr. Kerner was pleased with the ready action from Planet, commenting, "After hearing that we needed high-resolution recent data in Togo, Planet quickly mobilized to get our team access in less than 48 hours, and were immediately responsive when we needed support during our mapping sprint."
Matthew Dillon added "existence locks" to DragonFly, which as usual he committed with a long, descriptive message.
Saturday was the day… But the Jeep was in "intensive care": a few days before, Sassy's faithful "land auxiliary" had to be transferred by "ambulance" due to a "transmission ailment":
Last news from the "transmission surgeon" was that it looked operable, but no health insurance, though…) So, Ale fetched one of the shop trucks to haul Sassy out of the water. The first surprise was that the docks were frosty and slippery… and so was Sassy's deck: getting the mast down was not for the faint of heart… The second surprise was that the outboard engine refused to start. Then I realized (third surprise) that the switch to choose between the internal and external tank was in the wrong position… (I never touch it… the wind, perhaps…?). Eventually I got the engine going, and after some juggling of the choke and the throttle it soon warmed up and settled down. The morning was cold but windless, so the undocking from the slip, the docking at the ramp, and the haul-out itself were uneventful (after eleven years (!) I am starting to know the drill…):
By mid-morning, Sassy and her trailer were tucked at the back of the yard of Ale's shop:
And today, Monday, duly masked, I took bus #40 at the Elmvale Plaza to get to the St. Laurent Centre, where I jumped on train #1 East for a ride to its terminus at Blair Station. Once there, I hopped on bus #25, which delivered me a short walk away from Ale's shop. The cockpit was dried as best I could (it was drizzling), the anchor light was protected with a can, and pads were installed along the mast to protect the tarp from ripping under the weight of the ice that could accumulate over winter. Then the tarp was pulled up and Sassy almost entirely disappeared under it…:
Back at the St. Laurent Centre, the scent of Cinnabons was too strong to pass by, and a couple found their way home…
Happy hibernation, Sassy! With some luck, I'll be waking you up in the early Spring…
While Rhode Island is sort of small (Yellowstone can hold TWO Rhode Islands), it is a beautiful place, and is currently host to forty Parks On The Air entities. One of them, Arcadia Wildlife Management Area, covers a broad area in central-west RI and is the largest recreational area in the state. Surprisingly, no one had ever activated Arcadia, K-6979, according to Parks On The Air. That changed today!
I drove into Arcadia and parked near Browning Mill Pond. I was intending to test a couple of antennas, and chose to begin with a 20-meter HamStick attached to the roof of my truck with a 5" MagMount. I used an Icom IC-7000, a small all-band/mode 100-watt rig (idle since the 2016 ARRL NPOTA). The HamStick was tuned to slightly favor the SSB portion of the band, with an SWR of 1.3:1 (it was 2.0:1 at 14.001 MHz and 1.8:1 at 14.349 MHz).
On-air operation began at 17:45 UTC and ceased at 18:59 UTC; that's 74 minutes. Oddly enough, I completed 74 contacts in that period, coast to coast and a few into Europe. Signals were unbelievably strong into the Southeast, with TN and AL booming in well over S9 (these states are a single F-layer hop away from RI). During that time, I consumed 9.1 Ah from my 30 Ah battery (a rate of about 7.4 Ah per hour of "pileup" use).
In the end, I never did try my new end-fed half-wave antenna, as the pileup was quite deep with just the HamStick. Thanks to all the Hunters who made the activation fun!
At less than 4 miles of good road off Highway 93, Sula Peak is a popular "traveler's" summit for drive-up VHF DN25/DN35 and SOTA activations. A modern US Forest Service lookout and a communications facility occupy the summit. Bighorn sheep frequently graze in the area, and in the spring they are often on the highway itself.
OSIRIS-REx is about to perform its signature feat
From Christina Ramsey on the Tindie blog:
Do you love everything hardware?! Then the 2020 Hackaday Remoticon has you covered this November!
Remoticon is a fully virtual hardware conference with 20+ workshops, 2 keynote talks, and 8 different demos. Join the weekend fun from wherever you are. Remoticon will have instructors teaching workshops from all across the globe, from Australia to India, from North America to the Netherlands.
Meeting virtually provides the perfect platform for more space, more people, and more options. Attend demos about Design Methodology, Robots, Zero to ASIC, Edge-Based Voice AI, and other awesome topics. Join workshops covering topics such as Reverse Engineering, TinyML, How to Hack a Car, Glowy Origami, and so many more.
In need of some creative inspiration and socialization with fellow hackers? Come hang out Friday night for a community Bring-A-Hack! There's even a virtual Hackaday SMD Challenge for those who want to learn and those who want to put their skills to the test.
You'll never guess the best part. I'm sure you're thinking, "How could this get any better?" Remoticon Main Track tickets are free! You can also donate with a pay-as-you-wish ticket. Donations will go to charities that feed, house, or educate people.
Attendees only pay $10 to join a workshop. Some workshops do require hardware, which may include things you already have sitting on your workbench.
So the real question is: what workshops and demos are you going to pack into your schedule the weekend of November 6-8? We can't wait to see you all there!
The rational numbers are the most familiar numbers: 1, -5, ½, and every other value that can be written as a ratio of positive or negative whole numbers. But they can still be hard to work with.
The problem is they contain holes. If you zoom in on a sequence of rational numbers, you might approach a number that itself is not rational. This short-circuits a lot of basic mathematical tools, like most of calculus.
Mathematicians usually solve this problem by arranging the rationals in a line and filling the gaps with irrational numbers to create a complete number system that we call the real numbers.
But there are other ways of organizing the rationals and filling the gaps: the p-adic numbers. They are an infinite collection of alternative number systems, each associated with a unique prime number: the 2-adics, 3-adics, 5-adics and so on.
The p-adics can seem deeply alien. In the 3-adics, for instance, 82 is much closer to 1 than to 81. But the strangeness is largely superficial: At a structural level, the p-adics follow all the rules mathematicians want in a well-behaved number system.
Developed over a century ago, p-adic numbers have become an essential setting in which to investigate questions about rational numbers that go back millennia.
The p-adic numbers are based in modular arithmetic, which is a method of counting that loops back on itself, like a clock. Just as 1300 on a 24-hour clock is the same as 1 p.m., mathematicians say that 13 "modulo 12" is equivalent to 1.
To see how p-adic number systems emerge from modular arithmetic, start by classifying all integers modulo a specific prime number. Classifying the integers modulo 3, for example, sorts them into three buckets, or rooms.
You could also classify integers modulo higher powers of 3: modulo 9 (3²) or modulo 27 (3³).
Mathematicians sort integers modulo powers of 3 to detect features of their prime factorizations: Integers that are equivalent to 0 modulo 3 are in the same room and have at least one 3 in their prime factorizations; integers that are equivalent to 0 modulo 9 have at least two; integers equivalent to 0 modulo 27 have at least three.
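The bucket-sorting described above is easy to make concrete. This short Python sketch (illustrative only, not from the article) groups integers by their residues modulo powers of 3, showing that the integers in room 0 modulo 9 are exactly those with at least two 3s in their factorization:

```python
def residue_rooms(numbers, modulus):
    """Group integers into 'rooms' by their residue modulo the given modulus."""
    rooms = {}
    for n in numbers:
        rooms.setdefault(n % modulus, []).append(n)
    return rooms

nums = list(range(28))
# Integers equivalent to 0 modulo 3 have at least one factor of 3 ...
print(residue_rooms(nums, 3)[0])   # 0, 3, 6, 9, ...
# ... those equivalent to 0 modulo 9 have at least two ...
print(residue_rooms(nums, 9)[0])   # 0, 9, 18, 27
# ... and those equivalent to 0 modulo 27 have at least three.
print(residue_rooms(nums, 27)[0])  # 0, 27
```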
Now imagine the integers modulo 3, 9 and 27 stacked atop each other like a tower. Each level of the tower is an exact threefold covering of the level below it. This pattern continues forever, creating an elegant arrangement of the integers modulo ever-higher powers of 3.
Each p-adic integer is defined by following an infinite path up the tower. A bird's-eye view of this tower gives a picture of all the p-adic integers.
One of the biggest advances in 21st-century mathematics is an object called a "perfectoid space," which embodies this perspective. It was developed by Peter Scholze of the University of Bonn, who won the Fields Medal in 2018 in part for this work. It's just one example of how mathematicians make use of these infinitely layered towers.
"Instead of considering the individual levels of the tower, you consider the whole tower at once," said David Savitt of Johns Hopkins University. "This is a fundamental insight that crops up everywhere in modern number theory, in the Langlands Program and arithmetic geometry."
Mathematicians write p-adic numbers based on the frequency with which each power of p appears in the number's "base p" expansion. For example, this is how you write 11 in the 3-adics:
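The base-p expansion can be computed by repeated division. A minimal sketch (mine, not the article's): the digits come out least-significant place first, so 11 = 2·1 + 0·3 + 1·9.

```python
def base_p_digits(n, p):
    """Return the base-p digits of n, least-significant place first."""
    digits = []
    while n > 0:
        n, d = divmod(n, p)  # d is the current base-p digit
        digits.append(d)
    return digits

# 11 = 2*1 + 0*3 + 1*9, so its 3-adic expansion ends ...102
print(base_p_digits(11, 3))  # [2, 0, 1]
```

The same routine shows why 82 is 3-adically close to 1: its digits [1, 0, 0, 0, 1] agree with those of 1 in the first four places.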
The size of a p-adic number is determined by the prevalence of p in its prime factorization. Numbers with more ps are smaller. For example, in the 3-adics, 486 is "small" because it has many 3s in its prime factorization (486 = 2 x 3 x 3 x 3 x 3 x 3).
Another way to think about size is to think about which numbers are close to 0. In the p-adics, integers are closer together when they share a room at higher levels of the tower. The numbers 0 and 486 share a room up to the fifth level, whereas 0 and 6 share a room on only the first level, indicating that 0 is closer to 486 than to 6 (and thus 486 is smaller than 6).
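The notion of size sketched here is the p-adic absolute value. The snippet below (illustrative, not from the article) computes it for nonzero integers and confirms that 486 is 3-adically smaller than 6:

```python
def v_p(n, p):
    """p-adic valuation: how many times p divides the nonzero integer n."""
    count = 0
    while n % p == 0:
        n //= p
        count += 1
    return count

def p_adic_abs(n, p):
    """p-adic absolute value |n|_p = p**(-v_p(n)) for nonzero n."""
    return p ** -v_p(n, p)

# 486 = 2 * 3**5 has five 3s in its factorization; 6 = 2 * 3 has one:
print(v_p(486, 3), v_p(6, 3))                 # 5 1
# so 486 is 3-adically smaller (closer to 0) than 6:
print(p_adic_abs(486, 3) < p_adic_abs(6, 3))  # True
```

This also explains the arithmetic noted below: 972 = 486 + 486 has the same number of 3s as 486, so both have the same 3-adic size.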
The p-adic towers accommodate fractions by expanding the tower belowground. Numbers with larger powers of p in the numerator are small, and numbers with larger powers of p in the denominator are large.
Arithmetic has a different feel, too. Take, for example, the sum 486 + 486 = 972. In the real numbers, 972 is much bigger than 486. But in the 3-adics, 972 is the same size as 486, because both the sum (972) and the summand (486) have the same number of 3s in their prime factorization.
The p-adics take a different shape than the real number line: they form a fractal made of the infinitely nested rooms at the "top" of the p-adic tower. But this fractal has its own gaps. Mathematicians fill them by forming the "completion" of the p-adic rationals, a procedure analogous to adding irrational values to the number line. In this sense, at least, the principles underlying the p-adic numbers and the real numbers are similar.
"They're all completions, so they actually have a lot in common," said Jessica Fintzen of the University of Cambridge and Duke University.
The infinite family of p-adic number systems provides mathematicians with a wide range of settings in which to explore questions about rational numbers.
For example, mathematicians would like to know when polynomial equations like 3x³ + 4y³ + 5z³ = 0 have rational solutions. This is generally a hard question. But it's relatively easy to find p-adic solutions.
"Things want to be nice over the p-adics. They want to have solutions," said Bianca Viray of the University of Washington.
One tool mathematicians use to answer this question is the local-global principle, or the Hasse principle, which dates to the 1920s. It proposes that if a polynomial has a solution in the real numbers and in all of the p-adic numbers, then that polynomial also has a solution in the rational numbers. The local-global principle is true for some types of polynomials but not others.
The premise behind the local-global principle seems strange: to prove the existence of solutions within the rational numbers, mathematicians look for solutions in infinitely many other number systems, the reals and all the p-adics.
The need to work like this highlights the extent of the problems created by the holes in the rational numbers: You need to cross the universe just to get around them. At the same time, it suggests that in a cosmos of infinitely many number systems, it would be almost strange to restrict ourselves to the one that just happens to be closest to home.
"We're all on Earth and we work with the reals, but if you went elsewhere, you'd work with the p-adics," Viray explained. "It's the reals that are the outliers."
As the 2020 Mars Society Convention has just finished, I'm publishing here my entry in the Mars City State Design Competition, also available as a PDF. Congratulations to the winning team Nexus Aurora and all 176 other competitors!
Twenty pages is hardly adequate to describe the totality of any city, let alone the first city on Mars. Too much is uncertain or unknowable for me to be prescriptive. And yet, to chart our course we need some idea of a destination. The tools of science and the talents of a generation are easily equal to the task, provided only that we set out in the right direction. This design competition entry therefore places an emphasis on developing not answers but questions, as a step toward focusing our attention and, if we are lucky, sloughing off a layer or two of ignorance. Let us focus on the less intuitive aspects of Mars city design and seek useful insights.
In these 20 pages, I present a cross section of the first city on Mars.
What are the requirements? What functions is the city intended to perform?
How are these functions to be executed? How can a good design serve and promote these core functions? On what questions can we base our trades?
Let's lay out the city and determine what goes where. What aspects can be determined by analogy with organically-developed cities on Earth, and what has to be re-invented to meet the conditions on Mars?
Some functions ought to be collocated, such as living, food, healthcare, education, and entertainment. Some functions must be segregated, such as noisy or dangerous industrial processes. And some functions must necessarily be more remote, such as the space port, solar farms, or mines.
How these parts are laid out determines their interfaces. It is crucially important that both people and cargo are able to move efficiently throughout the city, even as population and traffic continue to increase. This requires adequate space to create wide thoroughfares, as well as dense, walkable environments that permit rapid, shirt-sleeves pedestrian movement between every area.
Mars may be the second most hospitable planet in the known universe, but it is still a frozen, poisonous, cratered, irradiated, asphyxiating place that cares not if we live or die. A Mars city of any size needs to maximize productivity to increase the odds of survival. Some industrial factors, such as human labor, will always be relatively scarce. As much as possible, other key resources, such as water, electrical power, living and working space, heat, and raw materials, should be made abundant. Scarcity and rationing inflict enormous costs on any process, and the Mars city simply cannot afford them.
How can we minimize avoidable scarcity and ensure that our factories exist in a comfortable buyer's market? While aggressive recycling and waste minimization are an important part of the picture, the Mars city needs to generate ongoing surpluses of everything, even as demand continues to grow. Much of primary production is gradually being automated on Earth; on Mars, even tooling manufacture is routinely automated.
More generally, automation exists as an abstraction layer between human intention and actual manipulation of matter. To ensure constant increases in productivity relative to human labor inputs, automation has to continually ascend the value chain, from manufacturing robots to robot production robots.
Finally, pressurized volume is itself a valuable commodity. Inadequate supply of space in factories, farms, or living areas exacts an exponentially increasing toll. If the Mars city is to flourish in a surplus of space, the labor and material cost of generating more volume must be aggressively minimized.
While I can be relatively certain about what the Mars city needs, I cannot be as certain about how to meet those needs. I offer here a partial sketch of how a Mars city might go about solving these problems but remind the reader that Vision 2040 is ultimately the product of the expertise of millions of people working for many decades.
Image: Rendering showing how periodic anchors transmit pressure load to the surface for effectively limitless pressurized volume at minimum cost.
Certain commodities are widely used and so are available "on tap".
Image: Diagram of notional city plan, showing industrial bays radiating from the central core. All functions can be independently resized as needed with minimal disruption.
What does self-sufficiency look like for a city on Mars? A popular image of self-sufficiency features a rugged, capable pioneer with a small plot and some animals, building themselves up from nothing. While that image is easy to articulate, a Mars city cannot bootstrap like this, because the environment is too hostile.
Environmental hostility is a way of thinking about what will kill people and how quickly. Astronauts, oil rig divers, and mountain climbers all work in hostile environments, depending on advanced technology and rigorous procedural problem solving to stay alive.
While the Mars city encloses a large enough volume that the inhabitants can move around unencumbered by spacesuits, the system as a whole still embodies precarious advanced technology that is not capable of regenerating itself by default. Therefore, all the shiny life-supporting widgets must either last forever, be readily importable, or readily replaced. This is a tall order.
On Earth, with its habitable environment and billions of people, there are only five nation-states that have achieved sufficiently advanced industry to "make everything": China, Japan, South Korea, Germany, and the USA. A Mars city doesn't need to make fighter jets, but it does need to make nearly everything that goes into one, including advanced robotics, computers, plastics, metals, composites, and tooling for advanced manufacturing.
While an early Mars base must import nearly everything, a more complete city necessarily has to make more things locally. How can we think about prioritizing local manufacturing?
Generally speaking, local production favors bulk raw materials that are both easy to make and needed in large quantities. Conversely, import favors complex technology that embodies large quantities of energy and labor, such as microchips. With a handful of exceptions, products with a lower cost per kilogram (as a proxy for manufacturing difficulty) and a higher use rate would be made locally first. Market pricing mechanisms allow for "natural" prioritization without rigid central planning.
Local production for any given product begins with prototyping and moves into mass production and then fully automated production. It is critical that localizing production consumes proportionately fewer resources, in particular human labor, than it creates.
Graph: Some industrial products by cost density and US per capita production. Asterisks mark products (water, methane, oxygen, CO2) with substantially different sources and usage patterns on Mars. Self-sufficiency starts at the bottom right (water, rubble) and moves towards the upper left (flash memory, morphine). Regions for Mars production, import, and export at a million people as discussed in the economics section below.
As an example, early production favors oxygen, nitrogen, water, electricity, methane, plastic feedstocks, masonry, and other bulk commodities that can be produced on Mars with only gas or liquid water feedstocks, or unprocessed dirt. These materials do not require dedicated facilities for remote ore extraction and processing.
Later production favors food (expensive in a place where arable land must be made from scratch), metals (some recycled from retired spacecraft), and a wide range of industrial chemicals.
Still later, secondary production (manufacturing) processes raw materials into discrete products beginning with mass-intensive structural parts of machines (booms, fasteners) and eventually trending into electronics and integrated circuits.
It cannot be overstated how difficult and ambitious this program is. Many nations on Earth have tried and failed to achieve this, despite higher populations and better resources. There are, however, a few considerations which could affect the overall difficulty of the enterprise.
As the Mars city grows, demand for products also grows. Factories are built to continually increase productivity, with design incorporating room for growth and for greater automation. There is a fundamental limit to the rate at which a human being can perform manual tasks, so over time, human workers are separated from the actual products by steadily increasing layers of automation, or robotic abstraction. In mature industries, even factory construction and machine calibration are automated or remotely operated from Earth.
The McMaster-Carr catalogue lists 550,000 discrete items. Beyond a certain point, provision of additional part diversity suffers diminishing returns. Mars-focused products are sourced from a simplified parts catalogue. Additionally, sub-industries that are irrelevant to Mars, such as coal steam turbines or container ship construction, need not be built.
More than a few megaprojects and industrial sectors have run into severe problems due to path dependency, lock-in, and technical debt. This occurs when an early and apparently unimportant decision has unintended consequences that are both difficult to correct and difficult to live with. As an example, the internet was designed in an era when everyone on it knew everyone else, so security and attribution were afterthoughts at best. This has left our entire society with endless security issues in critical infrastructure!
Technical debt is one of the canonical "hard problems", since it affects every industry to some degree and there is no easy answer. That said, a degree of mindfulness when performing systems engineering may help prevent the deepest of regrets.
One concrete example is deciding what atmosphere to operate the city under. A lower pressure atmosphere reduces pressure loads in structures, as well as decompression effects when using space suits. On the other hand, it transports heat less effectively, meaning that every cooling fan and system has to be made bigger to dissipate unwanted waste heat.
A Mars city of a million people is able to produce nearly every resource it needs to survive. By mass, the cargo manifest is, by a supermajority, human immigrants. That is to say, the per-immigrant cargo allotment has shrunk from perhaps 10 T at the outset to less than 10 kg, a reduction by a factor of 1,000. By mass, more than 99.9% of products and resources used by Martians are locally sourced.
This means that robust local surpluses exist for all gases, liquids, materials, food, water, precision machinery, vehicles, structures, bulkheads, infrastructure of all kinds, chemicals, and bulk electronics parts (actuators, sensors, capacitors, circuit boards, batteries, solar panels, etc.), along with some local production of integrated circuits such as a generic rad-hard x86 processor, an FPGA, and a flash memory unit.
Imports, therefore, comprise primarily luxury goods and consumer computing devices such as tablets and mobile phones, along with a range of pharmaceuticals that have very low use rates.
Of the million people, perhaps half are devoted to tertiary services and facilitation, ensuring that labor remains specialized and efficiently allocated across all sectors. Not everyone works in a factory or mine.
To put this into perspective, near autarky with a million people on Mars implies an improvement in per-capita disposition of resources equivalent to all the advances since the industrial revolution, again. No miracles are required, only a lot of hard work. Not only is this physically possible, it can be achieved from our present technological state with only steady incremental advances.
Despite my strong temptation to design and specify a command economy, the Mars city has adapted best to a regulated open market and free enterprise. In any system beyond some small critical size, the distributed, asynchronous mechanisms of capitalist buyer's-market economies scale much better than the alternatives, such as any form of centralized control or artificial pricing.
That said, the thousands of ambitious entrepreneurs in the city face some unusual conditions which merit discussion.
The Mars city is economically successful if its economy is capable of sustaining the industrial capacity to exceed the material wants and needs of the people who live there. That is, it has achieved robust prosperity despite its relatively small size, hostile external environment, and high shipping costs from the nearest developed market.
Economic success of a human city in space is sometimes defined as achieving wild profitability for traditional Earth-based speculative investors. Without the allure of a get-rich-quick scheme, we are told, it will be impossible to fund this sort of development. I may be a member of a small minority that believes that no one who wants to make an easy buck should look to space, whether it be mining the Moon or asteroids, or building a profitable space hotel.
This is why net profitability is a counterproductive success condition. While building a city on Mars can't generate net wealth for all Earth-based investors, it is meaningful to ask how much progress might be made for a given investment. Indeed, nearly all space exploration, whether using rockets or telescopes, depends on either private philanthropy or government expenditure. Since von Braun's Das Marsprojekt, even exploration missions to Mars have come with an impossibly steep price tag. Mars Direct showed how to reduce the cost by perhaps three orders of magnitude, while SpaceX's reusable Starship architecture aims for a further improvement of the same scale. If these transportation innovations are successful, Vision 2040 could be built for a total cost measured in hundreds of billions of dollars over several decades, which is affordable enough for a global civilization in full bloom.
It is worthwhile to explore what assumptions underlie the competition specifications.
One passenger, their luggage, and life support supplies weigh about 400 kg, implying a one-way ticket cost of $200k. Once on Mars, an adult human consumes about a tonne of food, water, and air per year. If imported, this would cost $500k, but around 99.9% of this is locally produced or recycled by the time the population reaches a million people. Finally, the cost saving of someone staying on another 26 months until the next launch window is about $300k. Altogether, this implies that supporting workers on Mars is about ten times more expensive than is typical on Earth. That is, even with a million people on Mars, their net productivity has to be at least ten times greater than one would reasonably expect on Earth. This also meshes nicely with expectations for the level of automation and self-sufficiency, since the Mars city must have an industrial stack and product manufacturing diversity more in line with a country of a hundred million people.
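As a sanity check on the figures above, here is the arithmetic as a small Python sketch. All inputs are the text's own assumptions ($500/kg shipping implied by the competition specs, 400 kg per passenger, one tonne of consumables per person-year), not independent data.

```python
SHIPPING = 500        # $/kg to Mars, implied by the competition specs
PASSENGER_MASS = 400  # kg: person, luggage, and life support supplies
CONSUMABLES = 1000    # kg of food, water, and air per person-year

ticket = PASSENGER_MASS * SHIPPING
print(f"one-way ticket: ${ticket:,}")  # $200,000

support_if_imported = CONSUMABLES * SHIPPING
print(f"imported support per year: ${support_if_imported:,}")  # $500,000
```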
Regarding self-sufficiency, a cargo import cost of $500/kg is prohibitive for commodities that cost much less than this (such as water and food) but relatively insignificant for products that cost much more. These include industrial machinery, certain chemicals, long lived radioactive isotopes, integrated circuits, and other stuff that routinely travels by air courier. Nevertheless, if a certain product can be obtained more cheaply from local manufacturers, imports necessarily diminish. Local manufacturers get, in effect, a $500/kg import duty which has to be balanced against a 10x human labor cost increase. Robotic labor on Mars is more expensive than on Earth but relatively cheaper than human labor. This means local manufacturing costs are between one and ten times more expensive than on Earth, depending on the process and material inputs. As an example, a product that requires $55/kg of labor input on Earth would cost $550 to produce on Mars. Given import costs, production on Earth or Mars is equally favored at this price. Since relatively few tangible products require that much hand labor, a Mars city of a million people makes nearly everything locally.
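The make-or-buy break-even described above can be checked directly. The numbers are the text's own ($500/kg shipping, a 10x labor multiplier, $55/kg of Earth labor input); the snippet is only a sketch.

```python
SHIPPING = 500         # $/kg import cost to Mars
LABOR_MULTIPLIER = 10  # Mars labor costs roughly 10x Earth labor

earth_labor = 55                               # $/kg of labor input on Earth
make_on_mars = earth_labor * LABOR_MULTIPLIER  # $550/kg produced locally
ship_from_earth = earth_labor + SHIPPING       # $555/kg made on Earth, then imported

# At roughly $55/kg of labor content the two options nearly coincide,
# which is the break-even point the text describes.
print(make_on_mars, ship_from_earth)  # 550 555
```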
Let's explore the cost implications for exports. For a SpaceX Starship flying 100 T to Mars, approximately 15,000 T of propellant would be burned, costing perhaps $10m, or 20% of the overall ticket price. The rest covers overhead, including amortization of the Starship's manufacture. A returning Starship can carry about 20 T of cargo for 1,200 T of propellant burned, which requires about 500 megawatt-days (that is, one megawatt for 500 days) of electricity to synthesize from CO2 and water. If 50% of the return ticket cost is fuel, then wholesale electricity prices on Mars are about 16 c/kWh, comparable to electricity prices in California in 2010 before solar crushed everything. Today solar costs are around 2 c/kWh including storage, so the Mars power cost increase is consistent with the 10x labor price increase. This is slightly troubling, as labor scarcity would favor exploiting electric power wherever available. However, for all but the most power-intensive processes, the electricity cost is not very important. These power-intensive processes include desalination, electrolysis, aluminum smelting, sodium production, and in particular propellant production, which is why shipping stuff back to Earth remains expensive despite other advances.
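A quick sanity check of the propellant-electricity numbers, assuming only the figures quoted above (500 megawatt-days per return flight, 16 c/kWh on Mars, and the ~2 c/kWh terrestrial solar-plus-storage comparison):

```python
# Energy to synthesize one return flight's propellant (1,200 T of
# methane/oxygen from CO2 and water), per the figure quoted in the text.
MW_DAYS = 500
kwh = MW_DAYS * 1000 * 24          # 12,000,000 kWh per return flight

MARS_PRICE = 0.16                  # $/kWh, inferred Mars wholesale price
EARTH_PRICE = 0.02                 # $/kWh, terrestrial solar incl. storage

mars_cost = kwh * MARS_PRICE       # electricity bill per flight on Mars
earth_cost = kwh * EARTH_PRICE     # same energy priced at Earth rates

# The ~8x ratio is broadly consistent with the ~10x labor premium.
print(f"{kwh:,} kWh: ${mars_cost:,.0f} on Mars vs ${earth_cost:,.0f} on Earth")
```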
The terms of the competition invite contestants to examine the potential for exports. Broadly these may be divided between physical products, which need to be physically transported, and knowledge products, which may be sent by radio or laser.
Let's examine the trade balance. A million people on Mars, each costing $500k/year, implies a total GDP of $500b. If 5% of GDP ($25b) is spent on imports at an average price of $1,500/kg including shipping, then the city imports about 16,000 T of cargo (160 Starships, in addition to perhaps triple that number carrying externally funded migrants) per year. Since most humans on Mars stay, let's say 500 Starships are available per year for exports, with a total capacity of 10,000 T, and costing $2b to fuel. If the city aims to earn back that $25b in exports, then it needs to sell $27b of goods, implying a value density of $2,700/kg. This is substantially higher than the make/buy threshold of $55/kg discussed above. This disparity rules out essentially any commodity product, as commodities are available more cheaply on Earth. As far as material exports go, the city needs Mars-unique branded premium goods or, potentially, rare minerals of highly unusual local abundance.
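The trade balance above reduces to a short calculation. All inputs are the text's assumptions; note the inbound flight count comes out closer to 167, which the text rounds to 160.

```python
# Trade balance sketch using the assumptions stated in the text.
POP = 1_000_000
SUPPORT = 500_000                  # $/person/yr, used as a GDP proxy
gdp = POP * SUPPORT                # $500b

IMPORT_SHARE = 0.05                # fraction of GDP spent on imports
IMPORT_PRICE = 1500                # $/kg delivered, including shipping
imports_usd = gdp * IMPORT_SHARE                 # $25b
imports_t = imports_usd / IMPORT_PRICE / 1000    # ~16,667 T
inbound_flights = imports_t / 100                # 100 T/ship -> ~167 flights

RETURN_FLIGHTS = 500
RETURN_CARGO_T = 20                # T per returning Starship
export_capacity_t = RETURN_FLIGHTS * RETURN_CARGO_T   # 10,000 T
fuel_bill = 2e9                    # $2b to fuel the return fleet

export_revenue = imports_usd + fuel_bill              # $27b to break even
value_density = export_revenue / (export_capacity_t * 1000)  # $/kg

print(f"required export value density: ${value_density:,.0f}/kg")  # $2,700/kg
```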
At about $50,000/kg, gold and platinum-group metals could technically qualify, if and only if their local production costs were, for some reason, far lower than on Earth, and Earth demand remained strong. For example, Earth's annual production of gold is about 2,500 T. Even if Mars could produce gold at a competitive price, it couldn't export more than about 250 T/year without demand elasticity causing the price to fall.
The implications of this discussion for local production, import, and export are displayed pictorially in the product graph in the previous section.
As an aside, fueling 700 Starships a year (or 1,500 per launch window) would require about a gigawatt of electricity. Assuming that fuel production consumes 30% of the city's power, the total area of solar panels is about 100 square kilometers, or 100 square meters and 3 kW per person. (This assumes 500 W/m^2 insolation at Mars, 20% panel efficiency, and a 0.3 capacity factor, and amounts to roughly twice the US per-capita power consumption.)
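The solar sizing in this aside follows from the earlier 500 megawatt-days-per-flight figure. A minimal sketch, using only the assumptions stated in the parenthetical:

```python
# Solar array sizing for the propellant plant, per the text's assumptions.
FLIGHTS_PER_YEAR = 700
MW_DAYS_PER_FLIGHT = 500
fuel_power_mw = FLIGHTS_PER_YEAR * MW_DAYS_PER_FLIGHT / 365   # ~959 MW average

FUEL_SHARE = 0.30                       # fuel production's share of city power
total_power_mw = fuel_power_mw / FUEL_SHARE                   # ~3.2 GW total

INSOLATION = 500                        # W/m^2 at Mars
EFFICIENCY = 0.20                       # panel efficiency
CAPACITY_FACTOR = 0.30
avg_w_per_m2 = INSOLATION * EFFICIENCY * CAPACITY_FACTOR      # ~30 W/m^2 avg

area_km2 = total_power_mw * 1e6 / avg_w_per_m2 / 1e6          # ~107 km^2
per_person_m2 = area_km2 * 1e6 / 1_000_000                    # ~107 m^2
per_person_kw = total_power_mw * 1000 / 1_000_000             # ~3.2 kW

print(f"{area_km2:.0f} km^2 of panels, {per_person_m2:.0f} m^2 "
      f"and {per_person_kw:.1f} kW per person")
```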
The other potential export is information. Traditional suggestions include Martian IP to be licensed on Earth. In general, however, the high relative cost of labor is a strong forcing function for all non-essential knowledge labor to be non-local. That is, for any task that doesn't require either physical access or temporal immediacy, Martian companies would be well served to hire contractors on Earth and import (not export!) their software and databases. In the early days of the city, large dedicated teams on Earth would support individuals on Mars to optimize their productivity and tele-operate machinery. By the time the city reaches a million people, this will have largely given way to free enterprise. Meanwhile, the falling cost of Mars labor justifies the hiring of Earth-based assistants, but not necessarily individualized teams. Another model might be distributed organizations in which Earth-based engineers develop products inspired by observation and operation of facilities on Mars, then share profits as a form of ongoing R&D investment.
The Mars economy is structured similarly to any country on Earth that specializes in mining and manufacturing, such as Germany, South Korea, Japan, or the US. Diverse capital markets with sophisticated risk management enable aggressive investment and expansion of critical infrastructure. There is no need to reinvent the wheel here.
Fiscal policy is streamlined to maximize the exchange of value mediated by money, with the US dollar being an obvious and safe choice. Inevitable market failures are managed through a combination of reactive taxation and direct subsidy. Policy is improved based on quantitative assessment of effectiveness, rather than ideological theorizing. A successful economic policy is mostly invisible, while unnecessary complexity increases transactional overhead and impedes the flow of value.
Much ink has been spilled speculating as to the ultimate source of funding for a Mars city. Who pays for it all? It is worth remembering that the most valuable thing migrants bring to Mars is not the paper in their wallets, but the skills in their minds, the strength in their bodies, and the flinty determination in their eyes. Enable the economic and physical mechanisms to permit mass migration and the rest will follow.
What should Martian society be like? What is the human experience on Mars? Some visions of life on the frontier foresee abridgment of human freedom and hard, short lives. Yet throughout history, economic growth and productivity have always walked hand in hand with the facilitation of individual excellence for all. The single greatest asset of Vision 2040 is the people who choose to build it, and their celebration of human capability reassures us that life on Mars, while difficult in some ways, is exciting and empowering.
A million people on Mars, and every single one surmounted substantial hurdles to be there. Like other elite self-selective communities with a high barrier to entry, Mars society is shaped by ambition, celebration of achievement, and a strong work ethic. From the Peace Corps to Everest, SEAL Team Six to Grad School, there is something different and special about existing in a community where drive and determination replace sloth and angst. The same on Mars, only more so.
Moving to Mars and building a new world isn't for everyone. But for people who live for a challenge and love the frontier, it is the only place to be. It is the most focused concentration of passion and feverish innovation in the history of humanity. A place for doers to get stuff done unencumbered by the legacy of fifty centuries of business as usual.
Everyone who moves to Mars has both skills and the ability to develop them, but continuing development presents two unusual challenges. First, maintaining high morale and productivity in a workforce that cannot "rotate home" requires the freedom to vary employment, alternate gigs, and develop new skills on the job. Second, nearly every job on Mars changes rapidly as automation and AI steadily consume the industrial stack. Therefore, the design of the collectively lived environment must be continually iterated through open contributions to maximize learning and performance improvement. This means the normalization of hacking reality to enable technological miracles.
Any Mars city will suffer a continuing labor shortage, ensuring that employers compete to attract and retain the best workers. By removing barriers to competition in the labor market, we can ensure optimal alignment between interests and needs.
High costs drive labor outsourcing to Earth, where air is free. Tasks that don't require physical proximity or real-time interaction are mostly done by professionals on Earth. Such jobs include software development, planning, remote operation of mines and other machinery, and environmental monitoring. People on Mars work closely with assistants on Earth who monitor their work environment and implement constant improvements via augmented reality interfaces or backend software improvements. Imagine waking up to find that "the fairies" have fixed the previous day's problems!
Ensuring long-term high productivity of the labor force precludes 20-hour work days, so Martians have plenty of time to engage in non-work activities. Whether art, music, sports, cooking, literature, or any of a million other things, the Martian do-ocracy enables well-motivated people to build whatever they want or need to perform their activities. As a result, the culture is explosively creative and varied, like Burning Man but a hundred times bigger.
With readily available Kevlar-reinforced ETFE to create pressurized volumes, there's no reason to cram everyone together. Most people live in dense, walkable neighborhoods to facilitate easy access to the needs of life, but there are few practical limitations on pressurized volume. Fly a section of roof a kilometer high, plant a forest of giant redwoods, and export Martian lumber at $3,000/kg. Throw a tent over a nearby mountain, keep it cool, and operate a ski slope. Pressurize a volume to 4 bar and have human-powered ornithopters fly in the low gravity, while executing a game of 3D golf. Perfect a Martian pizza recipe. Build a ranch and farm mutant dwarf buffalo. With a nearly automated industrial stack, an embarrassing surplus of nearly all material resources, and an unbuilt planet, there's no reason to think small or slow.
Science fiction city design is always a good opportunity to flog some personal hobby horse, and governance is no exception. It's always easier to identify problems from a distance than to do the dirty work of actually building peaceful consensus among us lightly evolved apes. So instead of dictating how Martian governance functions on some untested theoretical level, I will instead interrogate the very notion of government.
Why have one? What sort of functions does leadership perform?
While an early Mars base can exist as a self-organizing anarchic collective, much like scientific research stations in Antarctica, as organizations grow, so do their management difficulties.
With the functions of industrial development largely devolved to subject-specific corporations, the responsibility of government is to safeguard peace and prosperity. This requires:
In accordance with Enlightenment ideals, the leadership should govern by consent and remain accountable for actions performed in the public service.
Of the six major functions listed above, the one that varies most from typical political or corporate governance models is self-regulation and improvement. As the Mars city grows, the demands on governance continually change. It is highly unlikely that an optimal governance structure can be generated by induction on the first try. Instead, mechanisms and practices must be continually tweaked and updated to ensure that the government remains a nimble servant of the public's needs. The single most important function of the government is to maintain and improve the mechanism by which it improves itself.
I do not regard myself as an expert on systems of governance, but I can imagine worse places to start than a bicameral representative democracy. It has, after all, worked in Iceland for nearly 1,100 years.
Corporate governance is intentionally not prescribed. The strongest political and economic systems are syncretic, which is to say diverse and inclusive. For example, some industrial functions may be well served by a traditional corporate governance structure, while others may function best as worker-owned cooperatives following the Mondragon model, or anything in between.
What is the success condition? Beyond industrial autarky and meeting material needs, how does a city of a million people know they've "made it"? Certainly migration appeal turns steadily more mainstream, but consider instead the lived experience of Martian-born children.
There is no reason to suppose that children could not exist and have happy lives on Mars from the very earliest days, but as far as labor goes, importing workers is much faster and cheaper on Mars than making new humans from scratch. Indeed, even on Earth it is generally considered easier to hire people to perform tasks than to make them oneself. We no longer have children to ensure financial security and care in old age. Like the Mars city itself, the objective is to perform a worthy activity and minimize the accompanying financial losses, rather than execute with the expectation of profit for external investors. It turns out that the set of things that make money overlaps incompletely with the set of human activities that are worthwhile.
Thus the economic senselessness of rearing children on Mars is a microcosm of the overall economic senselessness of building the Mars city in the first place. Since we agree that a Mars city must be built despite its inevitable consumption of enormous quantities of treasure, we may sensibly ask: Why is Mars the best place to be a child?
Children are the future. A child raised today on Earth may not believe that the best of human civilization is yet to come. The old frontiers are closed, the population is rapidly aging, and many institutions are calcified around a stable consensus view of the way business is done. As beings that grow into our future, our future on Earth is not as unlimited as it once was.
A child growing up on Mars is as separated from authentic wilderness as any Earthborn city kid, but they have the benefit of being around adults who believe powerfully in the future, knowing that their world needs them and has a meaningful place for them.
Who could take that from a child?
A nearly self-sufficient city of a million people on Mars is a stupendously ambitious project. Technically, economically, and socially it is possible; that is to say, not forbidden by the laws of physics. But building the Mars city "Vision 2040" requires more than physical possibility. It needs millions of people to make this project their life's work. And that requires something else.
All successful large scale collaborative projects obviously had sufficient technical execution. But they also evince love and celebration of beauty. Wikipedia, Linux, the Internet, the American experiment. It is not enough to be a good idea, or to assemble some patchwork constituency who kind of like it. It has to also inspire the profoundly human response of collective nurturing.
We have come to the Vision 2040 aesthetic. Vision 2040 is both a physical place and a powerful idea. It may be perceived through the senses and through the mind. These factors reinforce harmoniously to invoke a sense of the numinous. A sense of vertigo, that humanity is collectively teetering on the brink of a significant moment, a birth of history and a death of our confinement to the planet of our origin. It is this feeling that motivates my Terraformed Mars art project, where I have made planetary scale renders of a Mars with life and water.
The power of this vision has been employed by Elon Musk in his recruitment of idealistic genius engineers at both Tesla and SpaceX, but for Vision 2040, it must go further. Moving to Mars is a significantly bigger and more permanent commitment than moving to a foreign country. Vision 2040 exerts a powerful magnetism on the ambitious and idealistic. At $200,000 per ticket, the pitch has to be better than "maybe come to Mars, maybe you won't die".
Let's imagine the process of becoming a Mars migrant.
We have a good life on Earth among family and friends, but we remember our earliest memory of contemplating the stars on a chill fall evening. Like everyone, we've followed the last two decades of progress on building a Mars base. Early setbacks. Improbable victories. Now, it looks like it's going to stick. Little by little, we realize that our vision for Mars includes us being there.
Of course, even now private migration is only just possible. Get recruited, sell everything, get a loan. We don't remember ever having seen that much money, let alone spending it on a single thing. It's like a briefcase stuffed with cash.
Not that the money matters, not really. Plenty to be made on the other end, or in any number of jobs back here on Earth. Beyond a certain point, additional money just buys anxiety rather than freedom. No, the real investment isn't a distillation of personal possessions and a tearful goodbye. It's putting our body and mind out there, adding our voice to the swelling chorus filling this splinter of humanity on the dusty Arcadia Planitia.
The launch window approaches, a relentless schedule of tasks necessary to shut down a life at 1 AU and restart it somewhat further out. Our friend drives us to the airport; we give them our car keys as we haul a carefully weighed duffel of mostly old T-shirts into the bowels of the transport machine. Will we meet again?
The usual buffeting as the suborbital electric jet drops through the sound barrier over Brownsville. Seemingly minutes later, up the elevator, across the gantry, and into the Starship through its scorched and still-warm hatch. So, that's what Earth looks like from space. Surprisingly shiny in the sun. Smooth at this scale. Then four months in deep space, about which the less said the better.
Mars, a bright star, grows to a turning disk, the city lights just visible beneath the dawn terminator. Patches of ice on the higher mountains. A few moments of tension, then mere minutes of noise and force. Landing with a bump.
A spiral-shaped vehicle access gantry locks on, a crowded corridor of faces, small spaces reflected in myopic eyes. Stepping over the threshold, blinking into the day's red light, we are refugees from Plato's cave.
Tented roads stretch from the Starport back towards a crescent-shaped city complex, with various satellite facilities and enormous fields of solar panels. A hint of green beneath the shiny plastic in the distance. The road takes us past older landing pads and Starships, now being subsumed into the growing city.
The road tent passes through a steel bulkhead and opens up into a cavernous volume filled with giant redwoods, their dark evergreen needles fluttering noiselessly as we, the newest Martians, stare.
We alight at a central plaza surrounded by four- and five-story structures, windows open to the mild air. We had arranged living quarters in a modern apartment: a compact and cozy place with a decent view and common facilities for eating and entertainment. It looks like it was finished about two weeks ago, and it probably was. Our shift begins this afternoon, so we resolve to walk there by the least direct route. Light gravity feels like flying. On the way we grab a tasty snack from a venerable-looking food truck emblazoned with the proud words "established in 2027".
At work, a placard reminds us that doubling productivity in two years requires only 2.9% improvement per month, or about 0.1% per day. Better get going!
We've lived on Mars for 500 days now. They've passed in a blur, and yet in that time the city has changed noticeably. One section is kept how it was at the beginning, and sometimes a bewhiskered old-timer tells war stories about how it used to be, back when the dirt beneath our feet was exposed to the vacuum of space.
We thought this day would be a big one: a go/no-go decision whether to return to Earth or stay at least another two years until the next launch window. Most employers offer a rotation bonus to stay, because transport is so expensive, but we didn't give it a second thought.
We've only had one birthday since leaving Earth, and yet that seems like a previous life. Our whole life on Earth could fit into about two weeks here. We see tangible evidence of our progress as we wrest order and life from the chaos that has been here since the beginning of the universe.
Less evangelizing. It's not as obvious if you're not living it. What is happening here? A million people, more than we will ever meet, all moving to the same rhythm. The challenge is clear; it confronts us every day and taunts us. Will we establish a permanent foothold, or will we slip into the void?
We monitor progress in all kinds of ways. The most concrete is the actuarial table, which shows how long it would take to run out of essential supplies in the event of supply chain degradation. Right now, we could survive 10% degradation indefinitely, and 100% degradation for seven Earth years. And that's the best it has ever been.
Can you imagine how it feels to be in this position? On the one hand, only two doses of bad luck from oblivion. And on the other, complete autonomy and empowerment to do whatever we can to help the situation. When I look around at the million here solving problems every day I have a tangible grasp of the inherent capability of humanity. We have the audacity to abandon the dysfunctional old ways and try something new. Experiment. Unleash creativity.
It is hard to explain but easy to see. Look around. It turns out that the frozen, dead Martian soil was a fertile substrate for our dreams. That's why four of my old Earth friends are already on their way.
All images, graphs, and data © the author unless otherwise attributed. The notes below were originally footnotes; check the PDF for context.
For more on industrialization, check out my Mars Society 2018 talk: https://caseyhandmer.wordpress.com/2018/09/03/how-to-industrialize-mars/
Contrary to popular belief, lack of radiation shielding is not a showstopper. https://caseyhandmer.wordpress.com/2019/10/20/omg-space-is-full-of-radiation-and-why-im-not-worried/ https://en.wikipedia.org/wiki/Radiation_assessment_detector
Inflatable plane: https://en.wikipedia.org/wiki/Goodyear_Inflatoplane
Highest density human habitation ever: https://en.wikipedia.org/wiki/Kowloon_Walled_City
For Earth-Mars internet synchronization, see Mars Colonies. F. Crossman (ed). The Mars Society, 2019. p. 163. (J. Greenblatt and A. Rao.)
Mole, A, and Frank Williams. Baseline Design for a Mars Colony. Website: https://citystate.marssociety.org/MARSColonyi2.pdf p. 7, for specifics on Mars nuclear power.
Most likely obtained from a "rodwell" melted into subsurface ice. Wooster, Paul. Personal communication, 2020. See also: https://www.southpolestation.com/trivia/rodwell/rodwell.html
For incompatible keyed interconnections, see e.g. https://en.wikipedia.org/wiki/Bob_Hoover#Hoover_Nozzle_and_Hoover_Ring
For a thorough Mars city simulator focused on material cycles, see SIMOC: https://interplanetary.asu.edu/simoc
Mars Colonies (ibid), 2019 p. 89. (C. Plevyak and A. Douglas).
For a great summary of ore processing chemistry, see: Mars Colonies (ibid), 2019. pp. 57-66. (J. D. Little).
Notable autarky failures include Cuba, Albania, North Korea, Cambodia, Brazil, Yugoslavia, and Romania. Most didn't even get close, but all had agriculture, air, warmth, and more than a million people.
McMaster-Carr Catalogue: https://digital.hbs.edu/platform-rctom/submission/mcmaster-carr-delivering-supplies-and-service/#:~:text=McMaster%20carries%20over%20550%2C000%20products,98%25%20of%20items%20from%20stock.
Simplified industrial parts catalog suggested by Marinova, Margarita. Personal communication, 2020.
Aldrin, B. Mission to Mars. National Geographic, 2013. p. 176.
Economic difficulty of exploiting space resources: https://caseyhandmer.wordpress.com/2019/08/27/there-are-no-known-commodity-resources-in-space-that-could-be-sold-on-earth/
MacDonald, Alexander. The Long Space Age: The Economic Origins of Space Exploration from Colonial America to the Cold War. Yale University Press, 2017.
Wernher von Braun's "Mars Project": https://en.wikipedia.org/wiki/The_Mars_Project
Zubrin, R and R. Wagner. The Case for Mars. Touchstone, 1996. p. 37.
For a discussion of MarsSpec standardization, see: Mars Colonies (ibid), 2019. p. 141 (K. Nebergall). See also https://caseyhandmer.wordpress.com/2020/05/27/building-the-mars-industrial-coalition/
Mars working conditions: https://caseyhandmer.wordpress.com/2020/01/20/what-would-it-be-like-to-work-on-mars/
For historical examples of exceptional innovation and speed of execution, see: https://patrickcollison.com/fast
Less conventional commercial structures: https://en.wikipedia.org/wiki/Mondragon_Corporation
Zubrin, R. Entering Space. Putnam, 1999. p. 114.
Explored in Mars Colonies (ibid), 2019. pp. 193-194. (A. Dworzanczyk).
Explored in Mars Colonies (ibid), 2019. p. 423. (S. Schur). Housing built according to demand.
Zubrin, R. The Case for Space. Prometheus, 2019. p. 116.
A phone call to an Internet provider in Oregon on Sunday evening was all it took to briefly sideline multiple websites related to 8chan/8kun (a controversial online image board linked to several mass shootings) and QAnon, the far-right conspiracy theory which holds that a cabal of Satanic pedophiles is running a global child sex-trafficking ring and plotting against President Donald Trump. Following a brief disruption, the sites have come back online with the help of an Internet company based in St. Petersburg, Russia.
A large number of 8kun and QAnon-related sites (see map above) are connected to the Web via a single Internet provider in Vancouver, Wash. called VanwaTech (a.k.a. "OrcaTech"). Previous appeals to VanwaTech to disconnect these sites have fallen on deaf ears, as the company's owner Nick Lim reportedly has been working with 8kun's administrators to keep the sites online in the name of protecting free speech.
But VanwaTech also had a single point of failure on its end: the swath of Internet addresses serving the various 8kun/QAnon sites was being protected from otherwise crippling and incessant distributed denial-of-service (DDoS) attacks by Hillsboro, Ore.-based CNServers LLC.
On Sunday evening, security researcher Ron Guilmette placed a phone call to CNServers' owner, who professed to be shocked by revelations that his company was helping QAnon and 8kun keep the lights on.
Within minutes of that call, CNServers told its customer, Spartan Host Ltd., which is registered in Belfast, Northern Ireland, that it would no longer be providing DDoS protection for the set of 254 Internet addresses that Spartan Host was routing on behalf of VanwaTech.
Contacted by KrebsOnSecurity, the person who answered the phone at CNServers asked not to be named in this story for fear of possible reprisals from the 8kun/QAnon crowd. But they confirmed that CNServers had indeed terminated its service with Spartan Host. That person added they weren't a fan of either 8kun or QAnon, and said they would not self-describe as a Trump supporter.
CNServers said that shortly after it withdrew its DDoS protection services, Spartan Host changed its settings so that VanwaTech's Internet addresses were protected from attacks by ddos-guard[.]net, a company based in St. Petersburg, Russia.
Spartan Host's founder, 25-year-old Ryan McCully, confirmed CNServers' report. McCully declined to say for how long VanwaTech had been a customer, or whether Spartan Host had experienced any attacks as a result of CNServers' action.
McCully said that while he personally doesn't subscribe to the beliefs espoused by QAnon or 8kun, he intends to keep VanwaTech as a customer going forward.
"We follow the 'law of the land' when deciding what we allow to be hosted with us, with some exceptions to things that may cause resource issues etc.," McCully said in a conversation over instant message. "Just because we host something, it doesn't say anything about what we do and don't support; our opinions don't come into hosted content decisions."
But according to Guilmette, Spartan Host's relationship with VanwaTech wasn't widely known previously because Spartan Host had set up what's known as a "private peering" agreement with VanwaTech. That is to say, the two companies had a confidential business arrangement by which their mutual connections were not explicitly stated or obvious to other Internet providers on the global Internet.
Guilmette said private peering relationships often play a significant role in a good deal of behind-the-scenes-mischief when the parties involved do not want anyone else to know about their relationship.
"These arrangements are business agreements that are confidential between two parties, and no one knows about them, unless you start asking questions," Guilmette said. "It certainly appears that a private peering arrangement was used in this instance in order to hide the direct involvement of Spartan Host in providing connectivity to VanwaTech and thus to 8kun. Perhaps Mr. McCully was not eager to have his involvement known."
8chan, which rebranded last year as 8kun, has been linked to white supremacism, neo-Nazism, antisemitism, multiple mass shootings, and is known for hosting child pornography. After three mass shootings in 2019 revealed the perpetrators had spread their manifestos on 8chan and even streamed their killings live there, 8chan was ostracized by one Internet provider after another.
The FBI last year identified QAnon as a potential domestic terror threat, noting that some of its followers have been linked to violent incidents motivated by fringe beliefs.
There is no secret to making friends with crows. Like many things in life, it requires time, patience, and disposable income.
Crows tend to hang out where humans leave trash, so theyâre everywhere.
Crows are carrion eaters; they'll eat almost anything.
They like raw peanuts (in the shell, unsalted, not roasted either), they like suet, and they like wet and dry pet food. They love popcorn, too.
Don't sit and watch them eat; it's rude. That, and they'll think you're a predator.
Youâre often better off dropping food and walking slowly away, until you earn their trust. Give it time.
Eventually, the crows begin to recognize you. Eventually, the crows wonât be as wary of you as they are other people. Still, even when the crows know you, they donât come that close.
At the end of the day, the crows are wild animals. They have good reason to be wary of humans.
I've been feeding my local crows for over a year. They still keep their distance, but they will chase me around the park.
Step 1: Find crows.
Step 2: Leave food for them to eat, but donât sit and watch them.
Step 3: Wait.
Pet food stores are your best bet for unsalted peanuts, suet too.
Once they begin to trust you, or at least be less afraid of you, they'll start to eat food while you're nearby. From there, it's just a slow and steady progression until the crows start following you home.
There isn't much more to it: you just do kind things and wait to earn their trust.
Soviet advertising, 1989.
This was inspired by a real article I read, and yes, the writer did say that Apple's design studio reminded him of "the last scene of the movie 2010." And yes, I had trouble concentrating on anything else for several hours after I read that.
The last scene of the movie 2010, for the record, depicts mankind being reminded that there are more powerful beings in the cosmos and that we need to shape up. There are no obvious parallels to Apple in that, unless you've been following their legal battle with Epic Games.
Several stimulating discussions around the Saturday morning club breakfast table have taken place recently in connection with our use of bandpass filters, diplexers, and triplexers. This article is designed to remove some of the mystery surrounding these devices, which we use both at the OTC and at Field Day. Although the discussion relates to HF devices, the same general principles apply to VHF and UHF.
As propagation conditions change throughout the day, week and year-to-year, HF stations need to have the flexibility to change to those bands which are open. Typically 20 m is open during the daytime hours with 160, 80 and 40 m opening up in the evening and nighttime hours.Â In years when sunspot activity is greater, 15 and 10 m also open up during the day.Â Currently we are near the sunspot low with the result that DX contacts are a challenge at any time of the day with only the low bands consistently productive for DX.
ÂIdeally, a transceiver will utilize an independent antenna for each band on which it operates.Â However this is not always possible, where space does not permit or when several transmitters are operating simultaneously (at Field Day, for example).Â So we may deploy a multi-band antenna in conjunction with electronic devices that will allow more than one transmitter to use this single antenna, so long as each transmitter is operating on a different band.
SARCâs first exposure to these electronic devices was ca 2015 when we acquired a set of bandpass filters and triplexer for use with our 10-15-20 m TH7 beam antenna.Â This was successful and allowed us to have the one antenna on a high tower serve multiple transmitters without significant mutual interference.Â Â
Then a couple of years ago at Field Day, we began using an off-centre fed long wire for 40 and 80 m.Â During the late evening hours these two bands were the only game in town, so the antenna was in demand by two stations simultaneously.Â Again, a triplexer and bandpass filters allowed this to happen.Â Alas, one of the devices failed at the critical time.Â
In 2017, we acquired an identical set of the devices described above for use at the OTC, where we have a tri-band beam for 10-15-20 m plus an OCF dipole for 40 and 80 m.Â Once again, the 160-80-40 triplexer failed during use.Â Â
This could not continue as failures in these devices place expensive radios in danger of serious front-end damage (i.e. smoke) due to strong other-band signals not being adequately blocked.Â It was time for serious reflection about our physical setup.Â Â
An inductor connected to a capacitor has a unique frequency at which the pair resonates, called the resonant frequency. At exact resonance, the inductive reactance equals the capacitive reactance (XL = XC) and the impedance will be either very low or very high, depending on whether the configuration is series or parallel. The resistance present in any practical circuit does not change the resonant frequency, but it does affect the sharpness (or Q) of the tuning.
In other words, an inductor in series with a capacitance has a low impedance at its resonant frequency, but the same pair connected in parallel exhibits a high impedance to the flow of current. These properties are the basis of many types of radio circuits, used most notably for tuning purposes. They can also be deployed in various combinations as RF filters and in power supply filters to change pulsating DC to "pure" DC.
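Setting XL = XC and solving gives the familiar formula f = 1 / (2π√(LC)). As a quick sketch of the arithmetic (the component values here are arbitrary, chosen only to land near an amateur band):

```python
import math

def resonant_frequency_hz(inductance_h, capacitance_f):
    """Resonant frequency of an L-C pair: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Example: about 1 uH with 500 pF resonates near 7.1 MHz, in the 40 m band.
f = resonant_frequency_hz(1e-6, 500e-12)
print(f"{f / 1e6:.2f} MHz")
```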
A low-pass filter will pass low frequencies and block high frequencies. A high-pass filter does the opposite. Bandpass and bandstop filters allow a band of frequencies to pass or be blocked, respectively. The figures above show the generalized frequency response of the four basic filter types.
Below are some simple examples of L-C circuits used in practice for the various kinds of filter devices. The presence of R in the circuits represents loads, but otherwise does not affect the general type of filter and can be ignored for the sake of this discussion.
Intuitively, it is not difficult to determine which type of filter a circuit is by inspection, if you think of the way L and C respond to low and high frequencies, whether in isolation, in series or in parallel, when presented with a range of different frequencies.
More complicated circuits have been devised that improve on these basic circuits and make them more useful. A study of such devices will bring forth variations named for the engineers who studied their properties, such as Butterworth, Chebyshev, Cauer and Bessel. These are not within the scope of this introductory article, but a comprehensive discussion can be found in any ARRL Handbook.
The complexity of a filter circuit is described in terms of its "order", a measure of the number of L and C elements. Here, for example, is a 4th order high-pass filter:
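To see why higher order matters, here is a small sketch using the idealized Butterworth response (an assumption for illustration, not the exact circuit pictured): each additional order steepens the rolloff by roughly another 6 dB per octave.

```python
import math

def highpass_attenuation_db(f_hz, cutoff_hz, order):
    """Magnitude of an ideal Butterworth high-pass response, in dB:
    |H(f)| = 1 / sqrt(1 + (fc/f)^(2n)).  Each added order steepens the
    rolloff below cutoff by about 6 dB per octave."""
    magnitude = 1.0 / math.sqrt(1.0 + (cutoff_hz / f_hz) ** (2 * order))
    return 20.0 * math.log10(magnitude)

# One octave below a 5 MHz cutoff (i.e. at 2.5 MHz):
for n in (1, 4):
    print(f"order {n}: {highpass_attenuation_db(2.5e6, 5e6, n):.1f} dB")
# The 1st order filter is down about 7 dB; the 4th order, about 24 dB.
```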
A diplexer allows two transmitters to feed one antenna or, conversely, two antennas to serve one transmitter (don't confuse a diplexer with a duplexer, which is a different animal). A diplexer simply consists of a low-pass filter and a high-pass filter operating in parallel, with the cutoff of each somewhere between the two operating frequencies. With an HF unit used to separate 40 m (~7.0-7.3 MHz) from 80 m (~3.5-4.0 MHz), the cutoff frequency typically would be 5 MHz.
A diplexer may be able to discriminate 80 m from 40 m signals by 20-40 dB. While 20 dB represents suppression of the unwanted signal's power by a factor of 10², it is insufficient to protect the radio.
That is why an HF diplexer is seldom used by itself. A bandpass filter in series with the diplexer might suppress the unwanted frequency an additional 40-60 dB, depending on its design. So the diplexer and bandpass filter, operating together, would typically suppress the adjacent-band signal by a total of 60-100 dB, or a factor of 10⁶ to 10¹⁰.
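The dB figures above convert to power ratios as 10^(dB/10); a quick sketch of the arithmetic:

```python
def db_to_power_ratio(db):
    """Convert a suppression figure in dB to a power ratio: 10^(dB/10)."""
    return 10.0 ** (db / 10.0)

# 20 dB of diplexer isolation alone is a factor of 100 -- not enough.
print(db_to_power_ratio(20))
# Diplexer plus bandpass filter in series, at the low and high ends:
print(db_to_power_ratio(60))   # 10^6
print(db_to_power_ratio(100))  # 10^10
```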
If a triplexer, rather than a diplexer, is desired to accommodate a third band, the problem becomes more complex. The "middle" frequency would necessarily have to be handled by a bandpass filter.
One problem is that the components of diplexers and triplexers for the 160, 80 and 40 m bands will be physically large. This size factor generally makes high-power diplexers, triplexers and bandpass filters quite costly.
Our Dunestar triplexers appear to use rather simple filter circuitry. Why do these units fail repeatedly, even with the radios operating at 100 watts? It can only be inadequate current or voltage ratings on the components, excessive SWR, or both. This suggests that the antennas connected to the triplexer should be close to resonant at the desired frequencies. Operating at the extreme ends of a band, especially under Field Day conditions when time does not always permit "tweaking" of antenna length, height or configuration, may produce unacceptably high SWR.
Here is the lesson we have learned: carefully research the characteristics of the diplexer or triplexer you are considering for purchase. Not only are band isolation and insertion loss important, but conservative voltage and current ratings on components are critical. Then do not deploy these devices on antennas where a near-resonant condition cannot be achieved.
We will probably replace both our Dunestar 160-80-40 triplexers with more robust devices to ensure another failure does not happen. Units available from VE6AM (www.va6am.com), DX Engineering (dxengineering.com) and 4O3A (www.4o3a.com/products/high-power-filters/combiner/) are under consideration to meet this need. [In the end we went with VE6AM's product, which has given excellent service.]
More good reading can also be found at:
~ John VA7XB
I finally got around to organizing my (small) meteorite collection. I don't have the space for a display cabinet right now, and when the pandemic lifts I'd like to be able to easily transport everything to schools and club outreach events, so I got a couple of HDX storage cases from the toolbox section at Home Depardieu. I think these things are the bee's knees. They're big, sturdy, and dirt cheap: right now you can get two cases, which lock together with the side tabs, for ten bucks. Best deal going. I got a couple of sets for Vicki, to help organize her histology slides, and they're working great for her, too. I'm tempted to buy a bunch of them just to have them on hand in case they ever stop making them or jack up the price.
I cut bubble wrap to fit and taped it into the lids, padded the little cubbies, put cards at the back of each cubby with info on each specimen, and every time I get silica gel packets with anything I toss them in the front of the case.
I did the same for my impactites. At the meteorite show-and-tell at a PVAA general meeting a couple of years ago (described here), the sight of Ken Elchert's monster tektite really fired my interest, and I went on a little tektite-collecting binge.
Here are my indochinites, from an impact in Southeast Asia, about 780,000 years ago, that produced the Australasian strewn field (australites, indochinites, philippinites, rizalites).
And here are the rest. The philippinite is from the same impact as the indochinites, it just flew further. The australites flew the farthest of all, and as they re-entered Earth's atmosphere (yeah!) their front edges melted and flowed to produce perfect little aerodynamic heat-shield shapes called "buttons". Real ones are a little outta my price range right now, but I got a nice cast of one from Gary Fujihara on eBay (here's his store). The bediasite is a personal favorite; it's from the impact 35 million years ago that gouged out Chesapeake Bay. That tektite was sitting in east Texas for more than half of the Age of Mammals before someone recognized it and collected it.
Why am I so fascinated by tektites, in particular? I think it is the diversity of shapes. Tektites are travelers in space and time, a frozen snapshot from the moment that a giant rock from space slammed into our planet. Each one is unique, and its shape tells a story about its flight through the atmosphere and subsequent erosion. Tektites embody everything that interests me: space, time, astronomy, geology, aerodynamics, and the history of our planet.
Parting shot: I have a question about storage. Right now I'm just using cotton balls for padding in my cases, because they were fast and cheap. Are there any downsides to using cotton balls over the long run? Should I spring for some Polyfil, or other artificial fiber? I live in a fairly dry climate and mold and mildew are generally not problems. Thanks in advance for any wisdom!
From the Intelligent Toasters blog:
It's time to create the enclosure for the CPC2. This is my first foray into the world of 3D printing and it was quite daunting at first. However, free online software like TinkerCad makes the process of creating a model fairly simple. I chose to start by creating the base that would hold the main PCB, as this would likely be the most difficult piece to get correct, mainly because the cut-outs for the ports had to line up correctly. I ended up with this:
When printed, it gave me this:
Not quite right, but almost! The holes for the HDMI and USB don't quite align. To be fair to me, I was working from the mechanical design, rather than measuring the actual board that I built, so it's not too bad.
One of the problems I have in testing the board fit is that there are two pin headers on the bottom of the board for the ESP32-Wroom32 and the FPGA, so it won't quite fit onto the mounting pegs for a flush fitting. My plan is to build a second board that will connect to the main board with pogo-pins and remove the pin headers completely. That way, the same board can sit low in the case when in use and sit on the pogo-pins for testing and programming. This also means it must be easy to remove the board from the case and return it when programmed, as it will flip-flop between the test harness and the finished case as needed for system programming.
Without spending time to really understand the capabilities of 3D printing, I opted for a conservative design for the enclosure. It will be a three-piece design comprising the base shown above, the top and the keyboard. It could probably have been done with two pieces, but that would have required support structures and been a lot more difficult to print.
Giveout Mountain's summit can be reached by car or truck on generally good logging roads. Portions of the drive up are steep and narrow; I recommend 4WD. There is recent evidence of a shooting range on the summit. During my visit several groups were scouting the area, and there were at least a dozen such ranges set up in the vicinity. I am guessing that arriving in the late morning on a weekend day may lead to disappointment.
Texas is an enormous place: the second-largest state in the U.S., and larger than the entire country of France. About 29 million people live there, mostly in metropolitan areas in the eastern half of the state, around Dallas, Houston, and San Antonio. From the streets of El Paso to the hills of East Texas, here are a few glimpses of the landscape of Texas, and some of the wildlife and people calling it home.
This photo story is part of Fifty, a collection of images from each of the United States.
Earlier this year (Spring 2020) I was communicating with Tom Reiland of Pennsylvania. Tom is a recent recipient of the Astronomical League's Leslie Peltier Award and a lifelong amateur astronomer. He mentioned a cluster in Cepheus which he discovered back in the '80s, and which was given the name Reiland 1.
Right Ascension: 23h 04.8m; Declination: +60° 05′
The following image was provided by Mario Motta of Massachusetts: 32-inch reflector; 40 minutes with an ASI6200 camera.
The following images were provided by James Dire of Illinois: 8-inch f/8 Ritchey-Chretien telescope with a 0.8x FR/FF and an SBIG ST-2000XCM camera. Exposure 60 minutes.
An excellent report by Mike McCabe from Massachusetts: Click on the above link.
Some esoteric gems this week.
I was happy to receive an email (September 17th) from my astronomy friend in Jordan, Anas Sawallha, with this 19-hour-36-minute new moon photo. Thank you, Anas.
A 'sorta' near drive-up in the Gifford Pinchot National Forest north of Carson. You can drive within a mile of this unremarkable peak and likely have a nice quiet time playing radio.
Since my post on Oct 2 about activating Brenton Point Park, I've done another activation at Trustom Pond (K-0517) on October 9th along the southern coast of RI, and plan on going to a park in the Point Judith area tomorrow afternoon. No doubt activating is fun because you get to be outside and sometimes have a chance to see friends in a safe location.
But the other half of the Parks On The Air program is being a Hunter: someone who looks for Activators. Without even knowing it, I had worked a few parks in the past year or so, so I spent a couple of hours working some parks over the past few days. The award system for POTA is automatic and simple: just download a PDF. Here is the entry-level award for Hunters:
We selected the southern route for Elk Point. This southern route allows for a double activation with Sula Peak W7M/RC-138. This mostly open-ridgeline route has good views of the distant Bitterroot Mountains and Anaconda Pintler Mountains, and closer views of the Sapphire Mountains.
All too often, I hear about some unfortunate ham who lost their computer-based log files due to some hardware or software failure. I don't know about you, but just the thought of losing a decade or more of QSO data gives me the chills.
Back in my working days as a Systems Engineer, I was called upon a few times to develop contingency plans for large computer systems and networks. While working on those projects, I would continually ask myself, "What would we do if…"
As a result of all that, I still think about backup plans and backups for those backups. One customer once told me I was a belt and suspenders kind of guy; one method of holding up my pants just wasn't enough.
Storing your log files (or any data that's important to you) in one place is a recipe for disaster. Hard drives can and do fail. (Been there, done that.) If your log file only exists on that failed hard drive, you're out of luck.
The obvious solution is to keep a copy of your log somewhere other than your hard drive. I've had computers fail on me a few times over the years, and I was thankful I had backup copies of my important files.
The easiest way to back up your log files is to create copies of them on removable storage media, such as an external hard drive, USB flash drive, or SD memory card.
The cost of storage devices has dropped significantly over the years. You can get a 1TB external hard drive these days for less than $50. I have a 1TB USB-connected drive that I use to back up all of my data, including my log files.
If you're just concerned with backing up your log files, a USB flash drive or an SD memory card is an inexpensive way to go. I often see 32GB flash drives for less than $10. I also use a thumb drive for an extra nightly backup of my logs. (Remember the belt and suspenders thing?)
If you're an N3FJP ACLog user, you have an easy way to back up your logs. You can configure ACLog to save a backup each time you close the program. So, if you attach an external storage device (flash drive, SD memory card, etc.) to your computer, your backups will happen automatically. I do this with SD memory cards on each of my laptops. So, when I'm logging in the field with no Internet access, I'm still backing up my logs. More belts and suspenders.
Back in the day, the computer systems I worked with regularly transported copies of their backups to another location across town. These off-site backups ensured that copies of data would survive a catastrophic event in the computer room. Hopefully, none of us ever face that situation.
For off-site storage, you could make a copy of your log data on removable media and take it to another location for safe-keeping. I'm too lazy for that. Thanks to the magic of the Internet, however, there are ways to do this electronically, and for free.
I keep my main log files (N3FJP ACLog and SKCC Logger) in a Dropbox folder that gets replicated to all of my computers. This approach allows me to run those logging programs on any of my computers using the same database. It also keeps a copy on Dropbox's server. For good measure, I also back up my logs to Google Drive. (There are the belt and suspenders again.)
I would be remiss if I didn't mention Logbook of the World as an off-site backup method. If you routinely upload to LoTW, you have a backup of at least the rudimentary information about your QSOs (callsign, date, time, band, mode, etc.). In my case, there is information in my logs that isn't captured by LoTW. So, restoring from LoTW would be a last resort for me.
I make nightly backups of all my logs to an external hard drive, a thumb drive, and Google Drive. If I were disciplined enough, I could manually copy the necessary files to all three locations. Knowing me, though, that probably wouldn't be a very reliable option.
So, I use backup software to automate all that. I use a paid version of SyncBackSE, but there are lots of other options available. I know Windows has a built-in backup capability, for example, but I have no experience using it.
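For the curious, the core of such automation is simple. Here is a minimal sketch in Python (the file and drive paths are hypothetical placeholders; real backup software adds verification, versioning, and scheduling on top of this):

```python
import shutil
from pathlib import Path

# Hypothetical paths for illustration -- point these at your own log files
# and backup drives.
LOG_FILES = [Path("ACLog/LogData.mdb"), Path("SKCCLogger/log.adi")]
DESTINATIONS = [Path("E:/log-backups"), Path("F:/log-backups")]

def backup_logs(log_files=LOG_FILES, destinations=DESTINATIONS):
    """Copy each existing log file to every destination, preserving
    file timestamps (shutil.copy2)."""
    for dest in destinations:
        dest.mkdir(parents=True, exist_ok=True)  # create the folder if needed
        for log in log_files:
            if log.exists():
                shutil.copy2(log, dest / log.name)
```

Scheduled to run nightly (Task Scheduler on Windows, cron elsewhere), this gives the same belt-and-suspenders effect: one run, multiple copies.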
Admittedly, my approach is somewhat overkill, bordering on paranoia. I'm not suggesting that you should do the same; I'm just offering up some possibilities for your consideration.
Regardless of how you do it, please make regular backup copies of your logs or any other data that's important to you. Someday, if your computer goes belly-up, you'll be awfully glad you did.
73, Craig WB3GCK
News from the Open Source Hardware Association (OSHWA):
In 2020 we conducted the third OSHW Community Survey (see 2012 and 2013), which collected 441 responses. All questions were optional, so you may notice response counts do not always add up to 441. In particular, a number of individuals didn't feel comfortable with the demographic questions. We ask these questions as part of our efforts to promote diversity in the community, but these too were optional and anonymous.
A few highlights from this year's survey compared to the 2013 survey:
- The portion of people coming to open source hardware from open source software increased from 14.6% to 23.9%
- In 2013, 42.8% of respondents indicated they have worked on or contributed to an open hardware project. This jumped to 85.6% in 2020.
- While 2013 showed a plurality of people using blogs to publish design files, this year's survey shows public repositories as the most popular option. The increase in people with open source software experience and improvements in repository collaboration offerings may be contributing factors.
- This year's survey shows a large increase in attendees for the 2020 Open Hardware Summit. This is likely due to 2020 being the first virtual summit. Although it was moved online due to unfortunate circumstances, the virtual platform offered the upside of greatly expanding the audience.
- A small gain in the community's gender diversity was seen, with those identifying as either female or other making up 18% of respondents, compared to 7% in 2013.
Interested in more granular results for any of these questions? Reach out to us at email@example.com.
This list of links runs in the same order as the BSD RSS feeds in my reader. What a coincidence!
ssh-keygen(1) with a FIDO authenticator.
There are now -K (kernel) and -U (user environment) options to uname. Minor, but good to know about the change.
The SOTA peak database lists the name of this peak incorrectly as "Cardwell Hills HP"; it is actually a well-known peak called "McCulloch Peak" in the McDonald State Forest, and all signs leading there, plus the marker on top, use that name.