It's always helpful to see other people's questions and answers; in this case it's a conversation about HAMMER2 snapshots and how to manage them. (Follow the thread)
The great white shark, a fast, powerful, 16-foot-long torpedo that's armed to the teeth with teeth, has little to fear except fear itself. But also: killer whales.
For almost 15 years, Salvador Jorgensen from the Monterey Bay Aquarium has been studying great white sharks off the coast of California. He and his colleagues would lure the predators to their boats using bits of old carpet that they had cut in the shape of a seal. When the sharks approached, the team would shoot them with electronic tags that periodically emit ultrasonic signals. Underwater receivers, moored throughout Californian waters, detected these signals as the sharks swam by, allowing the team to track their whereabouts over time.
In 2009, the team tagged 17 great whites, which spent months circling Southeast Farallon Island and picking off the local elephant seals. But this period of steady hunting ended on November 2 of that year, when two pods of killer whales (orcas) swam past the islands in the early afternoon. In the space of eight hours, all 17 great whites abruptly disappeared. They weren't dead; their tags were eventually detected in distant waters. They had just fled from Farallon. And for at least a month, most of them didn't return.
Jorgensen wondered if this was a one-off, but the tags recorded similar examples in later years: orcas arrive, and sharks skedaddle. Some orcas also hunt seals, so it's possible that the sharks are just trying to avoid competition, but that seems improbable, given how quickly they bolt. The more likely explanation is that the most fearsome shark in the world is terrified of orcas.
Killer whales have a friendlier image than great white sharks. (Perhaps because of their respective portrayals in movies: Jaws 2 even begins with the beached carcass of a half-eaten orca.) But orcas are "potentially the more dangerous predator," says Toby Daly-Engel, a shark expert at the Florida Institute of Technology. "They have a lot of social behaviors that sharks do not, which allows them to hunt effectively in groups, communicate among themselves, and teach their young."
Combining both brains and brawn, orcas have been known to kill sharks in surprisingly complicated ways. Some will drive their prey to the surface and then karate chop them with overhead tail swipes. Others seem to have worked out that they can hold sharks upside-down to induce a paralytic state called tonic immobility. Orcas can kill the fastest species (makos) and the largest (whale sharks). And when they encounter great whites, a few recorded cases suggest that these encounters end very badly for the sharks.
In October 1997, fishing vessels near Southeast Farallon Island observed a young white shark interrupting a pair of orcas that were eating a sea lion. One of the whales rammed and killed the shark, and the duo proceeded to eat its liver. More recently, after orcas passed by a South African beach, five great-white carcasses washed ashore. All were, suspiciously, missing their liver.
A great white's liver can account for a quarter of its body weight, and is even richer in fats and oils than whale blubber. It's "one of the densest sources of calories you can find in the ocean," Jorgensen says. "The orcas know their business, and they know where that organ lies."
Rather than ripping their prey apart, it seems that orcas can extract livers with surprising finesse, despite lacking arms and hands. No one has observed their technique, but the wounds on otherwise intact carcasses suggest that they bite their victims near their pectoral fins and then squeeze the liver out through the wounds. "It's like squeezing toothpaste," Jorgensen says.
An orca, then, is an apex predator's apex predator. No wonder sharks flee from them. But orcas don't actually have to kill any great whites to drive them away. Their mere presence, and most likely their scent, is enough. Many predators have similar effects. Their sounds and smells create a "landscape of fear": a simmering dread that changes the behavior and whereabouts of their prey. The presence of tiger sharks forces dugongs into deeper waters, where food is scarcer but cover is thicker. The mere sound of dogs can keep raccoons off a beach, changing the community of animals that lives in the tide pools.
The fear of death can shape the behavior of animals more than death itself. "Lions, for example, do not eat a lot of impala, but impala fear lions more than any other predator on the landscape except humans," says Liana Zanette from Western University in Canada, who studies landscapes of fear. Similarly, killer whales don't have to kill many white sharks to radically change their whereabouts. In 2009, for example, orcas passed by Southeast Farallon for less than three hours, but the great whites stayed away for the rest of the year. For the elephant seals, the island became a predator-free zone. "The two predators faced off, and the winners were the seals," Jorgensen says.
And what about the sharks? "They had to move to find a new food source when the killer whales ruined the neighborhood," Zanette says. "This could interfere with their ability to successfully migrate, which requires a bulk-up of fat and nutrients."
"We think of white sharks as these great ocean predators, but their bag of tricks includes knowing when to pack it in," Jorgensen says. "That play might have contributed to their long-standing success."
Or, in other words: Run away, doo doo doo doo doo doo, run away, doo doo doo doo doo doo, run away, doo doo doo doo doo doo, run away.
Fashion in Pakistan, Ivanka Trump in Ethiopia, ongoing protests in Sudan, Passover in Israel, a huge election in Indonesia, Holy Week celebrations in Spain, an aircraft with the world's longest wingspan, Notre-Dame cathedral ablaze in Paris, Easter preparations in Ukraine, performances at Coachella, spring skiing in Siberia, and much more.
Do you support dark skies? Are you a member of the IDA? For the past ten or more years, I've not been a member either. But next week (April 2019) I'm going to show my support for dark skies and become a member again.
You should consider doing the same.
It is estimated that only one (1) in a hundred (100) amateur astronomers is a member. One percent of the entire amateur astronomy community is not a good number. Would you agree? Roger Ivester
The source code of a set of Iranian cyberespionage tools was leaked online.
Good sources of entropy (noise) are an essential part of modern cryptographic systems. I designed a mobile-friendly avalanche noise generator as part of the background work I've been doing for the betrusted project (more on that project later). I had to do a new design because the existing open-source ones I could find were too large and power hungry to integrate into a mobile device. I also found it hard to find solid theory pieces on avalanche noise generators, so in the process of researching this I wrote up all my notes in case someone needs to do a ground-up redesign of the system again in the future.
Here's an excerpt from the notes:
Avalanche breakdown is essentially a miniature particle accelerator, where electrons that enter a PN junction's depletion region (through mechanisms that include thermal noise) are accelerated across an electrical field, to the point where new electron-hole pairs are generated when these high-energy electrons collide with atoms in the depletion region, creating an amplification cascade with low reproducibility.
An approximate analogy is an inflatable pool filled with water. The height of the pool is the potential barrier of the reverse-biased PN junction. A hose feeding water into the pool represents a constant current of electrons. The volume of the pool can be thought of as the depletion capacitance, that is, the capacitor created by the region of the junction that is void of carriers due to natural drift and diffusion effects. As water trickles into the pool, the water level rises and eventually forms a meniscus. Random disturbances, such as ripples on the surface due to wind, eventually cause the meniscus to crest over the edge of the pool. The water flowing over the edge pushes down on the inflatable pool's side, causing more water to flow, until the level has reduced to a point where the inflatable pool's side can snap back into its original shape, thus restarting the cycle of filling, cresting, and breakdown. The unpredictability of when and where the breakdown might happen, and how much water flows out during the event, is analogous to the entropy generated by the avalanche effect in a PN junction.
The electrical characteristic of avalanche noise biased by a constant current source is a "sawtooth" waveform: a linear ramp up in voltage as the capacitance of the depletion region charges to the point where the electric field becomes large enough to initiate the cascade, and then a sharp drop off in voltage as the cascade rapidly discharges the junction capacitance. The cascade then abruptly halts once the field is no longer strong enough to sustain itself, leading to a subsequent cycle of charging and breakdown.
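The charge/breakdown cycle can be sketched as a toy simulation. All component values below are illustrative placeholders, not the actual circuit's parameters; the point is just that the ramp time follows t = C·V_onset/I, with the avalanche onset voltage jittering from cycle to cycle:

```python
import random

# Toy model of the charge/breakdown cycle described above.
# Component values are illustrative, not taken from the real design.
I_BIAS = 1e-6        # constant bias current, amps
C_JUNCTION = 1e-12   # depletion capacitance, farads
V_BREAKDOWN = 10.0   # nominal avalanche onset voltage, volts
JITTER = 0.05        # fractional randomness in onset, stands in for avalanche noise

def sawtooth_periods(n, seed=None):
    """Return n charge-to-breakdown times for the simplified model.

    The linear ramp satisfies V(t) = I*t/C, so each cycle lasts
    t = C * V_onset / I, where V_onset varies randomly per cycle.
    """
    rng = random.Random(seed)
    periods = []
    for _ in range(n):
        v_onset = V_BREAKDOWN * (1 + rng.uniform(-JITTER, JITTER))
        periods.append(C_JUNCTION * v_onset / I_BIAS)
    return periods

periods = sawtooth_periods(5, seed=42)
```

With these placeholder values each cycle lasts around 10 microseconds; in the real circuit the timing depends on the actual junction capacitance and bias current.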
The site also includes detailed schematics and measurement results, such as this one.
The final optimized design takes <1cm^2 area and draws 520uA at 3.3V when active and 12uA in standby (mostly 1.8V LDO leakage for the output stage, included in the measurement but normally provided by the system), and it passes preliminary functional tests from 2.8-4.4V and 0-80C. The output levels target a 0-1V swing, meant to be sampled using an on-chip ADC from a companion MCU, but one could add a comparator and turn it into a digital-compatible bitstream I suppose. I opted to use an actual diode instead of a NPN B-E junction, because the noise quality is empirically better and anecdotes on the Internet claim the NPN B-E junctions fail over time when operated as noise sources. I'll probably go through another iteration of tweaking before final integration, but afaik this is the smallest, lowest power open-source avalanche noise generator to date (slightly smaller than this one).
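Raw ADC samples of a noise source are rarely perfectly unbiased, so some whitening step is usually applied before use. As one illustration (not necessarily what betrusted does), a classic von Neumann extractor over the sample LSBs looks like this; the sample values are made up:

```python
def von_neumann_extract(bits):
    """Debias a bitstream: for each non-overlapping pair, emit the first
    bit when the pair differs ((0,1) -> 0, (1,0) -> 1) and drop (0,0) and
    (1,1). Output is unbiased if input bits are independent, at the cost
    of throughput."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

def lsb_bits(adc_samples):
    """Take the least significant bit of each ADC reading."""
    return [s & 1 for s in adc_samples]

# Hypothetical 10-bit ADC readings of the noise waveform:
samples = [512, 387, 645, 511, 700, 123, 256, 801]
bits = von_neumann_extract(lsb_bits(samples))
```

A production design would typically feed raw samples into a cryptographic conditioner (e.g. a hash-based extractor) instead, but the von Neumann scheme is the easiest to reason about.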
The only thing I'm more tired of than zombies is people taking zombies, making minor changes, calling them something else, and pretending they aren't zombies. Any time the dead are reanimated and bent on attacking the living, you've got yourself a zombie, no matter what you call it.
Deadites are zombies. Shouting threats and wanting to "swallow your soul" instead of eating your brains makes no difference.
White Walkers are also zombies. Yeah, I know, they're organized, but that just makes them zombies with a boss.
There are many names for full moons, most of which trace back to Native American peoples and the early colonists. April's full moon is traditionally known as the Full Pink Moon, named for the flower moss pink, or ground phlox, which blooms across eastern North America in April. Other native tribes know it as the Sugarbushing Moon, Egg Moon, Frog Moon and Sprouting Grass Moon. Each name reflects a different seasonal event or tradition.
Here in the Lake Superior region where I live, the Anishinaabe (Ojibwe) people call an April moon the Broken Snowshoe Moon. Although I don't know the origin of the name, I suspect it refers to how a pair of snowshoes might end up after a long, brutal winter. We still have snow on the ground at my house, and if I were to head into the woods wearing mine, I'd scrape them up good.
Whichever name you prefer, most full moons rise pinkish-orange because the thick atmosphere near the horizon scatters away the blue-violet end of the white light that reflects off the moon. Watching the moon rise from a location with an unobstructed horizon is still one of the best sights in the world. It's even better when you can watch two full moons rise.
Yes, that's right: we've got two full-moon nights to look forward to. At least if you're a skywatcher in the Americas. Because the moment of greatest fullness (99.9 percent) happens at 6:12 a.m. Central Time Friday morning, we split the difference between tonight and tomorrow night. The moon will be about 99.7 percent illuminated tonight and 99 percent Friday evening. I doubt most of us will see the difference with the naked eye, but you might be able to tell in binoculars.
A full moon sits directly opposite the sun and faces it square on, the reason we see it all lit up. But wait. I just said that 99.9 percent would be visible at max full. What about that other tenth of a percent? The moon exactly faces the sun ONLY during a total lunar eclipse, when it forms a nearly perfect straight line with the sun and Earth. At other full moons, it's slightly out of line, so a tiny fraction of the moon remains shaded: the 0.1 percent!
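Those percentages follow from the standard phase-angle formula: the illuminated fraction of the disk is k = (1 + cos θ)/2, where θ is the sun-moon-Earth angle. A quick check (the 5.1-degree angle below is an illustrative value, not this month's actual geometry):

```python
import math

def illuminated_fraction(phase_angle_deg):
    """Fraction of the lunar disk that appears lit, from the standard
    phase-angle formula k = (1 + cos(theta)) / 2."""
    return (1 + math.cos(math.radians(phase_angle_deg))) / 2

# A perfectly full moon (only possible during a lunar eclipse) has phase angle 0:
full = illuminated_fraction(0)      # exactly 1.0
# A few degrees off the sun-Earth-moon line already leaves a sliver shaded:
almost = illuminated_fraction(5.1)  # roughly 0.998, i.e. about 99.8 percent
```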
In April, as in most months, the full moon passes a little above or below Earth's shadow (and breaks the perfect lineup) because its orbit is tilted with respect to the plane of Earth's orbit around the sun. If the moon orbited exactly in the same plane as Earth orbits the sun, we would see a total lunar eclipse every month.
If you have good weather, try to catch a moonrise the next couple of nights. Click here to find moonrise times for where you live.
The crooks responsible for launching phishing campaigns that netted dozens of employees and more than 100 computer systems last month at Wipro, India's third-largest IT outsourcing firm, also appear to have targeted a number of other competing providers, including Infosys and Cognizant, new evidence suggests. The clues so far suggest the work of a fairly experienced crime group that is focused on perpetrating gift card fraud.
On Monday, KrebsOnSecurity broke the news that multiple sources were reporting a cybersecurity breach at Wipro, a major trusted vendor of IT outsourcing for U.S. companies. The story cited reports from multiple anonymous sources who said Wipro's trusted networks and systems were being used to launch cyberattacks against the company's customers.
In a follow-up story Wednesday on the tone-deaf nature of Wipro's public response to this incident, KrebsOnSecurity published a list of "indicators of compromise" or IOCs, telltale clues about tactics, tools and procedures used by the bad guys that might signify an attempted or successful intrusion.
If one examines the subdomains tied to just one of the malicious domains mentioned in the IoCs list (internal-message[.]app), one very interesting Internet address is connected to all of them: 185.159.83[.]24. This address is owned by King Servers, a well-known bulletproof hosting company based in Russia.
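Indicators like these are usually published "defanged" (with [.] in place of dots, and hxxp for http) so they can't be clicked or auto-linked by accident. Anyone replicating this kind of lookup needs a small helper to convert back and forth; the sample indicators below are the ones from the story:

```python
def refang(indicator):
    """Convert a defanged IoC back to a resolvable/queryable form."""
    return indicator.replace("[.]", ".").replace("hxxp", "http")

def defang(indicator):
    """Make an indicator safe to publish by neutralizing dots and schemes."""
    return indicator.replace(".", "[.]").replace("http", "hxxp")

domain = refang("internal-message[.]app")   # "internal-message.app"
address = refang("185.159.83[.]24")         # "185.159.83.24"
```

Only refang indicators inside an isolated analysis environment; the whole point of defanging is to keep them inert everywhere else.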
According to records maintained by Farsight Security, that address is home to a number of other likely phishing domains:
The subdomains listed above suggest the attackers may also have targeted American retailer Sears; Green Dot, the world's largest prepaid card vendor; payment processing firm Elavon; hosting firm Rackspace; business consulting firm Avanade; IT provider PCM; and French consulting firm Capgemini, among others. KrebsOnSecurity has reached out to all of these companies for comment, and will update this story in the event any of them respond with relevant information.
It appears the attackers in this case are targeting companies that have access to a ton of third-party company resources, and/or companies that can be abused to conduct gift card fraud.
Wednesday's follow-up on the Wipro breach quoted an anonymous source close to the investigation saying the criminals responsible for breaching Wipro appear to be after anything they can turn into cash fairly quickly. That source, who works for a large U.S. retailer, said the crooks who broke into Wipro used their access to perpetrate gift card fraud at the retailer's stores.
Another source said the investigation into the Wipro breach by a third-party company has determined so far that the intruders compromised more than 100 Wipro systems and installed on each of them ScreenConnect, a legitimate remote access tool. Investigators believe the intruders were using the ScreenConnect software on the hacked Wipro systems to connect remotely to Wipro client systems, which were then used to leverage further access into Wipro customer networks.
This is remarkably similar to activity that was directed against a U.S.-based company in 2016 and 2017. In May 2018, Maritz Holdings Inc., a Missouri-based firm that handles customer loyalty and gift card programs for third parties, sued Cognizant (PDF), saying a forensic investigation determined that hackers used Cognizant's resources in an attack on Maritz's loyalty program that netted the attackers more than $11 million in fraudulent eGift cards.
That investigation determined the attackers also used ScreenConnect to access computers belonging to Maritz employees. "This was the same tool that was used to effectuate the cyber-attack in Spring 2016. Intersec [the forensic investigator] also determined that the attackers had run searches on the Maritz system for certain words and phrases connected to the Spring 2016 attack."
According to the lawsuit by Maritz Holdings, investigators also determined that the "attackers were accessing the Maritz system using accounts registered to Cognizant. For example, in April 2017, someone using a Cognizant account utilized the 'fiddler' hacking program to circumvent cyber protections that Maritz had installed several weeks earlier."
Maritz said its forensic investigator found the attackers had run searches on the Maritz system for certain words and phrases connected to the Spring 2016 eGift card cashout. Likewise, my retailer source in the Wipro attack told KrebsOnSecurity that the attackers who defrauded them also searched their systems for specific phrases related to gift cards, and for clues about security systems the retailer was using.
It's unclear if the work of these criminal hackers is tied to a specific, known threat group. But it seems likely that the crooks who hit Wipro have been targeting similar companies for some time now, and with a fair degree of success in translating their access to cash, given the statements by my sources in the Wipro breach and this lawsuit against Cognizant.
What's remarkable is how many antivirus companies still aren't flagging as malicious many of the Internet addresses and domains listed in the IoCs, as evidenced by a search at virustotal.com.
Update, April 19, 11:25 a.m. ET: I heard back from some of the other targets. Avanade shared the following statement:
"Avanade was a target of the multi-company security incident, involving 34 of our people in February. Through our cyber incident response efforts and technologies, we swiftly contained and remediated the situation. As a result, there was no impact to our client portfolio or sensitive company data. Our review has concluded this was an isolated incident. Our security defenses have continued to protect against any potential threat related to this matter. And we continue to take our responsibility to safeguard our clients' data with the utmost seriousness."
"We are aware of reports that our company was among many other service providers and businesses whose email systems were targeted in an apparent criminal hacking scheme related to gift card fraud. Since the criminal activity first surfaced earlier this week and following reports that another service provider's email system was allegedly compromised, Cognizant's security experts took immediate and appropriate actions including initiating a review."
"While our review remains ongoing, we have seen no indication to date that any client data was compromised. It is not unusual for a large company like Cognizant to be the target of spear phishing attempts such as this. The integrity of our systems and our clients' systems is of paramount importance to Cognizant. We continuously monitor, update and strengthen our systems against unauthorized access and have put additional protocols in place related to this specific industry-wide incident."
Infosys said it has not observed any breach of its network based on its monitoring and threat intelligence. "This has been ascertained through a thorough analysis of the indicators of compromise that we received from our threat intelligence partners," the company said in a statement.
Rackspace said it has no evidence to indicate that there has been impact to the Rackspace environment: "Rackspace Security Operations continuously monitors our environment for threats and takes appropriate action should an issue be identified."
Capgemini said its internal Security Operation Center (SOC) detected and monitored suspicious activity that showed similar patterns to the attack faced by WIPRO. "This occurred between March 4 and March 19. The activity concentrated on a very limited number of laptops and servers. Immediate remedial action took place. There has been no impact on us, nor on our clients to date."
Seven planets of roughly Earth-size make TRAPPIST-1 a continuing speculative delight, as witness the colorful art it generates below. And with three of the planets arguably in the star's habitable zone, this diminutive star attracts the attention of astrobiologists anxious to examine the possible parameters under which they orbit. One thing that is only now receiving attention is the question of planet-to-planet tidal effects, as opposed to the star's tidal effects on its planets.
Image: An artist's impression of the perpetual sunrise that might greet visitors on the surface of planet TRAPPIST-1f. If the planet is tidally locked, the "terminator region" dividing the night side and day side of the planet could be a place where life might take hold, even if the day side is bombarded by energetic protons. In this image, TRAPPIST-1e can be seen as a crescent in the upper left of the image, d is the middle crescent, and c is a bright dot next to the star. Credit: NASA/JPL-Caltech.
In our Solar System, we've become familiar with the idea that tidal deformation can cause interior heating, a fact that could well support both Europa at Jupiter and Enceladus at Saturn with energy needed to retain temperatures suitable for life below their icy surfaces. The effects are extreme at Io (though hardly life-inducing!) and also noteworthy on Neptune's large moon Triton. Here again TRAPPIST-1 stands out, because we know of no other system where planets, not moons, are so tightly wound that they can raise significant tides on each other.
Consider TRAPPIST-1g, the sixth planet in the system, which according to a study performed by Hamish Hay and Isamu Matsuyama (Lunar and Planetary Laboratory, University of Arizona) experiences the mixed effects of tidal heating from the central star and the other planets more strongly than any other planet in the system.
Tides from the other planets in a planetary system are rarely seen as a factor, say the scientists, but heating due to tidal deformation is definitely in play here. From the paper:
Such tides are typically negligible because the mass of the central tide raising body is usually far greater than other bodies in the system, and also because the distances between these bodies are vast and the strength of tidal forces decreases with the distance between them cubed. The seven planet extrasolar system TRAPPIST-1 ... is the first system to be discovered where this is not the case. The separation distance at conjunction is small enough that tides raised by neighbouring planets can become significant, and heating must occur as a result.
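The cubed-distance scaling makes it easy to see why TRAPPIST-1 is unusual. A rough back-of-envelope comparison of the tide a neighboring planet raises versus the stellar tide, using approximate, illustrative masses and separations (not values from the paper):

```python
# Tidal acceleration from a body of mass M at distance d scales as M / d**3.
# Compare the tide raised on a TRAPPIST-1 planet by a roughly Earth-mass
# neighbor at conjunction with the tide raised by the star itself.
# All numbers are approximate and for illustration only.

M_SUN = 1.989e30            # kg
M_EARTH = 5.972e24          # kg
AU = 1.496e11               # m

m_star = 0.09 * M_SUN       # TRAPPIST-1 is roughly 9% of a solar mass
m_neighbor = 1.0 * M_EARTH  # an approximately Earth-mass neighboring planet
a_star = 0.03 * AU          # orbital distance from the star
d_conjunction = 0.007 * AU  # planet-to-planet separation at conjunction

tide_star = m_star / a_star**3
tide_neighbor = m_neighbor / d_conjunction**3
ratio = tide_neighbor / tide_star
# The neighbor's tide comes out within a few orders of magnitude of the
# stellar tide; in the Solar System this ratio is vanishingly small.
```

The star still dominates, but a planet-planet tide at the level of a tenth of a percent of the stellar tide, repeated every conjunction, is exactly the "significant" forcing the paper describes.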
Similarly, TRAPPIST-1's two inner planets come close enough to raise powerful tides on each other, possibly sustaining volcanic activity on worlds that would be too hot on the day side to support life. An atmosphere maintained by volcanic eruptions could move heat to the night side, assuming tidal lock.
Image: An artist's concept for a view of the TRAPPIST-1 system from near TRAPPIST-1f. The system is located in the constellation Aquarius and is just under 40 light-years away from Earth. Credit: NASA/JPL-Caltech.
We've also recently looked at Lisa Kaltenegger's work on the effect of intense radiation on M-dwarf planets (see M-Dwarfs: Weighing UV Radiation and Habitability). Kaltenegger (Cornell University/Carl Sagan Institute) has been investigating possible ways for life to survive the intense flares and ultraviolet radiation that pummel such worlds. Various mechanisms suggest themselves, enough to keep open the possibility that planets like these could sustain life.
What Federico Fraschetti (Harvard Smithsonian Center for Astrophysics) and colleagues have been studying is the ability of a star so much cooler and less massive than the Sun to emit such quantities of radiation. The scientists have simulated the path of high-energy protons through the magnetic field of the star, finding that the first of the three TRAPPIST-1 planets thought to be in the habitable zone (TRAPPIST-1e) is receiving up to 1 million times more flux than Earth.
We're fortunate, of course, in being protected by our planet's magnetic field from our star's energetic proton bath, but Fraschetti's calculations show that to have the same effect at TRAPPIST-1e, the planet's magnetic field would need to be hundreds of times more powerful than Earth's. The conclusion is based on the star's most likely field alignment, which brings its energetic protons directly to the surface of TRAPPIST-1e, where damaging biological effects could occur. But much depends upon how the star's magnetic field is angled away from its axis of rotation, making this a key datapoint for future investigations. From the paper:
Based on the scaling relation between far-UV emission and energetic protons for solar flares by Youngblood et al. (2017), we estimate that the innermost putative habitable planet, TRAPPIST-1e, is bombarded by a proton flux up to 6 orders of magnitude larger than experienced by the present-day Earth. Such a bombardment of planets in this study is found to result largely from the misalignment of the B-field/rotation axis assumed for the star-proxy. Since the exact magnetic morphology and alignment of the magnetic field is currently unknown for TRAPPIST-1, and for M dwarfs in general, our results indicate that determination of these quantities for exoplanet hosts would be of considerable value for understanding their radiation environments.
TRAPPIST-1e, then, may need some of Lisa Kaltenegger's proposed solutions to the radiation flux problem if it is to be considered habitable. Lithophilic life, or perhaps life beneath an ocean, is one solution among those that Kaltenegger has proposed, and of course there is the possibility of tidal lock, which could keep the "dark" side of the planet free of the flux. Habitability, as we continue to learn, is by no means an easy call, no matter where a planet is located within or without the putative habitable zone of its host.
The papers are Fraschetti et al., "Stellar Energetic Particles in the Magnetically Turbulent Habitable Zones of TRAPPIST-1-like Planetary Systems," Astrophysical Journal Vol. 874, No. 1 (18 March 2019) (abstract / preprint); and Hay & Matsuyama, "Tides Between the TRAPPIST-1 Planets," Astrophysical Journal Vol. 875, No. 1 (9 April 2019) (abstract / preprint).
KiCad is the electronic design automation software that lives at the intersection of electronic design and open source software. It's seen a huge push in development over the last few years, which has grown the suite into a mountain of powerful tools. To help better navigate that mountain, the first ever KiCad conference, KiCon, is happening next week in Chicago, and Hackaday is hosting one of the afterparties.
The two days of talks take place on April 26th and 27th, covering a multitude of topics. KiCad's project leader, Wayne Stambaugh, will discuss the state of the development effort. You'll find talks on best practices for using the software as an individual and as a team, how to avoid common mistakes, and when you should actually try to use the auto-router. You can learn about automating your design process with programs that generate footprints, by connecting it through git, and through alternate user interfaces. KiCad has 3D modeling to make sure your boards will fit their intended enclosures, and talks will cover generating models in FreeCAD and rendering designs in both Fusion360 and Blender. Dust off your dark arts with RF and microwave design tips as well as simulating KiCad circuits in SPICE. If you can do it in KiCad, you'll learn about it at KiCon.
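Programmatic footprint generation of the sort those talks cover can be as simple as emitting KiCad's s-expression footprint format from a script. A much-simplified sketch for a 1xN pin header (a real footprint would also carry reference/value text, courtyard and fab layers, and the exact fields expected by your KiCad version):

```python
def pin_header_footprint(name, pins, pitch=2.54):
    """Emit a minimal KiCad-style s-expression footprint for a 1xN
    through-hole pin header. Simplified for illustration: field names
    and required attributes vary between KiCad versions."""
    pads = []
    for i in range(pins):
        pads.append(
            "  (pad {n} thru_hole circle (at {x:.2f} 0) (size 1.7 1.7) "
            "(drill 1.0) (layers *.Cu *.Mask))".format(n=i + 1, x=i * pitch)
        )
    return "(module {name} (layer F.Cu)\n{pads}\n)".format(
        name=name, pads="\n".join(pads)
    )

print(pin_header_footprint("CONN_1x04", 4))
```

Generating footprints as text like this keeps them diffable in git, which pairs nicely with the version-control workflows also on the schedule.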
Of course there's a ton of fun to be had as interesting hackers from all over the world come together in the Windy City. Hackaday's own Anool Mahidharia and Kerry Scharfglass will be presenting talks, and Mike Szczys will be in the audience. We anticipate an excellent "lobby con" where the conversations away from the stages are as interesting as the formal talks. And of course there are afterparties!
- Friday 4/26: Pumping Station: One, the popular Chicago hackerspace now celebrating its 10-year anniversary, is hosting an afterparty (details TBA)
- Saturday 4/27: Hackaday is hosting an afterparty at Jefferson Tap from 6-8:30. We're providing beverages and light food for all who attended the conference.
DNS hijacking isn't new, but this seems to be an attack of unprecedented scale:
Researchers at Cisco's Talos security division on Wednesday revealed that a hacker group it's calling Sea Turtle carried out a broad campaign of espionage via DNS hijacking, hitting 40 different organizations. In the process, they went so far as to compromise multiple country-code top-level domains -- the suffixes like .co.uk or .ru that end a foreign web address -- putting all the traffic of every domain in multiple countries at risk.
The hackers' victims include telecoms, internet service providers, and domain registrars responsible for implementing the domain name system. But the majority of the victims and the ultimate targets, Cisco believes, were a collection of mostly governmental organizations, including ministries of foreign affairs, intelligence agencies, military targets, and energy-related groups, all based in the Middle East and North Africa. By corrupting the internet's directory system, hackers were able to silently use "man in the middle" attacks to intercept all internet data from email to web traffic sent to those victim organizations.
Cisco Talos said it couldn't determine the nationality of the Sea Turtle hackers, and declined to name the specific targets of their spying operations. But it did provide a list of the countries where victims were located: Albania, Armenia, Cyprus, Egypt, Iraq, Jordan, Lebanon, Libya, Syria, Turkey, and the United Arab Emirates. Cisco's Craig Williams confirmed that Armenia's .am top-level domain was one of the "handful" that were compromised, but wouldn't say which of the other countries' top-level domains were similarly hijacked.
Another news article.
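One defensive habit this attack suggests: routinely compare the DNS answers for your own critical domains against a pinned set of expected addresses, since a hijack at the registrar or TLD level shows up as unexpected answers from otherwise-honest resolvers. A minimal sketch (the hostnames and addresses below are placeholders; the example runs on static data, no network needed):

```python
import socket

def resolve_a_records(hostname):
    """Return the set of IPv4 addresses the local resolver gives for a host."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    return {info[4][0] for info in infos}

def check_pinned(answers, expected):
    """Return any addresses outside the pinned set -- a possible hijack signal.

    DNS answers change legitimately (CDNs, migrations), so treat a non-empty
    result as an alert to investigate, not as proof of compromise.
    """
    return set(answers) - set(expected)

# Example with static data:
unexpected = check_pinned({"93.184.216.34", "203.0.113.9"}, ["93.184.216.34"])
# unexpected == {"203.0.113.9"}
```

Monitoring like this would not have stopped Sea Turtle, but it shortens the window in which redirected traffic goes unnoticed; so do registry locks and DNSSEC.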
The Mozilla IoT team is excited to announce that after two years of development and seven quarterly software updates that have generated significant interest from the developer & maker community, Project Things is graduating from its early experimental phase and from now on will be known as Mozilla WebThings.
Mozilla's mission is to "ensure the Internet is a global public resource, open and accessible to all. An Internet that truly puts people first, where individuals can shape their own experience and are empowered, safe and independent."
Mozilla WebThings is an open platform for monitoring and controlling devices over the web, including:
- WebThings Gateway, a software distribution for smart home gateways focused on privacy, security and interoperability
- WebThings Framework, a collection of reusable software components to help developers build their own web things
We look forward to a future in which Mozilla WebThings software is installed on commercial products that can provide consumers with a trusted agent for their "smart", connected home.
The WebThings Gateway 0.8 release is available to download today. If you have an existing Things Gateway it should have automatically updated itself. This latest release includes new features which allow you to privately log data from all your smart home devices, a new alarms capability, and a new network settings UI.
Have you ever wanted to know how many times the door was opened and closed while you were out? Are you curious about energy consumption of appliances plugged into your smart plugs? With the new logs features you can privately log data from all your smart home devices and visualise that data using interactive graphs.
To enable the new logging features, go to Settings > Experiments in the main menu and enable the "Logs" option.
You'll then see the Logs option in the main menu. From there you can click the "+" button to choose a device property to log, including how long to retain the data.
The time series plots can be viewed by hour, day, or week, and a scroll bar lets users scroll back through time. This feature is still experimental, but viewing these logs will help you understand the kinds of data your smart home devices are collecting and think about how much of that data you are comfortable sharing with others via third party services.
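The retain-and-window behaviour described above can be sketched in a few lines of Python. This is purely illustrative: the class and method names are invented here, and this is not the WebThings Gateway's actual implementation.

```python
import time
from collections import deque

class PropertyLog:
    """Minimal sketch of a time-series log for one device property,
    with a retention window like the one configurable in the Logs UI.
    (Illustrative only; not the WebThings Gateway's actual code.)"""

    def __init__(self, retention_seconds):
        self.retention = retention_seconds
        self.samples = deque()  # (timestamp, value) pairs, oldest first

    def append(self, value, now=None):
        now = time.time() if now is None else now
        self.samples.append((now, value))
        # Drop samples older than the retention window.
        while self.samples and self.samples[0][0] < now - self.retention:
            self.samples.popleft()

    def window(self, seconds, now=None):
        """Return the samples from the last `seconds`, e.g. for the
        hour, day, or week views of the interactive graphs."""
        now = time.time() if now is None else now
        return [(t, v) for t, v in self.samples if t >= now - seconds]
```

A deque keeps appends and expiry cheap, which matters on a Raspberry Pi where, as noted below, heavy logging already stresses the SD card.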
Note: If booting WebThings Gateway from an SD card on a Raspberry Pi, please be aware that logging large amounts of data to the SD card may make the card wear out more quickly!
Home safety and security are among the big potential benefits of smart home systems. If one of your "dumb" alarms is triggered while you are at work, how will you know? Even if someone in the vicinity hears it, will they take action? Do they know who to call? WebThings Gateway 0.8 provides a new alarms capability for devices like smoke alarms, carbon monoxide alarms or burglar alarms.
This means you can now check whether an alarm is currently active, and configure rules to notify you if an alarm is triggered while you're away from home.
In previous releases, moving your gateway from one wireless network to another while the previous Wi-Fi access point was still active could not be done without console access and command line changes directly on the Raspberry Pi. With the 0.8 release, it is now possible to re-configure your gateway's network settings from the web interface. These new settings can be found under Settings > Network.
You can either configure the Ethernet port (with a dynamic or static IP address) or re-scan available wireless networks and change the Wi-Fi access point that the gateway is connected to.
We're also excited to share that we've been working on a new OpenWrt-based build of WebThings Gateway, aimed at consumer wireless routers. This version of WebThings Gateway will be able to act as a Wi-Fi access point itself, rather than just connect to an existing wireless network as a client.
This is the beginning of a new phase of development of our gateway software, as it evolves into a software distribution for consumer wireless routers. Look out for further announcements in the coming weeks.
Along with a refresh of the Mozilla IoT website, we have made a start on some online user & developer documentation for the WebThings Gateway and WebThings Framework. If you'd like to contribute to this documentation you can do so via GitHub.
Thank you for all the contributions we've received so far from our wonderful Mozilla IoT community. We look forward to this new and exciting phase of the project!
By the beginning of 1963, as the crippled Mars 1 was making its way towards the Red Planet, Chief Designer Sergei Korolev and his team at the OKB-1 design bureau were already making plans for an improved family of interplanetary spacecraft, designated Object 3MV, that would be sent to Venus and Mars during the next launch opportunities in 1964. These new spacecraft would incorporate a laundry list of incremental improvements and design changes based on experience with the earlier 2MV series of spacecraft (see "You Can't Fail Unless You Try: The Soviet Venus & Mars Missions of 1962"). Among these was a set of experimental ion thrusters that could serve as a backup to the primary compressed-nitrogen attitude control system that had failed on Mars 1.
Problems with the 8K78 launch vehicle (later known as "Molniya," after the series of Soviet communications satellites it would launch) had doomed eight of the ten planetary probes launched between 1960 and 1962, not to mention the first two of the three E-6 unmanned lunar landers launched during the first quarter of 1963. These problems were to be addressed with a new version, the 8K78M, under development at OKB-1's Branch No. 3 in Samara under the direction of Dmitri Kozlov (the branch became an independent organization in 1974 and, known today as RKTs Progress, is responsible for the Soyuz family of launch vehicles). The upgraded 8K78M incorporated its own list of improvements, including upgraded engines and significant modifications to the Blok L escape stage, whose poor performance had stranded a half dozen planetary probes in their temporary parking orbits around the Earth.
Externally, the 3MV series resembled the earlier 2MV. The 3.6-meter-tall 3MV was built around a cylindrical orbital compartment 1.1 meters in diameter and about as tall. This compartment contained control systems, power supplies, and communications gear, as well as some instrument electronics, with a planetary compartment mounted beneath it that was geared towards specific investigations of the target planet. As in the earlier 2MV design, mounted on top of the orbital compartment was a course correction system that employed a KDU-414 engine designed and built at the OKB-2 design bureau led by Aleksei Isayev. The pressure-fed KDU-414 had a 35-kilogram supply of UDMH (unsymmetrical dimethylhydrazine) and nitric acid to generate two kilonewtons of thrust. Depending on the actual mass of the spacecraft, this was sufficient propellant for a delta-v of 80 to 100 meters per second. Normally this engine would be fired twice during a typical mission: once a few days after leaving the Earth to correct the 3MV trajectory for launch errors, and a second time a few days before encountering its target to refine the approach trajectory to meet the mission's objectives. Also located here was the attitude control system, which used pressurized nitrogen stored in a pair of tanks mounted on the orbital compartment.
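As a rough cross-check, the quoted 80 to 100 meters per second of delta-v can be recovered from the Tsiolkovsky rocket equation using the 35-kilogram propellant load and a launch mass near the 948 kilograms of the 3MV-1. The specific impulse is an assumption on my part (roughly 250 to 270 seconds, typical of pressure-fed UDMH/nitric acid engines of the era), not a figure from the article.

```python
import math

# Tsiolkovsky cross-check of the quoted 80-100 m/s delta-v budget.
# Propellant (~35 kg) and launch mass (~948 kg, 3MV-1) are from the text;
# the specific impulse values are assumptions typical of the engine type.
G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s, m0_kg=948.0, propellant_kg=35.0):
    """Ideal delta-v: dv = Isp * g0 * ln(m0 / mf)."""
    return isp_s * G0 * math.log(m0_kg / (m0_kg - propellant_kg))

for isp in (250, 270):
    print(f"Isp {isp} s -> delta-v {delta_v(isp):.0f} m/s")
```

With these assumed specific impulses the result lands at roughly 90 to 100 meters per second, consistent with the range quoted above.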
Mounted on either side of the orbital compartment was a pair of solar panels with a total span of about 4 meters that provided power for the spacecraft. Attached to the ends of the solar panels were hemispherical radiators designed to provide thermal control for the spacecraft's interior systems. Water pumped through heat exchangers in the interior would circulate out through black- or white-painted sections of the radiators to heat or cool the spacecraft systems as needed, maintaining the interior between 20°C and 30°C. A two-meter-diameter high-gain directional antenna mounted on the anti-sun side of the orbital compartment was used for long-distance communications. Various low-gain antennae were also mounted on the exterior of the orbital compartment to provide an omnidirectional communications capability. Instrument sensors mounted on the exterior to measure magnetic fields, various types of radiation, and micrometeoroids rounded out the orbital compartment.
As before, the planetary compartment came in two varieties. The first was a one-meter-tall compartment with a film camera system, a set of ultraviolet and infrared instruments designed to study the target planet during a close flyby, and its own transmitter, which fed through the orbital compartment's high-gain antenna. The other version was a roughly spherical lander with a diameter of about 0.9 meters, designed to detach from the orbital compartment before encounter and touch down on the target planet. Both the orbital and planetary compartments were pressurized to provide a laboratory-like environment for the internal equipment, which simplified the design and testing of various systems and eased thermal control.
As with the 2MV, the 3MV design had four variants. The 3MV-1 would have a launch mass of 948 kilograms and carry a planetary compartment designed to land on Venus. Its sister craft, the 3MV-2, would have a launch mass of 935 kilograms and sport a planetary compartment designed to study Venus during a flyby. The nominal launch window to Venus extended from late March to early April 1964. The 3MV probes would follow fast Type I trajectories to arrive at Venus in the second half of July. Because of the success of NASA's Mariner 2 mission, in January 1963 the United States cancelled plans to launch a follow-on mission using Mariner R-3 (to have been assembled from spare flight hardware) during this window, so these Soviet Venera spacecraft would be flying without competition.
Because the launch window to Mars in November 1964 was so much more favorable than the one used by Mars 1 in November 1962, the Mars-bound 3MV spacecraft would be significantly more massive than their earlier 2MV counterparts. The 3MV-3, designed to land on Mars, would have a launch mass of 1,042 kilograms, while the 3MV-4 flyby craft would weigh in at 1,037 kilograms. These craft would carry a significantly more massive instrument payload as a result. The 3MV-3 and 3MV-4 would be four times more massive and much more capable than the American Mariner-Mars 1964 flyby mission that was also under development at this time.
Korolev understood that there was a need to flight test the new 3MV design in order to iron out the problems that would inevitably crop up. To this end, Korolev envisioned flying two or three dedicated 3MV engineering missions to be launched into interplanetary space starting in the summer of 1963. It was hoped that any problems uncovered during these test missions could be resolved in time for the 1964 planetary missions, thus increasing their chances for success. Since these test flights would not be directed towards Venus or Mars, after launch they would receive the generic name of "Zond," which means "probe" in Russian.
The first Zond version would be a stripped-down model of the Venus lander craft, designated 3MV-1A. Originally envisioned to have a launch mass of about 800 kilograms, this spacecraft would carry little scientific instrumentation and a lightweight 275-kilogram entry probe. The 8K78 rocket would be used to launch the 3MV-1A into a nearly circular solar orbit inclined about 5° to the ecliptic. About nine days after launch, a course correction maneuver using the KDU-414 propulsion system would be performed. In this solar orbit, the probe would recede to a maximum distance of 12 to 16 million kilometers before returning to the vicinity of the Earth five to six months after launch. About 10 to 15 days before reaching Earth, the 3MV-1A would make one more course correction to refine its final approach trajectory. Before reaching the Earth, the entry probe would detach from the orbital compartment and reenter the atmosphere at a speed in excess of 11.5 kilometers per second to simulate the conditions of an entry into the Venusian atmosphere. After reentry, the parachute system would be tested and, presumably, the lander would be recovered. In addition to giving Korolev's engineers a chance to test the 3MV-1 design on a long mission, it would be the first spacecraft ever to return to the Earth from interplanetary space, giving the Soviet Union yet another space first.
The second Zond variant would be a stripped down version of the Mars flyby spacecraft designated 3MV-4A. Originally envisioned to have a launch mass of 996 kilograms, this spacecraft would carry a planetary compartment equipped with a significantly updated miniaturized film-based imaging system and other scientific instruments. The 3MV-4A would use an 8K78 to send it on a simulated trajectory towards the orbit of Mars. At a distance of 40,000 to 200,000 kilometers, the 3MV-4A would turn its camera back towards the receding Earth and acquire a sequence of photographs that would be subsequently developed automatically on board. The spacecraft would then transmit its scanned photographs and other data gathered on the interplanetary environment out to distances as great as 200 to 300 million kilometers as part of a long distance communications test. The 3MV-4A mission, if successful, would provide the first images of the Earth taken from deep space and give the Soviets another long distance communication record in addition to a much needed engineering test of the 3MV-4 design.
On March 21, 1963, the Soviet government officially approved the 3MV program. It would consist of one or two 3MV-1A flights and a single 3MV-4A flight to be launched in 1963, as well as a total of six operational 3MV spacecraft to be launched to Venus and Mars in 1964. As had happened all too frequently with other projects, however, design and construction of the new 3MV took longer than expected. Still, by July of 1963 Korolev had set the launch schedule for the Zond tests. A 3MV-1A would launch sometime between September 1 and October 15 (with the optimum launch date being October 12) on an Earth-return mission. This launch date would allow the 3MV-1A to be visible high in the sky as viewed from Soviet tracking stations during most of its mission and would result in a return just as the launch window to Venus was opening. While this would be too late to make any major changes to the design of the 3MV-1 Venus lander, it would still allow any problems with the basic 3MV design to be uncovered early enough to make changes a few months before the Venus launches. The 3MV-4A Zond mission was scheduled for launch in March 1964, with sufficient time to make any needed changes to the Mars-bound 3MV design to be launched eight months later.
Continued delays in the preparation of the first 3MV-1A ended up pushing its launch date out by several weeks, but by early November 1963 it was finally ready. Less time would be available to correct any problems uncovered by this flight, but it was felt to be vitally important. On November 4, the Central Committee officially approved two TASS press releases for the upcoming mission. If the test craft was successfully placed on its interplanetary trajectory, it would be called "Zond 1". But if it got stranded in its short-lived parking orbit by another launch vehicle failure, it would receive a generic "Kosmos" designation to avoid the international treaty and public relations issues that had arisen as a result of the unannounced 2MV launch failures towards Venus and Mars a year earlier.
At 06:23:35 GMT on November 11, 1963, 8K78 serial number G103-18 lifted off from Site 1/5 at the Baikonur Cosmodrome in Soviet Kazakhstan carrying the 800-kilogram 3MV-1A No. 2 on an Earth-return mission. The first three stages of the launch vehicle successfully placed the Blok L escape stage and its payload into a 195 by 229 kilometer parking orbit with an inclination of 64.8°. But as had happened eight times before, the Blok L stage failed to function as intended. During its coast in parking orbit, all telemetry stopped at about 06:45:45 GMT as attitude control was lost. When the time came to make the burn to escape its parking orbit, the Blok L ended up firing its engine in the wrong direction, stranding the rocket and its payload in Earth orbit. The orbit of what was now called Kosmos 21 decayed three days later. This would be the first of many failed lunar and planetary missions to be hidden (rather poorly, I might add) by the Kosmos designation.
With the launch failure of the first Zond engineering test mission and the continued unsatisfactory condition of the 3MV spacecraft under final assembly and test, by the end of December 1963 a second 3MV-1A mission was officially approved for launch in January 1964 on the first improved 8K78M, with the 3MV-4A test flight to follow in the April-May time frame after the launch of the Venus missions. While a longer test flight would have been preferred, at least the new 8K78M could be tested and a few weeks of flight experience could be gained before the launch of the Venus missions two months later. Since the problems with Venera 1 (see "Venera 1: The First Venus Mission Attempt") and Mars 1 occurred during the first few days of their missions, it was hoped that any hidden problems with the 3MV design would become apparent early enough, and be minor enough, that the operational 3MV craft could be modified before launch. Eventually it was decided that no 3MV-2 flyby probes would be launched towards Venus during the 1964 window and that all resources would instead be concentrated on preparing just a pair of 3MV-1 landers.
Still more delays pushed the launch of the 3MV-1A test flight out further, towards the very beginning of the 1964 Venus window. Additional issues with the scheduling of E-6 lunar landing missions, also being developed at OKB-1 at the same time, complicated matters further. Officials finally decided that the 3MV-1A test flight would take place at the very beginning of the Venus launch window in late February 1964. The 8K78M had sufficient payload capability to launch the lightweight 3MV-1A on a longer Type II trajectory that would reach Venus sometime around the end of August (see "Trajectory Analysis of the 1964 Soviet Venus Missions"). While there was likely little expectation that this test mission would actually survive the six-plus months needed to reach Venus, it would still provide a vital test of the new 8K78M launch vehicle and a month's worth of in-flight engineering data that could help improve the chances of success for the pair of operational 3MV-1 landers. After the next E-6 lunar lander was launched during its narrow window around March 21, launch opportunities to Venus on the faster Type I trajectories, which would get the landers to Venus in the last half of July, became available over the course of the following couple of weeks.
At 05:45:40 GMT on February 19, 1964, the first of the improved 8K78M rockets, serial number T15000-19, lifted off carrying 3MV-1A No. 4A on the last chance at a test flight of the Venus probe design. But just as the Blok I third stage was supposed to ignite, failure occurred once again. Liquid oxygen leaking from a bad valve seal froze a kerosene fuel line, causing it to break. This resulted in an explosion and the loss of the launch vehicle with the debris falling to Earth 85 kilometers north of Barabinsk in Siberia.
With the loss of the last available 3MV-1A test craft, a rather drastic decision appears to have been made. Apparently desperate to get some test flight data to improve the chances of reaching Venus, officials decided to launch one of the pair of operational 3MV-1 spacecraft early, at the beginning of March, on a slower Type II trajectory to Venus that would provide three weeks of flight data before the opening of the faster Type I launch window later in the month (see "Trajectory Analysis of the 1964 Soviet Venus Missions"). While the longer, six-month trip time increased the chances that the 3MV-1 would fail before reaching Venus in early September, it was evidently felt to be less risky than launching a pair of untried spacecraft with absolutely no flight testing. But the launch of 8K78M serial number T15000-22, set for March 1, 1964, was ultimately scrubbed because of problems encountered during prelaunch integration of the 3MV-1 with the 8K78M. In order to resolve the problem, the launch was postponed to the opening of the Type I launch window at the end of the month. The 3MV-1 would now have to fly untested.
As the final preparations for the pair of 3MV-1 landers were being made, the focus shifted to the launch of the unmanned lunar lander, E-6 No. 6. The E-6 Luna program was coming out of a self-imposed year-long stand down while major problems with the E-6 uncovered during the first three attempts in early 1963 were fixed. It was hoped that the E-6 problems were resolved and that this flight would beat America's delayed Surveyor lunar lander by a year or more (see "Surveyor 1: America's First Lunar Landing"). At 08:15:35 GMT on March 21, 1964, 8K78M serial number T15000-20 lifted off carrying the new E-6. But a series of problems with the 8D715P engine on the Blok I third stage culminated in its premature shutdown 487 seconds into the flight. The rocket and its payload were destroyed during reentry (for more details on this and other early E-6 failures, see "The Mission of Luna 5").
With two successive failures of the new 8K78M, it was now the turn of the 950-kilogram 3MV-1 Venus landers. With its earlier problems now resolved, 3MV-1 No. 5 was launched at 03:24:42 GMT on March 27, 1964, on 8K78M serial number T15000-22. This time the first three stages of the new 8K78M worked as intended to place the Blok L escape stage and its Venus-bound payload into a 191 by 237 kilometer parking orbit with an inclination of 64.8°. But during the unpowered coast, an electrical fault caused attitude control to be lost and the Blok L never ignited its 11D33 engine. Stranded in Earth orbit, the rocket and its payload were designated Kosmos 27. Despite the failure, one of the many improvements made to the new Blok L paid off this time. Data from the various systems, gathered by the new telemetry system, were recorded and radioed back to ground controllers on the next orbit, allowing engineers to diagnose the failure like never before. This failure (and probably a couple of earlier ones) was traced to a fault in the design of the wiring of a key control system in the Blok L. Fortunately, the fix required only 20 minutes of a technician's time with a soldering iron before the next launch attempt.
With its wiring fault fixed just in time for the end of the Venus launch window, 8K78M serial number T15000-23 lifted off the morning of April 2, 1964, at 02:42:40 GMT carrying 3MV-1 No. 4 and its escape stage into a 187 by 213 kilometer parking orbit with an inclination of 64.8°. This time all four stages of the 8K78M worked, sending the spacecraft out of Earth orbit and into a 0.65 by 1.06 AU solar orbit with an inclination of 3.92° to the ecliptic and a period of 290 days. What was to be called "Venera 2" was now on its way to Venus and the first attempted landing on another planet. However, before the triumphant launch announcement could be made, ground controllers discovered a major problem during the first communication session with the receding probe. The pressurized orbital compartment was leaking, and all its gas would be lost within a week, severely compromising the ability of its equipment to operate. With bleak prospects for success, the Soviets announced the probe simply as "Zond 1", making no mention of its mission to Venus.
Because of the torque from the leak in the orbital compartment, Soviet engineers were able to quickly pin down the problem as a bad weld near the quartz window for the probe's star and Sun sensors. Although it would not help Zond 1, future 3MV craft would have their welds X-rayed as a new quality control check. While the probe was still fully functional, and while engineers formulated a contingency plan to keep the spacecraft operating as long as possible, Zond 1 made a course correction maneuver the day after launch at a distance of 563,780 kilometers to refine its trajectory. This was the first time the KDU-414-based propulsion system had been used on a Soviet planetary mission.
By April 9 the pressure in the orbital compartment had dropped to the point where it became unreadable by the onboard sensors. Since the Zond's main transmitter required a pressurized compartment to maintain thermal control and suppress arcing in its high-voltage circuitry, ground controllers routed communications through one of the redundant pair of transmitters in its 290-kilogram lander. Evidently Soviet engineers had learned the same hard lessons about redundancy that their American counterparts had during the failures of NASA's early Ranger missions (see "NASA's First Lunar Lander"). With some measure of command, control, and communications, Zond 1 proceeded to gather limited data from its instruments about the interplanetary environment. Its new ion thrusters were also tested but were found to operate erratically, possibly due to the loss of pressure in the orbital compartment where the thrusters' control systems were located.
Continued tracking of Zond 1 showed that it would still miss Venus by a large margin, so on May 14 a second course correction was attempted. At a range of 14 million kilometers from Earth, the KDU-414 engine ignited for a second time, changing the velocity of Zond 1 by 50 meters per second before the engine apparently cut off early. Still 20 meters per second shy of the required velocity change, Zond 1 would miss Venus by about 100,000 kilometers.
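The quoted miss distance is roughly consistent with a first-order estimate: an uncorrected velocity error displaces the arrival point by approximately the error multiplied by the time remaining until encounter. The sketch below uses the 66 days between the May 14 burn and the July 19 flyby; it ignores orbital mechanics entirely, so it is only an order-of-magnitude check, not how trajectory analysts would actually compute the miss.

```python
# First-order consistency check of the ~100,000 km miss distance:
# displacement ~ residual velocity error x time remaining to encounter.
residual_m_s = 20.0    # shortfall left after the May 14 burn (from the text)
days_to_flyby = 66     # May 14 to the July 19 flyby
miss_km = residual_m_s * days_to_flyby * 86400 / 1000.0
print(f"~{miss_km:,.0f} km")
```

This lands near 114,000 kilometers, in line with the "about 100,000 kilometers" figure above.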
As the cruise to Venus continued, more problems cropped up, including the apparent loss of one of the star sensors required for attitude control. Zond 1 was placed into a flat spin to stabilize its orientation and keep the solar panels pointed towards the Sun. Unfortunately, the high-gain antenna could no longer be used, so contact via the lander's communication systems could only be maintained until about the middle of June, a full month before the Venus encounter, after which the probe would be too far away. As luck would have it, though, Zond 1 would not last even that long. The last public announcement about Zond 1 came on May 19, and all communications were lost on May 24. The now silent Zond 1 flew by Venus on July 19.
With this latest failure, Korolev and his team at OKB-1 were left scrambling to diagnose and correct the problems uncovered with the 3MV and the 8K78M. Based on revised plans for the upcoming Mars launch opportunity, Korolev and his team intended to launch a pair of 3MV-3 landers and another pair of 3MV-4 flyby spacecraft towards Mars in November 1964. But in addition to the ongoing hardware issues, Soviet ambitions for this Mars launch window were now being threatened by the latest revelations about Mars gleaned from ground-based observations.
When design work on the first generation Soviet Mars landers started in 1960, the general consensus of the astronomical community was that Mars had an atmosphere dominated by nitrogen, much like the Earth's, with traces of carbon dioxide. Based on decades of photometric and polarimetric observations of how the Martian atmosphere scattered sunlight, it was estimated that the atmospheric surface pressure on Mars was about 85 millibars compared to Earth's 1,013 millibars. Of course, today we know that the Martian atmosphere is composed primarily of carbon dioxide with a mean surface pressure of only 6 millibars, less than a tenth as dense as had been generally assumed at the beginning of the Space Age. As a result, the original Soviet Mars lander design was simply inadequate for such a thin atmosphere and would crash during any landing attempt.
The low pressure of the Martian atmosphere started to become apparent at about the same time as the 3MV-3 landers were being developed and manufactured. The first indications of trouble came from Soviet astronomer Vassili I. Moroz at the Sternberg State Astronomical Institute in Moscow. A pioneer in infrared spectral studies of bodies in the solar system, Moroz analyzed the infrared spectra he had obtained of Mars during its opposition in early 1963 and found that the surface pressure was likely only about 24 millibars, and possibly much less. While Korolev and his engineers must have been aware of this work, which had been submitted for publication in September 1963, they might have dismissed it as an isolated anomalous result; similar results, however, began appearing in peer-reviewed scientific journals in the West starting on New Year's Day 1964. By the summer of 1964, NASA-sponsored studies for Mars landing missions were reflecting this new reality of a much thinner Martian atmosphere, with a surface pressure possibly as low as 10 millibars. Only later did scientists determine that the large amounts of fine dust in the Martian atmosphere had biased the earlier measurements, making the atmosphere scatter more light and appear denser than it actually is (for a full discussion, see "Zond 2: Old Mysteries Solved & New Questions Raised").
With the continuing delays in the development of the Mars-bound 3MV spacecraft and the realization that the 3MV-3 landers were doomed to fail, Soviet officials eventually scrapped the original plans to launch four spacecraft to Mars in November 1964. Since it was already in an advanced state of preparation, officials decided instead that only the single 950-kilogram 3MV-4A test craft would be launched towards Mars. But given the poor track record of the design during previous planetary missions, they recognized that the chances of this spacecraft actually surviving all the way to Mars were slim even if it followed a faster 195-day trajectory that would allow it to reach Mars a full month before NASA's Mariner spacecraft. As a result, Mars would be only a secondary objective for this mission, with the primary objective being the original engineering test flight of the Zond series. The 3MV-4A, which would receive a "Zond" designation, would be launched into a slow trajectory with a 249-day transit time. This would simulate a future Mars lander mission profile, with the spacecraft essentially performing a very long-duration engineering test flight.
Unlike NASA's Mariner-Mars 1964 spacecraft, this Zond mission would attempt no Mars photography, despite the fact that its more advanced camera system, equipped with 35 and 750 mm focal length lenses to expose up to 40 images on 25 mm format film, could return more than an order of magnitude more imaging data. Instead, the Zond would presumably photograph the Earth shortly after its departure, as planned for the original test flight. Aside from engineering data and exercising the new imaging system, the spacecraft would acquire scientific data on fields and particles during its interplanetary cruise. If the test craft managed to survive its long flight to the Red Planet in good condition, it would be redirected to impact Mars and deliver the set of commemorative pennants it was carrying. While Mariner 4 would be the first spacecraft to fly by Mars and return images of its surface, the Soviet probe could be the first to impact its surface, at least satisfying propaganda purposes.
The unsuccessful launch of NASA's first Mars-bound spacecraft, Mariner 3, on November 5, 1964 (see "The Launch of Mariner 3") was followed by a crash program to correct the fault with the rocket's new launch shroud that had caused the failure. With its new shroud barely ready in time, Mariner 4 successfully lifted off on November 28, 1964 with a scheduled encounter date of July 14, 1965 (see "Mariner 4 to Mars"). The Soviets' Mars-bound engineering test craft, 3MV-4A No. 2, lifted off two days later on November 30 from the Baikonur Cosmodrome at 13:12 GMT, with its 8K78M launch vehicle placing it into a 153 by 219 kilometer parking orbit. After a short coast, the Blok L escape stage came to life and successfully injected Zond 2 out of its initial low Earth orbit and into a 0.98 by 1.52 AU solar orbit with an inclination of 6.4° to the ecliptic and a period of 508 days. This much slower trajectory would not reach Mars until three weeks after Mariner 4.
As had happened during earlier Soviet planetary missions, problems were already apparent during the first communication session with Zond 2, when the spacecraft reported that only half of the expected power was being generated by its solar panels. As the problem was being diagnosed, controllers took measures to conserve power, including the cancellation of the Earth imaging session and the postponement of the first scheduled course correction maneuver. In the end, they determined that one of the two solar panels on Zond 2 had failed to deploy as intended. When the Blok L finished its burn, protective shrouds connected to the solar panels with lines were supposed to jettison, pulling the solar panels out of their stowed position. It seems that one of the two pull-cords had broken, resulting in the failure of one of the solar panels to deploy. After several engine firings to shake the spacecraft, the stuck panel finally deployed on December 15. But by this point it was already too late to perform the first planned midcourse correction.
This would prove to be only the beginning of Zond 2's problems. A failed timer resulted in the thermal control system not functioning properly, hampering the operation of onboard equipment. While the new plasma engines were test-fired successfully, communications with the probe apparently became increasingly erratic after its scheduled December 18 communication session. Some accounts of the mission suggest that a course correction was finally made around February 17, 1965 that further refined the path of Zond 2 towards Mars. At some point after this maneuver, and possibly as late as May 2, controllers finally lost contact with Zond 2, with an official public announcement being made by Soviet authorities on May 5, 1965. Three months later, on August 6, Zond 2 flew silently past Mars at a reported distance of 1,500 kilometers (although some later sources claim the flyby was at a much greater distance of 650,000 kilometers, suggesting that no course correction was ever made).
In addition to the Zond 2 engineering test flight towards Mars, the Soviet government had also approved more missions of the 3MV design. The latest plans called for a pair of landers and a pair of flyby spacecraft to be launched towards Venus during the November 1965 launch opportunity. Eventually these spacecraft were assigned the defunct 3MV-3 and 3MV-4 designations of the Mars spacecraft that were never completed and launched. In addition, the launch of one or two more 3MV-4 engineering flights was authorized in the first half of 1965 that would follow the earlier 3MV-4A mission profile to test the improvements made to the 3MV design in the wake of the Zond 2 failure.
As before, delays in the preparation of the improved 3MV spacecraft pushed the launch of the Zond test flight into the summer of 1965. But the delay also opened up a genuine opportunity for discovery as a result. As luck would have it, the Moon could be easily reached during this time of year via Zond's departure trajectory towards the orbit of Mars, and the lighting conditions would be ideal to observe the Moon's western hemisphere. It was proposed that instead of photographing the Earth on its way into solar orbit, the new Zond mission could be directed to photograph the Moon, including most of the lunar far side that had not been photographed earlier by the Soviet Luna 3 mission in 1959. This would provide a perfect opportunity for engineers to perform a complete end-to-end test of the 3MV systems during an actual planetary encounter and provide new data on the Moon in the process.
The instrument complement of this 3MV-4 flight included the advanced photo-television system capable of taking either photographs or ultraviolet spectra in the range of 250 to 350 nanometers. This system was housed inside of the planetary compartment behind portholes at the base that allowed the instruments to view the target. Mounted on the exterior of the compartment were ultraviolet and infrared spectrophotometers sensitive to bands of 190 to 270 nanometers and 3 to 4 microns, respectively. The spectrophotometers and ultraviolet spectrometer were originally designed to study planetary atmospheres and thus were of little use in a lunar mission, but the photo-television system was going to be the star of this mission.
The 6.5-kilogram photo-television system was basically a much-improved version of the one employed six years earlier by Luna 3 and of a later model flown on the unsuccessful Mars 1 mission. Images from a single 106.4-millimeter focal length f/8 lens were focused onto 25 mm photographic film. A total of 25 exposures of 1/30 or 1/100 of a second were made. Using the same film, the ultraviolet spectrometer would expose the eighth, ninth, and tenth frames, bringing the total number of exposures up to 28. After the film was exposed, it was automatically developed on board.
The dried negatives were then scanned and transmitted back to Earth in one of two formats. A quick-look format broke the photograph into 67 lines that could be transmitted back to Earth in 135 seconds via a high-power C-band transmitter using the spacecraft's two-meter high gain antenna. A more detailed scanning of the photographs was also possible. In this mode, each photograph was broken into 1,100 lines of 860 points each that were comparable in quality to Ranger's full-scan television images of the Moon (see "The Mission of Ranger 7") but far superior to the digital imaging system used by NASA's Mariner 4 mission to Mars. In this high-resolution mode, a single photograph could be transmitted over interplanetary distances in 34 minutes. Each photograph could be scanned multiple times to help increase the signal-to-noise ratio of the images reconstructed back on Earth.
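As a back-of-the-envelope check on these figures, the full-scan mode works out to under 500 picture points per second, and repeated scans improve signal-to-noise roughly as the square root of the number of scans averaged. A minimal sketch (the constants come from the text; the decibel figure assumes uncorrelated noise between scans):

```python
import math

# Figures quoted in the text for the high-resolution scanning mode
FULL_SCAN_LINES = 1_100
POINTS_PER_LINE = 860
FULL_SCAN_SECONDS = 34 * 60  # 34 minutes to transmit one photograph

points_total = FULL_SCAN_LINES * POINTS_PER_LINE      # points per photograph
points_per_second = points_total / FULL_SCAN_SECONDS  # effective scan rate

def snr_gain_db(n_scans: int) -> float:
    """SNR improvement from averaging n scans, assuming uncorrelated noise.

    Averaging n independent scans improves SNR by a factor of sqrt(n),
    i.e. 10 * log10(sqrt(n)) decibels.
    """
    return 10 * math.log10(math.sqrt(n_scans))

print(f"{points_total:,} points per photograph")          # 946,000 points
print(f"~{points_per_second:.0f} points per second")      # ~464 points/s
print(f"4 scans averaged: +{snr_gain_db(4):.1f} dB SNR")  # +3.0 dB
```

Scanning the same frame four times, for example, buys roughly 3 dB of signal-to-noise at the cost of more than two hours of transmission time per photograph.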
For this test flight, the 960-kilogram 3MV-4 number 3 was prepared for launch (some sources claim this was a 3MV-4A spacecraft, but the difference was probably irrelevant at this stage). At 14:38:00 GMT on July 18, 1965, what became Zond 3 was launched on an 8K78 Molniya rocket into a temporary 164 by 210 kilometer parking orbit with an inclination of 64.8°. After a short coast, the Blok L escape stage ignited, sending Zond 3 to the Moon and beyond.
Once on its way, engineers discovered that Zond 3 was experiencing no major system malfunctions, unlike all of its predecessors, whose problems were immediately apparent during the first communication session. Since the 3MV-4 spacecraft was so much lighter than the 1,500-kilogram E-6 lunar landers the Soviets had been launching towards the Moon at that time using the same rocket (unfortunately, with little success up to this point), Zond 3 reached the vicinity of the Moon in just half the time: a relatively short 33 hours.
At 01:24 GMT on July 20, at a distance of 11,570 kilometers, Zond 3 pointed its camera and other instruments towards the Moon and started taking one photographic exposure every 134 seconds. The initial images included not only the unmapped far side but also the near side so that newly discovered features could be tied into the already existing lunar mapping control net. This continued as the fast-moving probe reached its closest point to the Moon of 9,219 kilometers and then receded to a distance of 9,960 kilometers at the end of its photography session at 02:32 GMT, with the Moon having been viewed through an angle of about 60 degrees. After this 68-minute photography session, Zond 3 immediately developed its film as it headed into a 0.9 by 1.5 AU solar orbit that simulated a trajectory to Mars – simulated since Mars was not in position for a low-energy encounter and would not be for another year and a half.
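The quoted cadence and session length are mutually consistent: a 68-minute session at one exposure every 134 seconds allows roughly 30 frames, comfortably covering the 28 photographic and spectral exposures on the film (assuming, as a simplification, that all frames were taken at the same cadence). A quick sketch of the arithmetic:

```python
SESSION_SECONDS = 68 * 60  # photography session, 01:24 to 02:32 GMT
CADENCE_SECONDS = 134      # one exposure every 134 seconds
PHOTO_FRAMES = 25          # photographic exposures
SPECTRA_FRAMES = 3         # UV spectrometer frames 8, 9, and 10

exposure_slots = SESSION_SECONDS // CADENCE_SECONDS
frames_on_film = PHOTO_FRAMES + SPECTRA_FRAMES

print(f"{exposure_slots} exposure opportunities")  # 30
print(f"{frames_on_film} frames on the film")      # 28
assert frames_on_film <= exposure_slots
```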
On July 29, at a distance of 2.25 million kilometers, Zond 3 was out far enough for the sensors on its high gain antenna to lock onto the Earth and start transmitting the recorded images to waiting scientists. The images were spectacular and far superior to the ones returned six years earlier by Luna 3. Details as small as five kilometers across could be seen in the photographs, which showed little more than a cratered wasteland. These photographs confirmed the lack of maria on the Moon's far side compared to the more familiar near side, which is dominated by these dark and relatively flat basaltic flows.
The photographs also showed no signs of the purported Mare Parvum which some ground-based observers claimed to have seen near Mare Orientale during especially extreme librations of the Moon. Zond 3 did discover a new type of lunar feature that Soviet scientists proposed be called thalassoids. These were the battered, concave remnants of basins over 500 kilometers across and were thought to be the precursors of maria. For some reason these far side structures were never flooded with basaltic lava to form true maria (the term "thalassoids" fell out of official use in 1967 since these features were found to be indistinguishable from other large, unflooded impact basins). The spectral instruments onboard Zond 3 showed that the Moon reflected just 1% of the ultraviolet radiation hitting its barren surface. In contrast, the lunar surface reflected 80% to 90% of the incident infrared light, with a broad peak around 3.6 microns.
With these photographs in hand, the Soviets had mapped all but 5% of the Moon's surface. While NASA's Mariner 4 mission had just sent back the first close-up images of Mars, Zond 3 had managed to return by far the best images yet of the lunar far side. Even at this early stage of the mission, Zond 3 was a propaganda success after a long string of failed planetary probes and Luna missions.
Zond 3 continued to operate as it travelled farther from Earth and towards the general direction of the orbit of Mars. In addition to engineering information, Zond 3 sent back data from a complement of instruments designed to study the interplanetary environment during the long cruise. On September 16, at a distance of 12.5 million kilometers, Zond 3 used its propulsion system to change the probe's velocity by 50 meters per second to simulate a mid-course correction. The spacecraft successfully retransmitted its lunar images in mid-August, mid-September, and for the last time on October 23, when it was 31.5 million kilometers away from the Earth.
By the time of the launch of Venera 2 on November 12, 1965, the first of four spacecraft planned to be launched during this Venus window, Zond 3 had been operating for 117 days in solar orbit confirming that the 3MV design could survive the 107-day transit to Venus. Regular communication sessions with Zond 3 were maintained until March 3, 1966, at a range of 153.5 million kilometers. Although the spacecraft was not heard from afterwards, Zond 3 had managed to operate for 228 days – barely long enough to survive the typical flight time to Mars but vindicating the soundness of the improving 3MV design. Unfortunately, contact with Venera 2 and 3 (the only two of the four spacecraft to be successfully launched towards Venus the previous November) had been lost days before the spacecraft were to begin their encounter activities at Venus due to thermal control issues (see "Venera 2 & 3: Touching the Face of Venus").
Even before the final 3MV launches (and their eventual failures), responsibility for future Soviet probes to Venus and Mars had been transferred in April 1965 to NPO Lavochkin under Chief Designer Georgi Babakin – an organization known for its intensive testing and quality control of the hardware it built. Because of the limitations of the 3MV design and the new realization of how thin the atmosphere of Mars actually is, it was decided to scrap any further plans to send these probes to the Red Planet and instead design a new spacecraft that would use the much more capable Proton launch vehicle (see "The Largest Launch Vehicles Through History"). For the Venus landing missions, Babakin and his team would significantly upgrade the 3MV spacecraft to create the new 1V design to be launched in June 1967 (see "Venera 4: Probing the Atmosphere of Venus"). The hope was that these new spacecraft would fare better than their unsuccessful predecessors.
"You Can't Fail Unless You Try: The Soviet Venus & Mars Missions of 1962", Drew Ex Machina, November 1, 2017 [Post]
"Trajectory Analysis of Soviet 1964 Venus Missions", Drew Ex Machina, April 2, 2014 [Post]
"Zond 2: Old Mysteries Solved & New Questions Raised", Drew Ex Machina, July 17, 2014 [Post]
"Venera 2 & 3: Touching the Face of Venus", Drew Ex Machina, March 1, 2016 [Post]
Boris Chertok, Rockets and People Volume III: Hot Days of the Cold War (ed. Asif Siddiqi), SP-2009-4110, NASA History Division, 2009
Brian Harvey, Race Into Space: The Soviet Space Programme, Halsted Press, 1988
Brian Harvey, Russian Planetary Exploration: History, Development, Legacy and Prospects, Springer-Praxis, 2007
Bart Hendrickx, "Managing the News: Analyzing TASS Announcements on the Soviet Space Program (1957-1964)", Quest, Vol. 19, No. 3, pp. 44–58, 2012
Wesley T. Huntress and Mikhail Ya. Marov, Soviet Robots in the Solar System: Mission Technologies and Discoveries, Springer-Praxis, 2011
Nicholas L. Johnson, Handbook of Soviet Lunar and Planetary Exploration, Univelt, 1979
Yuri N. Lipsky, "Zond 3 Photographs of the Moon's Far Side", Sky & Telescope, Vol. 30, No. 6, pp. 338–341, December 1965
Timothy Varfolomeyev, "The Soviet Venus Programme", Spaceflight, Vol. 35, No. 2, pp. 42–43, February 1993
Timothy Varfolomeyev, "Soviet Rocketry that Conquered Space Part 5: The First Planetary Probe Attempts, 1960–1964", Spaceflight, Vol. 40, No. 3, pp. 85–88, March 1998
Timothy Varfolomeyev, "Soviet Rocketry that Conquered Space Part 6: The Improved Four-Stage Launch Vehicle, 1964–1972", Spaceflight, Vol. 40, No. 5, pp. 181–184, May 1998
Trillions of degrees. That was the temperature a fraction of a second after the Big Bang. So darn hot it took 380,000 years for it to cool down enough for individual protons and neutrons to come together to form the first atoms. Cool is a relative term. The temperature at the time simmered around 6,700° F (3,700° C), not exactly a spring afternoon. About three-quarters of those early atoms were hydrogen, the simplest element, and most of the remainder helium.
Hydrogen is the simplest atom with a single positively-charged proton for a nucleus orbited by a negatively-charged electron. Helium has two protons and two neutral particles called neutrons in its core orbited by two electrons. Atoms join together to form molecules. When lots of atoms join in a variety of ways, complex molecules result.
Our bodies are made of a mix of simple molecules like water (H2O) and complicated ones like hemoglobin, the molecule in our red blood cells that carries oxygen and hauls carbon dioxide "waste" back to our lungs. Hemoglobin looks like this: C2952H4664N812O832S8Fe4. That's right – 2,952 carbon atoms, 4,664 hydrogens, 812 nitrogens, 832 oxygens, 8 sulfurs and four irons. A big molecule for a big job.
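Counting atoms in a formula like hemoglobin's is a mechanical exercise, which makes it a nice candidate for a short script. A minimal sketch in Python (the regular expression handles one- and two-letter element symbols):

```python
import re

def parse_formula(formula: str) -> dict[str, int]:
    """Split a molecular formula like 'H2O' into {element: count}."""
    counts: dict[str, int] = {}
    # An element symbol is one uppercase letter plus an optional lowercase
    # letter (e.g. 'O', 'Fe'), followed by an optional count.
    for element, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] = counts.get(element, 0) + (int(num) if num else 1)
    return counts

hemoglobin = parse_formula("C2952H4664N812O832S8Fe4")
print(hemoglobin)  # {'C': 2952, 'H': 4664, 'N': 812, 'O': 832, 'S': 8, 'Fe': 4}
print(sum(hemoglobin.values()), "atoms in all")  # 9272 atoms in all
```

Over nine thousand atoms per molecule, compared with three for water.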
To get the ball rolling toward complex matter and life, individual atoms have to link up. Since the early universe was mostly hydrogen and helium, you won't be surprised to learn that hydrogen and helium came together to make the first molecules. It wasn't easy. More like an arranged marriage than freedom of choice. Helium doesn't like combining with anything. It holds on so tightly to its electrons that you have to apply a tremendous amount of energy to pry one away so another atom can hook in. But in the extreme heat of the early universe, it happened, and helium hydride, or HeH+, was born. Because of its fragility, however, it didn't last long – one of the reasons it's been nearly impossible to find.
Long before the molecule was detected in space, it was concocted in a lab in 1925. Now, after decades of searching, scientists have discovered it in space for the first time using NASA's Stratospheric Observatory for Infrared Astronomy, or SOFIA. The observatory is a specially outfitted aircraft that flies above much of Earth's atmosphere to make sensitive observations in infrared light. Air blocks much of the infrared, a type of light that we sense as heat.
SOFIA found modern helium hydride in the planetary nebula called NGC 7027 located 3,000 light-years away in the Northern Cross. As a sunlike star ages into a white dwarf, it expels its atmosphere as a beautiful, flower-form cloud called a planetary nebula. The discovery is important for the same reason the recent photograph of the black hole in the galaxy M87 is – we know that helium hydride can exist in space. Theory predicted it would and now we have proof it does. That's a big deal.
"This molecule was lurking out there, but we needed the right instruments making observations in the right position – and SOFIA was able to do that perfectly," said Harold Yorke, director of the SOFIA Science Center, in California's Silicon Valley.
Helium hydride turns out to be a crucial molecule. As the early universe continued to expand and cool, hydrogen atoms interacted with helium hydride to make molecular hydrogen (H2) – two hydrogens bonded together. This molecule is primarily responsible for the formation of the first generation of stars. Hydrogen molecules helped to cool the clouds of collapsing gases, so that gravity could draw the material into stars. As those early suns aged, they forged these simpler substances into more complex elements in their cores including carbon, the backbone of the hemoglobin molecule. For all you know, some of those first-generation carbon atoms might be swimming around in your very own blood.
Astronomers suspected that even if the original helium hydride might forever elude detection, the planetary nebula NGC 7027 would be a good place to look for current-day material. Ultraviolet light radiating from the exceedingly hot dwarf star (342,000° F / 190,000° C) strips electrons from hydrogen and helium around the star, creating the right conditions for helium hydride to form. In 2016, scientists boarded SOFIA and flew to 45,000 feet (13.7 km), high above the interfering layers of Earth's atmosphere. A recent upgrade to one of its instruments enabled them to tune into the frequency of helium hydride, similar to how you'd tune in an FM radio station. Bingo! They picked up the signal loud and clear.
With that, astronomers are that much more confident they're on the right track when it comes to figuring out the chemistry of the early universe.
For all the wealth of information optical satellite imagery contains, there will always remain a stubborn share of pixels that aren't super helpful for analysis. Clouds (which cover around two-thirds of the Earth on any given day), shadows cast by clouds, haze, and snow can preclude us from accurately seeing what's on the ground.
As Planet collects in excess of 200 million square kilometers of imagery every day, it's important that we help our customers sift out the best pixels for their applications from this massive dataset. This was the primary motivation behind our latest feature release: Usable Data Masks.
With Usable Data Masks, Planet sought to address two things: significantly enhance the experience around data discovery, and reliably increase the fraction of pixels that can be consumed by a broad range of customers and applications.
Previously, Planet only provided information about which pixels within a scene were unusable in a binary yes/no fashion. While this helped filter out bad pixels, it provided little insight into how useful the remaining pixels were.
The new Usable Data Masks leverage powerful machine learning image segmentation techniques to identify which pixels in an image are clear, cloudy, or contaminated by light haze, heavy haze, or snow. The resulting image mask layers, packaged as a GeoTIFF, help visualize which parts of the image contain these elements and which parts are clear.
In addition to the new asset, customers have six new metadata categories to search within the Planet API for images that meet their needs. This greatly enhances our customers' ability to search, filter, and download the optimal or "usable" pixels for their specific applications (hence the name Usable Data Masks). With the extra time saved, customers can spend more time on analysis because they aren't burdened with unnecessary imagery processing steps.
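To make the idea concrete, here is a minimal, hypothetical sketch of how an application might consume such a per-pixel mask. The class codes and array layout below are stand-ins invented for illustration; the real Usable Data Masks are multi-band GeoTIFFs whose band semantics are defined in Planet's API documentation:

```python
# Hypothetical class codes for illustration only; the real mask is a
# multi-band GeoTIFF whose band meanings are defined by Planet's API docs.
CLEAR, SNOW, SHADOW, LIGHT_HAZE, HEAVY_HAZE, CLOUD = range(6)

def usable_fraction(mask_rows, usable_classes=frozenset({CLEAR, LIGHT_HAZE})):
    """Fraction of pixels whose class is acceptable for a given application.

    Different applications pass different `usable_classes`: a crop-health
    model might tolerate light haze, while a change-detection pipeline
    might insist on strictly clear pixels.
    """
    total = usable = 0
    for row in mask_rows:
        for cls in row:
            total += 1
            usable += cls in usable_classes
    return usable / total if total else 0.0

# A toy 2x4 "scene": half its pixels are clear or only lightly hazy
scene = [
    [CLEAR, CLEAR, CLOUD, CLOUD],
    [CLEAR, LIGHT_HAZE, SHADOW, HEAVY_HAZE],
]
print(f"{usable_fraction(scene):.0%} usable")  # 50% usable
```

The same per-pixel summary, rolled up to scene level, is what lets an API search filter on metadata like a minimum clear-pixel percentage before any imagery is downloaded.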
Today, Usable Data Masks are available in the Planet API for PlanetScope imagery across the entire globe back to August 2018. Usable Data Masks are also available for select agricultural regions back to January 2018.
To learn more, check out the Developer Resource Center and reach out to the Customer Success team to get access to Usable Data Masks.
The brain, supposedly, cannot long survive without blood. Within seconds, oxygen supplies deplete, electrical activity fades, and unconsciousness sets in. If blood flow is not restored, within minutes, neurons start to die in a rapid, irreversible, and ultimately fatal wave.
But maybe not? According to a team of scientists led by Nenad Sestan at Yale School of Medicine, this process might play out over a much longer time frame, and perhaps isn't as inevitable or irreparable as commonly believed. Sestan and his colleagues showed this in dramatic fashion—by preserving and restoring signs of activity in the isolated brains of pigs that had been decapitated four hours earlier.
The team sourced 32 pig brains from a slaughterhouse, placed them in spherical chambers, and infused them with nutrients and protective chemicals, using pumps that mimicked the beats of a heart. This system, dubbed BrainEx, preserved the overall architecture of the brains, preventing them from degrading. It restored flow in their blood vessels, which once again became sensitive to dilating drugs. It stopped many neurons and other cells from dying, and reinstated their ability to consume sugar and oxygen. Some of these rescued neurons even started to fire. "Everything was surprising," says Zvonimir Vrselja, who performed most of the experiments along with Stefano Daniele.
There have long been signs that oxygen deprivation doesn't necessarily kill neurons as quickly as is often assumed. Still, Jimo Borjigin of the University of Michigan says that when she started studying brain activity in dying rats, "my colleagues told me that as soon as oxygen isn't there, every cell dies within minutes." Sestan's team "showed that cells are still intact not just a few minutes later, but a few hours later. This kind of study is long overdue."
Disembodied brains in jars are a familiar and disquieting science-fiction staple, but in those stories, the brains are alive, conscious, and self-aware. Those in Sestan's experiments were zero for three. Though individual neurons could fire, there were no signs of the coordinated, brainwide electrical activity that indicates perception, sentience, consciousness, or even life. The team had anesthetics on standby in case any such flickers materialized—and none did. "The pigs were brain-dead when their brains came in the door, and by the end of the experiment, they were still brain-dead," says Stephen Latham, a Yale University ethicist who advised the team.
For that reason, "I don't see anything in this report that should undermine confidence in brain death as a criterion of death," says Winston Chiong, a neurologist at the University of California at San Francisco. The matter of when to declare someone dead has become more controversial since doctors began relying more heavily on neurological signs, starting around 1968, when the criteria for "brain death" were defined. But that diagnosis typically hinges on the loss of brainwide activity—a line that, at least for now, is still final and irreversible. After MIT Technology Review broke the news of Sestan's work a year ago, he started receiving emails from people asking whether he could restore brain function to their loved ones. He very much cannot. BrainEx isn't a resurrection chamber.
"It's not going to result in human brain transplants," adds Karen Rommelfanger, who directs Emory University's neuroethics program. "And I don't think this means that the singularity is coming, or that radical life extension is more possible than before."
So why do the study? "There's potential for using this method to develop innovative treatments for patients with strokes or other types of brain injuries, and there's a real need for those kinds of treatments," says L. Syd M Johnson, a neuroethicist at Michigan Technological University. The BrainEx method might not be able to fully revive hours-dead brains, but Yama Akbari, a critical-care neurologist at the University of California at Irvine, wonders whether it would be more successful if applied minutes after death. Alternatively, it could help to keep oxygen-starved brains alive and intact while patients wait to be treated. "It's an important landmark study," Akbari says.
Such applications are a long way off, and even if they never materialize, "this is already an extraordinary breakthrough," says Nita Farahany, a bioethicist at Duke University. Although neuroscientists can study lab-grown neurons or peer at thin slices of brain tissue, these capture nothing of the three-dimensional intricacy that makes the brain, the brain. By restoring some activity to postmortem pig brains, Sestan's team has created a much better proxy for the real thing. The irony, of course, is that "the better the proxy, the sharper the ethical dilemmas," Farahany says.
Johnson adds that no animals died for the sake of the study: The team used brains from pigs that had been killed for food. "Thousands of sentient animals have been killed in studies searching for neuroprotective treatments that have not borne fruit," she says. "Meanwhile, millions of animals are killed for food every year, and that's a potentially rich source of experimental brains that would involve no additional harm."
The study still needs to be replicated by other independent teams. And before anyone takes the technique further, or even contemplates the possibility of human trials, there are several ethical issues to consider. For example, is the team really sure that the partly revived brains have no consciousness? Latham, the Yale ethicist, feels confident. Even people under anesthesia show signs of coordinated, brainwide electrical activity, he says, so the absence of such signals strongly suggests that "we don't even have the possibility of consciousness showing up."
But consciousness is still hard to define, much less measure. And no one has ever had to measure it in a brain that lacks a body. How would you assess awareness, pain, or suffering "in a brain that has restored circulation and neural function, but that's disconnected from external sensation?" asks Steven Hyman, a neuroscientist at the Broad Institute of Harvard and MIT. "This is a very hard scientific problem and policy issue." As Johnson says, "I think it's very unlikely that consciousness or sentience could be restored in a several-hours-dead brain, but I'm also pretty sure that if it was, we wouldn't know that it was."
It's also unclear why the pig brains never regained coordinated activity. Is it because team members waited for four hours? Is it because they only treated the brains for six hours? Was it something about the way the pigs were killed? Or is it because they added chemicals that dampen neural activity to the fluid that they pumped through the brains? (They did this because excessive firing helps to kill neurons in oxygen-starved brains.) And if that's the case, could isolated brains gain consciousness if the blockers were removed?
Possibly, and that would certainly blur the line between living and dead. But that experiment is emphatically not in the cards. The team's next and only step is to try BrainEx for longer periods of time. If that leads to signs of coordinated activity, "we'd have to close down the research for a while," Latham says, "because there's no institutional body for us to consult. We'd need to create one." Current regulations on animal research exclude individuals that either were raised for food or have died. Nothing covers the gray area posed by an isolated brain with signs of cellular activity and that may or may not be conscious.
This illustrates a problem that I wrote about last year: Advancements in neuroscience—from preserving postmortem tissue to growing blobs of brain tissue in a dish—are outpacing the ethical frameworks that help us think about such research. Sestan's team "recognized that they were up against a blurred line, and did everything they could to seek guidance—more than many researchers would have," Farahany says. "But the truth is that there wasn't guidance."
In a commentary that accompanies the new study, she and others suggest several immediate guidelines. Don't remove the neural-activity blockers until we know what they do. Don't do similar studies without anesthetics. Prioritize research into ways of detecting neural signals that might indicate sentience or consciousness. Be transparent. With those principles in place, an organization like the National Institutes of Health or the National Academies of Sciences, Engineering, and Medicine should convene groups of scientists and citizens to discuss the ethical boundaries of this research and draw up clear guidelines.
"First we have to figure out how to do this work ethically in animals," Farahany says. If one eventually could revive a dead brain to the point of consciousness, "what comes with that, and what doesn't? Are memories intact? Are self-identities intact? How would we answer those questions if you can't ask an animal?"
And what might change if researchers move from isolated brains to brains that are still inside the skulls of their owners? Or to human trials? Could that increase the already big shortage of transplantable organs, if the point at which medical interventions are futile becomes blurry? These are all questions for the distant future, but it's worth having answers before the future becomes the present.
I'm not a huge fan of stories about stories, or those that explore the ins and outs of reporting a breach. But occasionally I feel obligated to publish such accounts when companies respond to a breach report in such a way that it's crystal clear they wouldn't know what to do with a data breach if it bit them in the nose, let alone festered unmolested in some dark corner of their operations.
And yet, here I am again writing the second story this week about a possibly serious security breach at an Indian company that provides IT support and outsourcing for a ridiculous number of major U.S. corporations (spoiler alert: the second half of this story actually contains quite a bit of news about the breach investigation).
On Monday, KrebsOnSecurity broke the news that multiple sources were reporting a cybersecurity breach at Wipro, the third-largest IT services provider in India and a major trusted vendor of IT outsourcing for U.S. companies. The story cited reports from multiple anonymous sources who said Wipro's trusted networks and systems were being used to launch cyberattacks against the company's customers.
Wipro asked me to give them several days to investigate the request and formulate a public comment. Three days after I reached out, the quote I ultimately got from them didn't acknowledge any of the concerns raised by my sources. Nor did the statement even acknowledge a security incident.
Six hours after my story ran saying Wipro was in the throes of responding to a breach, the company was quoted in an Indian daily newspaper acknowledging a phishing incident. The company's statement claimed its sophisticated systems detected the breach internally and identified the affected employees, and that it had hired an outside digital forensics firm to investigate further.
Less than 24 hours after my story ran, Wipro executives were asked on a quarterly investor conference call to respond to my reporting. Wipro Chief Operating Officer Bhanu Ballapuram told investors that many of the details in my story were in error, and implied that the breach was limited to a few employees who got phished. The matter was characterized as handled, and other journalists on the call moved on to different topics.
At this point, I added a question to the queue on the earnings conference call and was afforded the opportunity to ask Wipro's executives what portion(s) of my story was inaccurate. A Wipro executive then proceeded to read bits of a written statement about their response to the incident, and the company's chief operating officer agreed to have a one-on-one call with KrebsOnSecurity to address the stated grievances about my story. Security reporter Graham Cluley was kind enough to record that bit of the call and post it on Twitter.
In the follow-up call with Wipro, BallapuramÂ took issue with my characterization that the breach had lasted âmonths,â saying it had only been a matter of weeks since employees at the company had been successfully phished by the attackers. I then asked when the company believed the phishing attacks began, and Ballapuram said he could not confirm the approximate start date of the attacks beyond âweeks.â
Ballapuram also claimed that his corporation was hit by a "zero-day" attack. Actual zero-day vulnerabilities are rare and dangerous weaknesses in software and/or hardware that are unknown even to the maker of the product in question until they are discovered and exploited by attackers for private gain.
Because zero-day flaws usually refer to software that is widely in use, it's generally considered good form for anyone who experiences such an attack to share any available details with the rest of the world about how the attack appears to work, in much the same way you might hope a sick patient suffering from some unknown, highly infectious disease would nonetheless help doctors diagnose how the infection could have been caught and spread.
Wipro has so far ignored specific questions about the supposed zero-day, other than to say "based on our interim investigation, we have shared the relevant information of the zero-day with our AV [antivirus] provider and they have released the necessary signatures for us."
My guess is that what Wipro means by "zero-day" is a malicious email attachment that went undetected by all commercial antivirus tools before it infected Wipro employee systems with malware.
Ballapuram added that Wipro has gathered and disseminated to affected clients a set of "indicators of compromise," telltale clues about the tactics, tools and procedures used by the bad guys that might signify an attempted or successful intrusion.
Hours after that call with Ballapuram, I heard from a major U.S. company that is partnering with Wipro (at least for now). The source said his employer opted to sever all online access to Wipro employees within days of discovering that these Wipro accounts were being used to target his company's operations.
The source said the indicators of compromise that Wipro shared with its customers came from a Wipro customer who was targeted by the attackers, but that Wipro was sending those indicators to customers as if they were something Wipro's security team had put together on its own.
So let's recap Wipro's public response so far:
-Ignore a reporter's questions for days, then pick nits in his story during a public investor conference call.
-Question the stated timing of the breach, but refuse to provide an alternative timeline.
-Downplay the severity of the incident and characterize it as handled, despite having only just hired an outside forensics firm.
-Say the intruders deployed a "zero-day attack," then refuse to discuss details of said zero-day.
-Claim the IoCs being shared with affected clients were discovered in-house when they weren't.
The criminals responsible for breaching Wipro appear to be after anything they can turn into cash fairly quickly. A source I spoke with at a large retailer and Wipro customer said the crooks who broke into Wipro used their access to perpetrate gift card fraud at the retailer's stores.
I suppose that's something of a silver lining for Wipro at least, if not also its customers: An intruder that was more focused on extracting intellectual property or other more strategic assets from Wipro's customers probably could have gone undetected for a much longer period.
A source close to the investigation who asked not to be identified because he was not authorized to speak to the news media said the company hired by Wipro to investigate the breach dated the first phishing attacks back to March 11, when a single employee was phished.
The source said a subsequent phishing campaign between March 16 and 19 netted 22 additional Wipro employees, and that the vendor investigating the incident has so far discovered more than 100 Wipro endpoints seeded with ScreenConnect, a legitimate remote access tool sold by Connectwise.com. Investigators believe the intruders were using the ScreenConnect software on the hacked Wipro systems to connect remotely to Wipro client systems, which were then leveraged to gain further access into Wipro customer networks.
Additionally, investigators found at least one of the compromised endpoints was attacked with Mimikatz, an open source tool that can dump passwords stored in the temporary memory cache of a Microsoft Windows device.
The source also said the vendor is still discovering newly hacked systems, suggesting that Wipro's systems are still compromised, and that additional hacked endpoints may still be undiscovered within Wipro.
Wipro has not yet responded to follow-up requests for comment.
I'm sure there are smart, well-meaning and capable people who care about security and happen to work at Wipro, but I'm not convinced any of those individuals are employed in leadership roles at the company. Perhaps Wipro's actions in the wake of this incident merely reflect the reality that India currently has no laws requiring data owners or processors to notify individuals in the event of a breach.
Overall, I'm willing to chalk this entire episode up to a complete lack of training in how to deal with the news media, but if I were a customer of Wipro I'd be more than a little concerned about the tone-deaf nature of the company's response thus far.
As one follower on Twitter remarked, "openness and transparency speaks of integrity and a willingness to learn from mistakes. Doing the exact opposite smacks of something else entirely."
In the interests of openness, here are some indicators of compromise that Wipro customers are distributing about this incident (I had to get these from one of Wipro's partners, as the company declined to share the IoCs directly with KrebsOnSecurity).
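For customers who receive such a list, checking endpoints against file-hash indicators can be scripted. Here is a minimal sketch in Python; the hash below is purely illustrative (it is the well-known SHA-256 of an empty file), not one of the actual Wipro IoCs:

```python
import hashlib
from pathlib import Path

# Illustrative placeholder only -- NOT an actual Wipro IoC.
# (This happens to be the SHA-256 digest of an empty file.)
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Compute the hex SHA-256 digest of a file, streaming in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_for_iocs(root: Path) -> list[Path]:
    """Return all files under root whose hashes appear in the IoC set."""
    return [p for p in sorted(root.rglob("*"))
            if p.is_file() and sha256_of(p) in KNOWN_BAD_SHA256]
```

A real incident response would also match network indicators (domains, IP addresses) and hunt for artifacts such as unexpected ScreenConnect installs, not just file hashes.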
I managed to find an ancient Wyse-185 terminal at my workplace today, left in the corner of the server room. For entertainment purposes only, I booted DragonFly in VirtualBox and attached the physical terminal to the physical serial port on my Windows laptop docking station, mapped through to that virtual machine.
I have already discovered that character output will often pause until the keyboard is used, which may be a settings issue. Mash the keyboard enough and VirtualBox dies. I'd use different emulation, but Hyper-V doesn't support serial and I haven't figured out QEMU yet.
It's entertaining, though I am not sure what I will do with it, other than maybe run GRDC once I figure out the reason for the output pausing.
In the desert hills of China's Gansu province, a company called C-Space has just opened "Mars Base 1," a simulated Martian base of operations for future astronauts. Plans for the base, currently an educational facility, include expansion: it will become more of a tourist destination soon, with a space-themed hotel and restaurant. Photographers were on hand as some of the first student groups arrived to tour this vision of Mars in China's Gobi Desert.
Do we have a second interstellar visitor, following on the heels of the controversial 'Oumuamua? If so, the new object is of a much different nature, as was its detection. In 2014, a meteor north of Manus Island, off the coast of Papua New Guinea, produced a powerful blast that, upon analysis, implied a roughly 0.45-meter object massing about 500 kg. Events like this, not uncommon in our skies, are cataloged by the Center for Near Earth Object Studies (CNEOS); this one shows up as being detected at 2014-01-08 17:05:34 UTC.
Image: This gorgeous wide-angle photo from the 1997 Perseid shower captures a 20-degree-long fireball meteor and another, fainter meteor trail in a rich area of the northern summer Milky Way. Showers like these are predictable, but could some solitary fireballs mark the end of a meteor with an interstellar origin? Credit & Copyright: Rick Scott & Joe Orman.
Now the CNEOS catalog, which covers the last three decades, is useful indeed, for it takes advantage of detectors maintained by the U.S. government to analyze the sound and light of the passage of objects through the atmosphere, producing information on velocity and position at the time of impact. Harvard's Avi Loeb, a familiar face in the media thanks to the 'Oumuamua discussion, worked with undergraduate student Amir Siraj, whom he set to calculating. What could we learn about the prior trajectory of meteors in the catalog, homing in on the fastest?
In a paper submitted to Astrophysical Journal Letters, Loeb and Siraj note the latter's identification of the 2014 Manus Island meteor as interstellar in origin. The paper finds no substantial gravitational interactions between the meteor and any planet other than Earth. Indeed, based on the CNEOS-reported impact speed of 44.8 km/s, Loeb and Siraj calculate a speed of 43.8 km/s outside the Solar System. For the object to be bound, the observed speed at impact would have to be off by more than 45%.
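The unbound-orbit claim can be sanity-checked with the vis-viva relation: an object at Earth's distance from the Sun is gravitationally unbound if its heliocentric speed exceeds the local solar escape speed. A simplified Python sketch (it ignores the corrections for Earth's own motion and gravity that the paper performs):

```python
import math

GM_SUN = 1.32712440018e20  # m^3/s^2, solar gravitational parameter
AU = 1.495978707e11        # m, astronomical unit

def solar_escape_speed_kms(r_m: float) -> float:
    """Escape speed from the Sun at heliocentric distance r, in km/s."""
    return math.sqrt(2.0 * GM_SUN / r_m) / 1000.0

v_escape = solar_escape_speed_kms(AU)  # ~42.1 km/s at 1 AU
v_meteor = 44.8                        # km/s, CNEOS-reported impact speed
is_unbound = v_meteor > v_escape       # faster than escape: hyperbolic orbit
```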
This meteor, then, was on an unbound hyperbolic orbit. We can go on from here to note the object's relation to another useful metric. For measured relative to the Local Standard of Rest, this meteor entered the Solar System with a speed of 60 kilometers per second.
The Local Standard of Rest (LSR) is produced by averaging the motion of all stars in the Sun's neighborhood. Siraj and Loeb speculate that this velocity could indicate ejection from a planetary system, specifically from the inner regions where orbital speeds are high. The object's speed would imply a position inside the orbit of Mercury were it to come from a star like our own, but a red dwarf like Proxima Centauri would have an ejection speed from its habitable zone in this very regime. Recall that the habitable zone around Proxima Centauri is 20 times closer to the star than the HZ in our own system. So here's an interesting thought: "Since dwarf stars are most common, the detection of this meteor offers new prospects for 'interstellar panspermia,' namely the transfer of life between planets that reside in the habitable zones of different stars."
What I'm quoting from above is an as yet unpublished summation Loeb has recently written of the paper's findings, one that goes on to speculate about its implications. Panspermia would require a larger object because it would have to survive the fiery passage through the atmosphere, but the notion that objects could be passed from star to star in this way is interesting (and note that Loeb is not identifying a Proxima Centauri origin for this meteor, but rather pointing to possible scenarios between stars). The point is that dwarf stars are the most common in the universe, and the detection of an interstellar meteor could point to what is perhaps a common form of transfer between stars.
Beyond that, consider the possibilities in studying interstellar materials when we may find them entering our own atmosphere. Says Loeb:
Using the Earth's atmosphere as a detector for interstellar objects offers new prospects for inferring the composition of the gases they leave behind as they burn up in the atmosphere. In the future, astronomers may establish an alert system that triggers follow-up spectroscopic observations of an impact by a meteor of possible interstellar origin. Alert systems already exist for gravitational wave sources, gamma-ray bursts, or fast radio bursts at the edge of the Universe. Even though interstellar meteors reflect the very local Universe, they constitute a "message in a bottle" with fascinating new information about nurseries which may be very different from the Solar System. Some of them might even represent defunct technological equipment from alien civilizations, which drifted towards Earth by chance, just like a plastic bottle swept ashore on the background of natural seashells.
Thus spectroscopy of gaseous debris burning up in the Earth's atmosphere could offer us a way to make interstellar investigations of the kind we've been assuming would be decades (at least) off, assuming we can make a timely identification of likely targets.
The paper is Siraj & Loeb, "Discovery of a Meteor of Interstellar Origin," submitted to Astrophysical Journal Letters (preprint).
Presidential candidate John Delaney has announced a plan to create a Department of Cybersecurity.
I have long been in favor of a new federal agency to deal with Internet -- and especially Internet of Things -- security. The devil is in the details, of course, and it's really easy to get this wrong. In Click Here to Kill Everybody, I outline a strawman proposal; I call it the "National Cyber Office" and model it on the Office of the Director of National Intelligence. But regardless of what you think of this idea, I'm glad that at least someone is talking about it.
EDITED TO ADD: Yes, this post is perilously close to presidential politics. Any comment that opines on the qualifications of this, or any other, presidential candidate will be deleted.
Fluent is a family of localization specifications, implementations and good practices developed by Mozilla. It is currently used in Firefox. With Fluent, translators can create expressive translations that sound great in their language. Today we're announcing version 1.0 of the Fluent file format specification. We're inviting translation tool authors to try it out and provide feedback.
With almost 100 supported languages, Firefox faces many localization challenges that are difficult to overcome with traditional localization solutions. Software localization has been dominated by an outdated paradigm: translations that map one-to-one to the source language. The grammar of the source language, which at Mozilla is English, imposes limits on the expressiveness of the translation.
Consider the following message which appears in Firefox when the user tries to close a window with more than one tab.
The message is only displayed when the tab count is 2 or more. In English, the word tab will always appear as the plural tabs. An English-speaking developer may be content with this message; it sounds great for all possible values of the tab count.
Many translators, however, will quickly point out that the word tab will take different forms depending on the exact value of the count.
In traditional localization solutions, the onus of fixing this message is on developers. They need to account for the fact that other languages distinguish between more than one plural form, even if English doesn't. As the number of languages supported in the application grows, this problem scales up quickly, and not well.
There are many grammatical and stylistic variations that don't map one-to-one between languages. Supporting all of them using traditional localization solutions isn't straightforward. Some language features require trade-offs in order to support them, or aren't possible at all.
Fluent turns the localization landscape on its head. Rather than require developers to predict all possible permutations of complexity in all supported languages, Fluent keeps the source language as simple as it can be.
We make it possible to cater to the grammar and style of other languages, independently of the source language. All of this happens in isolation; the fact that one language benefits from more advanced logic doesn't require any other localization to apply it. Each localization is in control of how complex the translation becomes.
Consider the Czech translation of the "tab close" message discussed above. The word panel (tab) must take one of two plural forms: panely for counts of 2, 3, and 4, and panelů for all other numbers.
Fluent empowers translators to create grammatically correct translations and leverage the expressive power of their language. With Fluent, the Czech translation can now benefit from correct plural forms for all possible values of the tab count.
At the same time, no changes are required to the source code or the source copy. In fact, the logic added by the Czech translator to the Czech translation doesn't affect any other language. The same message in French is a simple sentence, similar to the English one.
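The selection logic a Czech translator writes into such a message can be illustrated outside Fluent as well. A small Python sketch of the CLDR plural rules Fluent consults for Czech (this mimics the behavior; it is not the Fluent runtime itself):

```python
def czech_plural_category(count: int) -> str:
    """CLDR plural category for Czech integer counts: one, few, or other."""
    if count == 1:
        return "one"
    if 2 <= count <= 4:
        return "few"
    return "other"

# Forms of "panel" (tab) for counts of 2 or more, as described above.
TAB_FORMS = {"few": "panely", "other": "panelů"}

def czech_tabs_word(count: int) -> str:
    """Czech word for 'tabs' in the close-window warning (count >= 2)."""
    return TAB_FORMS[czech_plural_category(count)]
```

In Fluent itself, the translator expresses exactly this branching inside the translation file, so the developer's source message never has to change.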
The concept of asymmetric localization is the key innovation of Fluent, built upon 20 years of Mozilla's history of successfully shipping localized software. Many key ideas in Fluent have also been inspired by XLIFF and ICU's MessageFormat.
At first glance, Fluent looks similar to other localization solutions that allow translations to use plurals and grammatical genders. What sets Fluent apart is the holistic approach to localization. Fluent takes these ideas further by defining the syntax for the entire text file in which multiple translations can be stored, and by allowing messages to reference other messages.
A Fluent file may consist of many messages, each translated into the translator's language. Messages can refer to other messages in the same file, or even to messages from other files. At runtime, Fluent combines files into bundles, and references are resolved in the scope of the current bundle.
Referencing messages is a powerful tool for ensuring consistency. Once defined, a translation can be reused in other translations. Fluent even has a special kind of message, called a term, which is best suited for reuse. Term identifiers always start with a dash.
Once defined, the -sync-brand-name term can be referenced from other messages, and it will always resolve to the same value. Terms help enforce style guidelines; they can also be swapped in and out to modify the branding in unofficial builds and on beta release channels.
Using terms verbatim in the middle of a sentence may cause trouble for inflected languages or for languages with different capitalization rules than English. Terms can define multiple facets of their value, suitable for use in different contexts. Consider the following definition of the -sync-brand-name term in Italian.
Thanks to the asymmetric nature of Fluent, the Italian translator is free to define two facets of the brand name. The default one (uppercase) is suitable for standalone appearances as well as for use at the beginning of sentences. The lowercase version can be explicitly requested by passing the capitalization parameter, when the brand name is used inside a larger sentence.
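The variant-selection behavior can be modeled in a few lines of Python (a toy model, not the Fluent API; the Italian strings are illustrative):

```python
# Toy model of a Fluent term with facets: the default (uppercase) facet
# is used standalone; a lowercase facet can be requested mid-sentence.
SYNC_BRAND_NAME_IT = {
    "uppercase": "Account Firefox",  # illustrative default facet
    "lowercase": "account Firefox",  # illustrative mid-sentence facet
}

def resolve_term(variants: dict[str, str],
                 capitalization: str = "uppercase") -> str:
    """Pick a facet of the term, falling back to the default."""
    return variants.get(capitalization, variants["uppercase"])
```

In Fluent, the same fallback-to-default behavior is what keeps simple references simple: a message that doesn't pass a parameter always gets the default facet.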
Defining multiple term variants is a versatile technique which allows the localization to cater to the grammatical needs of many languages. For example, a Polish translation can use declensions to construct a grammatically correct sentence.
Fluent makes it possible to express linguistic complexities when necessary. At the same time, simple translations remain simple. Fluent doesn't impose complexity unless it's required to create a correct translation.
You've already seen a taste of Fluent Syntax in the examples above. It has been designed with non-technical people in mind, and to make the task of reviewing and editing translations easy and error-proof. Error recovery is a strong focus: it's impossible for a single broken translation to break the entire file, or even the translations adjacent to it. Comments may be used to communicate contextual information about the purpose of a message or a group of messages. Translations can span multiple lines, which helps when working with longer text or markup.
Fluent files can be opened and edited in any text editor, lowering the barrier to entry for developers and localizers alike. The file format is also well supported by Pontoon, Mozilla's open-source translation management system.
You can learn more about the syntax by reading the Fluent Syntax Guide. The formal definition can be found in the Fluent Syntax specification. And if you just want to quickly see it in action, try the Fluent Playground, an online editor with shareable Fluent snippets.
Firefox has been the main driver behind the development of Fluent so far. Today, there are over 3000 Fluent messages in Firefox. The migration from legacy localization formats started early last year and is now in full swing. Fluent has proven to be a stable and flexible solution for building complex interfaces, such as the UI of Firefox Preferences. It is also used in a number of Mozilla websites, such as Firefox Send and Common Voice.
We think Fluent is a great choice for applications that value simplicity and a lean runtime, and at the same time require that elements of the interface depend on multiple variables. In particular, Fluent can help create natural-sounding translations in size-constrained UIs of mobile apps; in information-rich layouts of social media platforms; and in games, to communicate gameplay statistics and mechanics to the player.
We'd love to hear from projects and localization vendors outside of Mozilla. Because we're developing Fluent with a future standard in mind, we invite you to try it out and let us know if it addresses your challenges. With your help, we can iterate and improve Fluent to address the needs of many platforms, use cases, and industries.
The post Fluent 1.0: a localization system for natural-sounding translations appeared first on Mozilla Hacks - the Web developer blog.
For a few years my Dad lived in a houseboat on the Willamette River in Oregon. It is a time my brothers and I remember fondly, as many of our stories from that period end with the phrase, "And then he fell in the river."
The reason I bring this up is that while he was living on the river, a museum in Oregon bought the Spruce Goose. They partially disassembled the plane and moved it to its new home. The wings were transported via barge, and parked for the night across the river from my father's home.
I wasn't there at the time, and Dad took one picture. It looks like a white wall on a barge. It's not a particularly impressive picture, but it proves that it happened, and that's what mattered to him.
Note from Missy: Where do you think the dead link at the bottom of the comic led to, lo these long 8 years ago?
Note from Scott: I've wracked my brains, and I have no idea.
Another Note from Scott: It turns out, the link we couldn't remember was to this website. Thanks to all of the readers who reminded me.
A day after the devastating blaze that destroyed the roof and spire of the Notre-Dame cathedral, investigators and photographers were able to get a first look at the damage inside, including the preservation of a number of valuable artifacts and features among piles of debris and a heavily damaged roof. Private citizens and companies in France have stepped forward, pledging hundreds of millions of euros to help restore the treasured building.
Failure of SpaceX's Falcon 9 rocket in September 2016. (US Launch Report via Spaceflight Now)
Every time a spaceflight failure occurs, the phrase "space is hard" will invariably be uttered in response. The sentiment typically being expressed is one of disappointment tinged with begrudging acceptance that spaceflight is challenging and not all attempts will succeed.
This was certainly the case last week with Beresheet, the first private Moon mission. It was led by SpaceIL, an Israeli nonprofit that was originally competing for the Google Lunar X Prize. Launched on a SpaceX Falcon 9 rocket on February 22, 2019, the spacecraft operated nominally during cruise but failed its attempt at a soft landing on April 11, 2019.
Recognizing the boldness of the attempt, the space community's reaction was one of overwhelming support and acknowledgement that "space is hard":
Space is hard, but worth the risks. If we succeeded every time, there would be no reward. It's when we keep trying that we inspire others and achieve greatness. Thank you for inspiring us @TeamSpaceIL. We're looking forward to future opportunities to explore the Moon together. pic.twitter.com/yZ35IJKOYC - Thomas Zurbuchen (@Dr_ThomasZ) April 11, 2019
Hold on to the feeling. You got this. Space is hard. Great first attempt. We'll be watching when you stick the landing on another try! https://t.co/NLfI66xyze - Scott Kelly (@StationCDRKelly) April 11, 2019
Any large, high-stakes project carries a risk of failure, especially if it's on the cutting edge of technology. Infamous examples of technical failures include the sinking of the Titanic, the Hindenburg airship explosion, the Tacoma Narrows Bridge collapse, the Chernobyl nuclear reactor disaster, and the BP oil spill in the Gulf of Mexico. What makes space even more challenging?
While there's no shortage of technical problems to tackle here on Earth, spaceflight presents a unique set of conditions that make the margin between success and failure particularly narrow. Here are just a few of the reasons why space is so hard:
Propellant volatility. Rocket launches are controlled explosions in the most literal sense. The combustive power of liquid oxygen and hydrogen, alcohols, and kerosenes leaves little room for error. Harnessing that energy into usable thrust requires carefully designed combustion chambers, nozzles, and other components capable of withstanding extreme heat and pressure. Hydrodynamic instabilities are so complex and difficult to predict that the early rocketeers relied on experimentation rather than analytical modeling to perfect their designs.
Launch vibration. Rocket engines cause intense vibration during launch, which both rockets and their payloads (including humans!) must survive. Space-bound components and systems must be thoroughly tested on Earth to ensure that they can withstand the launch environment. Vibration testing often reveals anomalies that can be addressed on the ground before the real thing, such as a problematic latch that was discovered during testing of the James Webb Space Telescope.
Cleanliness standards. During assembly on Earth, dust, fluids, and other contaminants will settle to the bottom of a spacecraft or collect in spots with little air flow. In zero gravity, those particles can become airborne (so to speak) and damage electrical components, shorting them out. This is why spacecraft are assembled in clean rooms: it's to protect the spacecraft from humans, not the other way around.
Temperature control. The temperature extremes of space require a system that either has robust temperature control or can safely operate within that range. The fact that heat cannot be carried away by convection in a vacuum makes thermal design for space systems particularly challenging compared to Earth, where engineers can use air to move heat.
Radiation. Beyond the protective shell of Earth's atmosphere and the Van Allen belts, electronic equipment (computers and microelectronics in particular) is sensitive to the harms of ionizing radiation, which in space comes from sources such as high-energy particles and cosmic radiation.
Automated sequencing. Spacecraft and rockets stand in sharp contrast to aircraft because the sequence of steps to operate an airplane can occur within the reaction time of a human: a pilot. In contrast, "[r]ocket stage separation required precise synchronization of the electrical signals that fired the pyrotechnic charges with the signals that governed the fuel valves and pumps controlling propellant flow" (Johnson 2002). This challenge extends today to complex, autonomous operations such as delivering rovers to Mars or landing rocket boosters on ocean platforms.
Reliability. Unlike most projects on Earth, where engineering mistakes can be fixed as they arise, a space system has to operate correctly the first time, since repairs are impossible after launch. (The Hubble Space Telescope is a notable exception; its first servicing mission, which corrected for the flawed primary mirror and replaced other components, cost $293M in 1993 USD.) The reliability, or dependability under stated conditions for a set period of time, of a space system must be quantified and well understood before it even gets to a launchpad.
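The arithmetic behind that last point is unforgiving: if a mission needs every one of its components to work, the system reliability is the product of the component reliabilities, so even excellent parts compound badly in series. A quick sketch:

```python
from functools import reduce

def series_reliability(reliabilities: list) -> float:
    """Reliability of a system that fails if any single component fails."""
    return reduce(lambda acc, r: acc * r, reliabilities, 1.0)

# One hundred 99%-reliable parts in series succeed only ~37% of the time,
# which is why spaceflight demands extreme per-part dependability.
mission_r = series_reliability([0.99] * 100)
```

Real reliability engineering adds redundancy and margin precisely to break out of this multiplicative trap.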
This is not an exhaustive list; spacecraft and rockets are adept at finding new ways to self-destruct. Yet each failure is an opportunity to learn from mistakes and to feed that knowledge forward through lessons learned. Just as space scientists and engineers stand on the shoulders of giants, successful missions in modern spaceflight operate thanks to the failures of their predecessors.
If youâve ever complained about DIY home repairs, spare a thought for the colonial aphid, Nipponaphis monzeni, for whom the task of fixing the house can be spectacularly fatal. It fixes holes in its nest by suicidally erupting and, in its death throes, plastering its bodily fluids over the openings.
Each of these aphids is a white bead, just half a millimeter across. In large numbers, they can compel Japanese trees to form large, hollow spheres called gallsâroomy mansions in which hundreds or thousands of them can live. Like ants, bees, and termites, aphids divide their labor: Adults reproduce, while immature nymphs act as both workers and soldiers. If moth caterpillars tunnel their way into the galls, the nymphs stab these intruders to death, using the sharp mouthparts that they normally use to suck sap from trees. That deals with the caterpillar, but what about the huge hole that it leaves in the gall?
The aphidâs solution, discovered in 2003, is dramatic. Dozens or hundreds of the young soldiers will gather around a hole and discharge fluid from a pair of tubes on their backsides. This isnât a gentle leak but a violent eruption, which drains the nymphs so thoroughly that they shrivel down to just a third of their initial volume. As they dry and die, they also use their legs to mix the fluids over the holes. These harden within an hour, sealing the gap and sometimes entombing the suicide plasterers.
These acts of sacrifice save the colonies. In 2009, Mayako Kutsukake from the National Institute of Advanced Industrial Science and Technology showed in a study that when she halted the aphidsâ repairs by absorbing the plasterersâ bodily fluids with tissue, the galls almost always die. If she carried out the repairs herself with glue, the galls and the colonies within them survived. An open gall is vulnerable to predators and desiccation. âLetting the plants heal the galls naturally takes a long time and is very risky for the aphid colony,â says Takema Fukatsu, who led the study.
Back then, Kutsukake and Fukatsu compared the aphidsâ repairs to the clotting process that we and other animals use to heal open wounds. Now, after a decade of work, theyâve realized that the analogy is more fitting than they could have imagined.
If an insect is wounded, cells called hemocytes rush to the breach and burst, releasing fats that quickly coagulate into a soft plug. Those bursting cells also release an enzyme called phenoloxidase, which cross-links molecules in the insectsâ blood into a hard, reinforced meshâa scab. Â
This is almost exactly what happens when the soldier aphids sacrifice themselves at a gall hole. Hemocytes in their bodily fluids rupture, releasing fats that quickly form a plug, which phenoloxidase slowly reinforces. All the components are the same; the soldiers just have a lot more of them. By exaggerating their own immune defenses, theyâve evolved a way of defending their entire colonies. âItâs clotting, just not of the body,â says Nancy Moran from the University of Texas at Austin. âTheyâre clotting the house that they live in.â
That discovery was hard-won. The aphids canât be reared in the lab, and in the field, âthe gall-repairing soldiers are available only for a few months every year,â Kutsukake says. Her work had to proceed in fits and starts, which is why it took a decade to divine the details of the process.
There are many examples where a colonyâs defenses mirror an individualâs: Some ants, for example, will kill infected larvae to stop diseases from spreading, just as their immune systems will destroy infected cells. But Kutsukakeâs research âshows that these parallels can even exist at the level of the molecules,â says Sylvia Cremer from the Institute of Science and Technology Austria. In the social aphids, âthe same cells and molecules responsible for individual wound healing have been co-opted for colony-level wound healing.â
Many social animals will give up their life to protect their genetically related colony matesâhoneybee workers famously die after stinging. But the brand of suicidal altruism where individuals sacrificially rupture themselves has a special nameââautothysis.â
One termite species commits autothysis to release a chemical weapon: When it bursts, blue crystals in its back mingle with chemicals in its salivary glands to create a toxic sludge that kills its opponents. Only older individuals do this; their jaws are too worn to be useful in combat, and, to misquote Neil Young, it's better to blow up than to fade away.
Meanwhile, several species of exploding ants defend their colonies by flexing so hard that they rip apart their abdomens, unleashing sticky, toxic, corrosive chemicals onto their attackers. To maximize the effect of these chemicals, some workers will stick their abdomens into the mouths of their opponents before letting rip. New species of exploding ants are being discovered all the time; one, which was identified last year, is aptly named Colobopsis explodens.
None of these examples, however, established a clear link between the self-sacrificing behavior and the insect's own immune system. The social aphids are the first, and they should prompt scientists "to revisit in more detail whether the immune system in those other species is also involved in self-explosion," says Rebeca Rosengaus from the Northeastern University College of Science.
The moon just keeps getting wetter and wetter. Trace amounts of water were found in rocks returned by the manned Apollo and unmanned Soviet missions. Then in the late 1990s, during orbital missions to map and study the lunar surface minerals, scientists detected water ice in permanently shadowed craters in the moon's polar regions. Water was also found bound up in lunar minerals here and there across the entire lunar globe.
Now, researchers from NASA and the Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland, report that streams of meteoroids striking the moon infuse the thin lunar atmosphere with short-lived water vapor.
Finding water on the sun-baked orb with its vacuum-like atmosphere is incredible in itself. But there's also a practical side. Water is a crucial resource for a long-term stay on the moon. If we can extract it from the rocks, astronauts might live there for months or years. The team made the recent discovery after sifting through the data collected by LADEE (Lunar Atmosphere and Dust Environment Explorer) and finding dozens of events. LADEE studied lunar dust and the moon's scanty atmosphere during its seven-month mission in 2013-2014.
"We traced most of these events to known meteoroid streams, but the really surprising part is that we also found evidence of four meteoroid streams that were previously undiscovered," said Mehdi Benna of NASA's Goddard Space Flight Center and the lead author of the study. The new ones seen by LADEE occurred on Jan. 9, April 2, April 5 and April 9, 2014.
Meteoroids are bits of rock shed by comets and asteroids. When a meteoroid strikes Earth's atmosphere and burns up, it's called a meteor. Meteoroid streams are responsible for our favorite meteor showers like the Perseids in August and Geminids in December. Meteors from those same showers also strike our lunar neighbor, but instead of burning up, they slam into and zap the surface.
Most of the time the moon has next to no water vapor in its skimpy atmosphere, but when it passed through one of these streams, enough vapor was released for LADEE to detect. When the shower was over, the H2O (and its cousin OH, the hydroxyl radical) fizzled away.
To release water, the meteoroids had to penetrate at least 3 inches (8 cm) below the surface. That may sound like a tall order for a rock only a few millimeters across, but consider: first, it's traveling at tens of thousands of miles an hour; second, there's no atmosphere to slow it down; and third, the lunar soil, called regolith, is fluffy. Even a meteoroid one-fifth of an inch (5 mm) wide can burrow down deep enough to release a puff of water vapor.
With each impact, a small shock wave fans out and ejects water from the surrounding area. About two-thirds of that vapor escapes into space, but about one-third lands back on the surface of the Moon.
The top layer of lunar regolith is bone-dry, but below that there's a hydrated layer where water molecules likely stick to bits of soil and rock. Based on measurements of the moon's exosphere (its barely-there atmosphere), researchers calculated that the hydrated layer has a water concentration of about 200 to 500 parts per million, or about 0.02 to 0.05 percent by weight. That's far drier than the driest desert soils on Earth. To squeeze half a quart of water out of the moon, you'd have to process more than 1.1 tons (1 metric ton) of regolith.
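Those figures are easy to check, since parts per million by weight is just a mass fraction. The short script below (a back-of-the-envelope illustration only; the quart-to-kilogram constant is an approximation, not mission data) reproduces the article's arithmetic:

```python
# Check the regolith-to-water arithmetic from the LADEE result.
QUART_KG = 0.946  # one US liquid quart of water weighs ~0.946 kg

ppm_low, ppm_high = 200, 500  # stated water concentration range
regolith_kg = 1000            # one metric ton of regolith

# ppm by weight is a mass fraction: kg of water per kg of soil.
water_low = regolith_kg * ppm_low / 1_000_000   # kg of water at 200 ppm
water_high = regolith_kg * ppm_high / 1_000_000 # kg of water at 500 ppm

half_quart = QUART_KG / 2  # ~0.47 kg of water

print(f"water per metric ton: {water_low:.2f}-{water_high:.2f} kg")
print(f"half a quart of water: {half_quart:.2f} kg")
```

Even at the optimistic 500 ppm end, a metric ton of regolith yields only about 0.5 kg of water, right around half a quart, which is why the article calls for processing more than a ton.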
These findings could help explain the deposits of ice in cold traps in the dark reaches of craters near the poles. Untouched by sunlight, water can remain stable for up to several billion years. Water vapor liberated by meteoroid impacts may have migrated to the poles, where it remains in "safe keeping."
Some meteoroids contain water and hydroxyls, but the team confirmed that most of the water detected had to be the moonâs because the meteoroids contain too little to account for what was detected. So where does the water come from? Sources include ancient bombardment by water-rich comets and asteroids, meteoroids and even the steady stream of protons (hydrogen atoms stripped of their electrons) blowing from the sun in the solar wind. Protons hit the regolith at high speed and combine with oxygen to make the precious stuff life finds so necessary.
I have been an advocate on light pollution issues for many years. Certainly 30 years ago I was most interested in the skyglow that affects our view of the starry sky, and though that remains a major concern, I have since learned of the many medical, safety, and environmental concerns that are paramount. On an energy committee in my town, I was able to show that poorly lit intersections with severe glare from unshielded lighting had the highest accident rate.
Further review of published studies has shown that as the eye ages, it becomes much more sensitive to disability glare, impairing safe driving. That led to my 2009 AMA resolution suggesting that all streetlights be properly shielded to prevent such glare, making streets safer and allowing the elderly to drive more safely in the evening. This resolution is still cited by lighting companies.
In 2012, knowing of the research activities of many scientists around the world on the effects of nighttime lighting on human physiology, I invited four prominent researchers to help me write a CSAPH report, "Light Pollution: Adverse Health Effects of Nighttime Lighting."
This 27-page report with 134 peer-reviewed references highlighted the adverse health effects of circadian rhythm disturbance. Suppressing melatonin production through excessive night lighting, especially blue light, leads to a host of deleterious health effects.
The most stunning is an increase in certain endocrine-related carcinomas. It is now well known that circadian disturbance causes a 15-20% increase in breast cancer rates, and a similar increase in prostate cancers. Indeed, this past year (2017) the Nobel Prize in Medicine was awarded to Young, Rosbash, and Hall for the groundbreaking research that elucidated the biochemical pathways by which melatonin suppression leads to increased illness. Cancer rates, obesity, diabetes, metabolism, and the immune system are all affected by melatonin suppression. The World Health Organization has even listed shift work, with its repeated melatonin suppression, as a probable carcinogen (Group 2A).
After the 2012 report came out there was some pushback from the lighting industry. However, in 2014 General Electric wrote its own "white paper" on this subject, and not only agreed with the AMA report but liberally quoted from it, stating that corporate policy would change to take note of melatonin production in its lighting policies and products. Shortly after that, Apple added a late-night blue-light reduction mode to its phones and computers. Many other companies have since adopted this practice. Again, with the Nobel Prize and over 1,000 peer-reviewed papers, this is now settled science! The last section of the 2012 report also raised the alarm that excessive outdoor blue light was causing environmental harm, as all living creatures have a circadian rhythm, even one-celled organisms!
In the ensuing years the lighting industry has developed LED lighting, with plans to replace all outdoor lighting with LEDs over the next 10 years, but it was poised to use 4000K LEDs that produce excessive blue. Given my 2012 paper, and many reports of environmental damage from excessive blue light, I was able to move the CSAPH to let me lead one more report, "Human and Environmental Effects of Light Emitting Diode (LED) Community Lighting," adopted by the HOD at the 2016 AMA annual meeting. This particular report hit a nerve with the lighting industry. The report actually says that we should indeed replace outdoor lighting with LED lights to save energy, but still shield all streetlights to prevent glare; that part was widely accepted. The last resolve, stating that blue light should be limited in outdoor lighting and that streetlights should use low-blue-emitting 3000K or lower color temperatures, led to severe consternation in the lighting industry.
The issue was that many companies were trying to sell 4000K lighting, as those were the first type of LEDs manufactured and they had inventory already made. LED lights use a blue LED coated to absorb the blue and re-emit at a lower, "warmer" color temperature, e.g. 3000K. 4000K lighting is 30-34% blue light. The 2012 paper and thousands of studies have already shown this is bad for humans and for the environment in general. The AMA report suggested no higher than 3000K. Nowadays there is good 2700K lighting, and even 2400K lighting as well, and the trend is lower. There is evidence that high blue content leads to severe effects on insects, birds, and mammals in nature. It has even been shown to affect salmon runs, and even plankton!
When this AMA report came out it was hailed by researchers, and many cities paused to study it closely. They came to the same conclusion and demanded warmer 3000K or even 2700K lighting. Many companies changed their products and are now thriving; others are still fighting.
To date most large cities have adopted the AMA recommendation, and in fact some (like Toronto) state in their lighting specifications that they use "AMA Compliant Lighting"! New York, Chicago, Tucson, Phoenix, Los Angeles, San Francisco, San Diego, Georgia, Toronto, Montreal, and many others have changed their lighting plans and demand 3000K or lower. This is helped by the fact that wherever 4000K lighting was installed, citizens immediately complained about the harsh, glaring bluish light.
In some cities, such as Monterey and Davis in California, citizens even sued and demanded a switch to 3000K or lower. Just a few weeks ago (March 2019), the city of Seattle, an early adopter of 4000K lighting, announced that all of its recently installed 4000K lighting will be removed and replaced with 3000K lighting due to multiple citizen complaints.
Any town contemplating installing LED lighting should take note: essentially everywhere 4000K and excessive lighting has been installed, it has been universally detested. Don't make an expensive mistake by installing this type of lighting.
The 2016 report has, in the words of many lighting engineers, "revolutionized" the lighting industry. This would not have occurred without the AMA putting the report out there, forcing lighting companies to address the human health and environmental effects of the lighting they produce.
Mario Motta, MD, FACC
"This baby is great and all, but how am I supposed to apply lipgloss in the airport when I have to CONSTANTLY carry him around?"
Introducing "Magic Sassy Strap!" Magic Sassy Strap allows you to apply all of your favorite glosses, balms, and lacquers, unencumbered by the crushing weight of a small human being who depends solely on you for survival.
Use Magic Sassy Strap for hands-free control of any large, awkward object. Including, but not limited to:
-An 8 lb. baby in a 25 lb. car seat carrier
-A large bundle of wood
-The alternator for a 1988 Buick LeSabre
-A group of tantruming toddlers, known collectively as a "hissy"
-The baby calf that Billy Crystal delivers in "City Slickers" and then heroically carries across a river in the middle of a massive flash flood.
Magic Sassy Strap - when you want to apply lipgloss, but not the laws of physics. (From the makers of Baby Bag)
Foundations of Interstellar Studies Workshop in UK
A workshop on interstellar flight titled Foundations of Interstellar Studies is to take place from 27 to 30 June of this year in the town of Charfield, Gloucestershire, United Kingdom, at the current headquarters of the Initiative for Interstellar Studies. This follows an initial "foundations" conference in 2017 that was held at City College New York and the Harvard Club of New York; future conferences, "run jointly between several organisations depending on the host country," are planned on a roughly two-year schedule. I immediately warmed to the theme that the Initiative for Interstellar Studies (i4IS) introduced by quoting Robert H. Goddard:
How many more years I shall be able to work on the problem I do not know; I hope, as long as I live. There can be no thought of finishing, for "aiming at the stars," both literally and figuratively, is a problem to occupy generations, so that no matter how much progress one makes, there is always the thrill of just beginning.
Browsing through the conference materials I note that, with reference to famous physics conferences like Shelter Island, Pocono and Oldstone, the emphasis is on academic rigor but also on informal conversation, a format that i4IS president Kelvin Long hopes will energize the interstellar community. The aim is "to get researchers together and to maximize the social interaction time for idea swapping and information exchange and it is expected that the ideas and discussions (and maybe even calculations) should continue into the evening social sessions."
The three days of discussions in this year's conference will take place at the Bone Mill, which has been the i4IS headquarters since 2017. This is beautiful country, as those of you who have been to the Cotswolds will already know, in the village of Charfield, near Wotton-under-Edge, in the English county of Gloucestershire. The three themes under focus, each with a day devoted to it:
In addition to the formal scientific proceedings, there will be an opening social event on the evening of Thursday 27th June, starting at 18:00 hours at the Bone Mill. There will also be a formal dinner on Saturday 29th June starting at 19:00 hours at a venue to be announced.
An invitation will be made to submit papers from selected authors post-conference, to the Journal of the British Interplanetary Society (JBIS) and/or publication in the official conference proceedings. For more on the Foundations of Interstellar Studies Workshop 2019, including maps and information on accommodation, go to https://www.fisw.space/fisw-2019.
Horizon 2061 Synthesis Workshop in Toulouse
2061 will mark an interesting set of anniversaries in space exploration. It is the centennial not just of the first human flight into space by Yuri Gagarin but also of the speech by which John Kennedy propelled the US aerospace community into a determined drive for a lunar landing. But we might also add another memorable factor. In 2061, Comet Halley makes its return. The last time we saw Halley was in 1986, when five spacecraft ranging in origin from the European Space Agency to the Soviet Union and France as well as Japan studied the comet in the inner system.
Thus we had the first comet observed in detail by spacecraft, giving us information about the cometary nucleus, the coma and the tail, helping us understand cometary structure. The fact that the Halley expeditions were so determinedly multinational (although a studied US solar sail mission never materialized) gives impetus to an effort called Planetary Exploration Horizon 2061 which, according to its founders, is creating a long-term analysis of four primary areas of space exploration, all of these addressed from a determinedly international perspective.
From the Horizon 2061 website:
By 2061, all the "frontiers" (or outer boundaries) of exploration should have moved dramatically outwards: human exploration might have reached Mars and perhaps the main asteroid belt; sample return missions should have reached, beyond the asteroid belt, the Trojan asteroids on the orbit of Jupiter and the icy moons of Jupiter and Saturn; robotic exploration should have reached the very local interstellar medium, well beyond the outer shock of the heliosphere, thus opening the way towards the closest stars and their planetary systems.
Thus "the four pillars of planetary exploration" Horizon 2061 is examining:
The overall goal:
[The year 2061] symbolically represents our intention to encompass both robotic and human exploration in the same perspective. Its distant horizon, located well beyond the usual horizons of the planning exercises of space agencies and of their standing committees, which generally address shorter time scales, avoids any possible confusion with them. It is intended to trigger a joint foresight exercise by the scientific and technology communities of planetary exploration: one that will free the imagination of the planetary scientists, who are invited to formulate what they think are the most relevant and important scientific questions independently of the a priori technical feasibility of answering them, and of the engineers and technology experts, who are invited to explore innovative technical solutions that will make it possible to fly by 2061 the challenging space missions that will allow us to address these questions.
Space missions are designed, the Horizon 2061 proponents note, around a Science Traceability Matrix (STM) in which mission science questions and objectives define the instruments needed, the mission profile and the kind of platform on which the mission will be flown. Unlike single missions, though, Horizon 2061 intends to write the STMs for a set of representative missions that will investigate everything from the origin of planetary systems to the detection of life. Observations to be made and destinations within the Solar System where such measurements can be performed will determine the type of space missions that emerge from this matrix.
Two meetings have already occurred, the first in Bern in September of 2016, the second in Lausanne in April of 2018. Coming now is the next step, devoted to the synthesis of the exercise. This will take place in an international colloquium hosted by the Université Paul Sabatier in Toulouse from June 5th to 7th, 2019.
The primary organizers will be the Institut de Recherche en Astrophysique et Planétologie (IRAP) and the Observatoire Midi-Pyrénées (OMP). This colloquium, placed under the sponsorship of COSPAR [Committee on Space Research, established by the International Council for Science in 1958], will complete the design of the four pillars and initiate the drafting of the final report, which will be edited and published under the auspices of COSPAR.
Tentative conclusions from the colloquium will be presented for discussion at the joint EPSC-DPS meeting (European Planetary Science Congress - AAS Division for Planetary Sciences) in Geneva (September 15th to 20th, 2019), and later for discussion and final approval at the COSPAR General Assembly (Sydney, August 15th to 23rd, 2020).
Meeting agenda, registration and other materials are available at https://h2061-tlse.sciencesconf.org/.
Pyodide is an experimental project from Mozilla to create a full Python data science stack that runs entirely in the browser.
The impetus for Pyodide came from working on another Mozilla project, Iodide, which we presented in an earlier post. Iodide is a tool for data science experimentation and communication based on state-of-the-art web technologies. Notably, it's designed to perform data science computation within the browser rather than on a remote kernel.
It's also been argued more generally that Python not running in the browser represents an existential threat to the language: with so much user interaction happening on the web or on mobile devices, it needs to work there or be left behind. Therefore, while Pyodide tries to meet the needs of Iodide first, it is engineered to be useful on its own as well.
For another quick example, here's a simple doodling script that lets you draw in the browser window:
from js import document, iodide

canvas = iodide.output.element('canvas')
canvas.setAttribute('width', 450)
canvas.setAttribute('height', 300)
context = canvas.getContext("2d")
context.strokeStyle = "#df4b26"
context.lineJoin = "round"
context.lineWidth = 5

pen = False
lastPoint = (0, 0)

def onmousemove(e):
    global lastPoint
    if pen:
        newPoint = (e.offsetX, e.offsetY)
        context.beginPath()
        context.moveTo(lastPoint[0], lastPoint[1])
        context.lineTo(newPoint[0], newPoint[1])
        context.closePath()
        context.stroke()
        lastPoint = newPoint

def onmousedown(e):
    global pen, lastPoint
    pen = True
    lastPoint = (e.offsetX, e.offsetY)

def onmouseup(e):
    global pen
    pen = False

canvas.addEventListener('mousemove', onmousemove)
canvas.addEventListener('mousedown', onmousedown)
canvas.addEventListener('mouseup', onmouseup)
And this is what it looks like:
The best way to learn more about what Pyodide can do is to just go and try it! There is a demo notebook (50MB download) that walks through the high-level features. The rest of this post will be more of a technical deep-dive into how it works.
There were already a number of impressive projects bringing Python to the browser when we started Pyodide. Unfortunately, none addressed our specific goal of supporting a full-featured mainstream data science stack, including NumPy, Pandas, Scipy, and Matplotlib.
PyPyJs is a build of PyPy, the alternative just-in-time compiling Python implementation, for the browser, using emscripten. It has the potential to run Python code really quickly, for the same reasons that PyPy does. Unfortunately, it has the same performance issues with C extensions that PyPy does.
All of these approaches would have required us to rewrite the scientific computing tools to achieve adequate performance. As someone who used to work a lot on Matplotlib, I know how many untold person-hours that would take: other projects have tried and stalled, and it's certainly a lot more work than our scrappy upstart team could handle. We therefore needed to build a tool based as closely as possible on the standard implementations of Python and the scientific stack that most data scientists already use.
After a discussion with some of Mozilla's WebAssembly wizards, we saw that the key to building this was emscripten and WebAssembly: technologies to port existing code written in C to the browser. That led to the discovery of an existing but dormant build of Python for emscripten, cpython-emscripten, which was ultimately used as the basis for Pyodide.
There are many ways of describing what emscripten is, but most importantly for our purposes, it provides two things: a compiler toolchain that compiles existing C code to WebAssembly, and a runtime environment in which that compiled code can run in the browser.
Pyodide is put together by:
By emulating the file system and other features of a standard computing environment, emscripten makes moving existing projects to the web browser possible with surprisingly few changes. (Some day, we may move to using WASI as the system-emulation layer, but for now emscripten is the more mature and complete option.)
Putting it all together, to load Pyodide in your browser, you need to download the compiled Python interpreter, its supporting JavaScript glue code, and whichever scientific packages you use.
These files can be quite large: Python itself is 21MB, NumPy is 7MB, and so on. Fortunately, these packages only have to be downloaded once, after which they are stored in the browserâs cache.
Using all of these pieces in tandem, the Python interpreter can access the files in its standard library, start up, and then start running the userâs code.
We run CPython's unit tests as part of Pyodide's continuous testing to get a handle on which features of Python do and don't work. Some things, like threading, don't work now, but with the newly available WebAssembly threads, we should be able to add support in the near future.
Other features, like low-level networking sockets, are unlikely ever to work because of the browser's security sandbox. Sorry to break it to you: your hopes of running a Python Minecraft server inside your web browser are probably still a long way off. Nevertheless, you can still fetch things over the network using the browser's APIs (more details below).
Notably, code that runs a lot of inner loops in Python tends to be slower by a larger factor than code that relies on NumPy to perform its inner loops. Below are the results of running various pure-Python and NumPy benchmarks in Firefox and Chrome, compared to running natively on the same hardware.
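The gap has a familiar cause: every iteration of a Python-level loop pays interpreter overhead, while NumPy-style code pushes the inner loop into compiled native code. NumPy isn't even needed to see the pattern; here is a hypothetical micro-benchmark (not one of the benchmarks above) using only the standard library, where the C-implemented built-in sum() plays the role of the compiled inner loop:

```python
import timeit

data = list(range(100_000))

def python_loop(values):
    # The inner loop runs in the interpreter: one bytecode
    # dispatch and boxed-integer add per element.
    total = 0
    for v in values:
        total += v
    return total

def builtin_sum(values):
    # The loop runs inside CPython's C implementation of sum().
    return sum(values)

# Both approaches compute the same answer.
assert python_loop(data) == builtin_sum(data)

t_loop = timeit.timeit(lambda: python_loop(data), number=20)
t_sum = timeit.timeit(lambda: builtin_sum(data), number=20)
print(f"interpreted loop: {t_loop:.3f}s, C inner loop: {t_sum:.3f}s")
```

The same principle explains the Pyodide numbers: vectorized NumPy calls cross into WebAssembly once per array operation, while pure-Python loops pay the interpreter's overhead on every element.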
Some conversions are ambiguous, though. JavaScript, for example, has both Map objects and plain Object instances as two distinct types. Python dicts (dictionaries) are just mappings of keys to values. On the other hand, a JavaScript Object maps keys to values but also acts like a class instance, a role Python gives to object. (Yes, I've oversimplified here to make a point.) Given a JavaScript Object, it's impossible to efficiently guess whether it should be converted to a Python dict or a Python object. Therefore, we have to use a proxy and let "duck typing" resolve the situation. That is what Pyodide does when it hands Python a JavaScript Object: it wraps it in a proxy and lets the Python code using it decide how to handle it. Of course, this doesn't always work; the duck may actually be a rabbit. Thus, Pyodide also provides ways to explicitly handle these conversions.
Proxies also turn out to be the key to accessing the Web APIs, the set of functions the browser provides that make it do things. For example, a large part of the Web API is on the document object. You can get that from Python by doing:

from js import document

This imports the document object from the JavaScript side and makes it usable from Python. All of this happens through proxies that look up what the document object can do on-the-fly. Pyodide doesn't need to include a comprehensive list of all of the Web APIs the browser has.
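The idea behind such proxies can be sketched in a few lines of ordinary Python. The class below is a toy stand-in, not Pyodide's actual proxy implementation, and FakeDocument is a hypothetical placeholder for a browser object: the proxy simply forwards attribute lookups to the wrapped object on demand, which is why no comprehensive list of its capabilities needs to be declared up front:

```python
class ForeignProxy:
    """Toy proxy: forwards attribute access to a wrapped 'foreign' object."""

    def __init__(self, target):
        # Hold a reference to the wrapped object; nothing is copied
        # or converted at wrap time.
        self._target = target

    def __getattr__(self, name):
        # Called only when normal lookup fails, i.e. for every
        # attribute of the wrapped object -- resolved on the fly.
        return getattr(self._target, name)


# A hypothetical stand-in for something like the browser's document object.
class FakeDocument:
    title = "Hello"

    def getElementById(self, element_id):
        return f"<element {element_id}>"


document = ForeignProxy(FakeDocument())
print(document.title)                   # attribute looked up on demand
print(document.getElementById("main"))  # methods forward the same way
```

Because lookups happen lazily, the proxy works for any object the host side hands over, whatever attributes it happens to have.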
There are important data types that are specific to data science, and Pyodide has special support for these as well. Multidimensional arrays are collections of (usually numeric) values, all of the same type. They tend to be quite large, and knowing that every element is the same type has real performance advantages over Python's lists or JavaScript's Arrays, which can hold elements of any type.
Since in practice these arrays can get quite large, we don't want to copy them between language runtimes. Not only would that take a long time, but having two copies in memory simultaneously would tax the limited memory the browser has available.
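Python's own buffer protocol gives a feel for why sharing beats copying. In the sketch below (a plain-Python illustration of the zero-copy idea, not Pyodide's actual mechanism), a memoryview exposes the same bytes as the original array without duplicating them, so a write through one is visible through the other:

```python
from array import array

# A numeric array of 100,000 doubles, standing in for a large NumPy buffer.
data = array("d", [0.0] * 100_000)

# A zero-copy view: no second copy of the ~800 KB of floats is made.
view = memoryview(data)

view[0] = 3.14   # write through the view...
print(data[0])   # ...and it is visible in the original: 3.14

# By contrast, list(data) would materialize a full second copy in memory.
```

Sharing one buffer between the Python and JavaScript sides follows the same logic: both runtimes read and write the same memory rather than paying for a duplicate.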
One of the advantages of doing the data science computation in the browser rather than in a remote kernel, as Jupyter does, is that interactive visualizations don't have to communicate over a network to reprocess and redisplay their data. This greatly reduces latency: the round-trip time from when the user moves their mouse to when an updated plot is displayed on the screen.
The Python scientific stack is not a monolith: it's actually a collection of loosely affiliated packages that work together to create a productive environment. Among the most popular are NumPy (for numerical arrays and basic computation), Scipy (for more sophisticated general-purpose computation, such as linear algebra), Matplotlib (for visualization) and Pandas (for tabular data or "data frames"). You can see the full and constantly updated list of the packages that Pyodide builds for the browser here.
Some of these packages were quite straightforward to bring into Pyodide. Generally, anything written in pure Python without any extensions in compiled languages is pretty easy. In the moderately difficult category are projects like Matplotlib, which required special code to display plots in an HTML canvas. On the extremely difficult end of the spectrum, Scipy has been and remains a considerable challenge.
Roman Yurchak worked on making the large amount of legacy Fortran in Scipy compile to WebAssembly. Kirill Smelkov improved emscripten so shared objects can be reused by other shared objects, bringing Scipy down to a more manageable size. (The work of these outside contributors was supported by Nexedi.) If you're struggling to port a package to Pyodide, please reach out to us on Github: there's a good chance we may have run into your problem before.
Since we can't predict which of these packages the user will ultimately need to do their work, they are downloaded to the browser individually, on demand. For example, when you import NumPy:
import numpy as np
Pyodide fetches the NumPy library (and all of its dependencies) and loads them into the browser at that time. Again, these files only need to be downloaded once, and are stored in the browser's cache from then on.
Adding new packages to Pyodide is currently a semi-manual process that involves adding files to the Pyodide build. We'd prefer, in the long term, to take a distributed approach so anyone could contribute packages to the ecosystem without going through a single project. The best-in-class example of this is conda-forge. It would be great to extend their tools to support WebAssembly as a platform target, rather than redoing a large amount of effort.
Additionally, Pyodide will soon have support for loading packages directly from PyPI (the main community package repository for Python), as long as the package is pure Python and distributed in the wheel format. This gives Pyodide access to around 59,000 packages, as of today.
We definitely want to encourage this brave new world, and are excited about the possibilities of having even more languages interoperating together. Let us know what you're working on!
If you haven't already tried Pyodide in action, go try it now! (50MB download)
It's been really gratifying to see all of the cool things that have been created with Pyodide in the short time since its public launch. However, there's still lots to do to turn this experimental proof-of-concept into a professional tool for everyday data science work. If you're interested in helping us build that future, come find us on gitter, github and our mailing list.
The post Pyodide: Bringing the scientific Python stack to the browser appeared first on Mozilla Hacks - the Web developer blog.
NASA has yet to outline its approach to meeting the goal, announced in a March 26 speech by Vice President Mike Pence, of landing humans at the south pole of the moon within five years. The agency has been working internally on at least a high-level approach for doing so, and plans to start sharing details with the White House, including the Office of Management and Budget, this week in order to finalize a revised budget request that's expected to seek several billion dollars more in fiscal year 2020 alone.
However, in comments at the 35th Space Symposium, NASA Administrator Jim Bridenstine said the agency would pursue a two-phase approach that would initially emphasize speed. That approach is expected to use the Space Launch System and Orion, lunar landers and some version of a lunar Gateway. But the lunar Gateway is expected to be scaled down dramatically from earlier plans. According to the post:
Some concepts under consideration require only the Power and Propulsion Element, which NASA is in the process of procuring, along with a docking node of some kind that could also serve as a habitation module.
Publicly, potential Gateway partners have said little about how NASA's accelerated approach would affect their ability or willingness to participate. During an April 10 panel session here on exploration, officials from NASA, the Canadian Space Agency, European Space Agency and Japan Aerospace Exploration Agency largely avoided direct discussion of what NASA's new plans would mean for international contributions to the Gateway or other elements of the exploration architecture. Expect no public comments from CSA and the other potential Gateway partners until NASA's budget is finalized, sometime later this year.
Overview of the CCP. Graphic c/o CSA.
This is also a good -- but older -- article on Triton. We don't know who wrote it. Initial speculation was Iran; more recent speculation is Russia. Both are still speculations.
Yesterday's discussions with the science team focused on determining which target in the vicinity of "Aberlady" will become the focus of the next drill campaign: target 2, or target 3 (pictured in the Sol 2379 Mission Update). In the end, target 3 was recommended by rover planners for its flatter texture, as an APXS raster of both targets showed there wasn't a large difference in composition between the two. Once formally included in plan activities, target 3 will be given a proper name consistent with those being used in the "Glen Torridon" region.
Tosol begins with a MAHLI open cover image of the Aberlady sample dump pile (shown above) and then an arm retract to get it out of the way for a Mastcam multispectral observation of the dump pile that follows. Next, a Navcam dust devil survey and suprahorizon movie are included to monitor clouds and dust devils in the current transition from dusty to cloudy season. Then a ChemCam 10×1 vertical LIBS and RMI observation on the Aberlady drill tailings and a Mastcam documentation image wrap up the 1-hour science block.
After sunset, two APXS rasters on two differently toned drill tailing targets are planned to run until the wee hours of the night, when CheMin will take over with its third integration on the Aberlady drill sample, using X-ray diffraction to identify the signals of the minerals present in the sample.
Standard DAN passives and REMS observations were included to continue monitoring the environmental conditions at the current workspace. Tomorrow the goal is to finish up at Aberlady, and bump to target 3 for "Drill Sol 0."
Written by Brittney Cooper, Atmospheric Scientist at York University
I just had the dispiriting experience of receiving a paper to review from Journal B, that was unchanged from a prior submission to Journal A. The "dispiriting" part of the experience was that the paper was completely unchanged, despite a host of minor and major comments on the paper from all three reviewers for Journal A.
I ended up writing that I was disappointed that the authors had not seen fit to confront the bigger issues in any way, much less correct even the smallest and easiest errors; and then pasted in my previous review. What I wanted to do was paste in the expert reviews from the other two reviewers for Journal A, but I didn't feel like that was OK.
(If I get the paper back with some revisions, I'll reevaluate it in light of the Journal A reviews, too.)
I think the behavior of the authors is very questionable, too, and I hope they rethink this strategy. If your paper is desk-rejected by a hoity-toity journal without review, that's one thing; if reviewers put in hours of effort and give you detailed comments, you goshdarn well should put in an hour or two of your own time before resubmitting.
David Koslicki visited my lab yesterday, and I was reminded of the mash and MetaPalette situation from a few years back. Briefly:
I was a reviewer on both the mash paper (Ondov et al. (2016)) and the MetaPalette paper (Koslicki and Falush (2016)) and in my final review of MetaPalette I mentioned the mash paper enthusiastically. (Both were already up on biorxiv.)
At some point later on I sent David an e-mail to follow up on some suggestions I'd had, and we realized that he'd never received the text from my review of MetaPalette. He later told me that he thought that receiving my comments would have accelerated his research by a few months, by pointing him at a new area.
So why didn't mSystems send him the review text?!
(There are plenty of journals that are guilty of this. Nature Biotech is one that I've noted in the past.)
Peer reviews often provide important context that can help people understand why the paper is important and interesting. It's fine and dandy to say that that should all be in the final paper, but that's a hard task and often papers are space constrained (...for some reason).
I think journals should make reviews public along with the article.
The biggest argument against this is that it might take some work by someone to properly adjust reviews for fixes from earlier versions. A short term fix might be to have a box for "this is the part of the review that I would like to make public if this paper is accepted".
I no longer review for PNAS, because they started including a provision that I couldn't make any part of my review available in any form, even anonymously. I can understand that they don't want reputation laundering (e.g. my previous behavior in posting reviews, which boosts my own reputation while also being a sign of my own privilege), but I see little harm in allowing it to be posted anonymously.
Journals sure are proprietary about work they didn't pay for. That's a bigger theme here, I guess :)
Anyway. Those are my ranty off the cuff comments for today.
Indian information technology (IT) outsourcing and consulting giant Wipro Ltd. [NYSE:WIT] is investigating reports that its own IT systems have been hacked and are being used to launch attacks against some of the company's customers, multiple sources tell KrebsOnSecurity. Wipro has refused to respond to questions about the alleged incident.
Earlier this month, KrebsOnSecurity heard independently from two trusted sources that Wipro, India's third-largest IT outsourcing company, was dealing with a multi-month intrusion from an assumed state-sponsored attacker.
Both sources, who spoke on condition of anonymity, said Wipro's systems were seen being used as jumping-off points for digital fishing expeditions targeting at least a dozen Wipro customer systems.
The security experts said Wipro's customers traced malicious and suspicious network reconnaissance activity back to partner systems that were communicating directly with Wipro's network.
On April 9, KrebsOnSecurity reached out to Wipro for comment. That prompted an email on Apr. 10 from Vipin Nair, Wipro's head of communications. Nair said he was traveling and needed a few days to gather more information before offering an official response.
On Friday, Apr. 12, Nair sent a statement that acknowledged none of the questions Wipro was asked about an alleged security incident involving attacks against its own customers.
"Wipro has a multilayer security system," the company wrote. "The company has robust internal processes and a system of advanced security technology in place to detect phishing attempts and protect itself from such attacks. We constantly monitor our entire infrastructure at heightened level of alertness to deal with any potential cyber threat."
Wipro has not responded to multiple additional requests for comment. Since then, two more sources with knowledge of the investigation have come forward to confirm the outlines of the incident described above.
One source familiar with the forensic investigation at a Wipro customer said it appears at least 11 other companies were attacked, as evidenced from file folders found on the intruders' back-end infrastructure that were named after various Wipro clients. That source declined to name the other clients.
The other source said Wipro is now in the process of building out a new private email network because the intruders were thought to have compromised Wipro's corporate email system for some time. The source also said Wipro is now telling concerned clients about specific "indicators of compromise," telltale clues about tactics, tools and procedures used by the bad guys that might signify an attempted or successful intrusion.
Wipro says it has more than 170,000 employees helping clients across six continents with Fortune 500 customers in healthcare, banking, communications and other industries. In March 2018, Wipro said it passed the $8 billion mark in annual IT services revenue.
The apparent breach comes amid shifting fortunes at Wipro. On March 5, the State of Nebraska abruptly canceled a contract with Wipro after spending $6 million with the company. In September 2018, the Nebraska Department of Health and Human Services issued a cease-and-desist letter to Wipro, ordering it to stop work on the upgrade to the state's Medicaid enrollment system, and to vacate its state offices. Wipro is now suing Nebraska, saying its project was on schedule and on budget.
Another curious, if only coincidental, development: On April 4, 2019, the government of India sold "enemy" shares in Wipro worth approximately $166 million. According to this article in The Business Standard, enemy shares are so called because they were originally held by people who migrated to Pakistan or China and are not Indian citizens any longer.
"A total of 44.4 million shares, which were held by the Custodian of Enemy Property for India, were sold at Rs 259 apiece on the Bombay Stock Exchange," The Business Standard reported. "The buyers were state-owned Life Insurance Corporation of India (LIC), New India Assurance and General Insurance Corporation."
Wipro is expected to announce its fourth-quarter earnings report on Tuesday, April 16 (PDF).
Update, April 16, 9:11 a.m. ET: Not sure why it did not share this statement with me, but Wipro just confirmed to the India Times that it discovered an intrusion and has hired an outside security firm to investigate.
Update, April 17, 2:33 p.m. ET: Check out my latest story on the Wipro breach, the latter half of which includes important new updates about the breach investigation.
Researchers have found several vulnerabilities in the WPA3 Wi-Fi security protocol:
The design flaws we discovered can be divided in two categories. The first category consists of downgrade attacks against WPA3-capable devices, and the second category consists of weaknesses in the Dragonfly handshake of WPA3, which in the Wi-Fi standard is better known as the Simultaneous Authentication of Equals (SAE) handshake. The discovered flaws can be abused to recover the password of the Wi-Fi network, launch resource consumption attacks, and force devices into using weaker security groups. All attacks are against home networks (i.e. WPA3-Personal), where one password is shared among all users.
News article. Research paper: "Dragonblood: A Security Analysis of WPA3's SAE Handshake":
Abstract: The WPA3 certification aims to secure Wi-Fi networks, and provides several advantages over its predecessor WPA2, such as protection against offline dictionary attacks and forward secrecy. Unfortunately, we show that WPA3 is affected by several design flaws, and analyze these flaws both theoretically and practically. Most prominently, we show that WPA3's Simultaneous Authentication of Equals (SAE) handshake, commonly known as Dragonfly, is affected by password partitioning attacks. These attacks resemble dictionary attacks and allow an adversary to recover the password by abusing timing or cache-based side-channel leaks. Our side-channel attacks target the protocol's password encoding method. For instance, our cache-based attack exploits SAE's hash-to-curve algorithm. The resulting attacks are efficient and low cost: brute-forcing all 8-character lowercase passwords requires less than $125 in Amazon EC2 instances. In light of ongoing standardization efforts on hash-to-curve, Password-Authenticated Key Exchanges (PAKEs), and Dragonfly as a TLS handshake, our findings are also of more general interest. Finally, we discuss how to mitigate our attacks in a backwards-compatible manner, and explain how minor changes to the protocol could have prevented most of our attacks.
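To get a feel for why the abstract's "less than $125" claim is plausible, here is a back-of-the-envelope sketch (mine, not the paper's). The keyspace size follows directly from "all 8-character lowercase passwords"; the per-instance guess rate and hourly price are illustrative assumptions, not figures from the research.

```python
# Back-of-the-envelope cost of brute-forcing every 8-character lowercase
# password, the scenario named in the Dragonblood abstract.
# The keyspace is exact; the throughput and EC2 pricing are assumed numbers.

KEYSPACE = 26 ** 8            # all 8-char lowercase passwords: 208,827,064,576
GUESSES_PER_SECOND = 1e6      # assumed per-instance rate against the side channel
PRICE_PER_HOUR = 0.50         # assumed spot-instance price, USD

instance_hours = KEYSPACE / GUESSES_PER_SECOND / 3600
cost = instance_hours * PRICE_PER_HOUR
print(f"keyspace: {KEYSPACE:,} passwords")
print(f"~{instance_hours:.0f} instance-hours, ~${cost:.0f}")
```

Even with these deliberately conservative placeholder rates, the total stays in the tens of dollars, which is why the attack counts as "low cost."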
Earlier today, smoke was observed billowing from the landmark Notre-Dame Cathedral, in central Paris; it was undergoing renovation work. The smoke grew and was followed by flames, which consumed the roof and caused the central spire of the cathedral to collapse. The gothic cathedral is visited by millions of tourists and locals every year. Authorities report no injuries or deaths at the moment, and have tentatively linked the fire to the renovations, which were due to have been completed in 2022. Below, some images of the disaster, and a handful of images from inside Notre-Dame before the fire.
A possible second planet around Proxima Centauri raises all kinds of questions. I wasn't able to make it to Breakthrough Discuss this year, but I've gone over the presentation made by Mario Damasso of Turin Observatory and Fabio Del Sordo of the University of Crete, recounting their excellent radial velocity analysis of the star. Proxima c is a fascinating world, if it's there, because it would be a super-Earth in a distant (and cold) 1.5 AU orbit of a dim red star. Exactly how it formed and whether it migrated to its current position could occupy us for a long time.
But is it there? The first difficulty has to do with stellar activity, which Damasso and Del Sordo were careful to screen out; it's one of the major problem areas for radial velocity work in this kind of environment, for red dwarf stars are often quite active. During the question and answer session, another key question emerged: We know from Kepler that many stars are orbited by multiple planets, and there is no reason to assume that Proxima Centauri has but one.
The question: If there are other, smaller worlds in play here, could the effect of their combined masses produce a "phantom" Proxima c in the orbit Damasso and Del Sordo have discussed?
The two astronomers are completely open to this possibility, and point to the need for follow-up observations with ESPRESSO, not to mention the useful Gaia measurements that could give us even more detail. Flare activity is always an issue in any case; it may have affected the results of Anglada et al. in 2018 (citation below), when researchers found possibly two inner dust belts and one outer belt around the star (see Proxima Centauri Dust Indicates a Complicated System). The Damasso and Del Sordo work is comprehensive as far as it can go, but both were careful to note that we are dealing solely with a candidate, not a confirmed world. And it could well be the result of other, unseen planets affecting the star, as well as stellar noise.
This work draws on the earlier Proxima Centauri radial velocity dataset compiled by Guillem Anglada-Escudé (University of London) and team, but folds in an additional 61 RV observations, with considerable attention to the question of filtering out the 85-day rotation period of the parent star and the associated noise of stellar surface perturbations. The instrument in play is the European Southern Observatory's High Accuracy Radial Velocity Planet Searcher (HARPS) spectrograph at La Silla.
I suspect we're going to find a number of small worlds around Proxima Centauri, so we'll see how their gravitational interactions might affect the spectroscopic data and hence the confirmation of the current candidate. But if this detection is confirmed, this is what we've found: The planet would have a minimum mass of about six Earths (because this is radial velocity, we can only measure a minimum mass; we don't know the planetary inclination) and would orbit Proxima Centauri with a period of 1,900 days at 1.5 AU. Not exactly a habitable place for the likes of our species. Del Sordo estimates temperatures there would be about 40 K.
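As a quick sanity check on those orbital numbers (my own arithmetic, not the presenters'), Kepler's third law ties the 1.5 AU separation to the ~1,900-day period once you supply the stellar mass; the ~0.12 solar-mass value for Proxima Centauri used below is an assumed literature figure.

```python
# Consistency check: does a 1.5 AU orbit around Proxima Centauri imply a
# period near the reported ~1,900 days?  Kepler's third law, with P in
# years, a in AU, and the stellar mass M in solar masses: P**2 = a**3 / M.

M_STAR = 0.12   # assumed mass of Proxima Centauri, in solar masses
a = 1.5         # semi-major axis of the candidate's orbit, AU

period_days = (a ** 3 / M_STAR) ** 0.5 * 365.25
print(f"implied period: {period_days:.0f} days")
```

The result lands within a few percent of 1,900 days, so the reported period and semi-major axis hang together for a star of roughly this mass.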
We may know, via Gaia, whether Proxima Centauri c is an actual world by the end of this year. A key follow-up question is, can we snag a direct image in visible light? If so, it would mark the first such detection of a planet outside our Solar System, the imaged worlds found thus far having been discovered via infrared. There is plenty, in other words, to like about the hypothetical Proxima Centauri c, provided it's really there. Waiting a few more months could give us a firm answer.
On another matter, as a great admirer of Thoreau, I was pleased that Damasso and Del Sordo quoted him at the beginning of their presentation, and to good effect: "If you have built castles in the air, your work need not be lost; that is where they should be. Now put the foundations under them." That's a good metaphor for RV studies as exceedingly delicate as these. I'll add a favorite bit from one of Thoreau's poems:
For lore that's deep must deeply studied be,
As from deep wells men read star-poetry…
There's poetry indeed in the spectroscopic data of our nearest star, if we can just tease out its meaning. And here's an image that might evoke a bit of poetry to close today's entry.
Image: Rigil Kentaurus is the bright star near the top of this broad southern skyscape. Of course it's probably better known as Alpha Centauri, nearest star system to the Sun. Below it sprawls a dark nebula complex. The obscuring interstellar dust clouds include Sandqvist catalog clouds 169 and 172 in silhouette against the rich starfields along the southern Milky Way. Rigil Kent is a mere 4.37 light-years away, but the dusty dark nebulae lie at the edge of the starforming Circinus-West molecular cloud about 2,500 light-years distant. The wide field of view spans over 12 degrees (24 full moons) across southern skies. Credit & Copyright: Roberto Colombari.
The paper on dust belts around Proxima Centauri is from Guillem Anglada, "ALMA Discovery of Dust Belts Around Proxima Centauri," Astrophysical Journal Letters Vol. 850 No. 1 (15 November 2017) (abstract). (Note: This is not Guillem Anglada-Escudé, despite the similarity in names!) The Damasso and Del Sordo paper is as yet unpublished, though undergoing peer review. Video of their presentation is available at https://www.youtube.com/watch?v=DLzzg9p0-AI&t=15648s (go to about 4:16:45 on the video).
Supply chain security is an insurmountably hard problem. The recent focus is on Chinese 5G equipment, but the problem is much broader. This opinion piece looks at undersea communications cables:
But now the Chinese conglomerate Huawei Technologies, the leading firm working to deliver 5G telephony networks globally, has gone to sea. Under its Huawei Marine Networks component, it is constructing or improving nearly 100 submarine cables around the world. Last year it completed a cable stretching nearly 4,000 miles from Brazil to Cameroon. (The cable is partly owned by China Unicom, a state-controlled telecom operator.) Rivals claim that Chinese firms are able to lowball the bidding because they receive subsidies from Beijing.
Just as the experts are justifiably concerned about the inclusion of espionage "back doors" in Huawei's 5G technology, Western intelligence professionals oppose the company's engagement in the undersea version, which provides a much bigger bang for the buck because so much data rides on so few cables.
This shouldn't surprise anyone. For years, the US and the Five Eyes have had a monopoly on spying on the Internet around the globe. Other countries want in.
As I have repeatedly said, we need to decide if we are going to build our future Internet systems for security or surveillance. Either everyone gets to spy, or no one gets to spy. And I believe we must choose security over surveillance, and implement a defense-dominant strategy.
When reading I've always underlined sentences that make me happy. Once the kids got old enough to understand there's no email or fun on a Kindle I switched from dead tree books, and now the underlining is stored in Amazon's datacenters.
After a few years of highlighting on Kindle I started to wonder if the number of sentences that I liked and the eventual five-star scale rating I gave a book had any correlation. Amazon owns Goodreads and Kindle services sync data into Goodreads, but unfortunately highlight data isn't available through any API.
I was able to put together a little Python to scrape the highlight counts per book (yay, BeautifulSoup) and combine it with page count and rating info from the goodreads APIs. Our family scientist explained "the statistical tests to compare values of a continuous variable across levels of an ordinal variable", and there was no meaningful relationship. Still it makes a nice picture:
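For the scraping step described above, here is a minimal sketch of what the BeautifulSoup pass might look like. The CSS class names (`annotatedBookItem`, `bookTitle`, `highlightCount`) are hypothetical placeholders for illustration; the real Goodreads notes-page markup may differ.

```python
from bs4 import BeautifulSoup

def highlight_counts(html):
    """Map book title -> highlight count from a notes-page HTML string.

    The class names used in the selectors below are assumed placeholders,
    not Goodreads' actual markup.
    """
    soup = BeautifulSoup(html, "html.parser")
    counts = {}
    for row in soup.select("div.annotatedBookItem"):
        title = row.select_one("a.bookTitle")
        count = row.select_one("span.highlightCount")
        if title and count:
            # e.g. "37 highlights" -> 37
            counts[title.get_text(strip=True)] = int(count.get_text().split()[0])
    return counts
```

From there, the per-book counts can be joined against page counts and star ratings pulled from the Goodreads API before plotting.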
The data I pulled only covered books I'd made highlights in, which seems to be about 2/3rds of them. I was happy to see that more than 40% of the books I'd read and highlighted since getting a kindle were written by women, and even better than that over the last two years. That probably comes from following good people on Twitter.
This is a pretty awful story of how Andreas Gal, former Mozilla CTO and US citizen, was detained and threatened at the US border. CBP agents demanded that he unlock his phone and computer.
Know your rights when you enter the US. The EFF publishes a handy guide. And if you want to encrypt your computer so that you are unable to unlock it on demand, here's my guide. Remember not to lie to a customs officer; that's a crime all by itself.
I really like the drawing of me in panel one. It's a shame I never came up with another use for it. Sitting here looking at it, several ideas for comics that might have worked are suggesting themselves.
How to pretend to be a party DJ.
How to sneak up behind somebody on a chilly fall day.
How to play keyboards for a band called The Unabombers.
Note from Missy: It occurs to me that every guy in this comic (besides Mustache Boss) is wearing a gray hoodie. Dear Scott, did you choose which coworkers to highlight in this tale for that reason?
Note from Scott: It was not deliberate. Now that I think about it, most of the people who allowed me to use their images for characters in the first couple of years of the strip, except you and me, were sporting the layered look. That's because the pictures of you and me were taken in our nice, warm home, and everybody else's were taken in various locations around Seattle.
Later, when we moved to Florida, new characters tended toward t-shirts.
As for the color, one way or the other, every garment everyone in the strip wears is either black or gray.