Accurate (the cartoon, not the machines).
By Beverly Perry
We don’t know who will take those first steps on Martian soil, ushering in the age of humans as a multi-planetary species. But we do already know a couple of things about those first intrepid explorers: They’re taking steps on Earth right now, and they belong to a tech-savvy generation raised on the internet and social media. But do today’s students think about exploring beyond this world and into deep space?
“Every day – we can’t get enough of that stuff!” said Ben Collins of the University of Illinois Urbana-Champaign on a recent windy morning spent launching rockets in a field north of Huntsville. Collins and his teammates were among 51 student rocketry teams that competed in various challenges and sent their amateur rockets soaring during the 16th annual Student Launch rocketry challenge April 13-16.
At this year’s Student Launch, middle and high school students and university computer scientists, physicists and engineers of all stripes (aerospace and mechanical were particularly well-represented) got to tour NASA’s Marshall Space Flight Center, the center responsible for developing the Space Launch System (SLS), the country’s next-generation heavy-lift launch vehicle.
While there, the students heard from a member of their generation actively involved in designing and engineering SLS: Marshall engineer Kathryn Crowe, who is part of a generations-spanning workforce blending fresh thinking with years of experience. (See Time Flies: Next-Generation Rocket is the Work of Generations for more about Kathryn’s work.)
For some, the competition – and the visit – were a taste of things to come.
“My biggest career goal is to work on the Journey to Mars – to somehow be a part of it,” said Brandon Murchinson, also of the University of Illinois Urbana-Champaign. “I think SLS is incredible. As someone who’s always been interested in space exploration and travel, it’s why I chose this career path.”
NASA’s call for new astronauts earlier this year also made an impact on the future engineers and scientists at the Student Launch. Paul Grutzmacher, a 17-year-old senior at St. Vincent-St. Mary High School in Akron, Ohio, said that his career goal is to become a pilot for the Orion crew vehicle that will launch on SLS. “SLS excites me because it’s supposed to take us farther than we’ve gone before and it’s also our next heavy lifter,” he added.
Grutzmacher thinks he’s got the right stuff to fly on SLS, but so does Vanderbilt University’s Rebecca Riley, a senior computer science major who plans to continue her education in particle physics. “I think we’re all pretty excited that we might be the right age to be going to Mars. I’m like, Man, that’s going to be me going to Mars!”
These students recognize the value in missions that build expertise in long-duration spaceflight – and the technological spinoffs that arise from the process. To hear them tell it, long timelines just don’t scare them.
Auburn University’s student rocketry team tracks progress on America’s next great rocket by following social media and events like solid rocket booster static test firings and RS-25 main engine tests. “Social media makes it a lot more tangible,” said Auburn’s Burak Adanur. “And I think it gives people something to look forward to.”
Vanderbilt University’s Andrew Voss has participated in the Student Launch over the past four years. “I have seen a lot of work go down,” he said. “And I like seeing the test stands because the work that goes into testing is a feat of engineering.” Check out our recent blog post on Engine 2059 for more about how an engine helped test a test stand.
Tech-obsessed students have no trouble spouting off advancements that have arisen from America’s space program: cell phone cameras, scratch-resistant sunglasses, memory foam, and the list goes on. Vanderbilt’s Voss said, “That’s part of what NASA’s always done, and what could come out of SLS is not just spaceflight, but technology that drives the world forward.”
“I think that’s one of the most important aspects to space exploration,” said Auburn’s Adanur. “We have to go to space because it’s a mechanism – it’s a crucible – that will change us as a society and give us new technologies. I think it has more of a ripple effect than most people think.”
Chris Lorenz of the University of Illinois at Urbana-Champaign said he sees the value of NASA’s proving ground missions to build up for human Mars landings. “I’m a big fan of what NASA does in robotic exploration. It’s smart to go unmanned and build up infrastructure first before attempting manned missions,” he said.
Vanderbilt’s Mitch Masia said that while proving ground missions are necessary, deep space exploration really gets people going. “The space station is awesome and a huge feat and deep space missions will get people even more excited.” Case in point: Worldwide amazement and wonder at the photos of Pluto NASA’s New Horizons spacecraft has been sending back to Earth.
Participants at the Student Launch emphasized that their generation wants its chance to make history. They want their Mars shot. “I think SLS will bring our generation together,” said Michael D’Onofrio, a 17-year-old senior at Sylvania Northview High School in Sylvania, Ohio. “Something that’s greater than where we are – going beyond Earth – will bring us together.”
Vanderbilt’s Riley said, “I’m excited about SLS in a very patriotic way. SLS and going to Mars is that big goal that we can all get behind and be excited about as an American people.”
Join in the conversation: Visit our Facebook page to comment on the post about this blog. We’d love to hear your feedback!
My students always chuckle when I point out constellations with only two or three stars, like Canis Minor, and tell them those few points of light represent a dog. Or a crab. Or a sextant. All those creatures and more glimmer overhead. Makes you wonder what the ancients saw up there. But since they didn’t leave notes, we’re left to speculate why they pictured sometimes elaborate figures among the scanty stars that comprise the original 48 constellations. (Eighty-eight have been officially recognized since 1930.)
I’m sure in some cases they did the best with the few stars they had to work with. In others, their powers of imagination may have been more fertile. One thing’s for sure — they saw a lot more stars in the sky than most of their descendants. Why? Little to no manmade light pollution. The ability to see fainter stars gives you far more choices when you’re “connecting the dots” to create a hunter, bear, sea monster or queen. With only the brightest stars visible to most 21st century eyes, the constellations are less recognizable.
As one of our readers pointed out, German-born illustrator H.A. Rey, best known for the Curious George books, also wrote The Stars: A New Way to See Them. Published in 1952 and still a hit, the book offers alternative ways to connect constellation stars to better match the figures they represent. To accomplish this, Rey’s figures included the fainter stars accessible to the ancients but more difficult for the modern skywatcher to spy.
Still, there are numerous examples of groups that match up pretty well with their names like Leo the lion, Hydra the water snake and Ursa Major the great bear. Today we’ll take a look at Gemini the twins, a wintertime constellation, but one that lingers into May in a most delightful way. Gemini represents the two identical twin brothers Castor and Pollux from ancient Greek mythology. These best of friends joined the expedition of Jason and the Argonauts in search of the golden fleece, the fleece of the gold-haired, winged ram that conferred the authority of kingship to its bearer.
Zeus made the brothers immortal by placing them in the sky as Gemini the twins. To this day, they stand in a shoulder-to-shoulder embrace as they saunter toward the western horizon on May nights. Because we view them from different directions as they cross the sky, constellations twist around from the time they first appear in the east until they disappear in the west.
The twins rise on their side, tilt east when due south and finally stand up straight by the time they migrate into the western sky. That’s why there’s no better time than now to see them. With arms comfortably tossed around each other’s shoulders stick-figure-style, the boys pause as if to take in the mellow glow of a May twilight.
A drought in India, a rescue in Nairobi, wildfires in Alberta, the Olympic Torch in a Brazilian pool, a hoverboard record set in France, the Jack In The Green parade in England, a drummer outdoors in Kazakhstan, a final rock concert in Beijing, and much more.
All week long, raging wildfires have swept across neighborhoods and forests of the city of Fort McMurray in Alberta, Canada, forcing more than 80,000 people to flee. The fire, driven by strong winds and hot, dry weather, is estimated to have burned more than 250,000 acres so far, destroying nearly 2,000 buildings, and will likely be one of Canada’s most expensive disasters, with insurance claims estimated to top $9 billion. Fortunately, there have been no casualties reported from the fire so far.
Cameron Smith is no stranger to these pages, having examined the role of evolution in human expansion into space (see Biological Evolution in Interstellar Human Migration), cultural changes on interstellar journeys (Human Universals and Cultural Evolution on Interstellar Voyages), as well as the composition of worldship crews (Optimal Worldship Populations). An anthropologist and prehistorian at Portland State University, Dr. Smith today offers up his thoughts on the emerging discipline he calls space anthropology. How do we adapt a field that has grown up around the origin and growth of our species to a far future in which humans may take our forms of culture and consciousness deep into the galaxy? What follows is the preface for Dr. Smith’s upcoming book Principles of Space Anthropology: Establishing an Evolutionary Science of Human Space Settlement, to be published by Springer later this year.
By Cameron M. Smith, PhD
In 1963, Siegfried J. Gerathewohl, NASA’s biotechnology chief, wrote the following passage early in his foundation text, Principles of Bioastronautics, outlining the need for this new field of study:
“Manned excursions into space require new types of vehicles, machines and hardware which were unknown in conventional flying. They will carry the traveler into such foreign environments as to pose serious problems of health and survival. The new field of medicine, which studies the human factors involved and the protective measures required, has been called space medicine. From its cooperation with modern technology, particularly with electronics, cybernetics, physics, and bionics, space biotechnology has branched out as a novel field of bioengineering.” [1:5-6]
At the time of that publication, fewer than ten people had been in space; the moon landings were yet vague plans, the robotic reconnaissance of our solar system was in its infancy, and virtually nothing was known of human biology in space. Two generations later, space exploration and space sciences are at an historical apex of activity and rapid technical progress. Low Earth Orbit has been continuously occupied by at least one person for over a quarter century, yielding thousands of scientific studies on space biology; Mars has been swarmed by robotic explorers seeking traces of life and mapping landscapes for human exploration and settlement; dozens of private companies and even individuals are re-inventing basic space exploration technologies with cheaper materials and methods than those of last century’s space age, with the aim of lowering the cost of space access; and astronomy has entered a new age, with space-based technologies identifying multitudes of exoplanets now slated to be examined for traces of life with methods just coming on-line.
One significant outcome of these many efforts to better understand our stellar neighborhood will be the settlement of space by populations of humans and their domesticates. The ancient dream of setting off across space to explore and settle new lands—for freedom, exploration, economic advantage, the safeguarding of humanity by spreading out from the home planet, and a multitude of other motivations—appears more likely than ever, and its earliest steps are being taken now. For example, the SpaceX corporation was founded “…to revolutionize space technology, with the ultimate goal of enabling people to live on other planets” [2], and indeed in October 2016 Elon Musk is set to announce detailed Mars settlement plans. Such proposals involve not just individual people but populations, which have their own biological and behavioral (cultural) properties. In the same way that space exploration required Gerathewohl’s bioastronautics, space settlement planning requires a field of study to ensure that plans are designed and carried out informed by all we know of the adaptive tools and techniques of our species.
Traditionally, the study of humanity’s adaptations has been the domain of anthropology. Over the last century this field has capably documented our species’ remote origins, long and complicated evolution, and myriad manifestations in the present, but it has only occasionally (and then unsystematically) forayed into humanity’s distant future (e.g. see [3,4,5]). It is a premise of this book that that future should include the human settlement of environments beyond Earth, particularly for the purposes of safeguarding humanity’s apparently unique mode of consciousness, its hologenome and many of its domesticates, and the totality of human knowledge—accumulated over about 3,000 generations since the origins of behavioral modernity—by establishing populations of humanity culturally and biologically independent of our home planet. I discuss arguments for space settlement in [5] and [6], but in the present book I focus on how the resources and expertise of anthropology may be deployed to assist in the goals of human space settlement.
While bioastronautics was established during the First Space Age (hereafter FSA) with a tight focus on safeguarding the short-term health of individuals or small crews, today’s plans include space settlement by communities, which raises many new issues: individual physiology is a different phenomenon than, say, population genetics, and individual psychology as short-term adaptation differs from cultural adaptation, which reshapes cultural norms in accordance with new circumstances. I tabulate some other such differences below.
| | Space Exploration | Space Settlement |
| --- | --- | --- |
| Goals | specific, short-term | general, long-term |
| Group Size | small (crews) | large (communities) |
| Social Organization | command hierarchy | civil community |
| Essential Social Units | crews | families and communities |
| Adaptive Means | technological, individual behavior, and some reversible acclimatization | technological, cultural and biological adaptation |
| Adaptive Timescale | short; weeks to months | long; multigenerational |
For these reasons a new field of study is required. In this book I propose, describe, and outline the scope of space anthropology or exoanthropology, and present some of my own results in this new discipline.
In the same way that Gerathewohl identified the need for his field in the quotation at the opening of this Preface, below I formally outline the need for space anthropology:
Space settlement will require novel biological and cultural adaptations to support populations of humans, on multigenerational timescales, in environments so far unfamiliar to our species even after 100,000 years of human cultural and biological adaptation to myriad Earth environments. The new field of anthropology that studies such adaptive efforts is space anthropology or exoanthropology, exo- referring to beyond Earth, in the same way it is used in the term exobiology.
Specifically, I propose four main functions for space anthropology; these are described below.
The scope of exoanthropology, then, will be broad. I propose it as an applied form of anthropology with the specific goal of evaluating the adaptive capacities of our species, both biologically and culturally, so that they may be best deployed to assist in successful permanent space settlement. This will guide space settlement planning in a genuinely adaptive and evolutionarily-informed way, applying the lessons of billions of years of Earth life adaptation to what I consider to be the completely natural and expected dispersal of life throughout the solar system and beyond. This book, then, will thoroughly review the phenomenon of evolutionary adaptation, particularly among our species.
Human ‘adaptive tools’ are biological and cultural (the latter subsuming technology); an array of such adaptations so far recognized in the Earth’s cold, high-altitude and hot regions are tabulated below as examples—these will be fully explored later in this book.
Arctic / Cold

Limiting Factors
* Extremely low temperatures for long periods
* Extreme light / dark seasonal cycles
* Low biological productivity

Biological
* increased Basal Metabolic Rate
* increased shivering, vasoconstriction and cold thermoregulation activity and efficiency
* compact, heat-retaining body stature

Cultural and Technological
* bilateral kinship = demographic flexibility
* clothing insulates but can prevent sweating
* semi-subterranean housing, including the igloo, made of a local, free, inexhaustible resource (snow)
* high-fat diet yielding many calories and vitamins
* low tolerance of self-aggrandizement
* low tolerance of adolescent bravado
* high value on educating the young
* social fission
* mobile, field-maintainable, reliable tools
* population control methods, including voluntary suicide and infanticide
* high value on apprenticeship
* low tolerance for complaint: 'laugh don't cry'

High Altitude

Limiting Factors
* Low oxygen pressures
* Nighttime cold stress
* Low biological productivity
* High neonatal mortality

Biological
* dense capillary beds shorten distance of oxygen transport
* larger placenta providing fetus with more blood-borne oxygen
* greater lung ventilation (capacity)

Cultural and Technological
* promotion of large families to offset high infertility
* use of coca leaves to promote vasoconstriction and caffeine-like alertness
* woolen clothing retains heat when wet
* trade connections with lowland populations

Arid / Hot

Limiting Factors
* Low and uncertain rainfall
* High evaporation rate
* Low biological productivity

Biological
* tall, lean, heat-dumping body
* lowered body core temperature
* increased sweating efficiency
* lower urination rate
* increased vasodilation efficiency

Cultural and Technological
* flexible kinship & land tenure system = demographic flexibility matching shifting water resources
* intercourse taboos maintaining a sustainable population
* loose, flowing clothing blocks sunlight
* wide sandals block ground-reflected sunlight
* nakedness socially accepted during physical labor
In fulfilling Function 1, exoanthropology will survey humanity’s adaptations through time and across the globe, identifying patterns pertinent to space settlement planners. In fulfilling Function 2, it will review the adaptive competence of many of our species’ adaptive tools, allowing us to evaluate our readiness for space settlement and, where we find ourselves unready, suggest courses of action; it will also characterize foreseeable space settlement conditions and limiting factors as needed. In fulfilling Function 3, recommendations for space settlement planners will be formulated, varying in specificity, based on the lessons identified in the surveys serving Functions 1 and 2. Finally, in fulfilling Function 4, directly actionable engineering and other design recommendations will be made, materially assisting in space settlement planning.
References to Author’s Preface
1. Gerathewohl, S. 1963. Principles of Bioastronautics. Prentice-Hall, New Jersey.
2. SpaceX website (accessed 14 April 2016): http://www.spacex.com/about.
3. Finney, B. and E. Jones (eds). 1985. Interstellar Migration and the Human Experience. Berkeley: University of California Press.
4. Finney, B. 1992. Space Migrations: Anthropology and the Humanization of Space. NASA SP-509: Space Resources, Volume 4: Social Concerns. Washington, D.C.
5. Smith, C.M. and E.T. Davies. 2012. Emigrating Beyond Earth: Human Adaptation and Space Colonization. Springer, Berlin.
6. Smith, C.M. and E.T. Davies. 2005. The Extraterrestrial Adaptation. Spaceflight 47(12):46.
7. Morphy, H. and G. Harrison (eds). 1998. Human Adaptation. Oxford: Oxford University Press.
© 2016 by Cameron M. Smith, PhD
Using case studies on credit lending, employment, higher education, and criminal justice, the report we are releasing today illustrates how big data techniques can be used to detect bias and prevent discrimination. It also demonstrates the risks involved, particularly how technologies can deliberately or inadvertently perpetuate, exacerbate, or mask discrimination.
The purpose of the report is not to offer remedies to the issues it raises, but rather to identify these issues and prompt conversation, research -- and action -- among technologists, academics, policy makers, and citizens, alike.
The report includes a number of recommendations for advancing work in this nascent field of data and ethics. These include investing in research; broadening and diversifying technical leadership; cross-training and expanded literacy on data discrimination; bolstering accountability; and creating standards for use within both the government and the private sector. It also calls on computer and data science programs and professionals to promote fairness and opportunity as part of an overall commitment to the responsible and ethical use of data.
Sir David Attenborough, creator of the world’s best nature documentaries (and now namesake of a research vessel), turns 90 this Sunday. To celebrate, I watched all 79 episodes of his Life series and ranked them over at The Atlantic. Meanwhile, here, I’ve created two difficult quizzes to test your knowledge of the man’s work, and of the animals whose lives he has so memorably described.
Each consists of ten quotes from one of his classic series, from Life on Earth in 1979 to Life in Cold Blood in 2008. Your job is to work out which animal he’s talking about. Enjoy!
And if you crave further ~~punishment~~ fun, try the Really Really Hard edition.
Astronaut Alan Shepard poses for LIFE photographer Ralph Morse, 1961.
The package volatile-highlights temporarily highlights changes to the buffer made by certain commands that add blocks of text at once. For example, if you paste (yank) a block of text, it will be highlighted until you press the next key. It’s just a small tweak, but it gives a nice bit of visual feedback.
You can install it in the normal way:
;; volatile highlights - temporarily highlight changes from pasting etc
(use-package volatile-highlights
  :config
  (volatile-highlights-mode t))
Reuters photographer Aly Song recently visited a small neighborhood in a corner of Shanghai, China—a remnant patch of smaller houses and shops where residents live in homes surrounded by demolition debris, a concrete wall, and looming skyscrapers on all sides. Song writes “On paper, the Guangfuli neighborhood is a real estate investor's dream: a plot in the middle of one of the world's most expensive and fast-rising property markets. But the reality is more like a developer's nightmare, thanks to hundreds of people living there who have refused to budge from their ramshackle homes for nearly 16 years as the local authority sought to clear the land for new construction.”
ATM maker NCR Corp. says it is seeing a rapid rise in reports of what it calls “deep insert skimmers,” wafer-thin fraud devices made to be hidden inside of the card acceptance slot on a cash machine.
KrebsOnSecurity’s All About Skimmers series has featured several stories about insert skimmers. But the ATM manufacturer said deep insert skimmers are different from typical insert skimmers because they are placed in various positions within the card reader transport, behind the shutter of a motorized card reader and completely hidden from the consumer at the front of the ATM.
NCR says these deep insert skimming devices — usually made of metal or PCB plastic — are unlikely to be affected by most active anti-skimming jamming solutions, and they are unlikely to be detected by most fraudulent device detection solutions.
“Neither NCR Skimming Protection Solution, nor other anti-skimming devices can prevent skimming with these deep insert skimmers,” NCR wrote in an alert sent to banks and other customers. “This is due to the fact the skimmer sits well inside the card reader, away from the detectors or jammers of [NCR’s skimming protection solution].”
The company said it has received reports of these skimming devices on ATMs from all manufacturers in Greece, Ireland, Italy, Switzerland, Sweden, Bulgaria, Turkey, the United Kingdom and the United States.
“This suggests that ‘deep insert skimming’ is becoming more viable for criminals as a tactic to avoid bezel mounted anti-skimming devices,” NCR wrote. The company said it is currently testing a firmware update for NCR machines that should help detect the insertion of deep insert skimmers and send an alert.
A DEEP DIVE ON DEEP INSERT SKIMMERS
Charlie Harrow, solutions manager for global security at NCR, said the early model insert skimmers used a rudimentary wireless transmitter to send card data. But those skimmers were all powered by tiny coin batteries like the kind found in watches, and that dramatically limits the amount of time that the skimmer can transmit card data.
Harrow said NCR suspects that the deep insert skimmer makers are using tiny pinhole cameras hidden above or beside the PIN pad to record customers entering their PINs, and that the hidden camera doubles as a receiver for the stolen card data sent by the skimmer nestled inside the ATM’s card slot. He suspects this because NCR has never actually found a hidden camera along with an insert skimmer. Also, a watch-battery run wireless transmitter wouldn’t last long if the signal had to travel very far.
According to Harrow, the early model insert skimmers weren’t really made to be retrieved. Turns out, that may have something to do with the way card readers work on ATMs.
“Usually what happens is the insert skimmer causes a card jam,” at which point the thief calls it quits and retrieves his hidden camera — which has both the card data transmitted from the skimmer and video snippets of unwitting customers entering their PINs, he said. “These skimming devices can usually cope with most cards, but it’s just a matter of time before a customer sticks an ATM card in the machine that is in less-than-perfect condition.”
The latest model deep insert skimmers, Harrow said, include a tiny memory chip that can hold account data skimmed off the cards. Presumably this is preferable to sending the data wirelessly because writing the card data to a memory chip doesn’t drain as much power from the wimpy coin battery that powers the devices.
The deep insert skimmers also are designed to be retrievable:
“The ones I’ve seen will snap into some of the features inside the card reader, which has got various nooks and crannies,” Harrow said. “The latest ones also have magnets in them which are used to hold them down against the card reader.” Harrow says the magnets are on the opposite side of the device from the card reader, so the magnets don’t interfere with the skimmer’s job of reading the data off of the card’s magnetic stripe.
Many readers have asked why the fraudsters would bother skimming cards from ATMs in Europe, which long ago were equipped to read data off the chip embedded in the cards issued by European banks. The trouble is that virtually all chip cards still have the account data encoded in plain text on the magnetic stripe on the back of the card — mainly so that the cards can be used in ATM locations that cannot yet read chip-based cards (i.e., the United States).
When thieves skim data from ATMs in Europe, they generally sell the data to fraudsters who will encode the card data onto counterfeit cards and withdraw cash at ATMs in the United States or in other countries that haven’t yet fully moved to chip-based cards. In response, some European financial institutions have taken to enacting an anti-fraud mechanism called “geo-blocking,” which prevents the cards from being used in certain areas.
“Where geo-blocking has been widely or partially implemented, the international loss profile is very different, with minimal losses reported,” wrote the European ATM Security Team (EAST) in their latest roundup of ATM skimming attacks in 2015 (for more on that, see this story). “From the perspective of European card issuers the USA and the Asia-Pacific region are where the majority of such losses are being reported.”
Even after most U.S. banks put in place chip-capable ATMs, the magnetic stripe will still be needed because it’s an integral part of the way ATMs work: Most ATMs in use today require a magnetic stripe for the card to be accepted into the machine. The principal reason for this is to ensure that customers are putting the card into the slot correctly, as embossed letters and numbers running across odd spots in the card reader can take their toll on the machines over time.
I’ve gone on some mini-rants in other posts about starting daemons immediately after they’re installed in Ubuntu and Debian. Things are a little different in Ubuntu 16.04 and I thought it might be helpful to share some tips for that release.
Before we do that, let’s go over something. I still don’t understand why this is a common practice within Ubuntu and Debian.
Take a look at the postinst-systemd-start script within the init-system-helpers package (source link):

if [ -d /run/systemd/system ]; then
    systemctl --system daemon-reload >/dev/null || true
    deb-systemd-invoke start #UNITFILES# >/dev/null || true
fi
daemon-reload is totally reasonable. We must tell systemd that we just deployed a new unit file or it won’t know we did it. However, the next line makes no sense. Why would you immediately force the daemon to start (or restart)? The deb-systemd-invoke script does check whether the unit is disabled before taking action on it, which is definitely a good thing. Still, this automatic management of running daemons shouldn’t be handled by a package manager.
If you don’t want your package manager handling your daemons, you have a few options:
The first option involves creating a script called /usr/sbin/policy-rc.d with a special exit code:

# echo -e '#!/bin/bash\nexit 101' > /usr/sbin/policy-rc.d
# chmod +x /usr/sbin/policy-rc.d
# /usr/sbin/policy-rc.d
# echo $?
101
This script is checked by the deb-systemd-invoke script in the init-system-helpers package (source link). As long as this script is in place, dpkg triggers won’t cause daemons to start, stop, or restart. You can start your daemon with systemctl start service_name whenever you’re ready.
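The policy-rc.d contract is easy to see in isolation. Here is a minimal sketch that simulates the check in a throwaway directory instead of /usr/sbin, so it is safe to run anywhere; the sandbox path and echoed messages are illustrative only, not part of the real dpkg plumbing:

```shell
# Simulate how deb-systemd-invoke-style tooling consults policy-rc.d.
# Assumption: a scratch directory stands in for /usr/sbin here.
sandbox=$(mktemp -d)
printf '#!/bin/sh\nexit 101\n' > "$sandbox/policy-rc.d"
chmod +x "$sandbox/policy-rc.d"

# Exit code 101 means "action forbidden by policy".
if "$sandbox/policy-rc.d"; then
    echo "policy allows daemon start"
else
    echo "policy denies daemon start (exit $?)"
fi
# prints: policy denies daemon start (exit 101)

rm -rf "$sandbox"
```

Any nonzero policy exit code other than 101 has its own meaning in the invoke-rc.d interface, but 101 is the one that blocks daemon actions outright.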
If you need to prevent a single package from starting after installation, you can use systemd’s mask feature for that. When you run systemctl mask nginx, it symlinks /etc/systemd/system/nginx.service to /dev/null. When systemd sees that, it won’t start the daemon.
However, since the package isn’t installed yet, we can just mask it with a symlink:
# ln -s /dev/null /etc/systemd/system/nginx.service
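A quick sanity check that the mask symlink is in place can be sketched as follows. To keep the sketch safe to run without root, it uses a scratch directory standing in for /etc/systemd/system; the directory name is an assumption for illustration:

```shell
# Assumption: a scratch dir stands in for /etc/systemd/system,
# so this check runs without root and without touching real units.
unitdir=$(mktemp -d)
ln -s /dev/null "$unitdir/nginx.service"

# systemd treats a unit file symlinked to /dev/null as masked.
if [ "$(readlink "$unitdir/nginx.service")" = "/dev/null" ]; then
    echo "nginx.service is masked"
fi
# prints: nginx.service is masked

rm -rf "$unitdir"
```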
You can install nginx now, configure it to meet your requirements, and start the service. Just remove the mask, then enable and start the unit:

# systemctl unmask nginx
# systemctl enable nginx
# systemctl start nginx
The post Preventing Ubuntu 16.04 from starting daemons when a package is installed appeared first on major.io.
David McComas (Princeton University) calls what his team of researchers have learned about the solar wind at Pluto ‘astonishing,’ adding “This is a type of interaction we’ve never seen before anywhere in our Solar System.” The reference is to data from the Solar Wind Around Pluto (SWAP) instrument that flew aboard New Horizons. McComas knows the instrument inside out, having led its design and development at the Southwest Research Institute.
Image: The first analysis of Pluto’s interaction with the ubiquitous space plasma known as the solar wind found that Pluto has some unique and unexpected characteristics that are less like a comet and more like larger planets. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute.
What startled McComas was that Pluto’s interactions with the solar wind are nowhere near what had been predicted. This stream of charged particles flowing outbound from the Sun can reach speeds of 500 kilometers per second and above, a ragged, bursting outrush that we may one day be able to capitalize on as a way to drive spacecraft. While planets act to divert the solar wind, comets slow it far more gently, an effect that researchers assumed they would find at Pluto.
Instead, Pluto’s complexities place it somewhere between planet and comet as its atmosphere copes with the solar wind. We learn in the paper from McComas and team that despite expectations, Pluto’s gravity is capable of holding electrically charged ions in its extended atmosphere. The dwarf planet has an ion tail extending ‘downwind’ to a distance of about 100 Pluto radii (120,000 kilometers). Earth has a similar ion tail, and Pluto’s is said to show ‘considerable structure.’
From the paper:
Initial studies of the solar wind interaction with Pluto’s atmosphere…, all assuming the absence of an intrinsic magnetic field, suggested that it would depend on whether the atmospheric escape flux is strong — producing a ‘comet-like’ interaction where the interaction region is dominated by ion pick-up and many times larger than the object — or weak — producing a ‘Mars-like’ interaction dominated by ionospheric currents with limited upstream pick-up and where the scale size is comparable to the object.
And Pluto, we learn, behaves more like its larger planetary cousins, as the image below suggests.
Image: This figure shows the size scale of the interaction of Pluto (at lower left) with the solar wind. Scientists thought Pluto’s gravity would not be strong enough to hold heavy ions in its extended atmosphere, but Pluto, like Earth, has a long ion tail (red) loaded with heavy ions from the atmosphere. Pluto has a very thin “Plutopause” (purple), the boundary between Pluto’s heavy ion tail and the sheath of the shocked solar wind (blue) that presents an obstacle to its flow. Credit: American Geophysical Union.
The SWAP instrument was able to separate heavy ions of methane, which is the primary gas escaping the atmosphere, from the light hydrogen ions coming from the Sun. From it we also learn that Pluto has a thin ‘Plutopause,’ a boundary region where the heavy ion tail meets the solar wind along a sheath of particles. Moreover, the solar wind is not blocked until it reaches within about 3000 kilometers of the dwarf planet — the paper calls this the ‘upstream standoff distance’ — a bit less than three Pluto radii.
As the paper notes, we’ll have a good deal of data from different parts of the Solar System to examine as we try to put Pluto into context:
While the small size of the interaction region relative to Pluto is reminiscent of Mars and Venus, we note that recent observations of the solar wind interaction with the relatively weakly out-gassing Comet 67P Churyumov-Gerasimenko by instruments on ESA’s Rosetta spacecraft show deflection of the solar wind with relatively modest decrease in speed. We anticipate interesting scientific discussions of the relative roles of atmospheric escape rate, solar wind flux, and IMF strength at Mars, Comet 67P and Pluto as the data from the MAVEN, Rosetta and New Horizons spacecraft are further analyzed.
And as McComas notes in this Princeton news release, “The range of interaction with the solar wind is quite diverse, and this gives some comparison to help us better understand the connections in and beyond our solar system. The SWAP data will … be reanalyzed … for many years to come as the community collectively grapples with Pluto’s unique solar wind interaction — one that is unlike that at any other body in the solar system.”
Now that recreational quadcopters* are widely available from e-tailers, big-box stores, and even your local pharmacy, it may be an appropriate time to look at the word drone and its various meanings.
The original meaning of drone is “a male honeybee.” Drone bees, unlike the sterile female worker bees, don’t do the work of gathering nectar or making honey, nor do they have stingers with which to defend the hive; their primary purpose in life is to mate with a queen bee, after which the drone dies.
Based on the drone bees’ apparent lack of productive labor, which is especially striking in contrast to the industriousness of the worker bees, drone took on a figurative sense starting in the 1500s, being used to mean “an idle person who lives off others.” It often referred to an aristocrat or member of the upper classes, as in this passage from the 1858 poem “The Deacon’s Masterpiece,” in which Oliver Wendell Holmes uses it in reference to King George II: Seventeen hundred and fifty-five / Georgius Secundus was then alive, / Snuffy old drone from the German hive.
Also in the 1500s, drone took on several other senses, both as a verb (“to make a continuous low dull humming sound” and “to speak in a monotonous tone”) and as a noun (“a continuous low humming or buzzing sound” and “a pipe on a bagpipe that lacks finger holes and produces a single tone”). These senses presumably came about in reference to the buzzing sound of the drones’ wings as they congregate in a patch of warm air waiting for a queen to arrive.
The first use of drone to refer to a pilotless or remotely-piloted aircraft came in the 1930s, when the United States Navy developed such aircraft as targets for gunnery practice. These aircraft resembled drone bees in many ways—in their ability to fly, in the constant buzzing noise from their propellers, in their lack of armament, and in the fact that they weren’t expected to survive their mission. Those qualities, combined with the planes’ dependence on an operator on the ground or in a mother ship or aircraft (likened to a queen bee) made the name drone seem appropriate. Nowadays, of course, drones are used for much more than target practice: some are armed, many pilot themselves autonomously, and most are designed to be reusable, but the name has stuck.
Thank you for visiting the American Heritage Dictionary at ahdictionary.com!
*(Quadcopter is one of the hundreds of new words that we’re adding to the dictionary for our next update later this summer.)
The AT&T TSD was an early 1990s telephone encryption device. It was digital. Voice quality was okay. And it was the device that contained the infamous Clipper Chip, the U.S. government's first attempt to put a back door into everyone's communications.
Marcus Ranum is selling a pair on eBay. He has the description wrong, though. The TSD-3600-E is the model with the Clipper Chip in it. The TSD-3600-F is the version with the insecure exportable algorithm.
Conforguration is a basic working example of configuration management in Org. I use source code blocks and tangling to make shell scripts that get synced to a remote machine and then download, install and configure R from source.
conforguration.org (that’s a file, not a site) has all the code. It really did work for me, and it might work for you. Is this a reasonable way of doing configuration management? I don’t know, but it’s worth trying. I’ll add things as I come across them.
I don’t know anything about formal configuration management, and I’ve never done literate programming and tangling in Org before. Anyone who’s interested in having a go at conforguring something else is most welcome to do so!
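To make the pattern concrete, here is a rough sketch (file name, R version, and paths invented for illustration) of the kind of thing conforguration does: a shell source block that Org tangles out to a standalone script:

```org
* Install R from source
#+BEGIN_SRC sh :tangle install-r.sh :shebang "#!/bin/sh"
  # Illustrative only: fetch, build, and install R under $HOME/r
  cd /tmp
  curl -O https://cran.r-project.org/src/base/R-3/R-3.3.0.tar.gz
  tar xzf R-3.3.0.tar.gz
  cd R-3.3.0
  ./configure --prefix=$HOME/r
  make && make install
#+END_SRC
```

Running org-babel-tangle (C-c C-v t) writes the block out as install-r.sh, which can then be synced to the remote machine and run there.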
Don’t look now but you may see a meteor or two flash by if you’re up in the wee hours tomorrow through the weekend. It’s the peak of the annual Eta Aquarid meteor shower. If you live in the southern U.S. or tropics, up to 25-40 meteors per hour could fly by during maximum on the mornings of May 5-6. Skywatchers further north will spot closer to 10-15 per hour.
The north-south disparity occurs because of the altitude of the radiant. That’s the point in the sky from which the meteors appear to originate. For the Eta Aquarids, that’s the constellation Aquarius which doesn’t even rise for northerners until around 3 a.m. or about 1-1/2 hours before the start of dawn. Because the radiant never gets very high, many otherwise fine meteors are cut off by the horizon.
But the further south you go, the earlier Aquarius rises and the higher the radiant. Translation: more meteors! North or south, conditions will be ideal with no moon to make a fuss and brighten the sky. For the best view, find a location with as little light pollution as possible and set up a reclining chair facing east. Make sure you’ve got enough layers to stay warm and bring a radio or a friend (a very special friend willing to get up before dawn) to keep you company. The next step’s my favorite: look up, watch and wait.
Back in 2013, while out before dawn to observe comets and aurora, I was surprised to catch a couple very special Eta Aquarids, two so-called earthgrazers. These are meteoroids that arrive at Earth at a very low angle. Instead of following a more typical, steeply-slanted path downward, they skim the upper air, often taking many seconds to finally fizzle out. Earthgrazers are more common when a shower’s radiant is either just below or a short distance above the horizon. In my opinion, they’re the equal of bright fireballs in their own way.
Nearly all meteor showers have parents. They originate from larger objects that have broken apart into smaller and smaller pieces. In most cases that’s a comet, and that’s true for the Eta Aquarids which call Halley’s Comet “mama”. Every May and October, Earth intersects the orbit of the comet; debris left along its trail strikes Earth’s atmosphere at 147,000 mph (237,000 km/hr), fast enough to completely burn up in a sudden flash, or meteor.
Here’s hoping you have great weather as well as the will to get up early for a look at the shower. Southerners can start around 2 a.m. while for us in the north, closer to 3 is better. Good luck and happy Halley-watching!
As automation lightens the load on IT operations personnel, and the DevOps cultural shift brings dev and ops together, people from managers to sys admins may be worrying about their jobs.
Furthermore, as DevOps takes hold, other roles in the organization are set to transform. DevOps potentially reworks project management methodologies, so what about the project managers? DevOps flattens organizations, so should middle managers be worried?
Fear not! Automation, and DevOps more broadly, are creating new job opportunities, expanding and empowering current roles, and enabling folks at every level to better collaborate and advance their teams along with their own careers.
In this recorded webinar, industry analyst and president of Intellyx Jason Bloomberg offers some broad observations about the organizational impact DevOps is having across enterprises.
Pauly Comtois, VP of DevOps at Hearst Business Media, joins Jason for a thought-provoking, in-depth discussion of how DevOps impacts individuals’ roles within the organization, with some first-person stories of Hearst Business Media’s DevOps transformation.
Watch the recording to learn:
Contact science activities on Sol 1330 went well, and we’re ready to drill at “Okoruso.” As seen in the above MAHLI image, this target looks like pretty typical Stimson bedrock, so it will be helpful to compare to the altered rock that we sampled at Lubango.
Today’s two-sol plan is focused on drilling and MAHLI imaging on the first sol, with a lot of targeted remote sensing on the second sol. Activities on the second sol include a Mastcam multispectral observation of the drill hole, a large Mastcam mosaic to document the local geology, ChemCam observations of “Kobos” and “Strathmore” to investigate altered and unaltered rocks, and a long distance ChemCam RMI mosaic as part of a change detection experiment. We’ll also acquire a Mastcam tau, ChemCam passive sky, and Navcam movie to monitor the atmosphere.
I’m impressed by how efficient we’ve become at drilling (we just wrapped up the last drill hole a couple of sols ago). Sometimes I need to pause and remind myself how unique and exciting this is. On what seems like just a typical Wednesday, we’re drilling a hole on another planet! I’m grateful for the skilled operations team that makes this seem so easy, and I’m looking forward to seeing results from the newest drill hole on Mars.
By Lauren Edgar
Dates of planned rover activities described in these reports are subject to change due to a variety of factors related to the Martian environment, communication relays and rover status.
David Zuber over at Storax has a useful post on enabling multiple inputs in an Org capture template. Templates are extraordinarily useful. I have several that I use several times a day. They do have a problem, though: you can input data into only one spot in the template.
Zuber is using capture templates to implement a ticketing system for his workflow. He wants to be able to enter a project and incident number and have them replicated elsewhere in the template automatically. While that's not supported directly, it's fairly easy to implement using the custom lisp expressions that the templates do support.
I've used custom lisp expressions in the templates I use to manage my blog queue but I hadn't thought of using them to get keyboard input. It's a nice idea that makes the templates potentially more useful. If you're using capture templates, you should definitely give it a read. If you aren't using capture templates you should consider ways that they might simplify your workflow.
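One built-in mechanism worth mentioning alongside the custom-lisp route: Org templates can re-insert the text entered at an earlier %^{...} prompt using %\1, %\2, and so on. A ticket-style sketch (the key, file name, and property names here are invented for illustration):

```elisp
(setq org-capture-templates
      '(("t" "Ticket" entry (file+headline "~/org/tickets.org" "Inbox")
         ;; %^{...} prompts once; %\1 and %\2 re-insert those answers
         "* TODO %^{Project}-%^{Incident}: %?
:PROPERTIES:
:PROJECT:  %\\1
:INCIDENT: %\\2
:END:")))
```

The %(...) lisp-expression syntax Zuber uses is more flexible, but for plain repetition the backreferences may be all you need.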
Forbes estimates that football player Laremy Tunsil lost $7 million in salary because of an ill-advised personal video made public.
If you’ve ever wondered how having multiple swap devices can work, here’s your DragonFly-specific answer.
When you don’t have the technology to get to an interesting place like the Oort Cloud, it’s more than a little helpful when nature brings an Oort Cloud object to you. At least we think that the object known as C/2014 S3 (Pan-STARRS) has moved into the warmer regions of the Solar System from the Oort. A gravitational nudge in that distant region would be all it took to send the object, with an orbital period now estimated to be 860 years, closer to the Sun.
And here things get interesting, because C/2014 S3 is the first object discovered on a long-period cometary orbit that shows all the spectral characteristics of an inner system asteroid. The level of activity on the object, apparently the result of sublimation of water ice, is five to six orders of magnitude lower than what we would expect from an active long-period comet at a similar distance from the Sun. Karen Meech (University of Hawaii) and colleagues believe that the object formed in the inner system at about the same time as the Earth, after which it was ejected before it could be bathed by long-term solar radiation.
Meech calls C/2014 S3 “…the first uncooked asteroid we have found,” which means we’re not only dealing with a building block of the early Solar System, but one that has been preserved for billions of years in a pristine environment. Further underlining its unusual status is the fact that C/2014 S3 is not developing the tail we would expect from a long-period comet as it approaches the Sun. Hence the moniker ‘Manx object’ given by Meech and team, a reference to the tail-less breed of cat from the Isle of Man.
Image: Observations with ESO’s Very Large Telescope, and the Canada France Hawaii Telescope, show that C/2014 S3 (Pan-STARRS) is the first object to be discovered that is on a long-period cometary orbit, but that has the characteristics of a pristine inner Solar System asteroid. It may provide important clues about how the Solar System formed. This image of the comet was acquired using the Canada France Hawaii Telescope. Credit: K. Meech (IfA/UH) / CFHT/ESO.
A population of objects of the C/2014 S3 class would be useful indeed, and as this University of Hawaii Institute for Astronomy news release suggests, would help us differentiate between differing models of Solar System development. The problem is that while several of these models can duplicate the Solar System’s current configuration, some require the migration of the gas giants while others do not. Moreover, the models yield different predictions on the amount of rocky material expelled early on into the Oort Cloud.
Just how widely these models vary is explained in the paper. Here it describes the gas giant migration model:
The “Grand Tack” model starts the simulation of solar system formation at an early phase, when the giant planets grew and migrated in a gas-rich protoplanetary disk. During their inward migration, the giant planets scattered inner solar system material outward; during their outward migration, they implanted a significant amount of icy planetesimals [from 3.5 to 13 astronomical units (AU)] into the inner solar system. The Grand Tack model predicts the presence of rocky objects in the Oort cloud at an icy comets/rocky asteroids ratio of 500:1 to 1000:1…
But how we view the Oort Cloud varies depending upon which model we accept:
Other dynamical models, which assume nonmigrating giant planets, make different predictions about the fraction of the Oort cloud population comprising planetesimals initially within the asteroid belt or the terrestrial planet region. These predictions range from 200:1 to 2000:1. Other models do not explicitly estimate the mass of rocky planetesimals eventually implanted in the Oort cloud, but from the amount initially available, it is reasonable to expect that the ratio of icy planetesimals to rocky planetesimals in the final Oort cloud is between 200:1 to 400:1. Instead, a recent radically different model of terrestrial planet formation predicts that the planetesimals in the inner solar system always had a negligible total mass; in this case, there would be virtually no rocky Oort cloud population.
Be aware that C/2014 S3 is not the first inactive object discovered on a long-period comet orbit, with 1996 PW, found in 1996, being characterized as an extinct comet or asteroid ejected into the Oort Cloud. Five other ‘Manx candidates’ have been observed by Meech’s team, all of them showing colors similar to 1996 PW. C/2014 S3 is the only candidate to date that shows the spectrum of an S-type asteroid, a type usually found in the main asteroid belt.
So it’s an intriguing find, and according to the paper, learning how many S-type objects like it exist in the Oort Cloud will be a useful test of the various models. The paper argues that making a selection between the models will require characterization of up to 100 such ‘Manx objects,’ with the number of S-types helping us determine the ratio of icy to rocky objects in the Oort. As we are now discovering Manx objects on the order of about 15 per year as the Pan-STARRS project continues, scientists will soon have a sufficient number to work with.
The paper is Meech et al., “Inner solar system material discovered in the Oort cloud,” Science Advances Vol. 2, No. 4 (29 April 2016). Full text.
On 29 April 2016, ESA astronaut Tim Peake controlled, from the International Space Station, a rover nicknamed Bridget at Airbus Defence and Space in Stevenage as part of an international experiment to prepare for human–robotic missions to the Moon, Mars and beyond. Building on previous test campaigns in the ESA-led Meteron project, the experiment saw ESA, the UK Space Agency and Airbus Defence and Space working together to investigate distributed control of robots in a mock-Mars environment (full text).
Following the successful rover-driving experiment on 29 April, we asked ESA's Kim Nergaard, ground segment manager for Meteron at ESOC, for a brief 'after-action' report. Here's what he sent:
Two of the primary objectives from the 29 April rover-driving experiment (dubbed 'SUPVIS-M') were crew driving of a rover in low-light conditions and end-to-end operation of a human-rover system with handover of control between operators.
The aim of the crew experiment was not to ensure that Timothy Peake would drive the rover flawlessly around the Mars Yard in Stevenage, but rather to collect information on how his 'situational awareness' was impacted by the limited information available to him and to get metrics on how the different centres performed under such stressful conditions.
These objectives were fully met during Tim's activity last week. The operations team at ESOC, led by Paul Steele, the Meteron System Operations Manager, was responsible for the overall coordination of the experiment. Throughout the experiment day, the team here worked closely with our partners in Airbus UK and Belgium's ISS User Support and Operations Centre. The team at ESOC have gained valuable experience in this type of operations over the past years, having also operated the Meteron Opscom-1 and Opscom-2 experiments, not to mention benefiting from the overall mission operations expertise in ESOC.
Meteron stands for "Multi-Purpose End-To-End Rover Operations Network" and the SUPVIS-M experiment satisfied each word in that acronym. ESOC used generic multi-purpose infrastructure systems for ground and rover monitoring and control, a complete end-to-end scenario was in operation (multiple ground teams, crew and rover), rover operations were the main focus and the whole system was based on a network architecture using the Disruption Tolerant Network (DTN) protocol. DTN is very useful for this type of complex system as it allows us to implement an IP-like network in the non-IP-friendly environment of space, with its long distances causing delays and disruptions due to loss of signal, etc.
We have collected most of the data from the experiment already and once all of it is in place the engineers will analyse the data and prepare the outputs. These will be used when preparing for future human and/or robotic missions to other destinations, in particular the Moon and Mars.
We also picked up a quote from ESA's David Parker, Director of Human Spaceflight and Robotic Exploration, who said:
This experiment further proves we can operate rovers while orbiting a planet, another significant step in our vision to send astronauts and robots together to explore our Solar System.
Our investments over the last years in human spaceflight and robotic exploration have shown concrete achievements today and prepared the way for our future ambitions to the Moon and beyond.
An Archive Science Review was successfully held in February, resulting in the need for some improvements in the data and metadata being delivered by the Rosetta instruments, some of which have been taken into account in this OSIRIS release.
The release covers the period 16 September – 19 December 2014 and includes narrow- and wide-angle camera images from Rosetta’s close observation phase when the spacecraft was just 8 km from the surface of Comet 67P/Churyumov-Gerasimenko, as well as pre- and post-landing imagery.
They show the astonishing detail of the comet surface at close range, including images used to help characterise Philae’s landing site at Agilkia.
The image set also includes incredible views of Philae drifting across the surface of the comet on 12 November 2014 as it approached Agilkia and then bounced out of view. One example is shown below – can you spot Philae? (Hint: check against the image mosaic of Philae's journey across the surface released after landing here).
Traditional computer security concerns itself with vulnerabilities. We employ antivirus software to detect malware that exploits vulnerabilities. We have automatic patching systems to fix vulnerabilities. We debate whether the FBI should be permitted to introduce vulnerabilities in our software so it can get access to systems with a warrant. This is all important, but what's missing is a recognition that software vulnerabilities aren't the most common attack vector: credential stealing is.
The most common way hackers of all stripes, from criminals to hacktivists to foreign governments, break into networks is by stealing and using a valid credential. Basically, they steal passwords, set up man-in-the-middle attacks to piggy-back on legitimate logins, or engage in cleverer attacks to masquerade as authorized users. It's a more effective avenue of attack in many ways: it doesn't involve finding a zero-day or unpatched vulnerability, there's less chance of discovery, and it gives the attacker more flexibility in technique.
Rob Joyce, the head of the NSA's Tailored Access Operations (TAO) group -- basically the country's chief hacker -- gave a rare public talk at a conference in January. In essence, he said that zero-day vulnerabilities are overrated, and credential stealing is how he gets into networks: "A lot of people think that nation states are running their operations on zero days, but it's not that common. For big corporate networks, persistence and focus will get you in without a zero day; there are so many more vectors that are easier, less risky, and more productive."
This is true for us, and it's also true for those attacking us. It's how the Chinese hackers breached the Office of Personnel Management in 2015. The 2014 criminal attack against Target Corporation started when hackers stole the login credentials of the company's HVAC vendor. Iranian hackers stole US login credentials. And the hacktivist who broke into the cyber-arms manufacturer Hacking Team and published pretty much every proprietary document from that company used stolen credentials.
As Joyce said, stealing a valid credential and using it to access a network is easier, less risky, and ultimately more productive than using an existing vulnerability, even a zero-day.
Our notions of defense need to adapt to this change. First, organizations need to beef up their authentication systems. There are lots of tricks that help here: two-factor authentication, one-time passwords, physical tokens, smartphone-based authentication, and so on. None of these is foolproof, but they all make credential stealing harder.
Second, organizations need to invest in breach detection and -- most importantly -- incident response. Credential-stealing attacks tend to bypass traditional IT security software. But attacks are complex and multi-step. Being able to detect them in process, and to respond quickly and effectively enough to kick attackers out and restore security, is essential to resilient network security today.
Vulnerabilities are still critical. Fixing vulnerabilities is still vital for security, and introducing new vulnerabilities into existing systems is still a disaster. But strong authentication and robust incident response are also critical. And an organization that skimps on these will find itself unable to keep its networks secure.
This essay originally appeared on Xconomy.
NYCBUG is meeting tonight, and Thomas Levine will be there to talk about Urchin, a shell-based test framework. The announcement also has future meeting/speaker dates noted.
Please install evil-escape first.
When the Swiper/Ivy candidate window pops up, you can press fd quickly to close the window. fd is the default key binding from evil-escape; I changed it to kj.
I assign the hotkey =,ii= to M-x counsel-imenu, and I can press kj to quit the imenu popup. As you can see, I don't need to move my fingers too much.
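For reference, a minimal init-file sketch of this setup (assuming evil-escape is already installed, e.g. from MELPA):

```elisp
(require 'evil-escape)
(evil-escape-mode 1)
;; "fd" is the package default escape sequence; use "kj" instead
(setq-default evil-escape-key-sequence "kj")
```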
As you might have guessed, I pay significant attention to UI/UX issues. Since I’ve been implementing undo functionality recently, I thought I’d share a few thoughts about it.
On the surface, undo seems easy: you record all changes you have made and revert the last one on demand. When you look at details, however, things can get messy.
First of all, how do you actually “record all changes”? You might record the state after every change (probably simple in many applications, but potentially a memory bottleneck). You might record just the deltas (better from the memory perspective, but a bit more involved). In the case of Logeox, however, I decided to do something different: instead of recording the state after each “change” (i.e., turtle command), I record the commands themselves. Since they are sufficient to completely recreate the state at any point in time, undoing the last command is easy: you just delete the last link in the chain and replay the remaining commands. (Theoretically, this means that undoing a change is a lot more expensive timewise than issuing commands, but I do not expect this to be a real problem. And if it ever becomes one, I can easily add some caching, like, e.g., recording the full state every 40 commands or so.)
Then comes another question: what exactly is a change? In my case, I have a few commands: go forward, turn left/right, pen up/down. Should all of them be undoable? This would be the simplest thing to do (and the first one I did), but probably not the best. After consulting half of my users (i.e., my dear wife), I decided that only actual turtle movements should be undoable. (By movement, I mean a command changing the turtle’s position, so turns are not included.)
This raises another question. Assume the following sequence of user actions: go forward, turn right, undo. Should the undo restore the direction from before the “go forward” command? I think the answer is positive (and fairly obvious). Consider now this: pen down, go forward, pen up. Should the undo change the state to pen down again? This time the answer seems less obvious, but still affirmative. And so I did it this way.
And how did I actually implement all this? It turned out to be pretty simple. I added another method to the command classes, isUndoable, which returns false for each command except GoForward. (I briefly considered making it return false by default and only overriding it in GoForward. After a while, I decided that I like defining it explicitly in each command a bit better. Yes, it looks like code duplication, but (1) it makes me think before deciding – for each and every command – whether it should be undoable, and (2) it felt kind of wrong to hardcode in the parent class the choice that happens to be more common.)
Undo, instead of just removing the last command from the list, keeps removing commands until the first one with isUndoable returning true. In most commands, the body of isUndoable is just return false; and nothing else. (And while at that, I fixed a small but irritating bug with the app crashing when trying to undo when no commands were yet recorded.)
And now that I’ve googled it a bit and given it a minute of thought, I guess there’s an even better way: to have the isUndoable() method return a field, initialized to the proper value by the constructor. And BTW, googling for that revealed a whole mess of opinions on, and possibilities for, fields with various qualifiers –
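A minimal sketch of the replay-based scheme described above. Only GoForward, Undo semantics, and isUndoable come from the post; Turtle, TurnRight, and History are invented stand-ins for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of replay-based undo: the history stores commands,
// and undo removes trailing entries, then rebuilds state by replaying.
abstract class Command {
    abstract void apply(Turtle t);
    abstract boolean isUndoable(); // defined explicitly in every subclass
}

class Turtle {
    double x = 0, y = 0;   // position
    double headingDeg = 0; // direction in degrees
}

class GoForward extends Command {
    final double dist;
    GoForward(double dist) { this.dist = dist; }
    @Override void apply(Turtle t) {
        t.x += dist * Math.cos(Math.toRadians(t.headingDeg));
        t.y += dist * Math.sin(Math.toRadians(t.headingDeg));
    }
    @Override boolean isUndoable() { return true; } // the only undoable command
}

class TurnRight extends Command {
    final double deg;
    TurnRight(double deg) { this.deg = deg; }
    @Override void apply(Turtle t) { t.headingDeg -= deg; }
    @Override boolean isUndoable() { return false; }
}

class History {
    private final List<Command> commands = new ArrayList<>();

    void run(Command c) { commands.add(c); }

    // Undo: drop trailing non-undoable commands, then one undoable command
    // (guarding against an empty history), and replay the rest from scratch.
    Turtle undoAndReplay() {
        while (!commands.isEmpty()
               && !commands.get(commands.size() - 1).isUndoable()) {
            commands.remove(commands.size() - 1);
        }
        if (!commands.isEmpty()) {
            commands.remove(commands.size() - 1);
        }
        Turtle t = new Turtle();
        for (Command c : commands) {
            c.apply(t);
        }
        return t;
    }
}
```

With this shape, the “go forward, turn right, undo” sequence restores both position and heading, because the turn is discarded along with the movement before the replay.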
If you happen to be testing kernel modules, DragonFly can now load them from a modules.local directory. This keeps modules that aren’t part of the base system separate. This is probably of most use to developers. It’s controlled by local_modules being set in /boot/loader.conf, and defaults to on.
(Updated for correct file location – thanks, swildner)
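For anyone who wants to turn it off, the setting would presumably look like this (the variable name comes from the commit; the value syntax is the usual loader.conf form):

```
# /boot/loader.conf -- skip loading from the separate modules.local directory
local_modules="NO"
```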
The word balloon in panel four is way down low to signify that there is a long silence before I say it. A lot of comedy is in the timing, and it’s hard to communicate timing in a written medium.
About The Prisoner. I owned the whole series as a DVD box set, then when it came out on Blu-ray I got that too. The changes were interesting, not just in that the Blu-ray version took up way fewer disks and was much sharper. The most interesting thing was that the documentaries and commentaries on the DVD were made while the show’s star and creator, Patrick McGoohan, was still alive. The ones on the Blu-ray were made after he died.
The tone was dramatically different.
One paints him as an uncompromising visionary who was just too far ahead of his time to be understood. The other describes a tyrannical jerk who bullied a beloved character actor to the point that the poor man had a nervous breakdown. It also says that he was making things up as he went, wrote himself into a corner, and had to just spew out some nonsense to wrap it all up.
Not surprising really, that a person who created a show about a man constantly fighting against everybody and everything was, in his real life, a bit prickly. Watch the show’s opening credits (if you have THREE MINUTES to kill. TV used to be very different.) and try to tell me that’s the work of a mellow guy. It was, essentially, a show about belligerently refusing to answer questions.
Also, who packs for a trip to a tropical island, and brings along a big picture of a tropical island? It’s like he intends to raise hell if the beach doesn’t match the picture.
auto-fill-mode is named by auto-fill-function via magnars.
The following pencil sketch was made using a 10-inch reflector and a 5 x 8 blank notecard, with the colors inverted via scanner. – Roger Ivester
NGC 3077 – Galaxy – Ursa Major
Date: April 25, 2016
Telescope: 10-inch Newtonian reflector
Eyepiece: 12.5 mm + 2.8x Barlow
At 57x, fairly easy to see, appearing mostly as a circular glow. At 91x, the galaxy becomes elongated with a NE-SW orientation and a brighter, though subtle, central region. When increasing the magnification to 256x, a stellar nucleus is visible, but cannot be held constantly. The surface brightness of this galaxy is fairly low, making it difficult from my moderately light-polluted backyard.
After viewing close neighboring galaxies, M81 and M82, which are much brighter and larger, NGC 3077 can be difficult, and maybe even a bit disappointing.
The following report and images are courtesy of Dr. James Dire of Hawaii.
By Dr. James R. Dire
NGC3077 is a peculiar galaxy located in Ursa Major near the galaxy pair M81 and M82. The galaxy was discovered by William Herschel on November 8, 1801. Although the galaxy looks like an elliptical galaxy in the eyepiece, images of it show it has wispy edges and dark dust lanes, atypical of elliptical galaxies. Carl Seyfert included it in his list of active galaxies (now called Seyfert galaxies) in 1943. Today it is considered an irregular galaxy. Its distorted shape is probably caused by gravitational interactions with the large spiral galaxy M81, similar to Barnard’s Galaxy, NGC6822, which is equally close to the Milky Way.
Magnitude estimates for NGC3077 range from 9.9 to 10.8. The galaxy is 5.3′ x 4.4′ in size and is located 12.8 ± 0.7 Mly away. The galaxy is located three-quarters of a degree east-southeast of M81.
The first image was taken with a Stellarvue SV102 102 mm apochromatic refractor at f/6.3 using a Televue 0.8x FF/FR. The camera was a Canon 30D and the exposure was 60 minutes. In all images, north is up and east to the left. Image 1 was framed to have M81 and M82 centered. NGC3077 is labeled in the lower left-hand corner of the frame.
The second image was taken with a 10″ f/6 Newtonian with a Paracorr II coma corrector, yielding an f/6.9 optical system. A SBIG ST-2000XCM CCD camera was used. The exposure was 100 minutes. I really need 300-400 minutes of data to bring out the wispy edges and dark dust areas of the galaxy. But they can (barely) be seen in this short exposure. Unfortunately, time and weather did not allow more imaging before submitting this report.
It’s not actually surprising that somebody would claim to be the creator of Bitcoin. Whoever “Satoshi Nakamoto” is, is worth several hundred million dollars. What is surprising is that credible people were backing Craig Wright’s increasingly bizarre claims. I could speculate why, or I could just ask. So I mailed Gavin Andresen, Chief Scientist of the Bitcoin Foundation, “What the heck?”:
What is going on here? There’s clear, unambiguous cryptographic evidence of fraud, and you’re lending credibility to the idea that a public key operation could, should, or must remain private?
He replied as follows, quoted with permission:
Yeah, what the heck?
I was as surprised by the ‘proof’ as anyone, and don’t yet know exactly what is going on.
It was a mistake to agree to publish my post before I saw his– I assumed his post would simply be a signed message anybody could easily verify.
And it was probably a mistake to even start to play the Find Satoshi game, but I DO feel grateful to Satoshi.
If I’m lending credibility to the idea that a public key operation should remain private, that is entirely accidental. OF COURSE he should just publish a signed message or (equivalently) move some btc through the key associated with an early block.
Feel free to quote or republish this email.
Good on Gavin for his entirely reasonable reaction to this genuinely strange situation.
Craig Wright seems to be doubling down on his fraud, again, and I don’t care. The guy took an old Satoshi signature from 2009 and pretended it was fresh and new and applied to Sartre. It’s like Wright took the final page of a signed contract and stapled it to something else, then proclaimed to the world “See? I signed it!”.
That’s not how it works.
Say what you will about Bitcoin, it’s given us the world’s first cryptographically provable con artist. Scammers always have more to say, but all that matters now is math. He can actually sign “Craig Wright is Satoshi Nakamoto” with Satoshi’s keys, openly and publicly. Or he can’t, because he doesn’t have those keys, because he’s not actually Satoshi.
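The binding between a signature and its message is the whole point. Here's a toy Python sketch using an HMAC as a stand-in for Bitcoin's actual ECDSA/secp256k1 signatures (the key and messages are made up): a signature computed over one message verifies for that message and no other.

```python
import hashlib
import hmac

# Stand-in for a private key; real Bitcoin signatures use ECDSA over
# secp256k1, but the message-binding property is the same.
KEY = b"satoshi-private-key (made up)"

def sign(message: bytes) -> bytes:
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(message), signature)

old_sig = sign(b"early-2009 block data")

print(verify(b"early-2009 block data", old_sig))  # True: bound to this message
print(verify(b"a passage from Sartre", old_sig))  # False: useless for anything else
```

Stapling the old signature to a new document changes the message, so verification fails; only someone holding the key can produce a fresh signature over new text.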
Water may boil at 212° F at sea level, but on a 10,000-foot-high mountain the boiling temperature drops to 194° F, and on top of Mt. Everest all the way down to 160° F. The reason is that the higher you climb, the more of the atmosphere lies below you, so there’s less air pressure “pushing down” on water and preventing it from boiling. Lower boiling temperatures also mean longer cooking times. At 9,000 feet, a “3-minute” boiled egg takes about 5 minutes to cook.
Mars is even worse when it comes to cooking eggs! With an atmospheric pressure 1/100th that of Earth’s, pure water boils at between 32° and 50° F (0-10° C). If you were to set a cup of water on Mars’s surface, depending upon local conditions it would either freeze or quickly boil away to vapor. Temperatures and pressures don’t allow liquid water to pool on Mars. Funny to think that if you set out a pot of water to boil on Mars, you could stick your fist in the bubbly stuff and it would feel cold!
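The pressure dependence can be estimated with the Clausius–Clapeyron relation. A quick Python sketch (the standard-atmosphere pressures and a constant heat of vaporization are simplifying assumptions, so expect a degree or two of error, especially far from sea level):

```python
import math

# Clausius-Clapeyron estimate of water's boiling point vs. ambient pressure.
R = 8.314          # gas constant, J/(mol*K)
H_VAP = 40_700     # heat of vaporization of water, J/mol (approximate)
T0, P0 = 373.15, 101.325   # boiling point (K) at sea-level pressure (kPa)

def boiling_point_f(pressure_kpa: float) -> float:
    """Boiling temperature in degrees Fahrenheit at the given pressure."""
    t_kelvin = 1 / (1 / T0 - R * math.log(pressure_kpa / P0) / H_VAP)
    return (t_kelvin - 273.15) * 9 / 5 + 32

print(round(boiling_point_f(101.325)))  # 212  (sea level)
print(round(boiling_point_f(69.7)))     # 193  (~10,000 ft, standard atmosphere)
print(round(boiling_point_f(33.7)))     # 160  (~Mt. Everest summit pressure)
```

The same formula pushed down to Mars's ~0.6 kPa lands near water's freezing point, which is why a Martian "boil" would feel cold.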
If you add salt to water to make it a brine, you lower its freezing point. Water containing dissolved salts freezes at a significantly lower temperature. In Don Juan Pond in Antarctica the water is so salty, liquid water has been measured at temperatures as low as -11° F (-24° C). Might salty water flow on Mars under the right conditions?
Last year, scientists announced the discovery of salt in streaks, called recurring slope lineae, lining crater walls and slopes on Mars. These and other gully-like features appear to be carved by flowing water. The lines, up to 1,000 feet (several hundred meters) long and typically fewer than 15 feet (5 meters) wide, appear on slopes during warm seasons, lengthen for a time and then fade as temperatures cool.
In an article published yesterday in Nature Geoscience, a team led by Marion Masse of the University of Nantes in France described experiments they performed to figure out how water could have created the streaks.
See water “melt” on Mars. Video of the flow produced by pure water ice melting on sandy soil under Martian conditions
They placed a block of ice on a 30-degree plastic slope covered with loose, fine-grained sand, and allowed it to melt and seep into the soil in a chamber simulating Martian air pressure and summertime temperatures. They ran tests with both pure and briny water and compared them against a similar set of experiments performed under normal Earth conditions.
Video of a briny flow under Martian surface conditions
Under Martian conditions, the ice boiled energetically, blasting grains into the air (watch the video!) as it seeped into the sand below. With pure water, ridges of sand particles formed that subsequently spilled off to the sides creating little channels between the ridges. The salty water also boiled and sent sand grains hopping but also carved channels that strongly resemble the slope streaks and seepage lines seen on the Red Planet.
Masse and team propose a two-pronged mechanism for recurring slope lineae: salty water seeps combined with hopping sand grains, a combination wet-dry process. Neat.
Authenticating to a wired or wireless network using 802.1x is simple using NetworkManager’s GUI client. However, this gets challenging on headless servers without a graphical interface. The nmcli command isn’t able to store credentials in a keyring, and this causes problems when you try to configure an interface with 802.1x authentication.
Start by setting some basic configuration on the interface using the nmcli editor shell:
# nmcli con edit CONNECTION_NAME
nmcli> set ipv4.method auto
nmcli> set 802-1x.eap peap
nmcli> set 802-1x.identity USERNAME
nmcli> set 802-1x.phase2-auth mschapv2
nmcli> save
nmcli> quit
Be sure to set the 802-1x settings to the appropriate values for your network. You might have noticed that the password isn’t specified here. That’s because NetworkManager has no access to a keyring where it can store the password. That comes next.
Create a new file called /etc/NetworkManager/system-connections/CONNECTION_NAME to hold your password. If your connection name has spaces in it, be sure to maintain those spaces in the filename. Add the following to that file:
[connection]
id=CONNECTION_NAME

[802-1x]
password=YOUR_8021X_PASSWORD
Save the file and close it. Restart NetworkManager to pick up the changes:
systemctl restart NetworkManager
You may need to bring the interface down and up to test the new changes:
nmcli con down CONNECTION_NAME
nmcli con up CONNECTION_NAME
Once the network settles down, the authentication should complete within a few seconds in most cases. Be sure to check your system journal or other NetworkManager logs for more details if the interface doesn’t work properly.
Harry Schwartz, who appears to have moved to Boston from New York, gave a talk at the Boston Emacs Meetup on Getting Started with Org-mode. It's not really a tutorial but a demonstration of many of Org's features and how you can use them.
Schwartz spends a lot of time on the publishing aspects, which in many ways are Org's best feature. One of the things he mentioned in the talk that I didn't know was that Org can export to the Twitter Bootstrap framework. That makes it really easy to build a nice-looking Web site from the comfort of Emacs and Org mode.
He also mentions Owncloud, a self-hosted file sync and share server. Schwartz describes it as a sort of private Dropbox. It allows you to sync and share files among your different devices. It's probably most useful for, say, a team, but, like Schwartz, you can use it to keep your own machines in sync if you don't want to bother with something like Git.
The talk is about 56 minutes so schedule some time. Even if you're familiar with Org, you may learn something useful from the talk.
It's such a badly written bill that I wonder if it's just there to anchor us to an extreme, so we're relieved when the actual bill comes along. Me:
"This is the most braindead piece of legislation I've ever seen," Schneier -- who has just been appointed a Fellow of the Kennedy School of Government at Harvard -- told The Reg. "The person who wrote this either has no idea how technology works or just doesn't care."
The National Geographic Travel Photographer of the Year Contest is now underway, and entries will be accepted until the end of the month, May 27, 2016. The grand prize winner will receive a seven-day Polar Bear Safari for two in Churchill, Canada. National Geographic was kind enough to allow me to share some of the early entries with you here, gathered from three categories: Nature, Cities, and People. The photos and captions were written by the photographers.
Identity thieves stole tax and salary data from payroll giant ADP by registering accounts in the names of employees at more than a dozen customer firms, KrebsOnSecurity has learned. ADP says the incidents occurred because the victim companies all mistakenly published sensitive ADP account information online that made those firms easy targets for tax fraudsters.
Patterson, N.J.-based ADP provides payroll, tax and benefits administration for more than 640,000 companies. Last week, U.S. Bancorp (U.S. Bank) — the nation’s fifth-largest commercial bank — warned some of its employees that their W-2 data had been stolen thanks to a weakness in ADP’s customer portal.
ID thieves are interested in W-2 data because it contains much of the information needed to fraudulently request a large tax refund from the U.S. Internal Revenue Service (IRS) in someone else’s name. A reader who works at U.S. Bank shared a letter received from Jennie Carlson, the financial institution’s executive vice president of human resources.
“Since April 19, 2016, we have been actively investigating a security incident with our W-2 provider, ADP,” Carlson wrote. “During the course of that investigation we have learned that an external W-2 portal, maintained by ADP, may have been utilized by unauthorized individuals to access your W-2, which they may have used to file a fraudulent income tax return under your name.”
The letter continued:
“The incident originated because ADP offered an external online portal that has been exploited. For individuals who had never used the external portal, a registration had never been established. Criminals were able to take advantage of that situation to use confidential personal information from other sources to establish a registration in your name at ADP. Once the fraudulent registration was established, they were able to view or download your W-2.”
U.S. Bank spokesman Dana Ripley said the letter was sent to a “small population” of the bank’s more than 64,000 employees. Asked to comment on the letter from U.S. Bank, ADP confirmed that the fraud visited upon U.S. Bank also hit “a very small subset” of ADP’s total customers this year.
ADP emphasized that the fraudsters needed to have the victim’s personal data — including name, date of birth and Social Security number — to successfully create an account in someone’s name. ADP also stressed that this personal data did not come from its systems, and that thieves appeared to already possess that data when they created the unauthorized accounts at ADP’s portal.
ADP Chief Security Officer Roland Cloutier said customers can choose to create an account at the ADP portal for each employee, or they can defer that process to a later date (but employers do have to choose one or the other, Cloutier said).
According to ADP, new users need to be in possession of two other things (in addition to the victim’s personal data) at a minimum in order to create an account: A custom, company-specific link provided by ADP, and a static code assigned to the customer by ADP.
The problem, Cloutier said, seems to stem from ADP customers that both deferred that signup process for some or all of their employees and at the same time inadvertently published online the link and the company code. As a result, for users who never registered, criminals were able to register as them with fairly basic personal info, and access W-2 data on those individuals.
U.S. Bank’s Ripley acknowledged that the bank published the link and company code to an employee resource online, but said the institution never considered that the data itself was privileged.
“We viewed the code as an identification code, not as an authentication code, and we posted it to a Web site for the convenience of our employees so they could access their W-2 information,” Ripley said. “We have discontinued that practice.”
In the meantime, ADP says it has developed systems to monitor the Web for any other customers that may inadvertently publish their signup link and code.
“We’ve now aggressively put in some security intelligence by trying to look for that code and turn off self-service registration access if we find that code” published online, Cloutier said.
ADP’s portal, like so many other authentication systems, relies entirely on static data that is available on just about every American for less than $4 in the cybercrime underground (SSN/DOB, address, etc). It’s true that companies should know better than to publish such a crucial link online along with the company’s ADP code, but then again these are pretty weak authenticators.
Cloutier said ADP does offer an additional layer of authentication — a personal identification code (PIC) — basically another static code that can be assigned to each employee. He added that ADP is trialing a service that will ask anyone requesting a new account to successfully answer a series of questions based on information that only the real account holder is supposed to know.
Cloutier declined to say who was providing the verification service, but these so-called knowledge-based authentication (KBA) or “out-of-wallet” questions generally focus on things such as previous address, loan amounts and dates and can be successfully enumerated with random guessing. In many cases, the answers can be found by consulting free online services, such as Zillow and Facebook.
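To see why random guessing is viable at scale, here's a back-of-the-envelope sketch. The question and choice counts are illustrative, not ADP's or any KBA vendor's actual parameters:

```python
# Illustrative KBA parameters -- not any vendor's real configuration.
questions = 4    # number of out-of-wallet questions asked
choices = 5      # multiple-choice answers offered per question

# Chance of acing every question by blind guessing, in a single attempt:
p_blind = (1 / choices) ** questions
print(round(p_blind, 6))    # 0.0016, i.e. about 1 in 625

# A fraudster already holding stolen PII for many victims only needs a
# small per-victim hit rate to do serious damage:
victims = 100_000
print(round(victims * p_blind))   # ~160 expected successful registrations
```

And that's the floor: public records and sites like Zillow or Facebook raise the per-question odds well above blind chance.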
The IRS found this out the hard way, and over the past year has removed two separate authentication systems that placed too much reliance on KBA and static data to authenticate taxpayers. In May 2015, the IRS took down its “Get Transcript” service after tax refund fraudsters began using it to pull W-2 data on more than 724,000 taxpayers. In those cases, the fraudsters also already had the victim’s SSN, DoB and other personal data. In March 2016, the IRS suspended its “Get IP PIN” feature for the same reason.
But somehow, KBA questions are an innovation that’s worth looking forward to at ADP.
“The IRS didn’t have a PIC code or client code,” Cloutier said when I brought up the IRS’s experience. “They didn’t have as many levels and individual authentication components that we provide our clients.”
Cloutier’s words recalled to mind a scene from the movie Office Space, in which Jennifer Aniston’s character is upbraided by her manager for wearing too few “pieces of flair” on her ‘Chotchkie’s’ uniform. His comment also made me think about one of the best scenes from the cult hit “This is Spinal Tap,” in which the character Nigel Tufnel shows off how all the knobs on his amplifier go to “level 11,” while other amps only go to the more boring and standard level 10.
It’s truly a measure of the challenges ahead in improving online authentication that so many organizations are still looking backwards to obsolete and insecure approaches. ADP’s logo includes the clever slogan, “A more human resource.” It’s hard to think of a more apt mission statement for the company. After all, it’s high time we started moving away from asking people to robotically regurgitate the same static identifiers over and over, and shift to a more human approach that focuses on dynamic elements for authentication. But alas, that’s fodder for a future post.
Update 1:59 p.m. ET: Clarified Spinal Tap reference.
Update, 10:07 p.m. ET: It looks like ADP’s stock took a pretty big hit immediately after this story ran today.
The stock later rebounded.
I’ve always found the final factor in the Drake Equation to be the most telling. Trying to get a rough idea of how many other civilizations there might be in the galaxy, Drake looked at factors ranging from the rate of star formation to the fraction of planets suitable for life on which life actually appears. Some of these items, like the fraction of stars with planets, are being clarified almost by the day with continuing work. But the big one at the end — the lifetime of a technological civilization — remains a mystery.
By ‘technological,’ Drake was referring to those civilizations that were capable of producing detectable signals; i.e., releasing electromagnetic radiation into space. And when we have but one civilization to work with as example, we’re hard pressed to know what this factor is. This is where Adam Frank (University of Rochester) and Woodruff Sullivan (University of Washington, Seattle) come into the picture. In a new paper in Astrobiology, the researchers argue that there are other ways of addressing the ‘lifetime’ question.
Image credit: http://www.ForestWander.com [CC BY-SA 3.0 us], via Wikimedia Commons.
The idea is to calculate how unlikely our advanced civilization would be if none has ever arisen before us. In other words, Frank and Sullivan want to put a lower limit on the probability that technological species have, at any time in the past, evolved elsewhere than on Earth. Here’s how their paper describes this quest:
Standard astrobiological discussions of intelligent life focus on how many technological species currently exist with which we might communicate (Vakoch and Dowd, 2015). But rather than asking whether we are now alone, we ask whether we are the only technological species that has ever arisen. Such an approach allows us to set limits on what might be called the ‘‘cosmic archaeological question’’: How often in the history of the Universe has evolution ever led to a technological species, whether short- or long-lived? As we shall show, providing constraints on an answer to this question has profound philosophical and practical implications.
To do this, the authors produce their own equation drawing on Drake’s. Consider A the number of technological civilizations that have formed over the history of the observable universe. Rather than dealing with Drake’s factor L — the lifetime of a technological civilization — Frank and Sullivan propose what they call an ‘archaeological form’ of Drake’s equation. The need for the L factor disappears. The new equation appears in this form:
A = Nast * fbt
Where A = The number of technological species that have evolved at any time in the universe’s past
Nast = The number of habitable planets in a given volume of the universe
fbt = The likelihood of a technological species arising on one of these planets.
You can see that what Frank and Sullivan rely on are recent advances in the detection and characterization of exoplanets. We’re learning a great deal more about how common planets are and how many are likely to orbit in the habitable zone around their star, where liquid water could exist. Their term Nast relies on this work and draws together various terms from the original Drake equation including the total number of stars, the fraction of those stars that form planets, and the average number of planets in the habitable zone of their stars.
From the paper:
With our approach we have, for the first time, provided a quantitative and empirically constrained limit on what it means to be pessimistic about the likelihood of another technological species ever having arisen in the history of the Universe. We have done so by segregating newly measured astrophysical factors from the fully unconstrained biotechnical ones, and by shifting the focus toward a question of ‘‘cosmic archaeology’’ and away from technological species lifetimes. Our constraint addresses an issue that is of particular scientific and philosophical consequence: the question ‘‘Have they ever existed?’’ rather than the usual narrower concern of the Drake equation, ‘‘Do they exist now?’’
The paper is short and interesting; I commend it to you. The result it produces is that human civilization can be considered unique in the cosmos only if the odds of a civilization developing are less than one part in 10 to the 22nd power. Frank and Sullivan call this the ‘pessimism’ line. If the probability of a technological civilization developing is greater than this standard, then we can assume civilizations have formed before us at some time in the universe’s history.
And yes, this is a tiny number — one in ten billion trillion. Frank says in this University of Rochester news release that he believes it implies technology-producing species have evolved before us. Even if the chances of civilization arising were one in a trillion, there would be about ten billion civilizations in the observable universe since the first one arose. As for our own galaxy, another civilization is likely to have appeared at some point in its history if the odds against it evolving on any one habitable planet are better than one in 60 billion.
We fall back on cosmic archaeology in suggesting that given the size and age of the universe, Drake’s factor L may still play havoc with our chances of ever contacting another civilization. Sullivan puts it this way:
“The universe is more than 13 billion years old. That means that even if there have been a thousand civilizations in our own galaxy, if they live only as long as we have been around—roughly ten thousand years—then all of them are likely already extinct. And others won’t evolve until we are long gone. For us to have much chance of success in finding another “contemporary” active technological civilization, on average they must last much longer than our present lifetime.”
We can play with this a bit. Taking the Milky Way and choosing a probability of 3 × 10^-9, we are likely to be one of hundreds of civilizations that have arisen. But drop that probability to 10^-18 (one in a billion billion) and we are likely the first advanced civilization in the galaxy. Yet even with the latter constraint, that would still mean we are one of thousands of civilizations that have developed at some time in the visible universe.
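Plugging numbers into the archaeological form A = Nast * fbt reproduces these figures. The habitable-planet counts below are rough assumptions chosen to match the article's arithmetic, not values taken from Frank and Sullivan's paper:

```python
# "Archaeological" form of the Drake equation: A = Nast * fbt.
# Planet counts are rough assumptions used to reproduce the article's
# arithmetic, not figures from the paper itself.
NAST_UNIVERSE = 1e22   # habitable-zone planets in the observable universe
NAST_GALAXY = 6e10     # habitable-zone planets in the Milky Way

def civilizations_ever(nast: float, fbt: float) -> float:
    """Expected number of technological species ever to have arisen."""
    return nast * fbt

# One-in-a-trillion odds per planet still yield ~10 billion civilizations
# across the observable universe:
print(civilizations_ever(NAST_UNIVERSE, 1e-12))   # ~1e10

# In the Milky Way, odds of 3e-9 imply hundreds of predecessors:
print(civilizations_ever(NAST_GALAXY, 3e-9))      # ~180

# At 1e-18 we are probably the galaxy's first (A << 1), yet still one of
# thousands across the observable universe:
print(civilizations_ever(NAST_GALAXY, 1e-18))     # ~6e-8
print(civilizations_ever(NAST_UNIVERSE, 1e-18))   # ~1e4
```

The pessimism line falls out the same way: with ~10^22 habitable planets, A drops below 1 only when fbt sinks under about one part in 10^22.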
I always appreciate work that frames an issue in a new perspective, which is what Frank and Sullivan’s paper does. We can’t know whether there are other civilizations currently active in our galaxy, but it appears that the odds favor their having arisen at some time in the past. In fact, these numbers show us that we are almost certainly not the first technological civilization to have emerged. Is the galaxy filled with the ruins of civilizations that were unable to survive, or is it a place where some cultures have mastered the art of keeping themselves alive?
The paper is Frank and Sullivan, “A New Empirical Constraint on the Prevalence of Technological Species in the Universe,” Astrobiology Vol. 16, No. 5 (2016). Preprint available.