A Galaxy Awaits – Mass Effect: Andromeda Review

Like Knights of the Old Republic before it, Mass Effect cemented itself as the go-to science fiction RPG of the last generation. The trilogy is synonymous with space exploration, deep lore, and exciting combat. The original trilogy may be uneven in places, but it’s widely regarded as one of the all-time greats. Enter Mass Effect: Andromeda, the inevitable follow-up. Andromeda is a different experience, though, putting you in the shoes of a pioneer exploring the unknown galaxy neighboring our own Milky Way. It’s just you and a giant ship of human colonists, looking for a new home in a bright new galaxy.

Mass Effect: Andromeda does itself a service by starting as a blank slate. Disconnected from the original trilogy, Andromeda has all the potential to break new ground. The Mass Effect universe is one that can be easily expanded upon; think Battlestar Galactica or Star Trek. I was keen to explore this vast new galaxy and meet the inhabitants that awaited me.

You start by creating a male or female version of Ryder, the main character who heads up planet exploration. The character creation tools are varied, but not as expansive as those you’d find in an Elder Scrolls title. There’s a fair amount to fool around with, and you’re given a decent range of options. I had a fun time crafting my very own Ryder, even if they didn’t feel wholly original by the time I hit “finish”. From there, you get to experience the most inconsistent RPG I’ve seen in a long time.

Meet Matt Ryder. He’s kind of plastic looking.

Mass Effect: Andromeda is very much a game of exploration. As Pathfinder (leader of the crew that surveys and sets up outposts on new planets), you’ll spend a lot of time visiting new worlds and making them viable for human life. As you play through the main story, you’ll come across new planets with their own story lines and problems. The main quest line is almost its own separate thing; you can complete it without even visiting all the main planets. This main quest follows Ryder’s attempt to uncover the mysteries of the Remnant, an ancient alien species that harbors incredible technology. Playing the ever-present antagonistic force is the Kett, an instantly evil race of aliens who want to use the same technology as Ryder, but for evil. It’s largely a retread, one that mirrors a lot of themes from even the original Mass Effect.

After I completed the main quest line, I realized that a lot of the meat of the game comes down to exploring these planets and completing the quests within.  Unfortunately, that’s where it all falls apart.

Andromeda is a big game, but it’s one about quantity rather than quality. Combat is fun and remains fun throughout, but motivation is the biggest deterrent. As you explore the new and often beautiful worlds, you’ll feel a real sense of awe and wonder. After the brutally unimpressive opening, which is linear and lengthy, exploring the first planet is a refreshing step in the right direction. Soon after you get your feet wet, though, you’ll start to notice that a lot of quests are similar.

It’s the MMO effect. You’ll have a ton of quests that all require you to do the same thing, just for different reasons. You’ll often have to hop off planet and visit another, scanning certain items and activating different terminals. You’ll then return to your quest giver for some XP and (occasionally) narrative closure. After the twentieth fetch quest, things start to get repetitive. Enough so that I found myself skipping entire quest lines, searching for the good stuff among the mediocre. There are some great stories being told in the side quests, such as the loyalty missions and some storylines involving politics. Unfortunately, the majority of these quality quests are buried among repetitive ones.

Planets are large and look good, but are often full of open space.

Midway through the game, I hurried toward a new quest marker with excitement. That excitement quickly turned sour when I realized my entertaining new quest was a literal beer run. That’s not a turn of phrase or a figure of speech; someone wanted me to go to another planet and get beer for them. These fetch quests are made worse by the numerous unskippable cut scenes required to travel from planet to planet. The scenes look nice, and the spectacle of space travel is pretty, but after a while they become time-consuming and tedious.

Presentation is just another area where Mass Effect: Andromeda is inconsistent. It’s a constant barrage of sub-par animations matched with absolutely gorgeous world design. Each planet has its own feel, which is really cool. The frozen planet of Voeld is vast and full of flurries, while Eos is stark and deserted. Some later planets look especially impressive, but that often comes at the cost of performance.

Andromeda is rife with technical hiccups. It’s something that you simply get used to as you play, but that doesn’t make it acceptable. As you attempt to take in beautiful mountaintop vistas and appreciate the world around you, you’re assaulted with frame rate drops and freezes. The game often stutters, feels choppy, and throws plenty of graphical glitches your way. There’s pop-in galore, and the general lack of consistency makes it a mixed bag. It’s disappointing, as Mass Effect: Andromeda would be twice as good if it ran without a hitch. For an open world RPG, I can accept some technical mishaps. However, Andromeda is more often choppy than smooth.

I truly hate the inconsistency in Mass Effect: Andromeda, mainly because there are some really cool moments to be had. If you’re a fan of Mass Effect in general, skipping Andromeda would be a mistake. There’s some great content here that stays true to the heart of the franchise; it’s just often buried in mediocre filler.

Driving around the planets in the Nomad (a big ATV that you’ll use for quite a bit of your traversal) feels great, and upgrading it over time is fun and rewarding. I had quite a few “woo-hoo!” moments as I boosted off the side of a cliff on a low-gravity planet, flying high over the surface. Combat produced moments like this from time to time as well, and it’s probably the most solid mechanic in the game. As in previous titles, you can “play your way”, favoring either weaponry or abilities. Leveling up allows you to allocate skill points across three categories: combat, biotics, and tech. You can equip your character with three skills in any combination, which opens up some pretty cool builds to try. Approaching combat is fun, even if the result will always be the same regardless of how you play it.

Look, I’m going to be completely honest. Mass Effect: Andromeda isn’t a terrible game. It’s not even a bad game, really. Andromeda is simply a game that’s in over its head and struggles to stay consistent. For Mass Effect fans, there’s definitely something to love. I found myself getting lost in world exploration, and while the quality isn’t always top notch, there’s plenty of content to dig into.

At just under 40 hours, I had managed to save the galaxy, but I didn’t feel like I had done all that much. That’s because Andromeda is a game that doesn’t want you to rush; it wants you to take in all of its minutiae and detail. But that detail is rough, and often not fun to play. In a way, I felt punished for adhering to the main path, only diverting when I felt the motivation to. That’s not a great indicator for the overall story; I shouldn’t spend 40 hours in a game and feel unaccomplished. I know that there’s another 20 hours or so of content to be experienced, but my trip ends here. If you’re a fan of Mass Effect, maybe you’ll fare better. Unfortunately, Mass Effect: Andromeda is too inconsistent to warrant more time from me. From the writing and narrative to the presentation and gameplay, everything is a bit too padded out. If there are plans for continuing the series (and I’m sure there are), BioWare would be smart to focus on crafting an engaging narrative rather than a large checklist of similar content to keep us busy.

Stefanik co-sponsors US/Israel space collaboration legislation

U.S. Rep. Elise Stefanik, R-Willsboro, on March 2 co-sponsored legislation Rep. Derek Kilmer, D-Wash., introduced Feb. 16 to direct the National Aeronautics and Space Administration to continue to work jointly with the Israel Space Agency “in identifying and cooperatively pursuing peaceful space exploration and science initiatives in areas of mutual interest,” according to the Library of Congress government information web site.

In October 2015, the two agencies signed an agreement that establishes the framework for NASA to utilize ISA technology for future missions to Mars and other endeavors.

The two space agencies have collaborated on various ventures since 1985.

The legislation — HR 1159 — had 25 co-sponsors as of Sunday: 13 Republicans and 12 Democrats.

Other New York co-sponsors are Reps. Peter King, R-Long Island, and Kathleen Rice, D-Long Island.

Elon Musk's Billion-Dollar Crusade to Stop the AI Apocalypse

PROPHET MOTIVE: Elon Musk, co-founder of Tesla and OpenAI, inside part of a SpaceX Falcon 9 rocket, in Cape Canaveral, Florida, 2010. Photograph by Jonas Fredwall Karlsson.

I. Running Amok

It was just a friendly little argument about the fate of humanity. Demis Hassabis, a leading creator of advanced artificial intelligence, was chatting with Elon Musk, a leading doomsayer, about the perils of artificial intelligence.

They are two of the most consequential and intriguing men in Silicon Valley who don’t live there. Hassabis, a co-founder of the mysterious London laboratory DeepMind, had come to Musk’s SpaceX rocket factory, outside Los Angeles, a few years ago. They were in the canteen, talking, as a massive rocket part traversed overhead. Musk explained that his ultimate goal at SpaceX was the most important project in the world: interplanetary colonization.

Hassabis replied that, in fact, he was working on the most important project in the world: developing artificial super-intelligence. Musk countered that this was one reason we needed to colonize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on humanity. Amused, Hassabis said that A.I. would simply follow humans to Mars.

This did nothing to soothe Musk’s anxieties (even though he says there are scenarios where A.I. wouldn’t follow).

An unassuming but competitive 40-year-old, Hassabis is regarded as the Merlin who will likely help conjure our A.I. children. The field of A.I. is rapidly developing but still far from the powerful, self-evolving software that haunts Musk. Facebook uses A.I. for targeted advertising, photo tagging, and curated news feeds. Microsoft and Apple use A.I. to power their digital assistants, Cortana and Siri. Google’s search engine from the beginning has been dependent on A.I. All of these small advances are part of the chase to eventually create flexible, self-teaching A.I. that will mirror human learning.

Some in Silicon Valley were intrigued to learn that Hassabis, a skilled chess player and former video-game designer, once came up with a game called Evil Genius, featuring a malevolent scientist who creates a doomsday device to achieve world domination. Peter Thiel, the billionaire venture capitalist and Donald Trump adviser who co-founded PayPal with Musk and others—and who in December helped gather skeptical Silicon Valley titans, including Musk, for a meeting with the president-elect—told me a story about an investor in DeepMind who joked as he left a meeting that he ought to shoot Hassabis on the spot, because it was the last chance to save the human race.

Elon Musk began warning about the possibility of A.I. running amok three years ago. It probably hadn’t eased his mind when one of Hassabis’s partners in DeepMind, Shane Legg, stated flatly, “I think human extinction will probably occur, and technology will likely play a part in this.”

Before DeepMind was gobbled up by Google, in 2014, as part of its A.I. shopping spree, Musk had been an investor in the company. He told me that his involvement was not about a return on his money but rather to keep a wary eye on the arc of A.I.: “It gave me more visibility into the rate at which things were improving, and I think they’re really improving at an accelerating rate, far faster than people realize. Mostly because in everyday life you don’t see robots walking around. Maybe your Roomba or something. But Roombas aren’t going to take over the world.”

In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”

At the World Government Summit in Dubai, in February, Musk again cued the scary organ music, evoking the plots of classic horror stories when he noted that “sometimes what will happen is a scientist will get so engrossed in their work that they don’t really realize the ramifications of what they’re doing.” He said that the way to escape human obsolescence, in the end, may be by “having some sort of merger of biological intelligence and machine intelligence.” This Vulcan mind-meld could involve something called a neural lace—an injectable mesh that would literally hardwire your brain to communicate directly with computers. “We’re already cyborgs,” Musk told me in February. “Your phone and your computer are extensions of you, but the interface is through finger movements or speech, which are very slow.” With a neural lace inside your skull you would flash data from your brain, wirelessly, to your digital devices or to virtually unlimited computing power in the cloud. “For a meaningful partial-brain interface, I think we’re roughly four or five years away.”

Musk’s alarming views on the dangers of A.I. first went viral after he spoke at M.I.T. in 2014—speculating (pre-Trump) that A.I. was probably humanity’s “biggest existential threat.” He added that he was increasingly inclined to think there should be some national or international regulatory oversight—anathema to Silicon Valley—“to make sure that we don’t do something very foolish.” He went on: “With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out.” Some A.I. engineers found Musk’s theatricality so absurdly amusing that they began echoing it. When they would return to the lab after a break, they’d say, “O.K., let’s get back to work summoning.”

Musk wasn’t laughing. “Elon’s crusade” (as one of his friends and fellow tech big shots calls it) against unfettered A.I. had begun.

II. “I Am the Alpha”

Elon Musk smiled when I mentioned to him that he comes across as something of an Ayn Rand-ian hero. “I have heard that before,” he said in his slight South African accent. “She obviously has a fairly extreme set of views, but she has some good points in there.”

But Ayn Rand would do some re-writes on Elon Musk. She would make his eyes gray and his face more gaunt. She would refashion his public demeanor to be less droll, and she would not countenance his goofy giggle. She would certainly get rid of all his nonsense about the “collective” good. She would find great material in the 45-year-old’s complicated personal life: his first wife, the fantasy writer Justine Musk, and their five sons (one set of twins, one of triplets), and his much younger second wife, the British actress Talulah Riley, who played the boring Bennet sister in the Keira Knightley version of Pride & Prejudice. Riley and Musk were married, divorced, and then re-married. They are now divorced again. Last fall, Musk tweeted that Talulah “does a great job playing a deadly sexbot” on HBO’s Westworld, adding a smiley-face emoticon. It’s hard for mere mortal women to maintain a relationship with someone as insanely obsessed with work as Musk.

“How much time does a woman want a week?” he asked Ashlee Vance. “Maybe ten hours? That’s kind of the minimum?”

Mostly, Rand would savor Musk, a hyper-logical, risk-loving industrialist. He enjoys costume parties, wing-walking, and Japanese steampunk extravaganzas. Robert Downey Jr. used Musk as a model for Iron Man. Marc Mathieu, the chief marketing officer of Samsung USA, who has gone fly-fishing in Iceland with Musk, calls him “a cross between Steve Jobs and Jules Verne.” As they danced at their wedding reception, Justine later recalled, Musk informed her, “I am the alpha in this relationship.”

In a tech universe full of skinny guys in hoodies—whipping up bots that will chat with you and apps that can study a photo of a dog and tell you what breed it is—Musk is a throwback to Henry Ford and Hank Rearden. In Atlas Shrugged, Rearden gives his wife a bracelet made from the first batch of his revolutionary metal, as though it were made of diamonds. Musk has a chunk of one of his rockets mounted on the wall of his Bel Air house, like a work of art.

Musk shoots for the moon—literally. He launches cost-efficient rockets into space and hopes to eventually inhabit the Red Planet. In February he announced plans to send two space tourists on a flight around the moon as early as next year. He creates sleek batteries that could lead to a world powered by cheap solar energy. He forges gleaming steel into sensuous Tesla electric cars with such elegant lines that even the nitpicking Steve Jobs would have been hard-pressed to find fault. He wants to save time as well as humanity: he dreamed up the Hyperloop, an electromagnetic bullet train in a tube, which may one day whoosh travelers between L.A. and San Francisco at 700 miles per hour. When Musk visited secretary of defense Ashton Carter last summer, he mischievously tweeted that he was at the Pentagon to talk about designing a Tony Stark-style “flying metal suit.” Sitting in traffic in L.A. in December, getting bored and frustrated, he tweeted about creating the Boring Company to dig tunnels under the city to rescue the populace from “soul-destroying traffic.” By January, according to Bloomberg Businessweek, Musk had assigned a senior SpaceX engineer to oversee the plan and had started digging his first test hole. His sometimes quixotic efforts to save the world have inspired a parody twitter account, “Bored Elon Musk,” where a faux Musk spouts off wacky ideas such as “Oxford commas as a service” and “bunches of bananas genetically engineered” so that the bananas ripen one at a time.

Of course, big dreamers have big stumbles. Some SpaceX rockets have blown up, and last June a driver was killed in a self-driving Tesla whose sensors failed to notice the tractor-trailer crossing its path. (An investigation by the National Highway Traffic Safety Administration found that Tesla’s Autopilot system was not to blame.)

Musk is stoic about setbacks but all too conscious of nightmare scenarios. His views reflect a dictum from Atlas Shrugged: “Man has the power to act as his own destroyer—and that is the way he has acted through most of his history.” As he told me, “we are the first species capable of self-annihilation.”

Here’s the nagging thought you can’t escape as you drive around from glass box to glass box in Silicon Valley: the Lords of the Cloud love to yammer about turning the world into a better place as they churn out new algorithms, apps, and inventions that, it is claimed, will make our lives easier, healthier, funnier, closer, cooler, longer, and kinder to the planet. And yet there’s a creepy feeling underneath it all, a sense that we’re the mice in their experiments, that they regard us humans as Betamaxes or eight-tracks, old technology that will soon be discarded so that they can get on to enjoying their sleek new world. Many people there have accepted this future: we’ll live to be 150 years old, but we’ll have machine overlords.

Maybe we already have overlords. As Musk slyly told Recode’s annual Code Conference last year in Rancho Palos Verdes, California, we could already be playthings in a simulated-reality world run by an advanced civilization. Reportedly, two Silicon Valley billionaires are working on an algorithm to break us out of the Matrix.

Among the engineers lured by the sweetness of solving the next problem, the prevailing attitude is that empires fall, societies change, and we are marching toward the inevitable phase ahead. They argue not about “whether” but rather about “how close” we are to replicating, and improving on, ourselves. Sam Altman, the 31-year-old president of Y Combinator, the Valley’s top start-up accelerator, believes humanity is on the brink of such invention.

“The hard part of standing on an exponential curve is: when you look backwards, it looks flat, and when you look forward, it looks vertical,” he told me. “And it’s very hard to calibrate how much you are moving because it always looks the same.”

You’d think that anytime Musk, Stephen Hawking, and Bill Gates are all raising the same warning about A.I.—as all of them are—it would be a 10-alarm fire. But, for a long time, the fog of fatalism over the Bay Area was thick. Musk’s crusade was viewed as Sisyphean at best and Luddite at worst. The paradox is this: Many tech oligarchs see everything they are doing to help us, and all their benevolent manifestos, as streetlamps on the road to a future where, as Steve Wozniak says, humans are the family pets.

But Musk is not going gently. He plans on fighting this with every fiber of his carbon-based being. Musk and Altman have founded OpenAI, a billion-dollar nonprofit company, to work for safer artificial intelligence. I sat down with the two men when their new venture had only a handful of young engineers and a makeshift office, an apartment in San Francisco’s Mission District that belongs to Greg Brockman, OpenAI’s 28-year-old co-founder and chief technology officer. When I went back recently, to talk with Brockman and Ilya Sutskever, the company’s 30-year-old research director (and also a co-founder), OpenAI had moved into an airy office nearby with a robot, the usual complement of snacks, and 50 full-time employees. (Another 10 to 30 are on the way.)

Altman, in gray T-shirt and jeans, is all wiry, pale intensity. Musk’s fervor is masked by his diffident manner and rosy countenance. His eyes are green or blue, depending on the light, and his lips are plum red. He has an aura of command while retaining a trace of the gawky, lonely South African teenager who immigrated to Canada by himself at the age of 17.

In Silicon Valley, a lunchtime meeting does not necessarily involve that mundane fuel known as food. Younger coders are too absorbed in algorithms to linger over meals. Some just chug Soylent. Older ones are so obsessed with immortality that sometimes they’re just washing down health pills with almond milk.

At first blush, OpenAI seemed like a bantamweight vanity project, a bunch of brainy kids in a walkup apartment taking on the multi-billion-dollar efforts at Google, Facebook, and other companies which employ the world’s leading A.I. experts. But then, playing a well-heeled David to Goliath is Musk’s specialty, and he always does it with style—and some useful sensationalism.

Let others in Silicon Valley focus on their I.P.O. price and ridding San Francisco of what they regard as its unsightly homeless population. Musk has larger aims, like ending global warming and dying on Mars (just not, he says, on impact).

Musk began to see man’s fate in the galaxy as his personal obligation three decades ago, when as a teenager he had a full-blown existential crisis. Musk told me that The Hitchhiker’s Guide to the Galaxy, by Douglas Adams, was a turning point for him. The book is about aliens destroying the earth to make way for a hyperspace highway and features Marvin the Paranoid Android and a supercomputer designed to answer all the mysteries of the universe. (Musk slipped at least one reference to the book into the software of the Tesla Model S.) As a teenager, Vance writes in his biography, Musk formulated a mission statement for himself: “The only thing that makes sense to do is strive for greater collective enlightenment.”

OpenAI got under way with a vague mandate—which isn’t surprising, given that people in the field are still arguing over what form A.I. will take, what it will be able to do, and what can be done about it. So far, public policy on A.I. is strangely undetermined and software is largely unregulated. The Federal Aviation Administration oversees drones, the Securities and Exchange Commission oversees automated financial trading, and the Department of Transportation has begun to oversee self-driving cars.

Musk believes that it is better to try to get super-A.I. first and distribute the technology to the world than to allow the algorithms to be concealed and concentrated in the hands of tech or government elites—even when the tech elites happen to be his own friends, people such as Google founders Larry Page and Sergey Brin. “I’ve had many conversations with Larry about A.I. and robotics—many, many,” Musk told me. “And some of them have gotten quite heated. You know, I think it’s not just Larry, but there are many futurists who feel a certain inevitability or fatalism about robots, where we’d have some sort of peripheral role. The phrase used is ‘We are the biological boot-loader for digital super-intelligence.’ ” (A boot loader is the small program that launches the operating system when you first turn on your computer.) “Matter can’t organize itself into a chip,” Musk explained. “But it can organize itself into a biological entity that gets increasingly sophisticated and ultimately can create the chip.”

Musk has no intention of being a boot loader. Page and Brin see themselves as forces for good, but Musk says the issue goes far beyond the motivations of a handful of Silicon Valley executives.

“It’s great when the emperor is Marcus Aurelius,” he says. “It’s not so great when the emperor is Caligula.”

III. The Golden Calf

After the so-called A.I. winter—the broad, commercial failure in the late 80s of an early A.I. technology that wasn’t up to snuff—artificial intelligence got a reputation as snake oil. Now it’s the hot thing again in this go-go era in the Valley. Greg Brockman, of OpenAI, believes the next decade will be all about A.I., with everyone throwing money at the small number of “wizards” who know the A.I. “incantations.” Guys who got rich writing code to solve banal problems like how to pay a stranger for stuff online now contemplate a vertiginous world where they are the creators of a new reality and perhaps a new species.

Microsoft’s Jaron Lanier, the dreadlocked computer scientist known as the father of virtual reality, gave me his view as to why the digerati find the “science-fiction fantasy” of A.I. so tantalizing: “It’s saying, ‘Oh, you digital techy people, you’re like gods; you’re creating life; you’re transforming reality.’ There’s a tremendous narcissism in it that we’re the people who can do it. No one else. The Pope can’t do it. The president can’t do it. No one else can do it. We are the masters of it . . . . The software we’re building is our immortality.” This kind of God-like ambition isn’t new, he adds. “I read about it once in a story about a golden calf.” He shook his head. “Don’t get high on your own supply, you know?”

Google has gobbled up almost every interesting robotics and machine-learning company over the last few years. It bought DeepMind for $650 million, reportedly beating out Facebook, and built the Google Brain team to work on A.I. It hired Geoffrey Hinton, a British pioneer in artificial neural networks; and Ray Kurzweil, the eccentric futurist who has predicted that we are only 28 years away from the Rapture-like “Singularity”—the moment when the spiraling capabilities of self-improving artificial super-intelligence will far exceed human intelligence, and human beings will merge with A.I. to create the “god-like” hybrid beings of the future.

It’s in Larry Page’s blood and Google’s DNA to believe that A.I. is the company’s inevitable destiny—think of that destiny as you will. (“If evil A.I. lights up,” Ashlee Vance told me, “it will light up first at Google.”) If Google could get computers to master search when search was the most important problem in the world, then presumably it can get computers to do everything else. In March of last year, Silicon Valley gulped when a fabled South Korean player of the world’s most complex board game, Go, was beaten in Seoul by DeepMind’s AlphaGo. Hassabis, who has said he is running an Apollo program for A.I., called it a “historic moment” and admitted that even he was surprised it happened so quickly. “I’ve always hoped that A.I. could help us discover completely new ideas in complex scientific domains,” Hassabis told me in February. “This might be one of the first glimpses of that kind of creativity.” More recently, AlphaGo played 60 games online against top Go players in China, Japan, and Korea—and emerged with a record of 60–0. In January, in another shock to the system, an A.I. program showed that it could bluff. Libratus, built by two Carnegie Mellon researchers, was able to crush top poker players at Texas Hold ‘Em.

Peter Thiel told me about a friend of his who says that the only reason people tolerate Silicon Valley is that no one there seems to be having any sex or any fun. But there are reports of sex robots on the way that come with apps that can control their moods and even have a pulse. The Valley is skittish when it comes to female sex robots—an obsession in Japan—because of its notoriously male-dominated culture and its much-publicized issues with sexual harassment and discrimination. But when I asked Musk about this, he replied matter-of-factly, “Sex robots? I think those are quite likely.”

Whether sincere or a shrewd P.R. move, Hassabis made it a condition of the Google acquisition that Google and DeepMind establish a joint A.I. ethics board. At the time, three years ago, forming an ethics board was seen as a precocious move, as if to imply that Hassabis was on the verge of achieving true A.I. Now, not so much. Last June, a researcher at DeepMind co-authored a paper outlining a way to design a “big red button” that could be used as a kill switch to stop A.I. from inflicting harm.

Google executives say Larry Page’s view on A.I. is shaped by his frustration about how many systems are sub-optimal—from systems that book trips to systems that price crops. He believes that A.I. will improve people’s lives and has said that, when human needs are more easily met, people will “have more time with their family or to pursue their own interests.” Especially when a robot throws them out of work.

Musk is a friend of Page’s. He attended Page’s wedding and sometimes stays at his house when he’s in the San Francisco area. “It’s not worth having a house for one or two nights a week,” the 99th-richest man in the world explained to me. At times, Musk has expressed concern that Page may be naïve about how A.I. could play out. If Page is inclined toward the philosophy that machines are only as good or bad as the people creating them, Musk firmly disagrees. Some at Google—perhaps annoyed that Musk is, in essence, pointing a finger at them for rushing ahead willy-nilly—dismiss his dystopic take as a cinematic cliché. Eric Schmidt, the executive chairman of Google’s parent company, put it this way: “Robots are invented. Countries arm them. An evil dictator turns the robots on humans, and all humans will be killed. Sounds like a movie to me.”

Some in Silicon Valley argue that Musk is interested less in saving the world than in buffing his brand, and that he is exploiting a deeply rooted conflict: the one between man and machine, and our fear that the creation will turn against us. They gripe that his epic good-versus-evil story line is about luring talent at discount rates and incubating his own A.I. software for cars and rockets. It’s certainly true that the Bay Area has always had a healthy respect for making a buck. As Sam Spade said in The Maltese Falcon, “Most things in San Francisco can be bought, or taken.”

Musk is without doubt a dazzling salesman. Who better than a guardian of human welfare to sell you your new, self-driving Tesla? Andrew Ng—the chief scientist at Baidu, known as China’s Google—based in Sunnyvale, California, writes off Musk’s Manichaean throwdown as “marketing genius.” “At the height of the recession, he persuaded the U.S. government to help him build an electric sports car,” Ng recalled, incredulous. The Stanford professor is married to a robotics expert, issued a robot-themed engagement announcement, and keeps a “Trust the Robot” black jacket hanging on the back of his chair. He thinks people who worry about A.I. going rogue are distracted by “phantoms,” and regards getting alarmed now as akin to worrying about overpopulation on Mars before we populate it. “And I think it’s fascinating,” he said about Musk in particular, “that in a rather short period of time he’s inserted himself into the conversation on A.I. I think he sees accurately that A.I. is going to create tremendous amounts of value.”

Although he once called Musk a “sci-fi version of P. T. Barnum,” Ashlee Vance thinks that Musk’s concern about A.I. is genuine, even if what he can actually do about it is unclear. “His wife, Talulah, told me they had late-night conversations about A.I. at home,” Vance noted. “Elon is brutally logical. The way he tackles everything is like moving chess pieces around. When he plays this scenario out in his head, it doesn’t end well for people.”

Eliezer Yudkowsky, a co-founder of the Machine Intelligence Research Institute, in Berkeley, agrees: “He’s Elon-freaking-Musk. He doesn’t need to touch the third rail of the artificial-intelligence controversy if he wants to be sexy. He can just talk about Mars colonization.”

Some sniff that Musk is not truly part of the whiteboard culture and that his scary scenarios miss the fact that we are living in a world where it’s hard to get your printer to work. Others chalk up OpenAI, in part, to a case of FOMO: Musk sees his friend Page building new-wave software in a hot field and craves a competing army of coders. As Vance sees it, “Elon wants all the toys that Larry has. They’re like these two superpowers. They’re friends, but there’s a lot of tension in their relationship.” A rivalry of this kind might be best summed up by a line from the vainglorious head of the fictional tech behemoth Hooli, on HBO’s Silicon Valley: “I don’t want to live in a world where someone else makes the world a better place better than we do.”

Musk’s disagreement with Page over the potential dangers of A.I. “did affect our friendship for a while,” Musk says, “but that has since passed. We are on good terms these days.”

Musk never had as close a personal connection with 32-year-old Mark Zuckerberg, who has become an unlikely lifestyle guru, setting a new challenge for himself every year. These have included wearing a tie every day, reading a book every two weeks, learning Mandarin, and eating meat only from animals he killed with his own hands. In 2016, it was A.I.’s turn.

Zuckerberg has moved his A.I. experts to desks near his own. Three weeks after Musk and Altman announced their venture to make the world safe from malicious A.I., Zuckerberg posted on Facebook that his project for the year was building a helpful A.I. to assist him in managing his home—everything from recognizing his friends and letting them inside to keeping an eye on the nursery. “You can think of it kind of like Jarvis in Iron Man,” he wrote.

One Facebooker cautioned Zuckerberg not to “accidentally create Skynet,” the military supercomputer that turns against human beings in the Terminator movies. “I think we can build A.I. so it works for us and helps us,” Zuckerberg replied. And clearly throwing shade at Musk, he continued: “Some people fear-monger about how A.I. is a huge danger, but that seems far-fetched to me and much less likely than disasters due to widespread disease, violence, etc.” Or, as he described his philosophy at a Facebook developers’ conference last April, in a clear rejection of warnings from Musk and others he believes to be alarmists: “Choose hope over fear.”

In the November issue of Wired, guest-edited by Barack Obama, Zuckerberg wrote that there is little basis beyond science fiction to worry about doomsday scenarios: “If we slow down progress in deference to unfounded concerns, we stand in the way of real gains.” He compared A.I. jitters to early fears about airplanes, noting, “We didn’t rush to put rules in place about how airplanes should work before we figured out how they’d fly in the first place.”

Zuckerberg introduced his A.I. butler, Jarvis, right before Christmas. With the soothing voice of Morgan Freeman, it was able to help with music, lights, and even making toast. I asked the real-life Iron Man, Musk, about Zuckerberg’s Jarvis, when it was in its earliest stages. “I wouldn’t call it A.I. to have your household functions automated,” Musk said. “It’s really not A.I. to turn the lights on, set the temperature.”

Zuckerberg can be just as dismissive. Asked in Germany whether Musk’s apocalyptic forebodings were “hysterical” or “valid,” Zuckerberg replied “hysterical.” And when Musk’s SpaceX rocket blew up on the launch pad in September, destroying a satellite Facebook was leasing, Zuckerberg coldly posted that he was “deeply disappointed.”

IV. A Rupture in History

Musk and others who have raised a warning flag on A.I. have sometimes been treated like drama queens. In January 2016, Musk won the annual Luddite Award, bestowed by a Washington tech-policy think tank. Still, he’s got some pretty good wingmen. Stephen Hawking told the BBC, “I think the development of full artificial intelligence could spell the end of the human race.” Bill Gates told Charlie Rose that A.I. was potentially more dangerous than a nuclear catastrophe. Nick Bostrom, a 43-year-old Oxford philosophy professor, warned in his 2014 book, Superintelligence, that “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.” And, last year, Henry Kissinger jumped on the peril bandwagon, holding a confidential meeting with top A.I. experts at the Brook, a private club in Manhattan, to discuss his concern over how smart robots could cause a rupture in history and unravel the way civilization works.

In January 2015, Musk, Bostrom, and a Who’s Who of A.I., representing both sides of the split, assembled in Puerto Rico for a conference hosted by Max Tegmark, a 49-year-old physics professor at M.I.T. who runs the Future of Life Institute, in Boston.

“Do you own a house?,” Tegmark asked me. “Do you own fire insurance? The consensus in Puerto Rico was that we needed fire insurance. When we got fire and messed up with it, we invented the fire extinguisher. When we got cars and messed up, we invented the seat belt, air bag, and traffic light. But with nuclear weapons and A.I., we don’t want to learn from our mistakes. We want to plan ahead.” (Musk reminded Tegmark that a precaution as sensible as seat belts had provoked fierce opposition from the automobile industry.)

Musk, who has kick-started the funding of research into avoiding A.I.’s pitfalls, said he would give the Future of Life Institute “10 million reasons” to pursue the subject, donating $10 million. Tegmark promptly gave $1.5 million to Bostrom’s group in Oxford, the Future of Humanity Institute. Explaining at the time why it was crucial to be “proactive and not reactive,” Musk said it was certainly possible to “construct scenarios where the recovery of human civilization does not occur.”

Six months after the Puerto Rico conference, Musk, Hawking, Demis Hassabis, Apple co-founder Steve Wozniak, and Stuart Russell, a computer-science professor at Berkeley who co-authored the standard textbook on artificial intelligence, along with 1,000 other prominent figures, signed a letter calling for a ban on offensive autonomous weapons. “In 50 years, this 18-month period we’re in now will be seen as being crucial for the future of the A.I. community,” Russell told me. “It’s when the A.I. community finally woke up and took itself seriously and thought about what to do to make the future better.” Last September, the country’s biggest tech companies created the Partnership on Artificial Intelligence to explore the full range of issues arising from A.I., including the ethical ones. (Musk’s OpenAI quickly joined this effort.) Meanwhile, the European Union has been looking into legal issues arising from the advent of robots and A.I.—such as whether robots have “personhood” or (as one Financial Times contributor wondered) should be considered more like slaves in Roman law.

At Tegmark’s second A.I. safety conference, last January at the Asilomar center, in California—chosen because that’s where scientists gathered back in 1975 and agreed to limit genetic experimentation—the topic was not so contentious. Larry Page, who was not at the Puerto Rico conference, was at Asilomar, and Musk noted that their “conversation was no longer heated.”

But while it may have been “a coming-out party for A.I. safety,” as one attendee put it—part of “a sea change” in the last year or so, as Musk says—there’s still a long way to go. “There’s no question that the top technologists in Silicon Valley now take A.I. far more seriously—that they do acknowledge it as a risk,” he observes. “I’m not sure that they yet appreciate the significance of the risk.”

Steve Wozniak has wondered publicly whether he is destined to be a family pet for robot overlords. “We started feeding our dog filet,” he told me about his own pet, over lunch with his wife, Janet, at the Original Hick’ry Pit, in Walnut Creek. “Once you start thinking you could be one, that’s how you want them treated.”

He has developed a policy of appeasement toward robots and any A.I. masters. “Why do we want to set ourselves up as the enemy when they might overpower us someday?” he said. “It should be a joint partnership. All we can do is seed them with a strong culture where they see humans as their friends.”

When I went to Peter Thiel’s elegant San Francisco office, dominated by two giant chessboards, Thiel, one of the original donors to OpenAI and a committed contrarian, said he worried that Musk’s resistance could actually be accelerating A.I. research because his end-of-the-world warnings are increasing interest in the field.

“Full-on A.I. is on the order of magnitude of extraterrestrials landing,” Thiel said. “There are some very deeply tricky questions around this . . . . If you really push on how do we make A.I. safe, I don’t think people have any clue. We don’t even know what A.I. is. It’s very hard to know how it would be controllable.”

He went on: “There’s some sense in which the A.I. question encapsulates all of people’s hopes and fears about the computer age. I think people’s intuitions do just really break down when they’re pushed to these limits because we’ve never dealt with entities that are smarter than humans on this planet.”

V. The Urge to Merge

Trying to puzzle out who is right on A.I., I drove to San Mateo to meet Ray Kurzweil for coffee at the restaurant Three. Kurzweil is the author of The Singularity Is Near, a Utopian vision of what an A.I. future holds. (When I mentioned to Andrew Ng that I was going to be talking to Kurzweil, he rolled his eyes. “Whenever I read Kurzweil’s Singularity, my eyes just naturally do that,” he said.) Kurzweil arrived with a Whole Foods bag for me, brimming with his books and two documentaries about him. He was wearing khakis, a green-and-red plaid shirt, and several rings, including one—made with a 3-D printer—that has an S for his Singularity University.

Computers are already “doing many attributes of thinking,” Kurzweil told me. “Just a few years ago, A.I. couldn’t even tell the difference between a dog and cat. Now it can.” Kurzweil has a keen interest in cats and keeps a collection of 300 cat figurines in his Northern California home. At the restaurant, he asked for almond milk but couldn’t get any. The 69-year-old eats strange health concoctions and takes 90 pills a day, eager to achieve immortality—or “indefinite extensions to the existence of our mind file”—which means merging with machines. He has such an urge to merge that he sometimes uses the word “we” when talking about super-intelligent future beings—a far cry from Musk’s more ominous “they.”

I mentioned that Musk had told me he was bewildered that Kurzweil doesn’t seem to have “even 1 percent doubt” about the hazards of our “mind children,” as robotics expert Hans Moravec calls them.

“That’s just not true. I’m the one who articulated the dangers,” Kurzweil said. “The promise and peril are deeply intertwined,” he continued. “Fire kept us warm and cooked our food and also burned down our houses . . . . Furthermore, there are strategies to control the peril, as there have been with biotechnology guidelines.” He summarized the three stages of the human response to new technology as Wow!, Uh-Oh, and What Other Choice Do We Have but to Move Forward? “The list of things humans can do better than computers is getting smaller and smaller,” he said. “But we create these tools to extend our long reach.”

Just as, two hundred million years ago, mammalian brains developed a neocortex that eventually enabled humans to “invent language and science and art and technology,” by the 2030s, Kurzweil predicts, we will be cyborgs, with nanobots the size of blood cells connecting us to synthetic neocortices in the cloud, giving us access to virtual reality and augmented reality from within our own nervous systems. “We will be funnier; we will be more musical; we will increase our wisdom,” he said, ultimately, as I understand it, producing a herd of Beethovens and Einsteins. Nanobots in our veins and arteries will cure diseases and heal our bodies from the inside.

He allows that Musk’s bête noire could come true. He notes that our A.I. progeny “may be friendly and may not be” and that “if it’s not friendly, we may have to fight it.” And perhaps the only way to fight it would be “to get an A.I. on your side that’s even smarter.”

Kurzweil told me he was surprised that Stuart Russell had “jumped on the peril bandwagon,” so I reached out to Russell and met with him in his seventh-floor office in Berkeley. The 54-year-old British-American expert on A.I. told me that his thinking had evolved and that he now “violently” disagrees with Kurzweil and others who feel that ceding the planet to super-intelligent A.I. is just fine.

Russell doesn’t give a fig whether A.I. might enable more Einsteins and Beethovens. One more Ludwig doesn’t balance the risk of destroying humanity. “As if somehow intelligence was the thing that mattered and not the quality of human experience,” he said, with exasperation. “I think if we replaced ourselves with machines that as far as we know would have no conscious existence, no matter how many amazing things they invented, I think that would be the biggest possible tragedy.” Nick Bostrom has called the idea of a society of technological awesomeness with no human beings a “Disneyland without children.”

“There are people who believe that if the machines are more intelligent than we are, then they should just have the planet and we should go away,” Russell said. “Then there are people who say, ‘Well, we’ll upload ourselves into the machines, so we’ll still have consciousness but we’ll be machines.’ Which I would find, well, completely implausible.”

Russell took exception to the views of Yann LeCun, who developed the forerunner of the convolutional neural nets used by AlphaGo and is Facebook’s director of A.I. research. LeCun told the BBC that there would be no Ex Machina or Terminator scenarios, because robots would not be built with human drives—hunger, power, reproduction, self-preservation. “Yann LeCun keeps saying that there’s no reason why machines would have any self-preservation instinct,” Russell said. “And it’s simply and mathematically false. I mean, it’s so obvious that a machine will have self-preservation even if you don’t program it in because if you say, ‘Fetch the coffee,’ it can’t fetch the coffee if it’s dead. So if you give it any goal whatsoever, it has a reason to preserve its own existence to achieve that goal. And if you threaten it on your way to getting coffee, it’s going to kill you because any risk to the coffee has to be countered. People have explained this to LeCun in very simple terms.”

Russell debunked the two most common arguments for why we shouldn’t worry: “One is: It’ll never happen, which is like saying we are driving towards the cliff but we’re bound to run out of gas before we get there. And that doesn’t seem like a good way to manage the affairs of the human race. And the other is: Not to worry—we will just build robots that collaborate with us and we’ll be in human-robot teams. Which begs the question: If your robot doesn’t agree with your objectives, how do you form a team with it?”

Last year, Microsoft shut down its A.I. chatbot, Tay, after Twitter users—who were supposed to make “her” smarter “through casual and playful conversation,” as Microsoft put it—instead taught her how to reply with racist, misogynistic, and anti-Semitic slurs. “bush did 9/11, and Hitler would have done a better job than the monkey we have now,” Tay tweeted. “donald trump is the only hope we’ve got.” In response, Musk tweeted, “Will be interesting to see what the mean time to Hitler is for these bots. Only took Microsoft’s Tay a day.”

With Trump now president, Musk finds himself walking a fine line. His companies count on the U.S. government for business and subsidies, regardless of whether Marcus Aurelius or Caligula is in charge. Musk’s companies joined the amicus brief against Trump’s executive order regarding immigration and refugees, and Musk himself tweeted against the order. At the same time, unlike Uber’s Travis Kalanick, Musk has hung in there as a member of Trump’s Strategic and Policy Forum. “It’s very Elon,” says Ashlee Vance. “He’s going to do his own thing no matter what people grumble about.” He added that Musk can be “opportunistic” when necessary.

I asked Musk about the flak he had gotten for associating with Trump. In the photograph of tech executives with Trump, he had looked gloomy, and there was a weary tone in his voice when he talked about the subject. In the end, he said, “it’s better to have voices of moderation in the room with the president. There are a lot of people, kind of the hard left, who essentially want to isolate—and not have any voice. Very unwise.”

VI. All About the Journey

Eliezer Yudkowsky is a highly regarded 37-year-old researcher who is trying to figure out whether it’s possible, in practice and not just in theory, to point A.I. in any direction, let alone a good one. I met him at a Japanese restaurant in Berkeley.

“How do you encode the goal functions of an A.I. such that it has an Off switch and it wants there to be an Off switch and it won’t try to eliminate the Off switch and it will let you press the Off switch, but it won’t jump ahead and press the Off switch itself?” he asked over an order of surf-and-turf rolls. “And if it self-modifies, will it self-modify in such a way as to keep the Off switch? We’re trying to work on that. It’s not easy.”

I babbled about the heirs of Klaatu, HAL, and Ultron taking over the Internet and getting control of our banking, transportation, and military. What about the replicants in Blade Runner, who conspire to kill their creator? Yudkowsky held his head in his hands, then patiently explained: “The A.I. doesn’t have to take over the whole Internet. It doesn’t need drones. It’s not dangerous because it has guns. It’s dangerous because it’s smarter than us. Suppose it can solve the science technology of predicting protein structure from DNA information. Then it just needs to send out a few e-mails to the labs that synthesize customized proteins. Soon it has its own molecular machinery, building even more sophisticated molecular machines.

“If you want a picture of A.I. gone wrong, don’t imagine marching humanoid robots with glowing red eyes. Imagine tiny invisible synthetic bacteria made of diamond, with tiny onboard computers, hiding inside your bloodstream and everyone else’s. And then, simultaneously, they release one microgram of botulinum toxin. Everyone just falls over dead.

“Only it won’t actually happen like that. It’s impossible for me to predict exactly how we’d lose, because the A.I. will be smarter than I am. When you’re building something smarter than you, you have to get it right on the first try.”

I thought back to my conversation with Musk and Altman. Don’t get sidetracked by the idea of killer robots, Musk said, noting, “The thing about A.I. is that it’s not the robot; it’s the computer algorithm in the Net. So the robot would just be an end effector, just a series of sensors and actuators. A.I. is in the Net . . . . The important thing is that if we do get some sort of runaway algorithm, then the human A.I. collective can stop the runaway algorithm. But if there’s large, centralized A.I. that decides, then there’s no stopping it.”

Altman expanded upon the scenario: “An agent that had full control of the Internet could have far more effect on the world than an agent that had full control of a sophisticated robot. Our lives are already so dependent on the Internet that an agent that had no body whatsoever but could use the Internet really well would be far more powerful.”

Even robots with a seemingly benign task could indifferently harm us. “Let’s say you create a self-improving A.I. to pick strawberries,” Musk said, “and it gets better and better at picking strawberries and picks more and more and it is self-improving, so all it really wants to do is pick strawberries. So then it would have all the world be strawberry fields. Strawberry fields forever.” No room for human beings.

But can they ever really develop a kill switch? “I’m not sure I’d want to be the one holding the kill switch for some superpowered A.I., because you’d be the first thing it kills,” Musk replied.

Altman tried to capture the chilling grandeur of what’s at stake: “It’s a very exciting time to be alive, because in the next few decades we are either going to head toward self-destruction or toward human descendants eventually colonizing the universe.”

“Right,” Musk said, adding, “If you believe the end is the heat death of the universe, it really is all about the journey.”

The man who is so worried about extinction chuckled at his own extinction joke. As H. P. Lovecraft once wrote, “From even the greatest of horrors irony is seldom absent.”

Photos: Tech C.E.O.s Who Have The Most to Lose in a Trump Presidency

Jeff Bezos: The C.E.O. of e-commerce and delivery giant Amazon and the owner of The Washington Post has already sparred with Trump. But Trump could come after Bezos for anti-trust issues, too: Trump is on the record as saying Amazon “is controlling so much of what they are doing.” The fact that The Washington Post has been reporting on Trump, often critically, probably does not endear Bezos to Trump, either.

Tim Cook: Trump has repeatedly criticized Apple for making its products overseas, and has called on the company to “start building their damn computers and things” in America. Cook must also contend with tariffs that will inevitably arise if Trump gets the U.S. into a trade war with China. And then there’s the fact that Trump denounced Apple in 2016 for refusing a court order to cooperate with an F.B.I. request to unlock an iPhone belonging to one of the shooters in the San Bernardino terrorist attack last year.

Jack Dorsey: Twitter, already a tech company struggling with employee retention and a falling stock price, has been forced to contend with its role in handing Trump a megaphone to spout his opinions, whether those include attacking a union leader or merely suggesting the U.S. stock up on nuclear arms. Dorsey was also excluded by Trump from the tech summit at Trump Tower in December, reportedly as retribution for not allowing the Trump team to use an emoji-fied version of the #CrookedHillary hashtag. Sad!

Mark Zuckerberg: Trump’s favorite golden boy in Silicon Valley, Peter Thiel, is both an early Facebook investor and a member of Facebook’s board, which bodes well for the company’s ties to the president-elect. But Trump could also change immigration laws in a way that affects Facebook’s ability to hire highly skilled employees. Earlier this year, Zuckerberg and others in the tech community signed onto a brief submitted to the Supreme Court in favor of Obama’s executive actions, arguing that more immigration benefits the tech industry and the country. Trump appears to disagree.

Marc Lore (Jet.com): E-commerce companies like Jet.com could become victims of a Chinese trade war. Trump has threatened to add tariffs of 45 percent on Chinese exports. “We can’t continue to allow China to rape our country, and that’s what they’re doing,” he told supporters earlier in 2016. Trump’s proposed solution could make foreign-made goods—which comprise the bulk of e-commerce products—vastly more expensive.

Josh Kushner: Jared Kushner’s brother, Josh, runs a healthcare start-up in New York called Oscar Health—which just so happens to be built on the back of the Obamacare exchanges that Trump, Jared’s father-in-law, has threatened to destroy. The company, which is reportedly bleeding money, is now pivoting its business model to focus on narrow networks and roll-out plans to small and large businesses, moving away from plans connected to the Affordable Care Act.

Elon Musk: Though Musk and Trump ally Peter Thiel are close—they helped co-found PayPal together, and made their respective first millions of dollars off of it—two of Musk’s companies may be in a precarious situation under a Trump administration. Shareholders in both SolarCity and Tesla Motors now must consider what Trump could do to federal clean-energy tax credits and subsidies, which both companies currently receive. Current electric-car and solar-energy subsidies will expire under Trump’s tenure, and aren’t likely to be renewed.


AMSAT Fox Series Launch Schedule Update

The launches of AMSAT satellites Fox-1Cliff and Fox-1D have been rebooked from the original Spaceflight Formosat-5/Sherpa mission aboard a SpaceX Falcon 9 onto two separate new launches.

Fox-1D will now ride to orbit on an Indian PSLV vehicle scheduled to launch from Satish Dhawan Space Centre in Sriharikota, India in late 2017.

Fox-1Cliff will launch on Spaceflight’s SSO-A dedicated rideshare mission aboard a SpaceX Falcon 9 scheduled to launch from Vandenberg Air Force Base in California in late 2017 or early 2018.

These moves will serve to expedite the launch of these two satellites, both of which carry an amateur radio U/v FM repeater and an experimental L/v FM repeater.  The satellites also carry scientific experiments from university partners Penn State, Vanderbilt University ISDE, Virginia Tech, and the University of Iowa.

In addition to the launch of Fox-1Cliff and Fox-1D, AMSAT is awaiting the launches of RadFxSat and RadFxSat-2. RadFxSat is currently manifested for launch on August 29, 2017 aboard the ELaNa XIV mission, as a secondary payload with the Joint Polar Satellite System (JPSS)-1 on a Delta II from Vandenberg Air Force Base, California. RadFxSat-2 will be launched by Virgin Galactic on their LauncherOne air launch system from Mojave, CA on the ELaNa XX mission no earlier than December 2017.

New study: Furfuryl alcohol market boosted by public nature of space race between SpaceX and …

The Furfuryl Alcohol Market deals with the development, manufacture and distribution of the organic compound known as furfuryl alcohol. Furfural is an organic compound which is derived from agricultural material like sawdust, oats, wheat bran and sugarcane bagasse.

The catalytic reduction or hydrogenation of furfural leads to furfuryl alcohol production.
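
As a point of reference (this chemistry note is not from the release itself), the hydrogenation simply adds hydrogen across furfural’s aldehyde group, classically over a copper chromite catalyst:

C5H4O2 (furfural) + H2 → C5H6O2 (furfuryl alcohol)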

Furfural manufacturers have even made a place for themselves in the rocket industry, as furfuryl alcohol can serve as a fuel that ignites spontaneously on contact with fuming nitric acid, including the red fuming variety.

The Furfuryl Alcohol Market has also made space for itself in the manufacturing of materials like solvents, plastics, and adhesives.

Check complete report @
www.marketintelreports.com/report/hjr2133/global-furfuryl-alcohol-industry-market-research-2017

Scope & Regional Forecast of the Furfuryl Alcohol Market

A few factors have emerged as the primary growth drivers for the Furfuryl Alcohol Market. Climate-change fears have led numerous governments across the world to tighten environmental regulations on manufacturing processes and waste disposal.

This has led to an increase in awareness and demand for bio-based products like furan resin. Furfural manufacturers are also making sure that furfuryl alcohol production has a minimal carbon footprint.

In recent years, furfuryl alcohol suppliers have also witnessed a spurt in demand from the plastics industry. A global rise in the construction and paints industries has had a very positive effect on the demand for plastics.

Bio-based products like furan resin are also growing popular in developed countries, many of which have intensified their focus on industrial biotechnology applications in response to those same climate concerns.

As mentioned above, furfuryl alcohol suppliers have identified the rocket industry as a potential avenue for growth. The advances being made by private players like SpaceX and Blue Origin have triggered a second space race.

Rocket-fuel research and innovation have never been bigger than they are right now, and the Furfuryl Alcohol Market stands to benefit thanks to the compound’s environmentally friendly profile.

A sample brochure of the report is available @ www.marketintelreports.com/pdfdownload.php?id=hjr2133

Asia-Pacific has dominated the Furfuryl Alcohol Market in terms of both production and consumption in recent years, owing to the massive manufacturing bases located across the region. North America and Europe follow close behind in terms of demand.

Segmentations & Key Players involved in the Furfuryl Alcohol Market

According to the report’s findings, the Furfuryl Alcohol Market can be segmented on the basis of:

Application: Resins, Solvents, Plastics, Adhesives and Others.

Industry Vertical: Chemical Industry, Metals Industry and Others.

Some of the key players involved in the Furfuryl Alcohol Market, according to the report, are as follows:

  • Continental Industries Group, Inc.
  • Hongye Chemical Co., Ltd.
  • Shenzen Shu Hang Industrial Development Co. Ltd
  • SweetLake Chemical Ltd.
  • Novasynorganics


Order a copy of Global Market Research Report @
 www.marketintelreports.com/purchase.php?id=hjr2133


5 companies that will sell you a ticket to space

A recent revival of interest in space travel has made commercial space flight a reality for the wealthy and adventurous.

Companies such as SpaceX and Blue Origin have become major driving forces behind space exploration and the advancement of rocket design.

These companies monetise spaceflight through deals such as SpaceX’s partnership with Iridium, under which SpaceX launches the satellites for Iridium’s global communications network.

These companies also offer commercial flights and, although the prices are prohibitive for most would-be tourists, wealthy clients can even book a trip around the moon.

Below are five companies that will sell you a return trip into space.


Virgin Galactic

Space tourists can book a flight on Virgin Galactic’s SpaceShipTwo for around $250,000.

Celebrities and scientists have booked a trip on the spacecraft, although SpaceShipTwo is currently not ready for commercial flight.

The vehicle is carried to its launch altitude by Virgin Galactic’s White Knight Two twin-fuselage carrier aircraft, then released to fly a suborbital trajectory.

Virgin Galactic SpaceShipTwo


SpaceX

SpaceX has agreed to fly a pair of tourists around the moon in 2018.

No price was mentioned, but the company said the tourists have paid a significant deposit for the mission.

SpaceX will use a Dragon 2 spacecraft and Falcon Heavy rocket for the 2018 mission.

While Virgin’s spaceflight project only takes tourists on suborbital hops, SpaceX has agreed to transport the pair into deep space – marking the first human return to deep space in 45 years.

SpaceX logo on rocket


Blue Origin

Blue Origin lets any interested applicant sign up for a suborbital journey to the edge of space.

The company’s website outlines its astronaut experience, which involves a day of training followed by a brief jaunt over the Kármán line into space.

Blue Origin has not announced pricing or a launch timetable for its programme, but you can sign up for updates.

Blue Origin


Space Adventures

Space Adventures was founded in 1998 with the vision of providing opportunities for space tourism.

The company partners with various launch providers and programmes to send private citizens into space and allow them to partake in space walks and zero-gravity activities.

There is no set price for a ticket with Space Adventures, as the cost depends on which partners and resources are used for a given trip.

Space walk


World View

World View plans to send private citizens to the edge of the atmosphere using a giant balloon.

This method requires no special training and is notably cheaper than conventional methods of space travel, with an Early Bird ticket costing $75,000.

Of course, floating to the edge of the atmosphere in a balloon will not let you experience a zero-gravity environment, but the view of the curved Earth against a dark sky comes close to what you would see from orbit.

World View Voyager Capsule


Students to Plan Moon Base for Deep-Space Exploration

Humans have set foot on the moon and may one day walk on Mars, but to push farther into space we will likely need a pit stop. With that in mind, 32 students from around the world will meet up at Caltech from March 26–31 for the 2017 Caltech Space Challenge, a competition to design a launch-and-supply station—dubbed Lunarport—for future space missions. The event is organized by the Graduate Aerospace Laboratories of the California Institute of Technology (GALCIT) to help mentor the next generation of aerospace engineers.

During the weeklong biennial event, the students—a mix of graduate and undergraduate—are divided into two teams, each of which has just five days to create a fresh design to tackle an upcoming space-exploration challenge. At the first Caltech Space Challenge in 2011, the teams were tasked with exploring an asteroid and returning with a sample of rock or ice. In 2013, the teams designed campaigns to land humans on a martian moon. That year, the winning team proposed a robotic precursor mission followed up by a three-astronaut exploration of both of Mars’ moons, Phobos and Deimos. And at the most recent Caltech Space Challenge, in 2015, the students planned a mission to an asteroid that had been brought into lunar orbit, to extract its resources and demonstrate how they could be used.

The goal of every competition is to present students with a challenge that humanity is expected to face in the not-too-distant future. For example, a station like the Lunarport, if constructed someday, would provide a staging facility for heavy payloads, at which rockets could be refueled to continue their journey to deep space.
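
To see why such a pit stop helps, consider the ideal rocket equation (a standard result, not something taken from the Caltech announcement): the velocity change a rocket can deliver is Δv = ve · ln(m0/m1), where ve is the exhaust velocity, m0 the fully fueled mass, and m1 the dry mass at burnout. Because Δv grows only logarithmically with the propellant fraction, hauling all the fuel for a deep-space trip up from Earth in a single launch quickly becomes impractical; refueling at a depot such as Lunarport effectively resets the m0/m1 ratio partway through the journey.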

While working on the challenge, the students will also receive expert guidance via lectures from engineers at Orbital ATK, Blue Origin, the Jet Propulsion Laboratory (JPL, which Caltech manages for NASA), and other organizations. At the end of the week, each team will present its solution, and a winner will be selected by a jury of industry experts.

This year, 806 students applied to participate in the event—more than the combined number of applicants for the three prior Caltech Space Challenges. The 32 successful applicants come from 14 different countries on four continents.

This year’s Caltech Space Challenge is being organized by Caltech graduate students Ilana Gat (MS ’14) and Thibaud Talon (MS ’14). The Caltech faculty advisers are Paul Dimotakis (BS ’68, MS ’69, PhD ’73), the John K. Northrop Professor of Aeronautics and professor of applied physics; Jakob van Zyl (MS ’83, PhD ’86), senior faculty associate in electrical engineering and aerospace, lecturer in electrical engineering, and director for solar system exploration at JPL; and Anthony Freeman, lecturer in aerospace and manager of the JPL Innovation Foundry. The event is supported by Caltech and its Division of Engineering and Applied Science, JPL, the Keck Institute for Space Studies, and Caltech’s Moore-Hufstedler Fund. Its corporate sponsors include Airbus, Microsoft, Orbital ATK, Northrop Grumman, Blue Origin, Boeing, Lockheed Martin, Schlumberger, and Honeybee Robotics.