It is an inevitability that cryptographers dread: the arrival of powerful quantum computers that can break the security of the Internet. Although these devices are thought to be a decade or more away, researchers are adamant that preparations must begin now.
Computer-security specialists are meeting in Germany this week to discuss quantum-resistant replacements for today’s cryptographic systems—the protocols used to scramble and protect private information as it traverses the web and other digital networks. Although today’s hackers can, and often do, steal private information by guessing passwords, impersonating authorized users or installing malicious software on computer networks, existing computers are unable to crack standard forms of encryption used to send sensitive data over the Internet.
But on the day that the first large quantum computer comes online, some widespread and crucial encryption methods will be rendered obsolete. Quantum computers exploit the laws that govern subatomic particles to solve certain problems, such as factoring the large numbers that underpin today's public-key encryption, far faster than any classical machine.
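The threat is concrete: widely used public-key schemes such as RSA rest on the difficulty of factoring large numbers, which is exactly the problem Shor's algorithm solves efficiently on a quantum computer. A toy sketch (with deliberately tiny primes; real keys use numbers hundreds of digits long) shows why factoring the public modulus breaks everything:

```python
# Toy RSA key generation, encryption and decryption. The primes here
# are deliberately tiny for illustration; real RSA uses primes hundreds
# of digits long, chosen so that factoring n is infeasible classically.

p, q = 61, 53                 # secret primes
n = p * q                     # public modulus: 3233
phi = (p - 1) * (q - 1)       # Euler's totient of n: 3120
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent (requires Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # anyone can encrypt with (e, n)
recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
print(recovered)  # 42

# An attacker who factors n into p and q can recompute phi and d and
# read every message. That factoring step is what a large quantum
# computer running Shor's algorithm would make easy.
```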
“I’m genuinely worried we’re not going to be ready in time,” says Michele Mosca, co-founder of the Institute for Quantum Computing (IQC) at the University of Waterloo in Canada and chief executive of evolutionQ, a cyber-security consulting company.
It will take years for governments and industry to
It looks like the holiday season came early for Google’s Quantum Artificial Intelligence Lab.
Google, NASA, and the Universities Space Research Association announced today that they’re getting the D-Wave 2X, the newest and most powerful quantum computer on the market. The 2X doubles the number of qubits (a qubit is a unit of quantum information analogous to a classical bit) from D-Wave’s previous model, to 1,000, and operates at 15 millikelvin (very, very, very cold). Under the seven-year agreement, D-Wave will also supply Google with any updated models of the machine it produces.
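The parenthetical definition of a qubit can be made concrete with a few lines of linear algebra. This sketch uses the gate model for simplicity (D-Wave's machines are quantum annealers, a different design, but the core idea is shared): a qubit's state is a vector of two complex amplitudes, and measuring it yields 0 or 1 with probabilities given by the squared magnitudes.

```python
import numpy as np

# A classical bit is 0 or 1; a qubit holds two complex amplitudes.
zero = np.array([1, 0], dtype=complex)            # the |0> state
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

qubit = hadamard @ zero                # equal superposition of 0 and 1
probs = np.abs(qubit) ** 2             # measurement probabilities [0.5, 0.5]

# Simulate 1,000 measurements: each collapses the state to 0 or 1.
rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1000, p=probs)
print(np.round(probs, 3), samples.mean())   # roughly half 0s, half 1s
```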
The new machine will continue the work now being done in Google’s lab: optimization problems and machine learning. Time on the D-Wave will be shared among all the partners.
Quantum computing is a tricky business. Beyond the general premise of harnessing the laws of quantum physics, randomness and all, it’s not entirely clear how much faster quantum computing currently is than classical computing. That uncertainty stems mainly from a study coauthored by Matthias Troyer, a prominent physicist, which found that D-Wave’s quantum computer did not outperform traditional computers on key benchmarks. Also part of the study was physicist John Martinis, who was
In the age of social media, texting, mobile e-commerce and video streaming, it’s easy to overlook one experience that hasn’t gotten better for smartphone users: talking on the phone.
Despite sophisticated smartphones and networks, many mobile users are not satisfied with call clarity. None of the 100-plus smartphones in Consumer Reports’ 2014 phone ratings earned better than a good score for voice quality. A large number of smartphones rated only as “fair.”
In large part that is because device makers often shrink, flatten and cover speakers in plastic to improve their phones’ overall functionality. Even on a high-end smartphone that uses several microphones and noise-cancellation algorithms, a caller is not guaranteed clear sound, especially in noisy environments.
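The kind of problem noise-cancellation algorithms attack can be illustrated with spectral subtraction, a classic and much simpler suppression technique: estimate the noise floor in the frequency domain and subtract it, keeping only components that stand out, such as the harmonics of a voice. Everything below is an invented toy signal, not any phone maker's actual method:

```python
import numpy as np

# Simplified spectral subtraction on a toy "voice" buried in noise.
rate = 8000
t = np.arange(rate) / rate
voice = np.sin(2 * np.pi * 220 * t)          # stand-in for a speech tone
rng = np.random.default_rng(1)
noisy = voice + 0.5 * rng.standard_normal(rate)

spectrum = np.fft.rfft(noisy)
mag, phase = np.abs(spectrum), np.angle(spectrum)
noise_floor = np.median(mag)                 # crude noise-level estimate
clean_mag = np.maximum(mag - noise_floor, 0) # subtract it, clamp at zero
cleaned = np.fft.irfft(clean_mag * np.exp(1j * phase), n=rate)

# The cleaned signal should sit closer to the original voice.
print(np.mean((cleaned - voice) ** 2) < np.mean((noisy - voice) ** 2))
```

Real systems are far more sophisticated (they track the noise estimate over time and avoid the "musical noise" artifacts this crude version produces), but the separate-then-subtract idea is the common starting point.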
Change is happening slowly, but promising new technologies are on the horizon. Start-up Cypher Corp. has built an artificial-intelligence engine that analyzes the unique qualities that distinguish the human voice from other noises. Cypher could be bundled into new smartphones or deployed as a software update, says John Yoon, the company’s vice president of product design. Yoon says demos of the software are currently running on a variety of Android handsets—including LG, Samsung, Kyocera, Korea Telecom and Google Nexus—when it becomes publicly
Today, 99 percent of our transoceanic data traffic—including phone calls, text and e-mail messages, Web sites, digital images and video, and even some television—travels across the oceans via undersea cables. It is these cable systems, not satellites, that carry almost all intercontinental Internet traffic. In her new book, The Undersea Network, New York University assistant professor of media, culture and communication Nicole Starosielski tracks submarine systems as they thread together small islands and major urban hubs, surface amid conflicts at coastal landing points, and pass through Cold War–era cable stations.
In this excerpt Starosielski visits the network operations centers where global cable systems are monitored and maintained by a small group of elite engineers.
Excerpted with permission from The Undersea Network, by Nicole Starosielski. Available from Duke University Press. All rights reserved. Copyright 2015, by Nicole Starosielski.
Gateway: From Cable Colony to Network Operations Center
Entering the network operations center of a globe-spanning undersea cable system, I find what you might expect: a room dominated by computer screens, endless information feeds of network activity, and men carefully monitoring the links that carry Internet traffic in and out of the country. At first glance, it seems to be a place of mere supervision, where the humans sit around and
Virtual voice-controlled assistants such as Siri, Cortana and Google Now are magical. You can say things such as “Will I need an umbrella in Dallas this weekend?” or “What flights are overhead?”—or even jokey things like “Is Santa Claus real?” Each time, you get an accurate (or witty) answer.
Behind the scenes, though, all their responses are scripted in advance by writers and programmers. (In fact, Apple employs a team of comedy writers exclusively to draft Siri’s wisecracks.) Their underlying software is still, in essence, a passel of if/then statements.
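That "passel of if/then statements" is meant almost literally. A minimal sketch of the scripted pattern (the rules and replies here are invented for illustration, not Siri's actual code) shows both how the canned answers work and why they are brittle:

```python
# A scripted, rule-based assistant: every query must match a pattern
# that a writer anticipated in advance. Rules and replies are invented.

def scripted_assistant(query: str) -> str:
    q = query.lower()
    if "umbrella" in q or "weather" in q:
        return "Looks like rain in Dallas this weekend."   # canned answer
    elif "flights" in q and "overhead" in q:
        return "I see 12 aircraft above you right now."
    elif "santa" in q:
        return "I believe in the spirit of giving."        # pre-written joke
    return "Sorry, I don't understand."                    # no rule matched

print(scripted_assistant("Is Santa Claus real?"))
print(scripted_assistant("Where does my sister live?"))
# The second query falls through to the fallback: nobody scripted it.
```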
Soon, though, your voice assistant will be much, much smarter. After leaving Apple, three of Siri’s creators—Dag Kittlaus, Adam Cheyer and Chris Brigham—started a company called Viv Labs.
Whereas a Siri or a Cortana might know how to handle requests about weather, sports and about 20 other areas, Viv’s knowledge and vocabulary will be extensible and effectively unlimited. It will tap into the databases of thousands of online services—stores, flight-booking sites, car-sharing services, flight trackers, restaurants, florists, dating sites—and understand how they all fit together.
“You can ask Siri, ‘Where does my sister live?’ and ‘What’s the weather in Boston?’” Cheyer explained to me, “but you can’t say, ‘What’s the weather where my sister lives?’
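The cross-domain chaining Cheyer describes can be sketched with two stub services. The data and function names below are invented for illustration, not Viv's actual API; the point is simply that the second call consumes the first call's answer:

```python
# Hypothetical stubs standing in for two online services.
CONTACTS = {"sister": {"name": "Anna", "city": "Boston"}}
WEATHER = {"Boston": "snow, 28F", "Dallas": "sunny, 74F"}

def where_does_relative_live(relation: str) -> str:
    """Contacts service: resolve a relative to a city."""
    return CONTACTS[relation]["city"]

def weather_in(city: str) -> str:
    """Weather service: look up conditions for a city."""
    return WEATHER[city]

# A compositional assistant plans: first resolve the sister's city,
# then feed that result into the weather service.
city = where_does_relative_live("sister")
print(weather_in(city))  # answers "What's the weather where my sister lives?"
```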
We have trained machines to dream, but can we make them follow their dreams of moving to New York to become a fashion designer? In a world where startups use algorithms to find the next fashion trends, it’s not impossible that computers could go beyond finding what’s popular, and start creating it. Stitch Fix, a fashion startup that aims to provide a personal shopping experience remotely, already uses machine learning to understand its customers’ tastes. Last week, Stitch Fix data scientist TJ Torres explored the potential future of computer-generated clothing designs.
Titled “Deep Style: Inferring the Unknown to Predict the Future of Fashion,” Torres’ post details a process not unlike that used by Google this summer to generate those really freaky images with dog faces everywhere. The core technology is an artificial neural network that can be trained to recognize a specific object by analyzing pictures of it; gradually the computer builds up its own picture of what the object looks like. Sometimes the computer’s results are spot on. Sometimes it misidentifies Yoda from Star Wars as a giraffe. It’s a kind of machine learning, and even the bad answers are informative.
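The training loop behind such networks can be shrunk to a few lines. This is a minimal stand-in, a single "neuron" trained by gradient descent on invented four-pixel "images", not the deep networks Google or Stitch Fix use, but the principle of gradually building up a picture of the target concept from examples is the same:

```python
import numpy as np

# A single-neuron "network" learns to recognize one visual concept.
# The 2x2 "images" are invented toy data; label 1 = "top row is bright."
rng = np.random.default_rng(0)

def make_image(bright_top):
    img = rng.uniform(0.0, 0.3, size=4)   # flattened 2x2 image
    if bright_top:
        img[:2] += 0.7                    # pixels 0-1 are the top row
    else:
        img[2:] += 0.7                    # pixels 2-3 are the bottom row
    return img

X = np.array([make_image(i % 2 == 0) for i in range(200)])
y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(200)])

w, b = np.zeros(4), 0.0
for _ in range(500):                      # plain gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))    # sigmoid predictions
    w -= 0.1 * (X.T @ (p - y)) / len(X)
    b -= 0.1 * (p - y).mean()

# The learned weights are the network's own "picture" of the concept:
# positive on the top-row pixels, negative on the bottom row.
print(np.round(w, 2))

pred = 1 / (1 + np.exp(-(make_image(True) @ w + b)))
print(pred > 0.5)   # it recognizes a fresh bright-top image
```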
For fashion, Torres took neural