Xbox’s big boss Phil Spencer isn’t feeling particularly positive about cross-platform gaming with the PlayStation 4, as Sony doesn’t seem to be getting keen on the idea.
Microsoft has been working on expanding cross-platform play to both Sony and Nintendo, notably for Minecraft, but Spencer, in an interview with GameSpot, noted that only the latter has been keen to play nice with Redmond’s Xbox.
“The relationship with Nintendo on this front has been strong. They’ve been great supporters and we continue to collaborate with them,” he said.
But Spencer isn’t so convinced that Sony is up for cross-platform gaming, despite Microsoft’s dialogue with its gaming rival.
“We talk to Sony all the time. With Minecraft on PlayStation, we have to be one of the biggest games on their platform in terms of sales and gameplay,” he said. “But I think Sony’s view is different. They should talk about what their view is…”
Sony has previously said it didn’t want to go in for cross-platform gaming with Minecraft because it would need to relinquish some control over how it looks after its online gamer base. Moreover, in a cross-platform environment it couldn’t manage any problems that crop up, such as bullying between gamers and the exposure of young children who play Minecraft to the toxic attitudes and behaviours of some older gamers.
While Spencer is not exactly hopeful that Sony will change its stance, he was careful to point out to the GameSpot interviewer that he can’t speak on Sony’s behalf and that some day PlayStation gamers may be able to play online with Xbox One users. And he’s a big advocate of cross-platform gaming in general, which leads us to suspect we’ll see more Xbox Live and Windows 10 PC cross-play games before too long.
“I think people look at [cross-play] and say is it better for gamers. If it’s better for gamers, I have a hard time thinking why we shouldn’t go do this, especially when you’re trying to make the gaming business a bigger business; grow it, get more games, create more opportunity,” explained Spencer.
Perception is everything with technology. When reports surfaced that the latest operating system for iPhones, called iOS 11, was making older phones slower, I had to wonder. Would Apple purposefully make an older iPhone slower to make people want to upgrade? Is there a conspiracy intended to line the coffers of the most famous company in tech?
Then I installed iOS 11 on an older iPhone 6. It actually seemed faster to me.
I ran multiple apps, including the Chrome browser, the Gmail app, Outlook, and several others. I even tested the game Infinity Blade. In all of my tests, the iPhone 6 seemed to run about the same. In fact, I swear it seemed just a hair faster for some Apple apps, like Mail.
Last week, my results were confirmed by Futuremark, which makes benchmarking software. After running performance tests on older models, the company concluded the reported slowdown is likely a matter of user perception–the phones run at roughly the same speed. A small note about the testing suggested that some of the latest features–perhaps those that depend most on the processor, such as multitasking or gaming–run a tad slower.
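The distinction between perceived and measured speed is easy to demonstrate. Here is a minimal benchmarking sketch in Python; the `workload` function is a hypothetical stand-in for whatever an app actually does, and the median of several runs is used so a single slow outlier doesn’t skew the figure:

```python
import statistics
import time

def benchmark(fn, runs=20):
    """Time a callable over several runs; return the median in milliseconds.

    The median damps one-off outliers (a background task, a garbage
    collection pause) that would skew a mean."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)

# Hypothetical workload standing in for an app launch or a photo swipe.
def workload():
    sum(i * i for i in range(50_000))

print(benchmark(workload) > 0)  # → True (a positive median time in ms)
```

Run the same measurement before and after an update and the numbers, unlike impressions, can be compared directly.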
Why the misinformation about older iPhones slowing down?
Here’s my theory.
Users are likely comparing the new iOS on their phone–since it is a free download and is easy for anyone to install–to how it runs on a newer iPhone. Yet, that’s not really fair. Apple makes no claims about iOS 11 speeding up an older phone, and a newer phone will run faster. The same apps on an iPhone 8 run much faster with iOS 11 than they do on an iPhone 6. After a user installs iOS 11 on an older phone, he or she might be comparing the suddenly “sluggish” phone to a newer model at the Apple store or that a friend uses.
To use a car example, that’s like using a higher octane fuel in an older Mazda Miata and then complaining about how slow it is compared to a new Miata. But the speed is dictated by the fact that the older Miata has around a 128-horsepower engine. The new model has a 155-horsepower engine. Changing the fuel isn’t going to make the older model seem sporty, but it might seem like the car feels slower if you expected a change in performance.
This is where the analogy starts to break down. An older iPhone actually does get a little faster for some of the most common Apple apps. I tested the Photos app, and it definitely lets you swipe through photos a bit faster after loading iOS 11. And, perhaps because Apple has improved Wi-Fi and Bluetooth handling, my older phone connected faster, too.
If your phone does feel more sluggish, there are a few things to try. One is to free up memory by closing a few apps and deleting a few files. Every operating system likes to have room to breathe. Also, make sure you reboot the phone. That can work wonders, and I’ve heard of a few friends who thought iOS 11 seemed faster after a reboot.
Your perceptions will surely change once you know the facts. If you still think iOS 11 makes an older phone slower, try driving a Miata from 2007. It’s slower than the sunrise.
During his current trip to Europe, Apple’s Tim Cook sat down with The Independent for a wide-ranging interview. The primary focus of the talk was on ARKit, how Apple has implemented it in iOS, and where else augmented reality could be useful…
“The way that you get lots of great ideas is for us to do the heavy lifting of the complexity of locational things and software, and put those in the operating system,” says Cook. “And then you have all the developers that are able to put their energy into their passion.”
The ecosystem further helps Apple in competing with other smartphone manufacturers, Greg Joswiak says:
“Our competitors are trying to mimic what we’ve done,” says Greg Joswiak, Apple’s vice president for iOS, iPad and iPhone marketing. “But they just don’t have that scale we bring to it.”
Cook also noted that Apple has an advantage in that it controls both the hardware and the software of the iPhone, a level of control that competitors don’t have:
That gives Apple an especially strong position because its competitors “don’t control the hardware and software”, Cook says. “It goes to what Apple is about – the integration of those two things, with the App Store on the server side. I think it’s going to be hard for other folks.”
The conversation then shifted primarily to augmented reality in general. Cook likened AR’s effect to that of the App Store, saying that it will be just as “dramatic” as the App Store was for mobile technology. The Apple CEO also noted how ARKit instantly became “the largest AR platform” because of the existing iPhone user base.
“If it were on a different device then you would never have a commercial opportunity, and without the commercial opportunity you’d never have 15 million people that say, ‘I want to design my passion with AR’.”
By putting it on iPhone, Apple was able to “instantly overnight become the largest AR platform”, Cook says.
Cook also vaguely addressed the rumors that Apple is building a pair of augmented reality glasses, saying that the technology to create such a product isn’t there yet, while noting that Apple doesn’t care about being first.
“But today I can tell you the technology itself doesn’t exist to do that in a quality way. The display technology required, as well as putting enough stuff around your face – there’s huge challenges with that.
“The field of view, the quality of the display itself, it’s not there yet,” he says. And as with all of its products, Apple will only ship something if it feels it can do it “in a quality way”.
“We don’t give a rat’s about being first, we want to be the best, and give people a great experience,” he says. “But now anything you would see on the market any time soon would not be something any of us would be satisfied with. Nor do I think the vast majority of people would be satisfied.”
Shares of Symantec (SYMC) get another downgrade today after a big run in the stock this year, with Morgan Stanley’s Keith Weiss cutting his rating on the shares to Equal Weight from Overweight, and trimming his price target to $34, writing that the “next leg” of share price performance depends on revenue growth, which faces some challenges.
Symantec shares this morning are down 91 cents, or 2.8%, at $31.72. The stock is up 33% this year, besting the Nasdaq Composite’s 22%.
Today’s piece by Weiss follows the one yesterday from Cowen & Co.’s Gregg Moskowitz, who had cut the stock to the equivalent of a Sell, writing that the shares had run too far, too fast on positive feeling about its “LifeLock” product because of the Equifax (EFX) breach.
Weiss’s point is simply that a pickup in growth is not yet showing up in the fiscal year that ends in March.
“Our field work on Symantec has remained relatively stable QoQ and YoY, suggesting that the company’s improved product positioning and greater upsell opportunity have yet to translate into accelerating demand,” writes Weiss.
“This raises the risk profile for Symantec’s second half FY18 outlook, which assumes an upward inflection in revenue growth.”
To date, a lot of the investment story on the stock has been about management’s focus on cost cutting, notes Weiss, but “these efficiency gains are largely reflected in consensus estimates,” he believes.
He still thinks Symantec is “well positioned to capitalize on the secular trend towards consolidation within security,” but he thinks that the LifeLock business is not yet going to be a big contributor.
Even if LifeLock adds 55% more users through the end of this year and boosts prices for the service by 5%, it would only add 6 to 8 cents per share in extra earnings for Symantec, he reckons. That’s a “relatively small EPS impact.”
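The shape of that arithmetic is simple to reproduce: extra subscribers times revenue per subscriber, margined and taxed, divided across the share count. The sketch below uses illustrative inputs of our own choosing (the subscriber count, price, margin, tax rate, and share count are assumptions, not figures from Weiss’s note) to show how an apparently large jump in users can net out to pennies of EPS:

```python
def incremental_eps(added_users, revenue_per_user, operating_margin,
                    tax_rate, shares_outstanding):
    """Rough incremental EPS from a subscription business:
    extra revenue -> operating profit -> after-tax earnings -> per share."""
    extra_revenue = added_users * revenue_per_user
    after_tax_profit = extra_revenue * operating_margin * (1 - tax_rate)
    return after_tax_profit / shares_outstanding

# Illustrative inputs only (not the analyst's model): 2.5M added
# subscribers at $110/year, a 25% operating margin, a 30% tax rate,
# and roughly 620M shares outstanding.
eps = incremental_eps(2_500_000, 110.0, 0.25, 0.30, 620_000_000)
print(round(eps, 3))  # → 0.078
```

Even with generous inputs, hundreds of millions in new revenue dilutes to single-digit cents once margins, taxes, and the share count are applied.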
With the recent announcements of Apple’s ARKit and Google’s ARCore, the hype is building around AR becoming the next big thing after smartphones — the computing form factor that has dominated consumer technology in the past decade. As such, it’s become common to compare the emergence of AR to the rise of smartphones. After all, AR has the potential to have a global impact as massive as that of smartphones in the past decade. Some have even gone as far as to claim that the death of the smartphone is imminent and inevitable at the hands of AR. However, to understand the rise of AR, we should look at the dotcom boom, not smartphones.
AR, like the internet during the dotcom boom, is an entirely new and unique medium. When dial-up internet became a common household feature, we were suddenly able to communicate in ways that were simply not possible before. We no longer had to telephone or visit people in-person; we could send them an instant message on AOL Instant Messenger. This may seem trivial today but at the time it was revolutionary. Yet during that era, it was generally unclear how to best harness the internet to unlock its full potential. We lacked case studies and context.
In contrast, by the time smartphones came around, we already had the example of desktop web browsers such as Netscape to give us context for navigating the internet, albeit now in a much smaller and more portable format. Even though the form factor changed, we were still interacting with a 2D browser, 2D content, and 2D applications.
Unknown form factor
We have no existing context for the next form factor that will emerge, most likely augmented reality “smart glasses.” For ubiquitous AR to function at a mass consumer scale, we’ll have to build an entirely new 3D internet from scratch that overlays the real world and interacts with people, objects, and environments in ways that have yet to be defined and invented.
What will a website look like when we are walking around inside of a 3D internet? Will that idea even make sense anymore? What will happen to society when 80 percent of the population walks around with cameras on their faces, constantly scanning and recording their immediate environment? How will we engage with 3D objects overlaid in our world when our primary method of interaction is our hands and fingers? How will hackers be able to disrupt our lives when they can literally augment our world?
It’s unclear how this will play out, and these are the problems we already know about. How will we deal with the inevitable “unknown unknowns” as AR becomes ubiquitous? These are the same types of questions that tech entrepreneurs were asking during the dotcom boom.
Dangers to the market
For all the revolutionary promise of the internet, the dotcom boom turned to bust in a market crash that wiped $5 trillion off the total value of companies in the tech sector. Skeptics warn that the AR market is similarly overhyped. While any emerging sector will have doubters, their argument is given credence by the high-profile failure of some high-end AR headsets like Google Glass, which showed us that AR wearables can cause anxiety in a population already worried about the rise of the machines.
Like the dotcom crash, the potential doom for AR could be rooted in billions of dollars being invested in poorly designed and useless products that should never have been built in the first place. Many of the early iOS AR apps that have been released are cumbersome and battery-draining, and, worst of all, it’s often unclear why AR is a value-add to the purpose of the applications.
Is it really useful to be able to drop a 3D model of a car into the kitchen? How often are we realistically going to need to place Ikea furniture into our living rooms? On top of this, early iOS AR apps drain battery life at a frightening pace and will likely deter people from frequently engaging in mobile AR experiences. Until AR companies can build apps that have a practical day-to-day utility and enable us to do things that were previously impossible, smartphone AR may remain a novelty at best.
The rise of AR will be unpredictable for all of its stakeholders, who will have to be patient in waiting for this medium’s potential to truly emerge. In this sense, it’s like the dotcom boom — an exciting and unknown frontier waiting to be built and carved into being — but also fraught with peril, as the “rules” of this new space are still very unclear, even to its nascent experts.
Michael Park is the CEO and founder of PostAR, a platform that lets you build, explore, and share augmented realities.
A couple of SpaceX’s launches scheduled for October have been bumped later, mostly due to launch pad availability. The Falcon Heavy launch in November, for that matter, might be delayed as well. Here’s what it means.
Elon Musk’s rocket company has already completed over 40 flights for its reusable Falcon 9 rockets, including a bunch of missions for the International Space Station (ISS) as part of a long-standing contract with NASA, as well as a mission for the U.S. Air Force with a highly secretive payload. Indeed, 2017 has been quite a year for SpaceX, and it isn’t over yet.
SpaceX has scheduled the maiden launch of the Falcon Heavy rocket, supposedly the next big thing to come from Musk’s rocket labs, for this November. Recent developments, however, seem to suggest that this highly-anticipated flight may be up for some delay because of launch pad issues.
SpaceX has been conducting its East Coast launches from the Kennedy Space Center in Florida, but it hopes to return Falcon 9 flights to its SLC-40 launch site at Cape Canaveral, which is still undergoing repairs after SpaceX’s last Falcon 9 explosion over a year ago. Meanwhile, the Kennedy Space Center’s pad 39A is being modified to accommodate the Falcon Heavy.
Yet two October SpaceX launches—the SES-11 and Iridium NEXT-3 missions, both carrying commercial communications satellites—have already been pushed back due to launch pad availability. The Falcon Heavy’s launch date will depend on availability at 39A and on how quickly SpaceX can repair the TEL (Transporter/Erector/Launcher) destroyed at SLC-40, as well as on the readiness of the Falcon Heavy itself.
Expected to be Difficult
Musk has previously said that he expects the Falcon Heavy maiden launch to be quite a challenge; in fact, he’s been rather unsure about a November flight. He has said he isn’t expecting it to “make it to orbit. I want to make sure to set expectations accordingly. I hope it makes it far enough away from the pad that it does not cause pad damage.”
Still, work on the Falcon Heavy has continued. There are still a number of things needed before SpaceX’s largest rocket gets a go signal for a launch, such as rollout and fit checks, a so-called Wet Dress Rehearsal (WDR), and a Static Fire test on the 39A launch pad at Kennedy. There’s no official word yet from SpaceX confirming a delay.
The Falcon Heavy test mission is considered a critical part of SpaceX’s overall preparations for crewed flights. This, in turn, is important for the company’s plan for Mars—which, incidentally, is due for an update from Musk this coming Friday. Of course, a schedule upset may be better than launching with a low success probability.
Still, this is SpaceX we’re talking about. The company is no stranger to failures, which they’ve managed to turn into successes in the long run.
Video game news, reviews and commentary with Gazette reporter Jake Magee.
Wednesday, September 27, 2017
The map for Trials of the Nine is pretty alien.
Once you win seven Trials of the Nine matches in a row, you’re rewarded with the giant head of some alien.
I was never very good at competitive combat in the original “Destiny,” and I haven’t improved much since the release of “Destiny 2.”
When I was in high school, I mowed kids down in intense matches of “Gears of War” and “Halo.” Now I struggle to hold my own in player-vs.-player bouts.
Maybe I’m out of practice. Perhaps “Destiny” doesn’t feel as natural for me to play as other games did. The truth is, probably, that I’m just getting old.
My lack of skill at “Destiny” is what made going flawless in Trials of Osiris more than a year ago taste so sweet. The feeling wasn’t as great when I did the same in “Destiny 2” last week.
For those unfamiliar, Trials of Osiris is a game mode where two teams of three face each other in several rounds. Whichever team is killed off first loses the round until one team nabs enough wins to face a different team in another match. A team must win nine matches in a row to be deemed flawless.
In “Destiny 2,” Trials of Osiris has been tweaked. Now it’s called Trials of the Nine, and two teams of four square off. On top of that, you now need only seven wins to go flawless.
You essentially have to pay in-game currency to play Trials of Osiris, but Trials of the Nine is free–lowering the barrier to entry. There’s no risk to playing Trials of the Nine, which means you face some truly terrible teams instead of the mostly skilled players abundant in Trials of Osiris.
And we faced our share of bad players. Worse-than-me levels of bad.
On Saturday, my friends Carson and Ryan, a skilled player they’d met playing “Destiny,” and I grouped up to try Trials of the Nine. It was my second attempt at the game mode.
We won our first match and were completely destroyed in our second. Our progress reset, and we gave it another try.
We barely encountered any challenging teams during the next seven matches. We won them all with ease. I (mostly) carried my own weight, but there’s no denying my three teammates did most of the work.
Still, it was a slaughter.
On the last round, we faced probably the worst team we’d seen all day. We finished the match in record time and, with that, we had gone flawless.
It felt good, but it doesn’t compare to the time I went flawless in the original “Destiny.”
In Trials of Osiris, you tend to get matched up with teams that have an equal number of wins. That means on your seventh game, you’re probably going to face another team with six straight wins, and both teams aren’t only skilled but will be doing their very best to win.
If my experience Saturday was any indication, Trials of the Nine matches players up with random teams regardless of their skills or win streaks. It almost felt wrong to dominate that last team of probably 10-year-olds as hard as we did, but that’s the hand we were dealt.
With only three people on a team in Trials of Osiris, one bad or even average player can severely handicap your team. In Trials of the Nine, having four players on a team allows for just a bit more wiggle room as poor players won’t hinder their teams quite as much.
My buddies and I tried for weeks to go flawless in Trials of Osiris. When we finally did it, it felt like a huge, well-earned accomplishment we’d practiced tirelessly to achieve.
In Trials of the Nine, well … it only took me a week to get there, and we didn’t exactly exert ourselves.
Don’t get me wrong. I still had a blast playing with my friends and achieving something not many players do. I only wish it had felt a bit more satisfying.
Video game columnist Jake Magee has been with GazetteXtra since 2014. His opinion is not necessarily that of Gazette management. Let him know what you think by emailing email@example.com, leaving a comment below, or following @jakemmagee on Twitter.
It’s true, we’ve covered Rise of the Tomb Raider so many times over the past couple of years that we’ve even given it a YouTube playlist – but this is by no means a bad thing. It simply demonstrates the love and care developer Nixxes has poured into maintaining the game over the years. From supporting the ageing Xbox 360 to the PlayStation 4 Pro and PSVR all the way up to the Xbox One X, Nixxes has become the caretaker for Crystal Dynamics’ most recent outing. At this point, the only thing missing is a Switch port. But right now, all eyes are on the upcoming Xbox One X port, demonstrating what looks like the best console version of the game yet.
Before we jump into that, a curious anomaly has popped up in our most recent captures of both Rise of the Tomb Raider and Forza Motorsport 7 – a colour-shift that seems to desaturate the footage a touch. Interestingly, the other games we grabbed at the recent Microsoft showcase event that do not support HDR (Killer Instinct, Quantum Break) do not exhibit the issue, possibly suggesting a firmware tone-mapping bug in the preview Xbox One X units. We’ve alerted Microsoft to the issue, but in the meantime, remember that everything tested here is pre-production software running on production hardware hosting a non-final operating system.
And there’s certainly plenty to test. This is the Gamescom demo we’re hands-on with, covering off around ten minutes of gameplay from the Syria-based Prophet’s Tomb level. We actually played through the demo four times – in 4K, 4K HDR, and using the alternative rendering modes: Enriched 4K and High Frame-Rate. The native 4K mode is indeed impressive as Rise of the Tomb Raider remains a highly demanding game on the PC. Granted, the level of detail is pushed back compared to the PC version, but it holds up better than you might think.
We’ve been hands-on with the early Rise of the Tomb Raider Xbox One X demo. Here’s the detailed report you’ve been waiting for.
Unfortunately, the Prophet’s Tomb is just about the worst level for drawing conclusions on detail settings, but essentially we’re looking at settings equivalent to PS4 Pro’s 4K checkerboard mode, which are in turn lower than the PC’s highest possible settings. However, we can finally put to rest an element of confusion from our last analysis. Does Xbox One X run with higher quality textures or not? There was a lot of confusion surrounding this comparison last time due to the procedural dirt system, but with further hands-on time, plus running specific comparisons through the PC version at various settings, we can make some firm conclusions.
Firstly, the extra memory on Xbox One X means that textures nearly on par with the PC’s highest setting are available here, and a welcome twist is that these textures persist across all three display modes on Xbox One X, so you’ll always enjoy higher quality assets regardless of the selected option. Yes, they even manifest on the 1080p high frame-rate mode.
Secondly, the Gamescom build of Rise of the Tomb Raider on Xbox One X runs with depth of field disabled, giving the illusion of further detail resolved in the scene even though it’s an important effect in the post-process pipeline that has actually been removed (for now, at least). Stacking up Xbox One X’s results against PC with the same effect disabled, we get a very close match in detail, confirming higher quality assets on the new console build. It’s strange that the effect is missing, as it was present in the original Xbox One release. It will be interesting to see if it reappears in the final build and, if so, whether Nixxes will use the higher quality bokeh depth of field available on PC.
Far more important from our perspective is the matter of performance. Nixxes has set a lofty target for itself in the native 4K mode of Rise of the Tomb Raider, but it’s fair to say that the Gamescom build is a little shaky here. The PS4 version uses triple-buffering for a tear-free experience, while Xbox One – and X – use an adaptive sync technology, allowing frames that run over their rendering budget a little wiggle room in when to present, manifesting as noticeable tearing at the top of the screen. Most of the demo plays out at 30fps, but busy scenes can drop beneath – a little concerning bearing in mind that The Prophet’s Tomb is actually one of the lightest areas of the game.
Nixxes knows what it is doing, so we expect improvement in the final game, but even now there is already a preferable solution. The PS4 Pro’s 1080p Enriched mode gets a 4K upgrade, retaining what looks like the same visual feature-set (enhanced draw distance, tessellation etc) but using a reconstruction technique similar to checkerboard rendering to produce a 2160p pixel count, just like on the Pro. Not only does this mode look great on Xbox One X, it also offers improved performance: the areas that struggled to hold 30fps in native 4K mode run without a hitch. You lose some fidelity with this type of rendering, but it works beautifully on a 4K screen, performance is improved and the graphics are just straight-up better. What’s not to like? Right now, this gets our vote as the best way to play the game.
Then we have the high frame-rate mode, which is quite impressive based on this showing. It offers a 1080p output just like PS4 Pro but, as we mentioned earlier, it now supports higher resolution textures. It also uses nearest neighbour scaling when output to a 4K display – that means a sharper but slightly more pixelated image compared to a filtered and scaled one. As the name implies, this is a mode with a higher frame-rate but not one promising a locked 60fps, though the demo comes close, with most areas running locked. While the drop in resolution is certainly noticeable, the improvement in fluidity and controller response makes for a nice change of pace. It feels fantastic to play in this mode.
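Nearest neighbour scaling is simple to illustrate. Since 1080p to 2160p is an exact 2x scale, each source pixel maps cleanly onto a 2x2 block of output pixels with no blending, which is why the image stays sharp but visibly blocky. A minimal Python sketch, using a toy grid of brightness values standing in for a real framebuffer:

```python
def nearest_neighbour_upscale(pixels, factor):
    """Upscale a 2D grid of pixel values by an integer factor by
    repeating each pixel, with no filtering or blending between
    neighbours (unlike a bilinear-filtered upscale)."""
    out = []
    for row in pixels:
        # Repeat each pixel horizontally...
        scaled_row = [p for p in row for _ in range(factor)]
        # ...then repeat the whole row vertically.
        out.extend([scaled_row[:] for _ in range(factor)])
    return out

image = [[10, 20],
         [30, 40]]
print(nearest_neighbour_upscale(image, 2))
# → [[10, 10, 20, 20], [10, 10, 20, 20], [30, 30, 40, 40], [30, 30, 40, 40]]
```

A filtered upscale would instead interpolate between the 10 and the 20, softening the edge; nearest neighbour preserves it exactly, at the cost of blockiness.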
Unfortunately, it’s not a complete lock. The very last section of the demo exhibits minor slowdown and tearing compared to the rest of the level. To be fair, the dips are rather minimal in this particular area, but this is also one of the least demanding sections in the game, so if we’re seeing minor dips here, it’s fair to assume that later areas, like the Geothermal Valley, will struggle to hit 60fps. What we can say is that like-for-like tests on the demo area confirm that Xbox One X does indeed run the same content faster than PS4 Pro. We should see that reflected across the full game, something we’ll check out upon release.
Ultimately, this demo is just our first extended encounter with this iteration of the game. Overall impressions are good – the improvements over PS4 Pro are readily evident and, by extension, this places Xbox One X in pole position. However, question marks remain, specifically around missing features and unstable performance in native 4K mode. The lack of depth of field is puzzling, but the frame-rate drops in native 4K mode are more of a concern bearing in mind the relatively non-complex part of the game chosen for the demo. If we’re seeing drops in this level, things could be much worse later on – but we should remember that this demo is quite old now, and Nixxes has a proven track record in rolling out improvements to its work. Even if issues remain, the Enriched 4K mode seems set to get the job done.
With around 130 titles confirmed for Xbox One X upgrades, we’re in for a busy end to the year. Games like Doom, The Witcher 3, Titanfall 2 and Forza Horizon 3 are genuine tech showcases that have the potential to look stunning on a 4K screen. But with its extensive support for high frame-rates, increased resolution and improved visual effects, Rise of the Tomb Raider has become more than just a game: it’s just as much a technical benchmark for current-gen console hardware. Our extensive hands-on with the Gamescom demo actually asks as many questions as it provides answers, and we’re really looking forward to checking out final code.
Thomas Reardon puts a terrycloth stretch band with microchips and electrodes woven into the fabric—a steampunk version of jewelry—on each of his forearms. “This demo is a mind fuck,” says Reardon, who prefers to be called by his surname only. He sits down at a computer keyboard, fires up his monitor, and begins typing. After a few lines of text, he pushes the keyboard away, exposing the white surface of a conference table in the midtown Manhattan headquarters of his startup. He resumes typing. Only this time he is typing on…nothing. Just the flat tabletop. Yet the result is the same: The words he taps out appear on the monitor.
Steven Levy is Backchannel’s founder and Editor in Chief.
That’s cool, but what makes it more than a magic trick is how it’s happening. The text on the screen is being generated not by his fingertips, but rather by the signals his brain is sending to his fingers. The armband is intercepting those signals, interpreting them correctly, and relaying the output to the computer, just as a keyboard would have. Whether or not Reardon’s digits actually drum the table is irrelevant—whether he has a hand is irrelevant—it’s a loop of his brain to machine. What’s more, Reardon and his colleagues have found that the machine can pick up more subtle signals—like the twitches of a finger—rather than mimicking actual typing.
You could be blasting a hundred words a minute on your smartphone with your hands in your pockets. In fact, just before Reardon did his mind-fuck demo, I watched his cofounder, Patrick Kaifosh, play a game of Asteroids on his iPhone. He had one of those weird armbands sitting between his wrist and his elbow. On the screen you could see Asteroids as played by a decent gamer, with the tiny spaceship deftly avoiding big rocks and spinning around to blast them into little pixels. But the motions Kaifosh was making to control the game were barely perceptible: little palpitations of his fingers as his palm lay flat against the tabletop. It seemed like he was playing the game only with mind control. And he kind of was.
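CTRL-Labs hasn’t published its decoding pipeline, but the basic idea of turning muscle activity into a control signal can be sketched in a few lines. The toy example below is our own assumption, not the company’s method: it rectifies an EMG-like trace, smooths it into an envelope, and flags when the envelope crosses a threshold, the crude analogue of mapping a finger twitch to a button press.

```python
def envelope(signal, window):
    """Rectify an EMG-like signal and smooth it with a trailing
    moving average, a standard first step in decoding muscle activity."""
    rectified = [abs(s) for s in signal]
    out = []
    for i in range(len(rectified)):
        lo = max(0, i - window + 1)
        chunk = rectified[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def detect_twitch(signal, window=4, threshold=0.5):
    """Flag samples where the smoothed envelope crosses a threshold."""
    return [e > threshold for e in envelope(signal, window)]

# Quiet baseline, then a burst of activity standing in for a twitch.
samples = [0.05, -0.04, 0.06, -0.05, 0.9, -1.1, 1.0, -0.8, 0.06, -0.04]
flags = detect_twitch(samples)
print(any(flags[:4]), any(flags[4:8]))  # → False True
```

A real decoder would classify which muscles fired and how hard, not just whether something fired, but the loop is the same: signal in, intent out.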
2017 has been a coming-out year for the Brain-Machine Interface (BMI), a technology that attempts to channel the mysterious contents of the two-and-a-half-pound glop inside our skulls to the machines that are increasingly central to our existence. The idea has been popped out of science fiction and into venture capital circles faster than the speed of a signal moving through a neuron. Facebook, Elon Musk, and other richly funded contenders, such as former Braintree founder Bryan Johnson, have talked seriously about silicon implants that would not only merge us with our computers, but also supercharge our intelligence. But CTRL-Labs, which comes with both tech bona fides and an all-star neuroscience advisory board, bypasses the incredibly complicated tangle of connections inside the cranium and dispenses with the necessity of breaking the skin or the skull to insert a chip—the Big Ask of BMI. Instead, the company is concentrating on the rich set of signals controlling movement that travel through the spinal column, which is the nervous system’s low-hanging fruit.
Reardon and his colleagues at CTRL-Labs are using these signals as a powerful API between all of our machines and the brain itself. By next year, they want to slim down the clunky armband prototype into a sleeker, watch-strap-style device so that a slew of early adopters can dispense with their keyboards and the tiny buttons on their smartphones’ screens. The technology also has the potential to vastly improve the virtual reality experience, which currently alienates new users by asking them to hit buttons on controllers that they can’t see. There might be no better way to move around and manipulate an alternate world than with a system controlled by the brain.
Reardon, CTRL-Labs’ 47-year-old CEO, believes that the immediate practicality of his company’s version of BMI puts it a step ahead of his sci-fi-flavored competitors. “When I see these announcements about brain-scanning techniques and the obsession with the disembodied-head-in-a-jar approach to neuroscience, I just feel like they are missing the point of how all new scientific technologies get commercialized, which is relentless pragmatism,” he says. “We are looking for enriched lives, more control over things around us, [and] more control over that stupid little device in your pocket—which is basically a read-only device right now, with horrible means of output.”
Reardon’s goals are ambitious. “I would like our devices, whether they are vended by us or by partners, to be on a million people within three or four years,” he says. But a better phone interface is only the beginning. Ultimately, CTRL-Labs hopes to pave the way for a future in which humans can seamlessly manipulate broad swaths of their environment using tools that are currently uninvented. Where the robust signals from the arm—the secret mouthpiece of the mind—become our prime means of negotiating with an electronic sphere.
This initiative comes at an opportune moment for CTRL-Labs, which finds itself perfectly positioned to innovate. The person leading the effort is a talented coder with a strategic bent, someone who has led big corporate initiatives—and who left it all for a while to become a neuroscientist. Reardon understands that everything in his background has, almost by accident, culminated in a humongous opportunity for someone with precisely his skills. And he’s determined not to let this shot slip by.
Reardon grew up in New Hampshire as one of 18 children in a working-class family. He broke from the pack at age 11, learning to code at a local center funded by the tech giant Digital Equipment Corporation. “They called us ‘gweeps,’ the littlest hackers,” he says. He took a few courses at MIT, and at 15 he enrolled at the University of New Hampshire. He was miserable—a combination of being a peach-fuzz outsider and having no money. He dropped out within a year. “I was coming up on 16 and was, like, I need a job,” he says. He wound up in Chapel Hill, North Carolina, at first working in the radiology lab at Duke, helping to get the university’s computer system working smoothly with the internet. He soon started his own networking company, creating utilities for the then-mighty Novell networking software. Eventually Reardon sold the company, meeting venture capitalist Ann Winblad in the process, and she hooked him up with Microsoft.
Reardon’s first job there was leading a small team to clone Novell’s key software so it could be integrated into Windows. Still a teenager, he wasn’t used to managing, and some people reporting to him called him Doogie Howser. Yet he stood out as exceptional. “You’re exposed to lots of types of smart people at Microsoft, but Reardon would kind of rock you,” says Brad Silverberg, then head of Windows and now a VC (and an investor in CTRL-Labs). In 1993, Reardon’s life changed when he saw the original web browser. He created the project that became Internet Explorer, which, because of the urgency of the competition, was rushed into Windows 95 in time for launch. For a time, it was the world’s most popular browser.
A few years later, Reardon left the company, frustrated by the bureaucracy and worn down from testifying in the antitrust case involving the browser he helped engineer. Reardon and some of his browser team compatriots began a startup focused on wireless internet. “Our timing was wrong, but we absolutely had the right idea,” he says. And then Reardon made an unexpected pivot: He left the industry and enrolled as an undergraduate at Columbia University. To major in classics. The inspiration came from a freewheeling 2005 conversation with the celebrated physicist Freeman Dyson, who mentioned his voluminous reading in Latin and Greek. “Arguably the greatest living physicist is telling me don’t do science—go read Tacitus,” says Reardon. “So I did.” At age 30.
In 2008, Reardon did get his degree in classics—magna cum laude—but before he graduated he began taking courses in neuroscience and fell in love with the lab work. “It reminded me of coding, of getting your hands dirty and trying something and seeing what worked and then debugging it,” he says. He decided to pursue it seriously, to build a résumé for grad school. Even though he was still well-off from his software exploits, he wanted to compete for a scholarship—he got one from Duke—and do basic lab work. He transferred to Columbia, working under renowned neuroscientist Thomas Jessell (who is now a CTRL-Labs advisor, along with other luminaries like Stanford’s Krishna Shenoy).
According to its website, the Jessell Lab “studies systems and circuits that regulate movement,” which it calls “the root of all behavior.” This reflects Columbia’s orientation in a neuroscience divide between those concentrating on what goes on purely inside the brain and those who study the brain’s actual output. Though a lot of glamour is associated with those who try to demystify the mind through its matter, those in the latter camp quietly believe that the stuff the brain makes us do is really all the brain is for. Neuroscientist Daniel Wolpert once famously summarized this view: “We have a brain for one reason and one reason only, and that’s to produce adaptable and complex movements. There is no other reason to have a brain…Movement is the only way you have of affecting the world around you.”
That view helped shape CTRL-Labs, which got its start when Reardon began brainstorming with two of his colleagues in the lab in 2015. These cofounders were Kaifosh and Tim Machado, who got their doctorates a bit before Reardon did and began setting up the company. During the course of his grad study, Reardon had become increasingly intrigued by the network architecture that enables “volitional movement”—skilled acts that don’t seem complicated but actually require precision, timing, and unconsciously gained mastery. “Things like grabbing that coffee cup in front of you and raising it to your lips and not just shoving it through your face,” he explains. Figuring out which neurons in the brain issue the commands to the body to make those movements possible is incredibly complicated. The only decent way to access that activity has been to drill a hole in the skull and stick an implant in the brain, and then painstakingly try to figure out which neurons are involved. “You can make some sense of it, but it takes a year for somebody to train one of those neurons to do the right thing, say, to control a prosthesis,” says Reardon.
But an experiment by Reardon’s cofounder Machado opened up a new possibility. Machado was, like Reardon, excited about how the brain controlled movement, but he never really thought that the way to do BMI was to plant electrodes into the skull. “I never thought people would do that to send texts to each other,” says Machado. Instead, he explored how motor neurons, which extend through the spinal cord to actual muscles in the body, might be the answer. He created an experiment in which he removed the spinal cords of mice and kept them active so that he could measure what was happening with the motor neurons. It turned out that the signals were remarkably organized and coherent. “You could understand what their activity is,” Machado says. The two young neuroscientists and the older coder-turned-neuroscientist saw the possibility of a different way of doing BMI. “If you’re a signals person, you might be able to do something with this,” Reardon says, recalling his reaction.
The logical place to get ahold of those signals is the arm, as human brains are engineered to spend a lot of their capital manipulating the hand. CTRL-Labs was far from the first to understand that there’s value in those signals: a standard test for neuromuscular abnormalities measures them using a technique called electromyography, commonly referred to as EMG. In fact, in its first experiments CTRL-Labs used standard medical tools to get its EMG signals, before it began building custom hardware. The innovation lies in picking up EMG more precisely—including getting signals from individual neurons—than previously existing technology and, even more important, in figuring out the relationship between the electrode activity and the muscles, so that CTRL-Labs can translate EMG into instructions that control computer devices.
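CTRL-Labs hasn’t published how its decoder isolates individual motor neurons, but the core idea—spotting the large, brief action potentials of single motor units against low-level background activity—can be sketched in a few lines of Python. Everything below, from the threshold value to the synthetic signal, is a hypothetical illustration, not the company’s actual pipeline:

```python
import random

def detect_spikes(signal, threshold):
    """Return sample indices where the rectified signal first crosses the
    threshold (rising edge only) -- a crude stand-in for detecting
    motor-unit action potentials in an EMG trace."""
    spikes = []
    above = False
    for i, x in enumerate(signal):
        if abs(x) >= threshold and not above:
            spikes.append(i)
            above = True
        elif abs(x) < threshold:
            above = False
    return spikes

# Synthetic "EMG": low-amplitude noise with a few large spikes mixed in.
# The firing times and amplitudes are made up for this demo.
random.seed(0)
emg = [random.gauss(0.0, 0.05) for _ in range(1000)]
for t in (100, 400, 750):   # hypothetical motor-unit firing times
    emg[t] += 1.0           # a large action potential riding on the noise

print(detect_spikes(emg, threshold=0.5))   # → [100, 400, 750]
```

The real problem is far harder—many motor units fire at once and their waveforms overlap—but the payoff of the threshold idea is the same: turning a continuous electrical trace into discrete neural events.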
Adam Berenzweig, the former CTO of the machine learning company Clarifai who is now lead scientist at CTRL-Labs, believes that mining this signal is like unearthing a communications signal as powerful as speech. (Another lead scientist is Steve Demers, a physicist working in computational chemistry who helped create the award-winning “bullet time” visual effect for the Matrix movies.) “Speech evolved specifically to carry information from one brain to another,” says Berenzweig. “This motor neuron signal evolved specifically to carry information from the brain to the hand to be able to affect change in the world, but unlike speech, we have not really had access to that signal until this. It’s as if there were no microphones and we didn’t have any ability to record and look at sound.”
Picking up the signals is only the first step. Perhaps the most difficult part is transforming them into commands that a device understands. This requires a combination of coding, machine learning, and neuroscience. For some applications, the first time someone uses the system, he or she goes through a brief training period in which the CTRL-Labs software figures out how to match that person’s individual output to the mouse clicks, key taps, button pushes, and swipes of phones, computers, and virtual reality rigs. Amazingly, this takes only a few minutes for some of the simpler demos of the technology so far.
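The company hasn’t detailed how that minutes-long calibration works, but one simple way such a step could be structured is a nearest-centroid scheme: summarize each labeled EMG window as per-channel signal energy, average those features per gesture, and classify new windows by the closest average. The gestures, channel counts, and numbers here are all invented for illustration:

```python
import math

def rms_features(window_channels):
    """Per-channel root-mean-square amplitude of one EMG window."""
    return [math.sqrt(sum(x * x for x in ch) / len(ch)) for ch in window_channels]

def calibrate(labeled_windows):
    """Average the feature vectors for each labeled gesture (nearest-centroid)."""
    sums, counts = {}, {}
    for label, window in labeled_windows:
        feats = rms_features(window)
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, f in enumerate(feats):
            acc[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def classify(centroids, window):
    """Label a new window by its nearest gesture centroid (squared distance)."""
    feats = rms_features(window)
    return min(centroids,
               key=lambda lab: sum((f - c) ** 2
                                   for f, c in zip(feats, centroids[lab])))

# Toy calibration: "rest" is quiet on both channels; "click" lights up channel 2.
rest  = [[0.1, -0.1, 0.1, -0.1], [0.1, -0.1, 0.1, -0.1]]
click = [[0.1, -0.1, 0.1, -0.1], [0.9, -0.9, 0.9, -0.9]]
centroids = calibrate([("rest", rest), ("click", click)])
print(classify(centroids, [[0.1, 0.1, -0.1, -0.1], [0.8, 0.8, -0.8, -0.8]]))  # → click
```

A production decoder would almost certainly use richer features and a learned model rather than plain centroids, but the shape of the loop—collect a few labeled examples, fit, then map live signals to clicks and taps—is the same.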
A more serious training process will be required when people want to go beyond mimicking the same tasks they now perform—like typing using the QWERTY system—and graduate to ones that shift behavior (for instance, pocket typing). It might ultimately be faster and more convenient, but it will require patience and the effort to learn. “That’s one of our big, open, challenging questions,” says Berenzweig. “It actually might require many hours of training—how long does it take people to learn to type QWERTY right now? Like, years, basically.” He has a couple of ideas for how people will be able to ascend the learning curve. One might be to gamify the process (paging Mavis Beacon!). Another is asking people to think of the process like learning a new language. “We could train people to make phonetic sounds basically with their hands,” he says. “It would be like they’re talking with their hands.”
Ultimately, it is those new kinds of brain commands that will determine whether CTRL-Labs is a company that makes an improved computer interface or a gateway to a new symbiosis between humans and objects. One of CTRL-Labs’ science advisors is John Krakauer, a professor of neurology, neuroscience, and physical medicine and rehabilitation at Johns Hopkins University School of Medicine who heads the Brain, Learning, Animation, and Movement Lab there. Krakauer told me that he’s now working with other teams at Johns Hopkins to use the CTRL-Labs system as a training ground for people using prosthetics to replace lost limbs, specifically by creating a virtual hand that patients can master before they undergo a hand transplant from a donor. “I am very interested in using this device to help people have richer moving experiences when they can no longer themselves play sports or go for walks,” Krakauer says.
But Krakauer (who, it must be said, is somewhat of an iconoclast in the neuroscience world) also sees the CTRL-Labs system as something more ambitious. Though the human hand is a pretty darn good device, it may be that the signals sent from the brain can handle much, much more complexity. “We don’t know whether the hand is as good as we can get with the brain we have, or whether our brain is actually a lot better than the hands,” he says. If it’s the latter, EMG signals might be able to support hands with more fingers. We may be able to control multiple robotic devices with the ease with which we play musical instruments with our own hands. “It’s not such a huge leap to say that if you could do that for something on a screen, you can do it for a robot,” says Krakauer. “Take whatever body abstraction you are thinking about in your brain and simply transmit it to something other than your own arm—it could be an octopus.”
The ultimate use might be some sort of prosthetic that proves superior to the body parts with which we are born. Or maybe a bunch of them, attached to the body or somewhere else. “I love the idea of being able to use these signals to control some extraneous device,” Krakauer says. “I also like the idea of being healthy and just having a tail.”
For a company barely two years old, CTRL-Labs has been through a lot. Late last year, co-founder Tim Machado left. (He is now at Stanford’s prestigious Deisseroth bioengineering lab, but remains an advisor to the company and co-holder of the precious intellectual property.) And just last month the company changed its name. It was originally called “Cognescent,” but last month the team finally accepted the fact that keeping that name would mean perpetual confusion with the IT company Cognizant, whose market cap is over $40 billion. (Not that anyone will remember how to spell the startup’s new name, which is pronounced “Control.”)
But if you ask Reardon, the biggest development has been the rapid pace of building a system to implement the company’s ideas. This is a change from the halting progress in the early days. “It took at least three to four months to just be able to see something on the screen,” says Vandita Sharma, a CTRL-Labs engineer. “It was a pretty cool moment finally when I was able to connect my phone system with the band and see EMG data on the screen.” When I first visited CTRL-Labs earlier this summer, I played with a demo of Pong, the most minimal control test, and watched Mason Remaley, a 23-year-old game wizard, play a game of Asteroids with only a few of the features in the arcade game. Only a few weeks later, Asteroids was fully implemented and Kaifosh was playing it with twitches. Remaley is working on Fruit Ninja now. “When I saw a live demo in November, they seemed to have come a little way. More recently they really seemed to have nailed it,” says Andrew J. Murray, a research scientist at the Sainsbury Wellcome Centre for Neural Circuits and Behaviour, who spent time in Jessell’s lab with Reardon.
“The technology we’re working on is kind of binary in its opportunity—it either works or it doesn’t work,” says Reardon. “Could you imagine a [computer] mouse that worked 90 percent of the time? You’d stop using the mouse. The proof we have so far is, Goddamn, it’s working. It’s a little bit shocking that it’s working right now, ahead of where we thought we were going to be.” According to cofounder Kaifosh, the next step will be dogfooding the technology in-house. “We’ll probably start with throwing out the mouse,” he says.
But getting all of us to throw out our keyboards and mouses will take a lot more. Such a move would almost certainly require adoption from the big companies that determine what we use on a daily basis. Reardon thinks they will fall in line. “All the big companies, whether it’s Google, Apple, Amazon, Microsoft, or Facebook are making significant bets and explorations on new kinds of interaction,” he says. “We’re trying to build awareness.”
There’s also competition for EMG signals, including a company called Thalmic Labs, which recently had a $120 million funding round led by Amazon. Its product, first released in 2013, only interprets a few gestures, though the company is reportedly working on a new device. CTRL-Labs’ chief revenue officer, Josh Duyan, says CTRL-Labs’ non-invasive detection of individual motor neurons is “the big thing that…makes true Brain-Machine Interfaces…it’s what separates us from not becoming another unused device like Thalmic.” (CTRL-Labs’ $11 million Series A funding came from a range of investors including Spark Capital, Matrix Partners, Breyer Capital, Glazer Investments, and Fuel Capital.) Ultimately, Reardon feels that his technology has an edge over other BMI operations—like Elon Musk, Bryan Johnson, and Regina Dugan of Facebook, Reardon has been a successful tech entrepreneur. But unlike them, he’s got a PhD in neuroscience.
“This doesn’t happen many times in life,” says Reardon, to whom it’s happened more than most of us. “It’s kind of a Warren Buffett-ish moment. You wait and you wait and you wait for that thing that looks like, Oh, good Lord, this is really going to happen. This is that big thing.”
And if he’s right, in the future when people say things like this, they may wag their tails.