A guide to common types of two-factor authentication

Two-factor authentication (or 2FA) is one of the biggest-bang-for-your-buck ways to improve the security of your online accounts. Luckily, it’s becoming much more common across the web. With often just a few clicks in a given account’s settings, 2FA adds an extra layer of security to your online accounts on top of your password.

In addition to requesting something you know to log in (in this case, your password), an account protected with 2FA will also request information from something you have (usually your phone or a special USB security key). Once you put in your password, you’ll grab a code from a text or app on your phone or plug in your security key before you are allowed to log in. Some platforms call 2FA different things—Multi-Factor Authentication (MFA), Two Step Verification (2SV), or Login Approvals—but no matter the name, the idea is the same: Even if someone gets your password, they won’t be able to access your accounts unless they also have your phone or security key.

There are four main types of 2FA in common use by consumer websites, and it’s useful to know the differences. Some sites offer only one option; other sites offer a few different options. We recommend checking twofactorauth.org to find out which sites support 2FA and how, and turning on 2FA for as many of your online accounts as possible. For the more visually inclined, this infographic from Access Now offers additional information.

Finally, the extra layer of protection from 2FA doesn’t mean you should use a weak password. Always make unique, strong passwords for each of your accounts, and then put 2FA on top of those for even better log-in security.

SMS 2FA

When you enable a site’s SMS 2FA option, you’ll often be asked to provide a phone number. Next time you log in with your username and password, you’ll also be asked to enter a short code (typically 5-6 digits) that gets texted to your phone. This is a very popular option for sites to implement, since many people have an SMS-capable phone number and it doesn’t require installing an app. It provides a significant step up in account security relative to just a username and password.
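For the curious, here is a minimal sketch of the server side of that flow. The function names, the six-digit format, and the five-minute expiry are illustrative assumptions; a real deployment would add rate limiting and an actual SMS gateway.

```python
import secrets
import time

CODE_TTL_SECONDS = 5 * 60  # assumed 5-minute validity window

def issue_sms_code() -> tuple[str, float]:
    """Generate a random 6-digit code and record when it was issued."""
    code = f"{secrets.randbelow(10**6):06d}"  # uniform over 000000-999999
    return code, time.time()  # the code would then be texted to the user

def verify_sms_code(submitted: str, issued: str, issued_at: float) -> bool:
    """Accept the code only if it matches and hasn't expired."""
    if time.time() - issued_at > CODE_TTL_SECONDS:
        return False
    # Constant-time comparison avoids leaking digits through timing.
    return secrets.compare_digest(submitted, issued)
```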

There are some disadvantages, however. Some people may not be comfortable giving their phone number—a piece of potentially identifying information—to a given website or platform. Even worse, some websites, once they have your phone number for 2FA purposes, will use it for other purposes, like targeted advertising, conversion tracking, and password resets. Allowing password resets based on a phone number provided for 2FA is an especially egregious problem, because it means attackers using phone number takeovers could get access to your account without even knowing your password.

Further, you can’t log in with SMS 2FA if your phone is dead or can’t connect to a mobile network. This can especially be a problem when traveling abroad. Also, it’s often possible for an attacker to trick your phone company into assigning your phone number to a different SIM card, allowing them to receive your 2FA codes. Flaws in the SS7 telephony protocol can allow the same thing. Note that both of these attacks only reduce the security of your account to the security of your password.

Authenticator App / TOTP 2FA

Another phone-based option for 2FA is to use an application that generates codes locally based on a secret key. Google Authenticator is a very popular application for this; FreeOTP is a free software alternative. The underlying technology for this style of 2FA is called Time-Based One Time Password (TOTP), and is part of the Open Authentication (OATH) architecture (not to be confused with OAuth, the technology behind “Log in with Facebook” and “Log in with Twitter” buttons).

If a site offers this style of 2FA, it will show you a QR code containing the secret key. You can scan that QR code into your application. If you have multiple phones you can scan it multiple times; you can also save the image to a safe place or print it out if you need a backup. Once you’ve scanned such a QR code, your application will produce a new 6-digit code every 30 seconds. Similar to SMS 2FA, you’ll have to enter one of these codes in addition to your username and password in order to log in.
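Underneath, those rotating codes are just an HMAC of the current 30-second interval count, keyed with the secret from the QR code. Here is a compact sketch of the standard TOTP computation (RFC 6238), using only Python’s standard library; the example secret is made up.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute the current TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval            # 30-second time step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # made-up secret; prints a 6-digit code
```

Both your phone and the site run this same computation, which is why no network connection is needed at login time.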

This style of 2FA improves on SMS 2FA because you can use it even when your phone is not connected to a mobile network, and because the secret key is stored physically on your phone. If someone redirects your phone number to their own phone, they still won’t be able to get your 2FA codes. It also has some disadvantages: If your phone dies or gets stolen, and you don’t have printed backup codes or a saved copy of the original QR code, you can lose access to your account. For this reason, many sites will encourage you to enable SMS 2FA as a backup. Also, if you log in frequently on different computers, it can be inconvenient to unlock your phone, open an app, and type in the code each time.

Push-based 2FA

Some systems, like Duo Push and Apple’s Trusted Devices method, can send a prompt to one of your devices during login. The prompt indicates that someone (possibly you) is trying to log in, along with an estimated location for the attempt. You can then approve or deny the attempt.

This style of 2FA improves on authenticator apps in two ways: Acknowledging the prompt is slightly more convenient than typing in a code, and it is somewhat more resistant to phishing. With SMS and authenticator apps, a phishing site can simply ask for your code in addition to your password, and pass that code along to the legitimate site when logging in as you. Because push-based 2FA generally displays an estimated location based on the IP address from which a login was originated, and most phishing attacks don’t happen to be operated from the same IP address ranges as their victims, you may be able to spot a phishing attack in progress by noticing that the estimated location differs from your actual location. However, this requires that you pay close attention to a subtle security indicator. And since location is only estimated, it’s tempting to ignore any anomalies. So the additional phishing protection provided by push-based 2FA is limited.

Disadvantages of push-based 2FA: It’s not standardized, so you can’t choose from a variety of authenticator apps, and can’t consolidate all your push-based credentials in a single app. Also, it requires a working data connection on your phone, while authenticator apps don’t require any connection, and SMS can work on an SMS-only phone plan (or in poor signal areas).

FIDO U2F / Security Keys

Universal Second Factor (U2F) is a relatively new style of 2FA, typically using small USB, NFC or Bluetooth Low Energy (BTLE) devices often called “security keys.” To set it up on a site, you register your U2F device. On subsequent logins, the site will prompt you to connect your device and tap it to allow the login.

Like push-based 2FA, this means you don’t have to type any codes. Under the hood, the U2F device recognizes the site you are on and responds with a code (a signed challenge) that is specific to that site. This means that U2F has a very important advantage over the other 2FA methods: It is actually phishing-proof, because the browser includes the site name when talking to the U2F device, and the U2F device won’t respond to sites it hasn’t been registered to. U2F is also well-designed from a privacy perspective: You can use the same U2F device on multiple sites, but you have a different identity with each site, so they can’t use a single unique device identity for tracking.
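One common way devices achieve that per-site separation is to derive each site’s key from a device master secret and the site’s application ID. The sketch below illustrates the idea only: it substitutes an HMAC for the ECDSA P-256 signature a real U2F device produces, and all names are hypothetical.

```python
import hashlib
import hmac
import secrets

DEVICE_MASTER_SECRET = secrets.token_bytes(32)  # fixed at manufacture on a real key

def per_site_key(app_id: str) -> bytes:
    """Derive a site-specific key; each site sees an unrelated identity."""
    return hmac.new(DEVICE_MASTER_SECRET, app_id.encode(), hashlib.sha256).digest()

def respond(app_id: str, challenge: bytes) -> bytes:
    """Simplified stand-in for the signed challenge a real device returns."""
    # Because the browser supplies app_id, a response generated for
    # phishing-site.example can never verify at the legitimate site.
    key = per_site_key(app_id)
    return hmac.new(key, app_id.encode() + challenge, hashlib.sha256).digest()
```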

The main downsides of U2F are browser support, mobile support, and cost. Right now only Chrome supports U2F, though Firefox is working on an implementation. The W3C is working on further standardizing the U2F protocol for the web, which should lead to wider adoption. Additionally, mobile support is challenging, because most U2F devices use USB.

There are a handful of U2F devices that work with mobile phones over NFC and BTLE. NFC is supported only on Android. On iOS, Apple does not currently allow apps to interact with the NFC hardware, which prevents effective use of NFC U2F. BTLE is much less desirable because a BTLE U2F device requires a battery, and the pairing experience is less intuitive than tapping an NFC device. However, poor mobile support doesn’t mean that using U2F prevents you from logging in on mobile. Most sites that support U2F also support TOTP and backup codes. You can log in once on your mobile device using one of those options, while using your phishing-proof U2F device for logins on the desktop. This is particularly effective for mobile sites and apps that only require you to log in once, and keep you logged in.

Lastly, most other 2FA methods are free, assuming you already have a smartphone. Most U2F devices cost money. Brad Hill has put together a review of various U2F devices, which generally cost $10 to $20. GitHub has written a free, software-based U2F authenticator for macOS, but using this as your only U2F device would mean that losing your laptop could result in losing access to your account.

Bonus: Backup Codes

Sites will often give you a set of ten backup codes to print out and use in case your phone is dead or you lose your security key. Hard-copy backup codes are also useful when traveling, or in other situations where your phone may not have signal or reliable charging. No matter which 2FA method you decide is right for you, it’s a good idea to keep these backup codes in a safe place to make sure you don’t get locked out of your account when you need them.
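As a rough illustration of what a site does when it hands you those codes, here is a sketch; the count, length, and hashed storage are typical choices rather than any particular site’s scheme.

```python
import hashlib
import secrets

def generate_backup_codes(count: int = 10, digits: int = 8) -> list[str]:
    """Mint single-use numeric backup codes (format is illustrative)."""
    return [f"{secrets.randbelow(10**digits):0{digits}d}" for _ in range(count)]

codes = generate_backup_codes()  # shown to the user once, for printing
# The site stores only hashes and deletes each one as it gets redeemed.
stored_hashes = {hashlib.sha256(c.encode()).hexdigest() for c in codes}
```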

This story originally appeared on the EFF’s blog.

Sony CEO Wants More Virtual Reality Competition Amid PlayStation VR Success

The PlayStation VR leads the home virtual reality market, but an unlikely source isn’t entirely comfortable about its lead: Sony. In an interview with Reuters, Sony Interactive Entertainment CEO Andrew House said he thinks the virtual reality market needs to be more competitive.

“I’m not entirely comfortable being the market leader in VR by such a margin that seems to be happening right now,” House said. “With such a brand new category, you want a variety of platforms all doing well to create that rising tide and create the audience.”

On the numbers alone, Sony is beating its competitors by a significant margin. According to IDC, Sony is behind only Samsung in worldwide headset sales. In the second quarter of 2017, the PlayStation VR moved 519,400 units, good for a 24.4 percent share of the total market.

Excluding Samsung’s mobile Gear VR headset, Sony’s margin increases even more when you put it head-to-head with comparable higher-powered headsets like the Oculus Rift and HTC Vive. For the same quarter, Oculus moved 246,900 units for an 11.6 percent market share, while HTC shipped 94,500 headsets for a shrinking 4.4 percent share. Both headsets were initially priced for enthusiast gamers, but HTC and Oculus have since cut their prices to $599 and $499 respectively in a bid to make them more affordable.

While the PlayStation VR isn’t necessarily as powerful as the Rift or Vive — both headsets can be paired with and take advantage of higher-end gaming PC hardware — Sony’s success with the headset has come from ease of use and price. The headset only needs a PlayStation 4 or PlayStation 4 Pro in order to work and costs only $349. Thanks in part to Sony’s sizable PlayStation 4 install base, the headset has seen steady support from gamers and developers. Since its launch last fall, the PlayStation VR has already broken the one million mark in sales, and it’s also gotten exclusives in games like Resident Evil 7 and Farpoint.

In the past, Sony has readily admitted that the PlayStation VR is an early exploration into VR for the company. In June, Sony executive Jim Ryan said PlayStation VR’s use in Resident Evil 7 was “a big surprise” to the company.

House’s remarks also come amid significant growing pains for the virtual reality field. While mobile VR headsets like the Samsung Gear VR have been strong drivers of adoption, developers are still figuring out how to take the technology beyond its early adopters. Upcoming headsets from Google and Microsoft make features like wireless environmental awareness a big part of their feature sets. Plus, VR also faces competition from augmented reality applications, which don’t require a headset or additional hardware. Augmented reality has been a particular focus for Apple with its ARKit developer toolset, and the company has made it a central part of its recent iOS 11 update.

Lenovo wants Motorola to propel it back into top five


Dubai: Lenovo, which was once the number three smartphone brand worldwide (and the number two brand in China during 2014), has dropped out of the top five in just three years.

During that period Lenovo acquired Motorola for $2.91 billion (Dh10.7 billion) from Google, and now the Chinese company hopes to bounce back with its Motorola brand.

“For us to revive the brand, we need to anchor it into innovation and that is why Moto Z is so important to the brand,” Aymar de Lencquesaing, executive vice-president at Lenovo Group, chairman and president of Motorola, said in an exclusive interview with Gulf News.

Moto Z is a modular smartphone which allows users to customise hardware features such as cameras, speakers, projector and batteries and add-ons to the shell.

“Our goal is to become the number three brand in the next three to five years,” he said.

Moreover, he said the smartphone industry is experiencing a bit of a shift right now. Limited supplies of components are going to continue in the short term and will have an impact on the industry, making life difficult for some small and local players.

“The products have not evolved tremendously, in terms of technology, only marginal. It really opens up an opportunity if you are going to think out of the box. That is why our Moto Z modular smartphone has been a hit by selling 4.5 million devices in just 13 months.

“We opened up a new scope of possibilities and enabling customers to make more than just a phone. It shows that the industry still recognises innovation and values innovation,” he said.

Brand name

He said that instead of two brands — Lenovo and Motorola — the renewed focus will be on one brand: Motorola.

“Whenever we are spending a marketing dollar, we had to split it. So, we decided to launch all our brands under the Motorola brand name and we have five basic families — Z, X, G, E and the C,” he said.

Two devices under each family make for 10 devices per year and cover the whole price spectrum. So, “our average selling price has gone up”.

Lencquesaing admitted that their smartphone volumes have gone down in some markets but, at the same time, value has gone up in certain markets, except China.

He said that Motorola grew 137 per cent year-on-year in Western Europe in the quarter ending June. In Latin America, it is number two after Samsung in major markets.

“We compete head-to-head in these markets and in India, and we win. I don’t fear that we are lacking in product innovation and in price point. It is a question of brand and building a brand. In the Middle East, we grew 13 per cent and are happy with that. Our revenues have grown 30 per cent from the region and the trends are positive,” he said.

Complex market

In the rest of the world, outside China, he said the company has grown in both volume and value. China is an extremely “complex market” because there are tier A, tier B and tier C cities, broad distribution channels and multiple retail price points.

“We are re-entering the market from top end to the bottom, instead of bottom to the top. We go to the market with Motorola brand and push the volume up. In China, we are relaunching our Motorola brand and it will take time.

“We can get back into China but this is a business where you have to be humble and learn to walk before you run, one step at a time. Right now, we have less than one per cent in China,” he said.

Motorola, which was not very active in the US until about a year ago, has tied up with all four major carriers, and Lencquesaing claims it is a “quantum leap”.

“You’ll see us grow by leaps and bounds in the US market,” he added.

Moleskine’s smart planner requires too much effort to use

Before you can even set up the pen, the app asks that you agree to Moleskine’s privacy policy and terms and conditions. You have to check several boxes on each page before you can proceed to the next one, and there’s no way to skip any of this. Normally, this wouldn’t be an issue; we agree to terms and conditions all the time. What bothered me was that on the last page, you must check the box that says you agree to Moleskine sending you promotional or informational material if you want to move forward. This is usually optional; making it mandatory is an oppressive move on Moleskine’s part.

A company spokesperson told Engadget that its privacy policy states that you may be asked in the future to provide an email address and password to access the app. “This is completely optional, so you’re not required to enter an email or password to use the app or product,” the company said. If you do provide an email, Moleskine said, it will send promotional messages through the respective channels; if you don’t supply your contact info, it won’t. For the purpose of this hands-on, I checked the box, crossed my fingers and moved on.

I was intrigued by the premise of the connected planner at first. Using the smart pen, you write down your appointments on the physical planner and the system will send them to the digital calendar of your choice (iCal, Google or Outlook). The planner also has plain lined pages on the right, where you can jot down ideas or to-do lists, which then sync to the app.

Linking the pen to the Moleskine app was relatively easy: I held down a button until a light on the device turned blue, then placed the pen next to my iPhone. The app found my pen after a few seconds. Once I started writing in the planner, the app detected the new book, it jumped to the page where my pen was, and my scribbles appeared on my phone in real time. Just as with the existing smart writing set, you can use the pen to tap on the envelope icon on the top right of each page to send a PDF copy to yourself or your friends.

To get the planner to sync with your digital calendar, though, you have to make sure to follow these steps. Go into the Moleskine app’s settings, then select “Authentication Center.” Pick the calendar you prefer and give the app permission to sync. Each time you want to write anything in the planner or notebook, you have to make sure the pen is switched on, or nothing will sync. So if your pen is out of power, too bad; nothing you write will be saved to the app. Moleskine says the pen will last through up to 13 hours of “average use” and about seven hours of nonstop writing (hope they don’t mean that literally).

After I finally got the planner to link to my iCal, I activated the pen and wrote down a few dummy meetings. There are three ways to create an appointment: start your entry with the time of your meeting, and the app will save a one-hour block to your calendar; specify a start and end time, and the app will set aside a slot for that duration; or, if you don’t use any times at all, the app will save your entry as an all-day event. That all sounds nifty in theory, but because the system is so bad at recognizing my handwriting, it kept reading my 2s and 1s as Zs and Ls. This made it save 2pm and 1pm appointments as all-day events. It also failed to recognize my cursive or block writing, and labelled several of my meetings with gibberish. I had to be extremely careful when writing my entries before it would work.
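Those three rules amount to a small parsing exercise. Here is a hypothetical re-creation of them in Python; Moleskine’s actual recognizer works on handwriting strokes rather than clean text, which is exactly where my 2s became Zs.

```python
import re
from datetime import datetime, timedelta

TIME = r"(\d{1,2})(?::(\d{2}))?\s*(am|pm)"  # e.g. "2pm", "10:30 am"

def parse_entry(text: str, day: datetime):
    """Map a planner entry to (start, end) using the three rules above."""
    times = re.findall(TIME, text.lower())

    def to_dt(hour, minute, ampm):
        h = int(hour) % 12 + (12 if ampm == "pm" else 0)
        return day.replace(hour=h, minute=int(minute or 0))

    if len(times) >= 2:              # explicit start and end time
        return to_dt(*times[0]), to_dt(*times[1])
    if len(times) == 1:              # start time only: one-hour block
        start = to_dt(*times[0])
        return start, start + timedelta(hours=1)
    return None, None                # no time at all: an all-day event
```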

When it did recognize what I wrote, though, the Moleskine app accurately set up appointments in my calendar. But for now, the software still feels too unreliable to justify buying the planner (a $30 add-on) specifically for the digital benefits. I also find the purchasing option unnecessarily complicated. To use the smart planner, you need the smart pen, which is available only with the $199 writing set for now. That means you’ll have to get the regular notebook no matter what, which I find unnecessary, since you have lined notebook pages in the planner anyway. But Moleskine says customers of its paper products tend to buy both notebooks and planners, and the company believes they’ll want to get both of the connected versions as well.

I happen to not be a Moleskine customer, so I don’t know if that’s true. There also aren’t very many alternatives available — Livescribe’s and Evernote’s options are either partnerships with Moleskine to begin with or, in the case of the latter, discontinued. The reusable Rocketbook appears to be a cheaper option with features similar to those of the smart notebook, but it doesn’t have the premium quality of a Moleskine and doesn’t offer scheduling tools. All told, Moleskine’s smart planner is a compelling concept that I’d embrace — if only it were more reliable and didn’t require so much effort.

Shareholders force Zuckerberg to give up plan for non-voting shares


Mark Zuckerberg is giving up on an audacious plan to sell most of his Facebook shares without diminishing his total control over the company. The plan, which Facebook announced last year, would have given shareholders two new non-voting shares for each voting share they owned. Zuckerberg hoped to sell these shares to finance his charitable ambitions.

But shareholders sued, arguing that the plan would further consolidate power in Zuckerberg’s hands with no benefits to other shareholders. Zuckerberg was scheduled to testify in court in the case on Tuesday. Abandoning the plan saves Zuckerberg from having to do that.

Most companies operate according to a one-share-one-vote principle. But several high-profile technology companies, including Google, Facebook, and Snap, give extra per-share voting rights to founders and early investors. These extra votes give Larry Page and Sergey Brin a majority of Google’s voting power even though they own much less than half of Google’s shares. The same is true at Snap, where co-founders Evan Spiegel and Bobby Murphy together exercise a majority of the company’s votes, giving them total control over the company’s management.

Facebook’s corporate structure is even more concentrated: Zuckerberg alone controlled a majority of Facebook’s shares when the company went public in 2012.

You might wonder why anyone would buy shares in a company where they had no influence in how it was run. Partly, investors were betting on the business savvy of the companies’ legendary founders. But they also knew that Page, Brin, and Zuckerberg had a strong incentive to do a good job since most of their fortunes would be tied up in the companies they ran. If they tried to sell too many of their shares, their voting power would fall below 50 percent and other shareholders would gain the ability to fire them. Whatever else you might say about the arrangement, investors at least went into the deal with their eyes open.

But then Google decided to change the rules in a way that made things even more favorable to the co-founders. In 2012, Google proposed creating a new class of non-voting shares and distributing one for each share outstanding. That would have allowed the Google founders to sell half their shares without diluting their control of the company, and it also would have allowed them to issue new non-voting shares to use for acquisitions—again, allowing the company to grow without affecting Page and Brin’s control.

Normally, a change like this has to be approved by shareholders. But Brin and Page controlled a majority of Google’s voting shares, virtually guaranteeing that Google’s board would approve the arrangement.

So shareholders sued, arguing that the stock split benefited Page and Brin at the expense of other shareholders. With a smaller stake in Google, shareholders said, the co-founders would have a reduced incentive to manage Google well. But Google settled the lawsuit in 2014, allowing the stock split to go forward, and the courts never ruled on the legality of the proposal.

Zuck’s turn

Zuckerberg hoped to follow Google’s playbook with an even more ambitious three-for-one stock split. If successful, it would have allowed Zuckerberg to sell more than two-thirds of his shares without losing control of Facebook. That would have helped him fulfill his pledge to give away 99 percent of his wealth to charity.

But the plan hasn’t gone smoothly. As with Google’s plan, Facebook shareholders sued. Information uncovered during litigation revealed that one of Facebook’s board members, venture capitalist Marc Andreessen, was coaching Mark Zuckerberg via text message on how to win over other board members at the same time he was supposed to be representing the interests of all shareholders.

If the lawsuit had continued, Zuck could have faced awkward questions about this potential conflict of interest. Instead, Facebook is giving up the fight. “This is an unconditional surrender,” shareholder attorney Stuart Grant told BuzzFeed. “I do think the message is loud and clear: you can’t just run over the stockholders.”

Zuckerberg explained the decision to drop the proposal in a Friday post on Facebook, writing that last year he thought the stock split was “the best way to do both of these things. In fact, I thought it was the only way. But I also knew it was going to be complicated and it wasn’t a perfect solution.”

“Today I think we have a better one,” Zuckerberg wrote. “Over the past year and a half, Facebook’s business has performed well and the value of our stock has grown to the point that I can fully fund our philanthropy and retain voting control of Facebook for 20 years or more.”

Still, dropping the stock split limits how many shares he can sell if he wants to maintain his lock on the CEO job. If he ever gives away the bulk of his shares, he’ll lose his majority of Facebook’s voting power, creating the possibility that other shareholders could fire him if Facebook starts to under-perform expectations.

I Helped Create Facebook’s Ad Machine. Here’s How I’d Fix It

This month, two magnificently embarrassing public-relations disasters rocked the Facebook money machine like nothing else in its history.

First, Facebook revealed that shady Russian operators purchased political ads via Facebook in the 2016 election. That’s right, Moscow decided to play a role in American democracy and targeted what are presumed to have been fake news, memes, and/or various bits of slander (Facebook refuses to disclose the ad creative, though it has shared it with special counsel Robert Mueller) at American voters in an attempt to influence the electoral course of our 241-year-old republic. And all that on what used to be a Harvard hook-up app.


Antonio García Martínez (@antoniogm) was the first ads targeting product manager on the Facebook Ads team, and author of the memoir Chaos Monkeys: Obscene Fortune and Random Failure in Silicon Valley. He wrote about the internet in Cuba in WIRED’s July issue.

Second, reporters at ProPublica discovered that via Facebook’s publicly available advertising interface, users with interests in bigoted terms like “how to burn Jews” could be easily targeted. In the current political climate, the optics just couldn’t be worse.

For me, reading the coverage from the usual tech journalist peanut gallery was akin to a father watching his son get bullied in a playground for the first time: How can this perfect, innocent creature get assailed by such ugliness?

You’re likely thinking: How can the sterile machinery of the Facebook cash machine inspire such emotional protectiveness? Because I helped create it.


In 2011, I parlayed the sale of my failing startup to Twitter into a seat on Facebook’s nascent advertising team (for the longer version, read the first half of my Facebook memoir, Chaos Monkeys). Improbably, I was tasked with managing the ads targeting team, an important product that had until then dithered in the directionless spontaneity of smart engineers writing whatever code suited their fancy.

“Targeting” is polite ads-speak for the data levers that Facebook exposes to advertisers, allowing that predatory lot to dissect the user base—that would be you—like a biology lab frog, drawing and quartering it into various components, and seeing which clicked most on its ads.

My first real task as Facebook product manager was stewarding the launch of the very system that was the focus of the recent scandal: Code-named KITTEN, it ingested all manner of user data—Likes, posts, Newsfeed shares—and disgorged that meal as a large set of targetable “keywords” that advertisers would choose from, and which presumably marked some user affinity for that thing (e.g. “golf,” “BMW,” and definitely nothing about burning humans).

Later that year, in another improbable turn of events that was routine in those chaotic, pre-IPO days, I was tasked with managing the cryptically named Ads Quality team. In practice, we were the ads police, a hastily assembled crew of engineers, operations people, and one grudging product manager (me), charged with the thankless task of ads law enforcement. It was us defending the tiny, postage-stamp-sized ads (remember the days before Newsfeed ads?) from the depredations of Moldovan iPad offer scammers, Israeli beauty salons uploading images of shaved vulvas (really), and every manner of small-time fraudster looking to hoodwink Facebook’s 800 million users (now, it’s almost three times that number).

So now you’ll perhaps understand how the twin scandals—each in a product that I helped bring to fruition—evoked such parental alarm.

What can Facebook do about all this?

Let’s set aside the ProPublica report. Any system that programmatically parses the data effluvia from gajillions of users, and outputs them into targeting segments, will necessarily produce some embarrassing howlers. As BuzzFeed and others highlighted in their coverage of the scandal, Google allows the very same offensive targeting. The question is how quickly and well those terms can be deleted. It’s a whack-a-mole problem, one among many Facebook has.

Also, there’s zero evidence that any actual ads targeting was done on these segments (beyond the $30 that ProPublica spent). Actual ad spend on the million-plus keywords that Facebook offers follows what’s called a long-tail distribution: Obscure terms get near-zero spend, and Facebook’s own tools show the reach for the offensive terms was minimal. Keyword targeting itself isn’t very popular anymore. Its lack of efficacy is precisely why we shipped far scarier versions of targeting around the time of the IPO; for example, targeting that’s aware of what you’ve browsed for online—and purchased in physical stores—nowadays attracts more smart ad spend than any keywords.

No, the real Facebook story here is the Russia thing, which should be of concern to anyone worried about the fate of our republic. While the amount of Russian spend Facebook admitted to is peanuts ($100,000) and certainly didn’t influence the election’s outcome, this should be considered a harbinger of what’s to come. Even US politicians didn’t spend much on Facebook in 2008; now they certainly do, and you can be sure the Russians will grow their budgets in 2018 unless Facebook acts.


The good news for democracy (and Mark Zuckerberg) is that these problems, unlike the unscalable miracles that most Facebook plaints would require to address, are eminently solvable. On Thursday, in fact, as this piece was being edited, Mark Zuckerberg livestreamed an address wherein he broadly elucidated the company’s next steps, which were remarkably in line with what I imagined—with one big exception.

Facebook already has a large political ad sales and operations team that manages ad accounts for large campaigns. Zuckerberg hinted that the company could follow the same “know your customer” guidelines Wall Street banks routinely employ to combat money laundering, logging each and every candidate and super PAC that advertises on Facebook. No initial vetting means no right to political advertising.

To prevent rogue advertisers, Facebook will monitor all ad creative for political content. That sounds harder than it is. Take alcohol advertising, for example, which nearly every country in the world regulates heavily. Right now, Facebook screens every piece of ad creative for anything alcohol-related. Once flagged, that content goes into a separate screening workflow with all the varied international rules that govern alcohol ads (e.g. nothing in Saudi Arabia, nothing targeted to minors in the US, etc.).

Political content would fall into a similar dragnet and be triaged accordingly. As it does now, Facebook would block violating ad accounts, and could use account meta-data like IP address or payment details to prevent that advertiser from merely creating another account. It would be a perpetual arms race, but one Facebook is well-equipped to win, or at least keep as a stalemate. Zuckerberg’s video shows commitment to waging that war.
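To make the described dragnet concrete, here is a toy sketch of the kind of per-market rules table such a screening workflow implies; the categories, country codes, and actions are invented for illustration and are not Facebook’s actual policy engine.

```python
# Invented rules for illustration only.
RESTRICTED_CATEGORIES = {
    "alcohol": {
        "SA": "block",             # no alcohol ads at all in Saudi Arabia
        "US": "block_minors",      # no targeting under the legal drinking age
    },
    "political": {
        "*": "verify_advertiser",  # route to know-your-customer review
    },
}

def triage(category: str, country: str) -> str:
    """Look up the enforcement action for a flagged creative in a market."""
    rules = RESTRICTED_CATEGORIES.get(category, {})
    return rules.get(country, rules.get("*", "allow"))

assert triage("alcohol", "SA") == "block"
assert triage("political", "FR") == "verify_advertiser"
```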

Next, based on Zuckerberg’s somewhat vague wording, Facebook will likely now comply with the Federal Election Campaign Act, a piece of 1971 legislation that governs political advertising, and from which Facebook finagled a self-serving exemption in 2011. The argument then was that Facebook’s ads were physically too small (no longer true) to allow the usual disclaimer—“I’m Joe Politico, and I approve this message…”—required on every piece of non-Facebook media. Facebook also claimed at the time that burdensome regulation would have quashed innovation at the burgeoning startup.

With Facebook’s market value now hovering at half a trillion dollars, that’s a preposterous thought. The company needs to put on its big-boy pants and assume its place on the world stage. The FECA disclaimers could easily live inside the upper right-hand-side dropdown menu that currently carries some ads targeting information (check it yourself), and would seamlessly integrate with the current product. Reporting of malicious political content could work in a similar manner to the recently added buttons that allow the reporting of fake news.

Lastly, the step I didn’t see coming, because of its inherent weirdness.

The biggest promise, at least at the product level, that came out of Zuckerberg’s video concerns the ominously named ‘dark posts’. The confusion around these is vast, and worth clearing up.

The language is a pure artifact of the rudimentary nature of Facebook’s ads system in the bad old days. Before the Newsfeed ads we have today, there was no commercial content in Feed at all, beyond so-called ‘organic’ (i.e. unpaid) posts that Pages would publish to whoever had liked their page. A Like was effectively license to spam your Feed, which is why companies spent millions to acquire them.


But modern digital advertisers constantly tweak and experiment with ads. When big brands requested the ability to post lots of different creative, it posed a real problem. Brands wanted to show a dozen different ad variations every day, but they didn’t want to pollute their page (where all posts necessarily appear). ‘Dark posts’ were a way to shoehorn that advertiser requirement into the Pages system, allowing brands to create as many special, unseen posts as they’d like, which would be seen only by targeted audiences in their Feeds, and not by random passers-by on their page. The unfortunate term ‘dark post’ assumed a sinister air this past election, as it was assumed that these shady foreign elements, or just certain presidential candidates, were showing very different messages to different people, engaging in a cynical and hypocritical politicking.

Zuckerberg proposes, shockingly, a solution that involves total transparency. Per his video, Facebook pages will now show each and every post, including dark ones (!), that they’ve published in whatever form, either organic or paid. It’s not entirely clear if Zuckerberg intends this for any type of ad or just those from political campaigns, but it’s mind-boggling either way. Given how Facebook currently works, it would mean that a visitor to a candidate’s page—the Trump campaign, for instance, once ran 175,000 variations on its ads in a single day—would see an almost endless series of similar content.

As big a step as the transparency feature sounds, I don’t see how Facebook can launch it until these Pages product concerns are worked out. The Facebook Pages team product managers must be sitting right now in a conference room frantically scrawling new design ideas on a whiteboard. I’d bet anything that the Ads Quality and Pages teams are prioritizing that as you read this. This is one scandal Facebook isn’t going to weasel its way out of with generic appeals to “openness” and “community”.


Despite Zuckerberg’s sudden receptiveness to user (and government) feedback, should Facebook be pilloried for these blatant shortfalls, or even sanctioned by Washington? You’ll accuse me of never having taken off my corporate-issue Facebook hoodie, but the answer is not really.

It would take the omniscience of a biblical deity to correctly predict just what Facebook’s two billion chatting, posting, scheming, and whining users are up to at any given moment. If you’d come to me in 2012, when the last presidential election was raging and we were cooking up ever more complicated ways to monetize Facebook data, and told me that Russian agents in the Kremlin’s employ would be buying Facebook ads to subvert American democracy, I’d have asked where your tin-foil hat was. And yet, now we live in that otherworldly political reality.

If democracy is to survive Facebook, that company must realize the outsized role it now plays as both the public forum where our strident democratic drama unfolds, and as the vehicle for those who aspire to control that drama’s course. Facebook, welcome to the big leagues.

Why Equifax’s error wasn’t hiring someone with a music degree

All those people are about to be proven so, very, very wrong in an upcoming, in-depth report from internet infrastructure organization Packet Clearing House in collaboration with Prof. Coye Cheshire at the U.C. Berkeley School of Information. Their data show that most infosec professionals don’t hold a degree in a computer science-related field. What’s more, the report finds that degrees are the least important feature of a competent practitioner and that degree programs are the least useful places to learn security skills.

Portions of the report, titled “A Fragmented Whole: Cooperation and Learning in the Practice of Information Security,” were shared with Engadget prior to its November publication. It combines surveys, interviews, and ethnographic research.

The project’s lead researcher Ashwin Mathew told us via email, “There are many things for which we should fault Equifax, which other coverage has already pointed to, such as insufficient staffing and bad practices.”

He added:

The CISO not having a CS degree is a distraction at best from the underlying problems — and it is incredibly problematic the fact that the CISO is a woman who is called upon to defend her qualifications, in a field dominated by white men, many of whom do not have CS degrees or infosec certifications.

The question of Ms. Mauldin’s fitness for the position became a lens for many — mostly dudes — to exact their anger at Equifax for probably ruining millions of people’s lives with a single missed patch. And as far as we’ve been told, that’s what it came down to: A flaw in Apache Struts that should’ve been fixed in March led to its major breach the same month, which we only found out about on September 7.

That’s not all, of course. Right when we were learning about the theft of sensitive information belonging to at least 200 million U.S. consumers, as well as information on some Canadians and up to 400,000 British residents, we found out that Equifax execs sold off stock before the breach was made public. Shares of Equifax have plummeted 35 percent since the disclosure of the breach. Those shady Equifax stock sales are now the focus of a criminal probe by the FBI in conjunction with U.S. prosecutors in Atlanta.


In addition to the FBI, several attorneys general in various states have announced formal investigations. Collectively, U.S. senators “want copies of all Equifax penetration test and audit reports by outside cybersecurity firms,” according to Bloomberg.

To top it all off, Equifax has behaved horribly in the wake of the breach. Its website to help consumers was broken, Equifax itself sent the public to the wrong website (a fake phishing site set up by a white-hat hacker), and the company quietly pulled its apps from both the Apple App Store and Google Play.

But when the male-dominated infosec discussions turned to Ms. Mauldin’s degree in music, it was decided that she was a suitable target for their rage, with some well-deserved anger at Equifax as the catalyst. The hate was visible on Twitter, Reddit, and Slashdot, and put into print by MarketWatch’s Brett Arends (a history major himself). He wrote:

When Congress hauls in Equifax CEO Richard Smith to grill him, it can start by asking why he put someone with degrees in music in charge of the company’s data security.

And then they might also ask him if anyone at the company has been involved in efforts to cover up Susan Mauldin’s lack of educational qualifications since the data breach became public.

This thinking begins to look unqualified, and worse, in light of the upcoming Berkeley report. Lead researcher Mathew told us, “I spoke with CISOs and senior engineers at large Silicon Valley firms who both did and didn’t have degrees.”

He explained that among those who even had degrees, those with degrees outside of computer science outnumbered those with a degree in CS. “For many of the positions which they hired for (including their own), degrees are not a consideration,” he said. “Degrees are in general important only as a marker of character.”

What’s more, Mathew confided, “As several interviewees told me, having a degree shows a certain level of persistence and fortitude when evaluating junior positions, with the degree indicating that a candidate was willing to sit through several years of coursework — but the subject of the degree is irrelevant. Many of the online services which we take for granted are secured by people who do not have degrees, or whose degrees are not in CS.”

As for what the report will tell us about the degrees people in infosec actually hold, Mr. Mathew told Engadget:

“Respondents indicated a diverse array of fields of study from “hard” sciences like biology, chemistry and physics, to agriculture, languages, journalism, sociology, and so on.”

I hope we find out what happened with the Equifax breach, but I’m not holding my breath. Maybe Ms. Mauldin and her forcibly retired colleague were part of a decision-making chain that deprioritized a single patch, or maybe they’re just scapegoats. Or maybe they were the ones who hired penetration testing teams to audit the company, but couldn’t get their superiors to take the audit’s finding seriously — a situation that happens so often it’s insane.

After all, according to her now private LinkedIn page, Mauldin was the Senior Director of Information Security Audits and Compliance for Hewlett Packard from 2002-2007.

Short of literally punching a baby, it’s hard to imagine what else Equifax could have done wrong. The sky seems to be the limit here, and Ms. Mauldin was a part of it. So the only thing that’s certain is that things aren’t going to get better for anyone involved with Equifax, past or present. Especially all of us, who are Equifax victims whether we like it or not.

Either way, we should all be looking forward to “A Fragmented Whole: Cooperation and Learning in the Practice of Information Security.” It’ll be announced, and findable, on the front page of The Center for Long-Term Cybersecurity in early November.

Nest’s cheaper thermostat is better than the original

Usually when a company releases a cheaper version of its flagship product, it’s notably worse in some way in order to justify the cheaper price and to keep from cannibalizing the original, more expensive option.

Nest didn’t do that. Instead, the company that made the smart thermostat popular with a $250 device made a new thermostat that is just as good as the original, and knocked $80 off the price (mainly because it won’t be compatible with some higher-end heating and cooling systems).

If you’re in the market for a smart thermostat and the Nest Thermostat E works with your system, you should buy it. (If you have a two-story house or a basement, you might want to look at an Ecobee thermostat, which comes with remote temperature sensors.) It works just as well as the flagship Nest Thermostat, it’s cheaper, and the features it is missing are inconsequential at best.

Here are the differences between the $249 Nest Thermostat and the $169 Nest Thermostat E:

  1. The Nest Thermostat has a full-color display and a feature called Farsight that will show you the time, weather, or temperature from across the room. The Nest Thermostat E has a frosted display that only shows the indoor temperature.
  2. The Nest Thermostat works with 95 percent of homes. The Nest Thermostat E works with 85 percent of homes.
  3. The Nest Thermostat has a metal ring and comes in multiple colors. The Nest Thermostat E has a plastic ring and only comes in white.

They both work with services like Alexa and Google Assistant (still no HomeKit support for some reason), you can control them both away from your home with the Nest app, and all of the energy-saving features that make the original Nest great are fully available on the cheaper edition.



I’ve been using the Nest Thermostat E for the past few weeks, trying to decipher why anyone would buy the more expensive version if both are compatible with their heating and cooling system. There are no build quality issues, installation was easier than I expected, and the auto-scheduling feature works well enough that it hasn’t been too hot or too cold in my house in days.

I waffled for years on buying a Nest because, well, it’s $250 and I spend literally no time looking at my thermostat. But if you knock nearly $100 off that? With no missing features? Now we’re talking. And I think that’s how most people will feel about it.

If spending $170 will lower my energy bill and let me control the temperature from my phone, that’s hard to pass up. And that’s what Nest is ultimately hoping for. The company says it wants to sell two to three times as many units over the next four years, and the E will likely help it do that. But I don’t know if Nest expected its budget option to potentially cannibalize its flagship device. That’s always a possibility when you make similar products at different price points, and safeguards are usually put in to stop that from happening.

That apparently didn’t happen this time around. The best Nest is no longer the original. It’s the Thermostat E.

Searching to help Mexico City and other top trends this week

Each week, we take a look at the most-searched trends (with help and data from the team at Google News Lab). Here are a few top trends from this week:

Mexico City earthquake

A fatal earthquake rocked Mexico City this week, and people turned to Google to find out how they can aid the recovery. Two of the top questions in the U.S. were “What fault line is Mexico City on?” and “Where to donate for the earthquake in Mexico?” Those questions were both in the top five searched questions in Mexico City as well, along with “What is needed in the shelters?” and “Where is the school that collapsed from the earthquake?”

From court to screen

Wednesday marked the anniversary of the famed tennis match between Billie Jean King and Bobby Riggs, and starting today, Emma Stone and Steve Carell portray them on the big screen. The release caused a racket in Search: Interest in “women’s tennis” spiked 140 percent higher than “men’s tennis.” (Game.) Billie Jean King was searched 230 percent more than Bobby Riggs. (Set.) And interest in Emma Stone was 290 percent higher than Steve Carell (match!).

Demagor-gone searching

On a scale from one to Eleven, how excited are you for “Stranger Things” season two? Unless you’ve been trapped in the Upside Down, you know that the show is coming back soon. We’ll help you out with a few of the top-searched questions this week: “When is season 2 of Stranger Things coming out?” (October 27), “Who went missing on Stranger Things?” (RIP Barb), and “How many Emmys did Stranger Things win?” (Zero.) It may have lost to “Handmaid’s Tale” at the Emmys, but it’s spooking the competition in other ways—“Stranger Things costume” was searched 1,040 percent more than “Handmaid’s Tale costume” in the last week. There’s only a few weeks to go, so get your Eggos ready.

Will it be a Graceful comeback?

Fans thought they said goodbye to “Will & Grace” in 2006, but now they’re searching, “What time will Will & Grace be on Hulu?” That’s right, the beloved NBC series is making a comeback next week (all 194 episodes are now on Hulu as well). Other popular questions include, “How many episodes are there in Will & Grace season 1?” and “Is Leslie Jordan returning to the Will & Grace reboot?” (Karen Walker isn’t happy about that one.) There are a lot of “Will & Grace” lovers in Rhode Island, Iowa and North Dakota, the states that searched the most for the show this week.

Flu fighters

Flu season is around the corner, and people are aching to learn more. Search was congested with lots of queries, but the top ones were: “How long is a flu shot good for?” “How bad is flu season this year?” and “How to stay healthy during flu season?” People are searching the most for “stomach flu,” followed by “keto flu.” Top regions searching for “flu season” were Delaware, North Carolina and Louisiana.

Motorola Has Fallen

It’s 2005. Your mother’s Motorola RAZR is state of the art in the world of pink engineering. By 2007, 130 million RAZR phones had been sold. It was featured in Lost, Top Gear, Burn Notice, and other notable TV shows and movies at the time due to its popularity. Motorola had gained ground on Nokia, the leading phone manufacturer at the time, in a market far more diverse than today’s: Nokia had the largest share at 35% and Motorola was in second place with 21%. They had momentum to build on.

Then, the unthinkable happened. Apple, which made iPods and computers, released a product no one expected: the iPhone. Existing phone manufacturers should have been stunned at the level of polish and user-friendliness offered by the entirely new experience. Well, maybe they were in private. Publicly, many companies released statements that are now just plain embarrassing. Steve Ballmer, the CEO of Microsoft at the time, said, “There’s no chance that the iPhone is going to get any significant market share.” The CEO of Nokia said they would not change their thinking or approach. The CEO of Palm said, “We’ve learned and struggled for a few years here figuring out how to make a decent phone. PC guys are not going to just figure this out. They’re not going to just walk in.”

Motorola stayed quiet and just kept their tears inside, I suppose. They already had smartphones on the market that were immediately outclassed. Their first two smartphones came out in 2003, although they were hardly ‘smart.’ The Motorola Q was the smartphone best fit to compete with the iPhone at the time. It ran a terrible Windows Mobile operating system and had no touchscreen support. Basically, it was nothing like the iPhone and a poor competitor.

It was two more years until Motorola had a more worthy competitor: the Motorola Droid. The Droid had been made in partnership with Verizon to compete directly with the iPhone. It had a touchscreen and a slide out keyboard to give people the best of both worlds. At least, that was the idea at the time. It ran a very early version of Android that was still not up to par. By this time, Motorola had lost market share all the way down to less than 5%. They had not responded fast enough to the new market pressures put on by Apple.

In early 2011, Motorola split into two separate companies. The first, called Motorola Solutions, kept all the profitable areas of the business, including police technologies, radios, and other commercial needs. Motorola Mobility was the second, and it was strictly their phone business and all its misfortune. By 2012, Motorola held just below 2% market share. They had one good thing going for them, though: Google purchased Motorola Mobility and their entire patent portfolio for $12.9 billion that year.

Google’s purchase brought some excitement to the fans of Motorola’s Android devices. The Moto X, released in 2014, was one of the best smartphones of the year. Motorola was making competitive devices under Google, but they were not gaining market share. Then, it all fell apart. In late 2014, the remainder of Motorola Mobility was sold off to Lenovo for only $2.91 billion. Google had kept all the patents to itself and dumped the rest.

Present day Motorola Mobility is only kept alive by the ‘Moto by Lenovo’ branding to appeal to western customers. Almost all the staff that were part of Motorola have been let go and the devices Lenovo has been putting out under this branding is underwhelming.  It’s truly disappointing that the stellar devices Motorola was working on in 2014 could not have continued, as a continuation of those devices might have brought them back into the game. Instead, we all watched as they slowly died. Motorola Solutions lives on, but their legendary phones are now just a thing of the past.