Bluetooth Is Not Getting Safer

Over a year ago I published Seven Rules for Bluetooth at Starbucks. Recently, Armis, a security firm specializing in the Internet of Things (IoT), announced a new set of Bluetooth vulnerabilities they call BlueBorne. If you read “Seven Rules”, you have a good idea of what BlueBorne is like: hackers can get to your devices through Bluetooth. They can get to you without your knowledge. Windows, Android, Apple, and Linux Bluetooth installations are all vulnerable. Most of the flaws have been patched, but new ones are almost certain to be discovered.

Some of the flaws documented in BlueBorne are nasty: your device can be taken over silently from other compromised devices. Using BlueBorne vulnerabilities, hackers do not have to connect directly to your system. Someone walks within Bluetooth range with a hacked smartphone and you are silently infected. Ugly. Corporate IT should be shaking in their boots, and ordinary users have good reason to be afraid.

What should I do?

A few simple things make you much safer.

  • Be aware of your surroundings. Bluetooth normally has a range of 30 feet, and special equipment can extend it. Whenever you don’t know who might be snooping within a 30-foot radius sphere, you are vulnerable. That’s halfway to a major league pitcher’s mound and roughly three floors above and below.
  • Keep your systems patched. The problems Armis has documented in BlueBorne have been patched. Don’t give the bad guys a free ticket by leaving known soft spots unprotected. Make them discover their own holes. By patching regularly and quickly, you cut out the stupid and uninformed hackers. Smart hackers are rare.
  • Turn Bluetooth off when you are not using it or you enter a danger zone. When Bluetooth is turned off, you are safe from Bluetooth attacks, although you may still be affected by malware placed on your device while Bluetooth was turned on.

The seven rules for Bluetooth I published a year ago are still valid. Follow them.

Seven basic rules for Bluetooth

  1. Avoid high-stakes private activities, like banking transactions, when using Bluetooth in public.
  2. If you are not using Bluetooth, turn it off!
  3. Assume your Bluetooth connection is insecure unless you are positive it is encrypted and secured.
  4. Be aware of your surroundings, especially when pairing. Assume that low-security Bluetooth transmissions can be snooped and intercepted from 30 feet in any direction, and farther with directional antennas. Beware of public areas and multi-dwelling buildings.
  5. Delete pairings you are not using. They are attack opportunities.
  6. Turn discoverability off when you are not intentionally pairing.
  7. If Internet traffic passes through a Bluetooth connection, your firewall may not monitor it. Check your firewall settings.

Network Service Providers and Privacy

Advertising runs on data. It always has. Long before programmatic ads and algorithms, we saw Mercedes-Benz ads in Fortune and Chevy ads in Mechanix Illustrated. Some clever guy had figured out that Fortune readers and Mechanix Illustrated readers bought different cars. An advertising outlet’s success has always depended on generating sales, and successful sales depend on finding qualified buyers.

Today, qualified buyers are spotted by their online habits, which now include choice of websites to visit, age, gender, physical location, income, purchase patterns, and many other factors. Based on these factors, online ads are targeted to narrowly identified network users. Advertisers now have masses of data and abundant computing power to process it.

Websites as Data Sources

But advertisers want more data and more precisely targeted ads. Who is surprised? There are two main sources of consumer data for targeted advertising. The first is the websites we use all the time, Google and Facebook most prominent among them. They know their users and use that knowledge to aim the ads they sell to advertisers. These targeted ads are the revenue source that funds the free services these sites offer.

Network Service Providers

The other main source of buyer information is network service providers like Comcast and Verizon. Google and Facebook have in-depth information on what people do on their own sites, but they know very little about what happens elsewhere. Service providers have a wider, but shallower, view of people’s activity.

Google knows you searched on “archery” and clicked on an informational archery site. Google identifies you as a candidate for bow and arrow ads. Comcast knows something else. Inside the archery site, you clicked on a link to Ed’s Sporting Goods. Comcast might try to sell Ed ads that they will target at you. Only Ed and your bank know that you ordered a baseball and mitt, so you probably won’t get any baseball ads.

Data Brokers

A data broker might try to purchase data from Google, Comcast, Ed, and your bank. With the purchased data, they can put together an even more detailed picture of your habits. Exactly what information the data broker will get depends on the privacy policies and regulations of Google, Comcast, Ed’s Sporting Goods, and your bank.

These data brokers disturb some people, even conspiracy skeptics like me, because they seem to have little accountability. Users have the “Terms of Service” and privacy policies that govern their relationships with Google, Comcast, and their bank, but the data brokers have no direct relationship with the people profiled in their databases. Are the brokers good or bad? We don’t know. If they misuse our data, will we ever know? Do we have any recourse? I don’t have answers to these questions yet, but I think we all need them.

The FCC and the FTC

Both websites and network service providers are subject to regulations on what they can collect, how they can collect it, and the data they can sell, but the regulations vary. Google and Facebook are subject to Federal Trade Commission guidelines, like all businesses engaged in interstate trade. Network service providers are regulated by the Federal Communications Commission as common carriers.

There are significant differences. Network service providers are treated as utilities: services, like electricity and telephone, that people must have. Google and Facebook are businesses that consumers choose to deal with. Because people have no choice, utilities are regulated more strictly than most businesses. Are network services a utility, or just a business? Last year, the FCC declared them a utility subject to FCC regulation, but some argue that the ruling was wrong and should be reversed.

Opt-in vs Opt-out

A critical point is whether collecting consumer information should be “opt-in” or “opt-out.” If collection is opt-in, no information can be collected until the customer says it is okay. If collection is opt-out, collection is allowed until the customer takes the trouble to say no.

Which way is best? Consumers with informed opinions generally prefer opt-in, but a lot of people don’t care and think opt-out is fine. Businesses that collect and use data tend to prefer opt-out schemes.

Business or Utility?

When network service providers were classified as utilities, they became subject to opt-in rules. FTC guidelines, which apply to Google and Facebook, are opt-out. Recently, the new administration changed the FCC regulation for network service providers to opt-out, similar to the FTC guidelines. Some consumers are quite concerned.

Cayla, A Living Doll from the Twilight Zone

Cayla, a computer-driven talking doll, uses technology similar to that behind Amazon’s Alexa, Microsoft’s Cortana, Apple’s Siri, and Google Home to construct a toy that simulates a living friend for a child. Unfortunately, some believe that Cayla may be the embodiment of the murderous Talky Tina of “Living Doll,” the fifty-year-old episode of The Twilight Zone.

In Germany, Cayla has been declared a banned surveillance device. Selling and even possessing a Cayla in Germany is illegal. The doll’s communication capability must be permanently disabled to make it legal in Germany. Also, several groups in the US have launched an action to have Cayla sanctioned under the Children’s Online Privacy Protection Act (COPPA).

I’m not here to argue that these government and legal actions are justified or unjustified; that’s for individuals to decide for themselves. But I think anyone who is concerned about cybersecurity should understand some of the issues involved. We are likely to see many more products like Cayla appearing on the market. Some will be for children, others for teens, and many aimed at adults. Some will be great, some exploitative, and some will, no doubt, be just plain shoddy.

So let’s take an engineer’s look at Cayla. The complaint document sent to the Federal Trade Commission is against Genesis Toys and Nuance Communications and was lodged by the Electronic Privacy Information Center and Consumers Union, among others. Genesis Toys is a Hong Kong corporation that developed the doll. Nuance Communications is a US corporation that retains and processes data collected by the Cayla doll. The exact relationship between Genesis and Nuance is not clear to me, but they are two separate corporations.

Cayla’s architecture is fairly simple. The doll itself is the equivalent of a Bluetooth headset that acts as a microphone and speaker for an app that runs on a smartphone, like an iPhone or an Android. The app communicates with a cloud service that supplies computing and storage resources that power Cayla.

This architecture has issues. Bluetooth headsets are insecure. I mentioned in a blog a few months ago that the NSA has banned commercial Bluetooth headsets for classified or confidential information. A criminal hacker would not have much trouble listening in on a child’s conversations with Cayla and interjecting their own questions and suggestions. Imagine a pedophile speaking through Cayla suggesting to a three-year-old that they meet out in the street. The Bluetooth standard says the protocol is good to ten meters (about 30 feet), but special equipment can extend the range substantially. Also, Bluetooth signals, which share the same 2.4 GHz radio band as Wi-Fi, penetrate walls.

Even in isolated spots where Bluetooth intrusion may not be a consideration, Cayla has vulnerabilities. The FTC complaint points out that Cayla is programmed to promote certain commercial products, such as movies. In addition, the information that Cayla collects, like names, locations, favorite foods and toys, etc., is stored in the cloud. The Genesis Toys privacy policy states that this information is kept and analyzed by Nuance Communications and may be shared. I should note that while I was writing this blog, the posted Genesis privacy statement was changed. You may want to check it for yourself.

Cayla simulates conversation, answers and asks questions, and can, or potentially can, do all of the things Alexa, Cortana, Siri, and Google Home can do: order pizza, open the front door, adjust the thermostat, call for an Uber. The list gets longer every day. Cayla can’t do all these things now, but the technology she is built upon can. Cayla’s limits are set by the discretion of Genesis Toys and Nuance Communications. Parents may want to be certain that controls are in place that will prevent their three-year-old from ordering a dozen pizzas or their ten-year-old from embarking on a trip to Aruba. I don’t suggest that Cayla is likely today to cause these things to happen. Rather, parents should be aware that these new products make such mishaps possible.

Like the living doll on The Twilight Zone, Cayla is a new technology with unexpected powers, and these powers can harm us if they are not used properly.

In another blog, I plan to discuss the steps I would take when deciding whether I want a product like Cayla in my home. These products have amazing potential for improving our lives and could be more fun than a barrel of monkeys for our children. But they can also be dangerous. You should choose with knowledge and good judgement.

Relabel the Email Send Button “Make Public”

Email is not private. Ever.

We’ve heard a lot about email security during this election year and I am afraid people may have gotten some wrong impressions from the discussion. Most of the debate has been over the use of secure email servers. People may get the impression that using a secure email server makes the information on email private. Securing an email server makes it difficult to snoop into email stored on the server, but that is only a fragment of the picture.

Using email for critical private information is unwise under any circumstances. I fear this point is lost in the discussion. An email server is only one vulnerability in the chain of vulnerabilities from sender to receiver. You can never be certain, or even reasonably sure, that they are all safe.

Sending information in an email exposes the information to unauthorized access that you will not be able to control. In addition to unauthorized snooping, any email sent or received on company email is open to the employers of both the sender and the receiver. A business may be legally required to make its email public in court. An additional danger is that the email message you receive may not be the message your correspondent sent to you. The sender in the email header may not be the real sender. Email was designed for convenience, not for integrity or privacy of communications.
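The point about forged senders is easy to demonstrate: nothing in the message format itself authenticates the From header. A minimal sketch using Python’s standard email module (the addresses are hypothetical):

```python
from email.message import EmailMessage

# Nothing in the message format authenticates these headers:
# any client can write any value into "From".
msg = EmailMessage()
msg["From"] = "ceo@example.com"      # hypothetical, forged sender
msg["To"] = "victim@example.com"     # hypothetical recipient
msg["Subject"] = "Urgent: wire transfer"
msg.set_content("Please send the funds today.")

# The message serializes without complaint. Only checks made later,
# at the receiving end (SPF, DKIM, DMARC), can catch the forgery,
# and many servers do not enforce them.
print(msg["From"])  # prints ceo@example.com
```

Receiving systems that do verify SPF, DKIM, and DMARC can flag messages like this, but the sender’s mail client will never stop you from writing one.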

My attitude, and that of a few other software and network architects with whom I have discussed it recently, is to treat an email as a postcard, open to anyone who cares to snoop.

How email snooping works

To understand email security, you have to know a little about the email system architecture. There are five components: the sending client, the receiving client, the connecting infrastructure, and the sending and receiving servers. Usually the sending and receiving clients are a single piece of software, like Outlook or Thunderbird, but the sender and receiver each have their own. In addition, unless you are sending email to someone in your own domain (the right side of the “@” in both addresses is the same), the email will go from the sender’s client to the sender’s email service to the receiver’s email service to the receiver’s client. The connecting infrastructure is usually the Internet, and it is often the most vulnerable part of the process.
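The relay chain described above is recorded in the message itself: every server that handles a message prepends a Received header. A sketch using Python’s standard email parser on a made-up message (the host names are hypothetical), reading the hops back in the order they happened:

```python
from email import message_from_string

# A made-up raw message. Real Received headers are added, one per
# hop, by each server that relays the mail, newest first.
raw = """\
Received: from mx.receiver.example by inbox.receiver.example; Tue, 1 Mar 2016 10:02:00 -0000
Received: from smtp.sender.example by mx.receiver.example; Tue, 1 Mar 2016 10:01:00 -0000
From: alice@sender.example
To: bob@receiver.example
Subject: hello

Hi Bob.
"""

msg = message_from_string(raw)

# Reading the Received headers bottom-up reconstructs the path:
# sender's server -> receiver's border server -> receiver's mailbox.
# Note that headers added before the mail reached servers you trust
# can themselves be forged.
for hop in reversed(msg.get_all("Received")):
    print(hop.split(";")[0])
```

Each hop in that chain is a place where the message can be read or altered, which is why controlling your own client and server is not enough.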

As an email sender, you can protect your email client by choosing a reputable email service, managing your email account passwords carefully, and following good security practices on the devices you use for sending and receiving email, but you do not control the receiver’s elements in the chain. Steps can be taken to increase the security of email, but there is no way to tell if they have been taken at the links you do not control in the chain. In other words, no matter how careful you are, there are still many opportunities for tampering with the email you send and receive.

Email encryption

However, you can do something to protect your privacy: you can send encrypted messages that you encrypt yourself and your recipients must decrypt themselves. Independent encryption that is controlled by you and your recipient eliminates most of the issues. The problem is that you can’t send an encrypted message to just anyone, because you and your recipient have to share some secret key to the encryption. This is the method behind PGP (Pretty Good Privacy), which technical types have used for a long time for email privacy. Many off-the-shelf products require less technical skill to use than PGP, but senders and recipients still have to share some secret information before communication can take place. Off-the-shelf products can hide the sharing and lessen the pain, but you and your correspondents will still have to agree on tools and keys before you can exchange messages privately.
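The shared-secret requirement is easiest to see in the simplest cipher there is, the one-time pad: the key must be exchanged over some secure channel in advance, must be at least as long as the message, and must never be reused. A stdlib-only toy sketch of the idea, not a substitute for PGP or other vetted tools:

```python
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # One-time pad: XOR each message byte with the matching key byte.
    assert len(key) >= len(plaintext), "pad must cover the message"
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

# Sender and recipient must share this key over some secure channel
# *before* any mail is exchanged -- the core inconvenience described
# in the paragraph above.
key = secrets.token_bytes(64)

ciphertext = encrypt(key, b"meet at noon")
print(decrypt(key, ciphertext))  # b'meet at noon'
```

Real tools replace the pre-shared pad with public-key cryptography, but the parties still have to exchange and verify key material out of band before private communication can start.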

Encrypted email is the only kind that I consider secure. But I also keep in mind that encryption-based systems are still fallible. What is safe today may be vulnerable tomorrow, because all encryption can be broken if sufficient computing power is applied. Today, breaking the most secure encryption requires decades of computer time, but tomorrow’s computers are likely to be much more powerful. Emails that are securely encrypted today may become practical to crack in years to come. Also, if an encryption key gets into the wrong hands, the message is no longer private. If a careless recipient saves an unencrypted copy of a message, it is no longer private. And a strong but poorly implemented encryption is still weak. Encryption products that ought to have been secure have turned out to be insecure through implementation errors. Always keep in mind that email places whatever you send into the hands of strangers.
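The brute-force point can be put in rough numbers. A back-of-the-envelope sketch, assuming a hypothetical attacker who can test one trillion keys per second:

```python
# Rough time to try every possible key, at a hypothetical
# 10**12 keys per second (a generous assumption).
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
RATE = 10**12  # keys per second

for bits in (56, 128):
    years = 2**bits / RATE / SECONDS_PER_YEAR
    print(f"{bits}-bit key: roughly {years:.2g} years to exhaust")
```

At this rate a 56-bit key, the old DES standard, falls in under a day, while a 128-bit key would take on the order of 10**19 years. In practice the breaks usually come sooner than raw search suggests, from weakened algorithms, leaked keys, and the implementation errors mentioned above.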

Email was, like the Internet, designed for flexible and open communications. Its complex and sprawling structure changes slowly. Computer and network security in general has improved greatly in recent years, but the criminals have gotten better too.

The upshot is that secure email servers do not secure email. I, and many other software engineers and architects, regard all email as insecure. Period. Always assume that hitting the send button makes the message public.

Email is fast and convenient, but not private.