Two-Factor Authentication

Two-factor or multi-factor authentication makes computing more secure. You’ve probably seen it already and you will see more of it. I highly recommend it, with some caveats. I remain skeptical of biometric authentication. Facial, fingerprint, and retina recognition are all convenient, but they also have issues that are not ironed out yet. No matter how optimistic the sensor makers’ marketing, faces, prints, and retinas can’t be replaced when they are compromised, and there are gruesome reports of such compromises. Multi-factor authentication adds extra steps to authentication, but there is no question that additional factors increase security.

What is multi-factor authentication?

As the name suggests, multi-factor authentication requires your identity to be established in multiple ways. The user name and password authentication that has been used for decades relies on a single piece of evidence to prove you are who you claim to be: knowledge of the correct password. Two-factor authentication adds another piece of evidence. The second piece could be a second password, but all passwords are vulnerable in the same ways, so it is better to use more than one kind of evidence.

Security specialists often talk about three types of evidence of authenticity: what you know, what you have, and what you are. A password is something you know that no one else does. A physical key is an object that only you have. Your fingerprints, your facial appearance, your retinal pattern, and your DNA are examples of something you are.

An example

Physical safes commonly use single-factor authentication, though some use more than one factor. Most single-factor safes have combination locks. To enter a single-factor safe, you simply enter the correct sequence of numbers. If you write the sequence down, someone could find the paper; or someone could look over your shoulder and watch you dial the combination. Whoever finds the paper or watches you has access to the safe. Sneaking in is a challenge, but by no means impossible.

Bank vaults frequently have two combinations, each known to a single bank officer. To open the vault, both officers must dial in their combinations. One officer may be incautious or a fraudster, but the double combination prevents a single officer from getting in without a witness.

We have a safe in our home that requires both a combination and a key. I know the combination, but without the key, I can’t get in. If thieves were to successfully snatch the combination, they would still have to find the key. Often, even I can’t find the key, so they’ll have a job to get into our safe. In this way, our two-factor, key and combination safe is an annoyance, but more secure than a single-factor combination-only safe.

Multi-factor user authentication

Typical two-factor authentication uses a password and something else. One common method uses a text message sent to your phone containing a four- to eight-character token. After you correctly enter your password, the token is automatically sent to your phone, and you must enter it to complete the login. In other words, you must both know your password and have your phone to get into the account. Another variation is to email a token. In that case, you must both know your password and have access to your email account. These methods are harder for criminals to deal with than a simple password.

Flaws in message-based authentication

These methods are good, as long as access to your email account or phone is secure. However, email is just another account to secure, which would be better done with multi-factor authentication. To do that, you would have to have another secure email account. At a certain point, the complexity becomes unbearable.

Cellphone issues

The cellphone method also has problems with phone numbers and SIM cards. Phone numbers are assigned to SIM cards. Usually, when you buy a new phone, you move your SIM card, and your phone number, contacts, and other information move with you. However, the service providers can reassign phone numbers to a new SIM, say when your phone is lost or destroyed, or you get a new phone that is not compatible with your old SIM.

The ever-considerate and conciliating providers can easily transfer your phone number to a new SIM. They hesitate to hassle a customer too much when numbers are reassigned, and they do not press a requesting customer for much identification and verification, which means that criminals with a handful of information can get your phone number transferred to their own phone. To make matters worse, cell carrier employees are not guaranteed to be honest: they might be bribed, or they may be criminals themselves. As a result, criminals have found it fairly easy to get phone numbers reassigned without the owner’s consent.

Once your phone number has been transferred, the criminal can use it to gain access to your accounts, change passwords, run up bills, and drain your bank.

The cellular providers have not been forthcoming on how often this happens, but anecdotal evidence says the practice is on the rise. There are a few things you can do to protect yourself. If your provider offers a PIN for changes to your account, take it. Most important, when your number is transferred, you will get a notification on your phone and the phone will no longer work. Call your provider as quickly as you can when you get such a notice. Criminals can wreak havoc in minutes with a stolen phone number.

A stronger method

A better alternative is to use another authentication factor that does not depend on sending a token to you. This can take several forms, but they all involve a small application that runs on a device in your possession and produces tokens. When the application is set up, the authenticating service and the application exchange information that syncs the two. One method provides tokens that change with the date and time. If you can’t supply the unique time-based token from the app that corresponds to your account, access is denied. Another implementation relies on a private key held on the device. An elegant implementation places the token generator in a USB device similar to a thumb drive. Plug the “key” in, authenticate, and the USB device supplies the correct token. These methods do not rely on communication after the initial setup. Neither WiFi nor a cellular connection to the key device is necessary.
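The time-based token method can be sketched in a few lines of Python. This is a minimal illustration of the TOTP scheme (RFC 6238) that authenticator apps implement; the secret below is the standard RFC test value, not anything a real service would use.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password in the style of RFC 6238."""
    counter = int(at) // step                       # 30-second time steps since the epoch
    msg = struct.pack(">Q", counter)                # counter as an 8-byte big-endian integer
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The service and the app share the secret once, at setup. After that, both
# sides derive the same token from the secret and the clock, so nothing needs
# to be transmitted at login time.
secret = b"12345678901234567890"                    # RFC 6238 test secret
print(totp(secret, at=59, digits=8))                # prints "94287082" per RFC 6238
```

Because the token depends only on the shared secret and the clock, the device needs no network connection to produce it.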

I noted with approval in an article in the Washington Post that the federal government will soon require two-factor authentication for administrators of all government web sites. The method chosen by the feds is better than relying upon calling or messaging the phone. They are using Google Authenticator, which runs on an Android or Apple phone.

These methods are more secure, but not all multi-factor sites accept tokens from all authenticator apps, so you may not be able to use your choice on all accounts.

There’s a podcast on Lawfare explaining Google’s approach to advanced security that is informative.

Spectre, Meltdown, and Virtual Systems

In June of 2017, I wrote a blog post for InfoWorld, How to handle the risks of hypervisor hacking. In it, I described the theoretical points where virtual machines (VMs) and hypervisors could be hacked. My crystal ball must have been well polished. Spectre and Meltdown prey on one of the points I described there.

What I did not predict is where the vulnerability would come from. As a software engineer, I always think about software vulnerabilities, but I tend to assume that the hardware is seldom at fault. I took one class in computer hardware design thirty years ago. Since then, my working approach has been to look first for software flaws and only consider hardware when I am forced, kicking and screaming, to examine hardware failure. This is usually a good plan for a software engineer. As a rule, when hardware fails, the device bricks (is completely dead); it seldom continues to function. There is usually not much beyond rewriting drivers that a coder can do to fix a hardware issue. Even rewriting a driver is usually beyond me, because writing a correct driver takes more hardware expertise than I have.

In my previous blog post here, I wrote that Spectre and Meltdown probably will not affect individual users much. So far, that is still true, but the real impact of these vulnerabilities is being felt by service providers, businesses, and organizations that make extensive use of virtual systems. Although the performance degradation after temporary fixes have been applied is not as serious as previously estimated, some loads are seeing serious hits, and even single-digit degradation can be significant in scaled-up systems. Already, we’ve seen some botched fixes, which never help anyone.

Hardware flaws are more serious than software flaws for several reasons. A software flaw is usually limited to a single piece of software, often an application. A vulnerability limited to a single application is relatively easy to defend against: just disable or uninstall the application until it is fixed. That is inconvenient, but less of a problem than an operating system vulnerability that may force you to shut down many applications and halt work until the operating system supplier offers a fix. A flaw in a basic software library can be worse: it may affect many applications and operating systems. The bright side is that software patches can be written and applied quickly, even installed automatically without user intervention (sometimes too quickly, when the fix is rushed and inadequately tested before deployment); the interval from discovery of a vulnerability to patch deployment is usually weeks or months, not years.

Hardware chip-level flaws cut a wider and longer swathe. A hardware flaw typically affects every application, operating system, and embedded system running on the hardware. In some cases, new microcode can correct hardware flaws, but in the most serious cases, new chips must be installed, and sometimes new sets of chips and new circuit boards are required. If installing microcode will not fix the problem, at the very least, someone has to physically open a case and replace a component. That is not a trivial task with more than one or two boxes to fix, and it is a major project in a data center with hundreds or thousands of devices. Often, a fix requires replacing an entire unit, either because that is the only way to fix the problem, or because replacing the entire unit is easier and ultimately cheaper.

Both Intel and AMD have announced hardware fixes to the Spectre and Meltdown vulnerabilities. The replacement chips will probably roll out within the year. The fix may only entail a single chip replacement, but it is a solid prediction that many computers will be replaced. The Spectre and Meltdown vulnerabilities exist in processors deployed ten years ago. Many of the computers using these processors are obsolete, considering that a processor over eighteen months old is often no longer supported by the manufacturer. These machines are probably cheaper to replace than upgrade, even if an upgrade is available. More recent upgradable machines will frequently be replaced anyway because upgrading a machine near the end of its lifecycle is a poor investment. Some sites will put off costly replacements. In other words, the computing industry will struggle with the issues raised by Spectre and Meltdown for years to come.

There is yet another reason vulnerabilities in hardware are worse than software vulnerabilities. The software industry is still coping with the aftermath of a period when computer security was given inadequate attention. At the turn of the 21st century, most developers had no idea that losses due to insecure computing would soon be measured in billions of dollars per year. The industry has changed; software engineers no longer dismiss security as an optional afterthought. But a decade after the problems became unmistakable, we are still learning to build secure software. I discuss this at length in my book, Personal Cybersecurity.

Spectre and Meltdown suggest that the hardware side may not have taken security as seriously as the software side. Now that criminal and state-sponsored hackers are aware that hardware has vulnerabilities, they will begin to look hard to find new flaws in hardware for subverting systems. A whole new world of hacking possibilities awaits.

We know from the software experience that it takes time for engineers to develop and internalize methodologies for creating secure systems. We can hope that hardware engineers will take advantage of software security lessons, but secure processor design methodologies are unlikely to appear overnight, and a backlog of insecure hardware surprises may be waiting for us.

The next year or two promises to be interesting.

Cyber Defense Skill: URL Reading

Want to quickly sort out real emails from spam? Spot bad links on web pages? Identify sham web sites? I have a suggestion: learn to read URLs.

Learning to read URLs is like taking a class in street self-defense or carrying a can of mace. Actually, much better because reading URLs can’t be turned against you. You might end up in the hospital or worse if you resist a street thug with your self-defense skills, but you will never be injured spotting a bad URL.

Uniform Resource Locators (URLs), a type of Uniform Resource Identifier (URI), direct all the traffic on the World Wide Web. Almost every cyber-attack directs traffic to or from an illegitimate URL at some point in the assault. If you can distinguish a good address from a bad address and develop the habit of examining internet addresses, you will be orders of magnitude more difficult to hack.

Addresses are constructed according to simple rules. You can master the rules you need to know in order to distinguish legitimate addresses from scams in a few minutes. And be much safer.

If you want to dig deep into URLs, take a look at RFC 3986. There is much more to URLs than I cover here.

Here is a typical simple URL:

https://www.marvinwaschke.com

HTTP

The first part of the address, called the scheme, is “http:”, which tells you that it is a HyperText Transfer Protocol (HTTP) address. You need to know two things about the HTTP scheme. First, almost all data on the web travels to and from your desktop, laptop, tablet, or phone over HTTP. In fact, if an address does not begin with “http”, it’s not a web address. There are other schemes; the most important of these is “mailto:”, which designates an email address. More on this below.

Secure HTTP

There is an important variant of HTTP called HTTPS. The “S” stands for “secure.” Data shipped via HTTPS is encrypted, and the identity of the site is verified through a certificate issued by a security organization called a certificate authority. HTTPS used to be reserved for financial transactions, but now, with all the dangers of the network, HTTPS is encouraged for all traffic. When you see “https” in a web address, hackers have a hard time snooping on your data or faking a web site. HTTPS is especially important if you are on open public WiFi at a coffee shop or other public place.

Not too long ago, security experts used to say HTTPS guaranteed that a site was legitimate. That is no longer good advice. HTTPS is not a guarantee that a site is legit. Smart scamming hackers can set up fake sites with HTTPS security. You have to check the rest of the address for signs of bogosity. However, setting up a fake site with a legitimate address is still hard, so a good address with HTTPS is still a strong bet.

HTTP address “authority”

The part of the address following the “//” is the “authority.” Most of the time, the authority is a registered domain name. The authority section of a URL ends with a “/”. Notice that the slash leans forward, not backward; a backward slash is completely different. The “path” follows the forward slash, and a “query”, introduced by a question mark (“?”), may follow the path. The query usually contains search criteria that narrow down the data you want retrieved and is often hard to interpret without specific information about the domain. You can ignore it, although sometimes hackers can learn secrets about a web site from information inadvertently placed in the query.
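The parts described above can be picked apart programmatically. Here is a small sketch using Python’s standard urllib.parse; the URL is made up for illustration.

```python
from urllib.parse import urlsplit

# An example URL with a scheme, an authority, a path, and a query.
url = "https://www.example.com/search?q=two-factor"
parts = urlsplit(url)

print(parts.scheme)   # "https"           -- the scheme
print(parts.netloc)   # "www.example.com" -- the authority
print(parts.path)     # "/search"         -- the path after the forward slash
print(parts.query)    # "q=two-factor"    -- the query after the "?"
```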

Domain extensions

In the above address, “marvinwaschke.com” is a domain name that I have registered with the Internet Assigned Numbers Authority (IANA). “.com” is the extension. In the old days, only a few extensions were allowed: “.gov”, “.edu”, “.org”, “.net”, “.com”, and “.mil”. They are still the most common, although many others, such as “.tv”, “.partners”, “.rocks”, and country abbreviations, have been added.

You can use extensions as a clue. For instance, most established firms and organizations still use the old standbys. A web site with an “amex.rocks” domain is likely not the American Express you think it is. We all know that some countries harbor more hackers than others. If an address has an extension that is an abbreviation for a cyber rogue state, be careful.

Remember, these are clues, not rules. A street lined with wrecked cars and broken windows may be crime free, but more often than not, it is a dangerous neighborhood. The same applies to incongruous domain names. They could be safe, but there is a good chance they are not.

Authority subsections

The authority section is divided by periods (“.”s) and reads in reverse. The extension, the last part before the first forward slash, is the most important. “.com” in “marvinwaschke.com” indicates that the marvinwaschke.com domain is in the vast segment of the internet made up of commercial ventures. “marvinwaschke” determines which commercial venture the address refers to. “www” indicates that the address points to the “www” part of the “marvinwaschke” venture. I could set up my website to have a “public.marvinwaschke.com” section or a “public.security.marvinwaschke.com” section if I cared to. The “www” is historically so common that most browsers will strip it off or add it on as needed to make a connection.

“Microsoft.marvinwaschke.com” only indicates that my web site has a section devoted to Microsoft. “Microsoft.marvinwaschke.com” has nothing to do with Microsoft Corporation. Hackers make use of this to try to fool you that “Microsoft.pirates-r-us.ru” is a Microsoft site. It’s not! Hackers are creative. Make sure that the right end of the domain name makes sense.
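The “read from the right” rule can be written down as a small check: a host name belongs to a registered domain only if it equals the domain or ends with it at a dot boundary. This helper is a sketch of the idea, not a complete defense, and the domain names are examples.

```python
def belongs_to(hostname: str, registered_domain: str) -> bool:
    """True if hostname is the registered domain or a subdomain of it."""
    hostname = hostname.lower().rstrip(".")
    registered_domain = registered_domain.lower()
    # The right end of the name is what counts, not what appears on the left,
    # and the match must fall on a "." boundary.
    return hostname == registered_domain or hostname.endswith("." + registered_domain)

print(belongs_to("www.microsoft.com", "microsoft.com"))          # True
print(belongs_to("microsoft.pirates-r-us.ru", "microsoft.com"))  # False
```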

Email URIs

Email addresses are URIs that follow a different scheme but use the same domain name rules. Usually, email addresses drop the “mailto” scheme, but they can always be fully written out, like mailto:boss@example.com. If you see an address like captain@microsoft.pirates-r-us.ru, you can be fairly certain that the mail did not come from Bill Gates.
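The same right-to-left reading applies to the domain after the “@”. A quick sketch (the addresses are invented):

```python
def email_domain(address: str) -> str:
    """Return the domain portion of an email address."""
    return address.rsplit("@", 1)[1].lower()

# The registered domain here is pirates-r-us.ru; the "microsoft" on the left
# is just a subdomain name the domain's owner chose.
print(email_domain("captain@microsoft.pirates-r-us.ru"))  # "microsoft.pirates-r-us.ru"
```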

Near miss URIs

A favorite hacking trick is to register a domain that looks real, but is just a little off. For example, micrasoft.com instead of microsoft.com. Keep an eye out for those little tricks.
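Near-miss names can be caught mechanically with a similarity check against the domains you actually use. Here is a sketch using the standard library’s difflib; the 0.85 threshold and the example domains are illustrative choices, not established values.

```python
from difflib import SequenceMatcher

def looks_like(domain: str, trusted: str, threshold: float = 0.85) -> bool:
    """Flag a domain suspiciously similar to, but not the same as, a trusted one."""
    if domain == trusted:
        return False                      # an exact match is the real thing
    ratio = SequenceMatcher(None, domain, trusted).ratio()
    return ratio >= threshold             # close-but-not-equal is the red flag

print(looks_like("micrasoft.com", "microsoft.com"))  # True -- one letter off
print(looks_like("example.org", "microsoft.com"))    # False -- not similar
```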

When in doubt, Google it

When you see a link or address with a suspicious domain name, Google the domain name before you use the address. Most of the time, Google will pick up information on dangerous domains.

Look at every link with caution

The internet is all about grabbing your attention. Absurd promises abound that few people would take seriously after they took a moment to think. Losing weight is hard, wealth management is useless if you aren’t already accumulating wealth the hard way, and no miracle food will prevent cancer or make you a genius. Not all ads are scams, but don’t tempt fate by clicking on links that prey on impossible hopes.

Finally

Make a habit of looking at internet addresses. Often, a link on a webpage or in an email is text like “here”. Hackers hide bogus URLs under innocuous text. They also sometimes use a legitimate URL for the text and stick in a dubious URL for the real target, like this: https://marvinwaschke.com. If you place the cursor over a link or address, most browsers and email tools will display the working address in the lower left-hand corner of the window. Look at the address, remembering all the cautions in this post. Does something look wrong? If so, use care. Try the two links in this paragraph to see what I mean. The habit of looking at addresses will make you much harder to hack than unsavvy computer users.