Windows 11? Is Redmond Crazy?

Folks have gotten used to Windows 10. Now Microsoft is pulling out the rug with a new version of Windows. When I heard of Windows 11, my first thought was that the disbanded Vista product team had staged an armed coup in Bill Gates’ old office and regained control of Windows. I haven’t installed Windows 11, although grandson Christopher has. He doesn’t like it.

I think Microsoft has something cooking in Windows 11.

Microsoft releases

New releases of Windows are always fraught. Actually, new releases of anything from Microsoft get loads of pushback. Ribbon menu anxiety in Office, the endless handwringing over start menus moving and disappearing in Windows. Buggy releases. It goes on and on.

Having released a few products myself, I sympathize with Microsoft.

Developers versus users

A typical IT system administrator says “Change is evil. What’s not broke, don’t fix. If I can live with a product, it’s not broke.” Most computer users think the same way: “I’ve learned to work with your run-down, buggy product. Now, I’m busy working. Quit bothering me.”

Those positions are understandable, but designers and builders see products differently. They continuously scrutinize customers using a product and ask how it might work more effectively, what users might want to do that they can’t, and how users could become more productive by adding new tasks and ways of working to their repertoire.

Designers and builders also are attentive to advances in technology. In computing, we’ve seen yearly near-doubling of available computing resources, instruction execution capacity, storage volume, and network bandwidth. In a word, speed. 2021’s smartphones dwarf supercomputers from the era when Windows, and its predecessor, DOS, were invented.

No one ever likes a new release

At its birth, Windows was condemned as flashy eye candy that required then-expensive bit-mapped displays and sapped performance with intensive graphics processing. In other words, Windows was a productivity killer and an all-round horrible idea, especially to virtuoso users who had laboriously internalized all the command line tricks of text interfaces. Some developers, including me, for some tasks, still prefer a DOS-like command line to a graphic interface like Windows.

However, Windows, and other graphic interfaces such as X on Unix/Linux, were rapidly adopted as bit-mapped displays proliferated and processing power rose. Today, character-based command line interfaces are almost always simulated in a graphical interface when paleolithic relics like me use them. Pure character interfaces are still around, but mostly in the tiny LCD screens on printers and kitchen appliances.

Designers and builders envisioned the benefits from newly available hardware and computing capacity and pushed the rest of us forward.

Success comes from building for the future, not doubling down on the past. But until folks share in the vision, they think progress is a step backwards.

Is the Windows 11 start menu a fiasco? Could be. No development team gets everything right, but I’ll give Windows 11 a spin and try not to be prejudiced by my habits.

Weird Windows 11 requirements

Something more is going on with Windows 11. Microsoft is placing hardware requirements on Windows 11 that will prevent a large share of existing Windows 10 installations from upgrading. I always expect to be nudged toward upgraded hardware. Customers who buy new hardware expect to benefit from newer more powerful devices. Requirements to support legacy hardware are an obstacle to exploiting new hardware. Eventually, you have to turn your back on old hardware and move on, leaving some irate customers behind. No developer likes to do this, but eventually, they must or the competition eats them alive.

Microsoft forces Windows 11 installations to be more secure by requiring a higher level of Trusted Platform Module (TPM) support. A TPM is a microcontroller that supports several cryptographic security functions that help verify that users and computers are what they appear to be and have not been spoofed or tampered with. TPMs are usually implemented as a small physical chip, although they can be implemented virtually in software. Requiring high-level TPM support makes sense in a world of ever-increasing cybersecurity compromise.

But the Windows 11 requirements seem extreme. As I type this, I am using a ten-year-old laptop running Windows 10. For researching and writing, it’s more than adequate, but it does not meet Microsoft’s stated requirements for Windows 11. I’m disgruntled, and I’m not unique in this opinion. Our grandson Christopher has figured out a way to install Windows 11 on some legacy hardware, which is impressive, but it is way beyond most users, and Microsoft could easily cut off this route.

I have an idea where Redmond is going with this. It may be surprising.

Today, the biggest and most general technical step forward in computing is the near universal availability of high-capacity network communications channels. Universal high-bandwidth Internet access became a widely accepted national necessity when work went online through the pandemic. High-capacity 5G cellular wireless networks are beginning to roll out. (What passes for 5G now is far beneath the full 5G capacity we will see in the future.) Low earth orbit satellite networks promise to link isolated areas to the network. Ever faster Wi-Fi local area networks offer connectivity anywhere.

This is not fully real. Yet. But it’s close enough that designers and developers must assume it is already present, just like we had to assume bit-mapped displays were everywhere while they were still luxuries.

What does ubiquitous high bandwidth connection mean for the future? More streaming movies? Doubtless, but that’s not news: neighborhood Blockbuster Video stores are already closed.

Thinking it through

In a few years, every computer will have a reliable, high capacity connection to the network. All the time. Phones are already close. In a few years, the connection will be both faster and more reliable than today. That includes every desktop, laptop, tablet, phone, home appliance, vehicle, industrial machine, lamp post, traffic light, and sewer sluice gate. The network will also be populated with computing centers with capacities that will dwarf the already gargantuan capacities available today. Your front door latch may already have access to more data and computing capacity than all of IBM and NASA in 1980.

At the same time, ransomware and other cybercrimes are sucking the lifeblood from business and threatening national security.

Microsoft lost the war for the smartphone to Google and Apple. How will Windows fit in the hyperconnected world of 2025? Will it even exist? What does Satya Nadella think about when he wakes late in the night?

Windows business plan

The Windows operating system (OS) business plan is already a holdout from the past. IBM, practically the inventor of the operating system, de-emphasized building and selling OSs decades ago. Digital Equipment, DEC, a stellar OS builder, is gone, sunk into HP. Sun Microsystems, another OS innovator, is buried in the murky depths of Oracle. Apple’s operating system is built on FreeBSD, an open source Unix variant. Google’s Android is a Linux. Why have all these companies gotten out of, or never entered, the proprietary OS development business?

Corporate economics

The answer is simple corporate economics: there’s no money in it. Whoa! you say. Microsoft made tons of money off its flagship product, Windows. The key word is “made,” not “makes.” Building and selling operating systems was a money machine for Gates and company back in the day, but no longer. Twenty years ago, when Windows ruled, the only competing consumer OS was Apple’s, which was a niche product in education and some creative sectors. Microsoft pwned the personal desktop in homes and businesses. Every non-Apple computer was another kick to the Microsoft bottom line. No longer. Now, Microsoft’s Windows division has to struggle on many fronts.

Open source OSs (Android, Apple’s BSD, and the many flavors of Linux) are all fully competitive in ease of installation and use. They weren’t in 2000. Now, they are slick, polished systems with features comparable to Windows.

To stay on top, Windows has to out-perform, out-feature, and out-secure these formidable competitors. In addition, unlike Apple, Microsoft’s Windows business plan is to run on generic hardware. Developing on hardware you don’t control is difficult. The burden of coding to and testing on varying equipment is horrendous. Microsoft can make rules that the hardware is supposed to follow, but in the end, if Windows does not shine on Lenovo, HP, Dell, Acer, and Asus, the Windows business plunges into arctic winter.

With all that, Microsoft is at another tremendous disadvantage. It relies on in-house developers cutting proprietary code to advance Windows. Microsoft’s competitors rely on foundations that coordinate independent contributors to open source code bases. Many of these contributors are on the payrolls of big outfits like IBM, Google, Apple, Oracle, and Facebook.

Rough times

Effectively, these dogs are ganging up on Microsoft. Through the foundations (Linux, Apache, Eclipse, and the rest), these corporations cooperate to build basic utilities, like the Linux OS, instead of building them for themselves. This saves a ton of development costs. And, since the code is controlled by the foundation in which they own a stake, they don’t have to worry about a competitor pulling the rug out from under them.

Certainly, many altruistic independent developers contribute to open source code, but not a line they write gets into key utilities without the scrutiny of the big dogs. From some angles, the open source foundations are the biggest monopolies in the tech industry. And Windows is out in the cold.

What will Microsoft do? I have no knowledge, but I have a good guess that Microsoft is contemplating a tectonic shift.

Windows will be transformed into a service.

Nope, you say. They’ve tried that. I disagree. I read an article the other day declaring Windows 11 to be the end of Windows as a Service, something that Windows 10 was supposed to be but failed to become, because Windows 11 is projected for yearly updates instead of the twice-yearly or more frequent updates of Windows 10. Windows 11 has annoyed a lot of early adopters and requires hardware upgrades that a lot of people think are unnecessary. What’s going on?

Windows 10 as a service

The whole idea of Windows 10 as a service was lame. Windows 10 was (and is) an operating system installed on a customer’s box, running on the customer’s processor. The customer retains control of the hardware infrastructure. Microsoft took some additional responsibility for software maintenance with monthly patches, cumulative patches, and regular drops of new features, but that is nowhere near what I call a service.

When I installed Windows 10 on my ancient T410 ThinkPad, I remained responsible for installing applications and adding or removing memory and storage. If I wanted, I could rename the Program Files directory to Slagheap and reconfigure the system to make it work. I moved the Windows system directory to an SSD for a faster boot. And I hit the power switch whenever I feel like it.

Those features may be good or bad.

As a computer and software engineer by choice, I enjoy fiddling with and controlling my own device. Some of the time. My partner Rebecca can tell you what I am like when a machine goes south while I’m on a project that I am hurrying to complete with no time for troubleshooting and fixing. Or my mood when I tried to install a new app six months after I had forgotten the late and sporty night when I renamed the Program Files directory to Slagheap.

At times like those, I wish I had a remote desktop setup, like we had in the antediluvian age when users had dumb terminals on their desks and logged into a multi-user computer like a DEC VAX. A dumb terminal was little more than a remote keyboard with a screen that showed keystrokes as they were entered interlaced with a text stream from the central computer. The old systems had many limitations, but a clear virtue: a user at a terminal was only responsible for what they entered. The sysadmin took care of everything else. Performance, security, backups, and configuration, in theory at least, were system problems, not user concerns.

Twenty-first century

Fast forward to the twenty-first century. The modern equivalent of the old multi-user computer is a user with a virtual computer desktop service running in a data center in the cloud, a common setup for remote workers that works remarkably well. For a user, it looks and feels like a personal desktop, except it exists in a data center, not on a private local device. All data and configuration (the way a computer is set up) are stored in the cloud. An employee can access their remote desktop from practically any computing device attached to the network, if they can prove their identity. After they log on, they have access to all their files, documents, processes, and other resources in the state they left them, or in the case of an ongoing process, in the state their process has attained.

What’s a desktop service?

From the employee’s point of view, they can switch devices with abandon. Start working at your kitchen table with a laptop, log out in the midst of composing a document without bothering to save. Not saving is a little risky, but virtual desktops run in data centers where events that might lose a document are much rarer than tripping on a cord, spilling a can of Coke, or the puppy doing the unmentionable at home. In data centers, whole teams of big heads scramble to find ways to shave off a minute of downtime a month.

Grab a tablet and head to the barbershop. Continue working on that same document in the state you left it instead of thumbing through old Playboys or Cosmos. Pick up again in the kitchen at home with fancy hair.

Security

Cyber security officers have nightmares about employees storing sensitive information on personal devices that fall into the hands of a competitor or hacker. Employees are easily prohibited from saving anything from their virtual desktop to the local machine where they are working. With reliable and fast network connections everywhere, employees have no reason to save anything privately.

Nor do security officers need to worry about patching vulnerabilities on employee gear. As long as the employee’s credentials are not stored on the employee’s device, which is relatively easy to prevent, there is nothing for a hacker to steal.

The downside

What’s the downside? The network. You have to be connected to work, and you don’t want to see swirlies in the middle of something important while data is buffered and rerouted somewhere north of nowhere.

However. All the tea leaves say those issues are on the way to becoming as isolated as the character interface on your electric teapot.

The industry is responding to the notion of Windows as a desktop service. See Windows 365 and a more optimistic take on Win365.

Now think about this for a moment: why not a personal Windows virtual desktop? Would that not solve a ton of problems for Microsoft? With complete control of the Windows operating environment, their testing is greatly simplified. A virtual desktop local client approaches the simplicity of a dumb terminal and could run on embarrassingly modest hardware. Security soars. A process running in a secured data center is not easy to hack. The big hacks of recent months have all been on lackadaisically secured corporate systems, not data centers.

It also solves a problem for me. Do I have to replace my ancient, but beloved, T410? No, provided Microsoft prices personal Windows 365 reasonably, I can switch to Windows 365 and continue on my good old favorite device.

Marv’s note: I made a few tweaks to the post based on Steve Stroh’s comment.

Supply Chain Management: Averting a Covid-19 Catastrophe

Yossi Sheffi is a supply chain management expert who moves freely between business and academics. He founded several companies, and he sits on the boards of large corporations. He teaches engineering at MIT and has authored a half-dozen books that are read by engineers, economists, and business people. When I heard about his latest book, The New (Ab)Normal, in which he tackles the covid-19 disruption of the global supply chain, I got a copy as soon as I could, stayed up late reading, and got up early to finish it.

[Photo: The gosling supply chain management system]

The New (Ab)Normal

New (Ab)Normal was worth it. Sheffi’s insider views are compelling. He has talked with the executives and engineers who struggled to put food and toilet paper on supermarket shelves, produce and distribute medical protective gear, and prevent manufacturing plants from foundering from supply disruption.

Supply chains and the media

Sheffi has harsh words for some media. For example, he says empty supermarket shelves were misunderstood. Food and supplies were never lacking, but they were often in the wrong place. Until the lockdowns in March, a big share of U.S. meals was dispensed in restaurants, schools, and company cafeterias. These businesses purchase the same food as families, but they get it through a different supply network and in different packaging.

Cafeterias buy tons of shelled fresh eggs in gallon containers, but consumers buy cartons of eggs in supermarkets for cooking at home. When the eateries shut down or curtailed operations and people began eating at home, plenty of eggs were available, but someone had to redirect them to consumers in a form that was practical for a home kitchen. Sheffi says food shortages appeared in dramatic media video footage and sound bites, but not in supply chains.

Bursty services

Changing buying patterns worsened the appearance of shortages. Supermarket supply chains are adjusted to dispense supplies at a certain rate and level of burstiness. These are terms I know from network and IT service management. A bursty service has bursts of increased activity followed by relatively quiet periods. At a bursty IT trouble ticket desk, thirty percent of a week’s tickets might be entered in one hour on Monday morning when employees return to work ready to tackle problems that they had put off solving during the previous week. A less bursty business culture might generate the same number of tickets per week, but with a more uniform rate of tickets per hour.
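
If you want to put a rough number on burstiness, a peak-to-average ratio is one simple measure. Here is a small Python sketch; the hourly ticket counts are invented for illustration, not data from any real desk:

```python
def peak_to_average(hourly_counts):
    """Ratio of the busiest hour to the average hour; 1.0 means perfectly steady."""
    avg = sum(hourly_counts) / len(hourly_counts)
    return max(hourly_counts) / avg

# Two hypothetical desks, each receiving 100 tickets over a 10-hour window.
bursty_desk = [30, 20, 10, 10, 5, 5, 5, 5, 5, 5]   # 30% of tickets land in the first hour
steady_desk = [10] * 10                             # same volume, spread evenly

print(peak_to_average(bursty_desk))  # 3.0 -> must handle triple the average rate
print(peak_to_average(steady_desk))  # 1.0 -> average capacity is enough
```

A ratio of 3.0 means the busiest hour runs at triple the average rate, even though the weekly totals are identical.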

Bursty desks must be managed differently than steady desks. The manager of a bursty service desk must devise a way to deploy extra equipment and hire more staff to deal with peak activity on those hectic Monday mornings. Experienced managers also know that an unpredicted burst in tickets on a desk, say in the aftermath of a hurricane, will cause havoc and shortened tempers as irate customers wait for temporarily scarce resources. The best of them have contingency plans to deal with unanticipated bursts.

Cloud computing to the rescue

The rise of cloud computing architectures in the last decade has yielded increased flexibility for responding to bursts in digital activity. Pre-cloud, managers who had to provide service through activity bursts had to deploy purchased or leased servers with the capacity to handle peak periods of activity. Adding a physical server is a substantial financial investment that requires planning, sometimes changes to the physical plant, often added training, and occasionally new hires.

Worse, the new capacity may remain idle during long non-peak periods, which is hard to explain to cost-conscious business leaders. Some businesses are able to harvest off-peak capacity for other purposes, but many are not. Cloud computing offers on-demand computing with little upfront investment, greatly reducing the need to pay for capacity that sits idle outside of peak periods.
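
A back-of-the-envelope sketch shows why. The loads and prices below are invented, and real cloud pricing is far more complicated, but the shape of the comparison holds:

```python
# Fixed peak-sized capacity versus on-demand capacity for a bursty workload.
# All prices and loads are made-up numbers for illustration.

hourly_load = [3, 3, 3, 3, 20, 3, 3, 3] * 3           # servers needed each hour; brief spikes
fixed_capacity = max(hourly_load)                      # must buy enough for the worst hour
fixed_cost = fixed_capacity * len(hourly_load) * 1.0   # pay for peak capacity every hour
on_demand_cost = sum(hourly_load) * 1.2                # pay per server-hour, at a premium rate

print(f"fixed: {fixed_cost:.0f}, on-demand: {on_demand_cost:.0f}")  # fixed: 480, on-demand: 148
```

Fixed capacity pays for the worst hour around the clock; on-demand pays a premium only while the burst is actually happening.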

The food supply

Covid-19 caused an unanticipated increase in the burstiness of supermarket sales. Under the threat of the virus, consumers began to shop once a week or less, buying larger quantities. Folks accustomed to picking up a few vegetables and a fresh protein on their way home from work began arriving at the store early in the morning to buy twenty-five-pound sacks of flour and dry beans, cases of canned vegetables, and bulk produce.

On the supply end, with the farmers and packers, the quantities sold per month stayed fairly constant because the number of mouths being fed did not change. In the stores, though, shelves were bare by afternoon, waiting for shipments arriving in the night, because consumers were buying in bursts instead of following their “a few items a day” pattern. This made for exciting media coverage of customers squabbling over the products remaining on the shelves. The media seldom pointed out that the shelves were full each morning after the night’s shipments had arrived and been stocked.

Toilet paper

The infamous toilet paper shortage was also either illusory or much more nuanced than media portrayals. Like restaurants and cafeterias, public restrooms took a big hit with the lockdowns. Like food, toilet paper consumption is inherently constant, but the burstiness of toilet paper purchases, and where the product is purchased, varies.

Commercial toilet paper consumption plummeted as shoppers began to purchase consumer toilet paper in the same bursts that they were purchasing food supplies. There may have been some hoarding behavior, but many shoppers simply wanted to shrink their dangerous trips to the market by buying in bulk. Consumer toilet paper is not like the commercial toilet paper used in public restrooms, which is coarser and often dispensed in larger rolls from specialized holders. This presented supply problems similar to food supply issues.

Supply disruption

Supply chains had to respond quickly. Unlike digital services, responding to increased burstiness in supermarket sales required changes in physical inventory patterns. Increasing the supply of eggs by the dozen at supermarkets and decreasing eggs by the gallon on kitchen loading docks could not be addressed by dialing up a new batch of virtual cloud computers. New buying patterns had to be analyzed, revised orders had to be placed with packers, and trucks had to roll on highways.

Advances in supply chain management

Fortunately, supply chain reporting and analysis have jumped ahead in the last decade. Consumers see some of these advances on online sales sites like Amazon when they click on “Track package.” Unlike not too long ago, when all they were offered was Amazon’s best estimated delivery date, they now see the progress of their shipment from warehouse through shipping transfer points to the final delivery. Guesswork is eliminated: arrival and departure are recorded as the package passes barcode scanners.

The movement data is centralized in cloud data centers and dispensed to the consumer on demand. Many people have noted that Amazon shipments are not as reliable as they were pre-covid. However, the impression of unreliability would be much stronger without those “Track package” buttons.

Supply chain managers have access to the same kind of data on their shipments. In untroubled times, a shipping clerk’s educated estimate of the arrival time of a shipment of fresh eggs may have been adequate, but not in post-covid 2020, with its shifting demands and unpredictable delays. Those guesses can’t cope with an egg packing plant shut down for a week when the virus flares up or a shipment delayed by a quarantined truck driver.
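
Under the hood, that kind of visibility is little more than an append-only log of scan events collected from many facilities. Here is a hypothetical sketch in Python; the schema and names are mine, not any carrier’s or retailer’s:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ScanEvent:
    """One barcode scan as a shipment passes a facility (hypothetical schema)."""
    shipment_id: str
    facility: str
    scanned_at: datetime

# Events stream in from many facilities and are centralized in one store.
events = [
    ScanEvent("egg-truck-42", "packing plant", datetime(2020, 11, 2, 6, 0)),
    ScanEvent("egg-truck-42", "regional warehouse", datetime(2020, 11, 2, 18, 30)),
    ScanEvent("egg-truck-42", "supermarket dock", datetime(2020, 11, 3, 4, 15)),
]

def last_seen(shipment_id, events):
    """Latest scan wins: no guesswork about where the shipment is right now."""
    scans = [e for e in events if e.shipment_id == shipment_id]
    return max(scans, key=lambda e: e.scanned_at)

print(last_seen("egg-truck-42", events).facility)  # supermarket dock
```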

Good news

Fortunately, with the data available today, these issues are visible in supply chain tracking systems. Orders can be immediately redirected to a different packing plant that has returned from a shutdown, or a fresh relief driver can be dispatched instead of leaving a shipment to wait in a truck stop parking lot. Issues can be resolved today that would not even have been visible as issues a decade ago. Consequently, supply chains have been strikingly resilient to the covid-19 disruption.

Supply chains were much different in my pioneer grandparents’ days. They grew their own meat, poultry, and vegetables, and lived without toilet paper for most of their lives. Although supply was less complicated, the effects of supply disruption, like a punishing thunderstorm destroying a wheat crop, were as significant as today’s disruptions.

In November of 2020, with infection counts steeply rising, predictions of new supply disruptions occasionally surface. The response of supply chains so far this year leaves me optimistic that we have seen the worst.

Blockchain Made Simple

Blockchain! Blockchain! Blockchain! That ought to get some attention. If anyone hasn’t noticed, blockchain is somewhere near the hysteria phase of the hype cycle. Everyone associates blockchain with digital currency. The apparent bubble around Bitcoin and other digital or crypto currencies adds to the intensity of the discussion. But very few people have a firm grasp of what blockchain is and its potential. In this blog, I will differentiate blockchain from digital currency and discuss blockchain’s potential for profoundly disrupting commerce and society.

Not a single technology

First, blockchain is not a single technology. Like email or a packet network, blockchain is a method of communication with certain qualities that many different technologies can be used to implement. The most popular implementations of blockchain (Bitcoin and its ilk) use specific technology that relies on cryptographic signatures for verifying a sequence of transactions. However, tying blockchain to a specific technology is unnecessarily limiting. The term “secure distributed ledger” is more descriptive, but it is also pedantic and long-winded, so most people will continue to call it blockchain.

Essential blockchain

In essence, blockchain is a definitive statement of the result of a series of transactions that does not rely on a central authority for its validity. Putting it another way, contributions to a blockchain may come from many different sources. Although the contributions are authenticated, they are not controlled by a central authority.
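
To make the idea concrete, here is a bare-bones Python sketch of the hash-chaining technique that blockchains build on. It is an illustration of the concept only, not the format of Bitcoin or any production system:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents, including the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    """Each new block commits to everything before it via prev_hash."""
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    block = {"index": len(chain), "prev_hash": prev_hash, "transactions": transactions}
    chain.append(block)
    return chain

chain = []
append_block(chain, [{"parcel": "lot 7", "from": "Alice", "to": "Bob"}])
append_block(chain, [{"parcel": "lot 7", "from": "Bob", "to": "Carol"}])
```

Because each block commits to the hash of the block before it, quietly altering an old record is not possible: every later link stops matching.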

Think about real estate and deeds. Possession of real estate relies on courts and official records that validate ownership of physical lumps of earth. A central authority, in the form of a county records office, has the official record of transactions that prove the ownership of parcels of physical land. Courts, lawyers, surveyors, title companies, and individuals all contribute to the record. If you want to research the ownership of a parcel in another county, state, or country, you either travel or hope that the local authority has put the records online and keeps them current. If you want to record a survey, deed, or sale, you must present the paperwork to the controlling local authority.

If you want to put together a real estate transaction that depends on interlocking simultaneous ownership by a complex structure of partners in parcels spread over several jurisdictions, the bureaucratic maneuvering and paperwork to complete the transaction is daunting.

Blockchain could be used to build a fully validated and universally accessible land records office that does not rely on a local authority. Instead of interacting with local jurisdictions, the blockchain would validate land information and contracts, simplifying and reducing the cost of land ownership deals.

This type of problem repeats itself in many different realms. Ownership of intellectual property, particularly digital intellectual property that travels from person to person and location to location at the speed of light, presents similar problems.

Blockchains and distributed transactional databases

Although blockchain is implemented quite differently from a traditional distributed transactional database, it performs many of the same functions. Distributed transactional databases are nothing new. In preparing to write this blog, I pulled a thirty-year-old reference book from my shelf for a quick review. A distributed database is a database in which data is stored on several computers. The data may be distinct (disjoint) or it may overlap, but the data on each computer is not a simple replica of the data on the other computers in the distribution.

A transactional database is one in which each query or change to the database is a clearly defined transaction whose outcome is totally predictable based on itself and the sequence of prior transactions. For example, the amount of cash in your bank account depends on the sequence of prior deposits and withdrawals. By keeping close control of the order and amounts of transactions, your bank’s account database always accurately shows how much money you have in your account. If some transactions were not recorded, or were recorded out of order, at some point your account balance would be incorrect. You rely on the integrity of your bank for assurance that all transactions are correctly recorded.
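
A few lines of Python show how much the order and completeness of the record matter; the amounts are invented:

```python
def replay(transactions):
    """Apply deposits (+) and withdrawals (-) strictly in recorded order."""
    balance = 0
    for amount in transactions:
        if balance + amount < 0:
            raise ValueError("overdraft: the ordering or the history must be wrong")
        balance += amount
    return balance

history = [100, -40, 60, -80]     # deposits and withdrawals, in recorded order
print(replay(history))            # 40

# Lose a transaction and the outcome no longer matches reality:
print(replay([100, 60, -80]))     # 80, because the -40 withdrawal was dropped
```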

The value added when a database is distributed is that individual databases can be smaller and maintained close to the physical origin of the data or by experts familiar with the data in the contributing database. Consequently, distributed data can be of higher quality and quicker to maintain than the enormous data lakes in central databases.

Transactional and distributed databases are both common and easy to implement, or at least they have been implemented so many times that the techniques are well known. But when a database is supposed to be distributed and also keep transactional discipline, troubles pop up like a flush of weeds after a spring rain.

Distributed transactional databases in practice

The nasty secret is that although distributed transactional databases using something called “two-phase commit” are sound in theory and desirable to the point of sometimes being called a holy grail, in practice they don’t work that well. If networks were reliable, distributed transactional database systems would work well and likely would have become common twenty years ago, but computer networks are not reliable. They are designed to work very well most of the time, but the inevitable trade-off is that two computers connected to the network cannot be guaranteed to communicate successfully in any designated time interval. A mathematical theorem proves the inevitability of the trade-off.
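
For a feel of why, here is a toy two-phase commit coordinator. The names are mine and the logic is drastically simplified, but it shows how one unreachable participant holds up everyone:

```python
class Participant:
    """A database node that can vote on and then apply a transaction."""
    def __init__(self, name, reachable=True):
        self.name, self.reachable = name, reachable

    def prepare(self, txn):
        # Phase 1: vote yes only if the node can be reached and can apply the change.
        return self.reachable

    def commit(self, txn):
        print(f"{self.name} committed {txn}")

def two_phase_commit(participants, txn):
    # Phase 1: every participant must vote yes before anyone commits.
    if not all(p.prepare(txn) for p in participants):
        return "aborted"          # one silent or failed node blocks everybody
    # Phase 2: only now is it safe to tell everyone to commit.
    for p in participants:
        p.commit(txn)
    return "committed"

nodes = [Participant("warehouse"), Participant("accounting", reachable=False)]
print(two_phase_commit(nodes, "order-123"))   # aborted
```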

Look at a scaled-up system, for example Amazon’s ordering system. If records were distributed among the distribution centers from which goods are shipped, the sources of the goods, and Amazon’s accounting system, a single order might require thousands of nearly simultaneous connections to start the goods on their way to the customer. These connections might reach over the entire globe. The probability that any one of them will be blocked long enough to stall the transaction is intolerably high.

Therefore, practical systems are designed to be as resilient as possible to network interruptions. In general, this means making sure that all the critical connections are within the same datacenter, which is designed with ample redundancy to guarantee that business will not halt when a daydreaming employee trips on a stray cable.

Reliable, performant transactional systems avoid widely distributed transactions. Accounts are kept in central databases, and most transactions are between a local machine running a browser and a central system running in a cloud datacenter. The central database may be continuously backed up to one or more remote datacenters, and the system may be prepared to flip over to an alternative backup system whenever needed, but there is still a definitive central database at all times. When you buy a book on Amazon, you don’t have to wait for computers in different corners of the country and the world to chime in with assent before the transaction will complete.

That is all well and good for Amazon, where keeping sales transactions in an enterprise resource planning database (an accounting system, to non-enterprise system wonks) makes sense, but not for problems that involve the coordination of many sources of truth, like the jurisdictions in our land records example above. In that case, each records office is the definitive source of truth for its share of the transaction, but there is no central authority that rules them all, and we spend days and weeks wading through a snake pit of physical contacts and possibly erroneous paper when we cross jurisdictional boundaries.

Practical blockchain

This is where blockchain comes in. Imagine a land record system where rights, records, and surveys for a parcel of real estate are all recorded in a blockchain, and anyone can record information or transfer rights by presenting a credential that proves they hold a given right, then transfer that right to a properly credentialed recipient, all within a single computer interface. A chain of transactions stretching back to the beginning of the blockchain verifies the participants’ data and rights. The identities of the rights holders could be required to be public or not, depending on the needs of the system, but the participants can always present verifiable credentials to prove they are agents holding the rights they assert or are an authority for the data they contribute. And all of this authenticated data is spread over many different computer systems.

Digital currency

This is basically the way crypto currencies work. When a Bitcoin is acquired, its value is backed by a verifiable record tracing the transfers of the coin back to its moment of origin. Maintaining the record is the work of many computers distributed all over the world. Bitcoin solves the problem of the cost of maintaining the record by offering freelance maintainers, called miners, bitcoins in return for performing the maintenance. The integrity of the record is maintained through elaborate cryptographic protocols and controls. Bitcoin and similar currencies are designed so that purchasing or selling Bitcoin requires credentials, but not the identities of the participants. That anonymity makes Bitcoin attractive for secret or illegal transactions.
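
To give a feel for what miners actually do, here is a toy proof-of-work loop in Python. The difficulty is set laughably low so it finishes instantly; real networks demand astronomically more hashing than this sketch:

```python
import hashlib

def mine(block_data, difficulty=4):
    """Search for a nonce whose hash starts with `difficulty` zero hex digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = mine("prev_hash|transactions")
print(nonce, digest)   # cheap at difficulty 4; real mining burns vastly more cycles
```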

A downside of the Bitcoin-type blockchain is the tremendous amount of expensive computing required to maintain the blockchain. In addition, some critics have charged that Bitcoin-type currencies are enormously complex schemes that promise profits based on future investments rather than true increases in value, which amounts to a Ponzi scheme. Whether Bitcoin is a Ponzi scheme is open to argument.

The red herring

Unfortunately, the crypto-currency craze is a red herring that distracts attention from blockchain’s profound disruptive potential. Stock, bond, and commodity exchanges, land record offices, supply chain accounting and ordering, tracking sales of digital assets like eBooks, and many other types of exchange rely on central clearing houses that charge for each transaction. These clearing houses facilitate exchange and collect fees without contributing substantial value. Blockchain has the potential to simplify and speed transactions and eliminate these middlemen, like Uber eliminates taxi fleets and online travel apps eliminate travel agents. Amazon eliminated publishers and literary agents for eBooks and blockchain could eliminate Amazon from eBook sales.

How can this work without a central authority holding the records of the transactions? The underlying concept is that as long as the blockchain is maintained according to rules known to a group of maintainers, no one maintainer needs central control, because the other maintainers can detect when somebody fudges something. A proper blockchain algorithm is computationally infeasible to maintain in ways contrary to the rules. There is more than one way to do it. The Bitcoin model is not the only way to implement a blockchain. Paid miners and anonymity are not inherent in blockchains.
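
Detecting the fudging is the mechanical part. With hash-chained records like the earlier sketch, any maintainer can recompute the links and spot a block that was altered after the fact. A minimal check, not a full consensus protocol:

```python
import hashlib, json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def verify_chain(chain):
    """Any maintainer can recompute the links; a fudged block breaks them."""
    return all(
        current["prev_hash"] == block_hash(prev)
        for prev, current in zip(chain, chain[1:])
    )
```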

For example, an author could register an electronic document with an electronic book blockchain registry. Readers could enter into a contract with the author for the right to access the book. These contracts could be registered in the blockchain. Readers could relinquish their contracts with the author, or transfer their contract to another reader and thereby lose their own access to the book. The rules depend on the contracts represented in the blockchain, not the technology. Additional technology such as cryptographic keys might be used to enforce contracted book access. Unlike current DRM schemes, no authority like a publisher or an Amazon is involved, only the author, the reader, and the transaction record in the blockchain.
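
As a sketch of what those contract entries might look like as ledger records (the field names and keys are purely illustrative, not any real system’s format):

```python
# Hypothetical ledger entries for the eBook example: each entry records a
# grant, transfer, or release of access rights, with no publisher or store
# in the middle.
ledger = [
    {"type": "register", "work": "my-novel", "author": "author-key"},
    {"type": "grant", "work": "my-novel", "to": "reader-1-key"},
    {"type": "transfer", "work": "my-novel", "from": "reader-1-key", "to": "reader-2-key"},
]

def current_readers(ledger, work):
    """Replay the entries in order to see who currently holds access."""
    readers = set()
    for entry in (e for e in ledger if e["work"] == work):
        if entry["type"] == "grant":
            readers.add(entry["to"])
        elif entry["type"] == "transfer":
            readers.discard(entry["from"])
            readers.add(entry["to"])
        elif entry["type"] == "release":
            readers.discard(entry["from"])
    return readers

print(current_readers(ledger, "my-novel"))  # {'reader-2-key'}
```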

I’ll get more into how it’s done in another blog.

Spectre, Meltdown, and Virtual Systems

In June of 2017 I wrote a blog for InfoWorld on How to handle the risks of hypervisor hacking. In it, I described the theoretical points where Virtual Machines (VMs) and hypervisors could be hacked. My crystal ball must have been well polished. Spectre and Meltdown prey on one of the points I described there.

What I did not predict is where the vulnerability would come from. As a software engineer, I always think about software vulnerabilities, but I tend to assume that the hardware is seldom at fault. I took one class in computer hardware design thirty years ago. Since then, my working approach has been to look first for software flaws and only consider hardware when I am forced, kicking and screaming, to examine for hardware failure. This is usually a good plan for a software engineer. As a rule, when hardware fails, the device bricks (is completely dead); seldom does it continue to function. There is usually not much beyond rewriting drivers that a coder can do to fix a hardware issue. Even rewriting a driver is usually beyond me because it takes more hardware expertise than I have to write a correct driver.

In my previous blog here, I wrote that Spectre and Meltdown probably will not affect individual users much. So far, that is still true, but the real impact of these vulnerabilities is being felt by service providers, businesses, and organizations that make extensive use of virtual systems. Although the performance degradation after temporary fixes have been applied is not as serious as previously estimated, some loads are seeing serious hits, and even single-digit degradation can be significant in scaled-up systems. Already, we’ve seen some botched fixes, which never help anyone.
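
On the operations side, if you run Linux hosts, recent kernels report per-vulnerability mitigation status under /sys/devices/system/cpu/vulnerabilities, which makes it easy to see which fixes are actually in force. A quick Python look (older kernels may not have the directory at all):

```python
# Quick check of Spectre/Meltdown mitigation status on a Linux host.
# Recent kernels expose one file per known CPU vulnerability here.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
if vuln_dir.is_dir():
    for f in sorted(vuln_dir.iterdir()):
        print(f"{f.name}: {f.read_text().strip()}")
else:
    print("No vulnerability reporting on this kernel")
```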

Hardware flaws are more serious than software flaws for several reasons. A software flaw is usually limited to a single piece of software, often an application. A vulnerability limited to a single application is relatively easy to defend against. Just disable or uninstall the application until it is fixed. Inconvenient, but less of a problem than an operating system vulnerability that may force you to shut down many applications and halt work until the operating system supplier offers a fix to the OS. A flaw in a basic software library can be worse: it may affect many applications and operating systems. The bright side is that software patches can be written and applied quickly and even automatically installed without computer user intervention (sometimes too quickly, when the fix is rushed and inadequately tested before deployment), but the interval from discovery of a vulnerability to patch deployment is usually weeks or months, not years.

Hardware chip-level flaws cut a wider and longer swathe. A hardware flaw typically affects every application, operating system, and embedded system running on the hardware. In some cases, new microcode can correct hardware flaws, but in the most serious cases, new chips must be installed, and sometimes new sets of chips and new circuit boards are required. If installing microcode will not fix the problem, at the very least someone has to physically open a case and replace a component. That is not a trivial task with more than one or two boxes to fix, and it is a major project in a data center with hundreds or thousands of devices. Often, a fix requires replacing an entire unit, either because that is the only way to fix the problem or because replacing the entire unit is easier and ultimately cheaper.

Both Intel and AMD have announced hardware fixes to the Spectre and Meltdown vulnerabilities. The replacement chips will probably roll out within the year. The fix may only entail a single chip replacement, but it is a solid prediction that many computers will be replaced. The Spectre and Meltdown vulnerabilities exist in processors deployed ten years ago. Many of the computers using these processors are obsolete, considering that a processor over eighteen months old is often no longer supported by the manufacturer. These machines are probably cheaper to replace than upgrade, even if an upgrade is available. More recent upgradable machines will frequently be replaced anyway because upgrading a machine near the end of its lifecycle is a poor investment. Some sites will put off costly replacements. In other words, the computing industry will struggle with the issues raised by Spectre and Meltdown for years to come.

There is yet another reason vulnerabilities in hardware are worse than software vulnerabilities. The software industry is still coping with the aftermath of a period when computer security was given inadequate attention. At the turn of the 21st century, most developers had no idea that losses due to insecure computing would soon be measured in billions of dollars per year. The industry has changed— software engineers no longer dismiss security as an optional afterthought, but a decade after the problems became unmistakable, we are still learning to build secure software. I discuss this at length in my book, Personal Cybersecurity.

Spectre and Meltdown suggest that the hardware side may not have taken security as seriously as the software side. Now that criminal and state-sponsored hackers are aware that hardware has vulnerabilities, they will begin to look hard to find new flaws in hardware for subverting systems. A whole new world of hacking possibilities awaits.

We know from the software experience that it takes time for engineers to develop and internalize methodologies for creating secure systems. We can hope that hardware engineers will take advantage of software security lessons, but secure processor design methodologies are unlikely to appear overnight, and a backlog of insecure hardware surprises may be waiting for us.

The next year or two promises to be interesting.