Safer Home Networks

As each day passes, home network security becomes more important for many of us. Working from home during the pandemic lockdowns boosted home networks from conveniences to necessities. Although returning to the office is now considered safe, many of us have discovered that we prefer to work from home some, if not all, of the time. Savvy employers have begun to insist on security standards when home networks are used for work, and those of us who are self-employed at home must tend to our own safety.

Much can be done to increase safety. A key network security principle is network segmentation.

Segmentation is a cybersecurity concept derived from the same principle that governs ships built with watertight compartments. If a single compartment springs a leak, the ship still floats. If the security of one network segment is breached, the rest of the network is still safe.

Businesses and other organizations have long practiced segmented physical security. All employees may have a key or code to open the employee entrance, but smart organizations have separate keys for each department. Widely distributing keys that open all the locks in the business is dangerous. A criminal or rogue employee with the key to everything can steal everything.

In a typical physically segmented business, one section of the office is accounting. Only people from the accounting department have keys to the accounting offices. Only shipping employees have access to the shipping room and warehouse, and only some shipping staff have keys to the warehouse itself. And so on.

Risk averse businesses segment their computer networks in the same way. Typically, an air-conditioning technician will not be able to access accounting files, nor will an accountant have access to heating and air-conditioning controls. Unsegmented networks have been the scenes of devastating attacks, such as the Target heist of a few years ago in which an air-conditioning subcontractor’s account was used to steal customer credit card information. A better-segmented network would have prevented that catastrophe.

Do home networks need to be segmented? Not always, but as our lives become more and more wired, the benefits of segmentation have increased.

Folks may remember that in the dark days before we were touched by the wireless light, each computer in the house had a modem attached to a phone line. While the computer modem was connected, anyone who picked up a phone was treated to an earful of painful screeches. Compute-intensive households had separate phone lines for each computer. DSL (Digital Subscriber Line), which is still around but no longer as common, got rid of the necessity for separate phone lines and introduced routers to home computing. The day you install a home router, you have a home network.

I remember well when we got our first DSL modem and wireless router. How luxurious it felt to wander into the living room in stocking feet, sit down on the couch, and connect to the office on a laptop without plugging anything in. Never mind that it was the beginning of twenty-four-seven working days for many of us. Now broadband connections via cable or fiber often replace DSL with higher bandwidth, but the home wireless router still prevails.

Critical Changes For Home Networks

  • Everyone, including the kids, now has a smartphone that packs a computer considerably more powerful than the beige-box home desktop computers that started home computing. Smartphones connect to home wireless routers whenever they have the chance.
  • Homes have embraced the “Internet of Things” (IoT). We now have doorbells, entrance locks, and security and heating systems that connect to our wireless routers so we can control them remotely through our smartphones.

At our house, the refrigerator, the kitchen range, and the microwave all want to connect to the world wide web. Network-connected speakers like Amazon Alexa, home entertainment systems, and health monitors are now common.

For the last decade, one of the cheapest and easiest features to add to a household appliance has been an interface for remote control via an app on a smartphone. Too often, these devices come from product designers with scant training in network security, and many of them are easily hacked. A thief might use your internet-connected video doorbell to detect when you are not at home and break into your house while you are away. Your smart lock might just pop open when the thief arrives.

Home networks today are seldom as complicated as those of large businesses and other organizations, but many still require sophisticated administration. A segmented network protects each segment from damage from other segments and each segment can be configured to permit activities that could be dangerous in other segments.

Typical Home Network Segments

Cybersecurity experts agree that typical home networks, especially when residents work from home some of the time, would benefit from dividing the network into at least three segments: 1) home computing, 2) Internet of Things (IoT), and 3) guests.

The home computing segment is a home network as it was before our computing life got complicated. It contains the desktops, laptops, tablets, and phones of the primary residents. Within this segment, resources such as files and printers can be shared, and, when necessary, one computer can access another. Most people keep their email, financial records, and financial accounts here. For a writer like me, my manuscripts are stored locally in this segment. The segment often holds home business records, and folks with online storefronts administer their storefronts and access their business records through it.

The IoT segment is the wild west. The devices there are not quite trustworthy. It’s bad enough that a criminal might hack into your smart doorbell, but giving the miscreant access to your bank account and business documents doubles down on trouble. Isolating this segment allows you to take advantage of the convenience of networked devices without quite opening a vein in your arm for the crooks.

The guest segment is valuable when you have teenagers in the house who bring in friends. Sharing internet connections with visitors is basic hospitality these days, but keeping your home network secure can be a problem. You may not mind sharing your network password with your brother, but you have to worry about your kids’ squirrelly friends who just might leave their smartphone with access to your home network on a park bench or in the video arcade. Worse, even good kids might use the colossal bad judgement of adolescence to hack your system just to see if they can.

Even if kids don’t visit, you can’t be sure that all your friends are as careful as you are about keeping phones free from dangerous apps and criminal bots waiting to rob your network blind. A network segment with a special password that permits connections with the outside world, but not to the devices in your home, protects you from the mistakes of your guests.
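
For readers who like to see ideas in code, here is a toy sketch, in Python, of the isolation policy a segmented router enforces. The segment names and rules are my own illustration, not the configuration of any particular product:

    # A toy model of a three-segment home network policy.
    # Real routers enforce this with VLANs and firewall rules;
    # the names and rules here are illustrative only.

    ALLOWED = {
        # (source segment, destination segment): allowed?
        ("home",  "home"):  True,   # share files and printers freely
        ("home",  "iot"):   True,   # you may control your thermostat
        ("iot",   "home"):  False,  # a hacked doorbell cannot reach your laptop
        ("iot",   "iot"):   True,
        ("guest", "home"):  False,  # guests get the internet, not your network
        ("guest", "iot"):   False,
        ("guest", "guest"): True,
    }

    def may_connect(src: str, dst: str) -> bool:
        """Return True if a device in segment src may open a connection to dst."""
        return ALLOWED.get((src, dst), False)

    print(may_connect("iot", "home"))   # False: the wild west stays fenced in
    print(may_connect("home", "iot"))   # True: the doorbell app still works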

Next Steps

In the best of all worlds, I would now give you quick and easy instructions for implementing a segmented home network. I can’t. The market is still catching up and implementing a segmented home network is not simple enough to describe here. For our house, I have a jury-rigged setup that reuses an old router and a network switch that I happened to have lying around. I did some fancy configuration that I would not wish on anyone but myself.

For most people, investing in professional help may be the solution. Expect to pay for some new equipment. If you want to try setting up your own segmented network, this link contains some specific information: An Updated Guide to Do-It-Yourself Network Segmentation. I caution you that newer hardware may be available, but the link will get you started.

You’ll end up with a password for each part of your home network, but you will be safer.

Windows 11? Is Redmond Crazy?

Folks have gotten used to Windows 10. Now Microsoft is pulling the rug out with a new version of Windows. When I heard of Windows 11, my first thought was that the disbanded Vista product team had staged an armed coup in Bill Gates’ old office and regained control of Windows. I haven’t installed Windows 11, although grandson Christopher has. He doesn’t like it.

I think Microsoft has something cooking in Windows 11.

Microsoft releases

New releases of Windows are always fraught. Actually, new releases of anything from Microsoft get loads of pushback. Ribbon menu anxiety in Office, the endless handwringing over start menus moving and disappearing in Windows. Buggy releases. It goes on and on.

Having released a few products myself, I sympathize with Microsoft.

Developers versus users

A typical IT system administrator says “Change is evil. What’s not broke, don’t fix. If I can live with a product, it’s not broke.” Most computer users think the same way: “I’ve learned to work with your run down, buggy product. Now, I’m busy working. Quit bothering me.”

Those positions are understandable, but designers and builders see products differently. They continuously scrutinize customers using a product, and then ask how it might work more effectively, what users might want to do that they can’t, how they could become more productive and add new tasks and ways of working to their repertoire.

Designers and builders also are attentive to advances in technology. In computing, we’ve seen yearly near-doubling of available computing resources, instruction execution capacity, storage volume, and network bandwidth. In a word, speed. 2021’s smartphones dwarf supercomputers from the era when Windows, and its predecessor, DOS, were invented.

No one ever likes a new release

At its birth, Windows was condemned as flashy eye candy that required then-expensive bit-mapped displays and sapped performance with intensive graphics processing. In other words, Windows was a productivity killer and an all-round horrible idea, especially to virtuoso users who had laboriously internalized all the command line tricks of text interfaces. Some developers, including me, for some tasks, still prefer a DOS-like command line to a graphic interface like Windows.

However, Windows, and other graphic interfaces such as X on Unix/Linux, were rapidly adopted as bit-mapped displays proliferated and processing power rose. Today, character-based command line interfaces are almost always simulated in a graphical interface when paleolithic relics like me use them. Pure character interfaces are still around, but mostly in the tiny LCD screens on printers and kitchen appliances.

Designers and builders envisioned the benefits from newly available hardware and computing capacity and pushed the rest of us forward.

Success comes from building for the future, not doubling down on the past. But until folks share in the vision, they think progress is a step backwards.

Is the Windows 11 start menu a fiasco? Could be. No development team gets everything right, but I’ll give Windows 11 a spin and try not to be prejudiced by my habits.

Weird Windows 11 requirements

Something more is going on with Windows 11. Microsoft is placing hardware requirements on Windows 11 that will prevent a large share of existing Windows 10 installations from upgrading. I always expect to be nudged toward upgraded hardware. Customers who buy new hardware expect to benefit from newer, more powerful devices. Requirements to support legacy hardware are an obstacle to exploiting new hardware. Eventually, you have to turn your back on old hardware and move on, leaving some irate customers behind. No developer likes to do this, but eventually they must, or the competition eats them alive.

Microsoft forces Windows 11 installations to be more secure by requiring a higher level of Trusted Platform Module (TPM) support. A TPM is a microcontroller that supports several cryptographic security functions that help verify that users and computers are what they appear to be and have not been spoofed or tampered with. TPMs are usually implemented as a small physical chip, although they can be implemented virtually in software. Requiring high-level TPM support makes sense in our increasingly cybersecurity-compromised world.
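
If you are curious whether your own machine passes the test, Windows 10 and 11 ship a command-line utility that reports TPM status. Here is a minimal Python sketch, assuming a Windows machine with the tpmtool utility available; on other systems it simply reports that tpmtool is missing:

    # Query TPM status by shelling out to Windows' built-in tpmtool.
    # Assumes Windows 10 or later; hedged to fail gracefully elsewhere.
    import subprocess

    def tpm_info() -> str:
        """Return tpmtool's device report, or an explanation of failure."""
        try:
            result = subprocess.run(
                ["tpmtool", "getdeviceinformation"],
                capture_output=True, text=True, check=True,
            )
            return result.stdout
        except FileNotFoundError:
            return "tpmtool not found; this is probably not a Windows machine."
        except subprocess.CalledProcessError as err:
            return f"tpmtool failed: {err.stderr}"

    print(tpm_info())  # look for the TPM version line in the report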

But the Windows 11 requirements seem extreme. As I type this, I am using a ten-year-old laptop running Windows 10. For researching and writing, it’s more than adequate, but it does not meet Microsoft’s stated requirements for Windows 11. I’m disgruntled, and I’m not unique in this opinion. Our grandson Christopher has figured out a way to install Windows 11 on some legacy hardware, which is impressive but way beyond most users, and Microsoft could easily cut off this route.

I have an idea where Redmond is going with this. It may be surprising.

Today, the biggest and most general technical step forward in computing is the near universal availability of high capacity network communications channels. Universal high bandwidth Internet access became a widely accepted national necessity when work went online through the pandemic. High capacity 5G cellular wireless networks are beginning to roll out. (What passes for 5G now is far beneath the full 5G capacity we will see in the future.) Low earth orbit satellite networks promise to link isolated areas to the network. Ever faster Wi-Fi local area networks offer connectivity anywhere.

This is not fully real. Yet. But it’s close enough that designers and developers must assume it is already present, just like we had to assume bit-mapped displays were everywhere while they were still luxuries.

What does ubiquitous high bandwidth connection mean for the future? More streaming movies? Doubtless, but that’s not news: neighborhood Blockbuster Video stores are already closed.

Thinking it through

In a few years, every computer will have a reliable, high capacity connection to the network. All the time. Phones are already close, and connections will only get faster and more reliable. That includes every desktop, laptop, tablet, phone, home appliance, vehicle, industrial machine, lamp post, traffic light, and sewer sluice gate. The network will also be populated with computing centers with capacities that will dwarf the already gargantuan capacities available today. Your front door latch may already have access to more data and computing capacity than all of IBM and NASA in 1980.

At the same time, ransomware and other cybercrimes are sucking the life blood from business and threatening national security.

Microsoft lost the war for the smartphone to Google and Apple. How will Windows fit in the hyperconnected world of 2025? Will it even exist? What does Satya Nadella think about when he wakes late in the night?

Windows business plan

The Windows operating system (OS) business plan is already a holdout from the past. IBM, practically the inventor of the operating system, de-emphasized building and selling OSs decades ago. Digital Equipment, DEC, a stellar OS builder, is gone, sunk into HP. Sun Microsystems, another OS innovator, is buried in the murky depths of Oracle. Apple’s operating system is built on FreeBSD, an open source Unix variant. Google’s Android is a Linux. Why have all these companies gotten out of, or never entered, the proprietary OS development business?

Corporate economics

The answer is simple corporate economics: there’s no money in it. Whoa! you say. Microsoft made tons of money off its flagship product, Windows. The key word is “made,” not “makes.” Building and selling operating systems was a money machine for Gates and company back in the day, but no longer. Twenty years ago, when Windows ruled, the only competing consumer OS was Apple’s, which was a niche product in education and some creative sectors. Microsoft pwned the personal desktop in homes and businesses. Every non-Apple computer was another kick to the Microsoft bottom line. No longer. Now, Microsoft’s Windows division has to struggle on many fronts.

Open source OSs such as Android, Apple’s BSD, and the many flavors of Linux are all fully competitive in ease of installation and use. They weren’t in 2000. Now, they are slick, polished systems with features comparable to Windows.

To stay on top, Windows has to out-perform, out-feature, and out-secure these formidable competitors. In addition, unlike Apple, part of the Windows business plan is to run on generic hardware. Developing for hardware you don’t control is difficult. The burden of coding to and testing on varying equipment is horrendous. Microsoft can make rules that the hardware is supposed to follow, but in the end, if Windows does not shine on Lenovo, HP, Dell, Acer, and Asus, the Windows business plunges into arctic winter.

With all that, Microsoft is at another tremendous disadvantage. It relies on in-house developers cutting proprietary code to advance Windows. Microsoft’s competitors rely on foundations that coordinate independent contributors to open source code bases. Many of these contributors are on the payrolls of big outfits like IBM, Google, Apple, Oracle, and Facebook.

Rough times

Effectively, these dogs are ganging up on Microsoft. Through the foundations (Linux, Apache, Eclipse, etc.), these corporations cooperate to build basic utilities, like the Linux OS, instead of building them for themselves. This saves a ton of development costs. And, since the code is controlled by a foundation in which they own a stake, they don’t have to worry about a competitor pulling the rug out from under them.

Certainly, many altruistic independent developers contribute to open source code, but not a line they write gets into key utilities without the scrutiny of the big dogs. From some angles, the open source foundations are the biggest monopolies in the tech industry. And Windows is out in the cold.

What will Microsoft do? I have no knowledge, but I have a good guess that Microsoft is contemplating a tectonic shift.

Windows will be transformed into a service.

Nope, you say. They’ve tried that. I disagree. I read an article the other day declaring Windows 11 to be the end of Windows as a Service, something Windows 10 was supposed to be, because Windows 11 is projected for yearly instead of semiannual or more frequent updates. Windows 11 has annoyed a lot of early adopters and requires hardware upgrades that a lot of people think are unnecessary. What’s going on?

Windows 10 as a service

The whole idea of Windows 10 as a service was lame. Windows 10 was (and is) an operating system installed on a customer’s box, running on the customer’s processor. The customer retains control of the hardware infrastructure. Microsoft took some additional responsibility for software maintenance with monthly patches, cumulative patches, and regular drops of new features, but that is nowhere near what I call a service.

When I installed Windows 10 on my ancient T410 ThinkPad, I remained responsible for installing applications and adding or removing memory and storage. If I wanted, I could rename the Program Files directory to Slagheap and reconfigure the system to make it work. I moved the Windows system directory to an SSD for a faster boot. And I hit the power switch whenever I feel like it.

Those features may be good or bad.

As a computer and software engineer by choice, I enjoy fiddling with and controlling my own device. Some of the time. My partner Rebecca can tell you what I am like when a machine goes south while I’m on a project that I am hurrying to complete with no time for troubleshooting and fixing. Or my mood when I tried to install a new app six months after I had forgotten the late and sporty night when I renamed the Program Files directory to Slagheap.

At times like those, I wish I had a remote desktop setup, like we had in the antediluvian age when users had dumb terminals on their desks and logged into a multi-user computer like a DEC VAX. A dumb terminal was little more than a remote keyboard with a screen that showed keystrokes as they were entered interlaced with a text stream from the central computer. The old systems had many limitations, but a clear virtue: a user at a terminal was only responsible for what they entered. The sysadmin took care of everything else. Performance, security, backups, and configuration, in theory at least, were system problems, not user concerns.

Twenty-first century

Fast forward to the twenty-first century. The modern equivalent of the old multi-user computer is a user with a virtual computer desktop service running in a data center in the cloud, a common setup for remote workers that works remarkably well. For the user, it looks and feels like a personal desktop, except that it exists in a data center, not on a private local device. All data and configuration (the way a computer is set up) are stored in the cloud. Employees can access their remote desktops from practically any computing device attached to the network, if they can prove their identity. After they log on, they have access to all their files, documents, processes, and other resources in the state they left them, or, in the case of an ongoing process, in the state their process has attained.

What’s a desktop service

From the employee’s point of view, they can switch devices with abandon. Start working at your kitchen table with a laptop, then log out in the midst of composing a document without bothering to save. Not saving is a little risky, but virtual desktops run in data centers where events that might lose a document are much rarer than tripping on a cord, spilling a can of Coke, or the puppy doing the unmentionable at home. In data centers, whole teams of big heads scramble to find ways to shave a minute of downtime off a month.

Grab a tablet and head to the barbershop. Continue working on that same document in the state you left it instead of thumbing through old Playboys or Cosmos. Pick up again in the kitchen at home with fancy hair.

Security

Cyber security officers have nightmares about employees storing sensitive information on personal devices that fall into the hands of a competitor or hacker. Employees are easily prohibited from saving anything from their virtual desktop to the local machine where they are working. With reliable and fast network connections everywhere, employees have no reason to save anything privately.

Nor do security officers need to worry about patching vulnerabilities on employee gear. As long as the employee’s credentials are not stored on the employee’s device, which is relatively easy to prevent, there is nothing for a hacker to steal.

The downside

What’s the downside? The network. You have to be connected to work, and you don’t want to see swirlies in the middle of something important while data is buffered and rerouted somewhere north of nowhere.

However. All the tea leaves say those issues are on the way to becoming as isolated as the character interface on your electric teapot.

The industry is responding to the notion of Windows as a desktop service. See Windows 365 and a more optimistic take on Win365.

Now think about this for a moment: why not a personal Windows virtual desktop? Would that not solve a ton of problems for Microsoft? With complete control of the Windows operating environment, Microsoft’s testing is greatly simplified. A virtual desktop local client approaches the simplicity of a dumb terminal and could run on embarrassingly modest hardware. Security soars. A process running in a secured data center is not easy to hack. The big hacks of recent months have all been on lackadaisically secured corporate systems, not data centers.

It also solves a problem for me. Do I have to replace my ancient, but beloved, T410? No, provided Microsoft prices personal Windows 365 reasonably, I can switch to Windows 365 and continue on my good old favorite device.

Marv’s note: I made a few tweaks to the post based on Steve Stroh’s comment.

Free Digital Services: TANSTAAFL? Not Exactly

For the last two decades, we’ve been in an era of free services: internet searches, social media, email, storage. Free, free, free. My parsimonious head spins thinking of all that free stuff.

Social and technical tectonic plates are moving. New continents are forming, old continents are breaking up. Driven by the pandemic and our tumultuous politics, we all sense that our world is changing. 2025 will not return to 2015. Free services will change. In this post, I explain that one reason free digital services have been possible is that they are cheap to provide.

I’m tempted to bring up Robert Heinlein, the science fiction author, and his motto, TANSTAAFL (There Ain’t No Such Thing As A Free Lunch), as a principle behind dwindling free services, but his motto does not exactly apply.

Heinlein’s point was that free sandwiches at a saloon are not free lunches because customers pay for the sandwiches in the price of the drinks. This is not exactly the case with digital free lunches. Digital services can be free because they cost so little, they appear to be free. Yes, you pay for them, but you pay so little they may as well be free.

A sandwich sitting beside a glass of beer costs the tavern as much as a sandwich in a diner, but a service delivered digitally costs far less than a physical service. Digital services are free like dirt, which also appears to be free, but it looks free because it is abundant and easily obtained, not because we are surreptitiously charged for it.

Each physical service sale has a fixed cost for reproduction and delivery. Digital services have almost no per-sale cost for reproduction and delivery until the provider has to invest in expanded infrastructure.
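
A back-of-the-envelope model makes the difference vivid. Here is a small Python sketch; every number in it is invented for illustration:

    # Cost of serving N readers when each physical copy carries a real
    # reproduction-and-delivery cost, versus a digital copy whose
    # per-copy cost is nearly zero.  All numbers are invented.

    PHYSICAL_COST_PER_COPY = 1.50    # printing, trucking, hand delivery
    DIGITAL_COST_PER_COPY = 0.0001   # a few network packets

    def cost_to_serve(n_readers: int, per_copy: float, fixed: float = 10_000.0) -> float:
        """Total cost: fixed production cost plus per-copy cost."""
        return fixed + n_readers * per_copy

    for n in (1_000, 100_000, 1_000_000):
        physical = cost_to_serve(n, PHYSICAL_COST_PER_COPY)
        digital = cost_to_serve(n, DIGITAL_COST_PER_COPY)
        print(f"{n:>9,} readers: physical ${physical:>12,.0f}  digital ${digital:>10,.0f}")

The physical cost grows with every reader; the digital cost barely moves until the provider must buy more infrastructure, which is the stepwise behavior discussed below.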

Free open source software and digital services

One factor behind free digital services is the free software movement, which began in the mid-1980s with the formation of the Free Software Foundation. The “free” in Free Software Foundation refers more to software that can be freely modified than to products freely given away, but most software under the Free Software Foundation and related organizations is available without charge. Much open source software is written by paid programmers, and is therefore paid for by someone, but not in ways that TANSTAAFL might predict.

The workhorse utilities of the internet are open source. Most internet traffic is Hypertext Transfer Protocol (HTTP) packets transmitted and received by HTTP server software. The majority of this traffic is driven by open source Apache HTTP servers running on open source Linux operating systems. Anyone can get a free copy of Linux or Apache. Quite often, servers in data centers run only open source software. (Support for the software is not free, but that’s another story.)

So who pays for developing and updating these utilities? Some of the code is volunteer written, but a large share is developed on company time by programmers and administrators working for corporations like IBM, Microsoft, Google, Facebook, Amazon, and Cisco, coordinated by foundations supported by these same corporations. The list of contributors is long. Most large, and many small, computer companies contribute to open source software utilities.

Why? Because they have determined that their best interest is to develop these utilities through foundations like the Linux Foundation, the Apache Software Foundation, the Eclipse Foundation, the Free Software Foundation, and others, instead of building their own competing utilities.

By collaborating, they avoid the waste of each company researching and writing similar code, and they avoid the vulnerable business position of depending on utilities built by present or potential competitors. It’s a little like the interstate highway system. Trucking companies don’t build their own highways, and, with the exception of Microsoft, computer companies don’t build their own operating systems.

Digital services are cheap

Digital services are cheap, cheaper than most people imagine. Free utilities contribute to the cheapness, but mostly because they contribute to the efficiency of service delivery. The companies that use the open source utilities the most pay for them in contributed code and expertise that amount to millions of dollars per year, but they make it all back: the investment gives them an efficient, standardized infrastructure on which they build and deliver services that compete on user-visible features, and those features yield market advantages. They don’t have to compete on utilities that their customers only care about when they fail.

Value and cost of digital services are disconnected

We usually think that the cost of a service is in some way related to the value of the service. Digital services break that relationship.

Often, digital services deliver enormous value at minuscule cost, much lower cost than comparable physical services. Consider physical Encyclopedia Britannica versus digital Wikipedia, two products which offer similar value and functionality. A paper copy of Britannica could be obtained for about $1400 in 1998 ($2260 in 2021 dollars). Today, Wikipedia is a pay-what-you-want service with a suggested contribution of around $5 a month, but only a small percentage of Wikipedia users actually donate to the project.

Therefore, the average cost of using Wikipedia is microscopic compared to a paper Britannica. You can argue that Encyclopedia Britannica has higher quality information than Wikipedia (although that point is disputable), but you have to admit that Wikipedia delivers a convenient and comparable service whose value is not at all proportional to its price.

Digital reproduction and distribution is cheap

The cost of digital services behaves differently than the cost of physical goods and services we are accustomed to. Compare a digital national news service to a physical national newspaper.

The up-front costs of acquiring, composing, and editing news reports are the same for both, but after a news item is ready for consumption, the cost differs enormously. A physical newspaper must be printed. Ink and paper must be bought. The printed items must be loaded on trucks, then transferred to local delivery, and hand carried to readers’ doorsteps. Often printed material has to be stored in warehouses waiting for delivery. The actual cost of physical manufacture and delivery varies widely based on scale, the delivery area, and operational efficiency, but there is a clear and substantial cost for each item delivered to a reader.

A digital news item is available on readers’ computing devices within milliseconds of the editor’s keystroke of approval. Even if the item is scheduled for later delivery, the process is entirely automatic from then on. No manpower costs, no materials cost, minuscule network delivery costs.

The reproduction and network delivery part of the cost of an instance of service is often too small to be worth the trouble to calculate.

My experience with network delivery of software is that the cost of reproducing and delivering a single instance of a product is so low, the finance people don’t want to bother to calculate it. They prefer to lump it into corporate overhead, which is spread across products and is the same whether one item or a million items are delivered. The cost is also the same whether the item is delivered across the hall or across the globe. Physical items often sit in expensive rented warehouses after they are reproduced. Digital products are reproduced on demand and never stored in bulk.

Stepwise network costs

Network costs are usually stepwise, not linear, and not easily allocated to the number of customers served. Stepwise costs mean that when the network infrastructure is built to deliver news items to 1000 simultaneous users, the cost stays about the same for one or 1000 users because much of the cost is a capital investment. Unused capacity costs about the same as used capacity: the capacity for the 1000th user must be paid for even though the 1000th user will not sign on for months.

The 1001st user will cost a great deal, but after paying out to scale the system up to, say, 10,000 users, costs won’t rise much until the 10,001st user signs on, at which point another network investment is required. Typically, the cost increment for each step gets proportionally less as efficiencies of scale kick in.

After a level of infrastructure is implemented and paid for, increasing readership of a digital news service does not increase costs until the next step is encountered. Consequently, the service can add readers for free without adversely affecting profits or cash flow, although each free reader brings the service closer to a doomsday when it must invest to expand capacity.

Compare this to a physical newspaper. As volume increases, the cost per reader goes down as efficiency increases, but unlike a digital service, each and every new reader increases the overall cost of the service as the cost of manufacturing and distribution increases with each added reader. The rate of overall increase may go down as scale increases, but the increment is always there.
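
A small Python sketch contrasts the two cost curves. The step sizes and prices are invented; only the shapes of the curves matter:

    # Stepwise digital infrastructure cost versus linear physical cost.
    # Capacity is bought in steps of 1,000 users, and each further step
    # costs 10% less as efficiencies of scale kick in.  Figures invented.
    import math

    STEP_SIZE = 1_000          # users served per infrastructure step
    FIRST_STEP_COST = 5_000.0  # cost of the first capacity step
    STEP_DISCOUNT = 0.9        # each later step costs 10% less

    def digital_cost(users: int) -> float:
        """Sum the cost of every capacity step needed to serve `users`."""
        steps = max(1, math.ceil(users / STEP_SIZE))
        return sum(FIRST_STEP_COST * STEP_DISCOUNT**i for i in range(steps))

    def physical_cost(users: int, per_reader: float = 4.0) -> float:
        """Every added reader adds manufacturing and distribution cost."""
        return users * per_reader

    for n in (500, 1_000, 1_001, 10_000, 10_001):
        print(f"{n:>6,} users: digital ${digital_cost(n):>9,.0f}"
              f"  physical ${physical_cost(n):>9,.0f}")

Notice that the digital cost jumps only when user 1,001 forces a new step, while the physical cost rises with every single user.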

Combine the fundamentally low cost of digital reproduction and distribution with the stepwise nature of digital infrastructure costs, and digital service operators can offer free services far more easily and more profitably than physical service providers can.

The future

I predict enormous changes in free services in approaching months and years, but I don’t expect a TANSTAAFL day of reckoning because providing digital services is fundamentally much cheaper than physical services.

I have intentionally not discussed the ways in which service providers get revenues from their services, although I suspect most readers have thought about revenue as they read. The point here is that digital services are surprisingly cheap and their economic dynamics are not at all like that of physical goods and services.

As digital continents collide, I expect significant changes in free digital services, changes that will come mainly from the revenue side and our evolving attitudes toward privacy and fairness in commerce. These are subjects for future posts.

Supply Chain Management: Averting a Covid-19 Catastrophe

Yossi Sheffi is a supply chain management expert who moves freely between business and academia. He has founded several companies, and he sits on the boards of large corporations. He teaches engineering at MIT and has authored a half-dozen books that are read by engineers, economists, and business people. When I heard about his latest book, The New (Ab)Normal, in which he tackles the covid-19 disruption of the global supply chain, I got a copy as soon as I could, stayed up late reading, and got up early to finish it.

[Image: The gosling supply chain management system]

The New (Ab)Normal

New (Ab)Normal was worth it. Sheffi’s insider views are compelling. He has talked with the executives and engineers who struggled to put food and toilet paper on supermarket shelves, produce and distribute medical protective gear, and prevent manufacturing plants from foundering from supply disruption.

Supply chains and the media

Sheffi has harsh words for some media. For example, he says empty supermarket shelves were misunderstood. Food and supplies were never lacking, but they were often in the wrong place. Until the lockdowns in March, a big share of U.S. meals were dispensed in restaurants, schools, and company cafeterias. These businesses purchase the same food as families, but they get it through a different supply network and packaged in different ways.

Cafeterias buy tons of shelled fresh eggs in gallon containers, but consumers buy cartons of eggs in supermarkets for cooking at home. When the eateries shut down or curtailed operations and people began eating at home, plenty of eggs were available, but someone had to redirect them to consumers in a form that was practical for a home kitchen. Sheffi says food shortages appeared in dramatic media video footage and sound bites, but not in supply chains.

Bursty services

Changing buying patterns worsened the appearance of shortages. Supermarket supply chains are adjusted to dispense supplies at a certain rate and level of burstiness. These are terms I know from network and IT service management. A bursty service has bursts of increased activity followed by relatively quiet periods. At a bursty IT trouble ticket desk, thirty percent of a week’s tickets might be entered in one hour on Monday morning when employees return to work ready to tackle problems that they had put off solving during the previous week. A less bursty business culture might generate the same number of tickets per week, but at a more uniform rate of tickets per hour.

Bursty desks must be managed differently than steady desks. The manager of a bursty service desk must devise a way to deploy extra equipment and hire more staff to deal with peak activity on those hectic Monday mornings. Experienced managers also know that an unpredicted burst in tickets on a desk, say in the aftermath of a hurricane, will cause havoc and shortened tempers as irate customers wait for temporarily scarce resources. The best of them have contingency plans to deal with unanticipated bursts.
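
Burstiness is easy to quantify: compare the peak hourly rate to the average hourly rate. A quick Python sketch with invented ticket counts shows why the bursty desk needs far more peak capacity than the steady desk, even though both handle the same weekly volume:

    # Two desks with the same weekly ticket volume.  The bursty desk
    # takes 30% of its tickets in one Monday-morning hour; the steady
    # desk spreads them evenly.  All counts are invented.

    WEEKLY_TICKETS = 400
    WORK_HOURS = 40

    bursty = [WEEKLY_TICKETS * 0.70 / (WORK_HOURS - 1)] * WORK_HOURS
    bursty[0] = WEEKLY_TICKETS * 0.30          # Monday, 8 to 9 am
    steady = [WEEKLY_TICKETS / WORK_HOURS] * WORK_HOURS

    for name, desk in (("bursty", bursty), ("steady", steady)):
        peak, avg = max(desk), sum(desk) / len(desk)
        # Staffing must cover the peak hour, not the average hour.
        print(f"{name}: peak {peak:.0f}/hr, average {avg:.0f}/hr, "
              f"peak-to-average ratio {peak / avg:.1f}")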

Cloud computing to the rescue

The rise of cloud computing architectures in the last decade has yielded increased flexibility for responding to bursts in digital activity. Pre-cloud, managers who had to provide service through activity bursts had to deploy purchased or leased servers with the capacity to handle peak periods of activity. Adding a physical server is a substantial financial investment that requires planning, sometimes changes in the physical plant, often additional training, and occasionally new hires.

Worse, the new capacity may remain idle during long non-peak periods, which is hard to explain to cost conscious business leaders. Some businesses are able to harvest off-peak capacity for other purposes, but many are not. Cloud computing offers on-demand computing with little upfront investment, greatly reducing the need to pay for unused capacity to improve service during peak periods.

The food supply

Covid-19 caused an unanticipated increase in the burstiness of supermarket sales. Under the threat of the virus, consumers began to shop once a week or less, buying larger quantities. Folks accustomed to picking up a few vegetables and a fresh protein on their way home from work began arriving at the store early in the morning to buy twenty-five-pound sacks of flour and dry beans, cases of canned vegetables, and bulk produce.

On the supply end, with the farmers and packers, the quantities sold per month stayed fairly constant because the number of mouths being fed did not change. In the stores, though, shelves were bare by afternoon, waiting for shipments arriving in the night, because consumers were buying in bursts instead of in their “a few items a day” pattern. This made for exciting media coverage of customers squabbling over the products remaining on the shelves. The media seldom pointed out that the shelves were full again each morning once the night’s shipments had been unloaded.

Toilet paper

The infamous toilet paper shortage was also either illusory or much more nuanced than media portrayals suggested. Like restaurants and cafeterias, public restrooms took a big hit from the lockdowns. Like food, toilet paper consumption is inherently constant, but the burstiness of toilet paper purchasing, and where the product is purchased, varies.

Commercial toilet paper consumption plummeted as shoppers began to purchase consumer toilet paper in the same bursts that they were purchasing food supplies. There may have been some hoarding behavior, but many shoppers simply wanted to shrink their dangerous trips to the market by buying in bulk. Consumer toilet paper is not like the commercial toilet paper used in public restrooms, which is coarser and often dispensed in larger rolls from specialized holders. This presented supply problems similar to food supply issues.

Supply disruption

Supply chains had to respond quickly. Unlike digital services, responding to increased burstiness in supermarket sales required changes in physical inventory patterns. Increasing the supply of eggs by the dozen at supermarkets and decreasing eggs by the gallon on commercial kitchen loading docks could not be addressed by dialing up a new batch of virtual cloud computers. New buying patterns had to be analyzed, revised orders had to be placed with packers, and trucks had to roll on highways.

Advances in supply chain management

Fortunately, supply chain reporting and analysis have jumped ahead in the last decade. Consumers see some of these advances on online sales sites like Amazon when they click on “Track package.” Not long ago, all they were offered was Amazon’s best estimated delivery date; now they see the progress of their shipment from warehouse through shipping transfer points to final delivery. Guesswork is eliminated: arrivals and departures are recorded as the package passes barcode scanners.

The movement data is centralized in cloud data centers and dispensed to the consumer on demand. Many people have noted that Amazon shipments are not as reliable as they were pre-covid. However, the impression of unreliability would be much stronger without those “Track package” buttons.
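
Under the hood, the mechanism is simple: each barcode scan appends a timestamped event to a central record, and “Track package” just replays the events in order. A minimal Python sketch, with invented events and locations:

    # A package's journey as an append-only list of scan events, the
    # shape of data behind a "Track package" button.  Requires Python 3.9+.
    # The events and locations are invented.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class ScanEvent:
        timestamp: datetime
        location: str
        status: str

    def track(events: list[ScanEvent]) -> None:
        """Replay the scan history the way a tracking page would."""
        for e in sorted(events, key=lambda e: e.timestamp):
            print(f"{e.timestamp:%b %d %H:%M}  {e.location:<20} {e.status}")

    track([
        ScanEvent(datetime(2020, 11, 1, 21, 40), "Fulfillment center", "Shipped"),
        ScanEvent(datetime(2020, 11, 2, 6, 15), "Regional warehouse", "Departed"),
        ScanEvent(datetime(2020, 11, 2, 18, 5), "Local delivery hub", "Out for delivery"),
    ])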

Supply chain managers have access to the same kind of data on their shipments. In untroubled times, a shipping clerk’s educated estimate of the arrival time of a shipment of fresh eggs may have been adequate, but not in post-covid 2020, with its shifting demands and unpredictable delays. Those guesses can’t cope with an egg packing plant shut down for a week when the virus flares up or a shipment delayed by a quarantined truck driver.

Good news

Fortunately, with the data available today, these issues are visible in supply chain tracking systems. Orders can be redirected immediately to a packing plant that has returned from a shutdown, or a fresh relief driver can be dispatched instead of leaving a shipment to wait in a truck stop parking lot. Issues can be resolved today that would not even have been visible as issues a decade ago. Consequently, supply chains have been strikingly resilient to the covid-19 disruption.

Supply chains were much different in my pioneer grandparents’ day. They grew their own meat, poultry, and vegetables, and lived without toilet paper for most of their lives. Although supply was less complicated, the effects of a supply disruption, like a punishing thunderstorm destroying a wheat crop, were as significant as today’s disruptions.

In November of 2020, with steeply rising infection counts, predictions of new supply disruption occasionally surface. The response of supply chains so far this year leaves me optimistic that we have seen the worst.