Windows 11? Is Redmond Crazy?

Folks have gotten used to Windows 10. Now Microsoft is pulling out the rug with a new version of Windows. When I heard of Windows 11, my first thought was that the disbanded Vista product team had staged an armed coup in Bill Gates’ old office and regained control of Windows. I haven’t installed Windows 11, although grandson Christopher has. He doesn’t like it.

I think Microsoft has something cooking in Windows 11.

Microsoft releases

New releases of Windows are always fraught. Actually, new releases of anything from Microsoft get loads of pushback. Ribbon menu anxiety in Office, the endless handwringing over start menus moving and disappearing in Windows. Buggy releases. It goes on and on.

Having released a few products myself, I sympathize with Microsoft.

Developers versus users

A typical IT system administrator says “Change is evil. What’s not broke, don’t fix. If I can live with a product, it’s not broke.” Most computer users think the same way: “I’ve learned to work with your run down, buggy product. Now, I’m busy working. Quit bothering me.”

Those positions are understandable, but designers and builders see products differently. They continuously scrutinize customers using a product, and then ask how it might work more effectively, what users might want to do that they can’t, how they could become more productive and add new tasks and ways of working to their repertoire.

Designers and builders also are attentive to advances in technology. In computing, we’ve seen yearly near-doubling of available computing resources: instruction execution capacity, storage volume, and network bandwidth. In a word, speed. 2021’s smartphones dwarf supercomputers from the era when Windows, and its predecessor, DOS, were invented.

No one ever likes a new release

At its birth, Windows was condemned as flashy eye candy that required then-expensive bit-mapped displays and sapped performance with intensive graphics processing. In other words, Windows was a productivity killer and an all-round horrible idea, especially to virtuoso users who had laboriously internalized all the command line tricks of text interfaces. Some developers, including me, for some tasks, still prefer a DOS-like command line to a graphic interface like Windows.

However, Windows, and other graphic interfaces such as X on Unix/Linux, were rapidly adopted as bit-mapped displays proliferated and processing power rose. Today, character-based command line interfaces are almost always simulated in a graphical interface when paleolithic relics like me use them. Pure character interfaces are still around, but mostly in the tiny LCD screens on printers and kitchen appliances.

Designers and builders envisioned the benefits from newly available hardware and computing capacity and pushed the rest of us forward.

Success comes from building for the future, not doubling down on the past. But until folks share in the vision, they think progress is a step backwards.

Is the Windows 11 start menu a fiasco? Could be. No development team gets everything right, but I’ll give Windows 11 a spin and try not to be prejudiced by my habits.

Weird Windows 11 requirements

Something more is going on with Windows 11. Microsoft is placing hardware requirements on Windows 11 that will prevent a large share of existing Windows 10 installations from upgrading. I always expect to be nudged toward upgraded hardware. Customers who buy new hardware expect to benefit from newer more powerful devices. Requirements to support legacy hardware are an obstacle to exploiting new hardware. Eventually, you have to turn your back on old hardware and move on, leaving some irate customers behind. No developer likes to do this, but eventually, they must or the competition eats them alive.

Microsoft forces Windows 11 installations to be more secure by requiring a higher level of Trusted Platform Module (TPM) support. A TPM is a microcontroller that supports several cryptographic security functions that help verify that users and computers are what they appear to be and have not been spoofed or tampered with. TPMs are usually implemented as a small physical chip, although they can be implemented virtually in software. Requiring a high level of TPM support makes sense in a world of ever-increasing cybersecurity compromise.

But the Windows 11 requirements seem extreme. As I type this, I am using a ten-year-old laptop running Windows 10. For researching and writing, it’s more than adequate, but it does not meet Microsoft’s stated requirements for Windows 11. I’m disgruntled and I’m not unique in this opinion. Our grandson Christopher has figured out a way to install Windows 11 on some legacy hardware, which is impressive, but way beyond most users and Microsoft could easily cut off this route.

I have an idea where Redmond is going with this. It may be surprising.

Today, the biggest and most general technical step forward in computing is the near universal availability of high capacity network communications channels. Universal high bandwidth Internet access became a widely accepted national necessity when work went online through the pandemic. High capacity 5G cellular wireless networks are beginning to roll out. (What passes for 5G now is far beneath the full 5G capacity we will see in the future.) Low earth orbit satellite networks promise to link isolated areas to the network. Ever faster Wi-Fi local area networks offer connectivity anywhere.

This is not fully real. Yet. But it’s close enough that designers and developers must assume it is already present, just like we had to assume bit-mapped displays were everywhere while they were still luxuries.

What does ubiquitous high bandwidth connection mean for the future? More streaming movies? Doubtless, but that’s not news: neighborhood Blockbuster Video stores are already closed.

Thinking it through

In a few years, every computer will have a reliable, high capacity connection to the network. All the time. Phones are already close. In a few years, the connection will be both faster and more reliable than today. That includes every desktop, laptop, tablet, phone, home appliance, vehicle, industrial machine, lamp post, traffic light, and sewer sluice gate. The network will also be populated with computing centers with capacities that will dwarf the already gargantuan capacities available today. Your front door latch may already have access to more data and computing capacity than all of IBM and NASA in 1980.

At the same time, ransomware and other cybercrimes are sucking the lifeblood from business and threatening national security.

Microsoft lost the war for the smartphone to Google and Apple. How will Windows fit in the hyperconnected world of 2025? Will it even exist? What does Satya Nadella think about when he wakes late in the night?

Windows business plan

The Windows operating system (OS) business plan is already a holdout from the past. IBM, practically the inventor of the operating system, de-emphasized building and selling OSs decades ago. Digital Equipment, DEC, a stellar OS builder, is gone, sunk into HP. Sun Microsystems, another OS innovator, is buried in the murky depths of Oracle. Apple’s operating system is built on FreeBSD, an open source Unix variant. Google’s Android is built on Linux. Why have all these companies gotten out of, or never entered, the proprietary OS development business?

Corporate economics

The answer is simple corporate economics: there’s no money in it. Whoa! you say. Microsoft made tons of money off its flagship product, Windows. The key word is “made” not “makes.” Making money building and selling operating systems was a money machine for Gates and company back in the day, but no longer. Twenty years ago, when Windows ruled, the only competing consumer OS was Apple, which was a niche product in education and some creative sectors. Microsoft pwned the personal desktop in homes and businesses. Every non-Apple computer was another kick to the Microsoft bottom line. No longer. Now, Microsoft’s Windows division has to struggle on many fronts.

Open source OSs— Android, Apple’s BSD, and the many flavors of Linux— are all fully competitive in ease of installation and use. They weren’t in 2000. Now, they are slick, polished systems with features comparable to Windows.

To stay on top, Windows has to out-perform, out-feature, and out-secure these formidable competitors. In addition, unlike Apple, part of the Windows business plan is to run on generic hardware. Developing on hardware you don’t control is difficult. The burden of coding to and testing on varying equipment is horrendous. Microsoft can make rules that the hardware is supposed to follow, but in the end, if Windows does not shine on Lenovo, HP, Dell, Acer, and Asus, the Windows business plunges into arctic winter.

With all that, Microsoft is at another tremendous disadvantage. It relies on in-house developers cutting proprietary code to advance Windows. Microsoft’s competitors rely on foundations that coordinate independent contributors to open source code bases. Many of these contributors are on the payrolls of big outfits like IBM, Google, Apple, Oracle, and Facebook.

Rough times

Effectively, these dogs are ganging up on Microsoft. Through the foundations— Linux, Apache, Eclipse, etc.—these corporations cooperate to build basic utilities, like the Linux OS, instead of building them for themselves. This saves a ton of development costs. And, since the code is controlled by the foundation in which they own a stake, they don’t have to worry about a competitor pulling the rug out from under them.

Certainly, many altruistic independent developers contribute to open source code, but not a line they write gets into key utilities without the scrutiny of the big dogs. From some angles, the open source foundations are the biggest monopolies in the tech industry. And Windows is out in the cold.

What will Microsoft do? I have no knowledge, but I have a good guess that Microsoft is contemplating a tectonic shift.

Windows will be transformed into a service.

Nope, you say. They’ve tried that. I disagree. I read an article the other day declaring Windows 11 to be the end of Windows as a Service, something Windows 10 was supposed to be but failed at, because Windows 11 is projected for yearly updates instead of semiannual or more frequent ones. Windows 11 has annoyed a lot of early adopters and requires hardware upgrades that a lot of people think are unnecessary. What’s going on?

Windows 10 as a service

The whole idea of Windows 10 as a service was lame. Windows 10 was (and is) an operating system installed on a customer’s box, running on the customer’s processor. The customer retains control of the hardware infrastructure. Microsoft took some additional responsibility for software maintenance with monthly patches, cumulative patches, and regular drops of new features, but that is nowhere near what I call a service.

When I installed Windows 10 on my ancient T410 ThinkPad, I remained responsible for installing applications and adding or removing memory and storage. If I wanted, I could rename the Program Files directory to Slagheap and reconfigure the system to make it work. I moved the Windows system directory to an SSD for a faster boot. And I hit the power switch whenever I feel like it.

Those features may be good or bad.

As a computer and software engineer by choice, I enjoy fiddling with and controlling my own device. Some of the time. My partner Rebecca can tell you what I am like when a machine goes south while I’m on a project that I am hurrying to complete with no time for troubleshooting and fixing. Or my mood when I tried to install a new app six months after I had forgotten the late and sporty night when I renamed the Program Files directory to Slagheap.

At times like those, I wish I had a remote desktop setup, like we had in the antediluvian age when users had dumb terminals on their desks and logged into a multi-user computer like a DEC VAX. A dumb terminal was little more than a remote keyboard with a screen that showed keystrokes as they were entered interlaced with a text stream from the central computer. The old systems had many limitations, but a clear virtue: a user at a terminal was only responsible for what they entered. The sysadmin took care of everything else. Performance, security, backups, and configuration, in theory at least, were system problems, not user concerns.

Twenty-first century

Fast forward to the early twenty-first century. The modern equivalent of the old multi-user computer is a user with a virtual computer desktop service running in a data center in the cloud, a common setup for remote workers that works remarkably well. For a user, it looks and feels like a personal desktop, except it exists in a data center, not on a private local device. All data and configuration (the way a computer is set up) is stored in the cloud. Employees can access their remote desktops from practically any computing device attached to the network, if they can prove their identity. After they log on, they have access to all their files, documents, processes, and other resources in the state they left them, or in the case of an ongoing process, in the state their process has attained.

What’s a desktop service

From the employee’s point of view, they can switch devices with abandon. Start working at your kitchen table with a laptop, log out in the midst of composing a document without bothering to save. Not saving is a little risky, but virtual desktops run in data centers where events that might lose a document are much rarer than tripping on a cord, spilling a can of Coke, or the puppy doing the unmentionable at home. In data centers, whole teams of big heads scramble to find ways to shave off a minute of down time a month.

Grab a tablet and head to the barbershop. Continue working on that same document in the state you left it instead of thumbing through old Playboys or Cosmos. Pick up again in the kitchen at home with fancy hair.

Security

Cyber security officers have nightmares about employees storing sensitive information on personal devices that fall into the hands of a competitor or hacker. Employees are easily prohibited from saving anything from their virtual desktop to the local machine where they are working. With reliable and fast network connections everywhere, employees have no reason to save anything privately.

Nor do security officers need to worry about patching vulnerabilities on employee gear. As long as the employee’s credentials are not stored on the employee’s device, which is relatively easy to prevent, there is nothing for a hacker to steal.

The downside

What’s the downside? The network. You have to be connected to work, and you don’t want to see swirlies in the middle of something important while your data buffers and reroutes somewhere north of nowhere.

However. All the tea leaves say those issues are on the way to becoming as isolated as the character interface on your electric teapot.

The industry is responding to the notion of Windows as a desktop service. See Windows 365 and a more optimistic take on Win365.

Now think about this for a moment: why not a personal Windows virtual desktop? Would that not solve a ton of problems for Microsoft? With complete control of the Windows operating environment, their testing is greatly simplified. A virtual desktop local client approaches the simplicity of a dumb terminal and could run on embarrassingly modest hardware. Security soars. A process running in a secured data center is not easy to hack. The big hacks of recent months have all been on lackadaisically secured corporate systems, not data centers.

It also solves a problem for me. Do I have to replace my ancient, but beloved, T410? No, provided Microsoft prices personal Windows 365 reasonably, I can switch to Windows 365 and continue on my good old favorite device.

Marv’s note: I made a few tweaks to the post based on Steve Stroh’s comment.

Detecting Bogus Email

I’ve noticed from the flood of complaints in the news, on social media, and talking to friends, that dangerous email is worse than ever. The pandemic has shifted the bad hackers into high gear. I can help stem the flood.

I may be struck down for this hubris, but I’ve never been tricked by a bogus email, even though I’ve sent and received email almost from the day it was invented. I don’t have a special talent, only a suspicious character and a bit of technical knowledge. I’ve evolved some robust techniques for weeding out the bad emails.

I’m not talking about spam. Spam is unrequested commercial email, which is annoying, but not vicious. I’ll even admit that a few times, I’ve welcomed a spam message that brought me something new. The stuff I’m concerned with today is fraudulent and malicious email that is intended to do harm rather than legitimately sell a product or service you don’t want.

These emails are often called “phishing,” a term that is a little too cute for a farm boy who shoveled chicken droppings every Saturday morning until he left the farm for college.

Email is convenient. I remember when we had only a few choices for communicating: go to see the person, call them on the telephone, or send them a letter. Each method was useful, charming, and pestilential at times, sometimes all at once. I gripe about my overflowing email inbox, but clicking away the chaff is a lark compared to a lineup at my desk or a phone ringing constantly. Writing letters was, and still is, an art, but it’s called snail mail for a reason. As annoying as it can be, and handy as Slack and other messaging style services are, email is still a communications workhorse.

Mail, telephone, and in-person fraud, harassment, and other scatter-shot deviltry abounded long before email. The worst of us never tire of devising new mischief to soil other peoples’ lives, but the rest of us have developed instincts, habits, customs, and laws that civilize our lives and tamp down the shenanigans that plague us.

However, instincts, habits, customs, and laws have not kept up with electronic innovation. Here, I’ll explain how I keep up with the email crooks.

I have a series of steps I go through with email. I divide the process into three phases: suspicion, confirmation, and reaction.

Suspicion

Do I expect this email? Do I know the sender?

If it’s Tuesday and I always get an email from my friend Peter on Tuesday, I feel safe reading it. Actually, at least half of my inbox is expected email from known senders. Faking a phone call or handwritten letter is more difficult than faking an email because voices and handwriting are laden with familiar clues to identity, but faking an email from a friend, outside of spy fiction, is still extremely difficult. Trust your intuition, it’s more powerful than you may think. If something feels off, check it out.

However, intuition breaks down as relationships get more remote, especially in impersonal business email, but you have a great advantage: criminals are seldom as fastidious as legitimate email users. They’re in it for easy money and they usually don’t care about the impression they make or attracting return customers.

As a consequence, they don’t pay proofreaders and formatting professionals to ensure that their emails are perfect. Few businesses will send out emails with misspellings or sloppy formatting, but criminals often do. At best, they will copy an existing piece of legitimate email and make a few changes. If you spot misspellings, grammatical errors, misalignment of type, uneven borders, colors that are not quite right, be suspicious.

Why was this email sent? What’s its point? Does the sender want me to do something? Is there money involved?

Always be suspicious of any transaction you did not initiate. People and businesses are like slugs. They almost always react to stimulus from their friends and customers, but they seldom reach out unless they have something new to sell to you. Whenever there is money involved, be certain you understand exactly what the transaction is and why you are engaged in it.

Confirmation

If suspicion has set off alarm bells, check it out.

Uniform resource identifiers

Every savvy computer user should know a little about Uniform Resource Identifiers, or URIs. Although URI is technically correct, everyone calls them URLs (Uniform Resource Locators). Computing and network engineers have been evolving and improving the concept for over thirty years. URLs are a formal way of unambiguously naming almost anything and a key to computer-based communication.

We are all familiar with them, whether we realize it or not. We all know web addresses like https://example.com. And email addresses like mailto:marv@marvinwaschke.com. Librarians know ISBNs (International Standard Book Numbers). Even telephone numbers are now examples of naming systems that follow the URL standard.

Well. That’s fine for engineers and librarians, but what about ordinary users? Why should they know about URLs? Because knowing what a legitimate URL looks like often makes a fraud stand out like a black eye.

In another post, I’ve detailed reading URLs. Check out how here.
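As a quick illustration, here is a minimal sketch in Python, using only the standard library, of pulling the hostname out of a URL. The URLs are made-up examples of mine, not real addresses. The hostname, not an official-looking path, tells you who you are really talking to.

```python
from urllib.parse import urlparse

def host_of(url):
    """Return the hostname portion of a URL -- the part that
    identifies who you are actually talking to."""
    return urlparse(url).hostname

# A long, official-looking path proves nothing; only the host matters.
print(host_of("https://example.com/account/security/verify?id=42"))
# example.com

# Here the real host is the unfamiliar name at the end, not "example.com".
print(host_of("https://example.com.suspicious-host.test/login"))
# example.com.suspicious-host.test
```

The second URL is the classic trick: a trusted name pasted onto the front of a host the criminal controls. Read hostnames from the right.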

Recent hacker tricks

Lately, I’ve noticed that hackers have gotten very fancy with the characters in their URLs. I could indulge in a technical discussion of fonts versus character sets at this point, but I will simply say, look carefully at the characters in URLs. If I see an accent, squiggle, superscript, or an extra curlicue anywhere, I assume I am under criminal attack. Legitimate URLs and text avoid this. Hackers love it.
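For the technically curious, that check can be sketched in a few lines of Python. This is my own toy illustration; the Cyrillic letter in the second example stands in for the kind of look-alike character hackers slip into URLs.

```python
def odd_characters(url):
    """List every character in the URL that falls outside plain
    printable ASCII -- accents, squiggles, look-alike letters."""
    return [(i, ch) for i, ch in enumerate(url)
            if not (0x20 <= ord(ch) <= 0x7e)]

print(odd_characters("https://example.com"))
# [] -- nothing odd, every character is plain ASCII

# The 'a' below is actually Cyrillic U+0430, not the Latin 'a' it resembles.
print(odd_characters("https://ex\u0430mple.com"))
# [(10, 'а')] -- flagged at position 10
```

A flagged character is not proof of fraud, but in a URL it is reason enough to stop and confirm before clicking.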

Circle back

Legitimate businesses have no problem confirming their enquiries. For example, if you get a question about your account with XYZ company, call their publicly listed number— not the one a hacker gives you— and ask for an explanation. You may be bounced from desk to desk and have to wait on hold, but eventually you will get an answer. Either a confirmation of a legitimate issue, or a statement that you can ignore the bogus email.

If XYZ is a company I would continue to deal with, the answer will be prompt, courteous, and helpful. If the process is difficult or the responses are impolite, I would look for an alternative for my future business. However, I always wade through to the end before accepting that an email is a hack. Personally, I will tolerate dreck to deal with a situation, but I will take steps to avoid future dreck.

Reaction

I am stubborn. I won’t knuckle under to cybercrime. When I am subjected to cyber assault, I report it and do my best to stop it. Frankly, with the state of cyber crime laws and enforcement, I don’t expect to see immediate results. I seldom anticipate that the criminal who assaulted me or my equipment will be punished, but I want to see cyber laws and enforcement strengthened. I hope international organizations will be formed or strengthened to punish or neutralize off-shore criminals. Nothing will change if crimes go unreported.

Two main routes can be used to report cybercrimes. I use both.

You can report crimes to law enforcement. I went into the details of reporting to local and federal law enforcement here. The Federal Trade Commission has a site for reporting identity theft and aids in recovery. They also have a site for reporting fraud.

Another way to report cybercrime is to report it to the organization that is affected. For example, if I received an email about Microsoft Office from m1crosoft.com (notice the “one” instead of an “i”), I would forward the message to phish@office365.microsoft.com . Many companies, especially tech-oriented companies, have facilities for reporting fraudulent emails. I use Google to find the proper procedure. American Express, as another example, requests fraudulent mail be forwarded to spoof@americanexpress.com.

Our local, state, and federal governments and these companies all want to shut down the criminals. But they can’t unless we refuse to tolerate this form of crime and report it. Tedious, but worth it.

Free Digital Services: TANSTAAFL? Not Exactly

For the last two decades, we’ve been in an era of free services: internet searches, social media, email, storage. Free, free, free. My parsimonious head spins thinking of all that free stuff.

Social and technical tectonic plates are moving. New continents are forming, old continents are breaking up. Driven by the pandemic and our tumultuous politics, we all sense that our world is changing. 2025 will not return to 2015. Free services will change. In this post, I explain that one reason free digital services have been possible is that they are cheap to provide.

I’m tempted to bring up Robert Heinlein, the science fiction author, and his motto, TANSTAAFL (There Ain’t No Such Thing As A Free Lunch), as a principle behind dwindling free services, but his motto does not exactly apply.

Heinlein’s point was that free sandwiches at a saloon are not free lunches because customers pay for the sandwiches in the price of the drinks. This is not exactly the case with digital free lunches. Digital services cost so little to provide that they can appear to be free. Yes, you pay for them, but you pay so little they may as well be free.

A sandwich sitting beside a glass of beer costs the tavern as much as a sandwich in a diner, but a service delivered digitally costs far less than a physical service. Digital services are free like dirt: dirt looks free because it is abundant and easily obtained, not because we are surreptitiously charged for it.

Each physical service sale has a fixed cost for reproduction and delivery. Digital services have almost no cost per sale for reproduction and delivery, until the provider has to invest in expanded infrastructure.

Free open source software and digital services

One factor behind free digital services is the free software movement, which began in the mid-1980s with the formation of the Free Software Foundation. The “free” in Free Software Foundation refers more to software that can be freely modified than to products given away without charge, but most software under the Free Software Foundation and related organizations is available for free. Much open source software is written by paid programmers, and is therefore paid for by someone, but not in ways that TANSTAAFL might predict.

The workhorse utilities of the internet are open source. Most internet traffic is Hypertext Transfer Protocol (HTTP) packets transmitted and received by HTTP server software. The majority of this traffic is driven by open source Apache HTTP servers running on open source Linux operating systems. Anyone can get a free copy of Linux or Apache. Quite often, servers in data centers run only open source software. (Support for the software is not free, but that’s another story.)

So who pays for developing and updating these utilities? Some of the code is volunteer written, but a large share is developed on company time by programmers and administrators working for corporations like IBM, Microsoft, Google, Facebook, Amazon, and Cisco, coordinated by foundations supported by these same corporations. The list of contributors is long. Most large, and many small, computer companies contribute to open source software utilities.

Why? Because they have determined that their best interest is to develop these utilities through foundations like the Linux Foundation, The Apache Foundation, Eclipse Foundation, Free Software Foundation, and others, instead of building their own competing utilities.

By collaborating, they avoid wasting money on each company researching and writing similar code, and they avoid the vulnerable business position of depending on utilities built by present or potential competitors. It’s a little like the interstate highway system. Trucking companies don’t build their own highways, and, with the exception of Microsoft, computer companies don’t build their own operating systems.

Digital services are cheap

Digital services are cheap, cheaper than most people imagine. Free utilities contribute to the cheapness, mostly because they make service delivery more efficient. The companies that use the open source utilities the most pay for them in contributed code and expertise amounting to millions of dollars per year, but they make it all back: the investment gives them an efficient, standardized infrastructure on which to build and deliver services that compete on user-visible features, and those features yield market advantages. They don’t have to compete on utilities that their customers only care about when they fail.

Value and cost of digital services are disconnected

We usually think that the cost of a service is in some way related to the value of the service. Digital services break that relationship.

Often, digital services deliver enormous value at minuscule cost, much lower cost than comparable physical services. Consider the physical Encyclopedia Britannica versus the digital Wikipedia, two products which offer similar value and functionality. A paper copy of Britannica could be obtained for about $1400 in 1998 ($2260 in 2021 dollars). Today, Wikipedia is a pay-what-you-want service, suggested contribution around $5 a month, but only a small percentage of Wikipedia users actually donate to the project.

Therefore, the average cost for using Wikipedia is microscopic compared to a paper Britannica. You can argue that Encyclopedia Britannica has higher quality information than Wikipedia (although that point is disputable) but you have to admit that the Wikipedia service delivers a convenient and comparable service whose value is not at all proportional to its price.

Digital reproduction and distribution is cheap

The cost of digital services behaves differently than the cost of physical goods and services we are accustomed to. Compare a digital national news service to a physical national newspaper.

The up-front costs of acquiring, composing, and editing news reports are the same for both, but after a news item is ready for consumption, the cost differs enormously. A physical newspaper must be printed. Ink and paper must be bought. The printed items must be loaded on trucks, then transferred to local delivery, and hand carried to readers’ doorsteps. Often printed material has to be stored in warehouses waiting for delivery. The actual cost of physical manufacture and delivery varies widely based on scale, the delivery area, and operational efficiency, but there is a clear and substantial cost for each item delivered to a reader.

A digital news item is available on readers’ computing devices within milliseconds of the editor’s keystroke of approval. Even if the item is scheduled for later delivery, the process is entirely automatic from then on. No manpower costs, no materials cost, minuscule network delivery costs.

The reproduction and network delivery part of the cost of an instance of service is often too small to be worth the trouble to calculate.

My experience with network delivery of software is that the cost of reproducing and delivering a single instance of a product is so low, the finance people don’t want to bother to calculate it. They prefer to lump it into corporate overhead, which is spread across products and is the same whether one item or a million items are delivered. The cost is also the same whether the item is delivered across the hall or across the globe. Physical items often sit in expensive rented warehouses after they are reproduced. Digital products are reproduced on demand and never stored in bulk.

Stepwise network costs

Network costs are usually stepwise, not linear, and not easily allocated to the number of customers served. Stepwise costs mean that when the network infrastructure is built to deliver news items to 1000 simultaneous users, the cost stays about the same for one user or 1000 users because much of the cost is a capital investment. Unused capacity costs about the same as used capacity: the capacity for the 1000th user must be paid for even though the 1000th user will not sign on for months.

The 1,001st user will cost a great deal, but after paying to scale the system up to, say, 10,000 users, costs won’t rise much until the 10,001st user signs on, at which point another network investment is required. Typically, the cost increment for each step gets proportionally smaller as efficiencies of scale kick in.

After a level of infrastructure is implemented and paid for, increasing the readership of a digital news service does not increase costs until the next step is reached. Consequently, the service can add readers for free without adversely affecting profits or cash flow, although each free reader brings the service closer to a doomsday when it must invest to expand capacity.

Compare this to a physical newspaper. As volume increases, the cost per reader goes down as efficiency improves, but unlike a digital service, each and every new reader increases the overall cost of the service, because manufacturing and distribution costs rise with each added copy. The rate of overall increase may go down as scale increases, but the increment is always there.
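The contrast between stepwise digital costs and per-copy physical costs is easy to see in a toy model. The step size, step cost, and per-copy figures below are illustrative numbers I made up for the sketch, not real publishing economics:

```python
import math

def digital_cost(readers, step_size=1000, step_cost=10_000.0):
    """Stepwise infrastructure cost: each block of step_size readers
    requires one capital investment of step_cost. Within a step,
    adding readers costs nothing extra."""
    steps = max(1, math.ceil(readers / step_size))
    return steps * step_cost

def physical_cost(readers, fixed=10_000.0, per_copy=1.50):
    """Physical delivery: a fixed base plus a real cost for every
    single copy printed, trucked, and carried to a doorstep."""
    return fixed + readers * per_copy

# Readers 1 through 1,000 all cost the digital service the same.
print(digital_cost(1) == digital_cost(1000))    # True
# The 1,001st reader triggers the next infrastructure step.
print(digital_cost(1001) > digital_cost(1000))  # True
# The physical paper pays something for every single new reader.
print(physical_cost(1001) > physical_cost(1000))  # True
```

In a fuller model the step cost would shrink proportionally at each step, reflecting the economies of scale mentioned above, but the flat-within-a-step shape is the point.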

Combine the fundamentally low cost of digital reproduction and distribution with the stepwise nature of digital infrastructure costs, and digital service operators can offer free services far more easily and more profitably than physical service providers can.

The future

I predict enormous changes in free services in the approaching months and years, but I don’t expect a TANSTAAFL day of reckoning, because providing digital services is fundamentally much cheaper than providing physical services.

I have intentionally not discussed the ways in which service providers get revenues from their services, although I suspect most readers have thought about revenue as they read. The point here is that digital services are surprisingly cheap and their economic dynamics are not at all like that of physical goods and services.

As digital continents collide, I expect significant changes in free digital services, changes that will come mainly from the revenue side and our evolving attitudes toward privacy and fairness in commerce. These are subjects for future posts.

Home Network Setup: Smart Kitchen Crisis

You may call it a smart kitchen. I call it a home network setup disaster: four hackable Linux computers installed and configured by kitchen appliance designers who are, at best, inexperienced in computer security. And I am ashamed to admit I didn’t put them through a security audit before we chose them. We wanted a convenient and efficient kitchen; I knew full well that my security worries would not have a voice in any decision.

Home network setup
Cool cat in smart refrigerator

Last week, Rebecca and I went shopping for new kitchen appliances: a refrigerator, range, hood, and microwave. We are not much attracted by network-connected kitchen appliance features—I suppose we’re old-fashioned in our cooking habits—but the appliances we chose all have Wi-Fi.

We had no choice. Appliances that are not networked are scarce in 2020. You either accept that your kitchen will be networked, or you shop for used appliances. Since replacing one set of used appliances with another set of used appliances was not on our agenda, we have four Internet of Things (IoT) devices scheduled for delivery.

IoT device security

Now, I am forced to think seriously about securing the home network setup of our kitchen against cyber-attack. Forty years ago, when the industry began to hook computers together with TCP/IP and Ethernet, I would never have guessed that home kitchen security would become a topic in 2020.

Why am I worried? I am not as frantic as my well-known colleague, Bruce Schneier, who wrote a popular book about the Internet of Things called Click Here to Kill Everybody, but I share his concerns. Most IoT devices are full-fledged multi-purpose computers: as powerful, versatile, and hackable as the workstations of only a few years ago.

The computer in our new three-speed range hood is more powerful than the coveted Sun SPARC that sat on my desk at Boeing Computer Services in the 90s. The computer in that range hood is also subject to almost any hack reported in the news over the last decade. Ransomware has shown up on a coffeemaker, of all places.

IoT botnets

To top it off, some security professionals expect large IoT botnets will be used in attempts to disrupt the U.S. national election next month by scrambling voter registration or bringing down vote tallying software.

A botnet is a collection of compromised computers under the central control of a botmaster who orchestrates the hacked devices. Thus, botnets are huge covert supercomputers that execute crimes like sending out waves of spam or jamming websites with meaningless traffic. Before the IoT, criminal gangs grabbed control of personal computers and enrolled them in botnets by tricking users into opening fake email or installing doctored applications. It’s easier now.

IoT devices have simplified criminal botnet recruiting. Some of these devices are so poorly secured, criminals can scan the network for vulnerable targets, then take over using default accounts and pathetically weak default passwords. In this way, enormous IoT botnets can be formed quickly with automated scripts.

Users don’t notice that their IoT devices have been invaded because they seldom interact directly with the device. We might never notice that our sleek new smart refrigerator has become a robot thug at the beck and call of a foreign national in a dacha overlooking the Volga river.

The IoT is growing uncontrollably

Log on to your home network router and look at the list of connected devices. I imagine our list is longer than most because my home office is practically a development lab, with an assortment of Windows 10 laptops and desktops and a Linux tower I use as a server. Rebecca and I each have two smartphones (one for phone calls, another without a cellular card for fun), and we also have several tablets distributed in various rooms. We also have smart TVs, Amazon Fire Sticks, and Alexas.

Every time I look at the router’s device list, it has grown longer. What used to be a cute two-line list of his and hers computers has become a configuration management database worthy of a fair-sized business. In the old days, I could glance at the list and know instantly that some bright neighbor kid was filching bandwidth. Now, puzzling it out is a job in itself. When our new appliances are installed, I imagine making sense of the network will get more difficult.
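One way to puzzle out a long device list without squinting at the router’s web page is to read the machine’s own ARP table. The sketch below parses the Linux/macOS-style output of `arp -a` into tuples you can sort and compare against a known-device list; the hostnames and addresses in the comment are invented examples, and real `arp -a` output varies by platform:

```python
import re

def parse_arp_table(arp_output):
    """Parse `arp -a` output (Linux/macOS style) into a list of
    (hostname, ip, mac) tuples for auditing what is actually
    on the home network."""
    # Typical line:
    # fridge.lan (192.168.1.23) at ab:cd:ef:01:23:45 [ether] on en0
    pattern = re.compile(
        r"^(?P<host>\S+) \((?P<ip>[\d.]+)\) at (?P<mac>[0-9a-fA-F:]+)"
    )
    devices = []
    for line in arp_output.splitlines():
        m = pattern.match(line.strip())
        if m:
            devices.append((m.group("host"), m.group("ip"), m.group("mac")))
    return devices

sample = (
    "fridge.lan (192.168.1.23) at ab:cd:ef:01:23:45 [ether] on en0\n"
    "? (192.168.1.50) at 11:22:33:44:55:66 [ether] on en0\n"
)
for host, ip, mac in parse_arp_table(sample):
    print(host, ip, mac)
```

Unknown MAC addresses in the result are exactly the entries worth investigating, whether they belong to a new range hood or a bright neighbor kid.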

Home network setup crisis

Frankly, I’ve reached a home network management crisis. I no longer feel in control. I’m not sure I will know if I’ve been hacked.

This must change.

Fortunately for me, I’ve helped large enterprises manage their networks for a long time. My quiver has some razor sharp arrows. I can figure this out. No three-speed range hood will bring down our network.

I’ll keep you posted. In the meantime, basic computer hygiene will have to do. Check out the six rules; they go a long way toward keeping you safe.