The Task List Reveals a Computer’s Beating Heart

Windows 10 Task List

Like an echocardiogram that shows the blood flowing through a beating heart, the task list shows the flow of activity on a computer. On Apple products, the equivalent is the Activity Monitor. In Unix and its derivatives, such as Linux and Android, the task list is usually called the process list. In all of these operating systems, a task, activity, or process, whichever you want to call it, is an executing program. Most of the time, there are a lot of them.
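
If you like to poke at these things from code, here is a minimal sketch that lists running processes much as a task list does. It assumes the third-party psutil package (pip install psutil); any process API would do.

    # List running processes, roughly what a task list shows.
    import psutil

    for proc in psutil.process_iter(attrs=["pid", "name"]):
        info = proc.info  # a dict holding the attributes requested above
        print(f"{info['pid']:>6}  {info['name']}")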

Processors

A processor can run only one process at a time, but it switches between processes so rapidly that it looks as if it is running many processes at once. All the processes on the task list have been started but have not finished. Some are waiting for input, others are waiting for a chance to use a busy resource like a hard drive, but all are entitled to some time on the processor when their turn comes up.

Many computers today have more than one processor, which increases the number of processes that can run at one time and the amount of time a computer can give to each process. Different operating systems have different strategies for switching between processes, but all the strategies are like plate-spinning acts. The plate spinner hurries from plate to plate, giving each a spin when it begins to slow down and needs attention. (If you don’t know what plate spinning is, see it here.) The processor does the same thing, executing a few instructions for one program, then rushing on to the next.
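
The plate-spinning idea is easy to simulate. Here is a toy round-robin sketch, nothing like a real scheduler’s implementation, in which each process gets a small time slice in turn until its work is done:

    # Toy round-robin scheduler: each "process" has some work remaining,
    # and the "processor" gives each one a small time slice in turn.
    from collections import deque

    processes = deque([("browser", 7), ("editor", 3), ("backup", 5)])
    TIME_SLICE = 2

    while processes:
        name, remaining = processes.popleft()
        ran = min(TIME_SLICE, remaining)
        print(f"running {name} for {ran} ticks")
        if remaining > ran:
            processes.append((name, remaining - ran))  # back in line for another spin
        else:
            print(f"{name} finished")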

Processes

All the processes, both active and waiting, show up on the task list. That includes malware as well as legitimate processes. If you can spot a bad guy on the process list, you can kill it. The kill may not be permanent, since some processes can regenerate themselves, but it is usually worth a try. The challenge is to sort the good from the bad. Unless you know what you are shooting at, you might crash the entire computer or lose data, so be careful. You could find yourself restoring your entire system from a backup. Nevertheless, this is one area where you can strap on your weapons and wage open warfare against malware.
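
For the curious, here is what the kill looks like programmatically, again sketched with psutil. The process name is a made-up example, and on a real system you would check it against trustworthy sources first:

    # Sketch: find processes by name and ask them to exit.
    # "suspicious.exe" is a made-up example name.
    import psutil

    TARGET = "suspicious.exe"

    for proc in psutil.process_iter(attrs=["pid", "name"]):
        if proc.info["name"] == TARGET:
            try:
                print(f"terminating {TARGET} (pid {proc.info['pid']})")
                proc.terminate()  # a polite request; proc.kill() is the forceful version
            except psutil.Error:
                pass  # the process already exited, or we lack the privilege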

When I see an unfamiliar process on the task list, I usually run to Google. Most of the time, the results tell me that the process is something innocuous that I hadn’t noticed before, but not always. By the way, be a little careful when Googling. There are questionable companies out there with sites that will appear in the search results and try to take advantage of you by offering unnecessary cleanup services or dubious downloads. Microsoft will give you trustworthy advice, as will the established antivirus companies, but avoid sending money to or installing programs from places you have never heard of. Some may be legitimate, but not all. Above all, don’t let anyone log into your computer remotely without rock-solid credentials.

CPU Time

The task list tells you more than just the names of the running processes. There are a number of readouts on the state of each process and the resources it is using. The ones I usually look at first are the percentage of CPU time being taken and the accumulated CPU time. (Click a column header to sort the processes by that metric.) Both of these metrics show the amount of time a process consumes on the processor, that is, the time the plate spinner has spent spinning that plate. A program that consumes more CPU time is using an extra share of the system’s most critical resource. Shutting down a high CPU consumer will do more to improve your computer’s performance than halting a low CPU consumer.
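
Here is a sketch of the programmatic equivalent of clicking the CPU column header, once more assuming psutil. cpu_percent() measures usage since its previous call, so the sketch primes the counters, waits a moment, then reads:

    # Sketch: find the top CPU consumers, like sorting the task list by CPU.
    import time
    import psutil

    procs = list(psutil.process_iter())
    for proc in procs:
        try:
            proc.cpu_percent(interval=None)  # prime the per-process counter
        except psutil.Error:
            pass

    time.sleep(1.0)  # let the counters accumulate

    readings = []
    for proc in procs:
        try:
            # proc.cpu_times() would give the accumulated figure instead
            readings.append((proc.cpu_percent(interval=None), proc.name(), proc.pid))
        except psutil.Error:
            pass  # the process exited or is off limits

    for pct, name, pid in sorted(readings, reverse=True)[:5]:
        print(f"{pct:5.1f}%  {name} (pid {pid})")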

Some high-consumer processes are legitimate. For example, you will often see a browser using a lot of CPU time. That is because a browser does a lot of work. Any computing that takes place in web page interactions is chalked up to the browser. Some internal system processes, such as “system interrupts,” also do a lot of work and rank high in CPU consumption. If you see an installed application hogging the CPU, check its configuration. There may be adjustments that will reduce consumption. Google will help you find what to do, but keep track of your changes so you can undo them if they don’t work. If you don’t use the program much, perhaps turning it off would be a good idea. When a high consumer happens to be malware, put a high priority on scrubbing it out. High CPU-consuming malware is like a blockage in a coronary artery. You’ll feel much better without it.

Next Time

There’s more to the task list. Next time.

Personal Cybersecurity: How To Create a Local Account

I am turning in a new direction in this blog. Up to now, I have been writing for software architects and engineers. Apress has given me a contract to write a book called Personal Cybersecurity, and I’ve just given a series of presentations on personal computer security at my local library. My previous two books, Cloud Standards and How Clouds Hold Together IT, are both aimed at enterprise software engineers. I hope Personal Cybersecurity will appeal to computer users in general, not just engineers. I gave the library presentations to see what happens when I talk to people who don’t speak engineers’ jargon. When I returned to my office after the last presentation, I was fretting about all the important things I had left out. For the next few blogs, I’ll write for the folks who attended my presentations: computer users who know how to use their devices but are not software engineers or developers. I’ll try to fill in some gaps in the presentation, and perhaps go further.

For people who did not attend the presentations, but would like to read the PowerPoints, here are One, Two, and Three.

Create a Local Admin Account Tutorial

The first gap I want to fill is a step-by-step tutorial for setting up a local admin account on Windows 10. In the presentation, I warned that running as administrator all the time can make a hacker’s life easier by freely offering them administrator privileges. I forgot to mention that creating a local account in Windows 10 is like thrashing through a maze blindfolded with rocks in your shoes. The twists and turns are hard to follow. Here is a link to a PowerPoint tutorial that shows every step in Windows 10. Earlier versions of Windows are less convoluted, but the steps are roughly the same. See the tutorial here.

Coming Soon

I am wincing over the meager advice I gave on detecting when a device has been hacked. It will take more than one blog to compensate. In the next blog, I plan to write about one of my favorite tools for spotting a hack beyond antivirus: the Windows task list. I’ll try to keep the discussion simple, but the task list is an advanced and powerful tool, and using it well takes some understanding of how computers work. The extra effort will be worth it, though: understanding the task list can help with more than just hacks.

But that is for next time.

How is OVF different?

The number of standards, open source implementations, and de facto standards for cloud management is growing: CIMI, OCCI, TOSCA, OpenStack, Open Cloud, Contrail. OVF (Open Virtualization Format) is a slightly older standard that plays a unique role, but architects and engineers often misunderstand that role.

Packaging Format

What is OVF and how is it useful? OVF is a virtual system packaging standard. It’s easy to get confused about exactly what that means. A packaging standard is not a management standard, and it is not a software stack. The OVF standard tells you how to put together a set of files that clearly and unambiguously define a virtual system. An OVF package usually contains disk images of the virtual machines that make up the system. A package also contains a descriptor that defines the virtual hardware configurations that will support the images and describes how the entire system should be networked and distributed for reliability and performance. Finally, there are security manifests that make it hard for the bad guys to circulate an unauthorized version of the package that is not exactly what the original author intended.
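
To make the descriptor less abstract, here is a sketch in Python that peeks inside one. The file name package.ovf is a placeholder, and the element names follow the OVF 1.x envelope schema:

    # Sketch: list the files and virtual systems an OVF descriptor references.
    import xml.etree.ElementTree as ET

    OVF = "{http://schemas.dmtf.org/ovf/envelope/1}"  # OVF 1.x namespace

    envelope = ET.parse("package.ovf").getroot()

    print("Files in the package:")
    for f in envelope.iter(OVF + "File"):
        print(" ", f.get(OVF + "href"))

    print("Virtual systems:")
    for vs in envelope.iter(OVF + "VirtualSystem"):
        print(" ", vs.get(OVF + "id"))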

A packaging format is not designed to manage the systems it deploys. To expect it to do that would be like trying to manage applications deployed on a Windows platform from the Control Panel “Uninstall A Program” page. There are other interfaces for managing applications. The OVF standard also does not specify the software that does the installation. Instead, it is designed as a format to be used by many virtualization platforms.

Interoperability

OVF is closely tied to the virtualization platforms on which OVF packages are deployed. The DMTF OVF working group has members from most significant hypervisor vendors and many significant users of virtualized environments. Consequently, an OVF package can be written for almost any virtualization platform following the same OVF standard. The standard is not limited to the common features shared among the platform vendors. If that were the goal, OVF packages would be interoperable; that is, a single OVF package would run equally well on many different platforms. But because platform features vary widely, such a package would of necessity be limited to the lowest common denominator. In fact, interoperable OVF packages can be written, but they seldom are, because most OVF users want to exploit the unique features of the platform they are using.

An OVF Use Case

Is an OVF package that is not interoperable among platforms useful? Of course it is! Let’s look at a use case.

Here’s an easy one. I have a simple physical system, let’s say a LAMP stack (Linux, Apache, MySQL, and Perl, Python, or PHP) that requires at least two servers (one for an HTTP server, the other for a database). I use this configuration over and over again in QA. If I want to go virtual, an OVF package for this LAMP stack implementation is simple. After the package is written, instead of redeploying the system piece by piece each time I need a new instance, I hand the OVF package to my virtualization platform and it deploys the system as specified, exactly the same, every time. This saves me time and equipment as I perform tests every few weeks that need a basic LAMP stack. When the test is over, I remove the LAMP stack implementation and use the physical resources for other tests. Then I deploy my LAMP stack package again the next time I need it.
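
As a sketch of what authoring that package might involve, here is Python code that generates a skeleton two-machine descriptor. The ids and file names are illustrative, and a real descriptor would also need disk, network, and virtual hardware sections:

    # Sketch: generate a skeleton descriptor for a two-machine LAMP package.
    # Structure follows the OVF 1.x envelope; the details are illustrative.
    import xml.etree.ElementTree as ET

    NS = "http://schemas.dmtf.org/ovf/envelope/1"
    ET.register_namespace("ovf", NS)

    envelope = ET.Element(f"{{{NS}}}Envelope")
    refs = ET.SubElement(envelope, f"{{{NS}}}References")
    # Multiple virtual systems live inside a VirtualSystemCollection.
    lamp = ET.SubElement(envelope, f"{{{NS}}}VirtualSystemCollection",
                         {f"{{{NS}}}id": "lamp"})

    for vm, disk in (("web", "web-disk.vmdk"), ("db", "db-disk.vmdk")):
        ET.SubElement(refs, f"{{{NS}}}File",
                      {f"{{{NS}}}id": f"{vm}-file", f"{{{NS}}}href": disk})
        vsys = ET.SubElement(lamp, f"{{{NS}}}VirtualSystem", {f"{{{NS}}}id": vm})
        ET.SubElement(vsys, f"{{{NS}}}Info").text = f"{vm} server for the LAMP stack"

    ET.ElementTree(envelope).write("lamp.ovf", xml_declaration=True, encoding="utf-8")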

Of course, I could do all this with a script, but in most similar environments, formats and tools have supplanted scripts. Look at Linux application deployments. I remember writing shell scripts for deploying complex applications, but now most applications are deployed with standard tools and formats, like .deb files on Debian, which correspond fairly closely to OVF packages. On Windows, .bat files have been replaced with tools like InstallShield or Windows Installer. Why? Because scripts are tricky to write and hard to understand and extend.

OVF provides similar advantages. With an OVF package, authors don’t have to untangle idiosyncratic logic to understand and extend a package. They don’t have to reinvent the wheel for every new script, and they can exchange packages with other authors because the packages are written in a common language.

Interoperability between virtualization platforms would be nice, but I still get great benefits without it.

An International Standard

It is also important to realize that OVF has significant visibility as a national and international standard. OVF has been through rigorous reviews by a national standards body (ANSI) and an international standards body (ISO/IEC). You may ask if that is significant. Who cares? After all, don’t hot products depend on cool designs and code, not stuffy standards? Maybe. The next hot product may not depend on OVF, but I’ll guarantee that, if not today, then sometime in the near future, you will interact with some service that is more reliable and performs better because it has an OVF package or two in the background. Hot prototypes may not depend on OVF, but solid production services depend on reliable and consistent components, which is exactly what OVF is for.

Every builder of IT services that run on a cloud or in a virtual environment should know what OVF can do for them and consider using it. They should look at OVF because it is a stable standard, accepted in the international standards community, not just the body that published it (DMTF), and, most of all, because it will make their products better and easier to write and maintain.

BYOD: The Agreement

BYOD, Bring Your Own Device, is important, but it has its growing pains.

BYOD is, in a sense, a symmetric reflection of enterprise cloud computing. In cloud computing, the enterprise delegates the provision and maintenance of backend infrastructure to a cloud provider. In BYOD, the enterprise delegates the provision and maintenance of frontend infrastructure to its own employees. In both cloud and BYOD, the enterprise and its IT team lose some control.

BYOD has issues similar to the basic cloud computing and outsourcing problem: how does an enterprise protect itself when it grants a third party substantial control of its business? For cloud, the third party is the cloud provider; for outsourcing, it is the outsourcer; for BYOD, it is the enterprise’s own employees.

Nevertheless, enterprises have responded to BYOD and cloud differently. When an enterprise decides to embark on a cloud implementation, it is both a technical and a business decision. On the technical side, engineers ask questions about supported architectures and interfaces, adequate capacities, availability, and the like. On the business side, managers examine billing rates and contracts, service level agreements, security issues, and governance. Audits are performed, and future audits planned. Only after these rounds of due diligence are cloud contracts signed. Sometimes the commitments are made more casually, but best practice has become to treat cloud implementations with businesslike due diligence.

On the BYOD side, similar due diligence should occur, but the form of that due diligence has yet to shake out completely. A casual attitude is common. BYOD looks like a win on the balance sheet and the cash flow statement, and a spike in employee satisfaction. This enthusiasm has meant that BYOD policy agreements, the equivalent of cloud contracts and service level agreements, are not as common as might be expected.

This is understandable. The issues are complex. BYOD becomes safer for the enterprise as the stringency of the BYOD policy increases. However, a stringent policy is not so attractive to employees. It can force them to purchase from a short list of acceptable devices with an equally short list of acceptable apps, accept arbitrary scans of their devices, and even agree to an arbitrary total reset of a device by the enterprise. With that kind of control, employees may not be so enthusiastic about BYOD. At the same time, privacy issues may arise, and there is some speculation that current hacking laws might prevent employers from intruding on employee devices.

There are also complex support issues. Must the employer replace or repair the employee’s device when it is damaged on the employer’s premises while performing work for the employer? This situation is very similar to a cloud outage in which the consumer and provider contend over whether the consumer’s virtual load balancer or the provider’s infrastructure caused the outage. In the cloud case, best practice is to have contracts and service level agreements that lay down the rules for resolving the conflict. BYOD needs the same. The challenge is to formulate agreements that benefit both the enterprise and the employee.

In my current book and in this blog, I talk about some of the complexity of BYOD and how it complicates and challenges IT management. BYOD is a challenge, but it does not have to be a tsunami.

Some key questions are:

  • How much control does the enterprise retain over its data and processes?
  • What rights does the enterprise have to deal with breaches in integrity?
  • What responsibility does the enterprise have for the physical device owned by the employee?

There are reasonable answers for all these questions, although they will vary from enterprise to enterprise. When the answers take the form of signed agreements between the enterprise and the employees, IT can begin to support BYOD realistically. Security can be checked and maintained, incidents can be dealt with, and break/fix decisions are not yelling matches or worse.

With reasonable agreements in place, BYOD support can get real. There is more to say about real, efficient BYOD support that I hope to discuss in the future.