The Now and Future of Device Productivity

Early this decade, PC sales began a precipitous decline, prompted largely by the advent of smartphones and tablets. The slump led many prognosticators to forecast that mobility and productivity were merging: Soon, only the old and stubborn would still use their bulky, power-hungry desktop computers to do their work.

The young and progressive would do everything on a pocket-size smartphone or, for the most demanding tasks, a tablet. The future was on-the-go computing—and that (almost) exclusively.

Clearly, mobile devices have carved out a tremendous market for themselves, and although PC sales continue to decline, PCs will remain the workhorses of productivity for at least another five years. Let’s look at some of the reasons why.

1. Screen Size Is (Almost) Everything

If you’re watching a movie on Netflix or YouTube and you’re anything like me, you quickly forget that you’re viewing it on a tiny smartphone display rather than on a large television screen across the room.

The touchscreen also enables greater engagement with what you’re seeing, all in a highly portable device. For such reasons, mobile devices make great replacements for PCs when you’re consuming content. The question is whether they’re just as great when you’re creating content.

No matter how much a smartphone or tablet can do, it can’t (by itself) provide the kind of screen real estate that many people need to work comfortably. Even a large tablet won’t allow you to open several windows or panels in a convenient and accessible manner.

And getting good performance from a tablet, to say nothing of a smartphone, is a challenge. High-power models aim to deliver greater processing capability, but they begin to approach the size and weight of notebook PCs, trading off some mobility.

The utility of a particular device depends at least partly on the task. Writing an email or making a few basic photo edits is usually manageable on a mobile device and its small screen; researching and writing a long document or handling complex graphics usually isn’t.

In my own experience, just stepping down from my dual-monitor desktop configuration to a midsize notebook cuts my productivity considerably: navigating multiple windows and applications becomes far more laborious and time-consuming. Further reducing my display workspace to a tablet or smartphone screen would be unthinkable for all but the most basic tasks.

Different people have different preferences, but device form factor imposes certain physical limitations that are tough or impossible to overcome. The device size (i.e., screen size) therefore creates a ceiling for productivity.

2. Input Devices: Still Faster Than a Touchscreen

Yes, some smartphone typists are lightning fast with their thumbs, but on average and given roughly the same compute resources, someone with a keyboard and mouse can clean the clock of someone with a touchscreen. And that’s just for word processing and similar applications.

I can’t be the only one who has “fat fingers” when using a touchscreen. Ever in fear of going slightly too far to one side of my target and tapping some shady ad, I often long for the precision of a mouse when I’m using a smartphone or tablet.

For the same reason, a physical keyboard beats a screen keyboard when the task requires fast, accurate typing. Sure, some mobile devices can connect to a keyboard, but at that point you’re beginning to sacrifice the main selling point of a mobile device: mobility. Who wants to also carry a keyboard, even if it rolls up and fits in your pocket?

Is it any surprise that all those autocorrect howlers we hear about (and sometimes commit, to our horror) generally come from mobile devices? We tend to be more efficient, and therefore more productive, when we can type with more than just two thumbs. And the precise targeting of a mouse at least balances, if not outweighs, the convenience and engagement of a touchscreen for many tasks.
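For a rough sense of scale, consider some trivial arithmetic. The typing rates below are illustrative assumptions, not measured figures:

```python
# Illustrative typing-throughput comparison (WPM figures are assumed).

doc_words = 1_000
keyboard_wpm = 60   # a competent touch typist (assumed)
thumbs_wpm = 35     # a fast thumb typist (assumed)

print(f"{doc_words / keyboard_wpm:.0f} min on a keyboard")   # 17 min
print(f"{doc_words / thumbs_wpm:.0f} min on a touchscreen")  # 29 min
```

Even under generous assumptions for the thumb typist, the keyboard user finishes the same document in a little over half the time.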

3. Moore’s Law Is Dead

If it continues, the extraordinary progress of Moore’s Law could ameliorate some of these concerns. An average user only needs so much computing power for the task at hand (“good-enough computing”), whether it be for writing, web browsing, gaming or graphics processing.

Eventually, shouldn’t silicon-technology innovation yield chips that can provide the necessary capabilities while operating in a smartphone’s physical and thermal constraints?

Maybe. One damper on PC sales is that for years they’ve been good enough for many users, reducing the incentive to upgrade. For example, a middle-of-the-road desktop PC that’s half a decade old can still handle most tasks today. At the beginning of the millennium, a five-year-old machine would have been a dinosaur by comparison.

Mobile devices have yet to reach that point (i.e., the point at which upgrading to a faster processor provides no clear user benefit). Their chief selling point, mobility, means not only that their chips must be small and run cool but also that they must do everything on a small battery. Setting aside the slow advance of energy-storage technology, the biggest limiting factor is therefore the end of Moore’s Law.

Some companies, such as Intel (co-founded by Gordon Moore), claim Moore’s Law is still going strong, but such claims often rely on fuzzy, altered or unrealistic definitions. The broader consensus appears to be that it has ended.

Silicon progress continues, to be sure—just at a slower pace. No longer is adopting the next process technology a no-brainer for chip designers, as the price per transistor is now rising rather than falling.
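To make that cost reversal concrete, here’s a minimal back-of-envelope sketch. The wafer costs and densities below are made-up illustrative figures, not industry data:

```python
# If wafer cost grows faster than transistor density between process
# nodes, the cost per transistor rises. All figures are illustrative.

def cost_per_transistor(wafer_cost_usd, transistors_per_wafer):
    return wafer_cost_usd / transistors_per_wafer

old_node = cost_per_transistor(5_000, 1.0e12)   # hypothetical mature node
new_node = cost_per_transistor(10_000, 1.8e12)  # 1.8x density, 2x wafer cost

print(f"old node: {old_node:.2e} $/transistor")  # 5.00e-09
print(f"new node: {new_node:.2e} $/transistor")  # 5.56e-09 -- more expensive
```

When density gains no longer outpace wafer-cost growth, shrinking transistors stops paying for itself.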

So, what does it all mean for productivity? It means tablets and smartphones have ceased to get “free” additional compute capacity with each new silicon-process generation. Or, looking at the other side of the coin, the power decreases that come with smaller transistors are becoming less economical because the manufacturing technologies are becoming more expensive per transistor. The implications are manifold.

First, again, it doesn’t mean progress has ended; it just means the “freebies” that Moore’s Law provided are no longer free. Chip designers will find ways to improve their products, but they’ll rely less on just packing more transistors into a smaller area. Technical savvy must replace the “brute-force” approach to design.

Second, it puts a crimp on visions of device modularity. That is, some industry watchers have predicted that smartphones will eventually be powerful enough to handle practically all computing tasks.

In this model, users who want a desktop-like experience would simply dock their mobile device to connect a monitor (or more than one), input devices, speakers and so on. This vision not only eliminates those unsightly PC towers, it also keeps the user’s data in one place.

As I discussed earlier, though, mobile processors lack the capability to deliver such resources in their small form factors: they have yet to reach the point of good-enough computing just for what they do now, let alone for what PCs do. Desktop and even notebook chips still blow them away thanks to their greater cooling capacity and (often) unlimited electrical power (i.e., a wall socket).

Third, it means mobile devices will continue to lean heavily on the cloud. An advantage of the PC is local processing: it delivers results without the latency of an Internet connection. Although the growing focus on “edge computing” aims to reduce that latency, local processing still holds the advantage.

In the case of complex tasks, such as those involving large neural networks, the time savings from handling the task in the cloud may well outweigh the latency of delivering the results.
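A simple way to see the tradeoff is to compare total times; this is my own sketch, and the speeds and latencies are hypothetical:

```python
# Offloading wins when local compute time exceeds the network overhead
# plus the (much faster) cloud compute time. All figures are assumed.

def local_time(work, local_speed):
    return work / local_speed

def cloud_time(work, cloud_speed, rtt_s, transfer_s):
    return rtt_s + transfer_s + work / cloud_speed

work = 1_000                   # arbitrary units of compute
mobile_speed = 10              # units/s on a phone-class chip (assumed)
cloud_speed = 500              # units/s on a server (assumed)
rtt_s, transfer_s = 0.1, 2.0   # seconds of latency and transfer (assumed)

print(local_time(work, mobile_speed))                    # 100.0 s locally
print(cloud_time(work, cloud_speed, rtt_s, transfer_s))  # 4.1 s via the cloud
```

For a big enough job, the fixed network cost becomes negligible; for small, interactive tasks, it dominates, which is exactly why local processing keeps its advantage there.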

But apart from asking a cloud-based supercomputer which move to make next during a chess match, how likely are you to be using a smartphone or tablet to do work that requires neural networks or deep learning?

4. Wall Power > Battery Power

One limiter of mobile-device performance is the power source. A PC connects to a wall socket, essentially providing continuous, unlimited power. A smartphone or tablet instead uses a battery, whose size is severely restricted by the need to keep the device thin and light. The result is that mobile processors focus on efficiency (which they excel at) rather than performance.

PCs can therefore tackle big processing jobs without concern for how much power they use or even how much heat they produce (they have lots of fans), but mobile devices must limit their processors to avoid overheating or draining the battery.

For major content-creation projects, complicated scientific/mathematical computing, detailed simulations and so on, PCs far outmatch mobile devices—in large part simply because they don’t concern themselves with power consumption or heat.
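A quick back-of-envelope comparison shows the scale of the gap; the wattages and battery capacity below are rough, assumed figures:

```python
# Back-of-envelope energy budgets (all figures assumed and rounded).

battery_wh = 12    # typical smartphone battery, roughly 12 Wh
phone_peak_w = 5   # sustained power a phone can realistically dissipate
desktop_w = 250    # wall-powered desktop CPU+GPU under load

print(battery_wh / phone_peak_w, "hours until the phone battery is flat")  # 2.4
print(desktop_w / phone_peak_w, "x the phone's sustained power, indefinitely")  # 50.0
```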

5. Economics

Mobile devices are just short of throwaway gadgets. Replacing a damaged screen, increasing the RAM and upgrading the processor—all tasks that PCs can easily accommodate given a halfway knowledgeable user—are difficult, expensive or impossible for smartphones and tablets.

So, a $600 top-of-the-line smartphone won’t seem so hot in a couple of years, and there’s little remedy short of replacing it with the latest model. With a $600 PC, however, a $100–$200 investment (or less) often provides enough of a performance boost to make a complete replacement unnecessary for five years or more.
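To put rough numbers on that comparison (the prices and lifespans are assumed purely for illustration):

```python
# Illustrative five-year cost comparison (all prices assumed).

phone_price = 600
phone_lifespan_years = 2   # replaced once it feels obsolete (assumed)
phone_cost_5yr = phone_price * (5 / phone_lifespan_years)

pc_price = 600
pc_upgrade = 150           # e.g., extra RAM or an SSD at midlife (assumed)
pc_cost_5yr = pc_price + pc_upgrade

print(f"smartphone over 5 years: ${phone_cost_5yr:.0f}")  # $1500
print(f"PC over 5 years:         ${pc_cost_5yr:.0f}")     # $750
```

The exact figures matter less than the structure: the upgradable device amortizes its cost, while the sealed device must be bought again.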

 

Are PCs the Home of Productivity?

Despite all the productivity advantages of PCs, they’re still ungainly machines (particularly in the case of desktops). Notebooks provide some mobility, but they’re much less convenient than a pocket-sized smartphone or even a tablet. Mobility can be a boon to productivity: it allows you to work on the go or in places that may offer a more suitable environment for creativity or concentration (say, a coffee shop).

Today, the best situation for the average user is a mix: a smartphone, probably a tablet, and maybe a notebook or desktop PC. Those engaging in heavier processing will probably focus more on the PC and less on the mobile devices.

Each individual will naturally face a different situation, but the key is that because mobile devices and PCs don’t (yet) overlap in all their functions and capabilities, neither can fully replace the other.

So, for example, expecting everyone to consume content or check email at a desk beside a tower PC is as foolish as expecting them to try to get some real work done on a tiny smartphone with an underpowered processor.

A Look at the (Possible) Future

Will the future be any different? Today, most users who do more than just consume content probably do best with a mix of devices. But will the developments of tomorrow change that situation? Of course, no one can be sure. We can, however, make some educated guesses. Here are a few.

1. Quantum Mobiles?

Quantum computing is the heir apparent to so-called classical computing, but it faces a number of hurdles. Today’s quantum computers are highly limited in their capabilities. Worse, they require near-absolute-zero temperatures to operate.

So even though a quantum chip might fit in your smartphone, the cooling infrastructure would hardly fit in your home office. Contrast that technology with the first transistor, which although bulky still operated at room temperature.

Who knows how far quantum computing will go—but it probably won’t end up in your smartphone anytime in the next decade. It may, however, be accessible through the cloud, which is the next best thing as long as you have a stable, low-latency network connection.

2. Modular Mobiles?

I already mentioned the idea of modular computing. Given enough compute, storage and connectivity resources, a smartphone could fit the bill, at least in theory. And it need not be as powerful as a PC: it would just need to provide good-enough computing.

From an economic and practical perspective, though, it should also be internally modular—that is, users should be able to upgrade it and make repairs as easily and inexpensively as they do with PCs.

In addition, rapid advancement in energy storage would be a tremendous help. Progress in many fields is limited by the lack of safe, efficient, compact energy storage; mobile devices are just one area that would benefit greatly from a small, stable battery with greater energy density and fast, easy recharging.

3. Projections Instead of Screens?

The tiny screens of mobile devices limit productivity, but what if they could project images interactively, allowing you to use any surface as a display? Such a technology could replace monitors and give mobile devices the workspace needed to do all the things that today only a PC with a large screen or two can handle.

The question is whether a smartphone or tablet will ever have the processing muscle for such a task on top of running other user applications. They’ll certainly get closer as silicon improves, but eventually, silicon technology will reach hard physical limits—or softer economic ones. At that point, projected interactive displays may remain conceivable, even technically possible, yet still impractical.

4. Everything in the Cloud?

The cloud offers a number of benefits to both mobile and fixed devices, ranging from virtually unlimited data storage and backup to greater processing resources, the ability to sync devices to a single repository and so on.

And setting aside the latency between the device and the data center (something edge computing is attempting to mitigate), it would seem an ideal computing approach: resources are cheaper because their cost is amortized across numerous customers, and users can purchase as much as they need for exactly as long as they need it.

This model can theoretically make smartphones and tablets the equals of PCs, except for the issues of screen size and input devices. One problem that remains unresolved is data ownership and privacy.

Furthermore, centralization of data storage not only gives fewer and fewer hands control over more and more data, but it also makes bigger (and more lucrative) targets for hackers. Although a large organization can generally afford better security than a small one, its size and assets give criminals a greater incentive to break in—even using expensive attack methods.

Conclusions

PCs aren’t the only way to get work done, but neither can mobile devices enable the same level of productivity—thus far, anyway. Regardless of whether smartphones and tablets will ever drive PCs to extinction (a seemingly unlikely event for more than just technical reasons), the reality today is somewhere in the middle.

For heavy content-creation tasks, desktop PCs are unmatched. For content consumption, mobile devices have an edge thanks largely to their defining characteristic: mobility.

Individual users will run the gamut. Some will rely mostly on PCs, others mostly on mobile devices, and some (perhaps most) will employ a mix. And in each group, some will try to do too much heavy lifting on mobile devices while others will stick doggedly to PCs when a tablet or even smartphone would be a better option.

For now, market trends indicate PCs have a little more deflating to do whereas mobile devices appear to be approaching saturation. When looking at the numbers, it’s important to remember that turnover is much faster with mobile devices than PCs, so the market has arguably come close to equilibrium. And the demise of Moore’s Law suggests that processor upgrades will be decreasingly beneficial to users, apart from major innovation beyond simply shrinking transistors.

Productivity may, therefore, be close to its “market” distribution across devices, given current technologies. That is, some 10 years after the iPhone’s advent, consumers may have largely figured out what they can do well on each device and are spending their technology dollars accordingly.

Should major changes arrive in, say, quantum computing or the cloud—perhaps owing to edge computing or better resolution of data ownership and privacy concerns—the situation could change.

Or the breakneck pace of technological development may slow with the decline of Moore’s Law, and instead of always looking for the next gadget to help us do our work, we’ll have to figure out how to use the ones we already have more efficiently.

 
