The first computer I played with, in the late 80s, was a Commodore 64 that came with 64 Kilobytes of RAM. Early in the 90s, I was lucky enough to upgrade to a Commodore Amiga, which was a leap in capabilities the likes of which I have rarely seen since. The Amiga came with half a Megabyte of RAM, but I still remember the sheer excitement of the day we bought the other half Megabyte that allowed us to run more complex programs and games.
Among the most shocking paradigm shifts that came with the Amiga, multitasking opened the door to a new kind of productivity for me. The idea that I could use Paint Shop while simultaneously playing an audio file from another application was simply mind-blowing.
Around 1995 we switched to a PC. It was a traumatic change, and it wasn’t really a “Personal” computer but a machine shared among several members of the family. Still, it marked the beginning of two important periods: my Desktop era, which lasted another 10 years, and my x86 era, which lasted just over 20 years.
Take a look at this chart:
There are a few interesting things to notice here. First and foremost, the exponential nature of RAM growth makes it impossible to appreciate how small the memory in my earlier computers was compared to more recent hardware.
Let’s try with a logarithmic scale:
Ok, that’s easier to read. 🙂 See what happened between 2004 and 2005? The amount of RAM on a new computer remained the same. That was an epochal change, happening at the peak of the adoption curve (more on that later). And why is there a decrease in 2017? That’s probably another epochal change, but if it is, this time it’s happening earlier, on the left side of the adoption curve (with the early adopters). Hence the step downwards.
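To see why the linear chart fails and the logarithmic one works, consider a few of the machines mentioned in this post (illustrative figures taken from the text, not the chart’s full dataset). On a log2 axis every doubling of RAM becomes one equal step, so the C64 and the EliteBook can finally share a readable plot:

```python
import math

# RAM of some machines mentioned in this post, in kilobytes.
# (Illustrative data points only, taken from the text above.)
machines = {
    "Commodore 64 (late 80s)": 64,
    "Amiga, upgraded (early 90s)": 1024,          # 1 MB
    "Pentium 4 desktop (2003)": 512 * 1024,       # 512 MB
    "EliteBook laptop": 24 * 1024 * 1024,         # 24 GB
    "Shield K1 tablet (2017)": 2 * 1024 * 1024,   # 2 GB
}

for name, kb in machines.items():
    # On a linear axis the C64's 64 KB is invisible next to 24 GB;
    # log2 compresses the range so each doubling is one equal step.
    print(f"{name}: {kb} KB -> log2 = {math.log2(kb):.1f}")
```

Between the C64 and the EliteBook there are about 18.6 doublings of RAM, which is exactly the kind of range a linear axis cannot show.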
In 2005 sales of Laptops started overtaking sales of Desktops. Powerful workstations are still a non-negligible part of the market, but year after year they have become the preserve of ultra-power users and very serious gamers, as an ever larger share of tasks can be successfully tackled from a laptop, which comes with the nice addition of portability.
So, while in 2003 I was computing on a Pentium 4 with 512MB of RAM, 2 years later my main working device became a laptop that, while offering portability, didn’t improve on the specs of my previous machine. Back then I don’t remember suffering or feeling particularly limited by my laptop’s capabilities. It should be noted that in those years I was enrolled at University and was probably spending more time writing code than running it.
What happened in 2017? Why the sudden drop? Two and a half months ago, I swapped my main working device (an EliteBook equipped with 24GB of RAM) for a tiny Nvidia tablet that comes with only 2GB of RAM, the Shield K1. I was interested in testing whether the time was ripe for the next epoch, the era of pocket devices.
These days everyone has a more or less powerful smartphone in their pocket, but very few try to make it their primary working device. In my personal experience, the only thing that made the transition to a tablet a bit painful was jumping backward by more than a factor of 10 in terms of RAM. In the second part of this post we will see why that is both a curse and a blessing. For the time being, suffice it to say that, multitasking aside, 2 GB of RAM is actually plenty to browse the web; use productivity tools like the memory-hungry Slack, Google Drive or Hangouts; compile rather complex software; run a web server and test a heavy web app; play Half Life 2 or Portal.
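Curious readers can check for themselves how far 2 GB stretches. On a Linux-kernel device like the K1, the kernel reports memory usage through /proc/meminfo; a minimal parser, sketched here against a trimmed sample (real files carry many more fields, all in the same `Key: value kB` format), is enough to see how much of the RAM is actually in use:

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a dict of kB values."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        parts = rest.split()
        if parts:
            info[key.strip()] = int(parts[0])  # first token is the kB value
    return info

# Trimmed sample standing in for a 2 GB device's /proc/meminfo.
sample = """MemTotal:        2048000 kB
MemAvailable:     512000 kB"""

mem = parse_meminfo(sample)
used_fraction = 1 - mem["MemAvailable"] / mem["MemTotal"]
print(f"{used_fraction:.0%} of RAM in use")  # -> 75% of RAM in use
```

On a real device you would read the live file with `open("/proc/meminfo").read()` instead of the sample string.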
But I will concede that giving up multitasking is a deal breaker for most of us. Memory is like Pandora’s box: once you have opened up to the possibility of quasi-infinite RAM, it can be extremely hard to go back to the roots.
Switching to a tablet didn’t just mean ending the era of Laptops. It also marked the moment I started spending manifestly more time on ARM processors than on x86s. More than 20 years had passed since anything like that happened. The anticlimactic truth is that this is an almost completely painless change, much to the chagrin of Intel. The biggest difference is that ARM processors are less energy hungry. For the rest, a modern Linux distro makes the nitty-gritty differences between RISC and CISC pretty much invisible.
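How invisible? In day-to-day code the architecture is usually just a string you can query and ignore. A small sketch using Python’s standard platform module (the machine strings are the common kernel-reported ones, e.g. aarch64 for 64-bit ARM; the sets here are my own, non-exhaustive examples):

```python
import platform

# Common machine strings reported by the kernel (not exhaustive).
ARM_ARCHS = {"armv7l", "aarch64", "arm64"}
X86_ARCHS = {"i686", "x86_64", "amd64"}

def arch_family(machine: str) -> str:
    """Map a `uname -m`-style machine string to a coarse CPU family."""
    m = machine.lower()
    if m in ARM_ARCHS:
        return "ARM"
    if m in X86_ARCHS:
        return "x86"
    return "other"

# On the Shield K1 this would report ARM; on the EliteBook, x86.
print(arch_family(platform.machine()))
```

Everything above this one string, from the shell to the browser, behaves the same on either family.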
Of course it’s possible I’m not seeing this right and the majority of us will stick to what already exists until something even better comes along (human-body machine interfaces? Better glassholes?). Not everyone likes having to plug in a video cable and connect Bluetooth devices every time they go from home to work. And the nerdiness of pocket computers, despite their immense potential for creativity on the go, is not appealing to everyone. I already went through a similar disappointment when I became a MiniDisc adopter in the mid-90s, right before they were annihilated by MP3s (which were unarguably better). Although that was a very different scenario, something like it may be happening now. But I hope not, because I don’t feel ready to be one with the machine. 🙂
So what’s next? More of everything except size. A good contender for further testing whether I took a wrong turn (or whether this smaller-device age is really on the verge of taking over) is the Gemini PDA, which should be released early next year. It promises a ten-core processor and 4GB of RAM. It fits in a pocket and weighs less than my K1 tablet, but it’s also a phone and comes with an integrated keyboard. Have your cake and eat it too? We’ll see.