Notes on Linux: The Learning Curve and the Command Line

This is the third in a series of posts documenting my first month or so using Linux. The first one was on Macs as “shiny jewelry;” the second was on the difficulty of relying on Linux as your day-to-day work operating system (it doesn’t work).

So, OK, maybe I’m not being totally fair by saying Linux doesn’t work. Given time and resources, maybe it can work pretty smoothly. But there is a trade-off: being in a Linux environment clearly requires dedicated staff support on a regular basis, timelines with a lot more leeway for potential technical failures, an ethos of giving staff time to learn, etc.

Regarding learning, in particular, there’s another misconception by open source enthusiasts that I feel like I keep running into:

All the information you need to teach yourself to run Linux is up online. Yes. I suppose it is. But there are a number of problems with this assumption.

[Image: Do you remember learning to work from the command line? Image courtesy CPC-Emu]


The first is sort of a question about how much a person should have to rely on a large body of material outside the machine itself to learn about that machine. The thing about a mature, fully-featured graphical interface is that the bulk of the information you need to teach yourself to run it is within the interface itself. Its affordances, the tools it provides to let you do things, are immediately visible, or can be found with minimal digging. The cursor suggests you can point at things to work with them; the various mouse buttons suggest different ways of pointing. “Play” and “next page” and “continue” buttons point right or sit to the right, suggesting “go on to the next thing” to anyone used to reading a language written left-to-right. Discrete icons, whether on a graphical desktop or a tablet, make it clear there are multiple actions the machine can perform.

I’ve been spending my time with a Linux install that supposedly has one of the most polished GUIs — Ubuntu — and it’s still not as consistently good at teaching itself as a Mac is. Its control panels are missing some settings I’m used to; it crashes often; and most annoyingly, some simple GUI elements — like dragging a file onto a USB key — break with frustrating frequency. And when you can’t do something using the GUI, the solution usually turns out to be to do something at the command line. This is the major problem.

The command line does not offer such an easy route to learning. I want to emphasize that I am convinced it’s valuable to work at the command line; I know I learned important things from doing it. When Seymour Papert, the father of the LOGO language, did a demo at Teachers College in which he wrote out a program-stopping typo, I called out “Syntax error!” before I even realized what I was doing. Because working with LOGO, at the command line, taught me a very early and important lesson: computers’ ability to understand language is not as flexible as ours; a computer brain does not work exactly like a human’s.

But the learning curve at the command line is steep, and it requires resources — a book, a man page, a teacher or friend to help you along — which may not be visible, or available to everyone. And while I think everyone should do some work at the command line to familiarize themselves with how computers think, I’m getting to a point where I wouldn’t be comfortable demanding people do a lot of work there.

There’s some problems with treating the Internet as the center for learning how to use the machine. First of all, the Internet is a massive place now. Google does a crummier job of ranking than it used to, and there’s a tremendous number of sites filled with spam. I don’t feel confident anymore that the first few results I see are really valuable when it comes to trawling for good practical instructions. It used to be that the geeks I know would recommend one site over another, but I’m not hearing that here. Which sites are most likely to give me the best information to make my system work, without sending me on some massive yak-shaving project?

Second, if you’re saying the Internet has everything you need, you’re assuming that whoever you’re speaking to has already built a good set of criteria for assessing whether an article or thread is going to give them good-quality, relevant information on what they’re looking for. I feel like I’m constantly finding forum posts on how to do something that is almost like what I want to do, but enough different that I’m constantly stumbling over one step, some shell command or preferences configuration, which is totally irrelevant and sends me irretrievably down some rabbit hole.

Generally, assessing the usefulness of a document for one’s purposes requires some expertise. And we’re talking about n00bs who have never used Linux before, here. So there’s also some cognitive load in sorting out what they need to read.[1]

Teaching yourself also requires you to draw on other resources, ones that are even less tangible. More on that in the next post, Harry Potter and the System of Privilege.

[1] The irony here, of course, is this is more or less the kind of task I assigned my Digital Toolbox classes for learning Photoshop, Illustrator, and Adobe Premiere: I left them with the Internet as a textbook. Somewhere out there there’s a bunch of students whose ears are burning; they’re suddenly feeling furious, navigating to RateMyProfessor to write me even more bad reviews, and confused by the impulse because they haven’t thought about me in months. But I digress: Photoshop is a hell of a lot easier to learn than Linux.

2 Comments

  1. Doug Belshaw wrote:

    This is interesting. My parents have had Ubuntu on their laptop since they bought it in 2008. It runs reasonably quickly and I update it now and again.

    They use the iPad they got in 2010 for probably 90% of what they do, but the other 10% (printing, accessing e-learning resources, etc.) is adequately done by Ubuntu without them having to delve into the command line.

    I was utterly amazed a few months back when my father managed to use a few searches to install WINE and get Microsoft Silverlight installed so he could use Netflix. He felt a sense of accomplishment with that, but too often these things spill over into frustration.

    Posted 19 Nov 2013 at 11:55 am
  2. UncleSniper wrote:

    Disclaimer: I’m told I tend to come across as rather… abrasive. If so, I apologize; such is not my intention. This social stuff is just… beyond me — think of me as the computer science version of Dr. Brennan from Bones. :P

    Wow. Just… wow. I’m at a loss for words. This is mostly because anything I could say will likely make me seem like $hairyNerdInMomsBasement screaming bloody murder because somebody dared knock my precious command line. Rest assured that this is not it at all. That said:

    Disregarding the fact that most of your claims seem to be based on rather questionable assumptions (no offense), the entire reasoning is rendered moot by the fact that it’s all based on one false assumption in particular: You are clearly seeing CLI and GUI as rivals. They are not; they are, in fact, partners. Allow me to elaborate:

    Both approaches have their pros and cons. Most notably, both have their place in the computer-y world: GUIs are for the… erm, “n00bs”, as you put it. CLIs are for automation: Have you even tried to put an invocation of [insert your favorite GUI-based software here] into a script? How did that go?
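His automation point can be sketched in a few lines of Python (the directory and file names below are invented purely for illustration): a script repeats a batch task in one loop, where a GUI requires the same clicks for every single file.

```python
import os
import tempfile

# Create a throwaway directory with some example files
# (hypothetical names, just for the demo).
workdir = tempfile.mkdtemp()
for name in ("README.TXT", "NOTES.TXT"):
    open(os.path.join(workdir, name), "w").close()

# Batch-lowercase every filename: trivial to automate with
# scriptable tools, tedious to repeat by hand in a GUI file manager.
for name in os.listdir(workdir):
    os.rename(os.path.join(workdir, name),
              os.path.join(workdir, name.lower()))

print(sorted(os.listdir(workdir)))  # ['notes.txt', 'readme.txt']
```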

    Consequently, I would not want to miss either. And lo! The proper way to engineer a software system is exactly this: Develop the back-end until it works, i.e. does what it is supposed to (and the test suite proves that this is so). *Then*, and only then, add front-ends. If done right, it should be no problem to add both a GUI *and* a CLI front-end, both using the same back-end. Heck, go crazy and add a network-based front-end for all I care. Do note, however, that the back-end must know nothing of the front-ends. I would like to think any software engineer worth their money would agree with me on this one.
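The separation the commenter describes can be sketched in a few lines of Python (the function names are hypothetical, not from any real project): a back-end that knows nothing about its front-ends, and a thin CLI wrapper that a GUI handler could replace without touching the back-end.

```python
import sys

def word_count(text: str) -> int:
    """Back-end: pure logic, no UI code, easy to prove with a test suite."""
    return len(text.split())

def cli_main(args: list) -> str:
    """CLI front-end: a thin wrapper around the back-end.
    A GUI button handler would call word_count() exactly the same way."""
    return f"{word_count(' '.join(args))} words"

if __name__ == "__main__":
    print(cli_main(sys.argv[1:]))
```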

    The actual issue thus arises from disregarding this principle: People tend to tangle up front-end and back-end into one big mess. Too many pieces of software have one approach or the other (or even both) hardwired into what *should* be their back-end: I recall vividly when the company I used to work for was at a loss because they wanted to run a certain program on their (headless, as those go) server. Because the GUI is hardwired right into the system, it stubbornly refused to run on the headless machine even in the built-in “batch mode”, which (as per the manufacturer) is meant for exactly this type of thing. But it goes the other way around, too: If you have ever tried to glean something meaningful from the “terminal output” (to stick to simple terms) of a GUI-based program, you probably drowned in weird debug messages the programmer likely left in there. Both of these issues clearly stem from putting UI-related code right into the back-end, where it doesn’t belong.

    Bottom line: A properly engineered software system can be easily recognized by it having both a command line and a GUI front-end. This, of course, is because the presence of both is a clear indicator of the actual criterion: That it *can* have both (i.e. that this is not a contradiction). Conversely, if a software is “meant as a GUI software” or “meant as a CLI software” and provides no other means of using its actual functionality, then it is almost certainly badly written and the authors need to get a clue. (For added kicks, you might observe that Windows and most Windows software fall into the latter category…)

    Posted 11 Aug 2015 at 7:26 pm
