Shortlog - a log of everyday things


2011-01-05

Now that I've got two friends messing around with Shortlog, my minimalist blogging engine (crazy, I know), I've come to the realization that I need to make it easier for people to access/use my software. I need to make it less tied to my particular setup, more easily configured, and better-documented.

Then I realized that this applies not only to shortlog, but the rest of my projects as well. And the first step in using someone's code is obtaining it.

So here you go. I've gone and organized a few things enough to get a decent-looking gitweb page up to make the source available. Documentation and modularity will come in due time.

Another interesting thing I noted today: most of my computer usage is a giant hierarchical organization of data segments. Multiple desktops play home to multiple tiled windows. Most of these windows are either:

  1. web browser windows, each with a multitude of tabs, or
  2. terminal windows, each with a multitude of tabs.

While there are also occasionally chat windows (with several tabs) or the one-off LibreOffice window or file manager, this is my most common interaction with my computer - finding and selecting the desired context to interact with.

This sounded remarkably like Ben Shneiderman's Direct Manipulation Interfaces. To make context switches cost less, I (subconsciously) put likely neighbor contexts next to each other. I may set rules, like "desktop 1 is for communications, desktop 2 is for OpenKinect development." The catch here is that some end nodes (application windows) fit into multiple places in the hierarchy.

My question is then: is there a way to keep the benefit of such rules for quickly finding the relevant context while still preserving relative context nearness?
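One way to think about the multiple-membership problem above is to drop the strict tree and tag each window with every context it belongs to. A toy sketch (all names here are made up for illustration, not anything my window manager actually does):

```python
from collections import defaultdict


class Window:
    """A window tagged with the contexts it belongs to."""

    def __init__(self, title, tags):
        self.title = title
        self.tags = set(tags)  # one window may fit several contexts


def group_by_context(windows):
    """Index windows under every tag they carry, so a single
    window can appear in more than one context at once."""
    contexts = defaultdict(list)
    for w in windows:
        for tag in w.tags:
            contexts[tag].append(w)
    return contexts


windows = [
    Window("IRC client", {"communications"}),
    Window("libfreenect source", {"OpenKinect"}),
    # This one is the catch: it belongs on desktop 1 *and* desktop 2.
    Window("OpenKinect mailing list", {"communications", "OpenKinect"}),
]

contexts = group_by_context(windows)
```

With tags, the mailing-list window shows up under both "communications" and "OpenKinect" instead of being forced into one spot in the hierarchy - though, as Nicole points out below, the tagging itself costs time.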


Comments:


Nicole | 2011-01-05T21:49:40.028850

I feel that the way to preserve contextual nearness is limited by the number of dimensions available and the screen real estate of your viewing device. Because 3D tech frankly sucks, all we're likely to get at this point is a series of 2D arrangements which can be tiled in a tree-like manner based on certain characteristics (this is optional - any ordering scheme would work, really). Tagging bits of data with keywords would make identifying similar data that is not part of the same branch easier, but it is time-intensive and might not be worth the cost for high frequency data changes. A lot of that could be gotten around with clever keyboard shortcuts.

It is very similar to remembering where you put a certain item. We are very good at dealing with three dimensions since that is what we're used to dealing with, but any more and it becomes rather tricky. I'm pretty sure if you could implement a way of organizing digital (interact-able) data in four spatial dimensions, humans could learn to use it (eventually), but I don't think the algorithms for that exist yet. But it would be pretty cool, and I would kill for an easy way to depict multi-threading in conversations in a way that didn't look stupid.

tl;dr: Basically you want a really shiny way of auto-contextualizing everything based on various features (which change based on the given circumstance) /and/ an easy way to switch between related things once they have been contextualized (and switch between contextualizations mid-stream).

Also any implementation that could move things around for you would play hob with the mind's propensity to spatially relate things ("it was on the page opposite the elephant," for example).


Nicole | 2011-01-05T21:56:45.599433

(an even more tl;dr way of putting it is that essentially what you want is an IRL way of becoming Neo from the Matrix without any of that nasty saving the world stuff)


Drew Fisher | 2011-01-05T22:40:25.115685

I also want a direct brain interface. I've long dreamed of having a serial port wired up to some nerve endings, with the assumption that the brain will eventually figure out the protocol by fuzzing the nerves until it makes sense, just like how we get all our other senses.

So uh, yeah, that's pretty much Neo minus all the cyberpunk. Huh. :)

There's always the tension between automatically figuring out contexts (just do what the user wanted!) and staying consistent (so the user can learn the interface). As you noted, it tends to be hard to do the first one right, and users wind up very frustrated when it is done wrong, so the second approach seems to be the norm. Even things like search engine spelling correction can be frustrating when the system gets it wrong, and as users, we feel out of control.

I'd guess that anything that did do this intelligent context-deduction would eventually get something wrong, at which point I'd probably feel powerless to correct it. And then I'd want to turn off the "smart" system and just manually manage everything, so it's (at least in my mind) consistent and I know how it works.

I guess I'm still looking for ways to improve this "manual management" - by forcing myself to follow rules, I can get some efficiency increase. Can I make the system do some of the enforcement, to reduce mental load? Perhaps, but it's not immediately apparent how to best do this. Now that I think about this more, I remember KDE Plasma's Activities - user-contextualized grouping of widgets. I will have to revisit this to see if it saves me time.

Sloth is the mother of invention. :)


Nicole | 2011-01-05T23:22:20.759284

Being Neo aside, either you are keeping track of where everything is located, or the system is, or somewhere in between. The problem is making that information available and accessible (the latter being key). I'd argue that it's harder to do without adding more dimensions and it runs the risk of being overly complicated and thus losing users who are turned off by the steep learning curve. Which is why everyone develops their own filing system, so to speak.