The concept of an "application" is so vague and obsolete that using computers, or anything else with "applications", is a terrible experience: a multitude of systems that don't talk to each other and only reluctantly exchange any information between them.

We are still in the pre-history of computing, same as we were in 1965.
Maybe closer to the end of pre-history, but pre-history nonetheless.

Exhibit number one: Wikipedia's entry for "Application Software"

Notice how the term really can't be defined with any degree of precision, so the encyclopedia's editors had to resort to listing examples of things we consider to be "applications".

An "application" is just a group of programs bundled together, designed at best to cooperate among themselves and, if at all possible, with no one else, ever.

@h Remember that "application" and "appliance" ultimately have similar roots, if not the *same* root. When the Macintosh first came out, Apple foresaw it becoming an appliance in the future, and it is most regrettable that this has happened. Computers today, even Linux machines, are *appliances*.

Insofar as the apps do what you want them to do, that's OK for people who don't care to know or learn about their computers. For the rest of us, it's a straitjacket.

@vertigo An appliance is physical, with hard constraints. The barriers between programs are conventions and tradition; only in rare cases are they due to best practices.

@h That's an unnecessarily narrow view of an appliance, in my estimation. MS, Apple, et al. do not encourage people to poke around under the hood. Interfaces between programs are hard barriers. Computers today have "no user-serviceable parts inside." That, to me, makes them de facto appliances.

When my attempts to install Ubuntu on my old desktop bricked the motherboard, that's when I learned that any device that ships with an EULA is an appliance. Full stop.

@vertigo What I mean is that in the case of software, such barriers (or "appliancification") are totally artificial, the product of design decisions and conventions that nobody agreed to accept but that we keep accepting anyway, for absolutely no good reason other than lack of vision.

@h I don't think it was lack of vision; it was the vision of putting computers in the hands of untrained masses that drove the interfaces we have today. Therein lies the problem: untrained.

The untrained masses became either complacent or, worse, actively reveled in their ignorance. This is literally their point of view: "Why should I learn how to type this god-awful cryptic gobbledygook when I can just drag and drop these pretty pictures? Reading is hard! Let's go shopping!"

@h I mean, don't get me wrong -- I agree with the core of what you're saying completely. "Systems Software Research is Irrelevant," as Rob Pike once wrote. The almighty dollar drove the knife.

@vertigo You have a point about the oversimplification of some things, but I don't think that cooperative programs necessarily have to be hard to use. They could just as well be drag and drop, working by plugging things together and pulling levers all the same.

@vertigo Sure, you probably shouldn't make cryptography software drag and drop where you need great precision, but for a number of commonly used tasks it shouldn't be so damn hard to connect the output of one program to the input of another.

It should be just like connecting pipes, a bit like IFTTT. Any program to any other program.
It's the hard barriers between programs (written, in theory, by at least somewhat trained people) that make no sense at all.

@h I'm curious to know what you mean by "hard barriers."


Try to:

1. Write a program that has some struct
2. Enable it to send this struct to another program.
(Serialising and sending over sockets is cheating. There is no need to waste cycles on serialisation between two programs running in the same memory space, on the same architecture.)

3. Write a program that can receive that struct and print it.

4. Make sure it compiles and runs on all major platforms.

If you can achieve that effortlessly, then hard barriers don't exist.

@h Ahh, yeah, AmigaOS let you do that but only because it's a single-address-space OS without any kind of memory protection.

To do this in a Unix environment, you'd need to use shared memory interfaces, and some agreed upon means of two or more programs rendezvousing with each other to coordinate who has access to what data and when.

@vertigo Some IPC or shm stuff. But again, it's next to impossible to make that really portable without major engineering.
Chromium does some of it, and it's some of the ugliest code I've ever seen.

@h You won't be able to get around the need for IPC, so that's a fixed/sunk cost. But, this brings me back to POSIX Threads. I vociferously HHAATTEE pthreads with a passion.

They RUINED the concept of threads; because of pthreads, everyone thinks they're super hard to use now. And they're right!!

Contrast against AmigaOS tasks and message ports, which were freaking EASY to use.

@h Just yesterday, I was thinking that maybe I should clone exec.library from AmigaOS 1.3 and build my own OS around that library.

@h I also had the idea of porting GNU/Hurd, or MINIX 3, or Plan 9. But, in all honesty, I can get exec.library off the ground a lot sooner than I can get any of those other OSes off the ground.

Maybe I can bundle "dos.library" as well, albeit as a normal library, and not as a BCPL library.

So between dos.library and exec.library, I'd have a functional, if minimal, operating system kernel. I'd just need a reasonable user-land environment.

@vertigo OSes should have the equivalent of Go channels as part of the standard kernel. At worst, a message queue that is optimised for fast memory sharing. I think the use of shm is being discouraged for good reason.

@h I mentioned shm because that's what Unix provides. And shm is just fine, as long as you provide some other mechanism for coordinating who can touch it at any given time.

@h When you think about it, IBM's System/360 was just like the Amiga when it was first introduced: a shared, single-address-space environment. It had 2KB quasi-pages which prevented one task from writing into another task's memory, but *nothing* stopped tasks from *reading* other tasks' memory. Today, z/OS is fully memory protected.

So, somehow, there must be a way to evolve an AmigaOS-like environment without breaking compatibility.

@h I've been putting some thought into this, and I came up with an idea that I thought would perhaps work, even supporting multiple address spaces.

Legacy binaries would be loaded into a common region in each process' address space. This common region, like the kernel, would appear in every process; therefore, it behaves exactly like AmigaOS currently does.

New binaries would be loaded into process-private memory. To be able to use exec's messaging, >>

@h >> the process would have to *declare* a region of its address space as "communicable." This would basically have the effect of marking the pages occupied by the message as read-only shared memory.


@h >> Since aliasing memory like this is generally considered a bad idea, exec reserves the right to "clone" the message into a read-only segment of shared memory, for example. So:

struct MyMsg mm;
// initialize mm here.
struct MyMsg *new_msg = MakeCommunicable(&mm, sizeof(mm));
SendMsg(dest_port, new_msg);

This should be able to preserve compatibility with legacy and new/safe binaries alike. FreeMem() would know how to free up a "communicable" region of memory.

@h *IF* this works out, and I think it has a strong chance of doing so, it would completely preserve Exec's simplicity and message-port semantics, allow safe and non-safe binaries to interoperate, and provide an upgrade path for AmigaOS, which until now has long been considered an impossible dream.

@vertigo If successful, would all this run on your next iteration of the 64-bit DIY computer?

@h Kestrel-3.

Since my current CPU lacks any MMU, the kernel would not support "safe" binaries. It'd be shared-memory, single address space, just like Kickstart 1.3.

*After* I build the MMU for it (OR, after I switch the CPU out for a Rocket core), then I can upgrade the kernel to add support for "safe" binaries and implement the new system calls needed to make communicable regions of memory.

@h If I could just figure out how to port AROS to the RISC-V environment, I'd probably be able to jump ahead by not having to reinvent so many wheels.


Actually OSX has two different ways to pipe apps together, they're really handy and powerful but I'm always surprised by how little they get used. One is UI based, trying to be accessible to end users (Automator), and the other uses a scripting language (AppleScript).

@vertigo @Antanicus @bob

@mayel @bob @vertigo @h I heard of Applescript before but never used it as I never owned a Mac, it looks pretty simple too!

@Antanicus @h @vertigo @bob

Yeah it's always nice when you remember that it exists. "oh shit, why was I lazy and entered all these notes into the default Notes app, there's no export function so now I'm locked in 😟 oh wait.... time to check if there's AppleScript hooks! 😋"

@Antanicus @h @vertigo @bob

Also notice the record button, and the language dropdown which now has JavaScript too (though you can only record into AppleScript)

@mayel KDE has a similar system too, but it's not scripting that I was thinking of.

@bob @Antanicus @vertigo

@h @mayel @bob @Antanicus And of course, AmigaOS, with ARexx ports. I'm pretty sure Apple saw the potential of inter-app scripting and built their own version of it.

@mayel @h @vertigo @Antanicus @bob yeah, only they recently fired the guy who worked on automation for 20 years and terminated his position

@charlag @mayel @h @vertigo @bob why would you want a user to decide how they use your hardware, when you can have the user use it the way you intend? :)

@Antanicus @mayel @h @vertigo @bob for me it was one of the signs to break up with Apple for good

@charlag @Antanicus @h @vertigo @bob

yeah I've finally switched for good from iOS to Android, and while I've always used Linux in parallel to OSX, I feel like I'm almost ready to break up with the Mac. The hardware has been getting worse as well, so there aren't too many reasons left...

@mayel @charlag @antanicus @h @vertigo My feeling about technology at present is that there isn't any panacea and all systems have various levels of problems. "Security experts" (makes air quotes gesture) will often tell you that iOS is better because it has a secure enclave supposedly implemented in a better way than Android's full disk encryption. Even if that's true there are still other considerations, especially with different threat models.

My take is that Android is better, especially if you can use LineageOS and Fdroid. If you have that type of software setup then you're going to bypass a lot of the worst aspects of contemporary proprietary software.

@bob @vertigo @h @mayel @Antanicus modern Android is not necessarily full-disk encrypted now. The problem is that the threat surface is much larger, but I would still prefer Lineage

@h I've seen many attempts at this kind of interface, but none with any real success. You mention IFTTT; that's a pretty specialized tool, and it first and foremost requires people to be aware of its existence, and second, the applications that you use it with have to be compatible enough to work well with it.

@vertigo It may not be IFTTT's fault that it's not part of systems software. Alternatively, it's possible that IFTTT's fault is that its business model requires global-scale distribution via the web.
In any case, it's not a problem intrinsic to the core design of IFTTT. I don't think it's the ideal model, but it offers a glimpse of things that could be possible on the desktop. Instead, for some reason, we're stuck with roughly the same GUI we had in 1985, only with fancier colours.

This instance is a coop-run corner of the fediverse, a cooperative and transparent approach to operating a social platform. It is currently closed to new memberships while its internal processes and policies are improved, and plans to re-open to new folks when that work is complete. [9/2/2018]