The concept of "application" is so vague and obsolete that using computers, or anything built around "applications", is a terrible experience: a multitude of systems that don't talk to each other and only reluctantly exchange any information.
We're still in the pre-history of computing, same as we were in 1965.
Maybe closer to the end of pre-history, but pre-history nonetheless.
Exhibit number one: Wikipedia's entry for "Application Software"
Notice how it really can't be defined with any degree of precision, so the encyclopedia's editors had to offer examples of things we consider to be "applications".
An "application" is just a group of programs bundled together, designed to cooperate between themselves at best, and if possible, with no one else ever.
@h That's an unnecessarily narrow view of an appliance, in my estimation. MS, Apple, et al. do not encourage people to poke around under the hood. Interfaces between programs are hard barriers. Computers today have "no user-serviceable parts inside." That, to me, makes them de facto appliances.
When my attempts to install Ubuntu on my old desktop bricked the motherboard, that's when I learned that any device that ships with an EULA is an appliance. Full stop.
@h I don't think it was lack of vision; it was the vision of putting computers in the hands of untrained masses that drove the interfaces we have today. Therein lies the problem: untrained.
The untrained masses became either complacent, or worse, actively reveled in their ignorance. This is literally their point of view: "Why should I learn how to type this god-awful cryptic gobbledygook when I can just drag and drop these pretty pictures? Reading is hard! Let's go shopping!"
@vertigo Sure, you probably shouldn't make cryptography software drag-and-drop, since there you need great precision, but for a number of commonly used tasks it shouldn't be so damn hard to connect the output of one program to the input of another.
It should be just like connecting pipes, a bit like IFTTT. Any program to any other program.
It's the hard barriers between programs (in theory written by at least somewhat trained people) that don't make sense at all.
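The Unix shell is the one place where "connecting pipes" already works today, at least for programs that speak line-oriented text. A small sketch with standard tools only:

```shell
# Three unrelated programs cooperating through anonymous pipes:
# count the most frequently used commands in a (simulated) history.
printf 'ls\ncd\nls\ngrep\nls\ncd\n' \
  | sort \
  | uniq -c \
  | sort -rn \
  | head -n 5
```

The limitation, of course, is that this only composes freely when everything is a byte stream; the moment structured data is involved, the hard barriers come right back.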
1. Write a program that has some struct
2. Enable it to send this struct to another program.
(Serialising and sending over sockets is cheating. There is no need to waste cycles in serialisation between two programs running in the same memory space, on the same architecture)
3. Write a program that can receive that struct and print it.
4. Make sure it compiles and runs on all major platforms.
If you can achieve that effortlessly, then hard barriers don't exist.
@h Ahh, yeah, AmigaOS let you do that but only because it's a single-address-space OS without any kind of memory protection.
To do this in a Unix environment, you'd need to use shared memory interfaces, and some agreed upon means of two or more programs rendezvousing with each other to coordinate who has access to what data and when.
@h You won't be able to get around the need for IPC, so that's a fixed/sunk cost. But, this brings me back to POSIX Threads. I vociferously HHAATTEE pthreads with a passion.
They RUINED the concept of threads; because of pthreads, everyone thinks they're super hard to use now. And they're right!!
Contrast against AmigaOS tasks and message ports, which were freaking EASY to use.
@h I also had the idea of porting GNU/Hurd as well, or MINIX 3, or Plan 9. But, in all honesty, I can easily get exec.library off the ground a lot sooner than I can get any other of these OSes off the ground.
Maybe I can bundle "dos.library" as well, albeit as a normal library, and not as a BCPL library.
So between dos.library and exec.library, I'd have a functional, if minimal, operating system kernel. I'd just need a reasonable user-land environment.
@h When you think about it, IBM's System/360 was just like the Amiga when it was first introduced: a shared, single address space environment. They had 2KB quasi-pages which prevented one task from writing into another task's memory, but *nothing* stopped tasks from *reading* other tasks' memory. Today, z/OS is fully memory protected.
So, somehow, there must be a way to evolve an AmigaOS-like environment without breaking compatibility.
@h I've been putting some thought into this, and I came up with an idea that I thought would perhaps work, even supporting multiple address spaces.
Legacy binaries would be loaded into a common region in each process' address space. This common region, like the kernel, would appear in every process; therefore, it behaves exactly like AmigaOS currently does.
New binaries would be loaded into process-private memory. To be able to use exec's messaging, >>
@h >> Since aliasing memory like this is generally considered a bad idea, exec reserves the right to "clone" the message into a read-only segment of shared memory, for example. So:
struct MyMsg mm;
// initialize mm here.
struct MyMsg *new_msg = MakeCommunicable(&mm, sizeof(mm));
This should be able to preserve compatibility with legacy and new/safe binaries alike. FreeMem() would know how to free up a "communicable" region of memory.
@h *IF* this works out, and I think it has a strong chance of doing so, this would completely preserve Exec's simplicity and message port semantics, allow safe and non-safe binaries to interoperate, and provide an upgrade path for AmigaOS which, until now, has long been considered an impossible dream.
Since my current CPU lacks any MMU, the kernel would not support "safe" binaries. It'd be shared-memory, single address space, just like Kickstart 1.3.
*After* I build the MMU for it (OR, after I switch the CPU out for a Rocket core), then I can upgrade the kernel to add support for "safe" binaries and implement the new system calls needed to make communicable regions of memory.
Actually OSX has two different ways to pipe apps together, they're really handy and powerful but I'm always surprised by how little they get used. One is UI based, trying to be accessible to end users (Automator), and the other uses a scripting language (AppleScript).
@h I've seen many attempts at this kind of interface, but none with any real success. You mention IFTTT; that's a pretty specialized tool that first and foremost requires people to be aware of its existence, and second, the applications you use with it have to be compatible enough to work well with it.
@vertigo It may not be IFTTT's fault that it's not part of the systems software. Alternatively, maybe the fault is that IFTTT's business model requires global-scale distribution via the web.
In any case, it's not a problem intrinsic to the core design of IFTTT. I don't think it's the ideal model, but it offers a glimpse of what could be possible on the desktop; yet for some reason we're stuck with roughly the same GUI we had in 1985, only with fancier colours.