This rant about Microsoft controlling TypeScript and through that, re-centralising the whole Javascript ecosystem, is very interesting to me


I have always been suspicious of Typescript just because it's *compiled*. It doesn't feel like compilation is native to the interpreted, distributed, Javascript model. It's a single point of trust that doesn't need to be there, a class division that can be exploited.

And welp. Looks like that's exactly what's happening!

@natecull While I do hate TypeScript as a language, I need to be careful about glass housing it too much since I write ClojureScript at the day job (and love it, probably the world’s easiest language, it’s even more loosey-goosey, sloppy, TIMTOWTDI than vanilla JavaScript).

I can just gasp in awe at how skillfully Microsoft did this. Longtime Github hater here; it was proprietary even before it was Microsoft, and we had all already been burned twice by Sourceforge and Google Code so I never understood why everyone was so eager to slam their necks into this new guillotine.

@natecull but can’t peeps fork TypeScript if they think it’s so hot?

@natecull never mind; I finished reading the article and the rape joke and the irony of publishing it on actual Medium got a bit too much.

@Sandra My reaction exactly.
I don't agree that the problem is that Typescript is compiled. Because that argument should then go for all compiled languages. There are several entirely open source languages that compile to JavaScript. I would think the issue is that the language and compiler are effectively controlled by Microsoft.


@wim_v12e @Sandra

My problem with compilation as compilation is that typically the compiler is built to *run outside of the runtime environment*

and as such it often requires special permission to run, and even specialer permission to modify.

Permissions that increasingly aren't even offered to ordinary users. "No binaries for you, and compilers are hacking tools!" is becoming the norm in corporate environments, and consumer devices are often just as locked down via app stores.

@wim_v12e @Sandra

If the only execution environment a user is offered is a single managed one, then the two processes of:

1. "compilation"


2. "compiler development"

MUST be able to take place WITHIN this single managed environment - not outside it - if the user is to have any kind of freedom to modify and trust their personal computational environment.

I keep repeating this because it seems both necessarily true, and yet so often forgotten by developers who have special dev rights.

@wim_v12e @Sandra

Increasingly, those special dev rights (the right to compile/package/publish code, and the even stronger right to modify and develop new compilers and languages) are going to be tied VERY strongly to being vetted members of a corporation, or an institution like GitHub.

The "trusted compiler" is extending softly to become "the trusted organization" and even the open-source world isn't quite noticing that this is happening.

But it was obvious that it would happen from the start.

@natecull @Sandra
I think I need a bit more context to understand this.

If I want to create an android app, I can just do that and put it on f-droid. And I can still create binaries for macos and linux, and last time I checked (about two years ago), I could still build on windows too. I'll need administrator access to install the initial binary, be it compiler, interpreter or VM. But home users have that kind of access on their machines.

So clearly I am missing something here, what is it?

@wim_v12e @natecull @Sandra

This is a sort of parallel dev universe though. Most Android users don't know f-droid exists.

Breaking into vetted spaces gets harder as vetting gets more common for "security" purposes, & then that vetting gets applied in different circumstances. (None of my projects work on iPhone because I'm not paying $90/year for the privilege to be rejected by their app store.)

@enkiv2 @wim_v12e @Sandra

Yep. I have F-Droid on my Android, but I'm pretty bleak in my expectation that it will be around forever.

All it will take is one "industry-wide best practices" update and the next phone I buy won't be able to install F-Droid "because security/privacy risk" or something.

Source: My lived experience this year, seeing Work From Home + ransomware push organizations into MASSIVELY locking down all their machines and "security consultants" whipping them to do it faster.

@enkiv2 @wim_v12e @Sandra

Seriously, before this year I hadn't realised that the tide could turn so swiftly to "literally being able to run a non-corporate-approved EXE is Evil Hacking" but we're here now, it's happening.

We already had Administrator rights removed from all users for years before this. But this year, "running an EXE" is now considered Malware. And running a compiler is doubly so, "because it can allow running non-signed EXEs"

Compilers are generally EXEs.

@enkiv2 @wim_v12e @Sandra

For more awareness of this trend, search for the phrase "living off the land attack"

"Living off the Land" is corporate cybersecurity expert jargon for "being able to access a compiler or even a command line host of any kind on a Windows machine".

And since banning all compilers is now Corporate Cybersecurity Best Practices today, it will be coming to a consumer machine near you tomorrow.

@wim_v12e @Sandra @natecull @enkiv2 “Consumer machines” these days are phones and tablets, which are hard to run compilers on in the case of Android and impossible in the case of iCan’tOS.

I suspect most individuals and businesses who want a laptop that locked down will get Chromebooks. Though maybe MS and Apple will try to compete in the Chromebook niche.

@freakazoid @Sandra @natecull @enkiv2

I feel like my original point, that the issue is not if a language is compiled or interpreted, has gotten lost along the line.
What is described in the previous posts is control over execution on the CPU.

The original point made by Nate about TypeScript being compiled is what I responded to. This has nothing to do with permissions to run binaries because TypeScript compiles to JavaScript which is compiled to bytecode for a VM running in the browser.

@wim_v12e @freakazoid @Sandra @enkiv2

Thanks for your thoughtful response!

Compilation/transpilation to Javascript, I think, is still quite problematic for me.

If the compiler is written in Javascript and compilation can be done in small pieces and from within the Javascript runtime environment, then at least it doesn't have the "you don't have any control over the compiler" problem we're increasingly facing.

But I'm still not a huge fan because you lose information when you compile.
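To make the "compiler inside the runtime" idea concrete, here's a minimal sketch. The tiny prefix "language" is invented purely for illustration; the point is that source → JavaScript → running code can all happen in-process, with no external toolchain and no special permissions:

```javascript
// Sketch: a toy "compiler" that runs entirely inside the JavaScript
// runtime. The input language (a tiny prefix calculator) is
// hypothetical; what matters is that compilation and execution
// both happen within the same managed environment.

function compile(source) {
  // Translate "add 1 2" style prefix expressions into a JS expression.
  const [op, ...args] = source.trim().split(/\s+/);
  const ops = { add: "+", sub: "-", mul: "*", div: "/" };
  if (!(op in ops)) throw new Error("unknown op: " + op);
  const jsExpr = args.join(" " + ops[op] + " ");
  // `new Function` is the runtime's own code loader: the "compiler
  // binary" here is just data in the same environment the user owns.
  return new Function("return (" + jsExpr + ");");
}

const program = compile("add 1 2");
console.log(program()); // 3
```

The same trick scales up: a compile-to-JS language whose compiler ships as ordinary JavaScript can be inspected and modified by any user of the runtime.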

@wim_v12e @freakazoid @Sandra @enkiv2

We went through this whole thing decades ago with "fourth-generation languages" and preprocessors. Source code generated by machine is not human-friendly at all.

This could probably be mitigated IF someone constructed a whole interactive execution environment within Javascript where, say, you had objects that had both the Typescript (or whatever input language) source code and then the compiled-to-Javascript object code.

I'm sure we could build this on WASM.
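One hedged sketch of such a "dual object": keep the input-language source text next to the compiled JavaScript, so compilation loses no information. The `translate` step below is just a stand-in (the identity function); a real system would plug in an in-runtime TypeScript-to-JavaScript compiler:

```javascript
// Sketch: an object that carries both its source text and its
// compiled, runnable form. `translate` is a hypothetical hook for
// a real in-runtime compiler; here it is the identity function.

function makeDualObject(sourceText, translate = (s) => s) {
  const jsText = translate(sourceText);
  return {
    source: sourceText,          // what the human wrote
    object: jsText,              // what the runtime will execute
    run: new Function(jsText),   // compiled, callable form
  };
}

const obj = makeDualObject("return 6 * 7;");
console.log(obj.run());    // 42
console.log(obj.source);   // the original text is still recoverable
```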

@wim_v12e @freakazoid @Sandra @enkiv2

But although this seems obvious to me (it's only "obvious" because I'm thinking as a user who grew up in BASIC, who expects to always have an "immediate mode" - with data/code save/load facility - available inside a runtime environment and I'm always shocked that there often isn't) it doesn't seem to be the way that many of these compile-to-Javascript toolchains think.

All source code is generally kept in files.

But browsers often don't give you files.

@wim_v12e @freakazoid @Sandra @enkiv2

This MIGHT be about to change / in process of changing now that the File System Access API has been approved and is in the most recent Firefox (not yet in long-term support but I think next month).

Anyway, what I want is for the whole "source, version control, build, compile, verify, package, sign" toolchain to be available at runtime, for every user, within every deployed endpoint.

Generally I think that means the toolchain should be small not big.

@wim_v12e @freakazoid @Sandra @enkiv2

Some form of "signing" or secure attestation might well be an important basic primitive to have in any distributed code execution environment. Especially if users are going to construct things like "types" at runtime and send them over a wire to untrusted distant machines. That's a thing I think we need so we might as well assume that we need it.

But that kind of signing is gonna have to not be based on X.509 certificates and PKI.

@natecull @Sandra @enkiv2 @wim_v12e Authorizing code execution based on someone else attesting to its safety does not scale well. We need object capability security.

@freakazoid @Sandra @enkiv2 @wim_v12e

Yep, object capabilities does seem like a good way forward. Maybe ocaps don't actually need public key cryptography, though I think mutable objects like in Goblins probably do. That's the sort of 'signing' I mean. It needs to be small and fast and not involve a third party, and you probably wouldn't even use it locally, within your own trusted hard drive or LAN.

(I'm not sure to what extent we can trust hard drives and LANs, especially removable media.)
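As a minimal illustration of the object-capability idea: locally, a capability is just an unforgeable reference. Holding the function IS the permission; no certificates, no third party. A rough sketch:

```javascript
// Sketch: object-capability style access. Each closure is a
// separate, attenuable capability over the hidden counter state.

function makeCounter() {
  let n = 0;
  const increment = () => ++n;
  const read = () => n;
  return { increment, read };
}

const counter = makeCounter();
// Hand out only the read capability; its holder cannot increment.
const readOnly = counter.read;

counter.increment();
counter.increment();
console.log(readOnly()); // 2
```

Over a network, those references do need cryptographic backing (as in Goblins), but within a single runtime the language itself enforces unforgeability.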

@freakazoid @Sandra @enkiv2 @wim_v12e

I would love for us to have some kind of universal "trusted persistent storage" where you could store very simple object-type structures where each object is guaranteed to meet some contract/type/criteria/function. Preferably in as simple and general a way as possible.

Filesystems don't get us this. Databases sort of do, but they're chunky and it's very hard to update the schema. "Object databases" tried to do this but failed - I think they embedded too much code.

@freakazoid @Sandra @enkiv2 @wim_v12e

Some people are advocating SQLite for this kind of personal data store. I still think SQL is too slow and chunky for this, but maybe it's okay.

But whatever we used, it would have to deal with "what happens if someone imports a foreign disk / SD card / zipfile / SQLite file that's been constructed/edited by an adversary so the attestations it makes about contracts/schemas/types are deliberately evil and broken, how do we validate it fast and safely?"
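A rough sketch of that validation step, with hypothetical record fields (`word`, `pinyin`, `defs`): re-check every imported record against the local contract instead of trusting any attestations baked into the foreign file:

```javascript
// Sketch of the "adversarial import" problem: parse the foreign
// data, but trust nothing -- every record must pass the local
// contract predicate before it is accepted. Field names are
// invented for illustration.

const contract = (rec) =>
  rec !== null &&
  typeof rec === "object" &&
  typeof rec.word === "string" &&
  typeof rec.pinyin === "string" &&
  Array.isArray(rec.defs);

function importRecords(foreignJson, isValid) {
  const parsed = JSON.parse(foreignJson);
  if (!Array.isArray(parsed)) throw new Error("not a record list");
  const good = parsed.filter(isValid);
  return { good, rejected: parsed.length - good.length };
}

const hostile = JSON.stringify([
  { word: "好", pinyin: "hao3", defs: ["good"] },
  { word: 42, pinyin: null, defs: "evil" }, // forged record
]);
console.log(importRecords(hostile, contract).rejected); // 1
```

This is the slow-but-safe path: a linear scan over every record, rather than trusting the file's own claim that its contents are well-typed.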

Interesting thread, bookmarked.

I don't understand exactly what you are trying to do, but:
1. We've used sqlite a lot and it has been pretty bulletproof.
2. Depending on what you want to validate, some structure to the data (which a relational database gives you) can help.
3. It's not the only way to structure the data, though.

@freakazoid @Sandra @enkiv2 @wim_v12e

@bhaugen @freakazoid @Sandra @enkiv2 @wim_v12e

The sort of thing I'd like would be something like

"what if an interactive Javascript interpreter where every object gets persisted to a file"

Seems like trying to persist arbitrary binary objects to a SQL database might not be the best fit.

"Seems like trying to persist arbitrary binary objects to a SQL database might not be the best fit."

Nope. But maybe a Json DB? Those seem to be popping up like flies. Have no idea which are any good, though.

@freakazoid @Sandra @enkiv2 @wim_v12e

@enkiv2 @wim_v12e @bhaugen @Sandra @mathew @natecull Yeah I don’t know where this whole notion of SQL being slow and clunky or bloated comes from. SQLite, PostgreSQL, and MySQL/MariaDB have all had engineer-centuries put into them, and they’re rock solid, high-performance databases. They’re incredibly hard to beat. Most “schemaless” databases are extremely immature by comparison.

@bhaugen @wim_v12e @Sandra @mathew @natecull @enkiv2 I should say they’re incredibly hard to beat in any practical application. The NoSQL snake oil salesmen love to show how they beat them on narrow benchmarks or specific applications. But even if you have an application for which some other DB is faster, what about tooling? How do you back up? Can you take a consistent snapshot? How do you restore?

@bhaugen @mathew @natecull @enkiv2 @Sandra @wim_v12e For schema migration, your best bet is usually to stick an application-specific API in front of the database both to centralize all your data access code (including caching) and handle everything related to migration, including double-writing and falling back when a record isn’t in the new table yet.
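That double-write/fall-back pattern might look roughly like this, with plain Maps standing in for the old and new tables:

```javascript
// Sketch of the double-write migration pattern: during migration,
// writes go to both tables; reads prefer the new table and fall
// back for records that predate the migration.

const oldTable = new Map(); // legacy schema: value is a string
const newTable = new Map(); // new schema: { name, migratedAt }

const store = {
  write(id, name) {
    oldTable.set(id, name); // keep legacy readers working
    newTable.set(id, { name, migratedAt: Date.now() });
  },
  read(id) {
    if (newTable.has(id)) return newTable.get(id).name;
    return oldTable.get(id); // fallback: not yet migrated
  },
};

oldTable.set(1, "legacy-only row"); // written before the migration
store.write(2, "fresh row");
console.log(store.read(1)); // "legacy-only row" (fallback path)
console.log(store.read(2)); // "fresh row" (new table)
```

Once a backfill has copied every legacy row forward, the fallback branch and the old table can both be dropped.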

@freakazoid @bhaugen @mathew @natecull @Sandra @wim_v12e

Right, but if you are storing arbitrary objects, you literally don't have a schema. (Or, your "schema" is so granular that the DB is useless.)

Most real-world applications that use NoSQL would be better off using a properly designed RDBMS, no doubt. The use case here -- to provide MUMPS-style persistence of arbitrary Javascript objects (including code) within every interpreter cycle -- is not a good fit for an RDBMS or any existing NoSQL DB.

@wim_v12e @Sandra @bhaugen @mathew @enkiv2 @natecull Then the question becomes what it means to store “arbitrary objects”. Why are you doing it and what do you expect the database to do with their structure? Do you expect indexed search on every field an object might have? What do you plan to do with that?

OpenStreetMap’s tags are kinda like this, but they work fine with SQL.

@freakazoid @wim_v12e @Sandra @bhaugen @mathew @enkiv2

"Then the question becomes what it means to store “arbitrary objects”. Why are you doing it and what do you expect the database to do with their structure?"

Well I would like to have an environment like this because I want a "conversational relationship" with my computer.

That is, I want to be able to boot up a prompt (let's say command line, BUT, it could also be a graphical shell of an "object explorer" type) and do the following:

@freakazoid @wim_v12e @Sandra @bhaugen @mathew @enkiv2

1. Create new namespaces or interaction contexts ("file" or "folder" or "project" equivalents). Preferably with automatic versioning/diffs/transactions.

2. Enter new data, interactively, or importing from other sources.

3. Construct queries of that data interactively.

4. Create metadata and rules/algorithms for grouping/constructing data.

5. Combine steps into scripts or functions

6. Save/load my session/data at any time.

@freakazoid @wim_v12e @Sandra @bhaugen @mathew @enkiv2

7. Be able to import/export data/rules/processes into other formats.

This is probably the same sort of problem space as notebooks in the Jupyter sense. I'd like to have one with less setup ceremony than Jupyter.

It's what oldschool computing environments like BASIC and dBASE II and even VisiCalc used to give us. We've drifted quite a bit away from that "conversational" model towards more formalised, structured, siloed "apps".
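Steps 1 and 6 of that wishlist could be sketched as a namespace that snapshots itself on every commit. Full copies here for simplicity; diffs would be the obvious refinement:

```javascript
// Sketch: a namespace / interaction context with automatic
// versioning -- every commit stores a deep snapshot, and any
// earlier version can be checked out again.

function makeNamespace(name) {
  const versions = [];
  let current = {};
  return {
    name,
    set(key, value) { current[key] = value; },
    get(key) { return current[key]; },
    commit() { versions.push(JSON.parse(JSON.stringify(current))); },
    checkout(v) { current = JSON.parse(JSON.stringify(versions[v])); },
    history() { return versions.length; },
  };
}

const ns = makeNamespace("chinese-study");
ns.set("好", "good"); ns.commit();
ns.set("好", "OVERWRITTEN"); ns.commit();
ns.checkout(0);
console.log(ns.get("好")); // "good" -- the old version is recoverable
```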

@freakazoid @wim_v12e @Sandra @bhaugen @mathew @enkiv2

At the moment my favourite tool is Node.js, which is KIND OF what I want, because I can at least

1) get a quite nice command line on a phone (as long as I've got Termux, which I might not have forever)
2) import data from a file
3) save data to a file (IF I'm running in Node and not a web browser - hopefully the File System API will help there)
4) interactively create objects/arrays/functions for data and queries

But persisting data is hard.

@freakazoid @wim_v12e @Sandra @bhaugen @mathew @enkiv2

My current problem I'm using this for is Chinese language learning.

Cos Chinese dictionaries are *big* (40,000 characters or so) and the data is weirdly shaped, has a rough schema but as I examine it and learn new things I keep needing to tweak the schema, and queries are things I want to be able to interactively grow.

But the most important thing is needing to boot it fast and run queries in the moment, as I'm encountering new text.
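A sketch of that interactive dictionary workflow. The line format assumed here is CC-CEDICT style (`Trad Simp [pin1 yin1] /gloss/gloss/`); a real dataset would need a more forgiving parser, and the queries are the kind you'd grow one at a time at the REPL:

```javascript
// Sketch: parse CC-CEDICT-style lines into objects, then query
// them interactively. Two sample lines stand in for a ~40,000
// entry dictionary file.

const raw = [
  "學 学 [xue2] /to learn/to study/",
  "好 好 [hao3] /good/well/",
];

function parseEntry(line) {
  const m = line.match(/^(\S+) (\S+) \[([^\]]+)\] \/(.+)\/$/);
  if (!m) return null; // the schema is rough; tolerate odd lines
  const [, trad, simp, pinyin, defs] = m;
  return { trad, simp, pinyin, defs: defs.split("/") };
}

const dict = raw.map(parseEntry).filter(Boolean);

// Queries grow interactively: start with exact lookup...
const lookup = (ch) => dict.filter((e) => e.simp === ch || e.trad === ch);
console.log(lookup("学")[0].pinyin); // "xue2"
```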

@freakazoid @wim_v12e @Sandra @bhaugen @mathew @enkiv2

And if my phone runs out of power or crashes and reboots, then Termux/Node just clears its RAM, and I lose all my working objects. So I'd like persistence so I don't lose my train of thought.


What you are describing seems a lot like Smalltalk. I was part of a skunkworks project in the mid-1990s that created a manufacturing shopfloor coordination system in Smalltalk.
It was like BASIC interpreters on LSD. Everything, your code and all of the data objects, was in the "image", which persisted.


@freakazoid @wim_v12e @Sandra @mathew @enkiv2


We tried using an object database so multiple people could use the same data, which worked, but had problems. It was very fast for queries, even complex graph traversals, but dog slow for updates, because every change put "dirty" marks on objects, the update needed to find all the dirty data, and then update parts of a big B-tree.

So we went back to single-user images that federated.

But great thread! Hope this fits.

@freakazoid @wim_v12e @Sandra @mathew @enkiv2

@bhaugen @freakazoid @wim_v12e @Sandra @mathew @enkiv2

Interesting! What does "federated single-user images" mean?

I never used Smalltalk but I understand that like Lisp, the "image" concept did cause lots of problems for data sharing and building clean projects for use by other organizations.

Forcing a recompile from source files, in the C fashion, is clumsy but it does seem to help with separating one organization from another and removing "debugging hooks".

I wish we could get both...

@bhaugen @freakazoid @wim_v12e @Sandra @mathew @enkiv2

Oh yeah while I haven't used Smalltalk I did briefly use a similar system called "Jade". It was like Visual BASIC with a persistent Smalltalk-like object database and object explorers.

I recall the database CONSTANTLY corrupted itself and you had to re-import your source code from text files. Also it was very slow compared to a SQL database, and queries had to be hand-coded; there was no query language. Interfacing with SQL was hard.

I never had a Smalltalk image corrupt itself. And because the code and the data were in the same image, once you loaded it, everything was very fast.

@freakazoid @wim_v12e @Sandra @mathew @enkiv2

> What does "federated single-user images" mean?

I got in an argument with my boss and quit before the federated part got implemented, but I am pretty sure the federation was mediated by a centralized ERP system.

One of my customers joined me and some other people in a startup that tried to implement the federation via message passing, like Smalltalk between images. We failed as a business and never got the fedi images working.

@freakazoid @wim_v12e @Sandra @mathew @enkiv2

Here's a tidbit somewhat about that skunkworks project:

It's more than half wrong, the "engine" was Smalltalk, but the AS/400 ERP system was the centralized mediator between the Smalltalk images.

@natecull @freakazoid @wim_v12e @Sandra @mathew @enkiv2

> I wish we could get both...

I can't prove it yet, but I think we can get both in the fediverse.

@freakazoid @wim_v12e @Sandra @mathew @enkiv2

Looks like my 2nd try to call Chartodon failed again...


Your chart is ready, and can be found here:

Things may have changed since I started compiling that, and some things may have been inaccessible.

The chart will eventually be deleted, so if you'd like to keep it, make sure you download a copy.

Thanks. If you get one, that will do for me, too. I tried again following your advice. Got no "processing" notification, maybe that one did not work either...

I see yours, and mine is now processing. So thanks again for your advice. That was my problem.



@bhaugen @natecull @freakazoid @wim_v12e @Sandra @mathew @enkiv2 yeah seconding this, one of my lecturers was a big name in the Smalltalk scene (from what I can tell) and basically used it exactly as you're describing.

@thornAvery @bhaugen @freakazoid @mathew @enkiv2 @wim_v12e @Sandra

The big problem with Smalltalk and Lisp "images", as I understand it, is that they made it quite hard to separate multiple projects on a single machine, or separate "build" version from "production" version, etc.

I presume this could probably be mitigated if instead of having just one image, you had a bunch of nested images where each sub-image was like a session or a version or a transaction that inherited from a previous one

@thornAvery @bhaugen @freakazoid @mathew @enkiv2 @wim_v12e @Sandra

I don't know if any image-based systems do this - but I'd quite like to experiment with one and see if it could work.

Like imagine if your "code" was in an image, and your "data" was another image that inherited from the code. Much like how Linux disk partition doctrine used to be back in the 1990s. Or like how cloud VM copy-on-write disk volumes work today.
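JavaScript's prototype chain already behaves a lot like those nested copy-on-write images. A rough sketch:

```javascript
// Sketch: nested "images" via the prototype chain. A frozen code
// image sits underneath; mutable data and session layers sit on
// top, and writes shadow rather than mutate the layers below.

const codeImage = Object.freeze({
  greet(name) { return "hello, " + name; },
});

// The data image inherits everything from the code image...
const dataImage = Object.create(codeImage);
dataImage.user = "nate";
console.log(dataImage.greet(dataImage.user)); // "hello, nate"

// ...and a session inherits from the data image, like a transaction:
const session = Object.create(dataImage);
session.user = "sandra";                  // shadows, doesn't mutate
console.log(session.greet(session.user)); // "hello, sandra"
console.log(dataImage.user);              // still "nate"
```

Throwing the session object away is the "rollback"; copying its own properties down a layer would be the "commit".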
