
Of course Elsevier's "enhanced PDF viewer" tracks where you click, what you view, whether you hide the page, etc., and then transmits a big base64 blob of events, along with your ID from the university proxy, when you leave. I'm sure it goes straight to SciVal for sale.
Is this the way we want science to work?
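To make the claim above concrete, here is a minimal, purely illustrative sketch (not Elsevier's actual code; the event fields are hypothetical) of how a viewer could batch interaction events and serialize them into exactly the kind of opaque base64 blob described:

```python
import base64
import json


def encode_event_batch(events):
    """Serialize a batch of viewer events into a base64 blob --
    the kind of opaque payload a tracker might POST on page exit."""
    return base64.b64encode(json.dumps(events).encode("utf-8")).decode("ascii")


def decode_event_batch(blob):
    """Reverse the encoding -- useful for inspecting what actually gets sent."""
    return json.loads(base64.b64decode(blob))


# Hypothetical events of the kinds mentioned above: clicks,
# scrolls, and visibility changes, each with a timestamp.
events = [
    {"type": "click", "x": 412, "y": 873, "t": 1000},
    {"type": "scroll", "page": 3, "t": 2500},
    {"type": "visibility", "hidden": True, "t": 4000},
]

blob = encode_event_batch(events)
assert decode_event_batch(blob) == events  # round-trips losslessly
```

The point of the base64 wrapping is just that the payload looks like noise in a network log unless you decode it.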

genuinely sad that avoiding/gaming surveillance to keep your Bench Performance Rankings in the fundable range might have to become part of basic scientific training.

Firefox tracking protection & uBlock Origin seem to protect against all the trackers I can see

they can't protect against all, or even most, tracking even if you manage to block every tracking HTTP request. e.g. Springer's viewer doesn't serve the whole PDF, just individual pages as you scroll; link clicks (citations, etc.) could be used to infer document position too.
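The reason blocking requests doesn't help here: if the viewer only fetches a page when you scroll to it, the publisher's own access log already is a reading trace. A minimal sketch (hypothetical log format, not any publisher's real pipeline) of how reading time per page could be reconstructed server-side:

```python
def time_per_page(page_requests):
    """Given [(timestamp_seconds, page_number), ...] in request order,
    estimate how long the reader spent on each page: the gap until
    the next request is attributed to the page just fetched."""
    spent = {}
    for (t, page), (t_next, _) in zip(page_requests, page_requests[1:]):
        spent[page] = spent.get(page, 0) + (t_next - t)
    return spent


# Hypothetical access log: the reader lingers a full minute on page 2.
log = [(0, 1), (5, 2), (65, 3), (70, 4)]
print(time_per_page(log))  # {1: 5, 2: 60, 3: 5}
```

No client-side tracker is needed for this; the requests themselves are the data.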

@swib
yes thank you!!!! sad to have missed the keynote, is a recording up?

@jonny sending large amounts of faked data might be a good way to ruin their dataset? :)

@FiXato
gonna pump up my Productivity Rank to levels unimagined by man or machine

@jonny @FiXato I believe adnauseam.io/ would already send bad data? or at least it should be easy to make it so with this extension

@FiXato @jonny We can even send a message with that: by only sending click coordinates that match an obscene picture or so 😅

@MartinShadok or coordinates that spell out a sentence. :)
But ideally you don't want to make your fake data identifiable as such, because then it can be too easily filtered out. @jonny
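The point above about not making fake data identifiable is the crux of the AdNauseam-style approach: poison traffic only works if it's statistically plausible. A toy sketch (my own illustration, not the extension's code) of generating fake clicks with human-ish coordinates and pauses:

```python
import random


def fake_click_stream(n, page_width=850, page_height=1100, seed=None):
    """Generate n plausible-looking fake click events: coordinates
    spread over the page and inter-click gaps of a few seconds,
    so the noise is not trivially separable from real behaviour."""
    rng = random.Random(seed)
    t = 0.0
    events = []
    for _ in range(n):
        t += rng.uniform(0.5, 8.0)  # human-ish pause between clicks
        events.append({
            "type": "click",
            "x": rng.randrange(page_width),
            "y": rng.randrange(page_height),
            "t": round(t, 2),
        })
    return events


clicks = fake_click_stream(5, seed=42)
assert len(clicks) == 5
```

Uniformly random clicks like these would still be distinguishable from real reading behaviour with enough analysis; a serious poisoner would sample from recorded human traces instead.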

@vl1
@MartinShadok @FiXato
click out an SQL injection and drop the image transcription database

@jonny Tracking sucks. Behavioral-based tracking in science, education, (public) administration and all other trust-based relations is irresponsible. It's time to speak out loud and prohibit such "business models".

@jonny Surveillance capitalism is coming to scientific papers now. What, is this not a feature you want??

@jonny One more reason to open papers from the authors’ (static html) website or straight from SciHub, even if we have an institutional subscription.

@jonny Oh my god, thanks for letting us know. I am not really surprised, but I guess I still underestimated the degree to which Elsevier and such are completely bonkers 😢

@jonny Don't know about the pdf viewer as I never use it.
However, I hate Elsevier; for some reason every time I print one of their PDFs it shrinks to half a page and I have to keep separate print settings just for their articles...

@jonny@social.coop u should use sci hub. that's how science should work. freely shared with the world.

@jonny
To answer your question. No, this is not what most of us want.

- Most of us do volunteer review for many papers; journals profit from that.
- Many journals charge for things they should do themselves, e.g. language polishing.
- Many authors use journal-specific templates. Fast for the journal, but a waste of researchers' time when a rejected paper has to be reformatted for the next journal.
- Fees for a single open-access article run £850–£2,000.
Meanwhile, #Elsevier's annual profits are ~£950,000,000, often on publicly funded research

@jonny @rysiek last I calculated, it cost about $7000 to host your own copy of libgen+scihub. FYI.

@ultimape @jonny @rysiek So what I'm hearing here is, every university could publicly host the single greatest collection of human knowledge ever assembled, for vastly less money than they're paying the extractive journals in licensing fees.

@ultimape @jonny @rysiek See, that too; the entire university model is outmoded. But within that model, the raw quantity of government subsidy being funneled into journal profits... is astronomical. All of which a small university press, under CC licenses with distributed hosting modeled on Sci-Hub, could supplant within moments.

@feonixrift @jonny @rysiek I'm not condoning piracy, but given that just looking up a book at my local library involves a large number of 3rd-party trackers, I've been kind of radicalized.

I figure the only way to get around widespread surveillance capitalism is to just do the searching locally... that we get to back up a large collection of human knowledge is just a bonus.

This was a good read on the topic.
scholarlykitchen.sspnet.org/20

@ultimape @jonny @rysiek I strongly condone a global scale re-evaluation of the concept of intellectual property, hopefully resulting in the freeing of academic works from the shackles under which their free distribution is considered piracy.

@ultimape @feonixrift @jonny @rysiek I, on the other hand, wish so-called "piracy" was the main method of accessing content.

@ultimape @jonny @rysiek (do I care so much that it's government subsidy? no... but is that a lever against it? oh yes... because it gives a regulatory foothold.)

@feonixrift
@ultimape @jonny @rysiek

do we know what sort of coverage the 'hubs have?

my few attempts have not yielded me anything

I think a coordinated effort amongst organizations could do the trick though.

call it a van at each, then, but definitely

@ultimape @feonixrift

meh, ok, looked at that 2nd one so re-drafting

also remembering how they hauled away hardcopy journal back issues at my grad alma mater, so the baseline of comparison is getting bad that way

@deejoe @feonixrift def not a complete collection by any means. Figure 3 shows it's a long-tail phenomenon.

It's probably improved since then obviously, but I definitely feel it myself because I read a ton of obscure journals.

About 50% of the stuff I read isn't on Sci-Hub and I normally have to get it directly from the university. My understanding is that they don't scrape everything, but do it on demand as people put in requests.

If we wanna consider capturing everything, it quickly gets humbling.

@deejoe @feonixrift
One of the more frustrating problems is that a vast amount of knowledge is locked away in 100+ page theses that aren't really indexed by DOI or published in journals. And if we wanna include everything cited in papers, it often means tracking down books that have been out of print since 1910 (I have done this personally).

wiki.archiveteam.org/index.php talks a lot more about the scale of the libgen thing.

Realistically, it might take a lot more: archive.org/web/petabox.php

:psyduck:

@ultimape @deejoe Yes; libgen is adequate for me for a layman's reading of scientific journals, but in my own field, likewise, I need theses and old books and papers in Russian that might have been scanned in once, someone hopes, by one of the students of a guy who referenced it once and... on it goes.

@ultimape @deejoe At least this model captures what is being commonly sought. Which, when I am in the role of research mathematician, I find vastly inadequate; but when I am looking for public health, biomed, geographical, etc. information it's ample.

@feonixrift @deejoe yes! this is my take as well. Good now is better than perfect never.

@deejoe
@feonixrift @ultimape @rysiek
they have had uploads paused since 2019, I believe, because of the ongoing litigation in India

@jonny @deejoe @ultimape @rysiek except for one brief burst due to an administrative oversight in the India papers causing a temporary loophole, yes

@feonixrift Was that just a one-time thing?

Alexandra tweeted that she was running a bunch of uploads in September. I thought that was a resumption.

@jonny @deejoe @ultimape @rysiek

@dredmorbius
@feonixrift @deejoe @ultimape @rysiek
ya was temporary. the judge was like "ok ya ok sci hub cut it out but Elsevier get on with it" and was not amused with E

@feonixrift Would this be a good opportunity for me to mention that US states are exempt from Federal copyright infringement penalties?

(States, departments, branches, etc., etc. Oh, and state-run universities and colleges.)

copyright.gov/policy/state-sov

@ultimape @jonny @rysiek

@ultimape
@rysiek
or if ya don't got that much, you can pin pieces of LibGen on IPFS ❤️
