Of course Elsevier's "enhanced PDF viewer" tracks where you click, what you view, when you hide the page, etc., and then transmits a big base64 blob of events, along with your ID from the university proxy, when you leave. I'm sure it goes straight to SciVal for sale.
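For illustration only: a minimal sketch of how such an event log might be serialized into an opaque base64 blob before transmission. The event names and fields here are hypothetical; the actual payload format is not documented.

```python
import base64
import json

# Hypothetical event log a viewer-side tracker might accumulate
# (field names are invented for illustration).
events = [
    {"type": "click", "page": 3, "t": 1712},
    {"type": "visibilitychange", "hidden": True, "t": 1745},
]

# Serialize to JSON, then base64-encode into an opaque "blob"
# suitable for sending in a single request on page unload.
blob = base64.b64encode(json.dumps(events).encode("utf-8")).decode("ascii")

# The receiver simply reverses the encoding to recover the events.
decoded = json.loads(base64.b64decode(blob))
assert decoded == events
```

The point is that base64 makes the payload unreadable at a glance in network logs, but it is trivially decodable; it is obfuscation, not encryption.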
Is this the way we want science to work?
@jonny Tracking sucks. Behavioral tracking in science, education, (public) administration, and all other trust-based relations is irresponsible. It's time to speak out loudly and prohibit such "business models".
The #DFG wrote a warning paper on that matter.
@7daq0 @jonny it's also available in English: https://www.dfg.de/en/research_funding/programmes/infrastructure/lis/awbi/index.html
@7daq0 @jonny Here is the direct link to the PDF: https://www.dfg.de/download/pdf/foerderung/programme/lis/datentracking_papier_en.pdf
@jonny One more reason to open papers from the authors’ (static html) website or straight from SciHub, even if we have an institutional subscription.
@jonny Don't know about the pdf viewer as I never use it.
However, I hate Elsevier; for some reason, every time I print one of their PDFs it shrinks to half a page, and I have to keep separate print settings just for their articles...
@email@example.com you should use Sci-Hub. That's how science should work: freely shared with the world.
To answer your question: no, this is not what most of us want.
- Most of us do volunteer peer review for many papers; journals profit from that.
- Many journals charge for things that should be part of their job, e.g. language polishing.
- Many authors use journal templates. Fast for the journal, but wasted time for researchers if the paper is rejected and has to be adapted for the next journal.
- Fees for a single open-access article: £850 to £2,000.
Meanwhile, #Elsevier's annual profits are around £950 million, often made on publicly funded research.
@ultimape @jonny @rysiek See, that too; the entire university model is outmoded. But even within that model, the raw quantity of government subsidy being funneled into journal profits is astronomical. All of which small university presses, publishing under CC licenses with distributed hosting modeled on Sci-Hub, could supplant within moments.
I figure the only way to get around widespread surveillance capitalism is to just do the searching locally... that we get to back up a large collection of human knowledge is just a bonus.
This was a good read on the topic.
It's probably improved since then, obviously, but I definitely feel it myself because I read a ton of obscure journals.
About 50% of the stuff I read isn't on Sci-Hub, and I normally have to get it directly from the university. My understanding is that they don't scrape everything, but fetch papers on demand as people put in requests.
If we want to consider capturing everything, it quickly gets humbling.
One of the more frustrating problems is that a vast amount of knowledge is locked away in 100+ page theses that aren't really indexed by DOI or published in journals. And if we want to include everything cited in papers, it often means tracking down books that have been out of print since 1910 (I have done this personally).
http://wiki.archiveteam.org/index.php?title=Library_Genesis talks a lot more about the scale of the libgen thing.
Realistically, it might take a lot more: https://archive.org/web/petabox.php
@ultimape @deejoe Yes; Libgen is adequate for my layman's reading of scientific journals, but in my own field, likewise, I need theses and old books and papers in Russian that might, one hopes, have been scanned in by one of the students of some guy who referenced them once, and... on it goes.
@feonixrift Would this be a good opportunity for me to mention that US states are exempt from federal copyright infringement penalties?
(States, departments, branches, etc., etc. Oh, and state-run universities and colleges.)