
#epistemology


"This paper advances the critical analysis of machine learning by placing it in direct relation with actuarial science as a way to further draw out their shared epistemic politics. The social studies of machine learning—along with work focused on other broad forms of algorithmic assessment, prediction, and scoring—tends to emphasize features of these systems that are decidedly actuarial in nature, and even deeply actuarial in origin. Yet, those technologies are almost never framed as actuarial and then fleshed out in that context or with that connection. Through discussions of the production of ground truth and politics of risk governance, I zero in on the bedrock relations of power-value-knowledge that are fundamental to, and constructed by, these technosciences and their regimes of authority and veracity in society. Analyzing both machine learning and actuarial science in the same frame gives us a unique vantage for understanding and grounding these technologies of governance. I conclude this theoretical analysis by arguing that contrary to their careful public performances of mechanical objectivity these technosciences are postmodern in their practices and politics."

journals.sagepub.com/doi/10.11

I need to find a path back to the core study of #philosophy that sustains me. It feels too distant. No, #AI #LLM tools won't be part of that - because of the fundamental flaw in their method and capability.
I need an outlet; maybe that will be here. I am out of the habit of posting anywhere and only returned to it this week.

The #epistemology and #systemsThinking based content needs to return to my brain.

Continued thread

Out of curiosity, I went to see when in fact "heed" dropped off in usage, and as I suspected, it was during the Enlightenment, just before the Revolutionary War. It tried to pull up again in the mid-19th century, then the 20th century put the final nail in its coffin. Again, probably because of its connections to the concept of obedience. To understand someone meant you would obey them, and after a while that didn't seem so fun. (I just picture some angry old father screaming at his children to heed him or else.) Our society is FAR less authoritarian than it once was.

(I saw a YouTube video on outsider artist Henry Darger last night, and jesus we have it good. I'd like to keep it that way and make things even better.)

wolframalpha.com/input?i=heed

#epistemology
#psychology
#etymology
#English
#AbuseCulture


English conflates the concepts of "hear" and "understand." Many conflicts get nowhere because we use the common phrasing, "You're not listening to me!" or "You didn't hear me!" when what we really mean is, "You didn't get me, I want you to make sense of what I'm saying."

The process of comprehending what someone has said is different from hearing their words. How many times have you said, "You're not listening!" and they were in fact "listening" but not getting it? How many times have you said, "No, I HEARD you!" when you did not, in fact, understand?

English used to have a snappy word for this: heed. To heed was both to hear AND to understand. And it also meant "obey," which might be why it fell out of favor (which itself reflects an interesting shift in cultural values). We DO in fact conflate "listen" with obedience sometimes, especially towards children. But not as much as we once did.

The fact that all words mean multiple things, and that English has some issues with which things are conflated, can really influence how we think and interact. It's worth trying to unpack that. Then I start thinking about how we could change English for the better.

#introduction

Hello, I'm a Social Worker from Mexico City. I'm interested in studying my discipline from a #Science and Technology Studies perspective.

Some other interests are: #philosophy of social sciences, #epistemology, scientific #practices and #community social work.

I want to connect with other people in the social sciences and in the #humanities to broaden my network.

This is my Dialnet webpage (in Spanish): dialnet.unirioja.es/servlet/au

See you all around!!

Dialnet: Cristian Urbalejo Luna, author profile.

If you understand Virtue Epistemology (VE), you cannot accept any LLM output as "information".

VE is an attempt to correct the various omniscience problems inherent in classical epistemologies, which all, to some extent, require a person to know what the Truth is in order to evaluate whether some statement is true.

VE prescribes that we should look to how the information was obtained, particularly in two ways:
1) Was the information obtained using a well-known method that is known to produce good results?
2) Does the method appear to have been applied correctly in this particular case?

LLM output always fails on point 1. An LLM will not look for the truth. It will just look for what is a probable combination of words. This means that an LLM is just as likely to combine a number of true statements in a way that is probable but false as it is to combine them in a way that is probable and true.

LLMs only sample the probability of word combinations. They don't understand the input, and they don't understand their own output.
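A minimal toy sketch of that sampling step, with made-up probabilities (nothing here is a real model - just an illustration of picking words by likelihood rather than truth):

import random

# Hypothetical next-token distribution. The model only tracks which
# continuations are probable; nothing in it encodes which one is true.
next_token_probs = {
    "Paris": 0.55,      # probable, and happens to be true
    "Lyon": 0.25,       # probable, but false
    "Marseille": 0.20,  # probable, but false
}

def sample_next_token(probs):
    # Weighted random choice: truth plays no role in the selection.
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print("The capital of France is", sample_next_token(next_token_probs))

Run it repeatedly and nearly half of the completions come out fluent but false.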

Only a damned fool would use it for anything, ever.

#epistemology #LLM #generativeAI #ArtificialIntelligence #ArtificialStupidity @philosophy

In other words, Generative AI and LLMs lack a sound epistemology, and that's very problematic:

"Bullshit and generative AI are not the same. They are similar, however, in the sense that both mix true, false, and ambiguous statements in ways that make it difficult or impossible to distinguish which is which. ChatGPT has been designed to sound convincing, whether right or wrong. As such, current AI is more about rhetoric and persuasiveness than about truth. Current AI is therefore closer to bullshit than it is to truth. This is a problem because it means that AI will produce faulty and ignorant results, even if unintentionally.
(...)
Judging by the available evidence, current AI – which is generative AI based on large language models – entails artificial ignorance more than artificial intelligence. That needs to change for AI to become a trusted and effective tool in science, technology, policy, and management. AI needs criteria for what truth is and what gets to count as truth. It is not enough to sound right, like current AI does. You need to be right. And to be right, you need to know the truth about things, like AI does not. This is a core problem with today's AI: it is surprisingly bad at distinguishing between truth and untruth – exactly like bullshit – producing artificial ignorance as much as artificial intelligence with little ability to discriminate between the two.
(...)
Nevertheless, the perhaps most fundamental question we can ask of AI is that if it succeeds in getting better than humans, as already happens in some areas, like playing AlphaZero, would that represent the advancement of knowledge, even when humans do not understand how the AI works, which is typical? Or would it represent knowledge receding from humans? If the latter, is that desirable and can we afford it?"

papers.ssrn.com/sol3/papers.cf

papers.ssrn.com: AI as Artificial Ignorance

Recently, I got several opportunities to discuss the reproducibility crisis in science. To help discuss that complex topic, we need to agree on a vocabulary.

My favorite one has been published by Manuel López-Ibáñez, Juergen Branke and Luis Paquete, and is summarized in the attached diagram, which you can also find here: nojhan.net/tfd/vocabulary-of-r

It's good that this topic is not fading away, but is gaining traction. "Slowly, but surely", as we say in French.

If you want a high-resolution version suitable for printing, do not hesitate to ask!

Finally in print, the last puzzle piece in a decade of thinking about communication across social networks from the perspective of ideal rational agents: how do we factor in 'dependence' - the fact that the same underlying evidence may, for example, reach us via multiple different reports, giving rise to double counting?

There is a strong intuition that multiple independent observations should carry more weight than dependent observations. But how much more?

We show that there is no (known) answer to this normative question in the general case. This renders a fundamental feature of human testimony unsolvable, meaning that acquiring knowledge via the testimony of others is much harder than typically assumed.
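A toy numeric sketch of the double-counting failure mode (hypothetical numbers; the paper's point is that no general correction is known - this only shows why naive updating overshoots):

# Bayesian update in odds form: posterior odds = prior odds * likelihood ratio.
def update_odds(prior_odds, likelihood_ratio):
    return prior_odds * likelihood_ratio

prior = 1.0  # even odds on hypothesis H
lr = 4.0     # each report favours H 4:1 *if* treated as independent

# Two genuinely independent reports: multiplying the ratios is correct.
independent = update_odds(update_odds(prior, lr), lr)  # 16:1

# Two reports that merely repeat the same underlying evidence:
# only one update is licensed, but a naive agent applies both.
counted_once = update_odds(prior, lr)                     # 4:1 (correct)
double_counted = update_odds(update_odds(prior, lr), lr)  # 16:1 (overconfident)

print(f"independent reports:       {independent}:1")
print(f"dependent, counted once:   {counted_once}:1")
print(f"dependent, double counted: {double_counted}:1")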

link.springer.com/epdf/10.1007

#epistemology #SocialNetworks @philosophy

link.springer.com: The problem of dependency

My favourite online comparative theologian, Dr. Justin Sledge of ESOTERICA, has put out part four of his Demiurge series; I'd recommend it if you're interested in how humans came up with all the crazy things we believe these days. Turns out wE lIvE iN a SimUlAtIoN is antique and creaky as well. youtube.com/watch?v=kq-CoIFf8l0 #Epistemology #Philosophy #ComparativeTheology #Occult #Religion