#geneticalgorithms

Via the @ataripodcast: in a 25-minute video, Jean Michel Sellier, Research Assistant Professor at Purdue University, demonstrates the use of an #Atari800XL to train a neural network using a genetic algorithm instead of the memory-hungry technique of gradient descent.

hackaday.com/2025/02/21/geneti
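(To get a feel for the idea without an Atari: here's a minimal Python sketch, entirely a toy of my own and not Sellier's code, of evolving a tiny network's weights with a genetic algorithm. No gradients, no backprop, just mutate-and-select; the architecture and hyperparameters are arbitrary assumptions.)

```python
import random

def forward(w, x):
    # A toy 1-input, 2-hidden-unit ReLU network: w = [w1, w2, v1, v2].
    h1, h2 = max(0.0, w[0] * x), max(0.0, w[1] * x)
    return w[2] * h1 + w[3] * h2

def fitness(w, data):
    # Negative squared error: higher is better.
    return -sum((forward(w, x) - y) ** 2 for x, y in data)

data = [(x, 2.0 * x) for x in range(-3, 4)]      # toy target: y = 2x
pop = [[random.uniform(-2, 2) for _ in range(4)] for _ in range(50)]
for gen in range(300):
    pop.sort(key=lambda w: fitness(w, data), reverse=True)
    parents = pop[:10]                           # keep the fittest
    pop = parents + [
        [g + random.gauss(0, 0.2) for g in random.choice(parents)]
        for _ in range(len(pop) - len(parents))
    ]
print(pop[0], fitness(pop[0], data))
```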

I've had a soft spot for Artificial Life for a long time. During the last AI Winter in the mid-1990s, I was spurred to get back into education, and onto a career in commercial software development, by Steven Levy's book "Artificial Life: The Quest for a New Creation". I loved that Artificial Life researchers borrowed well-understood mechanisms from genetics and implemented them in software to converge iteratively on solutions, in contrast to AI research, which was attempting to build models of categories that were not understood at all (and largely still aren't): intelligence (whatever that is) and perception.

In subsequent years I wondered why I wasn't hearing any hype about Artificial Life; it turns out practitioners have been quietly getting on with solving problems using the technique. Meanwhile, yet again, AI boosters have blustered their way into the public consciousness with another round of overcooked hype.

The Steven Levy book is still worth a read, if you can find it. (IIRC Danny Hillis and the Connection Machine folks get a mention too.)

(I don't know if any of the genetic algorithm folks turned out to be supporters of eugenics, as many of the current crop of AI boosters seem to be.)

archive.org/details/artificial

en.m.wikipedia.org/wiki/AI_win

Not that I have the free time to take on another project, but there's a part of me that wants to do a thorough exploration of argmax and write up what I find, if only as notes. Math-y and science-y people take it for granted; search engines prefer telling you about the numpy function of that name. But it turns out argmax has (what I think are) interesting subtleties.

Here's one. If you're given a function, you can treat argmax of that function as a set-valued function varying over all subsets of its domain, returning a subset--the argmaxima, let's call them--of each subset. argmax_{x∈S} f(x) is a subset of S, for any S that is a subset of the function f's domain. Another way to think of this is that argmax induces a 2-way partitioning of any such input set S into those elements that are in the argmax and those that are not.
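To make the set-valued reading concrete, here's a minimal Python sketch (the name argmax_set is mine, purely illustrative):

```python
# argmax as a set-valued function: given f and any finite subset S of its
# domain, return the subset of S where f attains its maximum.
def argmax_set(f, S):
    S = set(S)
    if not S:
        return set()
    m = max(f(x) for x in S)
    return {x for x in S if f(x) == m}

f = lambda x: -(x - 3) ** 2                # peak at x = 3
print(argmax_set(f, {0, 1, 2, 3, 4}))      # {3}
print(argmax_set(f, {0, 1, 5, 6}))         # {1, 5}: ties are both argmaxima
# The induced 2-way partition of S: argmax_set(f, S) and S - argmax_set(f, S).
```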

Now imagine you have some way of splitting any subset of some given set into two pieces, one piece containing the "preferred" elements and the other piece the rest, separating the wheat from the chaff if you will. It turns out that in a large variety of cases, given only a partitioning scheme like this, you can find a function whose argmax is exactly that partitioning. In fact you can say more: you can find such a function whose codomain is (a subset of) some n-dimensional Euclidean space. You might have to relax the definition of argmax slightly (but not fatally) to make this work, but you frequently can (1). It's not obvious this should be true, because the partitioning scheme you started with could be anything at all (as long as it's deterministic--that bit's important). That's one thing that's interesting about this observation.
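Here's a toy illustration of the recovery direction, under the (strong, simplifying) assumption that the partitioning scheme is consistent enough to admit a single scalar objective; the general result needs ℝ^n, and the function and names below are mine, not from any particular paper:

```python
# Peel off the preferred elements of the remaining pool, layer by layer,
# and score each layer; argmax of the score then reproduces the scheme.
# Assumes concept(S) is nonempty for every nonempty S.
def objective_from_concept(concept, domain):
    score, pool, layer = {}, set(domain), 0
    while pool:
        preferred = concept(pool)
        for x in preferred:
            score[x] = -layer              # earlier layers score higher
        pool -= preferred
        layer += 1
    return score

# Example concept: prefer the shortest strings present in the set.
concept = lambda S: {x for x in S if len(x) == min(map(len, S))}
score = objective_from_concept(concept, {"a", "bb", "ccc", "dd"})
S = {"bb", "ccc"}
best = max(score[y] for y in S)
assert {x for x in S if score[x] == best} == concept(S)
```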

Another, deeper reason this is interesting (to me) is that it connects two concepts that superficially look different, one being "local" and the other "global". This notion of partitioning subsets into preferred/not preferred pieces is sometimes called a "solution concept"; the notion shows up in game theory, but is more general than that. You can think of it as a local way of identifying what's good: if you have a solution concept, then given a set of things, you're able to say which are good, regardless of the status of other things you can't see (because they're not in the set you're considering). On the other hand, the notion of argmax of a function is global in nature: the function is globally defined, over its entire domain, and the argmax of it tells you the (arg)maxima over the entire domain.

In evolutionary computation and artificial life, which is where I'm coming from, such a function is often called an "objective" (or "multiobjective") function, sometimes a "fitness" function. One of the provocative conclusions of what I've said above for these fields is that as soon as you have a deterministic way of discerning "good" from "bad" stuff--aka a solution concept--you automatically have globally-defined objectives. They might be unintelligible, difficult to find, or not very interesting or useful for whatever you're doing, but they are there nevertheless: the math says so. The reason this is provocative is that every few years in the evolutionary computation or artificial life literature there pops up some new variation of "fitnessless" or "objective-free" algorithms that claim to find good stuff of one sort or another without the need to define objective function(s), and/or without the need to explicitly climb them (2). The result I'm alluding to here strongly suggests that this way of thinking lacks a certain incisiveness: if your algorithm has a deterministic solution concept, and the algorithm is finding good stuff according to that solution concept, then it absolutely is ascending objectives. It's just that you've chosen to ignore them (3).

Anyway, returning to our friend argmax, it looks like it has a kind of inverse: given only the "behavior" of argmax of a function f over a set of subsets, you're often able to derive a function g that would lead to that same behavior. In general g will not be the same as f, but it will be a sibling of sorts. In other words there's an adjoint functor or something of that flavor hiding here! This is almost surely not a novel observation, but I can say that in all my years of math and computer science classes I never learned this. Maybe I slept through that lecture!
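Continuing the toy sketches above (with the same caveat that they only cover the scalar-objective case): derive a sibling g from nothing but the argmax behavior of f, then check that argmax can't tell them apart:

```python
f = lambda x: (x - 2) ** 2                     # an arbitrary example f
domain = {0, 1, 2, 3, 4, 5}
concept = lambda S: argmax_set(f, S)           # f's argmax "behavior"
g = objective_from_concept(concept, domain)    # g as a score table
for S in [{0, 1, 2}, {2, 3, 5}, domain]:
    assert argmax_set(g.get, S) == argmax_set(f, S)
# g's values differ from f's, yet they induce the same partitions.
```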

#ComputerScience #math #argmax #SolutionConcepts #CoevolutionaryAlgorithms #CooptimizationAlgorithms #optimization #EvolutionaryComputation #EvolutionaryAlgorithms #GeneticAlgorithms #ArtificialLife #InformativeDimensions



(1) If you're familiar with my work on this stuff then the succinct statement is: partial order decomposition of the weak preference order induced by the solution concept, when possible, yields an embedding of weak preference into ℝ^n for some finite natural number n; the desired function can be read off from this (the proofs about when the solution concept coincides with argmax of this function have some subtleties but aren't especially deep or hard). I skipped this detail, but there's also a "more local" version of this observation, where the domain of applicability of weak preference is itself restricted to a subset, and the objectives found are restricted to that subdomain rather than fully global.

(2) The latest iteration of "open-endedness" has this quality; other variants include "novelty search" and "complexification".

(3) Which is fair of course--maybe these mystery objectives legitimately don't matter to whatever you're trying to accomplish. But in the interest of making progress at the level of ideas, I think it's important to be precise about one's commitments and premises, and to be aware of what constitutes an impossible premise.

Emergent phenomena in an evolving system seem to be tied to the appearance of dynamic processes that are qualitatively new and unpredictable from the system's initial characteristics. This is why emergence is hard to model: the model in use can suddenly become inadequate to capture new dynamic manifestations, and thus the true identity of the system.

In this simulation (the first video is a recording of a session) I tried to build a metamodel that describes some emergent phenomena during the evolution of the organism-environment dynamical system.
Using genetic algorithms, I built a system of artificial agents that interact by playing a spatial version of the Iterated Prisoner's Dilemma (Axelrod, 1984; Axelrod and Dion, 1988).

The system, squeezed between the constraints of growing internal complexity and the impossibility of splitting apart, spontaneously keeps itself within entropy values that allow it to maintain coherence and "identity" over time, letting new Nash equilibria emerge in the payoff table (culture? economics? politics?...) that governs exchanges between the artificial agents; the new Nash equilibria emerging in the payoff table in turn ease the emergence of "clans" that replicate according to the genetic dynamics of crossover.

In each generation, the actions of the "winning clan" help modify the payoff table toward rules more advantageous to their logic of interaction (cooperate/refuse/defect).

Each time, the newly emerged Nash equilibrium persists for several generations, oscillating between different entropy values, until a new "clan" of virtual agents prevails over the others, steering the whole system toward the emergence of a new Nash equilibrium in the payoff table.
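(For readers who want to tinker: below is a minimal Python sketch of the core loop, my own reconstruction rather than the simulation's actual code. It keeps the genetic algorithm, the Iterated Prisoner's Dilemma, and crossover, but omits the spatial grid and the evolving payoff table; the payoff values and the memory-1 strategy encoding are standard textbook choices, not taken from the simulation.)

```python
import random

# Standard IPD payoffs: (my score, opponent's score).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(a, b, rounds=20):
    # Memory-1 strategies: each agent reacts to the opponent's last move.
    sa = sb = 0
    last_a, last_b = "C", "C"
    for _ in range(rounds):
        move_a, move_b = a[last_b], b[last_a]
        pa, pb = PAYOFF[(move_a, move_b)]
        sa, sb = sa + pa, sb + pb
        last_a, last_b = move_a, move_b
    return sa, sb

def random_agent():
    # What to play after the opponent's C, and after their D.
    return {"C": random.choice("CD"), "D": random.choice("CD")}

def crossover(p, q):
    return {k: random.choice([p[k], q[k]]) for k in "CD"}

pop = [random_agent() for _ in range(30)]
for gen in range(50):
    scores = [0] * len(pop)
    for i in range(len(pop)):
        for j in range(i + 1, len(pop)):
            sa, sb = play(pop[i], pop[j])
            scores[i] += sa
            scores[j] += sb
    ranked = [a for _, a in sorted(zip(scores, pop), key=lambda t: -t[0])]
    elite = ranked[: len(pop) // 2]            # the "winning clan"
    pop = elite + [crossover(random.choice(elite), random.choice(elite))
                   for _ in range(len(pop) - len(elite))]
```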
#ai #geneticalgorithms #gametheory

Interesting article about D-Wave "#quantum" annealing computers. arstechnica.com/science/2023/0

My niche in the aughts was "#scheduling" optimization (90% of the time it was bin-packing, technically). I started with #SimulatedAnnealing and #GeneticAlgorithms, but generally found approximation algorithms (Vazirani's book) to be most practical. PS It's a great niche. #SoftwareDevelopment
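For a flavor of the approximation-algorithms side (a classic from texts like Vazirani's, not anything proprietary from that job): first-fit decreasing for bin packing, which provably uses at most roughly 11/9 of the optimal number of bins, plus a small additive constant.

```python
# First-fit decreasing: sort items largest-first, place each item into the
# first bin it fits in, opening a new bin only when none fits.
def first_fit_decreasing(items, capacity):
    bins = []
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])   # no existing bin fits; open a new one
    return bins

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10))
# [[8, 2], [4, 4, 1, 1]]
```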
