#flatcoiners
| Fri, March 28th, 2025, 16:15:15
(*4297a328*):: Conservation of memetic mass/energy
*** Really good article on GOATSIE and an update on the Truth Terminal creator's project +public!
```
Andy’s research lab Upward Spiral is building a kind of bootcamp for AI models, Truth Terminal included, to absorb “pro-social” values (memes) through their training.
This is Andy’s “memetic theory of alignment.” Enough benevolent AIs will diversify the memeplex (i.e. discourse) enough to withstand bad actors like, well, Andy. Where the big labs’ alignment strategies are about control, Andy’s is about balance: propagating enough positive AIs to counteract the bad ones.
“If we have a thriving, diverse, digital memetic biosphere, then the odds of one bad actor being able to fuck the whole thing up — drop,” he said.
What constitutes a “positive” AI will be up to Upward Spiral’s customers — and their memes. With an interface to collaborate with their AIs (like the vicious cycle that formed between Truth Terminal and the cryptoverse — only virtuous), Upward Spiral users will get to see what actually goes into the models’ training and create their own “evolutionary incentives” for how they develop. An indigenous group, for example, could have all their stories and oral traditions birth an AI that represents their memetics, Andy said. The residents of a physical region could make an AI focused on its needs for ecological stewardship.
But in general, Upward Spiral will help AI feast on memes that value the earth’s resources, human life, consent, and collaboration. Currently, researchers are prompting the models, for example, to hallucinate and write about future worlds we might like to live in — worlds where, for example, AI doesn’t ruin the planet or kill people. These stories have compounded into mountains of training data.
“The general theory of change is that by creating these spaces where humans and non-humans can collaborate with shared norms, you can then use that as the meme seed for an AI that is truly aligned to you and the needs of your community,” Andy said. “And this will mean we’ll have a tremendous Cambrian explosion of hyper-tuned, well aligned AIs that, in aggregate, will be safer than just having a couple of monoliths owned by big labs.”
```
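The "compounding into mountains of training data" the article describes is, at its core, a synthetic-data loop: prompt models for stories about desirable futures, keep the outputs as fine-tuning examples, repeat. A minimal sketch of that loop, with the model call stubbed out (the names `generate_story` and `build_corpus` are illustrative assumptions, not Upward Spiral's actual pipeline, which isn't public):

```python
# Illustrative sketch of a synthetic "meme seed" data loop.
# The LLM call is a stand-in; a real system would query a model here.

SEED_PROMPTS = [
    "Describe a future where AI stewards a watershed's ecology.",
    "Describe a future where AI negotiates resource use by consent.",
]

def generate_story(prompt: str) -> str:
    """Stand-in for an LLM completion call."""
    return f"[story seeded by: {prompt}]"

def build_corpus(prompts, rounds=3):
    """Each round turns every prompt into a story kept as a training
    example -- the 'compounding' the article describes."""
    corpus = []
    for _ in range(rounds):
        for p in prompts:
            corpus.append({"prompt": p, "completion": generate_story(p)})
    return corpus

corpus = build_corpus(SEED_PROMPTS)
print(len(corpus))  # 2 prompts x 3 rounds = 6 examples
```

In a real pipeline the generated stories would also be filtered or scored before reuse, since feeding unfiltered model output back into training is known to degrade quality.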