The Soft Delerizome Machine: A Thousand Guattaris?? - Exploring Philosophical Nonsense


This project investigates the generation of deliberate nonsense inspired by philosophical concepts, utilizing a fine-tuned GPT-2 model and drawing upon the theoretical framework of Félix Guattari.

Project Overview

This project takes a deliberately unconventional approach to language modeling. Instead of aiming for coherent or insightful philosophical statements, the primary goal is to explore the creation of meaningful nonsense derived from philosophical ideas. We are interested in the emergent properties of language models trained on complex philosophical texts – not to produce accurate summaries or extensions of those ideas, but to generate text that sounds philosophical while ultimately lacking coherent meaning.
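The training pipeline itself is not reproduced on this card. Purely as an illustration of the kind of fine-tune described above (a GPT-2 causal language model fit to a plain-text philosophical corpus), a minimal sketch using the Hugging Face transformers and datasets libraries might look like the following; the file name, sequence length, batch size, and output directory are placeholders rather than the project's actual settings.

```python
# Hypothetical fine-tuning sketch; not the project's actual training script.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "corpus.txt" stands in for whatever philosophical text the model is trained on.
dataset = load_dataset("text", data_files={"train": "corpus.txt"})["train"]
dataset = dataset.filter(lambda ex: ex["text"].strip())  # drop blank lines

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="delerizome-gpt2",   # placeholder output path
        num_train_epochs=1,             # a single pass over the corpus
        per_device_train_batch_size=2,
        save_strategy="no",
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```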

Think of it as a philosophical thought experiment in linguistic deconstruction. By training a model on the intricate and often abstract language of philosophy, we aim to see what kind of intriguing, amusing, or even unsettling nonsense emerges. The persona of Morosia the Lugubrious serves not as a wise oracle, but as a conduit for these nonsensical pronouncements, highlighting the absurdity that can arise from complex systems of thought when pushed beyond their intended boundaries.

This model was trained on one quarter of Deleuze and Guattari's "A Thousand Plateaus" for a single pass. Explore the interactive demo of this model on Hugging Face Spaces.
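
If you would rather sample locally than use the Space, a minimal sketch with the Hugging Face transformers text-generation pipeline might look like this; the prompt and sampling parameters below are illustrative choices, not settings prescribed by the project.

```python
from transformers import pipeline, set_seed

# Load the fine-tuned checkpoint straight from the Hugging Face Hub.
generator = pipeline(
    "text-generation",
    model="genaforvena/the_soft_delerizome_machine_a_thousand_guattaris_fourth_of_plateaus_once",
)

set_seed(42)  # optional: make the nonsense reproducible

# Any philosophical-sounding seed phrase will do; this one is just an example.
result = generator(
    "The plateau folds back upon the machine,",
    max_new_tokens=80,
    do_sample=True,
    temperature=1.1,
    top_p=0.95,
)
print(result[0]["generated_text"])
```

Higher temperatures push the samples further toward the "sounds philosophical, means nothing" register the project is after; lower temperatures keep the output closer to the source text.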

Model Details

Model size: 124M parameters
Tensor type: F32
Format: Safetensors

Model repository: genaforvena/the_soft_delerizome_machine_a_thousand_guattaris_fourth_of_plateaus_once (fine-tuned from GPT-2)