Beckett Dillon (Severian)
AI & ML interests: I make music, teach machines, study nature, and build things.
Recent Activity
- Liked a Space 6 days ago: webml-community/moonshine-web
- Liked a Space 7 days ago: vespa-engine/colpali-vespa-visual-retrieval
- Liked a Space 8 days ago: givkashi/SwinIR-Super-resolution
Posts
Early Morning Before Work Project:
Introducing Cascade of Semantically Integrated Layers (CaSIL): A Humorously Over-Engineered Algorithm That Actually… Works 🤷‍♂️
Let me introduce CaSIL, the Cascade of Semantically Integrated Layers. Imagine giving a single question the level of introspection typically reserved for philosophical debates or maybe therapy. In short, CaSIL is a pure Python reasoning algorithm that, in a series of semantically rich layers, takes any input and rebuilds it into a nuanced response that's (surprisingly) meaningful to a human.
I've been experimenting with various reasoning and agent approaches lately and decided to contribute my own quirky take on layered processing. It's built without agent frameworks, just good ol' Python and math, and it plays nicely with any LLM. The result? A transformation from simple responses to deeper, interconnected insights. Here's a quick peek at the steps:
✨ How CaSIL Works:
1. Initial Understanding: The first layer captures the basic concepts in your input, as a warm-up.
2. Relationship Analysis: A lightweight knowledge graph (because why not?) maps out related ideas and builds interconnections.
3. Context Integration: Adds historical or contextual knowledge, bringing a bit of depth and relevance.
4. Response Synthesis: Pieces it all together, aiming to produce a response that feels more like a conversation than an outdated search result.
Does it work? Yes! And in record time, too. Admittedly, the code is rough: two days of intense coding with some friendly help from Claude. The beauty of CaSIL is its simplicity and versatility; it's a pure algorithm without complex dependencies, making it easy to integrate into your own LLM setups.
Explore the repo here: https://github.com/severian42/Cascade-of-Semantically-Integrated-Layers
Example outputs: https://github.com/severian42/Cascade-of-Semantically-Integrated-Layers/blob/main/examples.md
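The four layers described above can be sketched as a simple cascade of prompt transformations. This is a minimal illustration only, not the repo's actual API: `call_llm` is a hypothetical placeholder for whatever LLM backend you plug in, and all function names here are invented for the example.

```python
# Minimal sketch of a CaSIL-style layered pipeline.
# All names (call_llm, cascade, etc.) are illustrative, not the repo's API.

def call_llm(prompt: str) -> str:
    """Placeholder for any LLM backend (hypothetical stub)."""
    return f"[LLM response to: {prompt[:40]}...]"

def initial_understanding(query: str) -> str:
    # Layer 1: extract the basic concepts from the input.
    return call_llm(f"List the core concepts in: {query}")

def relationship_analysis(concepts: str) -> str:
    # Layer 2: map interconnections between the extracted concepts.
    return call_llm(f"Map the relationships between these concepts: {concepts}")

def context_integration(relationships: str) -> str:
    # Layer 3: enrich the concept graph with background context.
    return call_llm(f"Add relevant background context to: {relationships}")

def response_synthesis(context: str, query: str) -> str:
    # Layer 4: produce the final, context-aware answer.
    return call_llm(f"Using this context:\n{context}\nAnswer: {query}")

def cascade(query: str) -> str:
    """Run the four layers in sequence, each feeding the next."""
    concepts = initial_understanding(query)
    relationships = relationship_analysis(concepts)
    context = context_integration(relationships)
    return response_synthesis(context, query)

print(cascade("Why do coral reefs matter?"))
```

Because each layer is just a string-to-string transformation, any LLM client can be dropped in behind `call_llm` without touching the cascade itself.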
I'm excited to share a really cool milestone in my AI/LLM journey.
Brief backstory: Before diving into AI, I spent over a decade working in ecological fields such as the conservation corps, biodynamic farming, and natural habitat restoration. This background instilled in me a deep concern about the environmental impact of scaling AI without sustainable practices.
Driven by this concern, I've spent months planning and experimenting to make my AI work more eco-friendly. I'm thrilled to announce that I've successfully transitioned my entire operation to run on 100% sustainable solar power!
My current setup includes multiple linked Mac Pro tower desktops and custom code built from open-source libraries. While it's a bit experimental, this configuration is working great for my needs. All my LLM research, development, and client services now run exclusively on solar energy.
I'm curious if anyone else here has experimented with renewable energy for their LLM work?
For those interested in more details, I've written a brief blog post about this journey here: https://medium.com/@betalabsllm/powering-the-future-be-ta-labs-revolutionary-100-solar-powered-ai-operation-444433e61d43
Collections (5)

Various versions of the same models that have been trained on the Internal Knowledge Map dataset using different methods and frameworks:
- Severian/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B (Text Generation • Updated • 26 • 1)
- Severian/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B-GGUF (Text Generation • Updated • 17)
- Severian/Nexus-4x7B-IKM-MLX (Text Generation • Updated • 68 • 3)
- Severian/Nexus-4x7B-IKM-GGUF (Updated • 26 • 17)

Different chat versions of the ANIMA model
Spaces (30)

Models (44)
- Severian/Nexus-IKM-RolePlay-StoryWriter-Hermes-2-Pro-7B-GGUF (Text Generation • Updated • 22 • 1)
- Severian/Jamba-v0.1-Claude-Chat-GGUF (Updated • 29 • 3)
- Severian/Jamba-Bagel-GGUF (Updated • 9 • 4)
- Severian/Jamba-UltraInteract-Instruct-1B-gguf (Updated • 22 • 2)
- Severian/Jamba-Nexus-4xMoE (Text Generation • Updated • 42 • 10)
- Severian/Jamba-900M-GGUF (Updated • 62 • 11)
- Severian/Llama-3-IMPACTS-2x8B-64k-GGUF (Text Generation • Updated • 98 • 2)
- Severian/Llama-3-IMPACTS-2x8B-64k-MLX (Text Generation • Updated • 15 • 4)
- Severian/Jamba-Hercules (Text Generation • Updated • 19 • 12)
- Severian/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B (Text Generation • Updated • 26 • 1)
Datasets (6)
- Severian/IMPACTS (Viewer • Updated • 47.7k • 62 • 5)
- Severian/Biomimicry-Nectar-BioDesign-STEM (Viewer • Updated • 2.04M • 54 • 2)
- Severian/Internal-Knowledge-Map (Viewer • Updated • 4.69k • 112 • 44)
- Severian/Internal-Knowledge-Map-StoryWriter-RolePlaying (Viewer • Updated • 2.07k • 51 • 11)
- Severian/Bio-Design-Process (Viewer • Updated • 60k • 59 • 2)
- Severian/Biomimicry (Viewer • Updated • 4.85k • 57 • 3)