Knut Jägersberg (KnutJaegersberg)

AI & ML interests

NLP, opinion mining, narrative intelligence


Organizations

LLMs, Blog-explorers, Qwen, Social Post Explorers, M4-ai, Chinese LLMs on Hugging Face, Smol Community

Posts (21)

Anthropomorphic reasoning about neuromorphic AGI safety

Summary of "Anthropomorphic Reasoning About Neuromorphic AGI Safety"
This paper explores safety strategies for neuromorphic artificial general intelligence (AGI), defined as systems designed by reverse-engineering essential computations of the human brain. Key arguments and proposals include:

1. Anthropomorphic Reasoning Validity:
- Neuromorphic AGI’s design and assessment rely on models of human cognition, making anthropomorphic reasoning (attributing human-like traits to the system) a valid and critical tool for safety analysis. Comparisons to human behavior and neural mechanisms provide insight into AGI behavior and risks.

2. Countering Safety Criticisms:
- The authors challenge claims that neuromorphic AGI is inherently more dangerous than other AGI approaches. They argue all AGI systems face intractable verification challenges (e.g., real-world unpredictability, incomputable action validation). Neuromorphic AGI may even offer safety advantages by enabling comparisons to human cognitive processes.

3. Motivational Architecture:
- Basic drives (e.g., curiosity, social interaction) are essential for cognitive development and safety. These pre-conceptual, hardwired drives (analogous to human hunger or affiliation) shape learning and behavior. The orthogonality thesis (intelligence and goals as independent) is contested, as neuromorphic AGI’s drives likely intertwine with its cognitive architecture.

4. Safety Strategies:
- **Social Drives**: Embedding drives like caregiving, affiliation, and cooperation ensures AGI develops prosocial values through human interaction.
- **Bounded Reward Systems**: Human-like satiation mechanisms (e.g., diminishing rewards after fulfillment) prevent extreme behaviors such as paperclip maximization; see the sketch after this list.
- **Developmental Environment**: Exposure to diverse, positive human interactions and moral examples fosters sound moral development.
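
As a rough illustration of the bounded-reward idea, here is a minimal Python sketch of a satiating reward signal. This is not the paper's formalism: the function name `satiating_reward`, the `satiation_scale` parameter, and the exponential decay are assumptions chosen for clarity.

```python
import math

def satiating_reward(raw_reward: float, cumulative: float,
                     satiation_scale: float = 10.0) -> float:
    """Diminish marginal reward as cumulative fulfillment grows.

    Hypothetical illustration: marginal utility decays exponentially
    with how much of the drive has already been satisfied, so pursuing
    one goal forever yields vanishing returns instead of unbounded ones.
    """
    return raw_reward * math.exp(-cumulative / satiation_scale)

# An agent repeatedly earning the same raw reward sees its
# effective reward shrink toward zero as it becomes "sated".
consumed = 0.0
for step in range(5):
    effective = satiating_reward(1.0, consumed)
    consumed += 1.0
    print(f"step {step}: effective reward = {effective:.3f}")
```

Contrast this with a standard maximizer, whose incentive to acquire more of the same resource never diminishes.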

https://ccnlab.org/papers/JilkHerdReadEtAl17.pdf
Evolution and The Knightian Blindspot of Machine Learning


The paper discusses machine learning's limitations in addressing Knightian uncertainty (KU), highlighting the fragility of methods like reinforcement learning (RL) in unpredictable, open-world environments. KU refers to uncertainty that cannot be quantified or predicted, a challenge RL fails to handle because of its reliance on fixed data distributions and narrow problem formalisms.


### Key Approaches:

1. **Artificial Life (ALife):** Simulating diverse, evolving systems to generate adaptability, mimicking biological evolution's robustness to unpredictable environments.

2. **Open-Endedness:** Creating AI systems capable of continuous innovation and adaptation, drawing inspiration from human creativity and scientific discovery (a toy novelty-search sketch follows this list).

3. **Revising RL Formalisms:** Modifying RL formalisms to handle dynamic, open-world environments by integrating more flexible assumptions and evolutionary strategies.
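
To make the open-endedness idea concrete, below is a minimal, self-contained novelty-search loop in Python. It is a toy sketch, not the paper's proposal: the genome, the one-dimensional behavior descriptor, and the `novelty` function are all illustrative assumptions; real open-ended systems use far richer behavior spaces.

```python
import random

def behavior(genome):
    # Toy behavior descriptor: a single number summarizing what the
    # genome "does". (Assumption: real systems use richer descriptors.)
    return sum(genome)

def novelty(candidate_behavior, archive, k=5):
    # Mean distance to the k nearest behaviors seen so far; higher
    # means "more novel". With an empty archive everything is novel.
    if not archive:
        return float("inf")
    dists = sorted(abs(candidate_behavior - b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

random.seed(0)
archive = []  # record of behaviors discovered so far
population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(20)]

for generation in range(50):
    # Select for novelty rather than a fixed objective -- the core
    # move behind open-ended, ALife-style search.
    ranked = sorted(population,
                    key=lambda g: novelty(behavior(g), archive),
                    reverse=True)
    parents = ranked[:10]
    archive.extend(behavior(g) for g in parents[:2])
    # Next generation: mutated copies of the most novel parents.
    population = [[gene + random.gauss(0.0, 0.1)
                   for gene in random.choice(parents)]
                  for _ in range(20)]

print(f"behaviors archived: {len(archive)}, "
      f"spread: {max(archive) - min(archive):.2f}")
```

Because selection pressure rewards doing something different from what the archive already contains, the population keeps drifting into new behaviors instead of converging on a single optimum, which is the kind of adaptability the paper argues fixed RL objectives lack.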

These approaches aim to address ML’s limitations in real-world uncertainty and move toward more adaptive, general intelligence.

https://arxiv.org/abs/2501.13075