sleeping4cat committed
Commit 35799fe · verified · 1 Parent(s): 94520a1

Update README.md

Files changed (1)
  1. README.md +10 -1
README.md CHANGED
@@ -11,4 +11,13 @@ pinned: false
  <img src="Ai.png" alt="AI Image" width="150" height="150">
  </div>
 
- A pioneering lab focused exclusively on music, audio, and natural language processing research. Inspired by visionary collectives like Open-SCI and disruptive labs like Google Brain, we have no interest in flashy products or hollow promises. Our singular mission is to push the boundaries of research, driving innovation that amplifies collective human intelligence. We are determined to become a feared name in the field, shaping the future of AI and its real-world applications in art and beyond.
+ Sleeping AI is a small lab funded through friendly means and by other big players. We are not oriented toward products, nor toward promises like delivering AGI in the next couple of years. Think of us as a collective such as Open-SCI, or a name like Google Brain (now defunct).
+
+ Our motive is simple: we are interested in doing research that is revolutionary and field-reviving. By that we mean conducting research that pushes everyone to think 100 years ahead. In the past, OpenAI and Google Brain were well known for such experiments and research.
+
+ Unfortunately, due to recent developments, they have shifted focus to consumer-facing research that builds hype rather than truly challenging the fabric of what's possible. Sleeping AI was created to fill that void. We like to think of ourselves as a collective that is extremely collaborative.
+
+ We collaborate, research, and publish. As a lab, we focus our research on music, audio, and natural language processing, because these areas are both exciting and challenging.
+
+ **Current status:** Pre-alpha and looking for investors.
+ **Research update:** We are planning to submit two papers to NeurIPS 2025.