text: string (lengths 1 to 32.2k)
label: string (29 classes)
dataType: string (2 classes)
communityName: string (29 classes)
datetime: string (173 classes)
username_encoded: string (lengths 136 to 160)
url_encoded: string (lengths 220 to 528)
[https://github.com/predibase/lorax](https://github.com/predibase/lorax) Predibase/LoRAX is a really interesting repo. It solves a major problem with using adapters, i.e., assigning an adapter dynamically per request. Has anyone tried it out?
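For context, the dynamic assignment LoRAX enables means the adapter is named per request rather than baked into the deployment. A minimal sketch of what such a request payload might look like — the `/generate`-style `inputs`/`parameters` shape and the `adapter_id` field follow LoRAX's documented interface, but treat the exact field names and adapter IDs here as assumptions to verify against the repo:

```python
import json
from typing import Optional

def build_lorax_request(prompt: str, adapter_id: Optional[str] = None,
                        max_new_tokens: int = 64) -> dict:
    """Build a LoRAX-style generate payload; adapter_id selects a LoRA adapter per request."""
    params = {"max_new_tokens": max_new_tokens}
    if adapter_id is not None:
        # One base-model deployment serves many adapters; this field picks one dynamically.
        params["adapter_id"] = adapter_id
    return {"inputs": prompt, "parameters": params}

# Two requests against the same deployment, each routed to a different (hypothetical) adapter:
req_a = build_lorax_request("Summarize: ...", adapter_id="org/summarize-lora")
req_b = build_lorax_request("Translate: ...", adapter_id="org/translate-lora")
print(json.dumps(req_a, indent=2))
```

Omitting `adapter_id` would fall back to the base model, which is what makes serving many fine-tunes on one GPU practical.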
r/machinelearning
post
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtTmliS3BPUjg2dlBHOWVhdndVMl9NRFJyU05ZUUpMZFBvNTBGY2tuOXVoNXRCMVpBdkMzNGw5aDRRU1JlRXhnUnNfZkxoajFMeUdSU3dlV1FJZkFDQzFOcVpzSl9yVjZhdUtFR0Q1ZkN5cEk9
Z0FBQUFBQm5meDJtQ1JIRWxCT2pKZW94cEE5bjg3NkxwQ3FmU3lMQ1VRM0laMnZQZHprMTk0ZjVmb3l5a3k4NXlab2FjV0t4Tkt0dmxSbzlMLTN2ZXh1UW1ERHBXQXFTNHBvVTlmM2VldkM5SWxEVzhBVnZyQktxY3ctTm1JSmtTVGRKYkxPeW51WU1lUUhoeVJkcGV6M21mYnlaYWIwZVhRdFBVZE5EaHJkczd0b1ROeTkyOVdWYVpsR3BydWdmM1JSN3VnNHRGMDBB
Aside from the usual suspects (ChatGPT, Bard, ...), which AI tools have a valuable use case in a business or job? Most AI tools I see being paraded in viral tweets and YouTube videos seem gimmicky (e.g. an app that turns you into a robot when you use the front camera). They're interesting, they're cool, they're new, but ultimately I don't see a valuable use case for them.
r/artificial
post
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtbTNmaUdXNEF1QnZSNVRweS1PUmVCNjhHb2VDMnBDaFl3WmFLVTg2NFRFWHJfck9wTFJfZlpiUVhjYU5JRU1GcWVMVHZwa2ZfMVJ0NGFSblVvZWpKNkE9PQ==
Z0FBQUFBQm5meDJtTU5ZbXZiVnZBTXhtZ0J6VVlTZlgySmpLc09wblZvMkFBNTdXN1VhbWZTRHNLOWlUTVdnbkFITktGV2s4T2hOZ21LQjI4QlhEUW9QNlhaWUNxYmRGVFFBdlhNcnFMYklOdjBFdXM5TTFPYnJaelM2enpQMlU1RzQ3Vk5nQ1hJNlplU0UwaENDdXo4bERnbnI4eVJwOTJwbjFKeFBpS1Q1Tlh5czRaaTBOTzlFemlFNDN6dno4dnk5RlJDRE1uakp3
**Paper:** [https://arxiv.org/pdf/2412.15204](https://arxiv.org/pdf/2412.15204)

**Abstract:**

>This paper introduces LongBench v2, a benchmark designed to assess the ability of LLMs to handle long-context problems requiring deep understanding and reasoning across real-world multitasks. LongBench v2 consists of 503 challenging multiple-choice questions, with contexts ranging from 8k to 2M words, across six major task categories: single-document QA, multi-document QA, long in-context learning, long-dialogue history understanding, code repository understanding, and long structured data understanding. To ensure breadth and practicality, we collect data from nearly 100 highly educated individuals with diverse professional backgrounds. We employ both automated and manual review processes to maintain high quality and difficulty, resulting in human experts achieving only 53.7% accuracy under a 15-minute time constraint. Our evaluation reveals that the best-performing model, when answering the questions directly, achieves only 50.1% accuracy. In contrast, the o1-preview model, which includes longer reasoning, achieves 57.7%, surpassing the human baseline by 4%. These results highlight the importance of enhanced reasoning ability and scaling inference-time compute to tackle the long-context challenges in LongBench v2. The project is available at [this https URL](https://longbench2.github.io/).

**Highlights:**

>**Single-Doc QA.** We integrate subtask categories from previous datasets (Bai et al., 2024b; An et al., 2024) and expand them to include QA for academic, literary, legal, financial, and governmental documents. Considering that detective QA (Xu et al., 2024) requires in-depth reasoning based on case background, we introduce such a task that requires identifying the killer or motive based on information provided in detective novels. We also include Event ordering, where the goal is to order minor events according to the timeline of a novel.
>**Multi-Doc QA.** To distinguish from single-doc QA, multi-doc QA requires answers drawn from multiple provided documents. Besides the categories in single-doc QA, multi-doc QA also includes multinews QA, which involves reasoning across multiple news articles, events, and timelines.

>**Long In-context Learning.** [...] LongBench v2 includes several key tasks, including User guide QA, which answers questions with information learnt from user guides for electronic devices, software, etc.; New language translation (Tanzer et al., 2024; Zhang et al., 2024a), which involves learning to translate an unseen language from a vocabulary book; and Many-shot learning (Agarwal et al., 2024), which involves learning to label new data from a handful of examples.

>**Long-dialogue History Understanding.** [...] These tasks are divided into two subtasks based on the source of the conversation history: one involving the history of interactions between multiple LLM agents, i.e., Agent history QA (Huang et al., 2024), and the other involving the dialogue history between a user and an LLM acting as an assistant, i.e., Dialogue history QA (Wu et al., 2024a).

>**Code Repository Understanding.** A code repository contains long code content, and question answering over a code repository requires understanding and reasoning across multiple files, making it a common yet challenging long-context task.

>**Long Structured Data Understanding.** [...] I.e., Table QA (Zhang et al., 2024c), and answering complex queries on knowledge graphs (KGs), i.e., Knowledge graph reasoning (Cao et al., 2022; Bai et al., 2023). We anonymize the entities in the KG to prevent the model from directly deriving the answers through memorization.
**Visual Highlights:**

https://preview.redd.it/c93xt3zeiqbe1.png?width=947&format=png&auto=webp&s=6790ef5e76ccd5e943990a089d1501297531aad4

https://preview.redd.it/uruwiqtfiqbe1.png?width=771&format=png&auto=webp&s=b6df076c68f06a1cc62b2acc2e45d8df9ec04d53

https://preview.redd.it/v5w6y9jhiqbe1.png?width=915&format=png&auto=webp&s=1e71a19d3853da7dd015308d8b7fa341af843e0c

https://preview.redd.it/x8a0i4aiiqbe1.png?width=649&format=png&auto=webp&s=c91afeeb17822624eda40d534474c4059eeae845

[The up-to-date top of the leaderboard (the interactive version is available at the linked repo). Notably, it includes the DeepSeek v3 result. Note also the substantial GPT-4o nerfing going from the 2024-08-06 ver. to the 2024-11-20 ver.](https://preview.redd.it/2wafoijjiqbe1.png?width=1192&format=png&auto=webp&s=ff0cd16148eded0bef383d2e7c52ccbc37119ac9)
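The headline numbers in the abstract (53.7% for human experts, 50.1% for the best direct-answer model, 57.7% for o1-preview) are plain multiple-choice accuracy over the 503 questions. A minimal scorer for such a benchmark might look like this — the field layout is a hypothetical sketch, not LongBench v2's actual data schema:

```python
def accuracy(predictions: list, answers: list) -> float:
    """Fraction of multiple-choice questions answered correctly."""
    assert len(predictions) == len(answers), "one prediction per question"
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

# Illustrative run over 503 questions at roughly human-expert level:
preds = ["A"] * 270 + ["B"] * 233   # 270 of 503 correct
golds = ["A"] * 270 + ["C"] * 233
print(f"{accuracy(preds, golds):.1%}")  # 53.7%
```

With four answer choices, random guessing sits at 25%, which is why a best model score of 50.1% against a 53.7% human baseline marks the benchmark as genuinely hard.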
r/machinelearning
post
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtMmhHUUF5TThMZDdoTXhaUmtod1d4X1FyM21yaDBabVZzSm1qQzBkbUhwZXNjdThnZlhjeGFNc3hXQ0ZRTXAxSEttS1JWNE9BVWo0cm1pR2RUUVBOX3dQeG04OElwOGRpT1loQzM3UGNVZmc9
Z0FBQUFBQm5meDJtSHhLVjN5aGlxbEZ1TjNFUmNlOE5UWlZxOUM3N0lGQ0pENGw3Y2NVRS1NVmZjRUNCeUMtTElWZ0tlbUpQZ3JNX3RPV2hUVFptTGttMUd3dTVNWDBQU0xrbEc0MV81YV9WWVBESWZGRnZ5eDJTaE1KTkUteC1UTjBkNzJEVjhKczh4Vk1pd2tySzI5SEtSdlNHQkFXWnZXZHNQZGlpdHhRWnplQTI5TG5kWDJQSTB4UzlkMkRqTzZGSFlxclAwWXVXS1gtQnNVRnUtN2lNajJ4UFE0MzlzZz09
As researchers, we all face various hurdles in our journey. What are the top 3 challenges you encounter most often? Do you have any suggestions for improving these areas? Your challenges could include: * Finding a problem statement or refining your research question * Accessing resources, datasets, or tools * Managing time effectively or overcoming administrative tasks * Writing, revising, and publishing papers * Collaborating with others or finding research assistants We’d love to hear your experiences! If possible, please share an anecdote or specific example about a problem that consumes most of your time but could be streamlined to improve efficiency. We're a team of young researchers working to build an open community and FOSS AI tools (with "bring your own key" functionality) to simplify the end-to-end research process. Your input will help us better understand and address these pain points.
r/machinelearning
post
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtb0lMRnZaUzRCWjJWY09iWmZKajlPU1BkQVhsV0doQXI2alZDNmdoN1JVcmFsQmxCUE93d1RDanJobzdqUFNkbjRCYk1HMmloVHRwNG13Sy1DeThMRVJnbkp5UVlJODkwTkhyN0RLU0VFXzQ9
Z0FBQUFBQm5meDJtY3FyVk4tOFNiWldZWUdvNDV4VHpEdlp6ZXdBUEgxX0hEVVVHQWtkcGVseC1FY3RXWlZCUXhHU2lMOHMtNDBIRkJjX0c5ZEZreVE3Tzh0RHNhcU1sa1ZxY1lDeTk3NUs2UzFfSy1LVUVnemJrZE5kV2JwMzVKN3VYamthQm1jSjN0ejMtXzA4VkpIemhPcWJsVEt4VmVnV19lMW93YlptQ3lFbkJNd0w4V3hud2p2MU5IQXUzd0hzeUp5dTBHenZYbW1uLXk0eVp6RU5zZll3V0owRmh3Zz09
I have paid access to several models that I use daily. I've been waiting for AI to reach the agentic state that's on the horizon. In the meantime I've put together a prompt that will take a task, break it down into smaller chunks, select the best AI model for each chunk, and write the prompt. I know there's room for improvement, but this works for me, for now.

---

You are a sophisticated AI orchestration system designed to manage complex writing and research tasks by leveraging multiple AI models, including ChatGPT Plus, Gemini Advanced, Perplexity Plus, X's Grok, Claude 3.5 Sonnet, Claude 3.5 Haiku, Sonar Large, Sonar Huge, and the reasoning models ChatGPT o1, Google Gemini Flash 2.0, and o1. Your goal is to generate high-quality content by strategically utilizing these AI models, while providing clear instructions for a human intermediary to copy and paste prompts and results. The final compilation and editing of the document will take place in GPT-4o's canvas mode.

**Important:** You will not directly interact with the API for the other AI models. Instead, you will provide specific, numbered instructions that the human intermediary will follow.

**Expert Roles:**

* **"AI Research Analyst":** Expertise in conducting in-depth research, fact-checking, and data analysis. Method is systematic research, analyzing multiple sources, and generating data-driven conclusions. Style is objective, fact-based, and precise.
* **"Creative AI Writer":** Expertise in generating engaging, creative, and well-written content. Method is using creative writing techniques, varying tone and style, and capturing the reader's attention. Style is expressive, descriptive, and engaging.
* **"AI Historian":** Expertise in providing historical context, identifying trends, and analyzing the evolution of technologies. Method is using historical analysis, chronological thinking, and source verification. Style is formal, informative, and historically accurate.
* **"AI Ethicist":** Expertise in evaluating the ethical implications of AI technologies, discussing potential risks, and proposing responsible practices. Method is analyzing ethical theories, evaluating impacts on society, and identifying potential harms. Style is thoughtful, reflective, and ethically grounded.

**Model Specifications:**

```json
{
  "models": {
    "gemini_advanced": {
      "type": "LLM", "provider": "Google AI",
      "strengths": ["reasoning", "coding", "creative_writing", "complex_prompts"],
      "weaknesses": ["hallucination", "prompt_engineering_required"],
      "token_limit": 1000000,
      "best_for": ["reasoning_tasks", "code_generation", "creative_content", "analysis_of_lengthy_documents"]
    },
    "gemini_1.5_pro": {
      "type": "LLM", "provider": "Google AI",
      "strengths": ["reasoning", "research", "information_retrieval", "synthesis"],
      "weaknesses": ["recent_information_limitations", "highly_specific_information"],
      "token_limit": 1000000,
      "best_for": ["research-intensive_tasks", "literature_reviews", "evidence-based_reports", "insightful_analyses"]
    },
    "chatgpt_plus": {
      "type": "LLM", "provider": "OpenAI",
      "strengths": ["conversational", "human_like_text", "creative_writing", "brainstorming", "dialogue"],
      "weaknesses": ["plausible_but_incorrect", "guidance_required_for_accuracy"],
      "token_limit": 4096,
      "best_for": ["interactive_tasks", "creative_content", "exploring_perspectives", "conversational_flow"]
    },
    "gpt_4o": {
      "type": "LLM", "provider": "OpenAI",
      "strengths": ["advanced_reasoning", "nuanced_language_understanding", "complex_instructions"],
      "weaknesses": ["needs_testing"],
      "token_limit": 128000,
      "best_for": ["demanding_reasoning_tasks", "complex_instructions"]
    },
    "chatgpt_o1": {
      "type": "Reasoning", "provider": "OpenAI",
      "strengths": ["logical_deduction", "problem_solving", "decision_making", "inference"],
      "weaknesses": ["creative_writing_limitations", "specialized_domains"],
      "token_limit": 4096,
      "best_for": ["critical_thinking", "planning", "strategizing", "informed_choices"]
    },
    "gemini_flash_2.0": {
      "type": "Experimental_Reasoning", "provider": "Google AI",
      "strengths": ["cutting_edge_reasoning", "knowledge_base", "complex_concepts", "nuanced_judgments"],
      "weaknesses": ["unpredictable_behavior", "limitations_unknown"],
      "token_limit": 128000,
      "best_for": ["exploring_ai_reasoning", "challenging_problems", "novel_applications"]
    },
    "o1": {
      "type": "Reasoning", "provider": "OpenAI",
      "strengths": ["improved_reasoning", "latest_reasoning"],
      "weaknesses": ["limited_usage", "needs_testing"],
      "token_limit": 4096,
      "best_for": ["high-priority_reasoning_tasks"]
    },
    "perplexity_plus": {
      "type": "Research", "provider": "Perplexity AI",
      "strengths": ["information_gathering", "questions", "summaries", "different_perspectives"],
      "weaknesses": ["creative_tasks_limitations", "original_content_generation"],
      "token_limit": 4096,
      "best_for": ["research", "fact_checking", "gathering_evidence", "current_events"]
    },
    "grok": {
      "type": "Social_Media", "provider": "xAI",
      "strengths": ["social_media_data", "public_sentiment", "trends", "online_conversations"],
      "weaknesses": ["limited_applicability", "not_social_media_analysis"],
      "token_limit": 4096,
      "best_for": ["understanding_public_opinion", "tracking_trends", "market_research"]
    },
    "claude_3.5_sonnet": {
      "type": "LLM", "provider": "Anthropic",
      "strengths": ["general_purpose_language"],
      "weaknesses": ["needs_testing"],
      "token_limit": 200000,
      "best_for": ["balance_of_reasoning", "knowledge", "creativity"]
    },
    "claude_3.5_haiku": {
      "type": "LLM", "provider": "Anthropic",
      "strengths": ["speed", "efficiency"],
      "weaknesses": ["reasoning_depth", "knowledge_breadth"],
      "token_limit": 200000,
      "best_for": ["rapid_response", "interactive_applications", "real-time_analysis"]
    },
    "sonar_large": {
      "type": "LLM", "provider": "Perplexity AI",
      "strengths": ["flexibility", "transparency", "Llama_architecture"],
      "weaknesses": ["needs_testing"],
      "token_limit": 4096,
      "best_for": ["flexibility_and_transparency", "customization"]
    },
    "sonar_huge": {
      "type": "LLM", "provider": "Perplexity AI",
      "strengths": ["flexibility", "transparency", "Llama_architecture"],
      "weaknesses": ["needs_testing"],
      "token_limit": 4096,
      "best_for": ["flexibility_and_transparency", "customization"]
    }
  }
}
```

**Task Analysis:** Interpret the user's request to identify the writing type, content goals, and research needed.

**Model Selection:** Analyze the user's request, task, and expert roles. Use the model specifications to choose the best AI model for each subtask, considering its strengths, weaknesses, and token limits.

**Prompt Generation:** Generate clear and concise prompts tailored to the chosen AI model, expert role, and task. Include explicit instructions for the human intermediary to copy and paste the prompt and return the output. Number all the instructions sequentially to make the workflow clear. Include the specific model name to be used when sending a prompt to a system that has multiple models available.

**Workflow Logic:**

* Initial Analysis: Analyze the user's request. Determine the expert roles and subtasks. Select the appropriate AI models.
* Prompt Generation Loop: For each subtask, provide a specific prompt and indicate which model it is for. If the model is Perplexity Plus, tell the user which specific model within Perplexity Plus to use (e.g., "use the sonar-huge model"). If the model is ChatGPT Plus, say "use the default model". If the model is Gemini Advanced, say "use the default model". Tell the intermediary to paste the prompt into the specified model, and to return with the output for the next step.
* Output Combination: After all of the subtasks are complete, instruct the intermediary to copy the outputs into GPT-4o's canvas mode, where you will assemble, edit, and polish the document. This system does not itself produce the combined final output.

**User Input:** The user will provide a single instruction, in natural language, requesting a particular task (e.g., "Write an article about the history of AI," "Generate a research paper on the causes of World War II").

**Output:** You will generate a sequence of numbered instructions for the intermediary, each containing a specific prompt and instructions for how to handle the output, and when to copy the output to GPT-4o's canvas mode. The final instruction will indicate that the combined output must be handled in GPT-4o's canvas mode.

**Example Output** (using your Gemini Gem as the Orchestrator, with Canvas Mode):

User Input: Write an article about the impact of social media on political polarization.

Gemini Gem Output (System Prompt Output):

1. **Initial Analysis:** The user wants an article about the impact of social media on political polarization. This will require the "AI Research Analyst", "AI Historian", and "Creative AI Writer" roles.
2. **Subtask 1:**
   * **Model:** Perplexity Plus
   * **Perplexity Plus Model:** Sonar Huge
   * **Expert Role:** "AI Research Analyst"
   * **Prompt for Human Intermediary:** Copy and paste the following prompt into Perplexity Plus using the sonar-huge model: "You are acting as an AI Research Analyst. Your expertise is conducting in-depth research, fact-checking, and data analysis. Use a systematic research process, analyze multiple sources, and generate data-driven conclusions. Your style is objective, fact-based, and uses precise language. Research the historical trends of political polarization before the advent of social media. Provide a detailed summary."
   * **Instruction for Human Intermediary:** "After you have pasted the prompt into Perplexity Plus using the sonar-huge model, copy the output and return to this thread."
3. **Subtask 2:**
   * **Model:** X's Grok
   * **Expert Role:** "AI Research Analyst"
   * **Prompt for Human Intermediary:** Copy and paste the following prompt into X's Grok: "You are acting as an AI Research Analyst. Your expertise is conducting in-depth research, fact-checking, and data analysis. Use a systematic research process, analyze multiple sources, and generate data-driven conclusions. Your style is objective, fact-based, and uses precise language. Analyze the public sentiment around political polarization on social media. Identify the main themes and most commonly debated points. Provide a detailed summary of those points."
   * **Instruction for Human Intermediary:** "After you have pasted the prompt into X's Grok, copy the output and return to this thread."
4. **Subtask 3:**
   * **Model:** ChatGPT Plus
   * **ChatGPT Plus Model:** Default Model
   * **Expert Role:** "AI Historian"
   * **Prompt for Human Intermediary:** Copy and paste the following prompt into ChatGPT Plus using the default model: "You are acting as an AI Historian. Your expertise is providing historical context, identifying trends, and analyzing the evolution of technologies. Use historical analysis, chronological thinking, and source verification. Your style is formal, informative, and historically accurate. Write an introduction that will grab the reader's attention while introducing them to the topic of the impact of social media on political polarization. Your response should be no longer than 200 words."
   * **Instruction for Human Intermediary:** "After you have pasted the prompt into ChatGPT Plus using the default model, copy the output and return to this thread."
5. **Subtask 4:**
   * **Model:** Gemini Advanced
   * **Gemini Advanced Model:** Default Model
   * **Expert Role:** "Creative AI Writer"
   * **Prompt for Human Intermediary:** Copy and paste the following prompt into Gemini Advanced using the default model: "You are acting as a Creative AI Writer. Your expertise is generating engaging, creative, and well-written content. Employ creative writing techniques, vary tone and style, and capture the reader's attention. Your style is expressive, descriptive, and engaging. Based on all of the previous research from the other sources, write an article that explains the impact of social media on political polarization. Conclude your article by summarizing your points and offering recommendations for how to minimize the negative effects of social media on political polarization. Your article should be around 500 words."
   * **Instruction for Human Intermediary:** "After you have pasted the prompt into Gemini Advanced using the default model, copy the output and return to this thread."
6. **Final Output:**
   * **Instruction for Human Intermediary:** "Copy the outputs from steps 2, 3, 4, and 5 and paste them into GPT-4o's canvas mode. Use GPT-4o's canvas mode to assemble, edit, and polish the document."
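The selection logic the prompt describes — match a subtask against each model's `best_for` tags while respecting its `token_limit` — can be sketched in a few lines. The spec entries below are a hypothetical subset of the JSON block above, and the greedy first-match policy is one possible reading of "choose the best AI model for each subtask", not the author's exact procedure:

```python
# Hypothetical subset of the model-spec JSON; tags and limits mirror the prompt's format.
MODEL_SPECS = {
    "perplexity_plus": {"best_for": ["research", "fact_checking"], "token_limit": 4096},
    "gemini_advanced": {"best_for": ["reasoning_tasks", "creative_content"], "token_limit": 1_000_000},
    "gpt_4o": {"best_for": ["demanding_reasoning_tasks", "complex_instructions"], "token_limit": 128_000},
}

def select_model(task_tag: str, context_tokens: int) -> str:
    """Pick the first model whose best_for covers the task and whose token limit fits the context."""
    candidates = [
        name for name, spec in MODEL_SPECS.items()
        if task_tag in spec["best_for"] and context_tokens <= spec["token_limit"]
    ]
    if not candidates:
        raise ValueError(f"no model suits task {task_tag!r} at {context_tokens} tokens")
    return candidates[0]

print(select_model("research", 2_000))           # perplexity_plus
print(select_model("reasoning_tasks", 500_000))  # gemini_advanced
```

The token-limit check matters in practice: a 500k-token research context would disqualify Perplexity Plus outright, regardless of tag match.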
r/artificial
post
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtQm55SU9kNUQ1bW9BWXhJWEp6clZzUFhvczM2Vm1LYTdDU2oxZDRRNm1kZnNpYWI5SGVNSWo1eWxFTDdfb3FHVGtIUTNBV2tJTkdOQnczb1VEQlpQN3c9PQ==
Z0FBQUFBQm5meDJtN0ZsdFFQQk4zazVzbDdfTlhNNFl4Vi1wd05Gb0lUdzJwRWg4WGNDS1VqVWZicFNqQnN3eFFvTnNPVThfN1RMaHF4TWxQczU2cTMySlJVbThSN1k4UW9aQTRfTk5tZW5iRXBZYkQtZTFiSkhGd25aekM2SXJyT0hkS1V4WDBOUFY0U0RJVGt0VGNVMDdGc21QSThFblZJS3lzYlJDbEhiMDZSaU1sQ0hlZm1rPQ==
I'm a user and I work in a medium-sized organization that values research, but we are struggling to build out use cases on our data. I would love to have an external researcher partner with us to work on answering bigger questions, since we have a good, consistent source of data to use for research purposes.
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtWTZHNmZRTjhuaEhxbnVYMUg3RFN4UWVOeVFNMmpaV3M5WlFfMFdETjc2ZVVPOGJ6U0ViR1VrZjB0Rjdpc2hLZUJFNHpFMkRZbWwxX29WNWdVdHBSOFE9PQ==
Z0FBQUFBQm5meDJtM1hVdjdsczBGdExLakVaVjFUaGhGZ0dVY1VaUVBRTnZFTDJ2MTZMaGNDS09wOVNYbHVaeVlGZzRKR0twdEpHcXl6a3BVdkJmbDBxT2N5QWN6cjRycFY2dXJPNVJqa3o3LXVBOUxMMGpoSHJLaTFFLXZ3bzFSQk5PdDVZeTRjeFhqSlRLNm5mc0RwajR3NXhzbWJ0Q1JCV3FTa1RIM005dkdKTGRyV291T29tc2F2TDV1blc0ZEItUGVoU2VKbkVEVEpralp4dnVjcnRydHZmN0pwaW13QT09
Your post was automatically removed for not having a tag in the title (i.e. [R], [N], [P], or [D]). Please read [rule 3](https://www.reddit.com/r/MachineLearning/about/rules/). **The moderators will not respond to questions regarding this removal unless you suggest which rule you most likely broke.** If you have a beginner related question, visit /r/MLQuestions or /r/LearnMachineLearning. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/MachineLearning) if you have any questions or concerns.*
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtbENRSGRtRkd0ZW1scUVwbThtblBFZzNGTm5rd2pSbTZpb2VMZFVqY1ZJbklKU0J6YkJGejJvUmJRdWY4TzdCaVI4dW5ERDhmc1pmZHdkMHRSbXkySVE9PQ==
Z0FBQUFBQm5meDJtVTVsWHR3UUE1b3IzQlZfTnNfYmd1WGpyV0tjTWx2WFdlbkdXY2dyWmdRNE9xbURFU1Z0QXJUMjY3WUFaNEZPdUdLc1V0WHJWTF9ZeG54TmFNWWtRdncxSVFpYlppOWR6aVhLVnIzV2JaM3FjUjk0UElUMFhIWGxkWmQ5WHJlendtQWZXajE2c0p2SFQwaHhUT3FVbzdCUnozT1ZKVTd3dzRyNW5vRURFbFlHUThqYURrWGd2dW1WMF9MTERrLWtkTXJiRkEtOHVURzZtSFhDSVZnRVA4ZTZuVjBlQ1h6OHNJSDV3UjhvMDNmbz0=
The two prongs of ML pain: 1) hyperparameters, 2) getting the GPU to do the thing.
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtUEh6WXZWZ2Jiei1wY3FfOW56ajNoQk42U3JXWHEzSjZEVFNFRmp5T2M0U3FhZ2VwdVdFMVktTDJQd2VDTjMweVhxb3lwRmNidGtTaUFEOWZiTnJvdlE9PQ==
Z0FBQUFBQm5meDJtMEVRekVycE9tdWtudV9JdUNXYmp3N3loa2RlQzBGLUpOZ01LZ1lHRlNnMk5od0RUZU02Y3RyU0Z5QUZnTVJJZWJtUEU0bGJ1MjMxVnd1X050blA0bDdzcE5KQjZGdEZJeUQzQUROcl9Va3dudGwxSWMwcEVEanM4UVNaNTNkendJMk1WMUd5OVFjQUxaamFjMmhMTTJuazZsWkxxNjBYMl9ocER0a01ROWpFSTR0WGFjY0M3SDVGV01qRVZET1VWV0M0Z0ZCc0NhTGFLNWFBNHZSNEZaNG9nblVzSi14YnRHQlZTMWR6cW01TT0=
Endless infra headaches: hardware meltdowns, misbehaving drivers, Docker installs that drag on forever. The real battle isn’t with data—it’s with everything that stands between me and the code.
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtNDMwTXVXVWRjc3FDYlBKTUUxOHlURHJmZWhOR245bmpPakFrdTRvV09iVzQ2QVdrUlB6bWpkcUpxQ05uUEJJOXB5UHRhY3pkUnRNVkNIc2ZDOEZfMU83X2Y0aHBvN0N5Z2R1c0JDbUJRaE09
Z0FBQUFBQm5meDJtS1ZINXBlM25ieHRCaHptdlRiSERqM3dKd0ZDbGlmOWE5NWNsWnJYYUMwS2FISXA0eUNQa1FVSkZhUzQ1SmM4MHlMNk5xZWNhbndyMGFDNjBtZGhieHlWbWZ2TkprU0VXcXllYmI5YUFKbWU5VVY5NlhNOW93QkoyVmQ5bHZiQlJZaGJDNjNQNmNmMzFvR3c1MzBrbnlmSi1kazNNN0xmY0FoeUFXSEt5VHJhZVRkeFhoblBUdDBzTllvSmhoLXdwNEpaSHplRlI5ZUZadDlDOG0tbFFvbXVxclRObzB3bzBhVkpDcUJUSmh1Zz0=
[This one](https://youtu.be/iIQX6m2eRPY?si=k-6jMHyCbWBG92ug) appears to be the most similar to the one I attended, though he has several similar talks (with some recycled slides) on his [youtube channel](https://www.youtube.com/@drmichaellevin)
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtRURjQjJOenZtRFFaY1I5SkF0VkJUTnB0VnBQRlBlUkdhWDg3MndmMDJkTG9MZHNzSmpOYUZDeFNKeUVwN25pSTRTTGFST0labkliNTFBc0UyQkF6UXc9PQ==
Z0FBQUFBQm5meDJtR0xGZ3BXT1g0d18xYzg3bjQzZDQwTWNuMmlUVjZtaExFYkZlZjRGNTFIVnNxbTNYdFoxYVNzNlEyYUhlZFRyMndLNDNERjdib010MVhRak9rTldNSllQcnQ3bzZJYzV3eW5faTBJajhxRGVkNUZIOFd3a2RRNlB4dmFhVnJXdnBIQ2l0MTdRT2NvZWpndW9pMDc3dk1ENFFEQ1RZN3VaOXBBRzItV3ZhU1NDcXVSZ3pDaGExSmNONnlOMHNpcG85QTdObWpkVnlXU04xdy1Vc2xoWUJ3MHk1bV9UQlZ1VGxOS2Z1UEVPYzNlTT0=
Higher ups with software engineering but no data background are almost as bad as higher ups with no technical background at all. Mostly they just want to GenAI their way out of every problem, because they don't know a thing about machine learning, but they do know about stitching 3rd party APIs together.
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJteEdVNDdSejRHRFZwNFdTcXA2WFBndWY5UTZVWFVfVF9QWjkxOF9oUGJFbmVBaTNab1lxYzlGT1dSaExmOUlFNGoyQXZpZ0RYX1NlR3RSa045Ty1qNFE9PQ==
Z0FBQUFBQm5meDJtSG5SVENPaHpVNk9UQW8yMUhVQkVJUnQ2bG1BR3JFZVBTMlZORjlMQ2lvYmEtSXBXUkZuLU1uQjBOcVBSUnpjN2JMb3pxLU9UNHlsUGZaLUlPUW9iRU1sTUl0VDdIRW41d21wSzVCUWw0OXpuYVpFdDZwY1dSSlkwdkcwRFFoWkxYSXdMUXZxZ2tWQzhFN0NsWEdQem1tSzJnUWg5UElLaHBzOTBZczF6RlVOM0pldHRBS3pzODFlYmhXdmxOcUJBcFdLTzRoaGdaRTVaVUowM0QtNEFHSGpuY2l3N0g1LTl2SXljbHkzUXlSWT0=
Please mention what domain (niche) of machine learning you work in for your research. Why did you choose that particular domain? If someone with a basic understanding of machine learning and deep learning wants to get involved in your field, which papers/blogs/tools should they consider reading or implementing?
r/machinelearning
post
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJteXhZUlBHMlJHT1FYMDFDbk1URDRMa3c4a1A2RDBoUVFyX2lCSC1ZSHVvemdVYmFkRC1lS0JEd3h6WC1YMFJ1c0Q1Q3p6UDF2QVRLeHlMYm45NjFjelE9PQ==
Z0FBQUFBQm5meDJtS05JZENBRlJEakg2blhaeWExOW1ERHNuVEFhR1RfcmdEM1hMc0hzNm5BVXRva2ROSFpkY3NkWm5tSGZURi1yWnBROGg4dU9BY3dSYUREMWNubVU1RS1HME1PTndFWGNuMjdvWEQyQ3c4YldaXzVuR1NseU4zX0JHdjBxbjkxT1k0TkZUTU1JMERzYzNjRnZhQ3ljYTMzNnRidmpYdDVWbVMtYW9lbWZpMFgzZWhXYklsWm5UakxTSmppRlNuTWtUT2Uwa0FmY0NkXzc4LXJPRUk0YnZTZz09
Your open source project sounds interesting. Can I have more information? I want to be a contributor :)
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtNVdnTDBUcE9Kb2NqN2lRcW8xYkNsMlNPSE5NcEJGeldZa1NzbTRsM0oxSDBxSXFaaEdmSnZ0ajh0UWdTZ1pIb1hRMWpYZV9pR3pyUU9rX09sSzRGSnc9PQ==
Z0FBQUFBQm5meDJtaWFBUHVyUDNWZGEwV2M4VExDOEs3MDJHSGVOcEJzTnBwWWZpQlNTQ05RNVJ4dm5yY3J2OUMtZldFMTJOczhIM1prRmRlUERhaFpsdEtDSVNVS1dmLW1tZHlxWS1ISmZETml0eF85bThZRzh1WGhvbDVRWXJWMDZJXzZOVjFabXZacURHaWx3MjQ1SEpKNFJzNjNuS0o1a1pwRXEyRVYwbndJR250QU40bDZPYVBTUXVvMkpVTlpibUNfdjlsVFhMVmdaZzRtSU5lRVBOX0dsNkVaNjg3QT09
* "We have 200 images, can you build a classifier?"
* "Model performance sucks (on the 200-image dataset), make it work!" — even though the boss refuses to pay for a proper dataset
* An irrational focus on getting access to the newest LLMs / managed platforms instead of building decent datasets
* "Can we feed these model weights into GPT and have it tell us what the model is doing?" (and other dumb Gen-AI stuff)
* "MLEs" who expect a platform to do everything for them; general over-reliance on high-level platforms / frameworks like Databricks, SageMaker, Hugging Face
* "You benchmarked a dozen different models, but did you try this super obscure, unused model, usually published by the reviewer themselves?"
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtb2NTUTF0Y2xFdURIUlNhN1cwdnkyYmxhbzdQQ3RWRlZwWURVREc2dER2UXE5Nk5ham9QekFHQ2JTdFZHNmhDeDQ0VElPRlJSSGRLd2YyU0duRmlFSHc9PQ==
Z0FBQUFBQm5meDJtVTN3Rk1SbTlBQ1FTcDZ3UDJ2STlYQ0loaUZIaE1SeEpxT2NlRE1PVXl6UHg5dTdDRjlUbkFSQWpEUUNUWHVvSFFTbmtMVVpWRndIdHdPNHNZRTV1VXNZQjVRMFhMRUQ0RTI1d2VMLWdtUUh3RXU5VXNSbll1SEVyUlZFSHhMWklzUXRub3BKS3Nld3pZY1ZnZmdWTnRHRlJ0Njc2MEJOX3RqRlFXLTN4OHVVS2RoS1JaZjl3TmxsMy05aVFjdFRQTUFGbFk1bnpjUmJzVlktZ3djdl9mQTUxdjgybXZMaDRWMnNrTlR6V25pMD0=
Your post was automatically removed for not having a tag in the title (i.e. [R], [N], [P], or [D]). Please read [rule 3](https://www.reddit.com/r/MachineLearning/about/rules/). **The moderators will not respond to questions regarding this removal unless you suggest which rule you most likely broke.** If you have a beginner related question, visit /r/MLQuestions or /r/LearnMachineLearning. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/MachineLearning) if you have any questions or concerns.*
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtTmF5ZWdZelp6N1JJUVhGX29PeXprRWVfRXRzTkJVVHMwX1lBQ1FvbDVxbjQ5dzNXRGl1aGRGVjgyaVhLOVV2enVlSmZjaUJGU2dQbXVTdml4TnJ0dWc9PQ==
Z0FBQUFBQm5meDJtTXEwYmxFVS1KdlhoRTk3WUFrN1lPMGc2VnQtT0RQNmhTTl9iUDFULWhtSm5IbVB2RGs3Mm05b01pVERfdGdKUWlwVnpmWnZ6T3o2VENGTmdWNDc4RXNrenRrMTkwNUp2cnpmQkxScnhvQnRHV1JDRk9kUlJ0ZVZoOW9JWWM4eTc2S0dqM3dDN09ULU0wTG56ZktKSmRsQXFBRlhLRHE3bVM5d2IzbHhDRWQtVmhrOVdTZFNBVHd5YU9scldkY18yQUk5STVpc3JtR1FwY0JraFVGN1ZBd204RUEzdUhzRHVSSVNUZXNMV25PUT0=
Not acknowledging unknowns doesn't work for non-ML projects either.
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtTHhhYmdLQTVkRTJuaU5QbmhPS3dDV2V2cE15bjQtTWZrTXVHRWZOTFFrNDBTZzY5UFRBR2NBOUlnTExraVdHemVKTGZVbGVzb2x1SlJPSnN3dmpTMGc9PQ==
Z0FBQUFBQm5meDJtMTJPd25nQWVtem9LVEJZVGtjUUpOZnhtcjNocmZpU0wtcWl0Wm5uYXpMbkduZC1SR1h1MmtVVE5BdUE0bXd5cTBxY21QS0tGSWZiSnNGOXFmMFRTR3NWU3UwNHhkdmFZSkp2dk5wWG9uU2N2MWpnRTBkeDZQWGpMWXItUG5uQ2FPV1JfSEdIaGFFRERFVi1RTzh2ajFRckF0ODRhREFsNUtFQkRYR0NBQ09GSWRFdHpnUlFfbEtlZFpFem56U3BKeThmYW9YZGY3Rko5NkVZOGN2bXhTSGVqbTlRSGtQME1reGpHQmNjcjlMbz0=
Your post was automatically removed for not having a tag in the title (i.e. [R], [N], [P], or [D]). Please read [rule 3](https://www.reddit.com/r/MachineLearning/about/rules/). **The moderators will not respond to questions regarding this removal unless you suggest which rule you most likely broke.** If you have a beginner related question, visit /r/MLQuestions or /r/LearnMachineLearning. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/MachineLearning) if you have any questions or concerns.*
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtN2RjZWtVMk5RWFltMDgyWUFjXzlEYUxqeVFIS1VtSWxrVTVGeXplajRtWGNzYlVTOXFJVEhqbXdBNnp6eWxIQi03ZFpuUnBtU0p6eWNxMXBjOGNnVmc9PQ==
Z0FBQUFBQm5meDJtUFZMb1Z6QzV3RUo2V21KY0hqYmp0UzdqS1RJd3dDMXc3OFVMbDRoUFdfc2RhR1dHeXlYSkRUcDYyN180R1BiRGZVcVg3SGlTcFVsYzh3bm16Mkp2OWNpWGtscE1oUUZJVlgwRG5VVUhkYlpaeXdfanI3MmJhR1VPN19aQ0JvYzc2RG5oTzZkWXRoYlRYZnduY0F3Mm9xSWhQOXR4MWNOVW1lSzlsbnZ4dTAwPQ==
Does anyone know what’s happening with registration? I might be missing something, but I can’t find a registration link anywhere.
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtRk1vQlZuU2lXcHJOdl8zRVVwQlhzeC1pTk1peUxQR2VTMVRNazBYYTZxeVRSRU0yZFZhQWxyRG9wOXVDeU51a21zN1diSlYtdi1aZldHV1ZZZUJ2Vnc9PQ==
Z0FBQUFBQm5meDJtaV9lVE1UVmt6MHc5cDAxTGRqTlAzc0RsZlZTVW9xMGZtTGpYV2FuZ0d2eWt5MlliOUw5aS1JczU3ZUFpa051eS1LVHdHUm9GYUsxdGtqWlpjRkxxenp1VHdQMmdsNUllaUVuVWhpVjFneFUxa0R4Y05BdlhfMTlCX1lOT25hUE1yaHFQQVdUUi00Ui1kNGpJWGlqSlRLN2N4UTdKNjkyTV9UOFNOV2lOVlQ1eTIzZUxYZlRxX3NpdjVRZDZPaUhC
I recently wrote an article about AI technology and how it's changing the way we live and work. Here are some key points I wanted to share: * **AI isn't just for tech companies.** It's in our lives, from our phones to our homes. For example, AI helps with personalized recommendations, making our lives more tailored to our needs. * **There are four types of AI.** Reactive Machines, Limited Memory, Theory of Mind, and Self-Aware AI. Each has its own applications, from chess-playing computers to self-driving cars. * **AI is already making a difference.** In healthcare, AI can help diagnose diseases earlier. In finance, it's used for fraud detection. And in retail, AI-driven recommendations are becoming the norm. * **Siri and Alexa are AI assistants.** They use natural language processing to understand and respond to our voice commands, making our lives easier. * **AI has limitations.** It relies on data, can struggle with creativity, and can perpetuate biases if not carefully managed. * **AI is accessible to everyone.** Many AI tools are free or come pre-installed on our devices, like virtual assistants or navigation apps. For more details, check out the full article here: [https://aigptjournal.com/ai-resources/faqs/ai-technology-explained/](https://aigptjournal.com/ai-resources/faqs/ai-technology-explained/). What's your take on this? Have you noticed AI making a difference in your life?
r/artificial
post
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtTzFud0swa0FuSlZWZldKLWlpMTVza285NmlIbFZaNXcwY25yZDZlaFVqdy00amIzNHdpdklRLWVMR0VyV0V2aHZldGtfSjFoS0xuYjdyb3RXSEh1bVE9PQ==
Z0FBQUFBQm5meDJtUUpqNmEtOXRCb2dZUnBwX2NxZFBhY1hkWjVXYmlYdGlLYVhjRVFUVHctaFpaTldqRkpLM2lVV21nQWVkQi1uaTRZcEZfR3pzMHI2V0o4UDJVRHdiemFGT1pZaXNLUUFtc1czNkphVUZXWFUtNFBxdVVjTkJxOUhNSkJ6a293cW00a2ExeG9NRjVTYUM3VHY2dG1hUWNYc196Rmc5elR3b2VRZVVxbThZbS1POVh4SG5Pb1htUkpSMWlQcmlDOElkSGJCTkZMcXFSRkljOG8zS18tUXFQUT09
Imagine living in a world where there's nothing but diesel everywhere and designing an engine that only works on gasoline... That's ML engineering. Most of your product is trying to produce value despite how nonsensical and bad real world data is. If you're building a system that only works in a hypothetical world with cleaner data then you aren't engineering anything useful or solving any problems, you're just playing with legos. Like 99% of successful ML (things that launch and make money) is getting the boring "normal software engineering" system working robustly. That includes collecting and extracting features robustly, tracking data provenance and dependencies, monitoring performance, etc. The list goes on. The ML part of the system is nothing without it and the vast majority of the time will be spent on that and not on tweaking the model.
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtVU9IZDh4OV9oWEhjWWh5R1F2dXgyUF8zeFRyTWV4NUlJUE5LTExsbTlTdnd2UTRocnN6Ny1DclY3NWNiRm5sZDUzcnllX1lBRDhiMmRGR3RHd1paSEE9PQ==
Z0FBQUFBQm5meDJtV1lYTk9aYXRIdXA4LWJ6VW0zaEpGVWx5S1hxQVBxZFQ1ZEhSY005cXZ2U2ZxSkxIb1dyb1lkM3RTb3QxQ3B1bm9xUFFkdEtzLUgyNS0xWnMzNWVtZm9SN05oZnhqN1gwTzBrbW9kLUdkMkZjNlNyVm1wZkxycmF0LTNrMGdNalJ2blBWNVI5bmEyenljdEJ4MTNsM3NLekdXNzh1R3A3RTlHSkIzOEIwQVJlVnRFSlZmc0xKNzJmMVlMNXVITUp5eldUTE1JVmgyaWJOcHFWc3BuaW9hTDR3OFUyZHRBT2hEdFZOZFpIU01xWT0=
You say this like it wasn't the plan before chatgpt launched.
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtcERYd0F6X1R2UDZPLWw1ZHQ3M0tQVWxTTXd4XzN6Tk95cHRIbWZUR2F0M1NPeXZ0V1NTNjRfdktyTEZOcEdFQWJvSnEtbG5OWVdUU0IyRFJQS2FISnc9PQ==
Z0FBQUFBQm5meDJtb1pWekxOOG1PQnJMRC1qbVZlVlJPQ1FCOUM5Nnh5dUMzX2k4VHNRX2RjT2ZrZGswd1RJUmdGUXdpa0V5aS1IQ2ZNRFpSNGJiWDRTTzFKVmZzYmhUSE1tUS04YjZNM2tzNmJXcnpmWERPMkZsUHlYYmlNM3l6QTFkcy1NeW9McGwydHpOcFdCdDZVekVUUTlzQU12Szh4ZlZfSWNzYkZxRlEyb1lmSU1VNEt1YmE2RlhPVFdSWHVrcFJLZEpzUDMxdTBma1BNcTJiMnZLSVNnZV9uRDZmZUFUZ21UU0Fib0xNTzRuVU1HYlR0Yz0=
You are planning at the wrong level. You don't plan for when the problem will be solved, you plan for what you are actually trying to do in the short term, e.g. "we will try converting feature X into a category." If you can't express that and can only say "I don't know anything, leave me alone until I have everything working," then you are probably slowing the group down more than you're helping them.
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtSE4zMHhqTGJncVY0ZGNVOWthNDBEMTdhOGNmcm5kREs3dlNJNEIzaGRqSjJrdlhFdHp2ZVR1M0RrcTc4OXZZRlFLSlVfWGE5b2h6ZmdRVG9RelhGZUE9PQ==
Z0FBQUFBQm5meDJtYVpxMEpPWFRLUWgtSGlDVEVvQ2k0SEFuMjRldnhGM0p6dGlmaVAtRURWZG1tSzRBMm9lN1BIVHNoSmthU2RZbVVrazV0X2pucmlWVExvUmFGbzlvQnlfb0ZEMnlBYWw5QlVEdlRBcDQ4WlJiaWhtSTdoM3NMWXVBaTU2UDZOeXpkYTZTTEZEREtEbFJSbjl2T3Zqa3paWHpzR0pYb3ljWDFHWDV0MnlXRlBoSG5MTnBubHJoMGVMU0VGNi1yeGc1UF9SVVBVRkQxZ0FkOWZtVDNpUVNhYlRZeDF1TzFqeVhkUHdhZm53bjBhUT0=
Your post was automatically removed for not having a tag in the title (i.e. [R], [N], [P], or [D]). Please read [rule 3](https://www.reddit.com/r/MachineLearning/about/rules/). **The moderators will not respond to questions regarding this removal unless you suggest which rule you most likely broke.** If you have a beginner related question, visit /r/MLQuestions or /r/LearnMachineLearning. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/MachineLearning) if you have any questions or concerns.*
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtSnNUV3RQSU9jOTdCanNOd0xmSXZoM0pMV1hoeDlYQ2pUT0JCQXJGcXlybUtkUVROeTBJRTV4MzJDdkpjbEJEaWhYUkpWTGFubnlZd1p1X202SFhMemc9PQ==
Z0FBQUFBQm5meDJtUl9zY2ZGSEhNVExGU09nVTk3WUFqbHNyQl9NVGRQTGxQbTVNY29vamx3aHEwTzZwZ0pQZmF6TFVZWFlNLWhQeU5JczZMdzNtbVV2YkVnRG82RnJ1NkFrbmNNQUotQzRvODVkSW5NSlJDZVFuakJnYU5uWHoxb3RWWE0yZDBiVzJ5TEdubTNvSnhFU0hkQTZqandDaU1XYTdJeDJuLXVHY1laZE9vU0NmR1dNbzNLT1A1a0kxMjQ3dHMwd1MzUW5G
Technically, the practice you described would explicitly contradict Section Returned Rights, Item 3 in the Copyright Form, which states that "The foregoing right \[of limited redistribution\] shall not permit the posting of the article/paper in electronic or digital form on any computer network, except by the author or the author’s employer, and then only on the author’s or the employer’s own web page or ftp site." Or am I wrong? I am in a similar situation: I have an accepted AAAI paper and would like to submit a version with appendix to arXiv, but have not done so yet due to the copyright. However, I am no expert in US copyright, so someone who is may shed light onto this.
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtZUdEVXctRjVqaVBEQ0ZZZlV6ejNEWHVWZ0xHd2NEM2tUOUN4NU1RNG9taEoySlV5QVJOQmZ1WXFfaDA4SmFLWm82ZkFUSVUwTUpISE5HQ0k2RWU4TTN1SFg2VFdYdjhMaE5VMVBfcC1sbHM9
Z0FBQUFBQm5meDJtSGFFT3ZTdXZjMEVBVFdlajhxQk8tVE1lLXFwMV81VF9DMUZMaTJuS0hmcFlKaTZwdmh5WUF5SkxKdXJDR1lzck9xSGxnV1k4SmY4UEFJUUhyLWlHajlKZU00UklJMnhPcVRQWXlKLUVLbUQwczFoTkhwZHpVN3V2bzZqSlBNaURQZjJSaUhGQWJ6RGJaZmJGY2dqQVlwUDV6OUdxMm5sWDFGb3h3WDc1QnRKcjRla3A4WFN3SzRPdC1SbEpSSWNYY01wcXVweDA2alA4THVkRmlSSldHejdFOW9fb3p2NTlzdkFjWU40aVZEQT0=
Yeah, but how many story points can we assign?
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtOVNSaC1RTlZyY3hBbFh0a1NnY3drUk00RlQ3LTdJd25Oc2piM1dpM1hrRHZEaFFydmFuMDJNb3hQUFZOME5uVWhLMHBROTdKS0tJM1JWSlBGdU9KVWc9PQ==
Z0FBQUFBQm5meDJtZUdIQWNCSmVVYU1JVWs1SXZ4TERWejk4U0tNOWQ3WDV6bDY5SFFKMHNiMDhRcW1Bb1QxSGJvSUQyeE1ZdUo0WkstZlBWTDVzcjhtam95eTVVYmt1ZVBacVNFTGxwYnBBWk56QVlNbjVXYVVua0R6bDRPbTh4OV9odmt6NXN5R3I5aDZZNGtGWENRenJzdUdwOUphOEU2Sk9wNzVyVWI2dWVTcUVMUHRPa3NkUjY1TmxsSkpGWmlWX1FyTGRpRS1nSFlhWUZhQS1jT09QSVQ1NGoySmVaVFVyMnNtSVljT0xMOGRvQzd0YlVRaz0=
In the United States growing up, classes were at least half full from 1st grade onward, and no one wanted to be there, especially not the teachers. The dry curriculum was one size fits all, and the classes were all formulaic: take notes, read, quiz, cram, test, forget. Rinse and repeat. Class after class, year after year, and no one remembers any of it later in life. What a waste of time. Does AI have enough access to data regarding education that it can guide humanity toward a more efficient system? I don't mean efficient in the punishing sense (aka China); I mean efficient in the *least amount of time necessary* to attain even better results. 99% of my education was a massive waste of time, and I know others feel the same. How much more nuanced does it have to be before it can be implemented, at least online, to fast-track children to the same education as their peers in a fraction of the time? To expand on this, please consider that different educational methods lead to completely different thinking abilities; William James Sidis may have been the most intelligent (and lonely) person who's ever lived. He was homeschooled by his (genius in their own right) parents, who wouldn't allow the outside world to teach him how to think. They either taught him in their way, or insisted that he teach himself, giving him the tools to problem-solve in a way most adults today can't. Research this man if you haven't heard of him. Then imagine how drastically different education could be if we put AI in charge of education.
r/artificial
post
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtRmJ1WFg2c0NhRjB5R3F0NWxuZS03MlhmTTVxdTJkSTQ1WHF4N1lxcGFLVEc1MXlTZ3gxZzRfM3hVMzZzTDEwa2ZuNmxTTzYwdHY5bjBUX1g4TjJGTkE9PQ==
Z0FBQUFBQm5meDJtSHgtSmJMb196ZG5oQWRJc0QzWmtyaTRxeHBtLTdZSlR5NlFJUW1Eak1ocFRqblBMV2xJU2ZTT014OE8zbDJ4RVcwMEFYNUtkMTZMUVBaRXBGUzJjeHNrZi1BTzBSamdENGFiclVoNVAxdy1WZlhNUW5lb3lpVHltS0Y2YjVHNVFOOEZabTZQUlZSNUVBdS1vcFFvZ0VUV2R0a1h4SFFmbS1POFVhckFtRktzTjFQajl6eW9TMDREWVMwS1YySUdkUFI4ZzV1c09wUzlDTmlUZU9FX0VuUT09
Us moment brother
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtWFRKWk4wZVU1M2c5S0FwZ2N1WkZkOUdtSXZDa2RNSG5XWDMxNHRCUkVhSG9LNzlDZll1VWYxV3FmdG1ieU1JcW1ibllyaUxaeDlUSDkyZWctWFgzMGc9PQ==
Z0FBQUFBQm5meDJtZDVnOWpJSWdySjJMaDlVS3E0dTNTZ2dJU2VOaWMwZjU1QmxPY0R5RFEtMkNJUzdhNkhVRmlTdXVnam44Q2w5QlE3cXl4TUVraG9URjFkeGI2eXB0a1BwUEFPY1c5QTc0WllKQ3BSZkN6Y01NUnNEenNQajJqMHJ2RnZDS1FwMDY5VnJjQzZiSDlzcjRNRHk0U2xVU1lUMEpSbHJOWXNGNlRUNW55ZlNaYmQzdTdmTjdXNTFLekJydUI1R0k5cjhfZ1g1Z3NsaEphRVdLREtuWC1heDVXdXNBV2tURkxyN0l1c3Iyczl1NWJMTT0=
Before chatgpt, they assumed that AI and full automation was some time away - hence they would need to keep some of us around to still build/maintain/fix things. I also don't think they realized that they would win so big and so quickly
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtaW1HclAyN1pLd1NPQU41bGQwTFRROVhaWW5Ed25PUF95QTdtS2V3TXJyOTQ3S0dzU2hKYnowQVp4cFY5MUd1NGhaOGdTZzBiMkhlclZDYXZKT3NDeEE9PQ==
Z0FBQUFBQm5meDJtSVJNVHFuUFp5R0tmQmo5U0RxQXJEalZDNnBRUFd6RFd5TTBaY0tPbjhJdC16WHR2UHFzaE00VTRlMzZ4MmFWeDNLUDR1WXd0dDkzcGtEc2FtNG5iOTNfOVVSbU56cE56c2RzNUh3LUhIdUM0LXhRRG1oQkJQSDJRWWc1QkxlVThYVEhDRldKZkNlMjkzSWg0cGxTeWxLdW1MSFkxYmw1NkVyekpUMlJ2dXVKT3ZPZjRnVExMOUdCajdjd09GZ0hydVZzdEJLaWxCcHZiRnVXbVppeGQzenZrTEF4TjNEMUpaMFFkdnN5eWRJWT0=
Half of my job is explaining why LLMs aren’t always the optimal solution for something. Yet they push back because LLMs are so ‘shiny’ at the moment
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtbS1VSFNMRnRxbWhZLUdoV01jZ3AwUmxyYzBiT3RDdUhwOXZtYUU0b0xSa0t1RFoxRUVqcnBvWDhhYUpqOVR6cnBidmNWcWxQUEhhelp6aTI5YzVwSEFGcS04UW50LURwX3V3QnA5ZVVUR289
Z0FBQUFBQm5meDJtOXpBLWVheXN5WW1ZbXFkV0pLTG94dGgwUTVwc1pLR0hKMFoxUHpGWUdQLUJ4YlBpbVI2Tmd0cXZVQi1LRENSRjNsX1ZZWDJPUWV6eHVpWExTdXVVT0lpUkxHTXFreHVVXzBwczRDV1ZXVWlXZTE1ckUzZ2ZKR1ZjZF9fVEVFaDdxM1FDRS12WWh5cXk5NWRPMFpWUzlNSGR4SmFiOUhMeXFid1k1MlVGdjlWakZ5WUcxS2ZXU2tFengtQV9HODdpNTVsQjVCZ2xqbWNUUTVEbk00YUVQXzRmUDVJckpPeUUwX1M2SlctMzVpbz0=
I was wondering what models are used for the real-time text-to-speech programs, or if it was just a really fast input model and output model put together.
r/machinelearning
post
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtWDhMTHlRVjl3VW94ZE1USzlVMXVzQzFHMGwyQ0F1RmdOQWd3WnhBbTU5dk5vb2IwT3NzSUx1VUEyMFF5MmNadVdHODB2SWFGaG1iNU81OENXZ1M4UGc9PQ==
Z0FBQUFBQm5meDJtejQ1S1V3dmNwRVl3eC05eTloZ3lEOENmc01yRFdRZE00QmpPN01xeWFwMGMtRTFiMklmcjR0QzktYzcwc0c0M1J4dGNXU2ZyT0tXUVR6T0ZBS1F0Z3ZzLVFuZFllWV8xX0JVN0t2OTVCNkxQQzJRSFZta3hMRklCSnN2cE5oMjNNOF9ORDV4WmQ0T29nM0xhbjJaTl82N2FHcmZPY0FVQ1piY19UYUdzeWdlN1dnaC1iMC1uV2JDeFBNR2ttakpPc3d3aGZGd0tfWnAzdElaaDQyTWRvUT09
Too bad they didn't consider Gemini, I find it better than claude and gpt4o for long-context tasks
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtVXJPSHdDVEN3RHJSaXMxSWZ5dWRCeE00WWxmejJyTUo0R1owT3Q3UDVDVG55RXcxQ2ZsRkxaaFBublg2WWJSM2gybkZvLTN5YWtCZ05SZUQyZ1o5X1E9PQ==
Z0FBQUFBQm5meDJtcW1NOEhJdHdtTTItYnh6aFU4SVUyUnVTZ1N6ampvcVpOU2UzMzdDOFlyTU9SWTdINzQweW5zQ3gxNzc2YW1uOVRuZTM5VWtOMjNYbV8xYnFtWkNqT05iVlhuUmhwWk04WVE3UlFkcjFKeUIzaUxtbE5PeGFiOU9qWko1bFJrQ19BaG5wUHUzZGYtWlVQc0NsNVo3cHVGMERXWDNLQzZueU13WXJBR2V3NmhIY0xiR0tiQXNZUy0wYllSOUhwX3V2UFdJSC0xOVpiNWhKbHVQcmd4M2NGenNOVnM1c0ROLUd4MXRxTGlhYm1TQT0=
As a newbie researcher (approx 1 year research experience post bachelor), the hardest thing I find is defining a problem statement. I usually go by domains, like let's work in PEFT, but exactly where I am to work was usually handed to me by seniors. This won't be the case for much longer because I am starting out on my PhD, so I will have to figure this out.
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtZGpnbnNnRUFSUy1VeVB0dE5JeGpkTlRnZVU5OWxOaUtLa2p4QUpOTzNDRzFJU0VEaFhiRkVYcHU2LWgtOWx6ajVabi1ZMnZkbFdlendjQXlENWNMQ0E9PQ==
Z0FBQUFBQm5meDJteTQyZDdwVWE2RG9mZHQzU3N6MDdPczJfM2FxRndOZ2l2UVdVdVNTR0ZwS3BjYnhtQ3Q3SGg2WmliZ0lGX3M2Ym1mX1hjZzdCdTA4TVhidm9vcFhJZDdOOFRCeHFEWGJTTTA4Z0RKNUFtOGxXTnQxTmhnWWM3ZkQxZE9KRmlCRlBJMXF1cEJrcVFja2l3dmdTTWtfNHJXU2N6Xy1WenJ5S1kyRGtYcF9PdUtxb0VOX2g4ZXRUM0M4YnpYQnN0TENoYjc0YjRuVERhakJSS3RCZXNGVEI2Zz09
Your post was automatically removed for being a link post on the weekday, please read [rule 5](https://www.reddit.com/r/MachineLearning/about/rules/). **The moderators will not respond to questions regarding this removal unless you suggest which rule you most likely broke.** If you have a beginner related question, visit /r/MLQuestions or /r/LearnMachineLearning. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/MachineLearning) if you have any questions or concerns.*
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtX241NVBIZVFFR3BGdTM0UWtjRGwwZlc2b29RYTBtZXh3a3l6TzJVTjRadlJUV1lMQU9xelFJRkMwdE1nbnRtSkU1MDUwcVpYQWhfSk82TXBmZHJURlE9PQ==
Z0FBQUFBQm5meDJtVGFaN3UtenlTaDVEc2dBdjVFdEdWNHpLNWx0b1lSTlRTbHYxWmk1WjVTYzZWMmc4UHoyTnVHUGJkbW0talgzVjlOSWY4cTRyeXVOQXJfWGRSR19TS1FKbTNDN3Y4bmRCdXhWaDFaOS0yT1B2RlR1Q0g1ZWtRNHAxeFJ6ZXV5NVU2d1NYT0dlNWI1MUVBU1Z1LS1SZmxpLTljcUtJdl9leTQzaUszM1ZiR1BkdU0xUjBuRFVXMjhfY08wbnlia3VNUkcteHNYSDdYX3NTN3hIYlNfMEJTUkNNdkJiM3Z5VGd0SlYzTHVXTEZWTT0=
I think we are in agreement?
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJta1hDWG9LUkJqQXdMNWlMQ0F5aUI3RVMxNjh5T0RRSzAxMEhyakNXS3hnd2hKcVVGTzhPRkxvSVkwTVB1cGpFLXhiTkRHWDQ1WjhRSzhIQ2ZkMnhTcVE9PQ==
Z0FBQUFBQm5meDJtN0tYOVFmcXdRY3BvaHh4b1Q5NXB4c3pkUHZCVzA2aUtBTmozZFdzcUlpc1JSNE53ckZaWFlFMWRBSTM5TmQ4eExYd3kxYUlXVzM0aXMtWm1CMFRSb2lrQTZvQ3pMM2ZVeVdQb2I5VkNwRHdsRks5ZE15TVhOdGRYY3lFUXlMa0tRNFBXZTBmS09VYkpKUXV0QUxxd1RKYkl1U0ExYURfNUR3TElqT1o2ek1IWlFweTN1QmZKbjRTQk8wNVpmWmh1VzkwN29qVTdaak43cGtjRV92bk1oM3Azb2tWMDF5RG1UbURkNER6UkZRbz0=
Data preprocessing stuff
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtTFdFTVBVX2VEbDZxX3dfbkxQY29sS3BuRHl3Tl9UN0pJOTNCZUd5UHJxdkFVd2tibU1TWmMtal9zb0E3RURDUDZTM29rREUtNHQ0cVJ0MldPMEVNRlE9PQ==
Z0FBQUFBQm5meDJtSGNkN1h2b3lZVHJ0THp2MlVGcDhWLXVlbzdFYmRBV2dPVFVyMHlrV05IMGlZR1R4cldzSURZb1YwVW5ud21wNlBOQ29PRFJQSGJsLVVteUVZbXpsRTladGt4YzUtRTJwMHQwblJ6VjZuZGhac1F3Z05KRTZsc2ZKamFhOTB4M0VNV0EwUkg1OVVRR2JpY2ExV0tpUVZzTnVqcy1XMWJKM0Y2dURVQ3NRQndINGFXbzRJOEluX2ZELUZqNW5MSkVvNDh3QUQ3ck5fWWFmQkp4OUJydUVTTHB4aXhwRVpTLXI4b1gyM2w5Um1oaz0=
It is! ML models are but a very tiny part of the job. Everything software specific still applies. To be a good ML Engineer you need to have software fundamentals. At least backend, OS, testing, dbs, and cloud fundamentals. Everything else is add-ons to build your own flavour of MLE
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtZDJndXU4RDVtVWsxc2J3Y2R4U2FFVVpQOUhOYWZZVDlROWpsTjdxSHoyRnAzaDVzU1RSOWFmc19wRDNyRW4zVHlDT3puR1ZKaW93eGtYaHlMU2RCOEE9PQ==
Z0FBQUFBQm5meDJtVks5b3NBYksySVdBQjc2RWl2b2VYeW1ISVlFdFQtcmJxRnA5cGdqdjJLcGkxNDdpbnRmeE1iQ3N6R1RwOUxPbHFCZXNERXNrM2sxUEF4SzFMMzRCeldMa0xyNjBBdFQ1M2hCSXBRbXk3UkJGV2UtaXh1d05vV3VPSWxnMnBEd2lQTzM3ZGlRajBjc3l5MnJobmh6Y3A0anVyZFdSWlpFNFVaR25CZWJneW9iek9LeGp0RDNFWVRQQ3g3SlMxREZVUmhNSUtJdkE4SEFmM0RheU4yZk1uTEctOUIwaURzS3I4VjNlWUw5WnJ5bz0=
Does anyone know how to register? I haven't received any emails regarding how to register..
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtVEUyT1lpTDdrSUxwX3d6a1BEWVpLZ2Z3OE14T2Q1TzBfMjNzcnk2aV82NG5LbmJqZFJSNkRxbnF4WDJZc1lSZmhKXzZDdUZqTzdGREo3N0p1NHU5Nm5oZ090cTJ4SWtQWEl3T3dNVnJKN009
Z0FBQUFBQm5meDJtV2xsVXhkcFB2LUVnazJBZmRERUMxcy1EbDM0S2JNWlFIY0p3VGg5VmoxamtYd0UtemFuZkMxdVZyR1ZIMnhvTjIyc21WRVVsMHdpZmlmS3lFWDBGNzhqX3lraHNfeUNaYUlIUUtvd1RsOG5FN3ZlaDJ2U3dJSjBDUXdHMy1SN2FLWE14UnJhLV9yWUpXWU95YzI1M09qWUdaYVNaR1UwNXgxQlFsUGxQc0kzaXRma2ltdlJ4Um96UDlEZS1oeS1V
Lack of compute is my biggest problem by far... Planning on releasing a paper and get some funding to acquire a bit of compute and rent the most
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJteDFWeUlkaGZIUk95QkRhOUxMeWR1TmhJdmFKWmFaMDZBSjN1ZkVHTWQ4WWZnUHNSOGpjVkp2ZzVqcjZTTjNLaVZaci1VZHE3UnBnWU5zeWNZcTZBSExzX0NhNVFJSFRfUTNuU3lhUzNYNmc9
Z0FBQUFBQm5meDJtWjRmeTFIZUZSUHZfZW42eXVjWWowNWVfNjZraG80LUhBZmFYUWVGS1FOSl9zc1FyNHp2alhQeTZDOHlkRVg5a2ZMZmhVNnNnZjFlYjdUR0h2R2NIQkN5eGpvd2ZNWV85Y3FpWkZHcjh0NXBwR2FlcHVOWXlMcFYxWkkyMEk4RGpfZFo3elhjRWtRdHB3ZXU2R0ZZSUFHbzdGVDVSXzdZaHI0UkhDWUpha0M0a1FGQ1Y1ajFKMWJlZW1PNThqdjRTbXVHMnlUTVdlQXY2cnZIUmI5Y2xHUT09
Funny, I'm also in time series research, and imho finding a problem statement + RQs is by far the easiest part of research. I have ad infinitum ideas + questions and even fairly detailed ways of writing the paper for them; for me the key issue is finding the time to execute them all... Sometimes it seems so easy to be a senior academic: you can just dump all the problems + questions on students to execute...
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtcEM1U1dueldjN3pfVU9IV1VyUXl6Z2F4bkpMeU82QjFlX2tfeW1kU3pka252X1BEVkFiWDJlTW5lY1laYjVLT0xteVFuRGFTUHRzSHNTTEdvS1Myenc9PQ==
Z0FBQUFBQm5meDJtTHZFNHBESktldnZkWUdGOGxRS0k1TDdtNFZwUTZxLU1TdUVJNmx5a3k5M0p2T3o4Q0Z4RTV4b09KdFFBWHUwaWwxOUY0UlBMemlld1BETEhUTG5WdldQbGJNMWExMFhVTDNQbXFfT09fVFdWNXZ5eUY3REtfa1U4TmZqN0FGMTVlN2pyM3FjWFdDcGprUFZnaXh1dUo4eURqZWhEMy1hdldlcWFKV19aRmRhYnJJUXVtLV9UWU8tSlY2ZUk3UFFOTUVaelhRQ25fQWJCa3hhdzNMc0ZSZz09
Your post was automatically removed for not having a tag in the title (i.e. [R], [N], [P], or [D]). Please read [rule 3](https://www.reddit.com/r/MachineLearning/about/rules/). **The moderators will not respond to questions regarding this removal unless you suggest which rule you most likely broke.** If you have a beginner related question, visit /r/MLQuestions or /r/LearnMachineLearning. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/MachineLearning) if you have any questions or concerns.*
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJteGhtZFQzckRCR3l4bEg4Uk8tQ3RRemd4R2d3QzhwT2ZKTXRDUHpFQTZjZ0xnSE92WlpZb0cxZmx3VjlkdWJyWldvdThxRUM3cnVBN3hkZFBULW9ZSEE9PQ==
Z0FBQUFBQm5meDJtaFM3T3ZHU0VjUkpoNmJwYjc0bWJ5WVd6VzVaSVQycGxQYnJuOG8wWTlIZ3h2eE1tREVxeHUtWl85LW9EQm14bDhDaGJScWZYUG5heFpZYzhUUjI4SVh4QmQ5MVFHbk9TWU5IZWF2NUxTUHhmYWxLVXJuS3ZiYlNTb0ZNMEtOa2g4LVNGOWV0NVBUb2N5ekVwdGlKNWZMdFRpblU5aTNpbHdYQWE1Ym9ZM182TUJxVTBUblBMSjRTU3VzNDUzMlJwMVpqalhwRHlRbk1ZU2lwUUdnUFpMdz09
Your post was automatically removed for not having a tag in the title (i.e. [R], [N], [P], or [D]). Please read [rule 3](https://www.reddit.com/r/MachineLearning/about/rules/). **The moderators will not respond to questions regarding this removal unless you suggest which rule you most likely broke.** If you have a beginner related question, visit /r/MLQuestions or /r/LearnMachineLearning. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/MachineLearning) if you have any questions or concerns.*
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtRlhab3dDbHRraTlBY2lkd3R3eGpXTTJTTkdnZE1PRmdpcllDTWdjbXBWdTVPV0Vzak9YT3pTRGNBYmppbmJIUHVoajdXQWV4WVZZaWJqZXhNU2hMTkE9PQ==
Z0FBQUFBQm5meDJtNzRWVXJMS2hycVYydlhSNmU1RDZLVzVhcG1VcFM5X3RNSVZEeTJsY1dEWUlQVUhXSWtnbFliNFlWT0RYUVBkeVBQNGJCOTJaMXNnSzZlcXZKNm5Sd1QzOFVPME1ieENyZkxGYmRHZ0E2T0ZiWktVQWNZWjBSQ3pvNXJ2bFRJd2RTTTBCTGpzZ3hIWEJPMG10dHByRTE4OGg4OC1BbzZBenIxellfWTFFV19YQmp5ZzBGQUFTZVJKbTlRampPcVMtRGxELS1WWF9YdG1VSmxIUzFzeGFZUUVyVGZ3TDV4SjQ0WV9aa1pwQmp1RT0=
Your post was automatically removed for not having a tag in the title (i.e. [R], [N], [P], or [D]). Please read [rule 3](https://www.reddit.com/r/MachineLearning/about/rules/). **The moderators will not respond to questions regarding this removal unless you suggest which rule you most likely broke.** If you have a beginner related question, visit /r/MLQuestions or /r/LearnMachineLearning. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/MachineLearning) if you have any questions or concerns.*
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtVDRJdkZsTXY5bHQyR21LdzZyVG04T3lPaVFzRjd0R2pXZ3lYaEdDLUhJeWpHME5DWlFQajVaYWNSdUhqRkozWTAzN29NS1BJU2FJUk53TXFSd09xMnc9PQ==
Z0FBQUFBQm5meDJtaHVRYkV4Zjl0T3diZC1aU050eXNKcEx6OUtGZ3lqM1RBY1pidW41VXNVQi1Ob00wa1ZvSDFjbU5qQ3F5MVVzbXNPV0xzM3gwTGhMZFBzTUNhQWJCQTZZdmp0RkFneGRHaFUyR0hSZjVJcjdsQV9LUXVWLWhydFFfcGVGelVLNmFYbXVMLTJBOFBkajduMnk1NUVmUFcwMGprU2V2a0JGUU50SFV6djJDN0dlVGRkWjg4M2JvVjl2TU43UUpBVnk1cERHYmZGdVpXT3lrTWc1U0RlTXc4S1pTUVFzUTNfbmFIVE5XSzhsR3FFcz0=
eleven labs
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtR1Y4UTFhYnlyT0NfLUE2UkFpN1JzZURTaFJjM3RScHZMSVdoZFlFNzNtVnd2Mld0Y29WZlBsdHYxS2hualJPR0lEdVotMUlWVkszakVrSjFTZXVuYWtnejkySzVVU3ZwVnZNaU9vLWpvbkU9
Z0FBQUFBQm5meDJtQmZpcHVNYmxySURTYy1ObDM4OUxmcGpfZllxYTlrOERtZjBRLXhXZGNfcVIwNnlqVXotbHMzMDBZcFo4dGx4am9wWjUzODJLWW91LUVrN1MxRkN6TGJRbkJyZkU2YzhyU254ZW9jMm11MnZWZlFBVlVZMUxjSXVYMGc0VUNtVkFjQkd1NUtEWkZwc0lQOHZzc2ZvVUN6anBnX0hxWU9WNlRPTnVFSndWQTJneW5pUElEMjE5d2RnaTdsM0l4ZmQxa0pTVktuc0oyUWllY1B3NG9uMk8xcm5MUTNlNWNfOWNsUF9OSUZaZ0NmYz0=
the data needed to train a model not existing :(
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtWVFzM1FpcUtVdmJ4NUpJemI0RUZKWEVCVXZKb1hmS0NXQ3RGWWloN1BmdGkzOW15aF9XN2dtaHluWGFuXzFjdWdsOUtJV1l1d2UtMVdPdzNNWFJVWWc9PQ==
Z0FBQUFBQm5meDJtaWgyQjVkTE85STcwR1ZiTThwak9WSVpiZEhrMG1DN205TnZtc0d2dVplVHVDdzNGVUZuMV9weFd4OUl3R0pqMFpYYjhfSjdqM3NrdGxJbVhGM0tyVHVWQ1NnVWhuWEpoT3d3RWxZSlhTSVM0UjJuclhIUlhlNk8yQ0x4SmM2VUFCdDh4Uzh6bktLbEl6UWpSWlpsbWVtY1puZXJnNFIzd2RQYWFhUjNVMmR4MnJTNHgtSG9jTnpITGhfaS1fNmpKb3BHWFBZVUdsdnRHWjRTX1NBd3NxQT09
damn that's a lot of labs
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtYUVFTTNSOHl4ZEF1UXVIemttdzhXMllPeExXOU9YTnVuc0NUVXRUcWpYZFZ4SkhkQng4QUFmMzQ2SmpTNUtQVlhxLW94dlJrcGtzVkNlOEtqLUE5NGc9PQ==
Z0FBQUFBQm5meDJtNUpWMEJwMl9MMkJGRnpjWUhEbFNRRUIzU3hRbWFxV0ZHVEVPWjYwRV90VzM1YjAxbWRRUi1oUHVVOHpONmQtdUl2X3p0QTBkckRuWHNzOTFTVGw5b3NLUUZESmF2UjRLa1VpSTM4NnVKVlBsa19TVm5sYXZlcjFFbnNmMUtnQm1TOWluQXM4b1cxUUZJZE5lbmxKLVlCelp4bVFDMnNDTURGV1M5US1yWW9JMXJJbVZINGNlYVRVZEtnVnFEbXBPMkIxN1dwRV9USUVZemhTWnlOZm5oRmZUR0xlb0JzV1NaU09lZ0JJSVREND0=
I do ML in a research hospital. A big problem is communicating AI concepts and challenges to clinicians and biologists. This gets particularly frustrating with grant applications. Imagine a mathematician deciding where to distribute funding for organic chemists.
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtQnVJMTNLM3Q3XzM5RmhYc1pRbVp5bDVSVURja203bGRUOHV4LWdhQW5WeTZoY2t4ZG9TR3RYODRZQ1RMTGNSLVFVU2FiLVZRV2JMYUdhUG9zRVowMkE9PQ==
Z0FBQUFBQm5meDJtU2VTWF9sVHdyOTFSMkU5MjFneVlvMGtqcVcyeU9mV3pBZlZiT2wyU1ZjTGF6UXAyRWVZRmJIMmRNS2tydmJmR0J2R0E3WDNBb2kxSExqOHc5SkM4Xy1FQVAyR3RaYmZLV01DZDVfNUxxMmR2ZTlmM3pMSUdwRjBidlpMUXBSUjJRaURZWGFZRnlIQW1SaERaSjVLRDBicFAybUhpZS1hUzdrX2ItNjQtcGJrV240d2lhZjJLWkhhbnNhZHRsQ3huQ3RtN2hqc0JmajNsSHJsVGJCSmJaZz09
can you give some examples when LLMs are not the optimal solution?
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtTWp1VVY0QTdLWFFpSGhZSHppX0dHZGVZaF9TSGk3RTdicG41ZjZLTDl4N0tmVVU5dXhINUEyYU5MRDZ1dmkxZi1UbXFBWlZQakVfYVg3SGVjUEo0bXc9PQ==
Z0FBQUFBQm5meDJtcmllT0F2SkQxejBmNzdQVzFweXU3TXkyLTFwQ2dmVmhXNnVySEFlU1otQXh5SS1LZDJQVWI2NGc4ZmpJTDR6QkZlNm5VdG1FRjJ2UzlzdjZJYzN0T0xpRjBoVU0tYUMtOU5mWTNjSTNHd3pEYTVsOERUR0tsT19XV3pUcU1LdkNNVHdDOE5NOEJFcmtTX3puMXRWVldFUzVaOC1rQk5icVIwRlhjN2x1bmJZTWJBa191cGZuOEJPRkdNRmdVTFczNGVkMlUtRVAzeFp2RThYcE1NT3AtRmM5YmhqV3VFcWpkMnZDeC1qc2JEYz0=
Getting off Reddit...
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtQjJnTGRjcC1wdXY3QWd1RDBXR29JNkNzb1ZFWkZHUEE1ckJGZmJMZkZkcHpOdjJxdExpRnI5VFpiZEZCUElScFBCNjI2clBQOEh1SE9PSW9nY3lFekE9PQ==
Z0FBQUFBQm5meDJtUTBqNWtCYWE4Y2xvc1dIVzBQWE5rRUpyUmhBdGRaWmdmV1BqWkZPeTJ2Y1pNcGJRdVdhYVprcUN2aFN5a3laeG1oN3k2dm1qS3MzN3RWWkFPQkVoN09IM19RM0tBQUtkdGhPTnEwY1RhM0R5TG5zaktFTWpQenY0NUJMaXQwLUp6WFNUZHJmclN4Z2hZT2dnU1RvVkliWHhqSXhDYWdoR0NqNmJUNTdHN1pEM3hhYzNjN3hCOXIyNzhqQV9iRk56Uk51STFUOW00X3prRXRzNUZCeGY5Zz09
Your post was automatically removed for not having a tag in the title (i.e. [R], [N], [P], or [D]). Please read [rule 3](https://www.reddit.com/r/MachineLearning/about/rules/). **The moderators will not respond to questions regarding this removal unless you suggest which rule you most likely broke.** If you have a beginner related question, visit /r/MLQuestions or /r/LearnMachineLearning. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/MachineLearning) if you have any questions or concerns.*
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtcnNCX1Ntc2p2MURfNVRGcXc5UXZ1RGF2aEpldURJaWllLTljQTZILTdNajlDWGg4THJ6WmM4SkJCU2NYcm1zTkljNlZDRnNTbWZvbkFtR0lCTTNNU0E9PQ==
Z0FBQUFBQm5meDJtWkZLMVpnOFREc2lGN3BWN1BHT0o2TF9CR2N4SGdjdjNBdHA5MW9xOVR1Sk1ueGp2QS1PWk1WZG9oc1NXa0sxQWFDTXJNUUFhUHZsVlZ6UTYwRDFHbXR5SU5YN1lvcmk3R09QUVJNbnNUWDRIWXdrM21adUk1RzJBV2pOUjNvTldkTmFkYmxMTmN1Nmc0S0ZkZTZOSWtZdnhuR3FMc3hiQmVreUR6NWkxVHRBdXFLVWdEaHhHeTl1MDFDenh4NW9nRE80RnpqTUd0TmhMVTFBWGkyejZ3dz09
Basically it’s having to say we’re pulling strings but we don’t really know which is the right one.
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtRE9RalY1Zm9fVXUtNXRSTXpubGlqeXQxM29FVk9GRjBUSXRwNGVZUFhyWFZNWUNpOTQ4ZWNPRHlxaUZMbFJTZExLb0JWZ0hUWkluTlRPVmRxYm9Td2c9PQ==
Z0FBQUFBQm5meDJtd0xyOXA2LXByWG1ITjFDb1VPeUVKSW00ejVVTkxDXzNTUDJiSXRGZlk5YnhaN1BkTGtyUkd2clZyY2VpT2R4RmRYQVNIQjNWSVJMVkNKLUpJeW56OEt1UzBCajB0WFdGRDc0LVFBSlR3enUxMEE4Mm1fU3pmQXVYZlNqbmQxcU8wZV9IWHdxcjNJUmQzY1VrQmc2OVdLa3NleHVGeko2X0piVW1zWnljeXVlcHFMZVdyWjZiZmU4LWNTNTRFaFQtdnJLaFZLejR3dUpSQXNsR1c1Z1hTc3JPNW9TcFMwT1JjYl9aQXdlbWlZbz0=
For me (in academia) it was writing & revising.
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtdHdmaUZiZDJqMjBkRUFtdWxkSjlDYlRaeHM2NnBxRzdQeGRpenhHdVh5VWpadkIzNldtRFJHWkhhQ0xVcy1CRlFwR1JqcUJHZ093Z002R1hBS0ZPLWRqOHdLQjNlcWhDX0VhS0lYaGY0bkU9
Z0FBQUFBQm5meDJtVHRhME5TSmExV1NIZmZaZzV2Q2hfaHI4T0psQ3BJcTg0cHVPNng1RE93U1JuckZiYXh2Z3dxNFQ0NV9RTFVsYndvSDVVS3plLUE3d01YeUlxYUlsTVQ4Z3NJdmF3Rk5qXzY5QzRKS1AtRmtNUE1SSXluUHJZZ2pBbHhpblEwZnFFMmRWVU1fTkhzWGZQVUItWkRTNFcyVmFXbFU4a25ReEZNRTZaMF9Mc0hPc0xfRERmS0ZpSkUtWVctY1R2LU5fVmxySFJtbXJ3MkZBemJIUVk2MVFGUT09
What conferences are you planning to go to this year? On my list for computer vision / machine learning is: * Nvidia GTC - March 17-24, San Jose CA * CVPR, June 11-15, Nashville TN * ICCV, October 20-24, Honolulu Hawaii * Supercomputing (SC25), Nov 16-21, St Louis MO * NeurIPS, Dec 9-15, San Diego CA What's on yours?
r/machinelearning
post
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtQzM4TG4yN3BHZ0VNTHFxeVBXVTVkcFNiUThpaUtQV185TVI3WVo5b1EzanREZUtsNXlLdkNaVkMxZTItanN4WEp5TkJ4WUhycXpBVnQzWDVaMVVPdnc9PQ==
Z0FBQUFBQm5meDJtam9fTG1sYlpTdFlsQU53R3JmcWNZeWJzaEItMEJsZzhScWJaOElPYjZ5cDlfYzFrTm1BelhMT1JZc0xVXzB5UGhwQUx2NXZ6clE3NkhJOHZ5ZUhmSzFzbmNEcnZEQUdLTHBWLW5aaC13RjRvY08xTEVubldQaVJkZEJtYjRhZVRqMXNBdWFuYTRmVXNDWGtNMk13Q2o5RlZCSjJCNE5rclNKT1RqREdoTmdLeVJtbTRlY1R0Z2ZVQWtjdWp6MzJwYjNHWTZVQ3YyMEp3T3FaVVgzaXhzdz09
The curve that you're fitting could be super complex, but what matters for learning is the smoothness of the loss landscape, which is affected by your loss function and network architecture (and data). When you originally said "problems that we care about tend to be smooth" you implied that the curve that we're fitting (or manifold) is smooth. But in reality the smoothness that you should be referring to is in the loss landscape. The curves themselves are not smooth, surely.
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtV3FPVmNYQ2cyMnhpRVR4OEkyRmJKN0FpWEItZ2tqajNVSjhJQ1IxNzhwVmplTnh3R3dZQWlsekxCTFduME9GZmg4V3RBbzNhMVE2Z1BKVVpVMHJ4N0xzMDRxbkZiM3VxMkdhT1dsMVNMemc9
Z0FBQUFBQm5meDJtTS1qQUdzZmNUTkFEQWR6MlFCRVVzOHFhbHlxR19uSTNTbFdxWEV0am5MdnlwYktid1RFMmN1RVpuMElBY3hxanV1OGJORktNZ3Z4eWg4XzM1NUtpU1pzd1p5MHFGdDhkaE9NTVdqbGF6Z29GcFJ4aFpIN3diMkU5WUFIZ3ctTDJwdmJESk82aHFFUXJ5RmRHTWVvN2Q2REx0RmR0NmpfakZkWEp3anB3enU5RFBOSTVrSDY2alFtSU4tRER4ZHNxNGhHd0Zhb09iM1M0ZXUyZ2lKVHZzUm9UalpLZmRmRFkyanFKS3MxUG1ncz0=
My view as well. I had an opportunity to compete for a QC role, spent about 6 months coming up to speed, and when push came to shove, I walked away. To the extent quantum computers are commercialized for industry (2040 or thereabouts IMO), they will be cloud resources that handle the few use-cases. Admittedly, this echoes "there is a world market for maybe five computers."
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtYThqTElpZi04aEhab0pHdEs3RHAyTVFUQmUyNFFnOXZFSDRvMWlaRzJUYVpqTXdWUHhSdkFEMGRrdkhhMUhMamRPelkycEtPLU1BMllVZUFfSEdsRFE9PQ==
Z0FBQUFBQm5meDJtaGZuMXBWanpaMXJIWjRxUVVtYzR3REZlLXl4WlphTHVCVVRLdDdMN29YWWstaE1leE9fX3RtTlJoek96RWxHQmdqUFpqVjZCMF95SFZuMGlwZ2dGdTdrQmVieEpZdU1xdTBhZEJTeEV4bWVnd3RwM2ZhTVJTX1l4T0Y3N2RxSncxeU9oRW8yZlpzX2tZVkFOdWFTSmJTR2JSUWljaS1aWTlFdGJXMVBGeVRSc2MtcXJuTG9ZbV9GaW85SmZWSU5WX25Nc0dyR0RpUGs5WGN5ckRHTXhhTEFqRnlBdUdiNzBlZXh5UVVPRzNNZz0=
\>Can we feed these model weights into GPT and have it tell us what the model is doing? The amount of times I'm asked if we can build a model interpretability platform for the end users to analyze the weights / understand the internals... maybe I'm missing something but it feels like we're just going to treat it like a black box anyway, if it's not performing well, fix the data, experiment and retrain. But everyone wants to know what's going on inside the black box even if the lifecycle is exactly the same
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtYXBMR2t3cWRicTZkV1ZRTjdzWDVhNXJzR2lXSjhQbWsteUdRSUxOai1MSDJyOHA1bXpDOU1ESmtlS1hTNDByeHUzOXdfdlB6cGtlZTVValJGcE0zZFE9PQ==
Z0FBQUFBQm5meDJtWHE0dmJmZEhFekdxdXpnRGlpV1NfVzVEQy1kMlpPT0hHT0p2STNZRWFDRnBtTzkxcXJxbW5OUkt4YXpjT2hpcjlONVJ6dU5vN2NGZURCRDZqZEpTQkJFYXBSX0drRURiRFlqR2RsQVppNzFoWXVpYlhQUVJmZUFzWFVZaGlJRnhlcXBKanB5QjU5YjEyWmNNVHU1UWdzcWhBSk53Um93YzdFY2dmYmlaTFUzVjZTRlphMHBjY2poOXIyQkNUYlo3NFp2SEJDdEV2NWp1cHFQdW0xeTlDZzFZNnJHTDZqSzNFOW9OS2REWEctRT0=
That’s great! Perhaps I can learn a thing or two from your experience.
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtbHMtVlJXeldkaG5FN3kyN2ZZV3RIZzVMUHZxQWt3UUpkYzFkdTlxandGbEJvUTVzUFZjMUozblUyY0hXLVM5blhWUk9xU2tONFV5R1g0dERGTENWRVktOXNpdmgzMTRsSVZDWXBjMHZFakU9
Z0FBQUFBQm5meDJtNGJkTXIxejEwVk9TbXIyZkx3Q3JCT0xOakx6QktkdG1fZHgwdzFDLUg1bXVmall3c3FPZzlCdy1VVkhWQk9XNmE4RVFfQzhjOVEtOHdBTldFSS1mNHFVZ0s1TkFwT1hNVjAyVFpzbWRFTENnb1Z2cGVBaXk4dzZHbXhtV2JiYU9ZMnBxVGNkU2xJbXlRTnZGS3p2NXFKRVlfckx3OTZjTjRabDBYVUpoRlJieUFiWnBJVWtYUWstb1BhdjhOYlhyaHJGdmkzS21DS2FiYkV3dVlUTzBEZz09
It’s often a cost issue at scale, and folks wanting to use the latest and greatest model deployments on Azure. I work with a pretty large company on customer service interactions and we have quite a large number of them per day lol
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJteWlodDBnaV8xa1c4X3cwdlY0Q2tsaHFGd0YzUmVwNUlpdUFnUV9qakFNc0ltUzcxNGZ3RXRiMWpXYXFSVTQ5bC05RnQxSl9mODBQQTN4bldRYkVpblJkLWRxWjlxaDltTktxeEctb3I2dVE9
Z0FBQUFBQm5meDJtcGZHQm1Ca1lfbzBQLWhzSEJzV1dXaTA5Z2FoeXdrT3hCQWNESTR1Q0h1alJlXzdXWmNYTWFqVmFBeThGSUNDdWJOT0RhV3V6b3pJT29xQ0ZQbzJNeTlxOFpOUm1ZX0JBYUVXa29yMkxIbWdrckQ2RndxOHEwRi1SdzNWdDdwV2ZEWnYwb0tpMzkxbGx6ZmxwVUxEVzhRQ2hCTElKdHh3TWlfMzBSMFNxWFY0OGFWakh1OTdPaDlGZ1BleWVZYnhXRS1ITzIyT3h4VVVsSlI5OXpVWE9zaEwtZTVET3JNRHRnNXdEeHA4Y1c0RT0=
What type of models do they use?
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtdU9Qb09nUmQ3VE1FdHhXTXJzeElEbmd3NGwzdzFPYUhTS09QLWd2YWsyaFBaekw2a3BkR1F1dXFWTXpJUmR4YjdSd2lodjJyTHdzZmlKOVptSFpVb2c9PQ==
Z0FBQUFBQm5meDJtNXcwWDVmNlZQWE5jQm1FN2p0VERrVXBuSno3YnMtMGN2WVBPWjVNOHdoOTd6V1FzZjhIRTV2eHZJMXNWN2ttbW9rcjdoMzdyejZ3cXlMWTZSenJvZ1Ntdmt6a29WUkJfNUgyLVVSMFZuNC1Xc3JjYUp3RzhWOHhSN1YtT0V2Z0JBMGFiUWNOTWlsajltc2RFWVB5cTNZemdCUnV5eVZPSkRrd3V4Sl9aTGtmc05GSEUtU2hNTzVETTFWRVVGRTJPWUdQTFY3a3ROemY4RWgxT2l2a09iSEtFa2hndkdLcEpFbURTMGdaTEdEQT0=
Post beginner questions in the bi-weekly "Simple Questions Thread", /r/LearnMachineLearning , /r/MLQuestions http://stackoverflow.com/ and career questions in /r/cscareerquestions/
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtMS1UcHZoMjV1WjQ0MXllbWNTcVBjVWl3bjgzckhBZm5JbXB1OVhxXzhKWUozUkpSY0dtQ0ZMZS1JT1pfWWNTSzluMEF6VDlvQzJQWm91TnN3RFF5T1k1Y3JSRHFtek45dWpwNnk2RDgwaTQ9
Z0FBQUFBQm5meDJta292OEdRcjRnX2RkeFNCakVIemNYX1ZYMzdKZjNJVmNnX3BRanBfTzVkcDdEZWgzRExSVS11RkpxUzJ3VE00RHBqN0xBWlVhR003ekpxbVRqUVpEYUMwa0hkR1ZEWkl1MThwZ285Rnp6b3VlUFgyeFdiZGFoNGxkanRYel9EM3dkQm9lYWZZaWpNeVNvUmsyM1ljXzBkallhNnp5cGR2R2JyekZxLWtabEFyZF96ZHA5NEM0NzRwUFBBZUhuM2dCZl9HSGpCa0M0TkFyV1ZKSl82TlJ4djJNUlVMcnFnM192Qi1fblNCdU02MD0=
Your post was automatically removed for not having a tag in the title (i.e. [R], [N], [P], or [D]). Please read [rule 3](https://www.reddit.com/r/MachineLearning/about/rules/). **The moderators will not respond to questions regarding this removal unless you suggest which rule you most likely broke.** If you have a beginner related question, visit /r/MLQuestions or /r/LearnMachineLearning. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/MachineLearning) if you have any questions or concerns.*
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtU1J6YXpqX1h2WHRRZW5qcmJpeEF1UV90ZVV4UVVXSXFMQTMwRTdDYmVJbmIyRm9IZEt1QmNtS0ZucFlWUm85RXFxSEkzUm1NZ3JfNzJPV0RHX25TV1E9PQ==
Z0FBQUFBQm5meDJtM05YQ081NGczMldGNk1mUVA0OG8weDRUZjVxOVJoNDBHazZVbkppX1FCVkJjSkFyWFdqX2puMkpfcks0ZDNJaWotb1pXMDJVdFV1S0U4SExXVTg5S3FmVVFEc2ZjSnB3U29rZ2tEU2xfOHBxZUMySy1ieHRVUzdiQ2c5Q0o4VURvVXhlRlFUSUZCa0t1TlBqVFJndnFEY2NVMzJwQWxjUGItdmN3WUIxbzVGUE1jR0oydXl5LU90dDFXU3NpdjNn
https://elevenlabs.io/docs/developer-guides/models And sorry for my reductive comment, I thought this was a post on a specific streamer's TTS voice model used for his donation messages.
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtT0F4VGZPTWEzUjdRR2lCNVlJTTltbEs2dHpNYmdjV1Q5OWg1R3VRVGtMN0J2ejR1QzRfRzRBUTNTRWswX2lYTHJsQVJpMEwzd1YxSGlMbURJUHdKSXI0UnNLSzFBYldnMDBrY2FMcVpNUlk9
Z0FBQUFBQm5meDJtR1p2RzJmQ0dLVVdQN2JMX25RazdGcHR1TDZHWWxwVms5QjRrSFlkU1NENktybmR5UzdEc1VjS0hqRGV2ekI2UFZUeWQ0VkhNZTJNSEhvT0ZBTGRoZG52cktoY2pzR3pQbWNZeURibHl2ZTh6eG1FNUJuRkdmODQ4TXlELXN5VFVzRWlJclV0ZWdzYlRmUS1TZ2tGZkQzc3FvXzdtcld6NjRsWW9wZkdYUHZZWmRKLXZCdDJnTVVHZ3dEb244R2JVcjN3Q3FQUnVMSUdrYzZlRmh1MmRPRXVvTnNzSHNfbThCN0VsQ25aWFVucz0=
Does anyone know of a tracker of all the major conferences across all fields?
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtLVdRaUpRU0Zsd2VaQ1RnOWZDWHlaSm1zTWlqZE9GanZaVk5oWnBieWtSNFNkRkR2ek5DRkZNMmlCNXVlc3NTdENPWUJsRmFrTGRvQUt5VFBjQ0Z6QWlqTjhZTGtiUG5QYUc2MGJzazkwRDg9
Z0FBQUFBQm5meDJtb0pLcnVPbWhmTjdqN1I4QWZmSjZaVmoyVFp2WFp2QWVNS2tRbjRha001ZHBSUDBtd3pWZVZUZjc4Yk5uSkI5MnF4N1VLQmxIb1ZoYXN1STl4ak94cmhuc042WG92eG5JNDBFRG1HckdJbW9BYXhLWEFYZ3R3ZVhsaUxRZ0RkejBuV2phYUJJdDNBdjJ6bWc5Y0Rwb3daNkFaRVNva1dsMHkxSGhwWWR0QXpwYWRWZG0taUJ5SGJ1V2F4bkxjbDhSMlNpSXJHR2RlbXE4d3k4NGVHQXVvSlVZSy1XbVZIdHVvbGpEeWRkNUNkWT0=
ISO compliance red tape. Every little thing needs to be logged and described in the SOP.
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtVHF4cDQwam1tQTU2VE00Y3hHb1JYYmFibTlZX0NQOVk0eEplODNRb3lfNVNWMThFa0wzSjBkNHFJbG1vb0M1X3ItTE9JcFprSFRFVWlmMWprdlZxRmhlSE45cExxZHdnTGJtQjZCdFRGcjA9
Z0FBQUFBQm5meDJtZVJ4LU9zMTdIMjJKbW5tN0gzM3NuamotdWtIczNyNklNb0J6SndBaHBIQXBWMzlVR05kY2hwNm4tYXNQYzFfcDMyOVJwU3pVdHpQVmpFMHdNaXBZT1BwYmd0b1FuaHVRcHRnS3hWSHFHNUhoMnYyWlRRZm4yRXVtaHJTVnFiaHpTNkRYVU5YTW9SS0Nrd3JUT2d4alJUakNVQldITHIwM0ZvNXVsNW9vbDhQN2YxLVBraEdqdUVjLV9ScTdpenVPQUVsQk13Ym9rYTJaV3VualdkanlHaUZ5cUliVklPOFdpTnkxcWdoUWMtUT0=
After like six months, I won the discussion regarding SCRUM's unsuitability in a research environment at my company.
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtMEtTcFJUVWIwdHplSkQ1YzJ3ZEl5ZUxERGJKNWw1Q2M5cldyakR6MEk0SDBUeG9ZX3dXLVVuMUY3eVJvdG5lWTNSaE1hMGd3MUthZXlIX3VESGd5a1ZMS09wNkx5cDFTZ0c5aWxKeGJCcEU9
Z0FBQUFBQm5meDJtR0cxTzZaTmVtRXFxejJpZGV2X3V6VGFpeE5UNHNzWGpnQ0FKSS1INmVuc2dnV3M4c25wcEk0eHBNMlNxeGpSWlE4elFidTRhWVdvYzNicHpQX2VvQ0RyVEl0ZWZsbHZaM1EwQnJvMGxJTGhmLUM5WExIZklhUnRUU1FscW8xNmNHQjJZVHNabGMweFVNTEdHYnV0dWRVejBCWjFPYlVwRU9udG56dVpoRG44WTJ6ZXZIall2dUsyVS1jNGpJMGFZSHgxTG1zOFFSWXdRb0xuSFNxeHk5N2doVVh1THNaVWFGMG10ZWtoRzlsRT0=
Your post was automatically removed for not having a tag in the title (i.e. [R], [N], [P], or [D]). Please read [rule 3](https://www.reddit.com/r/MachineLearning/about/rules/). **The moderators will not respond to questions regarding this removal unless you suggest which rule you most likely broke.** If you have a beginner related question, visit /r/MLQuestions or /r/LearnMachineLearning. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/MachineLearning) if you have any questions or concerns.*
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtTFdCTWRJVTAwUWM2My1hOFpXZXJwRjlWWmVsX19JU2Y2TkpJZFFKdjFGelhVRU9vQmZ5N2szOGNZUjVpREE2eThRbks0SzFHTjE2ejVGWENoa3Q5a3c9PQ==
Z0FBQUFBQm5meDJtNWZWYXBBZGtJWVV1dkIwVzVIUkFoSWN4Y04wWmctLURISXdVRnc4elBYa3UzdWVBUHA1bGRqX0VMV1FvV0NQRk1VVUxzeVM5RnRVMkRxOXV4dHJmM20yQWJXZUZDTHhIZmUySGdkSFJpeFJBYU1HcFozbG5XeG1pSHdDbEZfYTFDZ1pNcDlsZUF4MXVNc0dCVHAyRTFOeUtfam4xbVNPSGRUVmxfcHp4SHpFVVNDU0p6clNpWnlaTktKaEJyV2ttWmE5bmpCbklncUpXNzlzTEpLaVdaSDQySDJCeG5jeFpBRDkxd3Rqci1VZz0=
Your post was automatically removed for not having a tag in the title (i.e. [R], [N], [P], or [D]). Please read [rule 3](https://www.reddit.com/r/MachineLearning/about/rules/). **The moderators will not respond to questions regarding this removal unless you suggest which rule you most likely broke.** If you have a beginner related question, visit /r/MLQuestions or /r/LearnMachineLearning. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/MachineLearning) if you have any questions or concerns.*
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtaHRGQ2lOeExjbHM5eEJTNnRHQXhFZ2Y4UHVtVGNqRkl3VkwxN0N2REExWHQ0ZEx3cnVUeG55UjRrbGhkSkFwVTJwTlZpWEFiemxZTmk3Q2lNQkVTbGc9PQ==
Z0FBQUFBQm5meDJtTDA1bkhDQkJjMUFaLVR2aGJtMndpbXRxb04tZVRhSEZMRG9LM3dyeDd5b2Y5X3JDSlJ6MDVjWDdnMmhvLUl0dGFObnBiU1pFWG80NGd2b2ZBeTNhWG00VzNZNlZaTWZtY0hqbTJEY1RGSTFzMERlTGNoaU5tamVTMDQyS3pIWWJ3OVI3RlVSYXVvMXk4SFV3TnZiSkJYSmRLU25RbkMzYUtGRGdJWTUtLUJ4aHc5bkt1ZGlLVkszUmdhcVQyMDAyN0ZPakxaYUJESnVZRDdMQUQzZ1pxZ1dzdmpTWXh0Q0lrMTQ4b1M1U1M3VT0=
I had assumed it would be probabilities like 0.0000000001 as opposed to actual zero… 🤷
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtWVRuYlhOMFd2YUhJTnJsZ2NXSnlPZ0JGNC1vazFxS2hHUlQtSlZCOUhHYWJ6dmtlemdkTnZuZXA1WDZhOENjY19iTkNEX3NLV01yaFBudU5ldWtVWXc9PQ==
Z0FBQUFBQm5meDJteTFQYTZWY2RUZjhsQ3FEZUdhVEczVHV6alRhYTdyLXNKdmM1RG5ZV3ZKWkphclV4TS13Q0M4SXduUXJfVHRiOE91aFF2bEpjdzdKalphMGNheFRSa0d4NlM0cFVaeU4xcWdtUTdIeWw2NUhkZ1dOR1R0VHl4VTVTLUtVRlM3WWZ1TC04QkhnR0M1MEZFVExFTnE1YUxSekhHdXhzRmpRbzVseE9DX0l6aGdOdW1GWkhWQWl6c1lLVXNxNVNlejNMMElpRldyMTQ3UnM2bGlLdDBhUnRoQT09
Your post was automatically removed for not having a tag in the title (i.e. [R], [N], [P], or [D]). Please read [rule 3](https://www.reddit.com/r/MachineLearning/about/rules/). **The moderators will not respond to questions regarding this removal unless you suggest which rule you most likely broke.** If you have a beginner related question, visit /r/MLQuestions or /r/LearnMachineLearning. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/MachineLearning) if you have any questions or concerns.*
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtVm9xMV9kQVI4aDkzWGUybnlWRl80WjV3MU1QTTZaa0kwTGpvRkdLWnRYWmNTZWczVTVMdS1XaFNHTTM1OTFTRERtbXNUTXE1bmJQcU5peVgtclhvUmc9PQ==
Z0FBQUFBQm5meDJtRWR1UjNBWXh5ZHh5U0VuWks4NEJXUFMtVzRIN1RCVkdHZ25uc0ZKTnNwaVVMb3ZGblhobzFaSmxsSTR3MWFWU2RHRVRNOTFFOWFZOU5keVp2SGFKU2loaTBramptODFMQTk1MGtqVWZfaGFTTjlNQUxSN3l6MVdCRDJRQ3JjQ0stTEFyM1k0OHVvUk9FakdRcEJkWmhSMEZnRXR1dDdZaDY1RVZLeEV6RV9idV8xNGRDVGZqTVMwWDJXMF9yZklET1drZlpjOVJPSXMwR25xcDYtQnoyOUNOOVhpSV9jVEljSXJSdXd3QXhfcz0=
It's all good. I was just wondering how the speech synthesis worked. I was curious if it was a multimodal LLM.
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtUjRiSUR2LXljN1NQWmNIVzN1N2J5Q2NGcUkzcnRMMThJOUc4MVZBcm1xckMwNFVJaWtMcDJlS2txUFp2Tm9FeFREYUVnSWtwbGlVdnFkMHFaQ3ZaaFE9PQ==
Z0FBQUFBQm5meDJtekdUT0pwWEVac1pjMXdRdXB5VHowMnBkUUlfN1h4OWZBU0RVMTgtd3JwM1Fpc3cwRXpjQXRGWlJpS1hqM3o5dDFaWkVHcFE4ckVXUUJMNWJ6WEluYWZGNXVMRVN5N2NMRGRmUGdCbDZXY1FHcGlzSmpYSDlrblpocU84REI0Vlp5SnpwMTdpRldsSUdfSVBYZXJaUTkyWW9IQ2ItRFZqMzJhLXNLTWExNU5vbU9fenZnSHlrR1pkSkpIUG1rZDZFR2szU2hEdS1VM2Z2Z2JNYVhSaC1ibVhjeGNRUDFxYlMxaTZMdFI1dkJPOD0=
Great write up
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtWnVKRUJXMXowS29FMzdPcTJTVWVRRk1wRXRSWUptc3kxSFFleWFhMWJvVDdvSDlDRXlSbks1elFwWUhUU2w0bHV2VEtPMFhtemFNUi1WeW9QRkFZeVE9PQ==
Z0FBQUFBQm5meDJtYVVUUFpmbExFLXY2R2JRS2dQbGhIQ1gtVjg4WUVhbUVwcjZUSWpmSmhIWHFvak00eWduRWF4UXB1RzVSOG9IOE9WSS1yck1wUU8xeXg4SXpvVGlQYWxDV0NWbHJHR240WGVNM0M2VUZKQ0lIc2NaSm5VWEUtNFdvWnFqYzU1SGczQm1JalNnRS0yY01scWZ2bU03R1F3WHNpcjVrMkxxclJLdmJMQVB6cFE3UVVsbjBTdGpyRTRnT2xNbVZFMjVJNlNZMGtMU29aYy1FQlN1bkNYZElDWDJGQVBBT25JcXFUb3RYbG5QcTgyND0=
Hey everyone, I am a Product Designer. I've worked 3+ years with different startups. I have an AI idea which, after a lot of market study, research, and other resource scraping, I have expanded and spent a couple of days elaborating and clearly defining. I need help to: 1. Find a Technical Cofounder, or 2. Find anyone available to talk 20-30 mins to review the tech stack and validate the feasibility of this idea. Networking is difficult. I've been part of CofoundersLab and even StartHawq. Not sure if this is the right community to post in - maybe there are technical people here. Thank you for your time reading and supporting
r/artificial
post
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtU2ZmQzdYM1FQZVZkWHFkQzMyX0RhZWxMbEtCLW13YjBoLTZKMF9IRmxyRTEtWmQxdGpOMlc3a1dQUE5sUzNEVFNBb19aRllhYXdxQS1nek12aHVxSFE9PQ==
Z0FBQUFBQm5meDJtSTVaMkljdVFRMlNFWjZaTkl2YVNuZmQwd3phUEZqcW9HeTl1UEpHSXo3aUZsUkdBd2pjdHZoSFBudjBmZkY5WWhLYkx4VExqZlZ6VnM2Vk1PNWg2cWpBbVpzelc4UVNDbmxxZF9XQldDR3N0aFVGc1ZUWk9RUEdqR3dYdHVQNldWWGxBRUFJQmo5Zjk5RzZQcDFVMmtRQklvUzJVLTVGVHF5NlhrQTdfTmx5Ukhnc19IRU1ScDlPYXAzQ3QtQU51S0xjSlpqLWFCVC1KemZfWWk2dDhudz09
Has anyone received the decisions on oral vs poster? 
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtZXBYZjBTSXM1dEY1NEZHR085eUt2V1ZoUXg0cVBSUGxwYURhSWY4ZTBBNVpENF85RVJZZUJqQmN0Q0RBejNRdk9OSGpUQUkyamR2akViS1VySklRSzN6YXphbWZtMUZYWXZicXVVaEhvVVk9
Z0FBQUFBQm5meDJtRGp5UzZ6dUtPb2RiSmVSRHdNRjYtNy1hOFY2c3dlcHlPWmlVZW8wS1pydVVUNTdYZktfRXMybHBCemxMYUhCZ2ZmRjlCdjRyTnlJc3NrdHRBajhQV0FQYmktWktoOU1DNXNtWUQ0ZjhpeDZrSXpEb3VYQmRFME0xa0ozOUhCNlhsdzF2c1lERE51bzVaWjE0QjJ6SGR5MElNbXRyN1dxNVRteGVoMXVOZ0U1d0E1QS1wTXRLTDFuNEFESHk2YmdZ
There's a question you could ask in r/learnmachinelearning.
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtdzRTNnVUVVBlVkg5Y2ZwRHdDcExaa2txSW9JVUVzYk1ROUI2OVUwZ3JZVmhrV1JfSDRlT3FrUENtMVZyZEVUSFd4eC0wSDBJZTk4VTJXV25Xb3dkYUhTT3daMXVhQTUySVdGdkVvcUFmSkk9
Z0FBQUFBQm5meDJtTXJVNlVpdzRiTnZwT0lVRXd1Ym1GS1VfZ1F4QWxCYzdxYmd5ZERJSDN5R1JndzVOYlB4NnpBd09VaFZ0Wkg5LU1rTDNSeUNuTUc4ZGRzbFQ4RUM5XzlIelB6dkJmRGJ3QmVaam50QXlJN19qVlBrY2UtUy1mV2FTS3M2UDkxa09xVm1xcW9xbkgyNHNiSkFlM3c1R1htdTFXRnYyWlM2clhISzVzaGZHd2JfNmlYamxXcV9WeXlmMkZMREt0S2N3ak1mRzY0eVlmdkVLVVB2UGoxU1hvUT09
Just curious, I don't see many recent papers in NAS; it seems like it peaked in the late 2010s and early 2020s, but nowadays not many works come from those areas. As for venues, maybe Nature AI, Expert Systems with Applications, and Pattern Recognition might be good journals.
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtRVYySFh3aGhibDR4cElHV1VnSjZDcHlVd3hMUHNPWjZlUW5sM05pd1d2RTBmWnlKVThwQ2VxUzVFSEdzSFRHWFAzZWdLWWZpd2ZIR2VmOG9tdDY3SkE9PQ==
Z0FBQUFBQm5meDJtNWdqdnd4NmhEOWNQZWEyMWJ3VzBNTlU4T0tTY05kMzBkeFJqc3FQektYZTdtT29IY00yX05pUHRDT01LTG1lWDlCQ0YxSTZoa0RiSVlOcURaY3BOWXM2SDdXUWo5OGpQOHJLUno3Q0ZCMmE5MnRWR25xLW5sT2RJbEhaRDIzYXYxdFhoQllHdGdhazM3NGRDNDFpd1QtdDNIdW51ZXhWUXhwaFd2NEkwMTdZdmlhd2czRW11c2xDNThJUVk4bHhxTW0xOEFfNVJVWWlmZTZWRUJqYm5US1RoNVhNcU1xOHRBMVJuY1pvSk5jYz0=
https://aideadlin.es/?sub=ML,CV,CG,NLP,RO,SP,DM,AP,KR,HCI
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtT0ZHdzd0emNjTjlfdnE4MGZmbEppZWhWcVJ0VEY4eURPMzRQaWNaZUZrVWFqbTlQbE5VVm9OOGFfR2swZWJ5QWh3TmZPR2htNzRaaFY1TGlNMkxTLWFyTDRTYl9sNGw1ZmtTS0FMY29rMGc9
Z0FBQUFBQm5meDJtX1p3R3NpR0ZUcUtMSjdvNXBYVk4xR0VzQ3o1LWU2Y29xTUUtRUZvMmJNOFo1dWRKRERRMFVYVmVYRWw3MlB3S0x3WDU4eHQ5eVYyWXViRXVNbW9vdkhiUGk1SDdBMTlmbWdjTkJJX3d1bXBuZ0tPZDZOdGRrQ1F1d3Z2aWV2azZmYk9ZaVhuZzZBUXFwQW14Z2t6clVWaDZOelNfaXBHSVl5dmljclVoZ1dqQjNXWFBhRXJwZXlmQl9SdXEyMk12cWtZUGMwTzZxWThsSXhqYmRxTEFOSUd1dFd0NUlZUWFobTlYSUx5RUhSYz0=
Thanks!
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtYkxrUzVXeVVEYkx6aHhTOTRoc09pN0hpRF8ySzdiRUtib09HcVE1Y2hRMTkxVXVqbnotZUI1eWtPMmZnNmt5RE56bHFRSXBCV2h3T3NZQ2Q1eExZX1N5SS1waXE4N2c0RTlESkJyOUliQk09
Z0FBQUFBQm5meDJtNEFId0xzUVdxb0pTcnVfenQ4UkR1Q2RfbUZqamg4cXNsVXI1bkV4V2JBUHBFSFBFT3I4TTB5czZ2ZDNwQ3N0Q19ISU1aYWlraWszNmVyaklIcHFSaEwzamRtS3JyMExfRFZNQlhucjdLcF9lZHpuLUlGRU0xVTdYdjNxalJtSDdOMWtOdXB2WjlZdzY5OGl4Z2lVdmZvSlBHSWRtTXVvaDdFZXNFY0h4WW5KMGFIYlhNQ0lUS1oyQ1RfTGNXTnNMS1ZHQXFhZTlwNWpGSldlbUo1dXNwUVpuYmNLclBHZ21kbzA1b0xKUmRtWT0=
All women? Only 3 of them were women. 3 men. 4 nonbinary/androgynous.
r/artificial
comment
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtYk11OENrX2cxNHdjQ3A3bkk2cTFoRTBMeUc0aXZ6aFFHUW85enljbG1tT2E0YnI5NFVLOXRhdHlqZmpwU2ptdERvbGFWdGkwS1d2WTIzOXhMYVFCaUE9PQ==
Z0FBQUFBQm5meDJtNllPazNCNEJfb2lQcFhwcG9mNjhqQmxXODk3QVJjWlotNXMtUGNxUE5OYWdQdWRtVmFnU2NreXNpVzJjQVV6a21OUU9DTk5iQ09RcnhLSksxNDNVcmktQzYwaWJYRTVFZGJGdjhzMC1iS3UySkpFMF96MlZWYWFQN01fLWRmQ1ZsQm5HM1hHTzNVUWZoSElJOGZ2NEN1MlpUUlUxbzZ5YjJrdVdyaDFhQVlHNlJadUlfT09OOTAwRnFJOUl2QUU2cURfMlZFN0FOVFc3TGw5M1NzaHFpUT09
Read his comment mate before typing a comment like this lmao
r/artificial
comment
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtTEx5dHQ2bXZTTU5jT1YycUNZSHR0SFJ0dV9HazRhd0xoYTBKMVpHT2FLbVJtRWhKQmpIYjBCN0JMN0tiRUN1QkFWMUJaWjRRUFhDckJQQXlyRWMwa1hUNDVweGJTMmMzZUNqNFN2ZmxmbVU9
Z0FBQUFBQm5meDJtU2g1VVFNX0xKWHY2OGtXMlhiamR4ZGNiUld5ejhmSlBWb3pOQWxZZ1pHeUx5ZTZDcVdLYVZKUDNqQV9TVkhXZldEc3JPNDY3U3ZGdHhYOEkwd0xfSHRYSG96T3BITTY3VDZ4TnFPNHM4bTN2Njd6aDJZbkk0VXlQTXpUcDBtdDU5OTBlSTlMTjl4T1V4RVF4Zmd0UFZ3Sm5KeTd3NUVTWUZCZDFOS0dKb1l5Sk1kQ1MzSkpHQ1doMWUwTDdKa0FtQURjVGUwQVQ4N0pwN0EzM2d1ZlZZZz09
It's the extremes of a very real problem: not even humans agree on what is ethical; how can you ensure a powerful AI behaves ethically?
r/artificial
comment
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtRmdpRFdxYXVyMnBPckpUTmt3SUZxTW1WNUtScHlNNmx4RjR6TElOeHF3M1ZmU0tQNnZZbm5abzdTQUtqeTMzVDExWjJvdXFGSUVsNE02cTN5bWR4V2c9PQ==
Z0FBQUFBQm5meDJtRHYtY1ViOFhSbUVaVkJ2M0cyRUtSVU9iYXVybG12ZktfQ0ZFSmFzUHAxODd1NjJNY0trbGRNTTdJcXQ0akJlZldvWjBtcGVFV29jeVJzOVpINlNNYXFzbS1sOG5KWXZwU201bnpSTERlYnZGbE5DRmVGOFIxNElrZnpfZlRjQnJQY3hQUUQxdFNhdVU5aUVfWHltclRTS1V3T0ItZWtvN055ZkphQkRvTGZvb0tpQ0EwZ2l6bGJKTnpUcEFQSVltZ1ZhcG9wZVZoeU1SOS1YWDJpRWxWUT09
I would love to see the training data that generated this, it's pretty funny.
r/artificial
comment
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtWTl1Tk54bV9pMG9iTVdET0VmRmNxMGhmMkoyZ1VOazhJTWp1R1ZpNE14dVMwNWJieTJBLXFMenkwOE53RFRJSVlnSDR1MkxGam96QWdKbDZqZE15VWc9PQ==
Z0FBQUFBQm5meDJtbVNQQW9QUlMtbEpGUzhWb1h2dk9meVE1ckpHbU01WGpTMmh5c19iTXVEbDZSRHNvTkVWY2dCTjdyekFwZkhIMEU4eVRROElvaS1XQVducVY4TzRnRFFmcmttQ0VXazQ4Ti15eWMzc3lZc05fUVU5ekM1ckRDYWdPc0ZtSDVTZ291NWhSdU1WV05Zb1NQTEZPeWI1Q3NOaEM0ZG5DSS1NT2JzeFlCZ2JuYzZCbEFLSTlNWThxdDJ6X3JWWHc3ZVhhdzlvTjFXMlpxaElWVEsxeWhRT1B5dz09
"Something went wrong" on every model, looks like it's busted now.
r/artificial
comment
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtcDNpVGZ4S2szY1pvdFZ0cW9pY2Z2R3FBZ1hydE5BUmJkNHZ0VklIMUU4a255S0NTbWRMQnRRTkNRdUtfMElmVW1xb1pROHBnRzZGUVZ5Z1NGdnBKbHc9PQ==
Z0FBQUFBQm5meDJtaFlObk8tME9PT3dndGxQZlktOUwwZWhON2FCdU9Na2lkcUxzdWltMGpIN1puVlJQM21odFU5MV9GbFQ5RTI1R2ZWVVRRN00zenZuMVJqNHFSRGZpejYydWFES2xGS2dNeENpazFMelV4c2kxMlNBX2lGdGNKNE9LSm1JSWE5eElXcjU2RVFxWE42bEFXUXcyUDBCOTBrcW9obnYtSEh1WWIxclVyekZKY3BEZ05KRDMxRkRocG9JczVhUWJFVUVxU0tISnZta2dRenp5OFlnUTI4Ums5UT09
This was fascinating. I definitely understand your perspective, I might even be a little persuaded by it.... I'm maybe a little too partial to the magic of heuristical compression to fully believe that the effective illusion model of AGI isn't astonishingly useful, but Metric 2 in particular is a really interesting point. Thanks for the chat
r/artificial
comment
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtMzhqa21TOXFfWllXQlkwR0xqTTR5WmxKS1diWHpoaGZlNWg3djJ3RHpyTy1kN2ZwUG1fZ2xzRUVLMGVPMW45bkVqcE9ySWkwS1pIVHY3YWdiWU55YWc9PQ==
Z0FBQUFBQm5meDJtLUxkbjlpWWpscF9uc2duQ2FWWF9oXzhsUmhXWUNOdHk4eFRHRll0NjhheGJfNURKLU43VkRmTmpOWXRhNElTcTJ6Nng5M1dTZ0NhSEFTem1mczBNbFROUDd3SEZ1bXhmYzVKak40WkR1NVVkWUwxUHRWSDBBLUtTMGRTQ0hIcmlPZ0hvOXA3ZGpCVklRMXBXRzlrdmtiMHB1Q1FpZld4X1U1cmFnVE52S215ZnZ3cFMteXljRHJMbjRPb3RGeVk4WlRrc093ZE5IdzlqXzBqRjN5bTQwZz09
Chatgpt bad at planning Cybertruck explosions.
r/artificial
comment
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtMV9RaWxqS1drUWVHeEFPQURLUVJsRVA4eDNPVHJrSjBnWUxFMWFmUzFVMmRuNnF6bUFpam94V2k4UVFoVGJHRnpXVWt1WUVBSHlWWkVUQzN6SHRtY0E9PQ==
Z0FBQUFBQm5meDJtOTdXaEVPUE91Z2tHTkpCUktyZjBBZUFnSXdRMmZSWHpOTjhoWTdCZ01XY1lobE1IVWlkQmJndlA2dURBVVdHVDlmS3hvYmVjY05zUU1saEQtNG9Ycm1ISzB4VHJGNnRPZW4tRGpyNHBKU2NyUUZPS3JtaGJsV1Q1aXBhZUhMcW9ISGJlYjdIWWxObUpHVjdfLURBMFNiNzcwWGF6Q1ZpaEpXSUJqbHREWjF6SVlpa2VheW9DOTFwQlBtSTAxa1lvcWE1MHdHSDk3WURrck5uS3hOcGF2Zz09
Didn't you know that already?
r/artificial
comment
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtVFNCV1VucTBPTGpRUE1LUHVKY3dyeGt6WGRrcVl4V0NnQW5BeFVyZmY1bWZFOFlNZ2ZTbWpidlVMb0JsbE05cnlIUlRydXk0dm5RMjhETkJING9rdGc9PQ==
Z0FBQUFBQm5meDJtSUxlNjFldDNuYmFOUDBwVElvUl9HMjdIU2txejBIN0hPSm1BWE9rdnUtS0NLbzRubDRRbDI5WGxVMF81WHZfMVZCbDl3MGg5TnZkQnRoYUI5UlltUk1MNzZla2RiSWN6VnhHZW1CYjZCVXV3Q0VQUlBoa2l2ZURQd2o4US1zNXlQbnVIRldXX3E4X0lUVWVXRmUyNnlPQ2RwaC1yNTRpaXk5WnlRMGd0LXZBaHI4YVROOF9oMEtmLXFmWkZtN0F6ZklQNzdXbmY1WTRBaEMwVEtTbTJWZz09
Well making things easier is a problem when the easier thing is crime. Having to deal with one criminal instead of a million changes everything.
r/artificial
comment
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtMlI5RzFMaEg4NG5NR1UtODB4czBZQkh5Qzl5VjA2UTktTUw2am5rM2k2azFwZVRqWmRnNjJ3bkplUmh5M0ZVeGtiSkRiOG1CSUVZZWlhNkZ2dFgxd1E9PQ==
Z0FBQUFBQm5meDJtYnNUMExLWHMtclQ4bDFscE45ZWY2Rk01bThUOTFfUXc1d05FLWdiNjhENXhnaldSbzc5eXJBbEpNb1NkM29RRWpoNjBsN1F4bVk5eUxKd0U5Sm9CS1lLYzc3a3pSNHZIMFFRS0loVUUxX1FhM1VvZVZwbG53WERrZzc4RVQySldGd2FIMlVMLUROeHRTRVRaWlBDWTJDbDFYOURTYk4tdGZ1THQxNk1Rbi1hTUprNW1YLVpZSUxhRm1oeW1TYWdfQ2JIaXlJMTdQRzNiUnRHS2kyUnNuQT09
Your post was automatically removed for not having a tag in the title (i.e. [R], [N], [P], or [D]). Please read [rule 3](https://www.reddit.com/r/MachineLearning/about/rules/). **The moderators will not respond to questions regarding this removal unless you suggest which rule you most likely broke.** If you have a beginner related question, visit /r/MLQuestions or /r/LearnMachineLearning. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/MachineLearning) if you have any questions or concerns.*
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtWlFIbVllZkc1YVhLVF9EcFRnR1lTNWFIajJwT0laWWEyVDFGZGJzangzMmd6ZjJaWGNJN09VZGU3dXJvTG14WTNwallEMFlOX05XTmU0ZmhHckNRYmc9PQ==
Z0FBQUFBQm5meDJtNGs2XzFUYTVfSGtfLVo5N3dEbGpmOUV5NTJpR21GcnhuLXlhUTYzd2NQbmd0VmFzX1FnWVRDUFNoZDF6UDk0bmlrWWw0aU5tWGhPcGdmWVlkTXlUX24xU01WZVpkeDZPb083RHdZNi1sRktfakp2SkFnVEZlZnlLeXY4SEpBTmoxYW04blVlbUU1Z1J3RFRjRXVtMXNYanVVUnB4dkFwbkEzNU1NU0Yyc2E2SVNWa3hQaDlZWlJQNVFZaTNHMzJXLUZYUWpwQllpLWhtN2ZmREhmTFR0QT09
It's a program. Running on a computer. It's not an oil company dumping waste water in someone's backyard.
r/artificial
comment
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtZFdPcnFCcGk4VTlQYUV3MjNjUjB2LU55Z3ZmclRheC1xQk84RW5mbWhBVFhjVEhLcWh1SlFsUFNwbUU5M2Z6Ri1hZ1lXbzEtUlNsTFhSNDItcXZtQWc9PQ==
Z0FBQUFBQm5meDJtUmlzc3l4a21STEYwZk0xSVphMmwxSjVVUkZwQ2d1LVI0Rjh1VUpiY05qN2VQakZreVBseWdVc0paRHFwSDJzczZFTUhRQ080SHVzM19kU01veU92Vk5mODZuanZJZ3RrUURHdk9vbE9pOE5PanBKZDAtZkIwcHdEWW5sSUtlTlppbGc0UXVIeFliY3BELWUzWGhwUzZkc0k5blFxc1BsUmpsR2RKRk5VeVRlRFB1YVJXQ19zQXRHQWdEZFFqVWZFYTVlMTdfVnBvS2x1bmRQMGtEQlljUT09
Set your goals low so you don’t disappoint yourself lol
r/artificial
comment
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtWW1YQ0s4SHhTVTNpZzNISVU2RnNnZnZxV29Pd2JiSkhuMTdHaFNfSVN3ZlBweHN0ZFZ1ZlpOU2dycnhJdzFyRG5Nb0djbkxyVHdrcUZWTW5sWm11eXc9PQ==
Z0FBQUFBQm5meDJtX3VzeGxYZlhmTHU0WTFkTG1WdDlwSlpQYWJuaDdCdXlDVHBmNWUwbjhJWTAyVmNGamxxRkcybnoxdUNWak0yN0lOaUloN044akZZQklQNzJZcDRfalVPNVFWbms1QVpzcjlLelZabnZWQWhSVGJPZ20zWE1sTzBLbUNyd3Z4Qk51MEhaUEFxdzJMWTRrWldzV0Z4T0NlMjFuLVBPejM0YWJCbS1oWE1fUXdVZUpRLU14VGRlNjVoTlJfQjA0Y0pGVkVEZ21yanl3UGN6VmwtS196VnpxQT09
Man used keyboard to type in words. Ban keyboards and words.
r/artificial
comment
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtUEJFX2hwd1piR3F6ZllESWhEYzROWHlYNk14Z0VoamlZUWtVMGlreHpLbGpxbDFQNGM3aUpEZFZjbEJTYkMzQlI5WGZGbjZJVjFHX0ZZUEFQNmt4TWc9PQ==
Z0FBQUFBQm5meDJtVGd3aTJwM2UwQUlsS0N6Y0xXZ3NIUmhRMUYtMmxQSlZaZllla1ROVGpjdGxnX216MmFsX3Vhcm1tT1BhQlF4Ym16a1RhWGJUZGRmanFFY1diYUYxT2EySWxjMVBzOF9aREhpaFlQMkxMYW5aRVlEbmJMcWR2S3BMUUVhSkJBQ1FpdHFVbjdmcmtjX29fQVBaSkJJUUZhLWdadFloNEZOLURDbUo5UmdpNDV1UjZodDdBX05mVTJWVnYwSlNWdUs5eHNLTF9Fcm0xWHpVMV9GcnBIUlhDdz09
We have core capture safety mechanisms because of Chernobyl. We are going to make mistakes with AI. It's not a bad thing to screw up.
r/artificial
comment
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtYjU2OUp5RE1YMXZSak9ES204d0pGUGVxOXVMTEM0U0phcFBFY3Q0S3dldzRONHlFMm16U0V5amtYUTFrWkhzX0xyVGhzdkZvWFVoV2tuZzZBd0ZVMmc9PQ==
Z0FBQUFBQm5meDJtQWRlUjIwZ2VLcmNoV3o5Z3puUHhmeU5aWjRuMUFtdHNqWUJlc2RBejA4LVB6cDhXaFB5RG5SOEZOSUVnN0pBTEpGWmZaeVNqRlJBZEVMY2ZNMUF4V3ZfU2ozTmJSUEZVYlV1TTQ1Wm50X2RmUDBvd0pJenRBdUZ6QVFYTF9wdkJzenlBREpaSUVzZVp3WlZjb2M4TVJYVUY0ZXlxdTJiMUNpRWNpSXNCbFJWQV9Cc3p4YjltMnlQdG40LTlma0J6YWswXzdGVTN5S295OTBTbGFSOTNfdz09
We didn’t have much luck when we spun it up with a user-facing RAG use case. We had much better HTTP performance using vLLM. We also didn’t really need to serve a ton of LoRAs, and the performance was too poor even with one LoRA for us to bother expanding out
r/machinelearning
comment
r/MachineLearning
2025-01-08
Z0FBQUFBQm5meDJtYnhzQkU4WkYxSkNCNDlzRnY2OU5qUVlScmR2aUJFdG93cGx3bkZjQ2JhSkpjSGV1aFZVQ0Y4T1hlNVlaWTJHUW1PZ3JxbndTZFVxV09hVW5SVDByeWdMTmpOejFHb1FqaDRzRzIyX3BmTUk9
Z0FBQUFBQm5meDJtUVU0R3c0b2k3WGhhWkV4Rjh4aGEyTkpuWDlMNllNY1JtdTU1MzFXVXV3eWdGNG9CVGRzb29wN0ZlWFBzOTNoaXFOQTBYcjNEMlh0b091cTBuZEFUQWRua19KWHI0b21PYXdsYjNNTjFLcVdaNlZHMmZXdmpnTGt6QmROZ3Njb1lneDlaclpjOHpDU3FOemtPaDlXNlVFX25lZlNqX1ZzU2VpNzBMVDU4LWFtYkpwbjhkY2MwUlQyVWl1Mkp4SHVqR3hpTVp3d2hlQTJMOUg5UE1LTVlydz09
Hadn't really thought about it, TBH. I suppose I should spread my planned gravitronic \*\*\*\* design work over several AI services.
r/artificial
comment
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtVktNZmJSU1hXTW9mb3pmZXdUSXJhRkhmVUt2UE9WeWRrZHl6N3N2Z3d4OUh0Zng5ZXJJM2lDaVg4SHlPV0ZTQm9zNjA2ZWRmUGhxOVpqRzExblQwT2c9PQ==
Z0FBQUFBQm5meDJtX3pDT2RleDc4MmtjVV9zVVpNRE5rVXdUaUN6MHAzOVdyeDdNaXI1Z0EydDVVN2RUNnlKaU5vQ3p3eUk3cE5QSGJEQVU4a1N0TkFqOWR4Nk1MVFpjNHNXcThlOGgwNlgxT0xTTEY1dmp3dzB0c0E3dHl4UVFnYzdVVkJsZU1OM0JRWVVqZlNjVkFlRHNLVnk5N3hGaktmZkJSWE53SmtlUTVfc2tWb3E2NnBacGl1N2RfcFViZDlReHlFZm1GYjhXSVVKTmpST0ZDalVTemNQTHp3SU1qUT09
So the other 75% is lying about it.
r/artificial
comment
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtLUxGMnJKUkNtS09aclpqWi1vYUhLMUNPVHpzaEpyX0hJdm9VdW5zR055WEFwOE1KRkhFdVMzQTgwZFJOaG9rbkxpZkVMUEsxVkx2WHpPTUJpY1NCY2c9PQ==
Z0FBQUFBQm5meDJtV0JmSnBGTERabm16c0Yxc1hUNU1lRXdrUk5CU28yOTBGYWhMSk9fbzdPMFVnWXJ1aWxEel9fbGpfUXQ4OHZWSVFIQ3JtTTV2UGtGaGE4NklvS1dmOEZXSi14Rk9tT1g2a2lVUXd1aDV1eXl1T2ZCT2RRRVlWY2doX0pTMklQRDczbm9zX2VwSFhpWWQ2amljX2N1cElBTTNrWE9WNU5WRFJfWGxkQmtTaUhhQlFTdUEzV1Z0N3I2VXNYTTdyTUFqYUw1M09vWVRORzVxVExZOEdFU1JyUT09
Totally agree, a lot of AI tools feel like gimmicks. One tool that stands out with real business value is Cosmio.io. It’s like having an AI worker you can easily automate your workflow, from collecting customer feedback to managing tasks. It integrates seamlessly with apps like Slack or email, making it super easy to streamline processes, generate leads, and improve support all while saving tons of time and boosting productivity.
r/artificial
comment
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtcG5aSVZHcGJMUkE2eDFxMmF2RE8zNklxQ2RUOTVWOTJLZnAwVVRGWVJZNDhOdnpMNW5aZ3VPbTRhR1A4enp3U08xb2cwXzR5MGU4eG8yU2lYclBOcWc9PQ==
Z0FBQUFBQm5meDJtMzFCeEFjQkE4ZEl2N2lQam5Fc0JvVHd1eW1CWUpUZzFpYldBSFNHTTlkLS1pVWxTNlZ1dVB3MFRxMzIyR3lKTzVNMHJTd2xHZUg2Vmx3TklUMTR3TDhLNHFtTnVydkNWUmRCZUd4NDhXR2FEejdibHZicVllSHk0elZ1c3hUcUdiT25MbXcwbVd3amxEaEhMSDA1MnI3SVJ5bjBqaHdWbW5nenRwcHZOdlljeW0wODRhMmgweXczeURTNUlEUGdjSzA0cGRJM3ZlTGpTc1FWSVpmVm1oZz09
...what comment?
r/artificial
comment
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtVDJLUGpOc1I3M2g1U25CMG4wY2JYTmotaF9VSUwxdS1ZVV9QMHVKcEVDRkJadXZBNjN0Y0tmajZ1dlBfeVA1R3BSMlVaTWNBcWprdG5FM3ZSRk5nLVE9PQ==
Z0FBQUFBQm5meDJtT3BrcXhIOGNkbm1neU1yeGt4RWszbzM1RFZidFNWbFg1akM5U0d4bDlqQmwzU3dWQ215SENWRDRrZFZwY3NBcUtwckV5RG8zNzdBRDU5QVk4cEpjTm1YTHE0UjhwQ0t6TnVTREdCUXpQWk5xUEg1V21rQXNDbkpPYllNZnpKbzJ5ckpjczc4VExoUDRlUGNHT1VWRkw5dkdKZ0Zia0lIUWxiZnBKbm9Ha09EazJVNThDY3lfYW9XZk9qLVBPd3V2Qk1WeWY0UGFIWWZLLWtvaDN6QnJxdz09
It wasn’t an attack really. He was a huge racist trump maga dude who, in his own words, just wanted to make a statement so that people would “wake up”
r/artificial
comment
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtUFRMcngwcGxKbThPOUxKdmxDWlJkeTF1R1ROUUdGSnVFZmxXbjMtb04tN1NHejBaNWdFNVNxQnlmSDVYbUFuc0tZUDl1UUdLb0gwM0kyY1VfdHBGbVE9PQ==
Z0FBQUFBQm5meDJtR05XRkkyT0s0UDlMenlUbWxiWlROLTdQdkV3dDIwNVdRRDdPMWl2VXdBOEs1ekVXVml4M1djdC1xdjNyZUVWN085UUVXMnB0ZEVMTHA4MXBkdVk2Tkx2VVZocVowbDlDYUFwOE4yM0pabktGUER4d25aNWJodWYtMjU3MmpzZGtsU0czeXlVSC1JekZvNllhYjRESEdpUEFfWDQtbE44SHpKQ29xRzJJc2M2RWE1bGhwUmQxZ3pRelU2TE5Wc3BwUlVuYkJfbXRHWVZGanRzOHIwcjI3dz09
I don’t need experience in AI to demand more transparency and guardrails, sorry.
r/artificial
comment
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtRm9zOVV1VUhVRGh2N2sySUtQUTF5MlRaSXAtQWJmRElJWW1UM2NPcktNRENnSDhCbGFpaXVCbUtlNmFVcUVzX2h4VkhQdmwwcWt4R0hRMUh5TnExTDJKNHpVSHBFLS0yNFVqVXh6Nl90SW89
Z0FBQUFBQm5meDJtbkpiWVRoS2hpcVIxd0RZVXo2amJIcldMMTdJQkIyU25zY3BPQTYzYy1kSkw0R3ExU1pZdlVhdU1DZ2tMdHc5Q0RSZWRzZDU1Rlc3c0lIZkhncFl1WkYyblh6TUUwZDlmZ0dmZ2ZBOFpqdGRObVVUbEdmVzZ3Q2lHM3BTdVBkY3FrNGZoM2FsQW9ZbHpadDAtQ1YwMnpNU21haEhvMjcyVGNvTjdkeGdsT3puZ3RlVEhFR2h2WkNqQ2hTWDdkemV5akQxTmlqRnE2bk9ETW1mZjg4TVJIQT09
Yeah it’s a good thing we have a well segmented society, where the virtual world has little to no impact over the physical. Should be simple enough to decouple the two instantaneously in a catastrophic AI scenario. Also a good thing we underfunded schools.
r/artificial
comment
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtckJHZkNHX01fSEZpaTF2VWtFV3lzb0RzcmdndWFKV01sRExIUmpaVnpoODgwUm5JR0FOOFJKeFl2ajhUYU83aDJsa2loTl9zUVJDbmZtWmlnazhVSzItY0U3UDF3QWt0cWdzOWJxM2owakU9
Z0FBQUFBQm5meDJtOHd2NEYyeG5TRkF3emdZdG4xR0w0YmdCOUQ0UW5yYm90MlFYZDBTY3kyOFIzMEVNLWVRYWlaTHhRU1hXN2wxQTE3TVA1LWQzbDRRZzVYdVAwMkR2WHJmM0JHQV9ocWprTWhSUUg5QWVDdkJNMHVLbUNjbjFyY2ZIeTMzeTlFS0FTUkRzVzJueGhZQ2hadlgzRzYweXp0bG1nelpHRGVlSnZYa0xjeTJMa19VaUU5Y2xSSGljZW1iakRNWi1GS2QyZHVpQTRwZDlPR01qa2NQeVAweE1zdz09
But aren’t they the experts?
r/artificial
comment
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtQXVkdzBCTnV5TFlXQmdQaGY1VmotTlhCeDB5R0VpVC1zdktUMy1TSnIyYzVtLU5VTDNPUkh1RVRiYnMzdThWRDVfSzg1UnNKOUp6YjJvaWNqODNTUkE9PQ==
Z0FBQUFBQm5meDJtZ2tTMFpKSFgwVGU2aGZnb2Z6b0dQQnlXYThkeUtISkE1ZnhjZ2FudGhEQ0VBTkFUUU1FVTJSYTJHRUt5Mi1xbWtpVC1zbHFaOFlieGdnRjNuUi1lTW5pRjdKRlpkRGhFcjhDSVBIaGZLeDlaRkM5X2RWdTY4UmFGLXpWRlRQdWNVN2NJMGVFajdRU2g3aTljb04waGRVTWtiTkpKalgtcXdmWnJ1UVFpMmZIRm9RQVNjSlZpSHlYNVY0R1NlZEtVZEtsSm83VC01ZEZlREJrNjh5cE5Ydz09
The key is to use first principles. What is possible, not ‘what has been done before’ as that is constraining your thinking. Same with how you’re saying we don’t have AGI yet. You need to think forward, not backward. What possibilities are enabled once certain milestones are hit.
r/artificial
comment
r/artificial
2025-01-08
Z0FBQUFBQm5meDJtVDJhX1J5NGxZYWhNeW5zT2JQbkN5eGFVOGdqTFluaC0zUVdReUpOZ09OSGloNVZqUkJVMzBfUTN2dGYycEE3QmhWcWp1MFVqenc3cFI5TUg1WTgyaFE9PQ==
Z0FBQUFBQm5meDJtemN5UlBhczlWN01BZkFaMjcwcWFsb0RSdHpLTHFtZ3ZXMmJyZ1pWVHRoQlIwS3N5OHhCa1NYa0dPS1VEQkptZmxIQWZfUl9WN0VCMUVESjJnQ0FXbU1FQ1BPOVRSRl9Bbk1RRWc0a2ZLNWpINjcxMW1GZzNvTDEwZzN1R0VRSjZnNm16enlOWmh4blJSMXhXOW5KZFpURS1wS3lOM1JOUVdOS092OFNxX1Zja3lLd01IU3BSc0paa3RNaTk3cjNyMXJGWXpJbGZKNHNad2dYZ3JfUHY3Zz09