prompt (string, 16 to 15.6k chars) | completion (string, 4 to 6 chars)
---|---
Title: As far as I can tell it's an exact copy of the behavior they exhibited last year and discussed in this thread: <a href="https://news.ycombinator.com/item?id=30167865">https://news.ycombinator.com/item?id=30167865</a><p>Personally when I turn off my Wi-Fi I want it to be off, period.
Upvote: | 40 |
Title: TL;DR (summarised by ChatGPT) - I'm experiencing increased productivity and independence with ChatGPT but grappling with challenges such as lack of work-life boundaries and overwhelming information, leading to stress and burnout.<p>Long story...<p>I have been using ChatGPT for a while, and moved to the Plus subscription for their GPT-4 model, which, I must say, is quite good.<p>1. ChatGPT makes us very productive. Personally, in my early 40s, I feel my brain is back in its 20s.<p>2. I no longer feel the need to hire juniors. This is a short-term positive and maybe a long-term negative.
[[EDIT: I may have implied a wrong meaning. To clarify - nobody's being let go yet because of ChatGPT. It is just raising the bar higher and higher. What took me years to learn, this thing can do already and much more. And I cannot predict the financial future of OpenAI or the markets in general.]]<p>A lot of stuff I used to delegate to fellow humans is now being delegated to ChatGPT. And I can get the results immediately and at any time I want. I agree that it cannot operate on its own. I still need to review and correct things. I have to do that even when working with other humans. The only difference is that I can start trusting a human to improve, but I cannot expect ChatGPT to do so. Not that it is incapable, but because it is restricted by OpenAI.<p>And I have gotten better at using it. Calling myself a prompt-engineer sounds weird.<p>With all the good, I am now experiencing the cons, stress and burnout:<p>1. Humans work 9-5 (or some schedule), but ChatGPT is available always and works instantly. Now, when I have some idea I want to try out - I start working on it immediately with the help of AI. Earlier I just used to put a note in the todo-list and stash it for the next day.<p>2. The outputs with ChatGPT are so fast that my "review load" is too high. At times it feels like we are working for ChatGPT and not the other way around.<p>3. ChatGPT has the habit of throwing new knowledge back at you. Google does that too, but this feels 10x of Google. Sometimes it is overwhelming. The good thing is we learn a lot; the bad thing is that it often slows down our decision making.<p>4. I tried to set a schedule for using it - but when everybody has access to this tech, I have a genuine fear of missing out.<p>5. I have zero doubt that AI is setting the bar high, and it is going to take away a ton of average-joe desk jobs. GPT-4 itself is quite capable and organisations are yet to embrace it.<p>And not least, it makes me worry - what lies ahead with future models. I am not a layman when it comes to AI/ML - I worked with it up until the past few years, in the pre-GPT era.<p>Has anybody experienced these issues? And how do you deal with them?<p>* I could not resist asking ChatGPT the above - a couple of strategies it suggested were to "Seek Support from Others" and "Participating in discussions or groups focused on ethical AI". *
Upvote: | 163 |
Title: Hi HN community! We want to share AI-town, a deployable starter kit for building and customizing your own version of AI simulation - a virtual town where AI characters live, chat and socialize.<p>Inspired by great work from the Stanford Generative Agent paper (<a href="https://arxiv.org/abs/2304.03442" rel="nofollow noreferrer">https://arxiv.org/abs/2304.03442</a>).<p>A few features:
- Includes a convex.dev backed server-side game engine that handles global state
- Multiplayer ready. Deployment ready
- 100% Typescript
- Easily customizable. You can fork it, change character memories, add new sprites/tiles and you have a custom AI simulation<p>The goal is to democratize building your own simulation environment with AI agents. Would love to see the community build more complex interactions on top of this. Let us know what you think!<p>Demo: <a href="https://www.convex.dev/ai-town" rel="nofollow noreferrer">https://www.convex.dev/ai-town</a><p>I made a world Cat Town to demonstrate how to customize AI town. Using C(h)atGPT :)<p>Demo: <a href="https://cat-town.fly.dev/" rel="nofollow noreferrer">https://cat-town.fly.dev/</a>
Code: <a href="https://github.com/ykhli/cat-town">https://github.com/ykhli/cat-town</a>
Upvote: | 429 |
Title: Go to Twitter and click on a link going to any url on "NYTimes.com" or "threads.net" and you'll see about a ~5 second delay before t.co forwards you to the right address.<p>Twitter won't ban domains they don't like but will waste your time if you visit them.<p>I've been tracking the NYT delay ever since it was added (8/4, roughly noon Pacific time), and the delay is so consistent it's obviously deliberate.
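As a rough way to reproduce the measurement described above, here is a minimal sketch (not the poster's tooling): it times how long t.co takes to answer before issuing its redirect. The t.co URL is a placeholder, and the snippet assumes the `requests` library.
```
import time
import requests

def redirect_delay(short_url: str) -> float:
    """Time how long the shortener takes to respond, without following the redirect."""
    start = time.monotonic()
    resp = requests.get(short_url, allow_redirects=False, timeout=30)
    elapsed = time.monotonic() - start
    print(resp.status_code, resp.headers.get("location"), f"{elapsed:.2f}s")
    return elapsed

# Placeholder link; substitute a real t.co URL that points at nytimes.com to test.
redirect_delay("https://t.co/EXAMPLE")
```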
Upvote: | 749 |
Title: I started this project many years ago when I was coming up with ideas for immutable data structures for the PHP data structures extension [1]. I wanted to support access by position in a sorted set, which led to the idea of using binary search trees for both lists and sets. However, I did not expect the scope of this project to increase as much as it did.<p>The more I read about binary search trees, the more I thought about them, and so down the rabbit hole I went.
When you read everything you can find about a topic for several years, eventually you develop a deep enough understanding to compose new ideas from different sources.<p>This project is the result of many rejected ideas and countless experiments.
I tried my best to distill everything I consider important enough to keep, and will continue to develop other ideas as they come along.
I had no intention to implement anything other than weight-balanced strategies to support positional access, but when I read about rank-balanced trees I knew I had to take on the challenge to implement them.<p>I've been in contact with various authors along the way, specifically Bob Tarjan and Salvador Roura, but have otherwise not received any feedback yet.
Implementing all the algorithms was incredibly hard work, by far the hardest work I've ever done, so I hope that others may find value in their presentation.<p>There is still so much work that could be done, but there comes a time when working on something alone begins to yield diminishing returns. I hope to continue this project as an open-source collaborative effort, so please feel free to ask questions or suggest changes, however small.<p>[1] <a href="https://github.com/php-ds/ext-ds">https://github.com/php-ds/ext-ds</a>
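For readers who haven't seen the positional-access trick mentioned above, here is a generic sketch (not this project's code): storing subtree sizes in each node turns "give me the k-th smallest element" into a single walk down the tree.
```
class Node:
    """BST node augmented with its subtree size (the weight-balanced bookkeeping)."""
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right
        self.size = 1 + size(left) + size(right)

def size(node):
    return node.size if node else 0

def select(node, k):
    """Return the k-th smallest key (0-based) in the subtree rooted at node."""
    while node:
        left_size = size(node.left)
        if k < left_size:
            node = node.left
        elif k == left_size:
            return node.key
        else:
            k -= left_size + 1
            node = node.right
    raise IndexError("position out of range")

# Small hand-built tree over the sorted set {1, 3, 5, 7, 9}
root = Node(5, Node(3, Node(1)), Node(9, Node(7)))
assert select(root, 0) == 1
assert select(root, 3) == 7
```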
Upvote: | 122 |
Title: I consider myself to be a pretty good prompter. I've been using LLMs for a long time now. Most of the time I manage to get the desired results out of LLM models. Do you think this skill is useful anywhere?<p>So far it has saved me some time at work, but I don't think prompting will be at all relevant in the near future. People can and will build models that follow the same mode of thought.
Upvote: | 77 |
Title: Servicer is a CLI to create and manage services on systemd. I have used pm2 in production and find it easy to use. However, a lot of its functionality is specific to node.js, and I would prefer not to run my rust server as a fork of a node process. Systemd, on the other hand, has most of the things I need, but I found it cumbersome to use. There are a bunch of different commands and configurations - the .service file, systemctl to view status, journald to view logs - which make systemd more complex to set up. I had to google for a template and commands every time.<p>Servicer abstracts this setup behind an easy-to-use CLI; for instance, you can use `ser create index.js --interpreter node --enable --start` to create a `.service` file, enable it on boot and start it. Servicer will also help if you wish to write your own custom `.service` files. Run `ser edit foo --editor vi` to create a service file in Vim. Servicer will provide a starting template so you don't need to google it. There are additional utilities like `ser which index.js` to view the path of the service and unit file.<p>```
Paths for index.js.ser.service:
+--------------+-----------------------------------------------------------+
| name | path |
+--------------+-----------------------------------------------------------+
| Service file | /etc/systemd/system/index.js.ser.service |
+--------------+-----------------------------------------------------------+
| Unit file | /org/freedesktop/systemd1/unit/index_2ejs_2eser_2eservice |
+--------------+-----------------------------------------------------------+
```<p>Servicer is daemonless and does not run in the background. It simply sets up systemd and gets out of the way. There are no forked services; everything is set up natively on systemd. You don't need to worry about resource consumption, or about servicer going down and taking your app down with it.<p>Do give it a spin and review the codebase. The code is open source and MIT licensed - <a href="https://github.com/servicer-labs/servicer">https://github.com/servicer-labs/servicer</a>
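For readers unfamiliar with what is being abstracted here, below is a rough sketch (not servicer's actual code) of the manual systemd steps that a command like `ser create index.js --interpreter node --enable --start` replaces. The unit name, app path and node location are made-up examples, and writing to /etc requires root.
```
import subprocess
from pathlib import Path

# Hypothetical app path and unit name, for illustration only.
UNIT_FILE = Path("/etc/systemd/system/index.js.service")
UNIT_CONTENTS = """\
[Unit]
Description=index.js (managed by hand)

[Service]
ExecStart=/usr/bin/node /home/me/app/index.js
Restart=on-failure

[Install]
WantedBy=multi-user.target
"""

UNIT_FILE.write_text(UNIT_CONTENTS)  # write the unit file (needs root)
subprocess.run(["systemctl", "daemon-reload"], check=True)
subprocess.run(["systemctl", "enable", "--now", "index.js.service"], check=True)
# Logs go through journald rather than a pm2-style log file:
subprocess.run(["journalctl", "-u", "index.js.service", "-n", "20"], check=True)
```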
Upvote: | 147 |
Title: Hi HN! Today we are releasing Lottielab, a web-based animation tool, to the public as an open beta. The main tool for editing and exporting Lottie animations today is Adobe After Effects, a 30-year-old visual effects tool that’s not fit for this purpose, has a steep learning curve, and requires a patchwork of error-prone plugins. With Lottielab, we are aiming to reduce the friction of creating and editing product animations by providing an easy-to-use editor with out-of-the-box support for import and export of the Lottie format and many others. Feel free to play around with the tool and let me know what you think - I'm here to answer your questions. Happy animating!
Upvote: | 154 |
Title: Author here. I just wanted a quick and easy way to easily submit strings to a REST API and get back the embedding vectors in JSON using Llama2 and other similar LLMs, so I put this together over the past couple days. It's very quick and easy to set up and totally self-contained and self-hosted. You can easily add new models to it by simply adding the HuggingFace URL to the GGML format model weights. Two models are included by default, and these are automatically downloaded the first time it's run.<p>It lets you not only submit text strings and get back the embeddings, but also to compare two strings and get back their similarity score (i.e., the cosine similarity of their embedding vectors). You can also upload a plaintext file or PDF and get back all the embeddings for every sentence in the file as a zipped JSON file (and you can specify the layout of this JSON file).<p>Each time an embedding is computed for a given string with a given LLM, that vector is stored in the SQlite database and can be returned immediately. You can also search across all stored vectors easily using a query string; this uses FAISS which is integrated.<p>There are lots of nice performance enhancements, including parallel inference, db write queue, fully async everything, and even a RAM Disk feature to speed up model loading.<p>I’m working now on adding additional API endpoints for easily generating sentiment scores using presets for different focus areas, but that’s still work-in-progress (the code for this so far is in the repo though).
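For context on the similarity endpoint mentioned above, this is just the cosine-similarity definition, sketched in plain Python (not this repo's code):
```
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings"; real Llama2 embeddings have thousands of dimensions.
print(cosine_similarity([1.0, 0.0, 1.0], [1.0, 1.0, 0.0]))  # 0.5
```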
Upvote: | 178 |
Title: Hi HN, we're Lucas and Lucas, the authors of Layerform (https://github.com/ergomake/layerform). Layerform is an open-source tool for setting up development environments using plain .tf files. We allow each engineer to create their own "staging" environment and reuse infrastructure.<p>Whenever engineers run layerform spawn, we use plain .tf files to give them their own "staging" environment that looks just like production.<p>Many teams have a single (or too few) staging environments, which developers have to queue to use. This is particularly a problem when a system is large, because then engineers can't run it on their machines and cannot easily test their changes in a production-like environment. Often they end up with a cluttered Slack channel in which engineers wait for their turn to use staging. Sometimes, they don't even have that clunky channel and end up merging broken code or shipping bugs to production. Lucas and I decided to solve this because we previously suffered with shared staging environments.<p>Layerform gives each developer their own production-like environment.This eliminates the bottleneck, increasing the number of deploys engineers make. Additionally, it reduces the amount of bugs and rework because developers have a production-like environment to develop and test against. They can just run "layerform spawn" and get their own staging.<p>We wrap the MPL-licensed Terraform and allow engineers to encapsulate each part of their infrastructure into layers. They can then create multiple instances of a particular layer to create a development environment.The benefit of using layers instead of raw Terraform modules is that they're much easier to write and reuse, meaning multiple development environments can run on top of the same infrastructure.<p>Layerform's environments are quick and cheap to spin up because they share core pieces of infrastructure. Additionally, Layerform can automatically tag components in each layer, making it easier for FinOps teams to manage costs and do chargebacks.<p>For example: with Layerform, a product developer can spin up their own lambdas and pods for staging while still using a shared Kubernetes cluster and Kafka instance. That way, development environments are quicker to spin up and cheaper to maintain. Each developer's layer also gets a tag, meaning FinOps teams know how much each team's environments cost.<p>For the sake of transparency, the way we intend to make money is by providing a managed service with governance, management, and cost-control features, including turning off environments automatically on inactivity or after business hours. The Layerform CLI itself will remain free and open (GPL).<p>You can download the Layerform CLI right now and use it for free. Currently, all the state, permissions, and layer definitions stay in your cloud, under your control.<p>After the whole license change thing, I think it's also worth mentioning we'll be building on top of the community's fork and will consider adding support for Pulumi too.<p>We'd love your feedback on our solution to eliminate “the staging bottleneck". What do you think?
Upvote: | 124 |
Title: I’ve been looking and can only find “Learn Vimscript The Hard Way”.<p>The website I remember had a man in front of some control panel, maybe wearing a space suit, and started with a section named something like “Learn to Crawl”.<p>Does anyone know what I’m talking about?
Upvote: | 49 |
Title: Hi, I'm Evan and I developed Modern CSV. Last year, the beta version of version 2 was posted here on Hacker News. As a follow-up, I'm letting you guys know that the beta period is over and Modern CSV 2 is now available.<p>Modern CSV is a tabular file editor/viewer for Windows, Mac, and Linux. I developed it out of frustration with how spreadsheet programs handle CSV files. Plain text editors, on the other hand, do a poor job of handling columns. With Modern CSV, I attempt to combine the best of both worlds.<p>With version 2, you can expect to see an improved UI/UX, better performance, more useful features, updated documentation, and for Mac users, native Apple Silicon support.<p>If you haven't tried it yet, give it a shot and let me know what you think!
Upvote: | 324 |
Title: Why do they keep doing this?<p>I noticed users of my product (https://clonedub.com) were complaining that they couldn't upgrade plans. I also noticed there were no new subscriptions all day which was unusual.<p>So I log into firebase to figure out what's wrong.<p>Lo and behold it reads "Firebase no longer supports stripe extension. This has been moved to invertase".<p>The only mention of this is here:
https://github.com/invertase/stripe-firebase-extensions/issues/542<p>Checking this out further, I realised it was a barely supported open-source alternative that didn't even work.<p>What a joke?!!<p>I genuinely don't understand why they have such contempt for businesses on their platforms.<p>Why do they keep doing this?!!!
Upvote: | 49 |
Title: [received by email]<p>We are writing to let you know that starting October 1, 2023, all existing and new Standard Tier Egress customers will avail their first 200 GB for free in every region that they choose to operate, after which Standard Tier will be billed at current rates.<p>What do you need to know?
The 200 GB meter will reset every month. We’re introducing this offer to make it frictionless for customers to start using Standard Tier egress with their Virtual Machines.<p>Today, there is no free usage tier for Standard Tier Egress. With this MSA, we are introducing a free tier of egress to our Standard tier customers.<p>What do you need to do?
No action is required on your part. However, if you currently consume Standard Tier Egress, you will notice a slight reduction in the billed amount starting October 1, 2023. You can also review our standard billing rates.<p>Thanks for choosing Standard Tier Egress
Upvote: | 48 |
Title: Tech-stack included
Upvote: | 107 |
Title: We just released our database, SpacetimeDB, on GitHub under the BSL 1.1 license. It converts to a free software license after a few years.<p>The point of the database is that you upload application logic into the database as a WebAssembly stored procedure, so instead of clients connecting to a webserver they connect directly to the database. The database itself does authentication and you write your own authorization logic just like you would inside a webserver.<p>We’ve developed our game, BitCraft (<a href="https://bitcraftonline.com" rel="nofollow noreferrer">https://bitcraftonline.com</a>) entirely in this way. All of the game state is stored and synchronized with clients via SpacetimeDB, including player positions and movement.<p>We also plan to allow you to horizontally scale your applications in two ways:<p>1. By having multiple databases that can send messages to each other (i.e. the actor model) 2. By having distributed databases which partition data over multiple machines, similarly to CockroachDB, although this approach would cause a commensurate increase in latency in accessing data<p>Curious to hear your thoughts!<p><a href="https://spacetimedb.com" rel="nofollow noreferrer">https://spacetimedb.com</a>
Upvote: | 80 |
Title: Marqo is an end-to-end vector search engine. It contains everything required to integrate vector search into an application in a single API. Here is a code snippet for a minimal example of vector search with Marqo:<p>import marqo<p>mq = marqo.Client()<p>mq.create_index("my-first-index")<p>mq.index("my-first-index").add_documents([{"title": "The Travels of Marco Polo"}])<p>results = mq.index("my-first-index").search(q="Marqo Polo")<p>Why Marqo?
Vector similarity alone is not enough for vector search. Vector search requires more than a vector database - it also requires machine learning (ML) deployment and management, preprocessing and transformations of inputs as well as the ability to modify search behavior without retraining a model. Marqo contains all these pieces, enabling developers to build vector search into their application with minimal effort.<p>Why not X, Y, Z vector database?
Vector databases are specialized components for vector similarity. They are “vectors in - vectors out”. They still require the production of vectors, management of the ML models, associated orchestration and processing of the inputs. Marqo makes this easy by being “documents in, documents out”. Preprocessing of text and images, embedding the content, storing meta-data and deployment of inference and storage is all taken care of by Marqo. We have been running Marqo for production workloads with both low-latency and large index requirements.<p>Marqo features:<p>- Low-latency (10’s ms - configuration dependent), large scale (10’s - 100’s M vectors).
- Easily integrates with LLM’s and other generative AI - augmented generation using a knowledge base.
- Pre-configured open source embedding models - SBERT, Huggingface, CLIP/OpenCLIP.
- Pre-filtering and lexical search.
- Multimodal model support - search text and/or images.
- Custom models - load models fine tuned from your own data.
- Ranking with document meta data - bias the similarity with properties like popularity.
- Multi-term multi-modal queries - allows per query personalization and topic avoidance.
- Multi-modal representations - search over documents that have both text and images.
- GPU/CPU/ONNX/PyTorch inference support.<p>See some examples here:<p>Multimodal search:
[1] <a href="https://www.marqo.ai/blog/context-is-all-you-need-multimodal-vector-search-with-personalization" rel="nofollow noreferrer">https://www.marqo.ai/blog/context-is-all-you-need-multimodal...</a><p>Refining image quality and identifying unwanted content:
[2] <a href="https://www.marqo.ai/blog/refining-image-quality-and-eliminating-nsfw-content-with-marqo" rel="nofollow noreferrer">https://www.marqo.ai/blog/refining-image-quality-and-elimina...</a><p>Question answering over transcripts of speech:
[3] <a href="https://www.marqo.ai/blog/speech-processing" rel="nofollow noreferrer">https://www.marqo.ai/blog/speech-processing</a><p>Question and answering over technical documents and augmenting NPC's with a backstory:
[4] <a href="https://www.marqo.ai/blog/from-iron-manual-to-ironman-augmenting-gpt-with-marqo-for-fast-editable-memory-to-enable-context-aware-question-answering" rel="nofollow noreferrer">https://www.marqo.ai/blog/from-iron-manual-to-ironman-augmen...</a>
Upvote: | 62 |
Title: Is it just traditional ML?<p>Traditional ML would do a lot of feature extraction and engineering -- from a very specific problem space --before throwing the training compute at it. I think there are good pattern detection, prediction, and anomaly detection models that come out of this.<p>What happens if we just scrape all data (say metrics like weather, flights, population stats, gps locations, web pages clicks, deep space network observations, and kindergarten grades) and if it were possible to build a model with enough weights for all this diverse data ...<p>What kind of use case might such a ... Large DATA model ...open up?
Upvote: | 45 |
Title: Hello,<p>I am an experienced developer (11y) trying to earn some money on the side.<p>I am looking for some tips on what I could do.
The reason I said "easy" in the title is because I have a full-time job, so I can't commit to multi-month projects full time.<p>I earned some good money on Topcoder before, but currently there are only a few projects listed.<p>I am not a good speaker, so things like a YouTube channel or streaming are out of the picture; I am not comfortable uploading videos of myself.<p>I checked freelancer websites, but competition is crazy there (developing a full-fledged ecommerce web application for $100, and such).<p>Are there any other good websites like Topcoder?
What do YOU do to earn money on the side?<p>Edit: For suggestions about the US: I am actually living in Canada. If you have anything Canada-specific, please suggest :)
Upvote: | 95 |
Title: I’m looking for a good privacy-focused, hopefully self-hosted program to back up and share photos. With the way everyone is training AI on everything now, I don’t really trust Google, Apple, etc. with my photos and videos. So what are some good options that I can put on my home server and self-host to share and organize my family’s photos?<p>My wife would also be using it and she isn’t the most tech savvy, so something that is easy to use after initial setup would be ideal.<p>TIA
Upvote: | 81 |
Title: Graduate school looks like way too much of a time and money commitment right now. A ton of this academic content seems to be free online anyway. I got into software with free content and classes online.<p>I'm wondering if anyone has had success moving into this field as a generalist engineer? I'd imagine advanced degrees aren't required for everything? ML infra and stuff, perf/optimization work, etc... Maybe learning materials, resume and interview advice, etc.? Thanks in advance if you have an interesting answer!
Upvote: | 58 |
Title: A lot of times with side projects I wished I had gotten feedback early on, before I spent a lot of time on an inefficient direction. I wonder if people wait too long to publish something until it is fully polished, only to realize that the polishing wasn't needed.<p>I'm interested to see things that people would have never published otherwise. I know a lot of my projects never make it to a published phase, but I still would have been interested in knowing the general reception. Please drop your projects here!
Upvote: | 195 |
Title: Mine would be The Utopians trilogy[1], I recommend it to anyone looking for a good sci-fi read.<p>[1]: <a href="https://stallman.org/Bob-Chassell" rel="nofollow noreferrer">https://stallman.org/Bob-Chassell</a>
Upvote: | 439 |
Title: Hi HN! We’re Yann, Edouard, and Bastien from Koyeb (<a href="https://www.koyeb.com/" rel="nofollow noreferrer">https://www.koyeb.com/</a>). We’re building a platform to let you deploy full-stack apps on high-performance hardware around the world, with zero configuration. We provide a “global serverless feeling”, without the hassle of re-writing all your apps or managing k8s complexity [1].<p>We built Scaleway, a cloud service provider where we designed ARM servers and provided them as cloud servers. During our time there, we saw customers struggle with the same issues while trying to deploy full-stack applications and APIs resiliently. As it turns out, deploying applications and managing networking across a multi-data center fleet of machines (virtual or physical) requires an overwhelming amount of orchestration and configuration. At the time, that complexity meant that multi-region deployments were simply out-of-reach for most businesses.<p>When thinking about how we wanted to solve those problems, we tried several solutions. We briefly explored offering a FaaS experience [2], but from our first steps, user feedback made us reconsider whether it was the correct abstraction. In most cases, it seemed that functions simply added complexity and required learning how to engineer using provider-specific primitives. In many ways, developing with functions felt like abandoning all of the benefits of frameworks.<p>Another popular option these days is to go with Kubernetes. From an engineering perspective, Kubernetes is extremely powerful, but it also involves massive amounts of overhead. Building software, managing networking, and deploying across regions involves integrating many different components and maintaining them over time. It can be tough to justify the level of effort and investment it takes to keep it all running rather than work on building out your product.<p>We believe you should be able to write your apps and run them without modification with simple scaling, global distribution transparently managed by the provider, and no infrastructure or orchestration management.<p>Koyeb is a cloud platform where you come with a git repository or a Docker image, we build the code into a container (when needed), run the container inside of Firecracker microVMs, and deploy it to multiple regions on top of bare metal servers. There is an edge network in front to accelerate delivery and a global networking layer for inter-service communication (service mesh/discovery) [3].<p>We took a few steps to get the Koyeb platform to where it is today: we built our own serverless engine [4]. We use Nomad and Firecracker for orchestration, and Kuma for the networking layer. In the last year, we spawned six regions in Washington, DC, San Francisco, Singapore, Paris, Frankfurt and Tokyo, added support for native workers, gRPC, HTTP/2 [5], WebSockets, and custom health checks. We are working next on autoscaling, databases, and preview environments.<p>We’re super excited to show you Koyeb today and we’d love to hear your thoughts on the platform and what we are building in the comments. To make getting started easy, we provide $5.50 in free credits every month so you can run up to two services for free.<p>P.S. A payment method is required to access the platform to prevent abuse (we had hard months last year dealing with that). 
If you’d like to try the platform without adding a card, reach out at [email protected] or @gokoyeb on Twitter.<p>[1] <a href="https://www.koyeb.com/blog/the-true-cost-of-kubernetes-people-time-and-productivity" rel="nofollow noreferrer">https://www.koyeb.com/blog/the-true-cost-of-kubernetes-peopl...</a><p>[2] <a href="https://www.koyeb.com/blog/the-koyeb-serverless-engine-docker-containers-and-continuous-deployment-of-functions" rel="nofollow noreferrer">https://www.koyeb.com/blog/the-koyeb-serverless-engine-docke...</a><p>[3] <a href="https://www.koyeb.com/blog/building-a-multi-region-service-mesh-with-kuma-envoy-anycast-bgp-and-mtls" rel="nofollow noreferrer">https://www.koyeb.com/blog/building-a-multi-region-service-m...</a><p>[4] <a href="https://www.koyeb.com/blog/the-koyeb-serverless-engine-from-kubernetes-to-nomad-firecracker-and-kuma" rel="nofollow noreferrer">https://www.koyeb.com/blog/the-koyeb-serverless-engine-from-...</a><p>[5] <a href="https://www.koyeb.com/blog/enabling-grpc-and-http2-support-at-edge-with-kuma-and-envoy" rel="nofollow noreferrer">https://www.koyeb.com/blog/enabling-grpc-and-http2-support-a...</a>
Upvote: | 98 |
Title: Hi, I'm Alex - the creator of STRICH (<a href="https://strich.io" rel="nofollow noreferrer">https://strich.io</a>), a barcode scanning library for web apps.
Barcode scanning in web apps is nothing new. In my previous work experience, I've had the opportunity to use both high-end commercial offerings (e.g. Scandit) and OSS libraries like QuaggaJS or ZXing-JS in a wide range of customer projects, mainly in logistics.<p>I became dissatisfied with both. The established commercial offerings had five- to six-figure license fees and the developer experience was not always optimal. The web browser as a platform also seemed not to be the main priority for these players. The open source libraries are essentially unmaintained and not suitable for commercial use due to the lack of support. Also the recognition performance is not enough for some cases - for a detailed comparison see <a href="https://strich.io/comparison-with-oss.html" rel="nofollow noreferrer">https://strich.io/comparison-with-oss.html</a><p>Having dabbled a bit in Computer Vision topics before, and armed with an understanding of the market situation, I set out to build an alternative to fill the gap between the two worlds. After almost two years of on-and-off development and 6 months of piloting with a key customer, STRICH launched at beginning of this year.<p>STRICH is built exclusively for web browsers running on smartphones. I believe the vast majority of barcode scanning apps are in-house line of business apps that benefit from distribution outside of app stores and a single codebase with abundant developer resources. Barcode scanning in web apps is efficient and avoids platform risk and unnecessary costs associated with developing and publishing native apps.
Upvote: | 78 |
Title: I'm curious to hear what others have found useful. Whether it's your OS, the services you are hosting, or even general tips on things like buying hardware on a budget or how you implemented a cheap/free DDNS, I (and probably the rest of HN) would love to see what's popular among self hosters.<p>(Disclaimer: I am in the process of setting up a self hosted server, so your answers may help me with my setup.)
Upvote: | 52 |
Title: I made this straight vanilla JS game for a game jam a few years ago. Considering coming back to it and fixing the bugs and gameplay.<p><a href="https://github.com/jonfranco224/not-my-cows">https://github.com/jonfranco224/not-my-cows</a> if anyone wants to check the source.<p>Edit: y'all seem to be enjoying this! I spun up a quick Twitter/X for game updates if anyone is interested - <a href="https://twitter.com/notmycowsgame" rel="nofollow noreferrer">https://twitter.com/notmycowsgame</a>
Upvote: | 88 |
Title: Hi HN, we are Ed, Zach, and Ronald, creators of Shadeform (<a href="https://www.shadeform.ai/">https://www.shadeform.ai/</a>), a GPU marketplace to see live availability and prices across the GPU market, as well as to deploy and reserve on-demand instances. We have aggregated 8+ GPU providers into a single platform and API, so you can easily provision instances like A100s and H100s where they are available.<p>From our experience working at AWS and Azure, we believe that cloud could evolve from all-encompassing hyperscalers (AWS, Azure, GCP) to specialized clouds for high-performance use cases. After the launch of ChatGPT, we noticed GPU capacity thinning across major providers and emerging GPU and HPC clouds, so we decided it was the right time to build a single interface for IaaS across clouds.<p>With the explosion of Llama 2 and open source models, we are seeing individuals, startups, and organizations struggling to access A100s and H100s for model fine-tuning, training, and inference.<p>This encouraged us to help everyone access compute and increase flexibility with their cloud infra. Right now, we’ve built a platform that allows users to find GPU availability and launch instances from a unified platform. Our long term goal is to build a hardwareless GPU cloud where you can leverage managed ML services to train and infer in different clouds, reducing vendor lock-in.<p>We shipped a few features to help teams access GPUs today:<p>- a “single plane of glass” for GPU availability and prices;<p>- a “single control plane” for provisioning GPUs in any cloud through our platform and API;<p>- a reservation system that monitors real time availability and launches GPUs as soon as they become available.<p>Next up, we’re building multi-cloud load balanced inference, streamlining self hosting open source models, and more.<p>You can try our platform at <a href="https://platform.shadeform.ai">https://platform.shadeform.ai</a>. You can provision instances in your accounts by adding your cloud credentials and api keys, or you can leverage “ShadeCloud” and provision GPUs in our accounts. If you deploy in your account, it is free. If you deploy in our accounts, we charge a 5% platform fee.<p>We’d love your feedback on how we’re approaching this problem. What do you think?
Upvote: | 62 |
Title: Unsure if this is the same for everyone but our notification emails from SES are being rate limited by Gmail.<p>Background:<p>One of my apps provides time-sensitive email notifications, for 60-90 days, to the subscribers(paid) - so it is critical to deliver emails. We have been using the same email provider and have been using AWS SES for a couple of years now. We have the SPF, and DKIM all verified. Yet, for the last 3 days, we are getting the below email<p>```<p>Sub: Delivery Status Notification (Failure)<p>Body: Our system has detected an unusual rate of<CRLF>421-4.7.28 unsolicited mail originating from your IP address. To protect our<CRLF>421-4.7.28 users from spam, mail sent from your IP address has been temporarily<CRLF>421-4.7.28 rate limited. Please visit<CRLF>421-4.7.28 https://support.google.com/mail/?p=UnsolicitedRateLimitError to<CRLF>421 4.7.28 review our Bulk Email Senders Guidelines.><p>```<p>Troubleshooting till now: I have got the AWS tech support team confirming my email configuration has no issues. AWS team has informed "The current throttling is just that Gmail is seeing a lot of messages from the SES shared IP and is throttling the messages"<p>>>> <i>Is anyone facing the same issue with AWS? or similar issues with other bulk email service providers?</i><p>>>> <i>How do you deal with such issues in the future? Set up alternative email service providers.</i><p>>>> <i>Is this the side effect of Gmail's dormant account deletion rolled out last week?</i>
Upvote: | 67 |
Title: If your history is off, YouTube's landing page will show no videos and will "encourage" you to enable the history.<p>Any ideas why they are using this trick?
Upvote: | 94 |
Title: Hi HN, We’re Harshith, Manoj, and Manik<p>Poozle (<a href="https://github.com/poozlehq/poozle">https://github.com/poozlehq/poozle</a>) provides a single API that helps businesses achieve accurate LLM responses by providing real-time customer data from different SAAS tools (e.g Notion, Salesforce, Jira, Shopify, Google Ads etc).<p>Why we built Poozle: As we were talking to more AI companies who need to integrate with their customers’ data we realised managing all SAAS tools data and keeping them up-to-date is a huge infra of ETL, Auth management, Webhooks and many more things before you take it to production. It struck us – why not streamline this process and allow companies to prioritise their core product?<p>How it works: Poozle makes user authorization seamless using our drop-in component (Poozle Link) and handles both API Key and OAuth dance. Post-authentication developers can use our Unified model to fetch data to their LLMs (no need to sync data separately and then normalise at your end). Poozle keeps data updated in real time while giving you options to choose sync intervals. Even if the source doesn’t support webhooks, we’ve got you covered.<p>Currently, we support Unified API for 3 categories - Ticketing, Documentation and Email. You can watch a demo of Poozle (<a href="https://www.loom.com/share/30650e4d1fac41e3a7debc212b1c7c2d)l" rel="nofollow noreferrer">https://www.loom.com/share/30650e4d1fac41e3a7debc212b1c7c2d)...</a><p>We just got started a month ago and we’re eager to get feedback and keep building. Let us know what you think in the comments : )
Upvote: | 132 |
Title: I had this backup code working reliably for years, using the local file system, a vps/dedicated server, or remote storage for backup, and then I finally got time to wrap up the README, iron out a few missing switches, and publish. It should be production ready and reliable, so it could be useful to others. Contributors are welcome.<p><<a href="https://github.com/dusanx/saf">https://github.com/dusanx/saf</a>>
Upvote: | 89 |
Title: I received communications about updated privacy policies and terms of service from Microsoft, Meta, and PayPal today (in that order, chronologically.) Was there a triggering event that caused them all to update at the same time?
Upvote: | 197 |
Title: Hi folks,<p>does anyone have experience they could share about contracting for US companies from Europe as an EU national?<p>I'm investigating this as I'm a bit tired of the lack of contract / remote / engineering-first culture in the EU. I hope this will be an interesting topic for a few people.<p>After some googling, I'm under the impression that US companies are mostly looking for remote workers inside the US; however, there must be companies in the US who are looking for specialists in the EU. Are there any sites that bring together companies and offshore contractors (not Upwork, I'm thinking about niche or expert sites)?<p>I'm not sure about the legal obligations on either end. Does the company or the contractor need to register somewhere or pay special taxes in the US to provide services in the US?
Upvote: | 58 |
Title: Hey HN!<p>Rivet is an OSS game server management tool that enables game developers to easily deploy their dedicated servers without any infra experience.<p>We recently open-sourced Rivet after working on it for the past couple of years. I wanted to share some of my favorite things about our experience building this with the HN community.<p>My cofounder and I have been building multiplayer games together since middle school for fun (and not much profit [1]). In HS, I stumbled into building the entire infrastructure powering [Krunker.io](<a href="http://Krunker.io" rel="nofollow noreferrer">http://Krunker.io</a>) (acq by FRVR) & other popular multiplayer web games. After wasting months rebuilding dedicated server infrastructure + DDoS/bot mitigation over and over, we started building Rivet as a side project.<p>Some interesting tidbits:<p>- ~99% Rust and a smidgeon of Lua.<p>- Bolt [2] – Cluster dev & management toolchain for super configurable self-hosted Rivet clusters. It’s way over-engineered.<p>- The entire repo is usable as a library. Our EE repo uses OSS as a submodule.<p>- Traefik used as an edge proxy for low-latency UDP, TCP+TLS, & WSS traffic.<p>- Apache Traffic Server is under-appreciated as a large file cache. Used as an edge Docker pull-through cache to improve cold starts & as a CDN cache to lower our S3 bill.<p>- ClickHouse used for analytics & game server logs. It’s so simple, I have nothing more to say.<p>- Serving Docker images with Apache TS is simpler & cheaper than running a Docker pull-through cache.<p>- Nebula has been rock solid & easy to operate as our overlay network.<p>- We use Redis Lua scripts for complex, atomic, in-memory operations.<p>- Obviously, we love Nix.<p>- We keep a rough SBOM [3].<p>- Licensed under Apache 2.0 (OSI-approved). We seriously want people to run & tinker with Rivet themselves. We get a lot of questions about this: [4] [5]<p>Some HN-flavored FAQ:<p>> Why not build on top of Agones or Kubernetes?<p>Nomad is simpler & more flexible than Agones/Kubernetes out of the box, which let us get up and running faster. For example, Nomad natively supports multiple task drivers, edge workloads, and runs as a standalone binary.<p>> [Fly.io](<a href="http://Fly.io">http://Fly.io</a>) migrated off of Nomad, how will you scale?<p>Nomad can support 2M containers [6]. Some quick math: avg 8 players per lobby * 2M lobbies * 8 regional clusters = ~128M CCU. That’s well above PUBG’s 3.2m CCU peak.<p>Roblox’s game servers also run on top of Nomad [7]. We’re in good company.<p>> Are you affected by the recent Nomad BSL relicensing [8]?<p>Maybe, see [9].<p>> How do you compare to $X?<p>Our core goal is to get developers up and running as fast as possible. We provide extra services like our matchmaker [10], CDN [11], and KV [12] to make shipping a fully-fledged multiplayer game require only a couple of lines of code.<p>No other project provides a comparably accessible, OSS, and comprehensive game server manager.<p>> Do you handle networking logic?<p>No. 
We work with existing tools like FishNet, Mirror, NGO, Unreal & Godot replication, and anything else you can run in Docker.<p>> Is anyone actually using this?<p>Yes, we’ve been running in closed beta since Jan ‘22 and currently support millions of MAU across many titles.<p>[1]: <a href="https://github.com/rivet-gg/microgravity.io">https://github.com/rivet-gg/microgravity.io</a><p>[2]: <a href="https://github.com/rivet-gg/rivet/tree/main/docs/libraries/bolt">https://github.com/rivet-gg/rivet/tree/main/docs/libraries/b...</a><p>[3]: <a href="https://github.com/rivet-gg/rivet/blob/main/docs/infrastructure/SBOM.md">https://github.com/rivet-gg/rivet/blob/main/docs/infrastruct...</a><p>[4]: <a href="https://github.com/rivet-gg/rivet/blob/main/docs/philosophy/LICENSING.md">https://github.com/rivet-gg/rivet/blob/main/docs/philosophy/...</a><p>[5]: <a href="https://github.com/rivet-gg/rivet/blob/main/docs/philosophy/WHY_OPEN_SOURCE.md">https://github.com/rivet-gg/rivet/blob/main/docs/philosophy/...</a><p>[6]: <a href="https://www.hashicorp.com/c2m" rel="nofollow noreferrer">https://www.hashicorp.com/c2m</a><p>[7]: <a href="https://www.hashicorp.com/case-studies/roblox" rel="nofollow noreferrer">https://www.hashicorp.com/case-studies/roblox</a><p>[8]: <a href="https://www.hashicorp.com/blog/hashicorp-adopts-business-source-license" rel="nofollow noreferrer">https://www.hashicorp.com/blog/hashicorp-adopts-business-sou...</a><p>[9]: <a href="https://news.ycombinator.com/item?id=37084825">https://news.ycombinator.com/item?id=37084825</a><p>[10]: <a href="https://rivet.gg/docs/matchmaker">https://rivet.gg/docs/matchmaker</a><p>[11]: <a href="https://rivet.gg/docs/cdn">https://rivet.gg/docs/cdn</a><p>[12]: <a href="https://rivet.gg/docs/kv">https://rivet.gg/docs/kv</a>
Upvote: | 327 |
Title: I'm interested in knowing what type of service you operate, how much it costs you each month, and how much it brings in.
Upvote: | 82 |
Title: This is a keyboard in just intonation. It can play the notes a piano can. The big difference from a piano is that all the notes become consonant. At least, when you want to play a dissonant chord, you are clearly opting in to it because it's clear which notes are dissonant to each other. You won't bump into a dissonant note by mistake.<p>You can play without knowing any music theory. Hit arbitrary notes with the rhythm you want, and the pitches will work. Not understanding the buttons is fine. Even rolling your elbow around your keyboard is fine.<p>If you are a musician and press the wrong key while playing a song, it will still fit. It will sound like you made an intelligent, conscious choice to play another note, even though you know in your heart it was an accident. Beginner jazz musicians rejoice.<p>It's not an AI making choices for you; it's just a very elegant interface. What makes this possible is several new discoveries in psychoacoustics about how harmony works. While a piano lays out notes in pitch space, this keyboard is able to lay out notes in consonance space. When you play random notes, they tend to be "close together" on the physical keyboard. Distance on the keyboard maps well to distance in consonance space, so those random notes are close together in consonance space and sound good together.<p>According to Miles Davis, a "wrong" note becomes correct in the right context. If you try to play a wrong note, the purple buttons you press will automatically land you in the right context, even if you don't know what that context is yourself. So you can stumble your way through an improv and the keyboard will offer the right notes without needing you to think about it.<p>Harmonic consonance of chords can be read directly off the numbers in the keyboard, which implies that these numbers are a good language to think about music with. It doesn't take years of training, just reading the rules. The key harmony insight that you can do on this keyboard, and not on a piano, is to add frequencies linearly (like 400 Hz + 300 Hz). The reason this matters is that linear combinations of frequencies are a major factor of harmony, in lattice tones. So to see how dissonant or consonant a chord is, you want to check how distant it is from a sum or arithmetic progression. On a piano, to do the same, you'd have to memorize fractional approximations of 2^(N/12), then add and subtract these fractions, which is very difficult. For example, how far is 6/5 + 4/3 from 5/2? Hard to say! But if denominators are cleared, it's easy to compare 36 40 45: they're off by 1 from an arithmetic progression. This also applies to overlapping notes, not just chords. Having all the keys accessible on a piano is very convenient, but this translation layer of 2^(N/12) approximation + fractional arithmetic makes it hard to see harmony beyond the pairwise ratios.<p>The subset of playable songs is different from a piano, which means that songs in your existing piano repertoire will snip off some notes. Hardware for thumb keys would fix this, so you could play your existing piano songs in full, plus other songs a piano can't play. I don't have such hardware so I haven't implemented this. The other way is to have two keyboards and a partner.<p>The remaining issue is that there is no sheet music in just intonation. Unfortunately, I have had no success in finding piano sheet music in a common, interpretable format. 
So while I do have a converter from 12 equal temperament to just intonation, there are no input files to use it with...
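To make the cleared-denominator idea above concrete, here is a small sketch in Python (the chord is my own example, not one taken from the keyboard):
```
from fractions import Fraction
from math import lcm

def cleared(ratios):
    """Scale frequency ratios by the LCM of their denominators to get whole numbers."""
    denom = lcm(*(r.denominator for r in ratios))
    return [int(r * denom) for r in ratios]

chord = [Fraction(1), Fraction(6, 5), Fraction(3, 2)]  # a just minor triad
a, b, c = cleared(chord)                               # -> 10, 12, 15
print(a, b, c)

# Distance from an arithmetic progression: how far the middle note is from
# the average of the outer two. Smaller deviations read as more consonant.
print(abs((a + c) - 2 * b))  # |(10 + 15) - 2*12| = 1
```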
Upvote: | 331 |
Title: Hi HN,<p><a href="https://superfunctions.com" rel="nofollow noreferrer">https://superfunctions.com</a><p>I'm working on a web app that allows AI prompts to function as an API. I want to make it easier for developers to use AI. I've found it painful to monitor, cache, and iterate on prompts. superfunctions.com is designed to be the simplest building block to create AI-powered apps and scripts.<p>Simplest example I can think of:
You want an API to convert human-named colors to hex
You can write a prompt like: "convert {{query.color}} to color, only output hex for css" and then you can call your prompt with <a href="https://superfn.com/fn/color-to-hex?color=blue" rel="nofollow noreferrer">https://superfn.com/fn/color-to-hex?color=blue</a>
and the response will contain: #0000FF<p>Watch a short video intro:
<a href="https://www.youtube.com/watch?v=KdO1TBUbRuA">https://www.youtube.com/watch?v=KdO1TBUbRuA</a><p>Login without needing an account:
<a href="https://superfunctions.com/login/anon" rel="nofollow noreferrer">https://superfunctions.com/login/anon</a><p>I'm still sorting out a few bugs, but it's usable in its current state.<p>This is my first solo project, so I'm very open to feedback and suggestions.<p>-Trent
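As a usage sketch of the color-to-hex example above, this is roughly how a script might call such an endpoint. The exact response format (plain text vs. JSON) is my assumption, not something confirmed in the post.
```
import requests

# Call the prompt-backed endpoint described above; the response shape is assumed.
resp = requests.get(
    "https://superfn.com/fn/color-to-hex",
    params={"color": "blue"},
    timeout=30,
)
resp.raise_for_status()
print(resp.text)  # expected to contain something like "#0000FF"
```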
Upvote: | 42 |
Title: Hi HN!<p>I’ve been lurking for a while, but out of fear of being steamrolled by HN readers or maybe just natural introversion, I’ve always been too scared to post or comment. Which is why<p>1. this is my first real Hacker News submission<p>2. my friend Michael and I built "Fake" Hacker News, a place to post and see what AI-generated HN comments might say.<p>Here’s a video of me using fakeHN to test this very submission:
<a href="https://www.loom.com/share/4b9f4f9d7c77489a86baeb92ec55a1ed?sid=a74043f6-8a6e-4a68-bcda-330ab9d7eafe" rel="nofollow noreferrer">https://www.loom.com/share/4b9f4f9d7c77489a86baeb92ec55a1ed?...</a><p>And an example of one of our generated posts:
<a href="https://www.fakehn.com/submission?id=tnYPX00BX827jWFPFkVJ" rel="nofollow noreferrer">https://www.fakehn.com/submission?id=tnYPX00BX827jWFPFkVJ</a><p>To try it, submit a title and text, and depending on traffic and the powers that be, after ~5 seconds, you’ll see some Fake HN comments and replies.<p>We don’t support url submissions yet, but we’re happy to build it if the community wants it!<p>Other features to knock out: deeply nested replies, streamed comments, and higher-fidelity comments mapping to real readers, since the generations now are still pretty shallow. Instead of the quick and dirty system in place now, we think it’d be really cool to see how more nuanced AI agents with the opinions and biases of real individual HN readers might respond.<p>I’d love to see what fakeHN posts you’ve tried and hear any feedback, whether you feel like it’s more of a nifty toy or could eventually solve real problems. If nothing else, it’s been funny to try random posts and see the results. :)<p>- Justin and Michael
Upvote: | 107 |
Title: Hello HN!<p>I was let go from my gaming job a couple of months ago, and unfortunately nothing has come up yet.<p>Thankfully, I was thinking of moving away from the industry anyway, so this is a great opportunity to do so. I've got some savings and have given myself a year to set-up a cybersecurity consultancy business. My main target will be start-ups, and small to medium tech companies, particularly gaming ones that don't yet have a cybersecurity division, but nonetheless need one, and don't see the point of hiring a full time cybersecurity professional.<p>The field has always interested me, and most of my games experience is doing server side development, alongside DevOps, and then straight game dev. But server work has been the bulk, so at least I'm familiar with the basics of hardening a system against interference, mostly by players trying to cheat, and every now and again against criminal interests who have targeted our games.<p>I've got around 15+ years of experience as a software engineer, around half of that in plain server development, and the other half specialized in server dev for games.<p>I've got a bachelor's degree in software engineering, and an MSc in Computer Games Technology. I'm taking a short postgrad course in Cybersecurity at my local university, but that takes 8 months. In the meantime, I'm studying to get Security+ certified so I can start bidding for jobs and have something more backing me apart from my CV.<p>My question is the following, what am I missing? What else can I get or do to give myself more credibility? Does anyone have any tips on getting clients?<p>I'm planning on running promotions for start-ups and going to several meet-ups to distribute coupons, some booklets with free information on personal cybersecurity, and just to network.<p>Cheers in advance for the advice!<p>p.s.: I'm also setting a sister company for game dev consulting, but I'm much more familiar with that and feel much more comfortable with it, but tips for that are also welcome.
Upvote: | 43 |
Title: Hey HN,<p>I built superwhisper out of frustration with the native dictation capabilities of macOS. It was inaccurate, required manual punctuation, didn't activate in some contexts, or would have audio capture issues.<p>I wanted a replacement that worked offline, had cross-language support, was configurable and worked in any application.<p>Under the hood the app is using whisper.cpp, which runs really well on the Apple Silicon chips.<p>You can use the base and standard size models for free; larger model sizes and languages other than English are paid.<p>Let me know what you think! For context, I launched this just one month ago and have been rapidly adding features and making fixes.<p>If you want to follow along with development, I post release info on twitter (<a href="https://x.com/superwhisperapp" rel="nofollow noreferrer">https://x.com/superwhisperapp</a>) or you can subscribe to emails via the form on the website (very bottom).
Upvote: | 43 |
Title: I recently made an app that detects your baldness with ML from hair pictures. It is getting quite a lot of views and I am wondering if I can earn some money with it (<a href="https://amibalding.co/" rel="nofollow noreferrer">https://amibalding.co/</a>).<p>I don't want to add a paywall or something annoying for the user. For the moment there is no monetization and the hosting is around $100 a month.<p>If it can make the $100 a month back I would be really happy, because I could keep running the website and show it to recruiters. It's my most successful project and I show it on my CV.<p>Thanks guys,
Upvote: | 86 |
Title: Go to your profile > "flagged submissions". Do you see any flagged submissions that you don't recall flagging?<p>I have counted 48 flagged submissions, with only 2 of them being genuine flags.<p>I'm curious whether I am exceptionally clumsy, or whether this is a common issue for the rest of you.
Upvote: | 64 |
Title: I'll start.<p>A few years back, I was interviewing at a then "hot" startup. At the end of the process, the CTO calls and says they'd like to extend an offer. I was expecting him to walk me through the offer details, when he goes "well, are you going to take it?" I asked about getting some specifics (cash comp, equity, etc.) and he explains that they ask candidates to commit before sharing any details.<p>I told him that didn't seem like such a great idea, and he assured me that comp wouldn't be an issue, and that they do this to avoid hiring mercenaries. I passed and never looked back.
Upvote: | 185 |
Title: Wife of an app-builder here.<p>My husband quit his full time job at Apple to work full-time last year on a new social audio app where you are anonymous and there are no videos or images - people can only connect with their voice.<p>The app has been up for just less than 2 months, and it was incredible to see 300 users from all around the world leave voice messages of support for each other.<p>It definitely restored my faith in humanity, so much so that I decided to jump on board as a co-founder :)<p>We are on a mission to end loneliness that is gripping our world today, and we hope you also come onboard to get or offer support, or even just to make authentic connections with people around the world.
Upvote: | 58 |
Title: Hello! I'm James and I am working on VisionScript. With VisionScript, I want to empower people -- including everyone without any prior programming experience -- to build cool apps with vision.<p>This weekend, I recorded a demo for VisionScript, in which I made apps that count how many cats are in an image and hides people in a video. Each app was < 10 lines of code.<p><a href="https://vimeo.com/856043804" rel="nofollow noreferrer">https://vimeo.com/856043804</a><p>VisionScript is built for the 10 year old inside of me who would have loved more visual programming languages with which to play. I want to show people the potential of programming and how you can make what you want with computers, whether it be a game that counts cats or an app that monitors how many birds flew past a tree. Those "wow" moments should come as soon as possible in one's learning experience.<p>VisionScript is in active development. I started work on this project in July. There will likely be bugs; this is a passion project. Inspiration comes from Wolfram and Python. Behind the scenes, I am adopting what I am calling "lexical inference", which is to say there is a last state value on which functions infer; the language manages types and state.
Upvote: | 93 |
Title: Hello HN,<p>I’m Tony, the CEO of Cosmic (https://www.cosmicjs.com), we provide a headless CMS and API toolkit to create and deliver content to websites and apps. Today, we are releasing Cosmic Media which enables you to search millions of high-quality, royalty-free, stock photos, videos, and vectors from popular online media services: Unsplash, Pexels, Giphy, and Pixabay from one convenient interface. It also includes AI-generated images from OpenAI. Check it out here: https://cosmicmedia.vercel.app<p>We built it to solve our own need to consolidate our existing media extensions, which were individual media extensions using the Unsplash API and Pexels Video API, and we thought, "why not combine them into one"? Rather than search from different stock media websites, seems like it would be nice to aggregate it into one interface. Then we sort of thought about what else might someone want for adding media to their content, so we added DALL-E AI image generation. We've been using it internally and find that it's saved us some time when searching for media to add to our blog posts.<p>We are offering it as both a stand-alone open source tool and as a Cosmic extension which can be added to your projects for easy access during content creation from the Cosmic dashboard. Check out the code and feel free to customize and extend it to suit your needs: https://github.com/cosmicjs/cosmic-media-extension<p>Let me know what you think in the comments.<p>- Tony
Upvote: | 46 |
Title: Hey HN<p>OpenCopilot is an OSS framework which helps devs build open-source AI Copilots that actually work and embed into their product with ease.<p>Why another LLM framework?<p>Twitter is full of impressive LLM applications, but once you peel back the curtain it’s clear that they are just demos. The reason is that building an AI Copilot that goes beyond a Twitter demo can be complex, time-consuming and unreliable.<p>Our team has been in the AI space since 2018 and built numerous LLM apps & copilots. While doing that, we got approached by many startups saying they’d also like to build a copilot for their product but they haven’t been able to get it reliable, fast or cost-effective enough for production use. Thus we built the OpenCopilot framework, so devs can intuitively get AI Copilots running in less than 10 minutes and iterate towards a useful Copilot in a single day.<p>We believe every product, company and individual will have their Copilot in the future. Thus, we’d love your feedback, questions and constructive criticism.
Upvote: | 109 |
Title: Hi HN,<p>From that scene in Hackers where they were listing off books with nicknames (e.g. Compilers: Principles, Techniques and Tools, aka "The Dragon Book"), I ended up being curious about what other books might also have the same treatment. This is the current list I have, scoured from places like Wikipedia, Amazon and other forums, but I wondered if anyone found anything else? (This is just for my own curiosity)<p>The AWK Programming Language, Alfred Aho, Peter Weinberger, Brian Kernighan, aka "The Gray Book"<p>The C Programming Language, Brian Kernighan and Dennis Ritchie, aka "The White Book"<p>Compilers: Principles, Techniques and Tools, aka "The Red/Purple Dragon Book"<p>Computer Architecture: A Quantitative Approach, aka "The Pillar Book"<p>The Design and Implementation of the FreeBSD Operating System, aka "The Devil Book"<p>Foundations of Computer Science, Aho, Alfred and Ullman, Jeffrey, aka "The Turtle Book"<p>Introduction to Automata Theory, aka "The Cinderella Book"<p>Lions' Commentary on UNIX, 6th Edition, aka "The Lions Book"<p>Modern Compiler Implementation in ML, aka "The Tiger Book"<p>Operating Systems: Three Easy Pieces, aka "The Comet Book"<p>The OpenGL Programming Guide, aka "The Red Book"<p>The Peter Norton Programmer's Guide to the IBM PC, aka "The Pink Shirt Book"<p>Principles of Compiler Design, aka "The Green Dragon Book"<p>Programming Perl, aka "The Camel Book"<p>Programming Ruby, aka "The Pickaxe Book"<p>Smalltalk-80: The Language and its Implementation, aka "The Blue Book"<p>Structure and Interpretation of Computer Programs, aka "The Wizard Book"<p>Unix Power Tools, aka "The Unix Book"
Upvote: | 44 |
Title: Wanting to build a house, and looking for a DB of open source plans, if such a thing even exists.
Upvote: | 506 |
Title: Deploying vision models is time-consuming and tedious. Setting up dependencies. Fixing conflicts. Configuring TRT acceleration. Flashing (and re-flashing) NVIDIA Jetsons. A streamlined, developer-friendly solution for inference is needed.<p>We, the Roboflow team, have been hard at work open sourcing Inference, an open source vision deployment solution. Our solution is designed with developers in mind, offering an HTTP-based interface. Run models on your hardware without having to write architecture-specific inference code. Here's a demo showing how to go from a model to GPU inference on a video of a football game in ~10 minutes:<p><a href="https://www.youtube.com/watch?v=at-yuwIMiN4">https://www.youtube.com/watch?v=at-yuwIMiN4</a><p>Inference powers millions of daily API calls for global sports broadcasts, one of the world’s largest railways, a leading electric car manufacturer, and multiple other Fortune 500 companies, along with countless hackers’ hobby and research projects. Inference works in Docker and supports CPU (ARM and x86), NVIDIA GPU, and TRT. Inference manages dependencies and the environment. All you need to do is make HTTP requests to the server.<p>YOLOv5, YOLOv8, YOLACT, CLIP, SAM, and other popular vision models are supported (some models need to be hosted on Roboflow first, see the docs; we're working on bring your own model weights!).<p>Try it out and tell us what you think!
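To give a flavor of what "just make HTTP requests" looks like from Python, here is a rough sketch of posting an image to a locally running server. The port, route shape, and model ID below are illustrative placeholders rather than the documented endpoints, so check the Inference docs for the real layout:

```python
# Sketch of calling a locally running inference server over HTTP.
# The port, route shape, and model ID are placeholders for illustration;
# consult the Inference documentation for the actual endpoint layout.
import base64
import requests

SERVER_URL = "http://localhost:9001"     # assumed local server address
MODEL_ID = "football-players/1"          # hypothetical model/version ID
API_KEY = "YOUR_API_KEY"

def detect(image_path: str) -> dict:
    # Encode the image as base64 so it can travel in the request body.
    with open(image_path, "rb") as f:
        payload = base64.b64encode(f.read()).decode("utf-8")

    # POST the frame and get predictions back as JSON.
    response = requests.post(
        f"{SERVER_URL}/{MODEL_ID}",
        params={"api_key": API_KEY},
        data=payload,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(detect("frame.jpg"))
```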
Upvote: | 71 |
Title: I recently found myself computing the similarity between lots of very high dimensional vectors (i.e., sentence embedding vectors from LLMs), and I wanted to try some more powerful measures of similarity/dependency than just Cosine similarity, which seems to be the default for everything nowadays because of its computational efficiency.<p>There are many other more involved measures that can detect more subtle relationships, but the problem is that some of them are quite slow to compute, especially if you're trying to do it in Python. For my favorite measure of statistical dependency, Hoeffding's D, that's true even if you use Numpy. Since I recently learned Rust and wanted to learn how to make Python packages using Rust, I put together this new library that I call Fast Vector Similarity.<p>I was blown away by the performance of Rust and the quality of the tooling while making this. And even though it required a lot of fussing with Github Actions, I was also really impressed with just how easy it was to make a Python library using Rust that could be automatically compiled into wheels for every combination of platform (Linux, Windows, Mac) and Python Version (3.8 through 3.11) and uploaded to PyPi, all triggered by a commit to the repo and handled by Github's servers-- and all for free if you're working on a public repo!<p>Anyway, this library can easily be installed to try out using `pip install fast_vector_similarity`, and you can see some simple demo Python code in the readme to show how to use it.<p>Aside from exposing some very high performance implementations of some very nice similarity measures, I also included the ability to get robust estimates of these measures using the Bootstrap method. Basically, if you have two very high dimensional vectors, instead of using the entire vector to measure similarity, you can take the same random subset of indices from both vectors and compute the similarity of just those elements. Then you repeat the process hundreds or thousands of times and look at the robust average (i.e., throw away the results outside the 25th percentile to 75th percentile and average the remaining ones, to reduce the impact of outliers) and standard deviation of the results. Obviously this is very demanding of performance, but it's still reasonable if you're not trying to compute it for too many vectors.<p>Everything is designed to fully saturate the performance of multi-core machines by extensive use of broadcasting/vectorization and the use of paralell processing via the Rayon library. I was really impressed with how easy and low-overhead it is to make highly parallelized code in Rust, especially compared to coming from Python, where you have to jump through a lot of hoops to use multiprocessing and there is a ton of overhead.<p>Anyway, please let me know what you think. I'm looking to add more measures of similarity if I can find ones that can be efficiently computed (I already gave up on including HSIC because I couldn't get it to go fast enough, even using BLAS/LAPACK).
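To make the bootstrap idea above concrete, here is a rough NumPy sketch of the approach. This is an illustration of the technique only; the function and parameter names are made up and are not the fast_vector_similarity API:

```python
# Illustrative sketch of the bootstrapped, robust similarity estimate
# described above, in plain NumPy. Names are invented for the example and
# do not reflect the fast_vector_similarity API.
import numpy as np

def robust_bootstrap_similarity(x, y, n_draws=1000, subset_size=100, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    estimates = []

    for _ in range(n_draws):
        # Sample the SAME random indices from both vectors.
        idx = rng.choice(len(x), size=subset_size, replace=False)
        xs, ys = x[idx], y[idx]
        # Any similarity measure could go here; cosine keeps the sketch short.
        cos = xs @ ys / (np.linalg.norm(xs) * np.linalg.norm(ys))
        estimates.append(cos)

    estimates = np.array(estimates)
    # Robust average: keep only the interquartile range, then take the mean.
    lo, hi = np.percentile(estimates, [25, 75])
    trimmed = estimates[(estimates >= lo) & (estimates <= hi)]
    return trimmed.mean(), estimates.std()
```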
Upvote: | 141 |
Title: Hi HN,<p>Gentrace is our new evaluation and observability tool for generative AI (open beta).<p>Generative pipelines are hard to evaluate because outputs are subjective. Lots of developers end up just doing “gut checks” on a few inputs before shipping changes, or they build up a spreadsheet of test cases that they manually run through the pipeline. Some companies outsource filling out the spreadsheet. However, in any of these cases, you end up with a very slow and expensive process for evaluation.<p>At one point, we did this too. Gentrace is the result of a pivot; it was an internal tool we used to automatically grade new PRs as developers shipped changes to generative pipelines that other people thought might be useful.<p>Gentrace makes pre-production testing of generative pipelines continuous and nearly instantaneous. In Gentrace, you:<p>- Import and/or construct suites of test data
- Use a combination of AI and heuristic evaluators to grade for quality, hallucination, safety, etc
- Use our interface to correct automated grades or add your own (yourself or a member of your team)<p>Gentrace integrates at a code level for evaluation, meaning we test your generative AI pipeline the way you would test normal code. This allows you to test more than just prompt changes; for example, you can compare models (eg Claude 2 vs GPT-4 vs GPT 3.5 vs Llama 2) or see the effects of additional chained steps (”Rewrite the previous answer in the following tone:”).<p>Here’s a video overview that goes into a bit more detail: <a href="https://youtu.be/XxgDPSrTWIw" rel="nofollow noreferrer">https://youtu.be/XxgDPSrTWIw</a><p>In production, Gentrace observes for speed, cost, and data flow. It also shows real user feedback as well. We do this by integrating via our SDK at a code level; Gentrace does not proxy requests.<p>Soon, we’ll allow you to convert production data into test cases, allowing customer support to turn bad production generations into “failing tests” for AI teams to make pass.<p>We process interim steps and multiple outputs as well, helping evaluate agent flows / chains where the “last output” isn’t always the only thing that matters.<p>There’s been a lot of observability tools published recently. We differ from those by focusing more strongly on blending observability with strong evaluation and by using an SDK rather than a “man-in-the-middle” approach to capturing data (ie Gentrace can be down and your request to OpenAI will still succeed).<p>Within the evaluation landscape, we differentiate by integrating with code (see above for benefits) for capturing generative outputs and by providing a customizable UI workflow for building evaluators. In Gentrace, you start with off-the-shelf automated evaluators and then customize them to your specific task. You also build and run new evaluators on old generative outputs. Finally, you easily override automated evaluators and/or blend automated evaluation with evaluation by humans on your team.<p>We also focus on being suitable for business use. We are SOC 2 Type 1 compliant (Type 2 coming shortly), have robust legal documentation around data processing, security, and privacy, and have already passed several vendor legal and security reviews at large technology companies.<p>Our standard usage-based pricing is available on the website: <a href="https://gentrace.ai/pricing" rel="nofollow noreferrer">https://gentrace.ai/pricing</a><p>If you are building features with generative AI, we would love to get your feedback. You can self-serve sign up (without a credit card) for a 14 day trial here: <a href="https://gentrace.ai/" rel="nofollow noreferrer">https://gentrace.ai/</a><p>We’re available right here for feedback and questions. We’re also available at [email protected].<p>Best,
Doug, Vivek, and Daniel
Upvote: | 67 |
Title: Hi HN community. We are excited to open source Dataherald’s natural-language-to-SQL engine today (<a href="https://github.com/Dataherald/dataherald">https://github.com/Dataherald/dataherald</a>). This engine allows you to set up an API from your structured database that can answer questions in plain English.<p>GPT-4 class LLMs have gotten remarkably good at writing SQL. However, out-of-the-box LLMs and existing frameworks would not work with our own structured data at a necessary quality level. For example, given the question “what was the average rent in Los Angeles in May 2023?” a reasonable human would either assume the question is about Los Angeles, CA or would confirm the state with the question asker in a follow up. However, an LLM translates this to:<p>select price from rent_prices where city=”Los Angeles” AND month=”05” AND year=”2023”<p>This pulls data for Los Angeles, CA and Los Angeles, TX without getting columns to differentiate between the two. You can read more about the challenges of enterprise-level text-to-SQL in this blog post I wrote on the topic: <a href="https://medium.com/dataherald/why-enterprise-natural-language-to-sql-is-hard-8849414f41c" rel="nofollow noreferrer">https://medium.com/dataherald/why-enterprise-natural-languag...</a><p>Dataherald comes with “batteries-included.” It has best-in-class implementations of core components, including, but not limited to: a state of the art NL-to-SQL agent, an LLM-based SQL-accuracy evaluator. The architecture is modular, allowing these components to be easily replaced. It’s easy to set up and use with major data warehouses.<p>There is a “Context Store” where information (NL2SQL examples, schemas and table descriptions) is used for the LLM prompts to make the engine get better with usage. And we even made it fast!<p>This version allows you to easily connect to PG, Databricks, BigQuery or Snowflake and set up an API for semantic interactions with your structured data. You can then add business and data context that are used for few-shot prompting by the engine.<p>The NL-to-SQL agent in this open source release was developed by our own Mohammadreza Pourreza, whose DIN-SQL algorithm is currently top of the Spider (<a href="https://yale-lily.github.io/spider" rel="nofollow noreferrer">https://yale-lily.github.io/spider</a>) and Bird (<a href="https://bird-bench.github.io/" rel="nofollow noreferrer">https://bird-bench.github.io/</a>) NL 2 SQL benchmarks. This agent has outperformed the Langchain SQLAgent anywhere from 12%-250%.5x (depending on the provided context) in our own internal benchmarking while being only ~15s slower on average.<p>Needless to say, this is an early release and the codebase is under swift development. We would love for you to try it out and give us your feedback! And if you are interested in contributing, we’d love to hear from you!
Upvote: | 215 |
Title: Clint is an open-sourced medical information lookup and reasoning tool.<p>Clint enables a user to have an interactive dialogue about medical conditions, symptoms, or simply to ask medical questions. Clint helps connect regular health concerns with complex medical information. It does this by converting colloquial language into medical terms, gathering and understanding information from medical resources, and presenting this information back to the user in an easy-to-understand way.<p>One of the key features of Clint is that its processing is local. It's served using GitHub Pages and utilizes the user's OpenAI API key to make requests directly to GPT. All processing, except for that done by the LLM, happens in the user's browser.<p>I recently had a need to look up detailed medical information and found myself spending a lot of time translating my understanding into the medical domain, then again trying to comprehend the medical terms. That gave me the idea that this could be a task for an LLM.<p>The result is Clint. It's a proof-of-concept. I currently have no further plans for the tool. If it is useful to you as-is, great! If it is useful only to help share some ideas, that's fine too.
Upvote: | 45 |
Title: I've now got a semi-successful YouTube channel. One of the things that helped me out at the beginning was other more popular YouTubers surfacing my videos to their audiences.<p>I'm looking to pay back some of that and want to find some good undiscovered content. There are lots of people posting really good technical content who just aren't pushed out by "the algorithm".<p>Maybe the production values aren't great, the audio is not great or it's just very amateurish, but if the content is good then I'm interested.<p>I'm not really interested in people who already have thousands of subs, I'm looking for the folks who are getting a few views on their videos and deserve more.
Upvote: | 142 |
Title: Have you had any experience as a mediocre programmer where you coded mostly in a hit-and-miss style, and then you did some project, read some book(s) or took some courses, started to program with a more scientific process, formed a mental model, and eventually became a good and then a better programmer?<p>You started understanding programming languages more easily and were able to structure solutions more elegantly and efficiently in code.
Upvote: | 51 |
Title: Hey HN,<p>Some of you were really interested in Postgres logging with pgAudit in my previous post here: <a href="https://news.ycombinator.com/item?id=37082827">https://news.ycombinator.com/item?id=37082827</a><p>So I built this logger: <a href="https://rocketgraph.io/logger-demo" rel="nofollow noreferrer">https://rocketgraph.io/logger-demo</a><p>using pgAudit to show you what can be done with Postgres auditing. It offers some powerful features like "get me all the CREATE queries that ran in the past hour". These logs are generated by an AWS RDS instance running on my Rocketgraph account, then forwarded to CloudWatch for complex querying. In the future we can connect these logs to Slack so you can get Slack alerts when a developer accidentally DROPs a table.<p>If you like my work, please check it out here: <a href="https://github.com/RocketsGraphQL/rgraph">https://github.com/RocketsGraphQL/rgraph</a><p>And if you want this logging on your own Postgres instance, use <a href="https://rocketgraph.io/" rel="nofollow noreferrer">https://rocketgraph.io/</a>
and set up a project; pgAudit is installed automatically.
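To make the "CREATE queries in the past hour" example concrete, the CloudWatch side of that lookup looks roughly like the boto3 sketch below. The log group name is a placeholder (RDS Postgres log groups typically follow /aws/rds/instance/<db-id>/postgresql), and nothing here is Rocketgraph-specific:

```python
# Rough sketch: pull pgAudit log lines containing CREATE statements from the
# last hour out of CloudWatch Logs. The log group name is a placeholder.
import time
import boto3

logs = boto3.client("logs")

def create_statements_last_hour(log_group="/aws/rds/instance/my-db/postgresql"):
    now_ms = int(time.time() * 1000)
    resp = logs.filter_log_events(
        logGroupName=log_group,
        startTime=now_ms - 60 * 60 * 1000,  # one hour ago
        endTime=now_ms,
        filterPattern="CREATE",             # pgAudit logs DDL as plain text
    )
    # First page only; follow resp.get("nextToken") for more events.
    return [e["message"] for e in resp["events"]]

if __name__ == "__main__":
    for line in create_statements_last_hour():
        print(line)
```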
Upvote: | 94 |
Title: <a href="https://github.com/acheong08/obsidian-sync">https://github.com/acheong08/obsidian-sync</a><p>Hello HN,<p>I'm a recent high school graduate and can't afford $8 per month for the official sync service, so I tried my hand at replicating the server.<p>It's still missing a few features, such as file recovery and history, but the basic sync is working.<p>To the creators of Obsidian.md: I'm probably violating the TOS, and I'm sorry. I'll take down the repository if asked. It's not ready for production and is highly inefficient; it's not competition, so I hope you'll be lenient.
Upvote: | 416 |
Title: I feel the need to do some code reading, but I don’t know where to find good code (don’t say GitHub).
Upvote: | 245 |
Title: Hi HN,<p>Code Llama was released, but we noticed a ton of questions in the main thread about how/where to use it — not just from an API or the terminal, but <i>in your own codebase</i> as a drop-in replacement for Copilot Chat. Without this, developers don't get much utility from the model.<p>This concern is also important because benchmarks like HumanEval don't perfectly reflect the quality of responses. There's likely to be a flurry of improvements to coding models in the coming months, and rather than relying on the benchmarks to evaluate them, the community will get better feedback from people actually using the models. This means <i>real</i> usage in <i>real</i>, everyday workflows.<p>We've worked to make this possible with Continue (<a href="https://github.com/continuedev/continue">https://github.com/continuedev/continue</a>) and want to hear what you find to be the real capabilities of Code Llama. Is it on-par with GPT-4, does it require fine-tuning, or does it excel at certain tasks?<p>If you’d like to try Code Llama with Continue, it only takes a few steps to set up (<a href="https://continue.dev/docs/walkthroughs/codellama">https://continue.dev/docs/walkthroughs/codellama</a>), either locally with Ollama, or through TogetherAI or Replicate's APIs.
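As an aside, if you already have Ollama running locally you can also poke at the model directly over its HTTP API, outside of any editor. The sketch below assumes Ollama's default local port and uses an example model tag; pull whichever Code Llama variant you actually installed:

```python
# Quick sketch of querying a locally running Ollama instance over HTTP.
# Assumes the default port (11434) and an example model tag.
import requests

def ask_codellama(prompt: str, model: str = "codellama") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_codellama("Write a Python function that reverses a string."))
```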
Upvote: | 187 |
Title: Hi, I’m Chris, one of the co-founders of Shimmer. Last October, following my ADHD diagnosis, I launched Shimmer (<a href="https://shimmer.care">https://shimmer.care</a>), one-to-one ADHD Coaching for adults. Our HN launch was here: <a href="https://news.ycombinator.com/item?id=33468611">https://news.ycombinator.com/item?id=33468611</a>.<p>A quick recap before I dive into our new launch: Shimmer is an ADHD coaching service for adults. We took apart the traditionally expensive, inaccessible ADHD coaching offering ($300-600+/session) and redesigned it from first principles. You get matched with one of our expert ADHD coaches, meet weekly over video, and get supported throughout the week via text and with learning tools. This solution is special to me personally (and our community) because it doesn’t just give you “knowledge” or offer another “tool”—our coaches help you set realistic goals, take personalized steps towards them, and keep you accountable.<p>Today we’re excited to launch our most-requested feature: Web.<p>Over the past 9 months, we learned (and iterated) a lot with our members and coaches. A few key challenges pointed to the need for a web version:
(1) ADHD “object permanence” challenges (e.g. out of sight out of mind), we needed to be multi-platform so when you finish a task or goal or encounter a challenge, regardless of if you’re near your laptop or phone, you can check it off & ping your coach right away,
(2) members used reflection modules (e.g. after each task, you’re prompted to reflect on what worked and didn’t work, and it informs your coach) more thoroughly than we originally anticipated, and web allows for deeper reflection and typing,
(3) overarching coaching goals were often forgotten during the day-to-day, and the web makes it easier to use visual cues to keep goals top of mind for motivation,
(4) many of our members struggle with phone addiction and driving members to the mobile app ended them up in Tiktok/IG, whereas the web app offers a focused environment to get in their “coaching zone”.<p>Our new web app was designed alongside over 1,200 members, 22 coaches, countless hours of testing and iterating. We’re excited (but nervous!) to unveil this new version. If you have ADHD (or think you do), we’d love for you to check out our platform and give us critical feedback (or positive reinforcement!). It’s a super streamlined and ADHD-friendly signup process and in honor of our web launch and back to school/work, the first month is 30% off.<p>Our pricing: $115/mo. for Essentials plan (15-min weekly sessions), $230/mo. for Standard plan (30-min weekly sessions), $345/mo. for Immersive plan (45-min weekly sessions); all plans additional 30% off first month, HSA/FSA-eligible.<p>We know these prices are expensive for many people with ADHD and we’re committed to bringing costs down over time. It’s more affordable than what many people are paying for coaches, but the fact that we’re relying on humans, and not going the “we can automate all this with AI” route, puts a floor on how low the costs can drop. That said, here are some actions we’re taking to drive down costs for those who need it: (1) we offer needs-based scholarships and aim to have 5% of members on them at any time, (2) we often run fully sponsored scholarships with our partners—over 40 full ride scholarships and 100 group coaching spots have been disbursed alongside Asian Mental Health Project, government of Canada, and more, and (3) we have aligned our coaching model alongside Health & Wellness Coaching, which is expected to be reimbursed in 2024. If you have ideas or expertise here, please reach out to me directly at [email protected].<p>On behalf of our small but mighty & passionate Shimmer team, I’m excited for the Hacker News community to share your thoughts, feedback, and ideas. If you feel comfortable, I’d also love to hear your personal ADHD story and what has worked / hasn’t worked for you.<p>Co-founders Christal & Vikram
Upvote: | 229 |
Title: Hello, Hacker News! I'm David, cofounder of Release (YCW20). Introducing Release AI, a tool designed to empower users with instant access to DevOps expertise, all without monopolizing the valuable time of our experts. Developed with the developer and engineer community in mind, Release AI takes the power of OpenAI's cutting-edge GPT-4 public LLM and augments it with DevOps knowledge.<p>In its initial phase, Release AI offers "read-only" access to both AWS and Kubernetes. This means you can engage in insightful conversations with your AWS account and K8s infrastructure effortlessly. Looking ahead, our roadmap includes plans to integrate more tools for commonly used systems. This will enable you to automate an even broader array of your daily tasks.<p>If you would like more info you can check out our YC launch (it has more details and screencasts): <a href="https://www.ycombinator.com/launches/JI1-release-ai-talk-to-your-infrastructure">https://www.ycombinator.com/launches/JI1-release-ai-talk-to-...</a><p>Our quickstart guide: <a href="https://docs.release.com/release-ai/quickstart">https://docs.release.com/release-ai/quickstart</a><p>Sign up and use it: <a href="https://beta.release.com/ai/register">https://beta.release.com/ai/register</a><p>Please give it a try! We would love your feedback as we are enhancing Release AI; reach out to us with any feature requests or crazy ideas that Release AI could do for you. Feel free to email me at [email protected] or leave a comment, looking forward to chatting with you.<p>Join the conversation in our Slack community and discover the future of DevOps with Release AI!
Upvote: | 143 |
Title: Throwaway because I'm pretty active here.<p>I'm so depressed and lost, friends. This has not been an Incredible Journey.<p>About two years ago I was raked over the coals and charged with a white collar computer crime. It was highly-publicized and described in a less shimmering light than what actually happened, as most press releases by the justice department are. My current arrangement with the government involves "special projects" on an as-needed basis, which is why I’m not incarcerated.<p>I was employed throughout the turmoil — being charged (though, never indicted), and ultimately pleading guilty to a single count of computer intrusion. (In this case, computer intrusion was defined by a cURL request, changing a single query param; no other charges were pursued.) My employer knew the details, kept me on as long as they could: they are currently operating in a shell capacity as of late Q2 due to being unable to raise a round.<p>During my search for work I’ve always disclosed that I have baggage and can’t pass a background check. Even with this, I’ve had offers put on the table, only for them to be terminated or rescinded at some point. It's not because I'm not telling the whole truth: when people ask, I tell; if they just say "oh what'd you do" and I say "well, according to the government, this, but this is the real story" it's found to be fascinating, sad, and annoying at the same time.<p>All I know is to be transparent with people, so manipulating the story or the details is difficult for me — I have autism, and my entire existence lives to be transparent and logical because that's just how I'm wired.<p>I really don’t know what to do at this point. I have rent that’s due, and nothing to fall back on: no assets to cash out (legal defense cleaned me out completely), I live off ramen like a founder, I have the bare minimum everything already. I don’t have any family to go to — my mother, and my only, passed away about this time a few years ago. My friends: well, I am usually the person that is supporting them when they are in crisis.<p>I am not used to being in crisis.<p>Being autistic makes it particularly difficult because I'm already so awful at advocating for myself. Perhaps the most frustrating part is that I outwardly appear neurotypical.<p>Freelancing sites like Upwork are probably my next go-to, even if it rattles my pride and my usual rate, at least I won’t be homeless and starving. I thought I’d post here in case anyone has any resources or ideas I haven’t thought of yet.<p>For the last 15 years I’ve been building MVPs and shaping up Ruby/Rails applications, working at some big-name Rails shops, many smaller YC companies, and everything in between. I'm active and relatively popular in the open-source Ruby/Rails world, and my side projects here have been met with great admiration. But I'm still in this position. With the tech stack I've pigeoned myself into, I’m unfortunately a one-trick in regard, but at this point I’ll take anything I can get. My next stop is retail, if I can even pass a background check.<p>Redacted resume, if anyone can help: https://docs.google.com/document/d/1gZ_-spX5F2sIyJUuL7firQbWK9xNrSYZ69O_xackCjQ (context: https://news.ycombinator.com/item?id=37265203)<p>My email, for this journey, is [email protected]
Upvote: | 63 |
Title: I had no browsers open. I was minding my own business. Windows 11 gave me an unsolicited toaster popup, with the default action being to change Chrome's default search engine to Bing.<p>This is an atrocious dark pattern and a huge overreach for an operating system, whose job it should be to stay out of my way and run what I tell it to.<p>But more than that, this strikes me as directly anti-competitive behavior. I would appreciate views on this by people having more educated opinions on this than myself.
Upvote: | 76 |
Title: Hi HN,<p>We have fine-tuned CodeLlama-34B and CodeLlama-34B-Python on an internal Phind dataset that achieved 67.6% and 69.5% pass@1 on HumanEval, respectively. GPT-4 achieved 67%. To ensure result validity, we applied OpenAI's decontamination methodology to our dataset.<p>The CodeLlama models released yesterday demonstrate impressive performance on HumanEval.<p>- CodeLlama-34B achieved 48.8% pass@1 on HumanEval<p>- CodeLlama-34B-Python achieved 53.7% pass@1 on HumanEval<p>We have fine-tuned both models on a proprietary dataset of ~80k high-quality programming problems and solutions. Instead of code completion examples, this dataset features instruction-answer pairs, setting it apart structurally from HumanEval. We trained the Phind models over two epochs, for a total of ~160k examples. LoRA was not used — both models underwent a native fine-tuning. We employed DeepSpeed ZeRO 3 and Flash Attention 2 to train these models in three hours using 32 A100-80GB GPUs, with a sequence length of 4096 tokens.<p>Furthermore, we applied OpenAI's decontamination methodology to our dataset to ensure valid results, and found no contaminated examples.<p>The methodology is:<p>- For each evaluation example, we randomly sampled three substrings of 50 characters or used the entire example if it was fewer than 50 characters.<p>- A match was identified if any sampled substring was a substring of the processed training example.<p>For further insights on the decontamination methodology, please refer to Appendix C of OpenAI's technical report.<p>Presented below are the pass@1 scores we achieved with our fine-tuned models:<p>- Phind-CodeLlama-34B-v1 achieved 67.6% pass@1 on HumanEval<p>- Phind-CodeLlama-34B-Python-v1 achieved 69.5% pass@1 on HumanEval<p>Note on GPT-4<p>According to the official technical report in March, OpenAI reported a pass@1 score of 67% for GPT-4's performance on HumanEval. Since then, there have been claims reporting higher scores. However, it's essential to note that there hasn't been any concrete evidence pointing towards an enhancement in the model's coding abilities since then. It's also crucial to highlight that these elevated figures lack the rigorous contamination analysis that the official statistic underwent, making them less of a reliable comparison. As a result, we consider 67% as the pass@1 score for GPT-4.<p>Download<p>We are releasing both models on Huggingface for verifiability and to bolster the open-source community. We welcome independent verification of results.<p>Phind-CodeLlama-34B-v1: <a href="https://huggingface.co/Phind/Phind-CodeLlama-34B-v1" rel="nofollow noreferrer">https://huggingface.co/Phind/Phind-CodeLlama-34B-v1</a><p>Phind-CodeLlama-34B-Python-v1: <a href="https://huggingface.co/Phind/Phind-CodeLlama-34B-Python-v1" rel="nofollow noreferrer">https://huggingface.co/Phind/Phind-CodeLlama-34B-Python-v1</a><p>We'd love to hear your thoughts!<p>Best,<p>The Phind Team
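For clarity, the decontamination check is roughly equivalent to the sketch below (an illustration of the described methodology, not the exact code we ran):

```python
# Sketch of the decontamination check described above: sample up to three
# random 50-character substrings from each evaluation example (or use the
# whole example if shorter), and flag a training example if any sampled
# substring appears in it verbatim. Illustrative only.
import random

def sample_substrings(text, k=3, length=50, seed=0):
    if len(text) <= length:
        return [text]
    rng = random.Random(seed)
    starts = [rng.randrange(0, len(text) - length + 1) for _ in range(k)]
    return [text[s:s + length] for s in starts]

def is_contaminated(train_example, eval_examples):
    for ev in eval_examples:
        for sub in sample_substrings(ev):
            if sub in train_example:
                return True
    return False

# Usage: drop any training example for which is_contaminated(...) is True.
```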
Upvote: | 813 |
Title: I wanted to download some lectures [1] off youtube as I'm going to be without internet for a long period, but A) just apt install gets me a version which is too old, and I get ERROR: Unable to extract uploader id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. and B) going to the update URL I see Access denied; Due to a ruling of the Hamburg Regional Court, access to this website is blocked.<p>So, what's the situation with youtube-dl? How do I get it working? As a separate question, how else can I download something off youtube?<p>I guess last resort I'll have to dig into the python to figure out why that regex is failing and what else I can do about it, but that's not sustainable.<p>[1] if anyone is curious, just discovered this from HN: https://www.youtube.com/watch?v=tNOu-SEacNU&list=PLDcUM9US4XdPz-KxHM4XHt7uUVGWWVSus&index=5
Upvote: | 97 |
Title: I myself am a freelance developer and I regularly work with non-technical founders to bring their ideas to life.<p>I feel like matching the expectations of your client or partner is crucial in the beginning of a project.<p>It’s important to lay out the groundwork, process & technologies while not including the tedious details. Communication is key etc etc<p>I’m keen to uncover my blind spots and see what other HNers with experience in similar circumstances think is most important when working with non-technical business owners and founders
Upvote: | 41 |
Title: Hi all!<p>I'm going into my freshman year, and figured that the best way to prepare for the intro to programming Racket course would be to implement my own garbage-collected, dynamically typed, functional programming language in C ;)<p>Anyways... here's the repo:
https://github.com/liam-ilan/crumb<p>I started learning C over the summer, so I still have a whole lot to learn... Any feedback would be greatly appreciated! :D
Upvote: | 138 |
Title: Hi there HN community. I'm excited to show my personal project -- an open-source code and hardware approach that connects my 1980's TRS-80 Model III computer to an OpenAI server. The TRS-80 can hold a chat session with OpenAI, and it's all very retro feeling. There were challenges along the way, as an approach to interfacing ancient hardware with a modern software interface wasn't immediately available. Please see <a href="https://github.com/druid77/trs-gpt">https://github.com/druid77/trs-gpt</a> for all the details, and let me know if you have any ideas for improvements. For example, adding support for voice.
Upvote: | 102 |
Title: I created this to fill a need, accessing those silly LLMs from the menu bar instead of a web browser or command line.<p>It relies 100% on Ollama, but gives you an easy way to make queries against multiple models, and provides a few sample modelfiles for you.<p>open source, of course.<p><a href="https://github.com/JerrySievert/Dumbar">https://github.com/JerrySievert/Dumbar</a> to skip the blog post and go straight to the code and release.
Upvote: | 60 |
Title: Hello, my Stripe account got banned almost 1 week ago. I posted about it here: https://news.ycombinator.com/item?id=37268953, and I have tried to contact support several times. Every time they tell me that someone will contact me via email. I also tried the heretohelp@ email, but nothing. Does anyone know a way to get an answer? It seems as if they don't care
Upvote: | 49 |
Title: Underrated music, books, and movies that you'd like to share? This thread may be highly biased, but I'm confident that we'll find some great discoveries for a lazy Sunday.<p>I keep two eclectic lists of "interesting" stuff:<p>https://www.slowernews.com/s/underrated<p>https://www.slowernews.com/s/timeless
Upvote: | 51 |
Title: Graphweaver is an open-source GraphQL API Server that can connect many data sources to create a single API. Create a headless CMS, API Gateway, BaaS or use it as a BFF.
Upvote: | 74 |
Title: I'm preparing to shift to a fully remote work setup and want to maximise its effectiveness. So I would like to ask you, which work-from-home purchases had the most significant positive impact on your comfort, productivity, and well-being?
Upvote: | 40 |
Title: Microsoft Edge version 116.0.1938.62 has added rounded corners to <i>all</i> web pages, and there is no way to opt out.<p>Edge has been interfering with the web application experience from the start, with unwanted hover icons and menus that cannot be disabled by the application. (For example, if you hover over an image, you get Visual Search and Settings icons.) Super annoying.<p>But this latest change takes the cake. It is ugly. If the page has a scrollbar at document level, the web page will get a rounded corner only at the top left, not the top right, thus losing symmetry. For example, go to CNN or GitHub. If the web page has dark mode, you get a distracting white border around the page, for example on chat.openai.com.<p>Edge is breaking this cardinal rule: Browser chrome belongs to the browser, but the content area belongs to the application alone. Edge should stay out of the content area, or at least should allow applications to turn off unwanted buttons and menus, thus retaining full control of the application experience.
Upvote: | 91 |
Title: Note: this is different from a few previous Ask HN posts where the impacted person was technically apt (if not a geek). Here this is for a typical older person whose technical capacities are small.<p>My mother has a degenerative illness that is slowly going to lower her eyesight (starting from the center of the retina). We are not sure how far this is going to go.<p>She is an avid reader (several books every month) and this is what I would like to address in the first place. Then there are the other aspects that are less technical but are <i>very much</i> welcome as well.<p>She reads on a Kobo (or similar device) and the fact that she can make the fonts bigger is already a good thing. She will probably continue to do so (hopefully to get to the point where she will have to spell each word...).<p>She also solves quite a lot of "literature-oriented" quizzes (similar to crosswords) - today this is on paper but I will need to find a way for her to move online.<p>I would appreciate any tricks or solutions for non-technical people like her that could make her life easier.
Upvote: | 249 |
Title: I've dealt with a lot of developers in the past ~4 years and I've noticed a lot of them didn't have the "basic" theoretical concepts that I considered widespread and universal about Software Engineering and Computer Science. For example, engineers putting a lot of logic in their unit tests, or choosing the wrong data structures, etc.<p>Maybe it's because of the advent of self taught engineers? Or maybe these concepts are just "too boring"? I'd like to know if I'm biased or not.<p>EDIT: I'm not implying this is a "bad thing". Just trying to assess if it's a reality or it's just me. And I'm very supportive of self taught engineers! I think it's great that people can build their own careers by themselves. We might just need to adjust our field a bit based on how it evolves.
Upvote: | 60 |
Title: Hi HN, we’re Thomas, Martin, Théo, and Josef, cofounders at HyLight (<a href="https://www.hylight.aero">https://www.hylight.aero</a>). We build and operate autonomous hydrogen airships to inspect energy infrastructure like pipelines and power lines. Here’s a video: <a href="https://www.youtube.com/watch?v=SuW5ur8ER7A">https://www.youtube.com/watch?v=SuW5ur8ER7A</a>.<p>Energy infrastructure operators (utility companies) struggle with conducting precise inspections of their assets. It is extremely important for them because they need to make sure that the network is in good condition to avoid outages and leaks. This infrastructure has big impacts when it is malfunctioning. For instance, methane leaks from oil and gas infrastructure represent 4% of global carbon emissions every year (and approx. $7B worth of losses in Northern America and Europe).<p>Gas and power networks are physically so large (more than 47M miles globally) that millions of miles of inspections must be done each year. Currently the most used solution is helicopters (used on 90% of inspections). Helicopters are dangerous, have a high carbon footprint and are costly to use. Plus, helicopter service providers have to go as fast as they can to save their margins. So the data quality is not optimal at all.<p>Our airship (the “HyLighter”) does exactly what is needed to gather a lot of precise information from the air. It flies slowly and can hover almost indefinitely, it consumes little energy, allowing great range for inspections. It can simultaneously mount all the sensors that are required for the inspections (HQ cameras, LiDAR, infrared, leak detection devices...). Plus, due to its size, we can write stuff on it to tell nearby residents what we are doing.<p>How does it work? It is basically a drone airship. We use a lighter-than-air gas in the envelope (helium or hydrogen) for buoyancy. We have a H2 tank and fuel cell transforming H2 into power for all the systems. In terms of engines we built 2 gyros (gyroscopes engines) at the rear and front of the airship. They allow us to have vectorial thrust and therefore to be extremely maneuverable. The sensors are fixed under the HyLighter on gimbals and can easily “follow” the linear infrastructure that is inspected.<p>When we began working on H2 and drones, we quickly realized that there was a problem with the weight of H2 tanks. The tanks have to be very strong to contain enough H2 which is extremely low in density. Then, we realized that we could use the "problem" inherent to H2 (very low density) to our advantage. We simply needed to use H2 as a lifting gas and power source. The envelope of the airship becomes the tank!<p>The HyLighter is more efficient than other current solutions. As mentioned, helicopters are the most-used at present. Compared to those, we use less energy, emit less GHG, have better data quality, and less risks for human beings as the HyLighter is unmanned. Compared to quad drones, we have longer flight time and more payload, allowing for various simultaneous sensors collecting data—we can simultaneously mount all the sensors that are required for the inspections (HQ cameras, LiDAR, infrared, leak detection devices). Compared to plane drones, our flight speed is lower so we can collect better data and less ground risk. Compared to satellites, we have a lot more precision (actually they can't even be used for most of our operational use cases).<p>The genesis of HyLight is that Théo wanted to work on new uses of H2 and drones when he was at school. 
He was joined by Martin who studied in the same engineering school, then by Josef who's Martin's BFF and then by me. I met Théo when we finished our studies in UC Berkeley last year and we all launched HyLight together. As we kept working on it, we gained more and more interest from pipeline and power line operators and we realized that there was indeed a big problem.<p>HyLight is at the stage where we have our first POCs. Our first paid flight is in the coming weeks. Our business model is straightforward: inspection as a service. We charge a price per kilometer depending on length, location and type of data collected.<p>I discovered HN not too long ago and I’m impressed with the engagement in tech and innovation of this community! I’m very interested to get your opinion on HyLight. Maybe some people work in the energy industry and know a thing or two about energy infra inspections that we could learn from? If so please tell us what you think, be it red flags or positive stuff! Also, as individuals, how would you feel seeing a big drone airship flying in the air near your home? We look forward to any and all comments!
Upvote: | 192 |
Title: Hi folks! I’m Eric Rowell, and I’m really excited to share what our startup has been working on: <a href="https://www.second.dev">https://www.second.dev</a>. Second automates codebase migrations and upgrades for developers using AI agents and static analysis. You log in, connect Second to your GitHub repo, run a module, and get a pull request.<p>Here’s a demo video: <a href="https://www.youtube.com/watch?v=d5IldYeCPBY">https://www.youtube.com/watch?v=d5IldYeCPBY</a>.<p>We believe that most developers want to spend their time innovating and creating new software. A lot of time at large companies is spent on basic codebase maintenance instead. Not all of this is automatable, but some is! And the technology to support it is developing rapidly.<p>Unlike generic AI agents, our AI agents specialize in codebase migrations and upgrades. The agents run modules which include a procedural plan of attack, and execute prompts using RAG (Retrieval-Augmented Generation) to utilize the most up-to-date context to do the job, including documentation and code examples, crawled with LangChain and stored in vector DBs. Our agents also reshape directory structures, resolve dependencies, and update build systems. The agents can run in our Second Cloud, in your own AWS or Azure VPC, or they can run fully on-premise with open source LLMs.<p>Today, we specialize in Angular → React, CRA → Next, JS → TS, and Upgrade Next. We’re building new modules every week, and we plan to open up an SDK to enable the Second community to build their own modules too.<p>You can try Second for free on codebases up to 2MB. For larger codebases, we charge $10/MB for the first module run, and then after that you can run it as many times as you like for free.<p>Please try it out and let us know what you think. We are obsessed with codebase migrations and upgrades, so please let us know how we can help!
Upvote: | 43 |
Title: I spent about a decade on a different career path but have always done small Python projects for fun (making games, writing bots for games, doing coding challenges). My current career is coming to an end and I have about a year to prepare for a programming job hunt. I was thinking of doing a portfolio piece since I don't have any real credentials, something like a full-stack website that does some memey GPT stuff since that sounds fun, but I'd like to hear any other ideas or advice
Upvote: | 41 |
Title: We are currently working on moving our site over to a react front end and adding more features to the site. I wanted to highlight one we are quite proud of - The advanced bodymap. Click the advanced button and get specific exercises that work that muscle. We are hoping to have an even bigger breakdown in the future when we build our "recovery" section of the site.<p>Happy to answer any questions and take feature requests.
Upvote: | 44 |
Title: Or convince me otherwise. I thought this would lead to an interesting discussion. I'm a back-end developer, currently constructing a front-end app in ReactJS. Ok, I must say I'm a newbie when it comes down to ReactJS. But as an example, it took me 20 minutes to realize that a table with 1000 elements, each having edit buttons, led to a slow response when displaying a modal. Now, I'm tasked with finding an effective solution to address this issue—whether it's through pagination, using react-window, or exploring other alternatives. And I haven't even found the cause, whether it's the number of event handlers, the size of the virtual DOM, or something else. Back-end feels so much more straightforward.
Upvote: | 45 |
Title: I have, over the years, watched as our TV has become more intrusive and abusive. Every software update takes it up a notch.<p>As I say in a Reddit post [0], I didn't go to the store to buy a remotely controlled ever-changing remotely-controlled digital advertising platform to hang in our family room.<p>I didn't go to the store to buy a data gathering device to plant into my home.<p>I didn't go to the store to provide a TV manufacturer with, as they say in their newest terms of service:<p>"grant and agree to grant to VIZIO and its affiliates and licensees, a non-exclusive, transferable, revocable, royalty-free right and license (with right to sublicense) to use, reproduce, publicly display, publicly perform, adapt, collect, modify, delete from, distribute, transmit, promote and make derivative works of the VIZIOgram Content, in any form"<p>This device, which cost well over $2,000 is the source of stress and consternation.<p>When family and friends come over for a visit, it is impossible to control what they click on. The "home page" is full of ads.<p>They have a service through which you upload your family photos into their cloud service. The above license is just one aspect of what they grab from consumers without consent.<p>I say "without consent" because of several realities.<p>- They do not make you read and sign anything when you buy the product. Go to the store to buy a TV. You walk away with a big box and no disclosures of any kind. The plastic bag they ship it in has more visible disclosures than the device itself.<p>- How about the terms of use/service? Nobody reads 34 pages of nonsense (assuming they can find it).<p>- If you have the TV installed by the shop where you bought it, they set it up and walk away. You never agreed to anything.<p>- They constantly change and update firmware and TOU/S. You never agree to anything.<p>In short, they take advantage of consumers and tend to become abusive about just how far they push it. This multi-thousand-dollar TV is now a fully remote-controlled digital ad serving system in my family room. That is not what I signed-up for and not what I bought at all.<p>I just want a TV.<p>What about watching network TV with ads? Isn't that an advertising platform in your home?<p>Sure, except that I can choose to tune in or not. And the TV station doesn't modify the software in my TV to deliver more and more ads into my screen.<p>Interestingly enough, in a prior job I designed image processing boards to drive LCD and other display modules. Part of me has been thinking it is time to engineer a "TV Lobotomizer" board that can be used to modify these TV's and completely remove all such capabilities while (via open source) giving the consumer full control of what happens with their TV.<p>Sorry if that came off as a huge rant. With the last software update this thing has just gone over a threshold that is simply intolerable. Actively looking for solutions.<p>I think this kind of consumer protection has to be undertaken by government. I am not one to automatically reach for that kind of a solution. However, it has become beyond clear that TV manufacturers are perfectly happy abusing consumers as far as they can go. The only expedient way to fix this might be some kind of a law that forces full user control of every single feature on a TV and an absolute iron-clad privacy requirement. 
If a customer chooses to pierce that protection, they should be free to do so after informed consent and have every right to take it all back.<p>[0] https://www.reddit.com/r/VIZIO_Official/comments/162xuop/p75qxh1_ready_to_hire_an_attorney/
Upvote: | 49 |
Title: Hey all<p>Sharing a new piece of work I've been doing with a friend. Mu is a new micro web app platform which enables building and sharing apps instantly with storage, auth and payments built in. Apps are single file, built in the browser and rendered as an iframe. They're "micro" because they're quite literally tiny single purpose utilities like a hackernews reader or old school guest book. It's mostly at this point something that scratches a personal itch. Making app development super simple and lightweight. Sort of like living GitHub gists. And trying to build a simpler, cleaner place to consume the web. Right now nothing more than a cool hack I'm sharing. Feedback obviously welcome.<p>Cheers
Asim
Upvote: | 73 |
Title: I started programming in ~2013 in JavaScript. I’ve since learned and tried a handful of languages, including Python, but JavaScript was always my favorite. Just within the last year I learned Ruby, and I was blown away by how fun and easy to use it is. At the present time, I’m starting all my new projects in Ruby.<p>My impression is that in the ‘00s, Python and Ruby were both relatively new, dynamically typed, “English-like” languages. And for a while these languages had similar popularity.<p>Now Ruby is still very much alive; there are plenty of Rails jobs available and exciting things happening with Ruby itself. But Python has become a titan in the last ten years. It has continued to grow exponentially and Ruby has not.<p>I can guess as to why (Python’s math libraries, numpy and pandas make it appealing to academics; Python is simpler and possibly easier to learn; Rails was so popular that it was synonymous with Ruby) but I wasn’t paying attention at that time. So I’m interested in hearing from some of the older programmers about why Ruby has stalled out and Python has become possibly the most popular programming language (when, in my opinion, Ruby is the better language).
Upvote: | 578 |
Title: Hi HN! Langfuse is OSS observability and analytics for LLM applications (repo: <a href="https://github.com/langfuse/langfuse">https://github.com/langfuse/langfuse</a>, 2 min demo: <a href="https://langfuse.com/video">https://langfuse.com/video</a>, try it yourself: <a href="https://langfuse.com/demo">https://langfuse.com/demo</a>)<p>Langfuse makes capturing and viewing LLM calls (execution traces) a breeze. On top of this data, you can analyze the quality, cost and latency of LLM apps.<p>When GPT-4 dropped, we started building LLM apps – a lot of them! [1, 2] But they all suffered from the same issue: it’s hard to assure quality in 100% of cases and even to have a clear view of user behavior. Initially, we logged all prompts/completions to our production database to understand what works and what doesn’t. We soon realized we needed more context, more data and better analytics to sustainably improve our apps. So we started building a homegrown tool.<p>Our first task was to track and view what is going on in production: what user input is provided, how prompt templates or vector db requests work, and which steps of an LLM chain fail. We built async SDKs and a slick frontend to render chains in a nested way. It’s a good way to look at LLM logic ‘natively’. Then we added some basic analytics to understand token usage and quality over time for the entire project or single users (pre-built dashboards).<p>Under the hood, we use the T3 stack (Typescript, NextJs, Prisma, tRPC, Tailwind, NextAuth), which allows us to move fast + it means it's easy to contribute to our repo. The SDKs are heavily influenced by the design of the PostHog SDKs [3] for stable implementations of async network requests. It was a surprisingly inconvenient experience to convert OpenAPI specs to boilerplate Python code and we ended up using Fern [4] here. We’re fans of Tailwind + shadcn/ui + tremor.so for speed and flexibility in building tables and dashboards fast.<p>Our SDKs run fully asynchronously and make network requests in the background. We did our best to reduce any impact on application performance to a minimum. We never block the main execution path.<p>We've made two engineering decisions we've felt uncertain about: to use a Postgres database and Looker Studio for the analytics MVP. Supabase performs well at our scale and integrates seamlessly into our tech stack. We will need to move to an OLAP database soon and are debating if we need to start batching ingestion and if we can keep using Vercel. Any experience you could share would be helpful!<p>Integrating Looker Studio got us to first analytics charts in half a day. As it is not open-source and does not work with our UI/UX, we are looking to switch it out for an OSS solution to flexibly generate charts and dashboards. We’ve had a look at Lightdash and would be happy to hear your thoughts.<p>We’re borrowing our OSS business model from Posthog/Supabase who make it easy to self-host with features reserved for enterprise (no plans yet) and a paid version for managed cloud service. Right now all of our code is available under a permissive license (MIT).<p>Next, we’re going deep on analytics. For quality specifically, we will build out model-based evaluations and labeling to be able to cluster traces by scores and use cases.<p>Looking forward to hearing your thoughts and discussion – we’ll be in the comments. 
Thanks!<p>[1] <a href="https://learn-from-ai.com/" rel="nofollow noreferrer">https://learn-from-ai.com/</a><p>[2] <a href="https://www.loom.com/share/5c044ca77be44ff7821967834dd70cba" rel="nofollow noreferrer">https://www.loom.com/share/5c044ca77be44ff7821967834dd70cba</a><p>[3] <a href="https://posthog.com/docs/libraries">https://posthog.com/docs/libraries</a><p>[4] <a href="https://buildwithfern.com/">https://buildwithfern.com/</a>
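As a rough illustration of the “fully asynchronous, never block the main execution path” design mentioned above, here is a minimal sketch of an SDK that buffers events in memory and ships them in batches from a background thread. This is not the actual Langfuse SDK or API: the class name, endpoint, and payload shape are invented for illustration.

```python
import atexit
import queue
import threading
import requests

class AsyncTraceClient:
    """Buffers trace events and ships them in batches off the main thread."""

    def __init__(self, endpoint="https://example.invalid/ingest",
                 flush_at=20, flush_interval=1.0):
        self.endpoint = endpoint            # placeholder URL, not a real API
        self.flush_at = flush_at
        self.flush_interval = flush_interval
        self.events = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()
        atexit.register(self.flush)         # best-effort drain on shutdown

    def trace(self, name, **fields):
        # Hot path: enqueue only, never do network I/O here.
        self.events.put({"name": name, **fields})

    def _worker(self):
        batch = []
        while True:
            try:
                batch.append(self.events.get(timeout=self.flush_interval))
            except queue.Empty:
                pass
            if batch and (len(batch) >= self.flush_at or self.events.empty()):
                self._send(batch)
                batch = []

    def _send(self, batch):
        try:
            requests.post(self.endpoint, json={"batch": batch}, timeout=5)
        except requests.RequestException:
            pass  # observability must never break the application

    def flush(self):
        remaining = []
        while not self.events.empty():
            remaining.append(self.events.get())
        if remaining:
            self._send(remaining)
```

Usage would look like `client = AsyncTraceClient(); client.trace("llm-call", model="gpt-4", latency_ms=840)`: the hot path only enqueues, and batching keeps the per-call network overhead near zero.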
Upvote: | 143 |
Title: I got an email from Google Fi yesterday:<p>> Notice about your CPNI settings<p>> [...] Starting today, we will opt you into the sharing with and use of your CPNI by Alphabet affiliates to receive this information. Your opt in will go into effect 30 days from the date you receive this email.<p>> You’re not required to stay opted in. It’s your right and our duty under federal law to protect the confidentiality of your CPNI. If you prefer to opt out of letting Fi use and share your CPNI with Alphabet and Google services, you can do so by replying here or via Fi’s privacy and security settings at any time (instructions here). Opting out of CPNI sharing won’t affect your account or your ability to use any of Google Fi Wireless’ services. If you do not choose to opt-out within 30 days of receiving this notice, Fi will assume your approval to share your CPNI with other Google services and our affiliates.<p>Dear Google, https://www.google.com/search?q=opt-in
Upvote: | 81 |
Title: Hi folks,<p>My friend Sami and I recently built Vizly, a Mac application that allows anyone to query their databases using plain English.<p>Vizly is built on Llama 2, llama.cpp, and runs fully on-prem (edit: meaning everything is local and your data never leaves your own computer).<p>We are running two Llama models: one for natural language to SQL translation, and another that uses the results from the SQL to render visualizations. That means there are no external APIs and all the AI models are running locally on your MacBook.<p>We tried to make Vizly very easy to share as well. Every Vizly instance creates a share link that can be accessed by anyone on the same network as you. Just send the share link to anyone on the same network and they will immediately be able to run AI-powered queries, hosted from your device.<p>Vizly was previously a hosted solution for querying CSVs; now we are on-prem and focused specifically on databases.<p>We'd love it if you could try it out and give us any feedback!
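Not their code, but for readers curious what a fully local, two-model pipeline like this can look like, here is a rough sketch using the llama-cpp-python bindings and SQLite. The model paths, prompts, and output format are placeholder assumptions, and a real app would validate the generated SQL before running it.

```python
import sqlite3
from llama_cpp import Llama

# Hypothetical local models; the real app may use different checkpoints/prompts.
sql_model = Llama(model_path="models/nl-to-sql.gguf")
viz_model = Llama(model_path="models/sql-to-viz.gguf")

def answer(question: str, db_path: str = "data.db") -> dict:
    # 1) Natural language -> SQL, generated entirely on-device.
    sql = sql_model(
        f"Translate the question into SQLite SQL.\nQuestion: {question}\nSQL:",
        max_tokens=256, stop=[";"],
    )["choices"][0]["text"].strip() + ";"

    # 2) Run the SQL locally; a real app should validate/sandbox it first.
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(sql).fetchall()

    # 3) A second model turns the result set into a chart description.
    chart_spec = viz_model(
        f"Given these rows: {rows[:20]}\nDescribe a suitable chart as JSON:",
        max_tokens=256,
    )["choices"][0]["text"]
    return {"sql": sql, "rows": rows, "chart_spec": chart_spec}
```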
Upvote: | 104 |
Title: Hey HN. Over the summer I built Open Interpreter, a CLI that lets you ask Code-Llama or GPT-4 to write/run code.<p>It runs multiple languages (Python, Shell, HTML/CSS, and Node.js right now), then sends the output back to the language model.<p>It’s essentially an open-source, local implementation of OpenAI’s Code Interpreter. No limits on file size, runtime timeouts, or web access.<p>Everything is streamed beautifully, rendered with Markdown, and syntax highlighted.<p>Try it out and let me know what you tried! - Killian
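For a sense of how this kind of tool works under the hood, here is a toy version of the generate-execute-feed-back loop, not Open Interpreter's actual implementation. `ask_model` is a placeholder for whatever backend is plugged in (local Code-Llama, GPT-4 via API, etc.), and only Python execution is sketched.

```python
import subprocess
from typing import Callable

def run_python(code: str) -> str:
    # Execute the proposed code in a subprocess and capture whatever it prints.
    proc = subprocess.run(
        ["python", "-c", code], capture_output=True, text=True, timeout=60
    )
    return proc.stdout + proc.stderr

def interpreter_loop(task: str, ask_model: Callable[[list], str],
                     max_turns: int = 5) -> None:
    # `ask_model` takes the running message history and returns a Python snippet.
    messages = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        code = ask_model(messages)           # model proposes code
        output = run_python(code)            # run it on the local machine
        print(output)
        messages.append({"role": "assistant", "content": code})
        messages.append({"role": "user", "content": f"Output:\n{output}"})
```

Feeding the captured output back as the next user turn is what lets the model notice errors and correct its own code across iterations.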
Upvote: | 82 |
Title: The organization of my digital assets is rather ... haphazard! Some files (created, downloaded, received from others) on one computer. Some on another computer. Some files are backed up, and some of those backups are not regular. Photos are on the phone, but not on the computer. The phone syncs photos to a large cloud provider, but photos deleted from the phone don't get deleted in the cloud. Videos occasionally get copied to a USB drive connected to the TV. Some data is shared with others via cloud providers, other data is not. Files on the various computers are disorganized across folders, filenames, and multiple versions in different places... and so on.<p>I'm imagining that many of you have clean, well-organized, backed-up digital assets across a variety of computers and cloud providers. Curious how you do it in terms of: i) Motivation, ii) Organization schemes (e.g. Johnny Decimal? PARA?), iii) Tools and processes, potentially across file types (e.g. photos vs. documents, files created by yourself vs. obtained elsewhere), iv) Any sharing you do (e.g. with spouse, family, friends), v) Preventing entropy from growing, vi) Any other points/topics to be considered?<p>Please share your data organization best practices, warnings, success stories, pitfalls, etc.
Upvote: | 41 |
Title: Hi, I'm currently learning HTML, CSS and JS but I'd like to go more in depth. I love learning stuff from books, even if it's a bit old-fashioned, because I can always take a brief look at what I need rather than having to search for basic information online among several crappy SEO websites.<p>Can you suggest some books? I'd prefer advanced ones, as I'm not a newbie.<p>PS: I know you only learn things by doing; the book would be a support. You can also suggest books related to networking, as that's the next step :)
Upvote: | 68 |
Title: I've been using this for about a year now - I parsed 6 months of my messages on Slack, found the most common phrases I use, and generated keyboard shortcuts for them.
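If you want to try something similar, the phrase-mining step can be as simple as counting n-grams over an export of your messages. A rough sketch, assuming the messages sit in a plain-text file with one message per line and that your text expander takes `;p1`-style triggers (both assumptions, not necessarily how the author did it):

```python
from collections import Counter
from itertools import islice

def ngrams(words, n):
    # Sliding windows of n consecutive words.
    return zip(*(islice(words, i, None) for i in range(n)))

def top_phrases(path: str, n: int = 3, k: int = 20):
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            words = line.lower().split()
            counts.update(" ".join(g) for g in ngrams(words, n))
    return counts.most_common(k)

for i, (phrase, freq) in enumerate(top_phrases("slack_export.txt"), start=1):
    # Map ;p1, ;p2, ... to the most frequent phrases in your text expander.
    print(f";p{i}\t{phrase}\t(used {freq} times)")
```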
Upvote: | 815 |
Title: Our app is designed to be used across the Asia Pacific.<p>We have members who follow Western naming conventions as well as members following common Asian naming conventions.<p>It turns out there can be a lot of variation in what the convention is.<p><a href="https://www.asiamediacentre.org.nz/features/a-guide-to-using-asian-names" rel="nofollow noreferrer">https://www.asiamediacentre.org.nz/features/a-guide-to-using...</a><p>How would you handle different naming conventions, so users see their name in the order they would like?<p>Family, Given<p>Given, Family
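One common approach (a sketch, not something prescribed by the linked guide) is to store the given name, the family name, and an explicit display-order preference chosen by the member, then format every display name from that preference:

```python
from dataclasses import dataclass

@dataclass
class Member:
    given_name: str
    family_name: str
    name_order: str = "given_first"   # or "family_first", chosen by the member

    def display_name(self) -> str:
        if self.name_order == "family_first":
            return f"{self.family_name} {self.given_name}"
        return f"{self.given_name} {self.family_name}"

print(Member("Mei", "Tanaka", "family_first").display_name())  # Tanaka Mei
print(Member("Alex", "Smith").display_name())                  # Alex Smith
```

Storing the preference explicitly avoids guessing order from locale or script, which tends to break for members who don't match their locale's default convention.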
Upvote: | 270 |
Title: I have dabbled with many resources before in hopes of learning to program and learning the basics of CS. I have looked at the intro course sequences of schools like MIT, CMU, Stanford, etc. Most have some of their resources locked down. I have dabbled with books and they felt like shit. Even the book Think Python by Allen Downey was all over the place.<p>The books and blogs at least taught me basic syntax constructs like variables, function definitions, loops, etc. But I couldn't understand how to package them up (compose them) to use them in solving problems.<p>Then I started studying through Berkeley's 3-course intro sequence: CS61A, CS61B, CS61C. They have all their materials in the open, and if you are following the most current iteration of a course, they even post solutions to the problems. It was a godsend for me.<p>Now I am confident enough to learn more CS topics using courses from CMU, MIT, Berkeley, Stanford, etc. The thing that was holding me back was a lack of confidence in programming and in really understanding what a program was doing.
Upvote: | 243 |
Title: Hello Hackers! One year ago we changed international transfers for consumers by charging only a fixed fee, making them on average 10x cheaper than Wise. Now we are thrilled to announce that we are launching “Atlantic Money for Business” to offer transfers of up to £/€1m for a fixed £/€3 fee at the current exchange rate. And while Revolut and Wise have recently raised their prices for business transfers by up to 50%, we enable savings of several thousand on every transfer. What are you waiting for?
Upvote: | 62 |
Title: Our founder woke up to a gut-wrenching email from Stripe today, reading "We're writing to you because, after conducting a routine review of your Stripe account for Integrate for Good (account ID: [**]), we've found that it presents a high level of risk for customer disputes." We're a very small, hyper local nonprofit and have been using Stripe without any significant issues for over four years. Radar has flagged only one charge in the past year, and we've had just a single dispute since opening our account.<p>We requested further review of our account and supplied Stripe with further information in the dashboard. Just 45 minutes later, we received a second email confirming their decision to close our account. I find it hard to believe that Stripe was able to "conduct another review of [our] account" in such a short span of time.<p>While trying to figure out what could possibly present "a high level of risk for customer disputes," only one recent, major incident comes to mind. About two months ago, we experienced a security breach where an unauthorized user gained access to the account owner's Stripe login. They attempted to send six large invoices, of which only two went through. We promptly resolved the issue by resetting passwords and multi-factor authentication, and we refunded the wrongful charges. We contacted Stripe support to ensure everything was in order.<p>Support was helpful, they said they would escalate the issue and ensure everything was fine. A few days later, the owner's login was disabled by the security team. Getting it enabled again took nearly two months of trying to get in touch with support. Stripe was emailing us, but the emails were going into the ether. The attack hit the owner's inbox as well, as rules were configured to delete all incoming emails from Stripe, something Stripe support caught!<p>Once we sorted out that mess, we discovered Stripe withheld over $12,000 in donations while the login was disabled. The funds were scheduled to be released today, the same day the account was cancelled. That's $12,000 of donations people have made that we can't access. We are very small, and very much need access to that money to keep our programs running. It took months for us to get there, but only 45 minutes for Stripe to decide to kill our account.<p>We want to figure out what's going on and how to resolve this, but Stripe has disabled both live chat and phone support for our account. (I'm still able to use those options in other Stripe accounts under the same login).<p>Overnight, we've gone from planning our largest annual fundraiser, to considering pulling the plug altogether. We are incredibly frustrated by the complete and utter lack of transparency, the inability to contact Stripe, and absolute hopelessness we're feeling. If anyone knows someone we can hop on a call to discuss this situation with, please leave details or contact us at:
Founder & Executive Director: [email protected]
Director of Technology (me): [email protected]
Upvote: | 81 |
Title: I love tech too much and would like to find something social or at least different in my spare time that pulls me away...
Upvote: | 65 |
Title: I want to do my own thing and I don't want to do it alone. I am happy to write boring business software with boring tech.<p>Because I don't want to spend a year bootstrapping alone, I want to find about $600k so I can pay a designer and two other devs to work on the software with me for 12-18 months. I project I can be at $300k ARR in 3 years, so $600k to kick this off doesn't seem too risky.<p>I am certain I can find $100k with friends and family, but I want the full amount to make the hires and take the leap.<p>I explicitly don't want to build a "unicorn"; I just want a healthy business building clean, safe, secure software that is doing something positive in the world (even if it is just solving boring business admin problems).<p>Any thoughts or ideas on how to find an angel (or two), or other avenues?
Upvote: | 50 |
Title: Recently there have been more and more studies showing that smartphones harm learning, and not a single study with the opposite result. However, very few parents have the guts not to buy a smartphone for their child. At what age do children in the HN crowd begin to have censored access to proprietary software (personal supervision) and uncensored access (a smartphone with or without parental controls)? Are there families where children have access to computers with only FOSS before they have access to proprietary software?
Upvote: | 325 |