prompt (string, lengths 16 to 15.6k)
completion (string, lengths 4 to 6)
Title: It seems like there are a lot of examples of companies handling a security breach or loss of service poorly. Are there examples of a company handling an incident well, especially with a great postmortem writeup? Upvote:
63
Title: According to Google:

> "$name contains content that may violate Google Drive's Phishing policy. People that you've shared this file with will still be able to access it, but they will see a warning on the file. If you think this is an error and would like the Trust & Safety team to review this file, request a review below."

…if I open it, the Google Sheet tells me "It contains links that might be used to steal your personal information." But it doesn't tell me *which* link.

The sheet is a list of wedding vendors — we're trying to plan a wedding. So if one of the wedding vendors' sites has been pwned, a) I'd like to know, and b) how TF is that *my* fault, such that it translates to a ToS violation, Google?

Of course, the appeal button offers me no ability to include an explanation, or a request for which link … Upvote:
48
Title: A few months ago, I benchmarked FastAPI on an i9 MacBook Pro. I couldn't believe my eyes. A primary REST endpoint to `sum` two integers took 6 milliseconds to evaluate. That is okay if you are targeting a server in another city, but it should be less when your client and server apps are running on the same machine.

FastAPI would have bottlenecked the inference of our lightweight UForm neural networks, recently trending on HN under the title "Beating OpenAI CLIP with 100x less data and compute". (Thank you all for the kind words!) So I wrote another library.

It has been a while since I have written networking libraries, so I was eager to try the newer io_uring networking functionality added by Jens Axboe in kernel 5.19. TLDR: It's excellent! We used pre-registered buffers and re-allocated file descriptors from a managed pool. Some other parts, like multi-shot requests, also look intriguing, but we couldn't see a flawless way to integrate them into UJRPC. Maybe next time.

Like a parent with two kids, we tell everyone we love Kernel Bypass and SIMD equally. So I decided to combine the two, potentially implementing one of the fastest implementations of the most straightforward RPC protocol: JSON-RPC. ~~Healthy and Fun~~ Efficient and Simple, what can be better? (A minimal example of a JSON-RPC call follows this entry.)

By now, you may already guess at least one of the dependencies: `simdjson` by Daniel Lemire, which has become the industry standard. io_uring is generally very fast, even with a single core. Adding more polling threads may only increase congestion. We needed to continue using no more than one thread, but parsing messages may involve more work than just invoking a JSON parser.

JSON-RPC is transport-agnostic. The incoming requests can be sent over HTTP, prepended by rows of headers. Those would have to be POSTs and generally contain Content-Length and Content-Type. There is a SIMD-accelerated library for that as well: it is called `picohttpparser`, uses SSE, and is maintained by H2O.

The story doesn't end there. JSON is limited. Passing binary strings is a nightmare. The most common approach is to encode them with base64. So we took Turbo-Base64 from the PowTurbo project to decode those binary strings.

The core implementation of UJRPC is under 2000 lines of C++. Knowing that those lines connect 3 great libraries with the newest and coolest parts of Linux is enough to put a smile on my face. Most people are more rational, so here is another reason to be cheerful:

- FastAPI throughput: 3'184 rps
- Python gRPC throughput: 9'849 rps
- UJRPC throughput:
  - Python server with io_uring: 43'000 rps
  - C server with POSIX: 79'000 rps
  - C server with io_uring: 231'000 rps

Granted, this is yet to be your batteries-included server. It can't balance the load, manage threads, spell S in HTTPS, or call your parents when you misbehave in school. But at least part of that you shouldn't expect from a web server.

After following the standardization process of executors in C++ for the last N+1 years, we adopted the "bring your runtime" and "bring your thread-pool" policies. HTTPS support, however, is our next primary objective.

---

Of course, it is a pre-production project and must have a lot of bugs. Don't hesitate to report them.
We have huge plans for this tiny package and will potentially make it the default transport of UKV: https://github.com/unum-cloud/ukv Upvote:
290
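As referenced above, here is a minimal JSON-RPC 2.0 exchange of the kind UJRPC serves, sent over HTTP with Python's `requests` (the localhost URL and port are made up for illustration; the `sum` method is from the post):

    import requests

    # JSON-RPC 2.0 request: call "sum" with two integers.
    payload = {"jsonrpc": "2.0", "method": "sum", "params": [1, 2], "id": 1}

    # Hypothetical local endpoint; JSON-RPC is transport-agnostic, and HTTP
    # POST is one of the transports the post describes.
    response = requests.post("http://127.0.0.1:8545", json=payload)

    print(response.json())  # expected: {"jsonrpc": "2.0", "id": 1, "result": 3}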
Title: Hey HN, we're excited to introduce Sorbay - an open source alternative to Loom for creating and sharing screen recordings.

With Sorbay, you can easily record your screen, camera, and microphone all at once. It is a complete solution that comes with its own backend service, allowing you to instantly share a link to your recording as soon as it is finished. To make this possible, the video is streamed directly to the backend service as the recording happens.

With both founders based in different countries, we needed a tool to quickly share screen recordings to keep each other up to date or to ask for feedback. Meetings are cool if you need to discuss something deeply, but for almost everything else a quick recording works better.

We had to settle for one of the proprietary solutions because none of the open source tools allowed us to quickly share something with each other. Doing the recording is one aspect, but having the ability to instantly share a link was crucial. Waiting on a 400 MB video upload to Dropbox is just too much interruption if you want to quickly share something.

The tipping point for us to actually build this open source tool came via an interaction at one of our day jobs. A third-party provider sent a screen recording full of confidential information, and to make things worse, all of it was uploaded by them to a different third-party service. We strongly believe that information like this should stay within a company, ideally on infrastructure that it controls itself. Having a fully integrated open source solution is the best way to go for this.

Our goal with this first public release is to gather feedback. The critical code paths are working, but it is still a bit rough to use. We deliberately cut out all non-essential features, but have a clear roadmap for what we want to release this year.

There are a couple of known issues, like audio glitches, non-working videos in Safari, and crashing binaries, that we hope to fix in the coming weeks. Later this year, we plan on releasing a cloud-hosted version of Sorbay that would let you connect your own S3 storage provider. Additionally, we will be releasing an on-prem option focused on features for enterprises (SSO, RBAC, compliance).

Both the Sorbay client and the backend service are completely open source. For licensing we chose the AGPLv3 throughout the stack. The client is built with Vue.js on top of Electron. The use of Electron might be a bit controversial here on Hacker News, but given the resources we currently have, it was the only way to get a working client out on all major platforms. The backend service is realized with Django. We use Keycloak for authentication and Minio for S3-compatible storage. All of this runs alongside Postgres and Redis, in Docker containers managed by Docker Compose.

We invite you to try Sorbay for yourself and join us on our issue trackers[1][2], Slack channel[3], or here on HN.

Thanks for checking out Sorbay!

[1]: https://github.com/sorbayhq/sorbay
[2]: https://github.com/sorbayhq/sorbay-client
[3]: https://join.slack.com/t/slack-oso6527/shared_invite/zt-1qd8gm543-KGdb5gD4WqikZEKEk8sSTA Upvote:
93
Title: Hi HN – Noa, Akash, and Sidd here. We're building Vellum (https://www.vellum.ai), a developer platform for building on LLMs like OpenAI's GPT-3 and Anthropic's Claude. We provide tools for efficient prompt engineering, semantic search, performance monitoring, and fine-tuning, helping you bring LLM-powered features from prototype to production.

The MLOps industry has matured rapidly for traditional ML (typically open-source models hosted in-house), but companies using LLMs are suffering from a lack of tooling to support things like experimentation, version control, and monitoring. They're forced to build these tools themselves, taking valuable engineering time away from their core product.

There are 4 main pain points. (1) Prompt engineering is tedious and time-consuming. People iterate on prompts in the playgrounds of individual model providers and store results in spreadsheets or documents. Testing across many test cases is usually not done because of the manual nature of prompt engineering. (2) LLM calls against a corpus of text are not possible without semantic search. Due to limited context windows, any time an LLM has to return factual data from a set of documents, companies need to create embeddings, store them in a vector database, and host semantic search models to query for relevant results at runtime; building this infrastructure is complex and time-consuming. (3) There is limited observability / monitoring once LLMs are used in production. With no baseline for how something is performing, it's scary to make changes to it for fear of making it worse. And (4) creating fine-tuned models and re-training them as new data becomes available is rarely done despite the potential gains (higher quality, lower cost, lower latency, more defensibility). Companies don't usually have the capacity to build the infrastructure for collecting high-quality training data and the automation pipelines used to re-train and evaluate new models.

We know these pain points from experience. Sidd and Noa are engineers who worked at Quora and DataRobot building ML tooling. Then the three of us worked together for a couple of years at Dover (YC S19), where we built features powered by GPT-3 when it was still in beta. Our first production feature was a job description writer, followed by a personalized recruiting email generator and then a classifier for email responses.

We found it was easy enough to prototype, but taking features to production and improving them was a different story. It was a pain to keep track of which prompts we had tried and to monitor how they were performing under real user inputs. We wished we could version control our prompts, roll back, and even A/B test. We found ourselves investing in infrastructure that had nothing to do with our core features (e.g. semantic search). We ended up being scared to change prompts or try different models for fear of breaking existing behavior. As new LLM providers and foundation models were released, we wished we could compare them and use the best tool for the job, but didn't have the time to evaluate them ourselves. And so on.

It's clear that better tools are required for businesses to adopt LLMs at scale, and we realized we were in a good position to build them, so here we are! Vellum consists of 4 systems to address the pain points mentioned above:

(1) Playground—a UI for iterating on prompts side-by-side and validating them against multiple test cases at once. Prompt variants may differ in their text, underlying model, model parameters (e.g. "temperature"), and even LLM provider. Each run is saved as a history item and has a permanent URL that can be shared with teammates.

(2) Search—upload a corpus of text (e.g. your company help docs) in our UI (PDF/TXT) and Vellum will convert the text to embeddings and store it in a vector database to be used at run time. While making an LLM call, we inject relevant context from your documents into the query and instruct the LLM to only answer factually using the provided context. This helps prevent hallucination and saves you from having to manage your own embeddings, vector store, and semantic search infra. (A minimal sketch of this pattern follows this entry.)

(3) Manage—a low-latency, high-reliability API wrapper that's provider-agnostic across OpenAI, Cohere, and Anthropic (with more coming soon). Every request is captured and persisted in one place, providing full observability into what you're sending these models, what they're giving back, and their performance. Prompts and model providers can be updated without code changes. You can replay historical requests, and version history is maintained. This serves as a data layer for metrics, monitoring, and soon, alerting.

(4) Optimize—the data collected in Manage is used to passively build up training data, which can be used to fine-tune your own proprietary models. With enough high-quality input/output pairs (minimum 100, but it depends on the use case), Vellum can produce fine-tuned models that provide better quality, lower cost, or lower latency. If a new model solves a problem better, it can be swapped in without code changes.

We also offer periodic evaluation against alternative models (i.e. we can see if fine-tuning Curie produces results of comparable quality to Davinci, but at a lower price). Even though OpenAI is the dominant model provider today, we expect there to be many providers with strong foundation models, and in that case model interoperability will be key!

Here's a video demo showcasing Vellum (feel free to watch on 1.5x!): https://www.loom.com/share/5dbdb8ae87bb4a419ade05d92993e5a0.

We currently charge a flat monthly platform fee that varies based on the quantity and complexity of your use cases. In the future, we plan on having more transparent pricing that's made up of a fixed platform fee plus some usage-based component (e.g. number of tokens used or requests made).

If you look at our website you'll notice the dreaded "Request early access" rather than "Try now". That's because the LLM Ops space is evolving extremely quickly right now. To maximize our learning rate, we need to work intensively with a few early customers to help get their AI use cases into production. We'll invite self-serve signups once that core feature set has stabilized a bit more. In the meantime, if you're interested in being one of our early customers, we'd love to hear from you, and you can request early access here: https://www.vellum.ai/landing-pages/hacker-news.

We deeply value the expertise of the HN community! We'd love to hear your comments and get your perspective on our overall direction, the problems we're aiming to solve, our solution so far, and anything we may be missing.
We hope this post and our demo video provide enough material to start a good conversation, and we look forward to your thoughts, questions, and feedback! Upvote:
136
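As referenced in the Search description above, here is a minimal sketch of retrieval-augmented prompting (not Vellum's code; `search_index.top_k` is a hypothetical stand-in for the vector-database lookup):

    def build_prompt(question: str, search_index) -> str:
        # Fetch the most relevant document chunks for this question.
        chunks = search_index.top_k(question, k=3)
        context = "\n\n".join(chunks)
        # Instruct the model to answer only from the injected context,
        # which is the hallucination guard described in the post.
        return (
            "Answer the question using ONLY the context below. If the answer "
            "is not in the context, say so.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
        )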
Title: www.twitter.com

reply

{"errors":[{"message":"Your current API plan does not include access to this endpoint, please see https://developer.twitter.com/en/docs/twitter-api for more information","code":467}]} Upvote:
61
Title: Our startup initially chose LTSE Equity [1] for our cap table management. We were told in writing that it is an annual plan that then goes month-to-month.

Fast forward 16 months and we haven't been using it. It's just not that useful of a tool for us, so we are switching to a competitor.

The button to change plans in LTSE's dashboard doesn't work; it just shows a spinner. So we contacted support, who insisted on a meeting before cancelling the plan. We said we are not interested in meeting, and now they are saying our plan is annual and we have 8 months remaining.

We plan to file chargebacks, as we have in writing that the plan is month-to-month, but I wanted the community to know that LTSE Equity uses dark patterns to avoid cancellation and is doing business dishonestly in general.

[1] https://equity.ltse.com/ Upvote:
46
Title: A Python sketch of a regex engine in less than 150 lines of code Upvote:
103
Title: I left my iPhone in a cab in Costa Rica by mistake. I may as well have thrown it into a volcano.

I have 2FA set up on three dozen different and important online accounts, and it's all through the Google Authenticator app on that iPhone.

Is there a recommended way to go about this problem?

Or have I locked myself out of my entire life? Upvote:
78
Title: This is a long-running personal project of mine: an optimizing compiler written from scratch. Everything was done by me, including the lexer/parser, SSA-based IR, high-performance data structures, and code generator.

Originally I wasn't targeting the NES. It started as a scripting language, then it morphed into a C++ replacement, and then finally I turned it into what it is today. The large scope of the project and its colorful history mean it's still a little rough around the edges, but it's now working well enough to post. Upvote:
298
Title: Hi HN,

we're the co-founders of Bearer, and today we launch an open-source alternative to code security solutions such as Snyk Code, SonarQube, or Checkmarx. Essentially, we help security & engineering teams discover, filter, and prioritize security risks and vulnerabilities in their codebase, with a unique approach through sensitive data (PII, PD, PHI).

Our website is at https://www.bearer.com and our GitHub is here: https://github.com/bearer/bearer

We are not originally security experts but have been software developers and engineering leaders for over 15 years now, and we thought we could bring a new perspective to security products, with a strong emphasis on the developer experience, something we often found lacking in security tools.

In addition to building a truly developer-friendly security solution, we've also heard a lot of teams complain about how noisy their static code security solutions are. As a result, they often have difficulty triaging the most important issues, and ultimately it's difficult to remediate them. We believe an important part of the problem lies in the fact that we lack a clear understanding of the real impact of any security issue. Without that understanding, it's very difficult to ask developers to remediate critical security flaws.

We've built a unique approach to this problem, looking at the impact of security issues through the lens of sensitive data. Interestingly, most security teams' ultimate responsibility today is to secure sensitive data and protect their organization from costly data loss and leakage, but until today, that connection has never been made.

In practical terms, we provide a set of rules that assess the variety of ways known code vulnerabilities (CWE) ultimately impact your application security, and we reconcile that with your sensitive data flows. At the time of this writing, Bearer provides over 100 rules.

Here are some examples of what those rules can detect:
- Leakage of sensitive data through cookies, internal loggers, third-party logging services, and into analytics environments.
- Non-filtered user input that can lead to breaches of sensitive information.
- Usage of weak encryption libraries or misuse of encryption algorithms.
- Unencrypted incoming and outgoing communication (HTTP, FTP, SMTP) of sensitive information.
- Hard-coded secrets and tokens.
- And many more you can see here: https://docs.bearer.com/reference/rules/

Rules are easily extendable to allow you to create your own; everything is YAML based. For example, some of our early users used this system to detect the leakage of sensitive data into their backup environments, or missing application-level encryption of their health data.

I'm sure you are wondering how we can detect sensitive data flows just by looking at the code. Essentially, we also perform static code analysis to detect those. In a nutshell, we look for sensitive data flows at two levels:
- Analyzing class names, methods, functions, variables, properties, and attributes, then tying those together into detected data structures, with variable reconciliation etc.
- Analyzing data structure definition files such as OpenAPI, SQL, GraphQL, and Protobuf.

Then we pass this over to a classification engine that assesses 120+ data types from sensitive data categories such as Personal Data (PD), Sensitive PD, Personally Identifiable Information (PII), and Personal Health Information (PHI). All of that is documented here: https://docs.bearer.com/explanations/discovery-and-classification/

As we said before, developer experience is key; that's why you can install Bearer in 15 seconds, from cURL, Homebrew, apt-get, yum, or as a Docker image. Then you run it as a CLI locally, or as part of your CI/CD.

We currently support JavaScript and Ruby stacks, but more will follow shortly!

Please let us know what you think and check out the repo here: https://github.com/Bearer/bearer Upvote:
106
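To make the first rule category above concrete, here is a toy example of the kind of data flow such rules flag, sensitive data reaching an internal logger (illustrative Python; Bearer itself currently analyzes JavaScript and Ruby code):

    import logging

    logger = logging.getLogger(__name__)

    def notify_user(user) -> None:
        # PII (an email address) flows into log output here; a leakage rule
        # for "sensitive data through internal loggers" would report this line.
        logger.info(f"Sending notification to {user.email}")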
Title: Hey HN! We’re Stefan and Elijah, co-founders of DAGWorks (https://www.dagworks.io). We’re on a mission to eliminate the insane inefficiency of building and maintaining ML pipelines in production.

DAGWorks is based on Hamilton, an open-source project that we created and recently forked (https://github.com/dagworks-inc/hamilton). Hamilton is a set of high-level conventions for Python functions that can be automatically converted into working ETL pipelines. To that, we're adding a closed-source offering that goes a step further, plugging these functions into a wide array of production ML stacks.

ML pipelines consist of computational steps (code + data) that produce a working statistical model that a business can use. A typical pipeline might be (1) pull raw data (Extract), (2) transform that data into inputs for the model (Transform), (3) define a statistical model (Transform), (4) use that statistical model to predict on another data set (Transform), and (5) push that data for downstream use (Load). Instead of “pipeline” you might hear people call this “workflow”, “ETL” (Extract-Transform-Load), and so on.

Maintaining these in production is insanely inefficient because you need both data scientists and software engineers to do it. Data scientists know the models and data, but most can't write the code needed to get things working in production infrastructure—for example, a lot of mid-size companies out there use Snowflake to store data, Pandas/Spark to transform it, and something like Databricks' MLflow to handle model serving. Engineers can handle the latter, but mostly aren't experts in the ML stuff. It's a classic impedance mismatch, with all the horror stories you'd expect—e.g. when data scientists make a change, engineers (or data scientists who aren’t engineers) have to manually propagate the change in production. We've talked to teams who are spending as much as 50% of their time doing this. That's not just expensive, it's gruntwork—those engineers should be working on something else! Basically, maintaining ML pipelines over time sucks for most teams.

One way out is to hire people who combine both skills, i.e. data scientists who can also write production code. But these are rare and expensive, and in our experience they are usually expert at one side of the equation and not as good at the other.

The other way is to build your own platform to automatically integrate models + data into your production stack. That way the data scientists can maintain their own work without needing to hand things off to engineers. However, most companies can't afford to make this investment, and even for the ones that can, such in-house layers tend to end up in spaghetti code and tech debt hell, because they're not the company's core product.

Elijah and I have been building data and ML tooling for the last 7 years, most recently at Stitch Fix, where we built an ML platform that served over 100 data scientists from various modeling disciplines (some of our blog posts, like [1], hit the front page of HN - thanks!). We saw firsthand the issues teams encountered with ML pipelines.

Most companies running ML in production need a ratio of 1:1 or 2:1 data scientists to engineers. At bigger companies like Stitch Fix, the ratio is more like 10:1—way more efficient—because they can afford to build the kind of platform described above. With DAGWorks, we want to bring the power of an intuitive ML pipeline platform to all data science teams, so a ratio of 1:1 is no longer required. A junior data scientist should be able to easily and safely write production code without deep knowledge of the underlying infrastructure.

We decided to build our startup around Hamilton, in large part due to the reception it got here [2] - thanks HN! We came up with Hamilton while we were at Stitch Fix (note: if you start an open-source project at an employer, we recommend forking it right away when you start a company. We only just did that and left behind ~900 stars...). We are betting on it being our abstraction layer to enable our vision of how to build and maintain ML pipelines, given what we learned at Stitch Fix. We believe a solution has to have an open source component to be successful (we invite you to check out the code). As for the name DAGWorks: we named the company after Directed Acyclic Graphs because we think the DAG representation, which Hamilton also provides, is key.

A quick primer on Hamilton. With Hamilton we use a new paradigm in Python (well, not quite "new", as pytest fixtures use this approach) for defining model pipelines. Users write declarative functions instead of writing procedural code. For example, rather than writing the following pandas code:

    df['col_c'] = df['col_a'] + df['col_b']

you would write:

    def col_c(col_a: pd.Series, col_b: pd.Series) -> pd.Series:
        """Creating column c from summing column a and column b."""
        return col_a + col_b

Then if you wanted to create a new column that used `col_c`, you would write:

    def col_d(col_c: pd.Series) -> pd.Series:
        ...  # downstream logic using col_c

These functions then define a "dataflow", or a directed acyclic graph (DAG): we can create a graph with nodes col_a, col_b, col_c, and col_d, and connect them with edges to know the order in which to call the functions to compute any result. Since you’re forced to write functions, everything becomes unit-testable and documentation-friendly, with the ability to display lineage. You can kind of think of Hamilton as "dbt for Python functions", if you know what dbt is. Have we piqued your interest? Want to go play with Hamilton? We created https://www.tryhamilton.dev/ leveraging Pyodide (note: it can take a while to load) so you can play around with the basics without leaving your browser - it even works on mobile! (A sketch of running these functions with Hamilton's driver follows at the end of this entry.)

What we think is cool about Hamilton is that you don’t need to specify an “explicit pipeline declaration step”, because it’s all encoded in the function and parameter names! Moreover, everything is encapsulated in functions. So from a framework perspective, if we wanted to (for example) log timing information, introspect inputs/outputs, or delegate a function to Dask or Ray, we can inject that at the framework level, without having to pollute user code. Additionally, we can expose "decorators" (e.g. @tag(...)) that can specify extra metadata to annotate the DAG with, or for use at run time. This is where our DAGWorks Platform fits in, providing off-the-shelf closed-source extras in this way.

Now, for those of you thinking there’s a lot of competition in this space, or that what we’re proposing sounds very similar to existing solutions, here are some thoughts to help distinguish Hamilton from other approaches/technologies: (1) Hamilton's core design principle is helping people write more maintainable code; at a nuts-and-bolts level, what Hamilton replaces is procedural code that one would otherwise write. (2) Hamilton runs anywhere that Python runs: a notebook, a Python script, within Airflow, within your Python web service, PySpark, etc. People use Hamilton for executing code in batch tasks and online web services. (3) Hamilton doesn't replace a macro orchestration system like Airflow, Prefect, Dagster, Metaflow, or ZenML; it runs within/uses them. Hamilton helps you not only model the micro - e.g. feature engineering - but can also help you model the macro - e.g. model pipelines. That said, given how big machines are these days, model pipelines can commonly run on a single machine - Hamilton is perfect for this. (4) Hamilton doesn't replace things like Dask, Ray, or Spark - it can run on them, or delegate to them. (5) Hamilton isn't just for building dataframes, though it’s quite good for that; you can model any Python object creation with it. Hamilton is data-type agnostic.

Our closed-source offering is currently in private beta, but we'd love to include you in it (see next paragraph). Hamilton is free to use (BSD-3 license) and we’re investing in it heavily. We’re still working through pricing options for the closed-source platform; we think we’ll follow the leads of others in the space, like Weights & Biases and Hex.tech, in how they price. For those interested, here’s a video walkthrough of Hamilton, which includes a teaser of what we’re building on the closed-source side: https://www.loom.com/share/5d30a96b3261490d91713a18ab27d3b7.

Lastly, (1) we’d love feedback on Hamilton (https://github.com/dagworks-inc/hamilton) and on any of the above, and what we could do better. To stress the importance of your feedback: we’re going all-in on Hamilton. If Hamilton fails, DAGWorks fails. Given that Hamilton is a bit of a “Swiss Army knife” in what you could do with it, we need help prioritizing features. E.g. we just released experimental PySpark UDF map support, is that useful? Or perhaps you have streaming feature engineering needs where we could add better support? Or you want a feature to auto-generate unit test stubs? Or maybe you are doing a lot of time-series forecasting and want more power features in Hamilton to help you manage inputs to your model? We’d love to hear from you! (2) For those interested in the closed-source DAGWorks Platform, you can sign up for early access via www.dagworks.io (leave your email, or schedule a call with me) - we apologize for not having a self-serve way to onboard just yet. (3) If there’s something this post hasn’t answered, do ask, we’ll try to give you an answer!
We look forward to any and all of your comments!

[1] https://news.ycombinator.com/item?id=29417998
[2] https://news.ycombinator.com/item?id=29158021 Upvote:
182
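As referenced in the Hamilton primer above, here is a minimal sketch of executing those functions with Hamilton's driver (the module name `my_functions` is made up for illustration):

    import pandas as pd
    from hamilton import driver

    import my_functions  # hypothetical module holding col_c, col_d, etc.

    # Build the DAG from the module's functions, then request the outputs we want.
    dr = driver.Driver({}, my_functions)
    df = dr.execute(
        ["col_c"],
        inputs={"col_a": pd.Series([1, 2]), "col_b": pd.Series([3, 4])},
    )
    print(df)  # a DataFrame containing the computed col_c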
Title: Hi HN! A few days ago I saw a graph[0] that showed the number of job postings on HN was declining. I started wondering what other trends I could glean from the data, so I created this!

You can filter the top-level comments by keyword; for example, you can filter by "remote" to see the massive spike around March 2020. Another interesting thing I found is that I can compare hiring across cities.

I hope you enjoy! I made the links to your search shareable, so if you find some interesting data you should be able to just link the page you're on!

[0] https://rinzewind.org/blog-en/2023/the-tech-downturn-seen-through-hacker-news-comments.html Upvote:
146
Title: I've always wanted to just upload a whole book to ChatGPT and ask questions. Obviously, with the character limit, that's impossible... So some buddies and I built Ghost. We have uploads limited to 5 pages for now, but plan on expanding the limit soon. Let me know what you guys think! Upvote:
57
Title: I'm a strong engineer, and that's my core skill. But I want to focus 75% of my time on marketing.

Is there a way to learn marketing, in the same way that one learns software engineering?

It's so easy to learn software engineering, and the results are instant. How can I apply the same learning concepts to marketing? Upvote:
74
Title: Hello, we are Shikha, Sourabh, and Vipul, co-founders at UpTrain, an open-source ML observability toolkit. UpTrain helps you monitor the performance of your machine learning applications, alerts you when they go wrong, and helps you improve them by narrowing down on data points to retrain on, all in the same loop.

Our website is at https://uptrain.ai/ and our GitHub is here: https://github.com/uptrain-ai/uptrain

ML models tend to perform poorly when presented with new, previously unseen cases, and their performance deteriorates over time as real-world environments evolve, which can lead to the degradation of business metrics. In fact, one of our customers (a social media platform with 150 million MAU) was tired of discovering model issues via customer complaints (and increased churn) and wanted an observability solution to identify them proactively.

UpTrain monitors the difference between the dataset the model was trained on and the real-world data it encounters in production (the wild!). This "difference" can be custom statistical measures designed by ML practitioners based on their use case. That last point about customization is important because, in most cases, there's no "ground truth" to check whether a model's output is correct. Instead, you need statistical measures to identify drift or performance degradation, and those require domain expertise and differ from case to case. For example, in a text summarization model, you want to monitor drift in the input text sentiment, but for a human pose estimation model, you want to add integrity checks on the predicted body length.

Additionally, we monitor for edge cases, defined as rule-based smart signals on the model input. Whenever UpTrain sees a distribution shift or an increased frequency of edge cases, it raises an alert while identifying the subset of data that experienced these issues. Finally, it retrains the model on that data, improving its performance in the wild.

Before UpTrain, we explored many observability tools at previous companies (ByteDance, Meta, and Bosch), but always got stuck figuring out what issues our models were facing in production. We used to go through user reviews, find patterns around model failures, and manually retrain our models. This was time-consuming and opaque. Customizing our monitoring metrics and having a solution built specifically for ML models was a big need that wasn't fulfilled.

Additionally, many ML models operate on user-sensitive data, and we didn't want to send users' private data to third parties. From a privacy perspective, relying on third-party hosted solutions just felt wrong, and that motivated us to create an open-source, self-hosted alternative.

We are building UpTrain to make model monitoring effortless. With a single-line integration, our toolkit allows you to detect dips in model performance using real-time dashboards, sends you Slack alerts, helps pinpoint poorly performing cohorts, and much more. UpTrain is built specifically for ML use cases, providing tools to monitor data distribution shifts, identify production data points with low representation in the training data, and visualize and detect drift in embeddings. For more about our key features, see https://docs.uptrain.ai/docs/key-features

Our tool is available as a Python package that can be installed on top of your deployment infrastructure (AWS, GCP, Azure). Since ML models operate on user-sensitive data, and sharing it with external servers is often a barrier to using third-party tools, we focus on deploying to your own cloud.

We've launched this repo under an Apache 2.0 license to make it easy for individual developers to integrate it into their production apps. For monetization, we plan to build enterprise-level integrations that will include managed service and support. In the next few months, we plan to add more advanced observability measures for large language models and generative AI, as well as make UpTrain easier to integrate with other tools like Weights and Biases, Databricks, Kubernetes, and Airflow.

We would love for you to try out our GitHub repo and give your feedback, and we look forward to all of your comments! Upvote:
138
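To make the train-versus-production "difference" described above concrete, here is a toy distribution-shift check (generic SciPy, not UpTrain's API; the data is simulated):

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(42)
    train_feature = rng.normal(0.0, 1.0, 5000)  # feature values seen in training
    live_feature = rng.normal(0.4, 1.0, 5000)   # values arriving in production

    # Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
    # production distribution has drifted from the training distribution.
    stat, p_value = ks_2samp(train_feature, live_feature)
    if p_value < 0.01:
        print(f"Drift detected: KS statistic = {stat:.3f}")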
Title: Hi! I've been a member of HN for fifteen years, so today I'm very excited to share Plato.

Plato is an Airtable-like interface for your Postgres or MySQL database. It's an admin panel for devs and non-devs alike to manage your DB. We see teams use Plato for customer support, customer success, ops, etc.

We built Plato because we think more people should be able to build and extend internal tools. We thought it was strange that even though low-code is supposed to democratize development, all of the low-code internal tool builders are marketed to engineers! Airtable is a familiar UI that fits the relational model well, so we've been inspired by their work. Even the engineers on our team use Plato quite a bit, since it's often easier than spinning up a SQL prompt.

Some features:

- Postgres and MySQL support
- Visual query controls (sorts, filters, hiding columns). No SQL.
- Joins by "expanding" foreign keys
- Virtual columns for tracking new data
- Auto-generated backlinks for one-to-many relationships
- Read-only locking for individual tables
- Virtual tables for sharing new views with your team

Plato today works on databases with a public IP (just whitelist our IP to connect), but we're soon rolling out an on-prem version. We can also set up an SSH tunnel for you if you contact us at [email protected].

We'd love to hear your feedback! Thanks.

- Michael Upvote:
100
Title: I'm a big fan of the BBC podcast In Our Time -- and (like most people) I've been playing with the OpenAI APIs.

In Our Time has almost 1,000 episodes on everything from Cleopatra to the evolution of teeth to plasma physics, all still available, so it's my starting point for learning about most topics. But it's not well organised.

So here are the episodes sorted by library code. It's fun to explore.

Web scraping is usually pretty tedious, but I found that I could send the minimised HTML to GPT-3 and get (almost) perfect JSON back: the prompt includes the TypeScript definition.

At the same time I asked for a Dewey classification... and it worked. So I replaced a few days of fiddly work with 3 cents per inference and an overnight data run.

My takeaway is that I'll be using LLMs as function calls way more in the future. This isn't "generative" AI, more "programmatic" AI perhaps?

So I'm interested in what temperature=0 LLM usage looks like (you want it to be pretty deterministic) at scale, and what a language that treats that as a first-class concept might look like. Upvote:
688
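A minimal sketch of the extraction pattern described above, using the GPT-3-era OpenAI completions API at temperature=0 (the model choice, TypeScript type, and HTML snippet are illustrative, not the author's actual prompt):

    import json

    import openai  # the 0.x-era SDK with the legacy Completion API

    ts_definition = """
    interface Episode {
      title: string;
      dewey_code: string;  // Dewey classification, requested in the same call
    }
    """

    minified_html = "<li><h3>Cleopatra</h3><p>Melvyn Bragg and guests discuss...</p></li>"

    prompt = (
        "Extract the episode from the HTML below as JSON matching this "
        f"TypeScript type:\n{ts_definition}\nHTML:\n{minified_html}\nJSON:"
    )

    # temperature=0 keeps the output close to deterministic, which is what
    # you want when treating the LLM as a function call.
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0,
        max_tokens=300,
    )
    episode = json.loads(completion.choices[0].text)
    print(episode["title"], episode["dewey_code"])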
Title: I want nothing more than to do any other line of work.

Honestly, I feel more inspired to drive Uber than to ever do another tech challenge or deal with another incompetent manager.

Fuck malicious compliance and the assholes who put you in that position.

I love programming, but the people and this industry are a whole other beast.

I have given 20+ years of my life trying to make a living for my family.

It has provided me an income, but it came at a cost way more than I ever imagined.

I know I'm not alone in feeling this way.

I would like to start over, but I have no background or time left to start over with.

The idea of burning it all down just to start all over again is painful, but not as painful as it has been working in tech.

What success stories of leaving the tech industry can you all provide?

I guess the ones who really got out probably will never even see this.

I know if I had, I would not be here reading any of this. Upvote:
44
Title: Hi HN!

This is Assaf, Matan, and Yshay from https://livecycle.io/

Livecycle enables dev teams to collaborate and comment in context, on top of any preview environment. Using Livecycle, developers get clear feedback earlier in the release cycle, leading to higher-quality products, a faster release cadence, and fewer context switches and misunderstandings.

Livecycle builds and pushes a dev-like environment for every branch in your repo (or, if you prefer, you can bring your own environments). Any containerized application will work, and support for multiple containers via docker-compose is coming soon. You get a unique, shareable link for every branch, which automatically updates with every commit pushed to that branch.

Each link contains not only your deployed environment but also:

- A dashboard to view and manage all of your environments and users
- Collaboration features: create screenshots, record audio/video clips, suggest CSS/content changes, and leave comments with rich text and internal threads
- Integration with Jira/Linear (view tickets associated with a PR or create new tickets from comments users left on the environment)
- Integration with GitHub/GitLab: view your build status in the PR/MR (with a link to the environment); comments left on Livecycle are synced to PR/MR comments so that devs can easily gather feedback from both devs and other stakeholders in one place
- Even more: Slack integration, integrated network and console logs, etc.

We're thrilled to see a wide variety of teams already benefiting from Livecycle: large companies, startups, freelance developers, dev shops, and more.

We invite you to check out how Livecycle can bring value to you and your team.

And please let us know if you have any comments or questions :-) Upvote:
44
Title: Hi HN, we're excited to share our open source tool with the community! We previously posted here with the tagline "real-time events for Postgres" [0]. But after feedback from early users and the community, we've shifted our focus to tooling for manual database changes.

We've consistently heard teams describe challenges with the way manual data updates are handled. Seemingly every engineer we spoke with had examples of errant queries that ended up causing significant harm in production environments (data loss / service interruptions).

We've seen a few different approaches to how changes to production databases occur today:

Option 1: all engineers have production write access (highest speed, highest risk)

Option 2: one or a few engineers have write access (medium speed, high risk)

Option 3: engineers request temporary access to make changes (low speed, medium risk)

Option 4: all updates are checked into version control and run manually or through CI/CD (low speed, low risk)

Option 5: no manual updates are made; all changes must go through an internal endpoint (lowest speed, lowest risk)

Our goal is to enable high-speed changes with the lowest risk possible. We're planning to do this by providing an open-source toolkit for safeguarding databases, including the following features:

- Alerts (available now): receive notifications any time a manual change occurs
- Audit History (beta): view all historical manual changes with context
- Query Preview (coming soon): preview affected rows and the query plan prior to running changes
- Approval Flow (coming soon): require query review before a change can be run

We're starting with alerts. Teams can receive Slack notifications any time an INSERT, UPDATE, or DELETE is executed by a non-application database user. While this doesn't prevent issues from occurring, it does enable an initial level of traceability: understanding who made an update, what data was changed, and when it occurred.

We'd love to hear feedback from the HN community on how you've seen database changes handled, pain points you've experienced with data change processes, or generally any feedback on our thinking and approach.

[0] https://news.ycombinator.com/item?id=34828169 Upvote:
47
Title: I built this app because I was tired of using Google Translate to translate my locale files (i18n). I wanted a more efficient and accurate translation tool. ChatGPT, however, always breaks my JSON and cannot translate large files. So I built this app to solve these problems. Hope it can save you time.

github: https://github.com/ObservedObserver/chatgpt-i18n

online app: https://chatgpt-i18n.vercel.app/ Upvote:
92
Title: Hi folks! I’m Eric Rowell, and I’m really excited to share what I’ve been working on: https://www.second.dev. Second is a developer platform that connects your GitHub account to a bot that can generate new web applications or add features to existing ones. You log in and configure the features you want, and this prompts a bot to create and modify code files using a combination of compilers and GPT-3, then raise a pull request or commit the changes directly to a repo.

You can use Second to create a brand new web application, or you can connect it to an existing web application. The bots run in the cloud and connect directly to your GitHub, so you don’t have to install anything. Here’s a demo video: https://www.youtube.com/watch?v=IR9JUxznEC0.

Disclaimer! Second is still very much in alpha, so if you want to connect a Second bot to an existing repo, please only use test repos!

I’ve been building for the web for over a decade, including developing the Yahoo video player, architecting the LinkedIn homepage, and leading data visualization efforts at Workday. I’ve created several popular open source projects like KineticJS, BigOCheatSheet, and El Grapho. Most recently I was the co-founder and CTO of a no-code platform for enterprise companies.

Over the last few years, I’ve become obsessed with the idea of enabling developers to create large volumes of high-quality software fast. Today, developers utilize libraries, frameworks, new languages, DSLs, no-code platforms, and most recently IDE code assistants like GitHub Copilot. These are all great, but I think we can do better. Wouldn’t it be cool if you could just tell a bot to go off and implement a full-stack feature, sort of like having your own dedicated second developer up in the cloud?

There are too many things that humans are writing code for that they shouldn’t be. This includes commodity features and integrations like authentication, forgot-password flows, subscription billing, database setup, CRUD pages, collections, data tables, etc. Human developers should be focused on the code that is special to their product. Bots should take care of the gruntwork.

Moreover, the world needs more software than there are engineers to build it all. Web applications are complex enough that traditional no-code and low-code solutions, which output runtimes, are not viable. The output must be source code. Unlike no-code tools, which try to offload software development onto non-programmers (which works OK, but only up to a rather low complexity ceiling), Second is a higher-level programming tool, meaning it raises the level of abstraction for *engineers*, which is how most gains in programming productivity have been achieved over the years. Second produces source code that can be modified at any time by developers, with no “special” parts of the code base that are off limits.

So how is it possible to create multi-file full-stack features using GPT-3 when token limits are still really small, i.e. 4k tokens? Well, we can lean on one of the most common strategies for complex problems in computer science—divide and conquer. Rather than trying to construct one giant prompt to get one giant response, I’m using imperative programming to model the general approach to each full-stack module, using GPT-3 to figure out what files should be created or modified and where they should go, and then using a combination of compilers and GPT-3 to generate and modify each file piecemeal.

Thus far, five YC companies have used Second to build their initial web application foundation. Customers have used Second to set up ticket management systems, CRMs, workflow screens, interfaces on top of generative AI and LLMs, etc. The Starter plan is free and our paid plan is $299/project/month. I’m currently running a promotion and taking 50% off, which ends tomorrow. A project is tied to a specific GitHub repo.

I would love for you to try out Second and let me know what you think! Please be gentle, it is very early. I’m looking for early feedback to figure out what features should be built next. Thanks! Upvote:
260
Title: I'm trying to get into reading, but I keep running into an issue where a lot of these books feel like fluff. Like, why are there so many books around 250 pages? Surely these authors are trying to hit a page count first and provide information second. It just feels disingenuous, which kills my vibe while reading the book.

There are some books I've read where it feels like every page is a gold mine of information. Is this whole fluff-to-information predicament a common thing in reading? What tools/metrics are there to help find meaningful books? For example, is it viable to only read books rated greater than 4.5 stars on Goodreads? Or is meticulously researching for good books just a fact of life in the book-reading hobby?

Maybe every book is valuable, and it's just a skill to read, to extract the meaningful information effectively. But honestly, as with everything in life, it's probably a mix of everything. Researching and reading skills will probably make the hobby more enjoyable. But I mean, since it's a hobby, I have the right to try to avoid books I would consider 'useless'. Upvote:
47
Title: tl;dr: at Escape (YC W23) we scanned 5651+ public APIs on the internet with our in-house feedback-driven API exploration tech, and ranked them using security, performance, reliability, and design criteria. The results are public on https://apirank.dev. You can request that we index your own API for free and see how it compares to others.

Why did we do this?

During a YC meetup, I spoke with a fellow founder who told me how hard it is to pick the right external APIs to use within your own projects. I realized that most of what we build relies on public APIs from external vendors, but there was no benchmark to help developers compare and evaluate public APIs before picking one. So we decided to do it ourselves. Say hi to apirank.dev.

Why is ranking public APIs hard? Automating public API technical assessment is a tough problem. First, we needed to find all the public APIs and their specifications, mostly OpenAPI files.

We used several strategies to find those:

- Crawl API repositories like apis.guru
- Crawl GitHub for openapi.json and openapi.yaml files
- A cool Google dork

Those strategies enabled us to gather around ~20,000 OpenAPI specs.

Then lies the hard part of the problem: we want to dynamically evaluate those APIs' security, performance, and reliability. But APIs take parameters that are tightly coupled to the underlying business logic.

A naive automated approach would not work: putting random data in parameters would likely not pass the API's validation layer, thus giving us little insight into the real API behavior.

Manually creating tests for each API is also not sustainable: it would take years for our 10-person team. We needed to do it in an automated way.

Fortunately, our main R&D efforts at Escape aimed to generate legitimate traffic against any API efficiently. That's how we developed feedback-driven API exploration, a new technique that quickly assesses the underlying business logic of an API by analyzing responses and dependencies between requests (see https://escape.tech/blog/feedback-driven-api-exploration/).

We originally developed this technology for advanced API security testing. But from there, it was super easy to also test the performance and reliability of APIs.

How did we rank APIs? Now that we have a scalable way to gather exciting data from public APIs, we need a way to rank them. And this ranking should be meaningful to developers when choosing their APIs.

We decided to rank APIs using the following five criteria:

- Security
- Performance
- Reliability
- Design
- Popularity

The security score is computed as a combination of the number of OWASP Top 10 vulnerabilities and the number of sensitive information leaks detected by our scanner.

The performance score is derived from the median response time of the API, aka the P50.

The reliability score is derived from the number of inconsistent server responses, either 500 errors or responses that do not conform to the specification.

The design score reflects the quality of the OpenAPI specification file. Having comments, examples, a license, and contact information improves this score.

The popularity score is computed from the number of references to the API found online.

If you are curious about your API's performance, you can ask us to index your own API for free at https://apirank.dev/submit Upvote:
176
Title: Hello, I am excited to share PyBroker with you, a free and open-source Python framework that I developed for creating algorithmic trading strategies, including those that utilize machine learning. With PyBroker, you can easily develop and fine-tune trading rules, build powerful ML models, and gain valuable insights into your strategy's performance.

Some of the key features of PyBroker include:

- A super-fast backtesting engine built using NumPy and accelerated with Numba.
- The ability to create and execute trading rules and models across multiple instruments with ease.
- Access to historical data from Alpaca and Yahoo Finance.
- The option to train and backtest models using Walkforward Analysis, which simulates how the strategy would perform during actual trading. The basic concept behind Walkforward Analysis is that it splits your historical data into multiple time windows and then "walks forward" in time, in the same way that the strategy would be executed and retrained on new data in the real world. Walkforward Analysis also helps overcome the problem of data mining and overfitting by testing your strategy on out-of-sample data.
- More reliable trading metrics that use randomized bootstrapping to provide more accurate results. PyBroker calculates metrics such as Sharpe, Profit Factor, and max drawdown using bootstrapping, which randomly samples your strategy's returns to simulate thousands of alternate scenarios that could have happened. This allows you to test for statistical significance and have more confidence in the effectiveness of your strategy. (A toy illustration of this idea follows at the end of this entry.)
- Support for strategies that use ranking and flexible position sizing.
- Caching of downloaded data, indicators, and models to speed up your development process.
- Parallelized computations that enable faster performance.

Additionally, I have written tutorials on the framework and some general algorithmic trading concepts, which can be found on https://www.pybroker.com. All of the code is available on GitHub using the link above.

Thanks for reading! Upvote:
70
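The bootstrapped-metrics idea above is worth a quick illustration: resample the strategy's daily returns with replacement many times and look at the resulting distribution of Sharpe ratios. This sketch shows the concept only; it is not PyBroker's internal implementation, and the returns are synthetic stand-ins.

    import numpy as np

    rng = np.random.default_rng(0)
    returns = rng.normal(0.0005, 0.01, size=252)  # stand-in daily returns

    def sharpe(r):
        # Annualized Sharpe ratio from daily returns.
        return np.sqrt(252) * r.mean() / r.std()

    # Resample with replacement to simulate alternate return histories.
    boot = np.array([
        sharpe(rng.choice(returns, size=returns.size, replace=True))
        for _ in range(1000)
    ])
    print(f"point estimate: {sharpe(returns):.2f}")
    print(f"bootstrap 95% CI: [{np.quantile(boot, 0.025):.2f}, "
          f"{np.quantile(boot, 0.975):.2f}]")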
Title: Today I was presented with a blank HTTP 403 when trying to access the HN login page, but only on Mullvad.<p>Has HN added new blocking mechanisms, if so why, and how do they work... and more generally, why is blocking VPN IPs becoming more common?<p>The standard excuse I&#x27;ve heard before is &quot;VPNs are a larger source of abuse&quot;. I take issue with this reasoning in that firstly: it lacks a denominator, VPNs are a larger source of traffic in general, abuse should be considered as a ratio. Secondly, VPNs are a known highly NATed collection of IPs, so lots of legitimate users will be thrown under the bus in the same stroke.<p>These arguments usually fall on deaf ears with most site owners who do not care about user discrimination, and will reach for the easiest blunt tool rather than make their service more fundamentally resilient. I expected more from HN though. Upvote:
40
Title: Lots of content these days makes it easy to feel imposter syndrome and behind the times. What are you people who are actively trying to become a better programmer doing, learning, building or studying to become better? Upvote:
114
Title: I got pretty good at (and very addicted to) the lunar lander game from a few days ago...<p>so I decided to make an autopilot for the lander based on what I felt like was the best strategy! Now I can have perfect landings every time without lifting a finger :D<p>Writing the autopilot code was a lot more fun than I expected! It felt a bit like programming a robot.<p>Source code: <a href="https:&#x2F;&#x2F;github.com&#x2F;szhu&#x2F;lunar-lander-autopilot">https:&#x2F;&#x2F;github.com&#x2F;szhu&#x2F;lunar-lander-autopilot</a><p>Original lander HN post: <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=35032506" rel="nofollow">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=35032506</a> Upvote:
277
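For readers curious what such an autopilot boils down to, here is a toy 1-D "burn late, burn hard" controller: fire the engine only once the stopping distance under full thrust reaches the current altitude. All constants are made up, and this is only a sketch of the general idea; the linked repo contains the actual logic.

    # Toy vertical-descent autopilot; numbers are illustrative only.
    GRAVITY = 1.62        # m/s^2, lunar surface gravity
    SAFE_SPEED = 2.0      # acceptable touchdown speed, m/s
    THRUST_DECEL = 4.0    # deceleration from the engine, m/s^2

    def should_burn(altitude, velocity):
        # velocity < 0 means descending; compute stopping distance
        # under full thrust and burn only when we must.
        if velocity >= 0:
            return False
        stop_dist = velocity**2 / (2 * (THRUST_DECEL - GRAVITY))
        return stop_dist >= altitude and abs(velocity) > SAFE_SPEED

    alt, vel, dt = 500.0, 0.0, 0.1
    while alt > 0:
        acc = -GRAVITY + (THRUST_DECEL if should_burn(alt, vel) else 0.0)
        vel += acc * dt
        alt += vel * dt
    print(f"touchdown speed: {abs(vel):.2f} m/s")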
Title: I run a newsletter where I interview next-door music makers. They tell us about their process and show us their studio.<p>I&#x27;m looking for people to interview, so if you&#x27;re interested let&#x27;s make it happen.<p>There are people doing music with $100 gear and others with studios that cost multiple thousands. Wherever you stand, we want to see what you&#x27;re doing with what you have :)<p>[This is an example](https:&#x2F;&#x2F;www.gasnewsletter.com&#x2F;p&#x2F;39-denis-violet) of an interview &#x2F; studio tour. Upvote:
55
Title: It&#x27;s no news that Silicon Valley Bank is experiencing some trouble.<p>Since most startups in the US bank with them, would you like to share some details of your current situation? Upvote:
161
Title: Hi HN! We are Charly and Bryan, founders of Defer (<a href="https:&#x2F;&#x2F;www.defer.run&#x2F;">https:&#x2F;&#x2F;www.defer.run&#x2F;</a>). Defer is a zero-infrastructure background jobs platform for Node.js developers. As a managed platform that brings modern development standards to background jobs (ex: multi-env support, zero-API design), we enable Node.js developers to build products faster and scale without effort and infrastructure knowledge.<p>Background jobs, while used in all web applications (processing webhooks, interacting with 3rd-party APIs, or powering core features), did not benefit from the developer experience improvements that arose in all other layers of the Node.js API stack: quick and reliable databases with Supabase, or easy serverless deployment with Vercel.<p>Today, even for simple use cases, working with background jobs in Node.js necessarily requires some infrastructure knowledge—either by deploying and scaling an open source solution (ex: BullMQ) or using an IaaS such as AWS SQS with Lambdas, which comes with complexity and limited features (no support for dead letter queues, dynamic concurrency, or throttling).<p>At a large scale, you will need to solve how to handle rolling restarts, how to auto-scale your workers, how to safely deploy without interrupting long-running jobs, how to safely encrypt jobs’ data, and how to version them. Once deployed, your background job’s code lives in a separate part of your codebase, with its own mental model (queues and workers). Finally, most solutions provide technical dashboards which are not always helpful in debugging production issues, so you end up having to build custom dashboards.<p>Most companies we talked to try to handle those different aspects, building similar custom solutions and spending developers’ time that could have gone into user-facing features.<p>Bryan and I are technical founders with 10+ years of experience working at start-ups of all stages (e.g. Algolia, home of HN Search!), from tech lead to CTO roles. Like many developers, we got asked many times to work on background job stacks and invest time into tailoring and scaling them for product needs.<p>I even dedicated most of my time at Algolia to building a custom background jobs pipeline to power the Algolia Shopify integration: ingesting partial webhooks from Shopify, enriching them given customers’ configuration, in FIFO order per shop, against Shopify’s rate-limited API, for thousands of shops and the equivalent of 3 million jobs per day. Given the complex and unique product requirements of the Algolia Shopify Ingestion Pipeline, the only solution (at the time, and in that context) was to build a custom background jobs stack combining Redis and Kubernetes.<p>While consulting with startups, we saw developers choose to keep slow API routes calling 3rd-party APIs synchronously instead of investing time in setting up background jobs. Looking at the recent rise of productive zero-infrastructure solutions in the Node.js ecosystem, we were surprised that the experience with background jobs remained unchanged. 
We decided to build Defer so that working with background jobs, CRONs, and workflows would match the current standard of Node.js developer experience.<p>Inspired by the design of Next.js, Remix, and Netlify, background jobs in Defer become background functions that live in your application’s code, with direct access to all configuration options: retry, concurrency, and more (<a href="https:&#x2F;&#x2F;docs.defer.run&#x2F;features&#x2F;retries-concurrency&#x2F;">https:&#x2F;&#x2F;docs.defer.run&#x2F;features&#x2F;retries-concurrency&#x2F;</a>), and no specific mental model to learn. Your background functions get continuously deployed from GitHub with support for branch-based environments, allowing you to test new background jobs in no time, before safely moving to production.<p>Defer works for all kinds of Node.js projects, not only serverless ones. It does not require you to learn any new architectures or adapt your system design—you just turn your code into background functions using coding patterns you already know, ex: map-reduce, or recursion. Defer brings features such as configurable retries (advanced backoff options), throttling, and concurrency at the background job level, which other solutions either require you to implement yourself or simply do not offer. Finally, the Defer dashboard is the only background jobs dashboard that lets developers quickly find executions based on business&#x2F;product metadata (ex: &quot;Show all executions for `user_id=123`&quot;), making it quick to debug product issues.<p>Defer’s infrastructure, written in Go, is composed of 3 main components: a Build pipeline, a Scheduler, and a Runner. The Build pipeline enables us to build any Node.js project without requiring any configuration file (<a href="https:&#x2F;&#x2F;docs.defer.run&#x2F;platform&#x2F;builds&#x2F;">https:&#x2F;&#x2F;docs.defer.run&#x2F;platform&#x2F;builds&#x2F;</a>). The Scheduler relies on Postgres for persistent storage of your jobs (no risk of losing any)—all jobs’ data is encrypted—and on Redis, as an atomic counter to handle features such as concurrency and throttling (<a href="https:&#x2F;&#x2F;docs.defer.run&#x2F;platform&#x2F;executions&#x2F;">https:&#x2F;&#x2F;docs.defer.run&#x2F;platform&#x2F;executions&#x2F;</a>). Our infrastructure runs on AWS EC2 - leveraging auto-scaling groups, using the containerd API directly from Go.<p>We run a progressive deployment approach to enable uninterrupted long-running jobs (some of our customers’ jobs run for more than 5h) while releasing updates multiple times a day. Once your application is up and running, the Defer dashboard gives you all the essential information to operate background jobs: activity histograms, performance, and Slack alerting upon failures. The executions list comes with rich filters, allowing you to quickly find all the executions linked to a specific customer or other business metadata.<p>In short, we ensure that you get all the essential features, with the best developer experience, and with a fully managed infrastructure and observability tools so you can focus on building your product.<p>All of this would be meaningless without a free plan for small and side projects and usage-based pricing, so that’s what we offer: <a href="https:&#x2F;&#x2F;www.defer.run&#x2F;pricing">https:&#x2F;&#x2F;www.defer.run&#x2F;pricing</a>. 
If you want to give Defer a try, you can get started with a simple GitHub login, without any credit card information required, and our docs are at <a href="https:&#x2F;&#x2F;docs.defer.run">https:&#x2F;&#x2F;docs.defer.run</a>.<p>We would love to get to read about your experience with doing background jobs in Node.js and feedback on what we’ve built. We look forward to your comments! Upvote:
202
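A quick illustration of the Redis-atomic-counter technique described above for concurrency and throttling: a shared counter gates how many jobs may run at once. In this sketch a Python lock stands in for Redis's atomicity; it is a concept illustration, not Defer's code.

    import threading

    class ConcurrencyLimiter:
        # A Python lock stands in for Redis atomicity here; sketch only.
        def __init__(self, limit):
            self.limit = limit
            self.running = 0
            self.lock = threading.Lock()

        def try_acquire(self):
            # Atomic "increment iff under the limit" (think Redis INCR + check).
            with self.lock:
                if self.running >= self.limit:
                    return False
                self.running += 1
                return True

        def release(self):
            with self.lock:
                self.running -= 1

    limiter = ConcurrencyLimiter(limit=2)
    print(limiter.try_acquire(), limiter.try_acquire(), limiter.try_acquire())
    # -> True True False: the third job waits until a slot is released.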
Title: Hi HN, I&#x27;m Dean, the non-technical co-founder of SchemafreeSQL. We released our beta version about a year ago. You can see the HN post here <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=30291592" rel="nofollow">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=30291592</a><p>Today I am pleased to announce the initial release of our hosted SFSQL offering.<p>A major concern in the HN beta feedback we received was our longevity. Being a hosted database solution, I can see why. We took that to heart and re-engineered our offering. We de-risked it by minimizing the amount of infrastructure under our management: fly.io manages customers&#x27; dedicated SFSQL endpoints, Aiven.io manages customers&#x27; dedicated databases across 5 clouds, our serverless offering is a managed AWS Aurora Serverless cluster, our in-house databases are managed by Planetscale.com, and Stripe handles subscriptions. The costs of these services are mostly on-demand, bringing our monthly fixed costs to a very manageable level.<p>I highly recommend all these services.<p>We are bootstrapping SFSQL for now. Our business model, how we make money, is simple. Our prices are higher than our costs. Just like all businesses, margins matter, and because we incur the costs of these services and pass them on, our margins take a hit. We envision eventually being more of an add-on to these service providers and others like them. Our margins would increase and the total cost to our customers would decrease. In this model our pricing is purely value-based.<p>&quot;Data, Fluid as Code&quot; is what we settled on after countless iterations. I believe it captures the &quot;why&quot; question: why did we build this. SFSQL originally started out as an object store for an online dev environment we built in 2000. It has evolved over time. Its schemaless properties were added as we needed to better handle user-provided data structures and the refactoring associated with many of our client projects.<p>Eric, the technical co-founder and creator of SFSQL, answered the &quot;how&quot; question in our HN beta post. For more in-depth info on what&#x27;s going on behind the scenes, Eric is available via email [email protected]; he is not available to respond here.<p>The demo apps were all built by me. Eric would like it known that he is not responsible for those code bases, which are all available on GitHub. I built these apps while testing out SFSQL and seeing if we play nice with the various serverless platforms. Client solutions we have built with SFSQL are not available for public display, so we went with these demo apps I built. The apps show how easy it is to hook up a back-end to a web app with SFSQL, even for a non-programmer like myself.<p>I hope you check out SFSQL. Try it for free, no sign-up required, and please leave us feedback <a href="https:&#x2F;&#x2F;schemafreesql.com&#x2F;givefeedback_HN.html" rel="nofollow">https:&#x2F;&#x2F;schemafreesql.com&#x2F;givefeedback_HN.html</a> Upvote:
42
Title: I’m married. I don’t have kids. I sleep on average 7 hours. I visit the office no more than 2 times a week (so don’t spend a lot of time on commute). I don’t overwork. I barely meet with friends. I don’t play video games. I don’t watch Netflix or TV in general (apart from 1 movie every one-two weeks). I eat simple food that takes no more than 20 min to prepare. I don’t shop for groceries, but order them with delivery. I outsource big house chores to robots (washer, dishwasher, vacuum cleaner).<p>And yet, I barely find time to work on my side projects, or engage in interesting discussions online, or learn new things.<p>So I’m either terrible with time management or I spend my free time on useless things. Or both.<p>Dear HN, how do you find time for side projects, engaging in online discussions, learning new things? Upvote:
53
Title: Just enter the locations people will be traveling from. MLC then calculates the location where the combined aircraft emissions are minimised, based on data from the European Emissions Agency. Upvote:
86
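A brute-force sketch of the kind of optimization described above, under the simplifying assumption that per-passenger emissions scale with great-circle distance flown (the real tool uses emissions data, so its objective is presumably more detailed):

    import math

    def haversine_km(a, b):
        # Great-circle distance between two (lat, lon) points in km.
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2)
             * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(h))

    origins = [(40.7, -74.0), (51.5, -0.1), (35.7, 139.7)]  # NYC, London, Tokyo

    def total_distance(candidate):
        return sum(haversine_km(o, candidate) for o in origins)

    # Grid-search candidate meeting points on a coarse lat/lon grid.
    candidates = [(lat, lon) for lat in range(-60, 61, 2)
                  for lon in range(-180, 180, 2)]
    best = min(candidates, key=total_distance)
    print(best, round(total_distance(best)), "km total")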
Title: Would T-Bill buyers be impacted and how? Upvote:
48
Title: I find the level of writing quality in the essays and articles of these two magazines quite impressive.<p>What other online magazines do you read? Upvote:
257
Title: Been a few weeks since last posted. How goes the hunt? Upvote:
162
Title: Youtube channels that are not managed by huge corporations but are managed by single creators with a humble studio setup or a group of indie creators. Upvote:
87
Title: Throw 16GB at a system, start up a few browsers and other programs, and pretty soon they have used up almost all the RAM. It seems browsers in particular will happily eat all the RAM they can.<p>Why is this? At what point will the hunger of apps for RAM be satiated? With 32GB? 64GB?<p>With all the advances in tech, why is our hardware still struggling to keep up with the demands of the apps we use? Upvote:
46
Title: I sometimes think back fondly on the small automation scripts I wrote over the years, so I wanted to ask you: which scripts, hacks &amp; automations did you create that you&#x27;re proud of? Personally, I built a bot to automatically answer apartment ads [1], a small script to run Borg backup once per day [2], and a small automation to track the temperature in my aquarium [3].<p>1: <a href="https:&#x2F;&#x2F;gist.github.com&#x2F;adewes&#x2F;c9b2a71457c6c6f01f2f" rel="nofollow">https:&#x2F;&#x2F;gist.github.com&#x2F;adewes&#x2F;c9b2a71457c6c6f01f2f</a><p>2: <a href="https:&#x2F;&#x2F;gist.github.com&#x2F;adewes&#x2F;02e8a1f662d100a7ed80627801d0aed0" rel="nofollow">https:&#x2F;&#x2F;gist.github.com&#x2F;adewes&#x2F;02e8a1f662d100a7ed80627801d0a...</a><p>3: <a href="https:&#x2F;&#x2F;gist.github.com&#x2F;adewes&#x2F;7a4c20a5a7379e19d78ba54521d3dc7d" rel="nofollow">https:&#x2F;&#x2F;gist.github.com&#x2F;adewes&#x2F;7a4c20a5a7379e19d78ba54521d3d...</a> Upvote:
332
Title: I am hoping someone out there with more wisdom than me, or more personal development, who went through this same sort of issue and came out the other side better, can help me get some perspective, because this is really killing me... I&#x27;m just going to lay it out as best I can, and please don&#x27;t worry about offending me; if there&#x27;s some cold hard truth I need to hear, it&#x27;s OK, just let me have it.<p>I&#x27;m a software developer with about 15 years of experience, with 70% of my background being in small companies and startups where my position was some form of &quot;software dev&quot; but I ended up doing the solutions architect work, interfacing with clients, working with the sales&#x2F;owners, designing solutions, and implementing them too (with great success!). And I loved it; working with stakeholders and coming up with creative solutions really brings me joy.<p>However, now when I go to bigger companies that pay better, as some form of &quot;senior software dev&quot;, I eventually end up absolutely hating being told what to do or how to build something, especially when it seems to me like it&#x27;s clearly a terrible way of doing something or if I feel like I could have come up with a much better approach, and ESPECIALLY if the person seems to have half my experience or be half my age.<p>Perhaps I have some kind of skewed mentality where product managers or solutions architects are superior to me, and by being pegged as a &quot;lowly coder&quot; I am basically letting them get all the credit, because I often end up spoon-feeding them the solution and they end up thanking me in a meeting and then getting all the glory when those things work out, even though it was my experience and advice that caused them to succeed; they just sap everything from me, and nobody in the business knows I exist.<p>In my mind, this manifests itself as &quot;I should just get that job instead&quot;, but even after some introspection I can&#x27;t actually tell if that is an accurate sentiment. It just drives me insane to have someone other than me doing the &quot;big picture&quot; design work and getting to interface with the customer, but I don&#x27;t have a rational reason other than I have been doing this stuff for so long that I actually do think I know how to do it better than the people I have been dealing with.<p>On top of it all, and probably the more frustrating aspect of this, I would like to get back to doing what I did when I was younger, when I &quot;accidentally&quot; fell into the ideal roles: I took what I thought were standard software developer jobs, but instead they basically made me do all the work (which allowed me to do the really satisfying and creative stuff). Yet I am having a hard time even getting an interview for these types of roles, because people just seem to instantly reject a software engineering background for solutions architect positions.<p>I just don&#x27;t even know what to believe, but it certainly sucks and I think it&#x27;s skewing my ability to make rational career choices. For example, sometimes I just want to start my own software consulting company so that I can decide which aspects of the work I do, but that&#x27;s mostly because of the reasons above, and I have a hunch that those are all terrible reasons to attempt such a thing.<p>Am I just a total piece of shit who resents authority? Why do I feel so awful and feel like people above me are stealing the credit for my work?<p>I need a job. I have kids. 
I feel so incredibly immature for feeling this way and hating people for no reason, but it&#x27;s the truth and I really don&#x27;t know what to do. Thanks. Upvote:
44
Title: Hello,<p>I&#x27;m interested in some thoughts on this. I have nothing to go on but a hunch... but is there any guarantee that popular &quot;file conversion&quot; websites aren&#x27;t honeypots for sensitive or useful information?<p>The odds, to me, of some employee running random files through a file conversion website at some point seem terrifyingly high. And some (like https:&#x2F;&#x2F;fabconvert.com&#x2F;) definitely seem more suspicious than others, lacking any legal entity or trademark I can find. If there were, or are, corrupt file conversion websites out there, it would be the perfect crime. So much so that, if I were running a business, I would not allow employees to touch any such service with a 10-foot pole - but how often is that cited in training for preventing information leaks?<p>Thoughts? Upvote:
88
Title: Hey HN,<p>I&#x27;m a full-time game developer fifteen months into creating my city builder game. It&#x27;s a lonely journey, so I put together a very small group of other solo game developers.<p>We meet up every week (currently Tuesday nights, EST) to relate to the struggle, hang out, and sometimes rotate one person who presents for the night (they can teach or talk about anything game dev related, including their game). It&#x27;s been a success and motivating for all involved. There&#x27;s also a second group that meets on Thursdays, but this group is currently full. There are about 10 people total on the Discord server.<p>I&#x27;m looking to add 2 people to the group who can commit to weekly meetups. You must be working on your game full time. Must be serious about finishing&#x2F;releasing your game.<p>About the group:<p>We are late 20s - 30s and serious about releasing our respective games. We are pretty open and honest with each other, and will question each other&#x2F;provide feedback freely.<p>About our games:<p>My game: Metropolis 1998<p>Person #2 Game: Basketball GM<p>Person #3 Games: 9001 and It Usually Ends In Nuclear War<p>Games from other people on the server who can publicly share:<p>Drift<p>Reisha Falls<p>Email is in profile Upvote:
264
Title: I know groupthink didn&#x27;t cause the SVB collapse alone, but it did lead to many otherwise very sophisticated founders being put at risk by taking their deal because &quot;everyone does it&quot;.<p>As one lesson from this debacle, I think it&#x27;s useful to ask ourselves what are other areas of groupthink that leave us in the tech industry particularly vulnerable?<p>E.g., as an OSS developer, I&#x27;ve used the MIT license for 10+ years because other devs I trust use it, but I don&#x27;t have any real reason why.<p>What are other examples to be aware of? Upvote:
65
Title: I recently observed that the majority of my family members and friends are not using ad blockers and reader mode, for reasons such as lack of knowledge about plugins &amp; laziness.<p>So their online reading experience is not pleasant; as a result, very few of them read anything, including what I share.<p>It&#x27;s an attempt to give them a clutter-, tracker- &amp; ad-free reading experience right off the bat. Upvote:
136
Title: Hi,<p>Charles Schwab does not have too much of a deposit exposure; however, it is going down exactly like SVB and First Capital. Any explanations? Upvote:
66
Title: Hi! We’re Nikhil and Alek, founders of Pynecone (<a href="https:&#x2F;&#x2F;pynecone.io">https:&#x2F;&#x2F;pynecone.io</a>), an open source framework to build web apps in pure Python. This can be anything from a small data science&#x2F;internal app to a large multi-page web app. Once your app is built, you can deploy your app with a single command to our hosting service (coming soon!), or self-host with your preferred provider.<p>Our Github is: <a href="https:&#x2F;&#x2F;github.com&#x2F;pynecone-io&#x2F;pynecone">https:&#x2F;&#x2F;github.com&#x2F;pynecone-io&#x2F;pynecone</a><p>Python is one of the most popular programming languages in the world. Webdev is one of the most popular applications of programming. So why can’t we make full-stack web apps using just Python?<p>We worked in the AI&#x2F;infra space and saw that even skilled engineers who wanted to make web apps but didn’t know traditional frontend tools like Javascript or React found it overwhelming and time consuming to learn. On the other hand, no code and low code solutions that save time in the development process lack the flexibility and robustness of traditional web development. These tools are great for prototyping, but they can be limiting as your app becomes more complex. We wanted to build a framework that is easy to get started with, yet flexible and powerful enough so you don’t outgrow it. Our main website is fully built with Pynecone and deployed on our hosting service.<p>In Pynecone, the frontend compiles down to a React&#x2F;NextJS app, so from the end-user’s perspective it looks like any other website. We have 60+ built-in components ranging from forms to graphing. Components are defined as Python functions. They can be nested within each other for flexible layouts, and you can use keyword args to style them with full CSS. We also provide a way to easily wrap any existing React component. Our goal is to leverage the existing webdev ecosystem and make it accessible to Python devs.<p>The app state is just a class. State updates are functions in the class. And the UI is a reflection of the state. When the user opens the app, they are given a unique token and a new instance of the state. We store user state on the backend, and use Websockets to send events and state updates. When a user performs an action, such as clicking a button, an event is sent to the server with the client token and the function to handle the event. On the server side, we retrieve the user&#x27;s state, execute the function to update the state, then send the updated state back to the frontend for rendering. Since Pynecone is 100% Python, you can easily integrate all your existing Python libraries into your app. In the future, we hope to leverage WebAssembly to offload many operations to the client.<p>Once your app is built, the next big challenge is deploying it. We’re building a single-line deploy, so you can type pc deploy and get a URL of your live app in minutes. Since we specialize in hosting a single type of app, we aim to provide a zero configuration deployment process. We are still working on releasing the hosting service, but you can sign up for its waitlist on our homepage. Alternatively, you can choose to host your app with your preferred cloud provider.<p>Things users have built with Pynecone so far include internal apps ranging from CRM to ML tools, UIs for LLM apps, landing pages, and personal websites. If you use Python, we would love to hear your thoughts and feedback in the comments! Upvote:
550
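For a feel of the model described above (state as a class, UI as Python functions, events as methods), here is roughly what a minimal counter app looks like, adapted from the style of Pynecone's examples; the exact API may differ, so check their docs.

    import pynecone as pc

    class State(pc.State):
        # The app state is just a class; updates are methods on it.
        count: int = 0

        def increment(self):
            self.count += 1

    def index():
        # Components are Python functions; the UI reflects the state.
        return pc.vstack(
            pc.heading(State.count),
            pc.button("Increment", on_click=State.increment),
        )

    app = pc.App(state=State)
    app.add_page(index)
    app.compile()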
Title: Latest up here: https:&#x2F;&#x2F;www.moderntreasury.com&#x2F;svb-resource-center<p>Prior HN thread: https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=35113496 Upvote:
130
Title: Hi HN! I’m Chris Mui, founder of Electric Air (<a href="https:&#x2F;&#x2F;electricair.io">https:&#x2F;&#x2F;electricair.io</a>). We’re building a residential heat pump system. This will be an all-electric replacement for your home’s furnace and air conditioner that enables more centrally ducted installs, manages your indoor air quality, and saves you money on monthly energy bills. We also streamline purchase, finance and install by selling directly to homeowners. You can place a preorder today at <a href="https:&#x2F;&#x2F;electricair.io">https:&#x2F;&#x2F;electricair.io</a>.<p>Heat pumps work by using refrigerant and a compressor to move energy against a temperature gradient. If you put 1 kWh of energy into a heat pump, you get 3-5 kWh of heating in your home. But this isn’t breaking the laws of physics, because heat pumps don’t make heat, they move it around. The extra 2-4 kWh gets absorbed from the outdoors, even when it is cold outside. The low-pressure refrigerant in the outdoor heat exchanger is colder than the outdoor air, so it has to absorb energy. After the compressor, the refrigerant in the indoor heat exchanger is hotter than the indoor air, and energy flows into your home. This happens in a continuous cycle. A great feature in this system is a reversing valve that allows the flow of refrigerant to be flipped, and your heat pump becomes an air conditioner.<p>There’s a big push to end fossil fuel use in US homes by electrifying all end-uses, and heat pumps are a critical part of this. Space heating is 50% of the average homeowner’s energy consumption, and makes up 10% of overall US energy use. Recognizing the importance of heat pump adoption, the recently passed Inflation Reduction Act contains $4.3B in heat pump rebates for low and middle income families, and a $2000 tax credit that applies to everyone. Heat pumps can also save homeowners on their monthly utility bills vs. heating with natural gas, propane, fuel oil, and electric resistance. And thanks to the popularity of vapor injection systems, heat pumps now work well even in the cold climates of the Northeast.<p>Quick technical aside on vapor injection systems - this is an improvement to the basic vapor compression cycle. Gas from the condenser outlet is injected halfway into the compression process. This increases the compressor efficiency, increases the mass flow rate of refrigerant through the compressor, and also lowers the discharge temperature. The result is higher system efficiency, higher heating capacity, and the ability to operate across large temperature gradients (say -15F outside temp to 72F in your home) without exceeding the discharge temperature limit and damaging the compressor.<p>I’ve spent my career building and designing thermal systems—first in aerospace, then at Tesla working on Model 3 and Semi Truck, and most recently in vertical farming. I got really excited about residential heat pumps when I realized that we’re about to go through a huge transition where the 80M single-family homes in the US replace their furnaces with heat pumps.<p>But the products on the market today have a number of shortcomings. The homeowner experience sucks because the integration of thermostat, heat pump equipment and air quality systems is terrible. Nothing works together well, and the best thermostats are not fully compatible with inverter-driven heat pumps. In addition, the process of getting a heat pump is painful, including finding a trustworthy contractor, sorting out financing, and wading through rebates. 
And finally, contractors struggle with installs because of the difficulty of properly sizing the system and understanding whether your ductwork is compatible with a heat pump.<p>I wanted to approach home heating and cooling from a product-design perspective, improve the end-to-end experience for homeowners, and make a product that was compelling beyond its climate motivations. Electric Air is building a thermostat as well as heat pump equipment (air handler and condenser) and a contractor web-app.<p>Better air quality is achieved through a thermostat with PM2.5 and CO2 sensors, as well as an air quality module on the air handler that controls HEPA filtration, fresh air intake and modification of the home’s humidity. The thermostat algorithm combines demand-response with weather and time-of-use rate plans to reduce monthly utility bills through pre-cooling and pre-heating. Unlike a Nest or Ecobee, the thermostat will be able to run the heat pump in variable speed mode. A more powerful air handler blower and contractor software enable more ducted installs - no wall units required. The most common heating system in the US is a natural gas furnace connected to ductwork, with the hot air ultimately coming out of vents in each room. This heat pump is a great replacement for the furnace and air conditioner in these ducted systems. The same software used for ducts also helps contractors perform simple load disaggregation (turn a utility bill into a thermal load calculation) to properly size a heat pump system. In addition, there’s actually some industrial design going into the outdoor condenser, meaning you don’t have to hide it in an alley. And finally, homeowners can purchase this system online. We help with financing and rebates, and connect them with a contractor to do the actual install.<p>How come no one’s doing this? Heat pump manufacturers are bad at making consumer products like thermostats, and the thermostat manufacturers are IoT companies that don’t have the know-how to wade into heat pump equipment manufacture. For heat pump manufacturers, the end customers are largely HVAC contractors, not homeowners. Also, selling direct means disrupting their current distribution strategy, which normally involves selling to regional distributors, and sometimes straight to contractors. Getting this right is a big systems integration problem that the current players are ill-equipped to handle.<p>While we don&#x27;t have any physical prototypes at the moment, we have the industrial design and also largely understand how this will be built. The core technology risk is quite low; it&#x27;s really about executing the scope well and also finding the right product that homeowners find compelling. I&#x27;m working on building traction via preorders (<a href="https:&#x2F;&#x2F;electricair.io">https:&#x2F;&#x2F;electricair.io</a>), and will start building hardware once fundraising is complete, likely in the next few weeks.<p>What issues have you had with your existing heating and cooling, and do you have any interesting stories around a heat pump install or use? I would love to hear your ideas, experiences, and feedback on any and all of the above! Upvote:
927
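The efficiency claim above (1 kWh in, 3-5 kWh of heat out) is easy to sanity-check with back-of-the-envelope arithmetic. The COP, prices, and heating demand below are illustrative assumptions, not Electric Air's numbers.

    # All figures below are assumptions for illustration only.
    cop = 3.5                 # kWh of heat delivered per kWh of electricity
    heat_needed_kwh = 10_000  # assumed annual space-heating demand
    elec_price = 0.15         # $/kWh electricity (assumption)
    gas_price = 0.04          # $/kWh-equivalent of natural gas (assumption)
    furnace_efficiency = 0.92

    heat_pump_cost = heat_needed_kwh / cop * elec_price
    furnace_cost = heat_needed_kwh / furnace_efficiency * gas_price
    print(f"heat pump:   ${heat_pump_cost:,.0f}/yr")
    print(f"gas furnace: ${furnace_cost:,.0f}/yr")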
Title: I&#x27;ve been doing Rails, Python, Node and, you name it, frontend JS web app development for the last 12 years. I think I&#x27;ve gotten too bored with the same challenges that app development presents. Has anyone made the official career pivot to the ML&#x2F;AI field? What did you have to do? Did you have to start from a lower-level entry into the field? Upvote:
104
Title: Given the events over the past weekend, have your views on how SV is run on the business side changed? Upvote:
43
Title: Hey HN,<p>We&#x27;re Alex, Martin and Laurent. We previously founded Wit.ai (W14), which we sold to Facebook in 2015. Since 2019, we&#x27;ve been working on Nabla (<a href="https:&#x2F;&#x2F;nabla.com" rel="nofollow">https:&#x2F;&#x2F;nabla.com</a>), an intelligent assistant for health practitioners.<p>When GPT-3 was released in 2020, we investigated its usage in a medical context[0], with mixed results.<p>Since then we’ve kept exploring opportunities at the intersection of healthcare and AI, and noticed that doctors spend an awful lot of time on medical documentation (writing clinical notes, updating their EHR, etc.).<p>Today, we&#x27;re releasing Nabla Copilot, a Chrome extension generating clinical notes from video consultations, to address this problem.<p>You can try it out, without installation or sign-up, on our demo page: <a href="https:&#x2F;&#x2F;nabla.com&#x2F;copilot-demo&#x2F;" rel="nofollow">https:&#x2F;&#x2F;nabla.com&#x2F;copilot-demo&#x2F;</a><p>Here’s how it works under the hood:<p>- When a doctor starts a video consultation, our Chrome extension auto-starts itself and listens to the active tab as well as the doctor’s microphone.<p>- We then transcribe the consultation using a fine-tuned version of Whisper. We&#x27;ve trained Whisper with tens of thousands of hours of medical consultation and medical term recordings, and we have now reached an error rate which is 3× lower than Google&#x27;s Speech-To-Text.<p>- Once we have the transcript, we feed it to a heavily trained GPT-3, which generates a clinical note.<p>- We finally return the clinical note to the doctor through our Chrome extension; the doctor can copy it to their EHR, and send a version to the patient.<p>This allows doctors to be fully focused on their consultation, and saves them a lot of time.<p>Next, we want to make this work for in-person consultations.<p>We also want to extract structured data (in the FHIR standard) from the clinical note, and feed it to the doctor’s EHR so that it is automatically added to the patient&#x27;s record.<p>Happy to further discuss technical details in comments!<p>---<p>[0]: <a href="https:&#x2F;&#x2F;nabla.com&#x2F;blog&#x2F;gpt-3&#x2F;" rel="nofollow">https:&#x2F;&#x2F;nabla.com&#x2F;blog&#x2F;gpt-3&#x2F;</a> Upvote:
117
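The transcribe-then-summarize pipeline described above can be sketched with the stock open-source models. Nabla uses a fine-tuned Whisper and a heavily trained GPT-3, so treat this only as the shape of the approach; the prompt wording is an assumption.

    import openai
    import whisper  # open-source Whisper; Nabla uses a fine-tuned version

    openai.api_key = "sk-..."  # your OpenAI API key

    # 1) Speech-to-text on the consultation recording.
    model = whisper.load_model("base")
    transcript = model.transcribe("consultation.wav")["text"]

    # 2) Turn the transcript into a structured clinical note.
    prompt = ("Write a structured clinical note (reason for visit, history, "
              "assessment, plan) from this consultation transcript:\n\n"
              + transcript)
    completion = openai.Completion.create(
        model="text-davinci-003", prompt=prompt,
        max_tokens=500, temperature=0.2,
    )
    print(completion.choices[0].text)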
Title: I like having an idea of how many people visit my blog, but I&#x27;m wondering if there&#x27;s a better privacy-conscious solution for simple analytics (I don&#x27;t need advanced features like sales conversions, etc.) that would also be GDPR-compliant without adding cookie banners.<p>Hackers with the same use case (a simple count of blog page visitors): what are you using? Upvote:
46
Title: Hello HN! We’re Max and Lydia, co-founders at CodeComplete AI (<a href="https:&#x2F;&#x2F;codecomplete.ai">https:&#x2F;&#x2F;codecomplete.ai</a>), an AI-powered coding assistant for enterprise companies. Many large companies can’t use products like GitHub Copilot because of the security and privacy risks, so we’re building a self-hosted version that’s fine tuned to the company’s codebase.<p>We love Copilot and believe that AI will change the way developers work. Max wanted to use Copilot when he was an ML engineer at Meta, but leadership blocked him because Copilot requires sending company code to GitHub and OpenAI. We built CodeComplete because lots of other companies are in the same boat, and we want to offer a secure way for these companies to leverage the latest AI-powered dev tools.<p>To that end, our product is really meant for large engineering teams at enterprise companies who can’t use GitHub Copilot. This generally means teams with more than 200 developers that have strict practices against sending their code or other IP externally.<p>CodeComplete offers an experience similar to Copilot; we serve AI code completions as developers type in their IDEs. However, instead of sending private code snippets to GitHub or OpenAI, we use a self-hosted LLM to serve code completions. Another advantage with self-hosting is that it’s more straightforward to securely fine-tune to the company’s codebase. Copilot suggestions aren’t always tailored to a company’s coding patterns or internal libraries, so this can help make our completions more relevant and avoid adding tech debt.<p>To serve code completions, we start with open source foundation models and augment them with additional (permissively-licensed) datasets. Our models live behind your firewall, either in your cloud or on-premises. For cloud deployments, we have terraform scripts that set up our infrastructure and pull in our containers. On-prem deployments are a bit more complicated; we work with the customer to design a custom solution. Once everything’s set up, we train on your codebase and then start serving code completions.<p>To use our product, developers simply download our extension in their IDE (VS Code currently supported, Jetbrains coming soon). After authentication, the extensions provide in-line code completion suggestions to developers as they type.<p>Since we’re a self-hosted enterprise product, we don’t have an online version you can just try out, but here are two quick demos: (1): Python completion, fine-tuned on a mock Twitter-like codebase: <a href="https:&#x2F;&#x2F;youtu.be&#x2F;YqkqtGY4qmc" rel="nofollow">https:&#x2F;&#x2F;youtu.be&#x2F;YqkqtGY4qmc</a>. (2) Java completion for &quot;leetcode&quot;-style problems, like converting integers to roman numerals: <a href="https:&#x2F;&#x2F;youtu.be&#x2F;H4tGoFNC8oI" rel="nofollow">https:&#x2F;&#x2F;youtu.be&#x2F;H4tGoFNC8oI</a>.<p>We take privacy and security seriously. By default, our deployments only send back heartbeat messages to our servers. Our product logs usage data and code snippets to the company’s own internal database so that they can evaluate our performance and improve their models over time. Companies have the option to share a subset of that data with us (e.g. completion acceptance rate, model probabilities output, latencies, etc), but we don’t require it. We never see your code or any other intellectual property.<p>We charge based on seat licenses. For enterprise companies, these contracts often demand custom scoping and requirements. 
In general though, our pricing will be at a premium to GitHub Copilot since there is significant technical and operational overhead with offering a self-hosted product like this.<p>Having access to these types of tools would have saved us a bunch of time in our previous jobs, so we’re really excited to show this to everyone. If you are having similar issues with security and privacy at your current company, please reach out to us at [email protected]! We’d love to hear your feedback. Upvote:
138
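As a toy stand-in for the self-hosted completion serving described above, one can load a small, permissively licensed open code model locally. This is only an illustration of the idea; CodeComplete's models, fine-tuning, and serving stack are of course their own.

    from transformers import pipeline

    # Toy stand-in: a small open code model served locally.
    # Not CodeComplete's actual model or infrastructure.
    generator = pipeline("text-generation",
                         model="Salesforce/codegen-350M-mono")

    prefix = "def fizzbuzz(n):\n"
    out = generator(prefix, max_new_tokens=48, do_sample=False)
    print(out[0]["generated_text"])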
Title: Hi everyone! I’m Samir, and my co-founder Neil and I are building Blyss (<a href="https:&#x2F;&#x2F;blyss.dev">https:&#x2F;&#x2F;blyss.dev</a>). Blyss is an open source homomorphic encryption SDK, available as a fully managed service.<p>Fully homomorphic encryption (FHE) enables computation on encrypted data. This is essentially the ultimate privacy guarantee - a server that does work for its users (like fetching emails, tweets, or search results), without ever knowing what its users are doing - who they talk to, who they follow, or even what they search for. Servers using FHE give you cryptographic proof that they aren’t spying on you.<p>Unfortunately, performing general computation using FHE is notoriously slow. We have focused on solving a simple, specific problem: retrieve an item from a key-value store, without revealing to the server which item was retrieved.<p>By focusing on retrievals, we achieve huge speedups that make Blyss practical for real-world applications: a password scanner like “Have I Been Pwned?” that checks your credentials against breaches, but never learns anything about your password (<a href="https:&#x2F;&#x2F;playground.blyss.dev&#x2F;passwords">https:&#x2F;&#x2F;playground.blyss.dev&#x2F;passwords</a>), domain name servers that don’t get to see what domains you’re fetching (<a href="https:&#x2F;&#x2F;sprl.it&#x2F;" rel="nofollow">https:&#x2F;&#x2F;sprl.it&#x2F;</a>), and social apps that let you find out which of your contacts are already on the platform, without letting the service see your contacts (<a href="https:&#x2F;&#x2F;stackblitz.com&#x2F;edit&#x2F;blyss-private-contact-intersection" rel="nofollow">https:&#x2F;&#x2F;stackblitz.com&#x2F;edit&#x2F;blyss-private-contact-intersecti...</a>).<p>Big companies (Apple, Google, Microsoft) are already using private retrieval: Chrome and Edge use this technology today to check URLs against blocklists of known phishing sites, and check user passwords against hacked credential dumps, without seeing any of the underlying URLs or passwords.<p>Blyss makes it easy for developers to use homomorphic encryption from a familiar, Firebase-like interface. You can create key-value data buckets, fill them with data, and then make cryptographically private retrievals. No entity, not even the Blyss service itself, can learn which items are retrieved from a Blyss bucket. We handle all the server infrastructure, and maintain robust open source JS clients, with the cryptography written in Rust and compiled to WebAssembly. We also have an open source server you can host yourself.<p>(Side note: a lot of what drew us to this problem is just how paradoxical the private retrieval guarantee sounds—it seems intuitively like it should be impossible to get data from a server without it learning what you retrieve! The basic idea of how this is actually possible is: the client encrypts a one-hot vector (all 0’s except a single 1) using homomorphic encryption, and the server is able to ‘multiply’ these by the database without learning anything about the underlying encrypted values. The dot product of the encrypted query and the database yields an encrypted result. The client decrypts this, and gets the database item it wanted. To the server, all the inputs and outputs stay completely opaque. 
We have a blog post explaining more, with pictures, that was on HN previously: <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=32987155" rel="nofollow">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=32987155</a>.)<p>Neil and I met eight years ago on the first day of freshman year of college; we’ve been best friends (and roommates!) since. We are privacy nerds—before Blyss, I worked at Yubico, and Neil worked at Apple. I’ve had an academic interest in homomorphic encryption for years, but it became a practical interest when a private Wikipedia demo I posted on HN (<a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=31668814" rel="nofollow">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=31668814</a>) became popular, and people started asking for a simple way to build products using this technology.<p>Our client and server are MIT open source (<a href="https:&#x2F;&#x2F;github.com&#x2F;blyssprivacy&#x2F;sdk">https:&#x2F;&#x2F;github.com&#x2F;blyssprivacy&#x2F;sdk</a>), and we plan to make money as a hosted server. Since the server is tricky to operate at scale, and is not part of the trust model, we think this makes sense for both us and our customers. People have used Blyss to build block explorers, DNS resolvers, and malware scanners; you can see some highlights in our playground: <a href="https:&#x2F;&#x2F;playground.blyss.dev">https:&#x2F;&#x2F;playground.blyss.dev</a>.<p>We have a generous free tier, and you get an API key as soon as you log in. For production use, our pricing is usage-based: $1 gets you 10k private reads on a 1 GB database (larger databases scale costs linearly). You can also run the server yourself.<p>Private retrieval is a totally new building block for privacy - we can’t wait to see what you’ll build with it! Let us know what you think, or if you have any questions about Blyss or homomorphic encryption in general. Upvote:
206
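The one-hot dot-product trick in the parenthetical above is easy to demystify without the cryptography. This sketch omits the encryption entirely; in real FHE-based PIR the query vector is encrypted, so the server computes the same product without ever learning the index.

    import numpy as np

    # One-hot retrieval WITHOUT the encryption, for intuition only.
    database = np.array([101, 202, 303, 404])  # the server's items
    wanted = 2                                 # index the client wants

    query = np.zeros(len(database), dtype=int)
    query[wanted] = 1          # in real PIR this vector is FHE-encrypted,
                               # so the server never sees where the 1 is
    result = database @ query  # server-side dot product
    print(result)              # -> 303, the item at index 2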
Title: As the title says, curious if anyone else in here is experiencing a slightly slower HN. I&#x27;d say I started noticing it a couple of hours ago. Maybe it&#x27;s because of so much stuff happening recently (OpenAI, FB layoffs, etc.) and the website not handling the load well? Upvote:
177
Title: I know obviously there is a ton of discussion about GPT-4 in the main announcement post, and while I&#x27;m blown away by a lot of these capabilities, I honestly don&#x27;t understand how our capitalist system will survive this long term. I&#x27;m not a Luddite, but it&#x27;s pretty easy for me to see how this will get rid of the need for tons of jobs. People always love to say &quot;technology replacing people has been happening since the beginning of time&quot;, but the change here (at least to me) is that the <i>rate</i> at which AI can fill jobs will (or already has?) hit a tipping point where jobs disappear faster than new ones pop up.<p>How do other people feel about this? I&#x27;ve discounted tons of hype cycles in the past (crypto&#x2F;Blockchain, Metaverse, etc.), even in cases where I was wrong (e.g. the importance of mobile), but this feels at least as consequential as the Internet to me. Upvote:
57
Title: Hi HN,<p>As a developer, I much prefer writing code to reading other people&#x27;s code. But most projects I work on involve other developers, so it&#x27;s hard to avoid. Often I find it really hard to quickly parse other people&#x27;s code, so I thought maybe ChatGPT could help.<p>ChatGPT does a really nice job of giving clear explanations when you paste in code and ask it to explain it. But the interface is a bit of a pain to use if you&#x27;re using it many times a day. It&#x27;s also hard to share explanations with coworkers. So I built whatdoesthiscodedo.com. Just paste your code and get your ChatGPT explanation and a shareable link you can give to coworkers.<p>I&#x27;m hopeful it can also be helpful to folks who aren&#x27;t professional developers. My co-founder at my main company, Beam Analytics, is more of a product guy, so he only needs to write scripts etc. Before, he&#x27;d often just find code online to copy and then struggle to make it work. But with this tool, he says he&#x27;s getting better intuition for understanding the code he&#x27;s using, and how to modify it.<p>We&#x27;d love your feedback. Email us at hi (at) beamanalytics.io or DM me on Twitter @TheBuilderJR Upvote:
186
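A tool like this is, at its core, a thin wrapper around a chat-completion call. Here is a minimal sketch with the then-current OpenAI Python client; the prompt wording is an assumption, not the site's actual prompt.

    import openai

    openai.api_key = "sk-..."  # your OpenAI API key

    snippet = "print(sum(x * x for x in range(10)))"
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            # The system/user prompts below are illustrative guesses.
            {"role": "system",
             "content": "Explain code clearly for a beginner."},
            {"role": "user",
             "content": f"What does this code do?\n\n{snippet}"},
        ],
    )
    print(resp.choices[0].message.content)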
Title: From the GPT-4 paper: &quot;Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.&quot;<p>While OpenAI&#x27;s goal of creating AGI is admirable, the approach is anything but open, and I fear it is pushing research for the industry more broadly into closed territory, restricted to the hands of a few, under the guise of maintaining AI safety.<p>A few data points:<p>- Transition from non-profit to &quot;capped-profit&quot; corporation (investment returns up to 100x) in 2019<p>- GPT-2: &quot;phased rollout&quot;<p>- GPT-3: parameter weights not released<p>- GPT-4: no details about # of parameters, architecture, research details, and certainly not parameter weights<p>The latest move would also encourage other industry players to restrict research detail disclosure (as otherwise OpenAI would be in an unfairly advantageous position by having access to others&#x27; work) Upvote:
61
Title: I see it’s common for people on HN to have a personal website&#x2F;blog. I’m interested in knowing if the creation and maintenance of a personal website have lead to paid full&#x2F;part time jobs, increased learning, brought new connections to others or are purely vanity. Upvote:
513
Title: I use a VPN all day long and lately I&#x27;ve been getting stuck filling out 2-5 reCAPTCHAs each time I want to view a site, log in, or perform a function. In the distant past, during my bot-making days, there were a number of CAPTCHA-solving services that cost a small fee per CAPTCHA successfully solved. I see there are still many of these services today. I checked the Mozilla extension store and there&#x27;s one that looks very sketchy but possibly works - reCAPTCHA solver by DoZz. Half the reviews are 5* and the other half are 1* and &#x27;scam&#x27; or &quot;doesn&#x27;t work.&quot;<p>Are there other less-known extensions? Upvote:
91
Title: Have you built any project or business that you didn&#x27;t think would generate any revenue, but is generating a good enough income for you? Upvote:
55
Title: Looking for feedback on my project to teach Python by writing code that interacts with a Minecraft world. Upvote:
171
Title: Leaving aside the fact that nothing can beat actual experience: what books helped you in your entrepreneurship journey? Upvote:
303
Title: Hey HN! We’re Caleb and Josh, the founders of BuildFlow (<a href="https:&#x2F;&#x2F;www.buildflow.dev" rel="nofollow">https:&#x2F;&#x2F;www.buildflow.dev</a>). We provide an open source framework for building your entire data pipeline quickly using Python. You can think of us as an easy alternative to Apache Beam or Google Cloud Dataflow.<p>The problem we&#x27;re trying to solve is simple: building data pipelines can be a real pain. You often need to deal with complex frameworks, manage external cloud resources, and wire everything together into a single deployment (you’re probably drowning in YAML by this point in the dev cycle). This can be a burden on both data scientists and engineering teams.<p>&quot;Data pipelines&quot; is a broad term, but we generally mean any kind of processing that happens outside of the user-facing path. This can be things like: processing file uploads, syncing data to a data warehouse, or ingesting data from IoT devices.<p>BuildFlow, our open-source framework, lets you build a data pipeline by simply attaching a decorator to a Python function. All you need to do is describe where your input is coming from and where your output should be written, and BuildFlow handles the rest. No configuration outside of the code is required. See our docs for some examples: <a href="https:&#x2F;&#x2F;www.buildflow.dev&#x2F;docs&#x2F;intro" rel="nofollow">https:&#x2F;&#x2F;www.buildflow.dev&#x2F;docs&#x2F;intro</a>.<p>When you attach the decorator to your function, the BuildFlow runtime creates your referenced cloud resources, spins up replicas of your processor, and wires up everything needed to efficiently scale out the reads from your source and then writes to your sink. This lets you focus on writing logic as opposed to interacting with your external dependencies.<p>BuildFlow aims to hide as much complexity as possible in the sources &#x2F; sinks so that your processing logic can remain simple. The framework provides generic I&#x2F;O connectors for popular cloud services and storage systems, in addition to &quot;use case driven&quot; I&#x2F;O connectors that chain together multiple I&#x2F;O steps required by common use cases. An example &quot;use case driven&quot; source that chains together GCS pubsub notifications &amp; fetching GCS blobs can be seen here: <a href="https:&#x2F;&#x2F;www.buildflow.dev&#x2F;docs&#x2F;io-connectors&#x2F;gcs_notifications" rel="nofollow">https:&#x2F;&#x2F;www.buildflow.dev&#x2F;docs&#x2F;io-connectors&#x2F;gcs_notificatio...</a><p>BuildFlow was inspired by our time at Verily (Google Life Sciences), where we designed an internal platform to help data scientists build and deploy ML infra &#x2F; data pipelines using Apache Beam. Using a complex framework was a burden on our data science team because they had to learn a whole new paradigm to write their Python code in, and our engineering team was left with the operational load of helping folks learn Apache Beam while also managing &#x2F; deploying production pipelines. From this pain, BuildFlow was born.<p>Our design is based around two observations we made from that experience:<p>(1) The hardest thing to get right is I&#x2F;O. Efficiently fanning out I&#x2F;O to workers, concurrently reading &#x2F; processing input data, catching schema mismatches before runtime, and configuring cloud resources is where most of the pain is. BuildFlow attempts to abstract away all of these bits.<p>(2) Most use cases are large scale but not (overly) complex. 
Existing frameworks give you scalability and a complicated programming model that supports every use case under the sun. BuildFlow provides the same scalability but focuses on common use cases so that the API can remain lightweight &amp; easy to use.<p>BuildFlow is open source, but we offer a managed cloud offering that allows you to easily deploy your pipelines to the cloud. We provide a CLI that deploys your pipeline to a managed kubernetes cluster, and you can optionally opt in to letting us manage your resources &#x2F; terraform as well. Ultimately this will feed into our VS Code Extension which will allow users to visually build their data pipelines directly from VS Code (see <a href="https:&#x2F;&#x2F;launchflow.com">https:&#x2F;&#x2F;launchflow.com</a> for a preview). The extension will be free to use and will come packaged with a bunch of nice-to-haves (code generation, fuzzing, tracing, and arcade games (yep!) just to name a few in the works).<p>Our managed offering is still in private beta but we’re hoping to release our CLI in the next couple weeks. Pricing for this service is still being ironed out but we expect it to be based on usage.<p>We’d love for you to try BuildFlow and would love any feedback. You can get started right away by installing the python package: pip install buildflow. Check out our docs (<a href="https:&#x2F;&#x2F;buildflow.dev&#x2F;docs&#x2F;intro" rel="nofollow">https:&#x2F;&#x2F;buildflow.dev&#x2F;docs&#x2F;intro</a>) and GitHub (<a href="https:&#x2F;&#x2F;github.com&#x2F;launchflow&#x2F;buildflow">https:&#x2F;&#x2F;github.com&#x2F;launchflow&#x2F;buildflow</a>) to see examples on how to use the API.<p>This project is very new, so we’d love to gather some specific feedback from you, the community. How do you feel about a framework managing your cloud resources? We’re considering adding a module that would let BuildFlow create &#x2F; manage your terraform for you (terraform state would be dumped to disk). What are some common I&#x2F;O operations you find yourself rewriting? What are some operational tasks that require you to leave your code editor? We’d like to bring as many tasks into BuildFlow and our VSCode extension so you can avoid context switches. Upvote:
104
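To picture the decorator pattern described above, here is a hypothetical sketch. The names (Flow, processor, the connector classes) are illustrative guesses at the API's shape, not the real BuildFlow API; consult their docs for actual usage.

    import buildflow  # hypothetical usage; names below are illustrative

    flow = buildflow.Flow()

    # Declare where input comes from and where output goes; the runtime
    # provisions the resources and scales the reads/writes for you.
    # `Flow`, `processor`, and the connector classes are guesses -- see
    # the BuildFlow docs for the real names.
    @flow.processor(
        source=buildflow.PubSubSource("projects/p/subscriptions/s"),
        sink=buildflow.BigQuerySink("project.dataset.events"),
    )
    def clean_event(event: dict) -> dict:
        # Only your logic lives here; I/O is handled by the framework.
        return {"user": event["user_id"], "ts": event["timestamp"]}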
Title: Hey HN! Charles here from Prequel (https:&#x2F;&#x2F;prequel.co). We just launched the ability for companies to import data from their customer’s data warehouse or database, and we wanted to share a little bit more about it with the community.<p>If you just want to see how it works, here’s a demo of the product that Conor recorded: https:&#x2F;&#x2F;www.loom.com&#x2F;share&#x2F;4724fb62583e41a9ba1a636fc8ea92f1.<p>Quick background on us: we help companies integrate with their customer’s data warehouse or database. We’ve been busy helping companies export data to their customers – we’re currently syncing over 40bn rows per month on behalf of companies. But folks kept on asking us if we could help them import data from their customers too. They wanted the ability to offer a 1st-party reverse ETL to their customers, similar to the 1st-party ETL capability we already helped them offer. So we built that product, and here we are.<p>Why would people want to import data? There are actually plenty of use cases here. Imagine a usage-based billing company that needs to get a daily pull from its customers of all the billing events that happened, so that they can generate relevant invoices. Or a fraud detection company that needs to get the latest transaction data from its customers so it can appropriately mark fraudulent ones.<p>There’s no great way to import customer data currently. Typically, people solve this one of two ways today. One is importing data via CSV. This works well enough, but it requires ongoing work on the part of the customer: they need to put a CSV together, and upload it to the right place on a daily&#x2F;weekly&#x2F;monthly basis. This is painful and time-consuming, especially for data that needs to be continuously imported. The other is making the customer write custom code to feed data to the company’s API. This requires the customer to do a bunch of solutions engineering work just to get started using the product – which is a suboptimal onboarding experience.<p>So instead, we let the customer connect their database or data warehouse and we pull data directly from there, on an ongoing basis. They select which tables to import (and potentially map some columns to required fields), and that’s it. The setup only takes 5 minutes, and requires no ongoing work. We feel like that’s the kind of experience every company should provide when onboarding a new customer.<p>Importing all this data continuously is non-trivial, but thankfully we can actually reuse 95% of the infrastructure we built for data exports. It turns out our core transfer logic remains pretty much exactly the same, and all we had to do was ship new CRUD endpoints in our API layer to let users configure their source&#x2F;destination. As a brief reminder about our stack, we run a Golang backend and TypeScript&#x2F;React frontend on k8s.<p>In terms of technical design, the most challenging decisions we have to make are around making databases’ type systems play nicely with each other (kind of an evergreen problem, really). For imports, we allow the data recipient to specify whether they want to receive this data as a JSON blob, or as a nicely typed table. If they choose the latter, they specify exactly which columns they’re expecting, as well as what type guarantees those should uphold. We’re also working on the ability to feed that data directly into an API endpoint, and adding post-ingestion validation logic.<p>We’ve mentioned this before but it bears repeating. 
We know that security and privacy are paramount here. We&#x27;re SOC 2 Type II certified, and we go through annual white-box pentests to make sure that all our code is up to snuff. We never store any of the data anywhere on our servers. Finally, we offer on-prem deployments, so data never even has to touch our servers if our customers don&#x27;t want it to.<p>We’re really stoked to be sharing this with the community. We’ll be hanging out here for most of the day, but you can also reach us at hn (at) prequel.co if you have any questions! Upvote:
40
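A minimal sketch of the "JSON blob vs. typed table" choice described above: the recipient declares expected columns and type guarantees, and each imported row is either coerced to that schema or passed through as raw JSON. The schema format here is an illustrative assumption, not Prequel's actual API.

```python
# Sketch: coerce an imported row to a declared schema, or keep it as
# a JSON blob. The SCHEMA format is a toy assumption for illustration.
import json
from datetime import date

SCHEMA = {"event_id": int, "amount": float, "billed_on": date.fromisoformat}

def to_typed_row(raw: dict) -> dict:
    # Raises if the customer's data violates the declared type guarantees.
    return {col: cast(raw[col]) for col, cast in SCHEMA.items()}

def to_json_blob(raw: dict) -> str:
    # The untyped alternative: ship the row through as a JSON blob.
    return json.dumps(raw)

row = {"event_id": "42", "amount": "19.90", "billed_on": "2023-03-01"}
print(to_typed_row(row))   # {'event_id': 42, 'amount': 19.9, ...}
print(to_json_blob(row))
```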
Title: Hey HN, we are Remen, Ben, and Nick, the founders of Propify (<a href="https:&#x2F;&#x2F;getpropify.com">https:&#x2F;&#x2F;getpropify.com</a>), an API aggregator for residential property management. We abstract over archaic APIs and merge them into a single modern API for the real estate industry, giving companies access to multiple property management systems (PMSs) via a REST API. Think merge.dev or Plaid for real estate.<p>Property managers nowadays use software to operate residential rental properties. Our customers are not these property managers directly, but companies who provide services to both property managers and renters. Our customers are solving problems around resident screening, security, parking, maintenance, etc., which property managers typically outsource.<p>Compared to most other industries in 2023, property management still runs on old tech with bad&#x2F;wrong&#x2F;absent API documentation. Creating and maintaining integrations with these systems is incredibly painful (1999 called and wants to tell you about its cool new SOAP technology!)<p>As an example of what we’re solving for, one prospective customer told us they regularly get 503 (service unavailable) errors from one of these PMSs at the beginning of each month because the system can’t process rent payments and other requests at the same time. We address challenges like this using an exponential backoff retry strategy.<p>We abstract over all these APIs to give our customers software they can reasonably use. Our goal is to eliminate old tech, poor docs, and unreliable infra for our customers so they can focus on delivering value instead of fighting with integrations.<p>We offer a tested RESTful API with accurate documentation and modern architecture that we can scale. In addition, we want to make the developer experience as good as possible with things like webhooks, SDKs, websockets, and even a GraphQL API (coming later this year).<p>Before Propify, we built a rent payment app to bridge the technology gap between renters and landlords. Then we encountered the pain of PMS integrations and decided to pivot and solve that problem instead.<p>dang suggested that we include a product demo or video but since our core product is an API, there’s not that much to show—sorry! But our docs are here: <a href="https:&#x2F;&#x2F;docs.getpropify.com&#x2F;">https:&#x2F;&#x2F;docs.getpropify.com&#x2F;</a>, and if you’re working&#x2F;interested in this space and have your own credentials for one of our supported integrations, we’ll be happy to get you a sandbox account and a demo.<p>This industry has been stagnant because of the barrier to entry for integrating with these vital PM systems. Our goal is to unlock innovation by pulling real estate tech into the modern world. Developer experience is incredibly important to us, and we are actively looking for areas where we can improve. We welcome your feedback, questions, and comments! Upvote:
95
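The exponential-backoff retry strategy mentioned above looks roughly like this; a minimal sketch, assuming a plain HTTP GET against a flaky PMS endpoint rather than Propify's actual client code.

```python
# Minimal sketch of exponential backoff with jitter for endpoints that
# intermittently return 503 (e.g. during month-start rent processing).
import random
import time

import requests  # pip install requests

def get_with_backoff(url: str, max_retries: int = 5) -> requests.Response:
    for attempt in range(max_retries):
        resp = requests.get(url, timeout=10)
        if resp.status_code != 503:
            return resp
        # Sleep 1s, 2s, 4s, ... plus jitter to avoid thundering herds.
        time.sleep(2 ** attempt + random.random())
    raise RuntimeError(f"{url} still unavailable after {max_retries} tries")
```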
Title: Yesterday someone tried to deposit money in my account and they were told my account was &quot;closed&quot;. I logged into my account to see and everything looked normal, so I told them they probably had the number wrong and to try again. They went back the next day and got the same response. &quot;Account is closed.&quot;<p>I called Chase today and was informed that my account was &quot;locked&quot;. I asked why and how I could get it unlocked. They ended up forwarding me multiple times and finally sent me to the Chase compliance department. They informed me again that the account was &quot;locked&quot; and said the account will be closed in a couple of days. I asked why and they said my account had been flagged, they did a review, and there is no way to reverse the decision. In addition, I am no longer allowed to open any new accounts. I asked what the reason was and they said &quot;For compliance reasons we are not allowed to disclose the reason&quot;. They mentioned that the only way they could reverse the decision was to speak with a branch manager.<p>I immediately went to my local branch and spoke with my branch manager. After multiple phone calls by the manager to the Chase compliance department, she informed me there was no way to appeal and my account would be closed. In addition, my other checking and savings accounts will be closed in the next couple of days and I won&#x27;t be able to open any new accounts with Chase (new info). The bank manager said she has no power in this situation and doesn&#x27;t even know the reason. If the compliance team decides something, the branch manager can&#x27;t do anything.<p>I never even thought this was a possibility and I feel powerless. Anyone have any ideas of what I should do next? The crazy thing is that when they lock an account, it&#x27;s frozen for 10-15 days and then when it&#x27;s closed they send a check in the mail. My money is now being held hostage at Chase until they decide to send me a check. Upvote:
76
Title: Hi, my name is Miguel and I am very happy to share what&#x27;s been months&#x27; worth of work :)<p>The project has rough edges for sure, but any early feedback, comments or concerns are appreciated!<p>=== The Problem ===<p>You work on the Security and Operations (SecOps) team in charge of your organization&#x27;s Software Supply Chain Security. You feel pretty good about the state of things already: your developer teams are signing their commits and deliverables, scanning for vulnerabilities… Life is good!<p>Then you realize that you are not compliant with the latest security requirements. You get referred to slsa.dev and are told that you need to be at least level 3, whatever that means!<p>Aha! You “just” need to implement an attestation and artifact layer in your Software Supply Chain, which you complete after a couple of months of work.<p>Now to the easy part (or so you think): getting the developer teams to adopt it.<p>You quickly realize that standardizing best practices and security requirements is very hard. Development and SecOps team dynamics are clashy and poorly defined due to mismatched priorities. Also, from the developer&#x27;s point of view, it’s very time-consuming and frustrating to pollute your CI&#x2F;CD systems with convoluted, error-prone and complex processes to comply with the SecOps team.<p>So there has to be a better way that satisfies both sides...<p>=== The Solution ===<p>Enter Chainloop. You can think of it as an API for your organization&#x27;s Software Supply Chain that both parties can use to interact effectively and reconcile their mismatched priorities.<p>SecOps teams regain security compliance, visibility, standardization and control by having a mechanism to define and propagate attestation requirements. Developers, on the other hand, get jargon-free tooling that can be used to meet compliance with minimum friction and effort.<p>=== Give it a try ===<p>Eager for feedback from the community, so please reach out. Happy to chat!<p>Thanks!<p>PS: You can see an attestation end-to-end demo here <a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=Q_0dlBqKtIU&amp;t=384s">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=Q_0dlBqKtIU&amp;t=384s</a> Upvote:
45
Title: Hello HN,<p>We&#x27;re excited to introduce Aperture[0], an open-source project that addresses the challenges of detecting and mitigating reliability and performance issues in microservices. Aperture provides a reliability abstraction layer, enabling globalized load control for easier management across distributed microservice architectures[1].<p>DoorDash uses Aperture and detailed its implementation in a recent blog post[2], discussing microservice architecture failures such as cascading failure, retry storm, death spiral, and metastable failure. The post also examines the limitations of existing countermeasures like load shedding, circuit breakers, and auto-scaling in coordinating mitigation across services.<p>Our CTO, Tanveer Gill, demonstrated Aperture&#x27;s capabilities at a recent conference[3]. Aperture uses prioritized load shedding for automatic detection and handling of request overloads, enabling graceful degradation and prioritization. Its unified intelligent load management system coordinates services during outages, and distributed rate limiting protects vulnerable APIs from heavy-hitters.<p>[0] https:&#x2F;&#x2F;github.com&#x2F;fluxninja&#x2F;aperture<p>[1] https:&#x2F;&#x2F;docs.fluxninja.com&#x2F;<p>[2] https:&#x2F;&#x2F;doordash.engineering&#x2F;2023&#x2F;03&#x2F;14&#x2F;failure-mitigation-for-microservices-an-intro-to-aperture&#x2F;<p>[3] https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=yHKPXsZOc5I Upvote:
45
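As a rough illustration of the prioritized load shedding idea described above: under overload, low-priority requests are rejected first so critical traffic degrades gracefully. The thresholds and admission logic here are toy assumptions, not Aperture's actual policy engine.

```python
# Toy prioritized load shedder: as utilization rises, lower-priority
# requests are rejected first. Thresholds are illustrative only.
import threading

class PrioritizedLimiter:
    # Priority levels: 0 = best-effort, 1 = normal, 2 = critical.
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.in_flight = 0
        self.lock = threading.Lock()

    def try_admit(self, priority: int) -> bool:
        with self.lock:
            load = self.in_flight / self.capacity
            if load >= 1.0:
                return False              # full: shed everything
            if load > 0.95 and priority < 2:
                return False              # near-full: critical only
            if load > 0.80 and priority < 1:
                return False              # heavy: shed best-effort
            self.in_flight += 1
            return True

    def release(self) -> None:
        with self.lock:
            self.in_flight -= 1

limiter = PrioritizedLimiter(capacity=100)
if limiter.try_admit(priority=0):
    try:
        pass  # handle the request
    finally:
        limiter.release()
else:
    pass  # respond 503 to the best-effort caller
```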
Title: Hello HN!<p>TLDR;<p>- Quality News is a Hacker News client that provides additional data and insights on submissions, notably, the upvoteRate metric.<p>- We propose that this metric could be used to improve the Hacker News ranking score.<p>- In-depth explanation: <a href="https:&#x2F;&#x2F;github.com&#x2F;social-protocols&#x2F;news#readme">https:&#x2F;&#x2F;github.com&#x2F;social-protocols&#x2F;news#readme</a><p>The Hacker News ranking score is directly proportional to upvotes, which is a problem because it creates a feedback loop: higher rank leads to more upvotes leads to higher rank, and so on...<p><pre><code>        →
   ↗          ↘
Higher Rank    More Upvotes
   ↖          ↙
        ←
</code></pre> As a consequence, success on HN depends almost entirely on getting enough upvotes in the first hour or so to make the front page and get caught in this feedback loop. And getting these early upvotes is largely a matter of timing, luck, and moderator decisions. And so the best stories don&#x27;t always make the front page, and the stories on the front page are not always the best.<p>Our proposed solution is to use upvoteRate instead of upvotes in the ranking formula. upvoteRate is an estimate of how much more or less likely users are to upvote a story compared to the average story, taking into account how much attention the story has received, based on a history of the ranks and times at which it has been shown. You can read about how we calculate this metric in more detail here: <a href="https:&#x2F;&#x2F;github.com&#x2F;social-protocols&#x2F;news#readme">https:&#x2F;&#x2F;github.com&#x2F;social-protocols&#x2F;news#readme</a><p>About 1.5 years ago, we published an article with this basic idea of counteracting the rank-upvotes feedback loop by using attention as negative feedback. We received very valuable input from the HN community (<a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=28391659" rel="nofollow">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=28391659</a>). Quality News has been created based largely on this feedback.<p>Currently, Quality News shows the upvoteRate metric for live Hacker News data, as well as charts of the rank and upvote history of each story. We have not yet implemented an alternative ranking algorithm, because we don&#x27;t have access to data on flags and moderator actions, which are a major component of the HN ranking score.<p>We&#x27;d love to see the Hacker News team experiment with the new formula, perhaps on an alternative front page. This would allow the community to evaluate whether the new ranking formula is an improvement over the current one.<p>We look forward to discussing our approach with you!<p>Links:<p>Site: <a href="https:&#x2F;&#x2F;news.social-protocols.org&#x2F;" rel="nofollow">https:&#x2F;&#x2F;news.social-protocols.org&#x2F;</a><p>Readme: <a href="https:&#x2F;&#x2F;github.com&#x2F;social-protocols&#x2F;news#readme">https:&#x2F;&#x2F;github.com&#x2F;social-protocols&#x2F;news#readme</a><p>Previous Blog Post: <a href="https:&#x2F;&#x2F;felx.me&#x2F;2021&#x2F;08&#x2F;29&#x2F;improving-the-hacker-news-ranking-algorithm.html" rel="nofollow">https:&#x2F;&#x2F;felx.me&#x2F;2021&#x2F;08&#x2F;29&#x2F;improving-the-hacker-news-ranking...</a><p>Previous Discussion: <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=28391659" rel="nofollow">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=28391659</a> Upvote:
140
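The upvoteRate idea sketched above can be made concrete in a few lines: compare a story's actual upvotes to the upvotes an average story would have collected given the same rank/time history. The per-rank attention weights below are toy assumptions, not the project's fitted values; the readme has the real derivation.

```python
# Sketch of upvoteRate = actual upvotes / expected upvotes, where the
# expectation comes from the story's rank/time history. Toy weights.
EXPECTED_UPVOTES_PER_MINUTE = {1: 1.0, 2: 0.6, 3: 0.4, 10: 0.1}

def upvote_rate(upvotes: int, rank_history: list[tuple[int, int]]) -> float:
    """rank_history: (rank, minutes_spent_at_that_rank) pairs."""
    expected = sum(
        EXPECTED_UPVOTES_PER_MINUTE.get(rank, 0.05) * minutes
        for rank, minutes in rank_history
    )
    return upvotes / expected if expected else 0.0

# A story with 90 upvotes and attention worth ~60 expected upvotes is
# upvoted at 1.5x the rate of the average story shown the same way.
print(upvote_rate(90, [(1, 30), (2, 30), (10, 120)]))  # 1.5
```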
Title: Hi Hacker News! We’re Jason and Ayan, the cofounders of Sidekick (<a href="https:&#x2F;&#x2F;www.getsidekick.ai&#x2F;">https:&#x2F;&#x2F;www.getsidekick.ai&#x2F;</a>). We made a GPT-driven bot for developers that runs in Slack and Discord and answers questions about your developer docs, while automatically keeping them up to date with new info. You can join our public Slack where you can ask Sidekick questions about Airbyte, a popular open source data connector catalog: <a href="https:&#x2F;&#x2F;join.slack.com&#x2F;t&#x2F;sidekick-public&#x2F;shared_invite&#x2F;zt-1ra86qug3-~UWNCISLWpNj55Im6C6OaQ" rel="nofollow">https:&#x2F;&#x2F;join.slack.com&#x2F;t&#x2F;sidekick-public&#x2F;shared_invite&#x2F;zt-1r...</a><p>Or if you prefer not to sign up to our Slack, here&#x27;s a demo video showing the same thing: <a href="https:&#x2F;&#x2F;youtu.be&#x2F;fPhP1325RkI" rel="nofollow">https:&#x2F;&#x2F;youtu.be&#x2F;fPhP1325RkI</a><p>We’re in the process of making everything open source (there are some contractual issues we’re working through), but our client-side code and basic infra are here: <a href="https:&#x2F;&#x2F;github.com&#x2F;ai-sidekick&#x2F;sidekick">https:&#x2F;&#x2F;github.com&#x2F;ai-sidekick&#x2F;sidekick</a>.<p>Providing technical support to developers has been expensive for companies because they need to hire skilled engineers to do it. We’ve seen community support channels with a 2000:1 ratio of developers to support engineers - there’s no way every question will get answered. We built Sidekick to make this much easier. It’s particularly helpful for open-source companies&#x2F;projects because many OSS communities have a lot of people asking questions, but hardly anyone helping troubleshoot.<p>We integrate with Slack and Discord, since that’s where developer support is already happening. On the backend, we use Weaviate to index the data and OpenAI’s text-davinci-003 model to generate responses.<p>In addition to answering questions, Sidekick can also update .md files automatically with new information. When someone reacts to a message in Slack with the emoji, Sidekick will use Weaviate to find the part in your documentation that’s most related to the message, then use GPT-3 to merge the new info into the documentation. Finally, it will submit a pull request on GitHub with the changes. We saw that devrel teams are already making product announcements and helping users troubleshoot common issues in the community, so we built this feature to save them even more time.<p>We use GPT for generating the responses and new documentation, but are relying less and less on it after learning that you hit a ceiling on answer quality very quickly by using only GPT and prompt engineering techniques. Here’s some of what we learned trying to prevent hallucinations in our answers: <a href="https:&#x2F;&#x2F;medium.com&#x2F;@jfan001&#x2F;how-we-cut-the-rate-of-gpt-hallucinations-from-20-to-less-than-2-f3bfcc10e4ec" rel="nofollow">https:&#x2F;&#x2F;medium.com&#x2F;@jfan001&#x2F;how-we-cut-the-rate-of-gpt-hallu...</a><p>What we found makes a much bigger difference is the breadth and quality of the content you can search through, which is why we now rely a lot more on cleaning and annotating data, which yields far better results when combined with prompt chaining. For example, instead of naively chunking data into 1000 token blocks, we parse the markdown into semantically meaningful sections (e.g. 
paragraphs, lists, code blocks) and tag the content with the header name and document name so it’s more likely to surface for searches that match the section it’s from, even if it doesn’t exactly match the content in that chunk.<p>One fun thing we also learned is that when Sidekick gets added to a #help channel, people who otherwise wouldn’t ask questions start using it. It turns out there are a lot of “lurkers” who come to these channels to find answers, but don’t want to bother anyone with their issue. Adding a tool that they can get answers from instantly brings these people out into the community, giving founders and community managers an opportunity to reach out to them.<p>To summarize, Sidekick 1) saves support engineers time, 2) keeps the docs up to date and 3) helps engage developers in the community. Long-term we want to provide an analytics product on top of Sidekick so companies can understand how their product is being used and where there are opportunities to add more value to their customers (and charge more money for it).<p>We’d love to hear from the HN community about this product! Do you think using a tool to search through and update developer docs from Slack would save you time? Upvote:
158
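A simplified sketch of the header-aware chunking described above: split markdown on headers so each chunk is a semantically meaningful section, tagged with its header and document name for retrieval. This illustrates the idea; it is not Sidekick's actual parser.

```python
# Toy markdown chunker: one chunk per header section, each tagged with
# the header and document name to improve retrieval relevance.
import re

def chunk_markdown(doc_name: str, text: str) -> list[dict]:
    chunks = []
    header = "(intro)"
    body: list[str] = []

    def flush():
        # Emit the section accumulated so far, with its metadata tags.
        if body:
            chunks.append({
                "document": doc_name,
                "header": header,
                "content": "\n".join(body).strip(),
            })

    for line in text.splitlines():
        m = re.match(r"^#+\s+(.*)", line)
        if m:
            flush()
            header, body = m.group(1), []
        else:
            body.append(line)
    flush()
    return chunks

doc = "# Setup\npip install foo\n# Usage\nRun foo --help"
print(chunk_markdown("readme.md", doc))
```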
Title: Hi HN – we are Brandon and Brayden (confusing we know), and we are building Outerbase (<a href="https:&#x2F;&#x2F;www.outerbase.com">https:&#x2F;&#x2F;www.outerbase.com</a>) a better interface for your databases. Think Google Sheets or Airtable, but on your relational database. We provide a collaborative UI on top of Postgres, MySQL and other databases, enabling teams to view, edit and visualize their data. Here’s our short demo video: <a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=38RslBYdZnk">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=38RslBYdZnk</a><p>Accessing data is a challenge to team members who aren’t data analysts or engineers. Databases are usually locked down to a few team members, and everybody else has to rely on them to get access. Most non-engineers can’t (and don&#x27;t want to) use developer tools, and developers don&#x27;t want to write SQL for teammates all day. Technical employees end up being bottlenecks for access to data. In some cases this can be extreme—we’ve seen publicly traded companies with only 2 data scientists for the whole org!<p>Our goal is to make data accessible to everyone who needs it. We have an intuitive spreadsheet-like editor that sits on top of your databases, as well as the capability to save and share queries. You can take those queries to create charts and dashboards for your team. You can also query your data using EZQL, our natural-language-to-SQL conversion. We use OpenAI to power the natural language process, and we pass the relational schema on top so we can easily know the relationships between your tables.<p>Prior to starting Outerbase, I (Brandon) was a product designer at DigitalOcean and noticed that while DO did a good job making it simple to create databases, there wasn&#x27;t a modern solution to manage them afterwards. Often users had to use PHPMyAdmin, psql, or $insertDBGUIHere, and to be honest most of them do not provide the best user experience. They’re for a very technical audience, and fall short of making data accessible for everyone. We saw a need to do for data what DigitalOcean did with the droplet.<p>Brayden led an engineering team at Walmart and dealt with data at a completely different scale. He led the iOS, Android, and web teams for their amends experience and a lot of time was spent pulling, querying, and generating reports on that data. So when we talked about building this he was immediately in.<p>How it works: We have a React-based frontend that uses a combination of Sequelize and some native libraries to normalize the underlying SQL, which allows us to query and connect to different relational databases. Currently we support Postgres, MySQL, Snowflake, BigQuery, and Redshift. We don&#x27;t store any of your end data—everything else is encrypted and all credentials are stored in KMS.<p>Tools like Outerbase make it possible for people to do their jobs more directly. One of our larger customers uses us as a way to moderate what gets posted to their app. Users submit data and our customer will actually go in and mark a column approved if the content is ok for their audience.<p>Outerbase is available to use today. You can try it for free with 1 user and then if you want to collaborate or use additional features you can upgrade to our pro tier or the obligatory “call us” enterprise tier.<p>We would love to hear your thoughts on the product, you can sign up today for free, use the sandbox database or connect your own! 
We know the space isn’t exactly uncrowded, but we hope our approach to building something that is intuitive and collaborative will make it easier for everyone to access their data. We know some HN users are not our target audience because they’re technical and already have tools they’re comfortable with—but even then you might want a tool so your team doesn’t have to bug you as much with data requests! We let you simply give them read access to their db and enable them to do their own queries.<p>We’d love to hear your views, opinions, experiences about this. What would you want to see from a database&#x2F;data visualization tool? Looking forward to discussion in the comments! Upvote:
185
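The EZQL flow described above amounts to passing the relational schema along with the user's question so the model knows the tables and relationships. Here is a minimal sketch, assuming the chat-completions-era OpenAI Python client; the prompt wording and schema format are illustrative, not Outerbase's implementation.

```python
# Sketch: natural-language-to-SQL by including the schema in the prompt.
# Assumes OPENAI_API_KEY is set in the environment (openai < 1.0 client).
import openai  # pip install openai

SCHEMA = """
users(id, name, created_at)
orders(id, user_id REFERENCES users.id, total, created_at)
"""

def nl_to_sql(question: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": f"Translate questions into SQL for this schema:\n{SCHEMA}"},
            {"role": "user", "content": question},
        ],
    )
    return resp["choices"][0]["message"]["content"]

# e.g. nl_to_sql("total order value per user last month")
```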
Title: I was getting tired of copy&#x2F;pasting reams of code into GPT-4 to give it context before I asked it to help me, so I started this small tool. In a nutshell, gpt-repository-loader will spit out file paths and file contents in a prompt-friendly format. You can also use .gptignore to ignore files&#x2F;folders that are irrelevant to your prompt.<p>gpt-repository-loader as-is works pretty well in helping me achieve better responses. Eventually, I thought it would be cute to load itself into GPT-4 and have GPT-4 improve it. I was honestly surprised by PR#17. GPT-4 was able to write a valid example repo and an expected output, and throw in a small curveball by adjusting .gptignore. I did tell GPT the output file format in two places: 1.) in the preamble when I prompted it to make a PR for issue #16 and 2.) as a string in gpt_repository_loader.py, both of which are indirect ways to infer how to build a functional test. However, I don&#x27;t think I explained to GPT in English anywhere how .gptignore works at all!<p>I wonder how far GPT-4 can take this repo. Here is the process I&#x27;m following for development:<p>- Open an issue describing the improvement to make<p>- Construct a prompt - start by using gpt_repository_loader.py on this repo to generate the repository context, then append the text of the opened issue after the --END-- line.<p>- Try not to edit any code GPT-4 generates. If there is something wrong, continue to prompt GPT to fix whatever it is.<p>- Create a feature branch on the issue and create a pull request based on GPT&#x27;s response.<p>- Have a maintainer review, approve, and merge.<p>I am going to try to automate the steps above as much as possible. Really curious how tight the feedback loop will eventually get before something breaks! Upvote:
373
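The core of a tool like this fits in a few lines: walk the repository, skip ignored paths, and emit paths plus contents in a prompt-friendly layout ending with the --END-- sentinel mentioned above. This is a sketch of the idea; details may differ from the real script.

```python
# Sketch: dump a repo as prompt-friendly text, honoring ignore patterns.
import fnmatch
import os

def load_repo(root: str, ignore: list[str]) -> str:
    parts = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.relpath(os.path.join(dirpath, name), root)
            if any(fnmatch.fnmatch(path, pat) for pat in ignore):
                continue  # honor .gptignore-style patterns
            with open(os.path.join(root, path), errors="ignore") as f:
                parts.append(f"----\n{path}\n{f.read()}")
    return "\n".join(parts) + "\n--END--"

# print(load_repo(".", ignore=[".git/*", "*.png"]))
```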
Title: Hi HN, we’re Royal and Vedant, co-founders of CodeParrot (<a href="https:&#x2F;&#x2F;www.codeparrot.ai&#x2F;">https:&#x2F;&#x2F;www.codeparrot.ai&#x2F;</a>). CodeParrot automates API testing so developers can speed up release cycles and increase test coverage. It captures production traffic and database state to generate test cases that update with every release.<p>Here’s a short video that shows how it works: <a href="https:&#x2F;&#x2F;www.loom.com&#x2F;share&#x2F;dd6c12e23ceb43f587814a2fbc165c1f" rel="nofollow">https:&#x2F;&#x2F;www.loom.com&#x2F;share&#x2F;dd6c12e23ceb43f587814a2fbc165c1f</a> .<p>As managers of engineering teams (I was CTO of an ed-tech startup, Vedant was the founding engineer of a unicorn company), both of us faced challenges in enforcing high test coverage. We ended up relying a lot on manual testing, but it was hard to scale and led to reduced velocity and more production bugs. This motivated us to build CodeParrot.<p>How it works: we auto-instrument backend services to capture production traffic. Requests and responses coming to your backend service are stored, along with the downstream calls it makes, such as DB calls. As part of your CI pipeline, we replay the captured requests whenever your service is updated. The responses are compared with the responses from the production environment, and regressions are highlighted to the developers. To ensure that the same codebase gives the same response in the CI environment and in production, we mock all downstream calls with the values from production.<p>Most tools that record and replay production traffic for the purpose of testing capture traffic on the network layer (as a sidecar or through a load balancer). CodeParrot instead relies on an instrumentation agent (built on top of OpenTelemetry) to capture traffic, enabling us to capture downstream requests&#x2F;responses like database responses, which are otherwise encrypted on the network layer. This helps us mock downstream calls and compare responses between the CI and production environments. Additionally, this helps us sample requests based on code flow and downstream responses, which provides better test coverage compared to just relying on API headers &amp; parameters.<p>Our self-serve product will be out in a few weeks. Meanwhile, we can help you integrate CodeParrot; please reach out at [email protected] or you can choose a slot here - <a href="https:&#x2F;&#x2F;tidycal.com&#x2F;royal1&#x2F;schedule-demo" rel="nofollow">https:&#x2F;&#x2F;tidycal.com&#x2F;royal1&#x2F;schedule-demo</a>. We’ll be selling CodeParrot via a subscription model but the details are TBD. In addition, we will be open sourcing the project soon.<p>If you’ve already tried or are thinking of using tools in this space, we’d love to hear your experience and what you care about most. We look forward to everyone’s comments! Upvote:
127
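A toy sketch of the record-and-replay loop described above: replay each captured request against the new build, answer its downstream calls from the recording so both environments see identical inputs, and diff the responses. The data layout is an illustrative assumption, not CodeParrot's actual format.

```python
# Sketch: replay recorded requests against the new build and diff the
# responses; downstream calls are served from the recording (mocks).
def replay(captured, call_new_build):
    regressions = []
    for case in captured:
        new_resp = call_new_build(case["request"], case["downstream"])
        if new_resp != case["response"]:
            regressions.append((case["request"]["path"],
                                case["response"], new_resp))
    return regressions

# Toy "new build": echoes the mocked DB row; imagine a changed handler.
def handler(request, downstream_mocks):
    return {"user": downstream_mocks["db"]["name"]}

captured = [{
    "request": {"path": "/user/1"},
    "downstream": {"db": {"name": "ada"}},
    "response": {"user": "ada"},
}]
print(replay(captured, handler))  # [] means no regressions
```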
Title: This free, open-source .NET library allows you to license your non-free applications through activation keys. Follow the quick start instructions and try it out in 5 minutes!<p>Available on:<p>NuGet <a href="https:&#x2F;&#x2F;www.nuget.org&#x2F;packages&#x2F;SNBS.Licensing.ActivationKeys&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.nuget.org&#x2F;packages&#x2F;SNBS.Licensing.ActivationKeys...</a><p>Website (full docs, downloads) <a href="https:&#x2F;&#x2F;snbslibs.github.io&#x2F;Licensing.ActivationKeys" rel="nofollow">https:&#x2F;&#x2F;snbslibs.github.io&#x2F;Licensing.ActivationKeys</a><p>GitHub (downloads, full docs, release notes etc.) <a href="https:&#x2F;&#x2F;github.com&#x2F;SNBSLibs&#x2F;Licensing.ActivationKeys">https:&#x2F;&#x2F;github.com&#x2F;SNBSLibs&#x2F;Licensing.ActivationKeys</a> Upvote:
54
Title: I made a table where you can find the source&#x2F;location of the factories where health supplements are made. Then, I spent a year reading product labels so you can save time and money when buying supplements. This is that update.<p>This is still a work in progress but it functions fine.<p>My previous post was a simple database of company data showing ingredient sourcing&#x2F;location. That took 10 days; this has taken me close to 9 months. BackOfLabel is an extension of that initial interest, with dosage information at the product &amp; ingredient level.<p>This update allows sorting by many more attributes at the product level (for 4000+ products at the moment) of manually scraped data.<p>Now, for instance, you can sort by specific types of ingredient – e.g. filter by magnesium glycinate, magnesium orotate, or any combination; or find ubiquinol and ubiquinone, two forms of coenzyme Q10. This is useful for consumers but also for companies seeking competitor analysis.<p>You are able to filter products by<p>– Ingredient – Filter by liquid, tablet, capsule, powder &amp; more – Browse by UPC Code – Dosage Information – No. Individual Serving – No. Manufacturer Serving – Total Dosage<p>For example, you can also search by type of protein powder – e.g. search for whey protein powder and instantly find the dosage information for many products.<p>It frustrates me, and I think the way that people buy supplements is wrong. And they don&#x27;t know any better because there are incentive structures that keep them in the dark. This is a small effort to combat the misleading labeling and lack of regulation in the industry.<p>Full disclosure - I&#x27;ve provided a generic affiliate link in the table, which means I earn a small percentage (5%) of the total cart if you purchase through the link.<p>Note: browse on desktop to filter &amp; sort Upvote:
70
Title: I&#x27;m in my mid-40s. For the last 25 years, I&#x27;ve been able to pretty much predict what&#x27;s going to happen in the next few years. If I made a mistake, it was often because I was over-optimistic about my predictions. e.g. ARM CPU in 2000; Smartphones in 2001 (Microsoft&#x27;s Compact Framework 1.0); etc.<p>What happened with GPT-4 shocked me so deeply that, yesterday morning, I caught myself forgetting to give way at a busy intersection. (I&#x27;ve never received any tickets in my life)<p>This afternoon, I finally had some alone time. So I went into the garden and worked on my feelings.<p>Here is a list of the emotions that are swelling up in me (using the Junto Emotional Chart):<p>Awe-struck (for obvious reasons) Astounded<p>Triumphant (the sci-fi vision has finally come true in our lifetime)<p>Perplexed (wow, how come it&#x27;s coming at us so fast!)<p>Anxious (about the cascading effects on society, geopolitics and our business).<p>There are plenty of texts written about either the looming doom or the ultimate freedom of human capital. I am not interested in any of those abstract discussions.<p>What I do want to hear is:<p>1. How are YOU feeling? 2. Any particular changes you&#x27;ve made to your product or startup? 3. How about your life? 4. If you have any children, how has it affected your family or how you educate them?<p>Thanks! Upvote:
42
Title: Meta offers an OAuth-based API for Instagram. Many companies and tools are built on and rely on this API for their product &amp; daily operations.<p>Beginning Friday evening (US time), a critical endpoint in this API has been broken. The endpoint creates long-lived access tokens, so it is in the critical path for almost any company using the API.<p>I find it disappointing that a leading technological company has not acknowledged a bug that was reported to them several times, more than 24 hours ago, even if only to say that it&#x27;s being investigated.<p>The endpoint: https:&#x2F;&#x2F;developers.facebook.com&#x2F;docs&#x2F;instagram-basic-display-api&#x2F;guides&#x2F;long-lived-access-tokens#get-a-long-lived-token<p>Currently the API returns a Bad Request with a wrong error message (the endpoint should support &quot;GET&quot;): ``` { &quot;message&quot;: &quot;Unsupported request - method type: get&quot;, &quot;type&quot;: &quot;IGApiException&quot;, &quot;code&quot;: 100, &quot;fbtrace_id&quot;: &quot;AuDYqK74IrT9Yt2Sx51UlP6&quot; } ```<p>I have opened a bug report but received no response. Facebook&#x27;s status page shows all green in the API section: https:&#x2F;&#x2F;metastatus.com&#x2F;. There are several Meta Developers Community threads with no response. Upvote:
159
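For reference, the long-lived token exchange in question is a GET request along these lines, per Meta's Basic Display docs at the time; parameter names should be confirmed against the current documentation.

```python
# Sketch of the documented long-lived token exchange. Placeholders must
# be replaced with real credentials; verify params against Meta's docs.
import requests  # pip install requests

resp = requests.get(
    "https://graph.instagram.com/access_token",
    params={
        "grant_type": "ig_exchange_token",
        "client_secret": "<app-secret>",        # placeholder
        "access_token": "<short-lived-token>",  # placeholder
    },
    timeout=10,
)
print(resp.status_code, resp.json())  # during the outage: 400 + error body
```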
Title: I have been working on this for a while. My main goal was to build a usable programming language. I even ended up building a few tools for it, such as an IntelliJ plugin.<p>I also plan on building some games with it in the future.<p>The main use cases would be small games (raylib), tools (static Linux binaries with musl-libc) and recreational programming (wasm4). It works on Windows too. If you have emscripten in your path, you can even build these games&#x2F;tools (raylib) for WASM.<p>Please have a look. Thank you.<p>-------------------------------------<p>Main Repo: <a href="https:&#x2F;&#x2F;github.com&#x2F;YakshaLang&#x2F;Yaksha">https:&#x2F;&#x2F;github.com&#x2F;YakshaLang&#x2F;Yaksha</a><p>Doc: <a href="https:&#x2F;&#x2F;yakshalang.github.io&#x2F;documentation.html" rel="nofollow">https:&#x2F;&#x2F;yakshalang.github.io&#x2F;documentation.html</a><p>Library: <a href="https:&#x2F;&#x2F;yakshalang.github.io&#x2F;library-docs.html" rel="nofollow">https:&#x2F;&#x2F;yakshalang.github.io&#x2F;library-docs.html</a><p>Tutorials: <a href="https:&#x2F;&#x2F;github.com&#x2F;orgs&#x2F;YakshaLang&#x2F;discussions&#x2F;categories&#x2F;tutorials">https:&#x2F;&#x2F;github.com&#x2F;orgs&#x2F;YakshaLang&#x2F;discussions&#x2F;categories&#x2F;tu...</a><p>----------------------------------------<p>Started after a comment from WalterBright here <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=28929840" rel="nofollow">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=28929840</a> Upvote:
205
Title: There are a huge number of tech podcasts, but the quality is very hit &amp; miss.<p>Which podcasts do you go out of your way to listen to? Which podcasts keep you up to date on the latest trends &amp; subjects?<p>Which podcasts introduce you to new concepts &amp; subjects in an engaging, informed, &amp; intelligent way? Which tech podcasts do you trust? Upvote:
43
Title: Integrate ChatGPT into your scripts or terminal work. It supports piping text, saving prompts, estimating costs, and some basic JSON&#x2F;YAML extraction.<p>I&#x27;ve added some elaborate examples with pictures to the readme; they may provide a better overview. Upvote:
315
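A minimal sketch of the piping pattern such a tool implements: read stdin, wrap it in a prompt, call the chat API, and print the answer so it composes with other shell tools. This is not the tool's actual source; it assumes the chat-completions-era openai client and an OPENAI_API_KEY in the environment.

```python
# gpt.py - pipe text into a prompt and print the model's answer.
import sys

import openai  # pip install openai (reads OPENAI_API_KEY from the env)

prompt = sys.argv[1] if len(sys.argv) > 1 else "Summarize this:"
piped = sys.stdin.read()

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": f"{prompt}\n\n{piped}"}],
)
print(resp["choices"][0]["message"]["content"])

# usage: cat error.log | python gpt.py "Explain this stack trace"
```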
Title: Recently, I have started interviewing interns in their final semester for an internship and to my surprise I frequently encounter a lack of what I would call foundational computer science knowledge. I don&#x27;t mean data structures and algorithms, but, for example:<p>* Database Systems (relational algebra, SQL)<p>* Concurrent Programming<p>* Network Programming<p>It seems most are exposed to these topics partially through project work, but without the base knowledge.<p>Is this typical for CS undergraduate degrees because you get to pick your own classes? Upvote:
386
Title: As a developer, I recently encountered challenges with GRUB and discovered I lacked knowledge about my computer&#x27;s boot process. I realized terms like EFI partition, MBR, GRUB, and Bootloader were unfamiliar to me and many of my colleagues. I&#x27;m seeking introductory and easy-to-understand resources to learn about these concepts. Any recommendations would be appreciated! Upvote:
457
Title: The more I work in tech, the more modern society depresses me.<p>I do not know exactly what the link between the two is, but I think it might be related to the fact that it&#x27;s useful for tech workers to be relatively updated about world events, how society functions in general, and what&#x27;s happening lately in the field.<p>The problem is that doing that will very quickly tell you that modern society fucking sucks. Almost nothing works, everyone hates each other, things that are important get ignored because of greed, and most importantly, there&#x27;s almost nothing you can do about it.<p>After a decade working in tech I realised I&#x27;m tired of being exposed to these problems. I&#x27;d really like to leave the big city and its problems and live in a remote area where I can be closer to nature, and in a small community where I could be more self-sufficient and contribute back in more meaningful ways than I do today.<p>I&#x27;m posting this on HN because I have noticed that this is not exclusive to me. It seems that getting burned out on modern society is quite common among people who have worked in tech for a long time, so I was wondering if someone here has experience making this jump and leaving the big city to live a simpler life next to nature. Did it help you? What led you to do it and how do you feel about it today? Upvote:
73
Title: Hi HN, we’re Nicolas, Nebyou, and Robert and we’re building Lume (<a href="https:&#x2F;&#x2F;lume.ai">https:&#x2F;&#x2F;lume.ai</a>). Lume uses AI to generate custom data integrations. We transform data between any start and end schema and pipe the data directly to your desired destination. There’s a demo video here: <a href="https:&#x2F;&#x2F;www.loom.com&#x2F;share&#x2F;bed137eb38884270a2619c71cebc1213" rel="nofollow">https:&#x2F;&#x2F;www.loom.com&#x2F;share&#x2F;bed137eb38884270a2619c71cebc1213</a>.<p>Companies spend countless engineering hours manually transforming data for custom integrations, or pay large amounts to consulting firms to do it for them. Engineers have to work through massive data schemas and create hacky scripts to transform data. Dynamic schemas from different clients or apps require custom integration pipelines. Many non-tech companies are even still relying on schemas from CSV and PDF file formats. Days, weeks, and even months are spent just building integrations.<p>We ran into this problem first-hand as engineers: Nebyou during his time as an ML engineer at Opendoor, where he spent months manually creating data transformations, while Nicolas did the same during his time working at Apple Health. Talking to other engineers, we learned this problem was everywhere. Because of the dynamic and one-off nature of different data integrations, it has been a challenging problem to automate. We believe that with recent improvements in LLMs (large language models), automation has become feasible and now is the right time to tackle it.<p>Lume solves this problem head-on by generating data transformations, which makes the integration process 10x faster. This is provided through a self-serve managed platform where engineers can manage and create new data integrations.<p>How it works: users can specify their data source and data destination, both of which specify the desired data formats, a.k.a. schemas. Data sources and destinations can be specified through our 300+ app connectors, or custom data schemas can be connected by either providing access to your data warehouse or a manual file upload (CSV, JSON, etc.) of your end schema. Lume, which includes AI and rule-based models, creates the desired transformation under the hood by drafting the necessary SQL code, and deploys it to your destination.<p>At the same time, engineers don’t want to rely on low- or no-code tools without visibility under the hood. Thus, we also provide features to ensure visibility, confidence, and editability of each integration: Data Preview allows you to view samples of the transformed data, and SQL Editor allows you to see the SQL used to create the transformation and to change the assumptions made by Lume’s model, if needed (most of the time, you don’t!). In addition, Lineage Graph (launching soon) shows you the dependencies of your new integration, giving more visibility for maintenance.<p>Our clients have two primary use cases. One common use case is to transform data source(s) into one unified ontology. For example, you can create a unified schema between Salesforce, Hubspot, Quickbooks, and Pipedrive in your data warehouse. Another common use case is to create data integrations between external apps, such as custom syncs between your SaaS apps. 
For example, you can create an integration directly between your CRM and BI tools.<p>The most important thing about our solution is our generative system: our model ingests and understands your schemas, and uses that to generate transformations that map one schema to another. Other integration tools, such as Mulesoft and Informatica, ask users to manually map columns between schemas—which takes a long time. Data transformation tools such as dbt have improved the data engineering process significantly (we love dbt!) but still require extensive manual work to understand the data and to program. We abstract all of this and do all the transformations for our customers under the hood - which reduces the time taken to manually map and engineer these integrations from days&#x2F;weeks to minutes. Our solution handles the truly dynamic nature of data integrations.<p>We don’t have a public self-serve option yet (sorry!) because we’re at the early stage of working closely with specific customers to get their use cases into production. If you’re interested in becoming one of those, we’d love to hear from you at <a href="https:&#x2F;&#x2F;lume.ai">https:&#x2F;&#x2F;lume.ai</a>. Once the core feature set has stabilized, we’ll build out the public product. In the meantime, our demo video shows it in action: <a href="https:&#x2F;&#x2F;www.loom.com&#x2F;share&#x2F;bed137eb38884270a2619c71cebc1213" rel="nofollow">https:&#x2F;&#x2F;www.loom.com&#x2F;share&#x2F;bed137eb38884270a2619c71cebc1213</a>.<p>We currently charge a flat monthly fee that varies based on the quantity of data integrations. In the future, we plan on having more transparent pricing that’s made up of a fixed platform fee + compute-based charges. To not have surprise charges, we currently run the compute in your data warehouse.<p>We’re looking forward to hearing any of your comments, questions, ideas, experiences, and feedback! Upvote:
115
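A sketch of the generative step described above: hand the model the source and target schemas and ask for the mapping SQL, which then lands in the SQL Editor for review. The schemas and prompt are illustrative assumptions, not Lume's internals; it assumes the chat-completions-era openai client.

```python
# Sketch: ask an LLM for schema-to-schema mapping SQL, to be reviewed
# before deployment. Assumes OPENAI_API_KEY is set in the environment.
import openai  # pip install openai

SOURCE = "crm_contacts(full_name TEXT, email TEXT, created TEXT)"
TARGET = "contacts(first_name TEXT, last_name TEXT, email TEXT, created_at DATE)"

resp = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            "Write a SQL SELECT that transforms rows of\n"
            f"{SOURCE}\ninto the shape of\n{TARGET}\n"
            "splitting names and casting types as needed."
        ),
    }],
)
print(resp["choices"][0]["message"]["content"])  # review before deploying
```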
Title: What projects should I start building after learning the fundamentals of programming, to get out of the tutorial and MOOC rabbit hole? It&#x27;s like taking the training wheels off at some point while learning to ride a bicycle.<p>I know the suggestion is to build what I like. But what if I don&#x27;t have any problem at hand because I don&#x27;t know the space much? What projects can I start with? Things that can operate as standalone projects and can be used.<p>I find most university courses are also limiting in this respect. They only ask you to implement part of a project (only a few functionalities). Maybe I haven&#x27;t looked into many. I am not saying university courses or books are useless. They have their place. They can be very useful for filling in the gaps in our knowledge. But they are not the be-all and end-all. We have to build tangible, concrete things and get things done. They may be small projects. At least that&#x27;s what makes a great designer, or engineer in general. They make things in their garage or workspace even if they are tiny projects.<p>What type of projects (guided or not) can I build so that I can get an idea of the space and then start getting ideas of my own to build? Along the way I can read up on things, e.g. unfamiliar data structures or algorithms, to fill in my knowledge bucket.<p>I also have doubts about how far I can go with that rudimentary knowledge of programming. When do I start learning more about other things like systems, algorithms, databases, etc.? Upvote:
60
Title: I&#x27;ve been playing with the idea of an LLM prompt that causes the model to generate and return a new prompt. <a href="https:&#x2F;&#x2F;github.com&#x2F;andyk&#x2F;recursive_llm">https:&#x2F;&#x2F;github.com&#x2F;andyk&#x2F;recursive_llm</a><p>The idea I&#x27;m starting with is to implement recursion using English as the programming language and GPT as the runtime.<p>It’s kind of like traditional recursion in code, but instead of having a function that calls itself with a different set of arguments, there is a prompt that returns itself with specific parts updated to reflect the new arguments.<p>Here is a prompt for infinitely generating Fibonacci numbers:<p>&gt; You are a recursive function. Instead of being written in a programming language, you are written in English. You have variables FIB_INDEX = 2, MINUS_TWO = 0, MINUS_ONE = 1, CURR_VALUE = 1. Output this paragraph but with updated variables to compute the next step of the Fibbonaci sequence.<p>Interestingly, I found that to get a base case to work I had to add quite a bit more text (i.e. the prompt I arrived at is more than twice as long <a href="https:&#x2F;&#x2F;raw.githubusercontent.com&#x2F;andyk&#x2F;recursive_llm&#x2F;main&#x2F;prompt_fibonnaci_include_math.txt" rel="nofollow">https:&#x2F;&#x2F;raw.githubusercontent.com&#x2F;andyk&#x2F;recursive_llm&#x2F;main&#x2F;p...</a>) Upvote:
97
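The recursion needs a small driver on the outside: send the prompt, take the model's output as the next prompt, and loop. A minimal sketch assuming the chat-completions-era openai client; the repo's own runner may differ.

```python
# Driver loop for the English-as-code recursion: the model's output is
# the next prompt. Assumes OPENAI_API_KEY is set in the environment.
import openai  # pip install openai

prompt = (
    "You are a recursive function. Instead of being written in a "
    "programming language, you are written in English. You have "
    "variables FIB_INDEX = 2, MINUS_TWO = 0, MINUS_ONE = 1, "
    "CURR_VALUE = 1. Output this paragraph but with updated variables "
    "to compute the next step of the Fibonacci sequence."
)

for step in range(5):
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    prompt = resp["choices"][0]["message"]["content"]  # next prompt
    print(f"step {step}: {prompt}\n")
```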
Title: Anyone else noticing this while editing an email? It seems just crazy. Upvote:
645
Title: Hey HN, we’re the cofounders of bloop (<a href="https:&#x2F;&#x2F;bloop.ai&#x2F;">https:&#x2F;&#x2F;bloop.ai&#x2F;</a>), a code search engine which combines semantic search with GPT-4 to answer questions. We let you search your private codebases either the traditional way (regex or literal) or via semantic search, ask questions in natural language thanks to GPT-4, and jump between refs&#x2F;defs with precise code navigation. Here’s a quick demo: <a href="https:&#x2F;&#x2F;www.loom.com&#x2F;share&#x2F;8e9d59b88dd2409482ec02cdda5b9185" rel="nofollow">https:&#x2F;&#x2F;www.loom.com&#x2F;share&#x2F;8e9d59b88dd2409482ec02cdda5b9185</a><p>Traditional code search tools match the terms in your query against the codebase, but often you don’t know the right terms to start with, e.g. ‘Which library do we use for model inference?’ (These types of questions are particularly common when you’re learning a new codebase.) bloop uses a combination of neural semantic code search (comparing the meaning - encoded in vector representations - of queries and code snippets) and chained LLM calls to retrieve and reason about abstract queries.<p>Ideally, an LLM could answer questions about your code directly, but there is significant overhead (and expense) in fine-tuning the largest LLMs on private data. And although they’re increasing, prompt sizes are still a long way off being able to fit a whole organisation’s codebase.<p>We get around these limitations with a two-step process. First, we use GPT-4 to generate a keyword query which is passed to a semantic search engine. This embeds the query and compares it to chunks of code in vector space (we use Qdrant as our vector DB). We’ve found that using a semantic search engine for retrieval improves recall, allowing the LLM to retrieve code that doesn’t have any textual overlap with the query but is still relevant. Second, the retrieved code snippets are ranked and inserted into a final LLM prompt. We pass this to GPT-4 and its phenomenal understanding of code does the rest.<p>Let’s work through an example. You start off by asking ‘Where is the query parsing logic?’ and then want to find out ‘Which library does it use?’. We use GPT-4 to generate the standalone keyword query: ‘query parser library’, which we then pass to a semantic search engine that returns a snippet demonstrating the parser in action: ‘let pair = PestParser::parse(Rule::query, query);’. We insert this snippet into a prompt to GPT-4, which is able to work out that pest is the library doing the legwork here, generating the answer ‘The query parser uses the pest library’.<p>You can also filter your search by repo or language - ‘What’s the readiness delay repo:myApp lang:yaml’. GPT-4 will generate an answer constrained to the respective repo and language.<p>We also know that LLMs are not always (at least not yet) the best tool for the job. Sometimes you know exactly what you’re looking for. For this, we’ve built a fast, trigram-index-based regex search engine built on Tantivy. Because of this, bloop is fast at traditional search too. For code navigation, we’ve built a precise go-to-ref&#x2F;def engine based on scope resolution that uses Tree-sitter.<p>bloop is fully open-source. Semantic search, LLM prompts, regex search and code navigation are all contained in one repo: <a href="https:&#x2F;&#x2F;github.com&#x2F;bloopAI&#x2F;bloop">https:&#x2F;&#x2F;github.com&#x2F;bloopAI&#x2F;bloop</a>.<p>Our software is standalone and doesn’t run in your IDE. 
We were originally IDE-based but moved away from this due to constraints on how we could display code to the user.<p>bloop runs as a free desktop app on Mac, Windows and Linux: <a href="https:&#x2F;&#x2F;github.com&#x2F;bloopAI&#x2F;bloop&#x2F;releases">https:&#x2F;&#x2F;github.com&#x2F;bloopAI&#x2F;bloop&#x2F;releases</a>. On desktop, your code is indexed with a MiniLM embedding model and stored locally, meaning at index time your codebase stays private. Indexing is fast, except on the very largest repos (GPU indexing coming soon). ‘Private’ here means that no code is shared with us or OpenAI at index time, and when a search is made only relevant code snippets are shared to generate the response. (This is more or less the same data usage as Copilot).<p>We also have a paid cloud offering for teams ($12 per user per month). Members of the same organisation can search a shared index hosted by us.<p>We’d love to hear your thoughts about the product and where you think we should take it next, and your thoughts on code search in general. We look forward to your comments! Upvote:
264
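A toy version of the two-step flow described above: step 1 rewrites the chat question into a standalone keyword query, step 2 retrieves the nearest code chunks by vector similarity, and step 3 inserts them into the final GPT-4 prompt. The bag-of-words "embedding" below is a stand-in for the real MiniLM/Qdrant machinery.

```python
# Toy retrieval step: nearest code snippet by cosine similarity over a
# bag-of-words vector. Stand-in for MiniLM embeddings + Qdrant.
import math
import re

def embed(text: str) -> dict[str, int]:
    words = re.findall(r"[a-z]+", text.lower())
    return {w: words.count(w) for w in set(words)}

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

SNIPPETS = [
    "let pair = PestParser::parse(Rule::query, query);",
    "fn index_repo(path: &Path) { /* trigram index */ }",
]

def semantic_search(keyword_query: str, k: int = 1) -> list[str]:
    q = embed(keyword_query)
    return sorted(SNIPPETS, key=lambda s: -cosine(q, embed(s)))[:k]

# Step 1 (an LLM call) maps "Which library does it use?" plus the chat
# history to "query parser library"; step 3 pastes the hit into the
# final GPT-4 prompt.
print(semantic_search("query parser library"))
```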
Title: I built this customizable literature-based K-12 homeschool curriculum, based on my experience as a homeschool parent. It&#x27;s designed especially for intellectually curious kids who love to read.<p>One of the main benefits of homeschooling is the ability to design customized programs of study that let kids learn at their level of challenge in each subject. But since designing custom curricula from scratch requires a huge time commitment and familiarity with children&#x27;s literature and academic materials, most homeschooling parents don&#x27;t take advantage of this potential and instead opt for prepackaged curricula.<p>Great Books Homeschool eliminates a lot of the work involved in designing a complete and rigorous curriculum for homeschooled students. The website generates a default program of study for each student, then helps parents customize it. Transcripts and other records are generated automatically.<p>Pricing is normally subscription based, but we&#x27;re offering complimentary access for twelve months to the first 50 users who sign up for our beta testing program. In return, beta testers are requested to complete a monthly questionnaire about their experience with the curriculum.<p>If you would like to participate in the beta testing program, please first create a free trial account at <a href="https:&#x2F;&#x2F;www.greatbookshomeschool.com" rel="nofollow">https:&#x2F;&#x2F;www.greatbookshomeschool.com</a>. Once signed in, go to <a href="https:&#x2F;&#x2F;www.greatbookshomeschool.com&#x2F;parent&#x2F;beta-application?via=hn" rel="nofollow">https:&#x2F;&#x2F;www.greatbookshomeschool.com&#x2F;parent&#x2F;beta-application...</a> and complete the application form.<p>Questions and comments are welcome! Upvote:
45
Title: Recreating Delta&#x27;s in-flight entertainment UI in ReactJS on a flight between NYC and SFO. [Post by a friend of mine] Upvote:
127
Title: For people who lived through 2007-2008: do you think the current times feel similar to how the last financial crisis unfolded? Upvote:
129
Title: After playing with AI Avatars (like many of us around here, I guess), I started to wonder if we could instead bring real value to people by producing affordable professional head-shots using a combination of Dreambooth and ControlNet.<p>Obviously it&#x27;s only the beginning and there are still many imperfections, but the foundational tech behind this (Dreambooth and ControlNet) is only 6 months and 1.5 months old respectively, and already delivers pretty amazing results.<p>I came up with this little service &quot;Virtual Face&quot; and I&#x27;m looking for feedback if some of you are willing to try it (you can use the HUNTER50 coupon to get 50% off; can&#x27;t make it free to try yet since the running costs are still non-negligible).<p>Cheers, Pierre Upvote:
147
Title: Just received this email, can&#x27;t find a blog post yet?<p>On March 23rd, we will discontinue support for the Codex API. All customers will have to transition to a different model. Codex was initially introduced as a free limited beta in 2021, and has maintained that status to date. Given the advancements of our newest GPT-3.5 models for coding tasks, we will no longer be supporting Codex and encourage all customers to transition to GPT-3.5-Turbo. About GPT-3.5-Turbo GPT-3.5-Turbo is the most cost effective and performant model in the GPT-3.5 family. It can both do coding tasks while also being complemented with flexible natural language capabilities.<p>You can learn more through: GPT-3.5 model overview Chat completions guide<p>Models affected The following models will be discontinued: code-cushman:001 code-cushman:002 code-davinci:001 code-davinci:002<p>We understand this transition may be temporarily inconvenient, but we are confident it will allow us to increase our investment in our latest and most capable models.<p>—The OpenAI team Upvote:
175
Title: Hey HN, we’re Eric and Christian, cofounders of Frigade (<a href="https:&#x2F;&#x2F;www.frigade.com">https:&#x2F;&#x2F;www.frigade.com</a>), a developer tool for building high-quality product onboarding. Here’s a demo video: <a href="https:&#x2F;&#x2F;frigade.com&#x2F;video&#x2F;demo.mp4">https:&#x2F;&#x2F;frigade.com&#x2F;video&#x2F;demo.mp4</a>. Also, the “Frigade demo checklist” on our home page (<a href="https:&#x2F;&#x2F;frigade.com&#x2F;#demo">https:&#x2F;&#x2F;frigade.com&#x2F;#demo</a>) was built using Frigade itself, if you want to give it a spin.<p>Onboarding is a critical first experience for software products. It makes a big difference for customer activation and retention. However, onboarding experiences often do not get the attention they deserve because, without good tooling, they’re a pain to build. At a previous startup, Christian and I were surprised at how hard it was to get right. It took a ton of time that would otherwise have gone to our core product. We also knew that much better tooling was possible because we had experienced it in the internal tooling at LinkedIn, where we previously worked.<p>Established software companies such as LinkedIn, Uber, Vanta, and many others invest in internal developer platforms to give their teams advantages like the ability to quickly build great onboarding experiences. But these platforms only get built at scale. With Frigade, our goal is to develop the platform that every growth engineering team would love to have, but can’t afford to build themselves.<p>There are no-code tools for onboarding (e.g. Pendo, Appcues, Intercom), but they’re rigid and clumsy. They’re built for marketers and managers rather than developers, so they sit on top of your product instead of feeling native. They’re not defined in your codebase, frequently break without teams noticing, and there are limits to what you can build with them. Unlike these tools, Frigade uses native SDKs and provides an API for developers to build new onboardings in code.<p>How to use Frigade in your product: first install our SDK (React for now); choose a production-ready Frigade UI component or go headless; create a “Frigade Flow” in our admin panel and hook it up to the SDK (Flows are where you set the logic, targeting, content, and more); publish your flow to go live.<p>Frigade automatically tracks users’ progress through your Flows. Your team can sign in to our admin panel to see which customers have completed what steps or input data, and you can use our API to sync tracking events to your analytics platform, send drip campaigns (emails that guide users through setup), or show product reminders (e.g. “Finish connecting your data” banners). Frigade leverages server-driven UI, so once a Flow is live, you can easily update content (e.g. change copy, add another step) and logic (e.g. who should see it and when) in real-time without new code deploys.<p>We’ve spent most of our time building out the platform with early customers so far, but our self-serve flow is (ironically) coming soon. In the meantime, our demo video gives you the basic picture: <a href="https:&#x2F;&#x2F;frigade.com&#x2F;video&#x2F;demo.mp4">https:&#x2F;&#x2F;frigade.com&#x2F;video&#x2F;demo.mp4</a>. Please let us know if you’re interested in trying Frigade, we’d love to get your feedback on what we’ve built so far! We work mostly with web-based B2B companies today, and our pricing is competitive with other tools on the market.<p>What has been your experience with product onboarding? 
We’d love to hear more about what’s worked well and what’s been a pain in the ass—as well as anything else you’d like to share. Upvote:
100
Title: Look, I&#x27;m as interested in the huge leaps forward in AI as all the rest of you techy nerds. And I read most of the stories about the technology itself. But please, for the love of $deity, can we outlaw the completely pointless <i>&quot;I asked AI to create...&quot;</i> type submissions?<p>Every day there are dozens of them; <i>&quot;I asked AI to write me a haiku...&quot;, &quot;I asked AI to paint me a picture...&quot;, &quot;I asked AI to help me learn coding...&quot;, &quot;I asked AI to write me a blog post...&quot;, &quot;I asked AI to tell me a joke...&quot;, &quot;I asked AI to write a poem...&quot;, &quot;I asked AI to write a short story...&quot;, &quot;I asked AI to design a new trouser press...&quot;</i> etc etc et-<i>bloody</i>-cetera.<p>And, almost without exception, these submissions are of no interest whatsoever to anyone outside of the person who typed some crap into ChatGPT [or other AI] and was so proud of the outcome they decided to share it with the wider world... whether or not the wider world gives a shit.<p>Yes, we all know you can type almost any kind of prompt into an AI chatbot and get some &quot;interesting&quot; text or imagery in response. But after the 1000th such submission it&#x27;s about as interesting as hearing that someone sent an email or installed a new app on their phone. It really is the adult equivalent of running excitedly home from primary school, clutching the crappy drawing you did in class and insisting it get stuck to the fridge door.<p>Please. If you want to play with your AI chatbot, just do it quietly in the corner. The rest of us DON&#x27;T BLOODY CARE!<p>Examples:<p>https:&#x2F;&#x2F;hn.algolia.com&#x2F;?q=i+asked+AI<p>https:&#x2F;&#x2F;hn.algolia.com&#x2F;?query=i asked GPT Upvote:
42
Title: Hey there HN! We&#x27;re Esteban and Esteban and we are looking to get feedback for the new version of our GPT-powered, open-source code contextualizer.<p>We&#x27;re starting with a VS Code extension that indexes information from git (GitHub, GitLab, or Bitbucket integrations available), Slack and Jira to explain the context around a file or block of code. Finally, we summarize such aggregated context using the power of GPT.<p>As devs we know that it&#x27;s very annoying to look at a new codebase and start understanding all the nuances, particularly when the person who wrote the code already left the company. With this problem in mind, we decided to build this solution. You&#x27;ll be able to get into &quot;the ghost&quot; of the person who left the company.<p>Soon, we will also be building a GitHub Action that does the same thing as the VS Code extension but at the time of creating a PR: Index the most relevant information related to this new PR, and add it as a comment. This way we will provide context at one more moment, and also, we will be making the IDE extension better.<p>Here&#x27;s our open source repo if you also want to check it out: <a href="https:&#x2F;&#x2F;github.com&#x2F;watermelontools&#x2F;watermelon-extension">https:&#x2F;&#x2F;github.com&#x2F;watermelontools&#x2F;watermelon-extension</a><p>Please give us your feedback! Thanks. Upvote:
100
Title: Hello, I was running around Germany, hectically navigating public transportation, and getting lost all the time. I noticed that every station had i platforms, each showing a list of n buses (trains, whatever) arriving, each with its own list of m destinations. That means I would be scanning i x n x m items just to see if I was at the correct stop. As I was nervous, for every bus that arrived, I would rescan the list of stops to double-check. I began thinking about how I could make a better system.<p>Linked is a very shoddy mockup of how Bloom filters could be used to give passengers O(1) lookup time for which platform+bus is the correct one. I believe public transportation is likely to grow increasingly complex in the future as population grows, and under the current list-based system this will make the signage ever more complex. I think some Bloom filter mechanism could reduce that complexity.<p>So, here is my fantasy, my day dream. What do you think? Upvote:
128
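For the curious, here is a toy version of the mechanism in the mockup: each platform display keeps a Bloom filter of every destination it serves, so a passenger checks their stop in O(1) instead of scanning i x n x m list entries. Bloom filters never give false negatives, but false positives are possible, so a hit means "probably this platform".

```python
# Toy Bloom filter: k hash positions per item set in a fixed bit array.
import hashlib

class BloomFilter:
    def __init__(self, size_bits: int = 1024, hashes: int = 3):
        self.size = size_bits
        self.hashes = hashes
        self.bits = 0

    def _positions(self, item: str):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item: str) -> bool:
        # All k bits set -> probably present; any bit clear -> definitely not.
        return all(self.bits >> pos & 1 for pos in self._positions(item))

platform_3 = BloomFilter()
for stop in ["Hauptbahnhof", "Marienplatz", "Ostbahnhof"]:
    platform_3.add(stop)

print(platform_3.might_contain("Marienplatz"))  # True
print(platform_3.might_contain("Flughafen"))    # almost surely False
```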