I've been brewing up this tool on the side, and want to share it with the gang here, open source - of course.
The tool is AutoNL and the repo is: https://github.com/Actioninsight/AutoNL

WHAT DOES IT DO:

AutoNL uses Open-Interpreter to automate multi-step tasks with the help of a simple spreadsheet. The basic insight here is that OI, AutoGPT, etc. all begin to break down when their planning enters a loop. But we often already know the plan we want followed; we just need AI to execute it. So AutoNL puts all the responsibility for execution on the AI, and all the responsibility for planning on the human.

HOW DOES IT WORK?

AutoNL is a script and a spreadsheet. The spreadsheet has the following columns: "STEP", "INPUT FILE(s)", "INSTRUCTION", and "OUTPUT FILE(s)". The script runs through the process once, validating each step, and finally hashes the output files and moves them to a single directory.

WHERE IS THE DEMO?

The first demo is right here: https://youtu.be/aCa8ntYIkpM, but I'm planning to record a few more - right now I've got the tool hooked up to e-mail, authenticated with Microsoft Graph, routing tasks between sheets, etc - lots to come.

DOES THIS WORK 100% ALL THE TIME?

Absolutely not. This is experimental and is prone to various failures, but can also become quite reliable. The example sheet 'datapre.xlsx' typically matches hashes on outputs about 80% of the time currently. I expect future models will only continue to improve the quality and reliability, while local models like OpenCodeInterpreter will make local offline use achievable.

WHY DID YOU DO THIS?

Thank you for asking. I think AI tools are constantly trying to automate the user out of the loop, but my own experience is that I want to be fully in the loop at all times. I want to be the loopmaster, and I want tools that put me at the centre and give me visibility and control over everything. AutoNL turns the entire pipeline into natural language. Even the script itself is built primarily in natural language, controlled by Open-Interpreter.

WHAT'S THE ROADMAP?

Short-term roadmap would be to make it easier to build new sheets. I want a command line flag that we can run to start a sheet on a given step, I want to be able to control the model and system prompt from inside the spreadsheet, and I'd also like to spend some time on creating an AutoNL sheet that helps build AutoNL sheets - that's the dream.

Hi Hackers, I'm Suby

So, I created an open-source library featuring a collection of animated UI components, mostly inspired by the innovative designs found on Awwwards websites.

The main goal behind building it is to make websites livelier and more interactive with minimal effort, ultimately enhancing the user experience.

The Idea:

1. Browse through the animations and choose the ones you like.
2. Take a look at the example code provided.
3. Follow the documentation instructions to set up the components in your project.
4. Voila! You're all set.

Day 1:
I shared my project on IndieHackers; it did well and brought in 39 visitors.
Day 2:
I messaged my existing network on Twitter about the animation library. One of them shared the post, which brought almost 300 visitors.
Day 3:
I posted in a developer community on Reddit; it got almost 5.7k impressions and I ended up with 153 users.

I don't have a large network or following, so it's tough to market the product even if it's open-source and useful.

Right now, I'm dedicating all my efforts to expanding the library by creating as many components as possible. If you like the idea and find the library useful, please consider supporting it. I'm also eager to receive feedback for further improvements.

Open-source library link: ui.gloz.tech

Three of us are students at a top-5 university in the US. We have been pretty good friends and have worked together on things for over a year now. Six months ago, my uncle, who has a very strong technical background in a tech-related but non-tech space, had a good idea that addresses his field of work, one that has only recently become feasible because of technology that has come out in the past year or so. Luckily, all three of us have either direct experience in this tech field or the adjacent skills needed to tackle the problem.

The three of us have already agreed to an equal equity split amongst us technical founders, and my uncle is fine with a lower equity stake since he's not doing any of the actual technical work. His value stems from his really good understanding of what the user would want, since he is from the field. He is also high up in his firm and has lots of good relationships we can use for testing the product and getting customers. So we have no issue with his equity stake, and we even want to give him a royalty, since much of his additional value comes from his ability to get us customers.

The problem, though, is that he does not want to leave his job, and quite frankly we don't need him to, nor want him to. YC requires founders to leave their jobs, but him leaving his job would not really help us, since we want to test the product on the people at his firm. Moreover, he isn't coding, so we don't really need him full time. And since he works remotely, the amount of time he can commit is in line with what we need.

Would YC accept this from a founder? He's kind of a unique case, since his value is actually increased by him staying at his job.

Hey HN! Founder of Million here – we're building a tool that helps fix slow React code. Here is a quick demo: https://youtu.be/k-5jWgpRqlQ

Fixing web performance issues is hard. Every developer knows this experience: we insert console.log everywhere, catch some promising leads, but nothing happens before "time runs out." Eventually, the slow/buggy code never gets fixed, problems pile up on a backlog, and our end users are hurt.

We started Million to fix this: a VSCode extension that identifies slow code and suggests fixes (like ESLint, but for performance!). The website is here: https://million.dev/blog/lint

I realized this was a problem when I tried to write an optimizing compiler for React in high school (src: https://github.com/aidenybai/million). It garnered a lot of interest (14K+ stars) and usage, but it didn't solve all user problems.

Traditionally, devtools either hinge on full static analysis OR runtime profiling. We found success in a mixture of the two with dynamic analysis. During compilation, we inject instrumentation where it's necessary. Here is an example:

  function App({ start }) {
    Million.capture({ start }); // inject
    const [count, setCount] = Million.capture(useState)(start); // inject
    useEffect(() => {
      console.log("double: ", count * 2);
    }, Million.capture([count])); // inject
    return Million.capture( // inject
      <Button onClick={() => setCount(count + 1)}>{count}</Button>,
    );
  }

From there, the runtime collects this info and feeds it back into VSCode. This is a great experience! Instead of switching around windows and trying to interpret flamegraphs, you can just see it inline with your code.

We are still in the very early days of experimentation! Million Lint focuses on solving unnecessary re-renders right now, and will move on to handling slow-downs arising from the React ecosystem: state managers, animations, bundle sizes, waterfalls, etc. Our eventual goal is a toolchain that keeps your whole web infrastructure fast, automatically - frontend to backend.

In the next few weeks, we're planning to open source (MIT) the Million Lint compiler and the VSCode extension.

To earn a living, we will charge a subscription model for customized linting. We believe this aligns our incentives with yours: we only make money when we make your app faster.

We'd love to know your thoughts – happy to answer :)

Backstory: I'm a product designer who's mostly worked for startups and now big tech, and I haven't really touched html/css for nearly a decade. I've worked closely with engineers my entire career but never really rolled the sleeves up and dived into a scripting language. I'd seen some engineers playing around with CodeGPT over a year ago when it launched–we huddled around a screen and tried to decide how quickly our jobs would be replaced by this new technology. At the time, we weren’t in any real danger, but I caught a glimpse of how well it understood prompts and stubbed out large amounts of code.

For the past four or five years, I've played a hacky trivia game with family and friends where I play a song, and they have to guess the movie that features the song; Guess the Needle Drop. After many passionate debates and over-the-top celebrations fueled by my generation’s nostalgia for popular classic songs and films, people often told me that I needed to “build an app for this.”

I started doodling in Figma before quickly starting to build the website in Node, and then read somewhere that it's a better approach to learn vanilla javascript before trying to benefit from frameworks like React, etc. So I started again with a static vanilla website and, piece by piece, built out each chunk of functionality I’d envisioned. My mind was consistently blown at how helpful ChatGPT was–far beyond my lofty expectations, even with all the AI hype. It was like having a 24/7 personal tutor for free. I rarely had to google console errors hoping that a Stack Overflow discussion catered to my exact scenario. With enough information, ChatGPT always knew what was wrong and explained in terms I could understand.

The workflow went like this: I would describe the desired user experience, parse the code GPT suggested, copy it to my editor, and paste back any errors I came across along the way. The errors were abundant at the beginning, but I got better over time at anticipating issues. Perhaps my biggest takeaway was that I had to learn how to converse with ChatGPT: sometimes I would spend 10 minutes crafting a prompt, forcing me to fully understand and articulate my own line of thinking about what I was trying to achieve.

Using ChatGPT to make a static local website is fairly trivial, but the deployment and automation stage is where I fully realised the scope of what I could achieve. As a product designer, I’ve had the luxury of listening to engineers discuss solutions without personally having to sweat the execution. Working solo I couldn’t stay in the periphery anymore. I kinda knew AWS was a whole thing. That git was non-negotiable. That having a staging server is sensible and that APIs could do a lot of the heavy lifting for me. I would sanity-check with ChatGPT whether I understood these tools correctly and whether it was appropriate to use them for what I was building. A few of the things that initially intimidated me but I ended up working out:

- GitHub Actions workflows

- AWS hosting and CloudFront

- Route 53 DNS hosting

- SSL certificates

- Implementing fuzzy search

- LocalStorage and JSON manipulation

- Even some basic python to scrub data
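
Fuzzy search is a good example of how approachable these pieces turned out to be. A minimal sketch of the edit-distance style of matching it involves (an illustrative stand-in, assuming a plain Levenshtein approach, not the site's actual code):

```javascript
// Levenshtein edit distance: the number of single-character insertions,
// deletions, and substitutions needed to turn string a into string b.
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                    // deletion
        dp[i][j - 1] + 1,                                    // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)   // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Accept a trivia guess if it's within a couple of edits of the answer.
function isCloseGuess(guess, answer, tolerance = 2) {
  return levenshtein(guess.toLowerCase(), answer.toLowerCase()) <= tolerance;
}
```

In practice a library would likely handle this, but the core idea fits in a few lines.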

It’s a fairly basic game, and for anyone sneaking a look with the inspector, it’s a dog’s div soup breakfast served with a side of spaghetti logic. But it still goes to show how much AI seems like a learning steroid.

Hi HN, happy to share this project that I’ve been working on.

Ingrain is like Clerk for user activities. With Ingrain, you can add user activity logging to your app and display the activity feed in minutes using our UI components.

I know from my past experience of building internal tools and b2b apps that user activity logging is a useful feature that is always requested, but often shelved for other core business features. By providing this as a service, Ingrain saves developers time, especially from building the UI to display the activities.

My partner and I really like the whole concept of Clerk and we want to build the equivalent of Clerk for user activities. We hope you will find it useful. Feel free to reach out to us at [email protected] if you have any questions/comments.

Thank you!

Hello HN community!

I'm excited to present our latest project: ReadWeb.ai. This isn't just another web translation tool; it's a platform designed for seamless sharing and bilingual comparison, making it easier to understand and engage with web content in multiple languages.

What Makes ReadWeb.ai Unique?

Bilingual Comparison: View translations side-by-side with the original text, providing a clear comparison for language learners or anyone needing detailed understanding.

Easy Sharing: Share translated web content directly from the platform, ideal for collaborative work, educational purposes, or social sharing.

Instant Translation: Transform web pages, images, fashion content, and product descriptions into various languages with a single click.

Multilingual Capability: With extensive language support, access a diverse range of online content without language barriers.

Intuitive Design: Our user-friendly interface ensures a hassle-free navigation and translation experience.

Why Use ReadWeb.ai?

In today's world, accessing information in different languages should be effortless. Whether you're collaborating internationally, studying a new language, or exploring global markets, ReadWeb.ai is your tool for easier web navigation and content sharing.

We're eager to see how ReadWeb.ai can assist you in your daily web interactions. Try it out and let us know how our features like bilingual comparison and easy sharing enhance your experience.

Discover more at ReadWeb.ai

Your feedback and suggestions are highly valued!

I'm pleased to introduce SoraCool, a website dedicated to Sora-generated videos. SoraCool boasts a sleek design and a seamless user experience (which will continue to improve in the future). Users can explore our video gallery and dive into detailed video pages with titles, descriptions and previews. The intuitive progress bar allows viewers to navigate through videos effortlessly. We also offer a range of features including FAQs about Sora and the opportunity to try out Sora's features.

Stay tuned for updates on OpenAI's latest release of the text-to-video engine powered by Sora. I want to make it clear that the current features are based on my own ideas and I'm looking forward to your feedback to improve the platform. Keep an eye out for future features as I'll be updating the site on an ongoing basis.

Visit us at Link:
https://soracool.com
and don't forget to follow us on our social media accounts for the latest news and discussions - you can find us by clicking on the x account button at the bottom of the page. Thank you for your support and have a great day!

Hello

We are launching Meteor Files in open beta! A new player in the file-sharing and upload niche.

Key features:

1. We are the first service to allow single-file uploads over 5TB. In fact, our top-tier plan has no limit on single file size

2. Resumable and resilient uploads with multi-layered fail-overs

3. Start upload on one device — continue on another

We encourage you to register and try it while it's hot from the oven. We look forward to feedback on the following: UI/UX, design impressions, pricing and plan impressions, upload speed and concurrency, and download speed.

Back story: I'm an active open-source contributor and maintainer. I have developed and contributed to NPM and Meteor.js packages for more than ten years. In 2015, I published the first version of the "files" package for the Meteor.js ecosystem (at Atmospherejs). Since then, it has become the most popular solution for file uploads within Meteor.js. During all those years of active development, I've learned a lot about file uploads and delivery.

I've been lucky enough to work in the media production industry and see from the inside how often employees struggle and have to wait longer hours for source files to get to 100% before they can call it a day. That's why one of the first features I wanted to deliver is the ability to resume the same file upload from a different device as long as the original file remains in the user's possession, like on a hard drive or USB thumb.

Today, I have a small (but very talented) team, and we are happy to bring Meteor Files to life as SaaS. I've put all my previous experience toward building the most reliable, resilient, and durable file upload and sharing service, with features competitors don't offer.

More features coming soon: Resumable downloads, Team and project management, BYOS — bring your own storage, Hosted solution, White label.

Let us know which features in file upload and sharing are most crucial for you and your business.

Hi folks, my name’s Alex Kolchinski, and I’m looking for a cofounder to build an AI-powered B2B software company with.

A bit about me:

-I grew up programming and worked full-time as an engineer for a year after high school.

-I did a BS/MS in CS with an AI focus at UChicago.

-I was an APM at Google.

-I did half of a PhD at Stanford and did research in the Stanford AI Lab, publishing three papers — https://scholar.google.com/citations?user=wuMJ27MAAAAJ

-I dropped out of Stanford to start Mezli (YC W21) and led it as CEO. We launched a popular autonomous restaurant, but went out of business after our Series A fundraise fell through. I raised $4M (but failed to raise the additional ~$10M we needed), led a team of ~30, and ran several functions including finance and marketing. Google Mezli for news, reviews, etc., and you can see a video of the tech here - https://www.youtube.com/watch?v=DV2I9XwcEZE

-While shutting down Mezli over the past ~6 months, I’ve built and launched two products solo — a B2C utility app (www.readtome-app.com) and a B2B workflow automation product that just crossed $10K/month in revenue.

The aforementioned workflow automation product lets insurance agencies get quotes from carriers via API call in cases when quotes are otherwise only obtainable by filling out long web forms. Currently, agents spend as much as half their time clicking through these web forms. This product automatically fills them out with some help from GPT4, letting agents spend more of their time selling insurance.

Despite the early revenue, I’m still deciding whether to stay in this niche or pivot to a different one (I’m exploring a few others) — which will mostly depend on where I find the highest potential to scale to $1M+ in revenue within a year or two. My goal is to get to $1M+ with a narrow “wedge” product that’s quick to sell, then grow outwards from there, responding to inevitable shifts in the landscape of the software industry, including changes in the capabilities of AI, as they emerge.

I’ve been navigating this process solo so far, but I’d prefer to bring on a partner. Working as a team is more fun, it often yields better decisions, and it’s a lot faster than doing everything alone.

And if I’m going to bring on a cofounder, I’d like to do it now while the direction of the company is still up in the air. This way, we can discover the long-term direction of the company together, and it’ll feel like our baby, not just mine.

As far as who I’m looking to work with, I can see one of two arrangements working well:

  1. I’m CEO, you’re CTO. I focus on selling, you focus on building, but I’ll help build when appropriate. You need to be a top-notch builder (of software) with a history of shipping things quickly.
  2. I’m CTO, you’re CEO. You sell, I build. Because I’ve been the CEO of a startup that had some temporary success, I have a pretty high bar for this one: either you should have a previous exit as a startup CEO, or you should be a veteran of an industry that you can immediately start making sales in.

Either way, we should mesh well personally and professionally, and it wouldn’t hurt if you have previous experience working at startups.

A couple of other things:

-I feel very strongly about working in-person together, most days of the week, most weeks of the year, in or near San Francisco.

-At this stage of the game I’d be looking to split equity equally, with one extra share to the CEO to break ties. I also prefer a longer vesting schedule than 4 years to align founder incentives for the long-term.

Interested? Reach out — I’m at [email protected] — and please include a brief summary of what you’ve done in the past and why you’re interested in working together.

And for more details, see my blog post and video here:
https://alexkolchinski.com/2024/02/27/im-looking-for-a-cofounder/

We're developing an NLP-to-SQL application utilising OpenAI's GPT-4 to enable users to query Australian Census data via natural language. We've encountered issues with OpenAI's API rate limits, impacting user concurrency given our Tier 2 status. Our typical queries are about 3000 tokens each, limiting the number of simultaneous users.

Has anyone encountered similar challenges and how have you addressed them? Specifically:

Is there comprehensive literature or resources on managing OpenAI API costs effectively?
Would implementing a caching strategy, such as storing frequently asked questions in a vector database, mitigate the rate limit issues by reducing direct API calls?
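
The caching idea in the second question might look something like this (an assumed design, with an in-memory list standing in for a vector database): embed each incoming question, and if a previously answered question is close enough, return the cached answer instead of spending rate-limited tokens.

```javascript
// Cosine similarity between two embedding vectors of equal length.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Serve from cache when a sufficiently similar question was already
// answered; otherwise call the API and remember the result. The linear
// scan is a stand-in for a vector-database lookup.
function cachedQuery(embedding, cache, callApi, threshold = 0.95) {
  for (const entry of cache) {
    if (cosine(embedding, entry.embedding) >= threshold) {
      return { answer: entry.answer, fromCache: true };
    }
  }
  const answer = callApi();
  cache.push({ embedding, answer });
  return { answer, fromCache: false };
}
```

The threshold would need tuning: too low and users get answers to the wrong question, too high and the cache never hits.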

Any insights or experiences shared would be greatly appreciated.

I work in senior IT leadership for a rapidly growing company ($100MM -> $2B through M&A) and I'm curious about the experiences of different companies when it comes to the decision-making process behind developing internal tools versus relying on 3rd-party apps/SaaS providers. There seems to be a pivotal moment for many businesses where the scales tip in favor of custom development, but I imagine this varies widely based on industry, company size, growth phase, and specific business needs.

When we had 500 users, a simple SaaS solution at $5 a user made sense. Now that we're at almost 6,000, it seems ludicrous to be spending $15k a month (after the 50% 'enterprise' discount) on software that could be internally developed and maintained for a fraction of that cost over the product's lifecycle.

Hello everyone,
I want to kindly ask for help. We are a small open-source software company.

Just now, out of nowhere, GitHub flagged and suspended our organization's entire GitHub org without any warning.

We don't collect any user information, and we don't do any scraping or spam. Nothing.

We're a fully open source project (software company developing cloud for AI apps) and people having access to our GitHub is a vital part of our product.

Do you have any experience with this? Could it be something like a competitor reporting us, GitHub incorrectly flagging some activity as suspicious, or some kind of attack?

Thank you for any advice!

Hey HN!
I've been working on a project called Darwin that I'm thrilled to share with you.

Darwin is essentially your GitHub agent powered by large language models (LLMs). It checks out your projects, understands them through natural language prompts, and automates tasks such as fixing issues, documenting code, reviewing pull requests, and more.

What drove me to create Darwin was a desire to harness the power of LLMs in a way that's seamlessly integrated with the tools I use daily. The motivation came from my curiosity about what could be possible when writing code that understands code.
Darwin stands out because it's designed for developers who want to leverage AI without needing deep expertise in LLMs or prompt engineering. It offers:

- A hands-off approach to automating routine development tasks.

- Novel and creative ways of making LLMs work for you.

- A unique API for each project, allowing for customized automation tools.

Currently, Darwin is in alpha. It's functional, with users able to connect their repositories, define tools, and run tasks. I'm especially interested in feedback at this stage — everything from output quality to user experience. Every project starts with a $5 free budget to try it out, and while payment isn't implemented yet, I'm keen to hear your thoughts.

The vision for Darwin is not just about automation but creating a more productive, creative, and enjoyable development experience. I believe we're just scratching the surface of what's possible with AI in software development, and I'm excited to see where we can take this.

For those interested, I'm looking for alpha testers and feedback. If you're curious about automating your GitHub workflow or want to push the limits of what AI can do for development, Darwin might be for you.
Check it out and let me know what you think!

As the proprietor of a boutique online store in the UK, I'm considering a major overhaul of our website to enhance user experience and conversion rates, which are currently hampered by high bounce rates and low engagement. I'm in search of web design and development expertise or services that excel in e-commerce, particularly those that can offer mobile optimization and user-friendly interfaces. Could anyone recommend the latest trends or technologies that might help uplift my website's performance and visual appeal?

For the last few months, Chrome devs have surely been pushing some disgusting updates to the UI. Among other mildly annoying changes that can all be described as "just add padding everywhere", one stands out as the most useless. They changed the small window that pops up when you hit the star button to bookmark a page. In the past it was quite useful: right after creating a bookmark you could easily choose a folder, edit the URL, or hit Remove if you bookmarked by mistake. Now the Edit function is hidden behind a button that looks like a link; it opens folder selection, and only there can you rewrite the URL (which was insanely useful when you wanted to save a site but erase the path to the current page and its GET parameters). The new Edit button instead takes you to another simplified version of the old bookmark window, again without a way to directly edit the URL, though at least a Remove button is present there (so if you created a bookmark by mistake, it's now two clicks instead of one to remove it).

I understand this is just my small rant, and even other browsers like Firefox have the same interface where you can't easily edit the URL right away, but on Firefox, with its user customizability, I managed to add a URL field that works just fine. The only thing I'm sincerely confused about is why they do this to us. Did someone ask for this? Is it ergonomics? Is it about accessibility? My experience with Chrome over the last year looks like this: they make some downgrade -> you find out about it and search for ways to disable that downgrade -> you find some chrome://flags entry that does what you want -> they deprecate and then remove that flag in the next updates. Maybe it's so-called enshittification on purpose, maybe they're trying to push their new sidebar for some reason, I don't know. I'm just so disappointed with recent updates that I wanted to rant about it somewhere, and I've already sent tons of issue reports to Chrome about it.

Hey HN - observing the unfolding situation with a major platform heading towards an IPO has been eye-opening. The tensions between what's best for the platform (users, developers, and moderators) and investor value expectations (plus the implications for the platform) are uncomfortably clear.

Oh, and especially striking is the issue of moderators, crucial to the platform, remaining unpaid as a risk highlighted in an IPO filing.

This situation raises a fundamental question for founders: How do we secure the investment needed for growth without compromising the core values and community that define our projects?

Seeing a platform potentially jeopardize its user base and community support at the altar of investor returns is a cautionary tale and not new, but it seems tech is only accelerating it.

I’m facing this turmoil firsthand. The path from startup to scale involves navigating investor demands, but at what cost? How do we avoid making the same compromises at the cost of our users and partners? It feels impossible.

This tension is too real to me, and I think it shows up as an outright defensiveness when I talk to investors. Maybe I'm too emotional?

I’m reaching out for insights on maintaining this balance. Any advice or experiences shared would be invaluable.

Hi all - this is a recreation of the classic lunar lander game, except it's multiplayer, and has weapons! People can compete to land first or blow each other away.

Play it here: https://lander.gg

Architecturally, for each match, there's a server and many clients. The server and each client all run the same physics simulation of the game world, but the server holds the authoritative state and broadcasts player inputs to all clients.

The server is spun up via partykit as a Cloudflare durable object. Partykit is great; you code the server as if there's just one server, and partykit takes care of instantaneously spinning one up whenever you need one. It communicates with clients via WebSockets and is fast.

I'm using the rapier physics engine, which is written in rust with Javascript bindings. It supports deterministic simulation, which is important to make sure all parties can run simulations independently but end up with the same world state at each time step. It also supports taking snapshots of the game world, which makes it possible for each party to keep historical snapshots of the game world at each time step so they can rebase user inputs from other players as they are received, to keep the simulation in step. Occasionally, the server also broadcasts its entire authoritative game state to make sure everyone agrees.
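The snapshot-and-rebase idea above can be sketched roughly as follows. This is an illustrative toy, not lander.gg's actual code: `snapshot()`, `restore()`, and `step()` stand in for the physics engine's real API, and real netcode also has to cap how far back it is willing to rewind.

```python
# Toy rollback simulation: every party keeps a snapshot per tick; when a
# remote player's input arrives late, rewind to the snapshot for that
# tick and re-simulate forward with the input included. Determinism of
# world.step() is what makes all parties converge on the same state.

class RollbackSim:
    def __init__(self, world):
        self.world = world
        self.tick = 0
        self.snapshots = {0: world.snapshot()}  # state *before* each tick
        self.inputs = {}                        # tick -> list of inputs

    def step(self, local_inputs):
        # Advance one tick with whatever inputs we know about so far.
        self.inputs.setdefault(self.tick, []).extend(local_inputs)
        self.world.step(self.inputs[self.tick])
        self.tick += 1
        self.snapshots[self.tick] = self.world.snapshot()

    def apply_remote_input(self, tick, inp):
        # A late input for an already-simulated tick: rewind and replay.
        self.inputs.setdefault(tick, []).append(inp)
        self.world.restore(self.snapshots[tick])
        for t in range(tick, self.tick):
            self.world.step(self.inputs.get(t, []))
            self.snapshots[t + 1] = self.world.snapshot()
```

Because the replay uses the same deterministic step function, a client that rebased a late input ends up bit-identical to one that had the input on time.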

The game starts you out in a public room, but you can create a private game as well (just go to https://lander.gg/whatever). Turn off infinite health and fuel for the full experience!

I'm not a game developer so this is just a random passion project! It started 20 years ago (!) when I was a TA for a CS class at Berkeley, and I helped design and teach an experimental programming class aimed at (non-cs) engineers. The final project asked students to implement some physics simulation of a simple lunar lander game. The original project page is somehow still online here: https://inst.eecs.berkeley.edu/~cs4/fa04/prj/

The project was a lot of fun to put together, and even though it was challenging, the students had a lot of fun with it too. After the class was over, I continued hacking on the game, adding weapons and allowing two players to play side-by-side on the same keyboard. I always wanted to make it network-multiplayer, but started my first job and never got to it.

Fast forward 20 years! With recent advancements like partykit, I wanted to give it a try, and rebuilt the game with network multiplayer as the main focus, and with a real physics engine! It was fun to hack on.

Hey Hacker News community!

I’m the CEO of wikiroutes.info, serving millions of users monthly. Our site's speed and responsiveness are critical, with a specific focus on optimizing for Cloudflare. Given how our user base is distributed, it's crucial for us to partner with a hosting provider in Europe or the US that ensures lightning-fast internal speeds between Cloudflare and the hosting service.

Could anyone recommend hosting providers that excel in these areas? If you've managed high-traffic sites, I'd be grateful if you could share the name of your hosting provider, your site's URL, and its monthly traffic. Insights into your experience with Cloudflare integration, especially regarding server location and internal speed, would be incredibly valuable.

What hosting are you using, how does it perform in terms of traffic, and what has your experience been with Cloudflare integration?

PS: And yes, ChatGPT helped me phrase this since English isn't my first language. No shame in that!

Hey HN! We've expanded our session replay capabilities to include React Native for iOS. Using OpenReplay, a session replay and analytics tool that you can self-host, developers can record, replay, and troubleshoot issues in their apps.

What’s included:

- React Native iOS focus: tailored for React Native apps on iOS.

- Full-fledged DevTools: includes crash analytics, network payloads, and performance metrics for comprehensive debugging.

- Smart heuristics: features like auto-detect of crashes and tracking of user frustrations, such as click rage, to identify issues quickly.

- Customizable tracking: the OpenReplay SDK allows tailored session tracking, capturing user interactions and changes.

Benefits:

- Fast bug identification: with session replay, see exactly what went wrong.

- In-depth analysis: beyond surface-level issues, understand the root cause.

- Optimize user experience: use insights to refine and improve your app’s usability.

Interested? For more details, you can check out the GitHub repo [0] or documentation [1]:

[0] https://github.com/openreplay/openreplay

[1] https://docs.openreplay.com/en/rn-sdk

Apple recently announced changes to iOS, Safari, and the App Store specifically for the European Union, as outlined here: https://www.apple.com/de/newsroom/2024/01/apple-announces-changes-to-ios-safari-and-the-app-store-in-the-european-union/.

I'm at a crossroads with my B2C app, which has around 10-15k total users (a mix of free and paid). According to my calculations, adopting the new App Store terms would net me an additional 400€ per month. While the financial upside seems clear, I'm considering other factors such as potential future growth and any unforeseen implications these changes might bring. I'm mostly interested in the reduced commission I have to pay to Apple, and not yet in using alternative app stores or linking out.

The crux of my question is: Should I opt for these new terms? Here are some points I'm pondering:

The immediate financial benefit is apparent, but I'm curious about long-term implications. What happens if I publish another app with a much lower income:cost ratio and go above a million users? (I know what will happen, but how should I weigh that chance, and future regulatory pressure on Apple to change the terms?)

How might this change affect my app's user experience, especially considering all the EU-specific alterations (including linking out)? Could there be any negative repercussions (taken by Apple against me)? Could Apple reduce my reach in the App Store - charts, similar apps etc.?

Are there any potential pitfalls or opportunities I might be overlooking?

What are you going to do?
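To make the million-user worry above concrete, here is a rough sketch of the fee math under the two regimes, using the rates Apple published for the EU changes (15%/30% standard commission; 10%/17% commission plus 3% payment processing plus a €0.50 Core Technology Fee per first annual install above 1M under the new terms). Treat the numbers as illustrative, not as advice; the actual rates depend on your program tier and may change.

```python
# Back-of-envelope comparison of Apple's standard terms vs. the new EU
# business terms. Rates are the published figures at announcement time.

def standard_terms(revenue_eur, small_business=True):
    """Old terms: 15% commission (Small Business Program) or 30%."""
    rate = 0.15 if small_business else 0.30
    return revenue_eur * (1 - rate)

def new_eu_terms(revenue_eur, annual_installs, small_business=True):
    """New EU terms: 10% (SBP) or 17% commission, plus 3% if you keep
    Apple's payment processing, plus the Core Technology Fee of €0.50
    per first annual install above 1,000,000."""
    rate = (0.10 if small_business else 0.17) + 0.03
    ctf = max(0, annual_installs - 1_000_000) * 0.50
    return revenue_eur * (1 - rate) - ctf
```

At 10-15k users the CTF is zero and the lower commission wins outright, which matches the ~400€/month gain; the risk shows up only for a future app where installs outgrow revenue, because the CTF scales with installs while commission scales with income.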

Hey HN! Chris and Yuhong here from Danswer (https://github.com/danswer-ai/danswer). We’re building an open source and self-hostable ChatGPT-style system that can access your team’s unique knowledge by connecting to 25 of the most common workplace tools (Slack, Google Drive, Jira, etc.). You ask questions in natural language and get back answers based on your team’s documents. Where relevant, answers are backed by citations and links to the exact documents used to generate them.

Originally Danswer was a side project motivated by a challenge we experienced at work. We noticed that as teams scale, finding the right information becomes more and more challenging. I recall being on call and helping a customer recover from a mission critical failure but the error was related to some obscure legacy feature I had never used. For most projects, a simple question to ChatGPT would have solved it; but in this moment, ChatGPT was completely clueless without additional context (which I also couldn’t find).

We believe that within a few years, every org will be using team-specific knowledge assistants. We also understand that teams don’t want to tell us their secrets and not every team has the budget for yet another SaaS solution, so we open-sourced the project. It is just a set of containers that can be deployed on any cloud or on-premise. All of the data is processed and persisted on that same instance. Some teams have even opted to self-host open-source LLMs to truly airgap the system.

I also want to share a bit about the actual design of the system (https://docs.danswer.dev/system%5Foverview). If you have questions about any parts of the flow such as the model choice, hyperparameters, prompting, etc. we’re happy to go into more depth in the comments.

The system revolves around a custom Retrieval Augmented Generation (RAG) pipeline we’ve built. During indexing time (we pull documents from connected sources every 10 minutes), documents are chunked and indexed into hybrid keyword+vector indices (https://github.com/danswer-ai/danswer/blob/main/backend/dans...).

For the vector index (which gives the system the flexibility to understand natural language queries), we use state of the art prefix-aware embedding models trained with contrastive loss. Optionally the system can be configured to go over each doc with multiple passes of different granularity to capture wide context vs fine details. We also supplement the vector search with a keyword based BM25 index + N-Grams so that the system performs well even in low data domains. Additionally we’ve added in learning from feedback and time based decay—see our custom ranking function (https://github.com/danswer-ai/danswer/blob/main/backend/dans... – this flexibility is why we love Vespa as a Vector DB).
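As an illustration of the blending described above, a hybrid score with time-based decay might look like the following. This is not Danswer's actual ranking function (theirs lives in a Vespa rank profile); the normalization, weights, and half-life here are invented for the sketch.

```python
# Illustrative hybrid ranking: blend a keyword (BM25) score with vector
# cosine similarity, then apply exponential time decay so stale documents
# gradually lose their boost without ever being zeroed out.

def hybrid_score(bm25, cosine_sim, doc_age_days,
                 alpha=0.5, half_life_days=180):
    # Squash unbounded BM25 into [0, 1) so it is comparable to cosine.
    keyword = bm25 / (bm25 + 1.0)
    blended = alpha * keyword + (1 - alpha) * cosine_sim
    # A document loses half of its decayable boost per half-life.
    decay = 0.5 ** (doc_age_days / half_life_days)
    return blended * (0.5 + 0.5 * decay)  # floor the decay at 50%
```

The floor on the decay term reflects the idea in the post that freshness should be a tiebreaker, not a hard filter: an old but highly relevant document still ranks.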

At query time, we preprocess the query with query augmentation and contextual rephrasing, as well as standard techniques like stopword removal and lemmatization. Once the top documents are retrieved, we ask a smaller LLM to decide which of the chunks are “useful for answering the query” (this is something we haven’t seen much of elsewhere, but our tests have shown it to be one of the biggest drivers for both precision and recall). Finally, the most relevant passages are passed to the LLM along with the user query and chat history to produce the final answer. We post-process by checking guardrails and extracting citations to link the user to relevant documents. (https://github.com/danswer-ai/danswer/blob/main/backend/dans...)
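The chunk-filtering step can be pictured as a loop like the one below, where `llm()` stands in for any small-model completion call; Danswer's real prompt, batching, and parsing surely differ, so read this purely as the shape of the idea.

```python
# Sketch of LLM-based chunk filtering: after retrieval, ask a small LLM
# to vote YES/NO on each chunk's usefulness before the main LLM answers.
# `llm` is any callable taking a prompt string and returning text.

def filter_chunks(query, chunks, llm):
    useful = []
    for chunk in chunks:
        prompt = (
            f"Query: {query}\n"
            f"Passage: {chunk}\n"
            "Is this passage useful for answering the query? "
            "Answer YES or NO."
        )
        if llm(prompt).strip().upper().startswith("YES"):
            useful.append(chunk)
    return useful
```

The appeal of doing this with a *smaller* model is cost: the filter runs once per retrieved chunk, while the expensive generative model only ever sees the survivors.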

The Vector and Keyword indices are both stored locally and the NLP models run on the same instance (we’ve chosen ones that can run without GPU). The only exception is that the default Generative model is OpenAI’s GPT, however this can also be swapped out (https://docs.danswer.dev/gen%5Fai%5Fconfigs/overview).

We’ve seen teams use Danswer on problems like: Improving turnaround times for support by reducing time taken to find relevant documentation; Helping sales teams get customer context instantly by combing through calls and notes; Reducing lost engineering time from answering cross-team questions, building duplicate features due to inability to surface old tickets or code merges, and helping on-calls resolve critical issues faster by providing the complete history on an error in one place; Self-serving onboarding for new members who don’t know where to find information.

If you’d like to play around with things locally, check out the quickstart guide here: https://docs.danswer.dev/quickstart. If you already have Docker, you should be able to get things up and running in <15 minutes. And for folks who want a zero-effort way of trying it out or don’t want to self-host, please visit our Cloud: https://www.danswer.ai/

After 14 months and probably over 1,000 hours, I'm finally at the point where I feel like I can show off my side project, LANCommander.

As the title states, LANCommander is a self-hosted, open source digital distribution platform for PC games. The project started to solve the woes of a regular in-person LAN party that I participate in with some friends. I got tired of spending more time solving technical issues than playing games, so I decided to create a tool to help automate the process.

The server application is built as a .NET 8 Blazor application. Database access is implemented with Entity Framework and a SQLite database. The client is a custom addon for the open-source launcher Playnite. By utilizing Playnite I was able to shift my focus from the end user experience to a fairly robust server application.

I recommend reading some of the (admittedly, probably a little outdated) documentation available at https://lancommander.app and overview over at the GitHub repository (https://github.com/LANCommander/LANCommander). If you're interested in chatting about the project or following development, I recommend joining our Discord server: https://discord.gg/vDEEWVt8EM

Hi HN! In December I launched an MVP for Agora here: https://news.ycombinator.com/item?id=38635695

After posting, we got thousands of users and hundreds of comments with valuable feedback from the community. I spent a couple of sleepless nights frantically pacing around my room trying to keep the product live and relatively performant. After getting some sleep, I got back to work to make the product better.

A few updates:

1. We've grown from 25 million to 200 million products on Shopify and WooCommerce. The team at WooCommerce reached out after the HN launch to help us figure out how to index their stores. Similar to Shopify, we found that there’s a public file available for all stores that use WordPress and WooCommerce at [Base URL]/wp-json/wc/store/v1/products. For example, the file for Good Works Tractors is available here: https://www.goodworkstractors.com/wp-json/wc/store/v1/produc... So I bought a list of 3.5 million active WooCommerce stores on a website called BuiltWith, adapted the product data model, and started the crawler to go down the list. We've indexed around 515k stores so far.

2. We improved the search experience. We're using Mongo to host the 200 million product records. First, we switched from Mongo Atlas Search to Typesense. After testing Typesense with our product records, we found most searches to be under 200ms. We're not storing the product images which slows down the loading speed at times. This week, we set up a server using Paperspace to run SBERT embeddings on a GPU (new to the AI workflow so apologies if I get the lingo wrong). We quickly realized that the dimension size of the embeddings matters a lot here, given the size of the data set. The GPU is still running to process all 200 million records and we're about a week away from releasing AI-powered search.

3. We localized the user experience. There's now frontend and backend IP detection to only show users products that are 'based in' or 'ship to' their specific country. This 'ships to' filter (stored for all Shopify stores at the /meta.json route, e.g. https://wildfox.com/meta.json) significantly slows down the search results, but we're trying to get creative with the loading process and animation. For example, we're using revalidation in Next.js to give several pages a 'hard coded' feel, with the data refreshing every 60 seconds. https://nextjs.org/docs/app/building-your-application/data-f...

4. We got our first few paying customers. Store owners can sign up for free to track their store's performance on Agora. We validate that they are the store owner by making sure the email address and store URL match on sign up, and then send them an email verification link. They can upgrade to a subscription tier to 'verify' their products to get better placement in relevant search results. Additionally, they can pay to 'boost' products and guarantee that they'll show up in the first row of results. Given the high purchase-intent searches on Agora, I'm finding this to be the right business model.
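The WooCommerce crawl described in update 1 can be sketched roughly like this. The Store API route is the public one mentioned above; the field names in `normalize()` are assumptions about Agora's internal product model, not its real schema.

```python
# Minimal sketch of a WooCommerce store crawl: build the public Store API
# URL for each store, fetch a page of products, and map each raw record
# into a small internal model.

import json
from urllib.request import urlopen

def products_url(base_url, page=1, per_page=100):
    # The Store API exposes products publicly, no authentication needed.
    return (f"{base_url.rstrip('/')}/wp-json/wc/store/v1/products"
            f"?page={page}&per_page={per_page}")

def normalize(raw):
    # Keep only the fields the search index needs (hypothetical schema).
    return {
        "name": raw.get("name"),
        "url": raw.get("permalink"),
        "price": (raw.get("prices") or {}).get("price"),
    }

def crawl_store(base_url):
    with urlopen(products_url(base_url)) as resp:
        return [normalize(p) for p in json.load(resp)]
```

A real crawler would page through results, back off on errors, and respect rate limits across 3.5 million stores, but the per-store work is essentially this.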

The next challenge to solve: we need to improve the quality of products on Agora. There are a lot of resellers, dropshipping stores, and low-quality images. Now, just because a product is sold on a reseller or dropshipping website doesn't mean it's a bad product; there are a lot of exceptions and edge cases to solve. One potential solution: we're considering coming up with an "Agora Score" that takes in several factors including image quality, store name, brand name, website SEO, etc. to tell users how trustworthy we think the product is.

I'd love any feedback or advice. I did solve my original problem of finding 'red shoes' for my wife, but inadvertently created more problems for myself. I'm loving every minute of it though. My wife jokes that everything is now "Agora this...Agora that". Open to any advice on that as well.

Exclusive to Gemini Advanced: Edit and run Python code

- What: Exclusive to Gemini Advanced, you can now edit and run Python code snippets directly in Gemini's user interface. This allows you to experiment with code, see how changes affect the output and verify that the code works as intended.

- Why: These coding capabilities are particularly beneficial for two main use cases: learning and verification. For example, students can play with Gemini's code examples to better understand how modifications impact the outputs. This interactive learning experience can help you grasp coding concepts more effectively. Or, developers can quickly check if the code generated by Gemini runs correctly before copying it. This saves you time and ensures that the code you use is functional.

More info at https://gemini.google.com/updates

I have an idea for a side project and now I'm analyzing different options for backend to serve the Flutter mobile app.

I write Go professionally and it will be generally easy to provide REST endpoints backed by Postgres as storage.

That said, I will also need to take care of user registration, authorization, content management, etc.

I'm not very experienced with Django, but it's a very popular choice for one-man startups. It also seems there are many more ready-made components and production-ready libraries for Django and Django REST Framework.

What backend stack would you choose today?

Five months back, I started building Visioncast. Drawing inspiration from psychology books like "Psycho-Cybernetics" and "What to Say When You Talk to Yourself", I've been fascinated by the power of affirmations and positive self-talk. This fascination was the seed for Visioncast, which guides users through personalized audio exercises including affirmations, meditations, motivational speeches, and visualizations, all tailored to their individual needs.

I'm leveraging GPT-4 and TTS to deliver content that's not just personalized but deeply resonant with the challenges and aspirations of the user. Whether it's bolstering self-esteem or stopping procrastination, the app aims to support mental and emotional well-being.

Perfecting the user experience has been tricky. Right now I'm copying ChatGPT's suggestion cards to start you out, paired with a customization screen that offers extensive control over the final output. This level of personalization sets Visioncast apart from apps like Calm, Headspace, and Waking Up, which offer no personalization.

In the first version, it took 1m 20s to generate audio because I had to wait for the whole GPT-4 response to come in. I've since streamlined audio processing and streaming (with lots of fighting with AWS, Django, React Native, and Nginx). And now, with the app inching ever closer to what I envisioned as an AI mentor, the validation from our first two paying users has been incredibly affirming.
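One common shape for that kind of latency fix is to pipeline: split the LLM token stream into sentences and hand each finished sentence to TTS immediately, instead of waiting for the full response. The sketch below assumes that pattern; `token_stream` and `tts` are placeholders, and real sentence segmentation is messier than this regex.

```python
# Pipeline sketch: emit complete sentences as soon as the token stream
# produces them, so TTS can start speaking while generation continues.

import re

def sentences_from_stream(token_stream):
    buf = ""
    for token in token_stream:
        buf += token
        # Flush every complete sentence (ends in . ! or ? plus a space).
        while True:
            m = re.search(r"(.+?[.!?])\s+", buf)
            if not m:
                break
            yield m.group(1)
            buf = buf[m.end():]
    if buf.strip():
        yield buf.strip()  # whatever trails the last terminator

def stream_audio(token_stream, tts):
    for sentence in sentences_from_stream(token_stream):
        yield tts(sentence)  # audio chunks start before generation ends
```

With this structure the time-to-first-audio drops from the full generation time to roughly the time of the first sentence plus one TTS call.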

I'd love to hear your insights/feedback, as I'm still working out UX and some bugs.
