
This is amazing! Congrats on the launch!


Pro tip: Read the actual court decision and not just the headlines.

Tech law journalism is a telephone game that usually distorts what was actually said. People then form strong opinions on the headlines. Chaos ensues.

Example: “AI art cannot be copyrighted - US Federal Judge”

That’s not what the judge decided. The decision said you can’t assign a copyright to an AI. This doesn’t mean the thing you create with AI can’t be copyrighted. You write a prompt, press a button, cause something to be created, etc., and you’re the author, not the AI.

https://www.documentcloud.org/documents/23919666-thalervperl...

From @AndrewMayne on Twitter


Worst case, load it in GIMP, set brush to 1% opacity and give it a couple of strokes. Boom, bona fide copyrighted work.


That makes a lot more sense.

Otherwise, what prevents someone from arguing that Photoshop, with all its bazillion functions, is an AI, and therefore you can't copyright anything made with it?


Love this.


It's the older completion models, not the older chat completion models.


They're deprecating all the completion/edit models.

The chat models constantly argue with you on certain tasks and are highly opinionated. The completion API was a lot more flexible and "vanilla" about a wide variety of tasks: you could start a thought, or a task, and truly have it complete it.

The chat API doesn't complete; it responds (of course it completes internally, but it completes a response rather than a continuation).
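
For illustration, here's a rough sketch of the difference using the pre-1.0 openai Python SDK (the model names are just examples; assumes OPENAI_API_KEY is set in the environment):

    import openai  # pre-1.0 SDK style

    # Completion endpoint: the model continues whatever text you give it.
    completion = openai.Completion.create(
        model="text-davinci-003",  # example legacy completion model
        prompt="The quick brown fox jumps over the",
        max_tokens=16,
    )
    print(completion.choices[0].text)  # a raw continuation of the prompt

    # Chat endpoint: the model answers a message rather than continuing it.
    chat = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "The quick brown fox jumps over the"}],
    )
    print(chat.choices[0].message.content)  # a response, not a continuation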

I find this a big step back, and I hope the competition steps in to fill the gaps OpenAI keeps opening.


Unfortunately, their decisions are driven by model usage: gpt-3.5-turbo is the most used one, probably due to the low price and similar results.


"similar" is a very bold claim ;-)

Comparable, perhaps.


In March, I was confident edge-level usage of large models wouldn't be possible soon.

Distilling large models into smaller ones has gotten better than I anticipated.


In case it's not obvious, this is a concrete departure from rudimentary "prompt engineering".

Dialog-based interfaces, such as a context-aware conversation, are better at conveying human intent and provide a more natural way to interact with model capabilities.
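
As a minimal sketch of what "context-aware" means in practice (again the pre-1.0 openai Python SDK; the helper and prompts are hypothetical): each turn is appended to the running message history, so the model sees the whole dialog rather than an isolated prompt.

    import openai  # pre-1.0 SDK; assumes OPENAI_API_KEY is set

    messages = [{"role": "system", "content": "You are a concise coding assistant."}]

    def ask(user_text: str) -> str:
        """Send one user turn and keep the growing history so later turns have context."""
        messages.append({"role": "user", "content": user_text})
        reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        return answer

    ask("I'm writing a parser in Python.")
    ask("What data structure should I use for the symbol table?")  # interpreted in the context of the first turn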


With all the excitement around Stable Diffusion, it’s nice to have an easy API to plug into for product development.

The API supports three endpoints (generation, edits, and variation), but doesn't support fine-tunes, which are capturing people's attention now with deepfake avatars.
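
If you want to poke at the three endpoints, here's a rough sketch with the pre-1.0 openai Python SDK (the prompts and file names are placeholders):

    import openai  # pre-1.0 SDK; assumes OPENAI_API_KEY is set

    # Generation: image from a text prompt.
    gen = openai.Image.create(prompt="a watercolor fox in a forest", n=1, size="512x512")
    print(gen["data"][0]["url"])

    # Edit: repaint the masked region of an existing image to match the prompt.
    edit = openai.Image.create_edit(
        image=open("fox.png", "rb"),      # placeholder square RGBA PNG
        mask=open("fox_mask.png", "rb"),  # transparent pixels mark the area to replace
        prompt="add a red scarf",
        n=1,
        size="512x512",
    )

    # Variation: new images in the style of an existing one, no prompt needed.
    var = openai.Image.create_variation(image=open("fox.png", "rb"), n=2, size="512x512")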

I'm curious to see what people build with this.


There are several services that host a Stable Diffusion API, DreamStudio being the official one.


Here is the prompt for "A still of Kermit The Frog in Dragon Ball Z (1989)"

https://user-images.githubusercontent.com/1332366/171921054-...


The reason we made this is that you often need to juggle multiple tools when doing anything in video. Rather than using one tool for video collection, another for editing, and another for sharing clips, we wanted to have one tool for everything.

I believe we are the only video collection tool that also lets users design around the collected video clips in a free-form drag-and-drop editor and share out final videos.


Sharing the latest iteration of Milk Video with HN.

Milk Video makes it easy to gather video testimonials, then edit and share them, all in one place. We created a robust browser-based video composition tool for teams that produce webinars, and recently added a feature to gather video testimonials via a link.

The application is a React front end with a Ruby on Rails API. To gather video recordings, we use Daily.co as the recorder and stream the recordings with Mux. To compose new video from the collected recordings, we built a video rendering engine that breaks the source material into frames and screenshots individual frames of the composed final video. Once the frames are created, we reconstruct the video and add the audio tracks for the final product.
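
(Not the production engine, just an illustrative sketch of the general extract-frames / reassemble idea using ffmpeg; the paths, frame rate, and codecs are placeholders.)

    import subprocess

    FPS = 30  # placeholder frame rate

    def extract_frames(src: str, out_dir: str) -> None:
        """Break the source video into numbered PNG frames."""
        subprocess.run(
            ["ffmpeg", "-i", src, "-vf", f"fps={FPS}", f"{out_dir}/%06d.png"],
            check=True,
        )

    def reassemble(frames_dir: str, audio: str, out: str) -> None:
        """Rebuild a video from the rendered frames and lay the audio track back on."""
        subprocess.run(
            ["ffmpeg",
             "-framerate", str(FPS), "-i", f"{frames_dir}/%06d.png",
             "-i", audio,
             "-c:v", "libx264", "-pix_fmt", "yuv420p",
             "-c:a", "aac", "-shortest",
             out],
            check=True,
        )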

If you know anyone on a marketing team at your company, I'd greatly appreciate you sharing it with them and letting me know what they think!

Thank you!

