That isn’t the equal transit time explanation and the information on nasa.gov doesn’t look wrong to me. It’s entirely consistent with the Babinsky “How wings really work” video and experiment shown in a sibling post.


NASA.gov has multiple pages on lift. Maybe one will explain it accurately.

https://www1.grc.nasa.gov/beginners-guide-to-aeronautics/lif...


They could just live in the sphere.


Does it save the content of open tabs?

This is my number one frustration with existing iOS browsers - when I come back to tabs without an internet connection everything is gone.


All iOS browsers just use some webview under the hood. I doubt they have any control at all over this extremely irritating behavior.


But why can't they store the page cache as local data and continue to use the webview to open it?


It is likely the OS that is clearing those tabs from memory; the app may not get any notification. iOS does not have virtual memory that can be saved to storage and restored on request. iPadOS now does have that on the newer M1 processor versions. Those devices also have 8GB of RAM, so they are less likely to need to purge or swap.


It's still not clear why the app can't save the page itself to storage on every page load, let the OS clear the memory without notifying anyone, and then reload from the local file if the tab is purged.


That sounds like the app would need to implement its own RAM paging, which is a complicated and error-prone endeavor and likely not worth the effort for the developer. I don't even know if an iOS app would have the system hooks to be able to do it.


Why not just save the page as HTML? Isn't that what official Safari does with the Reading List?


Pages can rewrite themselves at any time.


So? Having something good most of the time beats having nothing all of the time.


Because it's an opaque component, and it's the only thing you can use on iOS. Firefox and Chrome are just chrome/shells around Safari's engine and all suffer from this problem.


It should remember the scroll position, and the content of <input> and <textarea> tags.


Does it re-request contents when it reloads tabs after the app is suspended? Safari makes it very difficult to read content later if you don’t have service.


It does. Not sure WebKit has an easy solution for that specific scenario at the moment, besides saving a web archive to disk.


There’s a bias towards coding capabilities amongst testers (and perhaps trainers) of ChatGPT.

I bet it has (or can have) a similar level of capability when it comes to chemistry, biology, finance, law, etc. - all human knowledge that is expressible as text or formulae.

With regards to software engineering, a great amount of time is currently being wasted by coders trying to solve problems that have already been solved by others. Some of the solutions for that previously included libraries, SaaS, and Stack Overflow.

Now we have another tool at our disposal to 10x all software engineers (and perhaps the same for other industries).


I think the question is what happens when you're able to 10x all software engineers but don't have job market demand to match the 10x increase. 90% layoffs? Maybe not 90% because these processes aren't perfectly efficient or evenly distributed, and also because you can increase productivity without affecting demand to some extent, but what's the %? 30-50%? That's still huge.


Perhaps we'll have Jevons paradox, and instead of reducing the workforce, we'll accelerate software eating the world.

A lot of software is crappy. The extra productivity could go partially into quality improvement.


We haven’t automated 10% of what could be automated, especially in tasks that were previously not cost-efficient to automate or that required AI tech to automate.


My understanding is that much of the difficulty with automation actually has to do with interacting with the physical world. We still don't have a machine for folding laundry that matches human performance.

Coding is a nice ___domain for AI to take over because it's all abstract (apart from context). The problem ___domain is literally just string-to-string mapping.


Coding may be abstract, but execution of the resulting program is not, and the results of that execution are driven by real-world needs. The truth is that a human can invent things because they can pattern-match across whole domains. You could say there is a mechanical solution to that (how could we build an algorithm that achieves the same result?), but AI cannot do it unless the algorithm has already been created. I think the current state of AI is great for searching and for creating a starting point, but it can never get us to the finish line.


Finally a software engineering silver bullet post-No Silver Bullet.


Prompt: Elon Musk bought Twitter

ChatGPT: As a large language model trained by OpenAI, I'm sorry but the information you provided is not accurate. As of November 30, 2022, Twitter is still a publicly-traded company and is not owned by Elon Musk. Jack Dorsey is the current CEO of Twitter, and the company has not been taken private. Elon Musk is not currently the owner of Twitter, and the company has not been taken private. I'm a large language model trained by OpenAI, and I don't have the ability to access current or real-time information. The information I have access to is based on the knowledge and data that I was trained on up to November 30, 2022. If you have specific information or sources that you believe contradict the information I have provided, I encourage you to share that information so that we can discuss it further. I'm here to provide helpful and accurate information, and I'm always open to learning more.


I saw in another example that it said it's only trained up to the end of 2021 though - someone correct me if I'm wrong?

This could just be a case of "as of X date, this is simply my best available information to go on".


After interrogating ChatGPT about this, it told me that the explicit cut-off date in its pre-prompt is to prevent it from saying things about developing stories, where the facts are not fully known yet.


Since this is a study about the predictability of humans, perhaps “less random” should be defined by how similar or predictable a sequence is relative to the dataset of human inputs.
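
As a rough, purely hypothetical illustration (not anything from the study): you could train a simple bigram model on the corpus of human-generated sequences and treat the average log-probability it assigns to a new sequence as its predictability, so a higher score means "less random" by this definition.

    # Hypothetical sketch: score how "random" a sequence looks relative to
    # a corpus of human-generated sequences, using a simple bigram model.
    from collections import defaultdict
    import math

    def train_bigram(corpus):
        # Count symbol transitions across all human-generated sequences.
        counts = defaultdict(lambda: defaultdict(int))
        for seq in corpus:
            for prev, nxt in zip(seq, seq[1:]):
                counts[prev][nxt] += 1
        return counts

    def predictability(seq, counts, alpha=1.0):
        # Average log-probability of each symbol given the previous one,
        # with add-alpha smoothing. Higher = more predictable = "less random".
        symbols = set(counts) | {s for row in counts.values() for s in row}
        total = 0.0
        for prev, nxt in zip(seq, seq[1:]):
            row = counts[prev]
            denom = sum(row.values()) + alpha * max(len(symbols), 1)
            total += math.log((row[nxt] + alpha) / denom)
        return total / max(len(seq) - 1, 1)

    # Toy usage with made-up "human input" data.
    human_corpus = ["0101010011", "0011010101", "0101101010"]
    model = train_bigram(human_corpus)
    print(predictability("0101010101", model))
    print(predictability("0000000000", model))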


> Sit so that the monitor is between the two of you.

No. Use two monitors that are mirrored or with screen sharing, so that each person can sit comfortably centred on their own screen.


I've found shared tmux/screen sessions are ideal.

+ each person can have their own font/resolution

+ it doesn't dominate the entire screen, allowing each person to keep their own notes/etc on the side

+ at any point someone can "jump in" and take control of the session

+ interacting in a confined shared space radically reduces "over communication" issues. i.e. if you want to show something it's got to be demonstrable in a small textual window

+ you have a shared command-line, which is more useful than it might initially seem

+ seamlessly scales to in-person and remote pair programming

There are some downsides:

- it requires that both users be familiar with a terminal-based editor

- it may present security issues for folks operating in locked-down/low-resource environments (e.g. can't spin up a temporary machine with a shared account)

- sharing graphical information requires a separate communication layer


Even considering how easy tmate makes things, it is really challenging, because there are a lot of software engineers who don't really know how to use a terminal, to the point that asking them to SSH somewhere is a bit too much.

So we end up sharing the screen over Hangouts, which is very inefficient and wastes a lot of CPU. But because it is normalized, it is "the standard".

EDIT: my comment was a bit unfair. I guess I could install VS Code, change my daily editor, and use the Live Share plugin with those who use VS Code. So Hangouts it is.


Take that one step further: each person should have their own resolution. Just because your 20-year-old eyes looking at a 4K super-widescreen monitor can read it doesn't mean my 30-year-old eyes, with glasses, on a laptop can read it too. (I really like Code With Me in PyCharm.)


Thanks for posting.

For the benefit of people trying it out, here are some important features that are in the pipeline but are not yet available:

- Exact phrase searching

- Sort and filter by date

- A very large index


I’m working on a new search engine that will allow you to do that. (It’s still a work-in-progress, but you can try it out here: https://entfer.com/).


A really intelligent AI would just use a calculator.


A combo of LM + REPL would be the winning ticket. The language model generates a piece of code, executes it, and observes the results. Is the answer found? If not, maybe the problem can't be solved in a single iteration, so it continues until a solution is found or a maximum number of iterations is reached. Single-step responses are limited by their nature, and computation is hard to do "in your head"/"in your LM".
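
A minimal sketch of that loop, with everything hypothetical: ask_language_model stands in for whatever API turns a prompt into code, and the "ANSWER:" convention for detecting success is just something I'm assuming for illustration.

    # Rough sketch of an LM + REPL loop. ask_language_model is a hypothetical
    # placeholder for whatever API generates a Python snippet from a prompt.
    import subprocess
    from typing import Optional

    MAX_ITERATIONS = 5

    def ask_language_model(prompt: str) -> str:
        # Placeholder: return a Python snippet that tries to answer the prompt.
        raise NotImplementedError("plug in your favourite language model here")

    def run_snippet(code: str) -> str:
        # Execute the generated code in a separate interpreter, capture output.
        result = subprocess.run(
            ["python3", "-c", code],
            capture_output=True, text=True, timeout=10,
        )
        return result.stdout + result.stderr

    def solve(question: str) -> Optional[str]:
        prompt = f"Write Python that prints 'ANSWER: <result>' for: {question}"
        for _ in range(MAX_ITERATIONS):
            code = ask_language_model(prompt)
            output = run_snippet(code)
            if "ANSWER:" in output:
                return output.split("ANSWER:", 1)[1].strip()
            # Not solved in one shot: feed the observed output back and iterate.
            prompt += f"\nPrevious attempt printed:\n{output}\nTry again."
        return None  # gave up after hitting the iteration limit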

The REPL could also come with libraries for all the important algorithms. Thus the LM could use numerical solvers and other nice toys.

Another excellent module we could augment LMs with is a search engine - already done, and proven to improve the efficiency of the LM by 25x (see link). Another one would be a physics simulator.

https://deepmind.com/research/publications/2021/improving-la...

We're just starting with language models. They will improve tremendously.


But a really really intelligent AI would try to run them through a neural net and be amazed when it got most of them approximately correct.


I'd really want to see a super-complicated AI that, when given many questions, devises a way to compile a basic calculator program for itself inline, one that obeys simple math rules (e.g. only addition and multiplication), and then calculates the questions it's given using that instead of relying on trained math data.
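
For what it's worth, the kind of "basic calculator" being imagined could be tiny; here's a toy sketch of my own (not anything an actual model produces) that handles only addition and multiplication.

    # A tiny illustration of the kind of "basic calculator" an AI might compile
    # for itself: handles only integers, '+' and '*', nothing else.
    import re

    def calc(expr: str) -> int:
        # Evaluate an expression containing integers, '+' and '*' only.
        if not re.fullmatch(r"[\d+*\s]+", expr):
            raise ValueError("only digits, '+' and '*' are allowed")
        # '*' binds tighter than '+': evaluate each product, then sum the terms.
        total = 0
        for term in expr.split("+"):
            product = 1
            for factor in term.split("*"):
                product *= int(factor)
            total += product
        return total

    print(calc("2 + 3 * 4"))  # 14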

