That isn’t the equal transit time explanation and the information on nasa.gov doesn’t look wrong to me. It’s entirely consistent with the Babinsky “How wings really work” video and experiment shown in a sibling post.
It is likely the OS that is clearing those tabs from memory; the app may not get any notification. iOS does not have virtual memory that can be paged out to storage and restored on request. iPadOS now does have that on the newer M1-based models, and those also have 8 GB of RAM, so they are less likely to need to purge or swap.
Still not clear why the app can't save the page itself to storage on every page load, let the OS clear the memory without notifying anyone, and then, if the tab is purged, reload it from the local file.
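To be concrete, the pattern I mean is roughly this (a generic sketch in Python, not the actual iOS/WebKit API; the names and cache ___location are made up for illustration):

    import hashlib
    import os

    CACHE_DIR = "/tmp/tab_cache"  # hypothetical on-disk cache ___location

    def cache_path(url: str) -> str:
        # One file per URL, keyed by a hash of the URL.
        return os.path.join(CACHE_DIR, hashlib.sha256(url.encode()).hexdigest() + ".html")

    def on_page_loaded(url: str, html: str) -> None:
        # Persist the page as soon as it finishes loading.
        os.makedirs(CACHE_DIR, exist_ok=True)
        with open(cache_path(url), "w") as f:
            f.write(html)

    def restore_tab(url: str, fetch_from_network) -> str:
        # If the OS purged the in-memory tab, fall back to the local copy
        # before hitting the network again.
        path = cache_path(url)
        if os.path.exists(path):
            with open(path) as f:
                return f.read()
        return fetch_from_network(url)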
That sounds like the app would need to implement its own RAM paging, which is a complicated and error-prone endeavor and likely not worth the effort for the developer. I don't even know if an iOS app would have the system hooks to be able to do it.
Because it's some opaque component, and it's the only thing you can use on iOS. Firefox and Chrome are just chrome/shells around Safari's engine and all suffer from this problem.
Does it re-request contents when it reloads tabs after the app is suspended? Safari makes it very difficult to read content later if you don’t have service.
There’s a bias towards coding capabilities amongst testers (and perhaps trainers) of ChatGPT.
I bet it has (or can have) a similar level of capability when it comes to chemistry, biology, finance, law, etc. - all human knowledge that is expressible as text or formulae.
With regards to software engineering, a great amount of time is currently being wasted by coders trying to solve problems that have already been solved by others. Some of the solutions for that previously included libraries, SaaS, and Stack Overflow.
Now we have another tool at our disposal to 10x all software engineers (and perhaps the same for other industries).
I think the question is what happens when you're able to 10x all software engineers but don't have job market demand to match the 10x increase. 90% layoffs? Maybe not 90% because these processes aren't perfectly efficient or evenly distributed, and also because you can increase productivity without affecting demand to some extent, but what's the %? 30-50%? That's still huge.
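Back-of-the-envelope version of that arithmetic (the demand multipliers below are made-up assumptions, just to show where 30-50% could come from):

    # If per-engineer productivity goes up 10x but demand for software output
    # only grows by some smaller factor, the fraction of jobs no longer
    # needed is 1 - demand_growth / 10.
    productivity_multiplier = 10

    for demand_growth in (3, 5, 7):  # assumed demand multipliers, not data
        jobs_needed = demand_growth / productivity_multiplier
        print(f"demand x{demand_growth}: keep {jobs_needed:.0%}, cut {1 - jobs_needed:.0%}")
    # demand x5 gives ~50% cuts, x7 gives ~30%: roughly the 30-50% range above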
We haven’t automated 10% of what could be automated, especially in tasks that were previously not cost-efficient to automate or that required AI tech to automate.
My understanding is that much of the difficulty in automation actually has to do with interacting with the physical world. We still don't have a machine for folding laundry that matches human performance.
Coding is nice for AI to overtake because it's all abstract (apart from context). The problem ___domain is literally just string-to-string mapping.
Coding may be abstract, but execution of the resulting program is not, and the results of that execution are driven by real-world needs. The truth is that a human can invent things because they can pattern-match across whole domains. You could say there is a mechanical analogue to that and ask how we could build an algorithm with the same result, but AI cannot do it unless the algorithm has already been created. I think the current state of AI is great for searching and for creating a starting point, but it can never get us to the finish line.
ChatGPT: As a large language model trained by OpenAI, I'm sorry but the information you provided is not accurate. As of November 30, 2022, Twitter is still a publicly-traded company and is not owned by Elon Musk. Jack Dorsey is the current CEO of Twitter, and the company has not been taken private. Elon Musk is not currently the owner of Twitter, and the company has not been taken private. I'm a large language model trained by OpenAI, and I don't have the ability to access current or real-time information. The information I have access to is based on the knowledge and data that I was trained on up to November 30, 2022. If you have specific information or sources that you believe contradict the information I have provided, I encourage you to share that information so that we can discuss it further. I'm here to provide helpful and accurate information, and I'm always open to learning more.
When I interrogated ChatGPT about this, it told me that the explicit cut-off date in its pre-prompt is there to prevent it from making claims about developing stories, where the facts are not fully known yet.
Since this is a study about the predictability of humans, perhaps “less random” should be defined by how similar or predictable a sequence is relative to the dataset of human inputs.
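One made-up way to operationalize that: train a simple bigram model on the pool of human-generated sequences and score each sequence by how well the pool predicts it (toy sketch, not the study's actual method):

    from collections import Counter

    def bigram_counts(sequences):
        # Count symbol-pair frequencies across the whole pool of human inputs.
        counts = Counter()
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                counts[(a, b)] += 1
        return counts

    def predictability(seq, counts, alphabet="01"):
        # Average probability the pool's bigram model assigns to each next symbol.
        # Higher = more predictable relative to the human data, lower = more random-looking.
        score, steps = 0.0, 0
        for a, b in zip(seq, seq[1:]):
            total = sum(counts[(a, c)] for c in alphabet) or 1
            score += counts[(a, b)] / total
            steps += 1
        return score / max(steps, 1)

    human_pool = ["0101010101", "0011001100", "0101101010"]  # toy stand-in data
    counts = bigram_counts(human_pool)
    print(predictability("0101010101", counts))  # scores high: typical human alternation
    print(predictability("0010111010", counts))  # scores lower: less like the pool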
+ it doesn't dominate the entire screen, allowing each person to keep their own notes/etc on the side
+ at any point someone can "jump in" and take control of the session
+ interacting in a confined shared space radically reduces "over communication" issues. i.e. if you want to show something it's got to be demonstrable in a small textual window
+ you have a shared command-line, which is more useful than it might initially seem
+ seamlessly scales to in-person and remote pair programming
There are some downsides:
- it requires that both users be familiar with a terminal-based editor
- it may present security issues for folks operating in locked-down/low-resource environments (e.g. can't spin up a temporary machine with a shared account)
- sharing graphical information requires a separate communication layer
Even considering how easy tmate makes things, it is really challenging because a lot of software engineers don't really know how to use a terminal, to the point that asking them to SSH into something is a bit too much.
So we end up sharing the screen over Hangouts, which is basically very inefficient and wastes a lot of CPU. But because it is normalized, it is "the standard".
EDIT: my comment was a bit unfair. I guess I could install VS Code, change my daily editor, and use it with the Live Share plugin with those using VS Code. So Hangouts it is.
Take that one step further: each person should have their own resolution. Just because your 20-year-old eyes looking at a 4K super-widescreen monitor can read it doesn't mean my 30-year-old eyes with glasses on a laptop can read it too (I really like Code With Me in PyCharm).
A combo of LM + REPL would be the winning ticket: the language model generates a piece of code, executes it, and observes the results. Is the answer found? If not, maybe it cannot be solved in a single iteration; continue until a solution is found or the max number of iterations is reached (a rough version of the loop is sketched below). Single-step responses are limited by their nature, and computation is hard to do "in your head"/"in your LM".
The REPL could also come with libraries for all the important algorithms. Thus the LM could use numerical solvers and other nice toys.
Another excellent module we could augment LMs with is a search engine: already done, and proven to improve the efficiency of the LM by 25x (see link). Another would be a physics simulator.
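Roughly the loop I have in mind, as a sketch only; `llm_generate` is a placeholder for whatever model you plug in, and sandboxing the executed code is hand-waved:

    import subprocess, sys, tempfile

    MAX_ITERATIONS = 5

    def run_in_repl(code: str) -> str:
        # Execute the generated code in a throwaway interpreter and capture its output.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
        result = subprocess.run([sys.executable, f.name],
                                capture_output=True, text=True, timeout=30)
        return result.stdout + result.stderr

    def solve(task: str, llm_generate) -> str | None:
        # llm_generate(task, history) -> Python source; plug in whatever model you have.
        history = []
        for _ in range(MAX_ITERATIONS):
            code = llm_generate(task, history)   # generate a candidate program
            output = run_in_repl(code)           # execute it, observe the results
            history.append((code, output))       # feed observations back next round
            if "ANSWER:" in output:              # model marks success in its own output
                return output.split("ANSWER:", 1)[1].strip()
        return None                              # give up after max iterations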
I'd really want to see a super-complicated AI that, given many questions, devises a way to compile a basic calculator program for itself inline, one obeying simple math rules (e.g. only addition and multiplication), and then answers the questions with it instead of relying on memorized training data.
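Something as small as this would do; it's a toy written by hand here rather than model-generated, just to show how little code the tool needs to be:

    import ast, operator

    # Tiny calculator restricted to addition and multiplication: the kind of
    # tool the model could emit once and then call instead of "recalling" sums.
    OPS = {ast.Add: operator.add, ast.Mult: operator.mul}

    def calc(expr: str) -> int:
        def walk(node):
            if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.Constant) and isinstance(node.value, int):
                return node.value
            raise ValueError("only integer addition and multiplication allowed")
        return walk(ast.parse(expr, mode="eval").body)

    print(calc("3 + 4 * 25"))  # 103, computed rather than recalled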
https://www.grc.nasa.gov/www/k-12/UEET/StudentSite/dynamicso...