That GitHub issue is closed because it's been mostly completed. As of https://github.com/modelcontextprotocol/modelcontextprotocol..., the latest draft specification does not require the resource server to act as, or proxy to, the IdP. It just hasn't made its way into a ratified spec yet, but SDKs are already implementing the draft.
It really depends on whether you're dealing with an async stream or a single async result as the input to the next function. If a is an access token needed to access resource b, you cannot access a and b at the same time. You have to serialize your operations.
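A minimal sketch of that serialization in C# (the helper and URLs are hypothetical): the token request has to complete before the resource request can even be constructed, so the two awaits are necessarily sequential.

    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    using var http = new HttpClient();

    // a: acquire the access token first...
    string token = await GetAccessTokenAsync(http);

    // ...b: only now can the authenticated resource request be built and sent.
    using var req = new HttpRequestMessage(HttpMethod.Get, "https://example.com/b");
    req.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token);
    using var resp = await http.SendAsync(req);

    // Hypothetical token endpoint; stands in for whatever IdP flow you actually use.
    static Task<string> GetAccessTokenAsync(HttpClient http) =>
        http.GetStringAsync("https://idp.example.com/token");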
Can you provide a citation for this? I’ve read older RFCs that "recommend" recipients allow single LFs to terminate headers for robustness. I’ve also read newer RFCs that weaken that recommendation and merely say the recipient "MAY" allow single LFs. I’ve never noticed an HTTP RFC say you can send headers without the full CRLF sequence, but maybe I missed something.
Me too. It's one thing to accept single LFs in protocols that expect CRLF, but sending single LFs is a bridge too far in my opinion. I'm really surprised most of the other replies to your comment currently seem to unironically support not complying with well-established protocol specifications under the misguided notion that it will somehow make things "simpler" or "easier" for developers.
I work on Kestrel, which is an HTTP server for ASP.NET Core. Kestrel didn't support LF without a CR in HTTP/1.1 request headers until .NET 7 [1]. Thankfully, I'm unaware of any widely used HTTP client that even supports sending HTTP/1.1 requests without CRLF header endings, but we did eventually get reports of custom clients that used only LFs to terminate headers.
I admit that we should have recognized a single LF as a line terminator instead of just CRLF from the beginning like the spec suggests, but people using just LF instead of CRLF in their custom clients certainly did not make things any simpler or easier for me as an HTTP server developer. Initially, we wanted to be as strict as possible when parsing request headers to avoid possible HTTP request smuggling attacks. I don't think allowing LF termination really allows for smuggling, but it is something we had to consider.
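For the curious, the tolerant behavior amounts to something like this when scanning a header line (a simplified sketch, not Kestrel's actual parser): find the LF, then treat an immediately preceding CR as part of the terminator.

    using System;
    using System.Text;

    // Simplified sketch, not Kestrel's real parser: accept both CRLF and a
    // bare LF as the header line terminator, per the spec's robustness advice.
    static string? ReadHeaderLine(ReadOnlySpan<byte> buffer, ref int pos)
    {
        int lf = buffer[pos..].IndexOf((byte)'\n');
        if (lf < 0) return null;                      // incomplete line; need more bytes
        int end = pos + lf;                           // index of the LF
        int lineEnd = end > pos && buffer[end - 1] == (byte)'\r'
            ? end - 1                                 // CRLF: drop the CR
            : end;                                    // bare LF: accept as-is
        string line = Encoding.ASCII.GetString(buffer[pos..lineEnd]);
        pos = end + 1;                                // resume scanning after the LF
        return line;
    }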
I do not support even adding the option to terminate HTTP/1.1 request/response headers with single LFs in HttpClient/Kestrel. That's just asking for problems because it's so uncommon. There are clients and servers out there that will reject headers with single LFs while they all support CRLF. And if HTTP/1.1 is still being used in 2050 (which seems like a safe bet), I guarantee most clients and servers will still use CRLF header endings. Having multiple ways to represent the exact same thing does not make a protocol simpler or easier.
In its original printing-terminal sense, carriage return might be ambiguous. It could mean either "just send the print head to column zero" or "send the print head to column zero and advance the line by one". The latter is what typewriters do for the Return key.
But LF always meant Line Feed, moving the paper but not the print head.
These are of course wildly out of date concepts. But it still strikes me as odd to see a Line Feed as a context reset.
>The latter is what typewriters do for the Return key.
Minor correction: mechanical typewriters do not have a Return key, but they have both operations (line feed, as well as carriage return).
The carriage return lever is typically rigged to also do a line feed at the same time, by a preset number of lines (which can be set to 0), or you can push the carriage without engaging the line feed.
Technically, the lever would do LF, and pushing on it further would do CR (tensioning the carriage spring).
It is, however, true that most of the time the users would simply push the lever until it stops without thinking about it, producing a CRLF operation, and that CR without LF was comparatively rare.
From a pure protocol UX perspective, it would IMO make sense to have a single command for (CR + LF) too, just like the typewriter effectively does (push the lever once to do both).
It seems weird that the protocol is more limited than the mechanical device that it drives, but then again, designers probably weren't involved in deciding on terminal protocol specs.
On manual typewriters there is a lever that turns the roller to accomplish a line feed (or two if set for double space.)
This lever is usually located on the left side of the carriage to make it convenient to push it back to the right side in the same motion.
>the lever would do LF, and pushing on it further would do CR (tensioning the carriage spring).
In any case, carriage return is just as important a function of the lever as line feed:
- you can also directly do line feed by turning the roller
- line feed, by itself, doesn't need a large lever
- carriage return, by itself, doesn't need a large lever either - you can simply push the carriage
- however, having a large lever is an ergonomic feature which allows you to:
1) return the carriage without moving your hands too far from the keyboard
2) do CRLF in one motion without it feeling like two things
3) if need be, do a line feed by itself, since the force required for that is much smaller than the force needed to move the carriage (lever advantage!).
The long lever makes it so that line feed happens before carriage return. If the lever were short, you'd be able to move the carriage until it stops, and only then would the paper move up.
So I wondered why the control codes are doing the operations in the opposite order from the typewriter.
Turns out, the reasons are mechanical[1]:
>The separation of newline into two functions concealed the fact that the print head could not return from the far right to the beginning of the next line in time to print the next character. Any character printed after a CR would often print as a smudge in the middle of the page while the print head was still moving the carriage back to the first position. The solution was to make the newline two characters: CR to move the carriage to column one, and LF to move the paper up.
Aha! Makes sense.
In a way, this was creating a de facto protocol by usage, in a spirit similar to the one in which the author suggests getting rid of it.
As in: the existing standard wasn't really supported, but the commands were let through nevertheless, and allowing things to break incentivized people to collectively stick to the way of doing things that didn't result in misprints.
Note that I said it would make sense to have a CRLF command too, as in: in addition to the separate CR and LF commands, which are useful in their own right.
I also strongly disagree with the author that LF is useless.
So many times in code I need to type:
    Function blah(parameter1 = default1,
                  parameter2, ...)
It would be super nice to move straight down from the beginning of the word "parameter1" to the next line, even when that line is empty, and start typing at that column.
Sure, there is auto format. But not in this comment box.
And what I'm talking about is exactly what LF was meant to do!
I want all my text boxes to support that, and to have a special key on my keyboard to do it.
1) "Typewriters" in parent's comment didn't refer to mechanical typewriters, but
2) Line feed/carriage return semantics, as well as the UX of combining them into one action to proceed to the next line of text, predate electric typewriters and were effectively the same on mechanical ones.
As I wrote in the other comment, the subtle difference in semantics comes from teletypes, which couldn't advance the paper feed and return the carriage fast enough to print the next character in the timespan of one command.
Not that it applied to all teletypes, but it was the case for a very popular unit.
The makers of that machine deliberately didn't include a single command that would do CR/LF so that there'd be no way for the users to notice that.
The ordering, CR then LF, differs from the one on mechanical typewriters, where LF always precedes CR when you use the big lever, allowing one to use the same lever to produce blank lines without moving the carriage (in effect, doing LF LF LF ... LF CR).
On the teletypes though, CR LF ordering was, in any case, a lie, since in actuality, LF was happening somewhere in the middle of the carriage return, which took the time span of two commands.
The CR command had to precede LF on the teletype because it took longer, but since the mechanisms were independent, they could be executed at the same time.
This is the difference from mechanical typewriters.
The typing mechanism was also independent of CR and LF, and running CR + [type character] at the same time was bad. But having a fixed time per command simplified everything, so instead of waiting (which means buffering, with potential overflow issues, or a protocol to tell the sending party to wait, which is a lot more complex), hacks like this were put in place.
My IBM Selectric is not functional (got it as a repair project, haven't gotten to it yet), so I can't verify, but I'd guess it doesn't need to do CR then LF, since it can simply not process input while the carriage is returning. It's OK for it to do CR and LF in any order, or simultaneously.
If the operator presses and releases a button during this time, the machine can simply do nothing; the operator will re-type the character the next instant, using the buffer in their head where the text ultimately comes from.
The teletypes didn't have that luxury, as the operator on the other end could be a computer, which was told it could send output at a certain rate, and by golly it did. Not processing a command would mean dropping data.
All that is to say that CR and LF are present on both typewriters and teletypes, with the following differences:
* mechanical typewriters always do LFCR due to the mechanics of the carriage return lever, which was designed for a human operator;
* teletypes do CRLF because that's how they cope with the typist being a machine that can't be told to wait a bit until the carriage returns;
* and electric typewriters are somewhere in between and could do whatever, because the CR lever was replaced by the motor (like on a teletype), but the operator was still a human who could wait half a second without forgetting what it is that they wanted to type.
IMO, it's worth keeping CRLF around simply because it's a part of computer and technology history that spans nearly two centuries, from typewriters to Google Docs.
> Our browsers could have been exploiting things behind NAT this entire time. Smart TVs, Smart watches, phones, anything pingable on your LAN.
Maybe if they’re running an HTTP server (which isn’t too uncommon for IoT devices) while allowing the attacker website via CORS (less likely). An IoT device listening for WebSocket or WebRTC connections won’t benefit from CORS, but those are relatively rare and ought to have other mitigations in place.
All your links show is the ability to scan ports, without even reading the responses to the fetch() requests made to local IP addresses. That could be useful to an attacker, but it's a far cry from exploiting any smart device or having the ability to send "outgoing crafted packets" from the browser. You cannot even open arbitrary sockets or craft arbitrary HTTP requests.
> The fact that most programming languages don’t give enough semantic information for their compiler to do a good job doesn’t mean it necessarily has to be so. Functional programmers just trust that their compiler will properly optimize their code.
While everyone has to trust their compiler to make reasonable optimizations to some extent, there comes a point of complexity where it becomes difficult to know intuitively whether a "sufficiently smart compiler"[1] will optimize properly, and that is a problem.
I realize you're arguing Haskell is worse than OCaml in this regard, but I'd argue it's harder in general to reason about how functional code will be translated into machine code than comparable procedural code.
You might be technically correct, but if you extend that logic, why not just make the grid 1x1 and select a single color?
The grid size is part of the pattern in the same way that the colors are part of the pattern. It’s not just a color pattern, it’s a generalized mapping of input to output.
In short: you need to resize the grid because that’s what the examples do.
> why not just make the grid 1x1 and select a single color?
For two reasons:
1. The initially suggested grid size was 3x3.
2. Filling in a 3x3 grid is sufficient to show that you understood the pattern, but filling in a 1x1 (or even 2x2) grid is insufficient.
Requiring the user to fill in a larger grid is a waste of time. The existence of the grid size selector would still make sense in cases where a 2x2 grid would be sufficient to show the solution, so it is not obvious at all that a 6x6 grid should be chosen.
> The grid size is part of the pattern in the same way that the colors are part of the pattern.
To understand a pattern, you have to see at least two valid inputs and corresponding outputs. For the first example, a valid example for the expected output grid size is missing.
I arrived at the "correct" conclusion eventually, but the only indicator was that the reading direction for the UI was absolutely ridiculous ( https://i.imgur.com/CuQ2z2N.png ), suggesting that the authors did not think this through properly, so the solution had to be weird as well.
Honestly, I'd disagree. I was a bit confused at first, but the moment I realized I could resize the grid, the answer struck me as obvious and clear. Yes, in some theoretical sense you can argue a 3x3 grid answer is fine, but show this to 100 different humans and the majority would agree that resizing the grid is the obvious and more natural solution.
What is even the meaning of "correct" in this case?
This makes me think of "math" problems requiring you to find the next number in a series. They give you 5 numbers and ask for the 6th, when I can build a polynomial that can generate the first 5 and any 6th number. Any.
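To spell that out (standard Lagrange interpolation, with hypothetical values a_1 through a_6): given the five shown values at x = 1, ..., 5 and any desired sixth value a_6 at x = 6, the polynomial

    P(x) = \sum_{k=1}^{6} a_k \prod_{1 \le j \le 6,\; j \neq k} \frac{x - j}{k - j}

has degree at most 5 and passes through all six points, so the "next number" is literally whatever you want it to be.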
Sounds like the point of these exercises is to guess what the author had in mind, more than some universal intelligence test. Though of course the author thinks their own thoughts are the measure of universal intelligence. It's a tempting thing to believe.
AFAIK, all Linux distros plus Windows and macOS have TCP keepalives off by default, as mandated by RFC 1122. Even when they are optionally turned on using SO_KEEPALIVE, the interval defaults to two hours because that is the minimum default interval allowed by the spec. That can then be optionally reduced with something like /proc/sys/net/ipv4/tcp_keepalive_time (system wide) or TCP_KEEPIDLE (per socket); see the sketch after the RFC excerpt below.
By default, completely idle TCP connections will stay alive indefinitely from the perspective of both peers even if their physical connection is severed.
Implementors MAY include "keep-alives" in their TCP
implementations, although this practice is not universally
accepted. If keep-alives are included, the application MUST
be able to turn them on or off for each TCP connection, and
they MUST default to off.
Keep-alive packets MUST only be sent when no data or
acknowledgement packets have been received for the
connection within an interval. This interval MUST be
configurable and MUST default to no less than two hours.
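In .NET, for instance, opting in and tightening the intervals per socket looks roughly like this (a sketch, assuming .NET Core 3.0+ where these TCP-level options are exposed cross-platform):

    using System.Net.Sockets;

    var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

    // Keepalives are off unless the application opts in, per RFC 1122.
    socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);

    // Shrink the two-hour default idle time (the TCP_KEEPIDLE equivalent), in seconds.
    socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveTime, 600);

    // Probe spacing and count once the idle timer fires (TCP_KEEPINTVL / TCP_KEEPCNT).
    socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveInterval, 10);
    socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveRetryCount, 5);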
ASP.NET Core can easily route "/hello" to HelloController.Index(), but it's not exactly automatic. The controller library adds routes to the routing middleware via a call to MapControllerRoute that the app developer must make during startup, and that call specifies the route pattern.
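A minimal sketch of what that startup code looks like (assuming conventional routing; attribute routing is the other option):

    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Extensions.DependencyInjection;

    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddControllersWithViews();
    var app = builder.Build();

    // Nothing magic maps "/hello": this pattern is what sends it to HelloController.Index().
    app.MapControllerRoute(
        name: "default",
        pattern: "{controller=Home}/{action=Index}/{id?}");

    app.Run();

    public class HelloController : Controller
    {
        public IActionResult Index() => Content("Hello from /hello");
    }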