I'm the developer of Juniper, a functional reactive programming language for the Arduino. It's very possible to run high level code on small devices. This even includes things like closures, which Juniper allocates on the stack!
I've also never been able to get these P2P web file transfer tools (FilePizza, ShareDrop) to work without issues. Transfers inevitably fail partway through, especially for large files. This seems to happen even in ideal network conditions such as over a LAN.
You may be interested in my other comment. To put it simply: pure languages remove the serialisation caused by false data dependencies, exposing data parallelism, and interaction nets remove the serialisation caused by false control dependencies, exposing control parallelism.
It would be great if we could also go ahead and fix subpixel anti-aliasing for OLED screens. People have been trying for years to get Microsoft's attention about this issue. [1]
The subpixel layout of OLED screens is different from the traditional layout, so text ends up looking pretty bad. Patching ClearType would be the first step to fixing this issue. I'm surprised that none of the display manufacturers have tried twisting Microsoft's arm to fix it. At the moment OLED screens are the superior display technology, but they cannot be used for productivity because of this issue.
> The subpixel layout of OLED screens is different from the traditional layout, so text ends up looking pretty bad. Patching ClearType would be the first step to fixing this issue.
Patching ClearType is unfortunately not as straightforward as it should have been. In an ideal world, you just change the sampling kernel your rasterizer uses to match the subpixel layout (with perceptual corrections) and you’re done (sketched at the end of this comment). In our world, it takes hackery of Lovecraftian levels of horrifying to display crisp text using a vector font on a monitor with a resolution so pitiful a typographer from centuries ago would have been embarrassed to touch it. Unfortunately, that ( < 100 dpi when 300 dpi is considered barely acceptable for a print magazine) is the only thing that was available on personal computers for decades. And if you try to avoid hacks, you get more or less Adobe Reader’s famously “blurry” text.
One of the parts of that hackery is distorting outlines via hinting. That distortion is conventionally hand-tuned by font designers on the kind of display they envision their users having, so in a homogeneous landscape it ends up tied to the specifics of both ClearType’s subpixel grid (that has been fixed since 2001) and Microsoft’s rasterizer (which is even older). Your sampling kernel is now part of your compatibility promise.
The Raster Tragedy website[1] goes into much more detail with much more authority than I ever could lay claim to, except it primarily views the aforementioned hackery as a heroic technical achievement whereas I am more concerned with how it has propagated the misery of 96 dpi and sustained inadequate displays for so long we’re still struggling to be rid of said displays and still dealing with the sequelae of said misery.
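To make the “ideal world” version of that fix concrete, here is a minimal OCaml sketch of kernel-based filtering for a plain RGB stripe, under the usual assumptions (coverage rasterized at 3x horizontal resolution, an illustrative 1-2-3-2-1 kernel); real renderers use tuned, perceptually corrected kernels, and nothing here touches the hinting compatibility problem described above.

    (* Sketch only, not ClearType's actual code: each 3x horizontal sample
       maps to one subpixel; low-pass each channel to trade sharpness
       against colour fringing. Kernel weights are illustrative. *)
    let filter_row (samples : float array) : (float * float * float) array =
      let k = [| 1.; 2.; 3.; 2.; 1. |] in
      let ksum = Array.fold_left ( +. ) 0. k in
      let n = Array.length samples in
      let tap i =                      (* filtered coverage at 3x index i *)
        let s = ref 0. in
        Array.iteri
          (fun j w ->
             let idx = i + j - 2 in
             if idx >= 0 && idx < n then s := !s +. w *. samples.(idx))
          k;
        !s /. ksum
      in
      Array.init (n / 3) (fun px ->
          (* R, G, B subpixels of pixel px come from samples 3px .. 3px+2 *)
          (tap (3 * px), tap (3 * px + 1), tap (3 * px + 2)))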
> Unfortunately, that ( < 100 dpi when 300 dpi is considered barely acceptable for a print magazine)
I find this fascinating, because I recall school textbooks having visible dots, but I've yet to experience what people refer to as "oh my god I'm seeing the pixels!".
It further doesn't help that when seated at a typical distance (30° hfov) from a ~23" 16:9 FHD display (96 ppi), you get a match (60 ppd) for the visual acuity you're measured for when an optometrist tells you that you have 20/20 eyesight (quick math below).
It's been of course demonstrated that eyesight better than 20/20 is most certainly real, that the density of the cones in one's eye also indicates a much finer top resolution, etc., but characterizing 96 ppi as so utterly inadequate will never not strike me as quite the overstatement.
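For reference, a quick back-of-the-envelope check of those figures in OCaml (the panel size and the 30° viewing angle are taken from the comment above; 20/20 acuity corresponds to roughly 60 pixels per degree):

    (* all numbers approximate *)
    let diag = 23.0 and h_res = 1920.0
    let width = diag *. 16.0 /. sqrt ((16.0 ** 2.) +. (9.0 ** 2.))
    let ppi = h_res /. width    (* width ~20.05 in, so ~96 ppi *)
    let ppd = h_res /. 30.0     (* 64 ppd, close to the 60 ppd of 20/20 acuity *)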
As far as I know, visible dots in color printing are usually due to limitations on the set of available colors and the limited precision with which the separate stages that deposit those colors on the sheet can be aligned with each other, not due to inherent limitations on the precision of each of those color layers. You get dots in photos, but not jagged lines in your letters. And 300 dpi is what your local DTP / page layout person will grumpily demand from you as the bare minimum acceptable input for that dithering, not what the reader will ultimately see. One way or another, 96 dpi pixelated (or blurry) text in e.g. an illustration in a printed manual is really noticeable, and miserable to read in large stretches.
A more precise statement is perhaps that 96dpi is much too low to use plain sampling theory on the original outlines to rasterize fonts. It does work. The results are readable. But the users will complain that the text is blurry, because while there’s enough pixels to convey the information on which letter they are looking at, there are not enough of them to make focusing on them comfortable and to reproduce the sharp edges that people are accustomed to. (IIRC the human visual system has literal edge detection machinery alongside everything else.)
And thus we have people demanding crisp text when the requisite crispness is literally beyond the (Nyquist) limit of the display system as far as reproducing arbitrary outlines. Sampling theory is still not wrong—we can’t do better than what it gives us. So instead we sacrifice the outlines and turn the font into something that is just barely enough like the original to pass surface muster. And in the process we acquire a heavy dependency on every detail of the font display pipeline, including which specific grid of samples it uses. That’s manual hinting.
Seriously, leaf through the Raster Tragedy (linked in GP) to see what the outlines look like by the time they reach the rasterizer. Or for a shorter read, check out the AGG documentation[1] to see what lengths (and what ugliness) Microsoft Word circa 2007 had to resort to to recover WYSIWYG from the lies the Windows font renderer fed it about font metrics.
As for seeing pixels—I don’t actually see pixels, but what I do start seeing on lower-res displays after working on a good high-res one for a week or so is... lines between the pixels, I guess? I start having the impression of looking at the image through a kind of gray grille. And getting acclimatized to (only!) seeing high-res displays (Apple’s original definition of Retina is a good reference) for several days really is necessary if you want to experience the difference for yourself.
> Patching ClearType is unfortunately not as straightforward as it should have been. In an ideal world, you just change the sampling kernel your rasterizer uses to match the subpixel layout (with perceptual corrections) and you’re done. In our world, it takes hackery of Lovecraftian levels of horrifying to display crisp text using a vector font
Can you explain why that is? Is it a bed Microsoft made or something more intrinsic to font rendering generally?
> whereas I am more concerned with how it has propagated the misery of 96 dpi and sustained inadequate displays for so long we’re still struggling to be rid of said displays and still dealing with the sequelae of said misery.
Well, Apple found a solution that works with web and print - and the font files are the same. What's the secret stopping Microsoft from going the Apple route, other than maybe backwards compatibility?
> What's the secret stopping Microsoft from going the Apple route, other than maybe backwards compatibility?
Better monitors (what they call Retina). On older pre-Retina screens, Apple's approach renders the outlines faithfully, which means the text is (a bit) blurry. This is the poison that has to be picked (there is no way around it): Microsoft prioritized legibility while Apple prioritized faithfulness.
Raster Tragedy should have been called the Raster Disaster. Mostly because I keep calling it that and looking for the wrong thing every single time I want to link to it.
They should just ditch ClearType and use grayscale AA like Acrobat used to have. PPI is high enough on modern displays that the reduction in resolution won't matter.
> PPI is high enough on modern displays that the reduction in resolution won't matter.
Have you looked at the desktop monitor market recently? There are still a lot of models that are not substantially higher PPI than what was normal 20 years ago. PCPartPicker currently shows 1333 monitors in-stock across the stores it tracks. Of those, only 216 have a vertical resolution of at least 2160 pixels (the height of a 4k display). Zero of those 4k monitors are smaller than 27", so none of them are getting close to 200 PPI.
On the low-PPI side of things, there are 255 models with a resolution of 2560x1440 and a diagonal size of at least 27". One standard size and resolution combination that was common over a decade ago still outnumbers the entirety of the high-PPI market segment.
If you look at the Steam Hardware Survey results, their statistics indicate an even worse situation, with over half of gaming users still stuck at 1920x1080.
If subpixel antialiasing made sense during the first decade after LCDs replaced CRT, then it still matters today.
Aside from Steam, consider one of the biggest markets for displaying text on the Windows OS - low-end office PCs. My entire company runs on Dell’s cheapest 24” 1080p monitors. I don’t expect that will change until Dell stops selling 1080p monitors.
> If you look at the Steam Hardware Survey results, their statistics indicate an even worse situation, with over half of gaming users still stuck at 1920x1080.
Are these the native resolution of the monitor or just what people play games at? I suspect the latter, because the most popular cards are more mid-level / entry-level cards. The 1650 is still at #4.
The Steam Hardware Survey samples the system when Steam is launched, not while a game is playing. For most users, Steam starts when they log in to the computer. I think the unfortunate reality is that a very large number of gamers are still using 1920x1080 as their everyday ordinary screen resolution for their primary display, though a few percent at least are probably on laptops small enough that 1920x1080 is somewhat reasonable.
Not all gamers have a computer entirely dedicated to that purpose. Even among those that do, it's not uncommon to also play games or run Steam on another machine.
I still have Steam installed on the laptop that was long ago replaced as my gaming computer but which is occasionally used for other purposes, because I have no particular reason to remove it.
It's actually probably reporting the software-configured resolution, not the hardware capability. The important distinction is whether it's a system-wide resolution setting or a game-specific setting that may not apply to browser contexts (except for the ones used by Steam itself).
What makes you think that it’s more likely reporting a software-configured resolution?
It is after all a hardware survey, and focused on what user hardware supports.
It's vastly simpler, and more useful, for Steam to detect the current resolution. Trying to detect the maximum supported resolution is non-trivial, especially when there are devices that will accept a 4k signal despite having fewer pixels.
Plenty of gaming monitors are native 1080p. Compared to a higher-res normal monitor at the same price, you usually get a higher refresh rate and better pixel response times. Or you used to, anyway—looks like that part of the spec sheet has been effectively exhausted in the past couple of years, and manufacturers looking to sell something as “gaming gear” are slowly moving on to other parts of it. As long as they’re raising the baseline for all of us, I’ve no beef with them.
They existed for a while, and as recently as a year or two ago there was a cheap LG 24" 4k that was only about $300. But I think the monitor market in general moved on to focus more on larger sizes, and "4k" became the new "HD" buzzword that meant most products weren't even going to try to go beyond that. So basically only Apple cared enough to go all the way to 5k for their 27" displays, and once everyone else was doing 4k 27" displays a 4k 24" display looked to the uninformed consumer like a strictly worse display.
The linked issue points out that grayscale AA has color fringing on some of the subpixel layouts. It's not obvious to me how one would fix it, though; it seems like a deficiency built into panels with weird subpixel layouts, and those layouts are a compromise chosen to achieve (fake?) higher PPI.
I have a 1440p 27" monitor. On my monitor, ClearType vs greyscale AA is the difference between acceptable text and stabbing your eyes out with a rusty spoon.
From what I can gather, 4k at 32", which is the typical size you get 4k panels at, is just ~30% more pixel-dense than 1440p at 27" (rough numbers below).
I have strong doubts that just 30% more density will somehow magically make grayscale AA acceptable.
If you know any good 27" 4k mixed-use (ie >= 144Hz, HDR) monitors I'm all ears.
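For the record, a rough check of that density comparison, assuming standard 16:9 panels:

    let ppi ~diag ~h_res =
      let width = diag *. 16.0 /. sqrt ((16.0 ** 2.) +. (9.0 ** 2.)) in
      h_res /. width

    let ppi_4k_32   = ppi ~diag:32.0 ~h_res:3840.0   (* ~138 ppi *)
    let ppi_1440_27 = ppi ~diag:27.0 ~h_res:2560.0   (* ~109 ppi *)
    let ratio = ppi_4k_32 /. ppi_1440_27             (* ~1.27, i.e. roughly 30% denser *)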
It's not that easy. With the stripe layouts, all you have to do is increase the horizontal or vertical resolution when rasterizing, then map that to subpixels. There are no current methodologies or algorithms to deal with triangular layouts, etc. And OLED subpixel layouts have been moving around yearly with both LG and Samsung. Those two even have RGB stripe layouts forecast for the future.
Also, this isn't true? The Blur Busters founder (Mark Rejhon) has worked a lot on this exact issue and has already defined shaders and approaches for text rendering on arbitrary subpixel geometries in the PowerToys repos (no thanks to Microsoft).
His approach builds on FreeType's Harmony LCD subpixel rendering, which has supported non-striped layouts for over 6 years.
We're currently blocked by Microsoft, who continue to ignore everyone on this issue despite Mark's best efforts. Core Windows shaders need to be modified, and he can't really proceed without their cooperation, short of injecting a security risk for anyone who uses his solution.
LG was RWBG, but newer panels use RGWB, which works better with subpixel rendering.
I wasn't aware of the FreeType Harmony approach, but it looks like there are some problems, like no FIR filtering. The rapidly changing subpixel arrangements would also be difficult to accommodate. They'd have to have a new DDC command or something to poll the panel's subpixel matrix. I imagine by the time they got that through the standards bodies, RGB OLED would be ready.
Once upon a time I owned a LCD monitor with diagonal subpixels[1], and subpixel antialiasing absolutely didn’t work on that either. It’s just that it was very niche and I’m not sure if there were even any follow-up products that used the same arrangement.
> There are no current methodologies or algorithms to deal with triangular layouts, etc.
I believe there are rasterization algorithms that can sample the ideal infinite-resolution picture according to any sampling kernel (i.e. shape or distribution of light) you desire. They may not be cheap, but then computer graphics is to a great extent the discipline of finding acceptable levels of cheating in situations like this. So this is definitely solvable. Incompatibility with manual hinting tuned to one specific sampling grid and rasterization algorithm is the greater problem.
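To illustrate why this is solvable in principle, here is a minimal OCaml sketch (not any shipping renderer's algorithm) of sampling an arbitrary subpixel layout: supersample glyph coverage on a fine grid, then average it under a box kernel centred on each subpixel's physical position. The coverage buffer, the sampling rate and the box kernel are all assumptions; a real renderer would use a perceptually tuned kernel and much cheaper approximations.

    (* coverage : float array array with values in [0,1], at ss samples per pixel *)
    let sample_subpixel ~coverage ~ss ~radius (cx, cy) =
      let h = Array.length coverage and w = Array.length coverage.(0) in
      let r  = int_of_float (radius *. float ss) in
      let x0 = int_of_float (cx *. float ss) in
      let y0 = int_of_float (cy *. float ss) in
      let sum = ref 0. and n = ref 0 in
      for y = y0 - r to y0 + r do
        for x = x0 - r to x0 + r do
          if y >= 0 && y < h && x >= 0 && x < w then begin
            sum := !sum +. coverage.(y).(x);
            incr n
          end
        done
      done;
      if !n = 0 then 0. else !sum /. float !n

    (* A layout is just a set of subpixel centres in pixel units; e.g. for an
       RGB stripe the centres sit at x offsets of about 1/6, 1/2 and 5/6.
       A triangular or pentile layout only changes these coordinates. *)
    let rgb_stripe_pixel ~coverage ~ss (px, py) =
      let at dx =
        sample_subpixel ~coverage ~ss ~radius:0.5
          (float px +. dx, float py +. 0.5)
      in
      (at (1. /. 6.), at 0.5, at (5. /. 6.))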
The built-in ClearType Text Tuner [^1] fixed most of my issues with OLED rendering on Windows. However, some applications do their own subpixel-AA. Even on macOS, where they supposedly removed all subpixel-AA, I still see the occasional shimmering edge.
> OLED screens are the superior display technology
With all the hassle of apparently keeping my OLED from burning in [^2], I'd disagree. Apple's displays achieve the same contrast levels with backlight dimming. The only cost is a slight halo around tiny bright spots. It's only really noticeable when your white cursor is on a pitch-black screen.
[^2]: The edges of the picture are cut off because of pixel orbiting. I have to take a 5-min break every 4 hours for pixel refresh. I have to hide the menubar and other permanently visible UI-elements.
I don't think it's a big issue on Microsoft's radar because a lot of OLED screens have a high enough pixel density that the problem isn't really noticeable. The subpixel arrangements themselves have also improved in recent years, further mitigating the issue.
I have a 27" 1440p 3rd gen QD-OLED panel and while I can make out some fringing if I pay real close attention to black-on-white text, it's not noticeable in general usage. The 4k panels have such a high DPI that I can't see the fringing at all without a magnifying glass.
If you know that your monitor is a 3rd-generation QD-OLED panel, then you probably know that text rendering was the main complaint about earlier generations of QD-OLED, and there are probably still more of those in the wild than ones as recent as yours.
This is also a very valid point. The improvement in text clarity on this panel was one of the key reasons why I decided to pull the trigger on this $1,000 monitor, while I had passed on previous models.
On OLEDs, high levels of ambient light hitting the monitor tends to wash out blacks, making them appear dark gray, thereby subverting one of the most clear-cut advantages of OLED.
I don't see this on any of my phones or wearables. I'm aware QD-OLED in particular has this weakness, but haven't heard of or experienced any other OLEDs having this issue.
I wonder if it's actually due to the quantum dots, or if it's more broadly a thing for large OLED panels. I haven't spent any significant time using an OLED TV, and I think most of the OLED computer monitors I've seen in person were QD-OLED.
The story I'm aware of is that in the QD-OLED display stack it was not possible to put a polarizer layer in, which is what causes its telltale weakness in ambient light rejection.
So you'd also not see this on other types of displays with a quantum enhancement film (i.e. FALD MiniLED + quantum dots), it's specifically QD-OLED that has this weakness.
This is to the extent that if the self-emissive quantum dot demo from this year's CES was real, even that won't have this issue (although it will likely still have the stupid triangular subpixel geometry like QD-OLED, as the demo unit also had that).
I don't know the mechanism behind it, but I've done side by side comparisons and it's clear to the untrained eye that the impressive contrast ratio of OLEDs is easily weakened by ambient lighting conditions.
It's about the use of interaction nets, which gives an optimal evaluation strategy for the lambda calculus. I'm not an expert on it, but from my understanding it allows extensive sharing of computation across different instances of an enumerative search.
Parallelism of the computation is another big selling point, except modern hardware design is not well suited for the calculus. The author of the video recently tried to get the system to work well on GPUs and ran into issues with thread divergence. I think their current plan is to build some sort of cluster of Mac Minis due to the good performance of the CPUs on that platform.
If this computation paradigm advances far enough and shows enough promise, I would expect to see companies start to prototype processors tailor made for interaction nets.
I've worked at military contractor jobs for years, and there are many people there who believe this - if the taxpayer is paying and the software isn't classified, then it should be open source.
Ghidra is a great example of this, and having this software be free has been of great benefit to the security community.
What about using TPM modules? I've been researching these modules lately, primarily for use in online video games. From my understanding, you can use TPMs to effectively ban players (TPM ban) based on their hardware. This would mean every time an account is banned, the bad actor would have to switch to a different TPM. Since a TPM costs real money, this places a limit on the scalability of a bad actor.
Cool, if you can require them for every possible interaction on a platform. But even that violates privacy if you have one universal value that ties it all together (the identifier of the specific TPM).
It's just the phone number/email issue but tied to hardware. If you think these things won't leak and allow bad actors to tie your accounts across services then I have some lovely real estate in Florida you may be interested in.
It also appears that resetting an fTPM works around this, since it fully resets the TPM. Even if it didn't, people buying used CPUs could find that they're banned from games they've never even played or installed on their system before.
> It also appears that resetting an fTPM works around this, since it fully resets the TPM. Even if it didn't, people buying used CPUs could find that they're banned from games they've never even played or installed on their system before.
It depends on how the TPM utilization was applied in practice. The initial manufacturer key (Endorsement Key) is hardcoded and unextractable. All the long-lived keys are derived from it and can be verified using the public part of the EK. Usually the EK (or a cert created from it) is used directly for remote attestation.
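As a deliberately simplified OCaml sketch of how a ban keyed on that identity could look (the attestation step that proves the client actually controls the EK, and how its public blob reaches the backend, are assumed and omitted):

    (* Digest is OCaml's stdlib MD5, used here only as a stand-in for a
       proper hash such as SHA-256 over the EK public blob. *)
    let hw_id (ek_public : string) : string =
      Digest.to_hex (Digest.string ek_public)

    let banned : (string, unit) Hashtbl.t = Hashtbl.create 64

    let ban ek_public = Hashtbl.replace banned (hw_id ek_public) ()
    let is_banned ek_public = Hashtbl.mem banned (hw_id ek_public)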
> What about using TPM modules? I've been researching these modules lately, primarily for use in online video games. From my understanding, you can use TPMs to effectively ban players (TPM ban) based on their hardware. This would mean every time an account is banned, the bad actor would have to switch to a different TPM. Since a TPM costs real money, this places a limit on the scalability of a bad actor.
It is even worse for privacy than a phone number. You can never change it, and you can be linked across different services, soon automatically if Google goes forward with its plans.
Functions that use a field called x but do not use a field called y can use the type {x=int, ... 'a}, right?
The main difficulty I see with row polymorphism is with field shadowing. For example, if you have a record with type {a=bool, x=int, c=unit} and then set the x field to a value of type string instead, the new type should be {a=bool, x=string, c=unit}.
I suppose if you only have syntax for creating a record with a literal, but do not have syntax for updating an existing record this is not a problem.
I don't exactly understand your concern, but yes the type {x=int, ... 'a} is valid in a language with row polymorphism but without subtyping. If you do have subtyping, dealing with rest (or spread) is unnecessary. But if you remove subtyping, the unification algorithm isn't powerful enough on its own for many intuitive use cases. The easiest example is if a function takes a list of records all of which need an x field of type int, then you cannot pass it a list of records where all contain the x field of int but some also contain an irrelevant y field and others contain an irrelevant z field.
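Since OCaml's object types are literally row-polymorphic (the ".." below is the row variable), here is a small concrete version of that failure, plus the explicit-coercion escape hatch that corresponds to the subtyping point made further down the thread; it's an analogy for the record types under discussion, not the same system:

    (* needs only an x field; the row ".." is inferred *)
    let sum_xs rows = List.fold_left (fun acc r -> acc + r#x) 0 rows
    (* val sum_xs : < x : int; .. > list -> int *)

    let a = object method x = 1 method y = true end   (* extra y field *)
    let b = object method x = 2 method z = "hi" end   (* extra z field *)

    (* Rejected: all list elements must unify, so their rows (the extra
       fields) would have to be identical.
       let _ = sum_xs [a; b]
    *)

    (* Accepted: explicit coercion (OCaml's opt-in subtyping) forgets the
       extra fields before the elements ever meet in one list. *)
    let total = sum_xs [ (a :> < x : int >); (b :> < x : int >) ]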
> you cannot pass it a list of records where all contain the x field of int but some also contain an irrelevant y field and others contain an irrelevant z field.
Yes you can - that's just an existential type. I'm not sure what the syntax would be, but it could be something like:
List (exists a : {x=int, ...'a})
(In practice (ie if your language doesn't support existential types) you might need to jump through hoops like:
List ((forall a : {x=int, ...'a} -> b) -> b)
or whatever the language-appropriate equivalent is, but in that case your list will have been created with the same hoops, so it's a minor annoyance rather than a serious problem.)
> if a function takes a list of records all of which need an x field of type int, then you cannot pass it a list of records where all contain the x field of int but some also contain an irrelevant y field and others contain an irrelevant z field.
Can you explain that a little more? Intuitively I would imagine that those y and z fields would 'disappear' into the rest part.
With subtyping, the type checker would understand that the list type is covariant and would accept a record with more irrelevant fields, because that's a valid subtype.
Without subtyping, the rest part needs to be identical for each element of the list. In fact you cannot even express the concept of a list with different rest parts. The key thing to understand is that the rest part never really disappears. The type checker always deduces what the rest part should be in every case. In languages like Haskell you can work around this by using existential quantification but that's a whole different extension to the type system, and one that's certainly not as flexible as full subtyping.
This stuff happens in Computer Science too. Back around 2018 or so I was working on a problem that required graph matching (a relaxed/fuzzy version of the graph isomorphism problem) and was trying algorithms from many different papers.
Many of the algorithms I tried to implement didn't work at all, despite considerable effort to get them to behave. In one particularly egregious (and highly cited) example, the algorithm in the paper differed from the provided code on GitHub. I emailed the authors trying to figure out what was going wrong, and they tried to get funding from me for support.
My manager wanted me to write a literature review paper which skewered all of these bad papers, but I refused since I thought it would hurt my career. Ironically, the algorithm that ended up working the best was from one of the more unknown papers, with few citations.
Beautiful. And thanks for the testimony. Ironically, this may have helped your product or research: yes, you spent more time on the BS, but in the end you found and used an algorithm that was both better and more obscure, while your competitors struggled with worse ones. Messed-up incentives again.
I'm currently developing an online game and have been looking into anticheat measures. From what I've seen, kernel level anticheats basically keep the honest people honest, and any determined cheater can work around them. I do wish Windows had some better way to ensure things are tamper proof for online games.
I actually think it's a good idea to charge at least a small amount of money for online games, so that when cheaters are banned they will have to pay real money to remake their accounts.
In a previous game I worked on, there were freely available and easy to use cheats. This completely ruined the online experience. At one point I wrote a small script to detect an irregularity used only by the cheaters, and the number of people who were banned was astounding. Of course the cheaters patched their cheat within a week and we were back to square one.
The only issue with charging for the game is that you will then lose out on player base; some of the most popular competitive games get that way because kids and teens without an income can play them regardless. It's inclusive of everyone, especially when the developers focus on ensuring their game can run on as many devices as possible, even low-spec ones. This is incredibly important for a game's community, even if it does make cheating slightly easier.
https://www.juniper-lang.org/