
nothing like Russian, South Asian maybe


oh great, they are not using a confusing name anymore... wait, now they are using a stupid name!


Phoronix regularly tests this; search for the keyword "mitigations":

https://www.phoronix.com/review/amd-zen5-mitigations-off


I ran mitigations=off on my Zen 4 until I saw that Phoronix article and realised that in most workloads it made essentially no difference, and in others it even harmed performance. I no longer run mitigations=off. But on my old i7-7700HQ, mitigations=off improved performance by 10-20% depending on workload.
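
If you want to see what your own kernel currently has enabled before deciding, the status is exposed via sysfs. A minimal sketch for Linux (the sysfs path is standard on mainline kernels):

```
# Print the status of each CPU vulnerability mitigation on Linux.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for f in sorted(vuln_dir.iterdir()):
    print(f"{f.name}: {f.read_text().strip()}")
```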


Could it be quantified how much the UK relies on other countries' coal power?

Since industry has moved abroad, the products we consume use the producer country's power, mostly China's. Is there a correlation between the reduction in local coal power and the amount of energy-intensive products imported?


It's all connected, of course.

It's much easier to just look at global consumption of coal. Peak coal usage was in 2022 (a brief spike caused by the Russian invasion of Ukraine). With the exception of China and India, coal usage has declined pretty much everywhere. And in many western countries, like the UK and US, it is being phased out rapidly, mostly for economic reasons. It's just no longer cost-competitive.

China is still building coal plants, but their usage seems to have peaked as well, or to be close to it, as they have aggressively accelerated deployment of wind and solar and are of course responsible for producing most of that growth. There's also a sense of urgency there, because coal-related pollution was making their cities unlivable; this is similar to what happened in the UK mid last century. And they are pursuing some aggressive short-term goals to reduce dependence on coal.


On the flip side, is it really fair for a nation to claim its emissions aren't its responsibility because they were due to the production of exported goods? Would we accept that line from Germany if it decided to keep a coal plant open? "Oh, that's not our CO2, it's all going to make cars for China. It's their CO2." A little absurd when Germany gets rich off the sale: all the profit, none of the responsibility. If China wants to make our stuff, I think it's only fair that they are responsible for the pollution caused by the production.

If anything, this "exported CO2" narrative muddies the water and makes it harder to hold nations to account. A basic "emissions within your borders are your responsibility to handle and reduce" is easy to understand, hard to game, and avoids all of this devolving into CO2 accounting tricks.


The UK's CO2 is about 20% higher when accounting for imports:

https://ourworldindata.org/co2/country/united-kingdom#consum...

But both are still trending down.


That is a good point. Since the 1970s, Europe has actually cleaned up the continent pretty well by kicking out a lot of polluting industry.

(The latest target of environmentalists in the Netherlands was data centers. They used too much power, water, whatever. So they went to the deserts of Spain. Epic win.)


I WISH it would prefetch video in the background while showing ads.

But no, it always goes to spinny-wheel buffering after the ad ends. Oh, and that's after some spinny wheel to load the ad in the first place, ffs.


~100 MB/s write speeds?

What use case needs an SD card that takes 40000 seconds to fill?
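
(For anyone checking my arithmetic: assuming a ~4 TB card, which is my guess and not stated in the article, filling it end to end at that write speed gives the figure above.)

```
# Rough arithmetic; the 4 TB capacity is my assumption, not from the article.
capacity_bytes = 4e12            # 4 TB
write_speed = 100e6              # ~100 MB/s
seconds = capacity_bytes / write_speed
print(seconds, seconds / 3600)   # 40000.0 s, ~11 hours
```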


> The new 4K 32" 3rd gen QD-OLED panels seem to be the closest to perfection, but I'm not sure if they're suitable for long sessions of programming, web browsing and reading documentation (i.e. displaying static UI elements for hours) due to burn-in and temporary image retention.

Wouldn't worry about burn-in. 3rd gen should be more durable and comes equipped with all kinds of burn-in mitigation tech. And a 2-3 year burn-in warranty.

My concern about OLED used to be the weird pixel layout, which is not great for text. But on 4K QD-OLED the pixels are smaller, so text fringing is not noticeable. And the LG WOLED panels coming this year should have no problem at all.


I accept that the organic material in the pixels has a limited lifespan and that the brightness of each pixel will eventually degrade. That's just a limitation of the technology.

But I'd like an OLED monitor to somehow mask/compensate for this degradation by e.g. adjusting the voltage/brightness of individual pixels according to their cumulative wear so that it's invisible to the user. That way, I would observe no signs of burn-in at, say, 30% brightness, but after years of cumulative usage, the monitor would get less and less bright (i.e. the wear would appear uniform).
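
Something like this, as a rough sketch of the bookkeeping I have in mind (names and numbers are hypothetical; a real panel would do this in the timing controller, not in software):

```
import numpy as np

# Hypothetical per-pixel wear compensation.
# `wear` accumulates how hard each pixel has been driven; drive levels are
# boosted to hide the differences, so degradation only shows up as a
# uniform loss of peak brightness.
H, W = 2160, 3840
wear = np.zeros((H, W))      # cumulative drive per pixel (arbitrary units)
decay_per_unit = 1e-7        # hypothetical fractional luminance lost per unit of wear

def compensated_drive(target, wear):
    """Drive level needed for every pixel to still hit `target` luminance."""
    remaining = 1.0 - decay_per_unit * wear   # fraction of original luminance left
    return np.clip(target / remaining, 0.0, 1.0)

def emit_frame(frame_luminance):
    """Display one frame and account for the wear it causes."""
    global wear
    drive = compensated_drive(frame_luminance, wear)
    wear += drive                             # brighter pixels age faster
    return drive
```

The catch is the clip: once the most-worn pixel needs more than 100% drive, uniformity can only be preserved by lowering peak brightness for the whole panel, which is exactly the trade-off I'd happily accept.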

What I'm primarily concerned about is temporary image retention: the outline of a white PDF document that's been open for hours remaining visible after switching to a dark IDE. I'm not sure whether the 3rd gen QD-OLED or WOLED panels are resistant to that kind of image retention.


Not an issue if they have a means of propulsion capable of delivering 1g the whole way

Which is necessary to go any meaningful distance in space anyway
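
Back-of-the-envelope for what constant 1g buys you, assuming non-relativistic flip-and-burn (accelerate to the midpoint, decelerate the rest) and ignoring orbital mechanics:

```
import math

# Flip-and-burn at constant acceleration: t = 2 * sqrt(d / a).
g = 9.81          # m/s^2
au = 1.496e11     # metres

def travel_days(distance_m, a=g):
    return 2 * math.sqrt(distance_m / a) / 86400

print(travel_days(0.5 * au))   # Mars at closest approach: ~2 days
print(travel_days(30 * au))    # Neptune: ~16 days
```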


ahh, I wish they included a speed comparison to numpy.average

I know, that's not the point, and average was only picked as a simple example, but still...


Agree. Using a normal Python for loop on a NumPy array to do simple math is just pure nonsense.

Just tested how it would do without the compilation nonsense:

```
import numpy as np

a = np.random.random(int(1e6))

%timeit np.average(a)        # contiguous
%timeit np.average(a[::16])  # strided view, every 16th element
```

And my result is that no matter how non-contiguous in memory (here I take every 16th element like they did, and I also tested strides of 2, 4, 8 and 16), we are doing fewer operations, so it always ends up faster. In contrast, their SIMD-compiled code is 10-20X slower in the non-contiguous case.

And for a larger array that is 16X the size of the contiguous one, where we take only 1/16 of its elements, the result is about 10X slower, as shown by the article. But I suspect that's purely because you now have a 16X larger array to load from memory, which is slow in itself.

```
b = np.random.random(int(16e6))

%timeit np.average(b[::16])  # same element count as a[::16], 16X the memory behind it
```
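
My rough accounting for why it's ~10X rather than 16X less work: with a stride of 16 float64s (128 bytes), each element you touch sits in its own cache line, so the memory system fetches a full line for every 8 bytes you actually use (assuming the usual 64-byte cache lines):

```
# Rough memory-traffic estimate; assumes 64-byte cache lines.
element = 8                            # bytes per float64
line = 64                              # bytes per cache line
stride = 16                            # elements between reads

fetched = min(line, stride * element)  # bytes pulled in per element used
print(fetched / element)               # 8X more traffic than a contiguous read
```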

Which leads me to conclude that people should use NumPy the right way. It is really hard to beat pure NumPy speed.


But that's precisely what makes this a good exercise: you can see how far you're able to close the gap between the naive looping implementation and the optimized array implementation.


> np.average

But that's not the function in the article. The article implements `(a + b) / 2`.

And, on my system, a simple `return (arr1 + arr2) / 2` takes 1.2ms, while `average_arrays_4` takes 0.74ms.
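
For reference, the usual way to shave a bit more off the pure-NumPy version is to avoid the temporaries that `(arr1 + arr2) / 2` allocates. A sketch with a preallocated output buffer (`buf` and the function name are mine, not the article's):

```
import numpy as np

def average_two(arr1, arr2, buf):
    """(arr1 + arr2) / 2 without allocating temporaries."""
    np.add(arr1, arr2, out=buf)   # buf = arr1 + arr2
    buf *= 0.5                    # halve in place
    return buf

a = np.random.random(1_000_000)
b = np.random.random(1_000_000)
buf = np.empty_like(a)
average_two(a, b, buf)
```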


A few years ago I tried to beat the C/C++ compiler on speed with manual SIMD instructions vs pure C/C++. It didn't work out…

I can only imagine that this is already baked into NumPy now.


You usually have to unroll your loops for it to help (unless compilers have gotten smarter about data dependencies)
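
For the curious, the shape of that transformation, sketched in Python purely to show the structure (the actual win only appears when a C/C++ compiler can map the independent accumulators onto SIMD lanes; in Python this buys you nothing):

```
def unrolled_sum(xs):
    # Four independent accumulators break the serial dependency chain that a
    # single running total creates; that independence is what lets a compiler
    # keep several SIMD lanes busy.
    s0 = s1 = s2 = s3 = 0.0
    n = len(xs) - len(xs) % 4
    for i in range(0, n, 4):
        s0 += xs[i]
        s1 += xs[i + 1]
        s2 += xs[i + 2]
        s3 += xs[i + 3]
    return s0 + s1 + s2 + s3 + sum(xs[n:])
```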


Definite déjà vu from Optane.

Capacity and price killed it, and there's no word about either in this article.

