Hacker News
Thoughtworks Technology Radar Oct 2024 – From Coding Assistance to AI Evolution (infoq.com)
42 points by NomDePlum 5 months ago | 25 comments



I appreciate the effort that goes into organising the tech radars each time. But whether it's done by expert group or review panel, I've come to believe that (blindly) following 'best practices' or 'industry standards' based on prospective adoption is moot.

Tools, languages and technology take their place after a significant share of people have adopted them, not before. Trying to steer the industry towards certain tools and away from others ignores the irrational choices that underlie those adoptions, and how resistant we can sometimes be to purported technical fixes.

We are creatures of habit, and will break those habits on occasion, but ideally not out of an instilled fear of missing out: the tools need to be proven in practice, not on promise.


> Tools, languages and technology take their place after a significant share of people have adopted them, not before. Trying to steer the industry towards certain tools and away from others ignores the irrational choices that underlie those adoptions, and how resistant we can sometimes be to purported technical fixes.

Your perspective is biased towards one point on that curve. The technology adoption bell curve (pioneers, early adopters, early majority, late majority, laggards) is a more instructive lens. Our propensity to assume risk is commensurate with both the size of the problem we're trying to solve and the lack of lower-risk options.

Pioneers aren't paying attention to tech radars, because brand-new projects find their earliest users directly, not via tech radars. The audience for tech radars is early adopters and the early majority. Both look for validation from the rest of the market (i.e. they avoid projects rated "hold"), the difference being that early adopters are more risk-friendly (willing to try "assess") while the early majority is less risk-friendly (looking for the "trial" recommendation).

At no point is a tech radar trying to get you to ignore your own evaluation cycle, or to push "adopt" on projects that are not right for you. It's a relatively organic marketing funnel: it helps projects with a minimum of proven use expand their audience, connects the wider market to potential solutions, and in exchange builds appreciation for ThoughtWorks, feeding its primary consulting business.


To be fair, I don't think the idea here is to follow this advice without doing some proper research. The radar helps me to find interesting developments, and to sift through the enormous amount of garbage that is out there.


I appreciate that, and a good summary now and then doesn't hurt. But, for whatever reason, some recommendations come to be regarded as leading (and by SEO measures, this one is leading quite a bit).

My gripe is that it cements a certain way of thinking about how best to organise our stacks, to the degree that other companies follow suit by copying the 'radar' template without questioning its organising logic (why, for instance, languages and frameworks share a single category).

That's not to say we cannot categorise tools and techniques or take inventory of new ones, but that we should not take the opinion of one entity as reflecting the industry as a whole.


This flocking behaviour is indeed an annoying trait of most humans.

As a nice counter example in this context, Zalando has tuned the quadrant categories to their specific way of working. Their categories are: Datastores, Data Management, Infrastructure, and Languages.

https://opensource.zalando.com/tech-radar/


Thoughtworks makes these radars every year. They make sure overhyped and expensive-to-build trends are portrayed as the new hotness, only to mark them as “avoid” a few years later. I wonder why!


> I wonder why!

It almost feels like you're using irony to imply a connection between selling software consulting, and pumping-and-dumping tech trends. But I'm sure that's not your intention.


I'd never do that!


Is there any quantification of this available?


I agree. It is nice to see what others are using though (I hadn't heard of a lot of stuff they list), and see if there's anything that might be useful to investigate.


Word-of-mouth is key here, I think, rather than one entity positioning itself as a single source of truth. The unfortunate side-effect of these and similar tech retrospectives is that they deepen a certain top-down view of technology: that new innovations fit neatly into pre-established categories of accept/reject, whereas reality is far less clear-cut and far less organisable into neat quadrants.

To me, an immediate improvement would be to display user issues and workarounds alongside the newly reported technologies. Or why someone tried a tool and then left it on the shelf. Possibly in the context of a multi-team project.

A full write-up isn't needed here, but some practical user stories and anecdata alongside the current summary/ranking on the adoption radar would go a long way.


You're missing the purpose. This is for consulting companies: what to push, what to sell, which conferences to organise, which books to sponsor, what pitch to give new customers, what to upsell, ...


I know the purpose, that does not exempt it from critique.


Agreed, which is why, unless I am told to work on such matters, I usually wait for the dust to settle; often that means not having to bother at all, as you mention.


I love this initiative. We thought it would be a great idea to have a radar specifically for our own internal projects. Turns out that many companies are doing this already!

ThoughtWorks even provides open-source tooling [1] to help you set this up.

For our second iteration we rewrote the entire radar frontend to better suit our needs. Using ChatGPT, Cursor, and Copilot, this was a breeze.

[1] https://www.thoughtworks.com/radar/byor
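For anyone curious what the BYOR tooling consumes: it ingests a spreadsheet/CSV of radar entries. A minimal sketch in Python of generating one, assuming the commonly documented column set (name, ring, quadrant, isNew, description); the entries themselves are made up for illustration:

```python
import csv
import io

# Columns assumed from the BYOR docs: name, ring, quadrant, isNew, description.
# These example blips are invented, not from any real radar.
ENTRIES = [
    {"name": "PostgreSQL", "ring": "Adopt", "quadrant": "Datastores",
     "isNew": "FALSE", "description": "Proven default for relational workloads."},
    {"name": "SomeShinyDB", "ring": "Assess", "quadrant": "Datastores",
     "isNew": "TRUE", "description": "Worth a spike; not yet production-proven."},
]

def radar_csv(entries):
    """Render radar entries as a CSV of the shape the BYOR tool ingests."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["name", "ring", "quadrant", "isNew", "description"]
    )
    writer.writeheader()
    writer.writerows(entries)
    return buf.getvalue()

print(radar_csv(ENTRIES))
```

Point the BYOR app at the resulting sheet/CSV URL and it renders the quadrants for you.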


Am I the only one who doesn't find this useful at all?

It is very much the opposite of keep-it-simple and use-what-is-proven.


> Am I the only one who doesn't find this useful at all?

I was going to say you're not alone, but for the purposes of identifying "what-is-proven" you can use the report to discount anything not labeled "adopt" as a starter.
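That heuristic is easy to mechanise. A toy sketch in Python, with a made-up radar snapshot (none of these ratings come from the actual report):

```python
# Hypothetical radar snapshot: (technology, ring) pairs, invented for illustration.
BLIPS = [
    ("Terraform", "Adopt"),
    ("GraphQL for data products", "Trial"),
    ("SomeShinyDB", "Assess"),
    ("LegacyTool", "Hold"),
]

def proven_only(blips):
    """Apply the heuristic above: discount anything not labeled 'Adopt'."""
    return [name for name, ring in blips if ring == "Adopt"]

print(proven_only(BLIPS))  # -> ['Terraform']
```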




Ah, their fabled rear-view mirror again.

A nicely-presented summary of various tech applications but I’m not sure we need a lighthouse anymore.


Fun fact: lighthouses were repurposed into coast guard rescue stations, at least on the Great Lakes.


That seems like an apt metaphor for TW, if the guards were the ones who got you into the mess in the first place, of course.


It is actually interesting how they miss most of these, even when "forecasting" after the fact, by giving too much importance to things that later flop.

GraphQL for data products ...


Reading a little deeper, they seem to see GraphQL's ability to aggregate data products and their schemas as useful support in LLM applications. The key term is “data products”: they don't seem to be hyping it as the magical enterprise cross-system federated API gateway, but rather as a structured, LLM-ingestible DSL for data understanding and live retrieval across many data sources. Which makes sense IMO.
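A minimal sketch of that pattern, schema-as-context for an LLM, in Python; the data-product schema and prompt wording are entirely made up for illustration, not taken from the radar:

```python
# Hypothetical data-product contract in GraphQL SDL: a machine-readable
# schema an LLM can ingest to decide what it is able to query.
ORDERS_SDL = """
type Order {
  id: ID!
  customerId: ID!
  totalCents: Int!
  placedAt: String!
}
type Query {
  orders(since: String!): [Order!]!
}
"""

def retrieval_prompt(sdl, question):
    """Pack a data product's schema plus a user question into one prompt,
    asking the model to answer with a GraphQL query against that schema."""
    return (
        "You can query this data product via GraphQL. Schema:\n"
        f"{sdl}\n"
        f"Write a GraphQL query answering: {question}"
    )

prompt = retrieval_prompt(ORDERS_SDL, "total order value since 2024-10-01")
print(prompt)
```

The returned query can then be executed against the data product and the rows fed back to the model, which is the "live retrieval" half of the pattern.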


Yes, feedback loops? What feedback loops!



