Any LLM would outperform the DBA we had in our first start-up. This guy was a buddy of the co-founder and was parachuted into the top-paying IC role out of about 25 developers.
His "achievements" included dropping the production database by messing with the backup processes, insisting on very wide tables rather than smaller tables with joins since "they're easier to maintain", and giving out full admin rights to junior devs who subsequently went on to develop their own "black data mart" which was way way better than the production version.
Sigh, and to make things worse, this DBA guy managed to leverage his "experience" into a Principal Data Architect role at a major consulting company after the start-up predictably imploded.
I've worked with 1000s of DBAs in my 35-year career (I get into companies to 'fix things', so I see a lot of this). I don't think your case is unique at all. And it seems to get worse.
I keep telling people that this is why AI will turn the industry on its head very soon. It doesn't need to be better than a good developer or designer. Because if you look at the landscape in tech, the vast majority of people simply are not that good. People who are actually objectively good are few and far between and usually very expensive.
If you have an average company that needs to settle for something that works instead of something that is great, you could already replace a ton of jobs with AI. When the C-suite realises this, below average devs are doomed. In the coming years we will probably see the first companies taking off primarily employing AI as coders - long before AIs can beat elite programmers.
Conversations around AI are almost always unclear and poorly framed, driven by a profit-hungry hype train.
We're still talking about AI as though the goal was never actually to have artificial intelligence. LLMs are impressive for what they are, but they definitely aren't intelligent. OpenTextPrediction just doesn't have a nice ring to it and definitely wouldn't be valued at billions of dollars.
Whenever I see somebody type super intelligence or AGI, I remember that AI was supposed to be solved by a summer project at Dartmouth in 1956 and I chuckle a bit.
Well that's a great question, and an even more basic complaint I have with the AI research space. They have yet to bother coming up with a clear way to define or recognize intelligence or consciousness.
Those are interesting starting points. I don't know if I'd say it's that simple, but the direction seems totally reasonable.
The ability to solve problems is a particularly interesting one. To me there's a difference between brute forcing or pattern recognition and truly solving a problem (I don't have a great definition for that!). If that's the case, how do we really recognize which one an LLM or potential AI is doing?
It'd be a huge help if AI researchers put more focus on the interpretability problem before developing systems from which intelligence could plausibly emerge.
I'm a terrible developer; most people here wouldn't even consider me one. I have thought about this a lot.
Let's follow this all the way to the bottom: all coding employees are replaced by one very senior one who is AI-assisted.
One thought: how are juniors turned into seniors? Let's say that we solve that with some yet-to-be-invented educational solution, and then companies that aren't code-heavy would hire them for much less money, or something like that.
The senior developer always keeps his job, because we can't have non-technical people deploying LLM code yet. Then maybe that becomes solved, so your non-technical CTO can deploy code.
This then creates an environment where fuck-ups are on the CTO for being non-technical. The blame aspect is the reason for this; it's political.
Or it creates an environment where infrastructure and software become a solved problem altogether.
Then we start considering whether AI can replace all engineering altogether in many fields. All of commercial writing and commercial creative arts are mostly taken over. Occasionally a brilliant human example moves things in a different direction, but this is quickly fed to the incumbent AIs and then it becomes commercialised.
What happens now? Everyone moves into hardware work?
I think before people would actually be "replaced", the productivity boost might actually cause more work: all of a sudden development costs go down, which means that new things that were previously too expensive (and there are tons of them) suddenly become low-hanging fruit. I think it's hard to predict what happens after that, though.
> One thought: how are juniors turned into seniors? Let's say that we solve that with some yet-to-be-invented educational solution, and then companies that aren't code-heavy would hire them for much less money, or something like that.
In theory there are already many occupations, like medicine, where you have to study for years before you can do actual work, but coding-wise it will still be easier, since people who do it as a hobby will become good enough on their own.
If you define elite programmers in the context of actual coding as those who excel at implementing ideas and solutions, I could imagine that this skill might become less relevant with the advent of AI. Smashing out over 1000 lines of Haskell would then be the equivalent of being able to calculate complex numbers in your head.
However, if you define elite programmers as those who possess good ___domain knowledge, communication, management, and soft skills, then yes, they might become so productive that they could replace developers whose main skill is writing code as we move up a level of abstraction. While it might help today to have a certain level of understanding of Assembly and C, we do not need to be elite at it to be a good software engineer.
I am asking because I've met a few devs who are electrical engineers with a very good understanding of how a computer actually works but who now earn more with React and Python.
The overlap between both groups is quite large in my personal experience. The mythical code wizard with no ___domain knowledge or soft skills exists, but they are very rare. Most people who are really good at coding are also good at picking up ___domain knowledge.
Yeah, I would say that elite programmers are the ones who are able to create the most value with the tools they have, so they would likely be the ones who get the most out of AI as well, since they know how to make it do what they want and can tell whether the result is any good or not.
and i’ve worked with 1000s of sysadmins, java|python|php|go|javascript developers, network admins, security admins who could be replaced by bash scripts, llms, or even mid-level dbas.
I assume you exaggerate the numbers, but to your point: yes, but these people in my experience are best replaced with nothing at all, since they do more harm than good. Sure, you could replace them with LLMs, but what is the point of fucking up your codebase faster for less money?
This is my main fear about AI: AI being used to do stupid things more efficiently.
i was replying to a comment that said they’d worked with “thousands of dba’s (sic)” who were superfluous, so as a rhetorical device, I used their own phrase to make my own point - that being a dba requires a combination of generalized expertise and specialized ___domain knowledge. it’s honestly frustrating how much negative publicity a DBA gets in general for being cautious and wary, since they have to keep an eye on the big picture in addition to delivering the specific objective. I have hundreds of examples where I’ve had to teach security admins how to use wireshark to demonstrate that their security policy settings are incorrect. or how many full stack traces I have to deliver to front-end developers to demonstrate that the timeouts on their asynchronous quick-search functions are problematic. or how many heap dumps I’ve had to interpret for java developers to demonstrate that they have a memory leak that’s blowing up the app server, which is making the user experience slow.
the knee-jerk reaction to simply blame the data layer nearly forces the DBA to be a master of all layers of the stack. my experience is that quality issues from javascript and application code developers are proliferating at a much faster rate than those from the “dba’s”.
Sure, but the article was about DBAs. I first worked with DBAs in the 80s: very competent, grumpy old white men who would not let you near the systems. Until about 2005, we had to sit with a team of them, have the db changes printed out, and discuss why we needed them, pros and cons, and discuss the (then Java) application in detail. Often good remarks came back, we went back and forth, and it got done. Somewhere along the way there was a transition to cloud, and suddenly DBAs were no longer really needed, or 18-year-old college students didn't make bad DBAs, or we could outsource them, etc. I remember close to 2010 sitting at a large European publisher who had a DBA team that barely knew databases, let alone the rest of the stack.
But yes, I agree with your points generally; the rest of the tech staff is often even worse. This article was about DBAs, and I have met and still meet many who shouldn't be DBAs and who are dangerous. Way too many.
We usually work for 3-5 companies at the same time and swap one out around every month: we do integrations for our own product and troubleshooting for clients. I meet far more than nine people per month who should quickly find another profession, including DBAs, CTOs, devs, secops, devops, managers, etc.
Currently AI already beats most of them on single one-shot tasks, but LLMs are not consistent across tasks within the same project; nor are a lot of humans, but at the moment humans are still better than LLMs at this. And seeing the progress over the past 2 years: yes, they get better at the one-shot tasks, amazingly so, but consistency, even within the same conversation, is a big issue. They just are not consistent. So humans will win until that is fixed, if it can be. And hallucinating: people bluff about their knowledge too, but they can be corrected and might soak up that knowledge; for LLMs it's not proven it's even possible to fix this.
This is not the original title, which is "Database Diagnosis System using Large Language Models". An LLM as a diagnosis tool does indeed seem like a really good use case, as the paper shows, but the changed title implies the point is replacing the whole role.
So someone has to say "yes, please commit this change to resolve the issue"; that hardly feels like a full role remains here, just the DBA's manager is needed now.
* The use of tree-based knowledge extraction with manual review, plus the graph built from the resulting information by principal-component extraction, forms an effective base for the context.
* The use of a Sentence-BERT model specifically for tool matching avoids the hallucination problem of LLMs offering fake solutions/diagnosis steps (a rough sketch of that kind of matching is below this list).
* The tree-based multi-LLM-expert diagnosis-by-vote system also addresses hallucination and failure modes like looping through the same solutions over and over in complex cases, and is reminiscent of the Monte Carlo advance for AlphaGo and of Paxos-style consensus protocols. And it provides output in an auditable way, which is important for incidents.
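To make the tool-matching point concrete, here's a minimal sketch of what embedding-based matching can look like, assuming the sentence-transformers library and a made-up tool registry (my own illustration, not the paper's code):

    # Hypothetical sketch of Sentence-BERT tool matching: instead of letting the
    # LLM free-generate a tool name (and possibly hallucinate one), embed the
    # model's proposed diagnosis step and snap it to the nearest registered tool.
    from typing import Optional
    from sentence_transformers import SentenceTransformer, util

    # Registry of tools the system is actually allowed to call (made-up names).
    TOOLS = {
        "check_slow_queries": "Inspect the statement stats view for the slowest queries.",
        "check_index_bloat": "Measure index bloat and suggest REINDEX candidates.",
        "check_lock_waits": "List sessions currently blocked waiting on locks.",
        "check_memory_usage": "Report buffer and working-memory utilisation.",
    }

    model = SentenceTransformer("all-MiniLM-L6-v2")
    tool_names = list(TOOLS)
    tool_embeddings = model.encode(list(TOOLS.values()), convert_to_tensor=True)

    def match_tool(llm_step: str, min_score: float = 0.4) -> Optional[str]:
        """Map an LLM-proposed diagnosis step to the closest registered tool,
        or return None if nothing is similar enough (likely a hallucination)."""
        step_embedding = model.encode(llm_step, convert_to_tensor=True)
        scores = util.cos_sim(step_embedding, tool_embeddings)[0]
        best = int(scores.argmax())
        return tool_names[best] if float(scores[best]) >= min_score else None

    print(match_tool("find out which statements are taking the longest to run"))
    # -> "check_slow_queries" (or None if the step matches no known tool)

Rejecting below-threshold matches is what turns "the model suggested something" into "the model suggested one of the tools we actually have", which is the anti-hallucination property that bullet is getting at.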
When testing, they evaluate against a human DBA with two years of experience, which seems kind of junior to me. Notably, in the results the D-Bot usually (9/12 cases) comes close to the junior DBA, but does not exceed it. However, the D-Bot definitely exceeds the results of raw LLM prompting and it has the obvious speed advantage over a human.
Overall, this gives me confidence that some of the LLM projects at my own company can be useful, since auditability + specific knowledge extraction are relevant to our work.
I love the anomaly profiler that's proposed here. I don't know that I'd ever want this system actually performing automated remediation steps, but having an agent that's both monitoring the database and getting prompted by alerts would fit my team very nicely.
Getting the alert followed by an automated analysis containing suspects and possible remediation steps removes a lot of DBA drudgery.
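As a minimal sketch of that alert-driven flow, with a placeholder call_llm() and made-up metric names rather than anything from the paper:

    # Hypothetical sketch only: an alert fires, recent context is gathered, and an
    # LLM drafts suspects plus remediation steps for a human DBA to review.
    # call_llm(), gather_context() and the metric names are all invented here.
    import json

    def call_llm(prompt: str) -> str:
        # Placeholder for whatever LLM client the team actually uses; returns a
        # canned answer so the sketch runs end to end without external calls.
        return ("Suspect: missing index on orders(created_at). "
                "Remediation: CREATE INDEX CONCURRENTLY, then re-check the slow query log.")

    def gather_context(alert: dict) -> dict:
        # In practice this would pull from the monitoring stack
        # (active sessions, slow-query log, host metrics, ...).
        return {
            "alert": alert,
            "active_sessions": 142,
            "slowest_queries": ["SELECT ... FROM orders WHERE created_at > ..."],
            "cpu_percent": 97,
        }

    def analyse_alert(alert: dict) -> str:
        context = gather_context(alert)
        prompt = (
            "You are assisting a DBA. Given this alert and context, list the most "
            "likely root causes and safe, non-destructive remediation steps. "
            "Do not execute anything.\n\n" + json.dumps(context, indent=2)
        )
        return call_llm(prompt)

    # A human still reviews the report before anything is changed:
    print(analyse_alert({"type": "cpu_saturation", "db": "orders_prod"}))

The key design choice is that the agent only drafts the analysis; a human decides whether any remediation actually runs.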
His "achievements" included dropping the production database by messing with the backup processes, insisting on very wide tables rather than smaller tables with joins since "they're easier to maintain", and giving out full admin rights to junior devs who subsequently went on to develop their own "black data mart" which was way way better than the production version.
Sigh, and to make things worse, this DBA guy managed to leverage his "experience" into a Principal Data Architect role at a major consulting company after the start-up predictably imploded.