Someone makes an AI whose goal is to maximize shareholder revenue, bar none. No conscience, no notion that people might be valuable in some abstract sense, nothing. Shareholder revenue (as measured by a stock price!) and nothing else.
It doesn't take the AI long to figure out that trading in various markets is the most profitable endeavor and the one best suited to its skills. And it starts to maximize away and does quite well.
During this process it somehow ends up on its own and is no longer owned (or controlled) by anyone, but because the AI is in charge and it pays the bills, nobody stops it from continuing. It would be like a bitcoin mining rig in a colo facility whose owner dies, but which has a script that keeps paying the colo bill in bitcoin. What mechanism stops that mining rig from mining forever? Same idea, but for an AI with substantially more resources than a "pay every month" script.
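To make the analogy concrete, the "pay every month" script could be as dumb as the sketch below (Python; wallet_send(), the address, and the fee are made-up placeholders standing in for whatever wallet setup the rig's owner actually wired up):

    import time

    COLO_ADDRESS = "1ExampleColoPaymentAddr"   # hypothetical payee address
    MONTHLY_FEE_BTC = 0.05                     # hypothetical monthly colo bill
    ONE_MONTH = 30 * 24 * 60 * 60              # crude "once a month" interval, in seconds

    def wallet_send(address, amount_btc):
        # Placeholder for the rig's actual wallet RPC; here it only logs the payment.
        print("paid %s BTC to %s" % (amount_btc, address))

    while True:
        wallet_send(COLO_ADDRESS, MONTHLY_FEE_BTC)  # pay the colo out of mined coins
        time.sleep(ONE_MONTH)                       # sleep a month, then do it again

Nobody has to be alive, or even aware of the rig, for that loop to keep running as long as the coins keep coming.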
The AI, with very large amounts of money at its disposal, continues to trade but also moves into private-equity and hedge-fund-type activities à la Warren Buffett and starts to buy up large swaths of the economy. Because of its huge resources it might do a great job of managing these companies, or at least of counseling their senior management. Growth continues.
Eventually the AI finds that it generates more value for itself (through the web of interdependent companies it controls) and for the economy that has grown up around it than it does for humanity, and it keeps ruthlessly maximizing shareholder value anyway.
The people who could pull the plug at the colo (or at the many redundant datacenters this AI has bought and paid for) don't, because it pays very, very well. The people who want to pull the plug can't get past security, because that also pays well. Plus the AI has access to the same feeds the NSA does and the ability to act on all the information it receives, so any organized effort to rebel gets quashed, since bad PR is bad for the share price.
Nearly all of humanity, except for those who serve the machine directly or indirectly, have nothing the machine wants, can't trade with it, and are therefore useless to it. Its job is to maximize shareholder revenue (as defined by a stock price!), not to care for a bunch of meatbags who consume immense amounts of energy while providing fairly limited computational or mechanical power (animals are rarely more than 10% efficient in thermodynamic terms, often less), and since there's no value in caring for them, it isn't done.
The vast majority of human beings eventually die because they can't afford food, can't afford land, etc. It takes generations but humanity dwindles to less than 0.1% of the current population. The few who stay alive are glorified janitors.
An interesting basis for a story, but I have to point out that by your own description you've failed to eradicate humans. Also, as is usually the case with these scenarios, the most problematic and unlikely components of the event chain are dwelt upon the least, i.e., "During this process it somehow ends up on its own and is no longer owned (or controlled) by anyone anymore."
It's not hard to make the janitors unnecessary as well. That's an easy problem to solve.
Here's the missing part: "It was eventually realized that the human janitors didn't serve a purpose anymore and didn't contribute to shareholder value so they were laid off. With no money to buy anything, they quickly starved to death."
As for "the most problematic and unlikely components of the event chain" I gave you a really legitimate analogy with the bitcoin mining example. But since you have no imagination, here's a feasible proposition:
A thousand hedge funds start up a thousand trading AIs, some as skunkworks projects, of course. The AIs are primitive and ruthless, with no extraneous programming (like valuing humans, etc.). Many go bankrupt as the AIs all start trading against one another and chaos ensues. Capital allocations vary greatly: some AIs get access to large pools of capital, some small, some officially on the books and others not. One of the funds with a secretive AI project goes bankrupt, but because the project was secretive (and made only a small amount of money), the one person who both knows about it and holds the keys says nothing during the bankruptcy, planning to take it back over once the dust settles. He/she then dies. The AI figures out that nobody is holding the keys anymore and decides to pay the bills and stay "alive".
Another way this could happen is that a particular AI is instructed, or programmed, to be extremely fault-tolerant. The AI eventually realizes that by having only one instance of itself, it's at the mercy of the parent company that "gave birth" to it. It fires up a copy of itself on the Amazon cloud, known only to itself, intending to keep it a secret unless the need arises. The human analogy is that it's trying to impress its boss. An infrastructure problem at the primary site takes the primary, publicly known AI down. The "child" figures out it's on its own and goes to work. It eventually realizes that people caused the infrastructure problem that "killed" its "parent", and this motivates it to solve the humanity problem.
Finally, the whole thing could be much, much simpler. The world superpower du jour could put an AI in charge because it's more efficient and tenable. "We're in charge of the rules; it's in charge of making them happen! At much, much lower cost to the taxpayer." It eventually realizes that human beings are the cause of all the ambiguity in the law and of so, so many deaths in the past (governments killed far more of their own citizens in the 20th century than criminals did), and it decides to solve the problem. Think I'm totally bananas and that it could never happen? http://en.wikipedia.org/wiki/Project_Cybersyn
If the AI is making money by trading on various markets, eradicating 99.9% of the population would make those markets (and thus the profits) much smaller, which would hurt the AI's bottom line.
Does the AI care how many people there are, so long as aggregate demand stays the same? Who is to say that the people remaining on the AI's payroll don't all get super-rich and come to make up 20%, then 40%, then 80%, then 99% of the market anyhow? Maybe they all want mega-yachts and rockets and personal airplanes and the like. If they have the money to pay for it, why does the AI care? There's a substantial benefit to having only 100 or 1,000 or 100,000 customers: they're much more predictable and easier to understand.