It's pretty surprising that they're willing to charge a flat rate rather than by token, but great news for users. It's inevitable that you get annoyed at AI when it consumes tokens and generates a bad answer, or starts reading files that aren't fully relevant. The flat rate takes away that bad taste. The business modelling behind it must be quite intense, I hope this doesn't blow up in JetBrains' face if Junie's usage patterns change over time.
JetBrains are in a great position to do this though, perhaps the best position. Whereas a tool like Claude Code or Aider can give the LLM grep and little else [1], Junie can give the LLM a kind of textual API to the IDE's own static analysis database. If Claude/GPT wants to understand what a function is doing and how it's used, it could issue a tool call that brings up nicely rendered API docs and nothing else, or a tool call to navigate into the function and read just its body, and so on. And Junie can use the IDE to check whether the code complies with the language's rules more or less instantly, without needing a full compile, to pick up on hallucinated APIs or syntax errors.
So much potential with this kind of integration, all that stuff is barely the start.
[1] Aider attempts to build a "repo map" using a PageRank over symbols extracted by tree-sitter, but it never worked well for me.
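A minimal sketch of what such a "textual API to the static analysis database" could look like. The tool names and the in-memory index below are hypothetical, not JetBrains' actual interface; a toy dict stands in for the IDE's real symbol index:

```python
# Hypothetical tool surface an IDE could expose to an LLM agent.
# The tool names and the in-memory "index" are illustrative only,
# not JetBrains' actual API; a real IDE would back these with its
# own static analysis database.

SYMBOL_INDEX = {
    "parse_config": {
        "doc": "parse_config(path: str) -> dict -- Load and validate a TOML config.",
        "body": "def parse_config(path):\n    ...",
        "callers": ["main", "reload_settings"],
    },
}

def get_docs(symbol: str) -> str:
    """Return just the rendered API docs for a symbol, nothing else."""
    return SYMBOL_INDEX[symbol]["doc"]

def get_body(symbol: str) -> str:
    """Navigate to a symbol and return only its definition body."""
    return SYMBOL_INDEX[symbol]["body"]

def find_usages(symbol: str) -> list[str]:
    """List call sites, as the IDE's index already knows them."""
    return SYMBOL_INDEX[symbol]["callers"]
```

The point is that each call returns a small, precise slice of the codebase instead of forcing the model to grep and read whole files.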
>The business modelling behind it must be quite intense, I hope this doesn't blow up in JetBrains' face
Historically... this tends to work out. Reminds me of Gmail initially allowing massive inbox. YouTube doing free hosting. All the various untethered LAMP hosting...
If necessary they'll add an anti-abuse policy or whatnot to mitigate the heavy users.
The sophisticated modeling is basically "just get going" with a guesstimate and adjust if needed.
I doubt that pricing structure will sink any ships. It's going to be about utility.
> Historically... this tends to work out. Reminds me of Gmail initially allowing massive inbox. YouTube doing free hosting. All the various untethered LAMP hosting...
One difference I see: storage capacity and compute performance aren't increasing like they did in the past, so companies can't rely on those costs dramatically dropping in the future to offset bleeding cash initially to gain market share.
The cost of inference[0] for the same quality has been dropping by nearly 10x year over year. I’m not sure when that trend will slow down, but there’s still been a lot of low-hanging fruit around algorithmic efficiency.
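At roughly 10x per year, the compounding is dramatic. A back-of-the-envelope sketch (the 10x rate is the parent comment's figure, not a guarantee):

```python
# Back-of-the-envelope: if inference cost for a fixed quality level drops
# ~10x per year (the rate claimed above), what does $1.00 of today's
# usage cost n years out?
def projected_cost(cost_today: float, years: float, annual_drop: float = 10.0) -> float:
    return cost_today / (annual_drop ** years)

print(projected_cost(1.00, 2))  # two years out: 0.01 per original dollar
```

So a flat-rate plan that loses money today can become comfortably profitable within a year or two, if the trend holds.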
Sure. I agree that usage/demand is likely to outgrow compute performance.
But... a lot of the other dynamics that make this game winnable still stand. Maybe they will need to go with a meter eventually, or some other pricing structure... but it will work out.
It's odd that they don't seem to let you pay for overages, it looks like you are just shit out of luck past a certain point even on the most expensive plan.
Were you able to figure out what constitutes a "credit"? I initially assumed they were following Cursor's (early) model of 1 prompt = 1 credit, with the tokens used to fulfill the prompt not costing anything. If that's how they're doing it that still leaves a bad taste when you waste a credit on something that doesn't work, but it does remove the need to care about how the tool gets there.
Token based is a pretty strong downside for me that would be enough to get me to use other tools like Cursor (even though I love JetBrains IDEs). I get actively stressed watching an automated system burn through my money on its own recognizance. If I'm going to have quotas or usage-based pricing I want the metrics used to be things that I have direct control over, not things that the service provider controls.
TANSTAAFL. With flat pricing, companies have incentives to downgrade you to cheaper models - which currently strongly correlates with worse quality of output - or, more likely, to trim context significantly and hope you won't notice.
But yes, there should absolutely be ways to track usage, ideally before the prompt is even submitted for processing (maybe for >N tokens per query, where N can be specified in settings but has a reasonable default).
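The pre-submit check described above could be as simple as an estimate-and-confirm gate. A minimal sketch, where the chars/4 heuristic is a common rough approximation for English text (not an exact tokenizer) and the threshold N is the user-configurable setting mentioned above:

```python
# Sketch of a pre-submit usage check: warn before sending any prompt whose
# estimated token count exceeds a user-configurable threshold N.
# The chars/4 heuristic is a rough approximation, not a real tokenizer.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def should_confirm(prompt: str, context_files: list[str], n_tokens: int = 8000) -> bool:
    """True if the request is large enough to warrant a confirmation dialog."""
    total = estimate_tokens(prompt) + sum(estimate_tokens(f) for f in context_files)
    return total > n_tokens

# A small prompt goes straight through; a huge pasted file triggers a confirmation.
print(should_confirm("fix this test", []))        # False
print(should_confirm("review", ["x" * 100_000]))  # True
```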
Junie has been amazing for me; it completely replaced my payments for Claude Code and Cursor. And it was free (until today). It's the least aggressive agent I've used, no complete rewrites or anything close, and it's able to achieve about 95% of what I ask of it.
The only downside, which might be fixed in the newest release, is that it completely forgets context between messages, even in the same chat window. But that feels like both cost cutting and easy to fix.
My biggest issues with Claude Code and Cursor, for what it's worth:
Claude Code: Price, plus when it doesn't get things right, within a few messages it ALWAYS just creates a new entry-point file with some demo console.logs that do nothing but show messages, and claims to have succeeded at what I asked
Cursor: Will break some functionality in my web application while fixing or creating others, about 80% of the time
Cursor results are going to depend heavily on the model; Gemini 2.5 pro exp seems the overall strongest. You’re probably defaulting to 3.7 sonnet which is completely unusable; it was good at first but I am convinced anthropic “updated” (degraded) it behind the scenes to lower their inference costs. OpenAI did the same with GPT-4o for a bit a while back before making it better again.
3.7 also seems to have converged more on the hybrid reddit user/NPR listener/HR lady tone and manner of speaking that makes me want to punch a wall. Genuinely, people could probably increase LLM usage just by fixing this problem and banning r*ddit from the training set.
I've seen evidence that suggests this is false, and that it's more likely that cursor degraded the experience in their context window to save on costs.
The date stamped models haven't had any evidence of ever changing or degrading, to my knowledge. Aider did a test for this as well.
Is there a way to use Claude within the JetBrains IDEs? I have a JetBrains IDE license and a Claude subscription, but I couldn't find an integration. To use Claude and have it integrated, I need to subscribe to JetBrains AI instead, but then I don't have Claude in the browser anymore.
We made the choice to integrate directly with third party providers for a few reasons, but the major one is to do with how the providers can use our users' data. We have very restrictive agreements which don't allow the providers to use the data for training or any purpose other than validating the requests.
> Restrict usage of AI Assistant for a project
> Create an empty file named .noai in the root directory of the project.
> When this file is present, all AI Assistant features are fully disabled for the project. Even if this project is opened in another IDE, the AI Assistant features will not be available.
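The quoted instruction, in shell form (using a throwaway directory as the project root for illustration):

```shell
# Disable AI Assistant for a project by dropping an empty .noai marker
# file in its root directory. A throwaway directory stands in for a
# real project root here.
PROJECT_ROOT=$(mktemp -d)
cd "$PROJECT_ROOT"
touch .noai
ls -A .noai
```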
I teach and absolutely must be able to disable AI for my student projects; otherwise the students learn very little and are led down false paths constantly
Of course. As I tell them: I am a good teacher and my job is to teach you how to program computers well. It is possible to cheat and get a good grade in this class. If I catch you, I will report you to the university. But I'm not going to work hard to catch people, that's not my job: my job is to teach. You should learn.
My concern isn't for students who are cheaters, there isn't much I can do about them. Rather it is for students who don't know any better and start having AI auto-completion thrown at them.
How many students rely on AI, is that number really going up?
My son is 14 and knows about AI (I'm a researcher in the space, so I've been mentioning advances in it for years). He codes with some of his peers, and it seems normal to me (Python scripts, HTML, JS type stuff written the old-fashioned way [written by hand or copy-pasting into a notepad.exe equivalent :P]). I try to be super honest with him; I tell him AI is incredible, but we also joke about it and I explain the serious drawbacks of vibe coding, especially while learning.
I wonder overall, though, how LLMs are going to affect CS education. Will students avoid using the tools, or will they be accepted? CS homework projects were always easier to cheat on than, say, fine art, given how easily one can copy and paste code, but AI tools make trivial work of many homework exercises where cheating would, in theory, be harder to prove.
Yes, it is going up. In my classes the problem is that AI is just there in VSCode or IntelliJ, and they start using it almost by accident. That is what I want to avoid.
Tons of cheaters existed pre-AI too. They'll always exist, whether they use AI or just share copy-pasted code from students who attended the semester before. Probably half of my graduating class couldn't program because they cheated their way through.
I used JetBrains AI for about a year. It was pretty good at helping me scaffold things; it felt like instructing a junior developer, which isn't bad, and it saved me time on side projects.
* They say free for all IDEs except the community version of PyCharm and IntelliJ.
* Looks like if you want to use your own LLM you need to be an enterprise user? None of the lower tiers allow for it, I find this really, really dumb, if I'm paying for compute, why can't I also run my own LLM? Am I misunderstanding this?
* ReSharper and Android Studio don't fall under the credit system? I really would like to know what that means.
- JetBrains AI tools (AI Assistant and Junie coding agent) are now available under a single subscription starting with version 2025.1.
- There are three tiers: AI Free, AI Pro, and AI Ultimate.
- The free tier offers unlimited code completion, local AI models, and credit-based access to cloud AI assistance and Junie.
- AI Assistant supports Claude 3.7 Sonnet and Google Gemini 2.5 Pro.
- Junie is powered by Anthropic's Claude and OpenAI models.
It's not clear whether AI Free will be available in Community Edition IDEs or not.
Update: From the AI Plans & Pricing page, there's a tooltip that says: "The free tier is not available in the Community Editions of PyCharm and IntelliJ IDEA."
As far as I know, that rule doesn't apply to verbs. Do you have a style guide that indicates otherwise? I know it looks like I'm trying to correct someone's grammar online, but I'm legitimately trying to learn. Your comment made me curious so I searched for a bit but couldn't come up with anything.
The free tier now supports connecting to local AI models running on LM Studio or Ollama, but it still doesn't actually function without an internet connection.
If you block access to the internet or to their AI API servers [1], it refuses to start a new chat invocation. If you block access halfway through a conversation, the conversation continues just fine, so there's no technical barrier to them actually running offline, they just don't allow it.
Their settings page also says that they can't even guarantee that they implemented the offline toggle properly, a flag that should be the easiest thing in the world to enforce:
>Prevents most remote calls, prioritizing local models. Despite these safeguards, rare instances of cloud usage may still occur.
So you can't even block access to the very servers that they say their faulty offline toggle would leak data to.
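For what it's worth, the usual way to block specific hosts without a firewall is a hosts-file override. The hostname below is a placeholder, since the actual server list referenced in [1] above isn't reproduced here, and this sketch edits a copy rather than the live /etc/hosts:

```shell
# Sketch: block a hypothetical AI API hostname by pointing it at localhost.
# The placeholder hostname stands in for whatever servers the IDE actually
# contacts; a copy of /etc/hosts is edited here for demonstration purposes.
HOSTS_COPY=$(mktemp)
cp /etc/hosts "$HOSTS_COPY"
printf '127.0.0.1 ai-api.placeholder.example\n' >> "$HOSTS_COPY"
grep placeholder.example "$HOSTS_COPY"
```

Of course, this only helps if the vendor publishes a complete and stable list of endpoints, which is exactly the complaint above.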
I disconnect from the internet sometimes and noticed this morning that my previous night's chat was invisible. I could only see it once I connected again.
This puts me off a bit from finally trying local models. Anyone know what kind of data is collected in those rare instances of cloud usage?
Hi, here are our data collection policies for the cloud-based LLMs. We've worked out agreements that heavily restrict how third party companies can use your data, including not storing it or using it for model training: https://www.jetbrains.com/help/ai/data-collection-and-use-po...
> The AI Free tier gives you unlimited code completion and access to local AI models
Looking forward to giving this a try.
Work provides me with tooling and requires that I stick to approved AI tools, and my hobby-coding alone is just not important or regular enough to justify a paid subscription.
It's been a little annoying that I can have ollama running locally, enable ollama and configure it in my IDE, but still (seemingly?) not be able to make use of it without activating a paid AI Assistant license.
It makes perfect sense that cloud models would require payment, and that JetBrains would make some margin on that.
But I'm already paying for an IDE whose headline features have recently been so AI-focused, and if I'm also providing the compute, then I should really be able to use those features.
You get the AI Free tier with any paid license for a JetBrains IDE, and as you stated, it should work with local AI models. I looked through our internal documentation and couldn't find anything that says otherwise. If you run into issues, please open a YouTrack ticket and we can take a closer look, but from everything I see, I'd expect it to work the way you think.
> On top of all that, the All Products Pack and dotUltimate subscriptions will now come with AI Pro included.
Well, colour me surprised. I've used JetBrains as an example of a pretty decent company in the past (e.g. the way they remind you your subscription is up for renewal a couple of months in advance, so you have all the time in the world to unsubscribe if you like), but I wasn't expecting them to just add this to the existing subscriptions.
They’ve bumped up the price once in all the years I’ve been subscribed to the All Tools pack. When they did that, they gave existing subscribers the option to buy two years’ subscription at the old price.
Yeah, JetBrains has so far been incredibly good to their users, and it sure seems like they know that that has been their primary competitive advantage in a landscape dominated by free editors. Hopefully that calculation stays the same in the AI world.
I think I'll be more than happy to try it out then cause I have that pack and compare it against the Github Copilot plugin, or the likes of Continue.dev (which was pretty good in VS Code, but kind of buggy in JetBrains IDEs).
That's cool that you're looking at those things. I hope we've made progress on "Apply" (and we're doing more). And as a heads-up, as you can imagine, we're looking at NEP.
I don't know when "AI Pro" became bundled with the "all products pack" but I don't think it was there last year. I've been paying for it separately for a bit now, but a couple days ago I noticed 2 licenses showing up in the 'AI Assistant' tab. Was able to just cancel the monthly one, and use the one from 'all products pack'. May look to upgrade again when the Junie stuff becomes available in the other IDEs.
I'd like to know more about what is powering Junie under the hood.
> According to SWEBench Verified, a curated benchmark of 500 developer tasks, Junie can solve 53.6% of tasks on a single run
That's nice I guess, but why isn't this an entry on the actual https://www.swebench.com/#verified website? (Also: 53% isn't that impressive these days, Claude Sonnet can reach 63%)
“JetBrains and Anthropic share a commitment to transforming how developers work. Developers rely on Claude’s state-of-the-art performance in solving complex, real-world coding tasks. We’re excited to see how Junie, powered by Claude, will help the global developer community create innovative things within the trusted JetBrains IDEs that customers love.”
Mike Krieger, Chief Product Officer, Anthropic
Kind of confusingly, in today's release of Rider 'Junie' is mentioned nowhere I can find. The AI assistant tab, which was already available (paid), just has options to pick from popular models (4o, o1, o3, Gemini, Claude) or LM Studio / Ollama
In my experience working on JB extensions, Rider is the most different of the IDEs. Most people think of just IntelliJ, and that's the same code base as e.g. PyCharm. But Rider seems substantially different.
The biggest difference between the language-specific IDEs in my experience is how they expose the project structure, with GoLand, PyCharm, etc. providing a much more directory centric workflow while Rider by nature has to work around .sln and .*proj files.
But Rider is uniquely weird in its use of ReSharper for code analysis.
Junie isn't available for all of the IDEs yet, so it's not yet available in Rider. As of today it's available for PyCharm, IntelliJ, WebStorm and GoLand: https://www.jetbrains.com/junie/
I have been playing with it for a couple of hours. I have the All Product Pack which includes the AI Pro tier for free. You can hook up local models easily as well using Ollama or LM Studio. This seems better than both Continue.dev and Copilot. I will probably be cancelling my copilot subscription before the current year is up.
JetBrains has outlasted most of the fully open source IDEs out there. Open source IDEs tend to suck and be abandoned after a while because there's no incentive to keep maintaining it, and it's expensive. See: NetBeans, Eclipse.
AFAIK the only IDEs in wide use today are open-core, like JetBrains' IDEs.
No. Junie is decent as an agent, despite being slow (I'd put it between Cursor and Windsurf/Copilot on quality), but the autocomplete is anemic. They have to improve their ability to generate suggestions at all before they can start recommending next edits.
Correct, MCP is not used in Junie yet, but it's something we are looking into. Comments like this help us better gauge general interest, so highly appreciate this!
It’s a little confusing. I have the all-products pack but a month ago paid $100 for the ai package which is supposed to be included now in the all-products pack. So I should get a $100 credit or bumped up to the next level ai package.
Aside from the Toolbox answer, even within the Toolbox it was previously two separate actual installations of PyCharm (community and professional) but starting in 2025 they're now just one install, and it switches behavior based on your authentication https://www.jetbrains.com/pycharm/download/#:~:text=PyCharm%...
You can use Ollama or LM Studio locally. There is also code completion running on local models which is built into the IDE and comes bundled for free with IntelliJ.
Since the Ukraine war, JetBrains products have gone way downhill. Their best engineers were in Ukraine. The IDEs have become buggy and slow; it's sad to see this once very polished software lose relevance
Most of JetBrains' engineers were Russians located in St. Petersburg. Since the war started, JetBrains has claimed they relocated their entire workforce out of Russia.
That's not true. At least not based on their R&D locations back then. Most of those were in Russia. They quickly - and rightfully so - closed these locations down when the war started and moved their activities elsewhere.
Visual Studio Code was first released in 2015; Intellij (the original JetBrains IDE) was first released in 2001. Even Atom -- the editor that Microsoft forked to make VSCode -- had its first public release in 2014.
It's safe to say that JetBrains IDEs are something other than "crappy VSCode clones."
You don't _have_ to use the AI stuff, personally I've disabled all of it because my fan was spinning like crazy. Maybe in a year or two I'll try it again.
I like a good conspiracy but based on what? Jetbrains have no incentive to force that, they make money based on providing flexible tools that people will pay for. And their IDEs are desktop apps, you could always just... not upgrade. Unlike web or cloud-based "IDEs".
I was already a satisfied paying customer. I don't need that new stuff but I understand they have to go where the market goes if they want to stay relevant vs competitors (Microsoft VSCode/Github/Copilot) in the eyes of prospective customers who judge products using comparative feature grids.
If you don't want to use an IDE or pay for your tools that's fine. You don't have to look for reasons to hate on it. No one cares what you don't use.
> I don't need that new stuff but I understand they have to go where the market goes if they want to stay relevant vs competitors
Looking at the people racing to jump off the cliff and saying "let's maybe consider not doing that" can be a competitive advantage, see https://procreate.com/ai
Thanks for the reminder! I was looking for a modern editor without AI stuff (I do like AI things, but sometimes you want an off switch). Didn't notice it became open source. Nice!
It's automatically installed and bundled with the IDE. You can disable it, but to uninstall fully you have to manually delete files from each and every IDE installation
As mentioned in another comment, you can add a .noai file to the root of your project to disable AI support.
As to deleting files that ship with the app but that aren't used if you disable them... that feels awfully 1980's "gosh disk space is expensive" thinking.