What I originally envisioned was that we would first dip our toes in the water by quickly adding a very simple, easy-to-implement feature (the "improve my writing" feature mentioned above). If we received positive feedback from customers and management, we would be able to secure a proper budget for adding more complex and cool features based on RAG/function calling. For example, a scenario where a course learner asks a question and the LLM looks for the answer in the company's knowledge base/course library and provides a tailored, precise response. I had many other cool, useful scenarios in mind.
In fact, when we showcased our prototype of the "improve my writing" feature that we quickly put together in one week in early 2023 to a few select clients, their feedback was very enthusiastic. However, it took several months to bring it into production due to several bureaucratic hurdles: clearance from the legal department, the product owner delaying the release because of other priorities, and so on.
Now that the first feature has received a lukewarm reception because of the delayed release, we have neither a dedicated team nor a budget for adding more LLM-based features. Implementing proper, useful RAG is far more complex and requires real expertise (vector DB integration, chunking strategy and indexing, techniques like HyDE, reranking, etc.), compared to a one-line prompt template like "hey ChatGPT, improve this: %s". It is now unlikely that we'll have anything cool anytime soon without the support of the CEO or product owner (and they probably believe I proved it was indeed a fad). Most teams are currently busy with ordinary backlog features that don't require LLMs, and no one cares anymore.
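To make the complexity gap concrete, here is a toy sketch of the two approaches. All names and the tiny "knowledge base" are invented for illustration, and the retrieval step uses a crude bag-of-words similarity in place of a real embedding model and vector DB; a production RAG pipeline would add proper chunking, embeddings, reranking, and so on.

```python
import math
from collections import Counter

# The simple feature is literally a one-line prompt template:
def improve_prompt(text: str) -> str:
    return f"hey ChatGPT, improve this: {text}"

# Hypothetical stand-in for the company's knowledge base / course library.
KNOWLEDGE_BASE = [
    "Course refunds are available within 14 days of purchase.",
    "Certificates are issued after completing all course modules.",
    "Live sessions are recorded and available for 30 days.",
]

def _bow(text: str) -> Counter:
    """Bag-of-words token counts (a crude stand-in for embeddings)."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank knowledge-base entries by similarity to the question."""
    q = _bow(question)
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda doc: _cosine(q, _bow(doc)),
                    reverse=True)
    return ranked[:k]

def rag_prompt(question: str) -> str:
    """Assemble a grounded prompt from the retrieved context."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

Even in this deliberately naive form, the RAG path needs retrieval, ranking, and prompt assembly where the original feature needs a single string substitution — and the real version replaces each of those toy pieces with infrastructure that takes time and budget.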