1. I have a JSON schema with required fields. I complete the JSON, but do not include the required fields.
2. I run out of tokens from the model before I finish the JSON object because I'm in the middle of some deep, nested structure.
These seem solvable, just edge cases to control for by either reserving tokens, forcing generation of the required fields until the JSON is complete, or something more sophisticated.
In general the CEO has the ability to award equity however they want. It's fairly common for exercise windows to be extended, awards to be increased as a part of a severance package, cashless exercise to be allowed, etc. You may need to get sign off from the board, but that's usually easily justifiable.
As stated above, after that, clear communication to the rest of the team is key.
I think there is a trend towards using SQL to do the T part of ELT. For example, see the rise in popularity of dbt. Analysts are often limited by the data that's in the warehouse already. Instead of asking a data engineer or developer to do something so that more data gets pushed to the warehouse, I wanted to be able to pull it in myself.
Because of that, I'm starting an open source project, WebCrepe, to empower analysts to pull data directly into their databases using SQL. The idea is to pair a database extension with a web app to enable searching the internet and pulling in structured data. It's really early right now. I have a docker-compose file you can use to spin up a Postgres database and the backend. I still need to write better documentation on how to write queries, but it's basically the advanced Google search syntax.
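To give a flavor of the query side, here's a hedged Python sketch of how an advanced-search-style string might be split into operators and plain terms. This is purely illustrative of the idea, not WebCrepe's actual parser, and the operator handling is simplistic.

```python
def parse_search_query(q):
    # Separate hypothetical advanced-search operators (site:, filetype:, ...)
    # from the free-text search terms.
    params, terms = {}, []
    for tok in q.split():
        if ":" in tok and not tok.startswith('"'):
            key, value = tok.split(":", 1)
            params[key] = value
        else:
            terms.append(tok)
    return params, " ".join(terms)
```

So `parse_search_query('site:example.com census data')` would split into a `site` operator plus the free-text terms, which a backend could then translate into an actual search request.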
I'm interested in analytics folks that have use cases I can build out and engineers interested in working on it. If there is any interest then I'll write up better docs and build more functionality.
I think it would be easier to move and then find something. Toronto has a good and growing tech scene. I think you should be able to find something interesting fairly quickly with your experience.
I'm building a postgres extension that allows you to do web searches using a SQL query. The idea is to be able to pull in data from the web with some structure (which you define using custom scrapers) on demand.
Right now I have a pretty simple proof of concept: a Multicorn extension that calls out to a FastAPI backend. I have it all running using docker-compose.
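The core of a setup like this is mapping the backend's JSON response onto the rows of a foreign table. A minimal sketch of that projection step, with made-up field names (the real Multicorn wrapper would wrap this in a ForeignDataWrapper class):

```python
def rows_from_response(payload, columns):
    # Project each JSON result from the backend onto the foreign table's
    # columns; missing keys become NULLs (None) in the row.
    for item in payload.get("results", []):
        yield {col: item.get(col) for col in columns}
```

In the actual extension, Multicorn would call something like this for each query, with `payload` coming from an HTTP request to the FastAPI backend.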
I'm open to working with people that want to use it, or people that want to build it. I don't have any real plans to open source it or commercialize it. It's just a little side project I think is neat. I'm open to any ideas or use cases you might have.
Send me an email (in profile) or dm. Looking forward to it!
elovee (https://elovee.com) | ML Engineer, Data Scientist, Full-Stack Software Engineer | North America | Remote | Full Time
We are elovee, a healthcare startup developing AI-based technology to improve day-to-day care for seniors. We're building a voice user interface for seniors living with dementia. Our mission is to solve loneliness and isolation for seniors.
Roles we're hiring for
- ML Engineer / Data Scientist
- Full Stack Engineer
Why you want to work with us
- We are small. You get to help set the culture and direction
- Cutting-edge technology. We are pushing state-of-the-art speech-to-text, conversation modeling, and text-to-speech models, tuning where needed and building what we have to.
What we are looking for
- Experienced engineers that can take requirements and build products.
- A strong sense of ownership.
- Empathetic, team-oriented teammates.
- Connection to our mission
My understanding is that Seldon and Kubeflow are more geared towards infrastructure engineers. Our goal is to hide the infrastructure tooling so that Kubernetes, Docker, or AWS expertise isn't required. Cortex installs with one command, models are deployed with minimal declarative configuration, autoscaling works by default, and you don't need to build Docker images / manage a registry.