
Ravel Law https://www.ravellaw.com | San Francisco, CA | Full-time

At Ravel, we develop the legal profession’s most innovative products for data analysis, visualization, and research - uncovering insights about judges’ rulings, revealing critical cases, enabling lawyers to make data-driven decisions, and more.

Ravel was launched from Stanford University’s Law School, Computer Science Department, and d.school, with the support of CodeX (Stanford's Center for Legal Informatics). We have been featured in Wired, The New York Times, and the American Bar Association Journal, and our founder was named to the Forbes 30 Under 30 list for 2015.

We are a rapidly growing Series A startup funded by top-tier investors like NEA. We offer competitive compensation, equity, and health care. Our culture is extremely dog- and human-friendly. Our headquarters are in San Francisco, South of Market - conveniently located between BART and Caltrain.

We're looking for Front-End Engineers (jQuery, Ember, D3), Full-Stack Engineers (Scala, JS, Mongo), and Data Scientists (Spark, H2O, Stanford NLP). Check out the full descriptions and apply at https://jobs.lever.co/ravel.




Can I ask, do you really need Hadoop and "big data" for this? There have got to be substantially fewer than 10k courts in the United States. What needs to be processed that SQL can't accommodate?

Meta-note: It may be wise to make a rule on what's appropriate to leave as a comment on these hiring posts. I can see some companies shying away if they feel like it's going to turn into a "critique my stack and/or hiring process" thing.


We're processing the opinions rather than the courts, so we're dealing with millions of documents. Since we're building a network of their citations, it winds up being way too much data to hold in memory on a single node, hence the need for Spark.
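
For illustration only, here's a minimal Spark sketch in Scala of what building a citation edge list over millions of opinions might look like. The Opinion case class, the input layout, the paths, and the toy citation regex are all assumptions for the example, not Ravel's actual pipeline:

    import org.apache.spark.{SparkConf, SparkContext}

    // Hypothetical record: one opinion per input line, "id \t full text".
    case class Opinion(id: String, text: String)

    object CitationGraph {
      // Toy citation pattern (e.g. "410 U.S. 113"); real citation
      // extraction is far more involved than a single regex.
      private val CitePattern = """\b\d+ U\.S\. \d+\b""".r

      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("citation-graph")
        val sc = new SparkContext(conf)

        // Load opinions from a distributed store; too large for one node.
        val opinions = sc.textFile("hdfs:///opinions/*.tsv").map { line =>
          val Array(id, text) = line.split("\t", 2)
          Opinion(id, text)
        }

        // One (citingId, citedReference) edge per citation in the text.
        val edges = opinions.flatMap { op =>
          CitePattern.findAllIn(op.text).map(cite => (op.id, cite))
        }

        // How often each case is cited, i.e. in-degree in the network.
        val citationCounts = edges
          .map { case (_, cited) => (cited, 1L) }
          .reduceByKey(_ + _)

        citationCounts.saveAsTextFile("hdfs:///out/citation-counts")
        sc.stop()
      }
    }

The point of distributing this is that the flatMap and reduceByKey run per-partition across the cluster, so the full edge list never has to fit in one machine's memory.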



