
Apologies. Fireworks are the textbook example of a public good.


But how does this break economics textbooks? They took a public good and effectively privatized it, ostensibly for the public benefit (reducing infrastructure costs on NYE) but conceivably for private benefit (passing out VIP tickets as favors or steering a profitable security contract to an ally).

There's nothing in economics that says the optimal outcome will naturally emerge. There's the concept of equilibrium in price theory, but that only exists under conditions of perfect competition - zero exit/entry costs, absolute market transparency, and fungibility of what's traded. To parallel your fireworks example: those conditions hold quite well for commodities like oil or many ag products, but not for housing.

Check out this book, which provides a good explanation of why it is politically profitable to do things like privatizing public goods: https://en.wikipedia.org/wiki/The_Dictator%27s_Handbook

Essentially, the authors' argument is that political power can easily be modeled as transactions in a political capital market where access to public goods is traded for political support (whether participatory, vote-delivery blocs in rigged elections, or non-interference by military actors in more obvious dictatorships). The wiki article includes a link to their more rigorous academic exploration of the topic, which is almost equally readable.


The state of economics as a science seems to be in much deeper trouble than I expected.


Making the public good a private good so they could raise the money to fund the transition :)

But seriously, the reasons are legit.


I tried this for a while, via daily tweets instead of a journal. Tried to find one thing a day, even if it was the smallest of things (e.g. first time wearing a new pair of socks!).

Very quickly I found out that my daily gratitudes were mostly about (1) a drive to/from work that was less congested than usual, (2) finding a good parking spot (mostly at work), and (3) getting a tasty lunch (during my lunch break, at work).

This actually depressed me, b/c it was just further indication all I do is work.


Doesn't it make it hard for you to fall asleep at night?


> I suffer from general fatigue

I think I do too. I'm always tired and weak. Check-ups and blood work are OK, and the doctors I see just say 'there's nothing wrong with you, you're probably just depressed'. Nope.

Very frustrating.


1) I may sound like a total n00b, but don't you find it hard to code only from a command line environment? I mean, you can't use IDEs with SSH, right?

2) I'm going to be that guy who chimes in and says you have to be careful with code you write on company time / using company equipment. So, keep that in mind.


So you actually can, but it can be a pain tbh...

Here is a link describing it:

https://unix.stackexchange.com/questions/12755/how-to-forwar...

Of course, with latency and all that, sending X windows can slow things down a bit (and there was some weird stuff with my Chromebook, but I got it to work in the end).

But! It definitely is possible, and once it's set up and everything it can be really helpful/useful.

Not too sure how it fares security-wise, but I figured if it's just for homework and on campus it should be fine for the most part, and SSH is secure (hence the name :D ), so it should be fine.


1) I'm mostly using vim. Been using it for development for the past 6 years and loving it.

2) Thanks! Duly noted.


Emacs works great in a tmux session, and I consider it a very powerful IDE.


How easy is it to retrieve and browse through the stored information?


+1 :(


IMHO for a 12 y/o to stick to a goal and work on something for a year, without losing interest or moving to a new shiny object, is no less remarkable than actually producing a game. Well done.


> Fortunately, there are new automated tools today that can do that automatically.

Can you please elaborate?


One very old tool for such things was called "stepwise regression". IIRC J. Tukey was partially involved in that. It appears that the AI/ML work is close to the regression and curve fitting that goes back strongly to the early days of computers in the 1960s, with a lot of work in the social sciences reaching back to the 1940s and even to about 1900.

A lot is known. E.g., there's the now classic Draper and Smith, Applied Regression Analysis. Software such as the IBM Scientific Subroutine Package (SSP), SPSS (Statistical Package for the Social Sciences), SAS (Statistical Analysis System), etc. does the arithmetic for texts such as Draper and Smith. For some decades, some of the best users of such applied math were the empirical macroeconomic model builders. E.g., once at a hearing in Congress I heard a guy, IIRC named Adams, talking about that.

Lesson: If you are going to do curve fitting for model building, then a lot is known. Maybe what is new is working with millions of independent variables and trillions of bytes of data. But it stands to reason that there will also be problems with 1, 2, 1 dozen, or 2 dozen variables and some thousands or millions of bytes of data, and people have been doing a lot of work like that for over half a century. Sometimes they did good work. If you want to do model building on that more modest and common scale, my guess is that you should look mostly at the old, very well done work. Here is just a really short sampling of some of that old work:

Stephen E. Fienberg, The Analysis of Cross-Classified Data, ISBN 0-262-06063-9, MIT Press, Cambridge, Massachusetts, 1979.

Yvonne M. M. Bishop, Stephen E. Fienberg, Paul W. Holland, Discrete Multivariate Analysis: Theory and Practice, ISBN 0-262-52040-0, MIT Press, Cambridge, Massachusetts, 1979.

Shelby J. Haberman, Analysis of Qualitative Data, Volume 1, Introductory Topics, ISBN 0-12-312501-4, Academic Press, 1978.

Shelby J. Haberman, Analysis of Qualitative Data, Volume 2, New Developments, ISBN 0-12-312502-2, Academic Press, 1979.

Henry Scheffe, Analysis of Variance, John Wiley and Sons, New York, 1967.

C. Radhakrishna Rao, Linear Statistical Inference and Its Applications: Second Edition, ISBN 0-471-70823-2, John Wiley and Sons, New York, 1967.

N. R. Draper and H. Smith, Applied Regression Analysis, John Wiley and Sons, New York, 1968.

Leo Breiman, Jerome H. Friedman, Richard A. Olshen, Charles J. Stone, Classification and Regression Trees, ISBN 0-534-98054-6, Wadsworth & Brooks/Cole, Pacific Grove, California, 1984.

There is a lesson about curve fitting: The ancient Greek Ptolemy took data on the motions of the planets and fitted circles, and circles inside circles, etc., and supposedly, except for some use of Kelly's Variable Constant and Finkel's Fudge Factor, got good fits. The problem: his circles had next to nothing to do with planetary motion; instead, that's based on ellipses, and that came from more observations, Kepler, and Newton. Lesson: Empirical curve fitting is not the only approach.

Actually, the more mathematical statistics texts, e.g., the ones with theorems and proofs, say: "We KNOW that our system is linear and has just these variables, and we KNOW the statistical properties of our data, e.g., Gaussian errors, independent and identically distributed, and ALL we want to do is get some good estimates of the coefficients, with confidence intervals and t-tests, and confidence intervals on predicted values." Then you can go through all that statistics and see how to do that. But notice the assumptions at the beginning: We KNOW the system is linear, etc., and we are ONLY trying to estimate the coefficients that we KNOW exist. That's long been a bit distant from practice and is apparently still farther from current ML practice.
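To make that textbook setting concrete, here is a tiny sketch in Python (numpy and statsmodels are just my choices for illustration, and the data are simulated): we KNOW the model is linear with Gaussian errors, and all we ask for are coefficient estimates, t-tests, and confidence intervals.

    # Sketch of the "we KNOW the model is linear" setting: simulate
    # y = 2.0 + 3.0*x1 - 1.5*x2 + Gaussian noise, then just estimate the
    # coefficients with t-tests and confidence intervals.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    X = rng.normal(size=(n, 2))                  # the two KNOWN regressors
    y = 2.0 + 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

    fit = sm.OLS(y, sm.add_constant(X)).fit()    # ordinary least squares
    print(fit.summary())                         # coefficients, t-tests, p-values
    print(fit.conf_int(alpha=0.05))              # 95% confidence intervals
    # confidence intervals on predicted values:
    print(fit.get_prediction(sm.add_constant(X[:5, :])).conf_int())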

Okay, ML for image processing. Okay. I am unsure about how much image processing there is to do where there is enough good data for the ML techniques to do well.

Generally there is much, much more to what can be done with applied math, applied probability, and statistics than curve fitting. My view is that the real opportunities are in this much larger area and not in the recent comparatively small area of ML.

E.g., my startup has some original work in applied probability. Some of that work does things some people in statistics said could not be done. No, it's doable, but it's not in the books. What is in the books asks too much from my data. So, the books are trying for too much, and with my data that's impossible. But I'm asking for less than what's in the books, and that is possible with my data. I can't go into details in public, but my lesson is this:

There is a lot in applied math and its applications that is really powerful and not currently popular, canned, etc.


Stepwise regression is not to be recommended because it's very easy to fool oneself.

http://www.sascommunity.org/mwiki/images/e/e2/NYASUG-2007-Ju...

http://www.barryquinn.com/the-statistical-dangerous-of-stepw...

Shrinkage methods like lasso/elasticnet are less susceptible to these problems.
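To see the contrast in practice, here is a small Python sketch (scikit-learn; the data and names are made up for illustration): instead of adding or dropping one variable at a time the way stepwise regression does, the lasso fits all candidate variables at once and shrinks the useless coefficients to zero, with the penalty chosen by cross-validation.

    # Sketch: 50 candidate predictors, only the first 3 actually matter.
    # LassoCV / ElasticNetCV choose the penalty by cross-validation and
    # shrink the irrelevant coefficients to (or toward) zero in one fit.
    import numpy as np
    from sklearn.linear_model import LassoCV, ElasticNetCV

    rng = np.random.default_rng(1)
    n, p = 300, 50
    X = rng.normal(size=(n, p))
    y = 4.0 * X[:, 0] - 2.0 * X[:, 1] + X[:, 2] + rng.normal(size=n)

    lasso = LassoCV(cv=5).fit(X, y)
    print("lasso keeps:", np.flatnonzero(lasso.coef_))

    enet = ElasticNetCV(cv=5, l1_ratio=0.5).fit(X, y)
    print("elastic net keeps:", np.flatnonzero(enet.coef_))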


I agree fully. No doubt Breiman took some steps beyond what Tukey did. My only point was that the question and some answers are old.


Thank you for the list of resources.

Are you able to go into more detail about your startup (problems it is solving)?


Okay, here, just for you. Don't tell anyone!

My view is that currently there is a lot of content on the Internet and the total is growing quickly. So, there is a need -- helping people find the content they will like for each of their interests.

My view is that current means of meeting this need do well on (rough ballpark guesstimate) about 1/3 of the content, the searches people want to do, and the results they want to find. My work is for the "safe for work" parts of the other 2/3.

The user interface is really simple; the user experience should be fun, engaging, and rewarding. The user interface, data used, etc. are all very different from anything else I know about.

The crucial, enabling core of the work, the "how to do that", the "secret sauce", is some applied math I derived. It's fair to say that I used some advanced pure math prerequisites.

To the users, my solution is just a Web site. I wrote the code in Microsoft's Visual Basic .NET 4.0 using ASP.NET for the Web pages and ADO.NET for the use of SQL Server.

The monetization is just from ads, at first with relatively good user demographics and later with my own ad targeting math.

The Web pages are elementary HTML and CSS. I wrote no JavaScript, but Microsoft's ASP.NET wrote a little for me, maybe for some cursor positioning or some such.

The Web pages should look fine on anything from a smartphone to a high-end workstation. The pages should be usable in a window as narrow as 300 pixels. For smaller screens, the pages have both horizontal and vertical scroll bars. The layout is simple, just HTML tables with no DIV elements. The fonts are comparatively large. The contrast is high. There are no icons, pull-downs, pop-ups, roll-overs, overlays, etc. Only simple HTML links and controls are used.

Users don't log in. There is no use of cookies. Users are essentially anonymous and have some of the best privacy. Enabling JavaScript in the Web browser is optional; the site works fine without JavaScript -- without it, users may occasionally have to use their pointing device to position the cursor.

There is some code for off-line "batch" processing of some of the data. The code for the on-line work is about 24,000 programming language statements in about 100,000 lines of typing. I typed in all the code with just my favorite text editor KEdit.

There is a little C code, and otherwise all the code is in Microsoft's Visual Basic .NET. This is not the old Visual Basic 6 or some such (which I never used) and, instead, is the newer Visual Basic part of .NET. This newer version appears to be just a particular flavor of syntactic sugar and otherwise as good a way as any to use the .NET classes and the common language runtime (CLR), that is, essentially equivalent to C#.

The code appears to run as intended. The code should have more testing, but so far I know of no bugs. I intend alpha testing soon and then a lot of beta testing announced on Hacker News, AVC.COM, and Twitter.

For the server farm architecture, there is a Web server, a Web session state server, SQL Server, and two servers for the core applied math and search.

I wrote the session state server using just TCP/IP socket communications, sending and receiving byte arrays containing serialized object instances. The core work of the Web session state server is done by two instances of a standard Microsoft .NET collection class, hopefully based on AVL or red-black balanced binary trees or something equally good.

The Web servers do not have user affinity: That is, when a user does an HTTP POST back to the server farm, any of many parallel Web servers can receive and process the POST. So, the Web servers are easily scalable. IIRC, Cisco has a box that will do load leveling of such parallel Web servers. Of course, with the Windows software stack, the Web servers use Microsoft's Internet Information Server (IIS). Then IIS starts and runs my Visual Basic .NET code.

Of course the reason for this lack of user affinity and easy scalability is the session state server I wrote. For easy scalability, it would be easy to run hundreds of such servers in parallel.

I have a few code changes in mind. One of them is to replace the Windows facilities for system logs with my own log server. For that, I'll start with my code for the session state server and essentially just replace the use of the collection class instances with a simple file-write statement.
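I can't post my code, but to give a rough idea of the session state server, here is a sketch in Python rather than the Visual Basic .NET I actually wrote, with an invented wire format, so treat it as an illustration of the idea, not my implementation: a TCP loop that keeps session data keyed by session ID in an in-memory collection. Swap the collection for a file append and you essentially have the log server.

    # Rough Python sketch of the session state server idea -- NOT the
    # production Visual Basic .NET code; names and wire format are invented.
    # Each request is a length-prefixed, pickled (op, session_id, payload)
    # tuple; state lives in an in-memory dict keyed by session ID.
    import pickle
    import socketserver
    import struct
    import threading

    _state = {}
    _lock = threading.Lock()

    def _recv_exact(sock, n):
        # read exactly n bytes or return None if the peer disconnects
        data = b""
        while len(data) < n:
            chunk = sock.recv(n - len(data))
            if not chunk:
                return None
            data += chunk
        return data

    def _recv_msg(sock):
        header = _recv_exact(sock, 4)          # 4-byte big-endian length
        if header is None:
            return None
        (length,) = struct.unpack(">I", header)
        body = _recv_exact(sock, length)
        return None if body is None else pickle.loads(body)

    def _send_msg(sock, obj):
        data = pickle.dumps(obj)               # pickle just for brevity here
        sock.sendall(struct.pack(">I", len(data)) + data)

    class SessionHandler(socketserver.BaseRequestHandler):
        def handle(self):
            msg = _recv_msg(self.request)
            if msg is None:
                return
            op, session_id, payload = msg
            with _lock:
                if op == "put":
                    _state[session_id] = payload
                    _send_msg(self.request, "ok")
                elif op == "get":
                    _send_msg(self.request, _state.get(session_id))
                # a log server would be this same loop with the dict
                # replaced by an append to a log file

    if __name__ == "__main__":
        with socketserver.ThreadingTCPServer(("0.0.0.0", 9100), SessionHandler) as srv:
            srv.serve_forever()

Because any Web server can call "get" with the session ID from the POST, none of the Web servers needs user affinity, which is the point of the design.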

I wrote no prototype code. I wrote no code intended only as a "minimum viable product". So far I see no need to refactor the code.

The code is awash in internal comments. For longer, deeper comments kept external to the code, there are often tree names in the code pointing to the external comments, and then one keystroke with my favorite editor displays them. I have about 6000 files of Windows documentation, mostly from MSDN, and most of the tree names in the comments point to the HTML files of that documentation.

I have a little macro that inserts time-date stamp comments in the code, e.g.,

Modified at 23:19:07 on Thursday, December 14th, 2017.

and I have some simple editor macros that let those comment lines serve as keys in cross references. That helps.
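The macro itself is specific to KEdit, but just to show the format, here is a little Python sketch (my illustration here, not the actual macro) that produces the same kind of stamp:

    # Illustration only: build a "Modified at ..." stamp like the one above.
    from datetime import datetime

    _SUFFIX = {1: "st", 2: "nd", 3: "rd"}

    def ordinal(day):
        # 11th, 12th, 13th are exceptions to the 1st/2nd/3rd rule
        if 11 <= day % 100 <= 13:
            return f"{day}th"
        return f"{day}{_SUFFIX.get(day % 10, 'th')}"

    def stamp(now=None):
        now = now or datetime.now()
        return now.strftime(f"Modified at %H:%M:%S on %A, %B {ordinal(now.day)}, %Y.")

    print(stamp())  # e.g. Modified at 23:19:07 on Thursday, December 14th, 2017.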

The code I have is intended for production up to maybe 20 users a second.

For another factor of 10 or 20, there will have to be some tweaks in some parts of the code for more scaling, but some of that scaling functionality is in the code now.

For some of the data, a solid state drive (SSD), written maybe once a week and otherwise essentially read-only (read many thousands of times a day), would do wonders for users served per second. Several of the recent 14 TB SSDs could be the core hardware for a significant business.

Current work is sad -- system management mud wrestling with an apparently unstable motherboard. With some effort, I finally got what appears to be a good backup of all the files to an external hard disk with a USB interface. So, that work is safe.

Now I'm about to plug together another computer for the rest of the development, gathering data, etc.

I'm thinking of a last-generation approach: an AMD FX-series processor, DDR3 ECC main memory, SATA hard disks, USB ports for DVD, etc., Windows 7 Professional 64-bit, Windows Server, IIS, and SQL Server.

