This trend is also evident in conferences like ICML, CVPR and EMNLP.[1] I have a semi-popular open-source ML project [2], and the repo gets more visitors/users from China (via posts on Weibo/WeChat etc.) than from any other country, including the US.
I think Chinese researchers & government have a key advantage in collecting enormous datasets; e.g., I wouldn't be amazed if they end up with state-of-the-art models for problems such as object detection/segmentation in vehicle-mounted video, medicine/radiology, and face recognition [3]. Several US strategic funding agencies seem to be pursuing projects with this implicit assumption.
While American government agencies do collect data at a similar scale (body cams, police dashcams), inefficiencies in data sharing significantly complicate its use.
On an unrelated note: if you have a popular GitHub repo, you can view Google Analytics-style stats under graphs/traffic for referring URLs, unique-visitor counts, etc.
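If you prefer pulling those numbers programmatically, GitHub exposes the same traffic data through its REST API (the /traffic/views and /traffic/popular/referrers endpoints, which require push access to the repo). A minimal sketch, with placeholder owner/repo/token values:

```python
# Minimal sketch (not an official script): pulling repo traffic stats
# through GitHub's REST API. OWNER, REPO and TOKEN are placeholders; the
# token needs push access to the repository.
import requests

OWNER, REPO, TOKEN = "someuser", "somerepo", "xxxxxxxx"  # hypothetical values
headers = {"Authorization": f"token {TOKEN}"}
base = f"https://api.github.com/repos/{OWNER}/{REPO}/traffic"

# Total and unique views over the tracked window
views = requests.get(f"{base}/views", headers=headers).json()
print(views["count"], "views,", views["uniques"], "unique visitors")

# Top referring sites
for ref in requests.get(f"{base}/popular/referrers", headers=headers).json():
    print(ref["referrer"], ref["count"], ref["uniques"])
```

Note that, like the graphs/traffic page, the API only covers roughly the last two weeks, so you have to poll it periodically if you want a longer history.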
Crazy thought: what if, in 10-20 years, AI is the first field in which major breakthroughs are published in Mandarin instead of English? Language is power, and currently English-speaking countries have the edge.
Debatable. If we add up all the English-speaking or English-inclined countries, we get: the US, the UK, Australia, Canada, India, Nigeria, South Africa, all of Northern Europe and plenty of other places which are culturally closer to English-speaking countries than to Mandarin-speaking ones (one :) ).
China has a huge mass but it's just one object. The Anglosphere has several big objects and a ton of smaller ones and overall I'd say it is much bigger and has a longer reach.
I prefer the garden vs. forest analogy when it comes to China vs. the rest. As our models of never-ending growth break down, transitions to newer sustainable models require controls that China is in a better position to exercise than the current chaotic environments we see elsewhere. I see a lot of confusion in the minds of leaders in the rest of the world about what they should transition to, and that confusion percolates down into economic and R&D activities. It's not going to resolve itself any time soon given the clowns our freedom- and idea-spreading social media is propping up.
Of course even the best gardeners can fuck up, but if I were betting on the best outcomes in 10-20 years, I would bet on China.
Those top-rated Chinese researchers are publishing in English. Their Chinese-language venues are not of the caliber at which they would consider publishing groundbreaking work.
The *ACL conferences (major NLP venues) have now decided to allow presentations in Mandarin; the rationale being that so many presentations were already in such poor English that they might as well let half the audience be engaged.
It will take at least a few more years before submissions are allowed in Chinese: reviewers can't be assumed to know Chinese. The momentum is there though.
Of course, but especially on the applied side of ML, it may well happen that interesting blog posts and code examples are increasingly written in Mandarin.
Well, if English-speaking countries can't keep up with Chinese-speaking ones as far as AI is concerned, hopefully they will at least make enough progress to do accurate Chinese-to-English machine translation.
I'd hope that by then AI would have advanced well enough to enable us to navigate the language jungle more easily. Who knows, maybe the theory related to this will be one of those seminal Mandarin papers?
This is a simple function of the number of tinkerers available.
Any ML project I've worked on has been a lot of tinkering - not unlike the sort of craftsmanship one associates with woodworking and blacksmithing.
Tinkerers gave us Google, Facebook and of course modern civilization.
Tinkerers need the Medici family equivalent. Silicon Valley has that in spades. The Chinese graduate students I met and knew brought this tinkerer's spirit and the American university atmosphere allowed them to unleash their potential (check out who features in the author lists of CMU, Stanford, MIT's papers at NIPS, ICML etc.).
That state of mind and not a top-down approach is what we need.
More Ben Franklins, Robert Hookes, and Isaac Newtons; fewer 5-year planning commissions.
Reading this article gives me the feeling that new technologies simply cannot get off the ground without central government planning and billions of dollars in taxpayer spending. I guess it's up to the Chinese government to decide whether China wants to keep testing this hypothesis over and over against all odds.
I wasn't totally convinced by the first linked essay. The primary arguments presented seem to be:
- Government-funded development only gives the government what it wants while citizens remain unsatisfied. This is due to a lack of "market tests".
- We never tested the alternative where the government doesn't fund research and lets private companies innovate instead.
Dr. Mazzucato addresses this in her book. She notes that the government needs to provide a pipeline from basic research to marketability. She also argues that the last twenty years, in various countries, have shown that trusting private companies to innovate doesn't give better returns. Also, I feel like Dr. Glein is over-simplifying Dr. Mazzucato's argument by claiming she argues that "many of the technologies and innovations we now value were produced single-handedly by government". Dr. Mazzucato routinely celebrates the ability of private companies to integrate innovative technologies for the public. She's mostly arguing that these companies should be taxed better (more efficiently? realistically?) by licensing the technologies.
However, the argument Dr. Glein links to (https://www.jstor.org/stable/116937?seq=1#page_scan_tab_cont...), is very interesting! It claims that R&D personnel are a finite resource and government funding crowds out the supply of talent for the private sector. I don't know why, but I thought the supply was elastic? Maybe because I over-idealize immigration?
The second essay seems to mostly re-iterate the importance of the "crowding out" effect. It also notes that Mazzucato got some data wrong because: "But in the thirties governments in the US did not fund long-term fundamental research, so companies did it themselves. Now that governments do fund long-term fundamental research, industry needs no longer do so: yet again Professor Mazzucato advocates the very policies that lead to the outcomes she deplores."
tl;dr: people seem to be drawing different conclusions from datasets I haven't seen, and I need to do more research into the "crowding out" effect.
> (...) I started reading “How to Build a Brain” by Chris Eliasmith. By the end of it, I truly felt biologically plausible modelling was hugely important for the advance of Cognitive Science.
Does it give a biologically plausible explanation for backpropagation?
I almost wonder if this was a ploy by the American companies. AI is being injected into all apps and even pushed as a major feature, but, at least in my circle of friends, nobody cares or wants that. Most people use WhatsApp, with no AI. Google is trying Allo, to no success, and Facebook is pushing M, also to no success. I think AI is brilliant and could be the future, but from my experience, people generally don't want it in the ways it's being pushed. I definitely see the 'AI' and 'ML' markets, somewhat tied together, switching gears soon.
Facebook's M has had limited success, but on the other hand, News Feed, the core of Facebook's business, is built entirely around AI. And Google, of course, has had plenty of success with AI outside of Allo.
I wouldn't say "nobody cares or wants" AI, just that companies are still trying to figure out where it's best used.
You're right...kinda. AI is used in many places. Where it's advertised as a feature? Failure. Where it's used in the backend? People don't seem to have much of a choice. I stand by my statement.
Back in the 90s, some Japanese manufacturers (e.g. Panasonic) tried to sell AI-based home appliances such as rice cookers or washing machines. The buzzword then was "Fuzzy Logic" instead of NN. I don't know how successful they were, but it seemed like a classic case of selling features instead of stories. Ordinary people don't buy stuff because of its features.
I think fuzzy logic controllers are very common in appliances. Samsung even has a neuro fuzzy logic washing machine that uses optical sensors to see how clean the water is. I think consumers need a fancy sounding technology on their appliances even if they don't know what it means exactly.
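For anyone curious what such a controller actually does, here's a minimal sketch of a fuzzy-logic rule set for a washing machine. It's entirely illustrative; the real Samsung/Panasonic rule sets, membership functions, and constants are not public:

```python
# Toy fuzzy-logic controller: turbidity (how dirty the water looks, from an
# optical sensor) -> wash time in minutes. Values are made up for illustration.
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def wash_time(turbidity):  # turbidity in [0, 100]
    # Fuzzify the sensor reading into linguistic categories.
    clean  = tri(turbidity, -1, 0, 50)
    medium = tri(turbidity, 20, 50, 80)
    dirty  = tri(turbidity, 50, 100, 101)
    # Each rule maps a category to a typical wash time; defuzzify with a
    # weighted average of the rule outputs.
    weights = [clean, medium, dirty]
    times   = [20.0, 40.0, 70.0]
    return sum(w * t for w, t in zip(weights, times)) / (sum(weights) or 1.0)

print(wash_time(15))   # mostly clean water -> short cycle (~20 min)
print(wash_time(85))   # dirty water -> long cycle (~70 min)
```

The appeal is that the whole control law reads as "if the water looks dirty, wash longer", which is exactly the kind of heuristic the marketing departments turned into a feature.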
Reminds me of the (imo, erroneous) dichotomy between "strong" and "weak" AI back in the day. Strong AI is what captures the imagination of the crowds, but weak AI is what actually brings the money despite not being as sexy. Of course many things that we take for granted now would have been considered strong AI 40 years ago; the definition is constantly shifting.
I forget where I heard it, but someone once pointed out that AI, before the recent DL boom, refers to things that we don't know how to do. As soon as we know how to solve a problem, it gets a much more apt name.
I'm constantly amused, when I see articles like this, that I've yet to see any application of AI as impressive as Akinator, and it's been around for a full decade.
In consumer products, perhaps not. Business systems? Oh, you betcha.
Note: I'm interpreting AI as "also the very useful subset of AI called Machine Learning" here. ML is already in so many things, but I think it truly has yet to take off. But I very much believe that it will be ubiquitous. It's probably worth it for programmers to learn some of the basic algorithms, general approaches, and what they're good for.
I agree. That's kind of my point. AI now seems tied to data mining consumers. Consumers don't want this. It is much better served behind the scenes, or in business settings. The majority of people just don't want a creepy assistant.
I'm sure people know and like self-driving Teslas
Alexa is quite popular, Snapchat filters (&clones) likewise
Google search and many other popular products (YouTube video captioning) are also driven by AI
I agree with you though - we are not yet there for chatbots!
I will take better AI for predictive text entry on touch screen keyboards any day of the week. I use Android's built-in keyboard with swipe typing and it does fix some things but it could go much further.
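The non-neural core of next-word prediction is surprisingly small: essentially counting which word tends to follow which. A toy bigram sketch (real keyboards layer personalization, longer context, and nowadays neural language models on top of this idea):

```python
# Toy bigram next-word predictor, purely to illustrate the basic idea.
from collections import Counter, defaultdict

def train_bigrams(text):
    counts = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts, prev, k=3):
    """Return the k most likely next words after `prev`."""
    return [w for w, _ in counts[prev.lower()].most_common(k)]

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)
print(predict(model, "the"))   # e.g. ['cat', 'mat', 'fish']
```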
I think most "AI" efforts right now are in an uncanny valley of usefulness. They're "intelligent" enough to be troublesome and difficult to program with, but not "intelligent" as in AI-complete. We get used to the speech recognition working until suddenly it doesn't, and then we just wish we could type and click our way through.
Messaging is perhaps a bad place for AI as people use it specifically to chat to real human intelligences and don't want weird AI butting in. I'm sure there are lots of other areas where it is useful like the robots avoiding obstacles mentioned in the article.
There are so many ways it is being used, and many of them are seen as useful by consumers, I think. For example, better autocorrect/prediction of the next word you're going to type, smarter text selection, ...
I come from China, and many people around me say that the reason for the high proportion of Chinese authors in AI papers is that many Chinese researchers just pour their "trash" into journals; it's a false sense that China has become a major player in AI.
U.S. and other English-speaking researchers have done most of the essential/landmark work in AI; Chinese authors contribute less.
Chinese students are getting top tier educations in the United States only to be forced to return home because of absurd F visa restrictions which effectively prohibit them from gainful employment in the United States after completing their studies. It's pitiful and sad.
Don't know why you're being downvoted, because that exact thing has happened to many of my former lab mates. Giving a green card to anyone with a graduate degree in STEM fields would be the most sensible thing to do if the US wanted to avoid the reverse brain drain of foreign graduates educated mostly on the US taxpayer's money.
The vast majority of those grads would be very happy to stay in the US.
Where does this claim about being educated mostly on the US taxpayer's money come from? A lot of my classmates in graduate school have said they are in graduate school since it will give them more chances at the lottery allowing them to work in the US. Others have said they just wanted the best education they could get.
But all of them are always complaining about how there's basically zero financial assistance for them and how they are lucky that their parents can pay for them, since even getting a part-time job or internship is a massive hassle that most companies won't even bother with due to their student visa situation.
For PhD students in computer science, they will almost all get funding from the federal government (indirectly through grants).
Masters students, maybe not.
You are incorrect about the internships, which don't require visas for F1 holders as they get a year of OPT. Now, they might save up their one year of OPT for post graduation, which is useful for the wait in getting an H1, though I guess there are also useful extensions that can even get around that (IANAL, so I don't know what the exact story is ATM).
That is not true, and the reason is obvious - the bamboo ceiling in American companies. Think about it - Chinese grads have a far higher chance of being promoted to a senior position at companies like BAT, and all of them pay well.
Is the bamboo ceiling strong enough and well known enough that it would drive a majority of Chinese graduates from US schools out of the country even if they received a green card? I don't find this assumption obvious at all. It's easy to underestimate discrimination if you aren't the one being discriminated against. Do you happen to have any data on the actual discrimination and, more importantly for this discussion, the perceived one?
I think the lottery system might change soon enough. I also think there are problems with tying an education degree to employment. There are many universities that charge foreign students a huge amount for an education here, knowing there is a chance of employment. People pay literally $200k so they can get the practical training and a chance at H1B sponsorship. There was a local for-profit university that did this while not properly teaching its students. Once this was found out, the students lost their money and their opportunity at a job.
Which is actually correct; I don't know why you got downvoted. Hinton and Bengio were both affiliated with Canadian institutions when deep learning took off, and it was AlexNet that really gave DL its first big momentum.
Even now, I would say U of Toronto and U of Montreal are still the best universities for DL in North America.
Not to mention the many people who have written some influential papers or are doing research in Silicon Valley have come through U of T or U of M, often having done research under Hinton and Bengio.
Deep Learning is only a small part of AI research.
A big part of the rest of machine learning & AI research has been pioneered in the US. However, there have been some huge contributions from the UK and Canada throughout the growth of AI.
Prior to deep learning, the machine learning community didn't really associate itself with artificial intelligence very often. AI at the time, if I recall correctly, referred more to 70s symbolic and heuristic-search approaches.
Now that DL has become the dominant focus, people have started to put AI/DL everywhere possible to ride along with the hype. For me, it is deep learning that is really driving the AI buzzword trend at this point.
General AI conferences such as IJCAI [1] and AAAI [2] have been publishing machine learning papers forever, just as they have been publishing constraint satisfaction-related papers, logics-related papers, multi-agent stuff papers, etc..
Of course non-generalist conferences such as ICML, UAI, AAMAS, etc. focus on more specific topics, and those are considered to be at the same level (A+) as IJCAI/AAAI. Their existence does not mean that people in those fields think they are not doing AI.
In fact, pretty much everyone agrees they are doing AI. If anyone does not agree on that, it is (part of) the pro-"symbolic reasoning" crowd, which sometimes claims only their "strong AI" approach is actual AI and everything else is little more than mathematical tricks.
> Prior to deep learning, the machine learning community didn't really associate itself with artificial intelligence very often. AI at the time, if I recall correctly, referred more to 70s symbolic and heuristic-search approaches.
It's more complicated than that.
First, the AI community has had periodic swings between philosophies and approaches. Every once in a while, someone would make a big contribution, that approach would bloom, then progress would slow for a number of years. There were previous periods where ANNs and other statistical methods were in vogue.
Second, it depends on what you define as the AI community and the ML community. For example, speech recognition people have been using Hidden Markov Models for years, which is a statistical/learning method. Is that AI? Is that ML?
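To make the HMM example concrete, here is a toy forward-algorithm sketch; the states and probabilities are made up purely for illustration, and real speech recognizers used continuous emission densities over acoustic features and far larger state spaces:

```python
# Toy HMM forward algorithm: probability of an observation sequence.
def forward(obs, states, start_p, trans_p, emit_p):
    # alpha[t][s] = P(obs[0..t], state at t == s)
    alpha = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    for o in obs[1:]:
        alpha.append({
            s: sum(alpha[-1][p] * trans_p[p][s] for p in states) * emit_p[s][o]
            for s in states
        })
    return sum(alpha[-1].values())

states = ["voiced", "unvoiced"]
start_p = {"voiced": 0.6, "unvoiced": 0.4}
trans_p = {"voiced": {"voiced": 0.7, "unvoiced": 0.3},
           "unvoiced": {"voiced": 0.4, "unvoiced": 0.6}}
emit_p = {"voiced": {"low": 0.1, "high": 0.9},
          "unvoiced": {"low": 0.8, "high": 0.2}}

print(forward(["high", "high", "low"], states, start_p, trans_p, emit_p))
```

Whether you call that AI or ML, it's the same statistical machinery either way, which is rather the point.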
Dupe from my other reply, but the general sentiment is the same:
Again, I was assuming this is talking about deep learning. I understand that hating on deep learning is popular, but the current AI boom, including AlphaGo (which triggered the Chinese interest in AI according to this article) is driven by deep learning.
Would just like to put in a mention for Andy Barto (Sutton's PhD advisor), who has contributed as much as Sutton to reinforcement learning.
Although Barto is retired now, his lab at UMass Amherst is still alive and kicking, led by another of his PhD students and incoming UMass assistant professor, Phil Thomas.
There's a lot of history to it - the Parallel Distributed Processing (PDP) group in the '80s was at UCSD - the real problem is that we don't have curricula in place outside the top-tier institutions that lead to research success, at least at the rate they used to :-(
IMO, America will still be the leader in AI. The main difference between China and America is that China is more product-focused, while America is more math/logic-focused.
I think the term artificial intelligence is incorrect. What we're currently building is, IMO, artificial intellect.
I view intelligence as the capacity to use resources, including the intellect, wisely and creatively. The current systems we're building have no self-awareness and no will, so they cannot really be intelligent.
If we want to build intelligent machines, we have to understand the architectures of the mind. We use memory as a sort of mental matter for the formation of thoughts, which we structure in certain ways to formulate ideas/concepts, and then we go out and build things. We are builders because we are thinkers, and we are conscious of being able to think.
But what is a thought, really? What is consciousness?
Look at the size of our brain next to the huge computers and data centers we're building for "AI". Our brain is small, yet was able to think about how to build these systems.
We believe we're going to develop "artificial intelligence" by building massive computers and data centers. How absurd is that?
Folks, to build intelligent machines, we have to build thinking machines, and for this, we'll have to truly understand how the mind works. When we do, I believe we'll be quite surprised.
> I think the term artificial intelligence is incorrect. What we're currently building is, IMO, artificial intellect.
The rest of your comment contradicts this statement.
Dictionary definition of intellect is "the faculty of reasoning and understanding objectively, especially with regard to abstract matters". Dictionary definition for intelligence is "the ability to acquire and apply knowledge and skills".
Current AI has great difficulty in abstract matters/thought, let alone understanding something beyond simply a series of learned patterns.
> If we want to build intelligent machines, we have to understand the architectures of the mind.
We have intelligent machines now, many of which outperform human capacity for specific tasks. If you're talking about strong artificial intelligence, then I wouldn't necessarily disagree, but maybe it could go the other way: by developing strong AI, we can understand the architecture of the mind. Maybe strong AI can be developed with the intellect of a cat/dog, and that gives insight into the human mind.
> We believe we're going to develop "artificial intelligence" by building massive computers and data centers. How absurd is that?
No one who is knowledgeable about AI actually believes this (based on your definition of AI).
> Folks, to build intelligent machines, we have to build thinking machines and for this, we'll have to truly understand how the mind works and when we do this, I believe we'll be quite surprised.
You ask the question "what is a thought" above, then state we need thinking machines to make intelligent machines. One could argue machines today think; one could argue AlphaGo "thinks". Using your definition of intelligence, we have AI today that meets your requirements. Wisdom is knowledge with good judgement, and creativity is exploration with experimentation. There is plenty of academic work out there which covers all of this.
[1] https://medium.com/@karpathy/icml-accepted-papers-institutio...
[2] http://www.deepvideoanalytics.com/
[3] https://github.com/seetaface/SeetaFaceEngine