Kind of crazy the Google Cloud team is using gmail.js to build this extension. Gmail.js is a side project by Kartik Talwar who just graduated college and the codebase isn't exactly active.
I guess even Google can't find a good way to extend Gmail with their own APIs...
Google isn't a monolith. The author is on one team that wanted a way to encourage use of its services. They used an open source project to achieve it. Doesn't seem _that_ crazy to do rather than wait for some other team to build you an API.
The original comment is meant as an observation rather than an attack. It's great to see the author of the post use gmail.js to quickly get the sentiment analysis working with 30 lines of code.
The library was a fun side project and has been used by many companies (big and small) over the past few years. It's always nice to see someone make a cool extension on top of Gmail.
Here is another one that came out a while ago (uses gmail.js) and on the same theme as this original post:
Thanks for the library! It's super awesome, useful, and easy to use. And yes, it made writing this ridiculously simple, it's borderline embarrassing how little code I wrote, but that was also part of what made it neat to me.
This was just a "let's spend a few hours playing with the Natural Language API" hack day type thing. Robbie's comment perfectly explains the context behind it. Google search pointed me to your library, which fit the need perfectly well, so that's what I used, and I really appreciate that you wrote it and gave it a permissive license!
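For anyone curious what the sentiment-scoring half of a hack like this roughly looks like, here's a minimal sketch in Python using the Cloud Natural Language client. This is not the extension's actual code (that runs in the browser via gmail.js); the client library usage is standard, but the "angry" threshold is an arbitrary assumption for illustration.

```python
# Minimal sketch: score a draft's sentiment with the Cloud Natural Language API.
# Assumes the google-cloud-language package and application-default credentials.
from google.cloud import language_v1

def score_sentiment(text: str) -> float:
    """Return the overall sentiment score, from -1.0 (negative) to 1.0 (positive)."""
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text,
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    response = client.analyze_sentiment(request={"document": document})
    return response.document_sentiment.score

if __name__ == "__main__":
    draft = "I can't believe you shipped this without testing it."
    score = score_sentiment(draft)
    if score < -0.25:  # arbitrary cutoff for "this draft reads as angry"
        print(f"Warning: this draft reads as negative (score={score:.2f}).")
```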
Anytime someone has a scary thought about ML, it all comes down to control problems - e.g. the machine having the final say about what happens without a human being able to override it.
In Gmail now, if you say "attached" in your email but don't attach something, you can still send the message; it just gives you a warning. You're consistently shown improved interfaces and content because of ML, yet those aren't scary and you don't feel out of control.
I think people have seen way too many scary movies about AI, and it's tainted our whole discipline.
Look at red lights as an example. Those are just primitive robots. Yet, when I'm sitting at an empty intersection at night, even though I have excellent, advanced sensors in my eyes, I'm not allowed to just run a red light.
Red lights aren't just a "helpful warning." The robots already have the final say in that arena, because human law (for now) backs them up.
It's not based on scary movies... primitive robots already force me to waste my time on a nightly basis.
The town where I grew up has a red light which only changes when a weight sensor is tripped. But weight sensors are expensive, and the approaching two-lane road only has a sensor in one lane. Worse, it's calibrated for cars, so motorcycles never trip it.
Yes, only. If you ride up late at night on a motorcycle, you have a choice between committing a serious traffic offense and waiting several hours for the light to change. No, this problem isn't ever going to be addressed.
I think it's damned reasonable to be scared of what we'll get when people hand authority to machines. Not for Terminator reasons, but for simple bureaucratic stupidity.
Are you sure it's a weight sensor? It's usually an inductive loop[1], which detects metal. They're supposed to detect motorcycles, but are sometimes incorrectly calibrated. I've been able to trigger most troublesome detectors by tilting my motorcycle while on top of it.
> If you ride up late at night on a motorcycle, you have a choice between committing a serious traffic offense and waiting several hours for the light to change. No, this problem isn't ever going to be addressed.
It's probably already addressed: run the light. Check the regulations of your local jurisdiction, but it's very likely that the light is treated as "broken", and you are allowed to proceed after verifying that the intersection is clear. Yes, I've tested this IRL.
If it's an intersection you deal with regularly, call the local DOT, tell them about it, and they'll come recalibrate the sensor. Yup, tested that IRL, too. A buddy of mine said that in his case they met him at the intersection so they could make sure his bike tripped it when they were done.
And, BTW, it's not a weight sensor but an induction loop. Meaning that turning the bike off and restarting it can trip the light. Something something electricity through the starter windings.
It's worth noting that in this situation, the corner-case problem gets worse the smarter the (mechanical) robot becomes.
We've already seen how poorly designed test-data and training combinations can result in neural nets that behave in a racist fashion. More broadly, neural nets have been enlisted in quests like finding insurance risks or security risks, and here they could have really bad effects if they get veto power.
> If you ride up late at night on a motorcycle, you have a choice between committing a serious traffic offense and waiting several hours for the light to change
Uhh, or
(A) Back up, pull over your motorcycle, walk the crosswalk(s) as a pedestrian with a 'bike', get on motorcycle on the side of the road you want to go on, continue; or
(B) Make a right turn + U-turn as many times as needed to reach the desired turn? (3-way intersections can pose a problem, but the through road is most likely defaulting to green and if not you can pull over...)
If your municipality treats either of the above as a 'serious traffic violation' yet still won't spring for better sensors (which likely exist for more mainstream compatibility), then you're just living with idiots at that point (or they just don't like motorcycles :P)
Gmail does this already. Try and attach a Windows exe to an email. I have valid reasons to want to do that, but have to find workarounds.
It seems happy with uploading the same file to Google Drive and putting a link in the email. Though that doesn't solve my use case of sharing with Chinese nationals who are blocked from those URLs. I'm also not clear on how/why this is somehow more secure.
Can't you zip up the .exe in an encrypted form and send the password? This is better anyway, since even if Gmail let the .exe go out, the spam filter at the other end would likely strip it.
Brilliant example! Though it seems more an example of humans giving a robot the final say, rather than a robot earning the final say based on the merit of its predictions.
Right. It is just like explosives. There is no problem with leaving agents of rapid chemical reaction lying around where kids may find them; the problem is the kids.
That's a terrible example. He's talking about control and overrides; in that context, a red light is a literal counterexample of your point.
You can violate systems with overrides, and doing so might have consequences, but that's distinct from systems you can't override, which was kemendo's point about our fear of where the control lies. (If tomorrow we decide to swap the meaning of red and green, that control is still humans'; the lights won't do anything about it.)
A self-driving car that locks up if you try and run a red would be an example, but as you didn't say that I'm guessing you just missed the point.
A driver cannot control nor override a light -- they can't make it change colors quickly when an intersection is obviously empty. They can only ignore it entirely. That's different.
You wouldn't be "overriding" killer robots by hiding in underground tunnels. You'd just be avoiding them entirely.
My main argument though is that people's distrust comes from extrapolation of negative real life experiences, not just from the world of fiction.
That's not the machine though, that's the inflexibility of the human legal process. It's more than possible technologically to manage that exact problem (and it is implemented elsewhere): sensors to detect a car and change the light, or blinking yellows after a certain time of day.
So it's not the machines, it's humans ceding power to them voluntarily.
>The robots already have the final say in that arena, because human law (for now) backs them up.
But traffic lights don't "have a say", they're put up and have their mechanics defined by humans. Neither does electronic time recording have a say on how many hours of work are required for an employee to get a paycheck.
- Taking away that control is just one parameter, one preset, one configuration setting away
- We don't exactly have the best track record for consistently picking people we can trust not to fuck with those configuration settings for their own ends
> I think people have seen way too many scary movies about AI, and it's tainted our whole discipline.
I think people have been burned too many times by their 'authorities who know better', and fortunately at least now have a sense of where the slippery slopes are (if not exactly how best to navigate them).
I'd pick erring on the side of caution over sliding down one of those cliffs any day. If you think it's such an impairment to progress, feel free to live in a non-paranoid place; but if "it's only paranoia when no one is actually trying to break it for their own gain", then as far as I'm concerned it's rarely, if ever, going to be 'just paranoia'.
(With that said, this particular post isn't too scary, and just reduces to the "control your hardware" problem. That, and 'control your networks' are the big ones for now.)
So, I disagree a little. The first-order problem is one of control, as you describe. The second-order problem is one of comfort with control, and the normalization that that entails. Once you accept a certain piece of automation, you tend to accept its failures alongside its successes. To continue on from the GP, it is entirely likely that an error message along the lines of "your phrasing does not conform to polite societal norms" (to avoid the term 'groupthink') adds one extra layer of friction between an intention and an act. This can be a good thing (your Google example), but it's not universally a good thing, because of possible conditioning effects that can have subtle and long-lasting implications.
I agree that it's not scary in the "Skynet is going to take over" sense. Much rather in the "They Live" sense, though even without the bare malicious intents there.
> I think people have seen way too many scary movies about AI, and it's tainted our whole discipline.
I think this is ridiculously unfair. My concerns about policies like this have nothing at all to do with strong AI. We don't need scary movies to see examples of software decisions made with no access to human overrides.
A simple and extremely topical example: years ago, my Google account somehow got screwed up. It decided I was 13, and never aged. Six years after I made it, I was still 13.
It turned out there was no way to fix that. If you were under 13, they would close your account for ToS violation. Then, after closure, you could fax proof of age and get your account fixed. But if your account was listed at 13, forever? No such luck. There was no way whatsoever to fix the problem. No phone number, and (as with all Google problems) lengthy support forum threads amounting to "deal with it".
I have no idea how this happened. But it was completely irreparable, until one day the bug got fixed and my age was right. That's the stuff I worry about - simple rules being applied absolutely, with no recourse to human judgement.
> I feel uncomfortable when my computer physically struggles with me. Sure, I can overpower it now, but it feels like a few short steps from here to the robot war.
Another step towards this war is the Bluetooth connection in my wife's car. As a passenger, I cannot connect my phone to play music, get directions, or make calls unless the car is in park, because "This function is not available while moving". And I hate it. We don't want to sit and wait to set off while its 8-year-old Bluetooth hardware pairs; we want to get going and let the navigator do the navigating.
Fortunately, this contrasts with the Waze navigation app's message box, which actually detects motion using GPS and displays "Typing is disabled while driving - OK / Passenger".
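That "OK / Passenger" prompt is really the whole difference between the two designs: the machine flags a condition, but a human can still override it. Here's a toy sketch of that logic (the get_gps_speed_mps() hook and the speed threshold are hypothetical, not Waze's actual implementation):

```python
# Toy sketch of speed-gated typing with a human override, as in the Waze prompt above.
# get_gps_speed_mps is a hypothetical callable supplied by the platform's location API.
DRIVING_SPEED_THRESHOLD_MPS = 2.0  # assumed cutoff, roughly walking pace

def can_type(get_gps_speed_mps, passenger_confirmed: bool) -> bool:
    """Allow typing when stationary, or when a passenger explicitly takes responsibility."""
    if passenger_confirmed:
        return True
    return get_gps_speed_mps() < DRIVING_SPEED_THRESHOLD_MPS

# Moving at ~15 m/s: typing is blocked until "Passenger" is tapped.
print(can_type(lambda: 15.0, passenger_confirmed=False))  # False
print(can_type(lambda: 15.0, passenger_confirmed=True))   # True
```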
Groupthink is mostly a problem because it happens subconsciously. A group that actively wants its members to think a specific way has always had other means.
When you're (as the leadership of an organisation) actively installing a filter, I think you'd be much more likely to install the one that says "Error: The message cannot be sent because it merely reinforces the majority viewpoint and does not add new information to the conversation."
Sorry, your content was removed because we determined that it was linked to fake news content and/or was deemed too offensive for some members of our platform.
I think anger is ok and justified at times, and not always just a consequence of being in the wrong state of mind, etc. I think we're more and more moving to a mindset where anger is just not really ever acceptable, either to feel or to display.
But, that's all a bit heavy for what this is trying to accomplish, which is certainly an understandable goal.
Anger is definitely not okay. Ideally, you say: "This project isn't working because you failed to gather requirements from the right users," or "You spent three months choosing a vendor, without proper due diligence, who doesn't have a working client library, documentation, or a functioning test endpoint."
However, the proper way to express anger in our enlightened business environment is passively and non-specifically: "I'm unsure that this project is accomplishing its goals," or "the team is having trouble with the vendor you chose," and then you schedule a 1:1 meeting to talk about it over a warm beverage, where you adhere to a 10:1 compliment-to-criticism ratio.
> Ideally, you say: "This project isn't working because you failed to gather requirements from the right users," or "You spent three months choosing a vendor, without proper due diligence, who doesn't have a working client library, documentation, or a functioning test endpoint."
Both of those statements are completely reasonable. Don't conflate directness with anger, berating, personal attacks, etc. It's entirely possible to be direct without attacking people.
Totally. It often helps to depersonalise the message by saying "this project isn't working because the requirements weren't gathered from the right users" tho.
This sounds like you're saying anger is never okay. Is that true?
Because I can think of many instances when there's something more important than "not being angry", and anger may be necessary to achieve that goal.
I also don't follow your prescription for roundabout criticism. I've worked successfully with many teams where we were much more direct, and we were very pleased with our outcomes, and fellow team members.
It's a concern I share. I know this is about a fun use of some high-tech sorcery, but life is already too calm for me sometimes. I see people jump through ridiculous hoops to avoid any and all conflict. There are industries built around relieving people of the stress of having to interact with each other.
True story: I sent a technically correct, succinct email to someone in another division. They complained up to their VP, who talked to my VP, and then the shit rolled downhill on me.
Allegedly, the problem was that I hadn't apologized that they were having a bad experience with a service that we offer.
I've also started questioning how I phrase my own work-related e-mails.
Yesterday I was sent one from a partner saying that some files we were to transfer to their FTP server (yeah, I know, it's 2017) were missing. So I logged into the server manually and checked -- the files were there, created at 5 AM, just like they are every day when the cron job that generates them runs. So I started my response by saying, "I just checked the server manually and the files were there. Why do you consider them missing?"
But then I started to think... would this reply come off as if I were questioning their intelligence? They must believe that the files actually aren't there, or else they wouldn't have sent me an e-mail asking. The same process has been running every day since February of last year, however, and presumably they had never failed to find the files before yesterday.
I just sent the e-mail anyway, but it is interesting that I've been conditioned to question myself over a very innocuous reply to make sure my tone was appropriate, so as not to risk offending the other party.
Well, in this case your reply IS very condescending, lmao. "I just checked the server manually and the files were there. Could you check again and let me know if they still aren't showing up for you?" would probably be a better way to phrase what you're saying.
Agreed on its own, but I would consider the personality sending it. A manager I worked with sent emails that sounded like Data trying to parse human communication and often seemed condescending, but in person the guy was very affable. Turns out he is just a poor communicator through text. If I got the above message from someone in another division, I would think he was being condescending on purpose.
Maybe I should also add that this partner has been a nightmare to work with for well over a year, so perhaps some of my annoyance with them in general is shining through. They have certainly talked down to me before in a very transparent way, when in fact they were the ones in the wrong, so I don't feel all that bad.
For example, there was a multiple-day e-mail back-and-forth debating XML namespaces for SOAP requests on another related project. My code generated XML with different namespace labels than those in their examples, so they told me it was "incorrect". I had to send them the specific section of the SOAP spec to illustrate that the labels can be anything as long as they are set to the appropriate URI -- they're labels. They didn't believe me and thought that the differing namespace labels were what was causing the requests to be rejected on their end... until a few days later, when it turned out to be a completely unrelated bug in their system.
> I just sent the e-mail anyway, but it is interesting that I've been conditioned to question myself over a very innocuous reply to make sure my tone was appropriate, so as not to risk offending the other party.
That's a good habit to get into, particularly given how many of our usual non-verbal cues don't come across in email.
I often think to myself "I'm about to send a mail to X hundred/thousand people; it's worth my taking a couple extra moments or even minutes considering exactly how it might come across".
I think you're right, but anger actually works better in person.
The problem with anger over text, especially email, is that the angry message sits there and stews and stays angry at the recipient for hours or days. And they have lots of time to stew themselves and consider their response and go over lots of counter-anger thoughts in their own mind, before sending something back.
In-person anger can come and go quickly; it gets resolved; it doesn't continue for hours and hours and wear people down. Especially the way men interact (in some subcultures), anger is just a way to communicate a sense of urgency or import, and can quickly reverse into relief and camaraderie once it's acknowledged. (Of course this isn't universal, but it is a style that can work depending on context and culture, and if it's not overused).
Of all the Deep* and *.ai branding going around, this is by far the best yet. Email is a great starting point, since we're often more prone to being (brutally, rudely) honest in isolation. Is there a wearable that does this sort of thing with biometrics and/or sound monitoring?
I could see this being applied in general customer service. Provide a sentiment analysis on the way in, then determine the best way (sentiment) and time to communicate a response.
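As a rough sketch of how that triage could look (the score_sentiment() wrapper and the cutoffs below are assumptions for illustration, not any particular vendor's product):

```python
# Sketch: route inbound customer messages by sentiment score.
# score_sentiment is assumed to wrap whatever sentiment service is in use,
# e.g. the Natural Language API call sketched earlier in the thread.
from dataclasses import dataclass

@dataclass
class Ticket:
    customer: str
    body: str
    sentiment: float = 0.0
    queue: str = "standard"

def triage(ticket: Ticket, score_sentiment) -> Ticket:
    """Score an inbound ticket and route clearly angry customers to a faster queue."""
    ticket.sentiment = score_sentiment(ticket.body)
    if ticket.sentiment < -0.5:      # assumed cutoff: escalate immediately
        ticket.queue = "priority"
    elif ticket.sentiment < -0.1:    # mildly negative: answer same day
        ticket.queue = "same_day"
    return ticket

# Example with a stubbed scorer:
angry = Ticket("alice", "This is the third time your product broke!")
print(triage(angry, lambda _: -0.8).queue)  # "priority"
```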
Slippery slope - This seems like a couple of steps away from forced censoring of emails. I can see this becoming mandatory in emails and messages. But who would control the filters? Will extra metadata with a content rating be sent along with the message?
Google offers their consumers as products to data-mining/ad companies. This is like telling the farmer that ignoring the angry "moos" of their cattle is censorship.