Hacker News
What’s New in C# 7.0 (microsoft.com)
322 points by runesoerensen on Aug 25, 2016 | 268 comments



Pattern matching! Tuples! Awesome, C# 7.0 is scratching two itches I've felt every time I've worked with it.

If only .NET had a suitable cross-platform UI toolkit, then I'd be using it everywhere. Eto.Forms is a good attempt, but I found it rather buggy and limited at this point in its development. Avalonia, while further along in terms of stability and features, doesn't even try to fit in with any platform's look and feel.


Very glad to see tuples and pattern matching also. But allowing the tuples to be structs with mutable public fields seems off. Once a tuple is constructed it shouldn't be mutable IMO, unless I'm missing some important use cases?


Quick'n'dirty construction of custom mutable trees, I guess? Don't know about important. Best guess I have.


This is the feature I've been waiting for!

Correct me if I'm too excited, but won't these be an excellent alternative to maintaining lots of DTOs for SQL calls?
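For what it's worth, a minimal sketch of the kind of thing that could replace a one-off DTO (the table, columns and GetOrderSummary name are all made up; tuple syntax per the article):

    using System.Data.SqlClient;

    static (int Id, string Name, decimal Total) GetOrderSummary(SqlConnection conn, int orderId)
    {
        using (var cmd = new SqlCommand("SELECT Id, Name, Total FROM Orders WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", orderId);
            using (var rdr = cmd.ExecuteReader())
            {
                rdr.Read();
                // No OrderSummaryDto class needed just to carry three columns back.
                return (rdr.GetInt32(0), rdr.GetString(1), rdr.GetDecimal(2));
            }
        }
    }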


I was thinking the same. I never really liked having those; they'd also be good for handling responses from services.


Why not WinForms? Very convenient, and it mostly works in Mono.


WinForms is a dead end. It doesn't support Windows modern or Mac OS X, and it looks completely out of place on Linux.


> It doesn't support Windows modern

That's a good thing.

> or Mac OS X

"Does Winforms run on OSX?

Yes, as of Mono 1.9, Winforms has a native OSX driver that it uses by default"

http://www.mono-project.com/docs/faq/winforms/

> it looks completely out of place on Linux

On KDE3? KDE4? KDE Plasma? Gnome2? Gnome3? MATE? Cinnamon? LXDE? LXQt? Xfce? CDE? GnuStep? Unity? Enlightenment? Non-DE X Window System environment - with which Window Manager, with which theme?


Going by the documentation you linked yourself:

> In terms of integrating visually with the desktop, we currently ship with a classic Win32 theme.

That sticks out like a sore thumb everywhere, including every Win32 release since Windows 2000.


The standard Win32 theme is the classic one, an engineering marvel. Everything else that came after it - starting with the infamous native Windows XP theme - was and is a horrible mess.


I'm pretty sure that it looks out of place in all of the desktop environments that you have mentioned.


On Windows it takes the style of whatever window theme you're using, what do you mean?


"mostly works" is rather… optimistic.


Feels kinda crazy to see tuples and pattern matching "just arrive" when other languages have had them for years.


What's your bar for "other language"? Remember that C# is a mainstream language, with all that entails in terms of stability and conservatism of language design (and the other way around - it's popular partly because it's conservative). Comparable languages are Java, C++ and the like; and in that category, C# is, generally speaking, more on the "progressive" end of the spectrum. Comparing it to something like Scala is rather apples and oranges.


A big advantage of this design over most of the other languages with tuples that I've seen is that their members can have names - that is, types can be anonymous but have named members. This I think hits a sweet spot as it's rare to not want fields to have names, but fairly common to want intermediate types that just bundle a few fields together where there's no good name that clarifies much.
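A small sketch of that sweet spot, reusing the article's LookupName example (the body here is made up):

    (string first, string last) LookupName(long id)
    {
        // fetch from wherever; hard-coded for the sketch
        return (first: "John", last: "Doe");
    }

    var names = LookupName(42);
    WriteLine($"found {names.first} {names.last}.");  // anonymous type, but with named members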


Anonymous items with named fields are always only just one name away from being structs. That seems to me to be the least-sweet spot in the possibilities of (named/anonymous item) x (named/anonymous fields).


It's a very big step in a language with nominal typing. Tuples with named fields are compatible (even across assemblies, if I remember correctly) so long as order, names and types match. Structs are not compatible in that manner.


Nice point. There's also the fact that with deconstruction being added as well, you don't even have the overhead of declaring a struct variable to get the return value and then dereferencing the fields; you can just deconstruct the tuple into variables if you so wish.
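For example (using the article's LookupName example again):

    // Deconstruct straight into locals - no temporary tuple variable,
    // no .first/.last dereferencing afterwards.
    (string first, string last) = LookupName(42);
    WriteLine($"found {first} {last}.");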


> Anonymous items with named fields are always only just one name away from being structs.

That's a huge line to cross though! Once you give it a name then you generally have to give it a home somewhere.


Out variables, and I imagine the various sorts of tuples, are created on the stack. Structs can be stack-allocated, or they can be heap-allocated, which has performance considerations that may be important in a given program. This feels much more like a case of the right tool for the job than anything else.


Every language has something missing, but C# has plenty of other great features so tuples weren't really that big of a problem. There are also plenty of built-in containers like the Tuple class that can offer the same functionality.


Exactly. The biggest one that comes to my mind is generics, which are missing from many languages.


A tuple struct fills nearly zero of the real use cases for language level tuples. No pattern matching makes it sort of useless.


What about returning multiple values from a function without declaring a dedicated type? That is pretty useful.


Sure. Like how streams "just arrived" in Java yet C# has had the equivalent for years.

No language has every feature. C# is making enormous strides ahead of many of its counterparts.


Java's a pretty extreme outlier for late or non-adoption of modern features, but it is true that Java is the closest direct competitor/peer to C# in terms of usage/marketshare. C# seems to be steadily shifting from being a language which initially (v1.0) looked like Java to one which looks increasingly like Scala (e.g. addition of pattern matching, tuples, local methods etc.).


Java is generally C#'s main competitor. Comparing them is the most appropriate comparison.


Scala seems to be vastly more expressive while simultaneously having only 20% of C#'s feature set.


F# would probably be a better comparison peer to Scala than C# would IMHO


There has always been the Tuple class, available since .NET 4.

https://msdn.microsoft.com/en-us/library/system.tuple(v=vs.1...


There's a few downsides to the Tuple class as mentioned here: https://github.com/dotnet/roslyn/issues/347

Namely, heap allocation and the inability to name tuple members.


And a very verbose syntax: you have to repeat the types of the members everywhere.


Tuple.Create(3, "hello", true)

Type inference FTW.


I find that the verbosity can be minimised by type aliasing. E.g:

using Complex = System.Tuple<double, double>;

Edit: Clarity


But then you might as well declare a class or structure to store the data. The point of tuples is to be self-contained.


It maybe wasn't clear, but I was referring to the parent's point about verbosity when using the 'old' Tuple class.

    using Complex = System.Tuple<double, double>;
is less verbose than

    public class Complex
    {
        public double i { get; set; }
        public double j { get; set; }
    }


Not quite equivalent, since System.Tuple is immutable. So, at minimum:

  public class Complex {
    private readonly double _i;
    private readonly double _j;
    public double I { get { return _i; } }
    public double J { get { return _j; } }
    public Complex(double i, double j) {
      _i = i;
      _j = j;
    }
  }
But System.Tuple is also IComparable, IStructuralComparable, IStructuralEquatable. I haven't had enough coffee yet to add all the boilerplate for that to the above, which only reinforces the point about verbosity.


Although with c# 6.0 (current release) you can simplify that to:

  public class Complex {
    public double I { get; } 
    public double J { get; }
    public Complex(double i, double j) {
      I = i;
      J = j;
    }
  }


Even before C# 6.0, you could simplify it this way:

  public class Complex {
    public double I { get; private set; } 
    public double J { get; private set; }
    public Complex(double i, double j) {
      I = i;
      J = j;
    }
  }


Sometimes it feels like we're entering a new age of coding. C# is so full of "features" that code refactoring tools like ReSharper, CodeRush and JustCode became simply indispensable. It is quite hard to be sure, every time, what is the most elegant/compact way of doing things.

In the .NET/Visual Studio world, the language tooling (IntelliSense, visual designers for XAML & WinForms, etc.) became as important as the language semantics or the availability of frameworks and documentation.

A long time ago there was a fad known as CASE (Computer Aided Software Engineering) that advocated doing for programming what CAD did for other engineering disciplines. It seems we're finally getting there.


The most productive I've been as a developer was 20 years ago when I was creating VB6 apps and components...


So true. Between 1998 and 2000 VB6 was perfect, you could develop Windows GUI applications in a rapid manner. The VB6 IDE, the Windows UI guidelines, the VB6 API - everything was great. And PlanetSourceCode.com was what SourceForge was later and what GitHub is nowadays. When I read about Microsoft's vision for .Net in 1999, about applications running as a service over the internet (that was their original idea), I thought that would end the great Windows platform, and I started looking for alternatives. Microsoft dumped VB6, but it took them until 2003 to come up with .Net with a different implementation; it (C# 1) was basically a Java clone with small syntax improvements. The biggest fail was the announcement of VB.Net and their lackluster Channel9 video showing the third-party-developed converter wizard (VB6 to VB.Net), which barely worked and was free only in the lite version. In the meantime I switched to C, C++, PHP, etc. The web replaced desktop applications in many cases anyway, and nowadays Android and iOS are important targets too. Sadly, rapid development never got as easy as it was with the VB6 IDE, with HTML4+PHP in Dreamweaver being almost as fast. But nowadays the GUI builders aren't as polished and well integrated, and they don't support all features of the supported languages - so many skip the GUI builder and write the GUI code by hand.


Edit-and-Continue is the greatest productivity feature ever added to a development stack for the real conditions one encounters in large projects, and it astonishes me even now how misunderstood its purpose is.

Consulting has me moving jobs a lot, and I encounter a veritable kaleidoscope of crappy work. Being able to correct minor issues in scope without restarting, while ten form posts deep in an archaic WebForms app with dodgy "we-didn't-know-so-we-rolled-our-own" state management, has been a lifesaver. Thank you, nineties VB team; it's because of you that I'm not rocking back and forth in a padded cell making animal noises today.


If you've ever tried JRebel or even the default hot-swap feature on the JVM, the Edit-and-Continue feature feels like a bad joke though. Any non-trivial change requires a restart, and you are forced through a series of distracting button presses and dialogs any time you want to use it.

In Eclipse (for example) you just start typing code (none of this silly locking the IDE while debugging), and the debugger will update the running program instantly. With JRebel you can even change things structurally without restarting.

Oh how I would love an actually working hot swap feature for .Net.


Does Edit-and-Continue ever work? I'm curious, because I'm almost always forced to build x64-only, because of some libraries we have to use that are 64-bit only, and I've never been able to change code while the debugger is running. It is supposed to work in VS 2015, but I'm not seeing it - it's always the nasty message box saying "Changes to 64-bit applications are not allowed".


Build 64-bit only or release 64-bit only?

I only ask as I can usually manage 32-bit builds on dev while working, then just test that there are no spooky differences when I target 64-bit.

There are issues with some projects that aren't well documented. E&C on web apps on local or remote IIS is a no-go (at least to the best of my knowledge), whereas if you can target IIS Express during development it works beautifully.

I've always found that the juice has been worth the squeeze, for values of squeeze that only require me to tweak configuration for my dev box.


I was a huge fan of E&C when I was doing VC++ in the 00s.

I'm not sure about other languages, but with Erlang's hot loading (plus the sync[1] project) I find that aspect to be nearly as good as E&C in Visual Studio.

While it's not quite at the level of 'stop the code from executing, change something, continue', it's close enough for practical usage - save a file, and the underlying module is more or less instantly recompiled and loaded.

[1] https://github.com/rustyio/sync


> Between 1998 and 2000 VB6 was perfect, you could develop Windows GUI applications in a rapid manner

Delphi was much better.


I miss Borland.


Since 1984 (SideKick)


I had to do some maintenance on old and messy VB6 code one year ago. I studied best practices to improve the code while fixing bugs.

In the end, I found it hard to write maintainable code for complex applications. And it was not specifically because of the IDE but more because of the way you have to organize the code.

But I did not spend more than some months on it so maybe I'm wrong.


VB.Net development is not that different from VB6. It is a far more sophisticated language and can therefore do more complex things. But the basic things use pretty much the same syntax as they did in VB6.


It is and it isn't. VB.Net is basically everything from C# with a syntax heavily inspired by VB6. But the similarity ends there. The RAD IDEs of VB6 or Delphi are unmatched, and the compatibility is gone. As WinForms is legacy as well, the whole experience is different anyway. People would like to run their Win32 apps on their smartphone; nowadays you have more luck with Android than with what Redmond comes up with.


> People would like to run their Win32 apps on their smartphone

And they did, actually. Windows Mobile - the original Windows Mobile, 2000-2012 - had 42% smartphone market share in 2007, a native Win32 API (and the .Net Compact Framework for C# and VB.Net too), a WinCE kernel, and it worked on x86, MIPS, ARM and SuperH CPUs.


I loved WinCE/WinMobile back then, except for RAM as document storage - only WinMobile 2005 or so and later supported Flash storage as well. It did, though, use yet another Win32 API implementation with fewer functions, and you had to recompile your code for the target (another reason being that mobile devices had ARM-based CPUs).

All that people like me want is the NT kernel, the classic Windows UI, and the WinAPI with transparent JIT of x86 exe applications on a mobile device (like WinNT on MIPS or DEC Alpha, or Rosetta on OS X). And no nonsense like the Metro crap from Win8/10.


Add to it a resistive screen with a stylus - because it allows much more information density than gigantic interface elements catered to a finger-driven UI (fingers cover half the screen, unlike a thin stylus) - a hardware QWERTY keyboard, and a small optical trackpad which either moves a cursor on the screen or works like a two-dimensional Tab key cycling focus between UI elements, depending on the mode. A removable battery and a multi-color programmable RGB LED would be nice to have too.

I loved it until the 'trendy' movement toward finger-driven UIs started, around version 6.0.


You mean back when you could type a whole ten characters without twenty new languages and frameworks coming out showing you how obsolete you were? How lame!


Did you ever maintain vb6 apps? Because I still want to cry thinking about the times I've had to.


I have. If they were designed well then they are normally OK to maintain. If they were developed badly then they can take extra effort. Much like any other project in any other language.

If you had a large old VB6 project built using the MVP pattern you would be laughing.


    Much like any other project in any other language.
Yes, you can write good or bad code in any language. But some languages have an endemic culture of not caring about good code. IME, VB is one of them, as is PHP.

All the VB6 code I've seen has been a random mashup of direct databinding and click handling. The tools encouraged this approach.


I see your point to a degree. The tools made it easy for folks to get apps up and running quickly, RAD being a popular buzzword at the time. I don't think it's fair to say the culture was that of people who didn't care about the code.

I've certainly seen lots of terrible VB6 code, but also lots of VB6 apps that were better designed than some WinForms C# apps I've seen. I know this is all subjective.


My experience tells me that any company whose business isn't selling software, doesn't care about the code.

I have learned the hard way that they don't care one single second about quality, only that it works the way required to support the business; anything else is secondary.


In some sense it's true, but what's your point? Even in those companies, a better language/framework can go a long way toward encouraging better code quality, even if the boss gives zero shit about it.


The boss giving zero shit about it means not getting the approval IT requires to support anything you might want to do.

In such environments even root access is time limited to a few hours and requires a ticket asking for it with a reason that should be validated by the boss.


That type of programming never went away.

That is what I was doing on Windows 3.x and have done while coding in Mac OS X and Windows.

Also the reason I never quite adapted to the UNIX mentality of VI/Emacs.

Even though I have spent enough years coding for UNIX that Emacs and I are quite well acquainted with each other.


There's a Vi plugin for almost everything. Highly recommend VsVim for Visual Studio.


Except I don't like VI at all.

I know the essentials for when Emacs isn't available, which is quite common on commercial UNIX installations - learned back when Xenix was still being sold - and I don't intend to learn more than that.

Thanks anyway, I know you had good intentions.


"WinForms" is "language tooling"? "XAML" is "language tooling"? By what definition?

I can write server-side C# in vim (without omnisharp) just fine, actually.

Maybe the tools became "indispensable" because they add value? I refactor my code in vim but it would probably be faster in VSCode. That doesn't make C# a bad language.


C# has (had? I'm still a couple of versions behind) so much syntactical overhead that writing code without Visual Studio seems painful at the very least.

The var keyword was the first time it was even possible to write C# without tooling. It has definitely made it possible for me to use Vim for editing C# code. Newer features, such as lambda expressions and some of the much nicer property syntax, make me feel that it may actually be fun to write C# from scratch without Visual Studio looking over it.


Compared to what languages ?

I always thought of C# as the improved, less verbose Java. I wouldn't compare it with Python or other dynamic languages, but I prefer it to most compiled languages (especially at the time it was created).


When I read their comment I thought of how much annotation of static types you have to do that compilers in other languages are smart enough to figure out on their own. Consider the following snippet in C#:

    public static IList<Ephraimite> FindEphraimitesToKill(IList<Ephraimite> ephraimites)
    {
        IList<Ephraimite> ephraimitesToKill = new List<Ephraimite>();

        foreach (Ephraimite ephraimite in ephraimites) 
        {
            if (ephraimite.Speak("shibboleth") == "sibboleth")
            {
                ephraimitesToKill.Add(ephraimite);
            }
        }

        return ephraimitesToKill;
    }
Now, the same in (imperative-style) OCaml:

    let findEphraimitesToKill ephraimites =
      let ephraimitesToKill = ref [] in
      List.iter (fun (ephraimite) -> begin
        if speak(ephraimite, "shibboleth") = "sibboleth" then
          ephraimitesToKill := !ephraimitesToKill @ [ephraimite] 
      end) ephraimites ;
      !ephraimitesToKill;;
See how much less we had to specify the type of what we're handling here? And things would have gotten easier if we used functional-style OCaml, but that's not entirely a fair comparison.

So hypothetically it should be possible to write a C# compiler that would, where type annotations were omitted, be confident enough to infer a sensible type - which is why we call the current version verbose. Although I suppose it's slightly more complicated than that. (Interfaces, if we want to have them, need to be explicitly declared as there are potentially many interfaces that could be used for a given object, and the rigidity imposed by redundant type declarations can help with the health of large and long-lived codebases.)


I'm probably missing your point, but

  public static IList<Ephraimite> FindEphraimitesToKill(IList<Ephraimite> ephraimites)
  {
      var ephraimitesToKill = new List<Ephraimite>();
      foreach (var ephraimite in ephraimites)         
          if (ephraimite.Speak("shibboleth") == "sibboleth")            
              ephraimitesToKill.Add(ephraimite);                    
      return ephraimitesToKill;
  }
or:

  public static IList<Ephraimite> FindEphraimitesToKill(IList<Ephraimite> ephraimites)
  {
      return ephraimites.Where(e => e.Speak("shibboleth") == "sibboleth").ToList();
  }
is not that verbose.


The thing is, though, would those be allowed by style guides at the enterprise companies where C# is most common?

The C# style guide of a former employer of mine (an enterprise C# user) forbids both of these snippets because of the unbraced statements in the former and the LINQ and lambdas in the latter. But admittedly I don't know what's common among C# users, so maybe they were in the minority.


5-6 years ago, enterprise guidelines preventing LINQ/lambdas were more common. Current C# practice at enterprise companies (I'm familiar with 6+ Fortune 500s and have not seen or heard of them being banned in the last 4 years) definitely allows lambdas and LINQ.

The "required brackets" rule is more common, but I think it's a good rule :). Readability is only slightly hindered by the extra brackets, but I've seen quite a few errors from

    if(shibboleth)
        DoSomething();
        DoSomethingElse();


The brackets, maybe. Preferences for brackets for single lines vary. I personally don't like them, but many code style guides (including MS's) recommend them.

The LINQ version should be allowed almost everywhere that has a good development group. Sometimes LINQ queries can get hairier than the equivalent foreach, but I've never worked anywhere that would frown upon the shorter/cleaner LINQ.


I think it's strange to compare languages and then say that the point doesn't stand because company code guidelines don't support the more compact version.

I don't like the version without brackets, but I don't think the code would become less compact if you added them.


Second time I'm going to mention VsVim here. You still have to wait a month for VS to start, but you get enough of Vim to be pretty productive.


For the last decade or so the C# world has been moving away from those ill-conceived tools. WinForms/WebForms were drag'n'drop based and have since been replaced with XAML and HTML. SSDT has given way to migration-based approaches.

IntelliSense has become its own lightweight thing, consumable from more tools, and you can code just fine without it. ReSharper I've never used; I find it gets in my face more than it helps.

If you want to see a tooling based approach, go look at the state of android development these days. That's why we see stuff like react native becoming popular.


SSDT-style database upgrades are still the most popular, since that's what most of the Red Gate tools do, unless you use ReadyRoll.


With the migration tools listed above there is no need for any of the Red Gate tools.


What tools listed above? I haven't seen any DLM tools mentioned.


Sorry, it was in another post:

> My go to migrator is fluent migrator (https://github.com/schambers/fluentmigrator) or flywaydb if anyone objects to the c# (https://flywaydb.org/).


They've been around for a long time.

I don't think they will become as popular as Red Gate, since you have to write the migrations yourself. That's a lot of additional work.


They run deterministically though, making them much safer. I've done migrations with these tools that would be impossible with Red Gate; clients could be several months out of date since an upgrade and the migrations would just work. I've seen Red Gate/SSDT need manual coaching after just a few weeks.

I also think you're overestimating the popularity of Red Gate. A minority of places use it, whereas a lot use EF migrations.


Literally every place I've worked uses Red Gate.

I've never had an issue with it, if you make sure your database is constantly up to date so that changes are small.


> Literally every place I've worked uses Red Gate.

I've had the opposite experience; I've only been at one place that used it and a couple that considered it.

> I've never had an issue with it, if you make sure your database is constantly up to date so that changes are small.

This is not an option in many places, if you have 3-month release cycles for instance (although I favor continuous deployment). Another case is when you have apps installed on site; some clients can be multiple versions behind.


I don't know about that. I'm working on a codebase that heavily uses C# 6.0 features, and for the most part, the "most elegant/compact way of doing things" is disseminated via code reviews, and quite fast at that. Convenient things stick around and become idiomatic precisely because they're convenient.

Then again, you don't have to always go for the most elegant/compact thing, either.


All I want is easy syntax for immutable records, and a C#-like fluent syntax for .WithPhoneNumber(phoneNumber) etc. Seems like it'd be a cinch to implement, and for me it's the single handiest feature from F#.

In F# these are "guaranteed" to be non-null, even though null instances come up during deserialization all the time. For C# use cases, I don't think the non-null "guarantee" should be made or implied - just a POCO.
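For illustration, the hand-rolled equivalent in current C# - exactly the boilerplate the language could generate (Person and WithPhoneNumber are made-up names):

    public sealed class Person
    {
        public string Name { get; }
        public string PhoneNumber { get; }

        public Person(string name, string phoneNumber)
        {
            Name = name;
            PhoneNumber = phoneNumber;
        }

        // Non-destructive update: returns a copy with one field replaced.
        public Person WithPhoneNumber(string phoneNumber) => new Person(Name, phoneNumber);
    }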


Hi there! C# language designer here.

You can see (and participate in) the discussion on records here if you'd like: https://github.com/dotnet/roslyn/issues/10154

> Seems like it'd be a cinch to implement

Ah... how i wish that were so :)


Indeed. While records are my favorite feature from F#, they could be improved.

I'd love it if record definitions were extensible (even abstractly--sometimes you want a record with just the data, and sometimes including db-centric "id, createdTime, etc"), and there should be a way of defining / converting between them without a ton of boilerplate. That would allow something like:

record DbStuff = { id: int; created: DateTime }

record UserRecord: DbStuff { name: string; ... }

var userRecord = new UserRecord(...)

var userData = userRecord.WithoutDbStuff() // but what would be the reflected name of this type?

var newRecord = userData.WithDbStuff(3, DateTime.Now)

----

Sometimes you want to be able to define records

PhotoData{source, metadata, yada, etc, url} and

PhotoDisp{source, metadata, yada, etc, bytes}

without the repetition, and again with an easy way to convert between the two. (And yes you could simplify the above by using containership but oftentimes there are cross-cutting things that containership doesn't solve. You really want a flattening solution.)

Easy integration with C# anonymous classes should be considered too.

C# has always been the more real-world-centric language so I'd hope these common use cases would be considered.


I second this! The ability to easily create intersection types would be awesome!


Having done a lot of python recently, this looks great (and familiar).

But some things feel a little bit rushed:

- In the example "(o is int i || (o is string s && int.TryParse(s, out i))": when reading this statement as a human, o is obviously not an int by the time it comes down to the TryParse call. But if the first part were removed, the second part wouldn't be valid either. I know this is technically how declarations work, and I don't have a better solution for this, but it feels weird.

- The new features of the case statement are nice but the old syntax with the mandatory break is probably worth getting rid of. Especially since all cases share a scope, adding/removing cases is prone to errors. I'd love to see a solution similar to "catch".

- The Deconstruct method is introduced as a "magic name". No interface, no override, nothing... Even the return type is "void". Why not use something like "public out Point(...)" to keep it similar in fashion to the constructor. Other options may be something with "this" (like indexing) or "operator".
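For reference, the shape of that magic method, roughly as the article shows it for a Point type (GetPoint is hypothetical):

    public class Point
    {
        public int X { get; }
        public int Y { get; }
        public Point(int x, int y) { X = x; Y = y; }

        // No interface, no override - the compiler finds this by name and signature.
        public void Deconstruct(out int x, out int y) { x = X; y = Y; }
    }

    // enables:
    (int myX, int myY) = GetPoint();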


I was thrown by the first one you pointed out too, I'm still not sure I understand it.

The page said for type patterns it will "test that the input has type T, and if so, extracts the value of the input into a fresh variable x of type T", so if o is an int it'll be extracted to a fresh int i with that value

But the out syntax in TryParse isn't the new one they mentioned; it's the current one that requires predeclared variables - to be new it'd be out int i or out var i. So i is already declared as an int before this code example? In that case how can the first bit work? Does it not create a "fresh variable" if there's already one in scope with the desired name and type? That could be quite confusing; the usual behavior is a compile error if an inner-scoped variable conflicts with an outer one, and I'm not sure I understand why this should be different.


The expression

    o is int i
is effectively:

    int i; // in the nearest enclosing scope
    (o is int && (i = (int)o)) // in place of an expression
The "fresh" variable `i` is available for use elsewhere. If an `i` already exists in that scope (the block the `if` statement is in), that is a redeclaration of a variable which is a compile-time error.

If the "is" expression evaluates to `false`, the variable `i` will not be assigned, however, it will still be declared. Attempting to use `i` if it is not definitely assigned is a compile-time error. However, you have the opportunity to re-assign it.

Some examples:

    {
      if (!(o is int i)) throw new Exception();
      use(i); // works: i is declared and always assigned
    }

    {
      if (o is int i) { do_something(i); }
      use(i); // compile-error: i declared but might not be assigned
    }

    {
      if (o is int i) { do_something(i); }
      else { i = 0; }
      use(i); // works -- definitely assigned
    }
Thus the example pointed out in the post boils down to this in C# 6:

    int i;
    if (o is int) { 
      i = (int)o;
    } else {
      string s = o as string;
      if (s == null || !int.TryParse(s, out i)) {
        throw new Exception();
      }
    }
    // i is definitely assigned here.


> The Deconstruct method is introduced as a "magic name".

Collection initializers already rely on “magic names” (the `Add` method + `IEnumerable`), so I’d say that’s okay. It’s simply a compiler feature, after all.
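A minimal sketch of that existing convention - any IEnumerable with a compatible Add method gets collection-initializer syntax, no special interface required (TagBag is a made-up type):

    using System.Collections;
    using System.Collections.Generic;

    public class TagBag : IEnumerable<string>
    {
        private readonly List<string> _tags = new List<string>();

        // The compiler only looks for a suitable Add method by name.
        public void Add(string tag) => _tags.Add(tag);

        public IEnumerator<string> GetEnumerator() => _tags.GetEnumerator();
        IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
    }

    // Collection initializer syntax "just works":
    var tags = new TagBag { "csharp", "tuples", "patterns" };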


Deconstruct will be an operator and this is how C# handles other operator overrides, e.g. equality, implicit/explicit type conversions, so this is in line with what developers would expect.


>The Deconstruct method is introduced as a "magic name". No interface, no override, nothing...

There is precedent for this in the form of GetEnumerator and GetAwaiter.


I don't like the idea of accessing tuple values by their local name

    var names = LookupName(id);
    WriteLine($"found {names.first} {names.last}.");
It's leaky; callers should simply use the deconstruction syntax instead.

I'm on the fence about the ref returns. They can lead to some fantastic performance improvements and enable otherwise unusable data structures. But is C# really the language you want if that is important to you? Why not go all in and use a lib in C or C++ for the critical parts of the code?

I still miss better immutability support and, probably most of all, the ability to declare something as non-nullable.

But that said, the update is a fantastic update to an already good language.


> But is C# really the language you want if that is important to you?

C# is used by hundreds if not thousands of games by way of Unity and is therefore demonstrably good enough in this scenario. Furthermore, just because a language is not C++ does not mean you shouldn't take every opportunity to give programmers tools to write fast code - at the end of the day I doubt anyone would be interested in a language that is purposefully slow.

Performance was a core feature of the language (yes, yes, it's managed) from day 1 with unsafe - it's nice to see further improvements in that department.

Use C++ when you need to, definitely, but when you're copying about 256 bytes around for a trivial world-view-projection matrix multiplication[1] you have a problem.

[1]: https://github.com/dotnet/roslyn/issues/10378#issuecomment-2...


Isn't Unity stuck with C# 2.0 ?


Poked around Google and it seems as though you are correct about Unity; however, C# tends to use existing MSIL features and doesn't require the BCL public key token for supporting infrastructure. You can patch many recent features in by simply defining the types yourself. A common example of that would be:

    public sealed class ExtensionAttribute : Attribute { }
Even async/await consists of just a convention and an interface - I haven't tried it, but in theory using the CLR 2.0 TPL should bring async/await to your project. So far as I know, the feature being used for ref returns has always existed for C++/CLI, so this stuff should work on CLR 2.0 (although you will need the C# 7.0 compiler).


No, it supports C# 6 without dynamic but uses the 2.0 runtime. Now that Mono is open source, they are updating the runtime to be current.

There's a project out there that swaps the mono compiler with Roslyn and gives some nice edit and continue features for Unity.


While it's used for game development, I'm not convinced that automatically means features should cater specifically to that audience. Compared to everything non-performance-critical the language is used for, the performance crowd is dwarfed. I'm not against it - I personally have been in situations where it would have been useful - I'm just not convinced it's beneficial for the language to implement it.


I watched the NIAC talks yesterday and a quote (paraphrased) that I found interesting was:

> Computation is innovated thanks to gamers.

The same effect might benefit programming languages.


I think that is true. Where they seek to get that last bit of performance, they get creative and come up with new and better ways to do things. But I don't think C# is the language to do this in; it lacks too many low-level constructs.


But those aren't local names; the defaults are 'Item1', 'Item2', etc. They only get those names if you explicitly declare them in the function declaration.


Out variables seem like a mis-feature, especially when they are adding a better alternative (Tuples) in the same release.

It's great to finally have tuples, but the C/Java-style syntax is showing its age compared to something like Scala, which has return type declarations at the end - something I find much more readable.

Literal improvements will be a godsend for writing database migrations.
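(For reference, a quick sketch of the literal improvements in question - digit separators and binary literals, per the article:)

    const long MaxFileSize   = 2_000_000_000;  // digit separators for readability
    const int  ReadWriteMask = 0b0000_0110;    // binary literal for bit flags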

Ref returns and locals look like a source of constant headaches. It's much harder to reason about code when the data can be modified by random other bits of code. Currently I can look for all references to an array to find everywhere that modifies it; now I can't.


Hey there, C# language designer here.

> Out variables seem like a mis-feature, especially when they are adding a better alternative (Tuples) in the same release.

Out variables are really great when working with existing code that predates Tuples (for example, the entire BCL). We don't just introduce language features that will only work well for new code. We also want to make the experience of coding against existing APIs feel better.

> Ref returns and locals look like a source of constant headaches.

That 'headache' has to be weighed against the needs of some parties to write very fast code with as little overhead as possible. Ref returns enable a much more pleasant coding experience for these sorts of speedups in specialized scenarios.

> It's much harder to reason about code when the data can be modified by random other bits of code.

There is a simple solution to this, of course: don't expose your data through ref returns :)

For many (likely most) developers, ref returns simply won't be something that ever hits their radar. Similar to stackalloc and other esoteric features, this is really a targeted improvement for people who really like working in .NET but are finding it hard to squeeze out essential perf in some core scenarios.

The .NET runtime supports this feature fantastically. We wanted to make it available to people who like the power and safety of C# but occasionally need some low-level hooks to get that extra perf boost as well.
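For anyone curious, a minimal sketch of the feature being discussed (the Find example follows the shape shown in the article; the data/slot names are made up):

    // Returns an alias to an element of the array rather than a copy of its value.
    static ref int Find(int[] numbers, int target)
    {
        for (int i = 0; i < numbers.Length; i++)
        {
            if (numbers[i] == target)
                return ref numbers[i];
        }
        throw new IndexOutOfRangeException($"{target} not found");
    }

    // The caller can hold the alias in a ref local and write through it.
    int[] data = { 1, 2, 3 };
    ref int slot = ref Find(data, 2);
    slot = 42;  // data is now { 1, 42, 3 }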


> Out variables are really great when working with existing code that predates Tuples (for example, the entire BCL). We don't just introduce language features that will only work well for new code. We also want to make the experience of coding against existing APIs feel better.

I disagree on this point. If there is a language feature that has been superseded by a better alternative, continuing to add sugar to the old feature only serves to perpetuate its use.

It embiggens the language while providing relatively little value in return. Furthermore, it adds confusion as to what should be considered idiomatic.

Tuples and deconstruction are so clearly better than out variables, I would have thought it would make more sense to deprecate the "out" feature entirely. Or, at the very least, not make it easier to use them.

Awkward and outdated features should be painful to use.

I also felt that this was a very odd addition. Even odder, the article doesn't even list tuples and deconstruction first.


Did you guys look into alternative syntax for return types? It really is a huge eyesore, and an impediment to reading, when you have more than just a type name and a couple of modifiers like * and [] attached to it.

With tuples especially, it's even worse, because the method name gets squished between two very similarly looking ()s, which is very different from how methods have historically looked in code. If I were scanning code fast, I'm not sure I would even read it as a function declaration.

C++ adopted its "auto ... -> ..." syntax a while ago - granted, they had a forcing function in the form of decltype(), but many people also use it for readability reasons with complex return types. I hope C# follows suit; or, better yet, comes up with a unified type-after-name syntax a la ES6 that can be used everywhere in the language, while retaining the existing syntax for back-compat purposes.


What would you suggest?

public TResult SomeMethod() where TResult : (string name, int id)


The most obvious approach would be to repurpose C++ syntax, since it's already been around for a while, and C# syntax is generally pretty close to C++ in many respects. So:

  public SomeMethod(bool b) -> (string name, int id) { ... }
However, this may be undesirable due to confusion with => for lambdas and expression-bodied methods, especially in:

  public SomeMethod(bool b) -> (string name, int id) => ...
: is the next obvious candidate, and would unify the syntax with TypeScript and many other languages. But given that it's already used for labels and named arguments, I'm not sure there's enough room there to reuse it also for types.

:: is another decent choice in terms of familiarity coming from other languages (Fortran, Haskell etc). But, unfortunately, it's already taken for extern alias, and I don't think this could be easily disambiguated in many contexts.

Now, if this is narrowly scoped to method return type only (i.e. we're not trying to invent a syntax that could later be used in a similar way to swap the type and the name in other places, like arguments and variables), and only as a fallback for when the usual "Type Name" arrangement has poor readability, perhaps take a hint from Ada and reuse "return"?

  public SomeMethod(bool b) return (string name, int id) { ... }
A tad verbose, but if it's intended to be used sparingly, primarily with tuple-returning methods and deeply nested generics, I think that's okay - tuples themselves are pretty verbose when components are named.

Or maybe borrow "as" from VB? It looks like it could be extended to other kinds of declarations in the future in a straightforward manner, without conflicting with its existing use for casts:

  // Just for method return types
  public SomeMethod(bool b) as (string name, int id) { ... }

  // For everything
  public SomeMethod(b as bool) as (name as string, id as int) {
    var x as float;
    TryFoo(out var y as bool);
    ...
    switch (obj) {
      case foo as Foo:
        ...
    }
  }


I like the as syntax; it may conflict with casting though, when used on variables.

    public SomeMethod(bool b) -> (string name, int id) { ... }
When was this added to C++? I think I need to brush up on my lower-level skills.


With regards to "as" conflicting, I think it shouldn't be a problem because the cast operator is binary. So if you already know that something is a declaration, the current token is the identifier that's being declared (or closing parenthesis of the parameter list), and the following token is "as", it cannot be a valid expression. Consider:

  var x as int;
"var" makes it a declaration, so "x" is the identifier, and "as" has to be the type specifier.

  var x = y as int;
"=" clearly separates the identifier from the initializer, and the latter is an expression, so "as" is the cast operator.

Similar reasoning applies to other places where the two can appear - function parameter list, return type, field and property declarations etc.

So far as I can tell, by the time we get to the point where "as" would be used to specify the type of the declared entity, we will already know that it's a declaration, and that the only other token that can follow in existing grammar is "=", "{", or "=>" (for variable and field initializers and default parameter values, property getters and setters, and lambdas and expression-bodied members, respectively), and none of these start an expression. In all other contexts, "as" would be an operator.


It was added in C++11 when they added decltype(), so that you could reference arguments when computing return type. It's slightly different though, in that you need to specify the return type in the usual position as "auto" first, and then you can use "->" after the parameter list. So:

  template<typename A1, typename A2>
  auto foo(A1 a1, A2 a2) -> decltype(a1 + a2) {
    return a1 + a2;
  }


Honestly, I'd like to have seen ref returns implemented as attributed out params (syntactic sugar). As it stands, I can't see myself using them in any of the places that I originally planned to - and I am exactly the target market for that feature. Still, I'm sure something interesting can be done with this.


> That 'headache' has to be weighed against the needs of some parties to write very fast code with as little overhead as possible. Ref returns enable a much more pleasant coding experience for these sorts of speedups in specialized scenarios.

I'm worried this could result in a loss of focus; you can't make everyone happy all the time. C# is a fantastic language to develop applications in (web or desktop), but it's never going to be the fastest language around or a systems-level language. For me, some of the other features listed here (like immutable records) would be a much better fit for where C# already excels.


The Midori project proved otherwise.

One of the reasons we are stuck with C and C++ is that other language vendors, including Microsoft, dropped the ball regarding the performance of type-safe languages.

Do you think C++ would have been kept around if Java and the CLR had been AOT-compiled to native code, with a focus on compiler optimisations, since day one?

I really appreciate the efforts of the .NET team in adopting the Midori lessons in the standard .NET stack.


Some things can be rather tricky to AOT-compile, especially when you don't have a fixed "world" (i.e. when code can be dynamically loaded, as is the case with both Java classes and CLR assemblies).

Consider virtual generic methods, for example. If your set of types is not statically bound, you can't allocate the appropriate number of vtable slots in advance, because you don't know all the instantiations.


C++ has the same problem.

I was already using dlls with plugins in Windows 3.1 and C++.


C++ doesn't have this problem, because it doesn't allow virtual function templates.

For that matter, it doesn't allow any templates across ABI boundary, except when you manually instantiate them (extern template) - and then only for those explicit instantiations. So it is effectively impossible to have a generic C++ API using templates that is not statically linked.


Of course it does; as I said, I was doing that.

Borland C++ for Windows 3.x had an initial implementation of templates as they started to be discussed at ANSI, and also supported exporting classes across DLLs, provided both producer and consumer were Borland C++.


How exactly did that implementation work for templates?


No idea, not something I was too deep into. Back in those days I just used them.

Here is the link to the Borland C++ compiler documentation; I was actually using the Turbo C++ version for Windows 3.1.

https://archive.org/details/bitsavers_borlandborn3.1Programm...

The BIDS, Borland International Data Structures, were the template-based library that replaced the object-based one, and they could be accessed as a DLL as well.

To export classes from a DLL one would do something like

    // On the DLL side
    class _export MyClass {...}

    // On the consumer side
    class huge MyClass {...}

Described on page 336 of the manual.

If you check the templates documentation, the generated code could be externalized (page 152) via -Jgx.

I was only a BIDS user across DLLs, but I can easily imagine a combination of _export/huge and -Jgd/-Jgx being what was necessary to make it all work.


Okay, so that was something that I have mentioned in passing earlier:

"it doesn't allow any templates across ABI boundary, except when you manually instantiate them (extern template)"

So you can export template instantiations, yes. But you cannot export the template itself. For fairly obvious reasons - instantiating a template requires compiling C++ code after substituting the template parameters, so exporting it across an ABI boundary would require including the entirety of the C++ language (or at least a substantial subset - I guess you could desugar a bunch of stuff like, say, lambdas beforehand) in that ABI.

And virtual templates are a whole other kettle of fish, because every instantiation of a virtual template would require a new slot in the vtable - or else the generic would have to be dispatched by a single method, and the caller would have to supply runtime type information. In practice, C++ just forbids virtual templates outright.

In C#, this all works just fine, because generics still exist on bytecode level, which is what gets exported - and at runtime, the JIT will instantiate the generic by compiling that bytecode to native code, separately for every instantiation (in practice they unify them for reference types, because pointers are always of the same size; but value types get distinct JIT-compiled versions each).

For virtual generics, I'm actually not entirely sure how this works, but I would assume that vtable slots are allocated in batches and filled with instantiations as needed - and when you run out of slots and need to reallocate the vtable (and potentially shift indices of other methods in it), you can just JIT-recompile the bytecode for any method that happens to use the class with affected vtable.
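Concretely, the kind of method being talked about - legal in C# because the generic's IL ships and gets instantiated at runtime, but inexpressible as a virtual template in C++ (Shape/Circle are made-up names):

    using System;

    public class Shape
    {
        // A virtual generic method: callers can instantiate T however they like,
        // and the runtime produces dispatchable code for each instantiation.
        public virtual T Describe<T>(Func<string, T> project) => project("shape");
    }

    public class Circle : Shape
    {
        public override T Describe<T>(Func<string, T> project) => project("circle");
    }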


Well maybe you don't care, but there are folks that do (I'm on that list) ;-) I want the speed of C but the safety of C#.

There are many pieces of code that you probably use every day that go to great lengths to squeeze as much power as possible. Why not help these folks help us? I'm glad there is more focus on performance because we all benefit. (Hoping to see more on the CLR side.)


> I want the speed of C but the safety of C#

Rust or D seem like the best bet here


Why c# and not rust or something similar that has a lower level focus?


Because some of us believe in GC-enabled systems programming languages, as was already done at Xerox PARC (Cedar), DEC (Modula-2+/Modula-3), ETHZ (Oberon, Active Oberon, Component Pascal) and MSR (Sing# / System C#).

What is missing is a company like Microsoft willing to push one down to mainstream developers at all costs, luddites notwithstanding.


> I want the speed of C but the safety of C#

Try OCaml.


>I want the power of a Bugatti but the efficiency of a Prius.

Use a tractor.


Adding features for the subset of developers that need optional performance enhancements is great. However, it does seem ripe for misuse - as in the developer who makes everything a ref reference because they read that it makes things "faster." Would there be a way to get Visual Studio to warn about this?


Sure. Write a Roslyn analyzer to help out here.

Note: it's not like you can just sprinkle 'ref' on the return type of your methods willy nilly. Like 'ref'/'out' parameters, it requires you to do very specific things to keep your code safe/legal. As such, it's unlikely to just be added by people because, by and large, most code won't be equipped to actually handle it properly.

The codebases that will want to use this are already ones that have done a lot of work that makes them amenable to ref. i.e. game engines where you have large arrays of structs that you want to operate on in an efficient manner and whatnot.


I think it's great that you guys are adding a lot of high performance features that are beneficial to game engines. Any news you can share with us on a certain very-popular-game-engine-running-a-very-old-version-of-the-clr? ;)


Not .NET developer here but the usual way is to use some sort of linting tool to "disable" the esoteric features.


These tend to become cargo-cult practices, "thou shalt never" rules that get in the way of the few places where they are useful.


I don't think so. The purpose of compiler warnings, lint-like tools, static analyzers and such is to bring developer attention to some block of code. Not 'Hey, you should not do it this way', but 'Hey, is this thing here as intended by you or just a typo or mistake?'.

I like the way it's implemented in Perl: 'use strict' by default, and guard the block where you really need something unusual with 'no strict something' - refs, vars, subs - so that neither the compiler nor people reading your code are ever confused about whether it is a mistake or the author's intention.


In C# one can always

  #pragma warning disable ${list of warnings}
and

  #pragma warning restore ${list of warnings}
https://msdn.microsoft.com/en-us/library/441722ys.aspx

Although frankly the list of warnings to disable and restore consists of warning numbers, not names, which is not super convenient.


>Out variables seem like a mis-feature, especially when they are adding a better alternative (Tuples) in the same release.

You can't retro-fit multiple-returns (via Tuples) onto existing library functions, so for places where you're forced into using out parameters, this is a slight improvement.


Sure you can - you can introduce a new overload resolution rule that lets you treat trailing out params on a method as tuple members, so a method

    TReturn Foo(T1 param1, out T2 param2) 
can be called as if it were

    (TReturn, T2) Foo(T1 param1)
You'd need a keyword to trigger this kind of overload resolution at the call site to ensure back compatibility - maybe you'd have to call

   (var x, var y) = out Foo(param); 
or something. That way all that legacy library code gets a free upgrade to your new language feature.

Since the C# team decided not to do that, you can actually implement a version of this at a library level, by static importing a family of generic methods like this:

   // (with OutFunc defined as: delegate TReturn OutFunc<TParam, TOut, TReturn>(TParam p, out TOut o))
   Func<TParam, (TReturn, TOut)> Tuplify<TParam, TOut, TReturn>(OutFunc<TParam, TOut, TReturn> outFunc) {...}
for every pattern of out params you want to convert, then you can just call

   (var x, var y) = Tuplify(Foo)(param);


> it's age compared to something like scala which has return type declarations at the end, which I find much more readable.

Scala or .... VB.net!


LMAO

-Anthony D. Green, Program Manager, Visual Basic


I'll begrudgingly admit it got a couple of things right.


Out parameters are also useful in constructors. I often use them as a way to limit the scope in which an object can be modified, which lets me have mostly immutable objects.

    IEnumerable<Student> GetStudents(SqlCommand cmd)
    {
        var rdr = cmd.ExecuteReader(); //select * from student left join courses on...

        var memo = new Dictionary<int, Student.Completion>(); //Completion lets us add courses to a student

        while(rdr.Read())
        {        
            if(!memo.TryGetValue(rdr.GetInt32("student_id"), var out complete)
                 memo.Add(new Student(rdr out complete).ID, complete);

            complete.AddCourse(rdr); //might be a no-op
        }

        return from kv in memo select kv.Value.Student; //each student object is unmodifiable.
    }


So I'm taking a look at this line:

  if(!memo.TryGetValue(rdr.GetInt32("student_id"), var out complete)
    memo.Add(new Student(rdr out complete).ID, complete);
I think I understand your point, but I found this code really hard to read. You don't use the first var out complete in the TryGetValue, right? And the Student c'tor returns itself as an out parameter?

If I understand you correctly, you like this because you don't have to declare a new student before you add it? I.e. the alternative would be

  if(!memo.ContainsKey(rdr.GetInt32("student_id")))
    var student = new Student(rdr);
    memo.Add(student.ID, student);
I guess I would prefer this over the former. Also, why not enforce your student ID constraint in SQL instead of putting everything in a dictionary only to take it back out again? That would simplify your code to the point of just being a map from rdr->Student. Furthermore, if all I had was the Student c'tor, I would never guess that was the intent of the out parameter. This seems more anti-pattern than pattern.

I would rather put a simple IEnumerable in front of SqlDataReader so that you could just do:

  foreach(var row in rdr) yield return new Student(row)
This doesn't obviate your use case, however, which is to inline a variable where it's needed in multiple places in that line because you can save yourself an explicit declaration. In this case, however, I think that the increased readability justifies the explicit declaration.


Hi, thanks for taking the time to comment.

> You don't use the first var out complete in the TryGetValue, right?

There is only one complete variable; we declare it in TryGetValue, and it's definitely used in the last line of the while statement, but might first be used after the if statement.

> And the Student c'tor returns itself as an out parameter?

The Student constructor does not return itself as an out parameter, rather it returns an object that can modify an internal list of courses.

The idea is that when we iterate through a result set from a database, some of the rows are going to correspond to a new Student object, and some are going to correspond to a course that belongs to the student. Crucially, we are only allowed to modify the Student object (or whatever) during iteration, and the collection that we return will only contain immutable/unmodifiable Student objects.

I've used this technique to populate deeply nested structures (lots of joins and nested joins) using only one query.

> I would rather put a simple IEnumerable in front of SqlDataReader so that you could just do:

    > foreach(var row in rdr) yield return new Student(row)
Just to be clear, the reason that I cannot do that is because the Student object might not be completely "hydrated" until we finish iterating through the result set, because it might contain a bunch of nested objects that also need to be instantiated from one or more DB records.

> Also, why not enforce your student ID constraint in SQL instead of putting everything in a dictionary only to take it back out again?

I'm not quite sure I understand this (which is probably my fault), but rest assured that we use nothing but SQL (specifically, DDL) to enforce data integrity.


Thank you, in turn, for replying. I'm of the opinion that out variables have little value if you have tuples and deconstruction, but I wanted to understand your point. I'm not sure I do, unfortunately. I'm a little out of practice with C#, so bear with me.

Back to this code:

  if(!memo.TryGetValue(rdr.GetInt32("student_id"), var out complete)
    memo.Add(new Student(rdr out complete).ID, complete);
There's a paren missing on the end of the if, correct? Also I can't parse the student c'tor:

  new Student(rdr out complete)
I was assuming there is a comma in there somewhere. Does this compile? What does "out" do here? I thought you were getting a new out variable, but that's not correct since you wouldn't be able to name it the same in the same scope.

Also, I don't see how "complete" is ever non-null. If the ID isn't in the dictionary, then TryGetValue returns false and "complete" is null. Then you add the null "complete" to the dictionary, and throw away the Student object (which apparently does other side effects) once you have its id? If ID is in the dictionary, you get back what you inserted, which is still null.

And then you call .AddCourse on the possibly null reference? I'm lost.

Can you post the code again? Maybe I'm just missing something due to a syntax error.


You're right, there were missing tokens; sorry about that.

Here is the equivalent C# 6 version of the code (i.e. something very similar to the pattern that I currently use):

    IEnumerable<Student> GetStudents(SqlCommand cmd)
    {
        var rdr = cmd.ExecuteReader();
        var memo = new Dictionary<int, Student.Completion>();

        while(rdr.Read())
        {
            Student.Completion completion; //this declaration will be unnecessary in C# 7

            if(!memo.TryGetValue(rdr.GetInt32("student_id"), out completion))
                memo.Add(new Student(rdr, out completion).ID, completion);

            completion.AddCourse(rdr); //completion is *guaranteed* to be non-null
        }

        return from kv in memo select kv.Value.Student;
    }
As you can see, it only differs from the C# 7 version by one line.

The first thing to note is that, per the C# spec, the `out` parameters of a method must be definitely assigned before the method returns[1]. It just so happens that the constructor of the `Student` class always creates a new `Completion` object and assigns it to the `out` parameter. Now, theoretically, an `out` parameter could be assigned a null reference (as in the case of TryGetValue), but in practice it's trivial to guarantee that it will be non-null (as in the case of our Student cstor).

In the line,

    memo.Add(new Student(rdr, out completion).ID, completion);
we first call the Student cstor, which assigns a non-null value to completion, so that by the time `memo.Add` is called, the completion variable is guaranteed to be non-null.

Also, `Student.Completion` is a class that is defined inside of the `Student` class. As such, it has access to all of Student's members (private and public). But, in order to do anything to an instance of Student, a Completion instance must have a field which references that Student instance (unlike the case of Java's inner classes, which are a bit more powerful I think). That is why this line is possible:

     return from kv in memo select kv.Value.Student //`Value` is an instance of Student.Completion
Here is the basic definition of the Student class:

    sealed class Student
    {
        public int ID { get; } //this is a readonly property, meaning it can only be modified in a cstor
        public string FullName { get; } //readonly property

        private List<Course> courses = new List<Course>();

        public IEnumerable<Course> Courses => from c in courses select c;

        public Student(IDataReader rdr, out Completion completion)
        {
            ID = rdr.GetInt32("student_id");
            FullName = rdr.GetString("fullname");

            completion = new Completion(this);
        }

        public sealed class Completion
        {
            public Student Student { get; } //readonly property.

            public Completion(Student student)
            {
                this.Student = student;
            }

            public void AddCourse(IDataReader rdr)
            {
                if(rdr.GetInt32("student_id") == Student.ID)
                    Student.courses.Add(new Course(rdr));
            }
        }
    }

 
Like I said in my earlier comment, my immediate goal was to be able to create a set of immutable objects with arbitrary nestings from the result of a single SQL query that may have an arbitrary number of joins (which is how we represent nesting relationally). The above example has just one nested property, but I have production code in which objects have many more nested properties. For example, the `Course` class in the aforementioned example might have its own `Completion` class for adding `CourseAssignment` instances (e.g. select from Student left join Course on ... left join CourseAssignment on ...).

But, the really big idea is that I wanted an object-capability system[2][3]. Getting objects to be immutable "almost everywhere"[4] is a nice side benefit.

[1] http://www.ecma-international.org/publications/files/ECMA-ST... (section 12.1.6)

[2] https://en.wikipedia.org/wiki/Object-capability_model

[3] https://www.youtube.com/watch?v=EGX2I31OhBE

[4] https://en.wikipedia.org/wiki/Almost_everywhere (in this case "almost everywhere" is with respect to the set of all possible execution paths).


Thanks for continuing to indulge me in this conversation. So, if I understand you correctly, your data looks something like this:

    StudentId|    Name |CourseId
    ============================
            1|      Bob|       1
            2|     Jane|       1
            1|      Bob|       2
            1|      Bob|       3
            2|     Jane|       4
With many more columns of course. The point being that it's a denormalized listing of student-course pairs. If so, you'd be able to get immutability by reading the query result into a data table and doing something like:

    var result = from row in dt.AsEnumerable() 
    group row by row.Field<int>("StudentId") into grp 
    select new Student(
        grp.Key,
        grp.First().Field<string>("Name"),
        from courseRow in grp select new Course(courseRow)
        );
This seems to satisfy all of the conditions that you list (namely immutable objects), but with the added advantages of not needing an inner Completion class, and fewer lines of code.

Furthermore this is less coupled because now the Student class doesn't need to worry about DataReaders or DataTables or which column names to read from.

I would also argue that this version is a lot easier to read and reason about.

I think you could make a similar transformation for any circumstance in which you wanted to use an out variable in the fashion you outline.

Does that make sense? Is there another case in which you would advocate for out variables in new code?


Yeah, I think lots of uses of out in the .Net framework are from the early days and a little bit of a mistake. Since they can't remove them, this is supposed to make them easier to work with. TryGetValue comes to mind, but I agree that overall out variables have always felt clunky.


Although F# manages to convert all those TryGetXXX methods into tuple-returning transparently, so it can be done. Just off the top of my head, you could probably write an extension method to wrap the original into the new format pretty easily.
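For example, a minimal sketch of such a wrapper for TryGetValue (TryGet is a made-up name), using the new tuple syntax:

    using System.Collections.Generic;

    public static class TryExtensions
    {
        // Wraps the out-parameter-based TryGetValue into a tuple-returning call.
        public static (bool found, TValue value) TryGet<TKey, TValue>(
            this IReadOnlyDictionary<TKey, TValue> dict, TKey key)
        {
            TValue value;
            var found = dict.TryGetValue(key, out value);
            return (found, value);
        }
    }
With C# 7 deconstruction the call site then reads `var (found, value) = dict.TryGet(key);`.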


> Literal improvements will be a godsend for writing database migrations.

Can you elaborate on this; both how it helps and why you're using C# for database migrations?


My go-to migrator is FluentMigrator (https://github.com/schambers/fluentmigrator), or Flyway if anyone objects to the C# (https://flywaydb.org/).

Each migration gets a number, typically time-encoded. If I were to write one now it would have the attribute [Migration(201608251157)]. With the new literals this will become [Migration(20160825_1157)]; it's amazing how much readability a single underscore can add.
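For instance, a standalone illustration of the new separators (values here are just examples, nothing FluentMigrator-specific):

    const long migrationVersion = 20160825_1157; // same value as 201608251157, but the date/time split is visible
    const int mask = 0b0010_1010;                // binary literal with a digit separator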


>c/java style syntax is showing its age compared to something like scala which has return type declarations at the end, which I find much more readable

I hope there are better examples of Scala's syntactical advantages than this one, because it seems like the 'egyptian-style' vs 'next line style' brace bracket debate...


Yes there are plenty. The correct order of type declarations just ties in with the better syntax for generics and leads to an easier to read, more consistent language.


Interesting; thanks. F# and Scala are pretty high on my to-learn list.


I've seen some P/Invoke examples where out variables are used as a C pointer.


out/ref still have their uses when interfacing with COM or P/invoke.


I still lack some kind of sum type that is always checked for exhaustiveness. When I'm creating a class hierarchy for a ___domain, 9 times out of 10 I'd rather have a simple sum type. When I have to make a class hierarchy in C# anyway (because there are no simple sum types), I'd like to be able to write something like this, and have this match/switch fail to compile if I add a new type of shape.

    match(shape) 
    {
      case Rectangle r
        return r.Width*r.Height;
      case Circle c
        return pi*c.Radius*c.Radius;
    }
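The closest I can get today is a visitor-style encoding like the sketch below (types made up for illustration): adding a new shape means adding a parameter to Match, which breaks every call site until it's handled.

    using System;

    abstract class Shape
    {
        // One Func per case; the compiler forces every caller to handle all of them.
        public abstract T Match<T>(Func<Rectangle, T> rectangle, Func<Circle, T> circle);
    }

    sealed class Rectangle : Shape
    {
        public double Width, Height;
        public override T Match<T>(Func<Rectangle, T> rectangle, Func<Circle, T> circle) => rectangle(this);
    }

    sealed class Circle : Shape
    {
        public double Radius;
        public override T Match<T>(Func<Rectangle, T> rectangle, Func<Circle, T> circle) => circle(this);
    }

    // var area = shape.Match(r => r.Width * r.Height, c => Math.PI * c.Radius * c.Radius);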


I'm just going to pimp my library (github.com/mcintyre321/OneOf), which lets you pull this off in C#. I'm quite sad it's needed though; it would be great to have it in the language.


That's very neat, it should be pretty easy to sugar something like that into the compiler with no CLR changes.


It looks like they've added a lot of rope to hang ourselves with typos.

    int myvar, I;
    foo(out int mvar); // oops, not myvar; maybe caused by a refactor?
    bar(out *); // oops, not I; was up too late coding
And so on. Things like this would be easily missed when reading code at-a-glance, and it's this sort of bug that arises often in languages that allow implicit declaration of variants.


Hi there, C# language designer here.

I don't really see your example in that way. Let's start with the latter one first:

> bar(out *); // oops, not I; was up too late coding

I'm not sure how this situation is any different from any other case where you need to pass some variable, and you pass the incorrect name. This is already possible all over the language. For example, you might have written "bar(out J)" when you meant "bar(out I)". As usual, the recommendations here are to use strong types and good names to help keep your code clear in intent and to allow the compiler to tell you when something seems awry.

Now let's look at your first example:

> foo(out int mvar);

This version immediately screams at me that something is happening that requires my attention. First off, just the presence of 'out' is an immediate signal that this is not a normal call. Nearly all calls pass by value, so out/ref stick out like a sore thumb (esp. in an IDE that classifies them accordingly). Second, the "out int" is another large indicator that this is doing something very special.

Finally, I'd point out that the misspelled name problem is really no different than what you might experience today with code like:

  int myvar;
  ...
  // much later and indented a bit ...
         int mvar;
         Foo (out mvar);
Here you've unintentionally introduced a new variable that you may or may not have intended to. Without a collision, there's no way to tell.

> it's this sort of bug that arises often in languages that allow implicit declaration of variants.

No implicit declarations are allowed. All declarations are explicit. We just don't force you to have to declare in one ___location and use in another. This is a pattern that many people hate, and which goes against a desire to immediately declare and initialize, and thus not have to track down how a variable is written to.


> I'm not sure how this situation is any different from any other case where you need to pass some variable, and you pass the incorrect name.

It's a wildcard. Passing in any other variable name would ideally raise an error about the use of an undeclared variable or a mismatched type. The use of a wildcard disposes of those errors.

> the misspelled name problem is really no different than what you might experience today

Not quite; your modified example includes two declarations on their own lines. Being on their own line gives them greater visual presence at-a-glance than the new syntax which buries the declarations within a parameter list.

Worth noting is that my trivial example managed to confuse at least one reader who was unable to see the issue[0].

> All declarations are explicit.

While true, you've muddied the lines a little by moving declarations into the syntax of other expressions. Where previously a declaration sat on its own line or at the beginning of an assignment, declarations may now be peppered throughout the syntax in ways that are not so easy to observe at-a-glance.

0: https://news.ycombinator.com/item?id=12356681


The first one requires two errors: redeclaring a variable (if you intended to use myvar then you wouldn't need to put int after your out) and compounding it with a misspelling. This is no different from how you'd make the same mistake today.

I'm not happy about wildcards. It seems like taking what is a very powerful character in software development and using it for a pretty minor usability improvement. Besides, it seems misleading. When I see an asterisk I don't think throwaway. I can see how that aligns with the "anything can go here" meaning from regex, but I feel there is a difference in "value". Using a * in a regex seems to increase the power and heft of the regex, whereas using it here seems to decrease the value, which is almost the opposite effect. Another way of putting it is that the point should be to let me ignore this while reading code, but the asterisk draws my attention to it instead.

Compare this to the Haskell convention of using an _ for throwaway items, which disappears just enough, instead of attracting attention. And by simply being a convention, it isn't giving the character too much power.

Not sure if the lack of Haskell-style type inference prevents C# from using a convention for throwaways instead of wildcard chars, but I think it would be much less overhead on the dev.


It does seem silly, considering that F# already uses _ as a "don't care" wildcard, and it's a pretty common pattern already in C# lambda-heavy code to use the underscore for the same idea.
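Something like this is already everywhere in event-handler code (button and Save are just placeholders):

    button.Click += (_, __) => Save(); // "_" and "__" are ordinary identifiers here, not wildcards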


R# should warn "unused variable myvar" on the first one.

You'll get a compiler error if you try and use `I` uninitialized.

Not saying you're wrong, just having trouble getting worried about this. You can make typos now:

    int x, y;
    foo(out x, out x); // oops, not y; was up too late coding


In a non-trivial example it's likely that `myvar` or `I` would see themselves used elsewhere, perhaps even as an out value to another method, and so raise no warnings. The source of the problem won't be so obvious at-a-glance because the new declaration syntax isn't so different from existing syntax, and could easily go unnoticed when shadowing existing values deep within a function.

Sure, you can make typos now, but these changes expand the possibilities of errors arising from typos or incomplete refactoring; while reducing the discoverability of the issue at-a-glance.


> foo(out int mvar); // oops, not myvar; maybe caused by a refactor?

This should fail to compile; it's the equivalent of declaring a variable twice in the same scope.


If that were int myvar you'd be correct; in this case the typo introduces a new variable mvar that won't conflict with the original declaration.


Whoops, I missed the typo. Inadvertently proving your point.


Wow, lots of great stuff! I especially like local functions. I'm all for breaking up complicated methods into several methods to simplify and clarify things... but it always felt weird that all those methods were at class scope even though they were only relevant to that one method.

This resulted in many cases where those methods needed to be broken out into their own class... which is the way to go sometimes but felt like a bit of overkill in others.


I mostly agree, but until now I just created private helper methods and went on with my life. If there are many of them, I put them into a #region. There are a lot of cases where such helpers can be used from two separate locations, and they also increase discoverability for other devs.

If you really wanted to, wasn't it already possible to create anonymous delegates/lambdas locally? The only improvement I see with local named methods is readable stack traces. I have to admit, the mangled names are a major headache for me (as are the stack traces starting at lambda invocations, missing the 'original' stack trace. I'm looking at you, Parallel.ForEach!)


I would have thought use of a Func<> or a lambda would have been appropriate instead of a local function?


I've been really excited about this release, but also a little disappointed that we still don't have proper object composition.

I'd be happy with either extension methods with state or adding traits to the language as a separate feature. I understand the reasons it was decided against in C# 4, but it's hard to preach composition over inheritance when the framework is philosophically against it, and it runs contrary to how the whole framework is structured (single inheritance from Object). But it still irks my fussiest self.

You can get something that sort of gets mostly there with ConditionalWeakTable, but it's not endorsed by the vendor, and in the experience I had with it while trying to create a mixins module for C# a few years back, it makes the GC leak like a sieve at scale.
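For the curious, the rough shape of the ConditionalWeakTable approach looks something like this (names are illustrative, not from that mixins module):

    using System.Runtime.CompilerServices;

    public class Widget { }

    public static class WidgetExtensions
    {
        private sealed class Counter { public int Value; }

        // Per-instance state keyed off the Widget, attached without modifying the class.
        private static readonly ConditionalWeakTable<Widget, Counter> counters =
            new ConditionalWeakTable<Widget, Counter>();

        public static int IncrementUseCount(this Widget widget) =>
            ++counters.GetOrCreateValue(widget).Value;
    }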


What's the point of

    GetCoordinates(out var x, out var y)
over destructuring assignment like

    var x, var y = GetCoordinates();
the former looks completely backwards.


Wondered about this myself: is this something that's needed for backwards compatibility?

IMO fishing parameters back from functions is one of the things I do not miss from old C.

Does anyone have anything to add to masklinn's explanation?

Edit:

found CyrusNajmabadi's comment here: https://news.ycombinator.com/user?id=CyrusNajmabadi


Maybe the former method would make more sense to use if it were HasCoordinates and returned a Boolean?

This way you would get a Boolean indicating whether the coordinates exist, and would not have to perform validation on the out variables.


The former works with pre-tuple out-parameter-based APIs, e.g. Int32.TryParse.
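A small sketch of how that reads with C# 7 out variables (input is just some string):

    if (int.TryParse(input, out var n))
        Console.WriteLine(n * 2);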


I much prefer your second example. In fact, surely tuples would help in this situation?

eg. in C++: tuple<int,int> x = GetCoordinates(); or auto x = GetCoordinates()

Much easier to see the output. I maintain enough old old C++ code and don't like seeing GetCoordinates(int x, int y) or x in COM-land any more.


I just realized that I don't remember ever seeing a C# rant on HN, and only very rarely in general. It must have the highest usage-to-complaints ratio in the industry. Now that I've said that, I'd love to see languages ordered by that ratio!


Maybe not exactly what you want but it is a "% of developers who are developing with the language or tech and have expressed interest in continuing to develop with it".

http://stackoverflow.com/research/developer-survey-2016#tech...


Since we're all chipping in with features we'd like to see in C#, here are mine: anonymous classes - with actual class bodies, that can implement interfaces - and traits.


Yeah, coming from Java a decade ago this was the first thing I noticed lacking. And with all the stuff C# has introduced beyond that, I'm always fairly surprised that this hasn't been done.

In preparation, the feature currently called "anonymous classes" in C# needs to be redubbed "anonymous records". And in preparation for that, well, "nonymous" records need to be introduced.


Can you give an example of how C# would be different with anonymous classes?

Googling that term just gives a whole lot of references to inner classes in Java.


Implementation of IComparer


One I've come across is binding a command to a button. With anonymous inner classes you can have something like:

  myButton.AddListener(new Command {
    bool enabled() {
      return false; //more logic goes here
    }
    void onClick() {
      //do something
    }
  });


But that's something that's very rare in C#. Usually for that sort of thing you have events and delegates. Although I have to admit that the library I work on had a few patterns that would benefit from anonymous interface implementations, but not so much that I really miss the feature.


much much better to use DelegateCommand:

    SubmitCommand = new DelegateCommand(()=> DoSomething(), ()=> IsSubmitEnabled);
Edit: and obviously the Command DependencyProperty on the Button will be bound to the SubmitCommand property, instead of accessing the button directly from the ViewModel, which is the worst possible thing that you can do.


I'm not convinced that gives you that much more over using a delegate.


It doesn't. It's a nice to have feature, not a world changing one.


You could, if you were so inclined, just generate proxies on the fly.


I'm so excited by this in particular:

"Switch statements with patterns

We’re generalizing the switch statement so that:

•You can switch on any type (not just primitive types)

•Patterns can be used in case clauses

•Case clauses can have additional conditions on them

Here’s a simple example:

    switch(shape)
    {
        case Circle c:
            WriteLine($"circle with radius {c.Radius}");
            break;
        case Rectangle s when (s.Length == s.Height):
            WriteLine($"{s.Length} x {s.Height} square");
            break;
        case Rectangle r:
            WriteLine($"{r.Length} x {r.Height} rectangle");
            break;
        default:
            WriteLine("<unknown shape>");
            break;
        case null:
            throw new ArgumentNullException(nameof(shape));
    }

There are several things to note about this newly extended switch statement:

•The order of case clauses now matters: Just like catch clauses, the case clauses are no longer necessarily disjoint, and the first one that matches gets picked. It’s therefore important that the square case comes before the rectangle case above. Also, just like with catch clauses, the compiler will help you by flagging obvious cases that can never be reached. Before this you couldn’t ever tell the order of evaluation, so this is not a breaking change of behavior.

•The default clause is always evaluated last: Even though the null case above comes last, it will be checked before the default clause is picked. This is for compatibility with existing switch semantics. However, good practice would usually have you put the default clause at the end.

•The null clause at the end is not unreachable: This is because type patterns follow the example of the current is expression and do not match null. This ensures that null values aren’t accidentally snapped up by whichever type pattern happens to come first; you have to be more explicit about how to handle them (or leave them for the default clause).

Pattern variables introduced by a case ...: label are in scope only in the corresponding switch section."


I wish they did not call it pattern matching. Instead this is more like Destructuring combined with Switch. The guarantees are different.


If you look at the original specs on Roslyn GitHub, it started as a more generalized extensible pattern matching system (tying into destructuring). I would imagine that this is still in the plans, but it just didn't fit into this version.


Yes, I'm aware, and participated in some of the discussions.


"Full" pattern matching, as one would expect from F# for example, wouldn't be something I could expect from C#. For example:

    type Tree<'a> = Empty | Node of 'a * Tree<'a> * Tree<'a>
There is no way to express this directly in C#. Thus, I am "locked out" of a whole class of Pattern Matching that I would get with F#.

But that doesn't mean that C#-style pattern matching (really, Is-patterns and a Type Switch) doesn't fall under the umbrella of Pattern Matching. You can define tuple patterns that you can match on with the switch statement in C#, which is every bit as much a form of pattern matching as matching on a particular case of a union type.


> C#, is every bit as much a form of pattern matching as matching on a particular case of a union type.

I'm afraid that is not the case, because in the current C# version, the compiler does nothing to help you determine whether you've matched all the possibilities (it is not exhaustive).

Exhaustiveness is important because it gives a warning or an error to help you refactor and modify code without fear of breaking other code. Why engage in all the static typing business if the compiler is not going to help you refactor things?

This is important in the same way that adding a subclass requires exhaustively supplying all the abstract methods (true pattern matching is "dual" to adding a subclass): you use the compiler feedback to direct what actions to take next. You can't do this with the switch-based formulation, because you don't get any feedback from the compiler if you missed cases or have refactored other code to add cases. (For example, adding or refactoring abstract methods in a base class provides feedback on where to add or update methods in the subclasses.)


Yes, and because of the lack of record and union types in C#, it's (probably) nigh-impossible to pull off exhaustive pattern matching. However, this is still pattern matching - you can define tuple patterns which decompose the data, much like you can in F#; just with some different syntax.

I agree with you that exhaustiveness is incredibly important, and I really hope that it can be done in C# some day. However, record and union types are also a piece of this puzzle.


You can achieve something similar via a library: https://github.com/mcintyre321/OneOf

I also think language-level support for this would be a killer feature; it turns OOP and the requirement to use polymorphism to guarantee strategy-per-type on its head.


CSharp once again showing it is the best language in terms of features and improvements. Great job all round by the designers.


I'd say Swift is more or less in line with C# with regards to functionality. Many new features added to C# seem to already exist in Swift. Swift could use some async/await mechanism, and sometimes it's a bit annoying to typecast simple types for calculations, but otherwise Swift is a really sweet language to work with.

Nonetheless C# is a nice language. I enjoy writing C# code, just as I enjoy writing Swift code.


Swift has been my daily driver for a year and some; I love the language and I'm having much fun working in it. It has some great ideas (nullability, emphasis on value types, protocol extensions), but calling it "more or less in line with C#" is still a bit of a stretch.

(For starters, Swift desperately needs to include most of the points from the "Generics Manifesto" to be comparable with more mature languages like C#. Not being able to return a generic interface from a function is just silly)


I'd disagree with you to be honest. Swift is still in its infancy, and while it copied many of its features from CSharp, it did so poorly, and even now they continue to make breaking changes to their standard with each update.


Yes I have noticed this. It is interesting observing the changes to Obj-C just to keep up with the developments in Swift-land, when Swift itself is changing so frequently.


> CSharp once again showing it is the best language in terms of features and improvements.

That's a pretty sweeping generalization. Most of the headline features in the article are already in other languages e.g. Scala.


The difference being CSharp is actually used by a large number of people. Scala is not even in the top 10 used languages [0].

[0] http://www.tiobe.com/tiobe-index/


Your comment was about C# being "the best language in terms of features and improvements", nothing about usage numbers.

I agree that C# is far more widely used than Scala, but Scala is being used by significant numbers of developers esp. for Spark. Tiobe is also widely regarded as a pretty poor indicator of language adoption.


Scala is consistently #15 or #16 in various measurements. Just not in TIOBE as they decided to use the wrong search terms to look for, so TIOBE doesn't provide useful information in this case.


C# > C++?

I was hoping for deterministic lifetimes in this release.... just kidding. Must be me being used to C++'s RAII and useful destructors.

I believe your comment is a huge generalisation.


Will C# ever start to increase type inference to make it less verbose? Local functions are ok. But why not lambda syntax+inference? Probably still because of making quoted code have ambiguous syntax. But this ends up adding noise and increases the barrier on deciding when to put code into a tiny function.

When using C# I'm constantly noticing how much extra code there is. It's just frustrating, and I don't feel like I'm getting a unique benefit in return (unlike, say, in Rust). I know F# would be more concise and work just as well. But I know it's not easy extending the existing design decisions.

(The tuple story is sad, eh? This was F#'s original design, using structs. Then they made peace with the strange heap allocated framework tuple. And now...? No tuple interop?)

Anyways, congrats; C# definitely is the best general dev language commonly accepted at a large number of companies, especially when factoring in tooling.


> how much extra code there is. It's just frustrating, and I don't feel like I'm getting a unique benefit in return (unlike, say, in Rust)

Actually you are getting a benefit in return (unlike, say, in Rust) - the libraries. Verbose syntax is less complex in regard to writing the code, so more developers write the code. I have no scientific evidence to provide about the fact, but observations speak for themselves. Compare the quantity of libraries available for Rust and yet another 'overly verbose' language, Go. We can find a Go wrapper for almost anything, and often a few to choose from.


> Compare the quantity of libraries available for Rust and yet another 'overly verbose' language, Go. We can find a Go wrapper for almost anything, and often a few to choose from.

Rust was stabilized last year, while Go was released six years ago. It is simply wrong to say that Go has a larger quantity of libraries because it is more verbose. IMHO the Rust ecosystem is surprisingly comprehensive for a language its age.


No benefit over F# is what I mean - nothing about .NET requires C# to be verbose. With Rust, there's zero alternative that'll have the same guarantees and performance. So I can put up with the verbosity.


Nice! I was also hoping for some sort of guaranteed not-nullable type and immutable classes. Maybe in C# 8.


F# has guaranteed not-nullables, except when you do default<NotNullableThing> it returns null. That said, it generally works well in well-defined situations but any time there's deserialization it's often a bigger mess than just never assuming not-null.

I think it's a problem that needs to be solved in the CLR, not C#, and my guess is it probably never will be.


Does the pattern matching also statically check that you don't try to use a failed match?

What if instead of

     if (o is null) return;     // constant pattern "null"
     if (!(o is int i)) return; // type pattern "int i"
     WriteLine(new string('*', i));
someone forgot to return, as in

     if (!(o is int i)) {
         // do something
     }
     // incorrectly try to use `o` as an int
     WriteLine(new string('*', i));
Rust uses the special `match` syntax to avoid this. But I suppose C# 7.0 can also analyse the structure of the `if`s and `return`s to achieve the same effect more flexibly.

And if it can do that, could they also start phasing in statically-checked null avoidance?


This was a concern of mine as well. As you can see in the discussion of scope changes for pattern variables [1], C#'s definite-assignment checks handle this. So:

    if (o is int i) { f(i); }
    use_int(i); // <-- error: variable `i` is not definitely assigned
But then if you have:

    if (o is int i || (o is string s && int.TryParse(s, out i))) use_int(i);
We know that `i` is definitely assigned at that ___location.

[1]: https://github.com/dotnet/roslyn/issues/12597


My guess is this would be covered by the "have you initialised the variable" check, similarly to:

    int i;
    if (x == 3) { i = 2; }
    WriteLine(i);
I'd be interested to hear the answer, though.


I tend to avoid using switch/case for quite irrational reasons. The break statement just looks wrong and I can't layout the code in a way I find at all pleasing. I'll almost always use a sequence of else if conditions instead.

In my ideal world, it'd look like...

    switch (x)
    {
        case 1:
        {
            /* ... */
        }
        case 2:
        case 3:
        {
            /* ... */
        }
    }
And before anyone points it out, switch is the same as else if when the thing being compared is a simple local variable.


Most of the time when I use switches, I'm returning from inside the case, because I tend to pull the switch out into another function.

It's usually a good idea to create a new scope with your cases, so you can avoid some irritating variable name clashes, since, unlike an if, switch cases don't automatically do that, i.e.

  switch(foo) {
    case 0:
      var bar = 2;
      // do stuff...
      break;
    case 1:
      var bar = 47; // error, conflicting variable name
      // do stuff...
      break;
  }
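Wrapping each case body in its own block avoids the clash (same example, just with braces added):

  switch(foo) {
    case 0:
    {
      var bar = 2;
      // do stuff...
      break;
    }
    case 1:
    {
      var bar = 47; // fine now: each case body is its own scope
      // do stuff...
      break;
    }
  }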


Same here, plus I think the breaks are noise; can't the compiler just know that the switch statement ends at the last case?


No, as in some instances you may wish to do the same thing for two case statements, so you need it to fall through case1 into case2.

In other situations, you really don't want it to do that, so a break is required.

How would the compiler know the difference? Note that you can't "fall through" cases in Swift, and it is irritating. It stops you using the same code for two case statements - you have to duplicate it or put it into a function.


Not with nested switch statements.


...I don't want to ever have to work with your code.


But you can already do almost that, right?

    switch (x)
    {
        case 1:
        {
            /* ... */
        }
        break;
        case 2:
        case 3:
        {
            /* ... */
        }
        break;
    }


The problem is "falling through" the case statements. You can do this in C++ and C, which is really useful. You can't do this in Swift, and it's incredibly annoying.

Break is there for a good reason.


I like everything except "Ref returns and locals".

I don't think I've ever seen a case where I wished for that feature.

All the other things - tuples, anonymous out vars, pattern matching - I've found myself wanting quite often.


I believe ref returns are a performance feature. They seem a bit like move constructors in C++ in that they can save you a copy? But I am not confident I understand the nuances of move constructors to compare them properly.


They don't move anything; they are either pointers to a value on the stack or to a field in an object on the heap. The first is more like how standard C++ references are usually used, and the second is like the interior pointers Go has. No code gets executed to make them happen, and no data is copied.
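A small sketch of what that looks like in C# 7 (types and names made up):

    struct BigStruct { public long A, B, C, D; }

    class Buffer
    {
        private BigStruct[] items = new BigStruct[16];

        // Returns a reference to the array element rather than a copy of it.
        public ref BigStruct ItemAt(int index) => ref items[index];
    }

    // var buffer = new Buffer();
    // ref BigStruct item = ref buffer.ItemAt(3); // no copy made
    // item.A = 42;                               // writes through to the array element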


Since C# is starting to look more like JavaScript (i.e. local functions) and vice versa (i.e. arrow aka lambda expressions), I'm going to throw my pocket change in on a couple of features I've been growing old hoping to see:

1. Regex expressions just like Regexp in JS:

    /^\s+$/.IsMatch("  "); // or
    Regex r = /^\s+$/;
2. DateTime expressions, similar to regex, something like:

    DateTime clockTowerLightning = #1955/11/12 10:04PM#;


> 1. Regex expressions just like Regexp in JS:

I don't know about the current state of C#'s grammar, but ambiguities between this regex construct and e.g. division contribute to the complexity of correctly parsing JavaScript code.

https://tc39.github.io/ecma262/#sec-ecmascript-language-lexi...


I'll take DateTime literals, for sure. But only if they require using ISO date format. A slight munge of omitting the timezone to mean 'system local timezone' would be fine too.

My main objection is over the month/day confusion with other formats. Is that November, or December?


ISO-serialized datetimes are the only serialized datetimes. At least if you want to avoid painfulness. Similarly, UTC is good. If there's one thing I've learned, it's that, for my own sanity, datetimes should be localized and formatted at the last possible instant, just before a human has to see them, and kept in a consistent, canonical form at all other times.


To be clear, local functions long predate JavaScript. A handy regex notation would be nice, though; I could've used that on a couple of projects that I had to do in C# (corporate environment) when it wasn't really best suited to the task, at least in terms of brevity/clarity compared to other languages like Python or Perl for generating reports and similar.


Algol and Lisp already had local functions....


For item 1, extension methods and implicit conversions would get you really close:

Definition: https://gist.github.com/noblethrasher/5edffb4cb2efa4ed9174fb...

    //Usage: 

    var is_match = "  ".IsMatch("^\s+$");

    //OR

    regexp rgx = @"^\s+$";

    is_match = rgx.IsMatch("   ");
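In case the gist link ever rots, the general shape is something like this (a sketch, not the gist's exact contents):

    using System.Text.RegularExpressions;

    public static class RegexExtensions
    {
        public static bool IsMatch(this string input, string pattern) =>
            Regex.IsMatch(input, pattern);
    }

    public struct regexp
    {
        private readonly Regex regex;

        private regexp(Regex regex) { this.regex = regex; }

        public static implicit operator regexp(string pattern) => new regexp(new Regex(pattern));

        public bool IsMatch(string input) => regex.IsMatch(input);
    }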


The first is almost certainly better off as it is in F#: active patterns. That'll let you define patterns that'll execute arbitrary code to match, making regex a simple library feature, vs a special one-off. C# should have focused on having proper extensible syntax vs hard coding random features (like duck typed foreach or collection initializers).

But given the constraints of mscorp and the target users perhaps they made the right decisions.


DateTime literals... something I wish C# would borrow from VB.NET


I thought pattern matching in C# would look like pattern matching in Elixir. For example, two functions with different signatures run based on the parameters you give it.

    def hello("sergio") do
      IO.puts "Hello Sergio"
    end

    def hello(name) do
      IO.puts "Hello #{name}"
    end

    ---

    hello("sergio") => "Hello Sergio"
    hello("mia") => "Hello mia"


Isn't that more an example of `multiple dispatch` rather than `pattern matching`?


Both :) You can pattern match in function head (definition)

http://elixir-lang.org/getting-started/pattern-matching.html


In Elixir land it's been taught to me as pattern matching. I have never heard of `multiple dispatch`.


Pattern matching, tuples, local functions... feels like someone was inspired by how Typescript turned out :)


Or just looking at F#.


In the binary literal proposal there is also some unresolved discussion about octal support, chip in if you like:

https://github.com/dotnet/roslyn/issues/215


I wish more languages would take a radix-agnostic approach (at least to a reasonable limit). What's wrong with:

  2#1010101
  8#75342
  1341235
  10#13451
  16#feb1300
radix#value, from Erlang.

Or:

  #b10101
  #xfefe
  #o7777
  #36rSSSS
From Common Lisp.

Customize in some fashion for C# and its current notations for hex and (now) binary literals.

  036rSSSS
  02r10101
  0b10101 // this and above are equal


The reason people don't do this is:

1. Our current base notation is already familiar and widely used. Nobody is confused about 0xA vs 0b10.

2. Nobody will really use those other bases, so you're adding obscure features that muddle your syntax.


Wow! These are some really impressive features. I started learning F# a few months back and I can see some interesting parallels. Very excited to see Tuples and Pattern Matching.


Is 6.0 widely used in industry/open source?


25-strong dev team using C# 6 right now. Painless upgrade apart from a few pieces of legacy tooling. Some patterns haven't really been adopted much (e.g. exception filters). C# 7 looks like more of a leap but there's always an appetite to stay current in our team and I'm sure we'll take the plunge.


At my work I do a lot with 3.5 still. We need to upgrade our version of devexpress. More recent projects use MVC with .Net 4.0 or 4.5 (but our IT guy is nervous about installing such new runtimes on the servers that don't have 4.5 already..)

We use VS2012


You can easily use c# 6 with .NET 3.5. I know because we do!

You just need to upgrade VS to 2015. Language version has nothing to do with framework version!


Feel your pain. My work laptop has VS2012 on it.

Last year we were still targeting XP.


Noob question, why tuples? I mean why 3?


It doesn't mean three; it works with 2 or more.


Microsoft is mixing object-oriented and functional in C#. This is not right, and not only in the case of Microsoft but also Python etc. They are fundamentally different and self-contained and need to be kept separate.


Unfortunately, C# can flail about adopting F# features yet the best feature of F# is something C# can never get:

Dependency Order Compilation


Except it does. And F# doesn't. F# just has in-order compilation. C# has an extra step to determine dependencies and compile them in dependency order.

The only thing for F# is that F# allows the user to determine the compilation order and show it in Visual Studio.


Sorry I wasn't clear, I meant in the following sense:

"One of the most common complaints about F# is that it requires code to be in dependency order."

In light of your statement I understand the way I put it made little sense.


Hi David, language designer here.

Can you give an example of where you'd want to take advantage of this in C#?

Thanks!


I don't understand the question, rather what's the point of it?


My instant concern: the variable names used by function declarations have become part of the API. If you use var to match the library producer's naming for tuple members, a rename of the variable in the definition will break your code, as the name you previously used will no longer exist!

Old example:

    bool Hello(string name) {...}
    var myName = "world";
    Hello(myName);
The developer can change the name of variables in the prototype to be more (or less!) descriptive without a problem - they're decoupled.

New example:

    (string interjection, string name) GenerateGreeting() {...}
    var greeting = Greeting();
    Console.Write("{0} {1}", interjection, name);
Now should the author change "interjection" to "greeting", *your code won't compile*, and in a way that you likely didn't expect.


I assumed they are still decoupled and just shorthand for:

  string interjection;
  string name;
  (interjection, name) = GenerateGreeting();
I can't believe the C# team would introduce something so brittle into the language. If you're right, that's a serious concern.


The names of the tuple fields are a part of the method signature. Would you expect to be able to change other parts of the method signature, like the method name or argument types, and not have stuff break?

[edit] unless you're talking about how your example magically introduces "interjection" into the scope. That's not how it works. You would have to say "greeting.interjection". Or use the deconstruction syntax and say "var (ichosethis, andthis) = GenerateGreeting();", in which case it's the order of the fields, not the name of them, that is important.
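A quick illustration of both forms (method body made up):

    (string interjection, string name) GenerateGreeting() => ("Hello", "world");

    var greeting = GenerateGreeting();
    Console.Write("{0} {1}", greeting.interjection, greeting.name); // access by element name

    var (hi, who) = GenerateGreeting(); // deconstruction binds by position; the names are yours
    Console.Write("{0} {1}", hi, who);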


I think named tuple parameters only matter when not deconstructed; when deconstructed, the position matters and you can choose your own names. If you don't want to take that risk, don't name your output parameters.

But that's the same as named parameters, and you didn't complain about those: if the client used named parameters and you change them, it will break their code. The full signature of the method is the public API, as you can see when navigating a compiled assembly.


Agreed, I would much prefer something like:

(string first, string last) name = GetName();



