A love letter to Objective-C (thoughtbot.com)
62 points by ingve on June 26, 2023 | 92 comments



So much negativity in this thread. I, for one, absolutely loved working with ObjC. I no longer do any macOS programming, but when I did, it was magical.

Is the syntax weird and crazy? No doubt. It is, however, an extremely pragmatic approach. It's completely unique in that it offers a dynamic, very-high-level OO language on top of C, giving you a way to work on both levels: C (or C++) for the performance-sensitive parts and integration with C libraries, and ObjC for the glue that orchestrates the application. ObjC isn't without its faults, and it was starting to show its age, so it makes sense for Apple to develop Swift, though between the two languages, I think ObjC is the more innovative one.

It's a lot like having a scripting language around a C core, though ObjC has stronger static typing. Dynamic dispatch, late binding, and interfaces (protocols) result in a super rich, dynamic ecosystem of objects all interacting with each other, fulfilling the promise of Smalltalk better — in the sense of being the basis for real, mainstream products, including macOS itself — than any implementation of Smalltalk ever did.


Vala


I rooted so much for Vala when it first came out. Such a good idea, bringing the same ideas to GLib. Alas, it was not to be; the (irrational, IMO) negativity around this approach killed it before it had a chance.


>GNOME

>A whole GNOME-specific language

Lmao

I was also a big Theranos investor and an early toe sock adopter.


No mention of Smalltalk? Essentially Objective-C was Smalltalk message passing bolted onto C. Then you had Objective-C++, which allowed a whole new language to be rolled in. I still find myself writing macOS apps in Objective-C/C++, which allows me to use all the fantastic C++ libraries and frameworks that exist. No doubt Swift is a better language that gives you strong typing and functional constructs, but I'll always have a place in my heart for Objective-C.
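As a rough sketch of what that mixing looks like (hypothetical SampleBuffer class, compiled as Objective-C++ in a .mm file), a C++ container can live directly inside an Objective-C object:

    #import <Foundation/Foundation.h>
    #include <vector>
    #include <numeric>

    @interface SampleBuffer : NSObject
    - (void)addSample:(double)value;
    - (double)mean;
    @end

    @implementation SampleBuffer {
        std::vector<double> _samples;   // a C++ container as an ivar of an ObjC object
    }
    - (void)addSample:(double)value { _samples.push_back(value); }
    - (double)mean {
        if (_samples.empty()) return 0.0;
        return std::accumulate(_samples.begin(), _samples.end(), 0.0) / (double)_samples.size();
    }
    @end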


Why would they mention Smalltalk? The article is framed from the writer's specific experience with both languages, not as a historical deep dive into the progenitors of Objective-C.

Granted, these historical details are super interesting, but this feels far from the intent of the article.


Because that is in essence why Obj-C works the way it does -- e.g., has all the square brackets. Obj-C is the child of Smalltalk and C. It wasn't created in a vacuum.


Ruby is also based on Smalltalk. The author is just showing ignorance of where things come from.


Thoughtbot has famously been a Ruby shop since its inception; I don't know that this detail just happened to slip their mind due to ignorance.


I’ve written a lot of Objective C.

It is by far the worst language I’ve used professionally.

While I don’t love Swift it is still a breath of fresh air compared to ObjC.


What are your complaints, specifically? In the beginning I also hated it (mainly because it was on a platform I didn't care much about), but the more ObjC I wrote (mainly in the context of Metal), the more I started to appreciate the "minimal elegance" which is the complete opposite of C++. I don't appreciate the fact that every object access is a dynamic method call though ;) (but that's where C comes in)

Typically I wrap the ObjC code which needs to call into system frameworks in subsystems which then expose a simplified C API (same with C++ libraries: wrap the C++ shim under a simplified C API), so that most platform- and library-agnostic code is written in plain C.
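A rough sketch of that wrapping pattern (file and function names are hypothetical): the header is plain C, and only the .m file knows about Cocoa:

    /* subsystem.h - plain C header, usable from C, C++, and ObjC alike */
    #ifdef __cplusplus
    extern "C" {
    #endif
    void window_set_title(void *ns_window, const char *title);
    #ifdef __cplusplus
    }
    #endif

    /* subsystem.m - the ObjC glue hidden behind the C API */
    #import <Cocoa/Cocoa.h>
    #include "subsystem.h"

    void window_set_title(void *ns_window, const char *title) {
        NSWindow *window = (__bridge NSWindow *)ns_window;
        window.title = [NSString stringWithUTF8String:title];
    }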

ObjC never got in the way though, and the nice thing is that (unlike C++) it supports the latest C features.


In my case, plenty of @ and [] to type.

C++ already supports more C than it should.


ObjC 2.0 introduced dot syntax for property access, which gets rid of most of the [] (but it's still a double-edged sword, because it looks like a cheap operation while being a method call under the hood).
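For illustration (hypothetical person object), both spellings compile down to the same dynamic message send:

    NSString *a = person.name;    // dot syntax: reads like a struct field access...
    NSString *b = [person name];  // ...but is the exact same objc_msgSend as this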

The '@' might look ugly, but it's a good way to separate ObjC keywords from C and C++ keywords, so that ObjC can introduce new keywords without risking breaking existing code.

(But this is like complaining that Python uses significant whitespace for scope blocks, or that C and C++ have too many {}; it's just a matter of getting used to it.)


The whitespace significance of Python will never be an equivalent comparison here. Whitespace significance adds unnecessary parsing and metaprogramming complexity, and adds a significant vector for subtle and difficult-to-detect errors in code and data (in Python, YAML, etc.)


Dot syntax does nothing for regular messages.

Forgetting how hard they are to type on some keyboards, depending on locale?

Not everyone is using US layouts.


> Forgetting how hard they are to type on some keyboards, depending on locale?

{ } is just as bad on German keyboard layout ({} is right-Alt 7/0 and [] is right-Alt 8/9 - on Windows at least, don't even remember on the Mac).

That's basically why I switched to English keyboard layout right after I left the Amiga (which actually had a decent German layout for programmers).

> Dot syntax does nothing for regular messages.

True, but at least in the Cocoa and Metal code I usually write, there's a lot more property accesses than traditional method calls.


Xcode autocompletes the closing brackets!


Care to elaborate?

Objective-C's simplicity and minimalism are what make it one of my favorites.


I like Objective-C too. It's just C with a very good lightweight abstraction layer for modeling GUI apps.

I'm enough of a purist to think Obj-C 2.0 (introduced around 2007 at the same time as iOS) was mostly a mistake. In old Objective-C, it was absolutely clear when you were sending a message. You couldn't mistake a line of C code for an Obj-C message call.

But Obj-C 2.0 lets you use the same syntax for message calls and struct member access, as in:

  foo.view.bounds.size.width
Two of these dots are expensive virtual message calls. The other two are practically free stack pointer offsets. It's not in good C spirit to introduce this kind of confusion about performance impact. (However it's very much in the worst C++ spirit of frivolous overrides.)
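Spelled out (a hedged sketch of what the chain desugars to):

    CGRect  bounds = [[foo view] bounds]; // two objc_msgSend calls (dynamic dispatch)
    CGFloat width  = bounds.size.width;   // two plain struct member reads, no dispatch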

And the benefits were mostly meaningless because the old way (which still works) was barely any longer to type, especially with IDE autocomplete:

  [[foo view] bounds].size.width


>But Obj-C 2.0 lets you use the same syntax for message calls and struct member access

I'm in 100% agreement: you should never hide the message-passing syntax, and I am not a fan of this feature.

Alas, it was probably introduced as a sop to those people who are constantly complaining about the syntax, and in the process it made the language worse ergonomically, not better.


Yep.

And what's even worse is that the semantics are muddled.

    foo.view.bounds.size.width = 200;
This will not set the width of the view to 200.
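The reason is that bounds is returned by value: the chained assignment only pokes a temporary copy of the struct (when the compiler accepts it at all). The usual write-back pattern, as a sketch:

    CGRect bounds = foo.view.bounds;  // getter message returns a CGRect copy
    bounds.size.width = 200;          // plain struct write, only touches the local copy
    foo.view.bounds = bounds;         // setter message actually updates the view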


There is nothing simple or minimalist about C-style separate header files with hacked-on property synthesizers. The language is an aggregation of hacks.


There can be simple, minimalist aggregation of hacks.


I have written a lot of Objective C. I don't use it much anymore, except to maintain some existing applications. That said, I still believe my time with it was much more ergonomic for my applications than Swift -- particularly its ability to act as a systems language -- given that it is a strict superset of C. Further, with the introduction of Swift I chose to change career direction, perhaps in part because of my dislike of the new language. I chose to go deeper into dynamic languages and now use Clojure, and to a much lesser extent, Python, professionally.


What languages have you used?


C, C++, Java, Kotlin, JavaScript, Python, Ruby, Objective C, Swift


Four of those languages are ones where you can almost always statically tell when code is dead. Two or three of them are dynamic languages with large investments by multiple companies and communities to make the dynamicness less terrible for performance and code size.

The one that is neither is objc (ignoring recent awesome improvements like objc_direct).


I'd argue that C++ and Java are way worse than Objective C.


All are the same language.


I suppose you were expecting him to throw Smalltalk or Clasp or Prolog/Datalog in there to get your approval? This was rude.


Objective-C is probably the worst language I’ve ever encountered (speaking as a compiler guy).

A language where you can't prove anything is dead, and where you can't make any changes to the runtime to deal with the dynamic semantics, is not great.

ObjC did one neat thing: having language-level runtime type resolution address the component-oriented programming and ABI stability issues present in C++ (and later addressed by things like COM). But at the cost of some seriously bad performance characteristics.


Yeah, compiler people hate it. Because there's very little for them to do.

The question is whether programming languages exist to make the compiler (people) happy, or to enable programmers to build great software.

I'd argue the latter.

Apple has now accepted the former.


They have done some nice stuff recently with objc_direct. The minute you can annotate away the dynamism it's much better. The question is what's the best way to do this? Lots of folks have been investigating analysis of entire apps either at the source level or some IR (maybe MLIR some day).
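A small sketch of the annotation in question (hypothetical Renderer class; objc_direct is the clang attribute being discussed):

    // A direct method is dispatched like a plain C function call instead of going
    // through objc_msgSend, so it can be inlined and dead-stripped, but it gives
    // up the dynamism: no swizzling, not reachable through the runtime.
    @interface Renderer : NSObject
    - (void)drawFrame __attribute__((objc_direct));
    @end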


objc_direct is pretty awful.

> The minute you can annotate away the dynamism it’s much better.

Only if you think dynamism is a bad thing. It isn't. There are cases when you might want or need to take it away, but they are both (a) rare and (b) context sensitive. So taking the dynamism away at the supplier side is the wrong approach.

> analysis of entire apps either at the source level

Dramatically worse build times for little to no gain. What's not to like?!


People already enable pgo and lto which dramatically makes their build times worse all the time just to produce a shippable app/dylib. When large objc codebases (apple, meta, google scale code bases) ship frameworks and apps they basically can’t do so without this sort of thing. World builds at Apple literally take days.

It's also not little to no gain: codebases that misuse/overuse ObjC instead of C/C++ often get vastly better startup time (fewer selectors to page in at app boot) and app size because of this sort of thing (especially with LTO and -Oz).


> People already enable pgo and lto which dramatically makes their build times worse all the time

No. Not "all the time". Even the cases you listed are fairly unique corner cases.

> When large objc codebases (apple, meta, google scale code bases) ship frameworks and apps they basically can’t do so without this sort of thing

Apple does this not because the code would not be shippable, but because doing so for an OS once before shipping is a useful performance optimisation. Source: I used to work on the OS X performance team.

Haven't heard that FB needs to do this, though their mobile development practices are so fubared that I wouldn't be surprised.

The case that I do know about where they absolutely had to do crazy things to be shippable (more precisely: to remain below the iOS AppStore cellular download limit, a P1) was Uber. But that wasn't Objective-C, that was their Swift rewrite.


> Source: I used to work on the OS X performance team.

I am so sorry to hear that.

When I was at Apple I worked on a different part of the stack so my experience is not first hand here, but the folks I knew who worked on optimizing the shared cache either through ld or clang paint a different picture. I don't think they would ship without those performance optimizations, but I take your point that at least at the time you were there it was not as dire. That's not what it sounds like when I speak to those folks in the last year or two however (for iOS anyways). I don't think there's any universe where they wouldn't do these optis at the very least before shipping, but I don't mean that they can't have functioning non-optimized builds internally.

Certain apps mentioned above really can not ship without this stuff, but you're right about the development practices. Given the app size limits and startup time constraints as well as the coding practices it is really not feasible to not do things like LTO, objc_direct as much as possible, and align the startup path of an app by page boundary etc.

> was Uber.

I am well aware of the Uber Swift rewrite and when I say "unable to ship" I don't mean it to that extent in every case, but my point is I don't think anyone is going to worry about lengthy prod build times if having faster builds means less engagement even in the single digits.


OS X performance was lots of fun. Mostly ;-)

> I don't think they would ship

Notice "they" and "would". Which is completely different from "all the time just to produce a shippable app/dylib".

Sure we would apply various optimisations, including some very nasty, hacky and slow techniques in order to produce order files. But this is not something people do "all the time", it is something that Apple does. Because Apple produces the operating system, and because Apple has certain performance goals. Including not having performance regressions (unless specifically allowed). So once you start doing this, you can't stop unless you can find the performance elsewhere.

It is not necessary to produce a "shippable" app or dylib, the vast majority of apps or dylibs can be and are shipped without doing any of this.


> It is not necessary to produce a "shippable" app or dylib, the vast majority of apps or dylibs can be and are shipped without doing any of this.

You're not wrong. I think I engaged in a little bit of hyperbole here because I often fall into the thinking of top 10 or 20 appstore app terms. Not certain but if memory serves I think probably even Meta's least aggressively optimized app still does the orderfile thing.


> I often fall into the thinking of top 10 or 20 appstore app terms

You're not the only one. As far as I can tell, the vast majority of (particularly, but not exclusively) compiler work is driven by these edge cases.

As an example, when I look at the justifications for pushing undefined behavior exploitation in C, invariably some Very Smart Compiler Guy™ will state with great authority that this is absolutely essential and completely unavoidable. When you check more closely, it turns out that it enables a somewhat obscure optimisation that at best most people can do without and at worst is just about completely useless.

See Proebsting's Law, which was unduly optimistic, and also some of the work on variability in benchmarking, for example how different lengths of environment variables can dwarf most optimisation effects.

Daniel Bernstein stated this a bit more aggressively as "The Death of Optimising Compilers"[1]. He's not wrong, either, but when I mentioned this at a Google event, Chandler Carruth basically blew up and said it was complete garbage. Apparently he and Daniel almost came to blows over this at some conference.

At the time, Google was working on a far less optimising JS runtime for Chrome, after they found out that their super-simple runtime (mostly just interpreted) for small devices was beating their super-duper sophisticated Turbofan JIT on many real world tasks. And of course Microsoft found out that just turning off the JIT completely has marginal real-world impact.[2][3][4]

So what's the disconnect? How can these two very smart and knowledgeable people come to such stunningly different conceptions of reality, so different as to be not just incompatible, but apparently mutually incomprehensible? I think (Turing Award Winner) Michael Stonebraker had a great analysis[5] of what is happening, though for him it was in the field of databases: nearly all the research today, when it isn't completely removed from reality, is focused on the needs of what he calls the "whales", the 0.01% or so of users who are big enough that they sponsor that sort of work (Google, Apple, Microsoft, Facebook, ..).

So in the relevant community, the needs and the problems of the whales define reality. But they are far, far removed from and often in conflict with the needs of the other 99.99% of users.

Which goes back to your comment about compilers and compiler people optimising for the "common case". For that to work, you have to actually be aware of the common case.

[1] https://cr.yp.to/talks/2015.04.16/slides-djb-20150416-a4.pdf

[2] https://microsoftedge.github.io/edgevr/posts/Super-Duper-Sec...

[3] https://blog.metaobject.com/2015/10/jitterdammerung.html

[4] https://news.ycombinator.com/item?id=28735392

[5] https://youtu.be/DJFKl_5JTnA?t=677


Also, my answer to your question is: I think both. If it's generally too hard for compiler people to optimize for the common case then it will be harder for folks to build great software. I'd also argue that Apple did not do the former with regard to memory safety; Rust did.


Demonstrably false.

It is easy to write fast software in Objective-C. Harder in Swift, for example.


What part are you responding to? I in no way made any claims about objc versus swift performance. I said the way they have tried to address memory safety assumes more compiler optimization and less burden to the programmer than rust.


"If it’s generally too hard for compiler people to optimize for the common case then it will be harder for folks to build great software."

The compiler people don't need to optimise for the common case.


That's literally all compiler people do: if we see a pattern people are using 80 percent of the time, we find a way to make it better. There are ways to make ObjC better optimized at the compiler and runtime level, but no one is exactly asking for patches for doing so for ObjC, for very obvious reasons (which I am not going to discuss on a public forum).


> That’s literally all compiler people do.

I know. My point is that in lots of cases it is unnecessary and in fact unhelpful.

> make it better

Well, you first need to get agreement from your language users on what constitutes "better". Removing the dynamism from a dynamic language does not make it "better".


> Well, you first need to get agreement from your language users on what constitutes "better". Removing the dynamism from a dynamic language does not make it "better".

I think that depends on if said dynamism is being intentionally used. Far too often it is not, and it gets even worse when that is paired with tools that do automated source code generation like the kinds Meta and Google use (or misuse depending on your definition).


Can you give some examples? "Easy" and "hard" are subjective.


1. Swift tends to be about on par with Java in the language shootout.

2. Java tends to be at least around 2x slower than C.

3. For my book[1], I wrote a chapter on Swift and did quite a bit of benchmarking. Just how bad Swift was actually surprised me, and I wasn't expecting it to be particularly good.

4. Case study: https://blog.metaobject.com/2020/04/somewhat-less-lethargic-...

Even the worst Objective-C implementation I could think of (really comically bad) was 3.8x the speed of Swift's compiler-supported super-duper-sophisticated implementation. With a bit of work I took that to around 20x.

[1] https://www.amazon.com/gp/product/0321842847/ref=as_li_tl?ie...


The other thing that ObjC does right is that it builds on top of an unmodified C (unlike C++, which went all in, messed everything up, and now needs the C++ standard to change to integrate new C features, which then usually ends up as a half-assed and incomplete attempt - see designated initialization). C and ObjC can be improved independently from each other, and combining them "just works" without "harmonization efforts" by language committees.

Also, doesn't the existence of ARC mean that ObjC supports better "static lifetime analysis" than C++? Or is C++ just behind the curve there?


ARC is runtime reference counting. How does that offer static lifetime analysis? C++ also has a form of ARC in the form of shared pointers; the only difference is that in C++ it's opt-in, while with ObjC/Swift ARC it's just the way it is.


Objective-C has an advantage: the compiler is able to elide retain/release pairs due to ARC semantics, whereas in C++ there are no special semantics defined in ISO C++ for smart pointer classes.
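A rough illustration of what that means (hypothetical widget property; the retain/release mentioned in the comments are inserted by the compiler, not written by hand):

    Widget *w = self.widget;   // ARC inserts a retain when storing into the strong local
    [w refresh];
    // ARC inserts the matching release when w goes out of scope. Because these
    // semantics are part of the language, the optimizer may prove the pair
    // redundant and drop both calls; shared_ptr has no such status in ISO C++,
    // so its reference-count updates generally survive optimization.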


ARC does static lifetime tracking (or maybe better: ownership tracking) to remove redundant reference-counting operations. Highly simplified: it basically makes the decision for you when to use the equivalent of a shared_ptr or unique_ptr under the hood, which is at least nice for API design, since functions just take or return generic "object references" instead of raw, shared, or unique pointers like in C++ (which then is usually not what the caller wants in a specific situation).

ARC is not perfect because it has to be fairly conservative (it can be manually tuned though), but ObjC still seems to give the compiler more static information than shared_ptr and unique_ptr in C++ which are purely stdlib runtime constructs.


I'd imagine Swift also does this?


AFAIK Swift's ARC is the same thing, yeah.


That “one neat thing” was absolutely huge for APIs and applications. Much more significant for a platform than any feature of C++ ever, for instance.

The performance wasn’t a real problem because you could always use C types for data on performance-sensitive paths. Nobody does Obj-C message calls to iterate over audio samples.
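As a sketch of that style (hypothetical GainNode class): the object is Objective-C, but the hot loop is plain C with no message sends per sample:

    #import <Foundation/Foundation.h>

    @interface GainNode : NSObject
    - (void)applyGain:(float)gain;
    @end

    @implementation GainNode {
        float  *_samples;   // plain C buffer owned by the object
        size_t  _count;
    }
    - (void)applyGain:(float)gain {
        float *s = _samples;              // raw C pointer, read once
        for (size_t i = 0; i < _count; i++) {
            s[i] *= gain;                 // no objc_msgSend in the hot loop
        }
    }
    @end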


Yep. I think right now we just have too much objc code written that doesn’t need any of the parts of objc needed for frameworks.


But as an app developer, I love using Objective-C because it lets me build beautiful things without worrying about making the compiler happy.


> It’s really a testament to the Objective-C runtime (and Ruby C extension API) that while much of the original code was written in 2001, an incredible amount of it runs unchanged over 20 years later on modern versions of macOS and Ruby.

This is one of the things I love. Starting in 2014 with the introduction of Swift and continuing to the present, I've heard people claim that every line of Objective-C code you write is technical debt, but the opposite turned out to be true: any Swift code you wrote in the early years is now completely unusable. Apple even removed some Swift version migration tools from Xcode. Meanwhile, old Objective-C code still Just Works™. I continue to write ObjC today, with no regrets.

Another thing I love is that you can import open source C projects with no changes whatsoever.

Compiling is much faster. The debugger actually works (though Apple has been making it worse lately). There are so many practical advantages to Objective-C, even if it's not the most "beautiful" language. Objective-C has always valued pragmatism over idealism, as the marriage between high level object-oriented programming and low-level systems programming.


> The language has respectably reached that level of maturity and support that allows it to just continues on, quietly running in the background.

Didn't Apple tinker with Objective-C a lot during its runtime, or am I just concentrating on the bumps? I remember reading the Cox book ages ago, where you basically had a simple Smalltalk-ish message syntax on top of plain old C. NeXt you got protocols etc.

Over the time of it being heavily used on OS X and especially iOS (where a lot of new and fledgling programmers came in), all kinds of stuff was added: properties, various memory management experiments, etc.

Sure, after Swift was introduced, that seemed to have calmed down a bit. But "mature"?

And no, it's not like other languages fare reasonably better, now that most of them are handled like products.


Apple added a lot of sugar to different parts of Objective-C and a few side features – dot syntax, synthesised properties, blocks, etc. They're still adding tweaks like nullability, generics and other Swift-compatibility features. And they were continually changing things like NSString and NSNumber internal representations and allocation patterns to avoid actually allocating.
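For reference, a small sketch of those Swift-compatibility annotations (hypothetical Cache class):

    NS_ASSUME_NONNULL_BEGIN
    @interface Cache : NSObject
    // Lightweight generics: the key/value types are visible to Swift and the compiler.
    @property (nonatomic, copy) NSDictionary<NSString *, NSNumber *> *counts;
    // Nullability: imported into Swift as an Optional return value.
    - (nullable NSNumber *)countForKey:(NSString *)key;
    @end
    NS_ASSUME_NONNULL_END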

They really only changed the runtime once – Objective-C 2.0 – which they mostly hid behind the transition to 64-bit. This introduced layout resilience (so they could add fields to objects without breaking compatibility) and tagged pointers (so the reference count could be stored inside the pointer).


There have been a couple of more runtime changes, for performance reasons, and better integration with Swift.


I don't miss Objective-C because I still use it on a daily basis. I also like Swift; I use it daily at work, but I don't see the point of using Swift when working on solo projects.

ObjC is fast, fun and I personally feel very productive in it.


One terrible thing about ObjC is that it considers it valid to call methods on null pointers. It just does nothing and returns 0/false/nil. This leads to especially frustrating bugs.
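For example (a sketch, nothing platform-specific):

    NSString *name = nil;
    NSUInteger len = [name length];        // no crash: a message to nil returns 0
    if ([name isEqualToString:@"x"]) {     // likewise just returns NO
        // never reached
    }
    // The bug surfaces later, far from here, as a mysterious 0/NO/nil value.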


I like many of the highlights of the article. I found it odd that Objective-C was described in terms of Ruby, rather than the reverse, given that Objective-C was initially developed more than a decade earlier than Ruby. I do understand that the author came to Objective-C after a long history with Ruby.


I love both Objective-C and Swift. I prefer the former simply because it feels quite novel and interesting in 2023


Nobody loved objc. Apple forced people to do so. Sometimes people get Stockholm syndrome


They actually did.

When OS X was being developed, Apple was unsure about Objective-C's potential adoption, so it made Java a tier 1 language, with an Objective-C bridge, just like Swift nowadays.

When it was clear that the Mac OS community, raised on Object Pascal and C++, had no issues with Objective-C, the Java Bridge was dropped from OS X, and Apple contributed their JVM implementation to Sun.


Wow, never heard about this. Do you have more info?


Here,

"Steve Jobs reveals OS X Strategy WWDC 1998" => Java section

https://www.youtube.com/watch?v=RJ5BM9hNnwg

Java Bridge to Objective-C

https://developer.apple.com/library/archive/documentation/Co...

Relevant part from the introduction

"The original OpenStep system developed by NeXT Software contained a number of object-oriented frameworks written in the Objective-C language. Most developers who used these frameworks wrote their code in Objective-C.

In recent years, the number of developers writing Java code has increased dramatically. For the benefit of these programmers, Apple Computer has provided Java APIs for these frameworks: Foundation Kit, AppKit, WebObjects, and Enterprise Objects. They were made possible by using techniques described later in this document. You can use these same techniques to expose your own Objective-C frameworks to Java code."

Cocoa Java integration

https://developer.apple.com/library/archive/documentation/Co...

Java deprecation on OS X

https://developer.apple.com/library/archive/releasenotes/Jav...

And after the deprecation/removal, contribution of their own changes back to OpenJDK

https://www.infoworld.com/article/2078216/apple-joins-oracle...


Sadly that video is no longer available to watch :(


It's probably easier to use than COM on Windows. The other interesting thing about it was that, because of the message passing and the easily inspectable objc_methname sections in the Mach-O, it was probably super easy for those early iPhone app coders to write some early apps before Apple even shipped their first App Store and the iPhone OS 2.0 SDK.


COM is great as an idea.

Unfortunately WinDev is so siloed into their way of thinking that they never managed to create productive tooling.

The exceptions being VB 6, .NET Native and C++/CX.

It is somewhat tragic that Delphi and C++ Builder are the longest-lived toolchains that offer usable productivity with COM, while Microsoft itself is unable to keep productive tooling for COM going.


Probably nobody has ever gotten Stockholm Syndrome, including the hostages in Stockholm.


In a Babylon 5 episode (set a few centuries in the future) they call it the Helsinki Syndrome, which I thought was a clever little nod by the writer to remind us how a legend continues to mutate.


I'm not aware of a mention in B5 but they do call it Helsinki Syndrome in Die Hard.


NeXT loved it, and produced an amazingly ergonomic and productive application development experience on the back of it.

In the early days of NeXT (legend, and YouTube videos, have it) they wanted to address some of the shortcomings of the Mac, and some of the most consistent feedback was that it was so "darned hard" to program the Mac, so ObjC (along with IB, etc.) was the answer.


I bought a NeXT because, among other reasons, it had Objective-C.


My graduation thesis was porting a particle engine from Objective-C/OpenGL into C++ on Windows, because NeXT's future was uncertain and my thesis supervisor wanted to get rid of his Cube.


What about those of us who liked it before Apple touched it?


It was a figure of speech. You were a tiny minority

https://www.researchgate.net/figure/Popularity-Trend-of-the-...


It's not just Objective-C: the entire Apple ecosystem that grew up around it is a tangled ball of mess.


It's "abstractions" that led to this: https://youtu.be/kZRE7HIO3vk [The Thirty Million Line Problem]

And Apple is one of the companies directly responsible for the explosion in complexity - and insanity - in modern computing.

It's all so deeply entrenched - the hardware, abstraction layers, languages, culture - that it's difficult to imagine ever being able to extricate ourselves from this mess.

We need another 'quantum leap' in technology, or some new paradigm...


> It's "abstractions" that led to this: https://youtu.be/kZRE7HIO3vk [The Thirty Million Line Problem]

> And Apple is one of the companies directly responsible for the explosion in complexity - and insanity - in modern computing.

I’m not going to watch an almost two hour long video to check whether it supports that claim or not. Can you elaborate, especially on the “directly” and “insanity” parts?


Here's a summary of the video:

Casey Muratori makes the point that we've had phenomenal growth in hardware power, but our software is not correspondingly better. Computers should be much more pleasant to use. We all know what he's talking about: laggy software, crashes, updates, driver issues, etc.

Things are no longer coded from scratch - we pile code on top of code on top of code. Features accumulate, together with complexity - and bugs.

How many lines of code does it take to read a text file, in a web browser? Across the whole stack - browser, website, OS, router - he estimates 56 million LoC.

The OS alone takes 30 million lines of code!

And yet, in the 90s, demo writers routinely produced literal operating systems on a single floppy disk. Remember Amiga demos? They 'hit the metal' because they weren't hobbled by drivers and other abstractions.

But as hardware developed, the manufacturers removed that access, forcing developers to work through abstractions. A SoC is a super powered 1980s home computer, but we can't actually reach that hardware.

And it's up to hardware people to start taking this problem seriously.

And now my comments:

Apple is one of those manufacturers.

This article hit a nerve with me, because I dumped iOS development; because of the complexity and insanity. Swift is a sprawling behemoth of a language now, hugely complex, full of gotchas. In some senses, it has to be - because it's a reflection of the hardware.

I'm not going to elaborate more. Quite simply: you can't program your iDevice, EVER, without going through Apple's prescribed programming language(s), frameworks, libraries, LLVM, and hardware drivers.

That's Apple's fault. Computing should not be like this.


> And yet, in the 90s, demo writers routinely produced literal operating systems on a single floppy disk. Remember Amiga demos? They 'hit the metal' because they weren't hobbled by drivers and other abstractions.

That was possible because the hardware was essentially a "game engine API" (i.e., the hardware was designed to be conveniently used directly, and was also designed with game requirements in mind).

Whether that's good or bad is definitely debatable, for instance on the Amiga the need for hardware backward compatibility because of popular games that directly accessed the hardware was definitely stalling the Amiga's progress in the 90's.

I don't disagree with most of your points though. We're definitely in an age of unnecessary software bloat, but I think the answer isn't to go back to accessing hardware directly. A lot can be gained by just applying some common sense when writing code.

PS: Apple or the iPhone definitely isn't at fault though. The mess already started with the PC and Windows (but this mess also gave us incredibly fast hardware progress in the second half of the 90's, by allowing hardware to evolve faster behind driver abstraction layers).


Mobile development, let alone iOS, is a substantially small amount of programming. Bizarre to blame Apple specifically, let alone Objective-C. It seems like you would just as easily be unhappy with web development or Android development as you would be with iOS development.


Hmm...Objective-C is exactly the kind of language that helps you prevent 30 million line problems.

Of course it doesn't do this by itself. If you insist on using it like a C++ or a Java, you will get the same 30 million line problem, slightly worse, because Objective-C is not a good C++ or Java. But if you can use it right, as the good folks at NeXT did, then you can work magic (aka. a sufficiently advanced technology).


I don't like to be overly enthusiastic, but an obvious instrument I see right now that would be capable of refactoring 30 million lines is LLMs. They might be a good tool for providing superhuman code simplification.


I don’t think that throwing even more complexity at not only our problems but also our solutions is the lesson from that.


It's a one-time shot though. You see what suggestions the LLM is producing, and decide whether or not to apply them. Once the code is refactored, you get back to regular coding.


That doesn't sound like it's less work than an "AI free" approach. The tricky part with refactoring isn't figuring out what needs to be changed and how, but to implement the change in a way that it works with the rest of the system, while at the same time actually improving things (and not just applying a different coding style).



