
The C23 edition of Modern C

515 points | 1 year ago | gustedt.wordpress.com
eqvinox1 year ago

> The storage order, the endianness, as given for my machine, is called little-endian. A system that has high-order representation digits first is called big-endian. Both orders are commonly used by modern processor types. Some processors are even able to switch between the two orders on the fly.

Calling big endian "commonly used by modern processor types" when s390x is really the only one left is a bit of a stretch ;D

(Comments about everyone's favorite niche/dead BE architecture in 3… 2… 1…)
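For anyone who wants to actually see the storage order the book is describing, here is a minimal sketch (assumes a hosted C99-ish compiler; a little-endian machine prints the low-order byte 78 first):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        uint32_t v = 0x12345678;
        unsigned char bytes[sizeof v];
        memcpy(bytes, &v, sizeof v);       /* copy out the object representation */
        for (size_t i = 0; i < sizeof v; i++)
            printf("%02x ", bytes[i]);     /* "78 56 34 12" on little-endian, "12 34 56 78" on big-endian */
        putchar('\n');
        return 0;
    }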

legends2k1 year ago

Arm is bi-endian and is alive in most phones.

I agree with another GP's comment that modern doesn't mean popular/widely used.

unscaled1 year ago

The book does say "Both orders are commonly used by modern processor types". I'd say this sentence is quite misleading, since it would lead you to believe two falsehoods:

1. That both byte orders are equally prevalent in the wild, particularly in systems that are expected to run modern C code.

2. That both byte orders are equally likely to be found in "modern" (new or updated) processor design.

It's not entirely incorrect, but a better phrasing could be used to clarify that little-endian is the more modern and common storage order, but you still cannot ignore big-endian.

38362936481 year ago

Don't a bunch of web protocols use big endian?

nineteen9991 year ago

You can go lower than that, TCP/IP itself is big-endian (see RFC 1700).
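A quick sketch of what that means in practice, using the POSIX htonl()/ntohl() conversions (the address value is just an example):

    #include <arpa/inet.h>   /* htonl(), ntohl() - POSIX */
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t host = 0x0A000001;      /* 10.0.0.1 as a host-order integer */
        uint32_t wire = htonl(host);     /* network byte order is big-endian */
        /* On a little-endian host the two values differ; on a big-endian host they are equal. */
        printf("host 0x%08x  wire 0x%08x\n", host, wire);
        return 0;
    }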

umanwizard1 year ago

I really doubt any mainstream smartphone runs their Arm chip in big-endian mode ever.

legends2k1 year ago

That's beside the point. The book's author has a valid point. Being pedantic should be applied at all levels if you're going that route.

eqvinox1 year ago

The problem about being pedantic is that you can choose different directions to be pedantic in. My "direction" is that code isn't written in a vacuum, it mixes with code millions of other people wrote and runs on machines millions of other people built. As such:

My concern isn't that the phrasing in the book is wrong, and I have expressly not argued that. It's that it presents the issue as having no further depth, and these two choices as equivalent. They aren't. The "Some processors are even able to switch between the two orders on the fly." that follows makes it even worse; at least to me it really sounds like you needn't give it any care.

And the people reading this book are probably the people who should be aware of more real-world background on endianness, for the good of the next million of people dealing with what they produced.

kelsey987654311 year ago

MipsBE is very common in edge devices on many networks. You may have 5 MipsBE devices in your home or office without realizing. It's almost never an issue so nobody cares, but they are common.

skissane1 year ago

> Calling big endian "commonly used by modern processor types" when s390x is really the only one left is a bit of a stretch ;D

POWER is bi-endian. In recent versions, Linux on POWER is little-endian (big-endian Linux on POWER used to be popular, until all the distros switched some years back), while AIX and IBM i are big-endian.

AIX and IBM i are probably not quite as alive as IBM mainframes are, but AIX is still arguably more alive than Solaris or HP/UX are, to say nothing of the dozens of other commercial Unix systems that once existed. Likewise, IBM i is just hanging on, yet still much more alive than most competing legacy midrange platforms (e.g. HP MPE which has been officially desupported by the vendor, although you can still get third party support for it.)

ondra1 year ago

MIPS is still quite alive in consumer networking hardware.

eqvinox1 year ago

True - but at the same time, about half¹ of it is mipsel, i.e. in little-endian mode :). It's also in decline, AFAICS there is very little new silicon development.

¹ on the OpenWRT table of hardware

tonetegeatinst1 year ago

I'm currently learning MIPS assembly using MARS and QtSpim.

Any recommended hardware for messing around with bare-metal development? Hopefully priced like an SBC such as the Raspberry Pi.

I want to move on from basic programs (adding numbers, messing with functions, etc.) and bring my MIPS assembly up to a real hardware environment.

AntoniusBlock1 year ago

Many routers use the MIPS ISA and they can be rooted to get shell access. That's what I did with an old Netgear router, which was like a very low spec SBC. If you have a PS2 lying around, you could try that.

unilynx1 year ago

well in a way all processors commonly use them... as big-endian is also the network byte order

flohofwoe1 year ago

...x86 CPUs actually have special mov instructions now to load big endian data. Not sure since when, though (on godbolt it needs `-march=native`):

https://www.godbolt.org/z/bWfhGx7xh

...without -march=native it's a mov and bswap (so not too bad either).
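For the curious, a portable load of a big-endian 32-bit value can look like the sketch below; recent GCC/Clang usually recognize the pattern and, with an appropriate -march, may emit a single MOVBE instead of mov + bswap (not guaranteed, just what I'd expect):

    #include <stdint.h>

    /* Read a 32-bit big-endian value from a byte buffer. Correct on any host;
       compilers typically collapse it to bswap or movbe on x86. */
    uint32_t load_be32(const unsigned char *p) {
        return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
             | ((uint32_t)p[2] <<  8) |  (uint32_t)p[3];
    }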

cmovq1 year ago

Looks like it was introduced with Haswell (2013). So it’s safe to use if you’re also compiling with AVX2.

umanwizard1 year ago

Isn't Sparc big-endian?

zifpanachr231 year ago

Is SPARC still seeing serious hardware development like s390x? I know it's still around but I can't recall the last time I heard of any new hardware.

Keyframe1 year ago

well, network byte order is a thing. Not a processor though.

throwaway199721 year ago

"Modern" doesn't mean "currently widespread".

eqvinox1 year ago

Indeed, if it meant "currently widespread" there'd be a stronger argument for Big Endian with a lot of MIPS and PPC chugging away silently. But interpreting "modern" as recent development, BE is close to gone.

throwaway199721 year ago

Is there some end to this criticism or do you have some stake in dismissing big endian architectures?

israrkhan1 year ago

The most important aspect of C is its portability, from small microcontrollers to almost any computing platform. I doubt that any new version of C will see that much adoption.

If I want to live on cutting edge I would rather use C++2x or Rust rather than C.

Am I missing something? What benefit does this supposedly modern C offer?

flohofwoe1 year ago

One advantage of writing C code is that you don't have annoying discussions about what idiomatic code is supposed to look like, and what language subset is the right one ;)

For the cutting edge I would recommend Zig btw, much less language complexity than both modern C++ and Rust.

One good but less visible side effect of C23 is that it harmonizes more syntax with C++ (like ... = {} vs {0}), which makes it a bit less annoying for us C library maintainers to support the people who want to compile their C code with a C++ compiler.
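Concretely, the harmonized initializer syntax looks like this (the empty braces are C23; the designated form has been valid C since C99 and C++ since C++20):

    typedef struct { float x, y, z; } vec3;

    vec3 a = {};            /* C23 and C++: everything zero-initialized */
    vec3 b = {0};           /* the traditional C89..C17 spelling */
    vec3 c = {.y = 1.0f};   /* designated initializer: C99+, C++20+ */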

Arch-TK1 year ago

> C library maintainers to support the people who want to compile their C code with a C++ compiler.

Just tell them to go away.

Trying to write in the common subset of C and C++ is a fool's errand.

jstarks1 year ago

No inline functions in library headers, then.

flohofwoe1 year ago

Inline is mostly pointless in C anyway though.

But it might be a minor problem for STB-style header libraries.

It's not uncommon for C++ projects to include the implementation of an STB-style header into a C++ source file instead of 'isolating' them in a C source file. That's about the only reason why I still support the common C/C++ subset in my C libraries.

pjmlp1 year ago

There is enough material in C, and related compiler extensions, to have similar discussions, starting from where to place brackets.

rbanffy1 year ago

Maybe the C24 will define the One Right Way.

doe_eyes1 year ago

Of course they will, just like they did in the past with C11, GNU extensions, or some of the individual features that are now rolled into C23. For example, the 0b notation for binary numbers is widely used in the MCU world.

The microcontroller toolchains are generally built on top of GCC, so they get the features for free. There are some proprietary C compilers that are chronically lagging behind, but they are not nearly as important as they used to be two decades ago.

vitaminka1 year ago

these features will eventually trickle down into the mainstream, kind of like C11 is doing at the moment

also, unless you're targeting embedded or a very wide set of architectures, there's no reason why you couldn't start using C23 today

bboygravity1 year ago

Or in other words, for embedded and existing code: most use C99, some use C11, and nobody will use C23 for at least another 10 years.

dhhfss1 year ago

This depends on the platform. Many embedded systems are based on arm these days and have modern toolchains available.

I cannot remember the last time I saw C99 used. C codebases generally use C11 or C17, and C++ code bases use C++20

bboygravity1 year ago

Most devices that are 6+ years old (as far as I can tell) use C99. If not C89. And/or C++17, if that.

That's A LOT of devices out there. A lot of which still get maintenance and even get feature updates (I'm working on one right now, C99).

So the claim that "C codebases generally use C11 or C17, and C++ code bases use C++20" sounds totally untrue to someone working in embedded C/C++. I've been doing this for 15+ years and I've never touched anything higher than C99 or C++17.

If you're talking about gaming, sure. But that's not "C code bases generally".

vitaminka1 year ago

most non-embedded and non-legacy codebases could use c23, that's not an insignificant set

bboygravity1 year ago

I would argue that is an insignificant set.

Unless you think that code-bases created in the past year are a significant part of code bases that have been created since the inception of humanity.

shakna1 year ago

The `thread_local` specifier is already used on a few microcontroller platforms, but it only became a plain keyword in C23; before that you had to write `_Thread_local` (or pull in the macro from <threads.h>, where it exists). It vastly simplifies memory management in a threaded context.
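For illustration, a minimal sketch of what that buys you:

    #include <stdio.h>

    thread_local int per_thread_count = 0;   /* each thread gets its own copy (C23 keyword) */

    void bump(void) {
        per_thread_count++;   /* no locking needed, the data is never shared between threads */
    }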

Why would I rather step into the world of C++ just to deal with that?

casenmgreen1 year ago

IIRC, the performance and cost of thread-local storage varies greatly between platforms.

You have to know what you're biting into, before you use that.

BenjiWiebe1 year ago

Won't any llvm/gcc supported target get the new version of C automatically? You won't get it in the vendor-modified ancient gcc toolchain for some other arch though.

israrkhan1 year ago

There are many embedded platforms that do not use gcc/llvm based compilers.

Also, most companies making those platforms are not good at updating their toolchains. Expecting developers to compile their own toolchain, unsupported by the platform vendor, is too much to ask.

Also GCC dropped support for certain architectures along the way, and even if you are willing to compile your own toolchain, it may not work for you.

pragma_x1 year ago

I'm with you on this. The feature list reads like a subset of later C++ standards that fit within C's (deliberately) rudimentary feature set.

You could, in theory, just use C++ and be done with it. But like any C++ project you'd need a pretty strict style guide or even a linter, but this time it would have to be extra restrictive lest you slide into full C++ territory. And maybe that's a major stumbling block for some people?

johnisgood1 year ago

Personally this[1] just makes C much more complicated for me, and I choose C when I want simplicity. If I want complicated, I would just pick C++ which I typically would never want. I would just pick Go (or Elixir if I want a server).

"_BitInt(N)" is also ugly, reminds me of "_Bool" which is thankfully "bool" now.

[1] guard, defer, auto, constexpr, nullptr (what is wrong with NULL?), etc. On top of that "constexpr" and "nullptr" just reeks of C++.

That said, Modern C is an incredible book, I have been using it for C99 (which I intend to continue sticking to).

nickelpro1 year ago

> what is wrong with NULL?

One of the few advantages of ISO standardization is you can just read the associated papers to answer questions like this: https://wg21.link/p2312

The quick bullet points:

* Surprises when invoking a type-generic macro with a NULL argument (see the sketch after this list).

* Conditional expressions such as (1 ? 0 : NULL) and (1 ? 1 : NULL) have different status depending on how NULL is defined

* A NULL argument that is passed to a va_arg function that expects a pointer can have severe consequences. On many architectures nowadays int and void* have different size, and so if NULL is just 0, a wrongly sized argument is passed to the function.
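A small sketch of the first bullet's surprise (assumes a C23 compiler for nullptr and nullptr_t):

    #include <stddef.h>
    #include <stdio.h>

    #define kind_of(x) _Generic((x),              \
            int:       "int",                     \
            void *:    "void *",                  \
            nullptr_t: "nullptr_t",               \
            default:   "something else")

    int main(void) {
        /* What NULL selects depends on how the implementation defines it (0, 0L, (void*)0, ...). */
        printf("NULL    -> %s\n", kind_of(NULL));
        printf("nullptr -> %s\n", kind_of(nullptr));   /* always nullptr_t */
        return 0;
    }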

consteval1 year ago

> If I want complicated, I would just pick C++ which I typically would never want

In my opinion, complexity doesn't scale linearly like this. Sometimes, in fact oftentimes, having more complex tools means a simpler process and end result.

It's like building a house. A hammer and screwdriver are very simple. A crane is extremely complex. But which simplifies building a house? A crane. If I wanted to build a house with only a hammer and screwdriver, I would have to devise incredibly complex processes to get it done.

You see the same type of thing in programming languages. Making a generic container in C++ is trivial. It's very, very hard in C. You can make it kind of generic. You can use void * and do a bunch of manual casting. But it's cumbersome, error prone, and the code is more complex. It's counter-intuitive - how can C, a simpler language, produce code that is more complex than C++?

Or look at std::sort vs qsort. The power of templates and functors makes the implementation much simpler - and faster! We don't have to pass around void * and dereference them at runtime; instead we can build the comparison into the definition of the function itself. No indirection, no passing on the stack, and we can even go so far as to inline the comparison function.

There's really lots of examples of this kind of stuff. Point being, language complexity does not imply implementation complexity.
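For comparison, the classic C-side version of the qsort trade-off mentioned above; everything goes through void * and a function pointer the optimizer usually can't see through:

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_int(const void *a, const void *b) {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);   /* avoids overflow of x - y */
    }

    int main(void) {
        int v[] = {3, 1, 2};
        qsort(v, sizeof v / sizeof v[0], sizeof v[0], cmp_int);   /* indirect call per comparison */
        for (size_t i = 0; i < sizeof v / sizeof v[0]; i++)
            printf("%d ", v[i]);
        putchar('\n');
        return 0;
    }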

swells341 year ago

In my experience, complex tools encourage fluffy programming. You mention a generic container; if I were using C, I just wouldn't use a generic container; instead, I'd specify a few container types that handle what needs handled. If there seem to be too many types, then I immediately start thinking that I'm going down a bad architecture path, using too many, or too mixed, abstraction layers, or that I haven't broken down the problem correctly or fully.

The constraints of the tool are inherited in the program; if the constraints encourage better design, then the program will have a better design. You benefit from the language providing a path of least resistance that forces intentionality. That intentionality makes the code easier to reason about, and less likely to contain bugs.

You do pay for this by writing more boilerplate, and by occasionally having to do some dirty things with void pointers; but these will be the exception to the rule, and you'll focus on them more since they are so odd.

consteval1 year ago

Sometimes, but I would argue that C is too simplistic and is missing various common-sense tools. It's definitely improving, but with things like namespaces there's pretty much no risk of "too complex" stuff.

Also, I wouldn't be saying this if people didn't constantly try to recreate C++-isms in C. Which sometimes you need to do. So, then you have this strange amalgamation that kind of works but is super error prone and manual.

I also don't necessarily agree that C's constraints encourage better design. The design pushes far too much to runtime, which is poor design from a reasoning point of view. It's very difficult to reason about code when even simple data models require too much indirection. Also, the severely gimped type system means that you can do things you shouldn't be able to do. You can't properly encode type constraints into your types, so you then have to do more validation at runtime. This is also slightly improving, starting with _Bool years ago.

C++ definitely is a very flawed language with so, so many holes in its design. But the systems it has in place allow the programmer to focus more on the logic and design of their programs, and less on just trying to represent what they want to represent. And templates, as annoying as the errors are, prevent A LOT of runtime errors. Remember, every time you see a template, that translates into pointers and runtime checks in C.

avvvv1 year ago

I think that is fair. A simple language with a simple memory model is nice to work with.

I also think that it wouldn't be bad for code to be more generic. It is somewhat unnecessary for a procedure to allow an argument of type A but not of type B if the types A and B share all the commonalities necessitated by the procedure. Of course procedures with equivalent source code generate different machine code for different types A or B, but not in a way that matters much.

I believe it is beneficial for the language to see code as the description of a procedure, and to permit this description to be reused as much as possible, for the widest variety of types possible. The lack of this ability I think might be the biggest criticism I have for C from a modern standpoint.

Gibbon11 year ago

I feel that if C had tagged unions and a little sugar you could write non-magical generic functions in C. Non-magical meaning that, unlike C++ etc., instead of the compiler selecting the correct function based on the arguments, the function itself can tell and handle each case.

Basically you can write a function that takes a tagged union and the compiler will pass the correct union based on named arguments.

   int ret = foo(.slice = name);

   int ret = foo(.src = str, .sz = strlen(str));
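You can get surprisingly close to that today with a wrapper struct, a compound literal, and a variadic macro; foo_args and foo_impl below are made-up names, just a sketch:

    #include <stdio.h>
    #include <string.h>

    typedef struct {
        const char *src;
        size_t      sz;
        const char *slice;
    } foo_args;

    static int foo_impl(foo_args a) {
        if (a.slice) return printf("slice: %s\n", a.slice);
        if (a.src)   return printf("src: %.*s\n", (int)a.sz, a.src);
        return -1;
    }

    /* unnamed fields default to zero, so the callee can tell which "arguments" were passed */
    #define foo(...) foo_impl((foo_args){ __VA_ARGS__ })

    int main(void) {
        const char *str = "hello";
        int r1 = foo(.slice = "name");
        int r2 = foo(.src = str, .sz = strlen(str));
        return (r1 < 0 || r2 < 0);
    }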
flohofwoe1 year ago

auto is mostly useful when tinkering with type-generic macros, but shouldn't be used in regular code (e.g. please no 'almost always auto' madness like it was popular in the C++ world for a little while). Unfortunately there are also slight differences between compilers (IIRC Clang implements a C++ style auto, while GCC implements a C style auto, which has subtle differences for 'auto pointers' - not sure if those differences have been fixed in the meantime).

_BitInt(N) isn't typically used directly but typedef'ed to the width you need, e.g.

    typedef _BitInt(2) u2;
The 'ugly' _B syntax is needed because the combination of underscore followed by a capital letter is reserved in the C standard to avoid collisions with existing code for every little thing added to the language (same reason why it was called _Bool).

AFAIK defer didn't actually make it into C23?

I'm also more on the conservative side when it comes to adding features to the C standard, but IMHO each of the C23 additions makes sense.

eqvinox1 year ago

> AFAIK defer didn't actually make it into C23?

Correct, defer didn't make it into C23.

It (in its __attribute__((cleanup())) form) is also one of the most useful extensions in GCC/clang — but, again, for use in macros.
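For readers who haven't seen it, the usual shape of that extension looks roughly like this (GCC/Clang only; the names here are made up):

    #include <stdio.h>
    #include <stdlib.h>

    /* the cleanup function receives a pointer to the annotated variable */
    static void free_ptr(void *p) { free(*(void **)p); }
    #define AUTOFREE __attribute__((cleanup(free_ptr)))

    int main(void) {
        AUTOFREE char *buf = malloc(64);
        if (!buf) return 1;
        snprintf(buf, 64, "freed automatically on scope exit");
        puts(buf);
        return 0;   /* free_ptr(&buf) runs here, however the scope is left */
    }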

humanrebar1 year ago

> IIRC Clang implements a C++ style auto, while GCC implements a C style auto, which has subtle differences for 'auto pointers' - not sure if those differences have been fixed in the meantime

Both have compatibly implemented the standard C++ auto. Since 2011 or so.

flohofwoe1 year ago

Well, not in C :)

Here's an example where Clang and GCC don't agree about the behaviour of auto in C23:

https://www.godbolt.org/z/WchMK18vx

IIRC Clang implements 'C++ semantics' for C23 auto, while GCC doesn't.

Last time I brought that up it turned out that both behaviours are 'standard compliant', because the C23 standard explicitly allows such differing behaviour (it basically standardized the status quo even if different compilers disagreed about auto semantics in C).

PS: at least Clang has a warning now in pedantic mode: https://www.godbolt.org/z/ovj5r4axn

johnisgood1 year ago

This difference of implementation in two of the major C compilers leaves a bad taste in my mouth. :/

josefx1 year ago

> (what is wrong with NULL?)

The old definition did not even specify whether it was a pointer or an integer. So for platforms that did not follow the Posix ((void*)0) requirement it was a footgun that had neither the type nor the size of a pointer.

> On top of that "constexpr" and "nullptr" just reeks of C++.

Probably because they were backported from C++. You can still use NULL, since that was apparently redefined to be nullptr.

johnisgood1 year ago

What platforms are those that are in use, and how widespread their use is?

jlokier1 year ago

> what is wrong with NULL?

This code has a bug, and may even crash on some architectures:

    execlp("echo", "echo", "Hello, world!", NULL);
This code doesn't have the bug:

    execlp("echo", "echo", "Hello, world!", nullptr);
Neither does this:

    execlp("echo", "echo", "Hello, world!", (char *)NULL);
cornstalks1 year ago

> what is wrong with NULL?

For starters, you have to #include a header to use it.

zik1 year ago

And it avoids the NULL == 0 ambiguity, allowing for better type checking.

johnisgood1 year ago

Well, I always include stdio.h which includes stddef.h that defines NULL as (void *)0.

dhhfss1 year ago

In my experience, hardly any source files require stdio.h.

stddef.h on the other hand is required by most to get size_t

johnisgood1 year ago

You are right. Hereby I correct my parent comment: I talked about my own personal experience[1], but yeah, as you said, stddef.h is often required (and yes, often I do not need stdio.h, stddef.h is what I need) which defines NULL, which was my point. If it is often required, then it does not matter whether you have to include a header file or not, IMO.

Just include the stddef.h header if you want to use NULL, similarly to how you include a header file if you want to use anything else, e.g. bool from stdbool.h.

[1] I am not entirely sure in retrospect, actually, as I might be misremembering, but my point stands with or without stdio.h!

mmphosis1 year ago

NULL is not wrong. The things that I will do with NULL are

zkirill1 year ago

I was going to ask if there is a good list of C books and then answered my own question. It categorizes _Modern C_ as Intermediate level.

https://stackoverflow.com/questions/562303/the-definitive-c-...

xbar1 year ago

I like Modern C. I have reviewed it favorably in several places. I agree it is intermediate.

I think 21st Century C by Ben Klemens and C Programming: A Modern Approach by King are both more approachable alternatives as modern C companions to K&R.

rramadass1 year ago

Also see Fluent C: Principles, Practices and Patterns by Christopher Preschern.

emmanueloga_1 year ago

Note that this is not a complete list, fwiw. For example, it doesn't include "Effective C." [1].

I like "Effective C" over "Modern C" because it's more engaging ... "Modern C" is super rigorous and feels a bit like reading an annotated spec of the language, which is what an expert may need, but makes for a dull read for a casual C user like me.

--

1: https://nostarch.com/effective-c-2nd-edition

xbar1 year ago

I agree, but I think Modern C has good, structured recommendations that make it worth getting through at least once.

uvas_pasas_per1 year ago

I've been using modern C++ for a personal project (a language interpreter) for the last year+. I constantly think of switching to C, because of the mental burdens of C++, and because of the problems with tooling (Visual Studio's IntelliSense still barely works, because I use C++20 modules), and compile times get ugly because of the way the language's failings force so much into interfaces (even with modules). But on the flip side I've gotten so used to classes, member functions, generic programming (templates), namespaces... I may be hooked.

fluoridation1 year ago

I've been using C++ for the longest time, and I would never give up destructors to switch to C.

For your particular use case, have you considered C#? VS works much more nicely with it.

uvas_pasas_per1 year ago

Yeah, I did. I want something low level and cross platform, including mobile. I think when I tried the C# for iOS stuff, nothing worked. But it's probably too much VM/runtime for me anyway, for this project.

neonsunset1 year ago

iOS C# is more or less fine, and there is quite a bit of work being done in .NET to make this better still. .NET 9 even gains native Swift Library Evolution ABI support - you can literally declare DllImports against public Swift APIs by annotating them with [typeof(CallConvSwift)]. It's not as convenient as it sounds, but it's only a matter of time before tools like https://github.com/royalapplications/beyondnet adopt this. It's going to get much better once MonoAOT is replaced with NativeAOT for all major publish modes for iOS.

fluoridation1 year ago

Fair enough. I would've used C++ as well.

auggierose1 year ago

Table of contents in the sidebar doesn't work properly for me when I click on an entry (in macOS Preview).

bwidlar1 year ago

I just test some links in the table of content, works fine for me. Using zathura pdf reader.

Jtsummers1 year ago

Also works in Adobe and Firefox, but doesn't work in Chrome and Edge.

f1shy1 year ago

Doesn't work for me either... but I will not dismiss the book because of that.

channel_t1 year ago

Table of contents is definitely broken right now.

soegaard1 year ago

Same here.

enriquto1 year ago

So happy that we still get the dinosaur mascots! This is a good book.

einpoklum1 year ago

It's only been a few years since I've come to feel I can rely on C compilers all supporting C99, for a library I'm maintaining [1]. And after a couple of years, sure enough - I get an issue opened asking for C89 compatibility because of some arcane embedded toolchain or what-not.

So, C23? ... that's nice and all, but, let's talk about it in 20 years or so T_T

[1]: https://github.com/eyalroz/printf

jhatemyjob1 year ago

Can someone link me to an article that explains why C is basically frozen at C99 for all practical purposes? Few projects worth talking about leverage features from C11 and newer

pornel1 year ago

C99 is still new! Microsoft tried to kill C by refusing to implement anything that wasn't also in C++. MSVC was 16 years late implementing C99, and implemented only the bare minimum. Their C11 implementation is only 11 years late.

I suspect that decades of C being effectively frozen have caused the userbase to self-select to people who like C exactly the way it is (was), and don't mind supporting ancient junk compilers.

Everyone who lost patience, or wanted a 21st century language, has left for C++/Rust/Zig or something else.

uecker1 year ago

Most of us who like a good language just did not use MSVC. I do not think many people who appreciate C's simplicity and stability would be happy with C++ / Rust. Zig is beautiful, but still limited in many ways and I would not use it outside of fun projects.

pornel1 year ago

I don't even use Windows, but I need to write portable libraries. Unfortunately, MSVC does strongly influence the baseline, and it's not my decision if I want to be interoperable with other projects.

In my experience, Windows devs don't like being told to use a different toolchain. They may have projects tied to Visual Studio, dependencies that are MSVC-only or code written for quirks of MSVC's libc/CRT, or want unique MSVC build features.

I found it hard to convince people that C isn't just C (probably because C89 has been around forever, and many serious projects still target it). I look like an asshole when I demand they switch to a whole other toolchain, instead of me adding a few #ifdefs and macro hacks for some rare nice thing in C.

Honestly, paradoxically it's been easier to tell people to build Rust code instead (it has MSVC-compatible output with almost zero setup needed).

uecker1 year ago

The good news is that MSVC has C17 support (still missing important optional features, but at least some progress).

flohofwoe1 year ago

Microsoft basically sabotaged C99 by not implementing any of its features until around 2015 in the Visual Studio C compiler, and then still took until 2019 before they acknowledged their failure and started supporting more recent C versions again (MSVC is still reliably behind Clang and GCC when it comes to their C frontend though).

And back around 2010 MSVC still mattered a lot (which sounds weird from today's pov where most developers appear to have moved to Linux).

But OTOH, few projects actually need C11 features (and C11 actually made one C99 feature optional: VLAs - nothing of value was lost though).

C23 might be the first version since C99 that's actually worth upgrading to for many C code bases.

spacechild11 year ago

C11 gave us one very important thing: a standardized memory model! Just like in C++11, you can finally write cross-platform multithreaded code with standardized atomics, synchronization primitives and threads. Unfortunately, compiler/library support is still lacking...
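A minimal sketch of those C11 facilities (note that <threads.h> is optional; implementations may define __STDC_NO_THREADS__):

    #include <stdatomic.h>
    #include <stdio.h>
    #include <threads.h>

    static atomic_int counter;

    static int worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++)
            atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
        return 0;
    }

    int main(void) {
        thrd_t a, b;
        thrd_create(&a, worker, NULL);
        thrd_create(&b, worker, NULL);
        thrd_join(a, NULL);
        thrd_join(b, NULL);
        printf("%d\n", atomic_load(&counter));   /* 200000, no data race */
        return 0;
    }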

nimish1 year ago

My kingdom for fully specified, well defined portable bitfields.

delduca1 year ago

I wish auto in C was similar to auto in C++.

russellbeattie1 year ago

Wow, the use of attributes like [[__unsequenced__]], [[maybe_unused]] and [[noreturn]] throughout the book is really awful. It seems pretty pedantic of the author to litter all the code examples with something that is mostly optional. For a second I wondered if C23 required them.

amomchilov1 year ago

Such is the issue with bad defaults. Opting into the sensible thing makes most of your code ugly, instead of just the exceptions.

bitbasher1 year ago

Really looking forward to #embed, once the compilers catch up. Until then, Golang.
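For reference, the C23 feature in question; the preprocessor expands the directive to the file's bytes as a comma-separated list (the file name is just an example):

    #include <stdio.h>

    static const unsigned char icon[] = {
    #embed "icon.png"
    };

    int main(void) {
        printf("embedded %zu bytes\n", sizeof icon);
        return 0;
    }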

enriquto1 year ago

This is not how C standards work. If it appears in the standard, it means that it is already implemented in some compilers (in that case, at least in gcc and clang).

pjmlp1 year ago

That isn't really how it goes, that is how it used to be up to C99.

enriquto1 year ago

Thanks for the correction! Do you know if there is a document from the standards body explaining the change in philosophy?

JonChesterfield1 year ago

It's a nuisance to implement the thing you want to add to the standard yourself. It's easier to ship it in the language and then complain at compiler devs that they're running behind the edge of progress.

This interacts in the obvious way with refusing to correct mistakes after the fact for fear of breaking user code.

I don't believe anyone has written a paper along the lines of "let's not bother with the existing practice part anymore", it's more an emergent feature of people following local incentive structures.

MathMonkeyMan1 year ago

I've heard something along the lines of "the standard is to define facilities that will be used in most programs, and to codify widespread existing practice." That was in the context of "I don't like this proposed feature," though. This was for C++, not C.

A lot of stuff in the C++11 standard library was based on widespread use of Boost. Since then, I don't know. Also, were things like templates and lambdas implemented as compiler extensions before standardization? I don't know, but I doubt it. Maybe "we're a committee of people who will decide on a thing and we hope you like it" was always the norm in many ways.

jpcfl1 year ago

Or

    xxd -i <file>
:)
Keyframe1 year ago

The anti-Rust approach!

accelbred1 year ago

I end up using a .S asm file with .incbin directives to embed files.

#embed would be much nicer

JonChesterfield1 year ago

Incbin works just fine from inline asm fwiw
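Something like this sketch (GCC/Clang, ELF-style sections; blob.bin and the symbol names are made up):

    #include <stdio.h>

    __asm__(
        ".section .rodata\n"
        ".global blob_start\n"
        "blob_start:\n"
        ".incbin \"blob.bin\"\n"
        ".global blob_end\n"
        "blob_end:\n"
        ".previous\n"
    );

    extern const unsigned char blob_start[], blob_end[];

    int main(void) {
        printf("blob is %td bytes\n", blob_end - blob_start);
        return 0;
    }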

flohofwoe1 year ago

Inline assembly isn't supported for x86-64 and ARM on MSVC which unfortunately also means the incbin trick can't be used there anymore.

rfl8901 year ago

Clang 19 has it.

leonheld1 year ago

One of my favorite books ever.

ralphc1 year ago

How does "Modern" C compare safety-wise to Rust or Zig?

WalterBright1 year ago

Modern C still promptly decays an array to a pointer, so no array bounds checking is possible.

D does not decay arrays, so D has array bounds checking.

Note that array overflow bugs are consistently the #1 problem with shipped C code, by a wide margin.

layer81 year ago

> no array bounds checking is possible.

This isn’t strictly true, a C implementation is allowed to associate memory-range (or more generally, pointer provenance) metadata with a pointer.

The DeathStation 9000 features a conforming C implementation which is known to catch all array bounds violations. ;)

trealira1 year ago

> The DeathStation 9000 features a conforming C implementation which is known to catch all array bounds violations. ;)

That actually really does exist already with CHERI CPUs, whose pointers are tagged with "capabilities," which catch buffer overruns at runtime.

https://tratt.net/laurie/blog/2023/two_stories_for_what_is_c...

https://msrc.microsoft.com/blog/2022/01/an_armful_of_cheris/

uecker1 year ago

Right. Also, that makes it sound like array-to-pointer decay is forced onto the programmer. Instead, you can take the address of an array just fine without letting it decay. The type then preserves the length.
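In code, the difference is just this (the sizes are what you'd typically see with 4-byte int):

    #include <stdio.h>

    int main(void) {
        int a[10] = {0};
        int *p = a;          /* decays: the length is gone from the type        */
        int (*q)[10] = &a;   /* pointer to the whole array: length kept in type */

        printf("%zu %zu %zu\n", sizeof a, sizeof *q, sizeof *p);   /* 40 40 4 */
        return 0;
    }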

TZubiri1 year ago

"The DeathStation 9000"

The what now?

bsder1 year ago

Nasal daemons for those of us of a slightly older vintage ...

sdk771 year ago

The thing is though that even with array bounds checking built into the language, out of bounds access due to programming error can still be attempted. Only this time it's safer because an attacker can't use the bug (which still exists) to access memory outside of bounds. In any case, the program still doesn't work as intended (has bugs) because the programmer has attempted, or allowed the attempt, to access out of bounds memory.

Writing safe code is better than depending on safety features. Writing safe code is possible in any programming language; the only things required are good design principles and discipline (i.e. solid engineering).

WalterBright1 year ago

In practice in C, that does not work because array overflow bugs are still the #1 bug in shipped C code, by a wide margin.

renox1 year ago

You'd be surprised: Zig has one UB (Undefined Behaviour) that C doesn't have!

In release fast mode, unsigned overflow/underflow is undefined in Zig whereas in C it wraps.

:-)

Of course C has many UBs that Zig doesn't have, so C is far less safe than Zig, especially since you can use ReleaseSafe in Zig..

uecker1 year ago

UB does not automatically make things unsafe. You can have a compiler that implements safe defaults for most UB, and then it is not unsafe.

duped1 year ago

That's implementation defined behavior, not undefined behavior. Undefined behavior explicitly refers to something the compiler does not provide a definition for, including "safe defaults."

Maxatar1 year ago

The C standard says, and I quote:

>Possible undefined behavior ranges from ignoring the situation completely with unpredictable results ... or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message)

So a compiler is absolutely welcome to make undefined behavior safe. In fact every compiler I know of, such as GCC, clang, MSVC has flags to make various undefined behavior safe, such as signed integer overflow, type punning, casting function pointers to void pointers.

The Linux kernel is notorious for leveraging undefined behavior in C for which GCC guarantees specific and well defined behavior.

It looks like there is also the notion of unspecified behavior, which gives compilers a choice about the behavior and does not require compilers to document that choice or even choose consistently.

And finally there is what you bring up, which is implementation defined behavior which is defined as a subset of unspecified behavior in which compilers must document the choice.

fuhsnn1 year ago

Compilers are not prohibited from providing their own definition for UB; that's how UBSan exists.

renox1 year ago

Well Zig has ReleaseSafe for this.. ReleaseFast is for using these UBs to generate the fastest code.

ahoka1 year ago

By definition UB cannot be safe.

umanwizard1 year ago

Something can be UB according to the standard, but defined (and safe) according to a particular implementation. Lots of stuff is UB according to the C or C++ standard but does something sensible in gcc and/or clang.

Maxatar1 year ago

The definition given by the C standard allows for safe undefined behavior.

marssaxman1 year ago

this depends on your choice of definition for "safe"

secondcoming1 year ago

Does C automatically wrap? I thought you need to pass `-fwrapv` to the compiler to ensure that.

greyw1 year ago

Unsigned overflow wraps. Signed overflow is undefined behavior.

renox1 year ago

-fwrapv is for signed integer overflow not unsigned.

sp1rit1 year ago

Yes, as unsigned overflow is fine by default. AFAIK the issue was originally that there were still machines that used ones complement for describing negative integers instead of the now customary twos complement.

jandrese1 year ago

Modern C is barely any different than older C. The language committee for C is extremely conservative, changes tend to happen only around the edges.

flohofwoe1 year ago

Except for C99 which added designated init and compound literals. With those it almost feels like a new language compared to C89 (and the C99 designated init feature is so well thought out that it still beats most similar initialization patterns in more recent languages, including C++, Rust and Zig - only Odin seems to "get it").
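For anyone who hasn't used them, the C99 features in question look like this; unnamed members are zero-initialized and the order doesn't matter:

    #include <stdio.h>

    typedef struct { int x, y, w, h; } rect;

    static void draw(rect r) { printf("%d %d %d %d\n", r.x, r.y, r.w, r.h); }

    int main(void) {
        rect r = { .w = 640, .h = 480 };    /* designated init: x and y default to 0 */
        draw(r);
        draw((rect){ .x = 10, .y = 20 });   /* compound literal built at the call site */
        return 0;
    }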

pornel1 year ago

There's finally a way to safely add two signed numbers, without tricky overflow checks that may trigger UB themselves!
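Presumably this refers to C23's <stdckdint.h>; a minimal sketch:

    #include <limits.h>
    #include <stdckdint.h>
    #include <stdio.h>

    int main(void) {
        int sum;
        /* ckd_add returns true if the mathematical result didn't fit; no UB is triggered */
        if (ckd_add(&sum, INT_MAX, 1))
            puts("overflow detected");
        else
            printf("%d\n", sum);
        return 0;
    }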

RantyDave1 year ago

Wait, C programmers now put the star on the left hand side?

char* thing; // good

char *thing; // bad

This ... is awesome. As a C++ "native" I've always found the "star on the right" thing to be really horribly confusing.

bloppe1 year ago

Ofc this has always been an option. In my C heyday I used to put a space on both sides of the star. It makes for a more consistent syntax when you have multi layer pointers with const at various layers. For example:

// mutable pointer to mutable data:

char * str;

// Immutable pointer to immutable data:

char const*const str;

// Mutable pointer to an immutable pointer to a mutable pointer to immutable data:

char const**const* strs;

int_19h1 year ago

Given that putting it on the right reflects the actual syntax tree of the code, why do you find it "horribly confusing"?

I mean, one can reasonably argue that C & C++ declarator syntax is itself horribly confusing because it doesn't read left-to-right. But it is what it is, so why pretend that it's something else?

tenderfault1 year ago

any chance of getting a responsive TOC in any pdf reader whatsoever?

jumpman_miya1 year ago

in example 1.1 i read that as 'tits_square' until i saw the output

glass-z131 year ago

that's by design

ykonstant1 year ago

It's a booby trap.

belter1 year ago

Important reminder just in the Preface :-)

Takeaway #1: "C and C++ are different: don’t mix them, and don’t mix them up"

jasode1 year ago

>Takeaway #1: "C and C++ are different: don’t mix them, and don’t mix them up"

Where "mixing C/C++" is helpful:

- I "mix C in with my C++" projects because "sqlite3.c" and ffmpeg source code is written C. C++ was designed to interoperate with C code. C++ code can seamlessly add #include "sqlite3.h" unchanged.

- For my own code, I take advantage of "C++ being _mostly_ a superset of C" such as using old-style C printf in C++ instead of newer C++ cout.

Where the "C is a totally different language from C++" perspective is helpful:

- knowing that compilers can compile code in "C" or "C++" mode which has ramifications for name mangling which leads to "LINK unresolved symbol" errors.

- knowing that C99 through C23 have many exceptions to "C++ is a superset of C": https://en.wikipedia.org/wiki/Compatibility_of_C_and_C%2B%2B...

tialaramex1 year ago

The entire I/O streams feature (where std::cout comes from) is garbage. If it were an independent development, there is no way WG21 would have taken it; the reason it's in C++98 and thus still here today is that it's Bjarne's baby. The reason not to take it is that it contradicts the "Don't use operator overloading for unrelated operations" core idea. Bjarne will insist that "actually" these operators somehow always meant streaming I/O, but his evidence is basically the same library feature he's trying to justify. No other language does this, and it's not because they can't; it's because it was a bad idea when it was created, it was still a bad idea in 1998, and the only difference today is that C++ has a replacement.

The modern fmt-inspired std::print and std::println etc. are much nicer, preserving all the type checking but losing terrible ideas like stored format state, and localisation by default. The biggest problem is that today C++ doesn't have a way to implement this for your own types easily, Barry illustrates a comfortable way this could work in C++ 26 via reflection which on that issue closes the gap with Rust's #[derive(Debug)].

spacechild11 year ago

Remember that C++ originally didn't have variadic templates, so something like std::format would have been impossible back then. At the time, std::iostream was a very neat solution for type-safe string formatting. As you conceded, it also makes it very easy to integrate your own types. It was a big improvement over printf(). Historic perspective is everything.

dieortin1 year ago

> The biggest problem is that today C++ doesn't have a way to implement this for your own types easily

I’m not sure about the stdlib version, but with fmtlib you can easily implement formatters for your own types. https://fmt.dev/11.0/api/#formatting-user-defined-types

tialaramex1 year ago

I think the problem is that your idea of "easy" is "Here's a whole bunch of C++ you could write by hand for each type" while the comparison was very literally #[derive(Debug)]. I wasn't abbreviating or referring to something else, that's literally what Rust programmers type to indicate that their type should have the obvious boilerplate implementation for this feature, in most types you're deriving other traits already, so the extra work is literally typing out the word Debug.

wakawaka281 year ago

>No other language does this, and it's not because they can't it's because it was a bad idea when it was created, it was still a bad idea in 1998, the only difference today is that C++ has a replacement.

Hindsight is 20/20, remember that. Streams are not that bad of an idea and have been working fine for decades. You haven't named a problem with it other than the fact the operators are used for other stuff in other contexts. But operator overloading is a feature of C++ so most operators, even the comma operator, can be something other than what you expect.

>The biggest problem is that today C++ doesn't have a way to implement this for your own types easily, Barry illustrates a comfortable way this could work in C++ 26 via reflection which on that issue closes the gap with Rust's #[derive(Debug)].

You can trivially implement input and output for your own types with streams.

You appear to be a Rust guy whose motive is to throw shade on C++ for things that are utterly banal and subjective issues.

pjmlp1 year ago

Perfectly iostreams happy user since 1993.

codr71 year ago

Same, as long as I stay the hell away from locales/facets.

Type-safe input/output stream types and memory-backed streams served on a silver platter are a pretty decent improvement over C.

tightbookkeeper1 year ago

Yep, it’s very clean once you get the hang of it.

fra1 year ago

This was a tip-my-hat moment, excellent to you.

johnisgood1 year ago

Why?

alexvitkov1 year ago

> Don't use operator overloading for unrelated operations

This didn't stop with <iostream>, they keep doing it - the latest example I can think of is std::ranges operations being "piped" with |.

throwaway20371 year ago

Over the years, I have heard numerous complaints about C++ I/O streams. Is there a better open source replacement? Or do you recommend to use C functions for I/O?

int_19h1 year ago

Sather (1991) used operator overloading for output. And, even more hilariously, they overloaded + in the same way as C++ overloaded <<, giving you:

   #OUT + "Hello, " + name + "!";
chipdart1 year ago

> The entire I/O streams (where std::cout comes from) feature is garbage, if this was an independent development there is no way that WG21 would have taken it, the reason it's in C++ 98 and thus still here today is that it's Bjarne's baby.

I think this is a very lazy and somewhat conspiratorial take.

C++'s IO stream library, along with C++'s adoption of std::string, is a response to and improvement over C's standard library support for IO. That alone makes it an invaluable improvement. It's easy and very lazy to look back 30 years ago and badmouth things done back then.

It's also easy to complain about no one proposing changes when literally anyone, including you, can propose changes. They only need to do the legwork and put their money where their mouth is. The funny part is that we see frameworks putting together their own IO infrastructure and it ends up being not good, such as Qt's take on IO.

But talk is cheap and badmouthing doesn't require a pull request.

int_19h1 year ago

The problem is precisely that C++ iostream library was, in practice, not an improvement on C stdio in many ways. Some of us were actually there 30 years ago, and even right after C++98 was standardized, it was pretty common for (then-)modern C++ projects to adopt all of stdlib except for iostreams (and locales/facets, another horrible wart).

tightbookkeeper1 year ago

What’s wrong with it?

lugu1 year ago

Thank you.

accelbred1 year ago

C++ can seamlessly include C89 headers.

The C library headers for libraries I write often include C11/C99 stuff that is invalid in C++.

Even when they are in C89, they are often incorrect to include without the include being in an `extern "C"`.

nuancebydefault1 year ago

Extern "C" around the prototypes is mandatory, otherwise your linker will search for C++ symbols, which cannot be found in the C libraries you pass it.

accelbred1 year ago

Not only prototypes but typedefs with function pointers as well.

Conscat1 year ago

Clang supports C11 - 23 in C++, as well as some future C features like fixed-point integers. The main pain points with Clang are just the fundamental differences like void* and char, which don't typically matter much at an interoperability layer.

kccqzy1 year ago

Yeah plenty of headers first have `#ifdef __cplusplus` and then they add `extern "C"`. And of course even then they have to avoid doing things unacceptable in C++ such as using "new" as the name of a variable.

It takes a little bit of an effort to make a header work on C and C++. A lot less effort than making a single Python file work with Python 2 and 3.

flohofwoe1 year ago

The '#ifdef __cplusplus extern "C" { }' thing only removes C++ name mangling from exported symbols, it doesn't switch the C++ language into "C mode" (unfortunately).

Someone1 year ago

> C++ code can seamlessly add #include "sqlite3.h" unchanged.

Almost seamlessly. You have to do

  extern "C" {
    #include "sqlite3.h"
  }
(https://isocpp.org/wiki/faq/mixing-c-and-cpp#include-c-hdrs-...)
cornstalks1 year ago

If we're nitpicking then sqlite3.h already has `#ifdef __cplusplus` and `extern "C" {`. So yes, from the user's perspective it is seamless. They do not need to play the `extern "C" {` game.

rramadass1 year ago

Yep; I think of it as "C/C++" and not "C" and/or "C++" i.e. one "multi-paradigm" language with different sets of mix-and-match.

varjag1 year ago

kinda like python and ruby

MathMonkeyMan1 year ago

My brief foray into microcontroller land has taught me that C and C++ are very much mixed.

It's telling that every compiler toolchain that compiles C++ also compiles C (for some definition of "C"). With compiler flags, GCC extensions, and libraries that are kinda-sorta compatible with both languages, there's no being strict about it.

_My_ code might be strict about it, but what about tinyusb? Eventually you'll have to work with a library that chokes on `--pedantic`, because much (most?) code is not written to a strict C or C++ standard, but is "C/C++" and various extensions.

rramadass1 year ago

> because much (most?) code is not written to a strict C or C++ standard, but is "C/C++" and various extensions.

Absolutely true. I generally insist on folks learning C and C++ interoperability before diving in to all the "Modern C or C++" goodness. It helps them in understanding what actually is going on "under the hood" and makes them a better programmer/debugger.

See also the book Advanced C and C++ Compiling by Milan Stevanovic.

pjmlp1 year ago

Especially relevant to all those folks that insist on "coding C with a C++ compiler" instead of using the safer language constructs and standard library alternatives provided by C++ during the last decades.

flohofwoe1 year ago

Funny because for a long time the Microsoft MSVC team explicitly recommended compiling C code with a C++ compiler because they couldn't be arsed to update their C frontend for over two decades (which thankfully has changed now) ;)

https://herbsutter.com/2012/05/03/reader-qa-what-about-vc-an...

rdtsc1 year ago

That thing always baffled me, this huge company building a professional IDE couldn't figure out how to ship updates to the C compiler.

> it is hard to say no to you, and I’m sorry to say it. But we have to choose a focus, and our focus is to implement (the standard) and innovate (with extensions like everyone but which we also contribute for potential standardization) in C++.

I mean, yeah if it came from a two member team at a startup, sure focus on C++, understandably. But Microsoft, what happened to "Developers! Developers! Developers!"?

Jtsummers1 year ago

It's not baffling, it's remarkably consistent. They implemented Java as J++ and made their version incompatible in various ways with the standard so it was harder to port your code away from J++ (and later J#). They implemented things in the CSS spec almost exactly opposite the specification to lock people into IE (the dominant browser, if you have to make your site work with 2+ incompatible systems which will you focus on?). Not supporting C effectively with their tools pushed developers towards their C++ implementation, creating more lock-in opportunities.

AlotOfReading1 year ago

Funnily enough, the IntelliSense parser does support C syntax because it's using a commercial frontend from EDG (Edison Design Group) under the hood. MSVC's own frontend doesn't.

pjmlp1 year ago

Yeah, 12 years ago, when governments couldn't care less about nation state cyberattacks, and Microsoft was yet to be called by the Congress to testify on their failures.

com2kid1 year ago

Perfectly valid to do if you need to interface with a large C code base and you just want to do some simple OO here and there. Especially if you cannot have runtime exceptions and the like.

This is how I managed to sneak C++ into an embedded C codebase. We even created some templates for data structures that supported static allocation at compile time.

f1shy1 year ago

What would be an example of "simple OO here and there" that cannot be done cleanly in plain C?

bobmcnamara1 year ago

Templating on pixel classes so that a blitter builds all supported pixel paths separately and inlines them.

Yes you can do it less cleanly with macros or inline functions. But you can't do it performantly with struct and function pointers.

cozzyd1 year ago

CRTP?

pjmlp1 year ago

Yeah, but one should provide C++ type safe abstractions on top.

Just like one doesn't use Typescript to keep writing plain old JavaScript, then why bother.

Spivak1 year ago

I mean as long as your goal is specifically to do that I think it's fine. Using a C++ compiler to compile a C program isn't that rare.

f1shy1 year ago

A couple of months ago, at the company I work for, there was a talk from HR where they explained how to make a good CV (the company is firing lots of people). She said: "If you have experience programming in C, you can write just that, or, if you have lots of experience in C, it is customary to write 'C++ experience'."

Sooo... yeah... I should definitely change company!

kstrauser1 year ago

That literally made me do a spit take, and it was fizzy water and it burned.

My god. That's amazing.

thenipper1 year ago

How many pluses until you should just say you have D experience?

varjag1 year ago

Possibly three. Four pluses is naturally C#.

sim7c001 year ago

can't believe so many people are arguing against this honestly. you don't mix them in the sense the author means. I take it these people didn't read the paragraphs this was the 'takeaway' from.

For example, the primary reason for the sentence seems to be from the text: "Many code examples in this book won't even compile on a c++ compiler, So we should not mix sources of both languages".

It's not at all about the ability to use c libraries in c++ projects or vice versa :S.... c'mon guys!

emmelaich1 year ago

If you want a language with a great C FFI, C++ is hard to beat!

jpcfl1 year ago

Bjarne should have called it ++C.

wnoise1 year ago

Nah. It's just the natural semantics -- he added stuff to C, but returned something that wasn't actually more advanced...

card_zero1 year ago

Because people choose to use pre-increment by default instead of post-increment?

Why is that?

int_19h1 year ago

It should be ++C because with C++ the value you get from the expression is the old one.

If you're asking why people use pre-increment by default instead of post-increment, it's mostly historical. The early C compilers on resource-constrained platforms such as early DOS were not good at optimization; on those, pre-increment would be reliably translated to a simple ADD or INC, whereas code for post-increment might generate an extra copy even if it wasn't actually used.

For C++ this was even worse with iterators, because now it depended on the compiler's ability to inline its implementation of postfix ++, and then prove that all the copies produced by that implementation have no side effects to optimize it to the same degree as prefix ++ could. Depending on the type of the underlying value, this may not even be possible in general.

The other reason is that all other unary operators in C are prefix rather than postfix, and mixing unary prefix with unary postfix in a single expression produces code that is easy to misunderstand. E.g. *p++ is *(p++), not (*p)++, even though the latter feels more natural, reading it left-to-right as usual. OTOH *++p vs ++*p is unambiguous.

card_zero1 year ago

K&R seems to use pre-increment early on, then post-increment consistently (or a lot, anyway, I haven't done a thorough check) after chapter 3, in situations where either would do. In fact, after introducing post-increment at 2.8.

jpcfl1 year ago

> It should be ++C because with C++ the value you get from the expression is the old one.

You get it!

wpollock1 year ago

The PDP-11 that C originally targeted had addressing modes to support the stack. Post-increment and pre-decrement therefore did not require a separate instruction; they were free. After the PDP-11 went the way of the dodo, both forms took a machine cycle, so it (mostly) became a stylistic issue. (The two operators have different semantics, but the trend of avoiding side effects in expressions means that both are most often used in a single expression statement like "++x;" or "x++;", so it comes down to your preferred style.)

jejdjdbd1 year ago

Why would you use post increment by default? The semantics are very particular.

Only on very rare occasions do I need post-increment semantics.

And in those cases I prefer to use a temporary to make the intent more clear

flohofwoe1 year ago

I rarely use pre-increment tbh, but post-increment all the time for array indices (since typically the array should be indexed with the value before the increment happens).

If the pre- or post-increment behaviour isn't actually needed, I prefer `x += 1` though.

codr71 year ago

If you're used to the idiom, the intent couldn't be clearer.

I miss it when switching between C/++ and other languages.

tialaramex1 year ago

Why use this operator? Like most C and C++ features, the main reason tends to be showing off: you learned a thing (in this case, that there are four extra operators here), and so you show off by using it even if it doesn't make the software easier to understand.

This is not one of those beginner -> journeyman -> expert cycles where coincidentally the way you wrote it as a beginner is identical to how an expert writes it but for a very different reason. I'd expect experts are very comfortable writing either { x = k; k += 1; } or { k += 1; x = k; } depending on which they meant and don't feel an itch to re-write these as { x = k++; } and { x = ++k; } respectively.

I'm slightly surprised none of the joke languages add equally frivolous operators. a%% to set a to the remainder after dividing a by 10, or b** to set b as two to the power b or some other silliness.

cozzyd1 year ago

It's more useful for pointers than for values, IMO

int_19h1 year ago

For iterators, += may not even be available.

johanvts1 year ago

I paid for this on Manning and they haven't even released the final version yet. I guess I didn't understand what I was buying, but I can't help feeling a bit cheated.

survivedurcode1 year ago

Continuing to use a memory-unsafe language that has no recourse for safety and is full of footguns is frankly irresponsible for the software profession. God help us all.

By the way, the US government did the profession no favors by including C++ as a memory-unsafe language. It is possible to write memory-safe C++, with safe array dereferencing. But it's not obvious how to do it. Herb Sutter is working on it with CppFront. The point stands that C++ code can be memory-safe. If you make a mistake, you might write some unsafe code in C++. But you can fix that mistake and learn to avoid it.

When you write C, you are in the bad luck shitter. You have no choice. You will write memory-unsafe code and hope you don't fuck it up. You will hope that a refactor of your code doesn't fuck it up.

Ah, C, so simple! You, only you, are responsible for handling memory safely. Don’t fuck it up, cadet. (Don’t leave it all to computers like a C++ developer would.)

Put C in the bin, where it belongs.

dxuh1 year ago

You can't just put a language in the bin that has been used for 50 years and that a huge percentage of present-day software infrastructure is built on.

I see comments like yours everywhere all the time and I seriously think you have a very unhealthy emotional relationship with this topic. You should not have that much hate in your heart for a programming language that has served us very well for many decades and still continues to do so. Even if C was literally all bad (which imho isn't even possible), you shouldn't be that angry at it.

survivedurcode1 year ago

When you write C++, you can allocate memory all day long and write ZERO delete statements. That is possible; I've been writing C++ like that since 1998 (Visual C++ 5.0 and lcc). Can you imagine allocating memory and never risking a premature or a forgotten delete? It is not possible in C. You can call it opinion, but I see it as fact. That makes C all that bad.

When I say put it in the bin, I don't mean that good software hasn't already been written with it, or can't be written with it. But you should stop using it at the earliest opportunity. When given the ability to write object-oriented software, clever engineers with too much time add insane complexity justified by unproven hypotheticals. Believe me, I know very well why people shy away from C++ like a trauma response. Overly-engineered/overly-abstracted complexity, incomprehensible template syntax, an inadequate standard library, indecipherable error messages: C++ has its warts. But it is possible to write memory-safe software in C++, and it is not possible in C (unless we are talking about little code toys!). My answer is that you don't have to write complicated garbage in C++. Keep it simple, like you are writing C. Add C++ features only to get safety. Add polymorphism only when it solves a problem. Never write an abstract class ahead of time. Never write a class ahead of time.

Downvote me all day long. Call me angry. Billions of dollars are lost because someone, in our modern age, decided to write new software in C, or to continue developing software in C instead of switching to a mixed C++/C codebase with the intent to phase out new development in C.

It's hard not to get angry when modern software is written with avoidable CVEs in the 2020s. Use-after-free, buffer overflows: are you kidding me? These problems should have been relics by the 2010s, but here we are.

fjfaase1 year ago

There are still applications (especially with embedded devices) where you do not dynamically allocate memory or might not even use pointers at all.

uecker1 year ago

There are good tools that help improve memory safety in C, and I do not think Rust is a good language. Of course, the worst thing about Rust is its fans.

purple-leafy1 year ago

Skill issue

pornel1 year ago

It's been a skill issue for 40 years. How long are we going to continue searching for those programmers who don't make mistakes?

worksonmine1 year ago

Programmers make stupid mistakes in the safest languages too, even more so today when software is a career and not a hobby. What does it matter if the memory allocation is safe when the programmer exposes all user sessions to the internet because reading Docker's documentation is too much work? Even GitHub did a variant of this, with all their resources.

kristianp1 year ago

GCC support has been around since GCC 11, apparently; see the table at (1). This is available in Ubuntu 22.04. The page below also shows support for C26!

1) https://gcc.gnu.org/projects/cxx-status.html#:~:text=C%2B%2B...

bondant1 year ago

That's for C++ not C

kristianp1 year ago

Whoops, thanks for the correction!

sylware1 year ago

I am worried about where "official" C is going. Its syntax is already too complex and the language already does too much; fixing that would require "breaking" backward compatibility, namely it would require "porting". But since it would still be "C", that amount of work should be close to "a bit" of "step by step" refactoring.

For instance: only sized types (u8...s64, f32, f64...), no implicit casts except for void* and literals, no integer promotion, no switch, no enum, only one loop keyword (loop{}!), no anonymous code blocks, and no toxic attribute like "packed structure" which makes us lose sight of data alignment... no _Generic, typeof, restrict, syntax-based TLS, etc.

But we would need explicit atomics, explicit memory barriers, explicit unaligned memory access.

Instead of adding to C and making it more complex, which turns writing a naive compiler into an ever longer, more complex, cat-and-mouse game of catching up "to the standard", what should be done is exactly the other way around.

In the end, I don't trust the C officials anymore. I tend to stick to C99, or even assembly (I am currently writing rv64 assembly which I run on an x86_64).

eqvinox1 year ago

> I tend to stick to C99,

> […] But we would need explicit atomics, explicit memory barriers, […]

You should read a change summary before complaining about bits missing from C99 that have in fact been added to C11.
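
For reference, a minimal sketch of the C11 facilities being referred to, <stdatomic.h> plus the optional <threads.h>; this is my own example, not something from the thread:

    #include <stdatomic.h>
    #include <stdio.h>
    #include <threads.h>   /* C11 threads are optional; check __STDC_NO_THREADS__ */

    static _Atomic int counter = 0;

    static int worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000; ++i)
            atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
        atomic_thread_fence(memory_order_release);  /* an explicit barrier, if one is needed */
        return 0;
    }

    int main(void) {
        thrd_t t1, t2;
        thrd_create(&t1, worker, NULL);
        thrd_create(&t2, worker, NULL);
        thrd_join(t1, NULL);
        thrd_join(t2, NULL);
        printf("%d\n", atomic_load(&counter));  /* prints 2000 */
        return 0;
    }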

> […] no toxic attribute like "packed structure" which makes us lose sight of data alignment […]

And you should also familiarize yourself with what's in actual ISO C vs. compiler extensions before complaining about bits that are in fact compiler extensions.

sylware1 year ago

I TEND to stick to C99, meaning usually C99 with very few bits of C11+ (usually the missing bits) and even some extensions (often related to ELF/object formats). But I really try hard to minimize their usage.

The problem is that in ISO C11+ we got some of the missing stuff for modern hardware architectures, but also tons of tantrums (_Generic, typeof, restrict...).

zerr1 year ago

Do they still use 0-terminated strings/char* as the main string type?

Is the usage of singly linked lists still prevalent as the main container type?

racingmars1 year ago

> Do they still use 0-terminated strings/char* as the main string type?

Of course, it's still C.

> Is the usage of single linked lists still prevalent as the main container type?

As far as I can remember, the C standard library has never had any functions that used linked lists. Nor are there any container types, linked lists or otherwise, provided by C. So I'd say this is a question about how people teach and use C, not related to the language -- or language spec version -- itself.

eps1 year ago

Don't feed the trolls.

zerr1 year ago

I don't mean the language spec but what is commonly used in the wild.

KerrAvon1 year ago

The C standard library provides no recognizable container types, so there's no "main" anything.

EasyMark1 year ago

I don't think that will ever change. Will they possibly introduce a more modern string2 type? Maybe, but it's unlikely to happen before 2050.

codr71 year ago

Embedded linked lists are pretty cool though.

ithkuil1 year ago

aka intrusive linked lists
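
For anyone unfamiliar with the term, a minimal sketch of an intrusive (embedded) list; the container_of macro here is my own illustration of the common kernel-style pattern, not anything from the C standard library:

    #include <stddef.h>
    #include <stdio.h>

    /* the node lives inside the payload struct, so no separate node allocation */
    struct list_node { struct list_node *next; };

    struct task {
        int id;
        struct list_node link;   /* embedded node */
    };

    /* recover the containing struct from a pointer to its embedded node */
    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    int main(void) {
        struct task a = { .id = 1 }, b = { .id = 2 };
        a.link.next = &b.link;
        b.link.next = NULL;

        for (struct list_node *n = &a.link; n != NULL; n = n->next) {
            struct task *t = container_of(n, struct task, link);
            printf("task %d\n", t->id);
        }
        return 0;
    }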