@bkaradzic
Last active April 21, 2024 16:00
Orthodox C++

What is Orthodox C++?

Orthodox C++ (sometimes referred to as C+) is a minimal subset of C++ that improves on C, but avoids all the unnecessary things from so-called Modern C++. It's the exact opposite of what Modern C++ is supposed to be.

Why not Modern C++?

Back in the late 1990s we were also modern-at-the-time C++ hipsters, and we used the latest features. We told everyone else they should use those features too. Over time we learned it's unnecessary to use some language features just because they are there, or the features we used proved to be bad (like RTTI, exceptions, and streams), or they backfired through unnecessary code complexity. If you think this is nonsense, just wait a few more years and you'll hate Modern C++ too ("Why I don't spend time with Modern C++ anymore", archived LinkedIn article).


Why use Orthodox C++?

A code base written with Orthodox C++ limitations will be easier to understand, simpler, and it will build with older compilers. Projects written in the Orthodox C++ subset will be more acceptable to other C++ projects, because the subset used by Orthodox C++ is unlikely to violate an adopter's C++ subset preferences.

Hello World in Orthodox C++

#include <stdio.h>

int main()
{
    printf("hello, world\n");
    return 0;
}

What should I use?

  • C-like C++ is a good start; if the code doesn't require more complexity, don't add unnecessary C++ complexities. In the general case, code should be readable to anyone who is familiar with the C language.
  • Don't do this; the end of the "design rationale" in Orthodox C++ should be immediately after "Quite simple, and it is usable. EOF".
  • Don't use exceptions.

Exception handling is the only C++ language feature which requires significant support from a complex runtime system, and it's the only C++ feature that has a runtime cost even if you don't use it – sometimes as additional hidden code at every object construction, destruction, and try block entry/exit, and always by limiting what the compiler's optimizer can do, often quite significantly. Yet C++ exception specifications are not enforced at compile time anyway, so you don't even get to find out whether you forgot to handle some error case! And on a stylistic note, the exception style of error handling doesn't mesh very well with the C style of error return codes, which causes a real schism in programming styles because a great deal of C++ code must invariably call down into underlying C libraries.
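
As a contrast, here is a minimal sketch of the C-style error-return-code approach implied above; load_file and the file name are hypothetical, used only for illustration:

#include <stdio.h>

// Returns 0 on success, non-zero on failure -- no exceptions, no hidden
// control flow, and the call site handles the error explicitly.
static int load_file(const char* path, char* buffer, size_t capacity)
{
    FILE* file = fopen(path, "rb");
    if (NULL == file)
    {
        return -1;
    }

    size_t read = fread(buffer, 1, capacity - 1, file);
    buffer[read] = '\0';

    fclose(file);
    return 0;
}

int main()
{
    char buffer[4096];
    if (0 != load_file("config.txt", buffer, sizeof(buffer)))
    {
        fprintf(stderr, "failed to load config.txt\n");
        return 1;
    }

    printf("%s", buffer);
    return 0;
}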

  • Don't use RTTI.
  • Don't use C++ runtime wrappers for C runtime includes (<cstdio>, <cmath>, etc.); use the C runtime headers instead (<stdio.h>, <math.h>, etc.)
  • Don't use streams (<iostream>, <stringstream>, etc.); use printf-style functions instead (see the sketch after this list).
  • Don't use anything from the STL that allocates memory, unless you don't care about memory management. See the CppCon 2015 talk by Andrei Alexandrescu, "std::allocator Is to Allocation what std::vector Is to Vexation", and the "Why many AAA gamedev studios opt out of the STL" thread for more info.
  • Don't use metaprogramming excessively for academic masturbation. Use it in moderation, only where necessary, and where it reduces code complexity.
  • Be wary of any features introduced in the current C++ standard; ideally, wait for improvements to those features in the next iteration of the standard. Example: constexpr from C++11 only became usable in C++14 (per Jason Turner, cppbestpractices.com curator).
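
A minimal sketch of the include and I/O style these bullets suggest: plain C runtime headers and printf-style formatting, no <iostream>:

#include <stdio.h>   // not <cstdio>
#include <math.h>    // not <cmath>

int main()
{
    double angle = 0.5;
    // printf-style formatting instead of std::cout << ... chains.
    printf("sin(%.2f) = %.6f\n", angle, sin(angle));
    return 0;
}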

Is it safe to use any of Modern C++ features yet?

Due to the lag in adoption of C++ standards by compilers, OS distributions, etc., it's usually not possible to start using new language features immediately, however useful they are. The general guideline is: if the current year is C++year+5, then it's safe to start selectively using that C++year's features. For example, if the standard is C++11 and the current year is >= 2016, then it's probably safe. If the standard required to compile your code is C++17 and the year is 2016, then obviously you're practicing the "Resume Driven Development" methodology. If you're doing this for an open source project, then you're not creating something others can use.
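
One way to apply this guideline in code is to gate a newer feature behind the __cplusplus version macro so the code still builds with older compilers; a minimal sketch (the cut-off year itself is your own policy, the macro only reports what the compiler supports):

#include <stdio.h>

// 201402L corresponds to C++14, where constexpr became usable per the
// guideline above; older compilers fall back to a plain inline function.
#if __cplusplus >= 201402L
constexpr int square(int x) { return x * x; }
#else
inline int square(int x) { return x * x; }
#endif

int main()
{
    printf("%d\n", square(7));
    return 0;
}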

UPDATE: As of January 14th, 2022, the Orthodox C++ committee approved the use of C++17.

Any other similar ideas?

Code examples

@guruprasadah

@DBJDBJ Then what do you suggest in place of std::vector? I once tried to follow this orthodox approach and all I ended up with was an Array class that was 500 LOC. It also did not fully support all the operations of std::vector; fully implementing those would take a couple hundred more LOC. So at that point, the LOC of the STL and the custom version reach the same level. So what is the use of half-assing our own implementation, which also suffers from the STL's problems, instead of just using the STL?

No. A reimplementation of your own does NOT suffer from the problems of the STL.

  1. No bad_alloc
  2. realloc and merging between all trivially_relocatable types
  3. The allocator does not cause performance degradation due to the bad design of allocator<T> (as opposed to a plain, untyped allocator).
  4. Provides an extra interface like emplace_back_unchecked to avoid the redundant bounds checking of emplace_back.
  5. Zero-page aware: calls things like calloc instead of malloc to avoid the cost of zero-initializing integers and floating-point values.
  6. No dependency on C++ std::string or anything that uses C++ EH, which causes binary bloat and dead code.
    https://github.com/cppfastio/fast_io/blob/master/include/fast_io_dsal/impl/vector.h
    https://github.com/cppfastio/fast_io/blob/master/benchmark/0011.containers/vector/0002.multi_push_back/main.h
./fast_io
fast_io::vector<T>:0.235741445s
./std
std::vector<T>:0.355929078s

No change to the allocator; the performance gap is already HUGE due to the cache unfriendliness of std::vector plus the binary bloat issues of std::vector.

I am a bit new to C++, but I would like to ask: how is bad_alloc a problem here? I agree exceptions are NOT the best way to handle errors, but then in our own implementation we throw a similar error, maybe just not using exceptions. Clarify please?

@trcrsired

bad_alloc should NEVER EVER happen. It should just crash. Many libcs even have functions like xmalloc which will kill your process.

The problem is that bad_alloc introduces oddities.

  1. When you throw bad_alloc, you need to run destructors, and destructors themselves might allocate again.
  2. The emergency heap may still crash.
  3. Many operating systems, including Windows and Linux, will kill your process with the OOM killer. With virtual memory, bad_alloc never really happens; the process just crashes.
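
A minimal sketch of the "just crash" policy described above, in the spirit of the xmalloc-style wrappers mentioned; xalloc is a hypothetical name used only for illustration:

#include <stdio.h>
#include <stdlib.h>

// Allocation failure is treated as fatal: print a message and abort,
// instead of throwing bad_alloc and unwinding through destructors.
static void* xalloc(size_t size)
{
    void* ptr = malloc(size);
    if (NULL == ptr)
    {
        fprintf(stderr, "out of memory (%zu bytes)\n", size);
        abort();
    }
    return ptr;
}

int main()
{
    int* data = (int*)xalloc(1024 * sizeof(int));
    data[0] = 42;
    printf("%d\n", data[0]);
    free(data);
    return 0;
}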

@asumagic

./fast_io
fast_io::vector<T>:0.235741445s
./std
std::vector<T>:0.355929078s

No change on allocator. the performance gap is already HUGE due to cache unfriendliness of std::vector + binary bloat issues of std::vector

Not that I doubt it is easy to outperform std::vector::push_back, but I doubt the real-world performance impact of std::vector is as brutal as you make it out to be, unless you have numbers showing that.
By the way, how do you know that your benchmark actually performs the push_backs for all the vectors properly? I do not trust the results for this reason, even if it seems to be doing some amount of what you asked according to your numbers. I quickly checked the std::vector variant and it isn't a problem there, but I don't know about your vector implementation.
I'm not trying to trash on your implementation, by the way, it is actually pretty interesting to me.
Also, TIL about clang's trivial_abi, neat.

I do find it amusing that you defeated the compiler optimization in the benchmark merely by looping the push_backs until it couldn't unroll the loop and figure it out. I feel like the compiler ought to be smart enough to figure out there is no side effect to any of this, but it probably can't see past the complexity, especially because of the relocation logic.
Which... honestly just makes me think push_back should be avoided in general, because it's going to have garbage codegen and optimization implications anyway, at least when you can get away with using resize ahead of time and indexing, which, according to my experience with real-world code, is usually feasible. That use case is probably significantly less affected by the code bloat and EH-related issues you mention.

It could very well be that the unchecked variant you mention benefits codegen a lot on that front. However, the preconditions you need to check for in your code to ensure it is safe seem, in practice, close enough to what you'd need for .resize() and indexing, though?

What I do wish is that std::vector allowed resizing without initializing the Ts. With non-trivial types, I found that the compiler would sometimes still emit the memory-zeroing code even when unnecessary. I worked around that by wrapping my T in an ugly way, but it was trash.
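
A minimal sketch of the resize-ahead-and-index pattern described above, next to the push_back loop it replaces (the element count and the computation are arbitrary placeholders):

#include <stdio.h>
#include <vector>

int main()
{
    const size_t count = 1000;

    // push_back style: growth checks and occasional reallocation inside the loop.
    std::vector<int> a;
    for (size_t i = 0; i < count; ++i)
    {
        a.push_back(int(i) * 2);
    }

    // resize-ahead style: size fixed up front, the loop is a plain indexed store.
    std::vector<int> b(count);
    for (size_t i = 0; i < count; ++i)
    {
        b[i] = int(i) * 2;
    }

    printf("%d %d\n", a.back(), b.back());
    return 0;
}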

@trcrsired

trcrsired commented Feb 24, 2023

I have written a new guideline that would kill all modern C++ nonsense.
https://github.com/trcrsired/Portable-Cpp-Guideline

@trcrsired

./fast_io
fast_io::vector<T>:0.235741445s
./std
std::vector<T>:0.355929078s

No change on allocator. the performance gap is already HUGE due to cache unfriendliness of std::vector + binary bloat issues of std::vector

Not that I doubt it is easy to outperform std::vector::push_back, but I doubt the real world performance impact of std::vector is as brutal as you make it out to be, unless you have such numbers. By the way, how do you know that your benchmark actually performs the push_backs for all the vectors properly? I do not trust the results for this reason, even if it seems to be doing some amount of what you asked according to your numbers. I quickly checked the std::vector variant and it isn't a problem there, but I don't know about your vector implementation. I'm not trying to trash on your implementation, by the way, it is actually pretty interesting to me. Also, TIL about clang's trivial_abi, neat.

I do find it amusing that you defeated the compiler optimization in the benchmark merely by looping the push_backs until it couldn't unroll the loop and figure it out. I feel like the compiler ought to be smart enough to figure out there is no side effect to any of this, but it probably can't see past the complexity, especially because of the relocation logic. Which... honestly just makes me think push_back should be avoided in general, because it's going to have garbage codegen and optimization implications anyway, at least when you can get away using resize ahead of time and indexing, which, according to my experience with real-world code, is usually feasible. That usecase is probably significantly less affected by the code bloat and EH related issues you mention.

It could very well be that the unchecked variant you mention benefits codegen a lot on that front. However, the preconditions you need to check for in your code to ensure it is safe seem in practice close enough to what you'd need to .resize() and index, though?

What I do wish is that std::vector allowed resizing without initializing the Ts. With non-trivial types, I found that the compiler would sometimes still emit the memory zeroing code even when unnecessary. I worked around that by wrapping my T in a ugly way, but it was trash.

I can guarantee the performance gap is huge at the MACRO level due to the redundant code.

@Luiz-Monad

Because modern C++ is a mistake, you debate on it.

C++ is a mistake.

@Luiz-Monad

I'd say that you're not orthodox enough. First of all, there's no real use for the STL. Complicated data structures are never generic; they must be carefully designed for each particular task. On the other hand, primitive data structures, such as vector or list, provide almost no benefit, but are way harder to debug/maintain than hand-crafted linked lists and resizable arrays. Furthermore, the very idea of a container class is weird. A container is a thing that contains my data, but a class is a thing I can't (well, must not) look into. So, container classes mean I don't see my data. Hell, I don't want to be blind at debugging time, as debugging is hard enough without it.

Next, there's no such thing as 'safe features from new standards'. All technical standards since around 1995 or so are simply terrorist acts, and the committees that prepare and create new standards are dangerous international terrorist organizations; in particular, ISO, with its policy of keeping standards secret and selling their texts for money, is the most dangerous terrorist organization in the world, way more dangerous than Al Qaeda. ALL they do is bad. So everything they do must be banned right off, without consideration.

I'm nevertheless glad to see that someone in the world sees things almost as they are. The modern software industry looks like it's made of complete idiots, and there must be some way out of this situation.

Now I get why Linus opted out of this insanity.

@Luiz-Monad

Luiz-Monad commented Apr 3, 2023

@nicebyte I disagree. If you're writing low-level, data-oriented code, virtual classes almost always get in your way. After years of practical low-level professional game engine programming and personal computer science research, I dare to say that when you write code like a game engine, you end up in situations where you either have lots of objects that must be updated frequently and efficiently, or you are dealing with a few large entities (e.g. anything called *Manager, *System, etc). In the latter case, an opaque struct pointer followed by several API functions that operate on an instance of that (opaque) struct is expressive enough to create a neat abstraction, and I'd say much more OOP than what you'd get in C++, as all of its private implementation details (in the .c) are truly hidden from its interface (in the .h), whereas in C++ you are forced to declare the private members in the class interface (yuck #1). In the former case, you want to process all those objects in batches, grouping all the operations and common data relevant to a single object type in a "Type" struct. Something you simply cannot do with C++, because virtual tables are hidden and not supposed to be fetched or populated manually (yuck #2).

Want a practical example? Consider writing a raytracer. You could go ahead and make a Shape virtual interface that has an Intersect() method. Then you have an AggregateShape that has a std::vector<Shape*> (yuck #3) and its own Intersect() method that simply calls each child Shape's Intersect in turn and returns the closest hit to the ray origin. Sure, this is simple enough, but I'd argue not good enough. A better way is, for instance, having a ShapeType struct with a uint32 containing a unique id and a bunch of function pointers such as (*Intersect)(). Your AggregateShape can now be implemented with an array of ShapeTypes (using their unique ID as index), and its Intersect() function is arguably considerably faster, as it can be implemented per ShapeType and not per (any) Shape. New iteration, new ShapeType: store the address of its Intersect function in a register, call it in a hot loop for each shape of that type (naturally stored in an array by value, and not by pointer like in the previous case). Oh, and this also uses less memory, since you have removed the vptr from each Shape object.
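
A minimal sketch of the ShapeType idea described above; the shape kinds, fields, and the placeholder intersection math are only illustrative:

#include <stdio.h>

struct Ray { float ox, oy, oz, dx, dy, dz; };

// Per-type data, grouped so that shapes of one kind can be processed in a batch.
struct SphereData { float x, y, z, radius; };
struct PlaneData  { float nx, ny, nz, d; };

// One ShapeType per kind of shape: a unique id plus the function pointers for it.
struct ShapeType
{
    unsigned int id;
    // Intersects 'count' shapes of this type stored contiguously in 'shapes';
    // returns the distance to the closest hit, or a negative value on a miss.
    float (*Intersect)(const void* shapes, int count, const Ray& ray);
};

static float IntersectSpheres(const void* shapes, int count, const Ray& ray)
{
    const SphereData* spheres = (const SphereData*)shapes;
    (void)ray;
    // ... real sphere/ray math goes here; placeholder result for the sketch.
    return count > 0 ? spheres[0].radius : -1.0f;
}

static float IntersectPlanes(const void* shapes, int count, const Ray& ray)
{
    const PlaneData* planes = (const PlaneData*)shapes;
    (void)ray;
    return count > 0 ? planes[0].d : -1.0f;
}

int main()
{
    ShapeType types[] =
    {
        { 0, IntersectSpheres },
        { 1, IntersectPlanes  },
    };

    SphereData spheres[] = { { 0.0f, 0.0f, 5.0f, 1.0f } };
    PlaneData  planes[]  = { { 0.0f, 1.0f, 0.0f, 2.0f } };

    Ray ray = { 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f };

    // Hot loops: one indirect call per shape *type*, not one virtual call per shape.
    float hit0 = types[0].Intersect(spheres, 1, ray);
    float hit1 = types[1].Intersect(planes, 1, ray);

    printf("%f %f\n", hit0, hit1);
    return 0;
}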

No matter how hard I try, the more experienced I become, the less attractive C++ gets. Not to mention the various serious design mistakes the language has, which I will skip because I need to work. Let's just say C++ is a language that is prone to complexity, which exponentially leads to even more complexity and bugs.

Totally agree with that. VTables are a performance killer, yet people in C++ keep crying about garbage collectors. No, child, the GC isn't what made Java slow; it was the excessive use of the heap and virtual calls. You do that in C++ and you get slow code. You don't use virtual calls and inheritance and excessive object orientation in C# (or Java), and it goes fast as hell when the JIT kicks in.

But you take OCaml with a GC and it isn't slow (not until you misuse closures, which go on the heap).

Also, C++ has no "zero-cost" abstractions, because abstractions have mental costs. Nothing in C++ is free of cost; CPU time is cheap, but brain time is not.

Sometimes I want performance, so that's why I use C++, but mostly I get that performance from tightly coupling to whatever library I'm using that was made with C++, and then doing LTO. (One day GraalVM will kill that feat, and then I won't have any reason to use C++, because I'll be able to glue any C library to any higher-level language without any interop cost; that thing is a true engineering marvel.)

Memory management hardly matters much, I got here because of a discussion on allocators. How it is allocated and freed doesn't matter, the layout of the memory matters, which is why we use C++.

But then there's STL to bork everything.

Want to go fast? Just transpose your Array of Objects into an Object of Arrays, and then it doesn't matter if you use a GC, because the Object is going to be allocated with its millions of elements all contiguous (*1). Then the only thing that matters is alignment to the cache lines, but can you control that with C++? No, so what's even the point... of manual memory management?

(*1) (It's just one allocation and one free, which, divided by millions of elements, makes the cost of any automatic memory management almost free.)
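
A minimal sketch of the Array-of-Objects to Object-of-Arrays transpose described above, using the single-allocation layout mentioned in (*1); the Particle fields are arbitrary illustrative data:

#include <stdio.h>
#include <stdlib.h>

// Array of Objects: each element interleaves all fields in memory.
struct Particle { float x, y, z, mass; };

// Object of Arrays: the same data transposed into one array per field,
// carved out of a single contiguous allocation (one calloc, one free).
struct Particles
{
    float* x;
    float* y;
    float* z;
    float* mass;
    size_t count;
};

int main()
{
    const size_t count = 1000000;

    float* block = (float*)calloc(count * 4, sizeof(float));

    Particles p;
    p.count = count;
    p.x    = block + count * 0;
    p.y    = block + count * 1;
    p.z    = block + count * 2;
    p.mass = block + count * 3;

    // A pass that only touches x walks a single contiguous array,
    // instead of striding over whole Particle objects.
    for (size_t i = 0; i < p.count; ++i)
    {
        p.x[i] += 1.0f;
    }

    printf("%f\n", p.x[0]);

    free(block);
    return 0;
}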

Fortran only had arrays, and that was good enough. Pointers are kind of an abstraction leak from the underlying hardware architecture; I don't think there's a place for pointer arithmetic in a higher-level language, or do you guys like to do manual bank switching too?

@Luiz-Monad

In IT systems in general, the programming language is not of primary importance. Architecture is.

An IT system (not your desktop experiments) is made of components. Each component has a well-defined interface and a set of operational requirements, e.g. size, speed, security, and such. If your component meets its operational requirements, I really do not care whether you have written it in Quick Basic, Julia, Rust, Elm, Awk, Bash script, or C++.

Just after that, it will be rigorously tested for meeting the functional requirements.

Also please do not (ever) forget the economy of IT systems: how much money will it take to develop it and more importantly how much money will it take to maintain it for the next 10+ years.

Kind regards ...

Great points.

As a developer: C++ is great for one thing: it's complex and requires a lot of technical knowledge, so I can command a higher salary.
As a businessman: I want to get out of it because I care about operational requirements, and finding good C++ developers is hard and expensive.

I wonder why I keep C++, if not only for legacy reasons (i.e. interop with legacy things already made in C++, such as OpenCV). But using another programming language shouldn't be out of scope.

This is wisdom.

@Luiz-Monad

  1. BLAS and LAPACK are written like that, with "in-out" arguments, as are many high-performance Fortran libraries. Why is that pointless?
    We can replicate that in C++ by using plain reference parameters (see the sketch after this list).

    1. Concerning void*, a lot of new libraries are using type erasure, as does Boost with Boost.TypeErasure. std::any, void*, etc... why is that trolling?
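
A minimal sketch of the plain-reference "in-out" argument style referred to in point 1; dot and its signature are chosen only for illustration:

#include <stdio.h>

// Fortran-style "in-out" argument replicated with a plain C++ reference:
// the result is written into caller-provided storage instead of being
// returned through an allocated object.
static void dot(int n, const float* x, const float* y, float& result)
{
    result = 0.0f;
    for (int i = 0; i < n; ++i)
    {
        result += x[i] * y[i];
    }
}

int main()
{
    float x[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    float y[4] = { 0.5f, 0.5f, 0.5f, 0.5f };

    float r = 0.0f;
    dot(4, x, y, r); // r is the "in-out" argument, modified in place
    printf("%f\n", r);
    return 0;
}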

The irony of having to use Fortran for performance !!!!

@Barenboim

I believe C++ Workflow is a good practice of orthodox C++ ;) Could you add it to the code examples?

@Anongh

Anongh commented Oct 22, 2023

It's not easy to adhere to these tenets when using libraries written after modern C++ (including C++11, obviously).

@Journeyman1337

Journeyman1337 commented Dec 22, 2023

I agree that C++ has a lot of problems in its design, stemming from the fact that the language is old and the standard committee is too reluctant to change things. I hate how a lot of features in C++ are often implemented halfway due to debate between contributors, such as how std::optional has no specialization for references because the committee couldn't agree on the behavior of the assignment operator. However, I don't think the solution is to go back in time and use older functionality. I think the solution is to create a successor language, create a library that wraps around and fixes the issues of the standard library, or switch to Rust. The problem is not with C++, but rather with the standard template library. I have been messing around lately with rewriting parts of the standard library myself, and I have been surprised at how much easier it is to work with when you remove some of the more complicated "features". Why did they have to give the Mersenne Twister random number generator such a weird and hard-to-remember name (std::mt19937)? In my opinion, simplification without the addition of new functionality is the best kind of improvement, and it would not take much to make C++ more bearable to work with.

@trcrsired

It's not easy to adhere to these tenets when using libraries written after modern C++ (including C++11, obviously).

unique_ptr is a huge issue in C++. It should not exist in the first place.

@trcrsired

trcrsired commented Dec 28, 2023

I agree that C++ has a lot of problems in its design, stemming from the fact that the language is old and the standard committee is too reluctant to change things. I hate how a lot of features in C++ are often implemented halfway due to debate between contributors, such as how std::optional has no specialization for references because the committee couldn't agree on the behavior of the assignment operator. However, I don't think the solution is to go back in time and use older functionality. I think the solution is to create a successor language, create a library that wraps around and fixes the issues of the standard library, or switch to Rust. The problem is not with C++, but rather with the standard template library. I have been messing around lately with rewriting parts of the standard library myself, and I have been surprised at how much easier it is to work with when you remove some of the more complicated "features". Why did they have to give the Mersenne Twister random number generator such a weird and hard-to-remember name (std::mt19937)? In my opinion, simplification without the addition of new functionality is the best kind of improvement, and it would not take much to make C++ more bearable to work with.

No. The problem is the language itself. Exception handling was the language's biggest design failure.

Also, I do not quite buy the argument that they do not change things because the language is old. C++, as a capitalist imperialist product, will always serve the interests of big capitalists. These capitalists write shit, and that is why they keep promoting shit into the language, like std::format.

If you look at the history of exception handling, it was added not to fix the issues of C++ but to serve the interests of big companies in the 1990s (HP and Intel). The capitalists' influence on C++ is why C++ sucks.

The changes we want as programmers never matter to WG21; we do not donate money to them. They only listen to capitalists. Again, you cannot work around capitalism; you have to defeat it.

@sasmaster

sasmaster commented Dec 28, 2023 via email

@trcrsired

trcrsired commented Dec 29, 2023

We're discussing the practical realities of the real world here, not delving into idealistic notions of socialism or communism.

The truth is, decisions made within WG21 predominantly cater to the interests of large international corporations. If these corporations have a plethora of issues to contend with, such as exceptions for bad_alloc or an overreliance on object-oriented programming, meaningful change becomes unlikely.

Should WG21 cease to serve the interests of these corporations, it's plausible that the organization would dissolve, or its leader might be strategically removed to safeguard those interests. This aligns with the principles of historical materialism. Much like World War II would have occurred without Hitler, the functioning of the US empire remains consistent regardless of whether Donald Trump or Joe Biden assumes the presidency.

In essence, WG21 has become an arena dominated by these major corporations. For instance, while Google advocates for an ABI break, other corporations may resist. Our opinions and expressions carry minimal weight in this scenario.

You mentioned "historical reasons," and this is precisely one of them. What you and I may criticize about the language is inconsequential.

To begin with, they are unlikely to heed our concerns. They disregarded them 30 years ago, and the situation remains unchanged. Had they been receptive, issues like permitting HP to incorporate exception handling in the manner they did 30 years ago would have been avoided. Similarly, if they were attentive, freestanding would not have lacked ::std::addressof until C++23, as Nvidia desired. Even in C++26, std::array remains absent in the freestanding context. The addition of features like std::filesystem, marred by dual error reporting mechanisms, stands as another testament to their lack of heedfulness.

Bjarne himself is clearly a big fan of bourgeois democracy and believes democracy fixes everything. While in reality, it is just another kind of dictatorship.

@Journeyman1337

Journeyman1337 commented Dec 30, 2023

I actually believe that C++ is communist. C++ is a cornerstone of the Linux community, and everyone knows supporters of the free software movement are communists. It may as well be renamed to Communist++.

Don't let me even get started on JavaScript...

@trcrsired

trcrsired commented Dec 30, 2023

I actually believe that C++ is communist. C++ is corner stone of Linux community, and everyone knows supporters of the free software movement are communist. It may as well be renamed to Communist++.

Don't let me even get started on JavaScript...

Free software has nothing to do with communism.

Whatever, C++'s history has always been controlled by big corporations. It is a fact, and nobody should deny it. If you look at how they vote in ISO, only big companies are allowed to vote. And companies like IBM have a long history of pushing C++ to support many legacy nonsenses, like the EBCDIC execution charset.
If no one funds WG21, it will just collapse.

The features we as individual programmers need do not align with big corporations.

@Journeyman1337

I actually believe that C++ is communist. C++ is corner stone of Linux community, and everyone knows supporters of the free software movement are communist. It may as well be renamed to Communist++.
Don't let me even get started on JavaScript...

Free software has nothing to do with communism.

Whatever, C++'s history is always big corporation controlled. It is a fact and nobody should deny. If you look at how they vote in ISO, only big companies are allowed to vote. And companies like IBM had a long history of pushing C++ to support many legacy nonsenses, like EBCDIC execution charset. If no one funds the WG21, it would just collapse.

The feature we as individual programmers need do not align with big corporations.

Sorry if it was not obvious, but that was sarcasm.

@aerosayan

@bkaradzic hi,

After using the Orthodox C++ standard for a few years, I can say the rule that allows new C++ features should probably be reworded.

  • "Due to lag of adoption of C++ standard by compilers, OS distributions, etc. it's usually not possible to start using new useful language features immediately. General guideline is: if current year is C++year+5 then it's safe to start selectively using C++year's features." - Orthodox C++

I don't think this rule is strict enough.
My reasoning is that the statement mainly provides a safety clause to ensure people don't use unstable features.
While that is good, I don't think it's as specific as it should be,
mainly because the problems with modern features only show themselves after a few years of overusing them in your codebase.

For example:

  • Lambdas, from your example, destroy compilation speed and increase the memory consumption of supporting tools like LSPs (clangd specifically).
  • Similarly, constexpr functions also slow compilation and severely increase memory consumption for LSPs.
  • Templates are also problematic, but we can use extern templates to make them usable while still remaining fast to compile (see the sketch after this list).
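
A minimal sketch of the extern template technique from the last point: the instantiation is emitted in one translation unit, and the others only see a declaration (the file split is indicated by comments and is illustrative):

// clamp.h -- the template is defined in the header as usual.
template<typename T>
T clamp_value(T value, T lo, T hi)
{
    return value < lo ? lo : (value > hi ? hi : value);
}

// Every including translation unit is told NOT to instantiate clamp_value<float>
// itself; it links against the single explicit instantiation below instead.
extern template float clamp_value<float>(float, float, float);

// clamp.cpp -- the one translation unit where the instantiation is emitted.
template float clamp_value<float>(float, float, float);

// main.cpp
#include <stdio.h>

int main()
{
    // Uses the pre-built instantiation; the compiler skips re-instantiating it here.
    printf("%f\n", clamp_value(3.5f, 0.0f, 1.0f));
    return 0;
}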

As I've observed, everything in modern C++ over-complicates the language, and slows down compilation.

All in all, if one allows modern C++ features without warning about the potential dangers of those features and/or how to use them safely, then I don't think it is Orthodox anymore, as users would need to depend on new standards (--std=c++11, --std=c++14, etc.), and anyone could use bad modern features like std::ranges and think they're being safe.

Thanks.

@GabrielRavier

@aerosayan What would you suggest doing, then? Staying on C++98? It seems quite obvious that most of the things you are complaining about are just as present there, along with bigger footguns that the more recent standards have actually managed to remove over the years.

@aerosayan

@GabrielRavier, I think most things that were included after C++11 are a mistake.

C++11 seems to be good enough for most things.

Though I should point out that my aversion to modern features is due to their unintended side effects, not the language version itself.

For example:

  • Almost all C++ devs heavily promote smart pointers, as they're supposed to be safe.
  • However, almost no one mentions that std::shared_ptr uses atomic operations (atomic increments and decrements) internally for reference counting, so for critical software engineering, like scientific code development in my case, std::shared_ptr is harmful: it will degrade performance, and it will be extremely difficult, or even impossible, for us to find the cause of it (see the sketch after this list).
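
A minimal sketch of the cost being described: copying a std::shared_ptr performs atomic reference-count updates, which passing a plain reference (or a raw non-owning pointer) avoids; Mesh and the function names are only illustrative:

#include <stdio.h>
#include <memory>

struct Mesh { int vertex_count; };

// Taking shared_ptr by value: copying it bumps the atomic reference count
// on every call, even though this function only reads the mesh.
static int count_by_value(std::shared_ptr<Mesh> mesh)
{
    return mesh->vertex_count;
}

// Taking a plain reference: no ownership transfer, no atomic operations
// on the hot path.
static int count_by_ref(const Mesh& mesh)
{
    return mesh.vertex_count;
}

int main()
{
    std::shared_ptr<Mesh> mesh = std::make_shared<Mesh>();
    mesh->vertex_count = 1024;

    printf("%d %d\n", count_by_value(mesh), count_by_ref(*mesh));
    return 0;
}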

If a team knows what is best for them, they can use whichever feature they want.

@DBJDBJ

DBJDBJ commented Mar 4, 2024

GCC and Clang should introduce a new switch: /orthodox

@GabrielRavier

GCC and clang should introduce new switch: /orthodox

@DBJDBJ what would the flag do?
