Pointers

https://twitter.com/theprimeagen/status/1683671315377541121

>i have become unsafe, destroyer of production


if you use double pointers anywhere outside plain C (not C++), where they're required in some cases, then you should be executed

not an opinion, just an objective fact


C is still useful in some embedded contexts where you don't want to bring along an entire C++ runtime but yeah that's about it.


This feels less and less common, no?


Follower of Christ :marseyandjesus: Tech lover, IT Admin, heckin pupper lover and occasionally troll. I hold back feelings or opinions, right or wrong because I dislike conflict.


It is. People tend to focus on cost improvements with high-end hardware, but the same is true of low-end stuff; if anything, the effect is probably felt even more there. Realistically there's just little reason to have extremely weak microprocessors anymore. You can get something that can run C++ just fine for very, very cheap and with low power draw. There are obviously still some specialty applications where you run on a tiny microcontroller with 128 KB RAM (for example), but that's become less and less common.

I just checked and you can buy a Raspberry Pi Zero 2 W for $15. That's a full butt computer that runs Linux with 512 MB RAM, a wifi chip, and various connectors (including mini HDMI and video output) for $15. Obviously a major engineering firm would be designing their own boards and stuff and not using this, but it's just an example of how cheap low-end microprocessors are these days.

tbh at this point I think most of the embedded systems that are only running C and not at least C++ are doing so for legacy reasons.


I'm seeing rumblings of Rust being popular for embedded too but clearly that'll take another five years to be really established.

But yeah, I remember the cool HN post of a guy finding a cheap enough chip to run Linux on for his business cards.

https://www.thirtythreeforty.net/posts/2019/12/my-business-card-runs-linux/

Appears to be 2019 even.


Follower of Christ :marseyandjesus: Tech lover, IT Admin, heckin pupper lover and occasionally troll. I hold back feelings or opinions, right or wrong because I dislike conflict.


I actually really like Rust and hope it takes off in this space. The community is :marseytrain: garbage but the actual language and ecosystem are good.

I haven't worked in embedded development for many years, but even when I did, the newer hardware we were rolling out was like 20x more powerful than what came before, and cheaper. It was actually kinda annoying because they figured with the substantially more powerful microprocessor, they didn't need any dedicated chips to run the serial buses. Works fine in practice but it means if you have to debug anything on the board, every time you hit a breakpoint (even a conditional breakpoint that misses) it fricks all the serial connections up. Made debugging a bit of a pain.


Any opinion on the Zephyr RTOS?

Kind of funny that it's C when we were just talking about that becoming less and less popular, but it seems to have a really good build system.


Follower of Christ :marseyandjesus: Tech lover, IT Admin, heckin pupper lover and occasionally troll. I hold back feelings or opinions, right or wrong because I dislike conflict.


Sorry, no, it's been many years since I've worked with an RTOS. I had to look up a list of them to find the one my old job used: it was ThreadX. It seemed fine; at least while I was there I didn't uncover any bugs in the RTOS, so that's about as good as it gets. All the boards it ran on were custom/proprietary as well, so I doubt any of the free ones would work right out of the box.



Yeah, it really is - I work in embedded and the hardware I'm working with is powerful enough that we can easily use modern C++ and not care. The efficiency gains (whether in execution speed or memory usage) are so small they're not worth the trade-off in developer time.


And building robots that run on microcontrollers is really fun, so everyone should learn C


Putting the :e: in :marseyexcited:


This


When I don’t use C I use Go or, if really lazy, Python


Python is just so cozy

:marseytoasty:

I use it for the vast majority of my personal projects since the data sets I'm working with aren't large enough to need better performance and they aren't complicated enough to require static typing.


C is so 1970s. The future is JavaScript, bros.


that's why i always use references to pointers to references.


:#marseydizzy:

I remember at one point using some reference objects, but I threw in the towel when I realized I could just use shared_ptr instead; it wasn't even in a hot loop, so the overhead didn't matter.
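
Roughly the trade-off, assuming the "reference objects" were something like std::reference_wrapper (all the names here are just illustrative): the shared_ptr version costs a refcount bump when copied, which you only ever notice in a hot loop.

```cpp
#include <functional>
#include <memory>
#include <cstdio>

struct Config { int retries = 3; };

// Non-owning: the Config must outlive this struct.
struct UsesRef {
    std::reference_wrapper<Config> cfg;
    int retries() const { return cfg.get().retries; }
};

// Owning (shared): copies bump an atomic refcount, lifetime is automatic.
struct UsesShared {
    std::shared_ptr<Config> cfg;
    int retries() const { return cfg->retries; }
};

int main() {
    Config c;
    UsesRef a{std::ref(c)};
    UsesShared b{std::make_shared<Config>()};
    std::printf("%d %d\n", a.retries(), b.retries());
}
```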


C is my bread and butter. Double pointers for 2d variable length arrays.


Putting the :e: in :marseyexcited:
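
For the curious, a minimal sketch of that kind of variable-row-length layout (sizes are made up), written in C-flavored C++ so it compiles alongside the other snippets here; each row gets its own length:

```cpp
#include <cstdio>

int main() {
    const int rows = 3;
    const int row_len[rows] = {2, 5, 3};   // each row has its own length

    // Outer array of row pointers, each pointing at its own allocation.
    int** table = new int*[rows];
    for (int r = 0; r < rows; ++r) {
        table[r] = new int[row_len[r]];
        for (int c = 0; c < row_len[r]; ++c)
            table[r][c] = r * 10 + c;
    }

    for (int r = 0; r < rows; ++r) {
        for (int c = 0; c < row_len[r]; ++c)
            std::printf("%d ", table[r][c]);
        std::printf("\n");
    }

    for (int r = 0; r < rows; ++r) delete[] table[r];
    delete[] table;
}
```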


That’s cool I guess


you use int** for 2d arrays? neighbor that doesn't even work since you've erased the size info from both dimensions, unless by "2d" you mean an array of pointers to row arrays, which isn't really 2d, it's just an array of arrays


>it's just an array of arrays

Yeah and a matrix is just a vector of vectors, what's your point?


it's not the same in software, mathcel


what are you talking about, it's different because it:

  • involves an extra array of pointers in memory

  • requires two pointer dereferences instead of one to get a value (see the sketch below)

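A quick sketch of the difference (dimensions are illustrative):

```cpp
#include <cstdio>

int main() {
    const int rows = 3, cols = 4;

    // Pointer-table version: the data plus an extra array of row pointers.
    int storage_a[rows][cols] = {};
    int* row_ptrs[rows];
    for (int r = 0; r < rows; ++r) row_ptrs[r] = storage_a[r];
    int** a = row_ptrs;
    int x = a[1][2];                     // two loads: a[1], then a[1][2]

    // Flat-buffer version: just the data, index computed from the dimensions.
    int storage_b[rows * cols] = {};
    int y = storage_b[1 * cols + 2];     // one load after a bit of arithmetic

    std::printf("%d %d\n", x, y);
}
```
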

Unless you are using a Boost array, nothing beats pointers for speed when variable size is required.


the number of scenarios where you need variable size and also std::vector<> is too slow for you is vanishingly small

not to mention that I'm talking about double (or triple or more) pointers; your scenario is only a single pointer. there are certainly some uses for raw pointers in C++ (although std::unique_ptr<> is better in the vast majority of cases and has zero runtime overhead for dereferencing), but frankly they're few and far between these days.
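
For what it's worth, a tiny sketch of that point (the Sensor type is made up): unique_ptr's operator-> compiles down to the same pointer read as the raw version, you just stop writing delete by hand.

```cpp
#include <memory>
#include <cstdio>

struct Sensor { int id = 7; };

int main() {
    // Raw owning pointer: you have to remember delete on every exit path.
    Sensor* raw = new Sensor;
    std::printf("%d\n", raw->id);
    delete raw;

    // unique_ptr: same dereference cost, ownership handled for you.
    auto owned = std::make_unique<Sensor>();
    std::printf("%d\n", owned->id);
}   // owned's Sensor is destroyed here automatically
```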


>not to mention that I'm talking about double (or triple or more) pointers; your scenario is only a single pointer.

Depending on the design space, having triple or double pointers is better for representing a 3D space with your vector field. If you order your data right, it becomes really easy to pass along entire blocks for MPI messages by abusing how pointers work across rows.

>std::vector<>

Is dogshit slow if you are initializing a buffer during your computation. It initializes all elements, whereas a raw array does not. This is a huge performance hit when you are talking millions of elements. There is a reason high-end computational codes avoid std::vector.
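
Roughly the behavior being described (the size is illustrative); whether it matters depends on whether that first zeroing pass shows up in your profile:

```cpp
#include <cstddef>
#include <cstdio>
#include <memory>
#include <vector>

int main() {
    const std::size_t n = 10'000'000;   // illustrative element count

    // std::vector value-initializes: all n doubles are written to 0.0
    // before your computation ever touches them.
    std::vector<double> zeroed(n);

    // A raw buffer of a built-in type is left uninitialized, so there is
    // no extra pass over the memory before you fill it yourself.
    // (C++20 spells this std::make_unique_for_overwrite<double[]>(n).)
    std::unique_ptr<double[]> raw(new double[n]);
    for (std::size_t i = 0; i < n; ++i) raw[i] = static_cast<double>(i);

    std::printf("%f %f\n", zeroed[0], raw[n - 1]);
}
```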


int** is a pointer to a pointer to an int. Not a "2d array". There's no dimension so if you think you're gonna x[4][2] or something it's not gonna work the way you think it will.

If you're really looking for a 2d array you're better off having a plain int* / int[] buffer in a struct with the dimensions and performing lookups via getters or operator overloads. You can also slice up the larger buffer at will.
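
One way to sketch that (Grid2D is an illustrative name; the backing storage here is a std::vector, but a plain int* buffer works the same way):

```cpp
#include <cstdio>
#include <vector>

struct Grid2D {
    int rows, cols;
    std::vector<int> data;                 // one contiguous rows*cols buffer

    Grid2D(int r, int c) : rows(r), cols(c), data(r * c) {}

    int&       operator()(int r, int c)       { return data[r * cols + c]; }
    const int& operator()(int r, int c) const { return data[r * cols + c]; }
};

int main() {
    Grid2D g(4, 2);
    g(3, 1) = 42;
    std::printf("%d\n", g(3, 1));
    // Slicing: row r begins at g.data.data() + r * g.cols, so any run of
    // whole rows is already a contiguous int* block you can hand out.
}
```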

What int** is actually used for, in reality, is when you want something like an int* output parameter to a function. Even then a reference would be better so the only acceptable use is if you also need to support passing nullptr to "ignore" that output parameter.
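
And a sketch of that output-parameter shape (read_samples is a made-up function); passing nullptr means "don't bother giving me the buffer":

```cpp
#include <cstdio>
#include <cstdlib>

// Hypothetical C-style API: returns a status code and, if the caller asks,
// hands back a newly allocated buffer through the int** out-parameter.
int read_samples(int** out, int* out_len) {
    if (out != nullptr && out_len != nullptr) {
        *out_len = 3;
        *out = static_cast<int*>(std::malloc(3 * sizeof(int)));
        (*out)[0] = 1; (*out)[1] = 2; (*out)[2] = 3;
    }
    return 0;   // status
}

int main() {
    int* samples = nullptr;
    int len = 0;
    read_samples(&samples, &len);     // caller wants the data
    read_samples(nullptr, nullptr);   // caller only cares about the status
    std::printf("%d values, first = %d\n", len, samples[0]);
    std::free(samples);
}
```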


>"2d array".

It's funny, I never said the words "2d array".

>so if you think you're gonna x[4][2] or something it's not gonna work the way you think it will.

Out of the box, no, but you can initialize the outer pointer as an array, and then the inner arrays, and end up with an array of arrays that can be accessed as A[i][k]. Which, if you set it up right, makes it trivial to pass the start of your sections with A + r*m*n to MPI or other parallel paradigms, like passing parts of the array to a GPU.
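
Something like this, assuming one contiguous backing buffer with the outer array of pointers aimed into it (2D here to keep it short; sizes and names are illustrative). The contiguous storage is what makes a block of rows a single pointer-plus-count you could hand to MPI or a GPU copy:

```cpp
#include <cstdio>

int main() {
    const int m = 4, n = 3;                  // illustrative dimensions

    // One contiguous block of data, plus an outer array of row pointers into it.
    int* data = new int[m * n];
    int** A = new int*[m];
    for (int i = 0; i < m; ++i) A[i] = data + i * n;

    A[2][1] = 99;                            // normal A[i][k] access

    // Because the backing store is contiguous, the block starting at row r is
    // just (data + r * n) with (block_rows * n) elements: the pointer-plus-count
    // shape that an MPI send or a GPU memcpy wants.
    const int r = 2;
    int* block = data + r * n;
    std::printf("%d\n", block[1]);           // prints 99

    delete[] A;
    delete[] data;
}
```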

Let me guess, you're a CS, not a CE?


You'd need one actual buffer with the data then a separate array to the side that contains pointers into that buffer. This extra array of pointers is generally completely unnecessary, at least if your data is rectangular - the only possible requirement for it is if you have a "jagged 2d array", basically where each row has its own length.

What you're describing is adding a whole extra array and indirection because you're too r-slurred to use an operator overload or getter, and apparently don't think compilers optimize getters (hint: they're virtually always inlined unless you define them outside the class/struct def).

Let me guess, you do this shit on some paltry raspberry pi "cluster" and don't actually have a job doing it for real?


>Let me guess, you do this shit on some paltry raspberry pi "cluster" and don't actually have a job doing it for real?

So I guess the answer to my question is "Yes" you are a CS kiddie, based on not actually replying to what I wrote.

You are not going to want to use a getter on a GPU; you want to pass the least amount of data to the GPU. For instance, if you are running Strassen's algorithm you can pass the submatrices using linear offsets to MPI, and dereferencing the outer pointer after your linear offset lets you pass that submatrix to CUDA. This becomes a lot more important when your matrix is actually a third-order tensor with a vector of your flow field, which keeps it cleaner.

I do have a job doing this, I even helped get one of our GPU codes to work on Summit.


There's literally nothing stopping you from slicing up a buffer and you don't need a separate array of pointers off to the side to do it.

