robinm

joined 1 year ago
[–] robinm@programming.dev 1 points 10 months ago (1 children)

If you have references explaining why and how it’s easier to port C to a new architecture by writing a new compiler from scratch than by either creating a backend for llvm (and soon gcc) or writing a minimal wasm executor (like what zig is doing) for that new architecture, I’m interested. And of course I’m talking about new architectures, because it’s much easier to recreate something that has already been done before.

[–] robinm@programming.dev 5 points 10 months ago

I’m not familiar with C tooling, but I have done multiple projects in C++ (in a professional environment) and AFAIK the tooling is the same. Tooling for C++ is a nightmare, and that’s an understatement. Most of the difficulty is self-inflicted: not using cmake/meson but a custom build system, relying on system libraries instead of using Conan or vcpkg, not using smart pointers, … But adding basically anything (an LSP, code coverage, a new dependency, clang-format, clang-tidy, …) is horrible in those environments. And if you compare the quality of those tools to those of other languages, they are not even close. Compare, for example, the lints given by clang-tidy to those of Rust’s clippy.

If it took no more than an hour to add any of those tools to a legacy C project, then yes, it would be disingenuous not to compare C + tooling with Rust. But unfortunately it doesn’t.

[–] robinm@programming.dev 4 points 10 months ago (8 children)

With Bram Moolenaar’s death, I sincerely think that vim will no longer be able to play catch-up with nvim. Bram Moolenaar did an amazing job with vim, but with his death I think that vim is going to become an editor of the past, just like vi is an editor of the past. And nvim is its successor, since that’s where the developers have moved.

[–] robinm@programming.dev 1 points 10 months ago* (last edited 10 months ago)

I never had to use this estimate in front of a client, but if I had, I would decompose it first before giving the total. If there are about 10 items to handle per button, then 10 buttons means a hundred complex tasks. Say each task takes an hour, but since we are fast we can do 10 a day. Suddenly 10 working days, or put otherwise 2 weeks, doesn't seem unrealistic for this apparently simple 10-button job.

[–] robinm@programming.dev 25 points 11 months ago (2 children)

As a rough estimate, if you include everything (appearance, discussion, functionality, interaction with other controls, …) I would say that every single input field or button is about a day of work. Then you start to realise how many buttons there are in any GUI and how much it will cost.

[–] robinm@programming.dev 7 points 11 months ago

Usually when people say “I suck at maths”, it means they are bad at doing manual calculation. Maths is extremely useful in programming, but it’s absolutely not the same kind of maths. I don’t think the grade you got in maths at school will influence in any way whether you will be good or bad at programming.

[–] robinm@programming.dev 2 points 11 months ago (2 children)

I would even have said that both throwing and catching should be pure, just like returning an error value and handling it should be pure; it's the reason for the throw/error return itself that is impure. For instance, if you throw an IOError, it's only after making the impure I/O call, and the rest of the error reporting/handling can itself be pure.
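A minimal sketch of this split in Rust (file path and function names are mine): the I/O call is the impure step that can fail differently on every run, while turning the resulting error value into a report is a pure function of its input.

```rust
use std::fs;
use std::io;

// Impure: touches the filesystem, may fail differently on each call.
fn read_config(path: &str) -> Result<String, io::Error> {
    fs::read_to_string(path)
}

// Pure: mapping an error value to a message depends only on its input.
fn describe(err: &io::Error) -> String {
    match err.kind() {
        io::ErrorKind::NotFound => "config file missing, using defaults".to_string(),
        _ => format!("could not read config: {err}"),
    }
}

fn main() {
    match read_config("/nonexistent/config.toml") {
        Ok(text) => println!("loaded {} bytes", text.len()),
        Err(e) => println!("{}", describe(&e)),
    }
}
```

Everything interesting about the error (reporting, fallback choice, retry policy) lives in the pure half, which is trivially testable without any filesystem.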

[–] robinm@programming.dev 1 points 11 months ago (4 children)

I'm surprised by this statement. I would have said that exceptions are the consequence of an impure operation (one that may or may not fail differently every time you call it).

[–] robinm@programming.dev 7 points 11 months ago* (last edited 11 months ago)

Their take on what they call capabilities is very interesting. Basically anything that would make a function non-pure seems to be declared explicitly.

A computational effect or an "effectful" computation is one which relies on or changes elements that are outside of its immediate environment. Some examples of effectful actions that a function might take are:

  • writing to a database
  • throwing an exception
  • making a network call
  • getting a random number
  • altering a global variable
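To illustrate the quoted distinction with a sketch in Rust (function names are mine): the first function depends only on its arguments, while the second reads the system clock, an element outside its immediate environment.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Pure: the result depends only on the arguments.
fn add(a: u32, b: u32) -> u32 {
    a + b
}

// Effectful: reads the system clock, so two calls with the
// same (zero) arguments can return different values.
fn now_secs() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("clock before 1970")
        .as_secs()
}

fn main() {
    println!("{}", add(2, 2)); // always 4
    println!("{}", now_secs()); // varies per call
}
```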

[–] robinm@programming.dev 0 points 11 months ago

Interesting take, but I think you are right. It's indeed critical to know how your product is used nowadays.

[–] robinm@programming.dev 2 points 11 months ago

2019, so 4–5 years ago: not that recent, but not ancient either. Unfortunately the tutorials have not been updated.

I would say that the biggest benefit of git switch is that you can't switch to a detached state without using a flag (--detach or -d). If you do git checkout $tag or git checkout $sha1 you may at some point get the error “you are in a detached HEAD state”, which is incomprehensible for beginners. To get into the same state with git switch you must explicitly use git switch --detach $tag/$sha1, which makes it much easier to understand, and to remember that you are doing something unusual.
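A quick demo of that guard rail, using a throwaway repo (branch and tag names are mine):

```shell
# Set up a disposable repo with one commit and a tag.
cd "$(mktemp -d)" && git init -q -b main
git -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m "init"
git tag v1.0

git switch -c feature-x    # create and switch to a new branch
git switch main            # plain switch: branch names only
git switch v1.0 || true    # refused: "a branch is expected, got tag 'v1.0'"
git switch --detach v1.0   # detached HEAD must be asked for explicitly
```

Where git checkout v1.0 silently detaches HEAD, git switch forces the beginner to spell out that intention.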

More generally, it's harder to misuse git switch/git restore. And they are easier to explain, since they each do only one thing (unlike git checkout, which is a mess!).

So if it's only for you, git checkout is fine, but I would still advise using git switch and git restore, so you will have an easier time teaching/helping beginners.

[–] robinm@programming.dev 7 points 11 months ago (3 children)

If you try to learn git one command at a time on the fly, git is HARD. If you take the time to understand its internal data structure, it's much, much easier to learn. Unfortunately most people try the former, because it works well enough (or better) for most tasks.

I can't recommend the git parable enough.
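For a taste of that internal data structure, git will happily show you its own objects (throwaway repo, file name is mine):

```shell
# Disposable repo with a single file and commit.
cd "$(mktemp -d)" && git init -q -b main
echo hello > file.txt && git add file.txt
git -c user.email=me@example.com -c user.name=me commit -q -m "first"

git cat-file -t HEAD            # "commit": every object has a type
git cat-file -p HEAD            # tree hash, author, commit message
git cat-file -p 'HEAD^{tree}'   # the tree maps file names to blob hashes
git cat-file -p HEAD:file.txt   # the blob is just the file content
```

Once you see that a repository is only commits pointing at trees pointing at blobs, with branches as movable pointers to commits, most commands stop feeling arbitrary.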
