this post was submitted on 15 Jan 2025
82 points (97.7% liked)

Programming


This may make some people pull their hair out, but I’d love to hear some arguments. I’ve had the impression that people really don’t like bash, not from here, but just from people I’ve worked with.

There was a task at work where we wanted something that’ll run on a regular basis, and doesn’t do anything complex aside from reading from the database and sending the output to some web API. Pretty common these days.

I can’t think of a simpler scripting language to use than bash. Here are my reasons:

  • Reading from the environment is easy, and so is falling back to some value; just do ${VAR:-fallback}; no need to write another if-statement to check for nullity. Wanna check if a variable’s set to something expected? if [[ <test goes here> ]]; then <handle>; fi
  • Reading from arguments is also straightforward; instead of import sys; sys.argv[1] in Python, you just do $1.
  • Sending a file via HTTP as part of an application/x-www-form-urlencoded request is super easy with curl. In most programming languages, you’d have to manually open the file and read it into bytes before putting it into the request object of whichever HTTP library you imported. curl already does all that.
  • Need to read from a curl response and it’s JSON? Reach for jq.
  • Instead of having to set up a connection object/instance to your database, give sqlite, psql, duckdb or whichever cli db client a connection string with your query and be on your way.
  • Shipping is… fairly easy? Especially if docker is common in your infrastructure. Pull Ubuntu or debian or alpine, install your dependencies through the package manager, and you’re good to go. If you stay within Linux and don’t have to deal with differences in bash and core utilities between different OSes (looking at you macOS), and assuming you tried not to do anything too crazy and brought in necessary dependencies by calling out to them, it should be fairly portable.
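Putting those points together, a minimal sketch of the kind of script described above (the sqlite3 and curl lines are illustrative and commented out; API_URL, the endpoint, and the table are made up):

```shell
#!/usr/bin/env bash
set -euo pipefail

db_file="${DB_FILE:-/tmp/app.db}"   # env var with a fallback, no if-statement needed
report_name="${1:-daily}"           # positional arg with a fallback

# Query via a CLI client instead of a driver (illustrative):
# rows=$(sqlite3 "$db_file" 'SELECT count(*) FROM events;')

# Form-encoded POST, then pull a field out of the JSON reply (illustrative):
# curl -s -d "name=$report_name" "$API_URL/reports" | jq -r '.id'

echo "would report '$report_name' from '$db_file'"
```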

Sure, there can be security vulnerability concerns, but you’d still have to deal with the same problems with your Pythons, your Rubies, etc.

For most bash gotchas, shellcheck does a great job of warning you about them and telling you how to address them.

There are probably a bunch of other considerations that I can’t think of off the top of my head, but I’ve addressed a bunch before.

So what’s the dealeo? What am I missing that may not actually be addressable?

[–] melezhik@programming.dev 12 points 17 hours ago* (last edited 17 hours ago) (1 children)

We are not talking about use of Bash in dev vs. use of Bash in production. That is, imho, the wrong question, and it skirts around the real problem in software development. We are talking about use of Bash for simple enough tasks, where the code is rarely changed (if not written once and thrown away) and where any primitive language or DSL is OK. When it comes to building medium-size or complex software systems, where decomposition, support for complex data structures, unit tests, error handling, concurrency, etc. are a big deal, Bash really sucks, because it does not let you deal with scaling challenges. By scaling I mean needing to rapidly change a huge code base according to changing requirements while still maintaining good quality across the entire code. Bash is just not designed for that.

[–] Badland9085@lemm.ee 6 points 16 hours ago (2 children)

But not everything needs to scale, at least if you don’t buy into the doctrine that everything has to be designed and written to live forever. If robust, scalable solutions are the nature of your work and nothing else can exist, then yeah, Bash likely has no place in that world. If you need any kind of handling more complicated than just getting an error and doing something else, then Bash is not it.

Just because Bash isn’t designed for something you want to do, doesn’t mean it sucks. It’s just not the right tool. Just because you don’t practice law, doesn’t mean you suck; you just don’t do law. You can say that you suck at law though.

[–] melezhik@programming.dev 4 points 6 hours ago* (last edited 6 hours ago)

Yep. Like I said, "We are talking about use of Bash for simple enough tasks ... where any primitive language or DSL is OK", so Bash does not suck in general, and I myself use it a lot in the proper domains. I just do not use it for tasks / domains whose complexity (in all senses, including but not limited to team work) grows over time ...

[–] tleb@lemmy.ca 7 points 11 hours ago (1 children)

If your company ever has >2 people, it will become a problem.

[–] Badland9085@lemm.ee 1 points 1 hour ago

You’re speaking prophetically there and I simply do not agree with that prophecy.

If you and your team think you need to extend that bash script to do more, stop and consider writing it in some other language. You’ve moved the goalpost, so don’t expect that you can just build on your previous strategy and that it’ll work.

If your “problem” stems from “well your colleagues will not likely be able to read or write bash well enough”, well then just don’t write it in bash.

[–] furrowsofar@beehaw.org 9 points 23 hours ago (1 children)

Just make certain the robustness issues of bash do not have security implications. Variable, shell, and path evaluations can have security issues depending on the situation.

[–] Badland9085@lemm.ee 2 points 22 hours ago (1 children)

Certainly so. The same applies to any languages we choose, no?

[–] furrowsofar@beehaw.org 8 points 21 hours ago* (last edited 20 hours ago) (1 children)

Bash is especially susceptible. Bash was intended to be used only in a secure environment, including all the inputs and data that is processed, and for that matter all the processes on the system containing the bash process in question. Bash and the shell have a large attack surface. This is not true for most other languages. It is also why SUID programs, for example, should never call the shell. Too many escape options.

[–] Badland9085@lemm.ee 1 points 1 hour ago (2 children)

Good point. It’s definitely something to keep in mind. It’s pretty standard procedure to secure your environments and servers wherever arbitrary code can be run, lest they become grounds for malicious actors to use your resources for their own gains.

What would a non-secure environment where you can run Bash look like? A server with an SSH port exposed to the Internet with just password authentication is one I can think of. Are there any others?

[–] furrowsofar@beehaw.org 1 points 28 minutes ago

By the way, I would not consider logging in via ssh and running a bash script to be insecure in general.

However taking uncontrolled data from outside of that session and injecting it could well be insecure as the data is probably crossing an important security boundary.

[–] furrowsofar@beehaw.org 1 points 42 minutes ago* (last edited 38 minutes ago)

I was more thinking of the CGI script vulnerability that showed up a few years ago. In that case, data came from the web into the shell environment uncontrolled. So uncontrolled data processing, where the input data crosses security boundaries, is an issue, kind of like a lot of the SQL injection attacks.
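That was Shellshock (CVE-2014-6271): the web server copied attacker-controlled request data into environment variables, and bash executed function bodies smuggled in through them. The classic probe, harmless on any patched bash:

```shell
#!/usr/bin/env bash
# A vulnerable bash printed "vulnerable" here; a patched one prints only "test":
env x='() { :;}; echo vulnerable' bash -c 'echo test'
```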

Another issue with the shell is that all proccesses on the system typically see all command line arguments. This includes any commands the shell script runs. So never specify things like keys or PII etc as command line arguments.
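On Linux this is easy to demonstrate: any local user can read another process's argv through /proc. Here sleep 2 stands in for a command that was handed a secret as an argument:

```shell
#!/usr/bin/env bash
sleep 2 &                                 # stand-in for: somecmd --key "$SECRET"
pid=$!
# Any local user could do this while the process runs:
tr '\0' ' ' < "/proc/$pid/cmdline"; echo
kill "$pid" 2>/dev/null
```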

Then there is the general robustness issue. Shell scripts are easy to write for a known environment and known inputs, but difficult to make general. So for a fixed environment, with known and controlled inputs that do not cross security boundaries, they are probably fine. Outside of that, probably a big issue.

By the way, I love bash and shell scripts.

[–] zygo_histo_morpheus@programming.dev 18 points 1 day ago (2 children)

One thing that I don't think anyone else has mentioned is data structures. Bash does have arrays and hashmaps at least but I've found that working with them is significantly more awkward than in e.g. python. This is one of several reasons for why bash doesn't scale up well, but sure for small enough scripts it can be fine (if you don't care about windows)
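For contrast, the bash side of that comparison — associative arrays (bash 4+) work, but every lookup and loop needs careful bracket-and-quote syntax:

```shell
#!/usr/bin/env bash
declare -A ages=([alice]=30 [bob]=25)   # hashmaps need an explicit declare -A
ages[carol]=35
for name in "${!ages[@]}"; do           # "${!ages[@]}" is the keys, "${ages[@]}" the values
    echo "$name=${ages[$name]}"
done | sort                             # iteration order is unspecified, so sort for display
```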

[–] syklemil@discuss.tchncs.de 6 points 1 day ago (1 children)

I think I mentioned it, but inverse: the only data type I'm comfortable with in bash is simple string scalars, plus some simple integer handling I suppose. Once I have to think about stuff like "${foo[@]}" and the like, I feel like I should've switched languages already.

Plus I rarely actually want arrays, it's way more likely I want something in the shape of

@dataclass(frozen=True)
class Foo:
    # …

foos: set[Foo] = …
[–] lurklurk@lemmy.world 1 points 6 hours ago (1 children)

I use the same heuristic... if I need a hashmap or more complex math, I need a different language

Also if the script grows beyond 100 lines, I stop and think about what I'm doing. Sometimes it's OK, but it's a warning flag

[–] syklemil@discuss.tchncs.de 2 points 6 hours ago

Yeah agreed on the 100 lines, or some other heuristic in the direction of "this script will likely continue to grow in complexity and I should switch to a language that's better suited to handle that complexity".

[–] Badland9085@lemm.ee 3 points 1 day ago

That’s definitely worth mentioning indeed. Bash variables, aside from arrays and hashmaps that you get with declare, are just strings. Any time you need to start capturing a group of data and do stuff with them, it’s a sign to move on. But there are many many times where that’s unnecessary.

[–] vext01@lemmy.sdf.org 24 points 1 day ago

Honestly, if a script grows to more than a few tens of lines I'm off to a different scripting language because I've written enough shell script to know that it's hard to get right.

Shellcheck is great, but what's greater is a language that doesn't have as many gotchas from the get go.

[–] ShawiniganHandshake@sh.itjust.works 19 points 1 day ago (1 children)

I've worked in bash. I've written tools in bash that ended up having a significant lifetime.

Personally, you lost me at

reading from the database

Database drivers exist for a reason. Shelling out to a database cli interface is full of potential pitfalls that don't exist in any language with a programmatic interface to the database. Dealing with query parameterization in bash sounds un-fun and that's table stakes, security-wise.
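A tiny illustration of the parameterization problem — string splicing is about all a shell script can do, and here is what an attacker-shaped input turns the query into (table name and input are made up):

```shell
#!/usr/bin/env bash
user_input="alice'; DROP TABLE users; --"   # hypothetical hostile input
# Splicing it straight into SQL, as a shell script typically would:
query="SELECT * FROM users WHERE name = '${user_input}'"
echo "$query"   # the input has broken out of the string literal
```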

Same with making web API calls. Error handling in particular is going to require a lot of boilerplate code that you would get mostly for free in languages like Python or Ruby or Go, especially if there's an existing library that wraps the API you want to use in native language constructs.

[–] morbidcactus@lemmy.ca 1 points 21 hours ago (1 children)

I'm fine with bash for ci/cd activities, for what you're talking about I'd maybe use bash to control/schedule running of a script in something like python to query and push to an api but I do totally get using the tools you have available.

I use bash a lot for automation but PowerShell is really nice for tasks like this and has been available in linux for a while. Seen it deployed into production for more or less this task, grabbing data from a sql server table and passing to SharePoint. It's more powerful than a shell language probably needs to be, but it's legitimately one of the nicer products MS has done.

End of the day, use the right tool for the job at hand and be aware of risks. You can totally make web requests from sql server using ole automation procedures, set up a trigger to fire on update and send data to an api from a stored proc, if I recall there's a reason they're disabled by default (it's been a very long time) but you can do it.

[–] Badland9085@lemm.ee 2 points 21 hours ago (2 children)

People have really been singing praises of Powershell huh. I should give that a try some time.

But yeah, we wield tools that each come with their own risks and caveats. None of them are perfect for everything, but some are easier to use in certain situations than others (including writing them and handling their failure modes).

It’s just hard to tell if people’s fear/disdain/disgust/insert-negative-reaction towards bash is rational or more… tribal, and why I decided to ask. It’s hard to shake away the feeling of “this shouldn’t just be me, right?”

[–] some_guy@lemmy.sdf.org 3 points 13 hours ago

The nice thing about Powershell is that it was built relatively recently, after learning all the things that previous shells got wrong. I'm not fluent in it, but as a Bash aficionado, I marveled at how nice it was at a previous job where we used it.

That said, I love Bash and use it for lots of fun automation. I think you're right to appreciate it as you do. I have no opinion on the rest.

[–] morbidcactus@lemmy.ca 2 points 20 hours ago (2 children)

I have to wonder if some of it is comfort or familiarity, I had a negative reaction to python the first time I ever tried it for example, hated the indent syntax for whatever reason.

[–] Badland9085@lemm.ee 2 points 1 hour ago

Creature comfort is a thing. You’re used to it. Familiarity. You know how something behaves when you interact with it. You feel… safe. Fuck that thing that I haven’t ever seen and don’t yet understand. I don’t wanna be there.

People who don’t just soak in that are said to be, maybe, adventurous?

It can also be a “Well, we’ve seen what can work. It ain’t perfect, but it’s pretty good. Now, is there something better we can do?”

[–] lurklurk@lemmy.world 2 points 6 hours ago* (last edited 6 hours ago)

The indent syntax is one of the obviously bad decisions in the design of python so it makes sense

[–] FizzyOrange@programming.dev 22 points 1 day ago (3 children)

I'm afraid your colleagues are completely right and you are wrong, but it sounds like you genuinely are curious so I'll try to answer.

I think the fundamental thing you're forgetting is robustness. Yes Bash is convenient for making something that works once, in the same way that duct tape is convenient for fixes that work for a bit. But for production use you want something reliable and robust that is going to work all the time.

I suspect you just haven't used Bash enough to hit some of the many many footguns. Or maybe when you did hit them you thought "oops I made a mistake", rather than "this is dumb; I wouldn't have had this issue in a proper programming language".

The main footguns are:

  1. Quoting. Trust me you've got this wrong even with shellcheck. I have too. That's not a criticism. It's basically impossible to get quoting completely right in any vaguely complex Bash script.
  2. Error handling. Sure you can set -e, but then that breaks pipelines and conditionals, and you end up with really monstrous pipelines full of pipefail noise. It's also extremely easy to forget set -e.
  3. General robustness. Bash silently does the wrong thing a lot.
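The first footgun in one screenful — an unquoted expansion silently word-splits (and glob-expands), which is exactly the kind of thing shellcheck exists to catch:

```shell
#!/usr/bin/env bash
f="my file.txt"
printf '<%s>\n' $f      # unquoted: word-splits into two arguments
printf '<%s>\n' "$f"    # quoted: stays one argument
```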

instead of import sys; sys.argv[1] in Python, you just do $1

No. If it's missing, $1 will silently become an empty string. sys.argv[1] will throw an IndexError. Much more robust.
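The difference is easy to see (the inner bash -c invocations just simulate a script called without arguments):

```shell
#!/usr/bin/env bash
# Without nounset, a missing positional parameter is silently empty:
bash -c 'echo "arg=[$1]"' _
# With set -u, the same reference is a hard error:
bash -c 'set -u; echo "arg=[$1]"' _ 2>/dev/null || echo "errored, as a missing arg should"
```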

Sure, there can be security vulnerability concerns, but you’d still have to deal with the same problems with your Pythons your Rubies etc.

Absolutely not. Python is strongly typed, and even statically typed if you want. Light years ahead of Bash's mess. Quoting is pretty easy to get right in Python.

I actually started keeping a list of bugs at work that were caused directly by people using Bash. I'll dig it out tomorrow and give you some real world examples.

[–] lurklurk@lemmy.world 1 points 7 hours ago (1 children)

I don't disagree with your point, but how does set -e break conditionals? I use it all the time without issues

Pipefail I don't use as much so perhaps that's the issue?

[–] FizzyOrange@programming.dev 1 points 28 minutes ago (1 children)

It means that all commands that return a non-zero exit code will fail the script. The problem is that exit codes are a bit overloaded and sometimes non-zero values don't indicate failure, they indicate some kind of status. For example in git diff --exit-code or grep.

I think I was actually thinking of pipefail though. If you don't set it then errors in pipelines are ignored, which is obviously bad. If you do then you can't use grep in pipelines.
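A small sketch of that interaction and the usual workarounds:

```shell
#!/usr/bin/env bash
set -e -o pipefail
# grep exits 1 on "no match", so under pipefail an innocent
#   produce | grep pattern | consume
# kills the script whenever nothing matches. Workarounds:
matches=$(printf 'a\nb\n' | grep -c 'z' || true)   # mask the status explicitly
echo "matches=$matches"
if printf 'a\n' | grep -q 'a'; then                # conditions are exempt from set -e
    echo "found"
fi
```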

[–] lurklurk@lemmy.world 1 points 9 minutes ago

My sweet spot is set -ue because I like to be able to use things like if grep -q ...; then and I like things to stop if I misspelled a variable.

It does hide failures in the middle of a pipeline, but it's a tradeoff. I guess one could turn it on and off when needed

[–] Badland9085@lemm.ee 2 points 22 hours ago (1 children)

I honestly don’t care about being right or wrong. Our trade focuses on what works and what doesn’t and what can make things work reliably as we maintain them, if we even need to maintain them. I’m not proposing for bash to replace our web servers. And I certainly am not proposing that we can abandon robustness. What I am suggesting that we think about here, is that when you do not really need that robustness, for something that may perhaps live in your production system outside of user paths, perhaps something that you, your team, and the stakeholders of the particular project understand that the solution is temporary in nature, why would Bash not be sufficient?

I suspect you just haven’t used Bash enough to hit some of the many many footguns.

Wrong assumption. I’ve been writing Bash for 5-6 years now.

Maybe it’s the way I’ve been structuring my code, or the problems I’ve been solving with it, but in the last few years, after using shellcheck and bash-language-server, I’ve not run into issues where I get fucked over by quotes.

But I can assure you that I know when to dip and just use a “proper programming language” while thinking that Bash wouldn’t cut it. You seem to have an image of me just being a “bash glorifier”, and I’m not sure if it’ll convince you (and I would encourage you to read my other replies if you aren’t), but I certainly don’t think bash should be used for everything.

No. If it's missing, $1 will silently become an empty string. sys.argv[1] will throw an IndexError. Much more robust.

You’ll probably hate this, but you can use set -u to catch unassigned variables. You should also use fallbacks wherever sensible.
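For the curious, what set -u buys you (the variable name is chosen to be unset; a ${...:-fallback} expansion still works fine under it):

```shell
#!/usr/bin/env bash
# Referencing an unset variable becomes a hard error instead of "":
bash -c 'set -u; echo "$UNDEFINED_VAR_XYZ"' 2>/dev/null || echo "caught unset variable"
# Explicit fallbacks are still allowed:
bash -c 'set -u; echo "${UNDEFINED_VAR_XYZ:-fallback}"'
```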

Absolutely not. Python is strongly typed, and even statically typed if you want. Light years ahead of Bash's mess. Quoting is pretty easy to get right in Python.

Not a good argument imo. It eliminates a good class of problems sure. But you can’t eliminate their dependence on shared libraries that many commands also use, and that’s what my point was about.

And I’m sure you can find a whole dictionary’s worth of cases where people shoot themselves in the foot with bash. I don’t deny that’s the case. Bash is not a good language where the programmer is guarded from shooting themselves in the foot as much as possible. The guardrails are loose, and it’s the script writer’s job to guard themselves against it. Is that good for an enterprise scenario, where you may either blow something up, drop a database table, lead to the lost of lives or jobs, etc? Absolutely not. Just want to copy some files around and maybe send it to an internal chat for regular reporting? I don’t see why not.

Bash is not your hammer to hit every possible nail out there. That’s not what I’m proposing at all.

[–] FizzyOrange@programming.dev 1 points 19 hours ago (1 children)

And I certainly am not proposing that we can abandon robustness.

If you're proposing Bash, then yes you are.

You’ll probably hate this, but you can use set -u to catch unassigned variables.

I actually didn't know that, thanks for the hint! I am forced to use Bash occasionally due to misguided coworkers so this will help at least.

But you can’t eliminate their dependence on shared libraries that many commands also use, and that’s what my point was about.

Not sure what you mean here?

Just want to copy some files around and maybe send it to an internal chat for regular reporting? I don’t see why not.

Well if it's just for a temporary hack and it doesn't matter if it breaks then it's probably fine. Not really what is implied by "production" though.

Also even in that situation I wouldn't use it for two reasons:

  1. "Temporary small script" tends to smoothly morph into "10k line monstrosity that the entire system depends on" with no chance for rewrites. It's best to start in a language that can cope with it.
  2. It isn't really any nicer to use Bash over something like Deno. Like... I don't know why you ever would, given the choice. When you take bug fixing into account Bash is going to be slower and more painful.
[–] Badland9085@lemm.ee -1 points 13 hours ago (1 children)

I’m going to downvote your comment based on that first quote reply, because I think that’s an extreme take that’s unwarranted. You’ve essentially dissed people who use it for CI/CD and suggested that their pipeline is not robust because of their choice of using Bash at all.

And judging by your second comment, I can see that you have very strong opinions against bash for reasons that I don’t find convincing, other than what seems to me like irrational hatred from being rather uninformed. It’s fine being uninformed, but I suggest you tame your opinions and expectations with that.

About shared libraries: many popular languages, Python being a pretty good example, do rely on these to get performance that would be really hard to get from their own interpreters / compilers, or where re-implementing it in the language would be pretty pointless given the existence of a shared library that is much better scrutinized, audited, and battle-tested. libcrypto is one example. Pandas depends on NumPy, which depends on, I believe, libblas and liblapack, both written in C, and I think one if not both of these offer a CLI to get answers as well. libssh is depended upon by many programming languages with an ssh library (though some people choose to implement their own ssh in their language of choice). Any vulnerabilities found in these shared libraries affect all libraries that depend on them, regardless of the programming language you use.

If production only implies systems in a user’s path and not anything else about production data, then sure, my example is not production. That said though, I wouldn’t use bash for anything that’s in a user’s path. Those need to stay around, possible change frequently, and not go down. Bash is not your language for that and that’s fine. You’re attacking a strawman that you’ve constructed here though.

If your temporary small script morphs into a monster and you’re still using bash, bash isn’t at fault. You and your team are. You’ve all failed to anticipate that change and misunderstood the “temporary” nature of your script, and allowed your “temporary thing” to become permanent. That’s a management issue, not a language choice. You’ve moved that goalpost and failed to change your strategy to hit that goal.

You could use Deno, but then my point stands. You have to write a function to handle the case where an env var isn’t provided, that’s boilerplate. You have to get a library for, say, accessing contents in Azure or AWS, set that up, figure out how that api works, etc, while you could already do that with the awscli and probably already did it to check if you could get what you want. What’s the syntax for mkdir? What’s it for mkdir -p? What about other options? If you already use the terminal frequently, some of these are your basic bread and butter and you know them probably by heart. Unless you start doing that with Deno, you won’t reach the level of familiarity you can get with the shell (whichever shell you use ofc).

And many argue against bash with regards to error handling. You don’t always need something that proper language has. You don’t always need to handle every possible error state differently, assuming you have multiple. Did it fail? Can you tolerate that failure? Yup? Good. No? Can you do something else to get what you want or make it tolerable? Yes? Good. No? Maybe you don’t want to use bash then.

[–] FizzyOrange@programming.dev 1 points 18 minutes ago

You’ve essentially dissed people who use it for CI/CD and suggested that their pipeline is not robust because of their choice of using Bash at all.

Yes, because that is precisely the case. It's not a personal attack, it's just a fact that Bash is not robust.

You're trying to argue that your cardboard bridge is perfectly robust and then getting offended that I don't think you should let people drive over it.

About shared libraries, many popular languages, Python being a pretty good example, do rely on these to get performance that would be really hard to get from their own interpreters / compilers, or if re-implementing it in the language would be pretty pointless given the existence of a shared library, which would be much better scrutinized, is audited, and is battle-tested. libcrypto is one example. Pandas depends on NumPy, which depends on, I believe, libblas and liblapack, both written in C, and I think one if not both of these offer a cli to get answers as well. libssh is depended upon by many programming languages with an ssh library (though there are also people who choose to implement their own libssh in their language of choice). Any vulnerabilities found in these shared libraries would affect all libraries that depend on them, regardless of the programming language you use.

You mean "third party libraries" not "shared libraries". But anyway, so what? I don't see what that has to do with this conversation. Do your Bash scripts not use third party code? You can't do a lot with pure Bash.

If your temporary small script morphs into a monster and you’re still using bash, bash isn’t at fault. You and your team are.

Well that's why I don't use Bash. I'm not blaming it for existing, I'm just saying it's shit so I don't use it.

You could use Deno, but then my point stands. You have to write a function to handle the case where an env var isn’t provided, that’s boilerplate.

Handling errors correctly is slightly more code ("boilerplate") than letting everything break when something unexpected happens. I hope you aren't trying to use that as a reason not to handle errors properly. In any case the extra boilerplate is... Deno.env.get("FOO"). Wow.

What’s the syntax for mkdir? What’s it for mkdir -p? What about other options?

await Deno.mkdir("foo");
await Deno.mkdir("foo", { recursive: true });

What's the syntax for a dictionary in Bash? What about a list of lists of strings?

[–] JamonBear@sh.itjust.works 5 points 1 day ago (3 children)

Agreed.

Also gtfobins is a great resource in addition to shellcheck to try to make secure scripts.

For instance, I stumbled upon a script like this recently:

#!/bin/bash
# ... some stuff ...
tar -caf archive.tar.bz2 "$@"

Quotes are OK, shellcheck is happy, but, according to gtfobins, you can abuse tar, so running the script like this: ./test.sh /dev/null --checkpoint=1 --checkpoint-action=exec=/bin/sh ends up spawning an interactive shell...

So you can add binaries' insanity on top of bash's mess.

[–] syklemil@discuss.tchncs.de 1 points 5 hours ago

Quotes are OK, shellcheck is happy, but, according to gtfobins, you can abuse tar, so running the script like this: ./test.sh /dev/null --checkpoint=1 --checkpoint-action=exec=/bin/sh ends up spawning an interactive shell…

This runs into a part of the unix philosophy about doing one thing and doing it well: Extending programs to have more (absolutely useful) functionality winds up becoming a security risk. The shell is generally geared towards being a collection of shortcuts rather than a normal, predictable but tedious API.

For a script like that you'd generally want to validate that the input is actually what you expect if it needs to handle hostile users, though. It'll likely help the sleepy users too.

[–] lurklurk@lemmy.world 2 points 7 hours ago

I imagine adding -- so it becomes tar -caf archive.tar.bz2 -- "$@" would fix that specific case

But yeah, putting bash in a position where it has more rights than the user providing the input is a really bad idea
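A quick check of that fix (GNU tar; the filename is deliberately option-shaped):

```shell
#!/usr/bin/env bash
cd "$(mktemp -d)"
touch -- '--checkpoint=1'                 # a file whose name looks like a tar option
tar -cf archive.tar -- '--checkpoint=1'   # "--" ends option parsing: it's just a file now
tar -tf archive.tar                       # lists the member instead of executing anything
```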
