This may make some people pull their hair out, but I'd love to hear some arguments. I've had the impression that people really don't like bash, not from here, but just from people I've worked with.
There was a task at work where we wanted something that'll run on a regular basis, and doesn't do anything complex aside from reading from the database and sending the output to some web API. Pretty common these days.
I can't think of a simpler scripting language to use than bash. Here are my reasons:
- Reading from the environment is easy, and so is falling back to some value; just do `${VAR:-fallback}`; no need to write another if-statement to check for nullity. Wanna check if a variable's set to something expected? `if [[ <test goes here> ]]; then <handle>; fi`
- Reading from arguments is also straightforward; instead of an `import os; os.args[1]` in Python, you just do `$1`.
- Sending a file via HTTP as part of an `application/x-www-form-urlencoded` request is super easy with `curl`. In most programming languages, you'd have to manually open the file and read it into bytes before putting it into your request for the http library that you need to import. `curl` already does all that.
- Need to read from a `curl` response and it's JSON? Reach for `jq`.
- Instead of having to set up a connection object/instance to your database, give `sqlite`, `psql`, `duckdb` or whichever cli db client a connection string with your query and be on your way.
- Shipping is… fairly easy? Especially if docker is common in your infrastructure. Pull `Ubuntu` or `debian` or `alpine`, install your dependencies through the package manager, and you're good to go. If you stay within Linux and don't have to deal with differences in bash and core utilities between different OSes (looking at you, macOS), and assuming you tried not to do anything too crazy and bring in necessary dependencies in the form of calling them, it should be fairly portable. (A small sketch pulling these points together follows after the list.)
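To make the bullets above concrete, here's a minimal sketch of that kind of job. Everything in it (the `API_URL`/`DB_URL` variables, the `orders` table, the `.status` field) is a made-up placeholder, not anything from a real system:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Environment with a fallback, plus a positional argument.
api_url="${API_URL:-https://example.invalid/ingest}"
label="${1:-daily}"                       # $1, or "daily" if not given

# One-shot query through the psql CLI; no connection object to set up.
# ${VAR:?} aborts with a message if DB_URL is not set.
psql "${DB_URL:?DB_URL must be set}" -Atc \
  "SELECT count(*) FROM orders WHERE created_at >= now() - interval '1 day'" \
  > /tmp/order_count.txt

# curl reads and URL-encodes the file for an application/x-www-form-urlencoded
# POST, and jq pulls one field out of the JSON response.
curl -sS "$api_url" \
  --data-urlencode "label=${label}" \
  --data-urlencode "count@/tmp/order_count.txt" \
  | jq -r '.status'
```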
Sure, there can be security vulnerability concerns, but you'd still have to deal with the same problems with your Pythons, your Rubies, etc.
For most bash gotchas, `shellcheck` does a great job of warning you about them and telling you how to address those gotchas.
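For example, a hypothetical two-line script with the most common gotcha, an unquoted expansion, is exactly the kind of thing shellcheck catches (the SC2086 warning about word splitting and globbing):

```bash
#!/bin/bash
# copy.sh -- the unquoted $1 splits on spaces and expands globs
cp $1 /backup/     # shellcheck flags this: SC2086 "Double quote to prevent globbing and word splitting"
cp "$1" /backup/   # quoted: a filename with spaces survives intact
```

Running `shellcheck copy.sh` flags the first `cp` and suggests the quoted form.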
There are probably a bunch of other considerations that I can't think of off the top of my head, but I've addressed a bunch before.
So what's the dealeo? What am I missing that may not actually be addressable?
Honestly, if a script grows to more than a few tens of lines I'm off to a different scripting language, because I've written enough shell script to know that it's hard to get right.
Shellcheck is great, but what's greater is a language that doesn't have as many gotchas from the get-go.
We are not talking about use of Bash in dev vs. use of Bash in production. That is, imho, the wrong question, and it skirts around the real problem in software development. We're talking about use of Bash for simple enough tasks where the code is rarely changed (if not written once and thrown away) and where any primitive language or DSL is ok, versus building medium or complex software systems where decomposition, support for complex data structures, unit tests, error handling, concurrency, etc. are a big deal. Bash really sucks there because it does not let you deal with scaling challenges; by scaling I mean rapidly changing a huge code base as requirements change while still maintaining good quality across the entire code. Bash is just not designed for that.
But not everything needs to scale, at least if you don't buy into the doctrine that everything has to be designed and written to live forever. If robust, scalable solutions are the nature of your work and nothing else can exist, then yeah, Bash likely has no place in that world. If you need any kind of handling more complicated than just getting an error and doing something else, then Bash is not it.
Just because Bash isn't designed for something you want to do doesn't mean it sucks. It's just not the right tool. Just because you don't practice law doesn't mean you suck; you just don't do law. You can say that you suck at law, though.
If your company ever has >2 people, it will become a problem.
You're speaking prophetically there, and I simply do not agree with that prophecy.
If you and your team think you need to extend that bash script to do more, stop and consider writing it in some other language. You've moved the goalposts, so don't expect that you can just build on your previous strategy and that it'll work.
If your "problem" stems from "well, your colleagues will not likely be able to read or write bash well enough", well then just don't write it in bash.
Yep. Like I said - "We talk about use of Bash for simple enough tasks … where every primitive language or DSL is ok" - so Bash does not suck in general, and I myself use it a lot in the proper domains. I just do not use it for tasks/domains with complexity (in all senses, including, but not limited to, team work) growing over time…
I've worked in bash. I've written tools in bash that ended up having a significant lifetime.
Personally, you lost me at

> reading from the database
Database drivers exist for a reason. Shelling out to a database cli interface is full of potential pitfalls that don't exist in any language with a programmatic interface to the database. Dealing with query parameterization in bash sounds un-fun, and that's table stakes, security-wise.
Same with making web API calls. Error handling in particular is going to require a lot of boilerplate code that you would get mostly for free in languages like Python or Ruby or Go, especially if there's an existing library that wraps the API you want to use in native language constructs.
This is almost a strawman argument.
You don't have to shell out to a db cli. Most of them will gladly take some SQL and spit out some output. Now, that output might be in some tabular format with pretty borders around it that you have to deal with, if you care about the output within your script; but that's your choice, so deal with it if it's within your comfort zone. If you don't care about the output and just want it in some file, that's pretty straightforward, and it's not too different from any other cli that spits something out where you've redirected that output to a file.
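As an illustration of "take some SQL and spit out some output", here's a hedged sketch with psql; the connection string and the `users` table are hypothetical:

```bash
# -A: unaligned output, -t: rows only (no headers/footers), -F',': comma separator.
# Redirect to a file and you have a crude CSV export, with no driver code involved.
psql "$DB_URL" -At -F',' \
  -c "SELECT id, email FROM users WHERE active" \
  > active_users.csv
```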
I've mentioned in another comment that if you need to accept input and use that for your queries, psql is absolutely not the tool to use. If you can't do it properly in bash and its tools, just don't. That's fine.
With web API calls, same story really; you may not be all that concerned about the response. Calling a webhook? They're designed to be fire-and-forget, where we're fine with losing failed connections. Some APIs don't really follow strict rules with REST, and will gladly include an "ok" as a value in their response to tell you if a request was successful. If knowing that is important to the needs of the program, then, well, there you have it. Otherwise, there are still ways you can get the HTTP code and handle it appropriately. If you need to do anything complex with the contents of the response, then you should probably look elsewhere.
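One of those ways, sketched with a made-up `WEBHOOK_URL`, is curl's `-w '%{http_code}'` write-out:

```bash
# Ignore the body, capture only the HTTP status, and branch on it.
code="$(curl -sS -o /dev/null -w '%{http_code}' -d "text=backup finished" "$WEBHOOK_URL")"
if [[ "$code" != 2* ]]; then
  echo "webhook returned HTTP $code" >&2
  exit 1
fi
```

(`curl --fail` is another option if all you care about is a non-zero exit code on HTTP errors.)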
My entire post is not to say that "you can do everything in bash and you should". My point is that there are many cases where bash seems like a good, sufficient tool to get that simple job done, and it can do it more easily with less boilerplate than, say, Python or Ruby.
One thing that I don't think anyone else has mentioned is data structures. Bash does have arrays and hashmaps at least, but I've found that working with them is significantly more awkward than in e.g. Python. This is one of several reasons why bash doesn't scale up well, but sure, for small enough scripts it can be fine (if you don't care about Windows).
I think I mentioned it, but inverse: the only data type I'm comfortable with in bash is simple string scalars, plus some simple integer handling I suppose. Once I have to think about stuff like `"${foo[@]}"` and the like, I feel like I should've switched languages already.

Plus I rarely actually want arrays; it's way more likely I want something in the shape of

```python
@dataclass(frozen=True)
class Foo:
    # …

foos: set[Foo] = …
```
I use the same heuristic… if I need a hashmap or more complex math, I need a different language.
Also, if the script grows beyond 100 lines, I stop and think about what I'm doing. Sometimes it's OK, but it's a warning flag.
Yeah, agreed on the 100 lines, or some other heuristic in the direction of "this script will likely continue to grow in complexity and I should switch to a language that's better suited to handle that complexity".
That's definitely worth mentioning indeed. Bash variables, aside from the arrays and hashmaps that you get with `declare`, are just strings. Any time you need to start capturing a group of data and do stuff with them, it's a sign to move on. But there are many, many times where that's unnecessary.
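For reference, this is roughly what those `declare` forms look like; the host and port values are made up:

```bash
declare -a hosts=("web1" "web2")            # indexed array
declare -A port=([web1]=8080 [web2]=8081)   # associative array (bash 4+)

for h in "${hosts[@]}"; do                  # the "${var[@]}" quoting is required
  echo "$h listens on ${port[$h]}"
done
```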
Just make certain the robustness issues of bash do not have security implications. Variable, shell, and path evaluations can have security issues depending on the situation.
Certainly so. The same applies to any language we choose, no?
Bash is especially susceptible. Bash was intended to be used only in a secure environment, including all the inputs and data that are processed and, for that matter, all the processes on the system containing the bash process in question. Bash and the shell have a large attack surface. This is not true for most other languages. It is also why SUID programs, for example, should never call the shell. Too many escape options.
Good point. It's definitely something to keep in mind. It's pretty standard procedure to secure your environments and servers wherever arbitrary code can be run, lest they become grounds for malicious actors to use your resources for their own gains.
What could a non-secure environment where you can run Bash be like? A server with an SSH port exposed to the Internet with just password authentication is one I can think of. Are there any others?
By the way, I would not consider logging in via ssh and running a bash script to be insecure in general.
However, taking uncontrolled data from outside of that session and injecting it could well be insecure, as the data is probably crossing an important security boundary.
I was more thinking of the CGI script vulnerability that showed up a few years ago. In that case data came from the web into the shell environment uncontrolled. So uncontrolled data processing where the input data crosses security boundaries is an issue, kind of like a lot of the SQL injection attacks.
Another issue with the shell is that all processes on the system typically see all command line arguments. This includes any commands the shell script runs. So never specify things like keys or PII etc. as command line arguments.
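A hedged sketch of one way around that, assuming curl and a `$TOKEN` that came from somewhere sensible (a tightly-permissioned file, a secrets manager, etc.): pass the sensitive header through curl's config-file input instead of argv.

```bash
# Bad: the expanded token lands in curl's argv, visible to other users via ps:
#   curl -H "Authorization: Bearer $TOKEN" "$API_URL"

# Better: feed the header as a config file on stdin; argv only shows "--config -".
curl --config - "$API_URL" <<EOF
header = "Authorization: Bearer $TOKEN"
EOF
```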
Then there is the general robustness issue. Shell scripts are easy to write to run in a known environment with known inputs; they are difficult to make general. So for a fixed environment and known, controlled inputs that do not cross security boundaries: probably fine. If not that: probably a big issue.
By the way, I love bash and shell scripts.
Over the last ten - fifteen years, I've written lots of scripts for production in bash. They've all served their purposes (after thorough testing) and not failed. Pretty sure one of my oldest (and biggest) is called `temporary_fixes.sh` and is still in use today. Another one (admittedly not in production) was partially responsible for getting me my current job, I guess because the interviewers wanted to see what kind of person would solve a coding challenge in bash.

However, I would generally agree that - while bash is good for many things and perhaps even "good enough" - any moderately complex problem is probably better solved using a different language.
As I've matured in my career, I write more and more bash. It is absolutely appropriate for production in the right scenarios. Just make sure the people who might have to maintain it in the future won't come knocking down your door with torches and pitchforks…
That's my take on the use of bash too. If it's something that people think is worth bringing their pitchforks out for, then it's something you should probably not write in bash.
Well then you guys will love what this guy (by the name "icitry") did with bash: https://www.youtube.com/watch?v=b_WGoPaNPMY
He created a YouTube clone with Bash.
That is definitely not something I would do… for work (totally not implying that I miiiight do it outside of work for shits and giggles :P).
I didn't create this post trying to be like "y'all should just use Bash", nor is it an attempt to say that I like Bash, but I guess that's how people boil others down these days. Fanatics only. Normalcy is dead. (I'm exaggerating ofc)
Basically, if you are crazy enough, you can make anything with any language. Hence me sharing the video.
Dude, pihole is bash.
Run `checkbashisms` over your `$PATH` (grep for `#!/bin/sh`). That's the problem with Bash.
`#!/bin/sh` is for POSIX compliant shell scripts only; use `#!/usr/bin/env bash` if you use bash syntax.
Btw, I quite like yash.
Always welcome a new shell. I've not heard of yash, but I'll check it out.
Any reason to use `#!/usr/bin/env bash` over `#!/bin/bash`?
I personally don't see the point in using the absolute path to a tool to look up the relative path of your shell, because the shell is always /bin/sh but the env binary might not even exist.
Maybe use it with bash; some BSDs or whatever might have it in /usr without having /bin symlinked to /usr/bin.
There are times when doing so does make sense, e.g. if you need the script to be portable. Of course, it's the least of your worries in that scenario. Not all systems have bash being accessible at `/bin` like you said, and some would much prefer that you use the first bash that appears in their `PATH`, e.g. in nix.

But yeah, it's generally pretty safe to assume `/bin/sh` will give you a shell. But there are, apparently, distributions that symlink that to bash, and I've even heard of it being symlinked to dash.

> Not all systems have bash being accessible at `/bin` like you say

Yeah, but my point is, neither do they match `/usr/bin/env`. Bash, ok; but POSIX shell and Python, just leave it away.

> and I've even heard of it being symlinked to dash.

I think Debian and Ubuntu do that (or one of them). And me too on Artix; there's `dash-as-bin-sh` in AUR, a pacman hook that symlinks. Nothing important breaks by doing so.

Leaving it away for Python? Are you mad? Why would you want to use my system Python instead of the one specified in my `PATH`?
What gave you the impression that this was just for development? Bash is widely used in production environments for scripting all over enterprises. The people you work with just don't have much experience at lots of shops, I would think.
It's just not wise to write an entire system in bash. Just simple little tasks to do quick things. Yes, in production. The devops world runs on bash scripts.
> Bash is widely used in production environments for scripting all over enterprises.

But it shouldn't be.

> The people you work with just don't have much experience at lots of shops I would think.

More likely they do have experience of it and have learnt that it's a bad idea.
I've never had that impression, and I know that even large enterprises have Bash scripts essentially supporting a lot of the work of a lot of their employees. But there are also many very loud voices that seem to like screaming that you shouldn't use Bash almost at all.
You can take a look at the other comments to see how some are entirely turned off by even the idea of using bash, and there aren't just a few of them.
This Lemmy thread isn't representative of the real world. I've been a dev for 40 years. You use what works. Bash is a fantastic scripting tool.
I understand that. I have coworkers with about 15-20 years in the industry, and they frown whenever I put a bash script out for, say, a purpose like the one in my example: self-contained, clearly defined boundaries, simple, and not mission critical despite handling production data, typically done in less than 100 lines of bash with generous spacing and comments. So I got curious, since I don't feel like I've ever gotten a satisfactory answer.
Thank you for sharing your opinion!
My #1 rule for the teams I lead is "consistency". So it may fall back to that. The standard where you work is to use a certain way of doing things, so everyone becomes skilled at the same thing.
I have the same rule, but I always let a little bash slide here and there.
I'm fine with bash for CI/CD activities. For what you're talking about, I'd maybe use bash to control/schedule the running of a script in something like Python to query and push to an API, but I do totally get using the tools you have available.
I use bash a lot for automation, but PowerShell is really nice for tasks like this and has been available on Linux for a while. I've seen it deployed into production for more or less this task: grabbing data from a SQL Server table and passing it to SharePoint. It's more powerful than a shell language probably needs to be, but it's legitimately one of the nicer products MS has done.
At the end of the day, use the right tool for the job at hand and be aware of the risks. You can totally make web requests from SQL Server using OLE automation procedures - set up a trigger to fire on update and send data to an API from a stored proc - and if I recall there's a reason they're disabled by default (it's been a very long time), but you can do it.
People have really been singing the praises of PowerShell, huh. I should give that a try some time.
But yeah, we wield tools that each come with their own risks and caveats, and none of them are perfect for everything, but some are easier to use (including writing them and addressing failures) in certain situations than others.
It's just hard to tell if people's fear/disdain/disgust/insert-negative-reaction towards bash is rational or more… tribal, which is why I decided to ask. It's hard to shake away the feeling of "this shouldn't just be me, right?"
The nice thing about PowerShell is that it was built basically now, after learning all the things that previous shells left out. I'm not fluent in it, but as a Bash aficionado, I marveled at how nice it was at a previous job where we used it.
That said, I love Bash and use it for lots of fun automation. I think you're right to appreciate it as you do. I have no opinion on the rest.
I have to wonder if some of it is comfort or familiarity. I had a negative reaction to Python the first time I ever tried it, for example; hated the indent syntax for whatever reason.
Creature comfort is a thing. You're used to it. Familiarity. You know how something behaves when you interact with it. You feel… safe. Fuck that thing that I haven't ever seen and don't yet understand. I don't wanna be there.
People who don't just soak in that are said to be, maybe, adventurous?
It can also be a "Well, we've seen what can work. It ain't perfect, but it's pretty good. Now, is there something better we can do?"
The indent syntax is one of the obviously bad decisions in the design of Python, so it makes sense.
I just don't think bash is good for maintaining the code, debugging, growing the code over time, adding automated tests, or exception handling.
If you need anything that complex, and it's critical for, say, customers, or people doing things directly for customers, you probably shouldn't use bash. Anything that needs to grow? Definitely not bash. I'm not saying bash is what you should use if you want it to grow into, say, a web server, but that it's good enough for small tasks that you don't expect to grow in complexity.
> it's (bash) good enough for small tasks that you don't expect to grow in complexity.

I don't think you'll get a lot of disagreement on that, here. As mentioned elsewhere, my team prefers bash for simple use cases (and as their bash-hating boss, I support and agree with how and when they use bash.)
But a bunch of us draw the line at database access.
Any database is going to throw a lot of weird shit at the bash script.
So, to me, a bash script has grown to unacceptable complexity on the first day that it accesses a database.
We have dozens of bash scripts running table cleanups and maintenance tasks on the db. In the last 20 years these scripts were more stable than the database itself (Oracle -> MySQL -> Postgres).
But in all fairness, they just call the cli client with the appropriate SQL and check the response code, generating a trap.
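For anyone curious what that pattern looks like, here's a hedged sketch; the `sessions` table, the connection string, and the cleanup query are invented for illustration:

```bash
#!/usr/bin/env bash
set -euo pipefail
trap 'echo "cleanup job failed (around line $LINENO)" >&2' ERR

# ON_ERROR_STOP=1 makes psql exit non-zero when a statement fails,
# so set -e and the ERR trap above actually get to react.
psql "$DB_URL" -v ON_ERROR_STOP=1 \
  -c "DELETE FROM sessions WHERE expires_at < now() - interval '30 days'"
```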
That's a great point.
I post long enough responses already, so I didn't want to get into resilience planning, but your example is a great highlight that there are rarely hard and fast rules about what will work.
There certainly are use cases for bash calling database code that make sense.
I don't actually worry much when it's something where the first response to any issue is to run it again in 15 minutes.
It's cases where we might need to do forensic analysis that bash plus SQL has caused me headaches.
Yeah, if it feels like a transaction would be helpful, at least go for PL/SQL and save yourself some pain. Bash is for system maintenance, not for business logic.
Heck, I wrote a whole monitoring system for a telephony switch with nothing more than bash and awk, and it worked better than the shit from the manufacturer, including writing to the ISDN cards for mobile messaging. But I wouldn't do that again if I have an alternative.
> Bash is for system maintenance, not for business logic.

That is such a good guiding principle. I'm gonna borrow that.
> small tasks that you don't expect to grow in complexity

At one conference I heard the saying: "There is no such thing as a temporary solution and there is no such thing as a proof of concept". It's an overexaggeration of course, but it has some truth to it - there's a high chance that your "small change" or PoC will be used for the next 20 years, so write it as robust and resilient as possible and document it. In other words: everything will be extended, everything will be maintained, everything will change hands.
So to your point - is bash production ready? Well, depends. Do you have it in git? Is it part of some automation pipeline? Is it properly documented? Do you by chance have some tests for it? Then yes, it's production ready.
If you just "write this quick script and run it in cron" then no. Because in 10 years people will pull their hair out screaming "what the hell is happening?!"
Edit: or worse, they'll scream it during the next incident that'll happen at 2 AM on a Sunday.
I find it disingenuous to blame it on the choice of bash being bad when goalposts are moved. Solutions can be temporary as long as goalposts aren't being moved. Once the goalpost is moved, you have to re-evaluate whether your solution is still sufficient to meet the new needs. If literally everything under the sun and beyond needs to be written in a robust manner to accommodate moving goalposts, then by that definition nothing will ever be sufficient - unless, well, we've come to a point where a human request in words can immediately be compiled into machine instructions that do exactly what was asked for, without loss of intention.
That said, as engineers, I believe it's our responsibility to identify and highlight severe failure cases for a given solution and its management, and it is up to the stakeholders to accept those risks. If you need something running at 2 AM, and a failure of that process would require human intervention, then maybe you should consider not running it at 2 AM, or pick a language with more guardrails.
"Use the best tool for the job, that the person doing the job is best at." That's my approach.
I will use bash or python or dart or whatever the project uses.
I'm afraid your colleagues are completely right and you are wrong, but it sounds like you genuinely are curious, so I'll try to answer.
I think the fundamental thing you're forgetting is robustness. Yes, Bash is convenient for making something that works once, in the same way that duct tape is convenient for fixes that work for a bit. But for production use you want something reliable and robust that is going to work all the time.
I suspect you just haven't used Bash enough to hit some of the many many footguns. Or maybe when you did hit them you thought "oops, I made a mistake", rather than "this is dumb; I wouldn't have had this issue in a proper programming language".
The main footguns are:
- Quoting. Trust me, you've got this wrong even with `shellcheck`. I have too. That's not a criticism. It's basically impossible to get quoting completely right in any vaguely complex Bash script.
- Error handling. Sure you can `set -e`, but then that breaks pipelines and conditionals, and you end up with really monstrous pipelines full of `pipefail` noise. It's also extremely easy to forget `set -e`.
- General robustness. Bash silently does the wrong thing a lot.
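A few tiny illustrations of those bullets, with made-up file names, in case anyone hasn't been bitten yet:

```bash
files="my report.txt"
wc -l $files                      # quoting: wc is handed "my" and "report.txt" as two arguments

set -e
lines=$(cat missing.txt | wc -l)  # cat fails, but wc prints 0 and exits 0, so the pipeline "succeeds";
echo "got $lines lines"           # set -e never notices unless pipefail is also on

check() { false; echo "carried on anyway"; }
if check; then :; fi              # inside a condition, set -e is suspended, so the `false` is ignored
```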
> instead of a `import os; os.args[1]` in Python, you just do `$1`

No. If it's missing, `$1` will silently become an empty string. `os.args[1]` will throw an error. Much more robust.

> Sure, there can be security vulnerability concerns, but you'd still have to deal with the same problems with your Pythons, your Rubies etc.

Absolutely not. Python is strongly typed, and even statically typed if you want. Light years ahead of Bash's mess. Quoting is pretty easy to get right in Python.
I actually started keeping a list of bugs at work that were caused directly by people using Bash. I'll dig it out tomorrow and give you some real world examples.
Agreed.
Also, gtfobins is a great resource, in addition to shellcheck, for trying to make secure scripts.
For instance, I stumbled upon a script like this recently:

```bash
#!/bin/bash
# ... some stuff ...
tar -caf archive.tar.bz2 "$@"
```

Quotes are OK, shellcheck is happy, but, according to gtfobins, you can abuse tar, so running the script like this:

```bash
./test.sh /dev/null --checkpoint=1 --checkpoint-action=exec=/bin/sh
```

ends up spawning an interactive shell…

So you can add binaries' insanity on top of bash's mess.
> Quotes are OK, shellcheck is happy, but, according to gtfobins, you can abuse tar, so running the script like this: `./test.sh /dev/null --checkpoint=1 --checkpoint-action=exec=/bin/sh` ends up spawning an interactive shell…
This runs into a part of the unix philosophy about doing one thing and doing it well: Extending programs to have more (absolutely useful) functionality winds up becoming a security risk. The shell is generally geared towards being a collection of shortcuts rather than a normal, predictable but tedious API.
For a script like that, you'd generally want to validate that the input is actually what you expect if it needs to handle hostile users, though. It'll likely help the sleepy users too.
I imagine adding `--` so it becomes `tar -caf archive.tar.bz2 -- "$@"` would fix that specific case.

But yeah, putting bash in a position where it has more rights than the user providing the input is a really bad idea.
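Spelled out, and assuming GNU tar's usual end-of-options handling, that patched script would look like this:

```bash
#!/bin/bash
# ... some stuff ...
# "--" ends option parsing, so an argument like --checkpoint-action=exec=/bin/sh
# is treated as a file name (and fails as a missing file) instead of a tar option.
tar -caf archive.tar.bz2 -- "$@"
```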
> gtfobins

Meh, most in that list are just "if it has the SUID bit set, it can be used to break out of your security context".
I don't disagree with your point, but how does `set -e` break conditionals? I use it all the time without issues.

Pipefail I don't use as much, so perhaps that's the issue?
It means that all commands that return a non-zero exit code will fail the script. The problem is that exit codes are a bit overloaded, and sometimes non-zero values don't indicate failure, they indicate some kind of status. For example in `git diff --exit-code` or `grep`.

I think I was actually thinking of `pipefail` though. If you don't set it then errors in pipelines are ignored, which is obviously bad. If you do then you can't use `grep` in pipelines.

My sweet spot is `set -ue`, because I like to be able to use things like `if grep -q ...; then` and I like things to stop if I misspelled a variable.

It does hide failures in the middle of a pipeline, but it's a tradeoff. I guess one could turn it on and off when needed.
I honestly don't care about being right or wrong. Our trade focuses on what works and what doesn't, and on what can make things work reliably as we maintain them, if we even need to maintain them. I'm not proposing for bash to replace our web servers. And I certainly am not proposing that we can abandon robustness. What I am suggesting we think about here is: when you do not really need that robustness, for something that may perhaps live in your production system outside of user paths, perhaps something that you, your team, and the stakeholders of the particular project understand is temporary in nature, why would Bash not be sufficient?
> I suspect you just haven't used Bash enough to hit some of the many many footguns.

Wrong assumption. I've been writing Bash for 5-6 years now.

Maybe it's the way I've been structuring my code, or the problems I've been solving with it, but in the last few years, after adopting `shellcheck` and `bash-language-server`, I've not run into issues where I get fucked over by quotes.

But I can assure you that I know when to dip and just use a "proper programming language" when I think Bash wouldn't cut it. You seem to have an image of me as just being a "bash glorifier", and I'm not sure if this will convince you (and I would encourage you to read my other replies if you aren't), but I certainly don't think bash should be used for everything.
> No. If it's missing, `$1` will silently become an empty string. `os.args[1]` will throw an error. Much more robust.

You'll probably hate this, but you can use `set -u` to catch unassigned variables. You should also use fallbacks wherever sensible.

> Absolutely not. Python is strongly typed, and even statically typed if you want. Light years ahead of Bash's mess. Quoting is pretty easy to get right in Python.

Not a good argument imo. It eliminates a good class of problems, sure. But you can't eliminate their dependence on shared libraries that many commands also use, and that's what my point was about.

And I'm sure you can find a whole dictionary's worth of cases where people shoot themselves in the foot with bash. I don't deny that's the case. Bash is not a language that guards the programmer from shooting themselves in the foot as much as possible. The guardrails are loose, and it's the script writer's job to guard against that. Is that good for an enterprise scenario, where you may blow something up, drop a database table, lead to the loss of lives or jobs, etc.? Absolutely not. Just want to copy some files around and maybe send something to an internal chat for regular reporting? I don't see why not.
Bash is not your hammer to hit every possible nail out there. That's not what I'm proposing at all.
> And I certainly am not proposing that we can abandon robustness.

If you're proposing Bash, then yes you are.

> You'll probably hate this, but you can use set -u to catch unassigned variables.

I actually didn't know that, thanks for the hint! I am forced to use Bash occasionally due to misguided coworkers, so this will help at least.

> But you can't eliminate their dependence on shared libraries that many commands also use, and that's what my point was about.

Not sure what you mean here?

> Just want to copy some files around and maybe send it to an internal chat for regular reporting? I don't see why not.

Well, if it's just for a temporary hack and it doesn't matter if it breaks then it's probably fine. Not really what is implied by "production" though.

Also, even in that situation I wouldn't use it, for two reasons:

- "Temporary small script" tends to smoothly morph into "10k line monstrosity that the entire system depends on" with no chance for rewrites. It's best to start in a language that can cope with it.
- It isn't really any nicer to use Bash over something like Deno. Like… I don't know why you ever would, given the choice. When you take bug fixing into account, Bash is going to be slower and more painful.
I'm going to downvote your comment based on that first quote reply, because I think that's an extreme take that's unwarranted. You've essentially dissed people who use it for CI/CD and suggested that their pipeline is not robust because of their choice of using Bash at all.

And judging by your second comment, I can see that you have very strong opinions against bash, for reasons that I don't find convincing, other than what seems to me like irrational hatred from being rather uninformed. It's fine being uninformed, but I suggest you tame your opinions and expectations with that.

About shared libraries: many popular languages, Python being a pretty good example, do rely on these to get performance that would be really hard to get from their own interpreters/compilers, or where re-implementing it in the language would be pretty pointless given the existence of a shared library that is much better scrutinized, audited, and battle-tested. libcrypto is one example. Pandas depends on NumPy, which depends on, I believe, libblas and liblapack, both written in C, and I think one if not both of these offer a cli to get answers as well. libssh is depended upon by many programming languages with an ssh library (though there are also people who choose to implement their own libssh in their language of choice). Any vulnerabilities found in these shared libraries would affect all libraries that depend on them, regardless of the programming language you use.

If production only implies systems in a user's path and not anything else about production data, then sure, my example is not production. That said, I wouldn't use bash for anything that's in a user's path. Those need to stay around, possibly change frequently, and not go down. Bash is not your language for that, and that's fine. You're attacking a strawman that you've constructed here, though.

If your temporary small script morphs into a monster and you're still using bash, bash isn't at fault. You and your team are. You've all failed to anticipate that change, misunderstood the "temporary" nature of your script, and allowed your "temporary thing" to become permanent. That's a management issue, not a language choice. You've moved that goalpost and failed to change your strategy to hit that goal.

You could use Deno, but then my point stands. You have to write a function to handle the case where an env var isn't provided; that's boilerplate. You have to get a library for, say, accessing contents in Azure or AWS, set that up, figure out how that API works, etc., while you could already do that with the awscli and probably already did it to check if you could get what you want. What's the syntax for `mkdir`? What's it for `mkdir -p`? What about other options? If you already use the terminal frequently, some of these are your basic bread and butter and you probably know them by heart. Unless you start doing that with Deno, you won't reach the level of familiarity you can get with the shell (whichever shell you use ofc).

And many argue against bash with regards to error handling. You don't always need everything that a proper language has. You don't always need to handle every possible error state differently, assuming you have multiple. Did it fail? Can you tolerate that failure? Yup? Good. No? Can you do something else to get what you want or make it tolerable? Yes? Good. No? Maybe you don't want to use bash then.
> You've essentially dissed people who use it for CI/CD and suggested that their pipeline is not robust because of their choice of using Bash at all.

Yes, because that is precisely the case. It's not a personal attack, it's just a fact that Bash is not robust.

You're trying to argue that your cardboard bridge is perfectly robust and then getting offended that I don't think you should let people drive over it.

> About shared libraries, many popular languages, Python being a pretty good example, do rely on these to get performance that would be really hard to get from their own interpreters/compilers, or where re-implementing it in the language would be pretty pointless given the existence of a shared library that is much better scrutinized, audited, and battle-tested. libcrypto is one example. Pandas depends on NumPy, which depends on, I believe, libblas and liblapack, both written in C, and I think one if not both of these offer a cli to get answers as well. libssh is depended upon by many programming languages with an ssh library (though there are also people who choose to implement their own libssh in their language of choice). Any vulnerabilities found in these shared libraries would affect all libraries that depend on them, regardless of the programming language you use.

You mean "third party libraries", not "shared libraries". But anyway, so what? I don't see what that has to do with this conversation. Do your Bash scripts not use third party code? You can't do a lot with pure Bash.

> If your temporary small script morphs into a monster and you're still using bash, bash isn't at fault. You and your team are.

Well, that's why I don't use Bash. I'm not blaming it for existing, I'm just saying it's shit so I don't use it.

> You could use Deno, but then my point stands. You have to write a function to handle the case where an env var isn't provided; that's boilerplate.

Handling errors correctly is slightly more code ("boilerplate") than letting everything break when something unexpected happens. I hope you aren't trying to use that as a reason not to handle errors properly. In any case the extra boilerplate is… `Deno.env.get("FOO")`. Wow.

> What's the syntax for mkdir? What's it for mkdir -p? What about other options?

```ts
await Deno.mkdir("foo");
await Deno.mkdir("foo", { recursive: true });
```

What's the syntax for a dictionary in Bash? What about a list of lists of strings?