output | input | instruction
---|---|---
I managed to solve this issue with the instructions in this topic: https://www.raspberrypi.org/forums/viewtopic.php?t=245931

This is due to a Raspberry Pi 4 USB 3.0 UAS driver issue, which made my external SSD connection intermittent. After adding a line to cmdline.txt to ignore the UAS interface, my SSD works flawlessly, and so does bcache.

Basically, you need to find the VID and PID of your external USB 3.0 SSD/enclosure:

lsusb

Then I had to edit cmdline.txt and add the following at the end, where aaaa is the VID and bbbb is the PID:

usb-storage.quirks=aaaa:bbbb:u

Then reboot the Pi. After the reboot my SSD is stable and I no longer see any errors regarding the UAS interface in my kern.log.

Other than that, the bcache setup described is working flawlessly with the Raspberry Pi 4. I use Ubuntu for testing.
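For illustration, here is the whole sequence end to end, with a hypothetical JMicron-based enclosure (your VID:PID will differ):

lsusb
# hypothetical output line:
# Bus 002 Device 002: ID 152d:0578 JMicron Technology Corp.

# appended to the existing single line of cmdline.txt:
usb-storage.quirks=152d:0578:u

Note that cmdline.txt is conventionally a single line, so the parameter is appended to that line rather than placed on a new one. After rebooting, dmesg should show the drive handled by the usb-storage driver rather than uas.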
|
I was testing bcache on a Raspberry Pi 4 with Ubuntu. The reason I chose Ubuntu is that I found standard Raspbian has some issues with bcache, as the kernel module is not properly loaded. I tried to troubleshoot a bit, but then I moved to Ubuntu and it worked straight away.
My setup is like this.
1 x 1TB HGST 5400RPM 2.5 laptop hard disk
1 x 256GB WD Green 2.5 SSD
1 x Raspberry Pi 4 4GB model with a large heat-sink for cooling and 4A power.

I hooked up both the HDD and the SSD to the Raspberry Pi (both externally powered) using the USB 3.0 ports and booted into Ubuntu. First I tested for under-voltage errors and found everything normal.
SSD -> /dev/sda
HDD -> /dev/sdb

Then I created one partition on each drive and created the bcache as follows:
make-bcache -B /dev/sdb1
make-bcache -C /dev/sda1

Then I mounted /dev/bcache0 on /datastore, and attached the cache device as follows:

echo MYUUID > /sys/block/bcache0/bcache/attach

Then I enabled writeback caching:

echo writeback > /sys/block/bcache0/bcache/cache_mode

Then I installed a vsftpd server, made the root FTP directory my bcache0 mount point, and started testing. In the first few tests I could upload files at 113 MBps, and I noticed most of the files were written directly to the backing device even though the cache was attached.
When I checked the status using the bcache-status script (https://gist.github.com/damoxc/6267899), I saw that most writes miss the cache and go directly to the backing device, so the 113 MBps is coming straight from the mechanical hard drive :-O ?

Then I started to fine-tune, as suggested in the "Troubleshooting performance" part of this document: https://www.kernel.org/doc/Documentation/bcache.txt

First I set sequential_cutoff to zero by executing this command:

echo 0 > /sys/block/bcache0/bcache/sequential_cutoff

After this I could instantly see the SSD cache hits increase. At the same time I was running iostat continuously, and I could see from iostat that the SSD was being accessed directly. But after a few minutes my FileZilla client hung and I could not restart the FTP upload stream. And when I tried to access the bcache0 mount, it was really slow; the cache status was showing as "dirty".

Then I restarted the Pi, attached the device again, and set the settings below:
echo 0 > /sys/fs/bcache/MYUUID/congested_read_threshold_us
echo 0 > /sys/fs/bcache/MYUUID/congested_write_threshold_us

According to the https://www.kernel.org/doc/Documentation/bcache.txt document, this stops bcache from tracking backing-device latency. But even with this option, my FTP upload stream kept crashing. Then I set everything back to default; still, with a large number of file uploads, it crashes.

I also noticed during the tests that the Pi's CPU is not fully utilized.

The maximum throughput I can get using the Pi 4's 1 Gbps Ethernet is 930 Mbps, which is extremely good. The HGST drive, when I tested it with CrystalDiskMark on NTFS, was able to write up to 90 MBps. It seems I can get 113 MBps on the Pi since the file system is ext4.

If I can get more than 80 MBps FTP upload speed, I'm OK with that. My questions are:

Why does the FTP stream keep crashing when used with bcache, and why does the bcache mount get slow over time?

Why is there very low cache usage even with sequential_cutoff set to 0?

Has anyone tested bcache with a Raspberry Pi 4 before? If yes, how can I use the SSD for caching properly?

And finally, can someone explain more about how bcache actually works when it is in writeback mode? I only use this for archival data, and I don't need a setup where hot data is accessed from the SSD.
| Testing bcache with raspberry pi 4 on ubuntu |
I managed to do it:
# echo 1 > /sys/block/md127/bcache/stop; echo 1 > /sys/block/md127/bcache/detach; sleep 2; echo /dev/md127 > /sys/fs/bcache/register
# echo 32dbab45-f8b3-4ef8-89e2-32007fc4970b > /sys/block/md127/bcache/attach
# ll /dev/bcache0
brw-rw----. 1 root disk 252, 0 Jan 5 12:56 /dev/bcache0
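To sanity-check the result, the bcache sysfs nodes can be inspected (a sketch using the same device and UUID as above):

ls /sys/fs/bcache/                  # the cache set UUID should be listed
cat /sys/block/md127/bcache/state   # "clean" or "dirty" once attached, "no cache" otherwise
ls -l /dev/bcache0                  # the device node should exist again
|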
In my new Fedora 31 install there is no /dev/bcache0. The caching device is RAID 1:
# ls -d /dev/b*
/dev/block /dev/bsg /dev/btrfs-control /dev/bus

# bcache-super-show /dev/md127
sb.magic ok
sb.first_sector 8 [match]
sb.csum CDCAF0DD6B68FD24 [match]
sb.version 1 [backing device]

dev.label (empty)
dev.uuid b17ceaac-27ec-44d8-8bbb-235cfaa0c4a4
dev.sectors_per_block 1
dev.sectors_per_bucket 1024
dev.data.first_sector 16
dev.data.cache_mode 1 [writeback]
dev.data.cache_state 1 [clean]

cset.uuid de075a7c-af4e-43e9-b229-804322e3d263

# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223.6G 0 disk
├─sda1 8:1 0 700M 0 part /boot
├─sda2 8:2 0 700M 0 part /boot/efi
├─sda3 8:3 0 26G 0 part
│ └─luks-9793c78f-723c-4218-865f-83dbc4659192 253:1 0 26G 0 crypt [SWAP]
└─sda4 8:4 0 162G 0 part
└─luks-569b1153-2fab-4984-b1b6-c4a02ee206ef 253:0 0 162G 0 crypt /
sdb 8:16 0 111.8G 0 disk
├─sdb1 8:17 0 40G 0 part
└─sdb2 8:18 0 71.8G 0 part
sdc 8:32 0 1.8T 0 disk
└─sdc1 8:33 0 1.8T 0 part
└─md127 9:127 0 1.8T 0 raid1
sdd 8:48 0 1.8T 0 disk
└─sdd1 8:49 0 1.8T 0 part
└─md127 9:127 0 1.8T 0 raid1
sde 8:64 1 58.9G 0 disk
├─sde1 8:65 1 20G 0 part
└─sde2 8:66 1 38.9G 0 part
sr0 11:0 1 1024M 0 rom

# blkid | grep -E "md127|sdb1"
/dev/sdb1: UUID="057a2f23-c7b1-4264-a534-183ef9cad53b" TYPE="bcache" PARTLABEL="Linux filesystem" PARTUUID="505789f1-0523-4c62-bdb1-81bc0cc7bff1"
/dev/md127: UUID="b17ceaac-27ec-44d8-8bbb-235cfaa0c4a4" TYPE="bcache"

Does the above output mean that the device is OK and not corrupted?
Right after this new install I had it working as before in the previous Fedora 30 install. Then I resized the caching device partition (sdb1) and now I can't make the backing device to appear as a bcache device.
I can't register it:
# echo /dev/md127 > /sys/fs/bcache/register
-bash: echo: write error: Invalid argument

# echo de075a7c-af4e-43e9-b229-804322e3d263 > /sys/fs/bcache/register
-bash: echo: write error: Invalid argument

Neither attach:
# echo de075a7c-af4e-43e9-b229-804322e3d263 > /sys/block/md127/bcache/attach
-bash: echo: write error: No such file or directory

That file exists:
# ll /sys/block/md127/bcache/attach
--w-------. 1 root root 4096 Jan 5 10:37 /sys/block/md127/bcache/attach

What am I missing?
| Missing bcache0 backing device |
Yes, in theory, because the new device will have the same UUID as the old one if you perform a block-level copy.
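A minimal sketch of such a block-level copy (device names are placeholders; detach or flush the cache first so no dirty data is in flight, and the new disk must be at least as large as the old one):

dd if=/dev/OLD_DISK of=/dev/NEW_DISK bs=4M conv=fsync status=progress

Because the copy includes the bcache superblock at the start of the device, the backing-device UUID travels with the data, so the cache set should accept the new disk as if nothing changed. status=progress assumes GNU dd.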
|
Suppose I want to replace the crusty old HDD I'm using as a backing device with a new one. Can I transfer my data to the new device without having to set up an entirely new bcache?
| Replace Bcache Backing Device |
/bin/sh is hardly ever a Bourne shell on any system nowadays (even Solaris, which was one of the last major systems to include it, has now switched to a POSIX sh for its /bin/sh in Solaris 11). /bin/sh was the Thompson shell in the early 70s. The Bourne shell replaced it in Unix V7 in 1979.
/bin/sh has been the Bourne shell for many years thereafter (or the Almquist shell, a free reimplementation on BSDs).
Nowadays, /bin/sh is more commonly an interpreter or another for the POSIX sh language which is itself based on a subset of the language of ksh88 (and a superset of the Bourne shell language with some incompatibilities).
Neither the Bourne shell nor the POSIX sh language specification supports arrays. Or rather, they have only one array: the positional parameters ($1, $2, $@, so one array per function as well).
ksh88 did have arrays which you set with set -A, but that didn't get specified in the POSIX sh as the syntax is awkward and not very usable.
Other shells with array/lists variables include: csh/tcsh, rc, es, bash (which mostly copied the ksh syntax the ksh93 way), yash, zsh, fish each with a different syntax (rc the shell of the once to-be successor of Unix, fish and zsh being the most consistent ones)...
In standard sh (also works in modern versions of the Bourne shell):
set '1st element' 2 3  # setting the array
set -- "$@" more       # adding elements to the end of the array
shift 2                # removing elements (here 2) from the beginning of the array
printf '<%s>\n' "$@"   # passing all the elements of the $@ array
                       # as arguments to a command
for i do               # looping over the elements of the $@ array ($1, $2...)
  printf 'Looping over "%s"\n' "$i"
done
printf '%s\n' "$1"     # accessing an individual element of the array.
                       # Up to the 9th only with the Bourne shell though,
                       # and note that you need the braces (as in "${10}")
                       # past the 9th in other shells (except zsh, when not
                       # in sh emulation, and most ash-based shells).
printf '%s\n' "$# elements in the array"
printf '%s\n' "$*"     # join the elements of the array with the
                       # first character (byte in some implementations)
                       # of $IFS (not in the Bourne shell where it's on
                       # space instead regardless of the value of $IFS)

(Note that in the Bourne shell and ksh88, $IFS must contain the space character for "$@" to work properly (a bug), and in the Bourne shell, you can't access elements above $9 (${10} won't work, but you can still do shift 1; echo "$9" or loop over them).)
|
I am trying to use arrays in Bourne shell (/bin/sh). I found that the way to initialize array elements is:
arr=(1 2 3)

But it is encountering an error:

syntax error at line 8: `arr=' unexpected

Now, the post where I found this syntax says it is for bash, but I could not find any separate syntax for the Bourne shell. Is the syntax the same for /bin/sh as well?
| Arrays in Unix Bourne Shell |
First of all, let's decouple the read from the text line by using a variable:
text="line-1 line-2" ### Just an example.
read -p "$text" REPLYIn this way the problem becomes: How to assign two lines to a variable.
Of course, a first attempt to do that, is:
a="line-1 \
line-2"Written as that, the var a actually gets the value line-1 line-2.
But you do not like the lack of indentation that this creates, well, then we may try to read the lines into the var from a here-doc (be aware that the indented lines inside the here-doc need a tab, not spaces, to work correctly):
a="$(cat <<-_set_a_variable_
line-1
line-2
_set_a_variable_
)"
echo "test1 <$a>"But that would fail as actually two lines are written to $a.
A workaround to get only one line might be:
a="$( echo $(cat <<-_set_a_variable_
line 1
line 2
_set_a_variable_
) )"
echo "test2 <$a>"That is close, but creates other additional issues.
Correct solution.
All the attempts above will just make this problem more complex than it needs to be.
A very basic and simple approach is:
a="line-1"
a="$a line-2"
read -p "$a" REPLYThe code for your specific example is (for any shell whose read supports -p):
#!/bin/dash
a="goat can try change directory if cd fails to do so."
a="$a Would you like to add this feature? [Y|n] "
# absolute freedom to indent as you see fit.
read -p "$a" REPLYFor all the other shells, use:
#!/bin/dash
a="goat can try change directory if cd fails to do so."
a="$a Would you like to add this feature? [Y|n] "
# absolute freedom to indent as you see fit.
printf '%s' "$a"; read REPLY |
I am writing an installation script that will be run as /bin/sh.
There is a line prompting for a file:
read -p "goat may try to change directory if cd fails to do so. Would you like to add this feature? [Y|n] " REPLYI would like to break this long line into many lines so that none of them exceed 80 characters. I'm talking about the lines within the source code of the script; not about the lines that are to be actually printed on the screen when the script is executed!
What I've tried:

First approach:
read -p "oat may try to change directory if cd fails to do so. " \
"Would you like to add this feature? [Y|n] " REPLYThis doesn't work since it doesn't print Would you like to add this feature? [Y|n].
Second approach:
echo "oat may try to change directory if cd fails to do so. " \
"Would you like to add this feature? [Y|n] "
read REPLY

Doesn't work either. It prints a newline after the prompt. Adding the -n option to echo doesn't help: it just prints:
-n goat oat may try to change directory if cd fails to do so. Would you like to add this feature? [Y|n]
# empty line here

My current workaround is:
printf '%s %s ' \
"oat may try to change directory if cd fails to do so." \
"Would you like to add this feature? [Y|n] "
read REPLY

and I wonder if there is a better way.
Remember that I am looking for a /bin/sh compatible solution.
| How to break a long string into multiple lines in the prompt of read -p within the source code? |
I have found useful information in the Shellcheck.net wiki, which I quote:

Bash¹:

for ((init; test; next)); do foo; done

POSIX:
: "$((init))"
while [ "$((test))" -ne 0 ]; do foo; : "$((next))"; donethough beware that i++ is not POSIX so would have to be translated, for instance to i += 1 or i = i + 1.
: is a null command that always has a successful exit code. "$((expression))" is an arithmetic expansion that is being passed as an argument to :. You can assign to variables or do arithmetic/comparisons in the arithmetic expansion.So the above script in the question can be POSIX-wise re-written using those rules like this:
#!/bin/sh
: "$((i=1))"
while [ "$((i != 10))" -ne 0 ]
do
echo "$i"
: "$((i = i + 1))"
done

Though here, you can make it more legible with:
#!/bin/sh
i=1
while [ "$i" -ne 10 ]
do
echo "$i"
i=$((i + 1))
done

as in init, we're assigning a constant value, so we don't need to evaluate an arithmetic expression. The i != 10 in test can easily be translated to a [ expression, and for next, using a shell variable assignment as opposed to a variable assignment inside an arithmetic expression lets us get rid of : and the need for quoting.

Beside i++ -> i = i + 1, there are more translations of ksh/bash-specific constructs that are not POSIX that you might have to do:

i=1, j=2. The , arithmetic operator is not really POSIX (and conflicts with the decimal separator in some locales with ksh93). You could replace it with another operator like + as in : "$(((i=1) + (j=2)))" but using i=1 j=2 would be a lot more legible.

a[0]=1: no arrays in POSIX shells.

i = 2**20: no power operator in POSIX shell syntax. << is supported though, so for powers of two, one can use i = 1 << 20. For other powers, one can resort to bc: i=$(echo "3 ^ 20" | bc)

i = RANDOM % 3: not POSIX. The closest in the POSIX toolchest is i=$(awk 'BEGIN{srand(); print int(rand() * 3)}').

¹ technically, that syntax is from the ksh93 shell and is also available in zsh in addition to bash
|
I know how to create an arithmetic for loop in bash.
How can one do an equivalent loop in a POSIX shell script?
As there are various ways of achieving the same goal, feel free to add your own answer and elaborate a little on how it works.
An example of one such bash loop follows:
#!/bin/bash
for (( i=1; i != 10; i++ ))
do
echo "$i"
done
| How can I create an arithmetic loop in a POSIX shell script? |
You can use case ... esac
$ cat in.sh
#!/bin/bash

case "$1" in
"cat"|"dog"|"mouse")
echo "dollar 1 is either a cat or a dog or a mouse"
;;
*)
echo "none of the above"
;;
esac

Ex.
$ ./in.sh dog
dollar 1 is either a cat or a dog or a mouse
$ ./in.sh hamster
none of the above

With ksh, bash -O extglob or zsh -o kshglob, you could also use an extended glob pattern:
if [[ "$1" = @(cat|dog|mouse) ]]; then
echo "dollar 1 is either a cat or a dog or a mouse"
else
echo "none of the above"
fi

With bash, ksh93 or zsh, you could also use a regular expression comparison:
if [[ "$1" =~ ^(cat|dog|mouse)$ ]]; then
echo "dollar 1 is either a cat or a dog or a mouse"
else
echo "none of the above"
fi
|
I’m looking for an “in” operator that works something like this:
if [ "$1" in ("cat","dog","mouse") ]; then
echo "dollar 1 is either a cat or a dog or a mouse"
fi

It's obviously a much shorter statement compared to, say, using several "or" tests.
| Is there an "in" operator in bash/bourne? |
The ^ character as a synonym of | dates back to the Thompson shell. They were introduced at the same time in Unix v4 and are mentioned together in the man page. Sven Mascheck mentions that ^ was “probably [introduced] for reasons of convenience on early upper-case-only terminals” where typing | was “somewhat of a pain”.
The Thompson shell is long gone, but its successor the Bourne shell retained the same syntax (even though its man page only mentions |).
Successor shells such as ash, bash and ksh only understand | as the pipe character. You aren't going to find an actual Bourne shell on open source unix variants since for a long time there was no open source release of the Bourne shell. (I think OpenSolaris included one, but it wasn't adopted elsewhere as by that time it was long obsoleted by newer implementations).
The Single Unix specification does not mention ^ as a special character, which effectively means that POSIX shells should interpret it literally¹. I don't think there's ever been a fully POSIX-compliant variant of the Bourne shell (only independent implementations).
^ is special in zsh when the option extendedglob is enabled, but not in its sh compatibility mode. In its default mode, it deviates from POSIX in many ways.
I recommend quoting ^ in a regular expression anyway for clarity. Quote the regular expression in a script regardless of what characters appear in it.
¹ Except as the first character of a bracket expression in a wildcard pattern, where ! is the standard negation character but implementations may also interpret ^ in the same way.
|
I wrote a small script today which contained
grep -q ^local0 /etc/syslog.conf

During review, a coworker suggested that ^local0 be quoted because ^ means "pipe" in the Bourne shell. Surprised by this claim, I tried to track down any reference that mentioned this. Nothing I found on the internet suggested this was a problem.
However, it turns out that the implementation of bsh (which claims to be the Bourne shell) on AIX 7 actually has this behaviour:
> bsh
$ ls ^ wc
23 23 183
$ ls | wc
23 23 183

None of the other "Bourne shell" implementations I tried behave this way (that is, ^ is not considered a shell metacharacter at all). I tried sh on CentOS (which is really bash), and sh on FreeBSD (which is not bash). I don't have many other systems to try.
Is this behaviour expected? Which shells consider ^ to be a pipe metacharacter?
| Use of ^ as a shell metacharacter |
This is definitely POSIX syntax. Paraphrasing:

Using ${parameter-word}, if parameter is
set and not null, then substitute the value of parameter,
set but null, then substitute null, and
unset, then substitute word.

Example session:
$ echo "${parameter-word}"
word
$ parameter=
$ echo "${parameter-word}"$ parameter=value
$ echo "${parameter-word}"
value"Null" here simply means the empty string. There is no special null value in POSIX shells, in contrast to SQL, for example.
This is also documented in the "Parameter Expansion" section of man bash.
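One place where the two spellings differ in practice is under set -u (nounset), which many scripts enable; a minimal sketch:

set -u
unset PS1
[ "${PS1-}" ] && echo interactive   # fine: expands to the empty string
[ "$PS1" ] && echo interactive      # error: PS1: unbound variable

Without set -u, the two are equivalent: for an unset or empty PS1 both expand to the empty string, so the test fails either way.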
|
I am looking at a script that has:
if [ "${PS1-}" ]; thenThat trailing - bugs me a bit because it doesn't seem to Posix or Bash standard syntax. It this some arcane syntax that has been around forever, or is it a typo? Any references to standards / docs would be appreciated.
Normally I would code it:
if [ "$PS1" ]; thenWhich is more correct or is there a difference between them?
| Is "${PS1-}" valid syntax and how does it differ from plain "$PS1"? |
When GNU grep tries to write its result, it will fail with a non-zero exit status because it has nowhere to write the output once the SSH connection is gone.
This means that the if statement is always taking the else branch.
To illustrate this (this is not exactly what's happening in your case, but it shows what happens if GNU grep is unable to write its output):
$ echo 'hello' | grep hello >&- 2>&-
$ echo $?
2

Here we grep for the string that echo produces, but we close both output streams for grep so that it can't write anywhere. As you can see, the exit status of GNU grep is 2 rather than 0.
This is particular to GNU grep, grep on BSD systems won't behave the same:
$ echo 'hello' | grep hello >&- 2>&- # using BSD grep here
$ echo $?
0

To remedy this, make sure that the script does not generate output. You can do this with exec >/dev/null 2>&1. Also, we should be using grep with its -q option since we're not at all interested in seeing the output from it (this would generally also speed up the grep as it does not need to parse the whole file, but in this case it makes very little difference in speed since the file is so small).
In short:
#!/bin/sh

# redirect all output not redirected elsewhere to /dev/null by default:
exec >/dev/null 2>&1

while true; do
    date >sdown.txt
    ping -c 1 -W 1 myserver.net >pingop.txt
    if ! grep -q "64 bytes" pingop.txt; then
        mutt -s "Server Down!" [emailprotected] <sdown.txt
        break
    fi
    sleep 10
done

You may also use a test on ping directly, removing the need for one of the intermediate files (and also getting rid of the other intermediate file that really only ever contains a datestamp):
#!/bin/sh

exec >/dev/null 2>&1

while true; do
    if ! ping -q -c 1 -W 1 myserver.net; then
        date | mutt -s "Server Down!" [emailprotected]
        break
    fi
    sleep 10
done

In both variations of the script above, I chose to exit the loop upon failure to reach the host, just to minimise the number of emails sent. You could instead replace the break with e.g. sleep 10m or something if you expect the server to eventually come up again.
I've also slightly tweaked the options used with ping as -i 1 does not make much sense with -c 1.
Shorter (unless you want it to continue sending emails when the host is unreachable):
#!/bin/sh

exec >/dev/null 2>&1

while ping -q -c 1 -W 1 myserver.net; do
    sleep 10
done

date | mutt -s "Server Down!" [emailprotected]

As a cron job running every minute (this would continue sending emails every minute while the server is down):
* * * * * ping -q -c 1 -W 1 myserver.net >/dev/null 2>&1 || ( date | mail -s "Server down" [emailprotected] )
|
I'm trying to write a shell script that keeps testing my server and emails me when it goes down.

The problem is that when I log out from the SSH connection, despite running the script with & at the end of the command, like ./stest01.sh &, it always falls into the else branch and keeps mailing me uninterruptedly until I log in again and kill it.
#!/bin/bash
while true; do
date > sdown.txt ;
cp /dev/null pingop.txt ;
ping -i 1 -c 1 -W 1 myserver.net > pingop.txt &
sleep 1 ;
if
grep "64 bytes" pingop.txt ;
then
:
else
mutt -s "Server Down!" [emailprotected] < sdown.txt ;
sleep 10 ;
fi
done
| Trying to write a shell script that keeps testing a server remotely, but it keeps falling in else statement when I logout |
No, but with some tools it's not hard to test whether a regex compiles or not.
For example, with grep: echo | grep -P '[' - the exit code, $?, will be 2, indicating an error occurred (and for this example, grep will print "grep: missing terminating ] for character class" to stderr - you can redirect stderr to /dev/null if you only want the exit code).
An exit code of 1 indicates that the regex compiled OK but didn't match the input.
These exit codes are specific to GNU grep. Other tools, if they even have such a capability, will probably have different exit codes, and different ways of indicating specific kinds of errors.
Note that this is not even remotely close to telling you whether a regex will correctly match what you want it to (and not match what you don't want it to).
In short, try it and test the exit code. And know your tools.
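Wrapped up as a small function (a sketch that relies on GNU grep's exit codes as described above):

is_valid_pcre() {
    # status 2 means the pattern failed to compile;
    # 0 or 1 mean it compiled (and matched / didn't match the empty input)
    printf '' | grep -P -- "$1" >/dev/null 2>&1
    [ "$?" -lt 2 ]
}

is_valid_pcre '[a-z]+' && echo "ok"         # prints: ok
is_valid_pcre '['      || echo "bad regex"  # prints: bad regex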
|
I am on a closed network (i.e. no connectivity to the internet).
I have a Bourne shell script that asks the user to enter a regular expression for use with grep -P.
Generally speaking, I like to do some form of input validation.
Is there a way to test a string variable to see if it is a (valid) regex?
(Copying things from the internet onto my system can be done, but it takes forever and is a PITA -- thus I am looking for way to do it natively.)
| Does Bourne Shell have a regex validator? |
The Bourne shell is somewhat of an antique. The Solaris version doesn't have the -e operator for the test (a.k.a. [) builtin that was introduced somewhat late in the life of the Bourne shell¹ and enshrined by POSIX.
As a workaround, you can use -f to test for the existence of a regular file, or -r if you aren't interested in unreadable files.
Better, change #!/bin/sh to #!/usr/xpg4/bin/sh or #!/bin/ksh so as to get a POSIX shell.
Beware that [ $option -eq 9 ] is probably not right: -eq is a numerical comparison operator, but $option isn't really numeric — it's a date. On a 32-bit machine, when 201301271355 is interpreted as a number, it is taken modulo 2^32. It so happens that no date in the 21st century is very close to 0 modulo 2^32, but relying on this is very brittle. Make this [ "$option" = 9 ] instead.
As a general shell programming principle, always put double quotes around variable and command substitutions: "$foo", "$(foo)". If you don't, the shell splits the result at each whitespace character and treats each resulting word as a filename wildcard pattern. So an unprotected $foo is only safe if the value of foo does not contain any whitespace or \[?*. Play it safe and always use double quotes (unless you intend the splitting and pattern matching to happen).
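To see why, with a hypothetical filename containing a space:

f='my file.xml'
[ -f $f ]     # expands to: [ -f my file.xml ] - test complains about the extra argument
[ -f "$f" ]   # one argument, works as intended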
¹ Or was it a ksh addition never ported to Bourne? I'm not sure.
|
#!/bin/sh
CONFIG_DIR="/var/opt/SUNWldm/"
read option
if [ $option -eq 9 ]; then
ret=1
elif [ -e ${CONFIG_DIR}file.xml.${option} ]; then
echo "TRUE"
fi

I have the above code in a while loop to present a list of options. Unfortunately I'm having problems with the elif statement. From "IF for Beginners": the -e returns true if the file exists. I've double-checked the syntax and even ran the script in debug mode (I put set -x at the beginning of this script and could see that the substitution in the if is done properly, as seen inline):
+ [ 201301271355 -eq 9 ]
+ [ -e /var/opt/SUNWldm/file.xml.201301271355 ]
./ldm_recover.sh: test: argument expected

I've been searching and haven't found a reason for the failure; any ideas what I'm doing wrong?
| bourne shell if [ -e $directory/file.$suffix ] |
Sourcing your script only sets shell variables, while printenv shows environment variables. You will have to export the variables for printenv to show them. You may have meant to use set instead, which will show shell variables.
You could have made this script:
#!/bin/sh
export MYVAR=MYVAL

echo "EXECUTED!!"

(given that you are using bash, the export works with an assignment as shown).
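If the file may also be sourced by an older, stricter Bourne-style sh, where export with an assignment is a syntax error, the portable spelling is:

MYVAR=MYVAL
export MYVAR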
By the way: when you source a file, shells do not pay any attention to the hashbang line #!/bin/sh: that is the province of the kernel in any case, except obliquely. Oddly enough, that is not mentioned in the manual page. You can see this by making two files, say "foo" and "bar":
#!/bin/bash
echo "outer $0 $SHELL"
. ./bar
printenv |fgrep MYVARand
#!/bin/sh
echo "inner $0 $SHELL"
export MYVAR=$MYVALto see that the shell variables (such as $0) are the same within the sourced file, and that the shell features are unaffected.
I add that line anyway, to help with syntax highlighting.
|
I have two scripts that need to run and both require the same variables set the same way. As a result I figured I'd break the setting of the variables out into a separate script. However, I can't seem to get this to work so that the variables show up in the main script.
For example, this is my main script:
#!/bin/sh

. ./vars

printenv

This is what I have in the script "vars":
#!/bin/sh
MYVAR=MYVAL

echo "EXECUTED!!"

In the output, I successfully see "EXECUTED!!", but the variable MYVAR is not set to anything:
EXECUTED!!
MAIL=/var/mail/testuser
SSH_CLIENT=192.168.110.1 62953 22
USER=testuser
SHLVL=1
HOME=/home/testuser
OLDPWD=/home/testuser/test
SSH_TTY=/dev/pts/0
LOGNAME=testuser
_=./mainScript
TERM=xterm
PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
LANG=en_US.UTF-8
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:
SHELL=/bin/bash
PWD=/home/testuser
SSH_CONNECTION=192.168.110.1 62953 192.168.110.133 22
| SH: How to make vars from one script available in the main script? |
Classically, which was a csh script, and it printed an error message like no foo in /usr/bin:/bin and returned a success status. (At least one common version, there may have been others that behaved differently.) Example from FreeBSD 1.0 (yes, that's ancient):
if ( ! $?found ) then
echo no $arg in $path
endif

(This classic implementation is also notorious for loading the user's .cshrc, which could change the PATH, which would cause the output to be wrong.)
Modern systems usually have a different implementation of which, either written in C or in sh, and follow modern standards of dealing with error conditions: no output to stdout and a nonzero exit status.
|
I am reading the source code of the Maven wrapper written for the Bourne shell. I came across these lines:
if [ -z "$JAVA_HOME" ]; then
javaExecutable="$(which javac)"
if [ -n "$javaExecutable" ] && ! [ "$(expr "$javaExecutable" : '\([^ ]*\)')" = "no" ]; then
# snip

expr, when used with arg1 and arg2 and a :, matches arg1 against the regex arg2. Normally, the result would be the number of matching characters, e.g.:
$ expr foobar : foo
3However, when using capturing parentheses (\( and \)), it returns the content of the first capturing parentheses:
$ expr foobar : '\(foo\)'
fooSo far, so good.
If I evaluate the expression from the source quoted above on my machine, I get:
$ javaExecutable=$(which javac)
$ expr "$javaExecutable" : '\([^ ]*\)'
/usr/bin/javacFor a non-existing executable:
$ nonExistingExecutable=$(which sjdkfjkdsjfs)
$ expr "$nonExistingExecutable" : '\([^ ]*\)'Which means that for a non-existing executable the output is a empty string with newline.
What's puzzling me in the source is how the output of which javac (arg1 to expr) ever returns the string no?
Is there some version of which which, instead of returning nothing, returns no when no executable can be found?
If not, this statement always evaluates to true and that would be weird.
| Does any implementation of `which` output "no" when executable cannot be found? |
Not really.
One solution is to reserve a character as the field separator. Obviously it will not be possible to include that character, whatever it is, in an option. Tab and newline are obvious candidates, if the source language makes it easy to insert them. I would avoid multibyte characters if you want portability (e.g. dash and BusyBox don't support multibyte characters).
If you rely on IFS splitting, don't forget to turn off wildcard expansion with set -f.
tab=$(printf '\t')
IFS=$tab
set -f
exec java $JVM_EXTRA_OPTS …
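To see the splitting in action (option values hypothetical, reusing $tab from above):

JVM_EXTRA_OPTS="-XX:OnOutOfMemoryError=echo oh_no${tab}-Xmx512m"
IFS=$tab
set -f
set -- $JVM_EXTRA_OPTS
printf '<%s>\n' "$@"
# <-XX:OnOutOfMemoryError=echo oh_no>
# <-Xmx512m>

The embedded spaces survive; only the tab splits.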
Another approach is to introduce a quoting syntax. A very common quoting syntax is that a backslash protects the next character. The downside of using backslashes is that so many different tools use it as a quoting character that it can sometimes be difficult to figure out how many backslashes you need.

set java
eval 'set -- "$@"' $(printf '%s\n' "$JVM_EXTRA_OPTS" | sed -e 's/[^ ]/\\&/g' -e 's/\\\\/\\/g') …
exec "$@" |
In a POSIX sh, or in the Bourne shell (as in Solaris 10's /bin/sh), is it possible to have something like:
a='some var with spaces and a special space'
printf "%s\n" $aAnd, with the default IFS, get:
some
var
with
spaces
and
a
special space

That is, protect the space between special and space by some combination of quoting or escaping?
The number of words in a isn't known beforehand, or I'd try something like:
a='some var with spaces and a special\ space'
printf "%s\n" "$a" | while read field1 field2 ...The context is this bug reported in Cassandra, where OP tried to set an environment variable specifying options for the JVM:
export JVM_EXTRA_OPTS='-XX:OnOutOfMemoryError="echo oh_no"'

In the script executing Cassandra, which has to support POSIX sh and Solaris sh:
JVM_OPTS="$JVM_OPTS $JVM_EXTRA_OPTS"
#...
exec $NUMACTL "$JAVA" $JVM_OPTS $cassandra_parms -cp "$CLASSPATH" $props "$class"

IMO the only way out here is to use a script wrapping the echo oh_no command. Is there another way?
| Is it possible to "protect" an IFS character from field splitting? |
In the Bourne shell, redirecting a compound command (like your while loop) runs that compound command in a subshell.
In Solaris 10 and earlier¹, you don't want to use /bin/sh as it's the Bourne shell. Use /usr/xpg4/bin/sh or /usr/bin/ksh instead to get a POSIX sh.
If for some reason you have to use /bin/sh, then to work around that, instead of doing:
compound-command < fileYou can do:
exec 3<&0 < file
compound-command
exec <&3 3<&-

That is:

duplicate the fd 0 onto fd 3 to save it away and then redirect fd 0 to the file.
run the command
restore fd 0 from the saved copy on fd 3, and close fd 3, which is no longer needed.

¹ In Solaris 11 and later, Oracle eventually (at long last) made /bin/sh a POSIX shell, so it now behaves like the sh of most other Unices (it interprets the sh language specified by POSIX, though it supports extensions over it, as it's based on ksh88, like other Unices, where sh is now generally based on ksh88, pdksh, bash, yash or an enhanced ash).
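Applied to the loop from the question, the workaround looks like this (a sketch):

exec 3<&0 < test.txt
while read line
do
    test1=bob
done
exec <&3 3<&-
echo "test1: $test1"    # prints: test1: bob - the assignment survived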
|
Could someone please explain to me why my while loop seems to have an internal scope? I've seen multiple explanations online but they all have to do with pipes. My code has none.
The code:
#!/bin/sh
while read line
do
echo "File contents: $line"
echo
if [ 1=1 ]; then
test1=bob
fi
echo "While scope:"
echo " test1: $test1"
done < test.txt

if [ 1=1 ]; then
test2=test2;
fi

echo;
echo "Script scope: "
echo " test1: $test1"
echo " test2: $test2"The output:
File contents: In the file

While scope:
test1: bob

Script scope:
test1:
test2: test2
| Variable scope in while-read-loop on Solaris |
So the Bourne shell (IIRC) doesn't support arrays. You can still use "$@":
set -- "one two" three
for i in "${@}" ; do
echo "$i"
doneOutputs:
one two
threeTested on AIX 7.1 bsh.
|
I'm trying to get a (bourne shell) variable to expand like "$@" so it produces multiple words with some having preserved spaces. I've tried defining the variable in many different ways but still can't get it to work:
#!/bin/sh

n=\"one\ two\"\ three
for i in "$n"; do
echo $i
doneI want to define the variable so the script outputs one two first and then three next iteration, which is what you'd get if you replaced the quoted variable with "$@" and passed 'one two' three as the arguments.
Is "$@" just magic?
| "$@" expansion for user defined variables |
I would check the value of id -u, which is specified to:

Output only the effective user ID, using the format "%u\n".

Perhaps like this:
if [ $(id -u) -eq 0 ]
then
: root
else
: not root
fi
|
I am attempting to write a script that automates the installation of ports/packages on new FreeBSD installs. To do this, the user who executes the script must be root.
The system is "supposed" to be virgin meaning bash and sudo may or may not be installed; so I am trying to account for it. To do this, I am checking if the user ID equals 0.
The problem is, between bash and sh, the environment variables are different:

bash -> $EUID (all caps)
sh -> $euid (all lower)

Is there a way other than the environment variable to check for the root user, or should I just adjust the check of the user based on the environment?
| Checking for root user in sh and bash [duplicate] |
In POSIX-compliant shells (not the Bourne shell; that feature comes from the Korn shell), ${#var}, like wc -m, counts the number of characters¹ in $var, and the behaviour is unspecified if the sequence of bytes stored in $var cannot be decoded to characters in the current locale.
Bytes are decoded into characters as per the current locale (its LC_CTYPE category). In a locale that uses UTF-8 as the character encoding, the 0xc3 0xa9 sequence would be decoded into a é character, while in a locale using ISO8859-1, that would be decoded into é and in a locale using BIG5 into 矇.
In any case, it has little to do with Unicode codepoints. It's also not the same as counting the number of grapheme clusters or the width of the string when displayed by a terminal or any other display device.
In:
var="e\xcc\x81"$var contains 9 bytes and 9 characters: e, \, x, c, c, \, x, 8 and 1.
Some printf (in the format argument or in arguments for %b format directives) and echo implementations will expand \xcc to the 0xcc byte, not all do. Per POSIX, \x in an argument to those leads to unspecified behaviour. (\351 does expand to the 0xe9 byte in printf format argument and \0351 in echo/%b though).
If you want $var to contain the 0x65, 0xcc, 0x81 bytes, in ksh93/zsh/bash (and these days more and more shells), you'd do:
var=$'e\xcc\x81'

Or you could always do:
var=$(printf 'e\314\201')

Then in a locale where locale charmap outputs UTF-8, $var would contain 3 bytes (as shown by wc -c), 2 characters (as shown by wc -m or ${#var}), 1 grapheme cluster (as shown by GNU grep -Po '\X'), usually displayed with width 1 (as shown by GNU wc -L).
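To check each of those counts from the shell (a sketch; wc -m needs a locale-aware wc, and the last line needs GNU grep built with PCRE support):

var=$(printf 'e\314\201')
printf %s "$var" | wc -c                      # 3: bytes
printf %s "$var" | wc -m                      # 2: characters (in a UTF-8 locale)
echo "${#var}"                                # 2
printf '%s\n' "$var" | grep -Po '\X' | wc -l  # 1: grapheme cluster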
If the locale at the time the shell was invoked and at the time the code was parsed and executed had UTF-8 as the charset, in several shells, you can also do:
var=$'e\u0301'

For $var to contain the UTF-8 encoding of the e and U+0301 (combining acute accent) characters.
If the locale's charset is not UTF-8, then the behaviour varies between shells. Also whether it's the locale that was in effect at the time the code was parsed or at the time the code was executed that is taken into account to expand the Unicode codepoint into a character depends on the shell. You'll find also variations of behaviour if the character is not present in the locale's charmap.
In the Bourne shell, to get the length in characters of a string, you had to resort to other utilities such as:
length=`expr "x$var" : '.*' - 1` || :

Or:
length=`printf %s "$var" | wc -m`

Though if you find a system old enough to still have a Bourne shell, chances are that its wc won't support -m or that there won't be a printf command.

¹ POSIX itself doesn't specify the mapping between sequences of bytes and characters, not even in the POSIX locale, only some APIs to define and retrieve that mapping or convert sequences of bytes to sequences of characters (wchar_t). Systems generally use standard charsets for the charmap like UTF-8, which is a transformation format of the charset defined by another ISO standard (ISO/IEC 10646, aka Unicode). Some systems like GNU ones actually use the Unicode code points for the wchar_t values regardless of the locale.
|
Arising from this discussion:
When I have (zsh 5.8, bash 5.1.0)
var="ASCII"
echo "${var} has the length ${#var}, and is $(printf "%s" "$var"| wc -c) bytes long"the answer is simple: these are 5 characters, occupying five bytes.
Now, var=Müller yields
Müller has the length 6, and is 7 bytes longWhich suggests the ${#} operator counts codepoints, not bytes. This is a bit unclear in POSIX, where they say it counts "characters". This would be clearer if characters in POSIX C weren't octets, normally.
Anyways: Nice! Kind of good, seeing that LANG=en_US.utf8.
Now,
var='🧜🏿‍♀️'
echo "${var} has the length ${#var}, and is $(printf "%s" "$var"| wc -c) bytes long"

🧜🏿‍♀️ has the length 5, and is 17 bytes long

Soooo, we decompose "Mermaid of dark skin color" into the Unicode codepoints:

Merperson
Dark skin tone
Zero-Width Joiner
Female
Print the previous character as emoji

Fine, so we're really counting Unicode codepoints!
var="e\xcc\x81"
echo "${var} has the length ${#var}, and is $(printf "%s" "$var"| wc -c) bytes long"é has the length 9, and is 9 bytes long(of course, my console font decided that the ´ combines with the following space, not the preceding e. The latter would be correct. But let's leave my rage about that for somewhen else.)
Um, a slight "wat" is in order here.
> printf "e\xcc\x81"|wc -c
3
> printf "%s" "${var}" |wc -c
9
> echo -n ${var} |wc -c
3
> echo "${var} has the length ${#var}, and is $(printf "%s" "$var"| wc -c) bytes long"
é has the length 9, and is 9 bytes long
> printf "%s" "${var}" |xxd
00000000: 655c 7863 635c 7838 31                   e\xcc\x81

Here's where I give up.
echo $var, echo ${var} and echo "${var}" all "correctly" emit three bytes. However, echo ${#var} tells me it's 9 characters.
Where is this documented/standardized, and what are the rules for all this?
| What is "length" of a string in Bourne shell compatibles' `${#string}`? |
Does the script contain the command set -u?
That means:

Treat unset variables and parameters
other than the special parameters "@" and "*"
as an error when performing parameter expansion.
If expansion is attempted on an unset variable or parameter,
the shell prints an error message,
and, if not interactive, exits with a non-zero status.

In other words, if $BAR_EXT is not set, something like
BAR_FILE="$BAR_FILE$BAR_EXT"would fail. The command
BAR_EXT=${BAR_EXT-}will explicitly set $BAR_EXT to an empty string
if it is not defined at all, thereby avoiding such an error.
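A minimal illustration (the file name value is hypothetical):

set -u
unset BAR_EXT
BAR_EXT=${BAR_EXT-}           # safe even under set -u: BAR_EXT becomes ""
BAR_FILE="archive$BAR_EXT"    # no "unbound variable" error now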
|
I'm reading a shell script for adding a progress bar to certain processes (found here). I'm having trouble understanding this (optional) line:
#BAR_EXT=${BAR_EXT-}The comment says that this will add an extension to each file, and maybe I just need to read further, but I'm not familiar with that use of the - operator.
I know about this kind of substitution, as found in the Bash Reference Manual:
${parameter:-word}I also know that the above will replace a null value for parameter with word, whereas ${parameter-word} will not. (At least, I think I know that.)
But with nothing specified after the - here, I'm not sure what's going on. Will this simply replace parameter with a null value? Generally, I would accept that as a working guess and just keep reading, but with the comment mentioning adding extensions to files.
| Bourne shell: trailing `-` operator in parameter substitution |
The closest thing to the Bourne shell, which wasn't really an open source program and cannot be found on pretty much any operating system nowadays, is the Heirloom Bourne shell from OpenSolaris, which is shells/heirloom-sh in the FreeBSD ports collection (usually installed under /usr/ports).
Note that this is not sh.
That is (a slightly altered version of) the Debian Almquist shell on FreeBSD.
The Heirloom Bourne shell installs as jsh.
This has no dealing with ~/.shrc.
It only sources ~/.profile, and that only when it is invoked as a login shell.
The only other file that it sources is /etc/profile.
The Bourne shell didn't have all of these complexities that grew up later, with lots of different files.
Ironically, it was the C shell, originally a BSD favourite, that kickstarted the trend of all of these other rc files.
The Heirloom Bourne shell isn't quite the Bourne shell.
It has job control commands, for instance; which the Bourne shell didn't have because it was written for an operating system that had no such thing.
Some of the changes that happened in the history of OpenSolaris mean that this isn't the original Bourne shell.
As I said, one cannot get that for any operating system nowadays; and even on the commercial Unices one would have got updated versions of the Bourne shell from roughly the late 1980s onwards, adding in new System 5 Release 4 stuff, just like this one.
So why do you have a ~/.shrc claiming that it's for the Bourne shell and pointing to the sh(1) manual page in a comment?
Basically, that comment is a lie.
sh on FreeBSD and NetBSD is the Almquist shell, FreeBSD using a lightly altered version of the Debian Almquist shell, which was in turn a light alteration of the NetBSD Almquist shell done by Debian people at the turn of the 21st century.
If you look at the sh(1) manual, it tells you that it is the shell written by Kenneth Almquist, not the shell written by Stephen R. Bourne.
If you want to try out the Bourne shell, then running sh on FreeBSD will not help you one bit.
That manual also tells you how to manually set up your ~/.profile so that interactive Almquist shells source a ~/.shrc file.
This doesn't happen by default in the Almquist shell, where ~/.shrc isn't a standard thing.
~/.shrc is just a conventional file that got dumped into your home directory from /usr/share/skel/ when your home directory was created by pw useradd.
It doesn't actually do anything for any shell as standard.
If you want to try out something really esoteric, more so even than the Heirloom Bourne shell, FreeBSD even has the Thompson shell available, as shells/osh in ports.
On the gripping hand, I can recommend trying out and learning the Almquist shell over (or at least before) trying out and learning the Heirloom Bourne shell.
It's a lot more useful to know, as all of the #!/bin/sh scripts on FreeBSD of course use it.
The Watanabe shell, shells/yash in ports, is another one worthwhile experiencing.
The Watanabe and Debian Almquist shells give the closest tastes of what having only the POSIX-conformant subset of shell behaviours is like, albeit that they both have the non-standard emacs command line editing mode. ☺
|
I want to try out Bourne shell on FreeBSD so I am starting to set it up for my use.
In .shrc, I set my prompt, enabled vi mode, set some aliases, and exported some variables.
However, I see that .profile also, by default, exports some variables.
It is my understanding that Bourne shell will source .profile on each startup. If so, what is the (historical) reason for having both .shrc and .profile?
| What is the difference between .shrc and .profile? |
Deleting the $(...) command substitution removes the failure for me on 5.10. That suggests that what you're seeing is the effect of . parsing the entire file before it executes, and encountering an error at that unsupported syntax. By contrast, the script is being parsed line-by-line, so it exits before the syntax error is noticed.
Experimentally, you can insert other syntax errors and see the same behaviour: . fails early, and sh executes up to the malformed line.
Why? I don't know. It doesn't seem to be specifically documented anywhere I can find. The man page just says for .:

. filename    Read and execute commands from filename and return. The
              search path specified by PATH is used to find the direc-
              tory containing filename.

The documentation for <<word says "shell input is read up to ...", which perhaps implies minimalistic parsing, but I don't see anything explicit anywhere. Line-by-line parsing is quite common for shell scripts in general, and it's what Bash does in both cases, for example.

Why doesn't -x work when sourcing a file?

It does work, as long as commands actually do start executing. Nothing is printed during the parsing phase.

For this specific example, using ` ... `-style command substitution would work, but the real script is probably more complex.
|
I have a bash script which will be called by /bin/sh on a Solaris machine. When I run the script as /bin/sh ./solarisSh, it works. When I source the same script, it fails.
I know bash and Bourne Shell are almost nothing alike. This is not my question.
My question is: Why does Solaris /bin/sh behave so differently when sourcing a file versus just plain executing a file?
Here are the data...
ns2 ~/tmp 560> env -i /bin/sh -x
$ uname -a
+ uname -a
SunOS ns2 5.7 Generic_106541-15 sun4m sparc SUNW,SPARCstation-10
$ cat ./solarisSh
$ cat ./solarisSh
+ cat ./solarisSh

[ ! "$BASH" ] && {
>&2 echo "ERROR: $0 is a Bash script. Exiting."
return 1 2> /dev/null || exit 1
}

haveRootPriv() {
local idCmd=/usr/bin/id
local euid
local uid
[[ $OHM_OS == "SunOS" ]] && idCmd=/usr/xpg4/bin/id
if (( ( $( $idCmd -ru) == 0 ) || ( $( $idCmd -u) == 0 ) )); then
echo 1
return 0
fi
echo 0
return 1
}
$ /bin/sh -x ./solarisSh
+ /bin/sh -x ./solarisSh
+ [ ! ]
+ echo ERROR: ./solarisSh is a Bash script. Exiting.
ERROR: ./solarisSh is a Bash script. Exiting.
+ return 1
$ . ./solarisSh
+ . ./solarisSh
syntax error: `$' unexpected
$

The ERROR: ./solarisSh is a Bash script. Exiting. is what I was expecting when I sourced the file. What I got was the syntax error: `$' unexpected.
To recap the question: Why does Solaris /bin/sh behave so differently when sourcing a file versus just plain executing a file?
I guess I have a second question, too (sorry): Why doesn't -x work when sourcing a file?
Thanks.
-Erik
| Solaris /bin/sh sourcing a file behaves differently than executing a file. Why? |
I am not sure whether the environment variable $PWD was set in the Bourne shell, but the pwd command has existed since the old Minix days and is part of POSIX, so:
abs_path="$(cd "$rel_path" && pwd -P)"
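For example, with basic error handling (the input path is a placeholder; note the trick works for directories - for a file, apply it to the file's directory and re-append the basename):

rel_path='../some/dir'
abs_path="$(cd "$rel_path" && pwd -P)" || exit 1
printf '%s\n' "$abs_path"
|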
Suppose I'm running bash on a Unix-ish system - not necessarily Linux and not necessarily very new; and it may not have every bit of software I'd like.
Now, I have a relative path for which I want to get the absolute path. What's the most robust and portable way of doing this?
The answers here seem to mostly assume the GNU core utilities are installed, which I would rather not do.
Bonus points if your answer works on any Bourne Shell variant.
| How do I convert a relative to an absolute path, portably and robustly? |
You don't need ; at the end of each line, this is not C.
You don't need:
cp /dev/null pingop.txtbecause the very next line in the script
ping -i 1 -c 1 -W 1 google.com > pingop.txtwill overwrite contents of pingop.txt anyway. And if we're here, you
don't even need to save output of ping to the file if you're not
going to send it or process it later, just do:
if ping -i 1 -c 1 -W 1 website.com >/dev/null 2>&1
then
sleep 1
else
mutt -s "Website Down!" [emailprotected] < wsdown.txt
    sleep 10
fi

To answer your question about false alarms - ping might not be the
best way for testing if website is up. Some websites just do not
respond to ICMP requests, for example:
$ ping -i 1 -c 1 -W 1 httpbin.org
PING httpbin.org (3.222.220.121) 56(84) bytes of data.--- httpbin.org ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

However, http://httpbin.org is up.
your example you most probably access it with HTTP/HTTPS and in that
case consider using curl -Is:
$ curl -Is "httpbin.org" >/dev/null 2>&1
$ echo $?
0
$ curl -Is "non-existing-domain-lalalala.com" >/dev/null 2>&1
$ echo $?
6

OP asked about the speed difference between ping and curl in the
comments. There is no big difference if you're testing website that
responds to ping:
$ time curl -Is google.com >/dev/null 2>&1
real 0m0.068s
user 0m0.002s
sys 0m0.001s
$ time ping -i 1 -c 1 -W 1 google.com
PING google.com (216.58.215.110) 56(84) bytes of data.
64 bytes from waw02s17-in-f14.1e100.net (216.58.215.110): icmp_seq=1 ttl=54 time=8.06 ms--- google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 8.068/8.068/8.068/0.000 ms
real 0m0.061s
user 0m0.000s
sys 0m0.000sBut when testing website that does not respond to ping then curl
is not only more reliable but also faster than ping with -W that you
use now:
$ time ping -i 1 -c 1 -W 1 httpbin.org
PING httpbin.org (3.222.220.121) 56(84) bytes of data.--- httpbin.org ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

real 0m1.020s
user 0m0.000s
sys 0m0.000s
$ time curl -Is httpbin.org >/dev/null 2>&1

real 0m0.256s
user 0m0.003s
sys 0m0.000s
|
I need improvement on a script which continuously tests a website.

The following script is currently being used, but it is sending a large number of failure emails while the website is still up:
#!/bin/bash
while true; do
date > wsdown.txt ;
cp /dev/null pingop.txt ;
ping -i 1 -c 1 -W 1 website.com > pingop.txt ;
sleep 1 ;
if grep -q "64 bytes" pingop.txt ; then
:
else
mutt -s "Website Down!" [emailprotected] < wsdown.txt ;
sleep 10 ;
fi
done

I'm now thinking of improving this script somehow or using another approach.
| Need Improvement on Script Which Continuously Tests Website |
The first thing you should know about scripting in csh is that it is usually a very bad idea. That said, if you insist, the problems with your script are:

csh doesn't support the $() construct for command substitution, use ` ` instead.
csh doesn't support the for i ... do ... done syntax, use foreach i ... end instead.
csh doesn't do funky string manipulation like "${elapse%:*}". You'll have to get around it using some other tool.
I don't know how to get [ to work with csh (but it's probably possible); as a workaround, use if instead.

So, a working version of your script in csh would be:
#!/bin/csh
set ALTER = "$1"
set NAME = "$2"
foreach pr (`pgrep "$NAME"`)
set elapse = `ps -o etime= -p "$pr" | cut -d: -f1`
if ( "$elapse" > "$ALTER" ) echo "$pr"
end

Seriously though, don't script in csh, it will only cause you pain. Especially since all you really need is:
ps -o pid=,etime= -p $(pgrep $NAME) | cut -d: -f1 |
awk -vval="$ALTER" '$2>val{print $1}'
|
This works fine:
#!/bin/sh

ALTER="$1"
NAME="$2"

for pr in $(pgrep $NAME); do
elapse=$(ps -o etime= -p $pr)
[ "${elapse%:*}" -gt "$ALTER" ] && echo $pr
done

But if I try to switch it to CShell:
#!/bin/csh

set ALTER = "$1"
set NAME = "$2"

for pr in $(pgrep $NAME); do
set elapse = $(ps -o etime= -p $pr)
[ "${elapse%:*}" -gt "$ALTER" ] && echo $pr
done

I get an Illegal variable error. Any ideas?
| Bourne Shell to CShell |
The V7 sh.1 man page defines PS1 as:

Primary prompt string, by default ‘$ ’.

So yes, the letters P and S in PS1 stand for “prompt string”.
PS1 was introduced with the Bourne shell in V7; older shells didn’t have anything like this. The Thompson shell, used before V7, didn’t have variables at all. The PWB (Mashey) shell introduced single-character alphabetic variables ($a through $z), with special meaning given to $n (the number of arguments given to the shell), $p (the search path), $r (the last command’s return code), $s (the user’s login directory), and $t (the terminal identification); $$ was also understood, and replaced by the shell’s process number. These were refined into the more general concept of environment variables during the design of V7.
| What do the letters PS stand for in $PS1?
Is it actually "Prompt String"?
Where did $PS1 first appear?
| What is the etymology of $PS1? [closed] |
Think about how your thought process works for translating from digit form to prose form. What do you look at first? What do you do with that information? Is there a pattern to your workflow that you could express in a procedural form? How can this be broken into small, discrete steps which are analogous to the commands available to you?
The above line of thinking is the quintessence of programming and scripting.
Once you have the skeleton of the process in mind, put it down in "pseudocode" - words that make sense to you, if not to the shell - and step through that process, to make sure it does what you want rather than just what you say. Then translate that from your native tongue into shell commands.
For instance, a good starting point might be to determine how many place values you have to indicate. There are a couple of ways to do this that come immediately to mind: "how many digits do we have", or "is the number greater than 99? greater than 9?" Or you could even work out a system that doesn't need you to sort this out first.
In this case, the first thing you need is the ability to do some basic arithmetic in the shell, and comparative tests. So:
Let's say we've read the number into a variable, number, and already sanity-checked it to make sure the user didn't enter -53 or 3.14 or albatross or something we're not wanting to actually parse. We can start with:
output=""
if [[ "$number" -gt 100 ]]; then
# okay, we know $number is greater than 100
hundreds=$((number/100))
case $hundreds in
1) output="one" ;;
2) output="two" ;;
3) output="three" ;;
# et cetera
esac
number=$((number-100*hundreds))
fi
output="$output hundred"And you can build from there.
|
My problem definition is: 1. Write a Bourne shell script dTOe which takes as an input any number between 0 and 999 and prints the English value for this number. I am struggling with the above problem. Could you give me any hints or help?
#! /bin/bash

number=$1

if [ $number -lt 0 -o $number -gt 999 ]
then
echo put the right input between 0 and 999
fi

case "$number"
[0-9]) | How to use switch case in my case? |
You may use pstree for this:
$ bash
bash-4.4$ pstree -p "$$"
-+= 00001 root /sbin/init
\-+= 85460 kk tmux: server (/tmp/tmux-1000/default) (tmux)
\-+= 96572 kk -ksh93 (ksh93)
\-+= 72474 kk bash
\-+= 14184 kk pstree -p 72474
\-+- 51965 kk sh -c ps -kaxwwo user,pid,ppid,pgid,command
 \--- 91001 kk ps -kaxwwo user,pid,ppid,pgid,command

The pstree utility will show the parent-child relationships for all processes currently running on the system. With -p "$$" you restrict its output to only contain processes related to the current shell (whose process ID is stored in the $ variable).
To cut the output off at the point where it gets to the current shell, you could use sed:
bash-4.4$ pstree -p "$$" | sed "/= $$ /q"
-+= 00001 root /sbin/init
\-+= 85460 kk tmux: server (/tmp/tmux-1000/default) (tmux)
\-+= 96572 kk -ksh93 (ksh93)
 \-+= 72474 kk bash

For Linux systems, which apparently use a different implementation of this utility from what I'm using (on OpenBSD), you may want to use
$ pstree -salup "$$"to get a similar output, and
$ pstree -salup "$$" | sed "/,$$\$/q"to cut the output off at the point where it gets to the current shell.
Here's a shell function pls (for "process ls", that's the best I could come up with) that does the above for any given PID (or the PID of the current shell if left out):
function pls
{
local pid="${1:-$$}"
pstree -salup "$pid" | sed "/,$pid\$/q"
} |
From a bash or sh shell, how can I determine if it was called with the bash or sh command, a login shell, an xterm, and, in the case of the former, how was that called?
For example, if I call bash from an xterm, and then call it again, inside that instance, it might output something like
me@mylinuxmachine:~$ bash
me@mylinuxmachine:~$ bash
me@mylinuxmachine:~$ magic_command
Called by /bin/bash {
Called by /bin/bash {
Called by xterm
}
} | Determine how bash or sh was called |
(You most likely had firstline set already when you tested that code in bash; its value should be empty at the end.)
When running the pipeline
cat myfile | { read firstline; read secondline; }

the right-hand side is running in a subshell. Not because of the {...;} but because it's part of a pipeline. The subshell environment will contain the two variables firstline and secondline (after both have been read), but will be destroyed when the subshell terminates, discarding both variables.
This holds true for both POSIX sh and bash.
In bash (4.2+), you can work around this by setting the lastpipe shell option. From the bash manual:

Each command in a pipeline is executed as a separate process (i.e., in
a subshell). See COMMAND EXECUTION ENVIRONMENT for a description of a
subshell environment. If the lastpipe option is enabled using the
shopt builtin (see the description of shopt below), the last element of
a pipeline may be run by the shell process.

This would work in a script, but not in an interactive shell with job control enabled (it's the job control that makes it not work, not the interactivity).
Example:
$ cat script.sh
cat << EOF > myfile
line1
line2
EOF

cat myfile | { read firstline; read secondline; }
printf 'first=%s\n' "$firstline"
printf 'second=%s\n' "$secondline"

shopt -s lastpipe
cat myfile | { read firstline; read secondline; }
printf 'first=%s\n' "$firstline"
printf 'second=%s\n' "$secondline"

$ bash script.sh
first=
second=
first=line1
second=line2

In your particular case, you could also, in both POSIX sh and bash, do away with cat and the pipe completely and instead redirect into the compound command with the two read calls directly:
{ read firstline; read secondline; } <myfile

On a tangential note, you most likely do not have a real historical Bourne shell on your machine (unless it's a Solaris system before Solaris 11). I'm assuming you mean a modern POSIX sh shell.
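And for reference, here is the full redirect-based version of the test script from above; it prints the expected values in bash and in any POSIX sh:

cat << EOF > myfile
line1
line2
EOF

{ read firstline; read secondline; } < myfile
printf 'first=%s\n' "$firstline"
printf 'second=%s\n' "$secondline"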
|
I'm trying to read two lines into two variables. In Bash I would use something like this:
cat << EOF > myfile
line1
line2
EOF

cat myfile | {
read firstline
echo $firstline # "line1" in bash and sh
read secondline
}

echo $firstline # "line1" in bash, empty in sh
echo $secondline

In Bourne Shell however $firstline and $secondline are empty outside the command group. How can I do it in sh?
| Reading multiple lines in Bourne Shell |
You are running the rsync command in a command substitution. A command substitution will be replaced by the output of the command within it, and the way your script is written, this output will be executed as a command, which is why you get that error message and the seemingly weird tracing output.
Instead:
#!/bin/sh -x

rsync -auh --delete --out-format='%n' "$1" "$2" || exit 1

If you still want the set -x in a subshell within your script:
#!/bin/sh

( set -x; rsync -auh --delete --out-format='%n' "$1" "$2" ) || exit 1

The exit 1 could possibly be dropped if the rsync is the last command in the script, as the exit status of the script would be the exit status of the last command executed, unless you want to force it to be exactly 1 no matter how rsync failed.
|
I'm trying to execute rsync from a Bourne shell script (read: Bash extensions not available) and after lots of searching, single/double quotes combinations, escapes, etc, I wasn't able to correctly pass the --out-format='%n' argument.
For example, this script:
#!/bin/sh

(set -x ; $(rsync -auh --delete --out-format='%n' "$1" "$2")) || exit 1

when invoked like ./myscript.sh dir1/ dir2/ returns this output on MacOS 10.12.6:
++ rsync -auh --delete --out-format=%n dir1/ dir2/
+ ./ file1.c file1.h file2.c file2.h
myscript.sh: line 3: ./: is a directory

where file1.c file1.h file2.c and file2.h are the contents of dir1/
First of all, I have no idea why the + ./ file1.c file1.h file2.c file2.h line is output, because --out-format='%n' outputs one file per line, and not all files on the same line. Also, the mysterious starting ./ seems to be the cause (or the consequence) of the error.
If I remove --out-format='%n' from the script, then it runs fine, with no errors.
If I execute the command from the terminal, it runs fine both with single quotes in the argument and without them (--out-format='%n' and --out-format=%n). When on the script, it fails the same in both cases.
What could be causing this error?
| Passing --out-format=FMT argument to rsync from Bourne shell script |
A login sh session reads the user's ~/.profile upon invocation. If the ENV variable is set to a filename after doing that, and if that file exists, the shell will use that file to further initialize the login session.
Interactive shells that are not login shells will only use $ENV if ENV is set, but will not read ~/.profile.
Non-interactive shells should not use either of these two files.
Usually, one exports ENV at the end of one's ~/.profile:
ENV="$HOME/.shrc" # for example
export ENV # may be done as export ENV="..." too, in most shells

This is, for example, what bash does if it's invoked as sh or with bash --posix.
One may use these two files (~/.profile and $ENV) for whatever one wishes, but the profile is where you might want to set and export environment variables that only need to be set once (PATH etc.), fire up any fetchmail process or other user daemon that you wish to use etc., while the $ENV file is where you set up specific things for this particular shell session/TTY, such as setting GPG_TTY (if you're using GnuPG), setting up aliases (since aliases are not inherited by subshells) etc.
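As a rough sketch of that division of labour (the .shrc file name and the contents are only examples, not a prescription):

# ~/.profile -- read once per login session
PATH="$HOME/bin:$PATH"
export PATH
ENV="$HOME/.shrc"
export ENV

# ~/.shrc (the file ENV points to) -- read by every interactive shell
GPG_TTY=$(tty)
export GPG_TTY
alias ll='ls -l'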
The ksh93 shell uses ~/.profile and $ENV by default, but interprets $ENV in a specific way. If $ENV starts with /./ or ././, then no system-wide configuration file will be used (e.g. /etc/ksh.kshrc).
The file ~/.login is not used by sh, unless ENV is set to this filename or it is explicitly sourced from ~/.profile or $ENV.
|
I ask in that way, because according to https://unix.stackexchange.com/a/46856/84749, when I start screen it's an "interactive, non-login" shell I'm getting. What's actually happening is I'm logging into a Bourne shell (not BASH) system, and when I do, it runs ~/.profile just fine, and sets up my aliases. But when I run screen these aliases are lost, and it doesn't seem to run ~/.profile or ~/.login or anything else I've tried.
I'm running LibreELEC on a Raspberry Pi 3.
| Bourne shell: what does it execute on interactive, non-login? |
"Variables present in the shell’s initial environment are automatically exported to child processes. The Bourne shell does not normally do this unless the variables are explicitly marked using the export command.".Consider:
% export FOO=abc
% bash -c 'FOO=xyz; echo "bash: FOO=$FOO"; echo "env:"; env |grep FOO'
bash: FOO=xyz
env:
FOO=xyz

Here, I've set FOO in my interactive shell (zsh, but it doesn't matter), and set it as exported. Then I run Bash, which receives that variable in its environment, changes its value, prints it, and then runs env. That's an external command, so it only sees variables that the inner Bash explicitly passes to it. We see the modified value of FOO is visible in the inner Bash, and in env, so Bash indeed imported the variable from its environment, and passed it on as it would pass any exported variable.
The other behaviour the quote seems to describe would be that the inner shell would not pass the variable on to env, and you'd see something like this instead:
bash: FOO=xyz
env:

I don't know what the actual behaviour with all historical implementations has been; all I could reproduce was this with heirloom-sh (the same behaviour as Kusalananda mentioned), where only the original value of the variable is passed on:
% ./heirloom-sh -c 'FOO=xyz; echo "sh: FOO=$FOO"; echo "env:"; env |grep FOO'
sh: FOO=xyz
env:
FOO=abc

An explicit export FOO in the inner shell would have the current value also passed on. Here, the shell does make the original value of FOO visible to the script, so echo "FOO=$FOO" as the first thing would also print FOO=abc.
|
bash v3.2 (though I think this holds for newer versions too):
In section 3.7.4 Environment, the docs say:

On invocation, the shell
scans its own environment and creates a parameter for each name found, automatically marking it for export to child processes.

And later in Appendix B Major Differences From The Bourne
Shell, the docs say:

Variables present in the shell’s initial environment are automatically exported to child processes. The Bourne shell does not normally do this unless the variables are explicitly marked using the export command.

I don't understand what this means.
In the following, cmd1.sh comprises
#!/bin/bash

echo yes $ben from cmd1
./cmd2.shAnd cmd2.sh comprises
#!/bin/bash

echo yes $ben from cmd2

I first understood the docs to mean that all assigned variables will be exported (ie there was no need to export variables), ie when running
ben=you;
./cmd1.sh

I expected this to print
yes you from cmd1
yes you from cmd2But instead it prints
yes from cmd1
yes from cmd2

So variable ben doesn't appear to be automatically exported. Then I thought the docs might instead mean that all environment variables will be exported, ie when running
ben=you;
export ben;
./cmd1.sh

Because cmd1 receives an environment variable of ben, ben will be automatically exported such that it will be visible in cmd2. Ie I expected this to print the following (and indeed the following is printed):
yes you from cmd1
yes you from cmd2

However, to test whether this is different from Bourne shell (as the docs claimed) I ran the exact same commands, but changing the shebang to point to /bin/sh instead of /bin/bash, and I obtained the exact same result. Ie I did not see any difference. In Bourne shell, I was expecting to see an output of something like
yes you from cmd1
yes from cmd2

Can anyone help me to understand what the docs are referring to when they talk of "automatically" marking parameters for export, and how this is different to Bourne shell?
Nb I did spot this question regarding a specific difference between export behaviour in bash and bourne, but that doesn't seem to be relevant.
| export command behaviour in bash vs bourne shell |
It is most unlikely that you really use the Bourne Shell; it is more likely that you are using dash (the Debian Almquist Shell). You may like to check this by calling:
echo $0

The exact name of the shell may be retrieved via the whatshell script from https://www.in-ulm.de/~mascheck/various/whatshell/
But dash is not the Bourne Shell and with respect to UNIX worse than the Bourne Shell as dash does not support multi-byte characters.
The Bourne Shell is a shell that has been started as a rewritten Thompson Shell by Stephen Bourne in 1976 and since then has evolved massively.
In 1983, a copy of the Bourne Shell was used as the starter for the Korn Shell (ksh) by David Korn.
In 1988, both Bourne Shell and ksh did get support for internationalization and the libc from UNIX evolved until 1992, so that both since then support multi-byte characters.
In 1989, the Korn Shell has been used as a paragon for Bash, the GNU project's shell.
In 1989, the Bourne Shell from 1982 has been used as a paragon for ash (the Almquist shell) and dash is a bug-fixed version of ash. But both ash and dash later added POSIX features.
In 1992, POSIX used ksh88 as a paragon for the POSIX shell definitions.
In 2005, OpenSolaris made the Bourne Shell open source and starting from 2006, the Bourne Shell source code evolved to become POSIX compliant.
While dash misses a history editor and multi-byte support, the current Bourne Shell implements these features.
The main differences between shells today are however (besides POSIX compliance) features that make the shells nice to use as interactive shells. This is what you get from a recent Bourne Shell (bosh), from ksh and from Bash, but not from dash.
| Is the relation between the Bourne shell and Bash similar to that of C and C++ (if so, it would suggest that both have their place as a shell)? Whenever I read something about shells it always says that the Bourne shell is dead and obsolete, but why?
| Why is Bourne shell considered obsolete? [closed] |
Given the constraints explained in the question and comments, I would start by removing the differences between the style guidelines used for the working copy and the stored copy. However I understand that can be very difficult, so feel free to ignore that advice.
I don’t think rsync (i.e., filtering the files while they’re being copied) is the right place to try to apply “beautification” before committing the “stored copy”. If you can use the SCM, I would piggy-back off of that; for example, using git, after rsync:
git diff --name-only -z | xargs -0 beautifier

will run beautifier on all changed files, assuming it can work on files given as parameters.
If you can’t (or won’t) use the SCM, you could use the rsync logs (see the --log-file and --log-file-format options) to find out what rsync copied, and run the beautifier on those files only.
If rsync logs aren’t exploitable, there is still another way to go about this: run the beautifier on every single file in the “stored copy”, outputting to a temporary file, and compare the output to the original. If the beautification changes a file, copy the output back over the original.
|
I keep two copies of the same source code tree: One is the "working copy", and the other is the "stored copy". When I finish editing the "working copy", I refresh the "stored copy" with rsync (only modified files will be copied and, moreover, deleted files in the working copy will be deleted in the stored one as well). There's also a SCM, but it takes place later, after the "stored copy", so we can ignore the SCM here.
But now I want to apply a code beautifier when doing the refresh from the "working copy" to the "stored" one. The code beautifier can be applied through stdin/stdout redirection, but, AFAIK, rsync doesn't allow going through a stdin/stdout filter when performing the copy.
I want to beautify only the modified files, because I don't want to modify the timestamps of unchanged files in the "stored copy". The beautification rules are different for the "working copy" and the "stored copy", so the beautifier can't be applied to the working copy.
How can I do this? Any solution that works on UNIX would be acceptable, although I prefer Bourne shell scripts, or C programs. If rsync could be used in some special way for doing this, it would be fine as well.
| Mirror-like of source code tree applying beautifier to modified files only |
Note that -prune just stops recursion into subdirectories; it doesn't stop at the first found entry. You probably want -quit with GNU or FreeBSD find or -exit with NetBSD find:
$ find . -name test
./test
./Y/test

$ find . -name test -print -quit
./testInstead of testing the return code of find, you can test the output
files=$(find . -name "test" -print -quit)

if [ -n "$files" ]
then
echo "error... found $files" >&2
exit 2
fi |
I would like to use find on a directory structure to exit if at least
one file exists with a target condition, because this will lead to a failure of the rest of a shell script.
Since this shell script is intended to run on big directory structures I would like to exit of it as soon as possible.
For example I would like to do:
find . -name "test" -prune
# if "test" file found, just exit immediately
if [ $? -eq 0 ] ; then
echo error... >&2
exit 2
fi
...continuation of shell script

But -prune is always evaluated to true.
What is the more efficient way to write a find expression to achieve
this kind of short circuit find?
I would like to use as standard as possible Bourne shell constructs
and avoid the use of any temporary file.
| find exiting on 1st found and return code |
You don't need the enclosing parentheses; test itself would suffice:
if test -e "$NAME"; thenThe (()) is for arithmetic comparison operations.
test is synonymous with the [ command, so you can use:
if [ -e "$NAME" ]; thentoo.
Also some shell has the [[ keyword:
if [[ -e "$NAME" ]]; then |
I am trying to test if file "file1.c" is present in the current working directory, what am I doing wrong with my test command? I thought I understood this command, am I doing something wrong for the Bourne shell that I do not know about?
#! /bin/sh
NAME=$1
if((test -e "$NAME"));then
echo File $NAME present
else
echo File $NAME not present
fi | syntax error: invalid arithmetic operator (error token is ".c") |
It is in the bash manual (3.6.2 Redirecting Output):

If the redirection operator is >|, or the redirection operator is > and
the noclobber option to the set builtin command is not enabled, the redirection is attempted even if the file named by word exists. | What does the >|-redirection in bash do?
I just found out that echo text >| somefile creates the file somefile (if not existing yet), and fills it with text. Similar to what echo text > somefile would do.
Further experiments suggest that the >|-redirection behaves as the >-redirection does.
So, what is the >|-redirection exactly?
Since it's hard to google for the string ">|", I could not really search the web (so I have added "greater-pipe" in the title of this question since that is google-able).
| What does a ">|"-redirection ("greater-pipe"-redirection) mean? [duplicate] |
You can do...
while read line
do line=${line%%[!0-9]*}
[ -n "$line" ] || continue
: work w/ digits at line's head
doneAlternatively - and probably faster - you can do:
tr -cs 0-9\\n \ |
while IFS=\ read num na
do ${num:+":"} continue
: work w/ first seq of digits on line
doneOr is if you want to ignore completely any line containing anything but spaces, tabs, or numbers, or even any line containing two space-separated nums...
b=${IFS%?}
grep "^[$b]*[0-9]\{1,\}[$b]*$" |
while read num; do : stuff with "$num"; done

With case you could do it like this:
while read num
do case ${num:--} in
*[!0-9]*) continue;;esac
: something w/ $num
done |
I have a program that is currently working, but I need to modify it to ignore some stdin that is not fitting for its correct function.
Right now, to run the program:
printf "1\n3\n5\n" | sh prog
The program currently ignores non-integer input (like floats), but I also need it to ignore something like '4 10' on the same line and '5 text' etc.
#! /bin/sh

sum=0;
cnt=0

while read line
do case "$line" in *[.]* ) #------I think here is where the regex needs to be edited
printf "\n0"
continue
;; [0-9]* )
sum=`expr "$sum" + "$line"`
cnt=`expr "$cnt" + 1`
printf "\n%s" `expr $sum / $cnt`
;;
esac
done

I'm pretty sure it's just a matter of changing the regex on the line I pointed out so that it goes to the print 0 and continue case with the two non-desired input types I described above, but I am having trouble with it.
Thank you!
| Bourne shell: ignoring certain kinds of stdin |
Not "the old days" — we still use computers without GUIs, because we have servers, embedded systems, and even some folks who prefer to have no graphical environment on their home workstations.
When you start up your computer, or SSH in, or whatever: yes, you're greeted with $ (or a login prompt first, depending). It's mostly the same as using the terminal emulator in fullscreen. The Mac runtime is based on BSD, so also similar to older Unix systems. Many of the tools on Unix-like systems today are also very similar to those in use 30+ years ago, and some haven't seen anything but security patches in a long long time.
cmd.exe isn't exactly the same as a DOS workstation, but it's the same kind of feel. You could also try this out for yourself by finding a good DOS emulator. For Unix-like systems, you could get a VPS on any of the providers for an hour or so just to experiment, or start a new vTTY if you're on Linux.
There were fewer programs (you won't find Ruby or Node on SysV), and some tools have changed, but if you're already comfortable using the terminal on a modern computer, you wouldn't have many problems using a text-only system, and only a few getting used to an older Unix system (especially if you're used to GNU tools).
You can also run older versions of Unix in VMs or emulators, of course. For example this answer has links for running V7 in VirtualBox, and a quick search in your preferred search engine will probably turn up instructions and images for other older OSs.
| I understand the concepts of terminal, console, shell and their differences. I know a shell today is an interpreter that communicates with the OS kernel to perform some actions and does it through terminal applications.
But in the old days, when computers didn't have a GUI, was all the interaction a user had with a computer through the shell?
I've read that the Bourne Shell (sh) was introduced in Unix version 7. Was it like, you turn on the computer and from the moment you start typing you are communicating with sh? Or did you have to start the sh program through a command, and then that shell starts?
And kind of the same with Windows or Mac: is that MS-DOS functionality what we have today in cmd?
Thanks in advance, and if someone can point to documentation where this evolution is explained I'll be very grateful.
| How was a shell like when operating systems didn't had a GUI? [closed] |
In the context of FreshTomato, what the command does is: if the variable $1 is greater than or equal to 20, run a telnet daemon on port 233 that will drop you into a shell.
[ $1 -ge 20 ] && telnetd -p 233 -l /bin/sh

$1 is the number of seconds the SES/WPS/AOSS button has been pressed
-p is the port number
-l specifies the login program to run when a connection comes in
/bin/sh is that login program here, so each connection is handed a shell

So, if you press that button for twenty seconds or more, go to another host and do:
telnet myFreshTomatoHost 233

You'll be taken straight to the shell. It is meant as a procedure to open a backdoor if you have physical access to the system running FreshTomato.
|
Help with deciphering commands
[ $1 -ge 20 ] && telnetd -p 233 -l /bin/sh
I know /bin/sh is a Bourne shell and telnetd is a telnet daemon, but I'm not sure how they work together. I think someone tried to leave a back door open, but I'm not sure what the other parts do or how they fit together.
Thanks
| Help with a deciphering command with telnetd in it |
Try this:
find . -maxdepth 1 -size +1000c -type f -exec ls -lhSa '{}' +Explanation:
-maxdepth 1 - find files only in current directory
-size +1000c - find only files greater than 1000 bytes ("c" = bytes)
-type f - find only files
-exec <command> {} + - execute command. See man find for more information
If you do not want to use find (I don't know why), you may type (thx @αғsнιη):
ls -lpSa | awk '! /\// && $5>1000'

But why not parse ls?
|
I can sort the files either in descending order (of any size) or list all files greater than 1000 bytes but don't know how to sort files greater than 1000 bytes in a user specified directory.
List files greater than 1000 bytes:
for i in "$1/*" # $1 expects a directory name
do
if [ `wc -c $i` -gt 1000 ]
echo $i
doneList files in descending order of size :
`ls -lhS`

But how do I list all files greater than 1000 bytes in descending order of size?
| sort files greater than 1000 bytes in descending order |
#!/bin/bash
python3
print("Hello World")
exit()
echo "The execution is completed"Scripts work differently from typing commands directly in a terminal, ie. what you want to do won't work this easily.
The #!/bin/bash line tells the kernel that the program it's trying to start is a script that needs to be run with /bin/bash. So when you run ./scriptname the kernel instead runs /bin/bash ./scriptname and bash will read commands from scriptname instead of standard input (the terminal).
When bash gets to the python line it will start the Python interpreter and kindly wait for it to exit before continuing to the next line. When Python starts it does the same thing it would when you type python in the terminal, it will start the interactive interpreter.
Python is unaware that you want to run commands from a script file, and you also can't tell it to skip a few lines.
When Python exits, bash will continue from the next line, print("Hello World"), which is a syntax error; you also can't tell Bash to skip a few lines after python.
There are three ways to get Python to run commands:

Have bash write commands to the input of python via a heredoc (*1). Python can't read user input from the terminal if you do this.
Use the -c option to supply a short list of commands. This is only suitable for extremely short programs that also don't need to use any quotes (for strings). Multiple commands can be separated with semicolons.
Have a separate Python script; this requires a second file. If you understand the security risks associated with temporary files (*2), you can make bash create the file for you if you wish. Just don't create the files in /tmp and you should be fine.

The follow up question: Is it possible to get the output Hello world or any other output that is generated in the python interpreter to bash environment and use it in the bash script.

You can use command substitution using $( and ); please use these inside double quoted strings. You can even nest double quoted strings on the inside:
test="$(python -c "import sys; print(sys.version)")"
echo "Python has finished"
echo "test = $test"And method which you specify to run the python commands will that work for other tools as well?All three methods listed above will work for many other scripting languages too.*1 Heredoc example
read -p "How many numbers: " n
python <<END
for i in range($n):
print(i)
print("Literal dollar sign \$")
END

The <<END syntax makes the shell (bash) read all lines until it reads the word END. Note that END must be at the beginning of the line.
This way you don't need to escape quotes. The shell will still interpret dollar signs, so you can use variables. If you want an actual dollar sign you need to escape it like this: \$.

*2 Example of the dangers of temp files. DO NOT USE THIS CODE
cat >/tmp/python-script <<END
for i in range(10):
print(i)
END
python /tmp/python-script

If someone else creates /tmp/python-script first, this shell script won't even stop if it fails to overwrite the file. A malicious user on the system could make a harmful Python script which will run instead of the intended script.
This is not the only thing that can go wrong.
There are ways to do this safely, but the simplest way would be to create
the files in the current working directory or in a dedicated directory in the home directory.
cat >dopythonstuff <<END
...
END
python dopythonstuff
rm dopythonstuff |
Sorry about the title, it may not be clear. Here is the complete explanation of my doubt. I am writing the below shell script and expecting the mentioned output.
#!/bin/bash
python3
print("Hello World")
exit()
echo "The execution is completed"The output what i am expecting is, it should enter the python3 interpreter and execute the print and exit() commands and after executing the exit() command as the interpreter exits if we do it manually and then execute the echo command.But it is not working that way, after executing python3 it is entering the python3 interpreter but not executing the print and exit().
>>>

It is entering the python3 correctly and then stops there till I manually exit the python interpreter.
What changes should I make in order to get my expected output?

The follow up question: Is it possible to get the output Hello world, or any other output that is generated in the python interpreter, into the bash environment and use it in the bash script?
2. And will the method which you specify to run the python commands work for other tools as well? | How to get into python environment and run some python commands and return to normal terminal using shell script
Assuming you are referring to your own user's crontab, to avoid duplicating the definition of JAVA_HOME you can export the variable in ~/.zshenv (instead of ~/.zshrc), which is read even in non-interactive, non-login shells, and run zsh -c 'sh /path/to/script' in your cron job (replacing sh, based on what the program called "Bourne shell" in your question actually is, if appropriate).
Alternatively, if you are fine with defining JAVA_HOME in multiple places and if your sh implementation supports this1, you may export it in ~/.profile and invoke sh as a login shell by either appending -l to the script's shebang or changing the cron job's command into sh -l /path/to/script.
Though, in the end, the most convenient solution is probably to simply add
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/jre/

as a line at the top of your crontab (unless you have distinct cron jobs that need distinct values of JAVA_HOME, of course).

1 Your sh, which is unlikely to be a "true" Bourne shell, may have a -l option if it is actually a link to (for instance) bash or dash. As Stéphane Chazelas pointed out in a comment, 1) it does not have it if it is the Bourne shell or an implementation of POSIX sh (e.g., sh has no -l option on {Free,Net,Open}BSD); and 2) not all the implementations that support -l will read ~/.profile when given that option.
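For illustration, a crontab using that last approach might look like this (the schedule and script path are made up):

JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/jre/

0 * * * * /home/you/bin/run-groovy-job.sh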
|
I set JAVA_HOME in .zshrc:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/jre/

which is fine for interactive programs. But I have JVM programs running via cron, which uses Bourne shell. The Bourne shell programs keep giving me this:
groovy: JAVA_HOME is not defined correctly, can not execute: /usr/lib/jvm/default-java/bin/javaWhat's the neatest way to solve this? I don't remember having to worry about this before. Currently I'm setting JAVA_HOME on every crontab entry which is burdensome and redundant.
| Sharing environment variables between zsh and bourne shell (for crontab) |
The ${str^^} syntax you are trying is available from Bash 4.0 and above. Perhaps yours is an older version (or you ran the script with sh explicitly):
Try this:
str="Some string"
printf '%s\n' "$str" | awk '{ print toupper($0) }' |
I searched SO and found that to uppercase a string the following would work
str="Some string"
echo ${str^^}

But I tried to do a similar thing on a command-line argument, which gave me the following error
Tried
#!/bin/bash
## Output
echo ${1^^} ## line 3: ${1^^}: bad substitution
echo {$1^^} ## No error, but output was still smaller case i.e. no effect

How could we do this?
| How to uppercase the command line argument? |
This is actually done by your shell, not by ls.
In bash, you'd use:
shopt -s nocasegloband then run your command.
Or in zsh:
unsetopt CASE_GLOBOr in yash:
set +o case-globand then your command.
You might want to put that into .bashrc, .zshrc or .yashrc, respectively.
Alternatively, with zsh:
setopt extendedglob
ls -d -- (#i)*abc*

(that is, turning case-insensitive globbing on for just that wildcard)
With ksh93:
ls -d -- ~(i:*abc*)

You want globbing to work differently, not ls, as those are all files passed to ls by the shell.
|
I would like to list all files matching a certain pattern while ignoring the case.
For example, I run the following commands:
ls *abc*

I want to see all the files that have "abc" as a part of the file name, ignoring the case, like
-rw-r--r-- 1 mtk mtk 0 Sep 21 08:12 file1abc.txt
-rw-r--r-- 1 mtk mtk 0 Sep 21 08:12 file2ABC.txtNote
I have searched the man page for case, but couldn't find anything.
| How to match case insensitive patterns with ls? |
From your other questions I take it you're using OS X. The default HFS+ filesystem on OS X is case-insensitive: you can't have two files called "abc" and "ABC" in the same directory, and trying to access either name will get to the same file. The same thing can happen under Cygwin, or with case-insensitive filesystems (like FAT32 or ciopfs) anywhere.
Because grep is a real executable, it's looked up on the filesystem (in the directories of PATH). When your shell looks in /usr/bin for either grep or GREP it will find the grep executable.
Shell builtins are not looked up on the filesystem: because they're built in, they are accessed through (case-sensitive) string comparisons inside the shell itself.
What you're encountering is an interesting case. While cd is a builtin, accessed case-sensitively, CD is found as an executable /usr/bin/cd. The cd executable is pretty useless: because cd affects the current shell execution environment, it is always provided as a shell regular built-in, but there is a cd executable for POSIX's sake anyway, which changes directory for itself and then immediately terminates, leaving the surrounding shell where it started.
You can try these out with the type builtin:
$ type cd
cd is a shell builtin
$ type CD
CD is /usr/bin/CDtype tells you what the shell will do when you run that command. When you run cd you access the builtin, but CD finds the executable. For other builtins, the builtin and the executable will be reasonably compatible (try echo), but for cd that isn't possible.
|
Why is this?
When I do this
CD ~/Desktop

It doesn't take me to the Desktop. But this:
echo "foo
bar" | GREP bargives me:
bar | Why can Shell builtins not be run with capital letters but other commands can? |
Here, you're running:
ls te*Using a feature of your shell called globbing or filename generation (pathname expansion in POSIX), not of the Linux system nor of any filesystem used on Linux.
te* is expanded by the shell to the list of files that match that pattern.
To do that, the shell requests the list of entries in the current directory from the system (typically using the readdir() function of the C library, which underneath will use a system-specific system call (getdents() on Linux)), and then matches each name against the pattern.
And unless you've configured your shell to do that matching case insensitively (see nocaseglob options in zsh or bash) or use glob operators to toggle case insensitivity (like the (#i) extended glob operator in zsh), te* will only expand to the list of files whose name as reported by readdir() starts with te, even if pathname resolution on the system or file system underneath is case insensitive or can be made to be like NTFS.
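For example, with bash's nocaseglob option, and assuming the Test/temp layout from the question (the listing order may vary with your locale):

$ shopt -s nocaseglob
$ ls -d te*
Test temp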
|
I just read the following sentence:Case Sensitivity is a function of the Linux filesystem NOT the Linux operating system.What I deduced from this sentence is if I'm on a Linux machine but I am working with a device formatted using the Windows File System, then case sensitivity will NOT be a thing.
I tried the following to verify this:
$ ~/Documents: mkdir Test temp$ ~/Documents: touch Test/a.txt temp/b.txt$ ~/Documents: ls te*
b.txt

And it listed only the files within the temp directory, which was expected because I am inside a Linux Filesystem.
When I navigated to a Windows File System (NOTE: I am using WSL2), I still get the same results, but I was expecting it to list files inside both directories ignoring case sensitivity.
$ /mnt/d: mkdir Test temp$ /mnt/d: touch Test/a.txt temp/b.txt$ /mnt/d: ls te*
b.txt

I tried it with both bash and zsh.
I feel that it's somehow related to bash (or zsh), because I also read that bash enforces case sensitivity even when working with case insensitive filesystems.
This test works on Powershell, so it means that the filesystem is indeed case insensitive.
| What does “Case sensitivity is a function of the Linux filesystem not the Linux operating system” mean? |
There isn't a native feature of :s that does this as far as I know, but if you're willing to install add-ons, you could look at Michael Geddes' keepcase plugin.
|
In vim, I know I can search with or without case sensitivity. But if I want to search for a string in either upper or lower case, and replace it with a replacement of the same case, is that possible in a single :s///?
For example, I want to change these lines:
short
Short
SHORT

to
long
Long
LONG

I can do this in three :s commands, or one insensitive :s and go fix the cases manually, but is there a better way? A case-preserving search and replace?
| Case-preserving search and replace in vim? |
I don't know whether your unix-flavor has a rename. Many Linuxes have, and it is part of a perl-package, if you search for a repository.
find ./ -depth -exec rename -n 'y/[A-Z]/[a-z]/' {} ";"

The version above with rename -n doesn't really perform the action, but only prints what would be done. Omit the -n to do it for real.
|
I'm working on a website conversion. The files as they were linked and served from the web server were case insensitive. But, I've made a dump of the site on my linux system and I'm writing scripts to migrate data. The problem is that I'm running into case sensitivity problems between link strings in the pages and the actual word case on the file system.
For instance, a page might have a link like <a href='/subfolder/PageName.asp'> while the actual file is /subfolder/pagename.asp. Likewise with images — <img src='spacer_sm.gif'> might be Spacer_Sm.gif.
So my thought is to change all directory and filenames to their lower-case equivalents for the site download. How do I do this (and might there be a better way?)
Even if there are unix commands that have case-insensitve switches, I'm using php, so not all of the filesystem commands there have options for case sensitivity.
| change entire directory tree to lower-case names |
A case-insensitive filesystem just means that whenever the filesystem has to ask "does A refer to the same file/directory as B?" it compares the names of files/directories ignoring differences in upper/lowercase (exactly what upper/lowercase differences count depends on the filesystem—it's non-obvious once you get beyond ASCII). A case-sensitive filesystem does not ignore those differences.
A case-preserving filesystem stores file names as given. A non-case-preserving filesystem does not; it'll typically convert all letters to uppercase before storing them (theoretically, it could use lowercase, or RaNsOm NoTe case, or whatever, but AFAIK all real-world ones used uppercase).
You can put those two attributes together in any combination. I'm not sure if you can find non-case-preserving case-sensitive filesystems, but you could certainly create one. All the other combinations exist or existed in real systems, though.
So a case-preserving, case-insensitive filesystem (the most common type of case-insensitive filesystem nowadays) will store and return file names in whatever capitalization you created them or last renamed them, but when comparing two file names (to check if one exists, to open one, to delete one, etc.) it'll ignore case differences.
When you use a case-insensitive filesystem on a Unix box, various utilities will do weird things because Unix traditionally uses case-sensitive filesystems—so they're not expecting Document1 and document1 to be the same file.
In the pwd case, what you're seeing is that it by default just outputs the path you actually used to get to the directory. So if you got there via cd DirName, it'll use DirName in the output. If you got there via DiRnAmE, you'll see DiRnAmE in the output. Bash does this by keeping track of how you got to your current directory in the $PWD environment variable. Mainly this is for symlinks (if you cd into a symlink, you'll see the symlink in your pwd, even though it's actually not part of the path to your current directory). But it also gives the somewhat weird behavior you observe on case-insensitive filesystems. I suspect that pwd -P will give you the directory name using the case stored on disk, but haven't tested.
|
This question occurred to me the other day when I was working on a development project that relies on an opinionated framework with regard to file names. The framework (irrelevant here) wanted to see upper-case-first filenames. This got me thinking.
On a case-insensitive file system, say extFAT or HFS+ (specifically non-case sensitive) how does the file system provide access to the same file with both upper and lower case versions of the filename.
For example:
$ cd ~/Documents
$ pwd
/home/derp/Documents

$ cd ../documents
$ pwd
/home/derp/documents

$ cd ../docuMents
$ pwd
/home/derp/docuMents

$ cd ../DOCUMENTS
$ pwd
/home/derp/DOCUMENTS

$ cd ../documentS
$ pwd
/home/derp/documentS

All of these commands will resolve to the same directory. Is this behavior, specifically the output from pwd, just a function of bash in this case showing me what it thinks I want to see?
Another example:
$ ls ~/Documents
Derp.txt another.txt whatThe.WORLD

The filesystem here reports the case of the original filename as created by the user or program.
At what point in the filesystem stack is the human readable filename preserved as it was created (eg. upper and lower case) so that it can be accessed by any combination of the correct upper and lowercase ASCII characters? Is this just a regex trick somewhere or is there something else going on?
EDIT:
After some more research, it looks like the behavior I am curious about is found in case-preserving case-insensitive filesystems...
| How do case-insensitive filesystems display both upper and lower case file names? |
Man is calling Less; the only control at the man level is choosing which options to call Less with.
Less's search case-sensitivity is controlled by two options.

If -I is in effect, then searches are case-insensitive: either a or A can be used to match both a and A.
If -i is in effect but not -I, then searches are case-insensitive, but only if the pattern contains no uppercase letter.

If you make -I a default option for Less, then all searches will be case-insensitive even in man pages.
Man-db passes extra options to the pager via the LESS environment variable, which Less interprets in the same way as command line options. The setting is hard-coded at compile time and starts with -i. (The value is "-ix8RmPm%s$PM%s$" as of Man-db 2.6.2; the P…$ part is the prompt string.)
If you don't want searches in man pages to be case-sensitive, or if you want them to be always case-insensitive, there is no way to configure this in man-db itself. You can make an alias for man or a wrapper script that manipulates the LESS enviroment variable, as Man-db prepends its content to the current value if present:
alias man='LESS="$LESS -I" man'To turn off the -i option and thus make searches always case-sensitive by default in man pages:
alias man='LESS="$LESS -+i" man'You can also hard-code a different value for LESS by setting the MANLESS environment variable, but if you do that, then man just sets LESS to the value of MANLESS, you lose the custom title line (“Manual page foo(42)”) and other goodies (in particular, make sure to include -R for bold and underline formatting).
|
When I search man pages, the search is case sensitive, but only with regard to upper case letters. E.g., x will match x and X whereas X only matches X. This is the man-db version of man, used on fedora derived systems by default and available on others. man man says the default pager is less -s. $LESS is not defined in the environment, my $PAGER is just less, and I have no aliases for less.
This is not the behaviour when I invoke less on its own.
Is there any way to force lowercase x to match only lowercase x when using man?
| Can I force `man` to do lower case sensitive matching? |
I probably would define an alias with my options, e.g.:
alias grep="grep --ignore-case --color"as this would only affect interactive programs and not scripts. You could then just run \grep or /bin/grep to run it without any options.
If you want to keep using GREP_OPTIONS you can just unset it for your commandline, e.g.
GREP_OPTIONS= grep .... |
I have set GREP_OPTIONS="--ignore-case --color" in ~/.bashrc as I normally want grep to work case-insensitively. However, there are times when I need grep to actually search case-sensitively, but the man page doesn't suggest a param for this.
How can I achieve this?
| grep: Ignoring GREP_OPTIONS to search case-sensitive |
With the -q flag the grep program will stop immediately when the first line of data matches.
However pip may still be trying to send data into the pipe. It will receive a SIGPIPE. And that causes the error traceback.
With the -i flag it's possible that the grep process is stopping sooner (earlier match), before pip has finished writing out the results.
Normally you shouldn't use -q in a pipeline like this unless you are sure the program at the other end can handle SIGPIPE.
So pip list | grep -i $packagename will work without error.
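Alternatively, if all you want is the exit status, you can let grep read the whole stream and just discard its output -- pip then never sees a broken pipe:

if pip list | grep -i "$packagename" > /dev/null; then
    echo "$packagename is installed"
fi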
|
I'm trying to see if a certain python-library is installed by grepping the output of pip list. If I try this
pip list | grep -q $package, it works fine. If I try pip list | grep -qi $package, I get the following error output
pi@pibox:~ $ pip list | grep -i -q pyyaml
Traceback (most recent call last):
File "/usr/bin/pip", line 9, in <module>
load_entry_point('pip==1.5.6', 'console_scripts', 'pip')()
File "/usr/lib/python2.7/dist-packages/pip/__init__.py", line 248, in main
return command.main(cmd_args)
File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 156, in main
logger.fatal('Exception:\n%s' % format_exc())
File "/usr/lib/python2.7/dist-packages/pip/log.py", line 111, in fatal
self.log(self.FATAL, msg, *args, **kw)
File "/usr/lib/python2.7/dist-packages/pip/log.py", line 164, in log
consumer.flush()
IOError: [Errno 32] Broken pipethis seems to be an error on the python side of things, what would the grepm flag to ignore case have to do with pip's ability to send information down a pipe?
This is on a Raspberry Pi 3 running pip 1.5.6 from /usr/lib/python2.7/dist-packages (python 2.7) and grep (GNU grep) 2.20.
| Broken pipe when grepping output, but only with -i flag |
According to the POSIX specification:

The system may provide non-standard extensions. These are features not
required by POSIX.1-2008 and may include, but are not limited to:

--snip--

Non-conforming file systems (for example, legacy file systems for which _POSIX_NO_TRUNC is false, case-insensitive file systems, or network file systems)

--snip--
So it looks like case sensitivity is the norm, but it is possible to support a non-compliant (case-insensitive) file system and still call your product UNIX as long as it can also support case-sensitive file systems.
(edit)
Actually, see this part of the specification:

Two proposals were entertained regarding case folding in filenames:

1. Remove all wording that previously permitted case folding.

Rationale
Case folding is inconsistent with the portable filename character set and filename definitions (all bytes except <slash> and null). No known implementations allowing all bytes except <slash> and null also do case folding.

2. Change "though this practice is not recommended:" to "although this practice is strongly discouraged."

Rationale
If case folding must be included in POSIX.1, the wording should be stronger to discourage the practice.

The consensus selected the first proposal. Otherwise, a conforming application would have to assume that case folding would occur when it was not wanted, but that it would not occur when it was wanted.

So it looks like it is purposely left ambiguous: it is neither explicitly permitted nor forbidden.
|
One answer to this question mentions the UNIX 03 certification of OSX. Now AFAIK the standard file system of OSX is/was HFS, which "saves the case of a file that is created or renamed but is case-insensitive in operation" (i.e. it's case-preserving but case-insensitive).
Does the UNIX certification or POSIX require a case-sensitive file system?
| Does the UNIX standard require case-sensitive filesystems? |
I like your approach; it's clean in that it doesn't require modification of the data on your main system.
And, yes, I think that if you want to run tune2fs then, by a large margin, the easiest solution is to run it from a running Linux; there's no real way around having to run it while the main file system isn't mounted.
I don't think your network setup is of any significance – you know exactly what you want your system to do; preconfiguring network to give you an SSH shell into it is going to be harder than just running tune2fs … /dev/disk/by-partuuid/… in a script that's autonomously executed (and which then moves on to do what is needed to boot your normal system).
Now, two options:

Your debian currently boots using an initrd containing an initramfs (I expect it does)
It doesn't.

In the first case, modifying that initrd generation process to include the necessary tune2fs invocation, generating a new initrd, and booting with that, is probably the easiest. Mind you, initrds are really what you want to avoid building: custom fully-fledged Linux systems (which just happen to be Linux distros' ways to initialize the system before mounting the root file system and continuing the main boot process). It's just that debian already builds these for you, anyways :)
I must admit it's been a decade (or more) since I did something like that for a debianoid Linux, so I'm not terribly much of a help on how; check out debian's (sadly seemingly a bit sparse/outdated) documentation on it, and see what you have in /etc/mkinitrd.
In the second case, your approach seems sensible.
|
I need to enable the case insensitive filesystem feature (casefold) on ext4 of a Debian 11 server with a backported 6.1 linux kernel with the required options compiled in.
The server has a swap partition of 2GB and a big ext4 partition for the filesystem, which it also boots from. I only have ssh access as root and cannot access the physical/virtual host itself, so I don't have access to (virtual) usb sticks or cdrom media.
What is the fastest way to enable the casefold feature? tune2fs doesn't want to do it because the filesystem is mounted.
Idea: Drop the swap, install a small rescue system in it, reboot into said rescue system, change the filesystem options of the root partition, reboot into the live partition and restore the swap. For this to work however I need to prepare an extra linux system just to do the tune2fs command needed.
Is there a better way? Any rescue systems I can already use and preconfigure for the required network settings after a reboot?
| How to change the casefold ext4 filesystem option of the root partition, if I only have ssh access |
Standard sh
No need to use that ksh-style [[...]] command, you can use the standard sh case construct here:
case $LINUX_CONF in
([Nn][Oo]) echo linux;;
(*) echo not linux;;
esac

Or naming each possible case individually:
case $LINUX_CONF in
(No | nO | NO | no) echo linux;;
(*) echo not linux;;
esac

bash
For a bash-specific way to do case-insensitive matching, you can do:
shopt -s nocasematch
[[ $LINUX_CONF = no ]] && echo linux

Or:
[[ ${LINUX_CONF,,} = no ]] && echo linux

(where ${VAR,,} is the syntax to convert a string to lower case).
You can also force a variable to be converted to lowercase upon assignment with:
typeset -l LINUX_CONF

That also comes from ksh and is also supported by bash and zsh.
More variants with other shells:
zsh
set -o nocasematch
[[ $LINUX_CONF = no ]] && echo linux

(same as in bash).
set -o extendedglob
[[ $LINUX_CONF = (#i)no ]] && echo linux

(less dangerous than making all matches case insensitive)
[[ ${(L)LINUX_CONF} = no ]] && echo linux
[[ $LINUX_CONF:l = no ]] && echo linux

(convert-to-lowercase operators)
set -o rematchpcre
[[ $LINUX_CONF =~ '^(?i)no\z' ]]

(PCRE syntax)
ksh93
[[ $LINUX_CONF = ~(i)no ]]

or
[[ $LINUX_CONF = ~(i:no) ]]

Note that all approaches above other than [nN][oO] to do case-insensitive matching depend on the user's locale. Not all people around the world agree on what the uppercase version of a given letter is, even for ASCII ones.
In practice for the ASCII ones, at least on GNU systems, the deviations from the English rules seem to be limited to the i and I letters and whether the dot is there or not on the uppercase or lowercase version.
What that means is that [[ ${VAR,,} = oui ]] is not guaranteed to match on OUI in every locale (even when the bug in current versions of bash is fixed).
|
This syntax prints "linux" when variable equals "no":
[[ $LINUX_CONF = no ]] && echo "linux"How would I use regular expressions (or similar) in order to make the comparison case insensitive?
| bash - case-insensitive matching of variable |
It's explained in the man page for less.
The default action for REs is to ignore case if there are no uppercase characters present, but to act case-sensitively otherwise.
There are three modes available within less:

Case context dependent: a search or RE without uppercase characters is considered to be case-insensitive, but a search or RE containing at least one uppercase character is considered to be case-sensitive. Examples: abc will match abc and aBC, but aBc will only match aBc and not abc or ABC. This is the default setting.
Case sensitive: a search or RE pays full regard to the case of any letter. Example: abC will match only abC and not abc or ABC.
Case insensitive: a search or RE pays no regard to the case of any letter. Example: abC will match any of abc, abC, or ABC.

You can toggle case-sensitive comparisons with -I, and case-context-sensitive comparisons with -i.
The control can be specified in three ways:

On the command line, for example less -I bigfile.txt.
In the environment, for example export LESS=-i and later less bigfile.txt.
Within less itself, for example by starting less bigfile.txt and then typing -i. |
I'm trying to use a regular expression in the man page of Bash by using less.
I press / in less to enter a pattern, and I type z and press Enter. I expected it not to match upper-case z (Z), but it does.
How do I make it not match Z? What kind of regular expressions are these that are not case sensitive?
| Using regular expressions in "less" |
First you need recent enough software:

Linux kernel >= 5.2 for the kernel-side support in EXT4
userland tools: e2fsprogs >= 1.45 (eg: on Debian 10 which ships only version 1.44 this requires buster-backports). Provides among others mke2fs (alias mkfs.ext4), tune2fs and chattr.
UPDATE:

e2fsprogs >= 1.45.7 needed to allow enabling casefold using tune2fs on an unmounted filesystem after it was created without it.
e2fsprogs >= 1.46.6 needed to allow disabling casefold using tune2fs after it was enabled, and only if no directory still has the +F flag.
To also use filesystem encryption, Linux kernel >= 5.13 is required.

With this installed, the documentation from man ext4 does reflect the existence of this feature:

casefold
This ext4 feature provides file system level character encoding support for directories with the casefold (+F) flag enabled. This
feature is name-preserving on the disk, but it allows applications to
lookup for a file in the file system using an encoding equivalent
version of the file name.The casefold feature must first be enabled as a filesystem-wide ext4 option. Sadly, I couldn't manage to enable it on an already formatted filesystem. So using a sparse file created with dd if=/dev/zero of=/tmp/image.raw bs=1 count=1 seek=$((2**32-1)) to test on a newly created filesystem.
# tune2fs -O casefold /tmp/image.raw
tune2fs 1.45.3 (14-Jul-2019)
Setting filesystem feature 'casefold' not supported.
#
UPDATE: Since this commit it's possible to use tune2fs to enable casefold on an unmounted filesystem. When this answer was written this feature was not yet available:
# tune2fs -O casefold /tmp/image.raw
tune2fs 1.47.0 (5-Feb-2023)
#
So when formatting, this will enable the feature:
# mkfs.ext4 -O casefold /tmp/image.raw
or to specify another encoding rather than the default (utf8). It appears that currently there is only utf8-12.1, of which utf8 is an alias anyway:
# mkfs.ext4 -E encoding=utf8-12.1 /tmp/image.raw
You can verify what was done with tune2fs:
# tune2fs -l /tmp/image.raw |egrep 'features|encoding'
Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent 64bit flex_bg casefold sparse_super large_file huge_file dir_nlink extra_isize metadata_csum
Character encoding: utf8-12.1
Now to use the feature:
# mount -o loop /tmp/image.raw /mnt
# mkdir /mnt/caseinsensitivedir
# chattr +F /mnt/caseinsensitivedir
# touch /mnt/caseinsensitivedir/camelCaseFile
# ls /mnt/caseinsensitivedir/
camelCaseFile
# ls /mnt/caseinsensitivedir/camelcasefile
/mnt/caseinsensitivedir/camelcasefile
# mv /mnt/caseinsensitivedir/camelcasefile /mnt/caseinsensitivedir/Camelcasefile
mv: '/mnt/caseinsensitivedir/camelcasefile' and '/mnt/caseinsensitivedir/Camelcasefile' are the same file |
I saw that kernel 5.2 got handling of ext4 case-insensitivity per directory by flipping a +F bit in the inode.
This EXT4 case-insensitive file-name lookup feature works on a
per-directory basis when an empty directory is enabled by flipping the
+F inode attribute.
https://www.phoronix.com/scan.php?page=news_item&px=EXT4-Case-Insensitive-Linux-5.2
But how do I do that? Does chmod handle it? My distribution's doesn't look like it.
So how do I use this feature?
| How to enable new in kernel 5.2 case-insensitivity for ext4 on a given directory? |
They are certainly case sensitive in the glibc resolver libraries. Note the use of strncmp (case sensitive compare) rather than strncasecmp (case insensitive compare) in the MATCH function within glibc res_init.c.
This code is responsible for reading + parsing the /etc/resolv.conf file.
#define MATCH(line, name) \
(!strncmp(line, name, sizeof(name) - 1) && \
(line[sizeof(name) - 1] == ' ' || \
line[sizeof(name) - 1] == '\t'))

 if ((fp = fopen(_PATH_RESCONF, "rce")) != NULL) {
/* No threads use this stream. */
__fsetlocking (fp, FSETLOCKING_BYCALLER);
/* read the config file */
while (fgets_unlocked(buf, sizeof(buf), fp) != NULL) {
/* skip comments */
if (*buf == ';' || *buf == '#')
continue;
/* read default domain name */
if (MATCH(buf, "domain")) {
if (haveenv) /* skip if have from environ */
continue;
cp = buf + sizeof("domain") - 1;
Further, a quick example showing how lookup breaks with NAMESERVER rather than nameserver:
# cat /etc/resolv.conf
options timeout:2 attempts:5
; generated by /sbin/dhclient-script
search eu-west-1.compute.internal
nameserver 172.31.0.2
# getent hosts www.google.com
2a00:1450:400b:802::2004 www.google.com
# sed -i 's/nameserver/NAMESERVER/' /etc/resolv.conf
# getent hosts www.google.com
# |
Looking around I have found out the following about valid /etc/resolv.conf formatting:
Trailing whitespace is allowed
Leading whitespace is NOT allowed
DNS records are case insensitive, though you may have weird issues in applications that lowercase everything
However, I can't find anywhere whether the resolv.conf keywords are case insensitive or case sensitive. They usually seem to be lowercase, but do they have to be? Is it an error if I find a server where they are in uppercase?
A google search turns up this forum thread, where a code example seems to indicate that the keywords are case insensitive. However, there is no link to any authoritative documentation.
Are /etc/resolv.conf keywords (such as nameserver) case sensitive?
| Are keywords in resolv.conf case sensitive? |
You need to revert two settings, nocaseglob and nocasematch:
The documentation (man bash) writes,
nocaseglob If set, bash matches filenames in a case-insensitive fashion when performing pathname expansion […]
nocasematch If set, bash matches patterns in a case-insensitive fashion when performing matching while executing case or [[ conditional commands, when performing pattern substitution word expansions, or when filtering possible completions as part of programmable completion.
For my Cygwin environment these are both set off (although arguably both should be on for Windows-centric use of an NTFS filesystem, which expects case-memorable but case-insensitive behaviour*):
shopt | grep case
nocaseglob off
nocasematch off
If these are set on in your environment and you want them disabled (so that comparisons are made case-sensitively), you may want to try this command:
shopt -u nocasematch nocaseglob
I should point out, though, that this may not work. An NTFS filesystem mounted on a Linux-based OS is case-sensitive (I can create and access Abc, aBC, and abc as separate files). That same filesystem on a Windows system is treated as case-insensitive and only one of the three files is accessible from File Explorer or Cygwin (an attempt to open any of the three files returns just the "first" one). I suspect this level of detail is beyond your requirement but I'm adding it in case it's useful.
Additional information from Cygwin, which I have not tested as it seems to enforce the opposite configuration to the one you want:
Case sensitive filenames
In the Win32 subsystem filenames are only case-preserved, but not case-sensitive. You can't access two files in the same directory which only differ by case, like Abc and aBc. While NTFS (and some remote filesystems) support case-sensitivity, the NT kernel does not support it by default. Rather, you have to tweak a registry setting and reboot. For that reason, case-sensitivity can not be supported by Cygwin, unless you change that registry value.
If you really want case-sensitivity in Cygwin, you can switch it on by setting the registry value HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\kernel\obcaseinsensitive
to 0 and reboot the machine.
As a different angle, Cygwin autocompletes case-sensitively. If I have a directory Downloads and a directory dances, then typing ls -d D followed by Tab only offers the one directory, which I think is the behaviour that you want. If you're not absolutely wedded to Git Bash you might want to try Cygwin with its git package.
* NTFS itself is case-sensitive. However, when used in a Windows environment it appears to be case-memorable but case-insensitive, so that's what we have to work with.
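A further thought (an assumption on my part, untested in Git Bash): filename Tab completion is governed by readline rather than by these shell options, and Git for Windows reportedly ships with completion-ignore-case enabled. Turning it off per session, or persistently in ~/.inputrc, may be what you actually need:
# per-session test:
bind 'set completion-ignore-case off'
# persistent, in ~/.inputrc:
set completion-ignore-case off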
|
I use Git Bash on Windows for numerous bash tasks beyond Git itself. It has worked well for years, but I cannot turn off the case-insensitive behavior. Auto-complete is cumbersome this way.
I tried the flag
shopt -u nocasematch
in the .bashrc as described in solution #3, but it does not resolve the issue.
Neither solution #1 or #2 are an option for what I am looking for, as I have aliases uppercase, directories mixed cases and some files all lowercase and it is too many cases to apply a case-map.
Is there any way to enable case-sensitive autocomplete in my non-default Linux shell?
Since it is a different flavor, I would appreciate some hints or suggestions.
| How can I turn off git bash for Windows case-insensitive behavior? |
There is a workaround for your problem.
Try:
bind 'set completion-ignore-case on'
bind 'TAB:menu-complete'
bind 'set menu-complete-display-prefix on'
bind 'set show-all-if-ambiguous on'
Type cd h, then Tab. The line expands to cd hello.
Then type _, Tab. Line expands to cd Hello_StackOverflow
Press Tab,Tab. Line expands to cd Hello_STACKOVERFLOW/
Explanation:
menu-complete Similar to complete, but replaces the word to be completed with a single match from the list of possible completions. Repeated execution of menu-complete steps through the list of possible completions, inserting each match in turn. At the end of the list of completions, the bell is rung (subject to the setting of bell-style) and the original text is restored. This command is intended to be bound to TAB, but is unbound by default. Available since bash-2.02-alpha1.
menu-complete-display-prefix If set to On, menu completion displays the common prefix of the list of possible completions (which may be empty) before cycling through the list. Available since bash-4.2-alpha.
show-all-if-ambiguous This alters the default behavior of the completion functions. If set to On, words which have more than one possible completion cause the matches to be listed immediately instead of ringing the bell. Works together with menu-complete-display-prefix since bash-4.3-alpha.
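To make these settings persistent rather than per-session, the equivalent readline configuration can presumably go into ~/.inputrc (the bind commands above only affect the current shell):
set completion-ignore-case on
set menu-complete-display-prefix on
set show-all-if-ambiguous on
TAB: menu-complete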
|
Recently I faced an inconvenience when using bash auto-complete with ignore-case.
Let's say I have these directories:
[xiaobai@xiaobai test]$ l
total 20K
3407873 drwx------. 60 xiaobai xiaobai 4.0K May 25 17:17 ../
3409017 drwxrwxr-x. 2 xiaobai xiaobai 4.0K May 25 17:35 hello/
3681826 drwxrwxr-x. 2 xiaobai xiaobai 4.0K May 25 17:55 Hello_STACKOVERFLOW/
3681837 drwxrwxr-x. 2 xiaobai xiaobai 4.0K May 25 17:55 Hello_StackOverflow/
3412549 drwxrwxr-x. 5 xiaobai xiaobai 4.0K May 25 17:56 ./
[xiaobai@xiaobai test]$
then cd h[Tab]:
[xiaobai@xiaobai test]$ cd h #and press [Tab]
hello/ Hello_StackOverflow/ Hello_STACKOVERFLOW/
[xiaobai@xiaobai test]$ cd hello #auto generate
You will notice the new command prompt comes with 'hello' auto-completed while the 'Hello' alternatives exist. But here I have no problem, because I can either insert / OR press [Enter] to go inside hello/, or I can insert an underscore _ and press [Tab] to go further into Hello_*:
[xiaobai@xiaobai test]$ cd hello_ #and press [Tab]
Hello_StackOverflow/ Hello_STACKOVERFLOW/
[xiaobai@xiaobai test]$ cd Hello_StackOverflow #auto generate
Now the problem becomes obvious: what if my target was 'Hello_STACKOVERFLOW/'? I have to press Backspace to delete 'tackOverflow' and then insert T+[Tab] to reach my target.
What I want is:
[xiaobai@xiaobai test]$ cd hello_ #and press [Tab]
Hello_StackOverflow/ Hello_STACKOVERFLOW/
[xiaobai@xiaobai test]$ cd Hello_S #without 'tackOverflow', so I just have to type T+[Tab] without a redundant erase step.
Of course it wouldn't have this problem with completion-ignore-case off in my inputrc file. But I'd like to ignore case yet avoid auto-completing when ambiguous. Is it possible to do that?
| bash - ignore case but disallow autocomplete if ambiguous |
Try grep:
grep -iv dog inputfile
-i to ignore case and -v to invert the matches.
If you want to use sed you can do:
sed '/[dD][oO][gG]/d' inputfile
GNU sed extends pattern matching with the I modifier, which should make the match case-insensitive, but this does not work in all flavors of sed. For me, this works:
sed '/dog/Id' inputfile
but it won't work on OS X.
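If you'd rather sidestep the sed portability issue entirely, awk offers another portable route (a sketch doing the same case-insensitive filtering):
awk 'tolower($0) !~ /dog/' inputfile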
|
I have a file that contains information as so:
20 BaDDOg
31 baddog
42 badCAT
43 goodDoG
44 GOODcAT
and I want to delete all lines that contain the word dog. This is my desired output:
42 badCAT
44 GOODcAT
However, the match for dog should be case-insensitive.
I thought I could use a sed command: sed -e "/dog/id" file.txt, but I can't seem to get this to work. Does it have something to do with me working on OS X? Is there any other method I could use?
| delete line that contains a case insensitive match |
There are patches currently under development to implement case insensitivity for ext4.
https://lwn.net/Articles/762826/
https://marc.info/?l=linux-ext4&m=154430575726827&w=2
They were included in the Linux 5.2 kernel, and also require e2fsprogs-1.45 to work. See How to enable new in kernel 5.2 case-insensitivity for ext4 on a given directory?
|
With NTFS you can enable or disable case sensitivity. Is there a way to do it with ext4 in Linux?
| Is it possible to disable ext4 case sensitivity? |
One way is to use the alias shell builtin, for example:
alias Python='python'
alias PYTHON='python'
alias pyThoN='python'For a better approach, the command_not_found_handle() function can be used as described in this post: regex in alias.
For instance, this will force all the commands to lowercase:
command_not_found_handle() {
LOWERCASE_CMD=$(echo "$1" | tr '[A-Z]' '[a-z]')
shift
command -p $LOWERCASE_CMD "$@"
return $?
}
Unfortunately it does not work with builtin commands like cd.
Also (if you have Bash 4.0) you can add a tiny function in your .bashrc to convert uppercase commands to lowercase before
executing them. Something similar to this:
function :() {
"${1,,}"
}
Then you can run the command by calling : Python on the command line.
NB as @cas mentioned in the comments, : is a reserved bash word. So to avoid inconsistencies and issues you can replace it with c or something not already reserved.
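Following that advice, a minimal sketch of such a function (c is an arbitrary, unreserved name) that also forwards any remaining arguments:
c() {
    "${1,,}" "${@:2}"
}
# usage: c PYTHON --version runs python --version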
|
Is it possible for bash to find commands in a case-insensitive way?
e.g. these command lines will always run python:
python
Python
PYTHON
pyThoN
| bash case-insensitive commands matching |
Here's how I would approach it:
Create a function that generates globs for filenames based on the requirement (any character could show up as upper- or lower-case).
Modify the loop to have scp use the glob as the remote filename, and the already lower-cased filename as the local filename.
This will create the same one scp connection per file, per computer as you do currently, but the globbing will pick up the remote file, no matter how it is "cased".
Here's the (bash-specific) function:
function ul {
# for each character in $1, convert it to upper and lower case, then
# enclose it in [ ]
out=
for (( i=0; i< ${#1}; i++ ))
do
c=${1:$i:1}
if [[ "$c" =~ ^[[:alpha:]]$ ]]
then
uc=${c^}
lc=${c,}
out="${out}[${uc}${lc}]"
else
out="${out}${c}"
fi
done
printf "%s" "$out"
}
So you put that into the same script, or in some common area that gets sourced.
To demonstrate its usage:
$ g=$(ul system.dbf)
$ echo "$g"
[Ss][Yy][Ss][Tt][Ee][Mm].[Dd][Bb][Ff]
For step 2, this is how I modified your inner loop:
for file_name in ${file_list[@]}; do
g=$(ul "$file_name")
remote_file=${remote_path}${computer_name}/${dow}/CustomerData/system/${g}
local_file=${working_directory}${file_name}
echo $local_file
scp -i $ID $USER@$HOST:$remote_file $local_file
chmod 0777 ${local_file}
done
I added the g= assignment as well as the remote_file assignment (at the end of the line).
|
I have a directory that contains the backup of many computers using NTFS file system.
/backup/REP1/database
/backup/REP2/database
I now want to scp from the backup file server to the database server; both are running Ubuntu 14.
Inside the backup directories are Visual FoxPro files that are not all the same case, but are the same name. There are other files in the backup directory that I do not want to scp.
/backup/REP1/database/usersupport.DBF
/backup/REP1/database/System.dbf
/backup/REP2/database/UserSupport.dbf
/backup/REP2/database/system.dbfIn my bash script I am using 2 loops to create the remote path and file names.
computer_list=(REP1 REP2 REP3 REP4 REP5 REP6 REP7 REP8 REP9 REP10 REP11 REP12 REP13 REP14 REP15 REP16)
file_list=(usersupport.cdx usersupport.dbf usersupport.fpt system.dbf)

for computer_name in ${computer_list[@]}; do
## delete working dir
delete_working_dir
for file_name in ${file_list[@]}; do
remote_file=${remote_path}${computer_name}/${dow}/CustomerData/system/${file_name}
local_file=${working_directory}${file_name}
#echo $remote_file
echo $local_file
# scp -i $ID $USER@$HOST:$remote_file $local_file > /dev/null 2>&1
scp -i $ID $USER@$HOST:$remote_file $local_file
# change database file permissions
chmod 0777 ${local_file}
done
# process mysql
process_mysql
## delete working dir
delete_working_dir
done
The command scp will not copy the source file if the case is not the same.
What would be the correct or easiest way to get the source file regardless of case?
I did try shopt -s nocasematch, but no go.
Can I use substitution on the remote file name?
[:lower]
This user uses this
scp -B -p ${Auser}@${aSrcHOST}:${aSrcDIR}/*.[Oo][Kk] $aTgtDIR
So I believe the substitution might work. I am not sure of the syntax.
| How to copy with scp a file that case is unknown via bash script |
[A-Z] doesn't mean upper case. It means letters from A to Z, which may include lower-case letters. Usually you should use [[:upper:]] instead. (This works in Bash even without extglob.)
What characters [A-Z] matches depends on your locale.
You have clarified that you want to show all filenames that contain at least one upper-case character anywhere, not only filenames consisting entirely of upper case, but that when you use ls *[A-Z]*, you get some filenames that don't have any upper-case characters in them.
This happens when your locale's lexicographic ordering intersperses upper- and lower-case letters (e.g., AaBbCcDd...). Although you can set another locale (e.g., LC_ALL=C), the best solution is usually to write a pattern that specifically matches upper-case letters.
Which characters are upper-case letters may also vary between locales, but presumably if something is an upper-case letter in your locale then you want to include it. So that's probably an advantage of [[:upper:]] rather than a disadvantage.
Use [[:upper:]] instead.
Most Bourne-style shells, such as Bash, support POSIX character classes in globs. This command will list entries in /etc whose names have at least one upper-case letter:
ls -d /etc/*[[:upper:]]*Some of the entries you get may be directories. If you want to show their contents rather than just list the directories, then you can remove the -d flag. You may also want to put a -- flag before the pattern, in case you have entries in /etc that begin with -. You probably don't, though. (In a script, you will usually want to use -- here.)
You probably don't want dotfiles, but if you do...
This will not show entries that start with .. Usually you don't want to show them. If you do want them, most shells allow you to write a single glob that also matches them or to configure globbing to include them by default. The option to automatically include leading-. entries in Bash is dotglob and it can be enabled with shopt -s dotglob. For other shells, consult their documentation. Or you can simply write a second glob for them:
ls -d /etc/*[[:upper:]]* /etc/.*[[:upper:]]*Most popular Bourne-style shells support brace expansions, so you can write this more compactly with less repetition:
ls -d /etc/{,.}*[[:upper:]]*In most shells including Bash, when you write two separate globs, you'll get an error message when either one does not expand--because the default behavior in most shells is to pass it unexpanded. But ls will still show the entries that matched the other one. But as Stéphane Chazelas has pointed out, in some shells including the very popular Zsh, the whole command fails and ls is never run. If you're using the shell interactively this is not really harmful, because you can modify the command run it again, but such constructions are unsuitable for portable scripts. Bash will also behave this way if you set the failglob shell option.
You don't need extended globbing for that.
In Bash you do not need to have extended globbing enabled to use POSIX character classes in glob patterns. On my system with Bash 4.3.48:
ek@Io:~$ shopt extglob
extglob off
ek@Io:~$ ls -d /etc/*[[:upper:]]*
/etc/ConsoleKit /etc/LatexMk /etc/ODBCDataSources /etc/UPower
/etc/ImageMagick-6 /etc/NetworkManager /etc/rcS.d /etc/X11
But you do need it to match filenames of only upper-case letters.
What you do need extended globbing for is if you want to match filenames consisting only of upper-case letters. Then you would use +([[:upper:]]) or *([[:upper:]]), and those are extended globs.
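For instance, using only what's described above:
shopt -s extglob
ls -d /etc/+([[:upper:]])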
If you're using Bash, see 3.5.8.1 Pattern Matching in the GNU Bash manual for details. See also Stéphane Chazelas's answer.
|
So I've been playing around with the filesystem and wondered about listing the files in /etc that contain only upper-case letters in their names. I ran
ls *[A-Z]*
But the console shows the files containing lower-case characters too.
I want to use only the ls command. Is the console program locale-dependent?
What is the underlying cause?
| Using wildcard in 'ls' command to find files containing uppercase letters only |
aptitude uses the POSIX regcomp() / regexec() API from the system's C library to do regexp matching and calls regcomp() with the REG_ICASE | REG_EXTENDED flags, so you get mostly the same as grep -iE¹, and there's no builtin support for turning case-insensitivity off.
Now, if you're willing to spend a bit of effort, there are still things you could do. It's all free and opensource software after all so there is never any limit in what you can do.
For instance, you could intercept the regcomp() libc function invocations via a LD_PRELOAD trick to remove that REG_ICASE flag:
$ cat case-sensitive.c
#define _GNU_SOURCE
#include <dlfcn.h>
#include <regex.h>

int regcomp (regex_t *_Restrict_ __preg,
const char *_Restrict_ __pattern,
int __cflags)
{
static int (*orig) (regex_t *_Restrict_ __preg,
const char *_Restrict_ __pattern,
int __cflags) = 0;
if (!orig)
orig = (int (*) (regex_t *_Restrict_ __preg,
const char *_Restrict_ __pattern,
int __cflags)) dlsym (RTLD_NEXT, "regcomp");
return (*orig)(__preg, __pattern, __cflags & ~REG_ICASE);
}

$ gcc -fPIC -shared -o case-sensitive.so case-sensitive.c
Then you can see its effect for instance by searching for the zsh package, whose description contains Zsh is a UNIX command interpreter:
$ apt-cache show zsh
Package: zsh
[...]
Description-en_GB: shell with lots of features
Zsh is a UNIX command interpreter (shell) usable as an interactive login
[...]

$ aptitude search -F %p '~n "^zsh$" ~d "Zsh .* unix"'
zsh
zsh:i386
$ LD_PRELOAD=$PWD/case-sensitive.so aptitude search -F %p '~n "^zsh$" ~d "Zsh .* unix"'
$ LD_PRELOAD=$PWD/case-sensitive.so aptitude search -F %p '~n "^zsh$" ~d "Zsh .* UNIX"'
zsh
zsh:i386
That case-sensitive.so can be used for anything that uses the libc's regcomp()/regexec() API. You can tell whether a given program does, for a particular regexp match, with a debugger or ltrace for instance.
With int regcomp(addr, string, bitvec(int)); added to ~/.ltrace.conf:
$ ltrace -e regcomp aptitude search ZZZ
[...]
libapt-pkg.so.6.0->regcomp(0x55d9a6f1ad60, "^linux-image-[a-z0-9]*-[a-z0-9]*"..., <0-1,3>) = 0
libapt-pkg.so.6.0->regcomp(0x55d9a6f2f060, "^linux-.*-6\\.6\\.15-amd64$|^linux"..., <0-1,3>) = 0
aptitude->regcomp(0x55d9a6f1cb10, "linux-image-.*", <0-1>) = 0
aptitude->regcomp(0x55d9a6f62980, "linux-image-.*", <0-1,3>) = 0
libapt-pkg.so.6.0->regcomp(0x55d9a6ed08c0, "^linux-.*-.*$|^kfreebsd-.*-.*$|^"..., <0-1,3>) = 0
aptitude->regcomp(0x55d9a6f60270, "ZZZ", <0-1>) = 0
aptitude->regcomp(0x55d9a6f5e830, "ZZZ", <0-1,3>) = 0
p elpa-zzz-to-char - fancy version of `zap-to-char' command
p python3-zzzeeksphinx - Zzzeek's Sphinx layout and utilities
(with the flags here shown as a bit vector with 0 being REG_EXTENDED and 1 REG_ICASE)
See how aptitude also invokes regcomp() indirectly in some apt libraries where our turning off REG_ICASE could cause problems.
Also make sure you don't use that for anything other than search in aptitude. In particular, as that $LD_PRELOAD variable will be passed along to all commands it executes, you won't want to install any package in an aptitude started in such an environment. In that regard, preloading that shared object file with /lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 --preload "$PWD/case-sensitive.so" /usr/bin/aptitude search... instead would be preferable as it would ensure it's only preloaded for aptitude.
GNU ed also uses the libc's POSIX regex API:
$ echo /Debian/I | ltrace -e regcomp ed -sE /etc/issue
ed->regcomp(0x5653bd71f520, "Debian", <0-1>) = 0
Debian GNU/Linux trixie/sid \n \l
+++ exited (status 0) +++

$ echo /Debian/I | LD_PRELOAD=$PWD/case-sensitive.so ed -sE /etc/issue
Debian GNU/Linux trixie/sid \n \l
$ echo /debian/I | LD_PRELOAD=$PWD/case-sensitive.so ed -sE /etc/issue
?
Standard EREs don't have operators to toggle case insensitivity on or off, but perl regexps do with (?i) and (?-i), and so do PCRE2 ones; PCRE2 is a library intended to bring perl regexps to other tools, and perl regexps are mostly compatible with EREs.
And PCRE2 does happen to have support for a regcomp()/regexec() standard API in its libpcre-posix library.
That makes it relatively easy to modify aptitude to use PCRE2 instead of standard EREs.
For instance, as a PoC, all I had to do was:
$ sudo apt build-dep aptitude
$ apt source aptitude
$ cd aptitude*(/)
Create a src/regex.h as:
$ cat src/regex.h
#ifndef _REGEX_H
#define _REGEX_H 1
#include <pcre2posix.h>
#endif
Then build with:
dpkg-buildpackage -b -j5
Which failed because of the missing -lpcre2-posix. And all I had to do was:
make -C build MAYBE_LIBGTK=-lpcre2-posix(here using the otherwise unset MAYBE_LIBGTK make variable referenced at the linking stage to avoid having to modify any file).
Then I now have an aptitude using PCRE2 regexps:
$ build/src/aptitude search -F %p '~n "^zsh$" ~d "Zsh .* unix"'
zsh
zsh:i386
$ build/src/aptitude search -F %p '~n "^zsh$" ~d "(?-i)Zsh .* unix"'
$ build/src/aptitude search -F %p '~n "^zsh$" ~d "(?-i)Zsh .* UNIX"'
zsh
zsh:i386
¹ though in the case of GNU grep, what regex engine and API is used varies with the version and a number of factors; it will likely not use the regcomp()/regexec() from the system's libc, even on systems where that's the GNU libc.
|
In a script:
aptitude search "?description($1)"
... can that be made case-sensitive?
| Can we make an aptitude search case-sensitive? |
The bash script below loops through the files in the current directory, looking for duplicate filenames case insensitively. If a match is found, it looks to create a "Duplicates" folder that doesn't exist already, then moves the duplicate file into that directory.
The outer loop is there in order to re-compute the file globs (*) for the loops, in case a file gets moved. The outer loop runs until no files are moved.
#!/bin/bash

changes=1
while [ $changes -gt 0 ]
do
changes=0
for one in *
do
for two in *
do
shopt -u nocasematch
# if it's the exact same filename, skip
[[ "$one" == "$two" ]] && continue
shopt -s nocasematch
# if the file name matches case-insensitively, then mv it
if [[ "$one" == "$two" ]]
then
suffix=
while [ -d Duplicates"${suffix}" ]
do
suffix=$((suffix + 1))
done
mkdir Duplicates"${suffix}"
mv "$two" Duplicates"${suffix}"
changes=1
break
fi
done
done
done
With these sample files:
afile.txt
TestFile1.TXT
TESTfile1.txT
testfile1.txt
A sample run of the script creates:
$ tree .
.
├── afile.txt
├── Duplicates
│ └── TestFile1.TXT
├── Duplicates1
│ └── testfile1.txt
└── TESTfile1.txT
|
I have been asked by the data owner to copy a specific folder (and its large amount of subfolders and files) via FTPS to our cloud storage provider. I am using LFTP for that, and the upload worked well until I hit a snag.
There are several folders with multiple files that have the same filename except for case. For example, folder data has the following files:
testfile1.txt, TestFile1.TXT
When I try to upload those via LFTP, I get an error that the file already exists. So for my purposes, I need the filenames to be unique even when compared case-insensitively before uploading. To address this issue, I would like to use a script that searches the current directory recursively, and moves any case-insensitive duplicates to a subfolder. In my example above, I would want the script to create a subfolder called Duplicates and then move TestFile1.TXT into it. I suppose it's possible that I could have multiple duplicated filenames, so the script should create a Duplicates2 folder for the second duplicated filename, and so on.
Also, I should note that for the few "duplicated" files that I checked, they had differing filesizes. I am not going to make any assumptions about the files being actual duplicates, which is why I want to move them rather than delete them.
| Move files that have the same case-insensitive filename |
I'm assuming this has something to do with case-insensitive filenames, so if rename checks if the target file exists, it sees the original and stops to avoid destroying it.
The Perl rename on my system has this option which looks like it could work here:
-f, -force
Over write: allow existing files to be over-written.
Even if that didn't work, you should be able to rename the files to something that's not just a case alteration. E.g. add an x to the beginning while changing case, and then remove that x:
rename 'y/A-Z/a-z/; s/^/x/' *
rename 's/^x//' *
(of course that won't work if you have files called foo and xfoo, but you can always change the prefix to something else.)
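And if the -f option quoted above is available in your rename build, the original one-liner should presumably work unchanged apart from that flag:
rename -f 'y/A-Z/a-z/' *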
|
When you do it it says file already exists.
example output:
rename 'y/A-Z/a-z/' *
Totemic-1.12.2-0.11.6.jar not renamed: totemic-1.12.2-0.11.6.jar already exists
TreeChoppin-1.12.2-1.0.0.jar not renamed: treechoppin-1.12.2-1.0.0.jar already exists
UniDict-1.12.2-2.9.3.jar not renamed: unidict-1.12.2-2.9.3.jar already exists
VanillaFix-1.0.10-99.jar not renamed: vanillafix-1.0.10-99.jar already exists
WailaHarvestability-mc1.12-1.1.12.jar not renamed: wailaharvestability-mc1.12-1.1.12.jar already exists
WanionLib-1.12.2-2.4.jar not renamed: wanionlib-1.12.2-2.4.jar already exists
How do I make this work with WSL? It works flawlessly on my Ubuntu systems.
| rename 'y/A-Z/a-z/' * doesn't work on windows subsystem for linux (wsl) |
I'd use a hash lookup instead of a regexp comparison and *sub() for efficiency and robustness (in case you decide to use a string that contains regexp metachars or backreferences or can be a substring of some other string):
$ cat tst.awk
BEGIN {
FS = "|"
split("Father|Son|Daughter",tmp)
for (i in tmp) {
map[tolower(tmp[i])] = tmp[i]
}
}
{ lc = tolower($2) }
lc in map {
$2 = map[lc]
}
{ print }

$ awk -f tst.awk file
Mark Father
Jason Son
Jose Son
Steffy Daughter |
I have "|" delimited text data, and want to transform one column's values
$ cat infile
Mark|father
Jason|SOn
Jose|son
Steffy|daugHter
I want to search for (father|son|daughter) case-insensitively and substitute any case of father with Father, any case of son with Son, and any case of daughter with Daughter.
So outfile should look like this
$ cat outfile
Mark Father
Jason Son
Jose Son
Steffy DaughterI'm trying different combination of IGNORECASE with sub or gsub, but it prints all entries as is in infile
| awk case insensitive with gsub |
Here's a simple approach:
$ awk -F'[ ]' '{$1=tolower($1)}1' file
mumdfw2as123v USER=wladmin MOUNTPOINT=/apps
mumfw2as97v.mrshmc.com USER=wladmin MOUNTPOINT=/apps
mumfw3as65v USER=user MOUNTPOINT=DR-/u
mumdfw3as66v USER=oracle MOUNTPOINT=/u
mumdfw3as69v_oracle ansible_host=mumdfw3as69v USER=oracle MOUNTPOINT=/webThat simply changes $1 (the first field) to itself in lower case. The 1 at the end is awk shorthand for "print this line". The fun bit is the -F'[ ]' where we are setting the input field separator to a space, but because it is presented as a regular expression (a character class), that forces awk to recalculate the input line and means we can keep the original spacing of the input file. Without it, we would get:
$ awk '{$1=tolower($1)}1' file
mumdfw2as123v USER=wladmin MOUNTPOINT=/apps
mumfw2as97v.mrshmc.com USER=wladmin MOUNTPOINT=/apps
mumfw3as65v USER=user MOUNTPOINT=DR-/u
mumdfw3as66v USER=oracle MOUNTPOINT=/u
mumdfw3as69v_oracle ansible_host=mumdfw3as69v USER=oracle MOUNTPOINT=/web
To edit the file in place, you can use GNU awk (the default on Linux systems):
$ gawk -F'[ ]' -i inplace '{$1=tolower($1)}1' file
$ cat file
mumdfw2as123v USER=wladmin MOUNTPOINT=/apps
mumfw2as97v.mrshmc.com USER=wladmin MOUNTPOINT=/apps
mumfw3as65v USER=user MOUNTPOINT=DR-/u
mumdfw3as66v USER=oracle MOUNTPOINT=/u
mumdfw3as69v_oracle ansible_host=mumdfw3as69v USER=oracle MOUNTPOINT=/web
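If your awk lacks GNU's -i inplace, a temp-file round trip achieves the same (a sketch; file.tmp is an arbitrary name):
awk -F'[ ]' '{$1=tolower($1)}1' file > file.tmp && mv file.tmp file
|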
I have a file on RedHat with the data below:
$ cat hello.txt
mumdfw2as123v USER=wladmin MOUNTPOINT=/apps
MUMFW2as97v.mrshmc.com USER=wladmin MOUNTPOINT=/apps
MUMFW3AS65V USER=user MOUNTPOINT=DR-/u
MUMDFW3AS66V USER=oracle MOUNTPOINT=/u
mumdfw3AS69v_oracle ansible_host=mumdfw3as69v USER=oracle MOUNTPOINT=/webI wish to convert only the first column to lowercase and save the changes to the same file.
I do not have the nawk tool, although I did find a solution that uses nawk.
Can you please suggest an alternative?
| convert only the first column in a file to lower case |
Instead of
if [ -d abc ] ; then
echo 'Directory exists'
use
if /bin/ls -d [aA][bB][cC]/ &> /dev/null ; then
echo 'Directory exists'
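A pure-shell alternative that avoids parsing ls output (a sketch, assuming bash):
shopt -s nullglob
matches=([aA][bB][cC]/)
(( ${#matches[@]} )) && echo 'Directory exists'
|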
I am working on restructuring the folder structure of a few existing folders.
If any folders are missing, I will have to add them.
First I check whether the directory exists with an if command, and create one if it is not present. As the check is case-sensitive, I end up creating the same folder again.
Example: a folder named ABC already exists, but I check for abc, so a new folder abc is created; sometimes the folder exists as Abc.
| Case insensitive directory search? |
In most languages, s sorts before V regardless of the case.
Sorting depends on localisation settings (LANG and LC_* variables).
You could use: LC_ALL=C sort if you wanted to sort according to the byte value order, but that may not do what you want if you're in a multi-byte locale.
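For example, with the letters from your list (all ASCII, so the byte order is unambiguous):
$ print -l Q s V | LC_ALL=C sort
Q
V
s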
If you want to sort in the order of your own language, but having uppercase letters before lowercase ones, you could do:
sed 's/./0&/g;s/0\([[:lower:]]\)/1\1/g' |
sort |
sed 's/.\(.\)/\1/g'
That would cause lower-case letters to be sorted after every other character.
$ print -l Q s d é f D É F V | sort
d
D
é
É
f
F
Q
s
V

$ print -l Q s d é f D É F V | sed 's/./0&/g;s/0\([[:lower:]]\)/1\1/g' |
sort |
sed 's/.\(.\)/\1/g'
D
É
F
Q
V
d
é
f
s
That would only work in locales where collating elements are single characters only.
|
Why does
$ echo -e 'Q\ns\nV' | sort
output
Q
s
V
without changing the order of my list (taking into account lower/uppercase)?
| `echo -e 'Q\ns\nV' | sort` doesn't sort |
Especially for your case where the glob is $top/**/*.jpg, I would not turn the caseglob option off (same as turning nocaseglob on¹) globally, as that affects all path components in glob patterns:
$ top=a zsh +o caseglob -c 'print -rC1 -- $top/*.jpg'
a/foo.jpg
a/foo.JPG
a/FOO.jpg
a/FOO.JPG
A/foo.jpg
A/foo.JPG
A/FOO.jpg
A/FOO.JPG
See how it did find all the jpg and JPG files in $top (a), but also the ones in an unrelated directory (A) which just happened to have the same name but in upper case. Even if you don't have such directories, zsh will still look for them, which means it needs to list the contents of every directory that makes up the components of $top, making that glob expansion more costly.
IMO, that nocaseglob option is better left forgotten. It was only added to zsh for compatibility with bash², and there most likely added to make the life of users of systems like Cygwin / macOS that have case-insensitive filesystem APIs easier.
Instead, I'd use the (#i) glob operator (with extendedglob), where you can specify which part of which glob should be case-insensitive (similar to the ~(i) of ksh93):
set -o extendedglob # needed for (#i)
for file in $top/**/*.(#i)jp(e|)g(NDn.); do
Or you can always do:
for file in $top/**/*.[jJ][pP]([eE]|)[gG](NDn.); do
as you would in sh or any shell without case-insensitive glob operators.
Also note the use of *.jp(e|)g instead of *.jp*g, which would match filenames such as my.jpeg.import.log for instance.
¹ or CASEGLOB, CASE_GLOB, C_A_se_G_lob: case and underscores are ignored in option names, and the support of a no prefix to turn an option off is to try and accommodate the mess with POSIX sh options (and other shells', including zsh itself) where some options are named with a no prefix and some without for no immediately obvious reason.
² though the bash behaviour is different (and preferable IMO, at least on systems with case sensitive file names) in this instance in that a/*.jpg would only find jpg/JPG files in a, not A as it only does case insensitive matching for path components that do have glob operators ([a]/*.jpg would also find the jpg/JPG files in A).
|
Context: macOS Catalina (zsh)
This script is to process all JPEG files. This script does not process .JPG files, however it does process .jpg files.
top=/Users/user/Desktop/
for file in $top/**/*.jp*g(NDn.); do #selects filetypes: .jpg .jpeg
mogrify -auto-orient \
-gravity northWest \
-font "Arial-Bold-Italic" \
-pointsize 175 \
-fill red \
-annotate +30+30 $n \
-- $file &&
echo $file "was watermarked with" $n | tee -a forLooplog.txt
(( n++ ))
done
How can the second line be modified to be case-insensitive and trap .JPG and .JPEG files?
| zsh case-insensitive globbing |
In zsh, and with the extendedglob option on, you can do:
$ set -o extendedglob
$ printf '%s\n' (#i)path/to/file
Path/to/FILE
to get the path/to/file with the stored case.
In ksh93:
$ printf '%s\n' ~(i)path/to/file
Path/to/FILE
(beware that if there's no match, that will expand to ~(i)path/to/file; ksh93 has no equivalent to the nomatch or failglob options, though you could use ~(Ni)path/to/file for that to expand to nothing when it doesn't match)
In bash with the extglob, failglob and nocaseglob options on, you can do:
$ shopt -s extglob failglob nocaseglob
$ printf '%s\n' @(path)/@(to)/@(file)
Path/to/FILE
Without extglob, you can also do printf '%s\n' [p]ath/[t]o/[f]ile, though that's harder to automate reliably.
(in any case, underneath the shell does the equivalent of your ls | grep -i, that is, it has to read the full directory contents to find matching files. Note that like for grep -i, case comparison is done as per the locale, it may differ from the way NTFS does case comparison)
|
Cygwin is case-insensitive in the manner of Windows, e.g.:
$ touch ABC; rstr=$(openssl rand -base64 12); echo $rstr; echo $rstr > AbC; cat abc
dGRMOHqqoy0/nc96
dGRMOHqqoy0/nc96

$ ls | grep -i abc
ABCThe cases of characters in a file or directory name are stored but ignored when doing operations on it.
ABC, AbC and abc select the identical file.
Is there a robust way to get, for a given file or directory path, the capitalization as stored? The grep trick quickly becomes very cumbersome.
| Cygwin: get a path's stored capitalization |
mkfs.hfs -s /dev/sdd2
From man mkfs.hfs:
-s Creates a case-sensitive HFS Plus filesystem. By default a
case-insensitive filesystem is created. Case-sensitive HFS
Plus file systems require a Mac OS X version of 10.3 (Darwin
7.0) or later. |
Linux can format an (external) disk as HFS+, e.g.:
apt-get install gparted hfsprogs, then
gparted /dev/sdd, right-click on the partition to format, choose HFS+, click Apply, quit; mount -t hfsplus /dev/sdd2 /mnt/foo.
But then you can't make both /mnt/foo/xyzzy and /mnt/foo/XYZZY, because gparted used macOS's default option, case-insensitive. So copying files onto it from Linux causes all sorts of problems.
Can Linux format it as case sensitive?
Or must I plug the disk into a Mac to format it like that?
Related: https://apple.stackexchange.com/questions/334330/which-filesystems-support-symbolic-links
| Format disk as HFS+, but case sensitive? |
If you want to avoid re-inventing the wheel, you could use the mv command's built-in ability to do automatic numbered backups; if your shell supports the case conversion natively that could be as simple as
for f in *; do mv --backup=numbered -- "$f" "${f,,}"; done
The default backup number format is .~1~; for example, given
SOME FILE
sOmE fIlE
some file
then
$ for f in *; do mv -v --backup=numbered -- "$f" "${f,,}"; done
‘SOME FILE’ -> ‘some file’ (backup: ‘some file.~1~’)
‘sOmE fIlE’ -> ‘some file’ (backup: ‘some file.~2~’)
mv: ‘some file’ and ‘some file’ are the same file
If you don't like the default numbering, you could always change that after the fact; if your system includes the perl-based rename command that could be something like
$ rename -v -- 's/\.~(\d+)~/$1/' *.~*~
some file.~1~ renamed as some file1
some file.~2~ renamed as some file2
finally giving
$ ls
some file some file1 some file2 |
I have a folder on my Linux system that is cross-synchronized with other computers, some of them with Windows. The problem is that in that folder there are files that are "case duplicates", i.e. their file name is same except that one or more characters are uppercase vs. lowercase. For the Linux system this isn't a problem, but for the Windows system it is and it complains about duplicate file names.
Is there a simple command line way to find and replace such file names, something like "convert all file names to lowercase, if this results in two files with the same name append '1' to one of them"?
| Batch rename "case duplicates" |
Use comm -12 file1 file2 to get common lines in both files.
You may also need your files to be sorted for comm to work as expected:
comm -12 <(sort file1) <(sort file2)
From man comm:
-1 suppress column 1 (lines unique to FILE1)
-2 suppress column 2 (lines unique to FILE2)
Or using the grep command: you need to add the -x option so that the whole line has to match the pattern, and the -F option tells grep to treat the patterns as fixed strings, not regexes.
grep -Fxf file1 file2
Or using awk:
awk 'NR==FNR{seen[$0]=1; next} seen[$0]' file1 file2
This reads each line of file1 into an array called seen, where the key is the whole line (in awk, $0 represents the whole current line).
We used NR==FNR as a condition to run the following block only for the first input, file1, and not file2 (NR refers to the number of records across all inputs, while FNR is the record number within each individual input file; so FNR restarts for each input file whereas NR keeps counting across all of them).
The next statement tells awk not to run the rest of the code but to move on to the next line, until NR is no longer equal to FNR, which means all lines of file1 have been read.
Then the condition seen[$0] applies only to the second input, file2: for each line of file2, the line is printed if it was marked as present (=1) in the array built from file1.
Another simple option is using sort and uniq:
sort file1 file2|uniq -d
This will print both files sorted, then uniq -d will print only the duplicated lines. BUT this is only guaranteed to work when there are NO duplicated lines within either file itself; otherwise the command below always works, even if lines are duplicated within a single file:
uniq -d <(sort <(sort -u file1) <(sort -u file2))
|
I have the following code that I run on my Terminal.
LC_ALL=C && grep -F -f genename2.txt hg38.hgnc.bed > hg38.hgnc.goi.bed
This doesn't give me the common lines between the two files. What am I missing there?
| Common lines between two files [duplicate] |
Per the comm manual, "Before `comm' can be used, the input files must be sorted using the collating sequence specified by the `LC_COLLATE' locale."
And the sort manual: "Unless otherwise specified, all comparisons use the character collating sequence specified by the `LC_COLLATE' locale."
Therefore, as a quick test confirms, the LC_COLLATE order comm expects is provided by sort's default order, dictionary sort.
sort can sort files in a variety of manners:
-d: Dictionary order - ignores anything but whitespace and alphanumerics.
-g: General numeric - alpha, then negative numbers, then positive.
-h: Human-readable - negative, alpha, positive. n < nk = nK < nM < nG
-n: Numeric - negative, alpha, positive. k,M,G, etc. are not special.
-V: Version - positive, caps, lower, negative. 1 < 1.2 < 1.10
-f: Case-insensitive.
-R: Random - shuffle the input.
-r: Reverse - usually used with one of dghnV
There are other options, of course, but these are the ones you're likely to see or need.
Your test shows that the default sort order is probably -d, dictionary order.
d | g | h | n | V
------+-------+-------+-------+-------
1 | a | -1G | -10 | 1
-1 | A | -1k | -5 | 1G
10 | z | -10 | -1 | 1g
-10 | Z | -5 | -1g | 1k
1.10| -10 | -1 | -1G | 1.2
1.2 | -5 | -1g | -1k | 1.10
1g | -1 | a | a | 5
1G | -1g | A | A | 10
-1g | -1G | z | z | A
-1G | -1k | Z | Z | Z
1k | 1 | 1 | 1 | a
-1k | 1g | 1g | 1g | z
5 | 1G | 1.10 | 1G | -1
-5 | 1k | 1.2 | 1k | -1G
a | 1.10 | 5 | 1.10 | -1g
A | 1.2 | 10 | 1.2 | -1k
z | 5 | 1k | 5 | -5
Z | 10 | 1G | 10 | -10 |
I was trying to find the intersection of two plain data files, and found from a previous post that it can be done through
comm -12 <(sort test1.list) <(sort test2.list)
It seems to me that sort test1.list aims to sort test1.list in order. To understand how sort works, I tried sort against the following file, test1.list, as sort test1.list > test2.list
100
-200
300
2
92
15
340
However, it turns out that test2.list is
100
15
2
-200
300
340
92
This re-ordered list makes me quite confused about how this sort works, and how sort and comm work together.
| Issues of using sort and comm |
If your comm supports non-text input (like GNU tools generally do), you can always swap NUL and nl (here with a shell supporting process substitution (have you got any plan for that in mksh btw?)):
comm -23 <(tr '\0\n' '\n\0' < file1) <(tr '\0\n' '\n\0' < file2) |
tr '\0\n' '\n\0'
That's a common technique.
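Wired into the generators from your first paragraph, the whole flow would look something like:
find . -type f -print0 | sort -z > lst
grep -z foo lst > matches
comm -23 <(tr '\0\n' '\n\0' < lst) <(tr '\0\n' '\n\0' < matches) |
tr '\0\n' '\n\0' > inverted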
|
I’m writing something that deals with file matches, and I need an inversion operation. I have a list of files (e.g. from find . -type f -print0 | sort -z >lst), and a list of matches (e.g. from grep -z foo lst >matches – note that this is only an example; matches can be any arbitrary subset (including empty or full) or lst), and now I want to invert this list.
Background: I’m sorta implementing something like find(1), except on file lists (although the files do exist in the filesystem at the point of calling, the list may have been pre-filtered). If the list of files weren’t potentially so large, I could use find "${files[@]}" -maxdepth 0 -somecondition -print0, but even moderate use of what I’m writing would go beyond the Linux or BSD argv size limit.
If the lines were not NUL-separated, I could use comm -23 lst matches >inverted. If the matches were not NUL-separated, I could use grep -Fvxzf matches lst. But, from the generators I mentioned in the first paragraph, both are.
Assume GNU tools are installed, so this need not be portable beyond e.g. Debian, as I’m using find -print0, sort -z and friends already (although some BSDs have it, so if it can be done more portably, I won’t complain).
I’m trying to do code reuse here; plus, comm -23 is basically the perfect tool for this already except it doesn’t support changing the input line separator (yet), and comm is an underrated and not-enough-well-known tool anyway. If the Unix/Linux toolbox doesn’t offer anything sensible, I’m likely to reimplement a form of comm -23 (reduced to just this one use case) in shell, as the script already (for other reasons) requires a shell that happens to support read -d '' for NUL-delimited input, but that’s going to be slow (and effort… I posted this at the end of the workday in the hopes someone has got an idea for when I pick this up tomorrow or on the 28th).
| Invert matching lines, NUL-separated |
GNU coreutils includes the command join that does exactly what you want if line sorting in the result is irrelevant:
join <(sort file1) <(sort file2)
A 1 9
B 3 3
C 1 2
If you want the tabs back, do:
join <(sort file1) <(sort file2) | tr ' ' '\t'
A 1 9
B 3 3
C 1 2
Or use the -t option to join.
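For example (the $'\t' form of quoting assumes bash, ksh93 or zsh):
join -t $'\t' <(sort file1) <(sort file2)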
(<(), aka process substitution, requires ksh93 (where the feature originated), bash or zsh)
|
I have two files with tab-separated values that look like this:
file1:
A 1
B 3
C 1
D 4
file2:
E 1
B 3
C 2
A 9
I would like to find rows between files 1 and 2 where the string in column 1 is the same, then get the corresponding values. The desired output is a single file that looks like this:
B 3 3
C 1 2
A 1 9Can this be done with a Unix one-liner?
| Find common elements in a given column from two files and output the column values from each file |
As an alternative to comm, consider grep:
grep -vxFf /tmp/required /tmp/all
This asks for the lines in /tmp/all that do not (-v) exist in the file (-f) /tmp/required. To avoid interpreting any line in /tmp/required as a regular expression, I added the "fixed strings" -F flag. In addition, we want to force the entire line in /tmp/all to match the one(s) from /tmp/required, so we use the -x option.
This method does not require sorted input.
I suspect that your comm -23 <(sort ...) <(sort ...) command would work, if the "SearchText.json" line matched exactly in both files (same amount of trailing spaces, if any).
|
I have two files (no blank lines/spaces/tabs):
/tmp/all
aa
bb
cc
hello
SearchText.json
xyz.txt
/tmp/required
SearchText.json
and the end output I want is (all uncommon lines from /tmp/all):
aa
bb
cc
hello
xyz.txt
I have tried the commands below:
# comm -23 /tmp/required /tmp/all
SearchText.json

# comm -23 /tmp/all /tmp/required
aa
bb
cc
hello
SearchText.json
xyz.txt

# comm -13 /tmp/all /tmp/required
SearchText.json

# comm -13 /tmp/required /tmp/all
aa
bb
cc
hello
SearchText.json
xyz.txt

# grep -vf /tmp/all /tmp/required
# grep -vf /tmp/required /tmp/all
aa
bb
cc
hello
SearchText.json
xyz.txt

# comm -23 <(sort /tmp/all) <(sort /tmp/required)
aa
bb
cc
hello
SearchText.json
xyz.txt
| bash remove common lines from two files |
Process substitution is your friend here:
$ comm -23 <(find /dir -name 'something') <(cut -c43- list)
The <(command) form attaches a temporary file descriptor to the command's output, and the whole <( ) is used as a file input to comm (or any other command).
See more about process substitution in man bash:
Process Substitution
Process substitution allows a process's input or output to be referred to using a filename. It takes the form of <(list) or >(list). The process list is run asynchronously, and its input or output appears as a filename. This filename is passed as an argument to the current command as the result of the expansion. If the >(list) form is used, writing to the file will provide input for list. If the <(list) form is used, the file passed as an argument should be read to obtain the output of list. Process substitution is supported on systems that support named pipes (FIFOs) or the /dev/fd method of naming open files.
From my understanding I want to use comm -23 file1 file2, where file1 is the result of find and file2 is cut -c43- list. Is it possible to write this as one line and not use any files (except the one I have named list)?
| How can I use comm in this way? |
You could pipe to:
expand -t "$((${COLUMNS:-$(tput cols)} / 2))"Or for the angle brackets:
awk -v cols="${COLUMNS:-$(tput cols)}" '
BEGIN {width = cols/2-1; space = sprintf("%*s", width, "")}
/^\t/ {print space ">", substr($0, 2); next}
{printf "%-*s<\n", width, $0}'If your tput doesn't output the number of columns, you could try parsing the output of stty size or stty -a. Or use zsh -c 'echo $COLUMNS' (also works with mksh). There's no standard/portable way to get that information.
If the input files contain multi-byte or double-width characters, YMMV. Depending on the expand/awk implementation alignment may be off.
That also assumes that the input files have no line that starts with a Tab character. If that can't be guaranteed, the GNU implementation of comm has an --output-delimiter option which you could use to specify a unique string. Or you could implement the comm functionality in awk, which shouldn't be too complicated.
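As a usage sketch, wiring it up to comm -3 (assuming no input line itself starts with a tab, so comm's delimiter is unambiguous):
comm -3 file1 file2 | expand -t "$((${COLUMNS:-$(tput cols)} / 2))"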
|
I am looking for something which gives me the output of comm -3 on two sorted inputs (line-by-line comparison, only additional/missing lines from either side) but which looks more like the output from diff -y, e.g. in that it uses the whole width.
file1:
bar/a
bar/feugiat
bar/libero
bar/mauris
bar/scelerisque
bar/urna
foo/blandit
foo/elementum
foo/feugiat
foo/laoreet
foo/luctus
foo/non
foo/pellentesque
foo/pulvinar
foo/rutrum
foo/sed
foo/ut
foo/vivamus
file2:
bar/a
bar/molestie
bar/quam
bar/risus
bar/tristique
foo/blandit
foo/elementum
foo/feugiat
foo/ligula
foo/massa
foo/mauris
foo/metus
foo/pellentesque
foo/pulvinar
foo/ut
Output from comm -3 file1 file2:
bar/feugiat
bar/libero
bar/mauris
bar/molestie
bar/quam
bar/risus
bar/scelerisque
bar/tristique
bar/urna
foo/laoreet
foo/ligula
foo/luctus
foo/massa
foo/mauris
foo/metus
foo/non
foo/rutrum
foo/sed
foo/vivamus
Output from diff -y --suppress-common-lines file1 file2 (GNU); it depends on the screen width:
bar/feugiat | bar/molestie
bar/libero | bar/quam
bar/mauris | bar/risus
bar/scelerisque | bar/tristique
bar/urna <
foo/laoreet | foo/ligula
foo/luctus | foo/massa
foo/non | foo/mauris
> foo/metus
foo/rutrum / foo/ut
foo/sed <
foo/ut <
foo/vivamus <
Possible output I would wish for:
bar/feugiat <
bar/libero <
bar/mauris <
> bar/molestie
> bar/quam
> bar/risus
bar/scelerisque <
> bar/tristique
bar/urna <
foo/laoreet <
> foo/ligula
foo/luctus <
> foo/massa
> foo/mauris
> foo/metus
foo/non <
foo/rutrum <
foo/sed <
foo/vivamus <
Without the arrows would be OK as well; the screen width just needs to be used better:
bar/feugiat
bar/libero
bar/mauris
bar/molestie
bar/quam
bar/risus
bar/scelerisque
bar/tristique
bar/urna
foo/laoreet
foo/ligula
foo/luctus
foo/massa
foo/mauris
foo/metus
foo/non
foo/rutrum
foo/sed
foo/vivamus
| Naive line-by-line comparison like "comm -3" but looking like "diff -y" |
Yes, if your input lines are ordered in the current collating sequence. From the POSIX comm STDOUT documentation:
If the input files were ordered according to the collating sequence of
the current locale, the lines written shall be in the collating
sequence of the original lines.
If your input is guaranteed sorted, the comm output is guaranteed sorted, too.
POSIX also defines that if your input is not ordered according to the collating sequence of the current locale, the comm output is unspecified.
If you have GNU comm, you can use the --check-order option to make unsorted input cause a fatal error.
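For example:
comm --check-order -12 file1 file2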
|
Is the output of comm guaranteed sorted? In my simple examples it is, and that makes sense to me (given how I think comm works); however, I need to run comm on very large files and am worried that comm might do some black magic for very large files.
Also, can someone point me to the source of comm? I've never been able to find the source for such tools.
Thanks
| Is comm output guaranteed sorted? |
The kernel recognizes certain file formats that it can execute natively. This includes at least one binary format. Additionally, files that begin with #! (shebang) are considered scripts; for example, if a file is located at /path/to/script and begins with #!/bin/bash then the kernel executes /bin/bash /path/to/script arg1 arg2 when you invoke /path/to/script arg1 arg2.
If the kernel doesn't recognize the file format, it returns ENOEXEC (exec format error) from the execve system call. Following a tradition from the ancient days of Unix kernels that didn't have the shebang feature, most programs make a second attempt to execute a program when the first attempt failed with the error ENOEXEC: they try to execute /bin/sh as the script interpreter.
Bash is a notable exception: it runs the script in itself, not in /bin/sh. So your script was working by accident when you invoked it from bash.
If you leave off the shebang line, your script may be executed under /bin/bash or under /bin/sh, depending on what program it was executed from. And /bin/sh evidently doesn't support process substitution on your system. It's probably still bash (the error message looks like the one from bash), but when bash is invoked under the name sh, it goes into POSIX compatibility mode which doesn't support process substitution.
The moral of the story: if you write a bash script, you must put #!/bin/bash on the first line.
|
I have a script that's supposed to get the list of files of two directories, get differences and execute some code for certain files.
These are the commands to get the file lists:
list_in=$(find input/ -maxdepth 1 -type f | sed 's/input\///' | sort -u);
list_out=$(find output/ -maxdepth 1 -type f | sed 's/output\///' | sort -u);
I execute the script in the correct directory, so this shouldn't fail. The unprocessed files are determined by
list_todo=$(comm -23 <(echo "$list_in") <(echo "$list_out"));
since the option -23 for comm only prints lines of the first argument that don't appear in both arguments, and doesn't print lines that appear only in the second argument.
However, only occasionally I get an error saying
command substitution: line 3: syntax error near unexpected token `('
command substitution: line 3: `comm -23 <(echo "$list_in") <(echo "$list_out")'

This really puzzles me, since the exact same script worked fine for the last 3 weeks. I'm using this on a cluster, so several processes might execute the script simultaneously. Could the error be caused by this?
Update. The script is called with ./script and I've obviously set chmod +x script before.
(Disclaimer: Even though I'm working on a cluster and these first three lines of my script don't include any locking mechanisms: Of course, no file is ever processed twice)
| comm fails on bash variable input |
awk '/\/page1?/ {print $1}' /path/to/access.log | sort -u > result.txt

If you want a count of each unique IP, change sort -u to sort | uniq -c
If you want to match only the request-path field of the log (rather than the entire line) against /page1:
awk '$7 ~ /^\/page1?/ {print $1}' /path/to/access.log | sort -u > result.txt

Note: I think nginx access logs are the same as apache access logs. If not, count the fields (count every space, including the one between the Date:Time and the TimeZone) in the nginx log, and use the correct field number instead of $7
Finally, if you want to print both the IP address (or hostname if they've already been resolved) and the request path:
awk -v OFS='\t' '$7 ~ /^\/page1?/ {print $1, $7}' /path/to/access.log |
  sort -u > result.txt

To see IP addresses that have visited /page1 but have never visited /page2:
awk '$7 ~ /^\/page1?/ {print $1}' /path/to/access.log | sort -u > result1.txt
awk '$7 ~ /^\/page2?/ {print $1}' /path/to/access.log | sort -u > result2.txt
comm -2 -3 result1.txt result2.txt

comm's -2 option suppresses lines that appear only in result2.txt, and -3 suppresses lines that appear in both files. The output is thus lines that appear only in result1.txt.
see man comm for more details.
|
I need to select specific data from log files.
I need two scripts:

I need to select all IP addresses that only visited /page1
I need to select all IP addresses that visited /page1 but never visited /page2

I have my desired logs in a .tar file. I want them extracted into a folder; then I will use the script to parse them and delete them afterwards. ALL duplicate IP addresses should be removed.
This is what I have so far:
# filter /page1 visitors
cat access.log | grep "/page1" > /tmp/res.txt
# take the IP portion of record
cat res.txt | grep '^[[:alnum:]]*\.[[:alnum:]]*\.[[:alnum:]]*\.[[:alnum:]]*' -o > result.txt

A typical access log line looks like
162.158.86.83 - - [22/May/2016:06:31:18 -0400] "GET /page1?vtid=nb3 HTTP/1.1" 301 128 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:46.0) Gecko/20100101 Firefox/46.0" | Find IP addresses visiting /page1 but not /page2 from nginx access logfile |
When you dereference RES in:
comm $FILE ${RES}

the content of RES replaces ${RES}. But comm expects a filename as an argument, so for instance if $RES contains hello, comm tries to open a file named hello.
Instead you could use a temporary file to store the common lines during the process:
tmp=$(mktemp --tmpdir)
tmp2=$(mktemp --tmpdir)

# intersection of the first two files
comm -12 "$1" "$2" >"$tmp"

# intersect the running result with each remaining file
for FILE in "${@:3}"
do
    comm -12 "$FILE" "$tmp" >"$tmp2"
    rm "$tmp"
    mv "$tmp2" "$tmp"
done

cat "$tmp"
rm "$tmp"
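If this is saved as a script (say intersect.sh, a name used here only for illustration), it can be run as:

./intersect.sh file1 file2 file3

Keep in mind that comm expects every input file to be sorted.
|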
I'd like to write a simple script for finding the intersection of multiple files (the common lines among all files), so after reading some answers here (link) I tried to write a bash script, which unfortunately fails for me.
What am I doing wrong?
RES=$(comm -12 ${1} ${2})

for FILE in ${@:3}
do
RES=$(comm -12 $FILE ${RES})
done

Is there any other suggestion how to implement this, perhaps with parallel or xargs?
| How to find the intersection of multiple files (not necessarily two files)? |
Assuming each number can only appear once in a file:
$ awk '{c[$1]++} END{for (i in c) if (c[i] == (ARGC-1)) print i}' a.txt {1..2}.txt
3
6
7
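If a number may repeat within a single file, it should still only be counted once per file; a hedged variant of the same idea:

awk '!seen[FILENAME,$1]++ {c[$1]++}
     END {for (i in c) if (c[i] == (ARGC-1)) print i}' a.txt {1..2}.txt
|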
I want to extract the numbers common to all files. I have 1000 files in a folder. I want to compare all the files' numbers and find the numbers common to all 1000 files. I have used the below code:
for ((i=2;i<=10000;i++))
do
comm -12 --nocheck-order a.txt "$i".txt > final.txt
mv final.txt file.txt
done

But it is only overwriting, comparing just the last file with a.txt. I want the numbers that are common to all the files.
Let's say a.txt is:

1
3
47
8
6
7

1.txt is:

2
3
6
7
8

2.txt is:

3
5
6
7
9

3.txt and 4.txt ... 1000.txt follow. If this works fine for these 3 files, it should work fine for all files. So the common numbers in these files are:

3
7

while it is giving me

3
8
3

Please let me know how I can proceed?
| how to find common number from multiple file? |
Taking a guess as to what those Windows commands do, I'd say the equivalent in a POSIX sh script would be:
equal=no
cmp -s file1 file2 && equal=yes

which would set the equal variable to yes if the two files can be read and have identical content (byte-to-byte).
As an alternative to cmp -s, on some systems including Linux-based ones, you can use diff -q. diff -q (q for quiet), contrary to most cmp -s (s for silent) implementations, would report an error message if any of the files could not be read. While the GNU implementations of diff and cmp both first check whether the two files are paths to the same file (including as hard or symbolic links to each other) or are of different sizes, to save having to read them, the busybox implementation of cmp does not, while busybox diff does. So on those systems using busybox, you may prefer diff -q for performance reasons.
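If an explicit if statement reads more naturally than the && form, the same check can be written as, for example:

if cmp -s file1.txt file2.txt; then
    equal=yes
else
    equal=no
fi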
|
I'm migrating to Linux, and I need to convert the following Windows cmd command:
fc file1.txt file2.txt | find /i "no se han encontrado diferencias" > nul && set equal=yes

I think fc can be replaced by diff or comm, and find with grep, but I don't know how to do the && part; maybe an if statement...
| Linux equivalent of windows cmd command |
You could do this (if no files have a tab character in their names):
grep -T -r . mainfolder | sort -k 2 | uniq -D -f 1

The recursive grep will output each line prefixed by the filename it is in. Then you sort based on all the fields but the first one. Finally, uniq outputs just the duplicated lines, skipping the first field.
You can have more control over the files that go into sort by using find, for example, or the --include and --exclude grep flags.
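For instance, limiting the scan to the .txt files from the question with grep's --include filter (a sketch):

grep -T -r --include='*.txt' . mainfolder | sort -k 2 | uniq -D -f 1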
|
When I want to find duplicate lines between two files I use this command
comm -12 <(sort file1.txt) <(sort file2.txt)

or
sort file1.txt file2.txt | awk 'dup[$0]++ == 1'

But how do I find duplicate lines in multiple files within folders? Example:
mainfolder
folder1
file1-1.txt
file1-2.txt
etc
folder2
file2-1.txt
file2-2.txt
etc

and that the result in the terminal is displayed by file (that is, the lines repeated across the files, specifying which file contains each one) so as to know the origin of the problem.
PS: I tried this command and it didn't work for me
file_expr="*.txt"; sort $file_expr | sed 's/^\s*//; s/\s*$//; /^\s*$/d' | uniq -d | while read dup_line; do grep -Hn "^\s*$dup_line\s*$" $file_expr; done| sort -t: -k3 -k1,2 | awk -F: '{ file=$1; line=$2; $1=$2=""; gsub(/(^[ \t]+)|([ \t]+$)/,"",$0); if (prev != "" && prev != $0) printf ("\n"); printf ("\033[0;33m%s (line %s)\033[0m: %s\n", file, line, $0); prev=$0; }' | How do I find duplicate lines in multiple files within folders |
comm should tell you that one of the files isn’t sorted:
comm: file 1 is not in sorted order

It expects the files to be sorted using the current locale’s collation order (as determined by LC_COLLATE); it won’t accept numerical order.
To compare the files, you can pre-sort them (lexicographically as you point out):
comm <(sort file1) <(sort file2)

If you want the result to be sorted numerically, sort it again:
comm <(sort file1) <(sort file2) | sort -n

This produces
1
2
3
4
5
6
7
8
9
11
12
13
15
16
17
18
19
20
21
22
23
705
707
709
711
712
826
827
839
846
847
848
872
873
874
875
891 |
I have a list of IDs (sorted) in two files and I ran the comm command to compare them, but it seems to miss out on lines common to both files. Why is that?
File1:
1
2
3
4
5
6
7
8
9
11
12
13
15
16
17
18
19
20
21
22

File2:
16
18
21
23
705
707
709
711
712
826
827
839
846
847
848
872
873
874
875
891

Comm output: $> comm file1 file2
1
16 //exists in both files
18 //exists in both files
2
21
23
3
4
5
6
7
705
707
709
711
712
8
826
827
839
846
847
848
872
873
874
875
891
9
11
12
13
15
16 //it's here!
17
18 //...and here!
19
20
21
22

The files are both sorted. However, my guess is that comm doesn't do numeric comparison and only looks at entries lexicographically? If so, what are some alternatives that I can try for this?
| Why does the output of comm fail to show common records? |
Use grep:
$ grep -Ff f1 f2
palm
calm

man grep:
-F, --fixed-strings
Interpret PATTERN as a list of fixed strings (instead of regular
expressions), separated by newlines, any of which is to be
matched.
-f FILE, --file=FILE
Obtain patterns from FILE, one per line. If this option is used
multiple times or is combined with the -e (--regexp) option,
search for all patterns given. The empty file contains zero
patterns, and therefore matches nothing.
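One caveat: without further options, -F matches each pattern as a substring anywhere in a line. If the files should be compared as whole lines, adding -x (--line-regexp) restricts each pattern to match an entire line:

grep -Fxf f1 f2
|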
File 1:
happy
sad
calm
palm

File 2:
palm
dream
calm

I want to compare the two files and display only those lines that are common to both files, but I want to maintain the order of File 2. My output should be:
palm
calm

I know I can use comm after sorting the files but I want to maintain the order. Is there any way to do this?
| Compare two files line by line without comm (I need to maintain order of file 1) |
Why not?
2 text files in Russian
$ file -i test1.txt test2.txt
test1.txt: text/plain; charset=utf-8
test2.txt: text/plain; charset=utf-8

$ cat test1.txt
Привет

$ cat test2.txt
Добрый день

$ diff test1.txt test2.txt
1c1
< Привет
---
> Добрый день
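comm handles UTF-8 just as well; it only requires that both inputs be sorted in the current locale, e.g. (a sketch):

$ comm <(sort test1.txt) <(sort test2.txt)
|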
I want to compare two UTF-8 encoded text files. Can the Linux commands diff and comm handle this encoding?
| Can linux command comm handle UTF-8 encoded text files? |
You are almost certainly right that additional characters on each line are causing corresponding lines to fail to match exactly. Those additional characters might have the form of carriage-return characters from Windows-style line terminators, space or tab characters, or possibly other non-printing characters. For example, maybe the Python script is right-justifying the numbers so that some or all of them have leading spaces.
The surest thing to do would be to filter out all such unwanted characters, and since the data are strictly numeric, that's pretty easy to do with, for example, sed:
sed 's/[^0-9]//g' < input > output

You could interpose that at various points in your process. Here's just one:
comm <(sed 's/[^0-9]//g' file1.txt | sort) <(sed 's/[^0-9]//g' file2.txt | sort)
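If the stray characters turn out to be just Windows-style carriage returns, a narrower filter that strips only those would also do (an alternative sketch):

comm <(tr -d '\r' < file1.txt | sort) <(tr -d '\r' < file2.txt | sort)
|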
I have two files:

one generated using the find command in a folder to list files, sorting them numerically and writing to a file,
and the other generated by a python script, which is not sorted, so I explicitly sort it numerically.

The problem is that my comm output only has two columns and is as follows:
500016
500016
500174
500174
500277
As you can see, even the common entries are shown separately in two columns and the third column is missing altogether, implying that there is nothing common between the two files, whereas these first three entries are indeed the same. comm otherwise works as expected with some test files that I make.
I know that comm needs the two files to be lexically sorted, and here is a list of options I tried that failed:
comm <(sort file1.txt) <(sort file2.txt)

from https://unix.stackexchange.com/a/377689/187419 failed. I also tried giving the -d option to sort explicitly, and also tried explicitly rewriting the files with dictionary sort -- neither worked.
comm --check-order <(sort file1.txt) <(sort file2.txt)

from https://unix.stackexchange.com/a/186101/187419 did not return any order error; it ran as usual giving two output columns.
This solution for a problem very close to mine is also not working.
Thinking that it might be because of some additional characters in the file, I also tried the solution mentioned here to do :set list in vim.
Just to test if sort is causing issues, I deliberately sorted the test files I made (with which comm worked earlier) numerically and comm still worked.
I tried the solutions I could find, to no avail. Any other suggestions?
| comm command behaving strangely |
Assuming that both file1 and file2 are sorted (otherwise join won't work):
diff -u file1 file2 |
grep -E "^[+-]($(echo $(join -o0 file1 file2) | tr ' ' '|'))"Explanation:
The join command will output the join field that occurs in both files (i.e. the first word of the line which is the same in both files), one on each line. We echo this through tr, replacing all spaces with a pipe (|). The reason for doing the slightly convoluted echo (and not just piping the result from join directly through tr) is that the output from join will have a newline at the end of it which we do not want to replace with a pipe.
For the example files (the ones that were originally given by the OP before his edit of the question), the join, echo, tr thingy will produce bar|foo. This is then used as part of an extended regular expression in grep -E to filter through the output of diff -u.
The output of the command line is:
-bar c d
+bar x y |
I'd like to print a list of lines where the first word in two files is identical, and the rest of the words are not. Some complicated mess with comm, grep and cut would be possible, but hopefully there's a simpler way.
Edit: I've managed to slap together some working code. Example tests:
$ cat file1
a 1 E
b 2 F
c 3 G

$ cat file2
a M X
b 2 Y
c 3 G

$ difff 1 file1 file2 # Differences in fields 2+3
1,2c1,2
< a 1 E
< b 2 F
---
> a M X
> b 2 Y

$ difff 1-2 file1 file2 # Differences in field 3 only
1c1
< b 2 F
---
> b 2 Y

Edit 2: The speed is now bearable (compares two files of 1800 and 8700 lines in half a second).
| Diff similar lines |
You don't need any of that, just use diff -qr dir1 dir2. For example:
$ tree
.
├── dir1
│ ├── file1
│ ├── file3
│ ├── file4
│ ├── file6
│ ├── file7
│ ├── file8
│ └── subdir1
│ ├── dsaf
│ ├── sufile1
│ └── sufile3
└── dir2
├── file1
├── file2
├── file3
├── file4
├── file9
└── subdir1
├── sufile1
└── sufile34 directories, 16 filesIf I now run diff -qr (-r for "recursive" and -q to only report when the files differ, and not show the actual differences) on the two directories, I get:
$ diff -qr dir1/ dir2/
Only in dir2/: file2
Only in dir1/: file6
Only in dir1/: file7
Only in dir1/: file8
Only in dir2/: file9
Only in dir1/subdir1: dsaf

That said, the way to get a list of files is find:
$ find dir1 -type f
dir1/subdir1/dsaf
dir1/subdir1/sufile1
dir1/subdir1/sufile3
dir1/file6
dir1/file1
dir1/file8
dir1/file4
dir1/file7
dir1/file3Then, you can remove the dir1/ and dir2/ using sed, and compare the output of two directories using process substitution in a shell that supports it:
$ comm -3 <(find dir1 -type f | sed 's|dir1/||' | sort) <(find dir2 -type f | sed 's|dir2/||' | sort)
file2
file6
file7
file8
file9
subdir1/dsaf

Note that this assumes file names with no newline characters. If you need to handle those, just use the diff -r approach above.
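With GNU find, the sed step can also be avoided: -printf '%P\n' prints each path relative to the starting point (a sketch of the same comparison):

comm -3 <(find dir1 -type f -printf '%P\n' | sort) <(find dir2 -type f -printf '%P\n' | sort)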
|
Why
I have two folders that should contain the exact same files; however, when I look at the number of files, they are different. I would like to know which files/folders are present in one but not the other. My thinking is I will make a list of all the files and then use comm to find differences between the two folders.
Question
How do I recursively make a list of files and folders in the format /path/to/dir and /path/to/dir/file?
Important notes
OS: Windows 11, subsystem Ubuntu 20.04.4 LTS
Locations folders: One network drive, one local
Size of folders: ~2tb each
| Recursively list path of files only |
GNU comm (as of GNU coreutils 8.25) now has a -z/--zero-terminated option for that.
For older versions of GNU comm, you should be able to swap NUL and NL:
comm -13 <(cd dir1 && find . -type f -print0 | tr '\n\0' '\0\n' | sort) \
<(cd dir2 && find . -type f -print0 | tr '\n\0' '\0\n' | sort) |
  tr '\n\0' '\0\n'

That way comm still works with newline-delimited records, but with actual newlines in the input encoded as NULs, so we're still safe with filenames containing newlines.
You may also want to set the locale to C because on GNU systems and most UTF-8 locales at least, there are different strings that sort the same and would cause problems here¹.
That's a very common trick (see Invert matching lines, NUL-separated for another example with comm), but needs utilities that support NUL in their input, which outside of GNU systems is relatively rare.

¹ Example:
$ touch dir1/{①,②} dir2/{②,③}
$ comm -12 <(cd dir1 && find . -type f -print0 | tr '\n\0' '\0\n' | sort) \
           <(cd dir2 && find . -type f -print0 | tr '\n\0' '\0\n' | sort)
./③
./②
$ (export LC_ALL=C
   comm -12 <(cd dir1 && find . -type f -print0 | tr '\n\0' '\0\n' | sort) \
            <(cd dir2 && find . -type f -print0 | tr '\n\0' '\0\n' | sort))
./②

(2019 edit: The relative order of ①②③ has been fixed in newer versions of the GNU libc, but you can use 🧙 🧚 🧛 instead for instance in newer versions (2.30 at least) that still have the problem, like 95% of Unicode code points)
|
Over in an answer to a different question, I wanted to use a structure much like this to find files that appear in list2 that do not appear in list1:
( cd dir1 && find . -type f -print0 ) | sort -z > list1
( cd dir2 && find . -type f -print0 ) | sort -z > list2
comm -13 list1 list2

However, I hit a brick wall because my version of comm cannot handle NULL-terminated records. (Some background: I'm passing a computed list to rm, so I particularly want to be able to handle file names that could contain an embedded newline.)
If you want an easy worked example, try this
mkdir dir1 dir2
touch dir1/{a,b,c} dir2/{a,c,d}
( cd dir1 && find . -type f ) | sort > list1
( cd dir2 && find . -type f ) | sort > list2
comm -13 list1 list2

Without NULL-terminated lines the output here is the single element ./d that appears only in list2.
I'd like to be able to use find ... -print0 | sort -z to generate the lists.
How can I best reimplement an equivalent to comm that outputs the NULL-terminated records that appear in list2 but that do not appear in list1?
| Using comm with NULL-terminated records |
For comm to work properly, both files have to be sorted lexicographically, not numerically. You may sort your files before calling comm using
sort -o file1 file1
sort -o file2 file2

Then:
$ comm -23 file1 file2
4
8

Or, you may sort the files at the same time as you call comm, if your shell supports process substitutions:
$ comm -23 <( sort file1 ) <( sort file2 )
4
8 |
I have 2 different files:
File 1
2
4
6
8
10
12

File 2
2
3
5
6
10
12

I want to compare the 2 files and get the data which is in File 1 but not in File 2:
Output
4
8I am using below command but not getting desired output-comm -23 file1 file2 | Comparing Data between 2 different files in Unix |
Not being able to presort the files isn’t a problem:
comm -13 <(sort fileA) <(sort fileB)

This gives
1199.com
1299.com
www2.1329.com

with your examples, assuming each host is on a separate line. -13 tells comm to drop column 1 (lines unique to the first file) and column 3 (lines common to both files), leaving only lines unique to the second file.
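If pre-sorting really is off the table, a grep-based variant reaches the same result without it and preserves file B's original order (a sketch): -f fileA takes the patterns from file A, -F and -x force exact whole-line matches, and -v keeps the lines of file B that match none of them.

grep -vxFf fileA fileB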
|
Good day everyone,
I know there are a lot of similar questions already answered, but I can't find a satisfying answer and it drives me nuts.
I have two files which both contain hostnames: one that holds all the ones opened to the Internet, the other logs all the scan results of ALL our hosts, opened to the Internet or not.
File A (1111.com,1112.com,www.1113.com,1114.com)
File B (1111.com,1199.com,1299.com,www2.1329.com)
My goal is to print a file that would contain ONLY the hosts that are exclusively in file B. I tried diff and comm but I cannot presort the files, as the entries are sometimes a little bit different.
Does anyone have a solution?
| Print only what is exclusive to a file compared to another in Bash |