456,343
I have a CentOS server where I am logging in as the user jenkins, but the user name appears as root instead of jenkins:

[root@centos-7-1 ~]# sudo su - jenkins
Last login: Sat Jul 14 20:21:16 UTC 2018 on pts/0
[root@centos-7-1 ~]# hostname
centos-7-1

I checked the group and passwd files and found this; I am not sure if it is somehow related to the problem:

[root@centos-7-1 etc]# cat group | grep jenkins
jenkins:x:993:
[root@centos-7-1 ~]# grep jenkins /etc/passwd
jenkins:x:996:993:Jenkins Automation Server:/var/lib/jenkins:/bin/false
[root@centos-7-1 ~]#

When I execute whoami after switching users I get:

[root@centos-7-1 ~]# su - jenkins
Last login: Sat Jul 14 20:33:18 UTC 2018 on pts/0
[root@centos-7-1 ~]# whoami
root
Ah, this is simple. Your jenkins user is defined in /etc/passwd like this:

jenkins:x:996:993:Jenkins Automation Server:/var/lib/jenkins:/bin/false

See that last field, which says /bin/false? There isn't a valid shell defined for the jenkins user, so the session immediately terminates and you're returned to the root shell you started with.
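If you do need an interactive session as jenkins, you can override the login shell for that one invocation instead of editing /etc/passwd; for example:

sudo -u jenkins /bin/bash      # start bash directly as jenkins
su -s /bin/bash - jenkins      # or tell su which shell to use instead of /bin/false

After either of these, whoami reports jenkins.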
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/456343", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/300649/" ] }
456,364
I'm pulling VIN specifications from the National Highway Traffic Safety Administration API for approximately 25,000,000 VIN numbers. This is a great deal of data, and as I'm not transforming the data in any way, curl seemed like a more efficient and lightweight way of accomplishing the task than Python (seeing as Python's GIL makes parallel processing a bit of a pain). In the below code, vins.csv is a file containing a large sample of the 25M VINs, broken into chunks of 100 VINs. These are being passed to GNU Parallel, which is using 4 cores. Everything is dumped into nhtsa_vin_data.csv at the end.

$ cat vins.csv | parallel -j10% curl -s --data "format=csv" \
--data "data={1}" https://vpic.nhtsa.dot.gov/api/vehicles/DecodeVINValuesBatch/ \
>> /nas/BIGDATA/kemri/nhtsa_vin_data.csv

This process was writing about 3,000 VINs a minute at the beginning and has been getting progressively slower with time (currently around 1,200/minute).

My questions:

- Is there anything in my command that would be subject to increasing overhead as nhtsa_vin_data.csv grows in size?
- Is this related to how Linux handles >> operations?

UPDATE #1 - SOLUTIONS

First solution per @slm - use parallel's tmp file options to write each curl output to its own .par file, and combine at the end:

$ cat vins.csv | parallel \
--tmpdir /home/kemri/vin_scraper/temp_files \
--files \
-j10% curl -s \
--data "format=csv" \
--data "data={1}" https://vpic.nhtsa.dot.gov/api/vehicles/DecodeVINValuesBatch/ > /dev/null

cat <(head -1 $(ls *.par|head -1)) <(tail -q -n +2 *.par) > all_data.csv

Second solution per @oletange - use --line-buffer to buffer output to memory instead of disk:

$ cat test_new_mthd_vins.csv | parallel \
--line-buffer \
-j10% curl -s \
--data "format=csv" \
--data "data={1}" https://vpic.nhtsa.dot.gov/api/vehicles/DecodeVINValuesBatch/ \
>> /home/kemri/vin_scraper/temp_files/nhtsa_vin_data.csv

Performance considerations

I find both of the solutions suggested here very useful and interesting, and will definitely be using both versions more in the future (both for comparing performance and for additional API work). Hopefully I'll be able to run some tests to see which one performs better for my use case. Additionally, running some sort of throughput test like @oletange and @slm suggested would be wise, seeing as the likelihood of the NHTSA being the bottleneck here is non-negligible.
Nothing in the command itself gets more expensive as nhtsa_vin_data.csv grows: >> opens the file in append mode, and an append costs the same whether the file holds a megabyte or many gigabytes, so Linux's handling of >> is unlikely to explain the slowdown. (With several curl processes appending to the same file concurrently, their output can interleave mid-line; that is the problem parallel's --files and --line-buffer options address, but it affects correctness, not speed.) The progressive drop from 3,000 to 1,200 VINs/minute is far more likely to come from the remote end, e.g. the NHTSA API throttling or queueing your requests, so the throughput test you mention at the end is the right way to confirm where the bottleneck actually is.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/456364", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/300663/" ] }
456,425
On a Debian Stretch and testing/Buster system with a current kernel and installed microcode I still see meltdown and spectre listed as bugs in /proc/cpuinfo . However, running the spectre-meltdown-checker shows not vulnerable. So I'm wondering what /proc/cpuinfo does show. Are these just the vulnerabilities for this cpu and will those always be listed despite having a patched system?
The intent of the "bugs" field in /proc/cpuinfo is described in the commit message which introduced it:

x86/cpufeature: Add bug flags to /proc/cpuinfo

Dump the flags which denote we have detected and/or have applied bug workarounds to the CPU we're executing on, in a similar manner to the feature flags. The advantage is that those are not accumulating over time like the CPU features.

Previously, hardware bugs that the kernel detected were listed as separate features (e.g. the infamous F00F bug, which has its own f00f_bug entry in /proc/cpuinfo on 32-bit x86 systems). The "bugs" entry was introduced to hold these in a single field going forwards, in the same style as x86 CPU flags.

As far as what the entries mean in practice, as you can see in the message, all that's guaranteed is that the kernel detected a hardware bug. You'll need to look elsewhere (boot messages, or specific /proc entries or /sys entries such as the files in /sys/devices/system/cpu/vulnerabilities/) to determine whether the issues are dealt with.

The usefulness of the "bugs" entries is limited in two ways. The first is that true negatives can't be distinguished from unknowns: if the field doesn't specify "cpu_meltdown", you can't know (just from the field) whether that means that the kernel doesn't know about Meltdown, or that your CPU isn't affected by Meltdown. The second is that the detection can be too simplistic; it errs on the side of caution, so it might report that your CPU is vulnerable when it isn't. Because the "detection" is table-driven, its accuracy depends on which version of the kernel you're running.

In the case of the Meltdown and Spectre bugs, the detection process which feeds the values in /proc/cpuinfo works as follows, on x86:

- if the CPU doesn't perform any speculation (486-class, some Pentium-class, some Atoms), it's not flagged as affected by Meltdown or Spectre;
- all remaining CPUs are flagged as affected by Spectre variants 1 and 2 (regardless of microcode revision etc.);
- if the CPU isn't listed as not susceptible to speculative store bypass, if its microcode doesn't claim to mitigate SSB, and if the CPU doesn't claim to mitigate SSB, then it's flagged as affected by SSB;
- if the CPU isn't listed as not susceptible to Meltdown (AMD), and if its microcode doesn't claim to mitigate Meltdown, then it's flagged as affected by Meltdown.
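On kernels recent enough to have those /sys entries, the quickest way to see detection and mitigation side by side is (a sketch; the output wording varies by kernel version):

grep . /sys/devices/system/cpu/vulnerabilities/*      # per-issue mitigation status
grep bugs /proc/cpuinfo | head -n1                    # raw detection flags

The first command prints one line per known issue, e.g. ".../meltdown:Mitigation: PTI", which answers the "is it dealt with?" question that the bugs field alone cannot.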
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/456425", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/300710/" ] }
456,481
I have a file like this:

1|2345|John|Smith
2|4563||Smith
3|5968||Doe
4|896|Rick|Lawson
5|889||Eddy

How do I count the number of rows that have data in the third column?
awk -F '|' 'length($3) { ++count } END { print count }' < input

On the sample input, it results in:

2

It works by setting the field separator to pipe, then incrementing count on lines that have a non-empty value in the 3rd field. At the end of the file, it prints the final count.
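One small hardening worth knowing: if no line has a third field at all, count stays unset and the END block prints an empty line. Forcing a numeric context avoids that:

awk -F '|' 'length($3) { ++count } END { print count+0 }' < input

This prints 0 instead of nothing when there are no matches.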
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/456481", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/300760/" ] }
456,506
For Ubuntu, is there a website/file which, for a given release, e.g. 18.04 LTS (Bionic Beaver), lists the available (default) versions of software packages? For example, I want to know which version of gcc is shipped with Ubuntu 18.04. Note that I do not have Ubuntu installed on my current system; I just need to know what is available in which version. Unfortunately I was not able to find this information in the Release Notes.
Check the Ubuntu packages website, e.g. gcc for 18.04 .
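If you have any Debian or Ubuntu machine (or container) handy, you can also query the archive from the command line with rmadison, from the devscripts package; a sketch:

rmadison -u ubuntu gcc

This lists the gcc package versions across Ubuntu releases, bionic included, without needing Ubuntu installed on the machine you care about.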
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/456506", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/215294/" ] }
456,632
I created a file using:

$ ls > ls.txt

Now I want to go to a different directory and use this file to create a bunch of empty files with the names from each line of ls.txt. What is the easiest way to do this?
You can use xargs. After changing to the new directory,

xargs -d '\n' touch -- < path/to/ls.txt

Note that if there were originally any directories in the ls output, these will be created as plain files in the new location.
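Note that -d is a GNU xargs extension; on a system without it, a plain read loop handles names with spaces just as safely:

while IFS= read -r name; do
    touch -- "$name"
done < path/to/ls.txt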
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/456632", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80787/" ] }
456,648
I have a file with strings like the one below.

F1B308F2B3094F3B310F4B317CF5B312F6BC313DF7B315

The strings are demarcated by markers, in this case "F" followed by a number. Here the markers are F1, F2, F3, F4, F5, F6, and F7. I would like to print the 5 characters following F2 and the 6 characters following F6, separated by a space, so that the result is

B3094 BC313D

Here is my attempt, though it prints on two lines instead of one. I would like both values on one line.

$ echo F1B308F2B3094F3B310F4B317CF5B312F6BC313DF7B315 | \
awk '{match($0,/F2/); print substr($0, RSTART +2, RLENGTH +3);} \
{match($0,/F6/); print substr($0, RSTART +2,RLENGTH +4);}'
You're super close to a working solution. Here is one way to do it (formatted for readability):

awk '{
    match($0,/F2/); a=substr($0, RSTART +2, RLENGTH +3);
    match($0,/F6/); b=substr($0, RSTART +2, RLENGTH +4);
    print a" "b
}'

In this case, I'm taking your two substr() functions and assigning them to variables instead of printing them directly, and then printing them together at the end. By printing them in a single print call, awk only adds a single newline character at the end of the line, rather than after each part of the line, which is what was splitting your result into two lines.

bash:~$ echo F1B308F2B3094F3B310F4B317CF5B312F6BC313DF7B315 | awk '{match($0,/F2/); a=substr($0, RSTART +2, RLENGTH +3); match($0,/F6/); b=substr($0, RSTART +2,RLENGTH +4); print a" "b}'
B3094 BC313D
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/456648", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/299390/" ] }
456,692
Of course a file can be open, or not open. However, a file descriptor by definition refers to an open file (right?). (Well, except when it refers to something besides a file, like a pipe or what have you. But it's still open.) I've encountered the phrase "open file descriptor" several times. I believe this is redundant, and that in fact there is no other kind of file descriptor besides an open file descriptor—but I would like to verify this. Is a file descriptor ever in any other state besides "open"? (When it's closed, doesn't it cease to exist?)
A program executes this:

close(0);

The standard input file descriptor has not changed value, but it is no longer referencing an open file description. It is available for re-allocation. Subsequent attempts to use it in, say, read() will result in EBADF, because whilst it is still a file descriptor, it is not an allocated one that references an open file description. It is a bad file descriptor.

Further reading:

- "close()". The Open Group Base Specifications Issue 7. IEEE 1003.1:2008. The Open Group, 2018.
- "read()". The Open Group Base Specifications Issue 7. IEEE 1003.1:2008. The Open Group, 2018.
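You can watch the same thing happen from a shell: close descriptor 0 and the next attempt to read from it fails (the exact message wording varies by utility):

$ bash -c 'exec 0<&-; cat'
cat: -: Bad file descriptor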
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/456692", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135943/" ] }
456,698
I have a function whose behaviour changes depending on an argument. I know I can do:

function foo {
    PARAM1=$1
    PARAM2="$2"
    VAR=$3

    if [[ -z "$VAR" ]]; then
        # code here
    else
        # other code here
    fi
}

I was wondering if there is a more appropriate approach for bash. This would work, but I wouldn't like to have something like

foo "x" "y" "blah"
foo "x" "y" "true"
foo "y" "y" "1"

all be equivalent. Is there a more Bash-suitable approach?
You may supply a command line option to your function. Using command line options that take no arguments is a common way of providing binary/boolean values ("on/off", "true/false", "enable/disable") to shell scripts, shell functions and utilities in general.

foo () {
    local flag=false

    OPTIND=1
    while getopts 't' opt; do
        case $opt in
            t) flag=true ;;
            *) echo 'Error in command line parsing' >&2
               exit 1
        esac
    done
    shift "$(( OPTIND - 1 ))"

    local param1="$1"
    local param2="$2"

    if "$flag"; then
        # do things for "foo -t blah blah"
    else
        # do things for "foo blah blah"
    fi
}

The option -t acts like a boolean flag for the user. Using it would set flag inside the function to true (changing it from its default value of false). The -t option would be used as the first argument to the function.

Calling the function would be done using

foo "some value" "some other value"

or

foo -t "some value" "some other value"

where the latter call would set the flag variable in the function to true.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/456698", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/264385/" ] }
456,702
I had trouble installing DBAN (see other thread) on my Mac, so I was wondering if there is a Linux distro I can use to destroy all of the data on a laptop (I've been given it on the condition that I remove the data), as UNetbootin seems to work better with Linux distros.
You don't need a special-purpose distro for this; any live Linux image that UNetbootin can write (Ubuntu, Mint, etc.) will do. Boot the laptop from the live USB, identify the internal disk with lsblk, make sure none of its partitions are mounted, and overwrite the whole device. For example, assuming the internal disk is /dev/sda (double-check the device name first; this is irreversibly destructive):

sudo shred -v -n 1 /dev/sda
# or
sudo dd if=/dev/zero of=/dev/sda bs=1M status=progress

A single full overwrite is generally considered sufficient for modern drives. If you'd rather have a dedicated wiping tool in the style of DBAN, nwipe (a maintained fork of DBAN's wiping engine) is packaged in several distributions and can be run from a live session.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/456702", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117910/" ] }
456,704
I have a vertical bar delimited file as follows:

968666|JOHN|M|DOB
145465|DAVID|M|NULL
898563|SAUL|NULL|DOB
968666|JOHN|F|NULL

How do I delete the lines that have NULL in the 4th column?

Expected output:

968666|JOHN|M|DOB
898563|SAUL|NULL|DOB
With awk, keep only the lines whose fourth |-separated field is not NULL:

awk -F'|' '$4 != "NULL"' file

-F'|' sets the field separator to the vertical bar, and a condition with no action block prints every line for which it is true. Note that this correctly keeps the third line of your sample, where NULL sits in the third column, which matches your expected output.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/456704", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/300760/" ] }
456,722
I have a command that takes a file name as a variable, and I have given it a name pattern, file_*_123.txt:

Script.sh file_*_123.txt

Now if more than one file matches the pattern, it only executes for the first file. I want this script to be executed for each file matching the name pattern, e.g. file_1_123.txt, file_2_123.txt, file_3_123.txt. Some kind of for loop? However, because the number of files can be 1 or more than 1, I'm not sure how to iterate through them. Can anyone please suggest a solution?
This would either involve calling your script in a loop, or arranging for your script to loop over its command line arguments.

The first option:

for filename in file_*_123.txt; do
    ./Script.sh "$filename"
done

This would call your script once for each file that matches the pattern file_*_123.txt.

The second option: You would modify the script and wrap the body of it in a loop, such as

#!/bin/sh
for filename do
    # here you do whatever you need to do with "$filename"
done

or,

#!/bin/sh
for filename in "$@"; do
    # here you do whatever you need to do with "$filename"
done

(these two variants of the loop are equivalent)

This would cause the script to loop over its command line arguments. You would then run your script as you have shown in the question:

./Script.sh file_*_123.txt
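One caveat with the first option: if nothing matches the pattern, the shell passes file_*_123.txt through literally, and the loop body runs once with that literal string. In bash you can ask for an empty expansion instead:

shopt -s nullglob            # unmatched globs expand to nothing
for filename in file_*_123.txt; do
    ./Script.sh "$filename"
done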
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/456722", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/300969/" ] }
456,723
I want to shrink a 250GB ext4 partition down to 200GB with resize2fs. The df tool says the partition is about 40% filled up. So, I guess I can, at least, shrink the partition to that level. Yet, when I try to shrink the partition with resize2fs , it fails. It says that the shrunk partition size is "smaller than minimum". I think it’s because the partition still has traces of files that were previously deleted, so resize2fs thinks they’re still on the partition, and shrinking could mean losing those data, thus refusing to shrink the partition. So my questions are: How can I make resize2fs realize the partition is indeed 60% empty, so that I can shrink it? Is there any way to reallocate used blocks, or something like that? NOTE: I'm a moderately experienced Xubuntu user.
resize2fs doesn't care about your deleted data. If it's refusing to shrink your file system down to 200 GiB, it's because it reckons it needs more room, either to store the file system structure after the resize, or to perform the resize operation itself. You can see the details here (assuming you can read C); in summary:

- the file system needs enough room to store the required number of group descriptors, given the number of inodes;
- it needs enough room to store the data in the files;
- the resize operation needs extra inode tables for its background operations to ensure the resize will successfully complete;
- each inode group incurs some overhead, which must be accounted for (and can affect the way data is split across groups, requiring more groups and thus more overhead);
- space needs to be reserved to allow the extent tree to grow if necessary, and this can cause large amounts of overhead (especially when resizing, if there's a lot of data in the part of the file system which needs to be cleared).

Some extra fudging overhead is added too (file system tools tend to play it very safe). You can find out how small your file system can be made by running resize2fs -P. resize2fs -M will automatically make it as small as possible.
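Put together, an offline shrink typically looks like this (device name is just an example; the file system must be unmounted, and resize2fs insists on a clean fsck first):

umount /dev/sdb1
e2fsck -f /dev/sdb1          # forced check, required before resizing
resize2fs -P /dev/sdb1       # print the estimated minimum size
resize2fs /dev/sdb1 200G     # shrink the file system to 200 GiB

Remember this only shrinks the file system; if you then shrink the partition itself (e.g. with parted), never make the partition smaller than the file system.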
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/456723", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/300971/" ] }
456,726
I want to use the English language with German locale settings. Right now my system runs with the following setup (configured during the installation procedure in the Debian Expert Installer):

- Language: English - English (Default)
- Country, territory or area: other -> Europe -> Austria
- Country to base default locale settings on: United States - en_US.UTF-8
- Keyboard: German

My question now is: How can I preserve the English language but switch the current locale (United States - en_US.UTF-8) to the desired German locale (de_DE.UTF-8)? During the installation procedure this was not possible because an error occurred ("Invalid language/locale settings combination detected").
en_DE doesn't exist as a default locale, so you can't select English localised for German-speaking countries as a locale during installation. (Why should one use update-locale instead of directly setting LANGUAGE? describes the checks involved in choosing a locale.) There are two approaches to achieve what you're after. One is to create a new locale with your settings; see How to (easily) be able to use a new en_** locale? for details. The other is to set up your locale settings in a finer-grained fashion, using the various LC_ variables; for example:

export LANG=en_US.UTF-8
export LC_MONETARY=de_DE.UTF-8
export LC_TIME=de_DE.UTF-8

or, if you want German to be the default except for messages:

export LANG=de_DE.UTF-8
export LC_MESSAGES=en_US.UTF-8

(and unset any other conflicting LC_ variables, in particular LC_ALL, which overrides all other settings). You can check your settings using the locale program; see How does the "locale" program work? for details.
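On Debian, to make such a mix permanent system-wide rather than per-shell, write the same variables to /etc/default/locale with update-locale (make sure both locales have been generated first, e.g. via dpkg-reconfigure locales):

sudo update-locale LANG=de_DE.UTF-8 LC_MESSAGES=en_US.UTF-8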
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/456726", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/241507/" ] }
456,788
Ubuntu 16.04
Bash version 4.4.0
nginx version: nginx/1.14.0

How can I test the Nginx configuration files in a Bash script? At the moment I use -t when I'm in a shell:

$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

But I would like to do this in a script.
Use the exit status. From the nginx manpage:

Exit status is 0 on success, or 1 if the command fails.

and from http://www.tldp.org/LDP/abs/html/exit-status.html:

$? reads the exit status of the last command executed.

An example:

[root@d ~]# /usr/local/nginx/sbin/nginx -t; echo $?
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
0
[root@d ~]# echo whatever > /usr/local/nginx/nonsense.conf
[root@d ~]# /usr/local/nginx/sbin/nginx -t -c nonsense.conf; echo $?
nginx: [emerg] unexpected end of file, expecting ";" or "}" in /usr/local/nginx/nonsense.conf:2
nginx: configuration file /usr/local/nginx/nonsense.conf test failed
1

A scripted example:

#!/bin/bash
/usr/local/nginx/sbin/nginx -t 2>/dev/null > /dev/null
if [[ $? == 0 ]]; then
    echo "success"
    # do things on success
else
    echo "fail"
    # do whatever on fail
fi
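Since if tests a command's exit status directly, the intermediate $? isn't strictly needed; the same script can be written a little more idiomatically as (same example path):

#!/bin/bash
if /usr/local/nginx/sbin/nginx -t >/dev/null 2>&1; then
    echo "success"
else
    echo "fail"
fi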
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/456788", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210789/" ] }
456,794
I want to use dc to handle some base-16 numbers with hexadecimal points, but I'm running into precision problems. For example, below I'm multiplying F423F.FD by 100, both hex. The expected answer is F423FFD; instead it gives F423FFA.E1, which is close but not accurate enough even after rounding.

$ dc
16 d i o F423F.FD 100 * p
F423FFA.E1

I read that dc was an unlimited precision calculator, and this is not a large number by any means. Is there something I'm doing wrong?

Thanks for your answers. Given the issues with dc, I bit the bullet and wrote my own parser for real numbers in other bases. If anyone is interested in the code, I can post it here.
Note that just printing the original number shows it is rounded:

$ dc <<<'16 d i o F423F.FD p'
F423F.FA

You can get around it by adding lots of trailing zeroes for more precision:

$ dc <<<'16 d i o F423F.FD000000 100 * p'
F423FFD.0000000
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/456794", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40428/" ] }
456,843
I need to parse a file, and I'm looking to print a segment of data between two specific lines, from a "range start" to a "range end", but only if the "range end" is present. If the source file is:

[This is the start] of some data
this is information
this is more information
This is does not contain the ending required
[This is the start] of some other data
this is info I want
this is info I want
[This is the ending I was looking for]

It should print:

[This is the start] of some other data
this is info I want
this is info I want
[This is the ending I was looking for]

Using grep I've been able to find the data I need and print upwards, but only by a fixed number of lines. Given that the number of lines of data is not constant, is there a way I can use grep or sed to work up from the end line, find the next occurrence of a given string, and capture the specific range I want? The "range start" of the data segment should be printed along with any data between the "range start" and "range end" points, and the "range end" match is what determines whether the whole range of lines should be printed at all. If a range (data segment) does not have the specified end, it should not be printed. If multiple segments have an end point, then all segments containing an end should be printed. No case exists where the input file will have an end without a start, or multiple ends for a single start.

"Print lines between (and including) two patterns" does not solve my problem, as it starts printing on the first line matched and keeps printing until the first end segment is found. I need to print only the segments that contain the specified end statement.
Using sed:

$ sed -n '/This is the start/{h;d;}; H; /This is the ending/{x;p;}' file
[This is the start] of some other data
this is info I want
this is info I want
[This is the ending I was looking for]

Annotated sed script:

/This is the start/{   # We have found a start
    h;                 # Overwrite the hold space with it
    d;                 # Delete from pattern space, start next cycle
};
H;                     # Append all other lines to the hold space
/This is the ending/{  # We have found an ending
    x;                 # Swap pattern space with hold space
    p;                 # Print pattern space
};

What the script does is to save all lines into the "hold space" (a general purpose buffer in sed), but as soon as we find a "start line", we reset that space. When an "end line" is found, the saved data is printed. This breaks if an "end line" is found before a "start line", and possibly also if two "end lines" are found with no "start line" in-between.

An awk program that goes through the same procedure as the above sed program:

$ awk '/This is the start/ { hold = $0; next }
       { hold = hold ORS $0 }
       /This is the ending/ { print hold }' file

(identical output as above)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/456843", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/301063/" ] }
456,849
I've just installed Kafka following a tutorial. It doesn't start because of a shell script error:

$ sudo kafka-server-start.sh /etc/kafka.properties
/opt/Kafka/kafka_2.12-1.1.0/bin/kafka-run-class.sh: line 252: [[: 10 2018-04-17: syntax error in expression (error token is "2018-04-17")
[0.000s][warning][gc] -Xloggc is deprecated. Will use -Xlog:gc:/opt/Kafka/kafka_2.12-1.1.0/bin/../logs/kafkaServer-gc.log instead.
Unrecognized VM option 'PrintGCDateStamps'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.

Checking line 252:

if [[ "$JAVA_MAJOR_VERSION" -ge "9" ]] ; then

I've added an echo to get more info:

JAVA_MAJOR_VERSION=$($JAVA -version 2>&1 | sed -E -n 's/.* version "([^.-]*).*"/\1/p')
echo "JAVA_MAJOR_VERSION: $JAVA_MAJOR_VERSION"
if [[ "$JAVA_MAJOR_VERSION" -ge "9" ]] ; then

The output now shows:

JAVA_MAJOR_VERSION: 10 2018-04-17

My java:

$ java -version
openjdk version "10.0.1" 2018-04-17

Question: How should I change

JAVA_MAJOR_VERSION=$($JAVA -version 2>&1 | sed -E -n 's/.* version "([^.-]*).*"/\1/p')

to fix my Kafka starter?
The problem is that your sed expression doesn't match to the end of the line. For Java 10 the version banner is

openjdk version "10.0.1" 2018-04-17

and the pattern .* version "([^.-]*).*" only matches up to the closing quote, so the substitution yields 10 followed by the untouched remainder of the line, 2018-04-17; the [[ ... -ge "9" ]] test then chokes on 10 2018-04-17. Make the pattern consume the whole line, for example:

JAVA_MAJOR_VERSION=$($JAVA -version 2>&1 | sed -E -n 's/.* version "([0-9]+).*/\1/p')

Now only the leading digits of the version are kept (10 here, 1 on Java 8), which is a plain integer the arithmetic comparison on line 252 can handle.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/456849", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/179910/" ] }
456,997
I'm having an issue mounting a shared NAS drive that is hosted on a Windows 2000 server. This is a drive that I'm certain I have access to, and I frequently access it from a Windows 7 machine and a Windows Server 2008 machine. Now, I'm attempting to mount this drive from a RHEL7 machine, but I'm having some issues.

What I've done:

mkdir /mnt/neededFolder
mount -t cifs //DNS.forMyDrive.stuff/neededFolder /mnt/neededFolder -o username=myUserId,password=myPassword,domain=myDomain

What I expected: to be able to access the folder at /mnt/neededFolder.

What actually happened: the error I'm receiving (partially shown in the subject line here) is

mount error(115): Operation now in progress
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

What the log says (dmesg output):

[1712257.661259] CIFS VFS: Error connecting to socket. Aborting operation.
[1712257.662098] CIFS VFS: cifs_mount failed w/return code = -115

We can all see that there is a connection issue; that is obvious. I know both machines are connected to the network. What can I try next to get this drive mounted?

EDIT: It may be worth noting that I am able to ping the DNS name and the raw IP of the remote location that I am trying to mount.
The issue ended up being that the route to the NAS was missing. Once the route was added, I was able to mount the drive with ease.

route add -net x.x.x.x netmask x.x.x.x gw x.x.x.x

Hopefully this helps someone else in the future!
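For reference, the modern iproute2 equivalent of that command (same placeholder addresses; the prefix length must match your netmask) is:

ip route add x.x.x.x/24 via x.x.x.x
ip route show               # confirm the route before retrying the mount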
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/456997", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68573/" ] }
457,166
On Linux Debian 9 I am able to resolve a specific local domain, e.g. my.sample-domain.local, using some commands like nslookup or host, but not with some other commands like ping or the Postgres client psql. I think stuff like Network Manager has set up my DNS resolver correctly (the content of /etc/resolv.conf), so I am not sure why this is happening. I checked with a colleague using Windows 10 and they don't have any custom entry in their hosts file, although in their case the Windows version of ping and their database UI for Postgres work as expected, resolving the domain into an IP address. Please see below:

$ ping my.sample-domain.local
ping: my.sample-domain.local: Name or service not known

$ host my.sample-domain.local
my.sample-domain.local has address <THE_IP_REPRESENTING_THE_LOCAL_DOMAIN>

$ ping -c 5 <THE_IP_REPRESENTING_THE_LOCAL_DOMAIN>
PING <THE_IP_REPRESENTING_THE_LOCAL_DOMAIN> (<THE_IP_REPRESENTING_THE_LOCAL_DOMAIN>) 56(84) bytes of data.
64 bytes from <THE_IP_REPRESENTING_THE_LOCAL_DOMAIN>: icmp_seq=1 ttl=128 time=1.16 ms
64 bytes from <THE_IP_REPRESENTING_THE_LOCAL_DOMAIN>: icmp_seq=2 ttl=128 time=0.644 ms
64 bytes from <THE_IP_REPRESENTING_THE_LOCAL_DOMAIN>: icmp_seq=3 ttl=128 time=0.758 ms
64 bytes from <THE_IP_REPRESENTING_THE_LOCAL_DOMAIN>: icmp_seq=4 ttl=128 time=0.684 ms
64 bytes from <THE_IP_REPRESENTING_THE_LOCAL_DOMAIN>: icmp_seq=5 ttl=128 time=0.794 ms

--- <THE_IP_REPRESENTING_THE_LOCAL_DOMAIN> ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4056ms
rtt min/avg/max/mdev = 0.644/0.808/1.160/0.183 ms

$ nslookup my.sample-domain.local
Server:  <THE_IP_REPRESENTING_THE_NAMESERVER>
Address: <THE_IP_REPRESENTING_THE_NAMESERVER>#53

Non-authoritative answer:
Name: my.sample-domain.local
Address: <THE_IP_REPRESENTING_THE_LOCAL_DOMAIN>

$ cat /etc/resolv.conf
domain <AN_INTERNAL_DOMAIN>
search <AN_INTERNAL_DOMAIN>
nameserver <THE_IP_REPRESENTING_THE_NAMESERVER>
nameserver <ANOTHER_IP_REPRESENTING_THE_NAMESERVER>

EDIT: Meanwhile I realized there is an Ubuntu 16 virtual machine in the same office LAN, so I logged into it and tried the ping command, which works there. That Ubuntu VM does not have any particular custom setting in /etc/hosts (the same as my Debian 9 laptop, whose /etc/hosts is not customized either). Both /etc/resolv.conf files look similar (some shared domains/IPs, some other IPs for the same domain). However the file /etc/nsswitch.conf is different, so I think there is something going on with mdns4_minimal and the order of host resolution, like mdns4_minimal coming before dns:

hosts: files mdns4_minimal [NOTFOUND=return] dns

and on Ubuntu:

hosts: files dns

EDIT 2: Both the Ubuntu 16 VM and my Debian 9 laptop are able to resolve that .local domain using the dig command.
host and nslookup perform DNS lookups; however, most applications use glibc's Name Service Switch to decide how host names are looked up. Your /etc/nsswitch.conf might enable mDNS, which might cause the issues when resolving .local names. You could change the order in which lookups are made, or just remove the mDNS service if you think you won't need it.

Your nsswitch.conf has mdns4_minimal, which does mDNS lookup (for .local names). The [NOTFOUND=return] after it causes the lookup to stop, and therefore DNS is never used and your application can't resolve the host name. You could either remove the whole mdns4_minimal [NOTFOUND=return], so mDNS lookups are not used, or just remove the NOTFOUND action so a DNS lookup would be made should the mDNS lookup fail.

For further details, I recommend checking out the Name Service Switch documentation.
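Concretely, the hosts line from the question would become one of these (edit /etc/nsswitch.conf as root; take a backup first):

hosts: files dns                       # drop mDNS entirely
hosts: files mdns4_minimal dns         # or fall through to DNS when mDNS finds nothing

The change is picked up by newly started processes, so re-run ping or psql afterwards to verify.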
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/457166", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/255367/" ] }
457,179
I am working on some custom OpenWrt compilation. Some of my scripts ping to test the connection before doing their stuff:

if [ "$(ping -c 1 -w 3 8.8.8.8)" ]; then
    # do stuff
else
    echo "no connection"
fi

Some of them run other ones, and because ping takes some time, running the scripts takes longer than it should, which is a problem in some cases. I would like to make some kind of continuous loop which writes 0 or 1 to some file. From then on, scripts which use ping for connection testing would instead test what is in somefile. Is there a way to write such a script?
Do you mean something as simple as this:

while :; do
    ping -c 1 -w 3 8.8.8.8
    echo $? > /tmp/ping.status
    sleep 1
done

That will write the exit status of ping to /tmp/ping.status once a second. Then, in another script, you could have something like:

pingFailed=$(cat /tmp/ping.status)
if [ $pingFailed -ne 0 ]; then
    echo "No connection"
else
    echo "Connected!"
fi

So yes, you could do this. However, it is a really bad method of checking your connection. Obviously, there are race conditions here. That the connection was active when the first loop ran doesn't mean it is active this second. More importantly, if you read the ping.status file at the start of the script, that doesn't mean the connection will still be there at the end of it. In addition, you are spamming both your network and your CPU with continuous requests. This is really not very elegant.

A faster and simpler way of testing if your connection is up (at least on Linux) is looking at /sys/class/net/$NIC/link_mode, where $NIC is the name of your network card. For example, on my system:

## Wireless connection up
$ cat /sys/class/net/wlp3s0/link_mode
1
## Wireless connection down
$ cat /sys/class/net/wlp3s0/link_mode
0

You can write a function that checks this:

isLinkDown(){
    return $(cat /sys/class/net/wlp3s0/link_mode)
}

And you can use it in your scripts like this:

if isLinkDown; then
    echo Link Down
else
    echo Link Up
fi
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/457179", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/219571/" ] }
457,222
In my current directory, I execute the command:

ls -1

and it gives a list of the current directory contents. In the same directory, I repeat the command:

ls

and it gives me the same result, with perhaps a differently formatted output. Finally, I try to find out about the available options by typing

ls --help

and the output is:

usage: ls [-ABCFGHLOPRSTUWabcdefghiklmnopqrstuwx1] [file ...]

It looks like the last option is 1 (the digit one). Can someone explain what ls -1 does and how it differs from the standard ls command?
Yes, the formatting of the output is the only difference between ls -1 and ls without any options. From the ls manual on my system : -1 (The numeric digit "one".) Force output to be one entry per line. This is the default when output is not to a terminal. This is also a POSIX option to the ls utility . The manual for ls on your system is bound to say something similar (see man ls ). Related: is piped ls the same as ls -1?
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/457222", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/301386/" ] }
457,229
If I run smartctl -i /dev/sdb I get correct disk information, including serial number and drive model number. But this is for a disk marked as JBOD. For n drives that are RAID'ed (using an LSI RAID card in my case), the card presents the assembled volume to Linux under just /dev/sda, for example, and I make one partition, which would be sda1 and is of the expected size... fairly simple. Without having to power off my server and remove each drive to look at the sticker for model and serial information, is there a way to see each individual disk behind a RAID card that has been combined into a virtual drive, and get the basic information of any one of those RAID'ed disks?
Since you mention an LSI RAID card, I'll assume it's a MegaRAID device; in this case, you can get the information about each underlying drive by running

smartctl -i -d megaraid,0 /dev/sda

replacing /dev/sda as appropriate (it should correspond to the device node of your RAID drive as visible in the system), and 0 (increment it to see each drive). The smartctl manpage lists the different types of controllers which are supported, and the syntax used to address them.
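If you don't know how many physical drives sit behind the controller, you can simply probe a range of device IDs and pull out just the identifying fields; a sketch (IDs are not always contiguous, so probe generously):

for i in $(seq 0 15); do
    smartctl -i -d megaraid,"$i" /dev/sda 2>/dev/null \
        | grep -E 'Device Model|Serial Number'
done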
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/457229", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/154426/" ] }
457,235
Pasting some consecutive commands into the terminal stops on commands with user input, e.g.:

read VAR
echo $VAR

or

select VAR in 1 2 3; do break; done
echo $VAR

The echo $VAR is not getting pasted/executed. Having all commands on a single line works though:

read VAR; echo $VAR

But this is not preferred when more commands follow. Why is this the case, and how can I work around it? My use case is having some recurring commands in service documentation. I could of course write a script, but that is not what I intend to do, and it might not be possible on systems with read-only access.
A very comfortable way is the following. Just type this in your terminal:

( paste-your-multiline-script-here ) followed by Enter

Long description:

- In the terminal you start with (
- Optional: press Enter (only for formatting reasons)
- Now you can paste multiple lines, e.g.:
  echo hello
  echo world
- Alternative: you type/paste line by line (finishing each one with the Enter key).
- Finally, type the closing ) and hit Enter again, which will execute the whole of the pasted/entered lines.

Little working example (for pasting line by line with Enter):

anderson@tp ~ % (
subsh> echo hello
subsh> echo world
subsh> )
hello
world
anderson@tp ~ %

Little working example (for pasting a whole script):

anderson@tp ~ % (
subsh> echo hello
echo world
subsh> )
hello
world
anderson@tp ~ %

Little working example neglecting formatting (for pasting a whole script):

anderson@tp ~ % (echo hello
echo world)
hello
world
anderson@tp ~ %
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/457235", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/236063/" ] }
457,388
Let's say I want to install mysql from a script without being asked any configuration questions by apt, like what root password I want to set. I would then preset the debconf variables:

echo mysql-server-5.5 mysql-server/root_password password xyzzy | debconf-set-selections
echo mysql-server-5.5 mysql-server/root_password_again password xyzzy | debconf-set-selections

I got this from a tutorial. What is unclear to me: How did the guy find out the variable names? How did he know that he had to set mysql-server-5.5 mysql-server/root_password password and mysql-server-5.5 mysql-server/root_password_again respectively? I know I could extract the .deb package by issuing dpkg-deb -R package.deb EXTRACTDIR/, but I don't see where those variables are stored. How would I find out the debconf variables for any other package?
You can inspect what gets stored in debconf using debconf-get-selections. This is useful if you have actually done the installation already. Alternately, these settings are used in the package maintainer scripts. With the dpkg-deb command you have run, these are in the DEBIAN subdirectory of EXTRACTDIR. As an example, from lightdm:

$ grep db_ lightdm/DEBIAN -R
lightdm/DEBIAN/postrm: db_purge
lightdm/DEBIAN/prerm: db_unregister shared/default-x-display-manager
lightdm/DEBIAN/prerm: if db_get shared/default-x-display-manager; then
lightdm/DEBIAN/prerm: db_metaget shared/default-x-display-manager owners
lightdm/DEBIAN/prerm: db_subst shared/default-x-display-manager choices "$RET"
lightdm/DEBIAN/prerm: db_get shared/default-x-display-manager
lightdm/DEBIAN/prerm: if db_get "$RET"/daemon_name; then
lightdm/DEBIAN/prerm: db_fset shared/default-x-display-manager seen false
lightdm/DEBIAN/prerm: db_input critical shared/default-x-display-manager || true
lightdm/DEBIAN/prerm: db_go
lightdm/DEBIAN/prerm: db_get shared/default-x-display-manager
lightdm/DEBIAN/prerm: db_get "$RET"/daemon_name
lightdm/DEBIAN/postinst: if db_get shared/default-x-display-manager; then
lightdm/DEBIAN/postinst: if db_get "$DEFAULT_DISPLAY_MANAGER"/daemon_name; then
lightdm/DEBIAN/postinst:db_stop
lightdm/DEBIAN/config:if db_metaget shared/default-x-display-manager owners; then
lightdm/DEBIAN/config:if db_metaget shared/default-x-display-manager choices; then
lightdm/DEBIAN/config: db_subst shared/default-x-display-manager choices "$OWNERS" || :
lightdm/DEBIAN/config: db_fset shared/default-x-display-manager seen false || :
lightdm/DEBIAN/config: db_set shared/default-x-display-manager "$CURRENT_DEFAULT"
lightdm/DEBIAN/config: if db_get shared/default-x-display-manager; then
lightdm/DEBIAN/config: db_set shared/default-x-display-manager lightdm
lightdm/DEBIAN/config: db_fset shared/default-x-display-manager seen true
lightdm/DEBIAN/config: db_input high shared/default-x-display-manager || :
lightdm/DEBIAN/config: db_go || :
lightdm/DEBIAN/config:if db_get shared/default-x-display-manager; then

The various db_* functions are helper functions for handling debconf, obtained from /usr/share/debconf/confmodule. So, in the case of lightdm, shared/default-x-display-manager is an important debconf key.
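Applied to the MySQL example from the question: install the package once interactively (on any test box), then dump the answers it recorded:

sudo debconf-get-selections | grep mysql-server

Each output line is in exactly the four-field format that debconf-set-selections expects (package, question name, type, value). On Debian/Ubuntu, debconf-get-selections is in the debconf-utils package.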
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/457388", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/264975/" ] }
457,398
du -ch -- **/*.jpg | grep total

Especially, what do the -- (double dash) and ** (double asterisk) really mean? (Using the Z shell.)
The ** in zsh matches just like * , but allows for matching across / in pathnames. The pattern **/*.jpg will therefore expand to the pathname of any file that has a filename suffix of .jpg anywhere in or below the current directory. The ** pattern is available in bash as well, if enabled with shopt -s globstar . The ksh93 shell has it too, if enabled with set -o globstar . The -- prevents any pathname (matching the above pattern) that starts with a dash from being interpreted by du as a command line option. The -- stops the command line parsing of du from looking for further options. This is not dependent on the shell but is a POSIX " utility guideline " for standard utilities. The -- could be removed if the filename globbing pattern was changed to ./**/*.jpg . The command would give you the total size of all *.jpg files in or below the current directory by extracting the line with the total from the output of du (run the command without | grep total to see what du produces).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/457398", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/301538/" ] }
457,502
As far as I understand, the kernel is not a process, but rather a set of handlers that can be invoked from the runtime of another process (or by the kernel itself via a timer or something similar?). If a program hits some exception handler that requires long-running synchronous processing before it can start running again (e.g. hits a page fault that requires a disk read), how does the kernel identify that the context should be switched? In order to achieve this, it would seem another process would need to run. Does the kernel spawn a process that takes care of this by intermittently checking for processes in this state? Does the process that invokes the long-running synchronous handler let the kernel know that it should switch contexts until the handler is complete (e.g. the disk read completes)?
"The kernel is not a process." This is pure terminology. (Terminology is important.) The kernel is not a process because by definition processes exist in userland. But the kernel does have threads . "If a program hits some exception handler that requires long-running synchronous processing before it can start running again (e.g. hits a page fault that requires a disk read)" . If a userland process executes a machine instruction which references an unmapped memory page then: The processor generates a trap and transitions to ring 0/supervisor mode . (This happens in hardware.) The trap handler is part of the kernel. Assuming that indeed the memory page must be paged in from disk, it will put the process in the state of uninterruptible sleep (this means it saves the process CPU state in the process table and it modifies status field in the process entry in the table of processes), finds a victim memory page, initiates the I/O to page out the victim and page in the requested page, and invokes the scheduler (another part of the kernel) to switch userland context to another process which is ready to run. Eventually, the I/O completes. This generates an interrupt. In response to the interrupt, the processor invokes a handler and transitions to ring 0/supervisor mode. (This happens in hardware.) The interrupt handler is part of the kernel. It clears the waiting for I/O state of the process which was waiting for the memory page and marks it ready to run. It then invokes the scheduler to switch userland context to a process which is ready to run. In general, the kernel runs: In response to a hardware trap or interrupt; this includes timer interrupts. In response to an explicit system call from a user process. Most of the time, the processor is at ring 3/user mode and executes instructions from some userland process. It transitions to ring 0/supervisor mode (where the kernel lives) when an userland process makes a syscall (for example, because it wants to do some input/output operation) or when the hardware generates a trap (invalid memory access, division by zero, and so on) or when an interrupt request is received from the hardware (I/O completion, timer interrupt, mouse move, packet arrived on the network interface, etc.) To answer the question in the title, "how does the kernel scheduler know how to pre-empt a process" : the kernel handles timer interrupts. If, when a timer interrupt arrives, the schduler notices that the currently running userland process has exhausted its quantum then the process is put at the end of the running queue and another process is resumed. (In general, the scheduler takes care to ensure that all userland processes which are ready to run receive processor time fairly.)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/457502", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/162496/" ] }
457,536
For instance, if one wants to access the account bob on a machine on a local network behind a router, they would simply type: $ ssh -p xx [email protected]` However, how does ssh handle the possibility of two machines on the local network having the same username? Is there a flag to differentiate between user bob on machine A vs a different user bob on machine B, or does ssh throw an error?
Why would ssh care about reiterating usernames on different hosts? It is absolutely expected that this will happen. Hint: the root user is omnipresent, is it not? So the answer to your questions is: ssh handles it the same way everything else would handle it: by not caring about which user is being referenced until talking to the host in question. A simplified expansion on the above: The first thing that happens is that the ssh client attempts to establish a conversation with the remote ssh server. Once a communications channel is opened, the client looks to see if it's a known host (e. g. an entry is present in ~/.ssh/known_hosts ), and handle things properly if it's either an unknown host or a known host with invalid credentials (e. g. the host key had changed). Now that all that is out of the way and a line of communication is properly open between the ssh server and client, the client will say to the server "I would like to authenticate for the user bob ". Naturally, the server won't care about any other bob s on the network; only itself.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/457536", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/301615/" ] }
457,551
Some applications behave differently at a different screen resolution. Is there any way to get the system to report a different, user-specified resolution to a GUI application when starting it? By "behave differently" I mean, for example, that their unresizable window is smaller (not necessarily physically, for obvious reasons, but fewer pixels) if I first switched the monitor to a lower resolution. Something like:

~$ sudolution 800x600 unresizableapp

Or is there any method to force-resize unresizable windows?
One approach is to run the application inside a nested X server that has exactly the geometry you want, so the program genuinely sees an 800x600 screen without touching your real monitor. Xephyr (package xserver-xephyr on Debian-based systems) does this; a sketch, with unresizableapp standing in for your program:

Xephyr :1 -screen 800x600 &
DISPLAY=:1 unresizableapp

The application's window then lives inside the Xephyr window, and any screen-size queries it makes are answered with 800x600. (For headless use, the same trick works with Xvfb.) The blunter alternative is to actually switch modes around the launch with xrandr, e.g. xrandr --output <your-output> --mode 800x600, but that changes the resolution for everything on that display.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/457551", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/103363/" ] }
457,584
I am trying to change the color of text in a label on the fly at runtime. I've tried applying a CSS style, and I've tried two deprecated methods, and none of it works. Can it even be done, and if not, why is something this simple not available? Applying a CSS style on the fly partially works: when I specify

.pinkStyle {
    background-color: rgb(241, 135, 135);
    color: black;
}

at runtime, I can see the background turn pink. But the text stays white.
Oh my gosh. I'm documenting this so no one will suffer the way I suffered. If you want runtime control of your text, do not under any circumstances use Glade to set the foreground color with Edit Attributes. If you do, you have PERMANENTLY set the text color in a way that neither CSS changes, Pango markup, nor deprecated functions like gtk_widget_modify_fg can touch at runtime. You can still use CSS to change the background color of the label, but to get at the text's own color and background, I'm using gtk_label_set_markup with

<span background="#0022ff" foreground="#ff0044">

with success, AFTER deleting all attributes from all my labels in Glade. GTK is a nightmare; I've never met anything in Linux before that made me long for Windows, but this did it.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/457584", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/287253/" ] }
457,645
I have a need to convert a list of decimal values in a text file into hex format, so for example test.txt might contain:

131072
196608
262144
327680
393216
...

The output should be a list of hex values (8 hex digits, with leading zeroes):

00020000
00030000
00040000
...

The output is printed into a text file. How can I do this with Python or a Linux shell script?

EDIT #1: I missed one extra operation: I need to add 80000000 hex to each of the created hex values (arithmetic addition, applied to the already created list of hex values).
You can do this using printf and bash:

printf '%08x\n' $(< test.txt)

Or using printf and bc... just... because?

printf '%08s\n' $(bc <<<"obase=16; $(< test.txt)")

In order to print the output to a text file, just use the shell redirect > like:

printf '%08x\n' $(< test.txt) > output.txt
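For the EDIT #1 requirement (adding hex 80000000 to every value), the addition can happen in shell arithmetic before formatting; a sketch:

while read -r n; do
    printf '%08x\n' "$(( n + 0x80000000 ))"
done < test.txt > output.txt

With the sample input, 131072 (hex 00020000) becomes 80020000, and so on.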
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/457645", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138994/" ] }
457,670
I am using the newest version of netcat (v1.10-41.1), which does not seem to have an option for IPv6 addresses (as -6 was in the older versions of nc). If I type in nc -lvnp 2222 and check listening ports with netstat -punta, the server appears to be listening on port 2222 for IPv4 addresses only:

tcp        0      0 0.0.0.0:2222            0.0.0.0:*               LISTEN      2839/nc

tcp6 is not active, unlike, for example, my apache2 server:

tcp6       0      0 :::80                   :::*                    LISTEN      -
There are at least 3 or 4 different implementations of netcat, as seen on Debian:

- netcat-traditional 1.10-41, the original, which doesn't support IPv6: probably what you installed.
- netcat6, which was made to offer IPv6 (oldstable, superseded).
- netcat-openbsd 1.130-3. Does support IPv6.
- ncat 7.70+dfsg1-3, probably a bit newer since not in Debian stable, provided by nmap; does support IPv6.

I'd go for the openbsd one. Each version can have subtly different syntax, so take care.

By the way: socat is a much better tool, able to do much more than netcat. You should try it!
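For example, to get an IPv6 listener with the OpenBSD implementation (package name as on Debian):

sudo apt install netcat-openbsd
nc -6 -lv 2222

after which netstat -punta (or ss -tlnp) should show a tcp6 LISTEN entry on :::2222.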
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/457670", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288674/" ] }
457,673
I'm using a dark theme in Debian, but the Dolphin file manager just ignores it. I have seen some examples of a dark theme in Dolphin, but I can't find a way to do it. I've seen some ways to set a background image, but of course that doesn't help me, because I would have black text on a black background. From what I've seen I might need a KDE theme. I have found this, but it gives me a .qtcurve file. I looked up how to use it and this page told me to use KDE System Settings. Sounds weird, but OK, I installed the systemsettings package. But in the KDE System Settings I only had the categories "shortcuts", "network settings" and "network connectivity". So I installed kde-config-gtk-style and it did indeed add the "application style" category to the KDE settings. There I first tried setting "BlackMATE" as the GTK2 and GTK3 theme, which did nothing. Then I tried importing the downloaded theme file, which didn't work, because it expected a .tar file. So I packed the theme file into a .tar archive (which seems weird, why would I need to do that?) and imported that, which made the settings window become unresponsive and then close itself. I guess it's not the sort of tar file it expects? When I click "download GTK2 themes" or "download GTK3 themes", it stays at "initialising" and does nothing. So how do I set a dark theme in Dolphin? Do I even need the KDE settings?

Debian 9.5
Cinnamon 3.2.7
Dolphin 16.08.3
I recently switched from Debian to Manjaro and from Cinnamon to Mate, but this solution should hopefully apply to all distributions and desktop environments: Firstly, the program "qt5ct" can be used to edit the theme of programs using Qt themes instead of whatever Cinnamon, Mate, etc. use. On Manjaro I installed it with yay qt5ct , on Debian it's probably sudo apt-get install qt5ct . I selected an arbitrary dark theme ("style") in there. But that doesn't change the background image, which is still white. So I found this answer on AskUbuntu. It's pretty long, but what matters, if you just want a black background, is: Create a file somewhere that contains a custom Qt style sheet, with this content:

DolphinViewContainer > DolphinView > QAbstractScrollArea {
    background-color: black;
}

Start Dolphin like this in the future:

dolphin -stylesheet /path/to/style_sheet.qss
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/457673", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/216015/" ] }
457,721
There seem to be similar questions here , here and here but with no confirmed answer, and no answer that addresses my situation satisfactorily. Update: I've deleted Windows and reset BIOS factory settings and the problem persists. This is no longer a dual-boot specific question and has been updated. I'm trying to install Linux Mint on a Dell XPS 13 9350 with no installed Hard Drive. I also tried Ubuntu with the same results, but I'll talk specifically about Mint in this question as it's my desired distro. I have added Mint to an 8GB USB stick via Yumi. I reboot the machine and hold down F12 , then choose to boot from the USB. A second screen allows me to "Start" Linux. I start it, and then begin installing from the install icon on the desktop. After being asked about language, keyboard and WiFi I'm told that I only have 10GB of space, which is not enough to install. It appears to be trying to install on the USB drive, as this is a 256GB hard drive. Output of lsblk -f :

NAME          FSTYPE   LABEL                          UUID                    MOUNTPOINT
loop0         iso966   Linux Mint 19 Cinnamon 64-bit  2018-06-26-15-38-36-00  /cdrom
loop1         squashfs                                                        /rofs
sda
└─sda1        vfat     MULTIBOOT                      190...                  /isodevice
nvme0n1
└─nvme0n1p1   ext4                                    16639...

I have manually toggled "RAID On" to AHCI in the BIOS and that allowed me to complete the Linux install wizard, but gave me a Dell Support window message on boot about a missing OS. Since then I have reset to factory BIOS settings and I get a "Missing Hard Drive" message on boot. What can I do to both install and boot up Mint, now on a computer with no OS?
I was able to solve this with the help of a colleague, finally. It took several steps in BIOS:

 1. Disable Secure Boot.
 2. Set the SATA controller to AHCI from RAID On.
 3. Set boot mode to legacy from UEFI.

I wasn't able to figure out exactly what was wrong, but the installer seems to have installed the OS in a drive that UEFI did not auto-detect, but legacy boot mode did.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/457721", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/110181/" ] }
457,844
I find the page numbers of a multiline pattern in a pdf file, by How shall I grep a multi-line pattern in a pdf file and in a text file? and How can I search a string in a pdf file, and find the physical page number of each page where the string appears?

$ pdfgrep -Pn '(?s)image\s+?not\s+?available' main_text.pdf
49: image not
available
51: image not
available
53: image not
available
54: image not
available
55: image not
available

I would like to extract the page number only, but because the pattern is multiline, I get

$ pdfgrep -Pn '(?s)image\s+?not\s+?available' main_text.pdf | awk -F":" '{print $1}'
49 not
available
51 not
available
53 not
available
54 not
available
55 not
available

instead of

49
51
53
54
55

I wonder how I can extract the page numbers only, regardless if the pattern is multiline? Thanks.
It's a bit hacky, but since you are already using a perl compatible RE, you could use \K "keep left" modifier to match everything in your expression (and anything else up to the next line end) but exclude it from the output: pdfgrep -Pn '(?s)image\s+?not\s+?available.*?$\K' main_text.pdf The output will still include the : separator however.
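If the trailing : separator gets in the way, one simple sketch is to cut it off afterwards:

pdfgrep -Pn '(?s)image\s+?not\s+?available.*?$\K' main_text.pdf | cut -d: -f1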
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/457844", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
457,856
I have an up-to-date Kali Linux 2018.2 with open-vm-tools installed on an up-to-date VMware Workstation Pro 14. It's working as expected, but there's a little bug that I'd like to fix, the NetworkManager question mark and lack of Ethernet configuration inside 'Settings → Network', as seen in the image below. My network is fully working, I can ping LAN and WAN, can resolve IP's and hostnames etc. So, I don't know why I'm getting this behavior. My VMware network settings are as follows:
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/457856", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/95396/" ] }
457,868
From https://unix.stackexchange.com/a/277707/674 find . ! -empty -type f -exec md5sum {} + | sort | uniq -w32 -dD can find duplicate files under the current directory. What does -dD mean to uniq ?I saw the meanings of -d and -D in manpage, but not sure what they mean when they are used together. Thanks.
TLDR Bottom line, they do nothing different when used together; -dD is identical to -D . Research If you look at the case/switch logic of the uniq.c command you can see this first hand:

case 'd':
  output_unique = false;
  output_option_used = true;
  break;

case 'D':
  output_unique = false;
  output_later_repeated = true;
  if (optarg == NULL)
    delimit_groups = DM_NONE;
  else
    delimit_groups = XARGMATCH ("--all-repeated", optarg,
                                delimit_method_string,
                                delimit_method_map);
  output_option_used = true;
  break;

The way that this code is structured, if either -d or -D is set, output_unique is set to false; but more importantly, with -D , output_later_repeated is set to true. Once output_later_repeated is set, there's no feasible way for -dD to produce anything but output identical to -D . Incidentally, the computerhope man page has a better table that explains the -d and -D switches. References Linux uniq command (computerhope) coreutils/src/uniq.c
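A quick way to see the equivalence with GNU uniq:

$ printf 'a\na\nb\n' | uniq -D
a
a
$ printf 'a\na\nb\n' | uniq -dD
a
a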
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/457868", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
457,888
I would like to know if there is a way to determine where a specific part of my $PATH variable is being set. About a year and and a half ago I went through the tedious process of setting up a Oracle XE 11.2.0 on my machine for a course I was taking. Somewhere in the process I added the path "/u01/app/oracle/product/11.2.0/xe/bin" to my $PATH variable to get things working. Well now I've deleted the root /u01/ folder that was exclusively used by the Oracle DB and so bash throws the error on startup that the file or directory doesn't exist. So I went manually looking through every possible file I can find listed and nothing. As far as I can tell, that part of $PATH is not being set in any of these files: /etc/login.defs , ~/.profile , /etc/environment , /etc/profile , ~/.bash_login , ~/.bash_profile or ~/.bashrc . That was verified first by running cat ~/.bashrc | grep "*oracle*" on every file listed above. I even did the insane thing and ran sudo strings /dev/sdb -n 11 | grep -C100 "/u01/app/oracle/*" to give me a list of every file that contained the string. I got lots of results, but nothing super valuable. My poor SSD didn't deserve that. So any tips? How can I find out where that part of $PATH is being concatenated onto? Are there any other typical files that I should check? I'm running this on Linux Mint 18.3 if that narrows anything down.
One way to debug shell initialization would be to run a login, interactive shell ( -li ) and tell it to print all commands as they're executed, and look for what you want in the output:

PS4=' $BASH_SOURCE:$LINENO: ' bash -lixc true |& grep oracle

PS4 is used by bash for printing the extra information from the -x option, and when set to $BASH_SOURCE:$LINENO , it will print the path to the file being sourced and the line number being executed. Running with -c true |& grep oracle allows us to filter the initialization of a single shell quickly. With the leading space, bash indents lines when nested sourcing takes place.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/457888", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/301846/" ] }
457,889
rm ~/.ssh/known_hosts

I did this without making a backup of the file first. Now the file is empty; is there any way to recover/restore this file?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/457889", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/301859/" ] }
457,916
Let's suppose we have 4 files:

a.zip created on 28 feb 2018
b.zip created on 28 feb 2018
c.zip created on 2 mar 2018
d.zip created on 2 mar 2018

I want to find the 28 Feb files and tar them:

find ./ -type f -ls | grep 'Feb 28' | tar -czf somefile3.tar --null -T -

But it's not working.
With GNU or FreeBSD find and GNU or BSD tar :

find . -type f -newermt 2018-02-28 ! -newermt 2018-03-01 -print0 | tar -cf file.tar --null -T -

(note that this excludes files last modified at the exact nanosecond 2018-02-28T00:00:00.000000000, and could include one modified at exactly that nanosecond the next day; on filesystems with nanosecond granularity that would almost never happen unless the files were created with touch -t / touch -d or were themselves extracted from archives that don't store timestamps with sub-second precision) POSIXly, and assuming file names don't contain newline characters (the standard tar archive format also has extra limitations on file names):

touch -d 2018-02-28T00:00:00 .start
touch -d 2018-03-01T00:00:00 .end
find . -type f -newer .start ! -newer .end ! -path ./.start ! -path ./.end | pax -x ustar -w > file.tar

If you wanted all the regular files last modified on any Feb 28, not just 2018, with GNU tools:

find . -type f -printf '%Tm-%Td-%p\0' | sed -nz 's/^02-28-//p' | tar -cf file.tar --null -T -

The output of find -ls is not reliably post-processable automatically; it's only intended for human consumption.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/457916", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/301878/" ] }
458,162
I have a Python script that needs to open a file in a directory that I created: /var/www/html/myDIR/myFILE.htm The directory needed to be created as root using sudo mkdir /var/www/html/myDIR as required by the parent folder. As a result, my Python script cannot touch /var/www/html/myDIR/myFILE.htm . What minimum permissions are required to allow scripts (that are not running as root ) access to this file (or any file in this position)?
When creating the directory, set its group ownership to the same group as the user who will be running the script. Include the group permission g+wx . The script will then be able to create and edit files in that directory.
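A minimal sketch, assuming the script runs as a (hypothetical) user scriptuser whose primary group is scriptgroup:

sudo chgrp scriptgroup /var/www/html/myDIR
sudo chmod g+rwx /var/www/html/myDIR   # w+x lets group members create/edit files; r allows listing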
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/458162", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/297322/" ] }
458,194
Logging out and logging back in does not fix this problem. Rebooting does not fix this problem. NOTE: Please do not mark this question as a duplicate; I'm aware of this question: I added a user to a group, but group permissions on files still have no effect, and have tried these things multiple times. I've added my user account to some groups using usermod -aG , however I am not able to access any resources associated with those groups. I can fix this using sudo su - $USER , however this only lasts for my current terminal session. Example terminal session:

$ id
uid=1008(erik) gid=1009(erik) groups=1009(erik)
$ echo $USER
erik
$ sudo su - $USER
[sudo] password for erik:
$ id
uid=1008(erik) gid=1009(erik) groups=1009(erik),27(sudo),100(users),999(docker),1001(rvm)

I have to do this every time I open a terminal. Rebooting does not fix this. Is there any way I can get all the groups assigned to me when I login? Distro: Ubuntu 16.04 x64. User Catscrash has reported the same issue with Ubuntu 18.04. This seems to only happen when I open a terminal in my desktop. If I ssh to the machine or switch to a TTY (ctrl+alt+F2 for example), then I get the correct groups.

$ id
uid=1008(erik) gid=1009(erik) groups=1009(erik)
$ ssh localhost
erik@localhost's password:
Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-131-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

0 packages can be updated.
0 updates are security updates.

----------------------------------------------------------------
  Ubuntu 16.04.2 LTS built 2017-04-20
----------------------------------------------------------------
Last login: Thu Jul 26 09:18:38 2018 from ::1
$ id
uid=1008(erik) gid=1009(erik) groups=1009(erik),27(sudo),100(users),999(docker),1001(rvm)
$

Currently using XFCE as my desktop environment. I also have KDE installed. I've tried both possible values for starting the terminal as a login shell - neither makes any difference. I've tried disabling apparmor via sudo systemctl disable apparmor and rebooted. After reboot, since I have docker installed, it still loads apparmor with the docker profile. So additionally I tried disabling docker: sudo systemctl disable docker and then rebooting. After this, the output of apparmor_status is:

$ sudo apparmor_status
apparmor module is loaded.
0 profiles are loaded.
0 profiles are in enforce mode.
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

so, that means it's loaded but not doing anything? Either way, it doesn't resolve the issue. I still cannot access any resource that I should have group permissions for. Any other ideas?
Update2: This seems to be a lightdm / kwallet bug, see here: https://bugs.launchpad.net/lightdm/+bug/1781418 and here: https://bugzilla.redhat.com/show_bug.cgi?id=1581495 Changing

auth optional pam_kwallet.so
auth optional pam_kwallet5.so

to

#auth optional pam_kwallet.so
#auth optional pam_kwallet5.so

in /etc/pam.d/lightdm - as suggested in the link above - solves the problem for now. Update: This seems to be an issue with lightdm. Switching to GDM solved the issue temporarily for me. Still don't know what's wrong with lightdm though. I have exactly the same issue (Ubuntu 18.04). I don't have a solution yet, but I noticed that everything is correct when I log in via ssh or a text console, but not when I open a terminal emulator on my desktop environment. Is this the same for you? Maybe it has something to do with some pam files? Also weird: correct would be:

uid=1000(catscrash) gid=1000(catscrash) Gruppen=1000(catscrash),4(adm),6(disk),24(cdrom),27(sudo),30(dip),46(plugdev),113(lpadmin),128(sambashare),132(vboxusers),136(libvirtd)

wrong is

uid=1000(catscrash) gid=1000(catscrash) Gruppen=1000(catscrash)

now I can do

catscrash@catscrash-desktop ~ % newgrp adm
catscrash@catscrash-desktop ~ % newgrp catscrash
catscrash@catscrash-desktop ~ % id
uid=1000(catscrash) gid=1000(catscrash) Gruppen=1000(catscrash),4(adm)
catscrash@catscrash-desktop ~ % newgrp sudo
catscrash@catscrash-desktop ~ % id
uid=1000(catscrash) gid=27(sudo) Gruppen=27(sudo),4(adm),1000(catscrash)
catscrash@catscrash-desktop ~ % newgrp catscrash
catscrash@catscrash-desktop ~ % id
uid=1000(catscrash) gid=1000(catscrash) Gruppen=1000(catscrash),4(adm),27(sudo)
catscrash@catscrash-desktop ~ %

So it definitely knows about my groups, as I couldn't do that with groups I'm not in, and once I changed the primary group and back, those groups appear... weird! I also noticed that this only happens in KDE / plasmashell, is this the same for you? When logging in via gnome shell everything works fine.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/458194", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/272192/" ] }
458,199
I have tried echo yes | ssh [email protected] | ssh [email protected] -y [email protected] none of which appear to work? EDIT #1 Part of my problem was I thought every command after the ssh command was a remote command when the commands were in fact local. I guess remote commands have to be declared in a string which is passed to the ssh command as an argument e.g. $ ssh [email protected] 'remote command'
This is by design. ssh 's host verification and authentication interactions deliberately do not accept input from pipes. However, if you are confident in your host keys, you can do:

ssh-keyscan host.example.com >> $HOME/.ssh/known_hosts
ssh host.example.com
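On OpenSSH 7.6 and newer there is also a dedicated option for accepting a previously unseen key only on the first connection (a sketch; subsequent key changes are still rejected):

ssh -o StrictHostKeyChecking=accept-new host.example.com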
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/458199", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269365/" ] }
458,265
From https://unix.stackexchange.com/a/458074/674 Remember to use -- when passing arbitrary arguments to commands (or use redirections where possible). So sort -- "$f1" or better sort < "$f1" instead of sort "$f1" . Why is it preferred to use -- and redirection? Why is sort < "$f1" preferred over sort -- "$f1" ? Why is sort -- "$f1" preferred over sort "$f1" ? Thanks.
sort "$f1" fails for values of $f1 that start with - or here for the case of sort some that start with + (can have severe consequences for a file called -o/etc/passwd for instance). sort -- "$f1" (where -- signals the end of options) addresses most of those issues but still fails for the file called - (which sort interprets as meaning its stdin instead). sort < "$f1" Doesn't have those issues. Here, it's the shell that opens the file. It also means that if the file can't be opened, you'll also get a potentially more useful error message (for instance, most shells will indicate the line number in the script), and the error message will be consistent if you use redirections wherever possible to open files. And in sort < "$f1" > out (contrary to sort -- "$f1" > out ), if "$f1" can't be opened, out won't be created/truncated and sort not even run. To clear some possible confusion (following comments below), that does not prevent the command from mmap() ing the file or lseek() ing inside it (not that sort does either) provided the file itself is seekable. The only difference is that the file is opened earlier and on file descriptor 0 by the shell as opposed to later by the command possibly on a different file descriptor. The command can still seek/mmap that fd 0 as it pleases. That is not to be confused with cat file | cmd where this time cmd 's stdin is a pipe that cannot be mmaped/seeked.
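A small demonstration of why option parsing is the problem (a sketch; run it in a scratch directory):

seq 3 > ./-r     # a file literally named "-r"
sort -r          # parsed as an option: sorts stdin in reverse, never opens the file
sort -- -r       # "--" ends options, so "-r" is a file operand
sort < ./-r      # the shell opens the file; sort never sees the name at all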
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/458265", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
458,297
File1

12584,"Capital of America, Inc.",,HORIZONCAPITAL,USA,......etc
25841,"Capital of America, Inc.",,HORIZONCAPITAL,USA,......etc
87455,"Capital of America, Inc.",,HORIZONCAPITAL,USA,......etc

Output

12584|Capital of America, Inc.||HORIZONCAPITAL|USA|......etc
25841|Capital of America, Inc.||HORIZONCAPITAL|USA|......etc
87455|Capital of America, Inc.||HORIZONCAPITAL|USA|......etc

I have a csv file, which I have to convert into a text file delimited with pipe (|). I have done this shell script:

sed 's/^/"/;s/,/|/g;s/$/"/' $File > $Output

But the problem is the field "Capital of America, Inc." contains a comma, and that's also replaced by the pipe (|). So I want to replace every comma with a pipe, except when the comma is inside a double-quoted value. Is there any shell script to do this?
Using csvkit :

$ csvformat -D '|' file.csv
12584|Capital of America, Inc.||HORIZONCAPITAL|USA|......etc
25841|Capital of America, Inc.||HORIZONCAPITAL|USA|......etc
87455|Capital of America, Inc.||HORIZONCAPITAL|USA|......etc

csvkit is a collection of CSV manipulation/querying tools written in Python. These do proper CSV parsing and csvformat may be used to replace the default comma delimiter with any other character. The utility will make sure that the result is properly quoted according to CSV rules.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/458297", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102978/" ] }
458,371
How can I use a dash in a printf format string?

x=xxx
printf -v args "-x=%s" "$x"
printf "$var"

But it gives an error:

./tmp.sh: line 2: printf: -x: invalid option
printf: usage: printf [-v var] format [arguments]

I know that it's because printf interprets -x as an option, but how do I overcome it? I use printf to create a string that will correspond to command line arguments like -x=xxx -y=yyy -z=zzz . Then I want to call a tool like eval $tool args
printf supports the conventional end-of-options argument -- :

$ printf -- '-x\n'
-x
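Applied to the snippet in the question, the same end-of-options marker works together with -v (sketch):

x=xxx
printf -v args -- '-x=%s' "$x"
echo "$args"    # -x=xxx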
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/458371", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223046/" ] }
458,475
I'm following an Ansible tutorial I got from Packt. I reached the part where I've created 3 Ubuntu containers (lxc) and got them up and running. I'm also able to log in to each of them. I've downloaded Ansible by doing: git clone ansible-git-url and then sourced it. My working setup is as follows: /home/myuser/code ; in here I have 2 folders: ansible (the whole git repo) and ansible_course where I have 2 files: ansible.cfg and inventory . inventory contains the following:

[allservers]
192.168.122.117
192.168.122.146
192.168.122.14

[web]
192.168.122.146
192.168.122.14

[database]
192.168.122.117

And ansible.cfg contains:

[root@localhost ansible_course]# cat ansible.cfg
[defaults]
host_key_checking = False

Then from this path: /home/myuser/code/ansible_course I try to execute the following:

$ ansible 192.168.122.117 -m ping -u root

The guy from the tutorial does exactly this, and he gets a success response from the ping , but I get the following error messages:

[WARNING]: Unable to parse /etc/ansible/hosts as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Could not match supplied host pattern, ignoring: 192.168.122.117

In the tutorial, he never says that I need to do something special in order to give an inventory source; he just says that we need to create an inventory file with the IP addresses of the Linux containers that we have. I mean, he doesn't say that we need to execute a command to set this up.
You'll probably want to tell ansible where the hosts file is in ansible.cfg , e.g.

[defaults]
inventory=inventory

assuming inventory is actually your inventory file.
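You can also pass the inventory per invocation instead of via ansible.cfg , using ansible's -i option:

ansible -i inventory 192.168.122.117 -m ping -u root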
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/458475", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/302300/" ] }
458,486
Context: Due to missing drivers for my 2-in-1 convertible, folding the screen back only triggers a lid switch event. This causes the laptop to suspend or, when "Suspend when laptop lid is closed" is disabled in Gnome, still causes all input devices to be disabled (including the touchscreen, which renders the tablet mode useless). As a workaround, I would like to handle the switch to tablet mode manually. This requires to inhibit all lid switch events. Question: How can I completely inhibit lid switch events in Linux ? Alternatively, finding a way to list / disable processes responding to lid switch would solve the issue. Currently, folding the screen in tablet mode and back logs the following events: Jul 25 23:58:54 jl-xps systemd-logind[816]: Lid closed.Jul 25 23:58:58 jl-xps systemd-logind[816]: Lid opened. The lid switch event is mapped to /dev/input/event0 . /proc/bus/input/devices lists (truncated): I: Bus=0019 Vendor=0000 Product=0005 Version=0000N: Name="Lid Switch"P: Phys=PNP0C0D/button/input0S: Sysfs=/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0D:00/input/input0U: Uniq=H: Handlers=event0 B: PROP=0B: EV=21B: SW=1 Attempts: Method 1 : systemd-inhibit , taken from How to disable auto suspend when I close laptop lid? # systemd-inhibit --what=handle-lid-switch sleep 1m and then flip the screen before the end of the timer. Lid switch events are still logged by systemd-logind , and I can see /dev/input/event0 being written to (and other input devices are still disabled). Method 2 : ACPI $ echo "LID0" | sudo tee /proc/acpi/wakeup and check that it is indeed disabled: LID0 S3 *disabled platform:PNP0C0D:00 with the same systemd-logind log and /dev/input/event0 still being written to. Method 3 : brute force # mv /dev/input/event0 /dev/input/event0-off# ln -s /dev/null /dev/input/event0 The lid switch events are still logged by systemd-logind . So it seems that /dev/input/event0 is only informative. System information: $ inxi -FxmzSystem: Host: jl-xps Kernel: 4.18.0-0.rc5.git4.1.fc29.x86_64 x86_64 bits: 64 compiler: gcc v: 8.1.1 Desktop: Gnome 3.28.3 Distro: Fedora release 28 (Twenty Eight) Machine: Type: Laptop System: Dell product: XPS 15 9575 v: N/A serial: <filter> Mobo: Dell model: 0C32VW v: A00 serial: <filter> UEFI: Dell v: 1.1.5 date: 05/30/2018 Battery: ID-1: BAT0 charge: 72.3 Wh condition: 72.3/75.0 Wh (96%) model: BYD DELL TMFYT84 status: Full Memory: RAM Report: permissions: Unable to run dmidecode. Are you root? 
CPU: Topology: Quad Core model: Intel Core i7-8705G bits: 64 type: MT MCP arch: Skylake rev: 9 L2 cache: 8192 KiB flags: lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx bogomips: 49536 Speed: 900 MHz min/max: 800/4100 MHz Core speeds (MHz): 1: 900 2: 900 3: 900 4: 900 5: 900 6: 900 7: 900 8: 900 Graphics: Card-1: Intel driver: i915 v: kernel bus ID: 00:02.0 Card-2: Advanced Micro Devices [AMD/ATI] Polaris 22 [Radeon RX Vega M GL] driver: amdgpu v: kernel bus ID: 01:00.0 Display: wayland server: Fedora Project X.org 11.0 driver: amdgpu resolution: 3840x2160~60Hz OpenGL: renderer: Mesa DRI Intel HD Graphics 630 (Kaby Lake GT2) v: 4.5 Mesa 18.1.4 direct render: Yes Audio: Card-1: Intel CM238 HD Audio driver: snd_hda_intel v: kernel bus ID: 00:1f.3 Card-2: N/A type: USB driver: snd-usb-audio bus ID: 3:2 Sound Server: ALSA v: k4.18.0-0.rc5.git4.1.fc29.x86_64 Network: Card-1: Qualcomm Atheros QCA6174 802.11ac Wireless Network Adapter driver: ath10k_pci v: kernel bus ID: 02:00.0 IF: wlp2s0 state: up mac: <filter> Card-2: Intel I210 Gigabit Network Connection driver: igb v: 5.4.0-k port: 2000 bus ID: 40:00.0 IF: enp64s0 state: up speed: 1000 Mbps duplex: full mac: <filter> Card-3: Realtek RTL8153 Gigabit Ethernet Adapter type: USB driver: r8152 bus ID: 10:3 IF: enp65s0u2u2 state: down mac: <filter> IF-ID-1: tap0 state: unknown speed: 10 Mbps duplex: full mac: <filter> IF-ID-2: virbr0 state: up speed: N/A duplex: N/A mac: <filter> IF-ID-3: virbr0-nic state: down mac: <filter> Drives: HDD Total Size: 232.89 GiB used: 77.91 GiB (33.5%) ID-1: /dev/sda type: USB vendor: Samsung model: Portable SSD T3 size: 232.89 GiB RAID: Hardware-1: Intel 82801 Mobile SATA Controller [RAID mode] driver: ahci v: 3.0 bus ID: 00:17.0 Partition: ID-1: / size: 114.35 GiB used: 77.69 GiB (67.9%) fs: ext4 dev: /dev/sda3 ID-2: /boot size: 975.9 MiB used: 202.9 MiB (20.8%) fs: ext4 dev: /dev/sda2 ID-3: swap-1 size: 16.00 GiB used: 0 KiB (0.0%) fs: swap dev: /dev/sda4 Sensors: System Temperatures: cpu: 50.0 C mobo: 37.0 C gpu: amdgpu temp: 49 C Fan Speeds (RPM): cpu: 0 Info: Processes: 381 Uptime: 3h 01m Memory: 15.36 GiB used: 7.66 GiB (49.9%) Init: systemd runlevel: 5 Compilers: gcc: 8.1.1 Shell: fish v: 2.7.1 inxi: 3.0.14 $ xinput list⎡ Virtual core pointer id=2 [master pointer (3)]⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]⎜ ↳ xwayland-pointer:15 id=6 [slave pointer (2)]⎜ ↳ xwayland-relative-pointer:15 id=7 [slave pointer (2)]⎜ ↳ xwayland-touch:15 id=9 [slave pointer (2)]⎜ ↳ xwayland-stylus:15 id=10 [slave pointer (2)]⎜ ↳ xwayland-eraser:15 id=11 [slave pointer (2)]⎜ ↳ xwayland-cursor:15 id=12 [slave pointer (2)]⎣ Virtual core keyboard id=3 [master keyboard (2)] ↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)] ↳ xwayland-keyboard:15 id=8 [slave keyboard (3)] Below is the output from sudo libinput debug-events when changing to tablet mode and back to laptop mode (with the lid switch device disabled): -event3 DEVICE_ADDED Power Button seat0 default group1 cap:k-event5 DEVICE_ADDED Video Bus seat0 default group2 cap:k-event1 DEVICE_ADDED Power Button seat0 default group3 cap:k-event2 DEVICE_ADDED Sleep Button seat0 default group4 cap:k-event9 DEVICE_ADDED Integrated_Webcam_HD: Integrate seat0 default group5 cap:k-event10 DEVICE_ADDED Integrated_Webcam_HD: Integrate seat0 default group5 cap:k-event13 DEVICE_ADDED Wacom HID 486A Pen seat0 default group6 cap:T size 344x194mm calib-event14 DEVICE_ADDED Wacom HID 486A Finger seat0 default group6 cap:t size 344x194mm ntouches 10 calib-event12 DEVICE_ADDED 
DELL080D:00 06CB:7A13 Touchpad seat0 default group7 cap:pg size 102x77mm tap(dl off) left scroll-nat scroll-2fg-edge click-buttonareas-clickfinger dwt-on-event15 DEVICE_ADDED CalDigit, Inc. CalDigit Thunderbolt 3 Audio seat0 default group8 cap:k-event16 DEVICE_ADDED Razer Razer Imperator seat0 default group9 cap:p left scroll-nat scroll-button-event23 DEVICE_ADDED Razer Razer Imperator Keyboard seat0 default group9 cap:k-event24 DEVICE_ADDED Razer Razer Imperator Consumer Control seat0 default group9 cap:kp scroll-nat-event25 DEVICE_ADDED Razer Razer Imperator System Control seat0 default group9 cap:k-event26 DEVICE_ADDED TypeMatrix.com USB Keyboard seat0 default group10 cap:k-event27 DEVICE_ADDED TypeMatrix.com USB Keyboard System Control seat0 default group10 cap:k-event28 DEVICE_ADDED TypeMatrix.com USB Keyboard Consumer Control seat0 default group10 cap:kp scroll-nat-event8 DEVICE_ADDED Intel Virtual Button driver seat0 default group11 cap:kS-event17 DEVICE_ADDED HDA Intel PCH Headphone Mic seat0 default group12 cap:-event18 DEVICE_ADDED HDA Intel PCH HDMI/DP,pcm=3 seat0 default group12 cap:-event19 DEVICE_ADDED HDA Intel PCH HDMI/DP,pcm=7 seat0 default group12 cap:-event20 DEVICE_ADDED HDA Intel PCH HDMI/DP,pcm=8 seat0 default group12 cap:-event21 DEVICE_ADDED HDA Intel PCH HDMI/DP,pcm=9 seat0 default group12 cap:-event22 DEVICE_ADDED HDA Intel PCH HDMI/DP,pcm=10 seat0 default group12 cap:-event6 DEVICE_ADDED Intel HID events seat0 default group13 cap:k-event7 DEVICE_ADDED Intel HID 5 button array seat0 default group14 cap:k-event11 DEVICE_ADDED Dell WMI hotkeys seat0 default group15 cap:k-event4 DEVICE_ADDED AT Translated Set 2 keyboard seat0 default group16 cap:k-event8 SWITCH_TOGGLE +3.90s switch tablet-mode state 1 event8 SWITCH_TOGGLE +5.44s switch tablet-mode state 0 More details about this "Intel Virtual Button driver", which seems to be responsible for the switch to tablet-mode : I: Bus=0019 Vendor=0000 Product=0000 Version=0000N: Name="Intel Virtual Button driver"P: Phys=S: Sysfs=/devices/pci0000:00/0000:00:1f.0/PNP0C09:00/INT33D6:00/input/input8U: Uniq=H: Handlers=kbd event8 B: PROP=0B: EV=33B: KEY=2000000000000 0 0 0 0 1000000000000 0 201c000000000000 0B: MSC=10B: SW=2 $ find /sys/bus/ -name 'PNP0C09:00'/sys/bus/platform/devices/PNP0C09:00/sys/bus/acpi/devices/PNP0C09:00/sys/bus/acpi/drivers/ec/PNP0C09:00 $ find /sys/devices/ -name 'PNP0C09:00'/sys/devices/pci0000:00/0000:00:1f.0/PNP0C09:00/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:11/PNP0C09:00 $ udevadm info /sys/class/input/event8P: /devices/pci0000:00/0000:00:1f.0/PNP0C09:00/INT33D6:00/input/input8/event8N: input/event8S: input/by-path/pci-0000:00:1f.0-platform-INT33D6:00-eventE: DEVLINKS=/dev/input/by-path/pci-0000:00:1f.0-platform-INT33D6:00-eventE: DEVNAME=/dev/input/event8E: DEVPATH=/devices/pci0000:00/0000:00:1f.0/PNP0C09:00/INT33D6:00/input/input8/event8E: ID_INPUT=1E: ID_INPUT_KEY=1E: ID_INPUT_SWITCH=1E: ID_PATH=pci-0000:00:1f.0-platform-INT33D6:00E: ID_PATH_TAG=pci-0000_00_1f_0-platform-INT33D6_00E: MAJOR=13E: MINOR=72E: SUBSYSTEM=inputE: TAGS=:power-switch:E: USEC_INITIALIZED=5811208
You can test unbinding the driver on the parent device. This will remove the child device input0 - along with any other child devices that were there . cd /sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0D:00echo PNP0C0D:00 | sudo tee driver/unbind A second example, based on the other input device you mention: cd /sys/devices/pci0000:00/0000:00:1f.0/PNP0C09:00/INT33D6:00echo INT33D6:00 | sudo tee driver/unbind (If you arrange for such a command to be run automatically, you will want to make sure it runs after the driver is bound... In many cases I think you would get away with putting it in rc.local though). Further reading LWN.net describes this feature here: Manual driver binding and unbinding .
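If you later want the lid switch back without rebooting, rebinding should work by writing the same device id to the driver's bind file. To my understanding the ACPI lid device is handled by the button driver, but that path is an assumption - verify the driver name with ls /sys/bus/acpi/drivers first:

echo PNP0C0D:00 | sudo tee /sys/bus/acpi/drivers/button/bind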
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/458486", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/302304/" ] }
458,509
I am looking for a terminal command which doesn't require the executing user to be in the sudoers group, and which is also universal and doesn't require installing additional packages. So far I have found that if the system has systemd installed then I can use:

$ hostnamectl status
   Static hostname: mint
         Icon name: computer-laptop
           Chassis: laptop
        Machine ID: bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
           Boot ID: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
  Operating System: Linux Mint LMDE
            Kernel: Linux 3.16.0-6-amd64

and under Icon name and Chassis I can see if it is a VM or a physical machine. But I was wondering if I can use lscpu , especially since it is a more universal method than hostnamectl and it doesn't require systemd. My theory is that if the CPU has only one thread per core and also no listed minimum and maximum CPU frequency, this should be an indication that the server is indeed virtualized.

$ lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  2
Core(s) per socket:  4
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               60
Model name:          Intel(R) Core(TM) i7-4710HQ CPU @ 2.50GHz
Stepping:            3
CPU MHz:             2500.488
CPU max MHz:         3500.0000
CPU min MHz:         800.0000
BogoMIPS:            4988.18
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            256K
L3 cache:            6144K
NUMA node0 CPU(s):   0-7

I know that a CPU having only one thread per core doesn't necessarily mean that it is a VM for sure, but then most modern CPUs have 2 threads per core, and in addition I can also take into account the lack/presence of minimum and maximum CPU frequency in the lscpu output.
Under the given conditions (a terminal command which doesn't require the executing user to be in the sudoers group, is universal, and doesn't require installing additional packages), the obvious simplest method for unmodified VMs, whose owners haven't intentionally tried to hide the fact that the OS is a VM, is:

cat /sys/class/dmi/id/product_name

More possibilities:

 - How to detect virtualization
 - 16 Methods To Check If A Linux System Is Physical or Virtual Machine

Outside of the conditions given by the OP, there are more complicated approaches, like this one: Where am I? Operating System and Virtualization Identification Without System Calls
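Two more one-liners that fit the same constraints on most modern distros (a sketch; systemd-detect-virt obviously assumes systemd is present):

systemd-detect-virt                                    # prints e.g. kvm, vmware, or "none"
grep -q '^flags.*hypervisor' /proc/cpuinfo && echo VM || echo physical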
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/458509", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/216159/" ] }
458,511
I'm using socat to route incoming tcp6 to tcp4. The destination (tcp4) is a pod/container with the pod service external-ip. Within the container I use ncat to listen for the port 5555. # socat TCP6-LISTEN:5555,reuseaddr,fork,bind=[fe80::250:56ff:fe91:bd5c%ens192] TCP4:10.40.5.125:5555 (Update) socat return Connection refused (Update) 2018/07/27 01:15:41 socat[26914] E connect(5, AF=2 10.40.5.125:5555, 16): Connection refused I'm getting no acknowledgment from within the container ( # ncat -4 -vv --exec cat -l -p 5555 ) I try to use -vv & -lf in the socat command to get more information about the tcp6 traffic but no significant log was written to the log file. Before attempting tcp6 format I was able to route tcp4 traffic to tcp4 same destination listed above using socat . The command is below, # socat TCP-LISTEN:5555,fork TCP:10.101.74.206:5555 Can someone please point what I am missing for tcp6? OS: CentOS 7.5
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/458511", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/302333/" ] }
458,521
Say that I have a bash script /home/me/test.sh :

#!/bin/bash
# cmd                             # here is the output of the cmd
pwd echo 'aaa'                    # /home/me
val=pwd echo 'aaa'                # aaa
val=pwd echo 'aaa'$val            # aaa
val=pwd echo 'aaa' && echo $val   # aaa <and an empty line>

I can't quite understand how these bash commands separated by a space work. BTW, I pose this question because I just installed ROS and a bash script named setup.sh was generated, and I'm reading this script. There is a line as below:

CATKIN_SHELL=bash /opt/ros/kinetic/_setup_util.py >> file.log
So I think what is tripping you here is the command format where the first word(s) in the command are a variable assignment (which the shell determines based on the presence of a = .) In those cases, the shell will evaluate those assignments, but it will only export those variables in the command that it is running. In particular, the shell itself won't have access to those variables. This is useful for running commands or scripts that are affected by environment variables, in which case you don't need to export them in a shell and can just set them as "one-offs" for that single execution. Let's break down your commands and explain them one by one:

pwd echo 'aaa'    # /home/me

Ok, so this is running the pwd command and passing it two arguments, echo and aaa . But pwd doesn't take arguments, so it's just ignoring those two and printing the local directory.

val=pwd echo 'aaa'    # aaa

So as I explained, this performs the variable assignment, setting $val to contain the string pwd and then executes the echo command with the aaa argument. The $val variable is available only to the echo command and not to the shell itself! The echo command doesn't really do anything with that variable, so it simply goes ahead and prints aaa . After this command is finished, the $val variable is not defined in the shell.

val=pwd echo 'aaa'$val    # aaa

So, again, similar to the command above. This one gets tricky, since you might expect the final $val to get expanded. But variables are expanded by the shell , not by the commands. And, as explained, the shell doesn't really see $val which is set to pwd , only the command (in this case, echo ) does, so $val gets expanded to an empty string, and this is exactly equivalent to the above.

val=pwd echo 'aaa' && echo $val    # aaa <and an empty line>

And one more time, $val is not available after the first echo command completes, so the second echo $val will print nothing (just the blank line), as $val was never really set in the shell itself throughout this execution. If you want to set $val in the shell, then you need to make that assignment a standalone, without any commands. For instance, this behaves differently from the above:

$ val=pwd; echo 'aaa' && echo $val
aaa
pwd

Since val=pwd is followed by a ; , then the shell considers it a command on its own and will set $val in the current shell, so the echo $val will work as you expect. (Note that there's one more difference here, since with val=pwd; ... , the variable $val is not really exported, so echo will not see that variable in its environment, while with val=pwd echo ... it would get exported, but then in that case, it is not available in the shell.) And considering you're using pwd , I wonder if what you wanted was to store the output of that command in the variable... In which case, you need to use shell command substitution , either using backticks, or better yet, surrounding the command with $( and ) . So one final example here:

$ val=$(pwd); echo 'aaa' && echo $val
aaa
/home/me

Or:

$ val=$(pwd) && echo 'aaa' && echo $val
aaa
/home/me

(There's a subtle difference here, where the latter will break execution if the command inside the substitution, in this case pwd , fails and exits with a non-zero status, in which case the following commands will not be executed. When using ; that's not the case, the following commands are executed even if the first one fails.) I hope this explains what you weren't understanding about these commands!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/458521", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145824/" ] }
458,531
I installed Debian 9. Now I need to install WiFi card driver. But, I couldn't find a working one on my system. These are my devices listed by command lspci -nn : 00:00.0 Host bridge [0600]: Intel Corporation 2nd Generation Core Processor Family DRAM Controller [8086:0104] (rev 09)00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port [8086:0101] (rev 09)00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0116] (rev 09)00:16.0 Communication controller [0780]: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 [8086:1c3a] (rev 04)00:1a.0 USB controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 [8086:1c2d] (rev 05)00:1b.0 Audio device [0403]: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller [8086:1c20] (rev 05)00:1c.0 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 [8086:1c10] (rev b5)00:1c.1 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 [8086:1c12] (rev b5)00:1c.3 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 [8086:1c16] (rev b5)00:1c.4 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 [8086:1c18] (rev b5)00:1c.5 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 6 [8086:1c1a] (rev b5)00:1d.0 USB controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 [8086:1c26] (rev 05)00:1f.0 ISA bridge [0601]: Intel Corporation HM67 Express Chipset Family LPC Controller [8086:1c4b] (rev 05)00:1f.2 SATA controller [0106]: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller [8086:1c03] (rev 05)00:1f.3 SMBus [0c05]: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller [8086:1c22] (rev 05)01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF108M [GeForce GT 525M] [10de:0df5] (rev a1)03:00.0 Network controller [0280]: Intel Corporation Centrino Wireless-N 1030 [Rainbow Peak] [8086:008a] (rev 34)04:00.0 USB controller [0c03]: NEC Corporation uPD720200 USB 3.0 Host Controller [1033:0194] (rev 04)06:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 06) Question: how to install wifi drivers?
Your WiFi card is apparently this one: 03:00.0 Network controller [0280]: Intel Corporation Centrino Wireless-N 1030 [Rainbow Peak] [8086:008a] (rev 34) It is supported by the iwlwifi driver, which is included in the standard kernel package. Your adapter has been supported this way since kernel version 2.6.36. With some kernels, the driver module might be named iwlagn instead. What you need is the firmware file for the adapter. In Debian 9, you'll first need to enable the non-free part of Debian repository if you didn't enable it when installing Debian in the first place. (Non-free here means "you don't get the source code for it", not "you must pay for it".) then install the firmware-iwlwifi package using your favorite package management utility, then reboot and the driver should be auto-loaded and ready for use.
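A sketch of the full procedure on Debian 9 (assumes the stretch release; adjust mirrors to taste):

# 1. enable non-free in /etc/apt/sources.list, e.g.:
#    deb http://deb.debian.org/debian stretch main contrib non-free
sudo apt update
sudo apt install firmware-iwlwifi
# 2. reload the driver so it picks up the firmware (or just reboot)
sudo modprobe -r iwlwifi && sudo modprobe iwlwifi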
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/458531", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/302336/" ] }
458,564
I can order a list of files / directories by size: ls -lS But if I am using du in human readable format: du --max-depth=1 -h . I get: 128K ./something3,3M ./more3,2M ./even-more... Which is not ordered. Is there any standard tool to order this kind of data? Standard sort does not seem to support this. Do I need to roll my own?
GNU sort has a -h / --human-numeric-sort option and h sort key flag to handle those (it expects 1024-based units (1023 sorts before 1K), which happens to be how GNU du counts as well). Now note that some precision is lost when you use du -h , so the order may end up being wrong:

$ du -k a b
1212    a
1208    b
$ du -h a b | sort -h
1.2M    a
1.2M    b

As mentioned by @StephenKitt, you can work around it by telling du to give you the full precision and only convert to human format after sorting, using for instance GNU numfmt :

$ du --block-size=1 a b | sort -n | numfmt --to=iec
1.2M    b
1.2M    a

(beware that spacing is affected). All of the above assume file names don't contain newline characters. As for the generic question about ordering by size, zsh globs have a oL glob qualifier for that (note that it's by size, not disk usage). ls -S could be done (with GNU ls for its -U for unsorted):

ls -ldU -- *(oL)

For sorting by size after symlink resolution:

ls -LldU -- *(-oL)
wc -c -- *(-oL)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/458564", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39807/" ] }
458,571
I test content of a set of data files whether they contain at least one of a set of characters which consists of printing and non-printing characters. My last issue is detecting whether the file contains a line-feed. My GNU grep 3.0 states every input contains a LF... Why is that? echo -n "test" | grep -UF -e $'\x0a' Any ideas? I suspect some implicit EOL/EOF interference.
grep is line-oriented - if the input does not end with a newline, it still considers the text past the last newline (or start of file) as a line. Line-oriented programs are a poor fit for directly processing binary files - they will often have pathological cases if a binary file has a particularly long "line". Instead, consider something like a combination of tr and cmp :

$ echo -n foo | tr -d -c $'\n' | cmp /dev/null -
# no output and exits with status 0
$ echo foo | tr -d -c $'\n' | cmp /dev/null -
cmp: EOF on /dev/null which is empty
# exits with status 1

This method also has the advantage of needing to read the input only up to the first newline character (plus buffering).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/458571", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/194609/" ] }
458,648
When checking a service status via systemctl systemctl status docker the output is something like ● docker.service - Docker Application Container Engine Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled) Active: inactive (dead) (Result: exit-code) since Mon 2018-03-19 13:52:21 CST; 4min 32s ago Docs: https://docs.docker.com Process: 6001 ExecStart=/usr/bin/dockerd -H fd:// (code=exited, status=205/LIMITS) Main PID: 6001 ( code=exited, status=205/LIMITS ) The question is about the part in bold: the main process exit code and status information. Is there a list of all the codes and statuses along with their explanation ? I know that most times it's self-explanatory (and I know the answer to the question here) but lately we get this question a lot at work (some people search via google but can't find it, other people open the systemd.service man page, search for e.g. code 203 and don't find it...) so I thought I might as well put it here so it's easier for people to find the answer via google.
Yes, but only since 2017 when Jan Synacek finally documented them in the systemd manual. Your work colleagues are simply reading the wrong page of the manual. ☺ Further reading Lennart Poettering (2017). " Process exit codes: systemd-specific exit codes ". systemd.exec . systemd manual pages. Freedesktop.org. Jan Synacek (2016-06-15) Document service status codes . systemd bug #3545. GitHub.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/458648", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22142/" ] }
458,650
How can I read a list of servers entered by the user & save it into a variable? Example:

Please enter list of servers:
(user will enter the following:)
abc
def
ghi
END

$ echo $variable
abc def ghi

I want it to run in a shell script. If I use the following in a shell script:

read -d '' x <<-EOF

it gives me an error:

line 2: warning: here-document at line 1 delimited by end-of-file (wanted `EOF')

Please suggest how I can incorporate it in a shell script.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/458650", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/302429/" ] }
458,668
I have a bash script, created by someone that is no longer able to explain the meaning, that is used for auto TARing a folder and backing it up. At the moment I'm confused with the final line:

find '/home/_backups/pokebrawl' -mtime +6 -type f -delete

The full script is below.

#!/bin/bash
#Purpose = Backup of Important Data
#Created on 17-1-2012
#Author = Hafiz Haider
#Version 1.0
#START
TIME=$(date +"%m-%d-%Y-%T")
FILENAME=pokebrawl-$TIME.tar.gz
SRCDIR=/home/servers/brawl/world
DESDIR=/home/_backups/pokebrawl
tar -cpzf $DESDIR/$FILENAME $SRCDIR
find '/home/_backups/pokebrawl' -mtime +6 -type f -delete
#END
The find command will delete any regular file in or below the /home/_backups/pokebrawl directory that is more than seven days old (or more precisely, not modified within the last week). It should probably read

find "$DESDIR" -mtime +6 -type f -delete

or

find "${DESDIR:?Not set correctly}" -mtime +6 -type f -delete

as there is a perfectly good variable holding that directory name already. The second variation would cause an error if DESDIR for some reason was empty or unset. I'm assuming this is a way of keeping only the last week's worth of backups. I would suggest using something like borgbackup or restic instead, as these would be able to keep backups for a lot longer without using much more space (even hourly backups for a year wouldn't take much space at all if the data wasn't changing much). They do deduplication of data and borgbackup additionally supports compression (both support remote backups and encryption too). To run the script every 24 hours (at midnight), use a cron job. First, issue the command crontab -e . This would open an editor with the current crontab for the active user (this may or may not be an empty file). Then add

@daily /path/to/the/script.sh

(where /path/to/the/script.sh is the pathname of the script). Save and exit the editor. The script would now be called at midnight, every night. Would you want to run the script at, say, 13:30 every afternoon, add the following as the crontab entry instead:

30 13 * * * /path/to/the/script.sh

See the crontab manual on your system for details on how to write a crontab schedule ( man 5 crontab ).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/458668", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/302441/" ] }
458,672
cron can be used to schedule running a program once in a while. But it seems to be not specific to an existing shell process. If I have a script which accesses the state of a specific bash process (e.g. to access the output of running jobs and dirs in the shell by source the script), how can I schedule its running in the specific bash process once in a while? Thanks. update: neither reply actually can directly access the state of an existing bash process. They can indirectly for some state information copied from the parent shell process to the child shell process. I don't remember why I accepted one.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/458672", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
458,694
How can I search and replace horizontal tabs in nano? I've been trying to use [\t] in regex mode, but this only matches every occurrence of the character t . I've just been using sed 's/\t//g' file , which works fine, but I would still be interested in a nano solution.
In nano to search and replace:

 1. Press Ctrl + \
 2. Enter your search string and hit return
 3. Enter your replacement string and hit return
 4. Press A to replace all instances

To replace tab characters you need to put nano in verbatim mode: Alt + Shift + V . Once in verbatim mode, any character you type will be accepted literally; then hit return . References 3.8. Tell me more about this verbatim input stuff! Nano global search and replace tabs to spaces or spaces to tabs Is it possible to easily switch between tabs and spaces in nano?
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/458694", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240990/" ] }
458,717
I have mirrored a directory structure but I don't care about the content of the files, I just want to keep the name and structure of everything. How can I replace all files' (not folders) content with "nothing" (null, 1 byte, empty string or something like that)?
Generically,
find /top -type f -exec cp /dev/null {} \;
or (courtesy of jordanm ):
find /top -type f -exec sh -c '> "$1"' -- {} \;
On a Linux system (or one with the truncate command from the GNU coreutils package ):
find /top -type f -exec truncate -s 0 {} +
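To be cautious, you could list what would be emptied before doing it — a two-step sketch (the c size suffix assumes GNU find, though BSD find accepts it too):
find /top -type f -size +0c
find /top -type f -size +0c -exec truncate -s 0 {} +
The first command prints the non-empty regular files; the second truncates exactly those.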
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/458717", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32351/" ] }
458,734
If I mark a line in less, the mark will be lost when the current less session ends. So, let's say, if I often check the bash man page for the READLINE section, I have to search ^REA every time. Seems like less doesn't have any config files. Is there a way to save the marks in less so I can use them next time?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/458734", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274163/" ] }
458,781
I have compiled a package that uses autotools as build system ( autoreconf , ./configure , make , make install ). ./configure stops when a package is missing. For each missing package, I look up its name, then either I do apt install package or I compile it from source if not available. Then I run ./configure again and it tells me the name of another unsatisfied dependency. If there are only one or two missing packages, this is ok. But there were 19!
libmspack-dev
libglib2.0-dev
libpam0g-dev
libssl-dev
libxml2-dev
libxmlsec1-dev
libx11-dev
libcunit1-dev
libxext-dev
libxinerama-dev
libxi-dev
libxrender-dev
libxrandr-dev
libxtst-dev
libgdk-pixbuf2.0-dev
libgtk-3-dev
libgtkmm-3.0-dev
libtool
dnet
I would like ./configure to continue on error, and show me all missing packages at once, so I can install them all at once. Otherwise it is long and painful to run ./configure and apt install 19 times.
The simple approach in your case is to install the open-vm-tools package. To address your question, there is no fool-proof way of listing all missing packages at once, mostly because this wasn’t designed in and configure scripts allow their authors to do anything — so there’s no way to know in advance how to continue and whether continuing is safe. An example of the kind of issue you could run into is tests which build upon the results of previous tests; e.g. check for an installed program, fail if it isn’t installed, and use it in subsequent tests if it is. Continuing if the program is absent isn’t going to give very useful results. However, in many cases you can get useful results by tweaking configure to not exit when it encounters an error. Typically, this involves replacing AC_MSG_ERROR with AC_MSG_WARN , in configure.ac and any M4 library used by configure.ac :
sed -i 's/AC_MSG_ERROR/AC_MSG_WARN/g' configure.ac m4/*.m4
autoreconf -i
./configure ...
and look for “WARNING:” messages. You should of course restore configure.ac and the M4 libraries before you attempt to build the software “properly”. Looking at this more generally, there are other ways to determine dependencies. In many cases, they’re listed in the documentation ( README , INSTALL ...), sometimes even with corresponding package names for popular distributions. Another useful place to look is configure itself, either by running ./configure --help or by reading configure.ac (or CMakeLists.txt or meson.build or whatever file is appropriate for the build tool being used). If the software you’re looking at is packaged in a Linux distribution, you can look at the metadata there too, although it will only correspond to the version of the software being packaged, and will reflect the maintainer’s packaging choices ( apt showsrc ... in Debian derivatives).
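A sketch of the whole warn-instead-of-fail cycle, including the restore step (only configure.ac is backed up here; the files under m4/ get modified too, so restore those from version control or a clean copy of the source tree):
cp -p configure.ac configure.ac.orig
sed -i 's/AC_MSG_ERROR/AC_MSG_WARN/g' configure.ac m4/*.m4
autoreconf -i
./configure 2>&1 | grep -i 'warning:'
mv configure.ac.orig configure.ac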
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/458781", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/133611/" ] }
458,805
I am trying to write a script and at some point I needed to list all the things in a directory and grep something from that directory. I can't do it with ls. ls is not my guy to do it. So I tried to do ls' job with the echo command instead, but it is giving me a permission denied error now. You can see the script below:
#!/bin/sh
# Test for network connection
for interface in $(echo $(/sys/class/net/*) | grep -v lo);
do
    if [ "$(cat /sys/class/net/"$interface"/carrier)" = 1 ]; then
        OnLine=1;
    fi
done
if ! [ $OnLine ]; then
    echo "Not Online" > /dev/stderr;
    exit;
fi
And I am getting this error:
./carriercontrol.sh: line 10: /sys/class/net/apcli0: Permission denied
What can I do to complete this script? Is there a way to get the listing of a directory and pipe it into something? Also, even if I could get rid of that permission error, I think echo will cause more trouble for me. EDIT: I tried to replace echo with the find command; here are the results and errors.
#!/bin/sh
# Test for network connection
for interface in $(find /sys/class/net -mindepth 1 | grep -v lo);
do
    if [ "$(cat /sys/class/net/"$interface"/carrier)" = 1 ]; then
        OnLine=1;
    fi
done
if ! [ $OnLine ]; then
    echo "Not Online" > /dev/stderr;
    exit;
fi
cat: can't open '/sys/class/net//sys/class/net/ra0/carrier': No such file or directory
cat: can't open '/sys/class/net//sys/class/net/eth0/carrier': No such file or directory
cat: can't open '/sys/class/net//sys/class/net/br-lan/carrier': No such file or directory
cat: can't open '/sys/class/net//sys/class/net/eth0.1/carrier': No such file or directory
cat: can't open '/sys/class/net//sys/class/net/apcli1/carrier': No such file or directory
cat: can't open '/sys/class/net//sys/class/net/apcli0/carrier': No such file or directory
You seem to want to query the carrier files under /sys/class/net/*/ to see if there is at least one that indicates whether you're online or not (ignoring */lo/carrier ). With a shell loop:
#!/bin/sh
online=0
for carrier in /sys/class/net/*/carrier; do
    case "$carrier" in
        */lo/carrier) continue ;;
    esac
    if read online <"$carrier" && [ "$online" -eq 1 ]; then
        break
    fi
done
if [ "$online" -ne 1 ]; then
    echo 'not online' >&2
    exit 1
fi
Using bash :
#!/bin/bash
shopt -s extglob
online=0
for carrier in /sys/class/net/!(lo)/carrier; do
    if read online <"$carrier" && [ "$online" -eq 1 ]; then
        break
    fi
done
if [ "$online" -ne 1 ]; then
    echo 'not online' >&2
    exit 1
fi
Or, as a bash "almost-one-liner":
#!/bin/bash
shopt -s extglob
grep -qx 1 /sys/class/net/!(lo)/carrier || ! echo not online >&2
This last one assumes that the files hold a single digit 1 if that carrier is online and that there is no data, or at least no 1 , if it's not. The loops above (using read ) will read the first line only from each carrier file until a 1 is found. The issue in your code is the command substitution $(/sys/class/net/*) which will try to execute the first matching filename as a command with the other matching filenames as arguments. Also, the test [ $OnLine ] would be "true" whenever $OnLine is non-empty. I'm not sure what the files that you parse contain if the carrier is not on-line, but even a zero would be taken as "true" here.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/458805", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/302244/" ] }
458,839
I'm trying to rename a few images:
IMG_1.JPG
IMG_2.JPG
IMG_3.JPG
I want to replace IMG with img and .JPG with .jpg - I know how to do the second part:
$ rename 's/\.JPG$/\.jpg/' *.JPG
My problem is that I can't seem to mv IMG_.JPG to img_.jpg - I know you can pass multiple patterns to rename , but I can't seem to use the existing filename with an amended lowercase value. How do I go about this?
Maybe you need to be using the perl rename command. On my CentOS box, it's called 'prename'.
$ ls
IMG_1.JPG  IMG_2.JPG  IMG_3.JPG
$ prename 's/^IMG/img/;s/\.JPG$/\.jpg/' *JPG
$ ls
img_1.jpg  img_2.jpg  img_3.jpg
$
$ prename -h
Usage: prename [OPTION]... PERLEXPR FILE...
Rename FILE(s) using PERLEXPR on each filename.
 -b, --backup                    make backup before removal
 -B, --prefix=SUFFIX             set backup filename prefix
 -f, --force                     remove existing destinations, never prompt
 -i, --interactive               prompt before overwrite
 -l, --link-only                 link file instead of rename
 -n, --just-print, --dry-run     don't rename, implies --verbose
 -v, --verbose                   explain what is being done
 -V, --version-control=METHOD    override the usual version control
 -Y, --basename-prefix=PREFIX    set backup filename basename prefix
 -z, -S, --suffix=SUFFIX         set backup filename suffix
     --help                      display this help and exit
     --version                   output version information and exit
The backup suffix is ~, unless set with SIMPLE_BACKUP_SUFFIX. The
version control may be set with VERSION_CONTROL, values are:
 numbered, t     make numbered backups
 existing, nil   numbered if numbered backups exist, simple otherwise
 simple, never   always make simple backups
Report bugs to [email protected]
$
If you want to use the dumb rename command from util-linux (sometimes called rename.ul ), it perhaps needs doing in two steps, e.g.
$ ls
IMG_1.JPG  IMG_2.JPG  IMG_3.JPG
$ rename IMG img *JPG
$ rename JPG jpg *JPG
$ ls
img_1.jpg  img_2.jpg  img_3.jpg
$
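Since the help output above lists a dry-run option, it can be worth previewing the renames first:
$ prename -n 's/^IMG/img/;s/\.JPG$/\.jpg/' *JPG
With -n ( --just-print ), prename only shows what it would do without touching any files.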
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/458839", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/226212/" ] }
458,866
I'm new to Unix. I typed this command in an Ubuntu terminal: pwd | echo I expected to see the output of pwd in the terminal ( /home/fatemeh/Documents/Code/test ) but the output was just a single empty line. Why does this happen?
echo does not do anything with standard input; it only parses its parameters. So you are effectively running echo which, by itself, outputs a single empty line, and the standard input is discarded. If you want to see the behavior you are trying to implement, use a tool designed to parse standard input, such as cat :
$ pwd | cat
/home/username
If you really want to use echo to display the current working directory (or the output of another command), you can use Command Substitution to do this for you:
$ echo "Your shell is currently working in '$(pwd)'."
Your shell is currently working in '/home/username'.
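If the aim was really to hand standard input to echo as arguments, xargs can bridge the gap — a small illustration:
$ pwd | xargs echo
/home/username
Here xargs reads the pipe and passes the text to echo as a command-line argument.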
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/458866", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271328/" ] }
458,875
Suppose I install an ISO of Linux Mint as a virtual machine on VirtualBox. I want to install several programs, such as Terminator, Netbeans, Java, Ruby on Rails, etc. How can I convert the virtual machine back into an ISO, so that when I install it on any other physical computer my programs are already working as I configured them?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/458875", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/128788/" ] }
458,877
I have this quite unusual condition here. I have an old Linux system that has no sudo or su commands. I do not have physical access to this computer so I cannot log in as another user. The Linux kernel is 2.6.18-498 and the system is a Red Hat 4.1.2-55. I can go to the /bin directory and can say for sure there are no su or sudo binaries there. So this is not a case of PATH variable misconfiguration. Also, this is a web server, so maybe it is configured this way. Is there any way to run a command as a different user? Any help would be appreciated.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/458877", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/302582/" ] }
458,893
When running a virtual machine in VMware (Ubuntu 16.04 host), both the guest system (Windows 10 at the moment) and the host system regularly become unresponsive for several seconds, e.g. when starting Atom or Visual Studio in the guest VM. RAM usage reports look normal (16 GB total, 6.5 GB used by the VM as “shared memory”, some GB free), but while the system is unresponsive, IO tasks are either suspended or very slow, for example copy/paste of text takes several seconds. Changing settings (virtualisation settings, VM’s RAM, …) in VMware does not have any effect.
The solution is to disable khugepaged defragmenting:
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag
echo 0 | sudo tee /sys/kernel/mm/transparent_hugepage/khugepaged/defrag
See this answer from the question Arch Linux becomes unresponsive from khugepaged . Also, it is probably a good idea to limit the amount of RAM which VMware can use for running VMs to reserve some for the host system (Edit > Preferences). Note: I am re-posting this answer under this question because the answer is very hard to find – it literally took me years.
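To make the change survive a reboot, one option — a sketch assuming a GRUB-based Ubuntu host; keep whatever options are already in the variable — is to disable transparent hugepages on the kernel command line instead of via sysfs:
# in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash transparent_hugepage=never"
# then regenerate the GRUB configuration:
sudo update-grub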
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/458893", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18442/" ] }
458,906
Using sed, how do I remove everything up to and including the first period on any line that contains more than one period, doing this for the whole file? Before sed:
akamai.com
cdnjs.cloudflare.com
com.cdn.cloudflare.net
After sed:
akamai.com
cloudflare.com
cdn.cloudflare.net
$ sed '/\..*\./s/^[^.]*\.//' file
akamai.com
cloudflare.com
cdn.cloudflare.net
The sed script first matches lines that contain at least two dots using the regular expression \..*\. (could also have been written [.].*[.] ). For lines matching this, a substitution that removes everything up to and including the first dot is performed. Using awk , being somewhat long-winded in comparison to the above:
$ awk -F '.' -vOFS='.' 'NF > 2 { n=split($0, a); $0=""; for (i=2;i<=n;++i) $(NF+1)=a[i] } 1' file
akamai.com
cloudflare.com
cdn.cloudflare.net
Here, whenever there are more than two dot-delimited fields, we split the current line on dots, and then re-create the current record from that, skipping the first field. The trailing 1 at the end causes every line (modified or not) to be printed. Shorter awk in the same fashion as the sed solution:
$ awk -F '.' 'NF > 2 { sub(/^[^.]*\./, "") } 1' file
akamai.com
cloudflare.com
cdn.cloudflare.net
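To change the file itself rather than print the result (GNU sed assumed, for -i ):
sed -i '/\..*\./s/^[^.]*\.//' file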
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/458906", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/302603/" ] }
458,935
I have a file that looks like this: [ { "billingAccountNumber": "x", "paymentResponseObject": { "uiErrorDipslayMessage": "", "transactionStatus": "S", "transactionDescription": "", "transactionCode": "", "confirmationNumber": "1" } }, { "billingAccountNumber": "y", "paymentResponseObject": { "uiErrorDipslayMessage": "", "transactionStatus": "S", "transactionDescription": "", "transactionCode": "", "confirmationNumber": "2" } }, { "billingAccountNumber": "z", "paymentResponseObject": { "uiErrorDipslayMessage": "", "transactionStatus": "S", "transactionDescription": "", "transactionCode": "", "confirmationNumber": "3" } }] The data doesn't look exactly like this, and I have more than three elements. From this data, I want to create three files: x.json , y.json , and z.json . I want the contents of each of those files to be the contents of the paymentResponseObject . Is there a way to do this with jq ? I've figured out how to do this in awk , but it's very clunky and I want to be able to repeat this process with different schemas. I have to rewrite 80% of the awk script for each schema.
From this SO thread:
jq -cr 'keys[] as $k | "\($k)\n\(.[$k])"' input.json |
while read -r key; do
    fname=$(jq --raw-output ".[$key].billingAccountNumber" input.json)
    read -r item
    printf '%s\n' "$item" > "./$fname"
done
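A variant that writes only the paymentResponseObject , under the account number plus a .json suffix, as asked — a sketch that streams the array elements and assumes the account numbers are safe to use as file names:
jq -c '.[]' input.json | while read -r item; do
    fname=$(printf '%s' "$item" | jq -r '.billingAccountNumber')
    printf '%s' "$item" | jq '.paymentResponseObject' > "$fname.json"
done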
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/458935", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101311/" ] }
458,954
I have a file of genomic data with tag counts, and I want to know how many are represented once:
$ grep "^1" file | wc -l
includes all lines beginning with 1, so it includes tags represented 10 times, 11 times, 100 times, 1245 times, etc. How do I do this? Current format:
79	TGCAG.....
1	TGCAG.....
1257	TGCAG.....
1	TGCAG.....
I only want the lines that are:
1	TGCAG.....
So it cannot include the lines beginning with 1257. NOTE: The file above is tab delimited.
The question in the body: select lines that start with a 1 and are followed by a space.
grep -c '^1\s' file
grep -c '^1[[:space:]]' file
That will also give the count of lines (without needing the call to wc ). The question in the title: a 1 not followed by another number (or nothing):
grep -cE '^1([^0-9]|$)' file
But both solutions above have some interesting issues, keep reading. In the body of the question the user claims that the file is "tab delimited". Delimiter — tab: a line starting with a 1 followed by a tab (an actual tab in the command). This fails if the delimiter is a space (or any other, or none):
grep '^1	' file
Space: a line starting with a 1 followed by a space (an actual space in the command). This fails if the delimiter is any other or none:
grep '^1 ' file
Tab or space:
grep -E '^1( |	)' file
grep '^1[[:blank:]]' file
Whitespace: a more flexible option is to include several space (horizontal and vertical) characters. The [:space:] character class set is composed of (space), \t (horizontal tab), \r (carriage return), \n (newline), \v (vertical tab) and \f (form feed). But grep can not match a newline (it is an internal limitation that could only be avoided with the -z option). It is possible to use it as a description of the delimiter. It is also possible, and shorter, to use the GNU shorthand \s :
grep -c '^1[[:space:]]' file
grep -c '^1\s' file
But this option will fail if the delimiter is something like a colon : or any other punctuation character (or any letter). Boundary: or, we can use the transition from a digit to a "not a digit" boundary, well, actually "a character not in [_[:alnum:]] ( _a-zA-Z0-9 )":
grep -c '^1\b' file    # portable but not POSIX.
grep -c '^1\>' file    # portable but not POSIX.
grep -wc '^1' file     # portable but not POSIX.
grep -c '^1\W' file    # portable but not POSIX (does not match only a 1) (no underscore in BSD).
This will accept as valid lines that start with a 1 and are followed by some punctuation character.
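Since the question states the file is tab-delimited, another option is to compare the first tab-separated field exactly, which sidesteps the delimiter guessing above — a small awk sketch:
awk -F '\t' '$1 == "1" { n++ } END { print n+0 }' file
This prints the number of lines whose first field is exactly 1 (the n+0 makes it print 0 rather than an empty string when nothing matches).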
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/458954", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/293255/" ] }
458,965
Suppose I have an environment where there isn't a shell running, so I can't use redirection, pipes, here-documents or other shell-isms, but I can launch a command (through execvp or some similar way). I want to write an arbitrary string to a named file. Is there a standard command that will do something like: somecommand outputfile 'string' for instance: somecommand /proc/sys/net/ipv4/ip_forward '1' A really dumb example might be: curl -o /proc/sys/net/ipv4/ip_forward http://example.com/1.txt where I set up 1.txt to contain the string I want. Is there a common command that can be abused to do this?
If you know of any other non-empty file on the system, then with POSIX sed :
sed -e 's/.*/hello world/' -e 'wtarget' -e q otherfile
With GNU sed and just your own non-empty file, you can use:
sed -i.bak -e '$ihello world' -e 'd' foo
With BSD sed , this would work instead:
sed -i.bak -e '$i\hello world' -e d foo
If you're not using a shell then presumably the linebreak isn't an issue. With ex , if the target file exists:
ex -c '0,$d' -c 's/^/hello world/' -c 'x' foo
This just deletes everything in the file, replaces the first line with "hello world", then writes and quits. You could do the same thing with vi in place of ex . Implementations are not required to support multiple -c options, but they generally do. For many ex implementations the requirement that the file already exist is not enforced. Also with awk :
awk -v FN=foo -v STR="hello world" 'BEGIN{printf("%s", STR) > FN}'
will write "hello world" to file "foo". If there are existing files containing the bytes you want at known locations, you can assemble a file byte by byte over multiple commands with dd (in this case, alphabet contains the alphabet, but it could be a mix of input files):
dd if=alphabet bs=1 skip=7 count=1 of=test
dd if=alphabet bs=1 skip=4 count=1 seek=1 of=test
dd if=alphabet bs=1 skip=11 count=1 seek=2 of=test
dd if=alphabet bs=1 skip=11 count=1 seek=3 of=test
dd if=alphabet bs=1 skip=14 count=1 seek=4 of=test
cat test
hello
From there, just regular cp will work, or you might have been able to put it in place to start with. Less commonly, the mispipe command from moreutils allows constructing a shell-free pipe:
mispipe "echo 1" "tee wtarget"
is equivalent to echo 1 | tee wtarget , but returning the exit code of echo . This uses the system() function internally, which doesn't strictly require a shell to exist. Finally, perl is a common command and will let you write arbitrary programs to do whatever you want on the command line, as will python or any other common scripting language. Similarly, if a shell just isn't "running", but it does exist, sh -c 'echo 1 > target' will work just fine.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/458965", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157128/" ] }
458,973
Is it correct that pstree <pid> will output all the descendant processes of the given process, and that pstree -s <pid> will output all the descendant processes plus the ancestor processes of the given process? How can I get only the ancestor processes of a given process? Thanks.
You can always walk the ancestry tree by hand using ps -o ppid= :
#! /bin/bash -
pid=${1?Please give a pid}
while [ "$pid" -gt 0 ] &&
      read -r ppid name < <(ps -o ppid= -o comm= -p "$pid")
do
    printf '%s\n' "$pid $name"
    pid=$ppid
done
Or to avoid running ps several times:
#! /bin/sh -
pid=${1?Please give a pid}
ps -Ao pid= -o ppid= -o comm= | awk -v p="$pid" '
  {
    pid = $1; ppid[pid] = $2
    sub(/([[:space:]]*[[:digit:]]+){2}[[:space:]]*/, "")
    name[pid] = $0
  }
  END {
    while (p) {
      print p, name[p]
      p = ppid[p]
    }
  }'
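Usage sketch, assuming the first script above was saved as ancestors.sh (the PIDs and process names in the output are illustrative):
$ bash ancestors.sh "$$"
12345 bash
11890 sshd
1 systemd
Each line is a PID followed by its command name, walking from the given process up to PID 1.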
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/458973", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
459,074
I am trying to ssh to a remote machine; the attempt fails:
$ ssh -vvv [email protected]
OpenSSH_7.7p1, OpenSSL 1.0.2o 27 Mar 2018
.....
debug2: ciphers ctos: aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc
debug2: ciphers stoc: aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc
debug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: compression ctos: none,[email protected]
debug2: compression stoc: none,[email protected]
debug2: languages ctos:
debug2: languages stoc:
debug2: first_kex_follows 0
debug2: reserved 0
debug1: kex: algorithm: curve25519-sha256
debug1: kex: host key algorithm: rsa-sha2-512
Unable to negotiate with 192.168.100.14 port 22: no matching cipher found. Their offer: aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc
As far as I understand the last line of the log, the server offers to use one of the following 4 cipher algorithms: aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc . It looks like my ssh client doesn't support any of them, so the server and client are unable to negotiate further. But my client does support all the suggested algorithms:
$ ssh -Q cipher
3des-cbc
aes128-cbc
aes192-cbc
aes256-cbc
rijndael-cbc@lysator.liu.se
aes128-ctr
...
and there are several more. And if I explicitly specify the algorithm like this:
ssh -vvv -c aes256-cbc [email protected]
I can successfully log in to the server. My ~/.ssh/config doesn't contain any cipher-related directives (actually I removed it completely, but the problem remains). So, why can't the client and server decide which cipher to use without my explicit instructions? The client understands that the server supports aes256-cbc , and the client knows it can use that cipher itself, so why not just use it? Some additional notes: There was no such problem some time (about a month) ago. I've not changed any ssh configuration files since then. I did update installed packages, though. There is a question which describes a very similar-looking problem, but there is no answer to my question: ssh unable to negotiate - no matching key exchange method found UPDATE: problem solved. As telcoM explained, the problem is with the server: it suggests only the obsolete cipher algorithms. I was sure that both client and server were not outdated. I logged into the server (by the way, it's a Synology, updated to the latest available version) and examined /etc/ssh/sshd_config . The very first (!) line of this file was:
Ciphers aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc
This is very strange (the fact that the line is the very first in the file); I am sure I've never touched the file before. However, I changed the line to:
Ciphers aes256-ctr,aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc
restarted the server (I did not figure out how to restart only the sshd service), and now the problem is gone: I can ssh to the server as usual.
The -cbc algorithms have turned out to be vulnerable to an attack. As a result, up-to-date versions of OpenSSH will now reject those algorithms by default: for now, they are still available if you need them, but as you discovered, you must explicitly enable them. Initially when the vulnerability was discovered (in late 2008, nearly 10 years ago!) those algorithms were only placed at the tail end of the priority list for the sake of compatibility, but now their deprecation in SSH has reached a phase where those algorithms are disabled by default. According to this question in Cryptography.SE , this deprecation step was already happening in 2014. Please consider this a gentle reminder to update your SSH server , if at all possible. (If it's a firmware-based implementation, see if updated firmware is available for your hardware.)
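If updating the server isn't possible right away, the -c workaround from the question can be made permanent on the client side — a sketch for ~/.ssh/config (the + prefix appends to the client's default cipher list instead of replacing it; it needs a reasonably recent OpenSSH client, and the 7.7 shown in the question is new enough):
Host 192.168.100.14
    Ciphers +aes256-cbc
Bear in mind this only papers over the problem; the CBC ciphers remain deprecated for the reasons above.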
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/459074", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50667/" ] }
459,087
EDIT : actually, it is not an alias (see answers) As you all know, in a shell, the dot-command ( . ) is an alias to the source command. But I wonder if there's a reason behind such a weird alias? Apparently, I don't use it so often that I would need such a short alias. So, why a dot? And why for source and not a more commonly used command such as cd or ls ? Why even an alias? Is there a good reason behind that? Or is there a historical reason? NOTE: I originally posted this question on Server Fault but I was suggested to post it here instead.
. is the POSIX standard . source is a bash builtin synonym for . and is not as portable as . Also note in the Bash reference manual . is listed under 4.1 Bourne Shell Builtins and source is listed under 4.2 Bash Builtin Commands as: A synonym for . (see Bourne Shell Builtins). It's possible that bash named it source because that's what it was called in C shell (where the command apparently originated).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/459087", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/302729/" ] }
459,099
I am wondering, after seeing this question, why the . symbol was chosen to represent the dot command . I am not able to find much about its origin or who created it, and I am curious as to why a full stop was chosen to represent this command.
The earliest mentioning of the dot command that I can find is in the manual for Stephen Bourne's sh shell in Unix Release 7 (it may be older, but not evidently present as one of the built-in commands in sh in Release 6 ). . file Read and execute commands from file and return. The search path $PATH is used to find the directory containing file. The dot, in quite general terms, seems to have been associated with "here" or "current". The . directory is the current directory , and the adb debugger from the same release of Unix had a . address which was the current address . Likewise, entering a . followed by newline in the ed editor will re-display the current line of the editing buffer ( . addresses the current line). The dot also means the current node in certain structured query languages for XML, JSON, YAML, etc. (although these are later inventions). It is therefore, I think, not too far fetched to speculate that the . command in the shell also means "here" or "current". In particular, "run this script in the current environment ." The dot is also quite quick and easy to type, and having a short command for doing a common task (whether it be in ed , adb or in the shell) may have been another reason why another longer name was not used. Note that I don't have a functioning version of sh from Release 7 to test things in, and that I can't find the actual implementation of . in Bourne's shell from that release in the above-mentioned Git repository, so I can't say for sure that it actually did exactly what it does today. But it's likely that it did.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/459099", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237982/" ] }
459,252
During the course of my learning about the non-interactive login shell, I have come across 2 ways to remotely execute commands via SSH. To me, they both look the same. But unfortunately, they aren't. With the command echo "shopt login_shell; echo \$-" | ssh -l poweruser 192.168.1.67 , I get the following output:
Pseudo-terminal will not be allocated because stdin is not a terminal.
poweruser@192.168.1.67's password:
login_shell     on
hBs
But with the command ssh -l poweruser 192.168.1.67 "shopt login_shell; echo \$-" , I get a different output:
poweruser@192.168.1.67's password:
login_shell     off
hBc
Could you please tell me why the shell is not a login shell in the second case, even though it prompts for the password?
man ssh documents that : If a command is specified, it is executed on the remote host instead of a login shell. The reason is then that in one case you specified a command, and in the other you didn't, and ssh deliberately (by design) behaves differently in those cases. In the one where you didn't provide a command, a login shell was launched and it read the piped input and executed it. In the one where you did provide a command, it was launched instead. Prompting for the password is unrelated. That is authenticating you to the server, before the shell or command is launched.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/459252", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/302852/" ] }
459,255
I have two files. The content of both files is dynamic and generated by the system when required. The first file contains the meaning of each specific row number, as below:
head simdb.txt
MSISDN
Account_ID
COSP_ID
Currency
Language
Home_Zone
SIM_PIN
Screening_PIN
Third_ParAnothercess_PIN
Cumulative_Incorrect_PIN
The other file contains the dynamic data, as below:
head subscriber.txt
0='917598936722' 4='ENG' 6='1234'
The output should be like:
0='917598936722' //MSISDN
4='ENG' //Language
6='1234' //SIM_PIN
Question updated ========
Adding to the above query, if subscriber.txt had multiple lines, how could a script print each line first and then the required output? For example, if the subscriber.txt file were like below:
head subscriber.txt
0='917598936722' 4='ENG' 6='1234'
0='919654680634' 4='ENG' 6='1234'
Then the desired output would be like:
0='917598936722' 4='ENG' 6='1234'
0='917598936722' //MSISDN
4='ENG' //Language
6='1234' //SIM_PIN
=========================================
0='919654680634' 4='ENG' 6='1234'
0='919654680634' //MSISDN
4='ENG' //Language
6='1234' //SIM_PIN
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/459255", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/302865/" ] }
459,300
I am looking to have my date command show the time like this:
mohi@DESKTOP-PM4LGGS:~/Dropbox/mpl$ date
Mon, Jul 30, 2018 3:31:41 PM
like I am seeing in my Git Bash on my office Windows machine. How can I get my date command to show AM/PM time by default on my home Ubuntu 18.04 LTS (GNOME Terminal)?
Supplying date with an explicit format string would do this:
$ date +'%a, %b %d, %Y %r'
Mon, Jul 30, 2018 11:45:58 AM
or
$ date +'%a, %b %d, %Y %l:%M:%S %p'
where %l:%M:%S %p is a bit more locale-independent than %r might be. As a shell function that overloads date with this format only when called without any options:
date () {
    [ "$#" -eq 0 ] && set -- +'%a, %b %d, %Y %r'
    command date "$@"
}
You would execute the function definition as written above directly in your interactive shell to make it available there, or put it wherever you ordinarily put aliases.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/459300", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37576/" ] }
459,367
In a Bash script, I'm trying to store the options I'm using for rsync in a separate variable. This works fine for simple options (like --recursive ), but I'm running into problems with --exclude='.*' :
$ find source
source
source/.bar
source/foo
$ rsync -rnv --exclude='.*' source/ dest
sending incremental file list
foo

sent 57 bytes  received 19 bytes  152.00 bytes/sec
total size is 0  speedup is 0.00 (DRY RUN)
$ RSYNC_OPTIONS="-rnv --exclude='.*'"
$ rsync $RSYNC_OPTIONS source/ dest
sending incremental file list
.bar
foo

sent 78 bytes  received 22 bytes  200.00 bytes/sec
total size is 0  speedup is 0.00 (DRY RUN)
As you can see, passing --exclude='.*' to rsync "manually" works fine ( .bar isn't copied), but it doesn't work when the options are stored in a variable first. I'm guessing that this is either related to the quotes or the wildcard (or both), but I haven't been able to figure out what exactly is wrong.
In general, it's a bad idea to demote a list of separate items into a single string, whether it's a list of command line options or a list of pathnames. Using an array instead:
rsync_options=( -rnv --exclude='.*' )
or
rsync_options=( -r -n -v --exclude='.*' )
and later...
rsync "${rsync_options[@]}" source/ target
This way, the quoting of the individual options is maintained (as long as you double quote the expansion of ${rsync_options[@]} ). It also allows you to easily manipulate the individual entries of the array, should you need to do so, before calling rsync . In any POSIX shell, one may use the list of positional parameters for this:
set -- -rnv --exclude='.*'
rsync "$@" source/ target
Again, double quoting the expansion of $@ is critical here. Tangentially related: How can we run a command stored in a variable? The issue is that when you put the two sets of options into a string, the single quotes of the --exclude option's value become part of that value. Hence, RSYNC_OPTIONS='-rnv --exclude=.*' would have worked¹... but it's better (as in safer) to use an array or the positional parameters with individually quoted entries. Doing so would also allow you to use things with spaces in them, if you would need to, and avoids having the shell perform filename generation (globbing) on the options. ¹ provided that $IFS is not modified and that there's no file whose name starts with --exclude=. in the current directory, and that the nullglob or failglob shell options are not set.
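Since the entries stay separate, they are easy to adjust before the call — e.g. appending an option conditionally (a sketch; $do_delete is a hypothetical flag variable):
if [ "$do_delete" = yes ]; then
    rsync_options+=( --delete )
fi
rsync "${rsync_options[@]}" source/ target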
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/459367", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30497/" ] }
459,384
I was building a package from an SRPM on Fedora:
$ rpmbuild --rebuild *.src.rpm
...
warning: user mockbuild does not exist - using root
warning: group mockbuild does not exist - using root
warning: user mockbuild does not exist - using root
warning: group mockbuild does not exist - using root
...
The package was built but there were many such mockbuild does not exist warnings. There doesn't seem to be such an account on my system, even though I have the mock package installed. Are there any other packages I'm missing? Is this a fault in the package or on my system? How can I eliminate these warnings?
You don't, and they don't do anything anyways. They are an artifact of the package having been built in the Fedora buildsystem .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/459384", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7157/" ] }
459,399
Let me give a bit of background to my question. I am using a terminal RSS reader newsboat which allows for the usage of macros to operate on links. For example, I have a macro running cd ~/videos && youtube-dl %u , which will download a youtube video to ~/videos . However, the output of youtube-dl will be printed in my terminal until the download is complete and for this time I cannot continue using newsboat . I am wondering how I could phrase the command so that it is executed “somewhere else” so that I can immediately continue using the terminal from which it is run.
If you don't need the output at all then redirect it to /dev/null :
yourcommand > /dev/null 2>&1
otherwise you can redirect it into a file:
yourcommand > /somewhere/file 2>&1
And as you run the command from another application and you want to use your news reader immediately, you may want to run the command in the background. I am not sure how it works in newsboat, but in a shell you can send programs into the background with & :
yourcommand > /somewhere/file 2>&1 &
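Put together for the macro from the question, this might look like (the log file location is just an example):
cd ~/videos && youtube-dl %u > ~/videos/youtube-dl.log 2>&1 &
so the download runs in the background and its progress output goes to the log file instead of the terminal.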
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/459399", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
459,402
I have CentOS 6 installed and did a yum update . I need pg_recvlogical . I didn't find it using yum search pg_recvlogical , but I found it mentioned here . So I downloaded it and wanted to install it using rpm -i , but I quickly got into a tree of unsatisfied dependencies which collide with the installed versions. What should I do to install just pg_recvlogical ? Thanks!
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/459402", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5801/" ] }
459,441
I remember running across this command a while back although I do not remember the command itself. As I remember it, you ran the command, it would create a temp buffer that would then be edited by the default editor (vim) and upon closing the buffer, the command would be executed. Eg:
$ <buffer edit command>
~ # Write bash temp script
~ for i in *; do
~ echo $i
~ done
$ file1
$ file2
$ file3
$ ...
Does anyone know what this command is? It is like writing a bash script in vim only without saving the file and just running it.
You want to bind some key combo to edit-and-execute-command . I use: bind '"\C-e": edit-and-execute-command' in my ~/.bashrc . When I hit Ctrl-e, it invokes $EDITOR and lets me edit the command. When I save and quit, it executes the edited command.
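In bash's default emacs editing mode the function is typically already bound to Ctrl-x Ctrl-e ; you can check what your shell has with readline's bind -P (the output line shown is from a typical setup and may differ on yours):
$ bind -P | grep edit-and-execute-command
edit-and-execute-command can be found on "\C-x\C-e".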
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/459441", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72016/" ] }
459,450
Is there a more direct way to exit an OpenSSH control master process and delete its socket file than with the help of the programs lsof or fuser and without knowing the destination host (connection details) as suggested by the accepted answer of this question ? I thought of something like this along the lines of:
$ ssh -O exit -S ~/.ssh/7eb92b0827f3e8e1e8591fb3d1a5cd1b94b758cb.socket
I'm asking because I'm looking for a scriptable way to exit every open control master connection each time I log out of my user account. Not doing so causes my systemd -powered openSUSE computer to wait for a timeout of two minutes until it forcefully terminates the still open control master connection, eventually powering off. Unfortunately OpenSSH's client program ssh requires the destination host and the ControlPath file name pattern in order to deduce the actual file path to the socket file. I, on the contrary, thought of the more direct method of providing the concrete socket file via the program's -S ctl_path option. In the global section of my system-wide OpenSSH client config file I configured OpenSSH's connection multiplexing feature as follows:
ControlMaster auto
ControlPath ~/.ssh/%C.socket
ControlPersist 30m
Please note that I want to keep the file name pattern for socket files, i.e. hashing the token string %l%h%p%r with the token %C . Any ideas?
This works for me using just the socket file for the control master:
$ ssh -o ControlPath=~/.ssh/<controlfile> -O check <bogus arg>
NOTE: You can also use ssh -S ~/.ssh/<controlfile> ... as well, which is a bit shorter form of the above. Example Here's an example where I've already established a connection to a remote server:
$ ssh -S ~/.ssh/master-57db26a0499dfd881986e23a2e4dd5c5c63e26c2 -O check blah
Master running (pid=89228)
$
And with it disconnected:
$ ssh -S ~/.ssh/master-66496a62823573e4760469df70e57ce4c15afd74 -O check blah
Control socket connect(/Users/user1/.ssh/master-66496a62823573e4760469df70e57ce4c15afd74): No such file or directory
$
If it were still connected, this would force it to exit immediately:
$ ssh -S ~/.ssh/master-66496a62823573e4760469df70e57ce4c15afd74 -O exit blah
Exit request sent.
$
It's unclear to me, but it would appear to potentially be a bug in ssh that it requires an additional argument at the end, even though blah is meaningless in the context of the switches I'm using. Without it, I get this:
$ ssh -S ~/.ssh/master-57db26a0499dfd881986e23a2e4dd5c5c63e26c2 -O check
usage: ssh [-1246AaCfGgKkMNnqsTtVvXxYy] [-b bind_address] [-c cipher_spec] [-D [bind_address:]port] [-E log_file] [-e escape_char] [-F configfile] [-I pkcs11] [-i identity_file] [-L [bind_address:]port:host:hostport] [-l login_name] [-m mac_spec] [-O ctl_cmd] [-o option] [-p port] [-Q cipher | cipher-auth | mac | kex | key] [-R [bind_address:]port:host:hostport] [-S ctl_path] [-W host:port] [-w local_tun[:remote_tun]] [user@]hostname [command]
Version info OSX:
$ ssh -V
OpenSSH_6.9p1, LibreSSL 2.1.8
CentOS 7.x:
$ ssh -V
OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017
I confirmed that on both of these versions the additional bogus argument was required. UPDATE #1 I raised this as a potential bug to the OpenSSH upstream and they replied as follows: Yes - this is intentional. ControlPath may contain %-expansions that need a hostname to expand fully. We could conceivably check to see whether ControlPath actually needs this and make the hostname optional but 1) ControlPaths that use %h are common and 2) I'd rather not make the argument parsing code more of a maze than it already is. References How to tell if an ssh ControlMaster connection is in use How to close (kill) ssh ControlMaster connections manually
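Building on that, the log-out cleanup asked about in the question could be a simple loop over the socket directory — a sketch matching the ~/.ssh/%C.socket naming from the question, where placeholder-host is the dummy trailing argument discussed above (it could run from something like ~/.bash_logout ):
for s in ~/.ssh/*.socket; do
    [ -e "$s" ] || continue          # the glob matched nothing
    ssh -S "$s" -O exit placeholder-host
done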
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/459450", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27412/" ] }
459,452
Is it possible to use a single DHCP server to push completely different IP ranges to different network interfaces in different LANs? EXAMPLE: I have Servers #1 and #2. Server #1: a DHCP server with a network interface called "enpA" which is connected to LAN A. Server #2: a DHCP client with a network interface called "enpA" which is also connected to LAN A. It has a network interface called "enpB" which is connected to LAN B. LAN A: 192.168.56.0/24 LAN B: 10.0.2.0/24 I want the LAN A server (Server #1) to push an IP to the "enpB" interface (Server #2) which is connected to LAN B. That is, is it possible to do this with a single DHCP server connected to LAN A? If yes, what strategy should I use with a DHCP server like ISC Kea?
Yes, you can do this. What you need to do is run a DHCP relay agent on server B, which listens for DHCP requests on UDP/67 (UDP/547 for DHCPv6) on its LAN B interface and forwards them to LAN A (the DHCP server obviously needs to be set up to have network pools for both networks!) The system works something like this: the client broadcasts its request on LAN B; the relay on Server 2 picks it up and forwards it across LAN A to Server 1, marked as being on behalf of the client; Server 1 sends its answer back to the relay, which passes it on to the client on LAN B. DHCPv4 and DHCPv6 relays are handled individually, so you will need two instances of the relay running if you want to handle both types. Assuming Server A is 192.168.56.1 / 2001:db8:1::1 , and Server B has eth0 / 192.168.56.2 / 2001:db8:1::2 on LAN A and eth1 / 10.0.2.1 / 2001:db8:2::1 on LAN B, you would run the relay like this: DHCPv4:
/path/to/dhcrelay -4 -i eth1 192.168.56.1
DHCPv6:
/path/to/dhcrelay -6 -l eth1 -u eth0
Note that I haven't had a need to use DHCPv6 relays, so this is based on the documentation.
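On the server side, the pools for both networks might look like this in ISC dhcpd syntax (the address ranges and router options are illustrative; Kea uses a different, JSON-based configuration, but the idea is the same):
subnet 192.168.56.0 netmask 255.255.255.0 {
    range 192.168.56.100 192.168.56.200;
    option routers 192.168.56.1;
}
subnet 10.0.2.0 netmask 255.255.255.0 {
    range 10.0.2.100 10.0.2.200;
    option routers 10.0.2.1;
}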
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/459452", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61742/" ] }
459,492
I'm using the screen terminal multiplexer. For some reason a dashed line appears on the 10th row:
...
[ec2-user@ip-172-31-1-24 var]$ ls
account  db     games     lib    lock  mail  opt       run    tmp  yp
cache    empty  kerberos  local  log   nis   preserve  spool  www
[ec2-user@ip-172-31-1-24 var]$
----------------------------------------------------------------------------
...
My available work area was limited to a few lines. What does this mean, and how can I recover the entire workspace?
I'd like to present another answer that might help other users who come across this: On GNU Screen, by default, the dashed line means something other than what @slm said. His answer covers tmux . For screen splitting ( CTRL + A + S ) in screen , the line is made out of solid full block characters. ( █ ) The dashed line in screen is used to show the window borders of the display/terminal which first attached the screen session. I'll give an example: You have a 1280x1024px monitor on which you start a screen session in a fullscreen terminal. Now on another machine, with a 1920x1200px monitor and a fullscreen terminal, you execute screen -x <session> , entering multi-display mode. On that terminal you will now see dashed lines the size of your first terminal, and you won't be able to use more space until you detach ( -d ) the screen from the other terminal. This is to ensure that every attached terminal always sees all the contents.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/459492", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/278329/" ] }
459,521
How can I truncate a (UTF-8 encoded) text file to given number of characters? I don't care about line lengths and the cut can be in the middle of word. cut seems to operate on lines, but I want a whole file. head -c uses bytes, not characters.
Some systems have a truncate command that truncates files to a number of bytes (not characters). I don't know of any that truncate to a number of characters, though you could resort to perl , which is installed by default on most systems:
perl:
perl -Mopen=locale -ne '
    BEGIN{$/ = \1234} truncate STDIN, tell STDIN; last' <> "$file"
With -Mopen=locale , we use the locale's notion of what characters are (so in locales using the UTF-8 charset, that's UTF-8 encoded characters). Replace with -CS if you want I/O to be decoded/encoded in UTF-8 regardless of the locale's charset. $/ = \1234 : we set the record separator to a reference to an integer, which is a way to specify records of fixed length (in number of characters ). Then, upon reading the first record, we truncate stdin in place (so at the end of the first record) and exit.
GNU sed:
With GNU sed , you could do (assuming the file doesn't contain NUL characters or sequences of bytes which don't form valid characters -- both of which should be true of text files):
sed -Ez -i -- 's/^(.{1234}).*/\1/' "$file"
But that's far less efficient, as it reads the file in full and stores it whole in memory, and writes a new copy.
GNU awk:
Same with GNU awk :
awk -i inplace -v RS='^$' -e '{printf "%s", substr($0, 1, 1234)}' -E /dev/null "$file"
-e code -E /dev/null "$file" being one way to pass arbitrary file names to gawk . RS='^$' : slurp mode .
Shell builtins:
With ksh93 , bash or zsh (with shells other than zsh , assuming the content doesn't contain NUL bytes):
content=$(cat < "$file" && echo .) && content=${content%.} && printf %s "${content:0:1234}" > "$file"
With zsh :
read -k1234 -u0 s < $file && printf %s $s > $file
Or:
zmodload zsh/mapfile
mapfile[$file]=${mapfile[$file][1,1234]}
With ksh93 or bash (beware it's bogus for multi-byte characters in several versions of bash ):
IFS= read -rN1234 s < "$file" && printf %s "$s" > "$file"
ksh93 can also truncate the file in place instead of rewriting it, with its <>; redirection operator:
IFS= read -rN1234 0<>; "$file"
iconv + head:
To print the first 1234 characters, another option could be to convert to an encoding with a fixed number of bytes per character like UTF32BE / UCS-4 :
iconv -t UCS-4 < "$file" | head -c "$((1234 * 4))" | iconv -f UCS-4
head -c is not standard, but fairly common. A standard equivalent would be dd bs=1 count="$((1234 * 4))" , but that would be less efficient, as it would read the input and write the output one byte at a time¹. iconv is a standard command, but the encoding names are not standardized, so you might find systems without UCS-4 .
Notes:
In any case, though the output would have at most 1234 characters, it may end up not being valid text, as it would possibly end in a non-delimited line. Also note that while those solutions wouldn't cut text in the middle of a character, they could break it in the middle of a grapheme , like a é expressed as U+0065 U+0301 (a e followed by a combining acute accent), or Hangul syllable graphemes in their decomposed forms. ¹ and on pipe input you can't use bs values other than 1 reliably unless you use the iflag=fullblock GNU extension, as dd could do short reads if it reads the pipe quicker than iconv fills it
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/459521", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/60577/" ] }
459,525
I have file.txt :
A2
RP FAULT
A2
RP FAULT
A2
CELL
A2
CELL
How can I grep just the two words A2 and RP FAULT together? The result should be:
A2
RP FAULT
A2
RP FAULT
What I tried:
cat file.txt | grep -E "A2|RP FAULT"
but the result looks like this:
A2
RP FAULT
A2
RP FAULT
A2
A2
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/459525", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/293880/" ] }
459,555
I'm currently using this one-liner to get the latest release version of docker-compose . curl --silent "https://api.github.com/repos/docker/compose/releases/latest" | grep "tag_name" | sed -E 's/.*"([^"]+)".*/\1/' This isn't my code. I copied & pasted it, and it worked, and I wanted to learn more. Specifically, I am very interested in the sed command. Can anyone help me understand it better? sed -E 's/.*"([^"]+)".*/\1/' Essentially I don't understand any of the string. I know the items individually ( .* = any character, zero or more times; [^"] = accept anything that isn't " ). But when it is written in that way I am unsure how it works out. Output of the command without the sed command: "tag_name": "1.22.0", Output of the command with the sed command: 1.22.0
sed -E 's/.*"([^"]+)".*/\1/'

-E : sed will use Extended Regex.
s : substitute.
/ : the separator between the pattern and the replacement that will be used.
.*"([^"]+)".* : the best way I know to explain a regex is visually; in words, it matches every line that has quoted blocks and puts the content of the last one (the second one here), without the quotes, inside group one.
/ : separator between your regexp and your replacement.
\1 : replace your original line with group number 1: 1.22.0 in this case.
/ : last separator, with no flag after it, so it will substitute only once per line.

Hope this explains it well enough. If you need to read a regex in a more visual way, you can use the site regexper, which is amazing.
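To see the breakdown in action on the sample line from the question:

$ echo '"tag_name": "1.22.0",' | sed -E 's/.*"([^"]+)".*/\1/'
1.22.0

The greedy leading .* swallows as much as it can while still letting "([^"]+)" match, which is why the version number, not tag_name, ends up in group 1.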
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/459555", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/303037/" ] }
459,571
I'm wondering what the difference is between pip, the Python package installer, and yum? As a means of providing some context to my question: I assume the first answer will be that pip is only for installing Python packages while yum installs packages from different types of vendors. But let's focus on the installation of Python packages using both tools as a means of identifying the difference between them: I had an issue at work (using CentOS 6) where Django was a missing dependency for an rpm I was trying to install. I thought the correct fix was to 'pip install Django'. However, when I did this and re-tried the 'rpm -i', the Django dependency was still reported as missing. A far more experienced colleague told me what I was doing was wrong and that I needed to uninstall this and only ever install using yum. From experience I know he is not one to elaborate on things, and after a bit of googling I am still in the dark. I can't get my head around why there is a difference: if both install the same package, how come one works as a means of satisfying the required dependency and the other (pip) does not?
Extending on the excellent @dr01 answer about yum vs pip:

With yum, normally all the official packages installed by the distribution are updated in one single operation. Also, the system will do a better job of pulling in dependencies that do not conflict with packages already installed, and of using packages that have been tested by the distribution maintainers.

Using pip, especially if you are not so experienced in what you are doing, it is easier to shoot yourself in the foot and end up with things misconfigured (or not configured at all), and/or with dependencies wrong. In addition, depending on your configuration, there might be different pip commands that map to different versions of python you might have installed.

When doing security updates, you will also have to remember to separately update things installed with pip, and that brings unwanted complexity to system administration procedures.

Summing it up: unless you need a special version of a python library, and/or you cannot find it in the distribution repositories, using yum instead of pip is good advice.
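To make the Django case from the question concrete: rpm dependency resolution consults only the RPM database, and pip never touches that database. The package name below is illustrative; it depends on which repos (e.g. EPEL) you have enabled:

# yum install python-django   # package name may differ on your system
# rpm -q python-django        # now registered, so 'rpm -i' deps can be met

whereas after a pip install, the code sits in site-packages but RPM knows nothing about it:

# pip install Django
# rpm -q python-django
package python-django is not installed

which is exactly why the rpm -i in the question kept reporting Django as missing.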
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/459571", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/252422/" ] }
459,582
I have an application that outputs a set of log files to a central directory like this:

/tmp/experiment/log/
├── node01.log
├── node02.log
├── node03.log
├── node04.log
├── node05.log
├── node06.log

Inside each file, different measures are taken during the lifetime of each log's process, so the lines look like this:

prop1=5, ts=X, node01
prop2=3, ts=X, node01
prop1=7, ts=Y, node01
...

I'm struggling to write some commands that can process all files and output the LAST reading of a given property, ideally outputting something like this:

node01, prop1=7, ts=...
node02, prop1=9, ts=...
node03, prop1=3, ts=...

Any suggestions? I started using a combination of grep, cut, sort, uniq like this:

$ grep -sirh "prop1" /tmp/experiment/log/ | \
    cut --delimiter=, --fields=1,4 | uniq | sort | \
    tail -n 14    # this example had 14 log files

but it only worked partially, as in some experiments it would end up printing multiple records of the same log and excluding some other logs. I moved on to awk with this:

$ awk -F":" '/prop1/ { print $NF $2}' /tmp/experiment/log/node*.log | \
    awk 'END { print }'

and had the problem that when I pass multiple input files, it only gives me the last line of the last log file instead of 1 output line per log file. Any suggestions on how to accomplish this?
Take a look at ENDFILE blocks (GNU awk specific). You could run something along the lines of

awk 'BEGINFILE { a = "" }
     /prop1/   { a = $NF $2 $1 }   ## Change this if necessary
     ENDFILE   { if (a != "") print FILENAME, a }' ./node*.log
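Note that BEGINFILE/ENDFILE are gawk extensions. If this ever needs to run under a plain POSIX awk (mawk, BWK awk), the same per-file "remember the last match" idea can be sketched with FNR instead:

awk 'FNR == 1 { if (a != "") print fname, a; a = "" }   # new file: flush previous
     /prop1/  { a = $NF " " $2 " " $1; fname = FILENAME }
     END      { if (a != "") print fname, a }' ./node*.log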
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/459582", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/303060/" ] }
459,610
As opposed to editing /etc/hostname, or wherever is relevant? There must be a good reason (I hope) - in general I much prefer the "old" way, where everything was a text file. I'm not trying to be contentious - I'd really like to know, and to decide for myself if it's a good reason. Thanks.
Background

hostnamectl is part of systemd, and provides a proper API for dealing with setting a server's hostnames in a standardized way.

$ rpm -qf $(type -P hostnamectl)
systemd-219-57.el7.x86_64

Previously, each distro that did not use systemd had its own methods for doing this, which made for a lot of unnecessary complexity.

DESCRIPTION

    hostnamectl may be used to query and change the system hostname and related settings.

    This tool distinguishes three different hostnames: the high-level "pretty" hostname which might include all kinds of special characters (e.g. "Lennart's Laptop"), the static hostname which is used to initialize the kernel hostname at boot (e.g. "lennarts-laptop"), and the transient hostname which is a default received from network configuration. If a static hostname is set, and is valid (something other than localhost), then the transient hostname is not used.

    Note that the pretty hostname has little restrictions on the characters used, while the static and transient hostnames are limited to the usually accepted characters of Internet domain names.

    The static hostname is stored in /etc/hostname, see hostname(5) for more information. The pretty hostname, chassis type, and icon name are stored in /etc/machine-info, see machine-info(5).

    Use systemd-firstboot(1) to initialize the system host name for mounted (but not booted) system images.

hostnamectl also pulls a lot of disparate data together into a single location to boot:

$ hostnamectl
   Static hostname: centos7
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 1ec1e304541e429e8876ba9b8942a14a
           Boot ID: 37c39a452464482da8d261f0ee46dfa5
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-693.21.1.el7.x86_64
      Architecture: x86-64

The info here is coming from /etc/*release, uname -a, etc., including the hostname of the server.

What about the files?

Incidentally, everything is still in files; hostnamectl is merely simplifying how we have to interact with these files or know their every location. As proof of this you can use strace -s 2000 hostnamectl and see what files it's pulling from:

$ strace -s 2000 hostnamectl |& grep ^open | tail -5
open("/lib64/libattr.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
open("/proc/self/stat", O_RDONLY|O_CLOEXEC) = 3
open("/etc/machine-id", O_RDONLY|O_NOCTTY|O_CLOEXEC) = 4
open("/proc/sys/kernel/random/boot_id", O_RDONLY|O_NOCTTY|O_CLOEXEC) = 4

systemd-hostnamed.service?

To the astute observer, you should notice in the above strace that not all files are present. hostnamectl is actually interacting with a service, systemd-hostnamed.service, which in fact does the "interacting" with most of the files most admins would be familiar with, such as /etc/hostname. Therefore, when you run hostnamectl you're getting details from the service. This is an on-demand service, so you won't see it running all the time; it only runs when hostnamectl does. You can see it if you run a watch command and then run hostnamectl multiple times:

$ watch "ps -eaf | grep [h]ostname"
root 3162 1 0 10:35 ? 00:00:00 /usr/lib/systemd/systemd-hostnamed

The source for it is here: https://github.com/systemd/systemd/blob/master/src/hostname/hostnamed.c and if you look through it, you'll see the references to /etc/hostname etc.

References

systemd/src/hostname/hostnamectl.c
systemd/src/hostname/hostnamed.c
hostnamectl
systemd-hostnamed.service
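It works in the write direction too, and you can confirm that the familiar file is still what gets updated. For example (the hostname below is just a placeholder):

$ sudo hostnamectl set-hostname web01.example.com
$ cat /etc/hostname
web01.example.com

The set-hostname request goes over D-Bus to systemd-hostnamed, which is the process that actually writes /etc/hostname.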
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/459610", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46626/" ] }
459,692
I am writing a bash script to use rsync and update files on about 20 different servers. I have the rsync part figured out. What I'm having trouble with is going through a list of variables. My script thus far looks like this:

#!/bin/bash
SERVER1="192.xxx.xxx.2"
SERVER2="192.xxx.xxx.3"
SERVER3="192.xxx.xxx.4"
SERVER4="192.xxx.xxx.5"
SERVER5="192.xxx.xxx.6"
SERVER6="192.xxx.xxx.7"

for ((i=1; i<7; i++))
do
    echo [Server IP Address]
done

Where [Server IP Address] should be the value of the associated variable. So when i = 1 I should echo the value of $SERVER1. I've tried several iterations of this, including:

echo "$SERVER$i"    # printed the value of i
echo "SERVER$i"     # printed "SERVER" plus the value of i, ex: SERVER1 where i = 1
echo $("SERVER$i")  # produced an error: SERVER1: command not found where i = 1
echo $$SERVER$i     # printed a four digit number followed by "SERVER" plus the value of i
echo \$$SERVER$i    # printed "$" plus the value of i

It has been a long time since I scripted, so I know I am missing something. Plus I'm sure I'm mixing in what I could do using C#, which I've used for the past 11 years. Is what I'm trying to do even possible? Or should I be putting these values in an array and looping through the array? I need to do this same thing for production IP addresses as well as location names. This is all in an effort to not have to repeat a block of code I will be using to sync the files on the remote servers.
Use an array.

#! /bin/bash

servers=(
    192.xxx.xxx.2
    192.xxx.xxx.3
    192.xxx.xxx.4
    192.xxx.xxx.5
    192.xxx.xxx.6
    192.xxx.xxx.7
)

for server in "${servers[@]}" ; do
    echo "$server"
done
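Since the end goal here is rsync, the loop body carries straight over. A sketch, where the user, the paths and the rsync options are placeholders to adapt to your setup:

for server in "${servers[@]}" ; do
    rsync -az /local/files/ "deploy@${server}:/remote/files/"
done

If you also need a location name per server, keep a second array of the same length and index both with a counted loop ( for ((i=0; i<${#servers[@]}; i++)) ) so the lists stay paired.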
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/459692", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/286397/" ] }
459,732
Consider the following folder structure:

.
├── test1
│   ├── nested1
│   ├── testfile11
│   └── testfile12
└── test2
    ├── nested1 -> /path/to/dir/test1/nested1
    └── testfile21

test2/nested1 is a symlink to the directory test1/nested1. I would expect, if it were the cwd, .. would resolve to test2. However, I have noticed this inconsistency:

$ cd test2/nested1/
$ ls ..
nested1 testfile11 testfile12
$ cd ..
$ ls
nested1 testfile21

touch also behaves like ls, creating a file in test1. Why does .. as an argument to cd refer to the parent of the symlink, while to (all?) others it refers to the parent of the linked dir? Is there some simple way to force it to refer to paths relative to the symlink? I.e. the "opposite" of readlink?

# fictional command
ls $(linkpath ..)

EDIT: Using bash
The commands cd and pwd have two operational modes:

-L logical mode: symlinks are not resolved
-P physical mode: symlinks are resolved before doing the operation

The important thing to know here is that cd .. does not call the syscall chdir("..") but rather shortens the $PWD variable of the shell and then chdirs to that absolute path. If you are in physical mode, this is identical to calling chdir(".."), but when in logical mode, this is different.

The main problem here: POSIX decided to use the less safe logical mode as default.

If you call cd -P instead of just cd, then after a chdir() operation, the return value from getcwd() is put into the shell variable $PWD, and a following cd .. will get you to the directory that is physically above the current directory.

So why is the POSIX default less secure? If you crossed a symlink in POSIX default mode and do the following:

ls ../*.c
cd ..
rm *.c

you will probably remove different files than those that have been listed by the ls command before.

If you like the safer physical mode, set up the following aliases:

alias cd='cd -P'
alias pwd='pwd -P'

Since when using more than one option from -L and -P the last option wins, you still may be able to get the other behavior.

Historical background: the Bourne Shell and ksh88 got directory tracking code at the same time. The Bourne Shell got the safer physical behavior, while ksh88 at the same time got the less safe logical mode as default, plus the options -L and -P. It may be that ksh88 used the csh behavior as reference. POSIX took the ksh88 behavior without discussing whether this is a good decision.

BTW: some shells are unable to keep track of $PWD values that are longer than PATH_MAX, and drive you crazy when you chdir into a directory with an absolute path longer than PATH_MAX. dash is such a defective shell.
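Applied to the directory tree in the question, the two modes give:

$ cd /path/to/dir/test2/nested1   # entered through the symlink
$ cd .. ; pwd                     # logical (the default): back under test2
/path/to/dir/test2
$ cd /path/to/dir/test2/nested1
$ cd -P .. ; pwd                  # physical: parent of the real directory
/path/to/dir/test1

This also explains the observed inconsistency: ls .. and touch ../file are resolved by the kernel (always physical), while cd .. is handled by the shell manipulating $PWD (logical by default).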
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/459732", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26838/" ] }
459,734
The quick version: I want to point my '/volume1/Plex/Library/Application Support/Plex Media Server/Logs/' directory to a black hole, so nothing gets written to it. Is there a way to do this? Will it stop Plex from keeping my hard drives spinning all day?

The long version: I'm running a Synology NAS with lots of movies etc. I was using Synology's "DS Video" app on my Roku TV, which worked great until I started ripping my blu-rays with DTS audio. Unfortunately, DS Video can't play DTS audio for whatever reason. To make matters worse, DS Video is going to be discontinued on Roku TVs as of 12/31/2018 (according to the support rep who got back to me about the audio issue). So I'm back to using Plex, which I stopped using months ago because it has a god-awful "feature" where it writes to log files on a regular basis, because stupid. See this, this, and this for just a few examples of people who don't like this "feature", with Plex basically refusing to change it.

Because Plex writes to the log files so often, my hard drives never spin down. When Plex is disabled, the disks spin down and stay nice and quiet until I need the NAS. So I'm in the same boat as other Plex users who don't want Plex spinning up the drives all day. Is there a way to point the log directory that Plex uses to a black hole, like /dev/null or something? Will this allow the drives to spin down?

Other notes / disclaimers:

I know that NAS hard disks are perfectly OK with spinning all day long. I don't care. They make noise and I want them quiet until I need them. I don't need them often, so at most I might spin them up once a day.
FWIW I'm using two Seagate 10TB IronWolf Pro drives.
I don't care about the log files or Plex's reasoning for wanting them enabled.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/459734", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/303146/" ] }
459,735
I have a bash script in crontab that runs @reboot. The script itself contains a wget command to pull and download a file from the internet. When I run my script after signing in and opening the terminal, it works and saves the files properly (html, png). But when I reboot my system, it runs but saves the files as plain text with no content.

Solved --> I used the sleep function in crontab and it works!!! New to linux and code, so thanks for all the feedback! I'm going to explore the /etc/network/if-up.d/ options as well.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/459735", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/303148/" ] }
459,805
I have the following json:

[root@mdfdevha1 ~]# echo "$Group_ID"
[ { "id" : "e27206c0-aeb6-43db-acda-c4ba43233071", "name" : "A1", "path" : "/A1", "subGroups" : [ ] },
  { "id" : "89f3bd6a-33a9-4e02-9fe3-eae660c5a6cf", "name" : "Admin_UserGroup", "path" : "/Admin_UserGroup", "subGroups" : [ ] },
  { "id" : "cdc2bce5-c3bb-4b88-bdaf-d87b8bb6c644", "name" : "Group104", "path" : "/Group104", "subGroups" : [ ] },
  { "id" : "a0d749f2-ab6c-4c27-ad55-3357eaab9527", "name" : "Group105", "path" : "/Group105", "subGroups" : [ ] },
  { "id" : "fbf99c34-d50d-408b-8d19-9713f9af3e3a", "name" : "Group106", "path" : "/Group106", "subGroups" : [ ] },
  { "id" : "ebd8336f-4017-4fb1-8035-153ae1d9ba37", "name" : "Group201", "path" : "/Group201", "subGroups" : [ ] },
  { "id" : "38f4aef7-caf0-4430-9e61-1ae7026e872f", "name" : "Group202", "path" : "/Group202", "subGroups" : [ ] },
  { "id" : "436a0f4a-8b1b-4d7d-a014-fcec3513644e", "name" : "Group203", "path" : "/Group203", "subGroups" : [ ] },
  { "id" : "41962c5f-e7e9-4748-b81f-e3f1880b78de", "name" : "Sure_Groups", "path" : "/Sure_Groups", "subGroups" : [ { "id" : "593dfe69-1ed8-4649-bde4-a277166333f8", "name" : "Test1", "path" : "/Sure_Groups/Test1", "subGroups" : [ ] } ] },
  { "id" : "6856b69b-9113-46e1-90c6-f34548625278", "name" : "UG_1", "path" : "/UG_1", "subGroups" : [ ] },
  { "id" : "6496a0fe-b41f-4f0f-9eb9-5ef749c9130a", "name" : "UG_12", "path" : "/UG_12", "subGroups" : [ ] },
  { "id" : "71a5f5ae-bf91-4cdf-ab3c-c09ca15080d6", "name" : "UG_1456", "path" : "/UG_1456", "subGroups" : [ ] },
  { "id" : "385ea518-1d40-45f7-afcd-c0488ff02e97", "name" : "UG_26", "path" : "/UG_26", "subGroups" : [ { "id" : "a4064e3a-e2e3-47bb-99b8-9f7fadb0bc20", "name" : "Test1", "path" : "/UG_26/Test1", "subGroups" : [ ] } ] },
  { "id" : "9c5efedc-b901-4dcf-bbc8-8ddeaa5d84f7", "name" : "UG_266", "path" : "/UG_266", "subGroups" : [ ] },
  { "id" : "c5eb3064-752c-4f7c-b4f1-ac59f50397dd", "name" : "Usergroup_01", "path" : "/Usergroup_01", "subGroups" : [ ] },
  { "id" : "d39dc10c-558b-433e-82b4-e01a8f1d8998", "name" : "Usergroup_02", "path" : "/Usergroup_02", "subGroups" : [ ] } ]

How do I get particular data out of this with awk or sed? I need to get the record where name = "Admin_UserGroup".

EDIT #1

Thanks to Hossein Vatani for his answer; here are the final commands:

$ /opt/keycloak/bin/kcadm.sh get groups -r T0_Realm > Group.json
$ GROUP_ID_TEMP=$(grep -B1 -A0 '"name" : "Admin_UserGroup"' Group.json)
$ GROUP_ID=$(echo $GROUP_ID_TEMP | cut -d : -f2 | awk -F\" '{print $2}')
Using jq:

$ printf '%s\n' "$Group_ID" | jq '.[] | select(.name == "Admin_UserGroup")'
{
  "id": "89f3bd6a-33a9-4e02-9fe3-eae660c5a6cf",
  "name": "Admin_UserGroup",
  "path": "/Admin_UserGroup",
  "subGroups": []
}

This selects all objects in the array whose name key corresponds to a value of Admin_UserGroup.
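Since the EDIT shows the end goal is the bare id, jq can also produce that directly and replace the whole grep/cut/awk pipeline; -r outputs the raw string without quotes:

$ printf '%s\n' "$Group_ID" | jq -r '.[] | select(.name == "Admin_UserGroup") | .id'
89f3bd6a-33a9-4e02-9fe3-eae660c5a6cf

so the final assignment becomes:

GROUP_ID=$(printf '%s\n' "$Group_ID" | jq -r '.[] | select(.name == "Admin_UserGroup") | .id')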
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/459805", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/133012/" ] }
459,876
I have a new GNU/Linux Debian 9 server installation. This is what I get from ethtool:

root@web-server:~# ethtool enp2s0
Settings for enp2s0:
    Supported ports: [ TP MII ]
    Supported link modes:   10baseT/Half 10baseT/Full
                            100baseT/Half 100baseT/Full
                            1000baseT/Half 1000baseT/Full
    Supported pause frame use: No
    Supports auto-negotiation: Yes
    Advertised link modes:  10baseT/Half 10baseT/Full
                            100baseT/Half 100baseT/Full
                            1000baseT/Half 1000baseT/Full
    Advertised pause frame use: Symmetric Receive-only
    Advertised auto-negotiation: Yes
    Link partner advertised link modes:  10baseT/Half 10baseT/Full
                                         100baseT/Half 100baseT/Full
                                         1000baseT/Full
    Link partner advertised pause frame use: Symmetric Receive-only
    Link partner advertised auto-negotiation: Yes
    Speed: 1000Mb/s
    Duplex: Full
    Port: MII
    PHYAD: 0
    Transceiver: internal
    Auto-negotiation: on
    Supports Wake-on: pumbg
    Wake-on: g
    Current message level: 0x00000033 (51)
                           drv probe ifdown ifup
    Link detected: yes

So, you see the Magic Packet wake is on (Wake-on: g). I am waking this computer from a powered-off state like this:

./wolcmd 00********** 192.168.0.104 255.255.255.0 7   # I've hidden the MAC address here

from Cygwin on Windows 10 using Depicus Wake On Lan Command Line.

What I do not understand is why I need to specify the IP address and mask or port number. Why is the MAC address not enough? Could anyone elaborate...
A WoL magic packet can be sent either to UDP port 0, 7, or 9 (depending on the hardware in use) or as a raw Ethernet packet of type 0x0842. wolcmd has elected to use the former method, defaulting to port 7. Note that wolcmd does support UDP broadcast, meaning that you can specify 255.255.255.255 as the address and mask if your hardware and network support IP broadcasts. The magic packet will only be interpreted by the machine whose MAC address it contains; all others will ignore it.
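In other words, the MAC alone is enough for the target NIC to recognize the packet, but the packet still has to be delivered somewhere, and with the UDP method that means an IP destination (and port) for the sender to aim at; that is all wolcmd's extra arguments are for. The payload itself is just 6 bytes of 0xFF followed by the MAC repeated 16 times, which you can build by hand to convince yourself (placeholder MAC, and assuming xxd is available):

mac='001122334455'                        # hypothetical MAC, colons stripped
packet='ffffffffffff'                     # 6 bytes of 0xFF
for i in {1..16}; do packet+=$mac; done   # MAC repeated 16 times
echo "$packet" | xxd -r -p > magic.bin
wc -c magic.bin                           # -> 102 magic.bin

Any tool that can deliver those 102 bytes to the NIC (via UDP broadcast, or a raw 0x0842 frame) will wake the machine.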
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/459876", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126755/" ] }
459,882
I have two directories that both have a couple thousand files each, and I am trying to grep certain IPs from the files. My grep string is:

grep "IP" cdr/173/07/cdr_2018_07*

This grep string returns "grep: Argument list too long". However, when I do the following:

grep "IP" cdr/173/06/cdr_2018_06*

it returns what I am looking for. Below is the ls -l for the parent directory of each of these. It seems that the difference is only about 45 KB, so I'm not sure that size is really the issue here. Am I missing something?

jeblin@debian:~$ ls -l cdr/173
total 18500
REDACTED
drwxr-xr-x 2 jeblin jeblin 2781184 Jul  2 09:34 06
drwxr-xr-x 2 jeblin jeblin 2826240 Aug  1 07:33 07

If it makes a difference, I wrote a Python script that automates this process (searching for multiple IPs), and it works for 06 but not for 07 as well, which is why I tried to do the manual grep search first.
The shell is not able to call grep with that many files, or rather, the length of the command line¹ for calling an external utility has a limit, and you're hitting it when the shell tries to call grep with the expanded cdr/173/07/cdr_2018_07* globbing pattern.

What you can do is either to grep each file individually, with

for pathname in cdr/173/07/cdr_2018_07*; do
    grep "IP" "$pathname" /dev/null
done

where the extra /dev/null will force grep to always report the filename of the file that matched, or you can use find:

find cdr/173/07 -maxdepth 1 -type f -name 'cdr_2018_07*' \
    -exec grep "IP" /dev/null {} +

which will be more efficient, as grep will be called with as many matching pathnames as possible in batches.

It could also be that if you first cd into cdr/173/07 and do

grep "IP" cdr_2018_07*

it may work, since the generated list of filenames would be shorter due to not containing the directory bits, but you're probably very close to the limit with 44.7k files and should seriously consider moving to another way of doing this, especially if you're expecting the number of files to fluctuate around that number.

Related:

Understanding the -exec option of `find`
What defines the maximum size for a command single argument?
(tangentially related) How to cause "Argument list too long" error?
Other questions on U&L relating to "Argument list too long"

¹ The limit is on the combined length of the command line and the length of the environment (the sum of the length of each argument and environment variable's name and value, also accounting for the pointers to them), and it is a limit imposed by the execve() system call which is used by the shell to execute external commands. Built-in commands such as echo etc. do not have this issue.
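You can also inspect the limit directly, and a printf | xargs pipeline is another common workaround: printf is a shell builtin, so it is not subject to the execve() limit, and xargs then splits the list into as many grep invocations as needed:

$ getconf ARG_MAX    # the raw limit in bytes; 2097152 is a common Linux value
2097152
$ printf '%s\0' cdr/173/07/cdr_2018_07* | xargs -0 grep "IP" /dev/null

(xargs -0 is a widespread extension matching the NUL-delimited list printf emits, and /dev/null again forces grep to print filenames.)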
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/459882", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/303260/" ] }