Columns: source_id (int64, 1 to 74.7M), question (string, lengths 0 to 40.2k), response (string, lengths 0 to 111k), metadata (dict).
369,597
I'm trying to send information via pyusb from host a to host b, the issue is that I can't get my USB crossover cable to show up on either end via lsusb, or even doubly connected to the same host. I don't know how to address/identify the port I've connected to to even send information over. I know its possible to do this, but not how to actually set it up. I would like to make one of the hosts a device, or a slave to the other, as if it was a real USB device, I only care about two way communication in so far as it is needed to set up the device -> host communication I imagine I'm going to have to create my own device description, but that doesn't explain any part of the process to actually get a linux based system to identify it. I guess what I'm looking for is a way to address the hosts on either end or the connection itself, something for me to identify and then use with pyusb so I can actually send information over, and then let me use one as an actual usb device. EDIT: Looking around more it seems like I need to use g_serial some how on the host I intend to make act like a device. That should include the proper drivers and I should be able to hook up both sides that way, however this still seems to require a usb device port, and obviously I'm not using an embedded system on either end, so I don't have access to a device port or both. I'm open for some sort of host hardware converter to device, but I need to make sure bandwidth is not sacrificed. Bandwidth is also the reason I'm not using Ethernet. I'm willing to forgo all that though if I'm able to send information straight over the port some how. Clearly this is also possible because there exists special software with other cables that allow file transfer (and windows recognizes my cable when connected to linux). I need the ability to do this as well. EDIT: dmesg output is too large, but here is something interesting: usb usb4-port1: Cannot enable. Maybe the USB cable is bad? this gets displayed for thousands of lines. also get this from windows side (not actually what I'm trying to do)
Apache failed to start, with an error saying:

(13)Permission denied: AH00091: httpd: could not open error log file /var/www/mytestdeployment/error.log.
AH00015: Unable to open logs

Since SELinux was in enforcing mode, it prevented Apache from writing to the non-standard log directory. In order to keep Dan Walsh from weeping and CodeMed productive, we can apply the httpd_log_t policy to that directory:

semanage fcontext -a -t httpd_log_t "/var/www/mytestdeployment(/.*)?"
restorecon -Rv /var/www/mytestdeployment

and confirm with:

ls -lZ /var/www/mytestdeployment

If you don't have the semanage utility, you can install it with:

yum install policycoreutils-python
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/369597", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/232910/" ] }
369,598
I have a database file called prod_database.sql and I wanted to search all Urls in that file except hyperlinks* and http://www.example.com and wanted to store the results in a file. Hyperlinks pattern* : <a href="http://www.hyperlink.com"></a> Suppose i have a file prod_database.sqlwhich have below content <html> <head> <script src="http://www.script.com/javascript1.js"> <link href="http://www.css.com/style.css"> </head> <body> Hello Anwar<br/> <a href="http://www.anchortag.com">Google</a><br/> <iframe src="http://www.iframe.com"></iframe> </body></html> So I have to search all URLs which not the part of anchor tags(hyperlinks) in above file I should get URL from <script> , <link> and iframe tag onlyexpected result: http://www.script.com/javascript1.js , http://www.css.com/style.css , http://www.iframe.com
Apache failed to start, with an error saying (13)Permission denied: AH00091: httpd: could not open error log file /var/www/mytestdeployment/error.log. AH00015: Unable to open logs Since SELinux was in enforcing mode, it prevented Apache from writing to the non-standard log directory. In order to keep Dan Walsh from weeping and CodeMed productive, we can apply the httpd_log_t policy to that directory: semanage fcontext -a -t httpd_log_t "/var/www/mytestdeployment(/.*)?"restorecon -Rv /var/www/mytestdeployment and confirm with: ls -lZ /var/www/mytestdeployment If you don't have the semanage utility, you can install it with: yum install policycoreutils-python
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/369598", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/234873/" ] }
369,603
I have successfully managed to setup PXEboot environment (based on foreman) with BIOS based systems. I could do the same with virtualbox using default BIOS subsystem. As UEFI starts to be more and more popular I would like to PXEBoot UEFI-based systems. I see that there is setting in Virtualbox to "enable EFI" I have grubx64.efi on my TFTP server and ProxyDHCP ready to send this as option But after starting such EFI-enabled VM, some strange shell appears that lists couple of BLK*: devices and that's it... How to even request the boot file from TFTP using this shell (I see no DHCP traffic)? I found some hints about edit startup.nsh but I don't have such file on none of my BLK devices
Using Virtualbox 6.1 following worked for me. On the machine settings do following: "System->Enable EFI" "Network->Advanced->Adapter Type: Paravirtualized Network (virtio-net)". With this, it by default boots into UEFI shell. Type exit , which brings up a Boot Manager menu. In that menu, chose UEFI PXEv4 from the Boot Manager and you'll see a new screen that says >>Start PXE over IPv4. Then it will boot into your grub.cfg. To make the VM automatically boot over PXE, you need to go into boot menu and change boot order. Note that to see UEFI PXEv4 , I specifically had to chose Paravirtualized Network adapter. None other worked.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/369603", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183417/" ] }
369,620
I'm trying to install mongodb v3.4 guided by MongoDB . First I create "/etc/yum.repo.d/mongodb.repo",then I paste this repo info in the file: [mongodb-org-3.4]name=MongoDB Repositorybaseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/gpgcheck=1enabled=1gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc and I try to install mongodb-org, but I get the following error from yum: Loaded plugins: fastestmirrorbase| 3.6 kB 00:00:00 extras| 3.4 kB 00:00:00 https://repo.mongodb.org/yum/redhat/7/mongodb-org/3.4/x86_64/repodata/repomd.xml: [Errno 14] HTTPS Error 403 - ForbiddenTrying other mirror.To address this issue please refer to the below knowledge base articlehttps://access.redhat.com/solutions/69319If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/ One of the configured repositories failed (MongoDB Repository), and yum doesn't have enough cached data to continue. At this point the only safe thing yum can do is fail. There are a few ways to work "fix" this: 1. Contact the upstream for the repository and get them to fix the problem. 2. Reconfigure the baseurl/etc. for the repository, to point to a working upstream. This is most often useful if you are using a newer distribution release than is supported by the repository (and the packages for the previous distribution release still work). 3. Run the command with the repository temporarily disabled yum --disablerepo=mongodb-org-3.4 ... 4. Disable the repository permanently, so yum won't use it by default. Yum will then just ignore the repository until you permanently enable it again or use --enablerepo for temporary usage: yum-config-manager --disable mongodb-org-3.4 or subscription-manager repos --disable=mongodb-org-3.4 5. Configure the failing repository to be skipped, if it is unavailable. Note that yum will try to contact the repo. when it runs most commands, so will have to try and fail each time (and thus. yum will be be much slower). If it is a very temporary problem though, this is often a nice compromise: yum-config-manager --save --setopt=mongodb-org-3.4.skip_if_unavailable=truefailure: repodata/repomd.xml from mongodb-org-3.4: [Errno 256] No more mirrors to try.https://repo.mongodb.org/yum/redhat/7/mongodb-org/3.4/x86_64/repodata/repomd.xml: [Errno 14] HTTPS Error 403 - Forbidden Is this error because of SELinux restrictions? How should I allow yum to install mongodb?
This might be too late, but after running into the same issue, I followed a combination of "Installing MongoDB via yum on AWS Linux fails: HTTPS Error 404 - Not Found" (on Stack Overflow) with one of the responses to "Yum error while installing MongoDB on CentOS?" (not the selected one), so my steps were:

sudo rm -rf /etc/yum.repos.d/mongod*
sudo yum clean all

again create repo file:

sudo vi /etc/yum.repos.d/mongodb-org-3.4.repo

paste the following (notice, I'm replacing '$releasever' with 7, for my system):

[mongodb-org-3.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/7/mongodb-org/3.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc

then I ran this, and it was successful:

sudo yum install -y mongodb-org
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/369620", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/194145/" ] }
369,646
I'm doing my first mail server config on Ubuntu 16.04. In all tutorials and How-To's there's the mail subdomain as in mail.example.com . I'm wondering if this is some formal requirement or just an example of a possible solution that is required by no standards. I'm trying to do it with these DNS records: MX main.dom main.dom 1 14400CNAME www.main.dom main.dom 43200A main.dom XXX.XXX.XXX 3600 I'm not sure how I can test it. Nor can I predict the consequences for the lack of experience. I can tell the server itself is responsive to telnet on port 25, giving this: $ telnet main.dom 25Trying XXX.XXX.XXX.XXX...Connected to main.dom.Escape character is '^]'.220 server1.main.dom ESMTP Postfix (Ubuntu) main.dom is not the real address, just a structural representation. When called on localhost XXX.XXX.XXX.XXX is 127.0.0.1 , but the FQDN stays the same (3 parts). Answers to this are hard to find on the net. And assuming I use the mail subdomain, MX main.dom mail.main.dom 1 14400 do I also need to create a corresponding CNAME?
Most domains of any meaningful size have a machine dedicated exclusively to mail, hence mail.example.com .

do I also need to create a corresponding CNAME?

No, you need an A record for mail.main.dom . MX records should always point to an A. It's a common mistake to point an MX record to a CNAME. With Bind syntax:

main.dom.       IN MX 10 mail.main.dom.
mail.main.dom.  IN A     1.2.3.4

Or if you want to serve everything on the same machine:

main.dom.      IN A     1.2.3.4
main.dom.      IN MX 10 main.dom.
www.main.dom.  IN CNAME main.dom.

Side notes: It's a bad idea to set MX priority to 1. If at any point you need an emergency re-route of mail you can add an MX with a higher priority, say 5. For the same reason you shouldn't set TTL for MX too high. Something like 3600 is big enough not to hammer your DNS, yet small enough to allow you to make changes in an emergency (changes should propagate in less than an hour). Priority 0 works, but there are technical reasons for not using it.
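Once the records are published, a quick sanity check (a verification step added here, not part of the original answer; it assumes dig from bind-utils/dnsutils is installed):

dig +short MX main.dom       # expect: 10 mail.main.dom.
dig +short A mail.main.dom   # expect: 1.2.3.4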
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/369646", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
369,660
I would like to replace the contents between $Elements$ and $EndElements$ in text file, f1, with the data from another file, f2. The contents of f1 is given simply by $Elements$3157$EndElements$ And the contents of f2 is given as 1 65 712 32 873 39 984 41 63 What I would like to get at the end is: $Elements$1 65 712 32 873 39 984 41 63$EndElements$ For this I tried some sed code from stackexchange pages(well I copied the code and do not have the window open anymore so I can not provide the direct link, sorry) lead='^\$Elements\$$' tail='^\$EndElements\$$'# f2 is the file where the information# to replace is kept in sed -e "/$lead/,/$tail/{ /$lead/{p; r insert_file > }; /$tail/p; d }" f2 which does not work, basically doing nothing.
Most domains of any meaningful size have a machine dedicated exclusively to mail, hence mail.example.com . do I also need to create a corresponding CNAME? No, you need an A record for mail.main.dom . MX records should always point to an A. It's a common mistake to point an MX record to a CNAME. With Bind syntax: main.dom. IN MX 10 mail.main.dom.mail.main.dom. IN A 1.2.3.4 Or if you want to serve everything on the same machine: main.dom. IN A 1.2.3.4main.dom. IN MX 10 main.dom.www.main.dom. IN CNAME main.dom. Side notes: It's a bad idea to set MX priority to 1. If at any point you need an emergency re-route of mail you can add an MX with a higher priority, say 5. For the same reason you shouldn't set TTL for MX too high. Something like 3600 is big enough not to hammer your DNS, yet small enough to allow you to make changes in an emergency (changes should propagate in less than an hour). Priority 0 works, but there are technical reasons for not using it.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/369660", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/234921/" ] }
369,771
my command looks something like this find $PATH -name '$FILE.log' > /tmp/file-list.txt and I keep getting an error message that says find: illegal option -- n What am I doing wrong here?
From that error message, I'd bet that despite your "linux" tag, you're probably not on a Linux-based system as find implementations typically found on Linux-based systems don't use that wording for the errors about unknown options the $PATH variable is unset (which indicates the shell you run that in is probably the Bourne or Korn shell (AT&T implementation)). That illegal option is the wording typically found on traditional Unix find implementations. Those traditional implementations also require at least one file argument before you can use a predicate like -name . Otherwise, -name would be taken as options (as opposed to predicates) if it was the next argument after find . And -n (the first option in -name which is short for -n -a -m -e ) is not a valid find option. So most likely, the expansion of $PATH results in no argument at all. That would happen in cases where: $PATH is unset $PATH is set to the empty string $PATH contains only space, tab or newline characters (the default value of $IFS ). Since $PATH is the special variable containing a colon-separated list of directories to look up executables including that find command, we can rule out 3 and most probably 2 (unless there's a find command in the current directory), or otherwise you'd get a find: command not found error. When $PATH is unset ( 1 above), in execvp() (as typically used by env or find 's -exec predicate for instance) and in some shells (including the Korn and Bourne shell typically found on those traditional OSes), find would be found through a search in a default search path (shells like bash don't do that but set (though not export) $PATH to a default value when it was unset on start-up). Here, I think you want to: use a different variable name than $PATH to store the directory name (like $dir ) be sure to quote your variables use ${dir:-.} if you want to default to the current directory if $dir is empty or unset. (maybe also investigate why the $PATH variable is unset which is highly unusual) So: dir=some-dirfind "${dir:-.}" -name "$FILE.log" Note that it assumes that $dir doesn't start with - and is not otherwise a find operator (like ! , ( , ) ) and that $FILE doesn't contain wildcards ( * , ? , [...] ) (note the use of double quotes instead of single quotes around $FILE.log as I assume you want to find files named something.log where something is the content of $FILE as opposed to files named $FILE.log literally).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/369771", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/235008/" ] }
369,818
I found many string manipulation tutorials, but can not figure out how to apply them to my particular situation. I need to insert (not substitute) a string variable word into a text variable text using either method (can not depend on line numbering, variable manipulation preferred over read/write to a file): Before the matched string, or At a specific index (byte position) text="mytextMATCHmytext"word="WORD"match="MATCH"# method1 - not working, because text is not a filesed '/$word/ i $match' text# method2indx="${text%%$match*}"indx=${indx%:*} # leave only the byte index where match startstext="$text{0-$index-1}$word$text{$index-end}"# expected value of text:"mytextWORDMATCHmytext" Please, help to figure out the syntax. It would be nice to fix both methods. Any other ways of doing it? The text contains >1MB of text, so, the efficient way is preferred.
To insert the text j into the variable text at position p (counting from zero):

p=5
text="$(seq 10)"   ## arbitrary text
text="${text:0:p}j${text:p}"

To insert the text j before the matching portion in $match :

text="${text%%${match}*}j${match}${text##*${match}}"

This pulls off the leading portion of $text until it finds $match , then adds the j , then the $match , then the trailing portion of $text until it finds $match . Hopefully there's only one match of $match in $text !
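A quick check of the second form against the strings from the question (an illustration added here, not part of the original answer):

text="mytextMATCHmytext"
word="WORD"
match="MATCH"
text="${text%%${match}*}${word}${match}${text##*${match}}"
printf '%s\n' "$text"    # prints: mytextWORDMATCHmytext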
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/369818", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/216417/" ] }
369,832
On my reelbox (running Ubuntu) I have found a file in /etc/init that contains the following # frontpanel-pre - check for frontpanel CAPs and adjust time#description "check frontpanel caps"#start on starting mountall#start on tty-device-added DEVNAME=/dev/ttyS0taskscript ( /sbin/dev_frontpanel.sh /sbin/reelfpctl -capability ) > /dev/.frontpanel.caps initctl emit --no-wait frontpanel-linkedend script I wonder if the dot in /dev/.frontpanel has some special meaning in linux I thought the output of the commands in the brackets will be written to a file called " .frontpanel.caps " in /dev/ but there is no such file.In /dev/ there is a frontpanel which is a link to /dev/ttyS0 Could it be, that e.g. echo something > /dev/.frontpanel.caps actually sends data (something in this case) to /dev/frontpanel ? What does .caps do then?
To insert the text j into the variable text at position p (counting from zero): p=5text="$(seq 10)" ## arbitrary texttext="${text:0:p}j${text:p}" To insert the text j before the matching portion in $match : text="${text%%${match}*}j${match}${text##*${match}}" This pulls off the leading portion of $text until it finds $match , then adds the j , then the $match , then the trailing portion of $text until it finds $match . Hopefully there's only one match of $match in $text !
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/369832", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/232213/" ] }
369,835
I know that PID 1 is init. Now I would like to knom, can I replace init process ID to another one and assign to PID 1 a new process. If yes how can I do that?
The first process that is started at boot time receives PID 1. The first process that is started at boot time has a job: it has to start all other processes, directly or indirectly. All processes¹ are ultimately descendants of this, since apart from the kernel running a program at boot time the only way a process gets created is that some process executed a system call to create a new process. The process whose PID is 1 has a job: if a process dies while it has running child processes, the children's parent process ID is set to 1. When the children die, PID 1 should reap them, i.e. call the wait system call, otherwise a zombie of the child process stays behind. The various programs called init (there are multiple implementations) perform both of these jobs. The Linux kernel has a command line argument to change which executable is executed as the first process² . It can be used to run any executable, but if that executable doesn't perform the jobs of init, the system isn't going to run normally. This feature is mostly used to enter a system repair mode, e.g. only running a shell on the console and nothing else. Once the system has started normally, it is not possible to replace PID 1 because init doesn't die. Not only does init not die, because it's programmed to run forever (init is supposed to keep running until the system shuts down), but it even gets a special protection from signals that would kill other processes, such as SIGKILL. Linux has a PID namespace feature that allows defining a subsystem with its own set of process IDs. The processes in a PID namespace have different PIDs when viewed from inside the namespace and from outside the namespace. The first process in the namespace gets PID 1 in the namespace. Outside the namespace it won't have PID 1 (unless init chose to enter a new PID namespace, but init doesn't do that because that would prevent it from doing its job). ¹ This isn't completely true, some kernels have other ways to launch a process. For example Linux launches modprobe when some hardware is discovered under certain circumstances. But descendants of init account for a vast majority of processes. ² First after the initramfs or initrd .
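As an illustration of the PID-namespace point above (an example added here, assuming a Linux system where util-linux's unshare is available and you have root):

sudo unshare --pid --fork --mount-proc sh -c 'echo "PID inside the namespace: $$"; ps -ef'
# the shell reports PID 1 inside the new namespace, while from the host it is
# still visible under an ordinary (larger) PID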
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/369835", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/235046/" ] }
369,842
I am using fetchmail and procmail to forward email to a gmail account. I am using Mac Terminal. Here is my .fetchmailrc: set no bouncemailpoll outlook.office365.com with protocol imapport 993auth passworduser [email protected] password passwordsslsslfingerprint "<Correct Fingerprint - not sure if I should copy this here>"sslcertpath /Users/myuser/.certskeepno rewritemda "/usr/local/bin/procmail -f %F -d %T"; and here is my .procmailrc file: VERBOSE=yes:0! [email protected] When I run fetchmail -vv everything seems to work fine, it finds the one unread email in the email account I am fetching from. And the last thing in the output under procmail is: procmail: Executing "/usr/sbin/sendmail,-oi,[email protected]" No apparent errors are listed. However, nothing is showing up in my gmail account?
The first process that is started at boot time receives PID 1. The first process that is started at boot time has a job: it has to start all other processes, directly or indirectly. All processes¹ are ultimately descendants of this, since apart from the kernel running a program at boot time the only way a process gets created is that some process executed a system call to create a new process. The process whose PID is 1 has a job: if a process dies while it has running child processes, the children's parent process ID is set to 1. When the children die, PID 1 should reap them, i.e. call the wait system call, otherwise a zombie of the child process stays behind. The various programs called init (there are multiple implementations) perform both of these jobs. The Linux kernel has a command line argument to change which executable is executed as the first process² . It can be used to run any executable, but if that executable doesn't perform the jobs of init, the system isn't going to run normally. This feature is mostly used to enter a system repair mode, e.g. only running a shell on the console and nothing else. Once the system has started normally, it is not possible to replace PID 1 because init doesn't die. Not only does init not die, because it's programmed to run forever (init is supposed to keep running until the system shuts down), but it even gets a special protection from signals that would kill other processes, such as SIGKILL. Linux has a PID namespace feature that allows defining a subsystem with its own set of process IDs. The processes in a PID namespace have different PIDs when viewed from inside the namespace and from outside the namespace. The first process in the namespace gets PID 1 in the namespace. Outside the namespace it won't have PID 1 (unless init chose to enter a new PID namespace, but init doesn't do that because that would prevent it from doing its job). ¹ This isn't completely true, some kernels have other ways to launch a process. For example Linux launches modprobe when some hardware is discovered under certain circumstances. But descendants of init account for a vast majority of processes. ² First after the initramfs or initrd .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/369842", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/235051/" ] }
369,847
In zsh my current (left) prompt is the following PROMPT="%F{106}%22<…<%3~%f%(?..%{$fg[red]%} %?%{$reset_color%})%(1j.%{$fg[cyan]%} %j%{$reset_color%}.)%# " The %22<…< specifies that the prompt will become truncated when it is longer than 22 characters. This is because I don't want the prompt to take up too much horizontal space. However, when I have a terminal that is very wide, I could have spent more than 22 characters on the prompt and I would still have had plenty of horizontal space to spare. So, I'd like to format the prompt so that it has a maximum width that is a percentage of the full terminal width (e.g. 20%). Is there a way to do that? Ideally, the prompt width should be recalculated if the terminal changes width. (In case it matters: the shell will normally be inside tmux on MacOS)
This is what I eventually ended up writing.

# this variable can be changed later to change the fraction of the line
export PROMPT_PERCENT_OF_LINE=20

# make a function, so that it can be evaluated repeatedly
function myPromptWidth() {
  echo $(( ${COLUMNS:-80} * PROMPT_PERCENT_OF_LINE / 100 ))
}

# for some reason you can't put a function right in PROMPT, so make an
# intermediary variable
width_part='$(myPromptWidth)'

# use ${} to evaluate the variable containing function
PROMPT="%F{106}%${width_part}<…<%3~%f%(?..%{$fg[red]%} %?%{$reset_color%})%(1j.%{$fg[cyan]%} %j%{$reset_color%}.)%# "

This will recalculate the width immediately on terminal resize.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/369847", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55240/" ] }
369,883
Objective : Run a program as root (C++ binary).The same as : SetUID bit not working in Ubuntu? And : Why setuid does not work on executable? ./a.out output: E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?psurana //output for "whoami" Look below for the code. ls -l output: -rwsrwxr-x 1 root root 46136 Jun 7 20:13 a.out The Code : #include <string>#include <stdlib.h>int main(int argc, char *argv[]){ std::string input = "apt-get install " + std::string(argv[1]); system(input.c_str()); system("whoami"); return 0;} Details: : compiled the program and then did chown root:root a.out && chmod u+s a.out . Please look above for ls -l output. I still do not get the root privileges and the output for system("whoami") in the code is my own username on the machine. Reading the two linked questions did not yield me anywhere. :(.both the creator and the owner of the file are root. The setuid bit is set, so it should work. The filesystem is not external either, it is my own machine. How can I make this work?
If you change the code like this you can see the effective and real UIDs: #include <string>#include <stdlib.h>int main(int argc, char *argv[]){ system("id"); system("bash -c id"); return 0;} On my system this returns these two lines (I've used ... to skip irrelevant groups): uid=1001(roaima) gid=1001(roaima) euid=0(root) groups=1001(roaima),24(cdrom),...,103(vboxsf)uid=1001(roaima) gid=1001(roaima) groups=1001(roaima),24(cdrom),...,103(vboxsf) As you can see, the raw call to id returns an Effective UID of 0 (root), but the Real UID is still my own. This is what you would expect. However, you can see that the bash -c id call has stripped the Effective UID away so it is no longer running as root. This is documented under man bash as follows: If the shell is started with the effective user (group) id not equal to the real user (group) id, and the -p option is not supplied, no startup files are read, shell functions are not inherited from the environment, the SHELLOPTS , BASHOPTS , CDPATH , and GLOBIGNORE variables, if they appear in the environment, are ignored, and the effective user id is set to the real user id. If the -p option is supplied at invocation, the startup behavior is the same, but the effective user id is not reset. So the solution here should be to include the -p flag. (You can find out about the process by which bash resets its UID at Setuid bit seems to have no effect on bash .) However, the story's not finished here because I know you're going to say you didn't invoke bash . Unfortunately for you, that's pretty much what system() does on your behalf, and it doesn't allow you to specify -p . strace discards the root privileges, but here's enough of the strace -f ./a.out output for you to see what's going on: execve("./a.out", ["./a.out"], [/* 44 vars */]) = 0brk(0) = 0x24f1000...clone(child_stack=0, flags=CLONE_PARENT_SETTID|SIGCHLD, parent_tidptr=0x7ffee0d42a1c) = 4619wait4(4619, Process 4619 attached <unfinished ...> At this point the child process kicks off, ready to run our id [pid 4619] rt_sigaction(SIGINT, {SIG_DFL, [], SA_RESTORER, 0x7f100eb270e0}, NULL, 8) = 0[pid 4619] rt_sigaction(SIGQUIT, {SIG_DFL, [], SA_RESTORER, 0x7f100eb270e0}, NULL, 8) = 0[pid 4619] rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0[pid 4619] execve("/bin/sh", ["sh", "-c", "id"], [/* 44 vars */]) = 0[pid 4619] brk(0) = 0x7f849dd71000[pid 4619] brk(0) = 0x7f849dd71000...[pid 4619] clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f849d1d89d0) = 4620 Now we have a shell running, and it will have discarded our Effective UID. Next you'll see it starting the id command and writing its output to stdout for you: Process 4620 attached[pid 4619] wait4(-1, <unfinished ...>[pid 4620] execve("/usr/bin/id", ["id"], [/* 44 vars */]) = 0[pid 4620] brk(0) = 0x1785000...[pid 4620] write(1, "uid=1001(roaima) gid=1001(roaim"..., 149) = 149uid=1001(roaima) gid=1001(roaima) groups=1001(roaima),24(cdrom),...,103(vboxsf)... The solution for you here will be either to use one of the exec*() family directly, or to include a call to setuid(0) , or to configure a tool such as sudo to allow you to call your target program directly and (presumably) without a password. Of these options I'd personally go with the sudo solution. The authors of that spent a long time ensuring the code was safe against (un)intended escalation of privilege attacks.
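For the sudo route recommended at the end, a minimal sketch of what such a rule could look like (the user name comes from the whoami output in the question; the file name is hypothetical, and it should be created with visudo -f rather than edited by hand; argument wildcards in sudoers are permissive and carry their own caveats):

# /etc/sudoers.d/apt-wrapper   (create with: visudo -f /etc/sudoers.d/apt-wrapper)
psurana ALL=(root) NOPASSWD: /usr/bin/apt-get install *

The program would then invoke "sudo apt-get install ..." instead of relying on the setuid bit.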
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/369883", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/235096/" ] }
369,937
I've got a Fedora 25 x86_64 stand alone workstation. Something is listening on port 111 (identified with an nmap scan): $ sudo lsof -i :111COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAMEsystemd 1 root 36u IPv4 15170 0t0 TCP *:sunrpc (LISTEN)systemd 1 root 37u IPv4 15171 0t0 UDP *:sunrpcsystemd 1 root 38u IPv6 15172 0t0 TCP *:sunrpc (LISTEN)systemd 1 root 39u IPv6 15173 0t0 UDP *:sunrpc I disabled the Sun gear on the port with the following commands: $ sudo systemctl disable rpcbind$ sudo systemctl disable sunrpcFailed to disable unit: No such file or directory After reboot the port is still open. It appears something other than Sun gear wants to listen on port 111. Or maybe systemd is not respecting my wishes to disable the unused service. Or maybe something else... How do I determine what is trying to listen on the port, and how do I disable it? From below: $ sudo systemctl -a | grep -E "rpc|port" var-lib-nfs-rpc_pipefs.mount loaded active mounted RPC Pipe File System abrtd.service loaded active running ABRT Automated Bug Reporting Tool auth-rpcgss-module.service loaded inactive dead Kernel Module supporting RPCSEC_GSS fedora-import-state.service loaded active exited Import network configuration from initramfs fedora-readonly.service loaded active exited Configure read-only root support rpc-gssd.service loaded inactive dead RPC security service for NFS client and server rpc-statd-notify.service loaded inactive dead Notify NFS peers of a restart rpc-statd.service loaded inactive dead NFS status monitor for NFSv2/3 locking.● rpc-svcgssd.service not-found inactive dead rpc-svcgssd.service rpcbind.service loaded inactive dead RPC Bind rpcbind.socket loaded active listening RPCbind Server Activation Socket rpc_pipefs.target loaded active active rpc_pipefs.target rpcbind.target loaded active active RPC Port Mapper
When you run sudo systemctl disable rpcbind on Fedora 25 I think there is a warning:

Warning: Stopping rpcbind.service, but it can still be activated by:
rpcbind.socket

So you can try the following:

sudo systemctl stop rpcbind.socket
sudo systemctl disable rpcbind.socket
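To verify afterwards that nothing is listening on port 111 any more (a check added here, not part of the original answer):

sudo systemctl is-enabled rpcbind.socket   # should report "disabled" (or "masked")
sudo lsof -i :111                          # should produce no output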
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/369937", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
369,972
I'm trying to set an RSA key as an environment variable which, as a text file, contains newline characters. Whenever I attempt to read from the file and pass it into an environment variable, it will just stop at the newline character on the first line. How can I prevent this?
Note that except in zsh, shell variables cannot store arbitrary sequences of bytes. Variables in all other shells can't contain the NUL byte. And with the yash , they can't contain bytes not forming valid characters. For files that don't contain NUL bytes, in POSIX-like shells, you can do: var=$(cat file; echo .); var=${var%.} We add a .\n and strip the trailing . to work around the fact that $(...) strips all trailing newline characters. The above would also work in zsh for files that contain NULs though in zsh you could also use the $mapfile special associative array: zmodload zsh/mapfilevar=$mapfile[file] In zsh or bash, you can also use: { IFS= read -rd '' var || :; } < file That reads up to the first NUL byte. It will return a non-zero exit status unless a NUL byte is found. We use the command group here to be able to at least tell the errors when opening the file, but we won't be able to detect read errors via the exit status. Remember to quote that variable when passed to other commands. Newline is in the default value of $IFS , so would cause the variable content to be split when left unquoted in list contexts in POSIX-like shells other than zsh (not to mention the other problems with other characters of $IFS or wildcards). So: printf %s "$var" for instance (not printf %s $var , certainly not echo $var which would add echo 's problems in addition to the split+glob ones). With non-POSIX shells: Bourne shell: The bourne shell did not support the $(...) form nor the ${var%pattern} operator, so it can be quite hard to achieve there. One approach is to use eval and quoting: eval "var='`printf \\' | cat file - | awk -v RS=\\' -v ORS= -v b='\\\\' ' NR > 1 {print RS b RS RS}; {print}; END {print RS}'`" With (t)csh , it's even worse, see there . With rc , you can use the ``(separator){...} form of command substitution with an empty separator list: var = ``(){cat file}
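Applied to the RSA-key case from the question (a sketch added here; the key path and variable name are only examples):

key=$(cat ~/.ssh/id_rsa; echo .); key=${key%.}
export RSA_KEY="$key"
printf %s "$RSA_KEY"    # newlines are preserved as long as the variable stays quoted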
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/369972", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/235184/" ] }
369,995
I have a script ( run.sh ) which calls my application script. At a given time, I may have multiple run.sh running. The general format of the run.sh script is, #!/bin/bash# Running DALTON JOB: Helixdir=$(pwd) echo "-----------------------------------------------" export DALTON_TMPDIR=/mnt/raid0/scratch export OMP_NUM_THREADS=6 source /opt/intel/compilers_and_libraries_2017.0.098/linux/bin/compilervars.sh intel64 source /opt/intel/mkl/bin/mklvars.sh intel64 echo "//-------process started----------------------------//"./application.sh -mb 14550 input.mol output.outecho "//-------process finished----------------------------//" Is it possible to get the PID of the application.sh inside the run.sh script. (I found that $$ gives the PID of the script itself.) Also, I noticed that the PID of the application is always larger numerically than parent script but maybe its coincidence.
If you want to see the PID of application.sh while it is running, then I would suggest explicitly putting it into the background, capturing the PID, then waiting for it to exit:

# ...
./application.sh -mb 14550 input.mol output.out &
app_pid=$!
echo "The application pid is: $app_pid"
wait "$app_pid"
# ...
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/369995", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/170079/" ] }
370,085
I believe there is no official way to do this but I am hoping for a best practices suggestion. Background: When Debian 9 is rolled out and testing is unfrozen (and therefore becomes v10), I assume lots of packages from unstable will move to testing. At that point I will want to start using testing again, assuming the packages I need are included. So the question I have is how can I do this elegantly?
If you want to track the testing distribution, I would strongly recommend running a mixture of testing and unstable: that will allow you to pull in updated packages from unstable if necessary ( e.g. for security fixes). To do this, ensure both testing (named as such, rather than the specific release name) and unstable are available in your configured repositories; then set up pinning, e.g. in /etc/apt/preferences :

Package: *
Pin: release a=testing
Pin-Priority: 500

Package: *
Pin: release a=unstable
Pin-Priority: 200

This will result in packages being tracked in testing if they’re available there, unstable if they’re not, or if they’re installed in a version newer than what’s available in testing. As Debian transitions from preparing Stretch to preparing Buster, and packages migrate from unstable to testing, your local installation will progressively start tracking Buster instead of unstable. This avoids needing to downgrade anything, and hopefully should result in a Buster setup in relatively short order after Stretch is released since testing and unstable haven’t yet diverged too much. (This will change very quickly after Stretch releases, so make sure you set this up before then.) This kind of setup avoids issues with packages disappearing from testing for sometimes long periods. It also makes it easy to track security uploads to unstable, using Paul Wise’s patch to debsecan . I’ve been running this on my main setup for years without issue (but then again, I’m intimately familiar with the inner workings of Debian). The annoyances Fahim mentions in his answer mostly concern new installations of packages, which can be troublesome in pure testing; in practice they’re not much of an issue on a running system. The usual caveats to running testing and/or unstable apply. You should make sure you’re familiar with the best practices . In particular, make sure you’re aware of all the changes apt-get wants to make on upgrades before letting it loose.
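Once the preferences file is in place, a quick way to confirm the pinning took effect (a verification step added here, not part of the original answer):

apt-cache policy          # each repository should list its pin priority (500 for testing, 200 for unstable)
apt-cache policy bash     # shows which candidate version wins for a single package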
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/370085", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/228679/" ] }
370,089
I am installing a program following instructions What you need for a manual installation is the subdirectory MaTiSSe-vx.x.x under the release directory (chose the version x.x.x you want). Just copy this subdirectory and make a link to the wrapper script MaTiSSe.py where your environment can find it. I don't know how to make it with command line.
If you want to track the testing distribution, I would strongly recommend running a mixture of testing and unstable: that will allow you to pull in updated packages from unstable if necessary ( e.g. for security fixes). To do this, ensure both testing (named as such, rather than the specific release name) and unstable are available in your configured repositories; then set up pinning, e.g. in /etc/apt/preferences : Package: *Pin: release a=testingPin-Priority: 500Package: *Pin: release a=unstablePin-Priority: 200 This will result in packages being tracked in testing if they’re available there, unstable if they’re not, or if they’re installed in a version newer than what’s available in testing. As Debian transitions from preparing Stretch to preparing Buster, and packages migrate from unstable to testing, your local installation will progressively start tracking Buster instead of unstable. This avoids needing to downgrade anything, and hopefully should result in a Buster setup in relatively short order after Stretch is released since testing and unstable haven’t yet diverged too much. (This will change very quickly after Stretch releases, so make sure you set this up before then.) This kind of setup avoids issues with packages disappearing from testing for sometimes long periods. It also makes it easy to track security uploads to unstable, using Paul Wise’s patch to debsecan . I’ve been running this on my main setup for years without issue (but then again, I’m intimately familiar with the inner workings of Debian). The annoyances Fahim mentions in his answer mostly concern new installations of packages, which can be troublesome in pure testing; in practice they’re not much of an issue on a running system. The usual caveats to running testing and/or unstable apply. You should make sure you’re familiar with the best practices . In particular, make sure you’re aware of all the changes apt-get wants to make on upgrades before letting it loose.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/370089", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/220963/" ] }
370,098
I desire to run a cat heredocument in a single row instead the natural syntax of 3 rows (opener, content, and delimiter). My need to do so is mostly aesthetic as the redirected content aimed aimed to be part of a handbook text file and I would like to save as much rows as I can, in that particular file). Doing cat <<< TEST > ~/myRep/tiesto tiesto TEST (what I would normally split for 3 parts) results in an errors: tiesto: No such file or direcotry TEST: No such file or directory. Is it even possible to execute one-row heredocuments in Bash?
Yes, but you'd be using a here-string rather than a here-document :

cat >"$HOME/myRep/tiesto" <<<'tiesto'

This will send the string tiesto to cat on its standard input, and it will write the string to the file $HOME/myRep/tiesto through a redirection of its standard output. Note that here-strings are not standard but are implemented by at least zsh (where it comes from, at the same time as the UNIX version of rc , though that rc and its derivatives like es or akanga don't add an extra newline character in the end), ksh93 , bash , mksh and yash .
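To see the trailing newline that bash/zsh here-strings append (an illustration added here, assuming od is available):

od -c <<<'tiesto'
# prints something like:
# 0000000   t   i   e   s   t   o  \n
# 0000007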
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/370098", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
370,261
What I mean is that, suppose we have the following directory Dropbox (folder) ---> Bob (folder) -------> 2017 (folder) ------------> images (folder) ----------------> image.png (file) I could do cd Dropbox , but then I have to manually navigate all the way to the deepest directory images . Is there a command like cd Dropbox:deepest directory that will take me to Dropbox/Bob/2017/images ? If there is a tie on any level, then stop at that level
With zsh : bydepth() REPLY=${REPLY//[^\/]}cd Dropbox/**/*(D/O+bydepth[1]) We define a bydepth sorting function that returns the file with the characters other than / removed (so the order after that transformation is on depth) and use recursive globbing ( **/ being any level of subdirectories) with glob qualifiers: D to also consider hidden dirs / for only dirs O+bydepth : reverse sort by depth [1] get the first one only (after sorting). With bash and GNU tools, the equivalent would be something like: IFS= read -rd '' deepest < <(find Dropbox/ -type d -print0 | awk -v RS='\0' -v ORS='\0' -F / ' NF > max {max = NF; deepest = $0} END {if (max) print deepest}') && cd -- "$deepest" (in case of ties, the chosen one will not necessarily be the same as in the zsh approach). With your new extra requirement, it becomes more complicated. Basically, if I understand correctly, in case of ties, it should change to the directory that is the deepest common parent of all those directories at the maximum depth. With zsh : cd_deepest() { setopt localoptions rematchpcre local REPLY dirs result dir match dirs=(${1:-.}/**/*(ND/nOne:' REPLY=${#REPLY//[^\/]}-$REPLY':)) (($#dirs)) || return result=$dirs[1] for dir ($dirs[2,-1]) { [[ $result//$dir =~ '^([^-]*-.*)/.*//\1/' ]] || break result=$match[1] } cd -- ${result#*-} && print -rD -- $PWD} Example: $ tree DropboxDropbox├── a│   └── b│   ├── 1│   │   └── x│   └── 2│   └── x└── c └── d └── e9 directories, 0 files$ cd_deepest Dropbox~/Dropbox/a/b ( Dropbox/a/b/1/x and Dropbox/a/b/2/x are the deepest ones, and we change to their deepest common parent ( Dropbox/a/b )).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/370261", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/230247/" ] }
370,283
I have a script that runs several different psql statements. I'm trying to capture the error output from psql when password entered is incorrect.The password is entered in before the check (and when correct, the psql statements execute successfully) I've tried the following: pwcheck=`psql -q -U postgres -h $ip -d $database;`echo "error message: $pwcheck" When I enter an incorrect password to check it, the error messages are output but the variable is empty. psql: FATAL: password authentication failed for user "postgres"FATAL: password authentication failed for user "postgres"error message: Ideally, I'd like to save the error message to a variable and not print my own error message/prompt and not display the psql errors at all. How can I store either of these error messages in a bash variable?
You can't, directly. At least, not without either commingling it with or discarding standard output. However, there is a way!

#!/bin/bash
errorlog=$(mktemp)
trap 'rm -f "$errorlog"' EXIT
pwcheck="$(psql -q -U postgres -h $ip -d $database 2> "$errorlog")"
if [[ 0 -ne $? ]]; then
    echo "Something went wrong; error log follows:"
    cat "$errorlog"
fi
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/370283", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/218603/" ] }
370,306
I have a JSON array returned from curl that looks like this: [ { "title": "Some Title", "tags":"tagA tag-B tagC" }, { "title": "Some Title 2", "tags":"tagA tagC" }, ...] I'd like to convert it to... [ { "title": "Some Title", "tags":["tagA", "tag-B", "tagC"] }, { "title": "Some Title 2", "tags":["tagA", "tagC"] }, ...] So far I have: (map(select(.tags!=null)) | map(.tags | split(" "))) as $tags | $tags and that appears to give me something like: [ [ "tagA", "tag-B", "tagC" ], [ "tagA", "tagC" ] ] But I don't seem to be able to weave that back into an output that would give me .tags as an array in the original objects with the original values...
You're making it a lot more complicated than it is. Just use map() and |= : jq 'map(.tags |= split(" "))' file.json Edit: If you want to handle entries without tags : jq 'map(try(.tags |= split(" ")))' file.json Alternatively, if you want to keep unchanged all entries without tags : jq 'map(try(.tags |= split(" ")) // .)' file.json Result: [ { "tags": [ "tagA", "tag-B", "tagC" ], "title": "Some Title" }, { "tags": [ "tagA", "tagC" ], "title": "Some Title 2" }]
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/370306", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3570/" ] }
370,307
I live in a remote location and internet is scarce. Whenever i do sudo poweroff I want it to try git push also. And if push fails, abort shutdown, otherwise continue with shutdown. How can I achieve this without modifying the binary?
You're making it a lot more complicated than it is. Just use map() and |= : jq 'map(.tags |= split(" "))' file.json Edit: If you want to handle entries without tags : jq 'map(try(.tags |= split(" ")))' file.json Alternatively, if you want to keep unchanged all entries without tags : jq 'map(try(.tags |= split(" ")) // .)' file.json Result: [ { "tags": [ "tagA", "tag-B", "tagC" ], "title": "Some Title" }, { "tags": [ "tagA", "tagC" ], "title": "Some Title 2" }]
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/370307", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/108755/" ] }
370,313
I have several mp3's that i need to add '_p192' before the extension. I've tried this : for f in *.mp3; do printf '%s\n' "${f%.mp3}_p192.mp3"; done following this post: adding text to filename before extension But it just prints the file names to stdout without actually renaming the files.
The answer that you looked at was for a question that did not ask for the actual renaming of files on disk, but just to transform the names of the files as reported on the terminal. This means that your solution will work if you use the mv command instead of printf (which will just print the string to the terminal).

for f in *.mp3; do mv -- "$f" "${f%.mp3}_p192.mp3"; done

I'd still advise you to run the printf loop first to make sure that the names are transformed in the way that you'd expect. It is often safe to just insert echo in front of the mv to see what would have been executed (it may be a good idea to do this when you're dealing with loops over potentially destructive commands such as mv , cp and rm ):

for f in *.mp3; do echo mv -- "$f" "${f%.mp3}_p192.mp3"; done

or, with nicer formatting:

for f in *.mp3; do
    echo mv -- "$f" "${f%.mp3}_p192.mp3"
done
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/370313", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102918/" ] }
370,316
I am sick of always having to google for the process of adding a drive to the fstab using text editor. Is there a way to add say a CIFS samba share to the fstab with a Ubuntu GUI? Like Windows' map network path functionality.
On Ubuntu you can edit your fstab using the gnome-disk-utility . From the terminal run gnome-disks or type Disks from the dash. Select the disk then the partition, from the Option menu select Edit Mount Options .
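For reference, the kind of CIFS line such a tool ends up writing to /etc/fstab looks roughly like this (an illustration added here; server, share, mount point, credentials file and mount options are placeholders to adapt):

//server/share  /mnt/share  cifs  credentials=/home/user/.smbcredentials,uid=1000,gid=1000,iocharset=utf8  0  0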
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/370316", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67439/" ] }
370,318
I had installed CentOS(CLI,minimal).it have no GUI.i want connect to WiFi but answers on askubuntu are not working. .I want to know following:- How to turn WiFi on/off? How to get list of available WiFi connections? How to connect WiFi that i want to connect with?
To run the ifconfig ... command, you should install the net-tools package. Because net-tools is deprecated, there are the ip and iw commands, which answer your questions:

How to turn WiFi on/off?

$ ip link set <interface> up
$ ip link set <interface> down

How to get a list of available WiFi connections?

$ iw dev <interface> scan | grep SSID

How to connect to the WiFi network I want?

Create a wpa_supplicant configuration file with the following content:

ctrl_interface=/run/wpa_supplicant
update_config=1
ap_scan=1

To add the SSID and the password, run:

$ wpa_passphrase "YOUR-SSID" YOUR-PASSWD >> /etc/wpa_supplicant/wpa_supplicant.conf

To connect, run:

$ wpa_supplicant -i <interface> -c /etc/wpa_supplicant/wpa_supplicant.conf -B
$ dhclient <interface>
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/370318", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/235453/" ] }
370,329
I have browser shell leash and I'm executing shell commands in php and returing them to the browser and I just discovered chroot command and I want to run for example ls on root directory on changed root, In fact I need to run bash -c "ls /" . I've try this (I've try without sudo but it was not working): sudo chroot ~/projects/jcubic/leash ls but got error: chroot: failed to run command ‘ls’: No such file or directory do I use this command properly? Is it possible to run ls on different root directory? when I've try to run chroot without sudo I've got this error: chroot: cannot change root directory to '/home/kuba/projects/jcubic/leash': Operation not permitted
chroot: failed to run command ‘ls’: No such file or directory To run any command inside the chroot, you need to have this program available in the chroot (since it can not use the program installed in the / of filesystem. The simplest way is to copy the /usr/bin/ls from to /home/kuba/projects/jcubic/leash/usr/bin/ (you will also need the dependent shared libraries: ldd /usr/bin/ls ).
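A rough sketch of that copying step (an illustration added here; it assumes GNU cp for --parents and that ldd prints well-formed absolute library paths):

chroot_dir=~/projects/jcubic/leash
bin=/usr/bin/ls
cp --parents "$bin" "$chroot_dir"
for lib in $(ldd "$bin" | grep -o '/[^ ]*'); do
    cp --parents "$lib" "$chroot_dir"    # copies each shared library, preserving its path
done
sudo chroot "$chroot_dir" ls /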
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/370329", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1806/" ] }
370,376
I have 2 existing files : abcd and xyz . $ cat abcdabcd $ cat xyzxyz Now, when I try to softlink these files, I get this message : ln: cannot create xyz: File exists I do not want to use ln -sf abcd xyz command as it will overwrite the content of xyz with that of abcd . What I want is: both abcd and xyz should display their original content once they are unlinked. Or in other words, I just want to temporarily link both of these files. Please suggest if there is any other solution to this other than soft/hard link like using mount, etc. Edit : I am using Solaris OS which has no manual entry for commands like mount --bind , mount -B , bindfs , fusermount ,etc Also, I tried to use : mount -o bind abcd xyz and it gave following msg : cannot open /etc/vfstab . I checked and found out that /etc/vfstab was having only Root access.
This is not what ln is meant to do. ln creates a hardlink of an existing file, i.e., two (or more) directory entries that point to the same file in the disk. Linked files work in a way that editing one will affect all. The functionality you want is not something native to Unix (link files so that they appear as one and so that they can be unlinked later). Linux , though, has (some years ago) implemented something called bind mounting, allowing a file or a directory to be mounted on top of another (files on top of files and directories on top of directories). Proposed solution: If you want a file to appear "to be" another temporarily, then use bind mount ( mount -B file1 file2 ). This will mount file1 on top of file2 . After unmounting this later, both files will show again as they originally exist.

# echo A >A
# echo B >B
# mount -B A B
# cat A
A
# cat B
A
# umount B
# cat A
A
# cat B
B

If you expected that the "linked" files show as a concatenation of both, you will have to create a third file and remove it later.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/370376", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/194061/" ] }
370,400
I had install centOS 7 minimal version in my virtualbox in Ubuntu mate. It has no GUI. It is totally CLI. My user name is Smit and able to login in to it. But when i type command sudo yum update and enter my password, it says Smit is not in sudoers files. This incident will be reported. But when I try to add my user to sudo group by command adduser Smit sudo gives something like this: (I am unable to copy-paste via virtual-box. I do this by login in root.)
I don't know why your command doesn't work. It may have to do with either:

- your CentOS not using sudo by default
- the way the sudoers file should be edited
- the syntax of the adduser command on that particular machine.

Apparently, and it is my guess, it's first of all a matter of the last point. Anyhow, the easiest way is to add the user to the wheel group, which should have sudo privileges on your CentOS. Try out this command:

usermod -aG wheel Smit

This of course has to be done by root . Once successfully executed, change identity to Smit and check if you can sudo .

su - Smit
sudo yum update

As an alternative, you can use visudo . Adding this line should do:

Smit ALL=(ALL) ALL

But here's a guide with a few more details if you're interested.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/370400", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
370,407
I would like to create a key binding, using the key sequence C-x r , to reload the configuration of bash , stored in ~/.bashrc , and the one of the readline library stored in ~/.inputrc . To reload the configuration of readline, I think I could use the re-read-init-file function which is described in man 3 readline : re-read-init-file (C-x C-r) Read in the contents of the inputrc file, and incorporate any bindings or variable assignments found there. To reload the configuration of bash , I could use the source or . command. However, I'm not sure what's the best way to combine a shell command with a readline function. So, I came up with the combination of 2 key bindings: bind '"\C-xr": ". ~/.bashrc \C-x\C-z1\C-m"'bind '"\C-x\C-z1": re-read-init-file' When I hit C-x r in bash, here's what happens: . ~/.bashrc `~/.bashrc` is inserted on the command lineC-x C-z 1 `C-x C-z 1` is typed which is bound to `re-read-init-file`C-m `C-m` is hit which executes the current command line It seems to work because, inside tmux, if I have 2 panes, one to edit ~/.inputrc or ~/.bashrc , the other with a shell, and I change a configuration file, after hitting C-x r in the shell, I can see the change taking effect (be it a new alias or a new key binding), without the need to close the pane to reopen a new shell. But, is there a better way of achieving the same result? In particular, is it possible to execute the commands without leaving an entry in the history? Because if I hit C-p to recall the last executed command, I get . ~/.bashrc , while I would prefer to directly get the command which was executed before I re-sourced the shell configuration. I have the same issue with zsh : bindkey -s '^Xr' '. ~/.zshrc^M' Again, after hitting C-x r , the command . ~/.zshrc is logged in the history. Is there a better way to re-source the config of zsh ?
Don't inject a command into the command line to run it! That's very brittle — what you're trying assumes that there's nothing typed at the current prompt yet. Instead, bind the key to a shell command, rather than binding it to a line editing command. In bash, use bind -x : bind -x '"\C-xr": . ~/.bashrc' If you also want to re-read the readline configuration, there's no non-kludgy way to mix readline commands and bash commands in a key binding. A kludgy way is to bind the key to a readline macro that contains two key sequences, one bound to the readline command you want to execute and one bound to the bash command.
bind '"\e[99i~": re-read-init-file'
bind -x '"\e[99b~": . ~/.bashrc'
bind '"\C-xr": "\e[99i~\e[99b~"'
In zsh, use zle -N to declare a function as a widget, then bindkey to bind that widget to a key.
reread_zshrc () {
  . ~/.zshrc
}
zle -N reread_zshrc
bindkey '^Xr' reread_zshrc
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/370407", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/232487/" ] }
370,567
I have a directory located inside /var/www/html . I renamed this directory with mv from example1 to example2 . How could I enter the newly named dir the moment the name change was made? How would you achieve this? Might it be done with find , mmin , 0*60 ? I aim to rename and enter in one operation instead of two different operations. Ideally, I would aim for a one-line solution and without customizing anything in the system.
You can add this in your .bashrc or .bash_aliases file (or equivalent if your shell is not bash ):
mvcd () {
  mv -- "$1" "$2" && cd -P -- "$2"
}
Then restart your shell, and you can use the function like so: mvcd foo bar That assumes $2 is not an existing directory, as otherwise mv would move $1 into it rather than to it (see the -T option of GNU mv to guard against that). -- marks the end of options. mv "$1" "$2" would be mv "$option_or_source" "$option_or_argument_to_first_option_or_destination" . mv -- "$1" "$2" guarantees that $1 and $2 are not treated as options even if their name starts with - , so it's always treated as mv -- "$source" "$destination" . Generally, you want to use -- wherever a command is given an arbitrary argument. -P (for physical directory traversal) is to prevent the special processing that the cd builtin of POSIX shells does by default with .. path components, so that it treats the content of $2 the same as mv did. Without it, in mvcd foo ../bar , cd could cd into a different bar directory from the one mv renamed foo as. If you have set $CDPATH (or it was in the environment when the shell was started), you would also need to disable it for that one cd invocation:
mvcd () {
  mv -- "$1" "$2" && CDPATH= cd -P -- "$2"
}
Some extra corner-case problems remain: - (and in some shells -2 , +3 ) are treated specially even after -- . If you want to rename foo to - , use mvcd foo ./- instead of mvcd foo - .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/370567", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
370,584
I uploaded some font files to AWS (running Amazon Linux) and moved them to the /usr/share/fonts directory using a cp command in .ebextensions. When I SSH in from my mac and use ls -a , I see some files are colored differently - one set of font files is black while others are green. I'm curious what caused this to be so, and if it will create any problems for my code . From another answer on AskUbuntu I found this key on how to interpret these colours. I can't understand why a .ttf would be executable or why one set of .ttfs would be recognized and not another. Blue: Directory Green: Executable or recognized data file Sky Blue: Linked file Yellow with black background: Device Pink: Graphic image file Red: Archive file These files were all downloaded to a mac from various font sites before uploading.
ls -l will tell you definitively whether a file is executable or not. I don't think there's any great mystery here. You downloaded files from various sources each of which might have had different permission bits set for one reason or another. * If you don't like seeing some with colors and others without try chmod -x *.ttf ...font files should not need the executable bit set. * As Matteo Italia's highly upvoted comment, which should be preserved, says: Most probably they were copied from a FAT or NTFS volume, which don't store the executable bit, so are mounted by default so that all files have the executable bit set .
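If the fonts are spread over subdirectories, a recursive variant might look like this (the path is the one from the question; adjust to wherever your fonts actually live):
find /usr/share/fonts -type f -name '*.ttf' -exec chmod a-x {} +
ls -l /usr/share/fonts    # verify the executable bits are gone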
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/370584", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/111668/" ] }
370,622
How could I have a workspace sliding animation in i3 on ArchLinux ? I don't want to use a full DE, I'm right now using compton as a compositor but it only offers fade in/out when switching workspaces, I can't make it perform a sliding animation such as the one in KDE or Gnome. I don't mind installing another compositor but I'd like to be able to do it with compton and i3 if it's possible. (I don't mind neither having to use a more low level api and code the animation myself, but I don't know where to start) The second step would be to have a workspace switching like on MacOs (Or now also on Windows 10) where you drag your fingers on the trackpad and it switches between workspaces smoothly : if you stop draging the workspace will just pop back in place. (I'm talking about this ) That would be really cool to setup on a Linux system. I'm using Libinput (and libinput gestures) but I don't know if there is such a feature. How could I get the closest to the MacOs/Windows10 workspace switching experience with i3 on ArchLinux ?
This is what i have done long time ago but it probably shouldn't be done like this, it is incredibly hacky and inefficient ! :P It basically makes a screenshot of the current screen and slide it to the side one pixel at a time. (speed depend on computers i guess.) I have a bash script goto_to_workspace.sh that is triggered every time i change workspace with this code inside : (script take argument number as workspace number like for example : goto_to_workspace.sh 4 ) WORKSPACE=$1WKSP=`xprop -root -notype _NET_CURRENT_DESKTOP | sed 's#.* =##'`CURRENT_WORKSPACE=`expr 1 + $WKSP`if [ $CURRENT_WORKSPACE -ne $WORKSPACE ]; then scrot -q 50 PRTSRC.jpeg feh PRTSRC.jpeg& FEH_WINDOW=$! #WAIT (give i3 time to switch workspace in the background) sleep .2fislide_FEH_LEFT(){ LONG_LINE="move left 1px" for i in {1..11}; do LONG_LINE=$LONG_LINE","$LONG_LINE done i3-msg "[class=feh] $LONG_LINE"}slide_FEH_RIGHT(){ LONG_LINE="move right 1px" for i in {1..11}; do LONG_LINE=$LONG_LINE","$LONG_LINE done i3-msg "[class=feh] $LONG_LINE"}if [ $CURRENT_WORKSPACE -gt $WORKSPACE ]; then slide_FEH_RIGHTelse slide_FEH_LEFTfi#SIMPLE KILL AFTER 500ms{ sleep .5 && kill $FEH_WINDOW; } & EDIT : looking into the problem deeper. It is smarter to use wmctrl instead. So the function going down can be for example : (for my 1920x1080 screen) slide_FEH_DOWN_wmctrl(){ FEH_ID=`wmctrl -l|grep "PRTSRC.jpeg$"|awk '{print $1}'` for (( c=0; c!=1100; c=c+10 )) do wmctrl -i -r $FEH_ID -e 1,0,$c,1920,1080 done} I also tried to do something quickly with xlib (c or python) but it's less smooth than wmctrl. So if someone can do that better let us know. EDIT2 : Of course you need feh to be sticky with for example in your i3 config : for_window [class="feh"] floating enable, sticky enable, border pixel 0, move absolute position 0 px 0 px
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/370622", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/151839/" ] }
370,628
I am using Openbsd 6.1/amd64 here. Suddenly, I am no longer able to update or install a package. When trying to do a pkg_add vlc I am greeted with the message: https://ftp.OpenBSD.org/pub/OpenBSD/6.1/packages-stable/amd64/: ftp: SSL write error: ocsp verify failed: ocsp response not currenthttps://ftp.OpenBSD.org/pub/OpenBSD/6.1/packages-stable/amd64/: ftp: SSL write error: ocsp verify failed: ocsp response not currenthttps://ftp.OpenBSD.org/pub/OpenBSD/6.1/packages-stable/amd64/: empty I already ran pkg_add -uU , and pkg_check however, still the same errors. Have you a suggestion?
This is likely a temporary issue, I can reproduce it. The mirror that I'm using, https://ftp.eu.openbsd.org/pub/OpenBSD , works as expected. Someone reported this to the openbsd-misc mailing list. A response says : It's a server-side problem, same on www.openbsd.org. Not visible in normal graphical browsers because they fallback to the CA's OCSP server whereas ftp(1) just relies on the stapled cert. Simplest workaround is to use a mirror, [...] It will likely be resolved within a day or two (my guess). In the meanwhile, use one of the many mirror sites . UPDATE (the day after): The issue has now been resolved on the affected servers. Note that -U is never needed when you run pkg_add -u . It's only when you install a new package that -U will update any outdated packages that the new package depends on. With pkg_add -u you update all packages.
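One common way to point pkg_add at a specific mirror in the meantime is the PKG_PATH environment variable; the mirror URL below is only an example, pick one near you:
export PKG_PATH=https://ftp.eu.openbsd.org/pub/OpenBSD/6.1/packages/amd64/
pkg_add vlc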
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/370628", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138261/" ] }
370,667
I have the following in an expect script:
spawn cat version
expect -re 5.*.*
set VERSION $expect_out(0,string)
spawn rpm --addsign dist/foo-$VERSION-1.i686.rpm
The cat command is getting the version correctly; however, it appears to be adding a newline. I expect the output to be the following: dist/foo-5.x.x-1.i686.rpm but instead I am getting the following (including the error at the beginning): cannot access file dist/foo-5.x.x-1.i686.rpm Why is expect adding a newline to the cat command output, and is there any way to prevent this or to fix the output of the cat command?
TCL can read a file directly without the complication of spawn cat :
#!/usr/bin/env expect
# open a (read) filehandle to the "version" file... (will blow up if the file
# is not found)
set fh [open version]
# and this call handily discards the newline for us, and since we only need
# a single line, the first line, we're done.
set VERSION [gets $fh]
# sanity check value read before blindly using it...
if {![regexp {^5\.[0-9]+\.[0-9]+$} $VERSION]} {
    error "version does not match 5.x.y"
}
puts "spawn rpm --addsign dist/foo-$VERSION-1.i686.rpm"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/370667", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72302/" ] }
370,668
After few days, since I asked this question: LINK I realized, that maybe it's another problem. I used package inotifywait to check, is a temporary file generating after sending html form. Unfortunately temp file is not creating, after clicking "upload" button in my form, but I don't know why, because even I had turned off firewall (I thought that it is a problem - I was wrong). Maybe someone has this same problem? OS is newly installed, so I didn't change so much in httpd.conf and php.ini . Below is a list ' What I checked? ': enctype='multipart/form-data' is set, /tmp/ is a upload_tmp_dir , file_uploads is on , File size is in limit, which is set into upload_max_filesize (limit is 2MB, but file have 18KB), I tried to use aboslute path, /tmp/ and /var/www/html/upload have chmod set on 777 and upload owner and owner group is apache , I tried change upload_tmp_dir in php.ini , but it bring this same result.
TCL can read a file directly without the complication of spawn cat : #!/usr/bin/env expect# open a (read) filehandle to the "version" file... (will blow up if the file# is not found)set fh [open version]# and this call handily discards the newline for us, and since we only need# a single line, the first line, we're done.set VERSION [gets $fh]# sanity check value read before blindly using it...if {![regexp {^5\.[0-9]+\.[0-9]+$} $VERSION]} { error "version does not match 5.x.y"}puts "spawn rpm --addsign dist/foo-$VERSION-1.i686.rpm"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/370668", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/234991/" ] }
370,673
I have a Web Application running in HTTP. However I wanted to test a Web Service that is running on HTTPS using CORS. Since HTTP won't allow HTTPS request, I wanted to setup a proxy in my Web Application so that it appears as HTTPS. How can I achieve this redirection in Linux?
TCL can read a file directly without the complication of spawn cat : #!/usr/bin/env expect# open a (read) filehandle to the "version" file... (will blow up if the file# is not found)set fh [open version]# and this call handily discards the newline for us, and since we only need# a single line, the first line, we're done.set VERSION [gets $fh]# sanity check value read before blindly using it...if {![regexp {^5\.[0-9]+\.[0-9]+$} $VERSION]} { error "version does not match 5.x.y"}puts "spawn rpm --addsign dist/foo-$VERSION-1.i686.rpm"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/370673", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78548/" ] }
370,856
I am trying to print only the matched pattern in a CSV file. Example: all the column values starting with 35= , with their values. Thanks. CSV file:
35=A,D=35,C=129,ff=136
D=35,35=BCD,C=129,ff=136
900035=G,D=35,C=129,ff=136
35=EF,D=35,C=129,ff=136,35=G
36=o,D=35,k=1
Expected output:
35=A
35=BCD
35=EF
35=G
The command I used did not work: sed -n '/35=[A-Z]*?/ s/.*\(35=[A-Z]*?\).*/\1/p' filename
With GNU grep , which supports the -o option to print only the matched string, each on its own line:
$ grep -oE '\b35=[^,]+' ip.csv
35=A
35=BCD
35=EF
35=G
\b is a word boundary, so that 900035 won't match. [^,]+ matches one or more non- , characters; this assumes the values do not contain , . With awk :
$ awk -F, '{ for(i=1;i<=NF;i++){if($i~/^35=/) print $i} }' ip.csv
35=A
35=BCD
35=EF
35=G
-F, sets , as the input field separator; for(i=1;i<=NF;i++) iterates over all fields; if($i~/^35=/) tests whether the field starts with 35= ; print $i prints that field. Similar with perl : perl -F, -lane 'foreach (@F){print if /^35=/}' ip.csv
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/370856", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/235843/" ] }
370,876
What will happen if I write a line in bash like commandA && commandB ; commandC If commandA fails, will commandC be executed?
Yes, and you can easily check it yourself:
$ non-existent-command && echo hi ; echo after semicolon
bash: non-existent-command: command not found
after semicolon
In man bash it says: Commands separated by a ; are executed sequentially; the shell waits for each command to terminate in turn.
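A minimal sketch of both branches, with false standing in for a failing commandA and true for a successful one:
$ false && echo B ; echo C
C
$ true && echo B ; echo C
B
C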
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/370876", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/235861/" ] }
370,889
I have the code
file="JetConst_reco_allconst_4j2t.png"
if [[ $file == *_gen_* ]];then
    echo "True"
else
    echo "False"
fi
I test if file contains "gen". The output is "False". Nice! The problem is when I substitute "gen" with a variable testseq :
file="JetConst_reco_allconst_4j2t.png"
testseq="gen"
if [[ $file == *_$testseq_* ]];then
    echo "True"
else
    echo "False"
fi
Now the output is "True". How could it be? How to fix the problem?
You need to interpolate the $testseq variable in one of the following ways: $file == *_"$testseq"_* (here $testseq is treated as a fixed string) or $file == *_${testseq}_* (here $testseq is treated as a pattern). Otherwise, the _ immediately after the variable's name will be taken as part of the variable's name (it's a valid character in a variable name).
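Applied to the snippet from the question, the fixed test might look like this; the braces keep the trailing _ out of the variable name:
file="JetConst_reco_allconst_4j2t.png"
testseq="gen"
if [[ $file == *_${testseq}_* ]]; then
    echo "True"
else
    echo "False"
fi
# prints "False", since the file name contains "_reco_", not "_gen_"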
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/370889", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/226971/" ] }
370,904
I have this:
$ echo $SHELL
/bin/sh
$ uname -a
FreeBSD 11.0-RELEASE-p8
And this works: sudo bash my_script.sh some_arg But this does not: sudo bash my_script.sh some_arg >& /dev/null Error: -sh: Syntax error: Bad fd number On Linux with bash as the default shell this works fine. How to fix it? In the script I have this:
#!/usr/local/bin/bash
# other stuff
bash does support this, but you explicitly state that your current shell is not bash , but rather sh , which is a different shell.
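A portable way to get the same effect under plain sh is to spell out the redirection instead of using the >& shorthand:
sudo bash my_script.sh some_arg > /dev/null 2>&1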
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/370904", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/235885/" ] }
370,916
I have a piece of software which I would like to install in a separate hierarchy beneath $HOME/local on an Ubuntu 16.04 machine. The software is distributed as a Debian package, and the source code is not available (I would happily have downloaded it and compiled it myself had it been). I don't have (and should not have) sudo access on the machine I'm attempting this on. The software is not to be installed system-wide, but only for my personal use. I tried to $ dpkg --root="$HOME/local" -i package_x.y.z_x86_64.deb but I get dpkg: error: requested operation requires superuser privilege After trying with --force-all and creating all the necessary files and directories needed to satisfy dpkg ( local/usr/bin , local/var/dpkg with subdirectories info , triggers and updates , along with an empty status file in local/var/dpkg ), I get stuck with $ dpkg --root=$HOME/local -i --force-all package-x.y.z_x86_64.debdpkg: could not open log '/var/log/dpkg.log': Permission denied(Reading database ... 0 files and directories currently installed.)Preparing to unpack package_x.y.z_x86_64.deb ...Unpacking package (1:x.y.z) ...dpkg: error processing archive package_x.y.z_x86_64.deb (--install): error setting ownership of './usr/bin/application': Operation not permitteddpkg-deb: error: subprocess paste was killed by signal (Broken pipe)Errors were encountered while processing: package_x.y.z_x86_64.deb It's obviously failing to chown the files to the correct users in accordance with the package specification. The next step for me would probably be to have a talk with the sysadmins on this machine to see if they could install this software for me, but I wonder if there's something I've missed that would have allowed me to have my own local package installation root?
No, you haven’t missed anything. The best you can do in such circumstances is use dpkg-deb to extract the contents of the package, and hope they’ll work: dpkg-deb -x package_x.y.z_x86_64.deb my-private-root This won’t run any of the maintainer scripts contained in the package; you can extract those using dpkg-deb -e package_x.y.z_x86_64.deb my-private-control
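After extracting, you would typically point your environment at the private root. The exact paths depend on the package layout, so this is only a sketch; "application" is the hypothetical binary name taken from the question's error log:
dpkg-deb -x package_x.y.z_x86_64.deb ~/local
export PATH="$HOME/local/usr/bin:$PATH"
export LD_LIBRARY_PATH="$HOME/local/usr/lib:${LD_LIBRARY_PATH:-}"
application --version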
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/370916", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/116858/" ] }
370,932
I am trying to test a server that is working normal in web browser, with openssl s_client option, connecting it directly using openssl returns the 400 Bad Request: openssl s_client -servername example.com -connect example.com:443 -tls1(some information about the certificate)GET / HTTP/1.1 (and the error occurs **immediately** - no time to include more headers like Host:) Important: I already tried to put the Host: header, the thing is that when i run GET, the error occur immediately leaving me no chance to include more headers. Replace example.com with my host...
According to https://bz.apache.org/bugzilla/show_bug.cgi?id=60695 my command was: openssl s_client -crlf -connect www.pgxperts.com:443 where -crlf means, according to the openssl command's help: -crlf - convert LF from terminal into CRLF Then I could input multiline commands and no longer got "bad request" as the response after the first command line.
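With -crlf in place, a complete minimal HTTP/1.1 request typed into the s_client session might look like the following; example.com is a placeholder, and the empty line after the headers is what actually terminates the request:
openssl s_client -crlf -servername example.com -connect example.com:443
GET / HTTP/1.1
Host: example.com
Connection: close
(press Enter once more to send the blank line)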
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/370932", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/170792/" ] }
370,941
I need to edit the page number of multiple URL links in a text file, e.g.: http://gk4success.com/questions.php?page= 1 &parent=0&lang=2&c-id=27&q_type=... http://gk4success.com/questions.php?page= 162 &parent=0&lang=2&c-id=27&q_type= There are 162 links on the web site and I cannot edit the links 162 times. Even if I copy the line 162 times, how do I edit the page numbers easily? Is there any easy way with any text editor?
According to https://bz.apache.org/bugzilla/show_bug.cgi?id=60695 my command was: openssl s_client -crlf -connect www.pgxperts.com:443 where -crlf means, according to help of the openssl command, -crlf - convert LF from terminal into CRLF Then I could input multiline commands and no "bad request" as response after the first commandline any more.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/370941", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/185795/" ] }
370,954
I am attempting to write a bash script that will insert a string after matching on a string in /usr/lib/systemd/system/neutron-server.service I have been able to do this on other files easily as I was just insert variables into neccessary config files, but this one seems to be giving me trouble. I believe the error is that sed is not ignoring the special characters. In my attempt I have tried using sed of single quotes and double quotes (which I understand are for variables, but thought it might change something. Is there a better way of going about this or some special sed flags or syntax I am missing? sed ‘/--config-file /etc/neutron/plugin.ini/a\--config-file /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini‘ /usr/lib/systemd/system/neutron-server TL;DR - Insert --config-file /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini After --config-file /etc/neutron/plugin.ini Orginial File [Unit]Description=OpenStack Neutron ServerAfter=syslog.target network.target[Service]Type=notifyUser=neutronExecStart=/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file /var/log/neutron/server.logPrivateTmp=trueNotifyAccess=allKillMode=processTimeoutStartSec="infinity"[Install]WantedBy=multi-user.target File after desired change command. [Unit]Description=OpenStack Neutron ServerAfter=syslog.target network.target[Service]Type=notifyUser=neutronExecStart=/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-file /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file /var/log/neutron/server.logPrivateTmp=trueNotifyAccess=allKillMode=processTimeoutStartSec="infinity"[Install]WantedBy=multi-user.target
According to https://bz.apache.org/bugzilla/show_bug.cgi?id=60695 my command was: openssl s_client -crlf -connect www.pgxperts.com:443 where -crlf means, according to help of the openssl command, -crlf - convert LF from terminal into CRLF Then I could input multiline commands and no "bad request" as response after the first commandline any more.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/370954", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/235932/" ] }
370,959
I'm having a hard time figuring out how to extract the IP information out of an output similar to this:
Fri Jun 9 19:01:54 2017,10.0.0.65,devi1,0,unknown os
Fri Jun 9 19:01:54 2017,10.0.0.55,host1,0,unknown os
Fri Jun 9 19:01:54 2017,10.0.0.35,srv01,0,unknown os
Sat Jun 10 23:11:13 2017,10.0.0.10,switch.domain.com,0,unknown os
Any tips on how I can, from that output, get:
10.0.0.65
10.0.0.55
10.0.0.35
10.0.0.10
Running on Bash 4.3.30 in Linux. Any help would be greatly appreciated. Thank you very much!
While you could do this with awk or sed , for a simple extraction between fixed delimiters cut is probably the best fit: $ cut -d, -f2 < input
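For comparison, the awk and sed equivalents mentioned above would be (field 2 being the IP column):
awk -F, '{print $2}' input
sed 's/^[^,]*,\([^,]*\),.*/\1/' input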
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/370959", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/204855/" ] }
370,985
I'm just wondering where these values are being set and what they default to? Mine is currently 18446744073692774399. I didn't set it anywhere that I can see. $ cat /proc/sys/kernel/shmmax 18446744073692774399$ sysctl kernel.shmmaxkernel.shmmax = 18446744073692774399
The __init function ipc_ns_init sets the initial value of shmmax by calling shm_init_ns , which sets it to the value of the SHMMAX macro. The definition of SHMMAX is in <uapi/linux/shm.h> : #define SHMMAX (ULONG_MAX - (1UL << 24)) /* max shared seg size (bytes) */ On 64-bit machines, that definition equals the value you found, 18446744073692774399 .
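You can confirm the arithmetic from the shell: on a 64-bit machine ULONG_MAX is 2^64 - 1, and subtracting 1 << 24 reproduces the value you saw. bc is used here because the numbers overflow bash's signed 64-bit arithmetic:
$ echo '2^64 - 1 - 2^24' | bc
18446744073692774399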
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/370985", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3285/" ] }
371,014
I have a laptop with Debian on it, and I am going to sell this laptop. Would erasing the Debian installation before selling it suffice to completely remove my personal data from the laptop, and if so, how can I uninstall Debian (so that there isn't any operating system on the laptop)?
This nixCraft post explain how to erase hard disk The secure removal of data is not as easy as you may think. When you delete a file using the default commands of the operating system (for example “rm” in Linux/BSD/MacOS/UNIX or “del” in DOS or emptying the recycle bin in WINDOWS) the operating system does NOT delete the file, the contents of the file remains on your hard disk. The only way to make recovering of your sensitive data nearly impossible is to overwrite (“wipe” or “shred”) the data with several defined patterns. For erasing hard disk permanently, you can use the standard dd command. However, I recommend using shred command or wipe command or scrub command. Warning : Check that the correct drive or partition has been targeted. Wrong drive or partition target going to result into data loss . Under no circumstances we can be help responsible for total or partial data loss, so please be careful with disk names. YOU HAVE BEEN WARNED! Erase disk permanently using a live Linux cd First, download a knoppix Live Linux CD or SystemRescueCd live CD. Next, burn a live cd and boot your laptop or desktop from live CD. You can now wipe any disk including Windows, Linux, Mac OS X or Unix-like system. 1. How do I use the shred command? Shred originally designed to delete file securely. It deletes a file securely, first overwriting it to hide its contents. However, the same command can be used to erase hard disk. For example, if your hard drive named as /dev/sda, then type the following command: # shred -n 5 -vz /dev/sda Where, -n 5: Overwrite 5 times instead of the default (25 times).-v : Show progress.-z : Add a final overwrite with zeros to hide shredding. The command is same for IDE hard disk hda (PC/Windows first hard disk connected to IDE) : # shred -n 5 -vz /dev/hda Note: Comment from @Gilles Replace shred -n 5 by shred -n 1 or by cat /dev/zero. Multiple passes are not useful unless your hard disk uses 1980s technology. In this example use shred and /dev/urandom as the source of random data: # shred -v --random-source=/dev/urandom -n1 /dev/DISK/TO/DELETE# shred -v --random-source=/dev/urandom -n1 /dev/sda 2. How to use the wipe command You can use wipe command to delete any file including disks: # wipe -D /path/to/file.doc 3. How to use the scrub command You can use disk scrubbing program such as scrub. It overwrites hard disks, files, and other devices with repeating patterns intended to make recovering data from these devices more difficult. Although physical destruction is unarguably the most reliable method of destroying sensitive data, it is inconvenient and costly. For certain classes of data, organizations may be willing to do the next best thing which is scribble on all the bytes until retrieval would require heroic efforts in a lab. The scrub implements several different algorithms. The syntax is: # scrub -p nnsa|dod|bsi|old|fastold|gutmann|random|random2 fileNameHere To erase /dev/sda, enter: # scrub -p dod /dev/sda 4. Use dd command to securely wipe disk You can wipe a disk is done by writing new data over every single bit. The dd command can be used as follows: # dd if=/dev/urandom of=/dev/DISK/TO/WIPE bs=4096 Wipe a /dev/sda disk, enter: # dd if=/dev/urandom of=/dev/sda bs=4096 5. How do I securely wipe drive/partition using a randomly-seeded AES cipher from OpenSSL? You can use openssl and pv command to securely erase the disk too. 
First, get the total /dev/sda disk size in bytes: # blockdev --getsize64 /dev/sda399717171200 Next, type the following command to wipe a /dev/sda disk: # openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt </dev/zero | pv -bartpes 399717171200 | dd bs=64K of=/dev/sda 6. How to use badblocks command to securely wipe disk The syntax is: # badblocks -c BLOCK_SIZE_HERE -wsvf /dev/DISK/TO/WIPE# badblocks -wsvf /dev/DISK/TO/WIPE# badblocks -wsvf /dev/sda
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/371014", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
371,025
I need one more loopback interface in my OpenBSD 6.1, with the IP address 127.0.0.2. I can create it by hand with the command: ifconfig lo1 127.0.0.2 And to have it at boot time, I just inserted that command into /etc/rc.local . I have researched for a more standard way to do that, was not successful. Having it in /etc/rc.local also means I only have that interface late in the boot process. How may I configure it in a cleaner "OpenBSD" way?
As hinted at in lo(4) , you may create /etc/hostname.lo1 : inet 127.0.0.2 255.0.0.0 This will create the lo1 interface when the boot process runs /etc/netstart . With that file in place, you may also set up the interface without rebooting through $ doas sh /etc/netstart lo1 The interface is reported as lo1: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 32768 index 4 priority 0 llprio 3 groups: lo inet 127.0.0.2 netmask 0xff000000 by ifconfig . For further info, see hostname.if(5) , netstart(8) and ifconfig(8) .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/371025", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138261/" ] }
371,062
I'm running into weird behavior when trying to grep a man page on macOS. For example, the Bash man page clearly has an occurrence of the string NAME : $ man bash | head -5 | tail -1NAME And if I grep for name I do get results, but if I grep for NAME I don't: $ man bash | grep 'NAME'$ man bash | grep NAME I've tried other uppercase words that I know are in there, and searching for SHELL yields nothing whereas searching for BASH yields results. What's going on here? Update : Thanks for all the answers! I thought it worth adding the context in which I ran into this. I wanted to write a bash function to wrap man and in cases where I've tried to look up the man page for a shell builtin, jump to the relevant section of the Bash man page. There might be a better way, but here's what I've got currently: man () { case "$(type -t "$1")" in builtin) local pattern="^ *$1" if bashdoc_match "$pattern \+[-[]"; then command man bash | less --pattern="$pattern +[-[]" elif bashdoc_match "$pattern\b"; then command man bash | less --pattern="$pattern[[:>:]]" else command man bash fi ;; keyword) command man bash | less --hilite-search --pattern='^SHELL GRAMMAR$' ;; *) command man "$@" ;; esac}bashdoc_match() { command man bash | col -b | grep -l "$1" > /dev/null}
If you add a | sed -n l to that tail command, to show non-printable characters, you'll probably see something like: N\bNA\bAM\bME\bE That is, each character is written as X Backspace X . On modern terminals, the character ends up being written over itself (as Backspace aka BS aka \b aka ^H is the character that moves the cursor one column to the left) with no difference. But in ancient tele-typewriters, that would cause the character to appear in bold as it gets twice as much ink. Still, pagers like more / less do understand that format to mean bold, so that's still what roff does to output bold text. Some man implementations would call roff in a way that those sequences are not used (or internally call col -b -p -x to strip them like in the case of the man-db implementation (unless the MAN_KEEP_FORMATTING environment variable is set)), and don't invoke a pager when they detect the output is not going to a terminal (so man bash | grep NAME would work there), but not yours. You can use col -b to remove those sequences (there are other types ( _ BS X ) as well for underline). For systems using GNU roff (like GNU or FreeBSD), you can avoid those sequences being used in the first place by making sure the -c -b -u options are passed to grotty , for instance by making sure the -P-cbu options is passed to groff . For instance by creating a wrapper script called groff containing: #! /bin/sh -exec /usr/bin/groff -P-cbu "$@" That you put ahead of /usr/bin/groff in $PATH . With macOS' man (also using GNU roff ), you can create a man-no-overstrike.conf with: NROFF /usr/bin/groff -mandoc -Tutf8 -P-cbu And call man as: man -C man-no-overstrike.conf bash | grep NAME Still with GNU roff , if you set the GROFF_SGR environment variable (or don't set the GROFF_NO_SGR variable depending on how the defaults have been set at compile time), then grotty (as long as it's not passed the -c option) will use ANSI SGR terminal escape sequences instead of those BS tricks for character attributes. less understand them when called with the -R option. FreeBSD's man calls grotty with the -c option unless you're asking for colours by setting the MANCOLOR variable (in which case -c is not passed to grotty and grotty reverts to the default of using ANSI SGR escape sequences there). MANCOLOR=1 man bash | grep NAME will work there. On Debian, GROFF_SGR is not the default. If you do: GROFF_SGR=1 man bash | grep NAME however, because man 's stdout is not a terminal, it takes it upon itself to also pass a GROFF_NO_SGR variable to grotty (I suppose so it can use col -bpx to strip the BS sequences as col doesn't know how to strip the SGR sequences, even though it still does it with MAN_KEEP_FORMATTING ) which overrides our GROFF_SGR . You can do instead: GROFF_SGR=1 MANPAGER='grep NAME' man bash (in a terminal) to have the SGR escape sequences. That time, you'll notice that some of those NAME s do appear in bold on the terminal (and in a less -R pager). If you feed the output to sed -n l ( MANPAGER='sed -n /NAME/l' ), you'll see something like: \033[1mNAME\033[0m$ Where \e[1m is the sequence to enable bold in ANSI compatible terminals, and \e[0m the sequence to revert all SGR attributes to the default. On that text grep NAME works as that text does contain NAME , but you could still have problems if looking for text where only parts of it is in bold/underline...
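For the original problem, the quickest practical fix is usually just to strip the overstrike sequences before grepping, as mentioned above:
man bash | col -b | grep NAME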
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/371062", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47044/" ] }
371,144
From the manpage of rsync --remove-source-files This tells rsync to remove from the sending side the files (meaning non-directories ) that are a part ofthe transfer and have been successfully duplicated on the receiving side . Does it mean files on the sending side that are either part of the transfer or duplicated on the receiving side? Can I also remove directories on the sending side? Note that you should only use this option on source files that are quiescent . What does "source files that are quiescent" mean? If you are using this tomove files that show up in a particular directory over to another host, make sure that the finishedfiles get renamed into the source directory, not directly written into it, so that rsync can't possiblytransfer a file that is not yet fully written. What does this mean? If you can't first write the files into a different directory,you should use a naming idiom that lets rsync avoid transferring files that are not yet finished (e.g.name the file "foo.new" when it is written, rename it to "foo" when it is done, and then use theoption --exclude='*.new' for the rsync transfer). What does this mean? Starting with 3.1.0, rsync will skip the sender-side removal (and output an error) if the file's size ormodify time has not stayed unchanged. What does this mean? Thanks.
Q: Does it mean files on the sending side that are either part of the transfer or duplicated on the receiving side? A: Both Q: Can I also remove directories on the sending side? A: Yes --remove-source-files then issue the command find <source_directory> -type d -empty -delete OR find <source_directory> -type l -type d -empty -delete (to include symlinks in the deletion) (Was: --remove-source-files then issue the command rm -rf <source_directory> ) WARNING: As mentioned in OrangeDog's comment, the rm -rf suggestion is unsafe. Specifically, any files that were for any reason not transferred (file changed between building the transfer list and starting to actually transfer that file, receiving side ran out of disk space, network connection dropped, etc.) will be left untouched in the source directory by rsync — but after your rm -rf invokation they're just gone. The find command above will recursively delete the empty source tree if all the source files have been successfully transferred and removed, but will leave alone any remaining files (and their containing directories, of course). Q: What does "source files that are quiescent" mean? A: It means files that have been written to and closed Q: If you are using this to move files that show up in a particular directory over to another host, make sure that the finished files get renamed into the source directory, not directly written into it, so that rsync can't possibly transfer a file that is not yet fully written. What does this mean? A: It means exactly what I said above Q: If you can't first write the files into a different directory, you should use a naming idiom that lets rsync avoid transferring files that are not yet finished (e.g. name the file "foo.new" when it is written, rename it to "foo" when it is done, and then use the option --exclude='*.new' for the rsync transfer). What does this mean? A: It means that RSYNC makes a list of files to be transferred first. Then it writes them into a different directory (Destination Directory), thus if you transfer a file that hasn't finished, it is best to rename it after it is done using the --exclude option Q: Starting with 3.1.0, rsync will skip the sender-side removal (and output an error) if the file's size or modify time has not stayed unchanged. What does this mean? A: If RSYNC detects that when its about to write the file to the destination directory that the file size has changed between the time it scanned it, to the time it actually writes it to the destination directory, then RSYNC will skip the file.
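Putting those pieces together, a typical move-and-clean-up run might look like this; the paths and host are placeholders:
rsync -av --remove-source-files /srv/outbox/ user@backuphost:/srv/inbox/ \
  && find /srv/outbox/ -type d -empty -delete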
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/371144", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
371,150
I used Linux a bit in college, and am familiar with the terms. I develop in .NET languages regularly, so I'm not computer illiterate. That said, I can't really say I understand the "compile it yourself" [CIY] mentality that exists in *nix circles. I know it's going away, but still hear it from time to time. As a developer, I know that setting up compilers and necessary dependencies is a pain in the butt, so I feel like CIY work flows have helped to make *nix a lot less accessible. What social or technical factors led to the rise of the CIY mentality?
Very simply, for much of the history of *nix, there was no other choice. Programs were distributed as source tarballs and the only way you had of using them was to compile from source. So it isn't so much a mentality as a necessary evil. That said, there are very good reasons to compile stuff yourself since they will then be compiled specifically for your hardware, you can choose what options to enable or not and you can therefore end up with a fine tuned executable, just the way you like it. That, however, is obviously only something that makes sense for expert users and not for people who just want a working machine to read their emails on. Now, in the Linux world, the main distributions have all moved away from this many years ago. You very, very rarely need to compile anything yourself these days unless you are using a distribution that is specifically designed for people who like to do this like Gentoo. For the vast majority of distributions, however, your average user will never need to compile anything since pretty much everything they'll ever need is present and compiled in their distribution's repositories. So this CIY mentality as you call it has essentially disappeared. It may well still be alive and kicking in the UNIX world, I have no experience there, but in Linux, if you're using a popular distribution with a decent repository, you will almost never need to compile anything yourself.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/371150", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48548/" ] }
371,153
I'm looking for a process monitor that produces easy-to-parse output on stdout. Is there any tool like that in Unix? Something like htop or top , but meant to be consumed by another program. To be more specific, let's say I want to create a GUI program for process monitoring. So, I need to get real-time process information (maybe every second). Do I need to call ps every second, or is there a better alternative?
Sounds like ps ... It can be configured to output specific information on specific processes (or all processes). If you don’t mind making your program OS-specific, you could also parse whatever ps parses on your system, e.g. /proc on a Linux system.
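As a sketch, a script-friendly ps invocation with explicit columns could simply be polled once a second; note that --no-headers and --sort are procps (Linux) extensions:
while sleep 1; do
    ps -eo pid,ppid,pcpu,pmem,comm --no-headers --sort=-pcpu | head -n 10
done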
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/371153", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102187/" ] }
371,226
I see a lot of shell scripts (for example, this one ) checking for a variable's presence/absence like: [ -n "${VAR-}" ] As far as I can tell, using the ${VAR-fallback} form without providing a fallback serves no purpose when checking for variable presence/absence ( -n or -z ). The same goes for ${VAR:-fallback} . For example, with an unset variable,
unset VAR
[ -z "$VAR" ] && [ -z "${VAR-}" ] && [ -z "${VAR:-}" ] && echo True # => True
and with a null variable
VAR=
[ -z "$VAR" ] && [ -z "${VAR-}" ] && [ -z "${VAR:-}" ] && echo True # => True
But I see it in enough places that I have to ask, am I missing something? Is it just a misunderstanding that results in misuse, or is there actually a reason to do it?
If set -u is in effect, and VAR is unset, [ -z "$VAR" ] will cause an error. With [ -z "${VAR-}" ] the default value overrides the check for using an unset variable, and there is no error.
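A quick demonstration of the difference under set -u (the exact error wording varies between bash versions):
$ bash -uc '[ -z "$VAR" ] && echo empty'
bash: VAR: unbound variable
$ bash -uc '[ -z "${VAR-}" ] && echo empty'
empty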
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/371226", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47044/" ] }
371,238
I have a most confusing questing that bothered me for years. What is the difference between the file size given by ls -l and du -sh *. GRILL:/user/MAIL/DATA>lltotal 270drwxr-xr-x 11 user users 1024 Mar 21 2013 .drwxr-xr-x 6 user users 96 May 28 2008 ..drwxr-xr-x 10 user users 1024 Jun 14 09:40 Roddrwxr-xr-x 3 user users 96 Sep 17 2010 Atlasdrwxr-xr-x 2339 user users 132096 Jun 14 15:00 Admin drwxr-xr-x 3 user users 96 Jul 11 2014 DEdrwxr-xr-x 5 user users 96 Jun 14 08:30 Expressdrwxr-xr-x 3 user users 96 Sep 17 2010 Deferreddrwxr-xr-x 2 user users 96 Feb 10 2009 Imagidrwxr-xr-x 6 user users 1024 Jul 11 2014 NOdrwxr-xr-x 3 user users 2048 Mar 21 2013 SE-rw-r--r-- 1 user users 55 Mar 21 2013 cmdGRILL:/user/MAIL/DATA>du -sk *6723 Rod0 Atlas435494 Admin2 DE111273 Express2 Deferred0 Imagi541 NO12 SE1 cmd The size of Admin in ls -l is 132096 , I tried removing 400000+ files from Admin directory and I didnt find the space reduced even a bit. Whereas du -sk gives the size as 435494 . Which one is the original size of the file and what is the difference between them? Could anyone please elaborate?
For files, ls -l file shows (among other things) the size of file in bytes, while du -k file shows the space occupied by file on disk (in units of 1 kB = 1024 bytes). Since disk space is allocated in blocks, the size indicated by du -k is always slightly larger than the space indicated by ls -kl (which is the same as ls -l , but in 1 kB units). For directories, ls -ld dir shows (among other things) the size of the list of filenames (together with a number of attributes) of the files and subdirectories in dir . This is just the list of filenames, not the files' or subdirectories' contents. So this size increases when you add files to dir (even when files are empty), but it stays unchanged when one of the files in dir grows. However, when you delete files from dir the space from the list is not reclaimed immediately, but rather the entries for deleted files are marked as unused, and are later recycled (this is actually implementation-dependent, but what I described is pretty much the universal behavior these days). That's why you may not see any changes in ls -ld output when you delete files until much later, if ever. Finally, du -ks dir shows (an estimate of) the space occupied on disk by all files in dir , together with all files in all of dir 's subdirectories, in 1 kB = 1024 bytes units. Taking into account the description above, this has no relation whatsoever with the output of ls -kld dir .
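A small experiment makes the distinction concrete; the 4 kB figure assumes a filesystem with 4 kB blocks:
$ printf x > tiny
$ ls -l tiny    # reports 1 byte of content
$ du -k tiny    # reports 4, the space actually allocated on disk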
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/371238", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/131122/" ] }
371,267
I have the input below: csdi_d_trs_proc_uxs1 26 24csdi_d_tdp_process_uxs1 28 32 I only need the line which contains proc . When I use: grep proc filename both lines are output, so I tried using grep -w proc filename , but no output is getting displayed. How can I get the line which has only proc but not process ?
The -w flag for grep will make the given expression match only whole words. A "word" is a string of "word characters" surrounded by "non-word characters" (or start/end of line). The issue in your case is that _ (underscore) happens to be a "word character", and does therefore not serve to delimit the word proc as a word on its own. Instead of using -w with grep , use a pattern that explicitly delimits the word by _ : grep '_proc_' filename Alternatively, use [^a-z] instead of _ if you want to delimit the word by anything that is not a lower-case alphabetical character: grep '[^a-z]proc[^a-z]' filename Note that this won't recognize proc as a word at the very start/end of a line though.
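Against the sample input from the question the difference shows up directly, because _ counts as a word character and so proc is never a standalone word there:
$ grep -w proc filename      # no output
$ grep '_proc_' filename
csdi_d_trs_proc_uxs1 26 24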
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/371267", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/236203/" ] }
371,273
I got an issue using the postprocessing in cups-pdf.The script is called as I can see in the log files, but nothing happens. /var/log/cups/cups-pdf-myPrinter_log: Thu Jun 15 10:07:11 2017 [DEBUG] postprocessing commandline built: /etc/cups/postprocessing/ppmyPrinter.sh /srv/samba/cups-pdf/myPrinter/user/000012198600001.pdf [email protected] userThu Jun 15 10:07:11 2017 [DEBUG] postprocessing has finished: 32256 vim /etc/cups/cups-pdf-myPrinter.conf PostProcessing /etc/cups/postprocessing/ppmyPrinter.sh -rwxrwxrwx 1 root lp 194 Jun 15 09:35 ppmyPrinter.sh vim /etc/cups/postprocessing/ppmyPrinter.sh #!/bin/bashecho "$1" >> /etc/cups/postprocessing/userecho "$2" >> /etc/cups/postprocessing/userecho "$3" >> /etc/cups/postprocessing/user if I run the script ./ppmyPrinter.sh test1 test2 test3 it creates the file user with the content test1 test2 test3 but by calling from cups nothing happens.I red, that on a debian based system (ubuntu) cups-pdf is watched by apparmor and I have to permit to execute the script, but on my CentOs 7 there is no apparmor running. Could you give me a hint, where to look for this issue, are there som log-files where I can see the problem?
The -w flag for grep will make the given expression match only whole words. A "word" is a string of "word characters" surrounded by "non-word characters" (or start/end of line). The issue in your case is that _ (underscore) happens to be a "word character", and does therefore not serve to delimit the word proc as a word on its own. Instead of using -w with grep , use a pattern that explicitly delimits the word by _ : grep '_proc_' filename Alternatively, use [^a-z] instead of _ if you want to delimit the word by anything that is not a lower-case alphabetical character: grep '[^a-z]proc[^a-z]' filename Note that this won't recognize proc as a word at the very start/end of a line though.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/371273", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/230332/" ] }
371,375
I'm moving a file to a different folder and would like to add some kind of index to the newly moved file if a file with the same name exists already (the old one should remain untouched). For example, if file.pdf existed I would prefer something like file1.pdf or file_1.pdf for the next file with the same name. Here I've found a variant for the opposite idea — but I don't want to make a "backup". Does mv have some parameters out of the box for that scenario? I use Ubuntu Linux.
As the answer to the question you linked already states, mv can suffix files that would otherwise get overwritten by the file you move with a number to give them a unique file name: mv --backup=t <source_file> <dest_file> The command works by appending the next unused number suffix to the file that was first in the destination directory. The file you are moving will keep its original name. However, this will appends suffixes like .~1~ , which seems to be not what you want: $ lsfile.pdffile.pdf.~1~file.pdf.~2~ You can rename those files in a second step though to get the names in a format like file_1.pdf instead of file.pdf.~1~ , e.g. like this: rename 's/((?:\..+)?)\.~(\d+)~$/_$2$1/' *.~*~ This takes all files that end with the unwanted backup suffix (by matching with the shell glob *.~*~ ) and lets the rename tool try to match the regular expression ((?:\..+)?)\.~(\d+)~$ on the file name. If this matches, it will capture the index from the .~1~ -like suffix as second group ( $2 ) and optionally, if the file name has an extension before that suffix like .pdf , that will be captured by the first group ( $1 ). Then it replaces the complete matched file name part with _$2$1 , inserting the captured values instead of the placeholders though. Basically it will rename e.g. file.pdf.~1~ to file_1.pdf and something.~42~ to something_42 , but it can not detect whether a file has multiple extensions, so e.g. archive.tar.gz.~5~ would become archive.tar_5.gz
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/371375", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115043/" ] }
371,383
I’m trying to write a script – or an alias, to be more precise – which allows me to move files and follow ( cd ) them to their target directory. The accepted answer to this question suggests this code: mvf() { mv "$@" && goto "$_"; } where goto is just a safer variant of cd and $_ is the last argument passed to the last command. My derived implementation is this: alias mvaf="mv $@ && cd $_" Note that I didn’t quote $@ in order to not try to move a file by the name of all arguments. I did try this variant originally, but the script failed, too. If I call the above implementation with mvaf test1 test2 .. , it throws (translated): “mv: Missing file operand” While debugging, I tried without the cd , and indeed alias mvaf="mv $@ " , which basically is just renaming mv , moves the files. I’d like to know now why mv lacks an operand in my first implementation, and how this may be caused by the && .
Alias does not support operands like $@, or $1,$2 etc. Your command alias mvaf="mv $@ && cd $_" equals to mv ' ' && cd $_ because $@ is not recognized by alias in the way you expect. This can be proved easily like this: $ alias mvaf='echo "Part 1:" $@ && echo "Part 2: " $_'$ mvaf file66 /tmp/Part 1:Part 2: Part 1: file66 /tmp/#Part 2 includes the previous executed command (echo "Part 1:" $@) & the text sent after alias name $ alias mvaf='echo "Part 1:" $@;echo "Part 2: "'$ mvaf file66 /tmp/Part 1:Part 2: file66 /tmp/ On the other hand , this works but not because of $@ $ alias mvaf='echo "mv $@"'$ mvaf file66 /tmp/mv file66 /tmp/$ alias mvaf='echo "mv"'$ mvaf file66 /tmp/mv file66 /tmp/ As a general idea, alias is a kind of simple substitution. Alias aa='command1;command2' , when called like aa sometext equals to command1;command2 sometext To make this to work, you need to do it with a function.Bash discourage the use of alias and encourages the use of functions for such jobs. You can stick this function to your bash profile file, and this function can be called by name directly from your terminal as you would do with any alias: mvcd() { mv "$1" "$2" && cd "$2"; } Chaining mv and cd commands with && is important here, since && ensures that second command cd will be executed only if the previous command mv was successful. Alternativelly, as has been already advised in the link of accepted answer in your question , you could do something like mvf() { mv "$@" && goto "$_"; }goto() { [ -d "$1" ] && cd "$1" || cd "$(dirname "$1")"; } Be careful about bash word splitting . To make such a function to work correctly you need to insert double quotes when calling the function if the file you are going to move or the directory that file is going to be sent include space in their name. $ mvcd() { echo "1=$1";echo "2=$2";echo "3=$3";echo "4=$4"; }$ mvcd spaced file1 /spaced directory/1=spaced2=file13=/spaced4=directory/$ mvcd "spaced file1" "/spaced directory/"1=spaced file12=/spaced directory/3=4=
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/371383", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/216811/" ] }
371,391
I have two directories in /var/www one is a fresh install of Laravel the other is a git clone of a Laravel application. I basically need the Laravel application to move into the fresh install of Laravel so it can run properly with the requisite /vendor code. However, I can't figure out how to use the command line to do this. I either rsync and delete the /vendor files because they aren't in the git clone or it doesn't overwrite anything. -www--laravelFresh--laravelGithubApplication I want everything in laravelGithubApplication to come out and overwrite everything in laravelFresh that is a duplicate. I don't want it to sync because whatever is in laravelFresh that doesn't have a duplicate in laravelGithubApplication shouldn't be overwritten. Please help.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/371391", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210510/" ] }
371,392
I was following the gentoo installation guide, but I got stuck at the "updating the world set" part because I was not able to install dbus with emerge. It was failing to change the SUID permission of its binary, with the following log output:

chmod 4750 /var/tmp/portage/sys-apps/dbus-1.10.18/image//usr/libexec/dbus-daemon-launch-helper;
...
chmod: changing permissions of '/var/tmp/portage/sys-apps/dbus-1.10.18/image//usr/libexec/dbus-daemon-launch-helper': Permission denied

So for example: I mount the filesystem as the root user:

mount /dev/sdb3 /mnt/gentoo

After that I chroot to it (as the root user too):

chroot /mnt/gentoo /bin/bash
source /etc/profile

Then creating a file and trying to change its permissions

touch /hello
chmod 4750 /hello

fails saying "permission denied". However, chmod 4750 /mnt/gentoo/hello from the outside filesystem works fine. Why is permission denied? I also tried mounting with -o suid but it doesn't seem to work either. So how do I make chmod 4750 work on a different fs?

UPDATE: When I do the same thing from my Linux Mint it works. From the gentoo livecd it fails.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/371392", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
371,593
There are a lot of folders in the / directory. What do they all do? I know a few, like /dev which has links to devices in the system, but what about /lost+found or /proc? I'm just curious.
The official reference for this on Linux is the Filesystem Hierarchy Standard . Distributions mostly follow the FHS (currently at version 3.0 ), but can occasionally deviate. Other Unix variants have many similarities but again can deviate. There is also a good summary on Wikipedia . I'll summarize the role of each directory that is found on typical Linux installations. See the FHS or Wikipedia for more details about the role of each directory.

/bin : system programs meant for every user. See also /usr/bin .
/boot : files used to start up the system: typically a bootloader , a kernel image , and a few associated files. These files are mostly not accessed after booting.
/dev : device files. These are the ways applications communicate with hardware and more generally with kernel features that are about shuffling data around, such as disk partitions, terminals including virtual ones, etc.
/etc : system configuration files. (So named because it started out as “stuff that didn't fit in the other directories”, but nowadays it's exclusively for configuration files on Linux and mostly if not exclusively for configuration files on other Unix variants. Miscellaneous stuff is now under /var .)
/home : the directory containing users' home directories . E.g. Alice's files are typically under /home/alice . On systems with many users, the administrator may choose to have more levels (e.g. /home/faculty/alice , /home/students/bob , …). A few sites have home directories at a different location such as /homes , /users , …
/lib : contains shared libraries . See also /usr/lib . Some distributions have other directories such as /lib32 and /lib64 to store libraries for different processor architectures.
/lost+found : for files recovered from filesystem corruption (but you're rarely so lucky).
/media : contains mount points for removable media. On some systems, the mount points are at a third level, under directories named after users.
/mnt : There used to be a dispute as to whether /mnt should be a directory that's available to the system administrator as a temporary mount point, or whether it should be a directory where the administrator can create subdirectories to be used as mount points. Nowadays the first position has won, and /media plays the second role.
/opt : contains additional software with one subdirectory per software package. Some distributions use it heavily, others not at all.
/proc : contains one subdirectory per process, exposing various information about the processes. That's where tools such as ps and top get their information. Not present on all Unix variants (BSD tends not to have it). On Linux, /proc also contains information about the system in general, but see also /sys . Content in /proc is generated on the fly by the kernel when an application reads it.
/root : the root user 's home directory. Not present on all systems; traditionally root's home directory was / .
/run : an in-memory filesystem containing system files that don't need to be preserved upon reboots, such as information about running services. There are typically per-user directories under /run/user . This is a Linux thing.
/sbin : system programs meant only for administrators. See also /usr/sbin .
/srv : sort of like /home , but for system services. A creation of the FHS that hasn't been universally adopted.
/sys : like /proc , but presents information about kernel drivers and about hardware (the use of /proc for non-process-related information is deprecated but files that were in /proc remain in /proc for backward compatibility). Specific to Linux.
/tmp : temporary files, accessible by every user. This is often an in-memory filesystem .
/usr : This is where most of the software is installed. /usr contains subdirectories such as /bin , /lib and /sbin (but usually not /etc ). The distinction is that the subdirectories of / contain essential files needed while the system is starting, and /usr contains all the rest. /usr exists separately because there were reasons to keep it on a separate filesystem (which could be read-only, and shared between multiple machines) but the distinction is not always relevant, and less and less so as time goes on, so e.g. /bin can be a symbolic link to /usr/bin or vice versa. The name comes from “user” but it has been a very long time since /usr had anything to do with users; today /usr contains system files and that's that.
/var : contains files that tend to change over time, in contrast with /usr which contains files that don't change except when upgrading or installing software. Unlike /tmp , the files under /var are (for the most part) meant to be preserved if the system reboots. /var is pretty diverse: it contains caches, metadata about installed software, printer spools , system mail, log files, temporary files (like /tmp , but /var/tmp is always preserved upon reboot and usually has more space), etc.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/371593", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
371,664
I want to determine whether a multi-line string ends with a line containing a specified pattern. This code failed; it doesn't match:

s=`echo hello && echo world && echo OK`
[[ "$s" =~ 'OK$' ]] && echo match
In bash 3.2 or above and if the compatibility to 3.1 is not enabled (with the compat31 option or BASH_COMPAT=3.1 ), quoting regular expression operators (not only with \ but with any of the bash quoting operators ( '...' , "..." , $'...' , $"..." )) removes their special meaning. [[ $var =~ 'OK$' ]] matches only in strings that contain OK$ literally (that $ matches a literal $ ) [[ $var =~ OK$ ]] matches on strings that end in OK (that $ is the RE operator that matches at the end of the string). That also applies to regexps stored in variables or the result of some substitution. [[ $var =~ $regexp ]] # $var matches $regexp[[ $var =~ "$string" ]] # $var contains $string Note that it can become awkward because there are some characters that you need to quote for the shell syntax (like blanks, < , > , & , parenthesis when not matched). For instance, if you want to match against the .{3} <> [)}]& regexp (3 characters followed by a " <> " , either a ) or } and a & ), you need something like: [[ $var =~ .{3}" <> "[}\)]\& ]] If in doubt about which characters need quoting, you can always use a temporary variable . That also means it will make the code compatible to bash31 , zsh or ksh93 : pattern='.{3} <> [})]&'[[ $var =~ $pattern ]] # remember *not* to quote $pattern here That's also the only way (short of using the compat31 option (or BASH_COMPAT=3.1 )) you can make use of the non-POSIX extended operators of your system's regexps. For instance, for \< to be treated as the word boundary which it is in many regexp engines, you need: pattern='\<word\>'[[ $var =~ $pattern ]] Doing: [[ $var =~ \<word\> ]] won't work as bash treats those \ as shell quoting operators and strip them before passing <word> to the regexp library. Note that it's a lot worse in ksh93 where: [[ $var =~ "x.*$" ]] for instance will match on whatever-xa* but not whatever-xfoo . The quoting above removes the special meaning to * , but not to . nor $ . The zsh behaviour is simpler: quoting doesn't change the meaning of regexp operators there (like in bash31) which makes for a more predictable behaviour (it can also use PCRE regexps instead of ERE (with set -o rematchpcre )). yash doesn't have a [[...]] construct, but its [ builtin has a =~ operator (also in zsh ). And of course, [ being a normal command, quoting can't affect the way regexp operators are interpreted. Also note that strictly speaking, your $s doesn't contain 3 lines, but 2 full lines followed by an unterminated line. It contains hello\nworld\nOK . In the OK$ extended regular expression, the $ operator would only match at the end of the string . In a 3-full-lines string , like hello\nworld\nOK\n (which you wouldn't be able to obtain with command substitution as command substitution strips all trailing newline characters), the $ would match after the \n , so OK$ wouldn't match on it. With zsh -o pcrematch however, the $ matches both at the end of the string and before the newline at the end of the string if there's one as it doesn't pass the PCRE_DOLLAR_ENDONLY flag to pcre_compile . That could be seen as a bad idea as generally, variables in shells do not contain a trailing newline character, and when they do, we generally want them considered as data .
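As a concrete illustration of the pattern-in-a-variable idiom applied to the string from the question (a sketch for bash 3.2 and later):

s=$(printf 'hello\nworld\nOK')
pattern='OK$'                       # keep the regexp in a variable, left unquoted at use
[[ $s =~ $pattern ]] && echo match  # prints "match": the $ anchors at the end of the string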
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/371664", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153046/" ] }
371,699
tee can redirect the piped standard input into the standard output and a file.

echo Hello, World! | tee greeting.txt

The command above would display the greeting on the terminal screen and save it as the contents of the greeting.txt file, creating the file if there's none by that name. There's also the -a switch for tee to append to the existing file instead of overwriting. Is there a convenient way to redirect the piped input to a command and standard output instead of a file? I am trying to create a wrapper script for buku to copy to the primary selection the URL of the bookmark specified by its index number.

# bukuc:
#!/bin/sh
url=$(buku -f 1 -p $1 | cut -f 2) # NUMBER : URL
echo $url # DISPLAY
echo $url | xsel # PRIMARY SELECTION

Here I use echo two times, first for displaying on the terminal, and then for saving in the primary selection (clipboard). I imagine something like echo $url | teeC xsel, or a shortcut to display the output before passing it to the next command (chaining commands), which would allow me to chain the whole command in one line without the need to save the result in a variable, as follows:

buku -f 1 -p $1 | cut -f 2 | teeC xsel

I can also use it with urlview to view, select, and open with the $BROWSER as follows:

bukuc 10-20 | urlview
It's straightforward in shells that support process substitution , e.g. bash

$ echo foo | tee >(xsel)
foo
$ xsel -o
foo

Otherwise, you could use a FIFO (although it lacks convenience)

$ mkfifo _myfifo
$ xsel < _myfifo &
$ echo bar | tee _myfifo
bar
$ xsel -o
bar
[1]  + Done              xsel 0<_myfifo
$
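Applied to the bukuc wrapper from the question, one possible sketch (my adaptation, not from the original answer; the shebang has to become bash because process substitution is not POSIX sh):

#!/bin/bash
# Print the bookmark's URL on the terminal and copy it to the primary selection in one pipeline.
buku -f 1 -p "$1" | cut -f 2 | tee >(xsel)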
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/371699", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
371,715
Normally, bash globbing is case sensitive: $ echo c*casefix.pike cdless chalices.py charconv.py chocolate.pike circum.py clip.pike cpustats.pike crop.pike cwk2txt.py$ echo C*CarePackage.md ChocRippleCake.md Clips Using square brackets doesn't seem to change this: $ echo [c]*casefix.pike cdless chalices.py charconv.py chocolate.pike circum.py clip.pike cpustats.pike crop.pike cwk2txt.py$ echo [C]*CarePackage.md ChocRippleCake.md Clips It still doesn't change it if a hyphen is used: $ echo [c-c]*casefix.pike cdless chalices.py charconv.py chocolate.pike circum.py clip.pike cpustats.pike crop.pike cwk2txt.py$ echo [C-C]*CarePackage.md ChocRippleCake.md Clips But the letters are interspersed: $ echo [B-C]*CarePackage.md casefix.pike cdless chalices.py charconv.py chocolate.pike ChocRippleCake.md circum.py clip.pike Clips cpustats.pike crop.pike cwk2txt.py$ echo [b-c]*beehive-anthem.txt bluray2mkv.pike branch branchcleanup.pike burdayim.pike casefix.pike cdless chalices.py charconv.py chocolate.pike circum.py clip.pike cpustats.pike crop.pike cwk2txt.py This suggests that the hyphen is using a locale order, "AaBbCcDd". So: is there any way to glob for all files that begin with an uppercase letter?
In bash version 4.3 and later, there is a shopt option called globasciiranges : According to shopt builtin gnu man pages : globasciiranges If set, range expressions used in pattern matching bracket expressions (see Pattern Matching) behave as if in the traditional C locale when performing comparisons. That is, the current locale’s collating sequence is not taken into account, so ‘b’ will not collate between ‘A’ and ‘B’, and upper-case and lower-case ASCII characters will collate together. As a result you can $ shopt -s globasciiranges $ echo [A-Z]* Use shopt -u for disabling. Another way is to change locale to C. You can do this temporarily using a subshell: $ ( LC_ALL=C ; printf '%s\n' [A-Z]*; ) You will get the results you need, and when the sub shell is finished, the locale of your main shell remains unchanged to whatever was before. Another alternative is instead of [A-Z] to use brace expansion {A..Z} together with nullglob bash shopt option. By enabling the nullglob option, if a pattern is not matched during pathname expansion, a null string is returned instead of the pattern itself. As a result this one will work as expected: $ shopt -s nullglob;printf '%s\n' {A..Z}*
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/371715", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98662/" ] }
371,722
Some commands are provided as both builtins and external utilities. Take echo for example. On my machine (macOS) running Bash 3.2, $ type echoecho is a shell builtin Running man bash | less --pattern='^ *echo +\[' shows: echo [-neE] [arg ...] But running man 1 echo shows a man page for a different implementation of echo , with a different signature: echo [-n] [string ...] I'm able to use -e successfully, so I must be running the builtin, and presumably that's /bin/echo $ which echo/bin/echo Where does the other implementation live, and how can I distinguish between builtins and external utils in general (e.g. printf ) Update/Correction Thanks @Gilles for clarifying. And the proof is in the pudding! $ /bin/echo -e "\tabc"-e \tabc$ echo -e "\tabc" abc
To find out whether a command is built in, run type . $ type echoecho is a shell builtin type is itself a builtin and knows what commands are built in. (In bash, builtins can be disabled, and type will correctly report that a command is not built in if the builtin has been disabled.) type reports whatever will be executed if you use the command name — alias, function, builtin or external command. which is an external command that reports the location of external commands. It doesn't know anything about aliases, functions or builtins. And it might even not report the correct external commands, depending on your setup. Just forget about which and use type instead . I must be running the builtin, and presumably that's /bin/echo No! By definition, a builtin is not an external command. The code that implements the echo builtin, like all other builtins, is in /bin/bash . /bin/echo is an external command that has the same name as the echo builtin. When a command exists both as a builtin and as an external command, using its name calls the builtin. The precedence order for command names is alias, then function, then builtin, then external command in the directories listed in $PATH in order. If, for some reason, you want to force an external command, use its full path.
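For example (the exact paths and wording vary between systems; this is just what it might look like):

$ type -a echo          # lists every form: the builtin first, then any external ones found in $PATH
echo is a shell builtin
echo is /bin/echo
$ /bin/echo hello       # a full path always runs the external command
$ command echo hello    # "command" only bypasses aliases and functions, so this is still the builtin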
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/371722", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47044/" ] }
371,789
I have 200 GB free disk space, 16 GB of RAM (of which ~1 GB is occupied by the desktop and kernel) and 6 GB of swap. I have a 240 GB external SSD, with 70 GB used 1 and the rest free, which I need to back up to my disk. Normally, I would dd if=/dev/sdb of=Desktop/disk.img the disk first, and then compress it, but making the image first is not an option since doing so would require far more disk space than I have, even though the compression step will result in the free space being squashed so the final archive can easily fit on my disk. dd writes to STDOUT by default, and gzip can read from STDIN, so in theory I can write dd if=/dev/sdb | gzip -9 - , but gzip takes significantly longer to read bytes than dd can produce them. From man pipe : Data written to the write end of the pipe is buffered by the kernel until it is read from the read end of the pipe. I visualise a | as being like a real pipe -- one application shoving data in and the other taking data out of the pipe's queue as quickly as possible. What when the program on the left side writes more data more quickly than the other side of the pipe can hope to process it? Will it cause extreme memory or swap usage, or will the kernel try to create a FIFO on disk, thereby filling up the disk? Or will it just fail with SIGPIPE Broken pipe if the buffer is too large? Basically, this boils down to two questions: What are the implications and outcomes of shoving more data into a pipe than is read at a time? What's the reliable way to compress a datastream to disk without putting the entire uncompressed datastream on the disk? Note 1: I cannot just copy exactly the first 70 used GB and expect to get a working system or filesystem, because of fragmentation and other things which will require the full contents to be intact.
Technically you don't even need dd : gzip < /dev/drive > drive.img.gz If you do use dd , you should always go with larger than default blocksize like dd bs=1M or suffer the syscall hell ( dd 's default blocksize is 512 bytes, since it read() s and write() s that's 4096 syscalls per MiB , too much overhead). gzip -9 uses a LOT more CPU with very little to show for it. If gzip is slowing you down, lower the compression level, or use a different (faster) compression method. If you're doing file based backups instead of dd images, you could have some logic that decides whether to compress at all or not (there's no point in doing so for various file types). dar ( tar alternative`) is one example that has options to do so. If your free space is ZERO (because it's an SSD that reliably returns zero after TRIM and you ran fstrim and dropped caches) you can also use dd with conv=sparse flag to create an uncompressed, loop-mountable, sparse image that uses zero disk space for the zero areas. Requires the image file to be backed by a filesystem that supports sparse files. Alternatively for some filesystems there exist programs able to only image the used areas.
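Putting it together, one possible invocation (the device name, backup path, and compression level are assumptions, not from the original question):

# back up: read the whole drive and compress on the fly, never storing an uncompressed image
gzip -1 < /dev/sdb > /mnt/backup/sdb.img.gz

# restore later (the target drive must be at least as large as the original)
gunzip -c /mnt/backup/sdb.img.gz > /dev/sdb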
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/371789", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136107/" ] }
371,793
Currently, I'm using Deepin OS 15.4. By default Deepin sets the desktop to extended mode when a projector is connected, but what I want is to duplicate my screen. I can't find it in the Control Center configuration, but I stumbled upon in this question . The above extends the monitor with xrandr, How do I duplicate my desktop to the projector with xrandr?
First find out the name of each display e.g. using xrandr --current . Then the following command should work to duplicate them. $ xrandr --output <projector> --same-as <desktop>
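For example, assuming xrandr reports an internal panel called eDP-1 and the projector on HDMI-1 (the output names are hypothetical and differ per machine):

$ xrandr --current | grep ' connected'
eDP-1 connected primary 1920x1080+0+0 ...
HDMI-1 connected 1280x720+1920+0 ...
$ xrandr --output HDMI-1 --same-as eDP-1

If the two outputs run at different resolutions you may also want to pass --mode (or --scale) so the mirrored picture fits the projector.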
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/371793", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/221971/" ] }
371,827
I have a shell script that uses the following to print a green checkmark in its output: col_green="\e[32;01m"col_reset="\e[39;49;00m"echo -e "Done ${col_green}✓${col_reset}" After reading about Bash's ANSI-C Quoting , I realized I could use it when setting my color variables and remove the -e flag from my echo . col_green=$'\e[32;01m'col_reset=$'\e[39;49;00m'echo "Done ${col_green}✓${col_reset}" This seems appealing, since it means the message prints correctly whether it's passed to Bash's builtin echo or the external util /bin/echo (I'm on macOS). But does this make the script less portable? I know Bash and Zsh support this style of quoting, but I'm not sure about others.
$'…' is a ksh93 feature that is also present in zsh, bash, mksh, FreeBSD sh and in some builds of BusyBox sh (BusyBox ash built with ENABLE_ASH_BASH_COMPAT ). It isn't present in the POSIX sh language yet. Common Bourne-like shells that don't have it include dash (which is /bin/sh by default on Ubuntu among others), ksh88, the Bourne shell, NetBSD sh, yash, derivatives of pdksh other than mksh and some builds of BusyBox. A portable way to get backslash-letter and backslash-octal parsed as control characters is to use printf . It's present on all POSIX-compliant systems.

esc=$(printf '\033') # assuming an ASCII (as opposed to EBCDIC) system
col_green="${esc}[32;01m"

Note that \e is not portable. It's supported by many implementations of printf but not by the one in dash¹. Use the octal code instead.

¹ It is supported in Debian and derivatives that ship at least 0.5.8-2.4, e.g. since Debian stretch and Ubuntu 17.04.
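Wrapped up as a tiny helper (my own sketch, not part of the original answer; the function name and the reset sequence are arbitrary choices):

esc=$(printf '\033')                                        # literal ESC, portably
green() { printf '%s' "${esc}[32;01m$1${esc}[39;49;00m"; }  # wrap text in green, then reset
printf 'Done %s\n' "$(green ✓)"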
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/371827", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47044/" ] }
371,843
I'm trying to add a user to a group wireshark as explained here . I have already executed multiple different commands and was under the impression that the user was successfully added. ~$ sudo adduser $USER wiresharkThe user `user' is already a member of `wireshark'. And have re logged into the system. ~$ groups user adm cdrom sudo dip plugdev lpadmin sambashare but it seems as if the user hasn't been added to the group (which is in contrast with the first command). Also the assumption that it wasn't added is supported by Wireshark not working correctly. Which should I consider correct?
It started to show the appropriate groups only after a system restart. The logout/login wasn't enough. Don't know what to make of it.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/371843", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29436/" ] }
371,860
Whenever I run "ping -c 1 www.google.com", I get this result: PING www.google.com (xxx.xx.xxx.xxx) 56(84) bytes of data.64 bytes from xxxxxxxx-xx-xxxxxxxxxx.net (xxx.xx.xxx.xxx): icmp_seq=1 ttl=56 time=25.8 ms--- www.google.com ping statistics ---1 packets transmitted, 1 received, 0% packet loss, time 0msrtt min/avg/max/mdev = 25.816/25.816/25.816/0.000 ms However, I'd like to print out the ping in a simple format like this: [23:00:25] 25.8 ms How can I achieve this? So far I've tried ping -c 1 www.google.com | grep -oP '(?<=time\s/)w+' > ping.txt to print the ping without the time, but as you probably guessed, it didn't work.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/371860", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/236664/" ] }
371,901
I'm using openssh7.5p1 and gnupg 2.1.21 on arch linux (these are the default versions that come with arch). I would like to use gpg-agent as an ssh agent. I put the following in my ~/.gnupg/gpg-agent.conf : pinentry-program /usr/bin/pinentry-qtenable-ssh-support Arch automatically starts a gpg-agent from systemd, so I set export SSH_AUTH_SOCK="$XDG_RUNTIME_DIR/gnupg/S.gpg-agent.ssh" When I run ssh-add -l , it reports no identities and ps reports a gpg-agent --supervised process as I would expect. Unfortunately, when I run ssh-add , no matter what the key type, it doesn't work. Here is an example of how I tried dsa: $ ssh-keygen -f testkey -t dsa -N ''Generating public/private dsa key pair.Your identification has been saved in testkey.Your public key has been saved in testkey.pub.$ ssh-add testkeyCould not add identity "testkey": agent refused operation All other gpg functions work properly (encrypting/decrypting/signing). Also, the keys I generate work fine if I use them directly with ssh, and they work properly if I run the ssh-agent that came with openssh. The documentation says that ssh-add should add keys to ~/.gnupg/sshcontrol , but obviously nothing is happening. My question: What's the easiest way to load a key generated by openssh's ssh-keygen into gpg-agent , and can someone please cut and paste a terminal session showing how this works?
The answer was apparently to run: echo UPDATESTARTUPTTY | gpg-connect-agent I have no idea why the pinentry program worked fine for other uses such as decrypting files, but didn't work for ssh-add . While this now works, it also makes a copy of the ssh private key that doesn't show up under gpg -Kv , and furthermore doesn't seem to allow you to change the passphrase on your private key (since you can't edit it with --edit-key ). Basically I'm pretty unhappy with the way gpg-agent provides low visibility into where your secrets are being copied. If you hit this question because you hoped gpg-agent might be a better alternative to ssh-agent , then I'd encourage you to stick to ssh-agent instead of trying out my answer. The main reason to prefer gpg-agent is if you need to for smart-card use.
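To avoid having to repeat that after every login, a commonly used snippet for ~/.bashrc (an assumption on my part, for GnuPG 2.1.x with enable-ssh-support) is:

export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
gpg-connect-agent UPDATESTARTUPTTY /bye >/dev/null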
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/371901", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/121912/" ] }
371,915
Run the following commands on linux (4.4.59 and 4.9.8 are tested) and it will fail:

mkdir -p /tmp/proc
mount -t overlay overlay -o lowerdir=/proc:/tmp/proc /tmp/proc

and there is an error message in dmesg :

overlayfs: maximum fs stacking depth exceeded

Why can't /proc be a layer of an overlay file system? If I replace /proc with /dev or /sys , it mounts without issue, so it seems there is something special about /proc . P.S. The use case is creating a safer chroot environment; I want to make /dev , /sys and /proc read-only in the chroot . There are 2 known workarounds:

read-only bind mount. The limitation is that two commands are required instead of one.
read-only special mount: mount -t proc -o ro none /tmp/proc . The limitation is that sub-mounts are not mapped automatically.

Anyway, I'm still curious about why /dev and /sys play well with overlay but /proc doesn't. The question is migrated from stackoverflow .
https://github.com/torvalds/linux/commit/e54ad7f1ee263ffa5a2de9c609d58dfa27b21cd9 /* * procfs isn't actually a stacking filesystem; however, there is * too much magic going on inside it to permit stacking things on * top of it */s->s_stack_depth = FILESYSTEM_MAX_STACK_DEPTH; This might not be a very informative answer, but the kernel developers specifically don't support it.
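For the read-only bind mount workaround mentioned in the question, the usual two-step form looks like this (the chroot path is illustrative):

mount --bind /proc /srv/chroot/proc
mount -o remount,bind,ro /srv/chroot/proc   # the read-only flag only takes effect on the remount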
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/371915", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/236700/" ] }
371,950
Example history: $ history1 whoami2 pwd3 ls To get a reversed history list, I do: $ history|tac3 ls2 pwd1 whoami But are there any better ways to do this, perhaps that needn't invoke another program, for those without tac installed, for example?
Since the owner of a separate answer deleted it, I'll suggest: history | sort -rn
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/371950", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85900/" ] }
371,966
I have this: test1="1"test2="2"test3="3"for i in "$test1" "$test2" "$test3"; do echo "$i"done ; I want to echo the $i variable name, not its content. The echo output should be "test1", or "test2", or "test3" How can I do this?
If you really want to do this, just do this:

#!/bin/bash

test1='1'
test2='2'
test3='3'

for v in "test1" "test2" "test3"; do
    echo "The variable's name is $v"
    echo "The variable's content is ${!v}"
done

But you would probably prefer to use arrays rather than dynamic variable names, as the latter can be seen as bad practice and makes your code harder to understand. So consider this much better form:

#!/bin/bash

test[0]='1'
test[1]='2'
test[2]='3'

for ((i=0;i<=2;i++)); do
    echo "The variable's name is \$test[$i]"
    echo "The variable's content is ${test[$i]}"
done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/371966", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/236740/" ] }
371,978
I was wondering if there is a "canonical" way to do this.

Background & description

I have to install some program on a live server. Although I do trust the vendor (FOSS, Github, multiple authors...), I would rather avoid the not entirely impossible scenario of the script running into some trouble, exhausting system resources and leaving the server unresponsive. I had a case of installing amavis, which was started right after installation and, because of some messy configuration, produced a loadavg of >4 and left the system barely responsive. My first thought was nice - nice -n 19 thatscript.sh . This may or may not help, but I was thinking that it would be best to write and activate a script that would do the following:

run as a daemon, checking every (for example) 500ms-2s
check for labeled processes with ps and grep
if the labeled process(es) (or any other process) take too much CPU (threshold yet to be defined), kill them with SIGKILL

My second thought was: it would not be the first time that I'm reinventing the wheel. So, is there any good way to "jail" the program, and the processes produced by it, into some predefined limited amount of system resources, or to kill them automatically if they exceed some threshold?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/371978", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68350/" ] }
371,981
I want to download chrome browser version 51 or below through terminal; however the latest version is 58. Linux: centOS
Took a little digging, but just pieced this together: yum install -y https://dl.google.com/linux/chrome/rpm/stable/x86_64/google-chrome-stable-${GOOGLE_CHROME_VERSION}-1.x86_64.rpm You can extract available versions from the following URL: https://www.ubuntuupdates.org/package/google_chrome/stable/main/base/google-chrome-stable
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/371981", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/236759/" ] }
371,989
How can I fix the output width using printf ? This is an example script: #!/bin/bashOK=$(printf '\t%+50s\n' OK)FAIL=$(printf '\t%+50s\n' FAIL)for i in a aa aaa aaaa aaaaaa aaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaaaaaaaaa; do echo "$i $OK"done Output: a OKaa OKaaa OKaaaa OKaaaaaa OKaaaaaaaaaaaaaaaaa OKaaaaaaaaaaaaaaaaaaaaaaaaaa OK I want something like: a OKaa OKaaa OKaaaa OKaaaaaa OKaaaaaaaaaaaaaaaaa OKaaaaaaaaaaaaaaaaaaaaaaaaaa OK
Use the following printf approach to get the needed output:

#!/bin/bash
for i in a aa aaa aaaa aaaaaa aaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaaaaaaaaa; do
    printf '%-50sOK\n' $i
done

Script output:

a                                                 OK
aa                                                OK
aaa                                               OK
aaaa                                              OK
aaaaaa                                            OK
aaaaaaaaaaaaaaaaa                                 OK
aaaaaaaaaaaaaaaaaaaaaaaaaa                        OK

OK : the static string is moved into the FORMAT ( printf FORMAT [ARGUMENT] )
$i : treated as the printf argument
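If, as in the original script, the status can be either OK or FAIL, the same format string still works; this is only a sketch, and the success check is a placeholder:

#!/bin/bash
for i in a aa aaaaaa aaaaaaaaaaaaaaaaa; do
    status=OK                       # replace with a real test, e.g. status=FAIL when a step fails
    printf '%-50s%s\n' "$i" "$status"
done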
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/371989", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/236740/" ] }
371,997
I am trying to set up a single SSL certificate that will make any *.local website work over https. I have all .local domains pointing back to my local machine. I use these when developing websites. A lot of new features (geo location, service workers etc.) require an SSL. I believe that for recent versions of Chrome/Firefox, an old school self-signed certificate no longer works. Below are the steps I have taken after following a combination of these guides: https://deliciousbrains.com/https-locally-without-browser-privacy-errors/ https://codeghar.wordpress.com/2008/03/17/create-a-certificate-authority-and-certificates-with-openssl/ https://stackoverflow.com/questions/27294589/creating-self-signed-certificate-for-domain-and-subdomains-neterr-cert-commo Here is my config file: #..................................[ ca ]default_ca = CA_default[ CA_default ]dir = /home/*****/Sites/root-caserial = $dir/serialdatabase = $dir/index.txtnew_certs_dir = $dir/certscertificate = $dir/certs/cacert.pemprivate_key = $dir/private/cakey.pemdefault_days = 3000default_md = sha256preserve = noemail_in_dn = nonameopt = default_cacertopt = default_capolicy = policy_matchcopy_extensions = copyall[ policy_match ]countryName = matchstateOrProvinceName = matchorganizationName = matchorganizationalUnitName = optionalcommonName = suppliedemailAddress = optional[ req ]default_bits = 2048 # Size of keysdefault_keyfile = key.pem # name of generated keysdefault_md = md5 # message digest algorithmstring_mask = nombstr # permitted charactersdistinguished_name = req_distinguished_namereq_extensions = v3_req[ req_distinguished_name ]# Variable name Prompt string#------------------------- ----------------------------------0.organizationName = Organization Name (company)organizationalUnitName = Organizational Unit Name (department, division)emailAddress = Email AddressemailAddress_max = 40localityName = Locality Name (city, district)stateOrProvinceName = State or Province Name (full name)countryName = Country Name (2 letter code)countryName_min = 2countryName_max = 2commonName = Common Name (hostname, IP, or your name)commonName_max = 64# Default values for the above, for consistency and less typing.# Variable name Value#------------------------ ------------------------------0.organizationName_default = *****localityName_default = *****stateOrProvinceName_default = *****countryName_default = *****emailAddress_default = *****[ v3_ca ]basicConstraints = CA:TRUEsubjectKeyIdentifier = hashauthorityKeyIdentifier = keyid:always,issuer:alwayssubjectAltName = @alternate_names[ v3_req ]subjectKeyIdentifier = hashbasicConstraints = CA:FALSEkeyUsage = digitalSignature, keyEnciphermentsubjectAltName = @alternate_namesnsComment = "OpenSSL Generated Certificate"[ alternate_names ]DNS.1 = *.local I first create a new certificate authority: openssl req -new -x509 -extensions v3_ca -keyout private/cakey.pem -out certs/cacert.pem -days 3000 -config conf/caconfig.cnf I have given the Common name here as my name Common Name (hostname, IP, or your name) []:Jonathan Hodgson The file certs/cacert.pem I then import into chromium's authorities which works without a problem. 
I then create a certificate request: openssl req -extensions v3_req -new -nodes -out local.req.pem -keyout private/local.key.pem -config conf/caconfig.cnf I have given the Common name here as *.local Common Name (hostname, IP, or your name) []:*.local I then sign the request: openssl ca -out certs/local.cert.pem -config conf/caconfig.cnf -infiles local.req.pem I add the files to my http config: <VirtualHost *:80> ServerName test.local ServerAlias *.local VirtualDocumentRoot /home/jonathan/Sites/%-2/public_html CustomLog /home/jonathan/Sites/access.log vhost_combined ErrorLog /home/jonathan/Sites/error.log</VirtualHost><VirtualHost *:443> ServerName test.local ServerAlias *.local VirtualDocumentRoot /home/jonathan/Sites/%-2/public_html CustomLog /home/jonathan/Sites/access.log vhost_combined ErrorLog /home/jonathan/Sites/error.log SSLEngine On SSLCertificateFile /home/jonathan/Sites/root-ca/certs/local.cert.pem SSLCertificateKeyFile /home/jonathan/Sites/root-ca/private/local.key.pem</VirtualHost> I have restarted apache but I am still getting NET::ERR_CERT_COMMON_NAME_INVALID I was under the impression that this was because I needed to add the subjectAltName to the config file which I have done. Please let me know what I should do differently. Thanks in advance for any help Edit I think the problem is to do with the wildcard. If I set the alternate_names to example.local and the Common name for the request to example.local, example.local shows as secure in both Chrome and Firefox. I tried to set DNS.1 to local and DNS.2 to *.local , I then just got ERR_SSL_SERVER_CERT_BAD_FORMAT in chrome and SEC_ERROR_REUSED_ISSUER_AND_SERIAL in firefox. I definitely reset my serial file and my index file before generating the certificates.
You added SAN to the CSR but you didn't tell ca to include extensions from the CSR in the certificate. See https://security.stackexchange.com/questions/150078/missing-x509-extensions-with-an-openssl-generated-certificate or the man page for ca also on the web at copy_extensions EDIT: You also need to specify x509_extensions in the ca config, or equivalent but less convenient the commandline option -extensions , in either case pointing to a section that exists but can be empty if you don't want any CA-required extensions. I didn't notice this at first because I had never tried the case of extensions from CSR only and not config, which is unrealistic for most CAs. If you specify copy_extensions other than none (and the CSR has some) but don't specify x509_extensions then ca does put the extensions in the cert but does not set the cert version to v3 as is required by standards (like rfc5280) when extensions are present. It's arguable if this is a bug; the manpage says x509_extensions/extensions controls the v3 setting, and by not saying any similar thing about copy_extensions implies that does not, but IMHO it's certainly a very suboptimal feature. EDIT: it is a bug and will be fixed but until then use the workaround, see https://unix.stackexchange.com/a/394465/59699 HOWEVER: in my test, this didn't actually solve your problem. Even though the cert has *.local in SAN and CN and is (now) otherwise valid, my Firefox (53.0.2) and Chrome (59.0.3071.109) still reject it with SSL_ERROR_CERT_DOMAIN_ERROR and ERR_CERT_COMMON_NAME_INVALID respectively. I guessed they might not be excluding local from the normal 2+-level logic and tried *.example.local : Chrome does accept that, but Firefox doesn't. I also tried *.example.org and both Chrome and IE11 like that but still not Firefox (and of course assigning yourself names in real TLDs like .org is not the way DNS is supposed to work). This has me stuck. With some work OpenSSL can be made to generate a cert containing almost anything you want, but what Firefox and Chrome will accept I do not know. I will try to look into that and update if I find anything. I hope you mean you gave *.local as the CommonName only for the server CSR and NOT for the CA (self-signed) cert. If Subject names for CA and leaf certs are the same nothing will work reliably. EDIT: your edited Q confirms they were correctly different. Although it does not mention also specifying Country, State, and Organization as is required by the ca policy you used. Note 'self-signed' is a term of art and means signed with the same key . Your CA cert is self-signed. Your server cert is signed by you yourself using your own key but it is not self-signed. Trying to apply instructions for a self-signed cert to a not-self-signed cert was part of your problem. And Gilles point about md5 for the signature algorithm is also correct. EDIT: 'resetting' serial (and index) for an openssl ca setup is a bad idea, unless you permanently discard the CA cert and name they were used for. The standards say a given CA must not issue more than one cert with the same serial value in the cert, and the serial file is the way openssl ca (and also x509 -req ) implements this. 'Real' (public) CAs nowadays no longer use a simple counter but include entropy to block collision attacks on PKI -- google hashclash -- but this is not an issue for a personal CA like yours. 
I can readily believe a browser (or other relier) being unhappy if it sees multiple certs with the same serial and CA name, although I would NOT expect a browser to persistently store a leaf cert -- and thus see both the old and new ones in one process unless long-running -- unless you import it to the applicable store, including in Firefox if you make it a permanent 'exception'.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/371997", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/228295/" ] }
372,024
I have a disk image file I'm trying to mount locally using a loop device. Using parted I can see the image has two partitions, however, I'm not able to mount the first partition and losetup thinks the second partition doesn't exist. Anyone know how I can mount the second partition? /m/sf_VMShare ❯❯❯ sudo losetup /dev/loop0 ./imm_image-2017-05-28.img/m/sf_VMShare ❯❯❯ sudo losetup -a/dev/loop0: [0023]:99 (/media/sf_VMShare/imm_image-2017-05-28.img)/m/sf_VMShare ❯❯❯ sudo parted /dev/loop0 printModel: Loopback device (loop)Disk /dev/loop0: 1206MBSector size (logical/physical): 512B/512BPartition Table: msdosNumber Start End Size Type File system Flags 1 10.5MB 360MB 349MB primary ext4 2 361MB 1205MB 844MB primary ext4/m/sf_VMShare ❯❯❯ sudo mount -t ext4 /dev/loop0p2 /tmp/vdiskmount: special device /dev/loop0p2 does not exist/m/sf_VMShare ❯❯❯ sudo mount -t ext4 /dev/loop0p1 /tmp/vdisk mount: wrong fs type, bad option, bad superblock on /dev/loop0p1, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so/m/sf_VMShare ❯❯❯ mount | grep /tmp/vdisk/m/sf_VMShare ❯❯❯/m/sf_VMShare ❯❯❯ ls /dev/loop*/dev/loop0 /dev/loop1 /dev/loop3 /dev/loop5 /dev/loop7/dev/loop0p1 /dev/loop2 /dev/loop4 /dev/loop6 /dev/loop-control/m/sf_VMShare ❯❯❯ lsblk -fNAME FSTYPE LABEL MOUNTPOINTsda ├─sda1 /├─sda2 └─sda5 [SWAP]sdb └─sdb1 /home/foo/workspacesr0 loop0 └─loop0p1
How to mount a partition in a full disk image that contains a msdos partition table.

Tools: fdisk, mount, calculator

Get the partition layout of the image.

sudo fdisk -l -u=sectors /work/loop_test/disk_image.img

Example output:

Disk /work/loop_test/disk_image.img: 29 MB, 29629952 bytes
255 heads, 63 sectors/track, 3 cylinders, total 57871 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009d7e5

                         Device Boot  Start    End  Blocks  Id  System
/work/loop_test/disk_image.img1        2048  18431    8192  83  Linux
/work/loop_test/disk_image.img2       18432  57343   19456   7  HPFS/NTFS/exFAT

Calculate the offset from the start of the image to the partition start. In this case the ntfs partition.

formula: Sector size * Start = Offset
         512 * 18432 = 9437184

Mount the image, passing the offset for the desired partition. In this example the ntfs partition.

sudo mount -o loop,offset=9437184 /work/loop_test/disk_image.img /mnt/ntfs_partition
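On a reasonably recent util-linux, an alternative sketch is to let losetup create the partition devices itself with partition scanning, which also explains why /dev/loop0p2 was missing in the question (the device was attached without a partition scan):

sudo losetup -fP --show /work/loop_test/disk_image.img   # -P scans the partition table; prints e.g. /dev/loop0
ls /dev/loop0p*                                          # /dev/loop0p1 and /dev/loop0p2 should now exist
sudo mount /dev/loop0p2 /mnt/ntfs_partition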
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/372024", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/185530/" ] }
372,031
I have a whole bunch of files with certain patterns ABCD: Something 1 Anything 2EFGH:Something 3Anything 4ABCD:Something 5Anything 6HIJK:Something 7Anything 8 I want to retain the second line after ABCD and delete everything else in all these files. On a single file, this can be achieved using vim by the following commands /ABCD\_[^a-zA-Z] (*searches the pattern*)qaq (*flush register*):g//norm! "A3Y (*yank 3 lines including pattern into register A*)ggVG"ap (*delete everything else*) Then I can perform some easy regex searches to delete the ABCDs and Somethings to be left with the correct Anythings. However using args and argdo as suggested here for multiple files throws up errors "Not an editor command" at the second operation above. Same thing happens if I bypass the second, and go directly to the third and fourth. I am performing args and argdo after each step. Any recommendations staying within or going beyond vim?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/372031", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/236797/" ] }
372,105
I'm trying to parse a usage message like: Usage: docker-compose [-f <arg>...] [options] [COMMAND] [ARGS...] docker-compose -h|--help...Commands: build Build or rebuild services bundle Generate a Docker bundle from the Compose file... to grab the Command names only. So I'm looking to skip all lines up to and including the Commands: line, then print the first word on all following lines, i.e. build bundle ... Currently I'm doing docker-compose --help | sed -e '1,/Commands:/d' | awk '{ print $1 }' and while this works, I suspect I could do the whole thing with a single awk . The closest I've got so far is: docker-compose --help | awk '/Commands:/,0 { print $1 }' But that includes the matched Commands: line. Can it be done?
If you mark the presence of your fence, then you can use it to decide to print the next line and after like: awk 'x==1 {print $1} /Commands:/ {x=1}'
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/372105", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47044/" ] }
372,115
How can I only receive emails from cron if there are errors? In the overwhelmingly vast majority of cases, the tasks will run just fine - and I truly do not care about the output. It is only in the rare case of a failure that I want/need to know. I have procmail available - but am not sure if what I'm describing is possible to manage externally to cron "correctly".
As you do not care about the output, you can redirect the STDOUT of a job to /dev/null and let STDERR be sent via mail (using the MAILTO environment variable). So, for example:

MAILTO=[email protected]
...
...
* * * * * /my/script.sh >/dev/null

will send mail only when there is output on STDERR (the mail containing that STDERR output), and will discard STDOUT. This of course assumes that when a program has written to STDERR, it has failed; this might not always be the case. If you have control over the program, you can make it behave that way. For any more complex case, you should write a wrapper of some kind that runs the command(s) and sends mail accordingly, and put the wrapper in as the cron job.
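A minimal sketch of such a wrapper (names and layout are my own, not from the original answer): it swallows all output when the job succeeds and prints it, so cron mails it, only when the job exits non-zero.

#!/bin/sh
# usage in a crontab: * * * * * /usr/local/bin/cronwrap /my/script.sh
out=$("$@" 2>&1)        # run the job, capturing stdout and stderr together
status=$?
if [ "$status" -ne 0 ]; then
    printf 'Command failed (exit %s): %s\n%s\n' "$status" "$*" "$out"
fi
exit "$status"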
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/372115", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6388/" ] }
372,199
Consider the following:

root@debian-lap:/tmp echo "Step 1 " || echo "Failed to execute step 1" ; echo "Step 2"
Step 1
Step 2
root@debian-lap:/tmp

As you can see, the 1st and 3rd echo commands executed normally. And if the first command failed, I want to stop the script and exit from it:

root@debian-lap:/home/fugitive echo "Step 1 " || echo "Failed to execute step 1" && exit 2 ; echo "Step 2"
Step 1
exit
fugitive@debian-lap:~$

The exit command executes and exits the shell, even though the exit code of the first command is 0. My question is: why? In translation, doesn't this say:

echo "Step 1"
if the command failed, echo 'Failed to execute step 1' and exit the script
else echo "Step 2"

Looking at this like:

cmd foo1 || cmd foo2 && exit

Shouldn't cmd foo2 and (&&) exit execute only when cmd foo1 failed? What am I missing?

Edit

I am adding a 2nd example, something that I am really trying to do (still a dummy test):

root@debian-lap:/tmp/bashtest a="$(ls)" || echo "Failed" ; echo $a
test_file          # <- This is OK
root@debian-lap:

root@debian-lap:/tmp/bashtest a="$(ls)" || echo "Unable to assign the variable" && exit 2; echo $a
exit
fugitive@debian-lap:~$          # <- this is the confusing part

root@debian-lap:/tmp/bashtest a="$(ls /tmpppp/notexist)" || echo "Unable to assign the variable" ; echo $a
ls: cannot access /tmpppp/notexist: No such file or directory
Unable to assign the variable          # <- This is also OK
root@debian-lap:
Because the last command executed (the echo ) succeeded. If you want to group the commands, it's clearer to use an if statement: if ! step 1 ; then echo >&2 "Failed to execute step 1" exit 2fi You could also group the error message and exit with { ... } but that's somewhat harder to read. However that does preserve the exact value of the exit status. step 1 || { echo >&2 "step 1 failed with code: $?"; exit 2; } Note, I changed the && to a semicolon, since I assume you want to exit even if the error message fails (and output those errors on stderr for good practice). For the if variant to preserve the exit status, you'd need to add your code to the else part: if step 1; then : OK, do nothingelse echo >&2 "step 1 failed with code: $?" exit 2fi (that also makes it compatible with the Bourne shell that didn't have the ! keyword). As or why the commands group like the do, the standard says : An AND-OR list is a sequence of one or more pipelines separated by the operators "&&" and "||" . The operators "&&" and "||" shall have equal precedence and shall be evaluated with left associativity. Which means that something like somecmd || echo && exit acts as if somecmd and echo were grouped together first, i.e. { somecmd || echo; } && exit and not somecmd || { echo && exit; } .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/372199", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126666/" ] }
372,245
I have seen many tutorials saying that the bin directory is used to store binary files, meaning there is only 0 and 1 in the files in that directory. However, in many cases, I see files in bin that are not only 0 and 1 . For example, the django-admin.py under the xx/bin/ directory: #!/usr/bin/env pythonfrom django.core import managementif __name__ == "__main__": management.execute_from_command_line()
No, a bin directory is not for storing only binary files. It's for keeping executable files, primarily. Historically, before scripts written in various scripting languages became more common, bin directories would have contained mainly binary (compiled or assembled) non-text files, as opposed to source code. The main thing about the files in bin nowadays is that they are executable. An executable script is a text file, interpreted by an interpreter. The script in your example is a Python script. When you run it, the python interpreter (which is another executable file somewhere in your $PATH ) will be used to run it. Also, as an aside, a text file is as much a file made up of zeroes and ones as a binary file is.
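You can see the difference with the file utility; on a typical glibc-based Linux box the output looks roughly like this (the exact wording varies):

$ file /bin/ls /usr/bin/ldd
/bin/ls:      ELF 64-bit LSB executable, x86-64, dynamically linked ...
/usr/bin/ldd: Bourne-Again shell script, ASCII text executable

Both live in bin directories and both are executable; only the first is a compiled binary.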
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/372245", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/228116/" ] }
372,253
I have a file with data in the below format

Item1|keys,books,helmet,handle,
Item2|Bike,
Item3
Item4|Tyre,brakes,headlight,clamps,rollergrip,
Item5|Nails,hammers,

I wanted the above data to be converted to the below format

Item1|keys
Item1|books
Item1|helmet
Item1|handle
Item2|Bike
Item3
Item4|Tyre
Item4|brakes
Item4|headlight
Item4|clamps
Item4|rollergrip
Item5|Nails
Item5|hammers

I was trying to achieve this using the cut command. Though that was working fine, I wanted to know whether this can be achieved with an awk command, since the cut approach becomes cumbersome as the input file gets bigger.
Yes, awk can do this, and since awk processes the file one line at a time it stays cheap even when the input file gets big. The idea is to treat | as the field separator, split the second field on commas, and print the first field in front of each piece; lines like Item3 that have no | part are printed unchanged.
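A minimal sketch of that idea (inputfile is a placeholder name; adjust to your file):

awk -F'|' '
NF < 2 { print; next }              # no "|" on the line (e.g. Item3): pass it through
{
    n = split($2, parts, ",")       # break the comma-separated list apart
    for (i = 1; i <= n; i++)
        if (parts[i] != "")         # skip the empty piece left by a trailing comma
            print $1 "|" parts[i]
}' inputfile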
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/372253", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48806/" ] }
372,302
I need to change the hostname of a system without rebooting it. I'm running CentOS 7 and have the correct hostname in the /etc/hostname file, but I'm still seeing the old hostname at the prompt. I know that when I reboot the system it will read the hostname file and apply it, but is there any way for me to update it without rebooting? Here is some info from the command line:

[root@gandalf sysconfig]# cat network
NETWORKING=yes
GATEWAY=192.168.80.1
HOSTNAME="sauron.domain.com"
[root@gandalf sysconfig]# cd ..
[root@gandalf etc]# cat hostname
sauron
[root@gandalf etc]#

I'm unable to reboot this server anytime soon and some of my team are mixing up the server because the hostname still shows the older system name. Simply put: I need the prompt to show [user@sauron dir]# instead of [user@gandalf dir]#. Googled around for this but wasn't able to find a way to do it without a reboot. Thanks for your consideration!
You should be able to do this using the hostname command: hostname -F /etc/hostname After this change, the previous hostname will still show at your current prompt. To see the change without rebooting, enter a new shell. If you are using bash , type: bash Your new hostname should now be displayed.
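Since this is CentOS 7, systemd's hostnamectl is another option and arguably the more idiomatic one there; a sketch using the name from the question:

hostnamectl set-hostname sauron

As with hostname -F, an already-open shell keeps showing the old name in its prompt until you start a new shell or log in again.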
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/372302", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/215012/" ] }
372,314
I'm having issues getting LDAP sudoers rules to work. My environment is: Active Directory on Windows Server 2012 R2 Ubuntu 16.04.2 SSSD 1.13.4-1ubuntu1.5 sudo 1.8.20-3 (latest as of the posting, tried both LDAP and non-LDAP versions) I followed these instructions to create a sudo_debug.log (sanitized): Jun 19 14:53:28 sudo[60452] Received 2 rule(s)Jun 19 14:53:28 sudo[60452] -> sudo_sss_filter_result @ ./sssd.c:225...Jun 19 14:53:28 sudo[60452] sssd/ldap sudoHost 'ALL' ... MATCH!...Jun 19 14:53:28 sudo[60452] val[0]=%linuxadmins...Jun 19 14:53:28 sudo[60452] sudo_get_grlist: looking up group names for [email protected] 19 14:53:28 sudo[60452] sudo_getgrgid: gid 1157000513 [] -> group domain [email protected] [] (cache hit)...Jun 19 14:53:28 sudo[60452] user_in_group: user [email protected] NOT in group linuxadminsJun 19 14:53:28 sudo[60452] <- user_in_group @ ./pwutil.c:1031 := falseJun 19 14:53:28 sudo[60452] user [email protected] matches group linuxadmins: false @ usergr_matches() ./match.c:969Jun 19 14:53:28 sudo[60452] <- usergr_matches @ ./match.c:970 := falseJun 19 14:53:28 sudo[60452] sssd/ldap sudoUser '%linuxadmins' ... not ([email protected])... From this log, you can see that: the sudoers rules are getting from AD to sudo (2 rules, the one displayed matching an AD entry) the match fails on the linuxadmins group However, the user is in the linuxadmins group (sanitized, but "user" matches): $ getent group [email protected]:*:1157001133:[email protected],[email protected] The only odd thing about this log is that it sudo_get_grlist appears to return only the user's Primary Group domain [email protected] . This would explain the lack of a match. Has anyone seen this before? Any idea if the list of groups is resolved inside sudo (that I should continue to wait on my question to sudo-users ) or somewhere else like SSSD (that I should find their list)?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/372314", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/84617/" ] }
372,388
The Linux ps command shows different memory usages like RSS ( resident set size ), size in kB by default. Is there a way to show in MB or GB, like ls -s --human-readable does?
AFAIK you cannot achieve it simply by a pure ps command with options. However, you can use a text processor like awk to make it do what you want:

ps afu | awk 'NR>1 {$5=int($5/1024)"M";}{ print;}'

This takes the result from ps and then, for every line except the first one, replaces the 5th column (normally in KB) with the value in MB, adding an M suffix. You can make it an alias and store it in your .bashrc file so you can call it with something like myps.

Many people ask how to preserve the format or use other units and precision. For a simple version you can use the column -t output filter:

ps afu | awk 'NR>1 {$5=int($5/1024)"M";}{ print;}' | column -t

This however does not recognize spaces in the last column correctly. Unfortunately we've got to deal with text formatting ourselves and prepare our own printf-like format string:

ps afu | awk 'NR==1 {o=$0; a=match($0,$11);}; NR>1 {o=$0;$5=int(10*$5/1024)/10"M";}{ printf "%-8s %6s %-5s %-5s %9s %9s %-8s %-4s %-6s %-5s %s\n", $1, $2, $3, $4, $5, $6, $7, $8, $9, $10, substr(o, a);}'

Explanation:

NR==1 is for the first line only (the header). We use the original ps output to determine where COMMAND starts:
o=$0 stores the unmodified entire line so we can use it later
a=match($0,$11) finds the location of the 11th field (which should be where the COMMAND column starts in the original output)
NR>1 is for the following lines (data). We change the 5th field: $5=int(10*$5/1024)/10"M" converts the value into megabytes with one decimal place and adds an "M" suffix.
printf displays all fields in a column-like fashion:
%-8s means s for string, 8 for 8 characters wide, - for left align
%9s means s for string, 9 characters wide, and because there is no - this field is right-aligned
substr(o, a) takes the substring of the original line (hence o stored before) starting from position a calculated above, so the command column is displayed with its spaces preserved.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/372388", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2832/" ] }
372,408
I have written a script to ssh to remote hosts, execute commands, save the output to files, and examine the outputs. But it always exits silently at the line (( success++ )) when iterating over the first item in the array workers. If I replace (( success++ )) with echo "process $worker", it works fine and prints all hosts. I cannot figure out what's wrong.

#!/bin/bash
set -x
set -e

workers=('host-1' 'host-2' 'host-3')

output_dir=$(mktemp -d)

for worker in ${workers[@]}; do
    ssh $worker '
    echo abc
    echo OK
    ' > "$output_dir/$worker" &
done

echo "waiting..."
sleep 3
wait

success=0
regexp='OK$'
for worker in ${workers[@]}; do
    output=`cat "$output_dir/$worker"`
    if [[ "$output" =~ $regexp ]]; then
        (( success++ ))
    fi
done

echo "Total ${#workers[@]}; success: $success; failure: $((${#workers[@]} - success))"
A simple example should explain why:

$ ((success++))
$ echo $?
1

The reason is that an arithmetic (( ... )) command returns exit status 1 whenever its expression evaluates to zero, and success++ is a post-increment, so on the first pass the expression's value is the old value of success, i.e. 0. Because the script runs with set -e, that single "failing" command is enough to make the shell exit silently. I don't know what to say - Bash has gotchas enough for the whole world.
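A couple of common ways around it if you want to keep set -e (sketches, not the only options):

success=$((success + 1))     # arithmetic expansion inside an assignment; the assignment itself always returns 0
(( success++ )) || true      # keep the original form but explicitly swallow the non-zero status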
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/372408", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153046/" ] }
372,457
I know this should be an easy one to find by googling, but I was not successful. Sorry for that. I would like to print the first line of each group defined by the value in the first column. The delimiter is tab. Input:

A 5
A 3
B 2
B 1
B 77
C 4
C 10000
D 99

Output:

A 5
B 2
C 4
D 99
The shortest one:

awk -F'\t' '!a[$1]++' file

The output:

A 5
B 2
C 4
D 99

!a[$1]++ - ensures a line is printed only on the first encounter of each unique value in the 1st column
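If the input doesn't have to stay in its original order, a sort-based sketch gets close to the same result, although which line is kept for each key is not guaranteed the way it is with the awk one-liner above:

sort -u -k1,1 file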
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/372457", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/200923/" ] }
372,482
Is there a command that will allow me to edit the last n lines in a file?I have several files, that all have a different number of lines inside. But I would like to modify the last n lines in each file. The goal is to replace commas with semicolons in the last n lines. But only in the very last n lines. I do not want to delete any lines, I just want to replace every comma with a semicolon in the last n lines in each file. Using the sed command I am able to replace the very last line with this command. As described here: How can I remove text on the last line of a file? But this only enables me to modify the very last line, and not the last n number of lines.
To replace commas with semicolons on the last n lines with ed:

n=3
ed -s input <<< '$-'$((n-1))$',$s/,/;/g\nwq'

Splitting that apart:

ed -s = run ed silently (don't report the bytes written at the end)
'$-' = from the end of the file ( $ ) minus ...
$((n-1)) = ... n-1 lines ...
( $' ... ' = quote the rest of the command to protect it from the shell )
,$s/,/;/g = ... until the end of the file ( ,$ ), search and replace all commas with semicolons.
\nwq = end the previous command, then save and quit

To replace commas with semicolons on the last n lines with sed:

n=3
sed -i "$(( $(wc -l < input) - n + 1)),\$s/,/;/g" input

Breaking that apart:

-i = edit the file "in-place"
$(( ... )) = do some math:
$(wc -l < input) = get the number of lines in the file
- n + 1 = go backwards n-1 lines
,\$ = from there until the end of the file:
s/,/;/g = replace the commas with semicolons.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/372482", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/227283/" ] }
372,483
Short of writing a loop, is there a way to repeat the last command N times. For example, I can repeat the last command once by using a double bang (!!), but how do I repeat it say 30 times?
With zsh, and provided the last command line was only one command or pipeline or and-or list (that is, for instance, echo x, echo x | tr x y, echo x && echo y, even compound commands like { x; y; } or for / while loops, but not echo x; echo y):

repeat 30 !!

To repeat the previous command line even if it contained several commands, use:

repeat 30 do !!; done

Or:

repeat 30 {!!}

With bash and for simple commands only (among the examples above, only the echo x case), you could define a helper function like:

repeat() {
  local n="$1"
  shift
  while ((n-- > 0)); do
    "$@"
  done
}

(and use repeat 30 !! like above). A side effect is that because the code will be running in a function, it will see a different "$@" and "$#", and things like typeset will work differently, so you can't do things like:

eval 'echo "$1"'
repeat 30 !!

Another approach to emulate zsh's repeat 30 {!!} would be to declare an alias like:

alias repeat='for i in $(seq'

(assuming an unmodified $IFS) And then use:

repeat 30); { !!; }
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/372483", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47542/" ] }
372,506
Can anyone explain to me, in simple words, what a login class is?
Login classes have been a part of FreeBSD for as long as I can remember. They allow the system administrator (root) to set resource constraints for users, or a group of users, as configured in /etc/login.conf. This is particularly useful on multi-user servers such as webhosting and shell providers. These kinds of constraints include:

CPU utilization
Memory utilization
Maximum open files (file descriptors)
The biggest individual file a user is allowed to create within that login class (not redundant with quotas)

And a lot more. In case you make any tweaks, or add new login classes, you have to use cap_mkdb to regenerate the capability database from /etc/login.conf. Apply changes:

cap_mkdb /etc/login.conf
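To make it concrete, here is a rough sketch of what a custom class might look like in /etc/login.conf; the class name and the limits are made up for illustration, and login.conf(5) lists the real set of capabilities:

students:\
        :cputime=1h30m:\
        :datasize=256M:\
        :filesize=100M:\
        :maxproc=64:\
        :openfiles=128:\
        :tc=default:

After editing, rebuild the database with cap_mkdb /etc/login.conf and assign the class to a user, for example with pw usermod alice -L students.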
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/372506", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237111/" ] }
372,541
I have the following in a script: yes >/dev/null &pid=$!echo $pidsleep 2kill -INT $pidsleep 2ps aux | grep yes When I run it, the output shows that yes is still running by the end of the script. However, if I run the commands interactively then the process terminates successfully, as in the following: > yes >/dev/null &[1] 9967> kill -INT 9967> ps aux | grep yessean ... 0:00 grep yes Why does SIGINT terminate the process in the interactive instance but not in the scripted instance? EDIT Here's some supplementary information that may help to diagnose the issue. I wrote the following Go program to simulate the above script. package mainimport ( "fmt" "os" "os/exec" "time")func main() { yes := exec.Command("yes") if err := yes.Start(); err != nil { die("%v", err) } time.Sleep(time.Second*2) kill := exec.Command("kill", "-INT", fmt.Sprintf("%d", yes.Process.Pid)) if err := kill.Run(); err != nil { die("%v", err) } time.Sleep(time.Second*2) out, err := exec.Command("bash", "-c", "ps aux | grep yes").CombinedOutput() if err != nil { die("%v", err) } fmt.Println(string(out))}func die(msg string, args ...interface{}) { fmt.Fprintf(os.Stderr, msg+"\n", args...) os.Exit(1)} I built it as main and running ./main in a script, and running ./main and ./main & interactively give the same, following, output: sean ... 0:01 [yes] <defunct>sean ... 0:00 bash -c ps aux | grep yessean ... 0:00 grep yes However, running ./main & in a script gives the following: sean ... 0:03 yessean ... 0:00 bash -c ps aux | grep yessean ... 0:00 grep yes This makes me believe that the difference has less to do on Bash's own job control, though I'm running all of this in a Bash shell.
What shell is used is a concern as different shells handle job control differently (and job control is complicated; job.c in bash presently weighs in at 3,300 lines of C according to cloc ). pdksh 5.2.14 versus bash 3.2 on Mac OS X 10.11 for instance show: $ cat codepkill yesyes >/dev/null &pid=$!echo $pidsleep 2kill -INT $pidsleep 2pgrep yes$ bash code3864338643$ ksh code38650$ Also relevant here is that yes performs no signal handling so inherits whatever there is to be inherited from the parent shell process; if by contrast we do perform signal handling— $ cat sighandlingcode perl -e '$SIG{INT} = sub { die "ouch\n" }; sleep 5' &pid=$!sleep 2kill -INT $pid$ bash sighandlingcode ouch$ ksh sighandlingcode ouch$ —the SIGINT is triggered regardless the parent shell, as perl here unlike yes has changed the signal handling. There are system calls relevant to signal handling which can be observed with things like DTrace or here strace on Linux: -bash-4.2$ cat codepkill yesyes >/dev/null &pid=$!echo $pidsleep 2kill -INT $pidsleep 2pgrep yespkill yes-bash-4.2$ rm foo*; strace -o foo -ff bash code2189921899code: line 9: 21899 Terminated yes > /dev/null-bash-4.2$ We find that the yes process ends up with SIGINT ignored: -bash-4.2$ egrep 'exec.*yes' foo.21*foo.21898:execve("/usr/bin/pkill", ["pkill", "yes"], [/* 24 vars */]) = 0foo.21899:execve("/usr/bin/yes", ["yes"], [/* 24 vars */]) = 0foo.21903:execve("/usr/bin/pgrep", ["pgrep", "yes"], [/* 24 vars */]) = 0foo.21904:execve("/usr/bin/pkill", ["pkill", "yes"], [/* 24 vars */]) = 0-bash-4.2$ grep INT foo.21899rt_sigaction(SIGINT, {SIG_DFL, [], SA_RESTORER, 0x7f18ebee0250}, {SIG_DFL, [], SA_RESTORER, 0x7f18ebee0250}, 8) = 0rt_sigaction(SIGINT, {SIG_DFL, [], SA_RESTORER, 0x7f18ebee0250}, {SIG_DFL, [], SA_RESTORER, 0x7f18ebee0250}, 8) = 0rt_sigaction(SIGINT, {SIG_IGN, [], SA_RESTORER, 0x7f18ebee0250}, {SIG_DFL, [], SA_RESTORER, 0x7f18ebee0250}, 8) = 0--- SIGINT {si_signo=SIGINT, si_code=SI_USER, si_pid=21897, si_uid=1000} ----bash-4.2$ Repeat this test with the perl code and one should see that SIGINT is not ignored, or also that under pdksh there is no ignore being set as there is in bash . With "monitor mode" turned on like it is in interactive mode in bash , yes is killed. -bash-4.2$ cat monitorcode #!/bin/bashset -mpkill yesyes >/dev/null &pid=$!echo $pidsleep 2kill -INT $pidsleep 2pgrep yespkill yes-bash-4.2$ ./monitorcode 22117[1]+ Interrupt yes > /dev/null-bash-4.2$
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/372541", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14703/" ] }
372,551
I understand the definition of fileless malware: Malicious code that is not file based but exists in memory only… More particularly, fileless malicious code … appends itself to an active process in memory… Can somebody please explain how this appending itself to an active process in memory works ? Also, what (kernel) protection/hardening is available against such attacks ?
Fileless malware attacks the target by exploiting a vulnerability, e.g. in a browser's Flash plugin, or in a network protocol. A Linux process can be modified by using the system call ptrace(). This system call is usually used by debuggers to inspect and manage the internal state of the target process, and is useful in software development. For instance, let's consider a process with PID 1234. This process' whole address space can be viewed in the pseudo filesystem /proc at the location /proc/1234/mem. You can open this pseudofile, then attach to the process via ptrace(); after doing so, you can use pread() and pwrite() to read from and write to the process's address space.

char file[64];
pid_t pid = 1234;
long value = 0;               /* data to read or inject */
sprintf(file, "/proc/%ld/mem", (long)pid);
int fd = open(file, O_RDWR);
ptrace(PTRACE_ATTACH, pid, 0, 0);
waitpid(pid, NULL, 0);
off_t addr = ...; // target process address
pread(fd, &value, sizeof(value), addr);
// or
pwrite(fd, &value, sizeof(value), addr);
ptrace(PTRACE_DETACH, pid, 0, 0);
close(fd);

(Code taken from here. Another paper about a ptrace exploit is available here.) Concerning kernel-oriented defense against these attacks, the only way is to install kernel vendor patches and/or disable the particular attack vector. For instance, in the case of ptrace you can load a ptrace-blocking module into the kernel which will disable that particular system call; clearly this also makes you unable to use ptrace for debugging.
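On the hardening side, many distributions also ship the Yama security module, which lets you restrict who may call ptrace() without removing the call entirely; a sketch (see the kernel's Yama documentation for the exact semantics of each level):

cat /proc/sys/kernel/yama/ptrace_scope       # 0 = classic permissive behaviour
sysctl -w kernel.yama.ptrace_scope=1         # 1 = only a process's ancestors or declared debuggers may attach

Level 2 restricts attaching to processes holding CAP_SYS_PTRACE, and level 3 disables attaching altogether until the next reboot.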
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/372551", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43007/" ] }
372,590
Today if I do

$ yum remove packageA

I am greeted with:

Removing:
 packageA    noarch    3.5.1.b37-15    @yumFS    293 k
Removing for dependencies:
 packageB    noarch    3.5.1.b125-7    @yumFS     87 M
..
Is this ok?

I would like to remove packageA without removing packageB (etc.); is this possible?
Appears possible , by using rpm: $ rpm -e --nodeps packageA though obviously be very careful, since if you remove a dependency package and don't put it back that could lead to unexpected results for the packages still installed that depend on it and anticipate it being present...
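If you want to see what you might break before forcing it, one way (assuming the yum-utils package, which provides repoquery, is installed) is to list the installed packages that still require packageA:

repoquery --installed --whatrequires packageA

Anything that shows up there (packageB in this case) expects packageA to stay on the system, so it may misbehave once packageA is gone; packageA and packageB are of course just the placeholder names from the question.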
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/372590", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8337/" ] }
372,606
I have Lubuntu 17.04 with i3 window manager. I am tryig to open the network manager from the i3wm but I can't. The internet connection works by going to the defaiult network that I set via LXDE, however unless I log back in via LXDE, I can't change the network via nm-applet. When I run nm-applet in the terminal in i3, nothing happens. Any suggestions? (I just want to be able to change between the wifi networks using the mouse, I don't want to work in the command line for that.) Addendum: Here is the output of systemctl status dbus-org.freedesktop.nm-dispatcher.service ● NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service Loaded: loaded (/lib/systemd/system/NetworkManager-dispatcher.service; enabled; vendor preset: enabled) Active: active (running) since Sun 2017-06-25 21:35:57 EDT; 967ms ago Main PID: 1477 (nm-dispatcher) Tasks: 3 (limit: 4915) CGroup: /system.slice/NetworkManager-dispatcher.service └─1477 /usr/lib/NetworkManager/nm-dispatcherJun 25 21:35:57 dot-HP-Compaq-8200-Elite-USDT-PC systemd[1]: Starting Network Manager Script Dispatcher Service...Jun 25 21:35:57 dot-HP-Compaq-8200-Elite-USDT-PC systemd[1]: Started Network Manager Script Dispatcher Service.Jun 25 21:35:57 dot-HP-Compaq-8200-Elite-USDT-PC nm-dispatcher[1477]: req:1 'dhcp6-change' [wlxc83a35c67f33]: new request (1 scripts)Jun 25 21:35:57 dot-HP-Compaq-8200-Elite-USDT-PC nm-dispatcher[1477]: req:1 'dhcp6-change' [wlxc83a35c67f33]: start running ordered scripts.lines 1-12/12 (END) Update 2: I was able to pinpoint to the problem and it is a strange one (does not change with exec_always ): When I use an additional screen, then the nm-applet does not show up while everything else is there. If I disconnect the second monitor and reboot the PC, it shows up in the rigth lower corner as it's supposed to.
After trying numerous suggestions found here and elsewhere, my solution was:

bar {
    tray_output primary
    status_command i3status
}

be careful with the c&p, not sure it'll format correctly
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/372606", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/149766/" ] }
372,627
As the title says, I want to be able to change environment variables in a parent process (specifically, a shell) from a child process (typically a script). From a pseudo terminal /dev/pts/id I am trying to export key=value from the child script, so exported variables have to be passed somehow to the parent, if possible. echoing cmd > /proc/$$/fd/0 doesn't execute cmd, it only displays the command in the shell's terminal emulator, and of course using $(cmd) instead of cmd executes in a subshell, and export doesn't add variables to the parent process. I prefer that all the work be done on the child side. I was asked in comments what I am trying to achieve. That is a general question, and I'm trying to use a positive answer to pass variables from a script executed (spawned) by a (parent) shell, so that the user can benefit from the added variables without any further work. For example, I would like to have a script install an application, and the application directory should be added to the parent shell's PATH.
No , it's not possible Not without some kind of workaround . Environment variables can only be passed from parent to child (as part of environment export/inheritance), not the other way around.
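The usual workarounds all involve cooperation from the parent. A sketch of the most common one: have the child print the assignments instead of exporting them, and let the parent evaluate or source that output (install.sh and APP_DIR are made-up names for illustration):

# child (install.sh) writes the settings to stdout
echo "export APP_DIR=/opt/myapp"
echo "export PATH=\"\$PATH:/opt/myapp/bin\""

# parent shell picks them up
eval "$(./install.sh)"
# or: the child writes them to a file that the parent sources
. ./app-env.sh

Either way the parent performs the final step itself; the child process alone can never reach back into the parent's environment.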
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/372627", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/189097/" ] }
372,633
HFS+ formatted drive connected to an Ubuntu box via a SATA/USB cradle. No issues are reported for the partition by fsck.hfsplus. Attempting to run "ls" (or anything else) on the affected files results in "no such file or directory". Running "ls -lh" on the container folder throws the same complaint but still shows the file in the list, but with the following format: -rw-r--r-- 1 501 dialout 53M Mar 4 15:26 normal_file-????????? ? ? ? ? ? uncooperative_file I'm not concerned about the 501:dialout ownership of the other files (the drive is from a different machine). There are a few files that are being affected by this. They only seem to be files with Unicode and/or Emoji in the name. I've tried: "ls" with the "-b" and "-q" options, but neither has revealed anything "ls -lh > ~/tmp.txt" and editing in "vi" in an attempt to detect extraneous bytes in the name "chown root:root filename" "chmod 644 filename" The file shows up in the output of "ls" and tab-completion fills it in as well. But any sort of actual interaction fails. Anyone able to offer some guidance? Ultimately, I want to be able to rsync/scp these files to another box (which unfortunately doesn't play nicely with the drive cradle) and I figured being able to ls/mv would be a good starting point. EDIT: Using bash. Tab-completion fills in the full filename, though with some '???' in the place of certain characters (unsure of the original chars at this point). Locale on the source box: LANG=en_CA.UTF-8LANGUAGE=en_CA:enLC_CTYPE="en_CA.UTF-8"LC_NUMERIC="en_CA.UTF-8"LC_TIME="en_CA.UTF-8"LC_COLLATE="en_CA.UTF-8"LC_MONETARY="en_CA.UTF-8"LC_MESSAGES="en_CA.UTF-8"LC_PAPER="en_CA.UTF-8"LC_NAME="en_CA.UTF-8"LC_ADDRESS="en_CA.UTF-8"LC_TELEPHONE="en_CA.UTF-8"LC_MEASUREMENT="en_CA.UTF-8"LC_IDENTIFICATION="en_CA.UTF-8"LC_ALL=
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/372633", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26480/" ] }