353,896
I downloaded both the latest CentOS and Fedora ISOs and burned them onto CD using ImgBurn. I then first booted with the CentOS CD on my old HP DC5000 computer. It appears to boot to the CD just fine and gives me options:

    Install CentOS
    Verify files and install CentOS
    etc...

I select Install CentOS, and it then goes to a blank screen. I waited over an hour and still just a blank screen. I tried the same with the Fedora CD and the exact same issue happened: it booted off the CD, asked which option, and even when I select the second option, the same thing happens, it goes to a blank screen and stays there. Any ideas/suggestions? Thanks
This happens often on computers with old graphics hardware. By default the system tries to start with a 1024x768 framebuffer mode, but this doesn't work on some old PCs. In this case, you can select Troubleshooting from the menu, and then select "Install <distro> in basic graphics mode". On some really ancient computers, even this won't work. In that case you'll need to do a text mode installation. Do this by selecting "Install <distro>", but instead of pressing Enter, press Tab and add nomodeset text to the end of the boot command line.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/353896", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223455/" ] }
353,921
I need to write a script that would "cut" the output of the id command into three parts. For example, let's say id gives this output:

    uid=12345(mylogin) gid=100(users)

My script should output it like this:

    Login: mylogin
    Id: 12345
    Group: users
Using a multiple-character field separator in awk:

    $ echo 'uid=12345(mylogin) gid=100(users)' | awk -F'[=()]' '{print "Login: " $3 "\nId: " $2 "\nGroup: " $6}'
    Login: mylogin
    Id: 12345
    Group: users

-F'[=()]' sets =, ( or ) as field separators. $3 will be the third field, after the first = and the first (, terminated by ), so it gets the value mylogin. Similarly for the other fields; print them as required.
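As a quick check against live data, the same program can be fed the real id output directly (the field numbers assume the uid=...(...) gid=...(...) shape shown above):

    $ id | awk -F'[=()]' '{print "Login: " $3 "\nId: " $2 "\nGroup: " $6}'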
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/353921", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/217109/" ] }
353,922
I can't remember how to append a command to a shell script. I searched for append, add, concat, and more without success. Basically I have

    belly = tail -n +"$HEAD" "$1" | head -n $((TAIL-HEAD+1))
    if [ -z "${NUMBER+x}" ]; then # check if NUMBER exists
        tail -n +"$HEAD" "$1" | head -n $((TAIL-HEAD+1))
    else
        tail -n +"$HEAD" "$1" | head -n $((TAIL-HEAD+1)) | cat -n
    fi

and it works fine, but I don't like the duplicate logic. I know that I could use a function or eval, but is there a simpler way to do this? In my head I have something like this:

    belly = tail -n +$HEAD $1 | head -n $((TAIL-HEAD+1))
    if [ -z "${NUMBER+x}" ]; then # check if NUMBER exists
        belly
    else
        belly | cat -n
    fi

But it doesn't work. What am I missing?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/353922", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150597/" ] }
353,924
Does anyone know how I can copy my customizations of XFCE's settings plus its appearance to another machine? The settings for appearance/design, panels, keyboard shortcuts and geany are not there yet, like at all. So far I have done:

    copied ~/.config/{autostart,xfce4,Thunar} (not literally like that)
    logged out and back in, rebooted

Resources:

    https://forum.xfce.org/viewtopic.php?id=4168
    https://askubuntu.com/questions/563382/copy-xfce4-configuration-files-from-one-user-to-another
    https://superuser.com/questions/677151/how-can-i-migrate-my-xfce-configuration-and-settings-to-another-system

Some info, which is true for both machines:

    $ pacman -Qi xfwm4 | grep Version
    Version         : 4.12.4-1
    $ uname -r
    4.10.5-1-ARCH
Xfce usually stores its configuration files in ~/.config/xfce4 (as well as ~/.local/share/xfce4 and ~/.config/Thunar). Copying these directories to your laptop should do the job. Keyboard shortcuts are stored in ~/.config/xfce4/xfconf/xfce-perchannel-xml/xfce4-keyboard-shortcuts.xml, so they should get copied as well. It's possible that after you copy the files they are getting overwritten when you log out of the session, thus preventing the new settings from getting enabled. Perhaps you could try copying the aforementioned directories by logging in through a tty? Note that there's a global set of configuration files in /etc/xdg/xfce4, /etc/xdg/Thunar/, /etc/xdg/menus, etc. (as well as /etc/xdg/xdg-xubuntu if you're using Xubuntu). If you're copying the configuration files between two systems having completely different base installations, you'll have to copy these files as well.
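One way to do the copy, sketched with rsync (this assumes the target machine is reachable over ssh as user@laptop and that overwriting its current settings is acceptable):

    rsync -av ~/.config/xfce4 ~/.config/Thunar user@laptop:~/.config/
    rsync -av ~/.local/share/xfce4 user@laptop:~/.local/share/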
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/353924", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31568/" ] }
353,951
Using bash I am trying to query /etc/passwd for any users with an id over 1000. If they exist, do something; else do something else. I'm stumped. Any help is appreciated.

    if [ "$(id -u)" -gt "1000" </etc/passwd]; then
        do something
    else
        do something else
    fi
Try this:

    if grep -E '^[^:]*:[^:]*:[^:]{4}' /etc/passwd | grep -Evq '^[^:]*:[^:]*:1000:'

The first grep searches passwd for lines with a uid of four or more digits. The second grep filters out the line with uid 1000. The exit status will be 0 if any lines remain, 1 if not.
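A sketch of how that condition slots into the asker's if/else (the branch bodies here are placeholders):

    if grep -E '^[^:]*:[^:]*:[^:]{4}' /etc/passwd | grep -Evq '^[^:]*:[^:]*:1000:'; then
        echo "users with uid over 1000 exist"    # do something
    else
        echo "no users with uid over 1000"       # do something else
    fi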
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/353951", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/222862/" ] }
353,962
I have a Debian Jessie 64bit 8.7 and Google Chrome Stable 57.0.2987.110. When I visited the GNOME Shell extensions site, I saw the following information:

    Although GNOME Shell integration extension is running, native host connector is not detected. Refer documentation for instructions about installing connector.

On Firefox ESR (Mozilla Firefox 45.6.0), I got the following error:

    ReferenceError: chrome is not defined

I can't install any GNOME extension because of it. Should I install chrome-gnome-shell? It is in the stretch and sid repositories, not in jessie. Should I change browsers?
Yes, you should install the GNOME Shell integration for Chrome. The Debian 9 package's dependencies are satisfiable in Debian 8, so

    wget http://ftp.debian.org/debian/pool/main/c/chrome-gnome-shell/chrome-gnome-shell_8-4_all.deb
    sudo gdebi chrome-gnome-shell_8-4_all.deb

should work (assuming you have gdebi installed). You'll need to copy all the JSON files from /etc/chromium/native-messaging-hosts to /etc/opt/chrome/native-messaging-hosts to get the packaged extension to work with Chrome; see the troubleshooting section for details.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/353962", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/222868/" ] }
353,978
I need to search for the 1st occurrence of the string pattern "EPMAT-" in a log file and extract the numeric part from it. EPMAT- will be followed by some number. I would like to extract 20 from EPMAT-20 and print it. Example file:

    This is a test
    test EPMAT-20 ......
    ....
    EPMAT.33 test
    end of test.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/353978", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/222875/" ] }
354,014
I am trying to make a join from two sources: from a pipe and a file. I wrote this simple one-liner:

    FILENAME=$TMPDIR".netstat"; \
    join -t , $FILENAME <(nettop -t wifi -P -x -L1 | cut -d , -f 2,5,6 | tail -n +2) | \
    awk '{print $0}'

It runs normally from the shell, but it doesn't run from a script.sh (full script see below):

    ./script.sh: line 16: syntax error near unexpected token `('

I tried to use different quotes inside my expression to mask variables, parameters or commands entirely, but I could not run the script. Can anybody help?

P. S. This script is invoked by GeekTool (Shell widget) on macOS Sierra 10.12.3.

P.P.S. Yes, I know that OS X is not "Unix & Linux", but I thought the difference is not so great.

Full script with some comments:

    #!/bin/sh
    FILENAME=$TMPDIR".netstat"
    # checking if existing stats file
    if [ ! -f $FILENAME ]; then
        nettop -t wifi -P -x -L 1 | cut -d , -f 2,5,6 | tail -n +2 > $FILENAME # save stats to file
        exit
    fi
    ts=`stat -r $FILENAME | cut -d " " -f 10` # get timestamp of stats file
    now=`date +%s` # get current datetime
    join -t, $FILENAME <(nettop -t wifi -P -x -L1 | cut -d , -f 2,5,6 | tail -n +2) | awk '{print $0}'

UPDATED. SOLVED: shebang changed to #!/bin/bash
Almost certainly this is happening because the shell you are using to run the script is not the same as your interactive shell. Since the shebang line is #!/bin/sh, your script is executed by the interpreter /bin/sh. On distros such as Ubuntu, /bin/sh is the dash shell, which does not support all the features bash does, such as process substitution (that <() bit). The solution: either call the script with /path/to/bash /path/to/script, or fix the shebang line (#!/path/to/bash).
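A minimal way to reproduce the difference (this assumes dash is installed, as it is on Debian/Ubuntu systems):

    $ bash -c 'cat <(echo hi)'
    hi
    $ dash -c 'cat <(echo hi)'
    dash: 1: Syntax error: "(" unexpected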
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/354014", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/222903/" ] }
354,043
I need to convert ".xlsx" files to ".xls" using a shell command. At my work we are currently using the xlsx2csv command, but now the requirement has changed and we need to convert all ".xlsx" files to ".xls" files for further calculation. Someone at my work has developed a command that can convert ".xlsx" to ".xls", but it only works for one sheet, and we have multiple sheets in one file. Thanks in advance....
If you install LibreOffice, you can use the following command:

    libreoffice --headless --convert-to xls myfile.xlsx

or just:

    libreoffice --convert-to xls myfile.xlsx

in recent versions (>= 4.5), where --convert-to implies --headless. This will create myfile.xls and keep the original myfile.xlsx, so you'll probably need to do a cleanup after you've validated that the conversion was successful.
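Since all the ".xlsx" files need converting, a simple loop over the current directory is one way to batch it (a sketch; libreoffice should also accept several files in one invocation):

    for f in *.xlsx; do
        libreoffice --headless --convert-to xls "$f"
    done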
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/354043", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/216530/" ] }
354,073
On my Linux Red Hat machines, I do the following from root:

    # su - starus
    $                  <-- now I am in starus user
    $ su - moon        <-- now I want to access moon user from starus user
    Password:

But I get prompted for a password! Please advise why I get a password prompt if I already added the following in visudo:

    moon ALL=(starus) NOPASSWD: ALL

What is wrong? I also try to run the following script as user moon, but a password is needed:

    starus@host sudo -u moon /home/USER261/test.bash
    [sudo] password for starus:
First of all, root can become any user without needing a password. That's one of the privileges of being the super user. So, with su - starus, you can switch to starus without being prompted. However, at that point, you are starus and no longer root, so you do need a password to switch to moon. The simple solution is to switch back to root first (just run exit) and then switch to moon.

Now, visudo is irrelevant here. You're not using sudo, so any changes you make there (in /etc/sudoers, the file that visudo edits) won't affect the behavior of su, only that of sudo, which is not the same program. In any case, the line you show (moon ALL=(starus) NOPASSWD: ALL) simply means that the user moon can run any command as the user starus with sudo without needing to enter a password. It doesn't mean that anyone can become moon without knowing moon's password. It just means that commands like this don't need a password:

    moon@host $ sudo -u starus command

If you are logged in as moon, you can use sudo to run a command as starus without a password.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/354073", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153544/" ] }
354,086
I am trying to push to my git repository from cygwin but in vain. It used to work. I get:

    $ git push
    ssh: Could not resolve hostname prooftheory: Name or service not known
    fatal: Could not read from remote repository.
    Please make sure you have the correct access rights
    and the repository exists.

Now, my ssh config file is readable for me:

    -rwxrwxr--+ 1 user Tartományfelhasználók 230 Dec 10  2015 /cygdrive/c/Users/user/.ssh/config

and it contains:

    Host phd
        HostName bitbucket.org
        IdentityFile ~/.ssh/id_rsa
        IdentitiesOnly yes
        User git
    Host prooftheory
        HostName bitbucket.org
        IdentityFile ~/.ssh/pt_rsa
        IdentitiesOnly yes
        User git

.git/config contains, among other things:

    [remote "origin"]
        url = ssh://git@prooftheory/gergely_/prooftheory.git
        fetch = +refs/heads/*:refs/remotes/origin/*

I can ping bitbucket.org. What am I missing here?

EDIT: https://serverfault.com/questions/518729/cygwin-ssh-issue-could-not-resolve-hostname-awshost1-hostname-nor-servname-pro says cygwin might get ssh config info from somewhere else, but it is not clear how to configure git to use ~/.ssh/config. I copied that config to ~/.ssh/ssh_config but that did not help. Unfortunately, ssh -vvv does not write which config file it reads.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/354086", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46710/" ] }
354,092
My current code looks like this: x=${y:0:40}, which limits the length of the string to 40 characters. In case the string is shorter than 40 characters, is it possible to fill the trailing places with spaces? So if my y="very short text", I would like my y to be:

    y="very short text (+25 trailing spaces) "
You should try printf:

    printf '%-40s' "$y"
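If the value should also be truncated at 40 characters, as in the original x=${y:0:40}, a precision can be added so that one printf both pads and truncates (a sketch):

    x=$(printf '%-40.40s' "$y")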
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/354092", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/222977/" ] }
354,096
I am installing a custom package to /opt/package_name, storing configuration files in /etc/opt/package_name and static files in /var/opt/package_name/static/, all following the conventions suggested by FHS. [ 1 ] [ 2 ] [ 3 ] I also need to store some logfiles. I want them to be discoverable by analysis tools, so they should also be in a conventional location. Should these go in:

    /var/log/package_name (like a system package, even though this is a custom package)
    /var/opt/package_name/log (following the /var/opt convention - but is this discoverable?)
    something else?
I would place them in /var/log/package_name; it satisfies the principle of least surprise better than /var/opt/package_name/log. I don't have a citation for this; it simply matches where I'd look for logs.

I might also forego writing my own log files, and instead log to syslog with an appropriate tag and facility. If I'm looking for clean integration with established analysis tools, I don't believe I can do better for a communications channel:

    Every generic tool with "log analysis" as a listed feature already watches syslog.
    Log file release and rotation semantics are handled for me; I don't have to set up a mechanism for logrotate to tell me to let go of the file and open a new one. I don't even have to tell logrotate about new files to rotate!
    Offloading logs to central logging servers is handled for me, if the site demands it; existing established tools like rsyslog will be in use if needed, so I don't have to contemplate implementing that feature myself.
    Access controls (POSIX and, e.g., SELinux) around the log files are already handled, so I don't need to pay as much attention to distribution-specific security semantics.

Unless I'm doing some custom binary format for my log (and even then, I prefer syslog-friendly machine-parseable text formats like JSON), I have a hard time justifying my own separate log files; analysis tools already watch syslog like a hawk.
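A sketch of the syslog route from a shell script, using logger (the tag and facility here are placeholders to adapt):

    logger -t package_name -p local0.info "service started"
    logger -t package_name -p local0.err "failed to open data file"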
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/354096", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269/" ] }
354,119
There are some commands like cd or ll that, if I run them with sudo, just "break". What is a rule of thumb to know which commands will "break" this way when a sudo command precedes them? This data can help me and other newcomers write stabler scripts.
Only external commands can be run by sudo.

Sudo

The sudo program forks (starts) a new process to launch an external command with the effective privileges of the superuser (or another user if the -u option is used). That means that no commands that are internal to the shell can be specified; this includes shell keywords, builtins, aliases, and functions. The best way to find out if a command is available as an external command (and not internal to the shell) is to run

    type -a command_name

which displays all locations containing the specified executable.

Example 1: Shell builtin

In this case, the cd command is only available as a shell builtin:

    $ type -a cd
    cd is a shell builtin

It fails when you try to run it with sudo:

    $ sudo cd /
    sudo: cd: command not found

Example 2: Alias

In this case, the ls command is external, but an alias with the same name has also been created in the user's shell:

    $ type -a ls
    ls is aliased to `ls -F --color'
    ls is /bin/ls

If I was to run sudo ls, it would not be the alias that runs as the superuser; if I wanted the -F option, it would have to be explicitly included as an option, i.e., sudo ls -F.

Example 3: Shell builtin and external command

In this case, the pwd command is provided as both a shell builtin and an external command:

    $ type -a pwd
    pwd is a shell builtin
    pwd is /bin/pwd

In this case, the external /bin/pwd command would run with sudo:

    $ sudo pwd
    /home/anthony

Other examples of commands that are often provided as both shell builtins and external commands are kill, test ([) and echo.

Run internal shell commands with sudo

If you really want to run a shell builtin with superuser privileges, you'd have to launch a shell as the external command. E.g., the following command runs bash as the superuser with the cd builtin command provided as a command line option:

    $ sudo bash -c "cd /; ls"
    bin   etc  lib    media  mnt  ntp.peers  proc  sbin  share  sys
    tmp   var  boot   dev    home lost+found misc  net   opt
    …     …

Note: Aliases can not be passed as commands to Bash using its -c option.

Shell redirection

Another issue to watch out for is that shell redirection takes place in the context of the current shell. If I try to run

    sudo /bin/echo abc > /test.file

it won't work. I get -bash: /test.file: Permission denied. While the echo command runs with superuser privileges, it prints its output to my current (non-privileged) shell and, as a regular user, I don't have permission to write to the / directory. One work-around for this is to use sudo to launch a new shell (similar to the above example):

    sudo bash -c "echo abc > /test.file"

In this case, the output redirection takes place in the context of the privileged shell (which does have permission to write to /). Another solution is to run the tee command as the superuser:

    echo abc | sudo tee /test.file
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/354119", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
354,277
When I type a nonexistent command, bash shows "command not found...". In Ubuntu, it will give advice on which package needs to be installed; in Fedora, nothing shows. So which software has this function in Fedora? I think it's not in bash-completion. PackageKit-command-not-found, suggested by Stephen Kitt:
In Fedora, this functionality is provided by the PackageKit-command-not-found package. It adds a /etc/profile.d/PackageKit.sh startup script which sets up command-not-found handling. With this in place, I get for example:

    $ evolution
    bash: evolution: command not found...
    Install package 'evolution' to provide command 'evolution'? [N/y]

It only works if DBus is running and if packagekitd is installed.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/354277", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/199344/" ] }
354,322
I have CDs for Age of Empires III and I would like to play it in a Windows 10 VM. Is this possible? I know how to insert virtual CDs (i.e., ISO files) into a VirtualBox VM (via the "Storage" settings), but physical CDs are a different story. The best solution I can think of is to share the directory where I've mounted the CDs on my Linux system with the VM via shared folders.
Yes you can, but you need to have DVD passthrough active. Go to VirtualBox's Machine > Settings > Storage and enable Passthrough for the DVD drive.

    To allow an external DVD drive to be recognized by a VirtualBox Virtual Machine (VM) it must be configured in such a way that "passthrough" is enabled. Enabling Passthrough allows the underlying operating system to pass the required commands through to the device that is connected to the Virtual Machine as opposed to the host operating system instance.

http://www.tempusfugit.ca/techwatch.ca/passthrough.html
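The same switch can also be flipped from the host's command line with VBoxManage (a sketch: "Windows 10" and "IDE Controller" must match your actual VM and storage controller names, and /dev/sr0 is a typical host drive device):

    VBoxManage storageattach "Windows 10" --storagectl "IDE Controller" \
        --port 1 --device 0 --type dvddrive --medium host:/dev/sr0 --passthrough on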
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/354322", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27613/" ] }
354,341
Say I have a string

    foobar

How can I easily produce something like [f]oobar without specifically replacing f with [f]? Use case: I accept user input of a process name, and I want to show the process details while eliminating the grep.
Do:

    sed 's/^./[&]/'

^. matches the first character of the line. In the replacement, & is expanded to the match; we are enclosing the match with []. Example:

    % sed 's/^./[&]/' <<<'foobar'
    [f]oobar
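Applied to the stated use case, the bracketed pattern still matches the target process but no longer matches the grep process's own command line (a sketch; name stands in for the user-supplied process name, and it assumes the name doesn't start with a regex special character):

    name=foobar
    ps aux | grep "$(printf '%s\n' "$name" | sed 's/^./[&]/')"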
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/354341", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150289/" ] }
354,342
I have a website, let's just call it example.com. Right now, I've got the URL http://www.example.com, which has the DNS address of 10.0.1.1(:80), which is port forwarded to 192.168.0.14 on my network. I also want to have a website at http://edu.example.com which would be hosted on my other server, but I only have one router and therefore only one IP address. I have tried using a VirtualHost in my apache config file, but it can't specify a directory on another device.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/354342", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223160/" ] }
354,364
If I do the following:

    touch /tmp/test

and then perform ls -la /tmp/, I can see the test file with 0 bytes in the directory. But how does the operating system handle the concept of 0 bytes? If I put it in layman's terms: 0 bytes is no memory at all, hence nothing is created. Creation of a file must, or should, at least require certain memory, right?
A file is (roughly) three separate things:

    An "inode", a metadata structure that keeps track of who owns the file, permissions, and a list of blocks on disk that actually contain the data.
    One or more directory entries (the file names) that point to that inode.
    The actual blocks of data themselves.

When you create an empty file, you create only the inode and a directory entry pointing to that inode. Same for sparse files (dd if=/dev/null of=sparse_file bs=10M seek=1). When you create hardlinks to an existing file, you just create additional directory entries that point to the same inode. I have simplified things here, but you get the idea.
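This is visible with stat: a freshly touched file has an inode and a size of 0, but no allocated data blocks (a sketch using GNU stat; the inode number shown is illustrative):

    $ touch /tmp/test
    $ stat -c 'size=%s blocks=%b inode=%i' /tmp/test
    size=0 blocks=0 inode=1048660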
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/354364", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/178625/" ] }
354,377
For the purpose of a forensic mission, we must get a docker image without using the famous export from a docker command. Does copying the folder /var/lib/docker/containers to another server allow us to retrieve the information without any corrupted data? Thanks.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/354377", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160856/" ] }
354,460
I'm learning to use dd by experimentally playing with its arguments. I would like to create a 10-byte file. I thought the following would work:

    dd if=/dev/zero of=./foo count=1 bs=1 obs=9 seek=1

...because of these comments from the man page:

    obs=BYTES
           write BYTES bytes at a time (default: 512)
    seek=N
           skip N obs-sized blocks at start of output

...but it does not; it creates a 2-byte file:

    >ls -l foo
    -rw-rw-r-- 1 user user 2 Mar 28 16:05 foo

My workaround has been:

    dd if=/dev/zero of=./foo count=1 bs=1 obs=1 seek=9

But for my learning, I'd like to understand why the first version does not work. Thank you.
Your command

    dd if=/dev/zero of=./foo count=1 bs=1 obs=9 seek=1

creates a two-byte file rather than a 10-byte file because of a poorly-defined interaction between bs and obs. (Call this a program bug if you like, but it's probably better defined as a documentation bug.) You are supposed to use either bs, or ibs and obs. Empirically it appears that bs overrides obs, so what gets executed is dd if=/dev/zero of=./foo count=1 bs=1 seek=1, which creates a two-byte file as you have seen. If you had used

    dd if=/dev/zero of=./foo count=1 ibs=1 obs=9 seek=1

you would have got a 10-byte file as expected. As an alternative, if you want to create an empty file that doesn't take any data space on disk you can use the counter-intuitively named truncate command:

    truncate --size=10 foo
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/354460", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/214773/" ] }
354,462
I have downloaded BASH for Windows 10. How would I navigate to a network address as I would in a Windows environment? I have seen SAMBA mentioned and have downloaded smbclient. I have tried:

    smbclient \\localhost\

I receive the error:

    ERROR: Could not determine network interfaces, you must use a interfaces config file

I am a novice user of BASH, and see this as an opportunity to be more efficient. As a bonus, please show how I could accomplish some common tasks such as copying files across a network, as well as how to authenticate, since this would likely be required for such operations.
In the latest Windows release "Fall Creators Update" it is possible to mount UNC paths, or any other filesystem that Windows can access, from within WSL. You can do this with the mount command as usual, with the filesystem "drvfs" provided by WSL:

    sudo mount -t drvfs '\\server\share' /mnt/share

Single quotes are useful around the UNC path so that you don't have to escape the backslashes. You can mount on an arbitrary directory; I've used /mnt/share as an example here, but any empty directory will do. All files will show up with full a+rwx 777 permissions. The real access rights will be checked when you try to access a file, and you can get an error at that point even if it looks like the operation should succeed. Every readable file will be treated as executable.

For locations that require credentials you have three options:

    1. Prior to mounting, navigate to the location using Windows' File Explorer and authenticate. WSL will inherit your credentials and permissions. This is the easiest way for a one-off.
    2. Use the net use command from a cmd prompt, or net.exe use from inside WSL (cd /mnt/c first to suppress a warning). You'll need something like net.exe use \\server\share <PASSWORD> /USER:<USERNAME>. You can use '*' for the password to be prompted instead. Other configurations are shown with net.exe help use.
    3. Use the Windows Credential Manager to set up a stored credential. I've never done this one.

I understand that Samba proper can be made to work under WSL as well, but since the host provides the same functionality I would use the built-in version from Windows when it's available. smbclient is primarily for FTP-style access to SMB servers and retrieving/putting individual files, and it should work when appropriately configured as usual.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/354462", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140356/" ] }
354,499
I am using the following code to write files iteratively in 20 folders for job scheduling.

    #!/bin/bash
    for i in {1..20}
    do
        cd conf$i
        cp ../nvt.mdp $PWD
        cp ../topol.top $PWD
        grompp -v -f nvt.mdp -c conf$i.gro -p topol.top -o conf_nvt$i.tpr >> nvt.log
        cat<<KHIK >> run_nvt$i.pbs
        #!/bin/bash
        #PBS -l nodes=1:ppn=16
        #PBS -l walltime=120:00:00
        #PBS -N GROMACS:TAUAT_P
        #PBS -q blaze
        #PBS -j oe
        #PBS -V
        cd \$PBS_O_WORKDIR
        export I_MPI_DEVICE=rdma
        /home/apps/ics/impi/latest/bin64/mpiexec.hydra /home/braf/md/gromacs-4.5.6/bin/mdrun_mpi -deffnm conf_nvt$i
        KHIK
        cd ..
    done

And it is giving out a bizarre error. Can you please tell me what changes I need to make?

    ./umbrnvt.sh: line 22: warning: here-document at line 9 delimited by end-of-file (wanted `KHIK')
    ./umbrnvt.sh: line 23: syntax error: unexpected end of file
You should have the closing token at the beginning of the line. So your script should be like:

    #!/bin/bash
    for i in {1..20}
    do
        cd conf$i
        cp ../nvt.mdp $PWD
        cp ../topol.top $PWD
        grompp -v -f nvt.mdp -c conf$i.gro -p topol.top -o conf_nvt$i.tpr >> nvt.log
        cat<<KHIK >> run_nvt$i.pbs
        #!/bin/bash
        #PBS -l nodes=1:ppn=16
        #PBS -l walltime=120:00:00
        #PBS -N GROMACS:TAUAT_P
        #PBS -q blaze
        #PBS -j oe
        #PBS -V
        cd \$PBS_O_WORKDIR
        export I_MPI_DEVICE=rdma
        /home/apps/ics/impi/latest/bin64/mpiexec.hydra /home/braf/md/gromacs-4.5.6/bin/mdrun_mpi -deffnm conf_nvt$i
    KHIK
        cd ..
    done

Otherwise bash will not recognise KHIK as the end of the block.
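A related option: with <<- (note the dash) bash strips leading tab characters from both the here-document body and the delimiter line, so an indented closing token also works, provided the indentation consists of real tabs rather than spaces. A minimal sketch (the two indented lines below begin with a tab):

    cat <<-KHIK > out.pbs
    	#PBS -q blaze
    	KHIK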
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/354499", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223307/" ] }
354,509
What is the function of the bash shebang? What is the difference between executing a file using ./file or sh file? How does bash understand it?
The function of the shebang is:

    Interpreter directives allow scripts and data files to be used as commands, hiding the details of their implementation from users and other programs, by removing the need to prefix scripts with their interpreter on the command line.

What is the difference between executing a file using ./file or sh file?

A Bourne shell script that is identified by the path some/path/to/foo, has the initial line #!/bin/sh -x, and is executed with parameters bar and baz as

    some/path/to/foo bar baz

provides a similar result as having actually executed the following command line instead:

    /bin/sh -x some/path/to/foo bar baz

Note: in 1980 Dennis Ritchie introduced kernel support for interpreter directives:

    The system has been changed so that if a file being executed begins with the magic characters #!, the rest of the line is understood to be the name of an interpreter for the executed file. Previously (and in fact still) the shell did much of this job; it automatically executed itself on a text file with executable mode when the text file's name was typed as a command. Putting the facility into the system gives the following benefits.

    1) It makes shell scripts more like real executable files, because they can be the subject of 'exec.'

    2) If you do a 'ps' while such a command is running, its real name appears instead of 'sh'. Likewise, accounting is done on the basis of the real name.

    3) Shell scripts can be set-user-ID. Note: Linux ignores the setuid bit on all interpreted executables (i.e. executables starting with a #! line).

    4) It is simpler to have alternate shells available; e.g. if you like the Berkeley csh there is no question about which shell is to interpret a file.

    5) It will allow other interpreters to fit in more smoothly.

More info: Wikipedia Shebang; The #! magic, details about the shebang/hash-bang.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/354509", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/219654/" ] }
354,553
I am trying to print lines using the repetition symbol {n}, but it is not working. For example, I want to print all lines whose length is 4 characters:

    awk '/^.{4}$/' test_data

The above code is not printing them. How can I fix it so that I can use the repetition symbol? I know the alternatives like

    awk '/^....$/' test_data

and

    awk 'length == 4' test_data
According to The GNU Awk User's Guide: Feature History, support for regular expression range (interval) operators was added in version 3.0 but initially required an explicit command line option:

    New command-line options:
    The --lint-old option to warn about constructs that are not available in the original Version 7 Unix version of awk (see V7/SVR3.1).
    The -m option from BWK awk. (Brian was still at Bell Laboratories at the time.) This was later removed from both his awk and from gawk.
    The --re-interval option to provide interval expressions in regexps (see Regexp Operators).
    The --traditional option was added as a better name for --compat (see Options).

In gawk 4.0, interval expressions became part of the default regular expressions. Since you are using gawk 3.x, you will need to use

    awk --re-interval '/^.{4}$/'

or

    awk --posix '/^.{4}$/'

or (thanks @StéphaneChazelas) if you want a solution that is portable, use

    POSIXLY_CORRECT=anything awk '/^.{4}$/'

(since --posix or --re-interval would cause an error in other awk implementations).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/354553", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18290/" ] }
354,566
I have two lines in the vim editor, as you can see below:

    3 àáâ
    4 aaa

Based on these two lines, I'd like to get the result below:

    'à' => 'a',
    'á' => 'a',
    'â' => 'a',

Any ideas?
If you have repetitive tasks to do, you can record a macro. For example here:

    qajxkphi' Esc la' => ' Esc la', Enter Esc q

Explanations:

    qa : start recording macro a
    jxkp : go down one line, erase-copy one character, go up, print it
    hi' Esc : go left, insert one ', go back to normal mode
    la' => ', Esc : go right, append after the current character ' => ', go back to normal mode
    la', Enter Esc : go right, append ', and a newline, and go back to normal mode
    q : stop recording

To use the macro (and confirm that it works), place yourself on the first character and press @a. The result is:

    'à' => 'a',
    áâ
    aa

and you're in the second line. Press 2@a to execute the macro twice and get:

    'à' => 'a',
    'á' => 'a',
    'â' => 'a',
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/354566", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/142331/" ] }
354,583
I'm trying to create a runtime folder at /run/gunicorn for some Gunicorn socket / PID files, which are for a Django application. I can get everything working if I manually create directories. However, I'm trying to make this a robust setup, and eventually use Ansible to automate everything. I think I have 2 options, based on this question.

Option 1 - RuntimeDirectory

I think the first option is to use RuntimeDirectory= within my systemd service file, but I can't get it to create the folder. The service file contains:

    #/etc/systemd/system/gunicorn_django_test.service
    [Unit]
    Description=gunicorn_django daemon
    After=network.target

    [Service]
    User=gunicorn
    Group=www-data
    RuntimeDirectory=gunicorn #This line is supposed to create a directory
    RuntimeDirectoryMode=755
    PIDFile=/run/gunicorn/django_test_pid
    WorkingDirectory=/vagrant/webapps/django_venv/django_test
    ExecStart=/vagrant/webapps/django_venv/bin/gunicorn --pid /run/gunicorn/django_test_pid --workers 3 --bind unix:/run/gunicorn/django_test_socket django_test.wsgi --error-logfile /var/log/gunicorn/django_test_error.log
    ExecReload=/bin/kill -s HUP $MAINPID
    ExecStop=/bin/kill -s TERM $MAINPID
    PrivateTmp=true

    [Install]
    WantedBy=multi-user.target

When I run systemctl start gunicorn_django_test.service, the service fails to start. When I snip out the exec line and run it manually, I get Error: /run/gunicorn doesn't exist. Can't create pidfile. If I create the /run/gunicorn folder manually, I can get things to work.

Option 2 - tmpfiles.d

The second option is to use tmpfiles.d to have a folder created on boot, ready for the pid / socket files. I've tried this file:

    #/etc/tmpfiles.d/gunicorn.conf
    d /run/gunicorn 0755 gunicorn www-data -

This creates a directory, but it is quickly deleted somehow, and by the time I start the service, the folder isn't available. I can manually add some PreExec mkdir commands into the service file, but I'd like to get to the bottom of why RuntimeDirectory / tmpfiles.d aren't working. Thanks.

Versions / Info: Ubuntu 16.04 Server / systemd 229 / Gunicorn 19.7.1 / runtime dir = /run
I added in PermissionsStartOnly=True and set a runtime folder per service, as suggested. I also added 0 to the start of the folder mode.

    [Unit]
    Description=gunicorn_django daemon
    After=network.target

    [Service]
    PermissionsStartOnly=True
    User=gunicorn
    Group=www-data
    RuntimeDirectory=gunicorn_django
    RuntimeDirectoryMode=0775
    PIDFile=/run/gunicorn_django/django_test_pid
    WorkingDirectory=/vagrant/webapps/django_venv/django_test
    ExecStart=/vagrant/webapps/django_venv/bin/gunicorn --pid /run/gunicorn_django/django_test_pid --workers 3 --bind unix:/run/gunicorn_django/django_test_socket django_test.wsgi --error-logfile /var/log/gunicorn/django_test_error.log
    ExecReload=/bin/kill -s HUP $MAINPID
    ExecStop=/bin/kill -s TERM $MAINPID

    [Install]
    WantedBy=multi-user.target

It's now creating a folder with the correct permissions:

    drwxrwxrw- 2 gunicorn www-data 40 Mar 30 07:11 gunicorn_django/

Thanks @quixotic and @mark-stosberg
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/354583", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223356/" ] }
354,584
I am trying to use this script:

    #!/bin/csh
    foreach SUB (1 2 3 4 5 6 7 8 9 10 11 12 13 14)
        echo $SUB
        foreach VISIT (1 2 3 4 5 6 7 8)
            echo $VISIT
            grep 'StudyDate' -f /home/colourlab/Desktop/DrummingDTI/D${SUB}/D${SUB}V${VIS}/scout/001/infodump.dat
        done
    done

but every time I receive this error message:

    1
    1
    SUB: Undefined variable.

and I don't know why.
Aside from all the reasons scripts shouldn't be written in csh, you are mixing bash syntax and csh syntax in your script. You're starting your loops with the csh foreach and trying to finish them with the bash done. The loop exit for a csh foreach is end, not done. Also, you have a variable VISIT that you are calling $VIS in your grep statement. So your script would be syntactically correct in csh with:

    #!/bin/csh
    foreach SUB (1 2 3 4 5 6 7 8 9 10 11 12 13 14)
        echo $SUB
        foreach VISIT (1 2 3 4 5 6 7 8)
            echo $VISIT
            grep 'StudyDate' -f /home/colourlab/Desktop/DrummingDTI/D${SUB}/D${SUB}V${VISIT}/scout/001/infodump.dat
        end
    end

or in bash:

    #!/bin/bash
    for SUB in 1 2 3 4 5 6 7 8 9 10 11 12 13 14; do
        echo $SUB
        for VISIT in 1 2 3 4 5 6 7 8; do
            echo $VISIT
            grep 'StudyDate' -f /home/colourlab/Desktop/DrummingDTI/D${SUB}/D${SUB}V${VISIT}/scout/001/infodump.dat
        done
    done

EDIT 2017/04/03

Here's a version of the bash script that adds a test for the file:

    #!/bin/bash
    idf_pfx="/home/colourlab/Desktop/DrummingDTI"
    idf_sfx="scout/001/infodump.dat"
    for SUB in 1 2 3 4 5 6 7 8 9 10 11 12 13 14; do
        echo $SUB
        for VISIT in 1 2 3 4 5 6 7 8; do
            echo $VISIT
            idfile="${idf_pfx}/D${SUB}/D${SUB}V${VISIT}/${idf_sfx}"
            if [ -f "${idfile}" ]; then
                grep 'StudyDate' $idfile
            else
                echo "No studydate file: $idfile"
            fi
        done
    done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/354584", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223360/" ] }
354,594
I'd like to set up ssh_config so that after just typing ssh my_hostname I end up in a specific folder, just as if I had typed cd /folder/another_one/much_much_deeper/. How can I achieve that?

EDIT: It has been marked as a duplicate of "How to ssh into dir...", yet that is not my question. I know I can execute any commands by appending them to the ssh command. My question is about the ssh_config file, not the command.
There wasn't a way to do that until OpenSSH 7.6. From the manual:

    RemoteCommand
        Specifies a command to execute on the remote machine after successfully connecting to the server. The command string extends to the end of the line, and is executed with the user's shell. Arguments to RemoteCommand accept the tokens described in the TOKENS section.

So now you can have

    RemoteCommand cd /tmp && bash

It was introduced in this commit.
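Put together as a full ~/.ssh/config entry, it might look like this (a sketch; the HostName is a placeholder, and RequestTTY is needed so the remote shell started by RemoteCommand is interactive):

    Host my_hostname
        HostName server.example.com
        RequestTTY yes
        RemoteCommand cd /folder/another_one/much_much_deeper/ && exec $SHELL -l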
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/354594", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223364/" ] }
354,610
I don't know what changed and I don't know how to set it back, but now Super+E launches baobab instead of Nemo. I have checked the keyboard shortcuts configuration, and Launchers/Home Folder is associated with Super+E. There are no custom shortcuts. So my issues are of 2 types:

    the shortcut configuration seems correct in the cinnamon panel, yet it launches the wrong application (baobab), which doesn't appear in said configuration.
    What can I check and correct to get back the default behavior: Super+E => Home folder? A package name to reinstall the default bindings would be ok as well.

EDIT: when downloading with Firefox, "show in folder" triggers nemo, but when downloading with chrome, "show in folder" triggers baobab. So chrome seems to use the same information as cinnamon to launch nemo, but which one?

PS: So far I haven't found any answer that solves this specific issue (including on askubuntu.com). lsb_release -a outputs:

    No LSB modules are available.
    Distributor ID: LinuxMint
    Description:    Linux Mint 18 Sarah
    Release:        18
    Codename:       sarah
I finally found the issue in /usr/share/applications/mimeinfo.cache, with the following entry:

    inode/directory=org.gnome.baobab.desktop;nemo.desktop;

Changing it back to:

    inode/directory=nemo.desktop;

solved the issue. It's the chrome behavior and the answer to "Change Chromium from automatically launches Nautilus with the Show In Folder command" that got me to the solution.

EDIT: /usr/share/applications/mimeinfo.cache is regenerated from the .desktop files present in the folder /usr/share/applications/. The initial error is in org.gnome.baobab.desktop and reappears each time mimeinfo.cache is regenerated. It is not clear how to fix this.
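A per-user way to pin the association, which should survive regeneration of the system cache because it is written to your own mimeapps.list instead (a sketch; most applications consult the user setting before the cache):

    xdg-mime default nemo.desktop inode/directory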
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/354610", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/192991/" ] }
354,629
I'm doing something where I'm frequently changing between two directories far away from each other in the system's file tree. Is there any way I can assign some kind of short name to each one for use with the cd command, so that I can type cd directoryA and cd directoryB, for example, instead of repeatedly typing cd C:/A/Really/Long/File/Path/Name/Makes/My/Fingers/Hurt?
For exactly two directories, use cd -:

    $ cd /tmp
    $ cd /var/tmp
    $ cd -
    /tmp
    $ cd -
    /var/tmp
    $ cd -
    /tmp
    $
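If you want actual named shortcuts rather than a two-directory toggle, plain shell variables defined in ~/.bashrc are one sketch (the names and paths here are placeholders):

    export dirA=/a/really/long/path/one
    export dirB=/another/really/long/path/two
    # then:
    cd "$dirA"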
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/354629", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223402/" ] }
354,681
I've been troubleshooting an issue with a sysVinit service not coming online properly at boot within a systemd environment. What I've found is that when no service file or overrides are present in /etc/systemd/system/ for the said service, it autostarts properly. In this case, as I understand it, systemd should be dynamically loading the startup script via reading "legacy" sysvinit scripts present on the system, although I'm not 100% clear on that. What I'm confused about is that as soon as I pass the edit --full option to systemctl for said service, a flat file is generated at /etc/systemd/system/ and said service now fails to autostart at boot. Using the edit option and trying to add any overrides also seems to cause the service to fail to boot. Examples, if needed, provided below...

Example of the system when it works

The service (on Centos), in this example called "ProgramExample", has an init script placed in /etc/init.d/programexample and also /etc/rc.d/init.d/programexample:

    # ls -l /etc/rc.d/init.d/programexample
    -rwxr-xr-x. 1 root root 2264 Mar 29 14:11 /etc/rc.d/init.d/programexample

No service file present at /etc/systemd/system/:

    # ls -lh /etc/systemd/system/programexample.service
    ls: cannot access /etc/systemd/system/programexample.service: No such file or directory

Systemctl status output in this configuration:

    # systemctl status programexample.service
    ● programexample.service - LSB: Start Program Example at boot time
       Loaded: loaded (/etc/rc.d/init.d/programexample; bad; vendor preset: disabled)
       Active: active (exited) since Wed 2017-03-29 15:53:06 CDT; 14min ago
         Docs: man:systemd-sysv-generator(8)
      Process: 1297 ExecStart=/etc/rc.d/init.d/programexample start (code=exited, status=0/SUCCESS)

    Mar 29 15:53:05 centos7-box systemd[1]: Starting LSB: Start ProgramExample at boot time...
    Mar 29 15:53:05 centos7-box su[1307]: (to programexample) root on none
    Mar 29 15:53:06 centos7-box programexample[1297]: ProgramExample (user programexample): instance name set to centos7-box
    Mar 29 15:53:06 centos7-box programexample[1297]: instance public base uri set to https://192.168.0.148.programexample.net/programexample/
    Mar 29 15:53:06 centos7-box programexample[1297]: instance timezone set to US/Central
    Mar 29 15:53:06 centos7-box programexample[1297]: starting java services
    Mar 29 15:53:06 centos7-box programexample[1297]: ProgEx server started.
    Mar 29 15:53:06 centos7-box systemd[1]: Started LSB: Start ProgramExample at boot time.

With the above configuration, without any files created/placed in /etc/systemd/system/, the ProgramExample service autostarts properly.

Once systemctl edit --full (or just edit) is used:

Once any edits are passed to systemctl, I have observed the following:

    A flat file or an override directory will be placed in /etc/systemd/system/
    Said service, in this case ProgramExample, fails to start at boot.
    I will be unable to "enable" said service using systemctl

Systemctl status output in this configuration (post edit):

    # systemctl status programexample.service
    ● programexample.service - LSB: Start ProgramExample at boot time
       Loaded: loaded (/etc/rc.d/init.d/programexample; static; vendor preset: disabled)
       Active: inactive (dead)
         Docs: man:systemd-sysv-generator(8)

This is the service file that is being generated and placed in /etc/systemd/system/ when using the edit --full option:

    # Automatically generated by systemd-sysv-generator
    [Unit]
    Documentation=man:systemd-sysv-generator(8)
    SourcePath=/etc/rc.d/init.d/programexample
    Description=LSB: Start ProgramExample at boot time
    Before=runlevel2.target
    Before=runlevel3.target
    Before=runlevel4.target
    Before=runlevel5.target
    Before=shutdown.target
    Before=adsm.service
    After=all.target
    After=network-online.target
    After=postgresql-9.4.service
    Conflicts=shutdown.target

    [Service]
    Type=forking
    Restart=no
    TimeoutSec=5min
    IgnoreSIGPIPE=no
    KillMode=process
    GuessMainPID=no
    RemainAfterExit=yes
    ExecStart=/etc/rc.d/init.d/programexample start
    ExecStop=/etc/rc.d/init.d/programexample stop
    ExecReload=/etc/rc.d/init.d/programexample reload

What is happening here? Am I correct that without the flat service file and/or service override directory in /etc/systemd/system/, systemd is dynamically reading this information from said service's init script? I've tried numerous iterations of editing the service file at /etc/systemd/system/ and leaving the default file in place, and cannot get autostarting to work or the service to go into an "enabled" state. I believe it would be preferable to have a systemd .service file for systemd configurations instead of relying on systemd to read from init script LSB headers, for compatibility and concurrency reasons, but the default file systemd is creating fails to start or enable, along with numerous other simpler iterations of the .service file I've attempted.
I've now found that the issue was that the service file automatically generated by systemd-sysv-generator lacks an install section with a WantedBy option. I added the following to my generated file at /etc/systemd/system/programexample.service, which allowed me to properly enable the service:

    [Install]
    WantedBy = multi-user.target

After that I ran systemctl daemon-reload to ensure my service file was read by systemd. Now I received a proper notification that my service was actually symlinked somewhere to be "enabled":

    [root@centos7-box ~]# systemctl enable programexample.service
    Created symlink from /etc/systemd/system/multi-user.target.wants/programexample.service to /etc/systemd/system/programexample.service.

This link helped me better understand the service file. I am not a fan of the way that systemd-sysv-generator does not include an install section with a WantedBy option by default. If systemd can dynamically read the LSB headers and properly start services at boot, why doesn't it generate the service file accordingly? I suppose some systemd growing pains are to be expected.

Update July 7 2020: Working with Debian Buster and trying to enable a SysVInit legacy service, I was presented with this wonderful message, which I believe would have saved me some time when I dealt with this issue in 2017:

    Synchronizing state of programexample.service with SysV service script with /lib/systemd/systemd-sysv-install.
    Executing: /lib/systemd/systemd-sysv-install enable programexample
    The unit files have no installation config (WantedBy=, RequiredBy=, Also=, Alias= settings in the [Install] section, and DefaultInstance= for template units). This means they are not meant to be enabled using systemctl.
    Possible reasons for having this kind of units are:
    • A unit may be statically enabled by being symlinked from another unit's .wants/ or .requires/ directory.
    • A unit's purpose may be to act as a helper for some other unit which has a requirement dependency on it.
    • A unit may be started when needed via activation (socket, path, timer, D-Bus, udev, scripted systemctl call, ...).
    • In case of template units, the unit is meant to be enabled with some instance name specified.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/354681", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92245/" ] }
354,691
I have a logfile with 2 distinct events (among others) that I need to capture. Each event generates a separate, dedicated line in the logfile with this format:

    timestamp - PID - process - event-type - event-details

I don't care much about anything but the event-details column of the file, and the data I'm expecting to receive there looks like this:

Example 1:

    { "values":{ "SPEED":"7.0" } }

Example 2:

    { "values":{ "CADENCE":"41" } }

I've been trying to write a shell script that would only read the last line of the logfile every time and, depending on the contents of the event-details column, redirect the resulting SPEED or CADENCE data to a specific text file (when I say resulting SPEED/CADENCE data, I mean the "integer" after the SPEED":" expression, for example). So far I was able to redirect the results to two different files, but:

    I have to "tail" the logfile twice in order for the script to work and...
    ...as a result of that, I have the feeling that the second file is not being updated at the same rate as the first one... as if, for some reason, I was missing some of the CADENCE events due to the order in which the script was written.

I tried using the sleep function, and also tried to "tail" more than one line at a time to try to mitigate the lack of CADENCE updates, with no luck. I just keep missing CADENCE events from time to time.

A note on the logfile behavior: Looking at the log, there are 3 events that appear most of the time, and they are always logged in the same order of appearance (CADENCE, SPEED and OTHER), and from time to time there is a 4th event. I just wanted to clarify that the missing CADENCE events have nothing to do with that 4th event's appearance.

This is a summarized version of the script that I have currently running:

    #!/bin/bash
    while :
    do
        tail -1 logfile.txt | grep -oP '(?<=SPEED":")[0-9]+' > spd.txt
        tail -1 logfile.txt | grep -oP '(?<=CADENCE":")[0-9]+' > cad.txt
    done

=======UPDATE:=======

This is the complete log line and expected output:

Example of line 1. Input (from logfile.txt):

    03-16 21:05:28.641 2797-2842/process:Service D/WEBSOCKET: receiving: { "values":{ "Speed MPH":"3.1", "Speed KPH":"4.9", "Miles":"0.551", "Kilometers":"0.886" } }

Output (sent to spd.txt):

    4.9

Example of line 2. Input (from logfile.txt):

    03-16 21:05:29.309 2797-2842/process:Service D/WEBSOCKET: receiving: { "values":{ "RPM":"27" } }

Output (sent to cad.txt):

    27
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/354691", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223435/" ] }
354,705
I just broke a machine running Linux Mint with KDE 4. Thankfully I managed to back up /home/ , so I have all my data. Now I've installed Kubuntu with KDE 5, and I'm trying to configure my Shell colorscheme to match my previous setup. I usually created a variant on "Linux Colors" in "Shell Profiles". Where can I find this data?
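A starting point, assuming the backup preserved the old home directory layout: KDE 4's Konsole kept profiles and custom colour schemes as plain files under ~/.kde/share/apps/konsole/ (*.profile and *.colorscheme), while KDE 5's Konsole reads them from ~/.local/share/konsole/. A sketch of copying them across, with ~/backup standing in for wherever the old /home was restored:

mkdir -p ~/.local/share/konsole
cp ~/backup/.kde/share/apps/konsole/*.colorscheme ~/.local/share/konsole/
cp ~/backup/.kde/share/apps/konsole/*.profile     ~/.local/share/konsole/

The colour schemes should then show up in Konsole's profile settings; profiles may need minor adjustment, since some keys changed between KDE 4 and KDE 5.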
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/354705", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48496/" ] }
354,729
I have a script that updates my google drive. I made a systemd unit to run this script, and a timer that runs the unit every 10 seconds, which both work. However, when I get disconnected from internet, the script fails and systemd stops running the it even if the internet comes back on. Is there any way I can make systemd keep on running the script, or is there a way to have systemd run the script only if there is an internet connection? Here are the files /etc/systemd/system/grive.service: [Unit]Description=Syncronize google drive folder[Service]User=my_nameExecStart=/home/my_name/bin/update-grive /etc/systemd/system/grive.timer: [Unit]Description=Timer for how often to syncronize google drive folder[Timer]OnUnitActiveSec=10sOnBootSec=10s[Install]WantedBy=timers.target /home/my_name/bin/update-grive: #!/usr/bin/env bashcd /home/my_name/gdrivegrive
Add Restart=always to the service unit, so systemd will keep bringing up the service if it crashes. On a side note, you should use OnUnitInactiveSec instead of OnUnitActiveSec . OnUnitInactiveSec=10s (or 20s) will start the service 10 seconds after it stopped. This way you make sure it doesn't get called twice, and you may avoid being banned for effectively DoSing Google.
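Applied to the units in the question, the result might look like this (a sketch of the suggested changes; run systemctl daemon-reload afterwards so systemd picks them up):

# /etc/systemd/system/grive.service
[Unit]
Description=Synchronize google drive folder

[Service]
User=my_name
ExecStart=/home/my_name/bin/update-grive
Restart=always

# /etc/systemd/system/grive.timer
[Unit]
Description=Timer for how often to synchronize google drive folder

[Timer]
OnUnitInactiveSec=20s
OnBootSec=10s

[Install]
WantedBy=timers.target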
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/354729", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176809/" ] }
354,835
I'm looking for a one-liner that will achieve something like this (with 2 or more argument strings): $ make_combinations "1 2" "a b c"1 a1 b1 c2 a2 b2 c Of course I could write nested for loops, but if there is a generic and fast way to achieve this, it would be better. This would be very useful to use with xargs. Thanks in advance!
printf "%s\n" {1,2}" "{a,b,c}1 a1 b1 c2 a2 b2 c Or echo {1,2}" "{a,b,c} | xargs -n 21 a1 b1 c2 a2 b2 c As @George Vasiliou mention in his comment when the list can be wriiten as a range you can use it as below: printf '%s\n' {1..2}" "{a..c}
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/354835", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223551/" ] }
354,888
Introduction The aim is to update a CentOS 7 system automatically. Attempt Based on this documentation the following steps were executed: yum-cron has been installed The yum-cron.conf was changed as follows: user@host ~ $ cat /etc/yum/yum-cron.conf [commands]update_cmd = defaultupdate_messages = yesdownload_updates = yesapply_updates = yes The yum-cron was: checked ( systemctl status yum-cron ) user@host ~ $ systemctl status yum-cron● yum-cron.service - Run automatic yum updates as a cron jobLoaded: loaded (/usr/lib/systemd/system/yum-cron.service; enabled; vendor preset: disabled)Active: active (exited) since enabled ( systemctl enable yum-cron ) started ( systemctl start yum-cron ) After a couple of days the yum.log was checked user@host$ sudo cat /var/log/yum.log[sudo] password for user:Feb 23 18:49:51 Installed: libreoffice5.2-freedesktop-menus-5.2.5-1.noarchMar 02 15:42:09 Installed: qpid-tools-1.35.0-1.el7.noarchMar 27 08:41:33 Installed: yum-cron-3.4.3-150.el7.centos.noarch but nothing was installed automatically. This was verified when yum upgrade indicated that multiple packages could be installed: user@host$ sudo yum upgradeTransaction Summary===================================================================================================================================================================================================================Install 3 Packages (+2 Dependent packages)Upgrade 155 PackagesRemove 2 PackagesTotal size: 488 MTotal download size: 53 MIs this ok [y/d/N]: Discussion Q: Perhaps this issue is OS version related? user@host ~ $ cat /etc/redhat-releaseCentOS Linux release 7.3.1611 (Core) A: No evidence was found that this problem is related to CentOS 7.3.1611 .
You're missing a great many parameters that are in the default yum-cron.conf . I wonder if the omission of some of those parameters is what's causing your problem. Here's one of my working yum-cron.conf setups, decrufted: # grep -v -e '^#' -e '^$' yum-cron.conf [commands]update_cmd = defaultupdate_messages = yesdownload_updates = yesapply_updates = yesrandom_sleep = 10800[emitters]system_name = Noneemit_via = stdioouput_width = 80[email]email_from = root@localhostemail_to = rootemail_host = localhost[groups]group_list = Nonegroup_package_types = mandatory, default[base]debuglevel = -2mdpolicy = group:main Also, check to be sure /etc/cron.daily/0yum-daily.cron exists: #!/bin/bash# Only run if this flag is set. The flag is created by the yum-cron init# script when the service is started -- this allows one to use chkconfig and# the standard "service stop|start" commands to enable or disable yum-cron.if [[ ! -f /var/lock/subsys/yum-cron ]]; then exit 0fi# Action!exec /usr/sbin/yum-cron Finally, make sure SELinux labels and basic unix permissions and ownership are correct. These values work: # ls -Z /etc/cron.daily/0yum-daily.cron /etc/yum/yum-cron.conf-rwxr-xr-x. root root system_u:object_r:bin_t:s0 /etc/cron.daily/0yum-daily.cron-rw-r--r--. root root unconfined_u:object_r:etc_t:s0 /etc/yum/yum-cron.conf
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/354888", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65367/" ] }
354,899
I want to switch from Ubuntu 16.04 to Debian 8.7.1. I have the correct Debian ISO image on a USB stick. However, my computer will not boot from it, nor will it boot from a CD/DVD with the image on it either. It just flat out ignores the other boot media. Oddly enough, my computer will boot from a USB stick if there is an Ubuntu ISO image on it. I am using a PC with amd64 architecture. I've looked into installing Debian from a Unix/Linux system but that looks really messy. Any advice?
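One detail worth checking, since it bites many people: Debian's installer images are hybrid ISOs and are meant to be written to the raw USB device, not to a partition on it, and not converted by helper tools (unetbootin and similar are known to break them). A sketch, assuming the stick is /dev/sdX (double-check the device name with lsblk first, as this destroys everything on the stick):

# as root; note /dev/sdX, not /dev/sdX1
cp debian-8.7.1-amd64-netinst.iso /dev/sdX
sync

(The ISO file name here is only an example.) If the machine still skips the stick, the BIOS boot order or the legacy/UEFI setting is the next thing to look at.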
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/354899", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
354,905
I am trying to redirect the sound produced by laptop to a bluetooth speaker. I know how to redirect the sound of sink inputs to different sinks. However, when I am playing music, sound is coming from my laptop but when I use the command $ pacmd list-sink-inputs I get this in response 0 sink input(s) available. I have no idea why I am getting this response. I have tested with multiple applications but I continue to receive this. Does anyone have ideas where I can look into this to investigate this further?
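A sink input only exists while a stream is actually connected to PulseAudio, so a first step is to check whether the player is talking to PulseAudio at all; it may be using ALSA directly, in which case there is no sink input to move. Some diagnostic commands that may help narrow it down (a sketch):

pactl list short sinks     # which sinks exist and which one is the default
pactl list sink-inputs     # same information as pacmd, via pactl
fuser -v /dev/snd/*        # which processes are holding ALSA devices directly

If the player shows up under fuser -v but produces no sink input, it is bypassing PulseAudio, and configuring it (or ALSA's default device) to use the pulse plugin would be the next step.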
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/354905", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/214964/" ] }
354,928
I am trying to deploy a django app. When I run apt-get update I see W: Unable to read /etc/apt/apt.conf.d/ - DirectoryExists (13: Permission denied)W: Unable to read /etc/apt/sources.list.d/ - DirectoryExists (13: Permission denied)W: Unable to read /etc/apt/sources.list - RealFileExists (13: Permission denied)E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied)E: Unable to read /var/cache/apt/ - opendir (13: Permission denied)E: Unable to read /var/cache/apt/ - opendir (13: Permission denied)E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)E: Unable to lock the administration directory (/var/lib/dpkg/), are you root? When I run sudo apt-get update I see -bash: sudo: command not found I tried to use su instead of sudo, but it behaves strangely. For example, when I run su apt-get update, nothing happens; I just see a new line: (uiserver):u78600811:~$ su apt-get update(uiserver):u78600811:~$ The same happens if I try to install some packages. What do I do? In case it is useful info, I am using Debian: (uiserver):u87600811:~$ uname -aLinux infong1559 3.14.0-ui16294-uiabi1-infong-amd64 #1 SMP Debian 3.14.79-2~ui80+4 (2016-10-20) x86_64 GNU/Linux
By default sudo is not installed on Debian, but you can install it. First enable su-mode: su - Install sudo by running: apt-get install sudo -y After that you would need to play around with users and permissions. Give sudo right to your own user. usermod -aG sudo yourusername Make sure your sudoers file have sudo group added. Run: visudo to modify sudoers fileand add following line into it (if it is missing): # Allow members of group sudo to execute any command%sudo ALL=(ALL:ALL) ALL You need to relogin or reboot device completely for changes to take effect.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/354928", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/104867/" ] }
354,943
I'm using curl to get JSON back from a rest api like this: content=$(curl -s -X GET -H "Header:Value" http://127.0.0.1:8200/etc)echo "${content}"| jq -r '.data.value' which produces the value I need. However, when I change the above code to look like this: content=$(curl -s -X GET -H "Header:Value" http://127.0.0.1:8200/etc)username=$(echo "${content}"| jq -r '.data.value')echo $username it produces nothing. How can I change this so that the username variable gets assigned the output?
Changed the code to this and it worked: content=$(curl -s -X GET -H "Header:Value" http://127.0.0.1:8200/etc) username=$( jq -r '.data.value' <<< "${content}" ) echo "${username}"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/354943", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/190199/" ] }
355,008
I want to extract pages from a PDF document such that all pages are extracted except the first one and the last one. I tried the following, but (end-1) does not work, nor does 2-end-1: pdftk 1.pdf cat 2-(end-1) output output.pdf
pdftk yourpdfile.pdf cat 2-r2 output out.pdf
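For context: in pdftk page ranges, rN counts backwards from the end of the document (r1 is the last page, r2 the one before it), so 2-r2 means "from page 2 through the second-to-last page", which is exactly "everything except the first and last". A couple of further illustrations of the same notation, with example file names:

pdftk in.pdf cat r3-r1 output last3.pdf   # the last three pages
pdftk in.pdf cat 1-r2 output nolast.pdf   # everything except the last page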
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/355008", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16920/" ] }
355,016
Our team decided to add a welcome banner to all our hosts. A team member, instead of adding the message in /etc/motd added the message with echo in ~/.cshrc . This is breaking scp between the hosts. Can someone explain how this is breaking scp ? Is cshrc loaded even when you do scp ? And how will some echo messages in it break it? I am not aware of the internal workings of scp . The message we added in ~/.cshrc : echo "##############################################################################"echo " Alert! Aler! Alert! Alert! Alert! Alert!"echo "This is a restricted box, any actions performed here will be reported to [email protected]"echo "##############################################################################"
Commands running on top of the ssh transport do not expect large amounts of output before they can start their server. This will affect a number of utilities. The solution is to have your administration team print the message only if stdout is connected to a terminal. if ( $?prompt ) then echo "Secure machine message..." echo "More warnings" echo "Etc."endif Better still, you wouldn't put this in .cshrc at all, but instead the message content itself would go in /etc/issue.net , which is displayed before login. This may need enabling in /etc/ssh/sshd_config though, with a line like this: Banner /etc/issue.net
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/355016", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/197337/" ] }
355,023
Sometimes you run a command and get a "command not found" error message. After that you try to install the package that contains that command (I think that's what happens anyway?) e.g. showmount: command not found apt-get install showmount does nothing, so I guess the showmount command is part of a package, but I don't know what that package is. How can I find out what package I need to install to get whichever command I need? I am using Kali Linux.
You can use apt-file for that (you might need to install it): apt-file search showmount This reveals that the command is in the nfs-common package. Typically when you're looking for a binary you can restrict the search by prefixing the binary with bin/ : apt-file search bin/showmount To install apt-file , run sudo apt-get install apt-filesudo apt-file update If you end up with apt-file 3.0 or later, you won’t need to update the indexes again separately (after the initial download above), they are updated whenever the main APT indexes are updated.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/355023", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/202110/" ] }
355,030
if [[ -s log.txt ]]; What does -s mean? I know -z means a zero-sized string. I cannot find any documentation on -s. What do [] or [[]] mean when writing an if condition? I have used if without [] or [[]] and it worked fine.
The -s test returns true if [...] if file exists and has a size greater than zero This is documented in the bash manual, and also in the manual for the test utility (the test may also be written if test -s file; then ). For [ ... ] and [[ ... ]] , see: Bash - If Syntax confusion
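A quick illustration of the -s test (a sketch you can run in any POSIX shell):

$ : > empty                # create an empty file
$ echo data > nonempty
$ [ -s empty ] && echo yes || echo no
no
$ [ -s nonempty ] && echo yes || echo no
yes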
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/355030", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180454/" ] }
355,065
I followed the steps here and compiled the kernel successfully in usermode: https://btrfs.wiki.kernel.org/index.php/Debugging_Btrfs_with_GDB But when I start ./linux in various ways it always gives me a very similar error: pc@linux-94q0:~/linux-4.11-rc4> ./linux root=/mntCore dump limits : soft - 0 hard - NONEChecking that ptrace can change system call numbers...OKChecking syscall emulation patch for ptrace...OKChecking advanced syscall emulation patch for ptrace...OKChecking environment variables for a tempdir...none foundChecking if /dev/shm is on tmpfs...OKChecking PROT_EXEC mmap in /dev/shm...OKAdding 33251328 bytes to physical memory to account for exec-shield gapLinux version 4.11.0-rc4 (pc@linux-94q0) (gcc version 4.8.5 (SUSE Linux) ) #1 Fri Mar 31 12:40:07 CEST 2017Built 1 zonelists in Zone order, mobility grouping on. Total pages: 16087Kernel command line: root=/mntPID hash table entries: 256 (order: -1, 2048 bytes)Dentry cache hash table entries: 8192 (order: 4, 65536 bytes)Inode-cache hash table entries: 4096 (order: 3, 32768 bytes)Memory: 26140K/65240K available (3518K kernel code, 770K rwdata, 948K rodata, 114K init, 195K bss, 39100K reserved, 0K cma-reserved)NR_IRQS:15clocksource: timer: mask: 0xffffffffffffffff max_cycles: 0x1cd42e205, max_idle_ns: 881590404426 nsCalibrating delay loop... 6966.47 BogoMIPS (lpj=34832384)pid_max: default: 32768 minimum: 301Mount-cache hash table entries: 512 (order: 0, 4096 bytes)Mountpoint-cache hash table entries: 512 (order: 0, 4096 bytes)Checking that host ptys support output SIGIO...YesChecking that host ptys support SIGIO on close...No, enabling workarounddevtmpfs: initializedUsing 2.6 host AIOclocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 nsfutex hash table entries: 256 (order: 0, 6144 bytes)xor: measuring software checksum speed 8regs : 19749.600 MB/sec 8regs_prefetch: 17312.000 MB/sec 32regs : 18694.400 MB/sec 32regs_prefetch: 17317.600 MB/secxor: using function: 8regs (19749.600 MB/sec)NET: Registered protocol family 16raid6: int64x1 gen() 4139 MB/sraid6: int64x1 xor() 2318 MB/sraid6: int64x2 gen() 3758 MB/sraid6: int64x2 xor() 2685 MB/sraid6: int64x4 gen() 3413 MB/sraid6: int64x4 xor() 2153 MB/sraid6: int64x8 gen() 2865 MB/sraid6: int64x8 xor() 1626 MB/sraid6: using algorithm int64x1 gen() 4139 MB/sraid6: .... 
xor() 2318 MB/s, rmw enabledraid6: using intx1 recovery algorithmclocksource: Switched to clocksource timerVFS: Disk quotas dquot_6.6.0VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)NET: Registered protocol family 2TCP established hash table entries: 512 (order: 0, 4096 bytes)TCP bind hash table entries: 512 (order: 0, 4096 bytes)TCP: Hash tables configured (established 512 bind 512)UDP hash table entries: 256 (order: 1, 8192 bytes)UDP-Lite hash table entries: 256 (order: 1, 8192 bytes)NET: Registered protocol family 1console [stderr0] disabledmconsole (version 2) initialized on /home/pc/.uml/y33GMV/mconsoleChecking host MADV_REMOVE support...OKworkingset: timestamp_bits=62 max_order=13 bucket_order=0io scheduler noop registeredio scheduler deadline registered (default)io scheduler mq-deadline registeredNET: Registered protocol family 17Initialized stdio console driverConsole initialized on /dev/tty0console [tty0] enabledInitializing software serial port version 1console [mc-1] enabledFailed to initialize ubd device 0 :Couldn't determine size of device's fileBtrfs loaded, crc32c=crc32c-generic, debug=onVFS: Cannot open root device "/mnt" or unknown-block(0,0): error -6Please append a correct "root=" boot option; here are the available partitions:Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)CPU: 0 PID: 1 Comm: swapper Not tainted 4.11.0-rc4 #1Stack: 6381bd80 60066344 602a250a 62cab500 602a250a 600933ba 6381bd90 60297e6f 6381beb0 60092b41 6381be30 60380ea1Call Trace: [<600933ba>] ? printk+0x0/0x94 [<6001c4d8>] show_stack+0xfe/0x158 [<60066344>] ? dump_stack_print_info+0xe1/0xea [<602a250a>] ? bust_spinlocks+0x0/0x4f [<602a250a>] ? bust_spinlocks+0x0/0x4f [<600933ba>] ? printk+0x0/0x94 [<60297e6f>] dump_stack+0x2a/0x2c [<60092b41>] panic+0x173/0x322 [<60380ea1>] ? klist_next+0x0/0xa6 [<600929ce>] ? panic+0x0/0x322 [<600cac33>] ? kfree+0x0/0x8a [<600f01da>] ? SyS_mount+0xae/0xc0 [<600933ba>] ? printk+0x0/0x94 [<600f012c>] ? SyS_mount+0x0/0xc0 [<60002378>] mount_block_root+0x356/0x374 [<6029e3f9>] ? strcpy+0x0/0x18 [<60002432>] mount_root+0x9c/0xa0 [<6029e543>] ? strncmp+0x0/0x25 [<60002614>] prepare_namespace+0x1de/0x238 [<600eb9d3>] ? SyS_dup+0x0/0x5e [<60001ee1>] kernel_init_freeable+0x300/0x31b [<600933ba>] ? printk+0x0/0x94 [<603835e9>] kernel_init+0x1c/0x14a [<6001b140>] new_thread_handler+0x81/0xa3Aborted (core dumped) I have now tried everything I could think of to satisfy the ./linux root= option but nothing seems to work. I created a root filesystem with https://buildroot.org/ , passed that as .gz, .tar, .tar.gz, and as an uncompressed folder. I put the contents of the buildroot.org output into the btrfs loop device, then right-clicked in the disks utility and created an .img file. Tried starting with that. Of course I tried all the usual options I could think of, like ./linux root=/mnt , ./linux root=/dev/loop0 I don't know what else to try. Why is this not working? I tried finding out what the -6 error code means, but it seems all the Linux kernel error codes are positive numbers. https://gist.github.com/nullunar/4553641 I really don't know what else to do, I guess I could start reading up for hours and hours about what exactly the ubd stuff means, but I was really hoping somebody could just tell me what I need to pass to the command line as my interest right now is only in debugging btrfs, not Linux in general.
From https://www.linux.com/news/how-run-linux-inside-linux-user-mode-linux : ./linux-2.6.19-rc5 ubda=FedoraCore5-x86-root_fs mem=128M The ubda parameter in this command is giving the UML kernel the name of a file to use to create the virtual machine's /dev/ubda virtual block device, which will be its root filesystem.
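Applied to the question, the invocation would look something like this (a sketch; rootfs.img stands for a file containing the root filesystem image itself, e.g. the ext2/ext4 image produced by buildroot, not a tarball of it):

./linux ubda=rootfs.img root=/dev/ubda mem=256M

Inside the UML guest the host file appears as the block device /dev/ubda, which is why root=/dev/ubda is the matching root= option.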
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/355065", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6411/" ] }
355,159
Good evening, The following is a piece of the code I'm using in a script. Launching it from an SSH session works fine; however, when it runs via cron, it displays broken pipe errors on screen. I can't reproduce it via SSH. Code: IP=$(sort --random-sort /root/ips.csv | head -n 1); nc -zv -w 2 $IP 443 2>&1 | grep succeeded >> outfile Error in screen: sort: write failed: standard output; Broken pipesort: write error Any tips/pointers? Thank you!
When head finishes after handling the first line, it exits, closing the other end of the pipe. sort may still be trying to write more, and writing to a closed pipe or socket returns the EPIPE error. But it also raises the SIGPIPE signal, killing the process, unless the signal is ignored or handled. With the signal ignored, sort sees the error, complains, and exits. If the signal is not ignored, sort just dies. We can use the trap builtin to ignore a signal from the shell, getting the error: $ trap "" PIPE$ sort bigfile | head -1 > /dev/null sort: write failed: standard output: Broken pipesort: write error But sadly, we can't use trap to un-ignore the signal and get the desired behaviour, as POSIX requires that it's not allowed to do that in non-interactive shells (scripts). It does allow it for interactive shells, but Bash's trap doesn't do it in that case either. To test: sh$ trap '' PIPE # ignore the signal sh$ PS1='another$ ' bash # run another shellanother$ trap - PIPE # try to reset the signal # it doesn't workanother$ sort bigfile |head -1 > /dev/nullsort: write failed: 'standard output': Broken pipesort: write error Instead, we could use an external tool, like a Perl one-liner to run a script or command with the signal un-ignored ( sort exits silently here): another$ perl -e '$SIG{PIPE}="DEFAULT"; exec "@ARGV"' \ 'sort bigfile |head -1' > /dev/null another$ As for your situation with cron, the reason could be that systemd apparently makes SIGPIPE ignored by default , mentioning: [SIGPIPE is] not really useful for normal daemons though, and as we try to provide a good, useful execution environment for daemons, we turn this off. Of course, shells and suchlike should turn this on again. Of course, this is also mentioned in the documentation (systemd.exec) : IgnoreSIGPIPE= Takes a boolean argument. If true, causes SIGPIPE to be ignored in the executed process. Defaults to true because SIGPIPE generally is useful only in shell pipelines. On my Debian system, /lib/systemd/system/cron.service explicitly sets IgnoreSIGPIPE=false , undoing the systemd default for cron. You may want to check if that would help in your case.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/355159", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/204855/" ] }
355,168
My question is not so much a question of computer science as it is a question of etymology. The command touch changes file access and modification times. What does 'touch' stand for?
It doesn't stand for anything; it's not an abbreviation or initialism. It's a verb. When you touch a file, you're "putting fresh fingerprints on it", updating its last-modified date (or creating it if it did not yet exist).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/355168", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114428/" ] }
355,214
I want to put a command into a shell script which will create a symlink to a directory, but this script could be run over and over, so on subsequent invocations, the command should not change anything. Here is the directory structure: % tree /tmp/test_symlink /tmp/test_symlink├── foo└── repo └── resources └── snippets ├── php.snippets ├── sh.snippets ├── snippets.snippets ├── sql.snippets └── vim.snippets I want to create a symlink in foo/ called snippets which points to the directory /tmp/test_symlink/repo/resources/snippets . So I run: % ln -sfv /tmp/test_symlink/repo/resources/snippets /tmp/test_symlink/foo/snippets'/tmp/test_symlink/foo/snippets' -> '/tmp/test_symlink/repo/resources/snippets' which gives the desired result. % tree /tmp/test_symlink /tmp/test_symlink├── foo│   └── snippets -> /tmp/test_symlink/repo/resources/snippets└── repo └── resources └── snippets ├── php.snippets ├── sh.snippets ├── snippets.snippets ├── sql.snippets └── vim.snippets 5 directories, 5 files However, when the command is run again, % ln -sfv /tmp/test_symlink/repo/resources/snippets /tmp/test_symlink/foo/snippets'/tmp/test_symlink/foo/snippets/snippets' -> '/tmp/test_symlink/repo/resources/snippets' instead of leaving the existing symlink alone, it follows it and creates a new symlink inside the real directory: % tree /tmp/test_symlink /tmp/test_symlink├── foo│   └── snippets -> /tmp/test_symlink/repo/resources/snippets└── repo └── resources └── snippets ├── php.snippets ├── sh.snippets ├── snippets -> /tmp/test_symlink/repo/resources/snippets ├── snippets.snippets ├── sql.snippets └── vim.snippets Why is this happening, and how can I modify the command so that subsequent invocations won't create this weird effect?
You should use the -T option for this, it tells ln to always treat the link name as the desired link name, never as a directory. Without this option, if you give ln a link name which exists as a directory, it creates a new link to the target inside that directory. Note that -T is a GNU-ism (at least, it’s not in POSIX), but you’re already using -v which is a GNU-ism too so I imagine that’s not an issue. Alternatively, you can just specify the parent directory as the link name, and the link will always be (re-)created there: ln -sfv /tmp/test_symlink/repo/resources/snippets /tmp/test_symlink/foo/ This works because your symlink has the same name as the target.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/355214", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106525/" ] }
355,217
I need to put the output of a command into an associative array. For example: dig mx +short google.com Will return: 20 alt1.aspmx.l.google.com.40 alt3.aspmx.l.google.com.50 alt4.aspmx.l.google.com.10 aspmx.l.google.com.30 alt2.aspmx.l.google.com. How can I create an associative array using the priorities (10,20,...) as the key and the record (aspmx.l.google.com.) as the value?
Here is one way to read that data into a bash associative array: Code: #!/usr/bin/env bashdeclare -A hostswhile IFS=" " read -r priority host ; do hosts["$priority"]="$host"done < <(dig mx +short google.com) for priority in "${!hosts[@]}" ; do echo "$priority -> ${hosts[$priority]}"done Output: 20 -> alt1.aspmx.l.google.com.10 -> aspmx.l.google.com.50 -> alt4.aspmx.l.google.com.40 -> alt3.aspmx.l.google.com.30 -> alt2.aspmx.l.google.com.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/355217", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223839/" ] }
355,226
I have a huge collection of images and I would like to put them in folders matching their first 3 characters+.jpg extension. So I would like to grab the 4_1_0002.png file (all the files starting with 4_1) and put it in the 4_1_.jpg folder. Similarly I would like to grab the 4_2_0002.png file (all the files starting with 4_2) and put it in the 4_2_.jpg folder. All those files that I would like to sort are now in one huge folder. I expected to use a find command, but I don't know how to extract the first three characters from the {} expansion parameter. find . -type f -ok echo mv {} "path/first3char.jpg" \;
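One way to do this in plain bash, taking the first three characters of each name with substring expansion; a sketch only, since the destination folder naming ("4_1_.jpg" etc.) is assumed from the question, and it is worth dry-running with echo mv first:

cd /path/to/huge/folder
for f in *.png; do
    prefix=${f:0:3}          # e.g. "4_1" from "4_1_0002.png"
    dir="${prefix}_.jpg"     # target folder name as described
    mkdir -p "$dir"
    mv -- "$f" "$dir"/
done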
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/355226", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213796/" ] }
355,229
I'm new to NetBSD, although often using pkgsrc on Ubuntu. I can build and install packages from source without any effect on Ubuntu's package dependencies. For example, even if ruby 2.2 is installed by apt-get, I can build and install ruby 2.3 into $HOME/pkg/bin by pkgsrc. I can use ruby 2.3 without any dependency problems. This feature has been very helpful to me and that is why I love pkgsrc so far. Now I have installed NetBSD 7.1 on another PC. I want to download pkgsrc as my own package manager and want to build packages into my home directory ( $HOME/pkg/bin ), without any system-wide effect as I do on Ubuntu, even though NetBSD itself uses pkgsrc. When I tried ./bootstrap --unprivileged in my home directory, it didn't work. Before I ask why, with detailed error messages, let me ask whether NetBSD is designed with this in mind: can each user download and use their own pkgsrc without any effect on the system environment? P.S. English is not my native language; please excuse typing, grammar and/or word-choice errors. UPDATE (2017/04/08): Thanks to Greg A. Woods's answer, I understood that I have to show detailed error messages. Until now, I had always installed from binaries using pkg_add with the root account. -bash-4.4$ uname -aNetBSD hello-netbsd 7.1 NetBSD 7.1 (GENERIC.201703111743Z) amd64-bash-4.4$ pkg_info -asudo-1.8.17p1 Allow others to run commands as rootbash-4.4.012 The GNU Bourne Again Shellcvs-1.12.13nb4 Concurrent Versions Systemgcc6-6.3.0 The GNU Compiler Collection (GCC) - 6 Release Series Then I logged in as a non-root user, downloaded pkgsrc and bootstrapped it. -bash-4.4$ cvs -q -z2 -d [email protected]:/cvsroot checkout -r pkgsrc-2017Q1 -P pkgsrc-bash-4.4$ cd pkgsrc/bootstrap-bash-4.4$ ./bootstrap --unprivileged......checking for gcc... nochecking for cc... nochecking for cl.exe... noconfigure: error: in `/home/vagrant/pkgsrc/bootstrap/work/bmake':configure: error: no acceptable C compiler found in $PATHSee `config.log' for more details.===> exited with status 1aborted. I modified $PATH . -bash-4.4$ vi ~/.profilePATH=$HOME/bin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/X11R7/bin:/usr/X11R6/bin:/usr/pkg/binPATH=${PATH}:/usr/pkg/sbin:/usr/games:/usr/local/bin:/usr/local/sbinGCC_PATH=/usr/pkg/gcc6/binPATH=${PATH}:${GCC_PATH}-bash-4.4$ . ~/.profile-bash-4.4$ echo $PATH/home/vagrant/bin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/X11R7/bin:/usr/X11R6/bin:/usr/pkg/bin:/usr/pkg/sbin:/usr/games:/usr/local/bin:/usr/local/sbin:/usr/pkg/gcc6/bin then tried again. -bash-4.4$ rm -fr work -bash-4.4$ ./bootstrap --unprivileged......checking for gcc... gccchecking for C compiler default output file name... configure: error: in `/home/vagrant/pkgsrc/bootstrap/work/bmake':configure: error: C compiler cannot create executablesSee `config.log' for more details.===> exited with status 77aborted. config.log was like this: -bash-4.4$ view work/bmake/config.log 1 This file contains any messages produced by compilers while 2 running configure, to aid debugging if configure makes a mistake. 3 4 It was created by bmake configure 20140214, which was 5 generated by GNU Autoconf 2.64. Invocation command line was 6 7 $ configure --prefix=/home/vagrant/pkgsrc/bootstrap/work --with-default-sys-path=/home/vagrant/pkgsrc/bootstrap/work/share/mk --with-machine-arch=x86_64 8 9 ## --------- ## 10 ## Platform.
## 11 ## --------- ## 12 13 hostname = hello-netbsd 14 uname -m = amd64 15 uname -r = 7.1 16 uname -s = NetBSD 17 uname -v = NetBSD 7.1 (GENERIC.201703111743Z) 18 19 /usr/bin/uname -p = x86_64 20 /bin/uname -X = unknown 21 22 /bin/arch = unknown 23 /usr/bin/arch -k = unknown 24 /usr/convex/getsysinfo = unknown 25 /usr/bin/hostinfo = unknown 26 /bin/machine = unknown 27 /usr/bin/oslevel = unknown 28 /bin/universe = unknown 29 30 PATH: /home/vagrant/pkg/bin 31 PATH: /home/vagrant/pkg/sbin 32 PATH: . 33 PATH: /home/vagrant/bin 34 PATH: /bin 34 PATH: /bin 35 PATH: /sbin 36 PATH: /usr/bin 37 PATH: /usr/sbin 38 PATH: /usr/X11R7/bin 39 PATH: /usr/X11R6/bin 40 PATH: /usr/pkg/bin 41 PATH: /usr/pkg/sbin 42 PATH: /usr/games 43 PATH: /usr/local/bin 44 PATH: /usr/local/sbin 45 PATH: /usr/pkg/gcc6/bin 46 PATH: /sbin 47 PATH: /usr/sbin 48 49 50 ## ----------- ## 51 ## Core tests. ## 52 ## ----------- ## 53 54 configure:2371: checking for gcc 55 configure:2387: found /usr/pkg/gcc6/bin/gcc 56 configure:2398: result: gcc 57 configure:2627: checking for C compiler version 58 configure:2636: gcc --version >&5 59 gcc (GCC) 6.3.0 60 Copyright (C) 2016 Free Software Foundation, Inc. 61 This is free software; see the source for copying conditions. There is NO 62 warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 63 64 configure:2647: $? = 0 65 configure:2636: gcc -v >&5 66 Using built-in specs. 67 COLLECT_GCC=gcc 68 COLLECT_LTO_WRAPPER=/usr/pkg/gcc6/libexec/gcc/x86_64--netbsd/6.3.0/lto-wrapper 69 Target: x86_64--netbsd 70 Configured with: ../gcc-6.3.0/configure --disable-libstdcxx-pch --enable-nls --with-libiconv-prefix=/usr --enable-__cxa_atexit --with-gxx-include-dir=/usr/pkg/gcc6/include/c++/ --enable-languages='c obj-c++ objc fortran c++' --enable-shared --enable-long-long --with-local-prefix=/usr/pkg/gcc6 --disable-libssp --enable-threads=posix --with-boot-ldflags='-static-libstdc++ -static-libgcc -Wl,-R/usr/pkg/lib ' --with-arch=nocona --with-tune=nocona --with-fpmath=sse --with-gnu-ld --with-ld=/usr/bin/ld --with-gnu-as --with-as=/usr/bin/as --prefix=/usr/pkg/gcc6 --build=x86_64--netbsd --host=x86_64--netbsd --infodir=/usr/pkg/gcc6/info --mandir=/usr/pkg/gcc6/man 71 Thread model: posix 72 gcc version 6.3.0 (GCC) 73 configure:2647: $? = 0 74 configure:2636: gcc -V >&5 75 gcc: error: unrecognized command line option '-V' 76 gcc: fatal error: no input files 77 compilation terminated. 78 configure:2647: $? = 1 79 configure:2636: gcc -qversion >&5 80 gcc: error: unrecognized command line option '-qversion'; did you mean '--version'? 81 gcc: fatal error: no input files 82 compilation terminated. 83 configure:2647: $? = 1 84 configure:2669: checking for C compiler default output file name 85 configure:2691: gcc conftest.c >&5 86 In file included from conftest.c:9:0: 87 /usr/pkg/gcc6/lib/gcc/x86_64--netbsd/6.3.0/include-fixed/stdio.h:54:23: fatal error: sys/cdefs.h: No such file or directory 88 #include <sys/cdefs.h> 89 ^ 90 compilation terminated. 91 configure:2695: $? = 1 92 configure:2732: result: 93 configure: failed program was: 94 | /* confdefs.h */ 95 | #define PACKAGE_NAME "bmake" 96 | #define PACKAGE_TARNAME "bmake" 97 | #define PACKAGE_VERSION "20140214" 98 | #define PACKAGE_STRING "bmake 20140214" 99 | #define PACKAGE_BUGREPORT "[email protected]" 100 | #define PACKAGE_URL "" 101 | /* end confdefs.h. 
*/ 102 | #include <stdio.h> 103 | int 104 | main () 105 | { 106 | FILE *f = fopen ("conftest.out", "w"); 107 | return ferror (f) || fclose (f) != 0; 108 | 109 | ; 110 | return 0; 111 | } 112 configure:2738: error: in `/home/vagrant/pkgsrc/bootstrap/work/bmake': 113 configure:: error: C compiler cannot create executables 114 See `config.log' for more details. 115 116 ## ---------------- ## 117 ## Cache variables. ## 118 ## ---------------- ## 119 120 ac_cv_env_CC_set= 121 ac_cv_env_CC_value= 122 ac_cv_env_CFLAGS_set= 123 ac_cv_env_CFLAGS_value= 124 ac_cv_env_CPPFLAGS_set= 125 ac_cv_env_CPPFLAGS_value= 126 ac_cv_env_CPP_set= 127 ac_cv_env_CPP_value= 128 ac_cv_env_LDFLAGS_set= 129 ac_cv_env_LDFLAGS_value= 130 ac_cv_env_LIBS_set= 131 ac_cv_env_LIBS_value= 132 ac_cv_env_build_alias_set= 133 ac_cv_env_build_alias_value= 134 ac_cv_env_host_alias_set= 135 ac_cv_env_host_alias_value= 136 ac_cv_env_target_alias_set= 137 ac_cv_env_target_alias_value= 138 ac_cv_prog_ac_ct_CC=gcc 139 140 ## ----------------- ## 141 ## Output variables. ## 142 ## ----------------- ## 143 144 CC='gcc' 145 CFLAGS='' 146 CPP='' 147 CPPFLAGS='' 148 DEFS='' 149 ECHO_C='' 150 ECHO_N='-n' 151 ECHO_T='' 152 EGREP='' 153 EXEEXT='' 154 GCC='' 155 GREP='' 156 INSTALL='' 157 INSTALL_DATA='' 158 INSTALL_PROGRAM='' 159 INSTALL_SCRIPT='' 160 LDFLAGS='' 161 LIBOBJS='' 162 LIBS='' 163 LTLIBOBJS='' 164 OBJEXT='' 165 PACKAGE_BUGREPORT='[email protected]' 166 PACKAGE_NAME='bmake' 167 PACKAGE_STRING='bmake 20140214' 168 PACKAGE_TARNAME='bmake' 169 PACKAGE_URL='' 170 PACKAGE_VERSION='20140214' 171 PATH_SEPARATOR=':' 172 SHELL='/bin/sh' 173 ac_ct_CC='gcc' 174 ac_exe_suffix='' 175 bindir='${exec_prefix}/bin' 176 bmake_path_max='' 177 build_alias='' 178 datadir='${datarootdir}' 179 datarootdir='${prefix}/share' 180 default_sys_path='' 181 diff_u='' 182 docdir='${datarootdir}/doc/${PACKAGE_TARNAME}' 183 dvidir='${docdir}' 184 exec_prefix='NONE' 185 filemon_h='no' 186 force_machine='' 187 host_alias='' 188 htmldir='${docdir}' 189 includedir='${prefix}/include' 190 infodir='${datarootdir}/info' 191 libdir='${exec_prefix}/lib' 192 libexecdir='${exec_prefix}/libexec' 193 localedir='${datarootdir}/locale' 194 localstatedir='${prefix}/var' 195 machine='' 196 machine_arch='' 197 mandir='${datarootdir}/man' 198 mksrc='' 199 oldincludedir='/usr/include' 200 pdfdir='${docdir}' 201 prefix='/home/vagrant/pkgsrc/bootstrap/work' 202 program_transform_name='s,x,x,' 203 psdir='${docdir}' 204 sbindir='${exec_prefix}/sbin' 205 sharedstatedir='${prefix}/com' 206 sysconfdir='${prefix}/etc' 207 target_alias='' 208 use_meta='yes' 209 210 ## ----------- ## 211 ## confdefs.h. ## 212 ## ----------- ## 213 214 /* confdefs.h */ 215 #define PACKAGE_NAME "bmake" 216 #define PACKAGE_TARNAME "bmake" 217 #define PACKAGE_VERSION "20140214" 218 #define PACKAGE_STRING "bmake 20140214" 219 #define PACKAGE_BUGREPORT "[email protected]" 220 #define PACKAGE_URL "" 221 222 configure: exit 77 I guessed some files were missing, but couldn't understand what to do. 
UPDATE 2 (2017/04/11): The following is the result of gcc -v : -bash-4.4$ gcc -vUsing built-in specs.COLLECT_GCC=gccCOLLECT_LTO_WRAPPER=/usr/pkg/gcc6/libexec/gcc/x86_64--netbsd/6.3.0/lto-wrapperTarget: x86_64--netbsdConfigured with: ../gcc-6.3.0/configure --disable-libstdcxx-pch --enable-nls --with-libiconv-prefix=/usr --enable-__cxa_atexit --with-gxx-include-dir=/usr/pkg/gcc6/include/c++/ --enable-languages='c obj-c++ objc fortran c++' --enable-shared --enable-long-long --with-local-prefix=/usr/pkg/gcc6 --disable-libssp --enable-threads=posix --with-boot-ldflags='-static-libstdc++ -static-libgcc -Wl,-R/usr/pkg/lib ' --with-arch=nocona --with-tune=nocona --with-fpmath=sse --with-gnu-ld --with-ld=/usr/bin/ld --with-gnu-as --with-as=/usr/bin/as --prefix=/usr/pkg/gcc6 --build=x86_64--netbsd --host=x86_64--netbsd --infodir=/usr/pkg/gcc6/info --mandir=/usr/pkg/gcc6/manThread model: posixgcc version 6.3.0 (GCC) Also, I chose the minimum install during the installation process. Is that wrong?
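For what it's worth, the config.log excerpt above points at the likely culprit: gcc cannot find sys/cdefs.h, a base-system header that NetBSD installs from the comp.tgz distribution set, and a minimum install leaves comp.tgz (and the headers with it) out. A sketch of adding it after the fact, as root (adjust the mirror path for your release and architecture):

cd /
ftp http://cdn.netbsd.org/pub/NetBSD/NetBSD-7.1/amd64/binary/sets/comp.tgz
tar -zxpf comp.tgz

After that, re-running ./bootstrap --unprivileged from an unprivileged account should get past the "C compiler cannot create executables" failure.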
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/355229", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124929/" ] }
355,236
For a script I'm making I need to convert the output of a command to an array. To simplify, I have made an example using echo: arr=( $(echo '"test example" "test2 example"') ) What I want is the first element of the array to be test example but when doing this: echo ${arr[0]} I get "test What do I have to do to get the result I want?
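Word splitting after a command substitution does not interpret the embedded quotes, which is why the array ends up with "test as its first element. One workaround is to let xargs do quote-aware splitting and read the result line by line; a sketch, assuming bash 4+ for readarray and values that contain no newlines or stray backslashes:

readarray -t arr < <(echo '"test example" "test2 example"' | xargs -n1)
echo "${arr[0]}"    # -> test example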
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/355236", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223850/" ] }
355,266
How can I sort input such as this using the sort command? I would like the numbers to be sorted numerically before the letters. 10111211314151617181920212223456789XY
As @terdon noticed, the inclusion of X and Y and the fact that the numbers run from 1 to 22 identifies this as a possible list of human chromosomes (which is why he says that chromosome M (mitochondrial) may be missing). To sort a list of numbers, one would usually use sort -n : $ sort -n -o list.sorted list where list is the unsorted list, and list.sorted will be the resulting sorted list. With -n , sort will perform a numerical sort on its input. However, since some of the input is not numerical, the result is probably not the intended; X and Y will appear first in the sorted list, not last (the sex chromosomes are usually listed after chromosome 22). However, if you use sort -V (for "version sorting"), you will actually get what you want: $ sort -V -o list.sorted list$ cat list.sorted12345678910111213141516171819202122XY This will probably still not work if you do add M as that would be sorted before X and not at the end (which I believe is how it's usually presented).
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/355266", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223875/" ] }
355,268
I have a 1TB HDD ( /dev/sda1 , mount point /run/media/<name>/4733A97E4133EADF ) that I'm trying to mount as read-write, but I can only get it to mount as read-only. System: $ uname -aLinux <hostname> 4.10.6-1-ARCH #1 SMP PREEMPT Mon Mar 27 08:28:22 CEST 2017 x86_64 GNU/Linux$ lsblk -fNAME FSTYPE LABEL UUID MOUNTPOINTsda └─sda1 ntfs 4733A97E4133EADF /run/media/<name>/4733A97E4133EADFsdb ├─sdb1 swap d9cea12d-5273-49ef-8950-3cd662fe63c7 [SWAP]└─sdb2 ext4 e09a8578-53e9-4c26-9a97-a47b6350a1ab /... What I've tried Adding an fstab entry to automount the drive on boot: $ cat /etc/fstab# # /etc/fstab: static file system information## <file system> <dir> <type> <options> <dump> <pass># /dev/sdb2UUID=e09a8578-53e9-4c26-9a97-a47b6350a1ab / ext4 rw,relatime,data=ordered 0 1# /dev/sdb1UUID=d9cea12d-5273-49ef-8950-3cd662fe63c7 none swap defaults 0 0# /dev/sda1UUID=4733A97E4133EADF /run/media/<name>/4733A97E4133EADF ntfs defaults,users,user 0 0 I've tried with defaults , defaults,users , and defaults,users,user . Rebooted after each change, but the drive is still mounted as read-only: $ ls -l /run/media/<name>...dr-x------ 1 root root 4096 Mar 28 17:35 4733A97E4133EADF... Manually remounting: $ sudo mount -o remount,rw /dev/sda1 /run/media/<name>/4733A97E4133EADFmount: cannot remount /dev/sda1 read-write, is write-protected$ sudo umount /run/media/<name>/4733A97E4133EADF$ sudo mount -o rw /dev/sda1 /run/media/<name>/4733A97E4133EADF At this point, the command just hung for a few minutes, so I terminated it. $ sudo umount /run/media/<name>/4733A97E4133EADF$ sudo mount /dev/sda1 /run/media/<name>/4733A97E4133EADF No change. As of yet, I have not been able to write to the drive at all (from this system, at least), even as root. chown , chmod have no effect because the filesystem is read-only. What must I do to (auto)mount this drive as read-write, with normal (non-root) user access? I have tried solutions from the following: How do I remount a filesystem as read/write? | Ask Ubuntu Ubuntu remount a root mount that's changed to ro as rw without rebooting | Server Fault
Although @ingopingo answered the question in one of the comments, I am going to write an answer with further information now. By default the Linux kernel only supports reading from the NTFS file system. For read/write access you will need a read-write NTFS driver like the ntfs-3g package from the extra repository. After installation with sudo pacman -S ntfs-3g you are able to mount your NTFS partitions the usual way with sudo mount /path/to/ntfs /mount/point . This is possible due to a symlink of /usr/bin/mount.ntfs to /usr/bin/ntfs-3g . Note: You need to have root privileges to mount the filesystem. Requirements for an exception are listed in the ntfs-3g-FAQ . Using the default settings the NTFS partition will be mounted at boot. Put the following in your /etc/fstab : /path/to/ntfs /mount/point ntfs-3g defaults 0 0 To be able to read and write with a non-root user, you have to set some additional options (username has to be changed to your username): /path/to/ntfs /mount/point ntfs-3g uid=username,gid=users,umask=0022 0 0
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/355268", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/108277/" ] }
355,407
I want to cut 30% from the top of the image. I know the thread How to cut a really large raster image into smaller chunks? but it offers no approach that works here, because I cannot find a way to give convert a distance as a percentage from zero to the end, only as absolute pixel dimensions. Pseudocode: convert -crop-y -units-percentage 0x30 heart.png Fig. 1 Input figure I can do the task with LaTeX's adjustbox, but the output in the PDF file is not really the end result, only a presentation of it. So copying the image from the PDF document yields the original image. So this approach failed.
You can crop a percentage of your image though in this case, to avoid running additional commands to get the image height and width (in order to calculate crop offset which by default is relative to top-left corner) you'll also have to crop relative to gravity (so that your crop offset position is relative to the bottom-left corner of the image): convert -gravity SouthWest -crop 100x70%x+0+0 infile.jpg outfile.jpg
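If you prefer to compute the offset explicitly instead of relying on a percentage crop, the pixel values can be taken from identify first; a sketch using the file name from the question:

w=$(identify -format '%w' heart.png)
h=$(identify -format '%h' heart.png)
off=$((h * 30 / 100))    # 30% of the height, measured from the top
convert heart.png -crop "${w}x$((h - off))+0+${off}" +repage out.png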
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/355407", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16920/" ] }
355,559
I have one quick question. Is it normal that bash (I am using 4.4.11) does not display lines/text that are separated by or end with a plain \r ? I was a bit surprised to see this behavior: $ a=$(printf "hello\ragain\rgeorge\r\n")$ echo "$a"george But the "hello again" text is still there, somehow "hidden": $ echo "$a" |od -w32 -t x1c0000000 68 65 6c 6c 6f 0d 61 67 61 69 6e 0d 67 65 6f 72 67 65 0d 0a h e l l o \r a g a i n \r g e o r g e \r \n And as long as we are just playing with bash, this is fine.... But is this a potential security risk? What if the contents of variable "a" come from the outside world and include "bad commands" instead of just hello? Another test, a bit more insecure this time: $ a=$(printf "ls;\rGeorge\n")$ echo "$a"George$ eval "$a"0 awkprof.out event-tester.log helloworld.c oneshot.sh rightclick-tester.py tmp uinput-simple.py<directory listing appears with an error message at the end for command George> Imagine a hidden rm instead of a hidden ls . Same behavior when using echo -e: $ a=$(echo -e "ls;\rGeorge\r\n"); echo "$a"George Is it me who is doing something wrong...?
Your echo "$a" prints "hello", then goes back to the beginning of the line (which is what \r does), print "again", goes back again, prints "george", goes back again, and goes to the next line ( \n ). It’s all perfectly normal, but as chepner points out, it doesn’t have anything to do with Bash: \r and \n are interpreted by the terminal, not by Bash (which is why you get the full output when you pipe the command to od ). You can see this better with $ a=$(printf "hellooooo\r again,\rgeorge\r\n")$ echo "$a" since that will leave the end of the overwritten text: georgen,o You can’t really use that to hide commands though, only their output (and only if you can be sure to overwrite with enough characters), unless using eval as you show (but using eval is generally not recommended). A more dangerous trick is using CSS to mask commands intended to be copied and pasted from web sites.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/355559", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/188385/" ] }
355,573
I'm trying to ignore a particular error message in my playbook output; the failing task produces the error message below. fatal: [192.168.0.1]: FAILED! => {"changed": false, "failed": true, "msg": "Sending passwords in plain text without SSL/TLS is extremely insecure.. Query == CHANGE MASTER TO ['MASTER_HOST=%(master_host)s', 'MASTER_USER=%(master_user)s', 'MASTER_PASSWORD=%(master_password)s', 'MASTER_LOG_FILE=%(master_log_file)s', 'MASTER_LOG_POS=%(master_log_pos)s']"} > Task: > - name: Setup Replication become: true mysql_replication:> login_host: "{{ slave_ip }}"> login_user: "user"> login_password: "***"> mode: changemaster> master_host: "{{ master_ip }}"> master_log_file: mysql-bin.000001> master_log_pos: 107> master_user: "{{ mysql_replicator_user }}"> master_password: "{{ mysql_replicator_password }}" How can I achieve this? EDITED: In reply to Marco's answer: that is exactly the problem here, I would not know in advance what error I might get. But I'm sure that if the error message contains "Sending passwords in plain text without SSL" it should be ignored; any other error should not be ignored. To put it simply: "Throw an exception only if the error message doesn't contain 'null' or SSL."
There are a couple of options regarding error handling in Ansible. You could use ignore_errors: yes attribute on your task. If you don't want to ignore all errors, you can specify what exactly constitutes an error using something like: - name: task name module: arguments ... register: out failed_when: 'error message' in out.stderr If you want to add more complex failure checks, you can split error handling off into separate task like this: - name: test shell: echo error; exit 123 register: out ignore_errors: yes- fail: msg="{{ out.stdout }}" when: "out.rc != 0 and 'error' not in out.stdout" In this example first task fails with return code 123 and prints "error" on it's standard output. This will be registered, but ignored. Second task analyzes output values and fails only if return code is different than zero AND standard output does NOT contain string "error". You can read more details in Ansible documentation: https://docs.ansible.com/ansible/playbooks_error_handling.html
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/355573", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31507/" ] }
355,593
I've created a regex which I need to run with grep. I'm pretty sure the regex is fine, as it works with online regex tools. However, when I run grep -r -P -o -h '(?<=(?<!def )my_method )(["'])(?:(?=(\\?))\2.)*?\1' I get the error Syntax error: ")" unexpected .
Your regular expression is quoted with single quotes, but it also contains a single quote. The single quote in ["'] needs to be escaped, or it will signal the end of the quoted string to the shell. This will fix it: grep -r -P -o -h '(?<=(?<!def )my_method )(["'\''])(?:(?=(\\?))\2.)*?\1'# ^^^^ With ["'\''] , the first ' ends the first part of the string, the \' inserts a literal single quote, and the last ' starts a new single quoted string that will be concatenated with the previous bits. Only the middle single quote will end up in the regular expression itself, the other two will be removed by the shell.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/355593", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140978/" ] }
355,610
You should never paste from web to your terminal . Instead, you should paste into your text editor, check the command and then paste it into the terminal. That's OK, but what if Vim is my text editor? Could one forge content that switches Vim to command mode and executes a malicious command?
Short answer: In many situations, Vim is vulnerable to this kind of attack (when pasting text in Insert mode).

Proof of concept

Using the linked article as a starting point, I was able to quickly create a web page with the following code, using HTML span elements and CSS to hide the middle part of the text so that only ls -la is visible to the casual viewer (not viewing the source). Note: the ^[ is the Escape character and the ^M is the carriage return character. Stack Exchange sanitises user input and protects against hiding of content using CSS, so I've uploaded the proof of concept .

ls ^[:echom "This could be a silent command."^Mi -la

If you were in Insert mode and pasted this text into terminal Vim (with some qualifiers, see below) you would see ls -la , but if you run the :messages command, you can see the results of the hidden Vim command.

Defence

To defend against this attack it's best to stay in Normal mode and to paste using "*p or "+p . In Normal mode, when putting text from a register with p , the full text (including the hidden part) is pasted. The same doesn't happen in Insert mode (even if :set paste has been set).

Bracketed paste mode

Recent versions of Vim support bracketed paste mode, which mitigates this type of copy-paste attack. Sato Katsura has clarified that "Support for bracketed paste appeared in Vim 8.0.210, and was most recently fixed in version 8.0.303 (released on 2nd February 2017)". Note: As I understand it, versions of Vim with support for bracketed paste mode should protect you when pasting using Ctrl - Shift - V (most GNU/Linux desktop environments), Ctrl - V (MS Windows), Command - V (Mac OS X), Shift - Insert or a mouse middle-click.

Testing

I did some testing from a Lubuntu 16.04 desktop machine later but my results were confusing and inconclusive. I've since realised that this is because I always use GNU screen, but it turns out that screen filters the escape sequence used to enable/disable bracketed paste mode (there is a patch but it looks like it was submitted at a time when the project was not being actively maintained). In my testing, the proof of concept always works when running Vim via GNU screen, regardless of whether Vim or the terminal emulator support bracketed paste mode. Further testing would be useful but, so far, I found that support for bracketed paste mode by the terminal emulator blocks my Proof of Concept - as long as GNU screen isn't blocking the relevant escape sequences. However, user nneonneo reports that careful crafting of escape sequences may be used to exit bracketed paste mode. Note that even with an up-to-date version of Vim, the Proof of Concept always works if the user pastes from the * register while in Insert mode by typing Ctrl - R * . This also applies to GVim, which can differentiate between typed and pasted input. In this case, Vim leaves it to the user to trust the contents of their registers. So don't ever use this method when pasting from an untrusted source (it's something I often do - but I've now started training myself not to).

Related links

What you see is not what you copy (from 2009, first mention of this kind of exploit that I found)
How can I protect myself from this kind of clipboard abuse?
Recent discussion on vim_dev mailing list (Jan 2017)

Conclusion

Use Normal mode when pasting text (from the + or * registers). … or use Emacs. I hear it's a decent operating system. :)
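A quick sanity check for your own setup (a sketch; the t_BE option assumes a Vim 8 build with bracketed-paste support):

# Bracketed paste needs Vim 8.0.210 or later:
vim --version | head -n 1
# Inside Vim, this should show the escape sequence used to enable
# bracketed paste, if your build/terminal combination supports it:
:set t_BE?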
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/355610", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6117/" ] }
355,640
In order to ssh into my work computer from home, let's call it C, I have to do the following:

ssh -t user@B ssh C

B is a server that I can connect to from home, but C can only be connected to from B. This works fine. If I want to copy a file that is on C to my home computer using scp , what command do I need from my home computer?
I'd suggest the following in your .ssh/config :

Host C
  User user
  ProxyCommand ssh -W %h:%p user@B

It's much safer if host B is untrusted, and it works for scp and sftp.
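With that block in place, the hop through B is transparent, so from home you can simply run (the remote path is a placeholder):

scp C:/path/to/file .
sftp C
ssh C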
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/355640", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/224142/" ] }
355,642
I have an external drive with Linux Mint 18 on it. It boots to my desktop, which is where I used the live CD to install the OS onto the external drive. However, when I try to boot this external drive from my laptop, it will not boot. Now, I changed the boot menu, made sure I set "Secure Boot" to disabled, and verified that "UEFI boot" is set to enabled, and I still cannot get the drive to boot. I used EasyBCD to install a grub, but that also puts me into the grub menu upon boot. I am not able to see the drive in Windows Explorer, but it is in Manage Disks. I tried all of the EasyBCD grub options to no avail. Am I missing something? I do not want to take a chance and install the grub from the command line when it first boots (I get this message:

Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists possible
device or file completions.

) because I do not want to take the chance of wiping out my Windows boot, or more. When I boot it to my desktop, I get the options from grub asking whether to boot to Linux or Windows. What am I missing? Can I just add a grub through EasyBCD, and if so, which procedure is it? I would like to get this grub on my laptop so that when it boots, I get the option to boot to either Linux or Windows when the drive is plugged in. I want to be able to plug this drive into any computer and be able to boot into the Linux OS on this external drive, even if I have to change the BIOS settings upon boot. That does not bother me. I was also thinking of just reinstalling from the live CD again, only this time, using my laptop to perform the install to "the same" external drive I have Linux Mint 18 on right now. Basically overwriting the OS with the same OS. This way, the grub is on my laptop as well. However, when I do this, I was thinking about removing the two drives I have in the laptop before installing. My question about this is: if I do remove the drives, will this work? Because I didn't remove the drive from my desktop when I installed the initial OS on the external drive. Does this grub play a role within the C:/? Any help would be greatly appreciated.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/355642", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/224148/" ] }
355,664
According to a computation I did on the data from ifconfig , my ethernet connection between my router and computer averages 1298 bytes/packet for TX (close to the MTU of 1500) and only 131 bytes/packet for RX. What could cause such a large discrepancy in the average TX vs. RX packet sizes?
One possibility is that if you are sending data out predominantly, most of the packets coming back to your system will be ACKs, and those are going to be much smaller than the PUSH you're sending.
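You can verify this yourself by recomputing the averages from the kernel's per-interface counters instead of parsing ifconfig output (a sketch; the interface name is a placeholder):

iface=eth0
echo "RX avg: $(( $(cat /sys/class/net/$iface/statistics/rx_bytes) / $(cat /sys/class/net/$iface/statistics/rx_packets) )) bytes/packet"
echo "TX avg: $(( $(cat /sys/class/net/$iface/statistics/tx_bytes) / $(cat /sys/class/net/$iface/statistics/tx_packets) )) bytes/packet"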
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/355664", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37153/" ] }
355,763
Situation: I need a filesystem on thumbdrives that can be used across Windows and Linux.

Problem: By default, the common filesystems between Windows and Linux are just exFAT and NTFS (at least with the more recent kernels).

Question: In terms of performance on Linux (since my base OS is Linux), which is the better filesystem?

Additional information: If there is another filesystem that you think is better and satisfies the situation, I am open to hearing it.

EDIT 14/4/2020: exFAT is being integrated into the Linux kernel and may provide better performance in comparison to NTFS (I have since learnt that the packages that read-write to NTFS partitions are not the fastest [granted, it is a great interface]). The bottom line is still: if you need the journal to prevent simple corruption, go NTFS.

EDIT 18/9/2021: NTFS is now being integrated into the Linux kernel (soon), and perhaps this will mean that NTFS performance will be much faster due to less overhead than when it was a userland module.

EDIT 15/6/2022: The NTFS3 kernel driver is officially part of the Linux kernel as of version 5.15 (released November 2021). Will do some testing and update this question with results.
NTFS is a Microsoft proprietary filesystem. All exFAT patents were released to the Open Invention Network, and it has had a fully functional in-kernel Linux driver since version 5.4 (2019). [1] exFAT, also called FAT64, is a very simple filesystem, practically an extension of FAT32; due to its simplicity it is well implemented in Linux and very fast. But because of that simple structure it is easily affected by fragmentation, so performance can degrade with use. exFAT doesn't support journaling, which means it needs a full check after an unclean shutdown. NTFS is slower than exFAT, especially on Linux, but it's more resistant to fragmentation. Due to its proprietary nature it's not as well implemented on Linux as on Windows, but in my experience it works quite well. In case of corruption, NTFS can easily be repaired under Windows (even on Linux there's ntfsfix ) and there are lots of tools able to recover lost files. Personally, I prefer NTFS for its reliability. Another option is to use ext4 and mount it under Windows with Ext2Fsd ; ext4 is better on Linux, but the driver is not well implemented on Windows. Ext2Fsd doesn't fully support journaling, so writing under Windows carries some risk, but ext is easier to repair under Linux than exFAT.
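For reference, either filesystem can be created on the stick from Linux; a sketch, where the device name is a placeholder you must double-check with lsblk first:

sudo mkfs.exfat -n USBDRIVE /dev/sdX1    # needs exfat-utils/exfatprogs
sudo mkfs.ntfs -Q -L USBDRIVE /dev/sdX1  # needs ntfs-3g; -Q = quick format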
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/355763", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/208121/" ] }
355,775
On Debian Jessie, using php5.6 and telnet version:

$ dpkg -l | grep telnet
ii  telnet  0.17-36  amd64  The telnet client

I have written a php script to listen on port 23 for incoming tcp connections. For testing, I telnet into it; however, I have noticed that it actually makes a difference whether I telnet into it like this:

$ telnet localhost 23

vs like this:

$ telnet localhost

But according to man telnet , it should not make a difference:

port    Specifies a port number or service name to contact. If not
        specified, the telnet port (23) is used.

If I do not specify the port, then I get some weird noise on the line. Or maybe it's not noise? But if I do specify the port then I do not get this noise on the line. The noise is the following set of ascii characters:

<FF><FD><03><FF><FB><18><FF><FB><1F><FF><FB><20><FF><FB><21><FF><FB><22><FF><FB><27><FF><FD><05>

And just in case this is due to a bug in my server-side code, here is a cut-down version of the script, which does exhibit the noise (though I don't think there are any bugs in the code, I just include this because someone is bound to ask):

#!/usr/bin/php
<?php
set_time_limit(0); // infinite execution time for this script

define("LISTEN_ADDRESS", "127.0.0.1");

$sock = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_set_option($sock, SOL_SOCKET, SO_RCVTIMEO, array('sec' => 30, 'usec' => 0)); // timeout after 30 sec
socket_bind($sock, LISTEN_ADDRESS, 23); // port = 23
socket_listen($sock);

echo "waiting for a client to connect...\n";

// accept incoming requests and handle them as child processes
// block for 30 seconds or until there is a connection.
$client = socket_accept($sock); //get the handle to this client
echo "got a connection. client handle is $client\n";

$raw_data = socket_read($client, 1024);
$human_readable_data = human_str($raw_data);
echo "raw data: [$raw_data], human readable data: [$human_readable_data]\n";

echo "closing the connection\n";
socket_close($client);
socket_close($sock);

function human_str($str)
{
    $strlen = strlen($str);
    $new_str = ""; // init
    for($i = 0; $i < $strlen; $i++)
    {
        $new_str .= sprintf("<%02X>", ord($str[$i]));
    }
    return $new_str;
}
?>

And the output from the script (from connecting like so: telnet localhost ) is:

waiting for a client to connect...
got a connection. client handle is Resource id #5
raw data: [�������� ��!��"��'��], human readable data: [<FF><FD><03><FF><FB><18><FF><FB><1F><FF><FB><20><FF><FB><21><FF><FB><22><FF><FB><27><FF><FD><05>]
closing the connection

But when connecting like telnet localhost 23 (and issuing the word hi ) the output is:

waiting for a client to connect...
got a connection. client handle is Resource id #5
raw data: [hi], human readable data: [<68><69><0D><0A>]
closing the connection

So my question is whether this is expected behavior from the telnet client, or whether this is noise? It is very consistent - it's always the same data - so it could be some kind of handshake? Here is the "noise" string again with spaces and without spaces, in case it's more useful:

FFFD03FFFB18FFFB1FFFFB20FFFB21FFFB22FFFB27FFFD05
FF FD 03 FF FB 18 FF FB 1F FF FB 20 FF FB 21 FF FB 22 FF FB 27 FF FD 05
telnet is not netcat . The telnet protocol is more than raw TCP. Among other things it can have a number of options , and the "noise" you're seeing is the negotiation of these options between your client and the server. When you specify a port you don't see any noise because according to the manual: When connecting to a non-standard port, telnet omits any automatic initiation of TELNET options. When the port number is preceded by a minus sign, the initial option negotiation is done. So apparently your implementation of telnet disables option negotiation when you specify a port (even when the port is 23), and re-enables it when the port is preceded by a minus sign. On a more general note, it's generally safe to forget about telnet these days. Use netcat instead if you need a simple plain TCP client (or server, for that matter).
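An easy way to confirm this against your PHP server: connect with a raw TCP client instead, and the extra bytes never appear, because no option negotiation takes place (a minimal check):

nc localhost 23

Type hi and the server should log only the bytes you actually typed.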
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/355775", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5451/" ] }
355,781
I have a Ubuntu server (16.10) at home and was wondering if there is a way to turn it on remotely when I am away from home? I set up Wake-On-LAN but that only seems to work when I'm using another computer connected to the same network as my Ubuntu server. Any ideas on getting WOL working remotely?
First, the fact that your computer is running Ubuntu when powered on is unrelated to Wake-on-LAN (WOL) functionality. Second, WOL uses Ethernet frames with a particular format. Third, Ethernet frames are not routed outside of the local network segment. In the case of the Internet, intermediate networks might not even use Ethernet at all. The consequence of the second and third points is that, in order to send a WOL request to a computer on a network, you need to do so from another system on the local network segment. It is not possible to directly issue WOL requests over the Internet. Of course, you can do something like what CyberFonic suggests and have a small, low-power system on the local network segment that you can use to issue a WOL request. But in that case, the WOL request is really issued by another system on the local network segment; you just happen to access that system over the Internet.
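As a sketch of what that low-power box on the LAN would run (the MAC address is a placeholder; requires the wakeonlan or etherwake package):

wakeonlan AA:BB:CC:DD:EE:FF
# or:
sudo etherwake -i eth0 AA:BB:CC:DD:EE:FF

You would then SSH to that box over the Internet and issue the command from there.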
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/355781", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7317/" ] }
355,816
Following input:

A 13
A 12
B 17
C 33
D 344
C 24
A 5
C 99

I want to get only the lines where column one is unique:

B 17
D 344

A solution with awk would be nice, but something else is acceptable as well.
With awk :

awk 'NR==FNR { a[$1]++ } NR!=FNR && a[$1]==1' file file

(the filename is passed twice). Edit: If the file comes from stdin you need a temporary copy. Something like this:

tmp="$( mktemp -t "${0##*/}"_"$$"_.XXXXXXXX )" && \
  trap 'rm -f "$tmp"' 0 HUP INT QUIT TERM || exit 1
... | tee "$tmp" | awk '...' - "$tmp"
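With the sample data from the question saved as file , the first command gives:

$ awk 'NR==FNR { a[$1]++ } NR!=FNR && a[$1]==1' file file
B 17
D 344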
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/355816", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/200923/" ] }
355,848
Environment:

# uname -a
Linux shrimpwagon 3.16.0-4-amd64 #1 SMP Debian 3.16.39-1+deb8u1 (2017-02-22) x86_64 GNU/Linux

I have already installed:

# apt-get install strongswan xl2tpd

I'm trying to connect to a Meraki VPN. I spoke to a Meraki tech and he said that it looks like it is not authenticating, but didn't give me much more detail:

# ipsec up L2TP-PSK
generating QUICK_MODE request 2711688330 [ HASH SA No ID ID NAT-OA NAT-OA ]
sending packet: from 10.0.0.4[4500] to 50.123.152.194[4500] (252 bytes)
received packet: from 50.123.152.194[4500] to 10.0.0.4[4500] (68 bytes)
parsed INFORMATIONAL_V1 request 2555305796 [ HASH N(NO_PROP) ]
received NO_PROPOSAL_CHOSEN error notify
establishing connection 'L2TP-PSK' failed

ipsec.conf:

config setup
    virtual_private=%v4:10.0.0.0/8
#   nat_traversal=yes
    protostack=auto
    oe=off
    plutoopts="--interface=eth0"

conn L2TP-PSK
    keyexchange=ikev1
    ike=aes128-sha1-modp1024,3des-sha1-modp1024!
    phase2=ah
    phase2alg=aes128-sha1-modp1024,3des-sha1-modp1024!
    authby=secret
    aggrmode=yes
    pfs=no
    auto=add
    keyingtries=2
#   dpddelay=30
#   dpdtimeout=120
#   dpdaction=clear
#   rekey=yes
    ikelifetime=8h
    keylife=1h
    type=transport
    left=%defaultroute
#   leftnexthop=%defaultroute
#   leftprotoport=udp/l2tp
    right=50.123.152.194
    rightsubnet=10.2.150.0/24

ipsec.secrets:

%any %any : PSK "****"

xl2tpd.conf:

[lac vpn-connection]
lns = 50.123.152.194
;refuse chap = yes
;refuse pap = no
;require authentication = yes
;name = vpn-server
ppp debug = yes
pppoptfile = /etc/ppp/options.l2tpd.client
length bit = yes

options.l2tpd.client:

ipcp-accept-local
ipcp-accept-remote
refuse-eap
require-mschap-v2
noccp
noauth
idle 1800
mtu 1410
mru 1410
defaultroute
usepeerdns
debug
lock
connect-delay 5000
name swelch
password ****

I have gotten most of my instructions from this site: https://www.elastichosts.com/blog/linux-l2tpipsec-vpn-client/ I did have to put it into aggressive mode, specify ikev1 and set the ike algorithms. Once I did that I was able to start communicating with the MX. But I'm getting this error now and I am at a total loss. Thanks in advance!
When connecting to a Meraki Client VPN, only protocols that have been removed from the strongSwan default negotiation list are supported (because the SWEET32 birthday attack is possible against some of them), so you have to specify them explicitly (as you have). If you install ike-scan and run it against your Meraki "server":

sudo ipsec stop; sudo service xl2tpd stop; sudo ike-scan YOUR.SERVER.IP

you can see what the default protocol is. I'm fairly confident it is 3des-sha1-modp1024 like you have above, though my (NetworkManager-generated) ipsec.conf doesn't have the phase2 and phase2alg lines, but an esp line instead. Here is the snippet from my working config with the protocols:

keyexchange=ikev1
ike=3des-sha1-modp1024!
esp=3des-sha1!

Sidenote: this probably doesn't matter for you since you are using the CLI, but I'm using a PPA for the NetworkManager L2TP plugin (ppa:nm-l2tp/network-manager-l2tp); its GUI refers to Phase 1 and Phase 2, which map to the ike and esp lines above in the generated ipsec config. I used this blog post.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/355848", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40160/" ] }
355,853
I have multiple serial ports with devices connected to each of them. They are listed as /dev/ttyUSB* . Using a Python script, I need to make sure that no other process is using any of these before I run a kermit script login_init (so that access is not denied). I tried the ps and lsof commands. lsof gave the following output:

lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
      Output information may be incomplete.
COMMAND     PID  USER  FD  TYPE DEVICE SIZE/OFF NODE NAME
login_ini 13395 user4   4u  CHR  188,9      0t0  512 /dev/ttyUSB9
python    14410 user4   6u  CHR  188,9      0t0  512 /dev/ttyUSB9

I got the pids of the processes alright, but when I give the killall command, it says no process found:

user4@user-pc-4:~/Scripts$ killall -9 13395
13395: no process found
user4@user-pc-4:~/Scripts$ killall -9 14410
14410: no process found

Is this the right and the only way, or are there better ways to do it?
killall expects a substring of the program's name as argument. To kill a process by its process ID, use kill . You can directly kill all the processes that have a file open with the command fuser . fuser -k /dev/ttyUSB9
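If you'd rather see what is holding the device before killing anything, fuser can list the processes first; note that fuser -k sends SIGKILL by default:

fuser -v /dev/ttyUSB9        # show PID, user and command name
fuser -k -TERM /dev/ttyUSB9  # or kill with a gentler signal than the default SIGKILL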
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/355853", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138465/" ] }
355,854
I have a Windows machine (Server 2012). I have a shared location on it, restricted to my user ID (\\my_windows\test), and a few files inside that test folder. I want to upload or download files to this location using curl. Is this possible? All the examples I have seen so far use http or https as the URL for curl. I am able to open this network location in Mozilla with 'file://///my_windows/test'. I want to curl to this location. How do I do it, or is there any way I can make this location open with an http URL? Thanks
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/355854", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/224290/" ] }
355,855
I have a file:

AC AF AN
3 0.375 8
3 0.375 8

I want the output as:

AC AF AN
AC=3 AF=0.375 AN=8
AC=3 AF=0.375 AN=8

Is there any unix command for that?
awk '
  NR==1 {split($0,a); $1=$1}
  NR>1  {for(i=1;i<=NF;i++) $i=a[i]"="$i}
  1' OFS='\t' yourfile

Explanation:

- split the first record (header row) into an array, based on the default field separator; reassign $1 so that the record gets written with the new output field separator
- for the remaining records, loop over the fields, prepending each field value with the array element corresponding to the field index, separated by =
- print records with tab as the output field separator
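Run against the file from the question, this produces (tab-separated):

$ awk 'NR==1 {split($0,a); $1=$1} NR>1 {for(i=1;i<=NF;i++) $i=a[i]"="$i} 1' OFS='\t' yourfile
AC      AF      AN
AC=3    AF=0.375        AN=8
AC=3    AF=0.375        AN=8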
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/355855", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148248/" ] }
355,918
I was trying to use the mv command but I made a mistake. I had a file called relazione in a directory and I was trying to move it to another directory with the same name, so I typed: mv relazione /relazione The terminal didn't let me, so I typed: sudo mv relazione /relazione It worked. Then I realised that nothing had moved. I did everything while I was working with gdl . I would like to know what I have done and whether this can somehow cause a problem with gdl .
Files and directories on Unix-like systems (including Linux) are organized in to a tree. At the bottom (or the top, if you're a computer scientist—they have funny trees) is the trunk or "root directory". The path for that is / . From that, you can build other paths: /relazione is a directory that is immediately off the root. Normally, your personal files are somewhere inside your home directory (typically /home/username —so home is off of the root directory, then username is off of home . ) What you did isn't likely to break anything (though it may make it hard for you to find your files—e.g., if you use a GUI, it'll start looking in your home directory, or maybe even a directory inside that). If you're using a file indexer, those files will probably no longer be indexed. Etc. You can just move it back, with something like: sudo mv -i /relazione ~/relazione ~/ is a quick way to specify your home directory (to save typing). However, there is something that will break your computer : that habit of running things with sudo . When you get an error trying to run a command, there is a reason for that! Permissions are there to (among other things) protect the system from destruction, and sudo removes all restrictions. You should use it as sparingly as possible, and only when you understand the command you're about to run.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/355918", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/224335/" ] }
355,928
Does the AWK language support arithmetic for complex numbers? If yes, how do I define an imaginary unit?
You can always define complex numbers as an array of two numbers (the real and imaginary part). You'd need to define all the arithmetic operators by hand:

function cset(x, real, imaginary) {
  x["real"] = real
  x["imaginary"] = imaginary
}
function cadd(n1, n2, result) {
  result["real"] = n1["real"] + n2["real"]
  result["imaginary"] = n1["imaginary"] + n2["imaginary"]
}
function cmult(n1, n2, result) {
  result["real"] = n1["real"] * n2["real"] - n1["imaginary"] * n2["imaginary"]
  result["imaginary"] = n1["real"] * n2["imaginary"] + n2["real"] * n1["imaginary"]
}
function c2a(x, tmp) {
  if (x["real"]) {
    tmp = x["real"]
    if (x["imaginary"] > 0) tmp = tmp "+"
  }
  if (x["imaginary"]) {
    if (x["imaginary"] == -1) tmp = tmp "-i"
    else if (x["imaginary"] == 1) tmp = tmp "i"
    else tmp = tmp x["imaginary"] "i"
  }
  if (tmp == "") tmp = "0"
  return "(" tmp ")"
}
BEGIN {
  cset(i, 0, 1)
  cmult(i, i, i2)
  printf "%s * %s = %s\n", c2a(i), c2a(i), c2a(i2)
  cset(x, 1, 2)
  cset(y, 0, 4)
  cadd(x, y, xy)
  printf "%s + %s = %s\n", c2a(x), c2a(y), c2a(xy)
}

Which would output:

(i) * (i) = (-1)
(1+2i) + (4i) = (1+6i)

For languages with native support for complex numbers, see:

python:
$ python -c 'print(1j*1j)'
(-1+0j)

octave:
$ octave --eval 'i*i'
ans = -1

calc (apcalc package on Debian):
$ calc '1i * 1i'
        -1

R:
$ Rscript -e '1i*1i'
[1] -1+0i
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/355928", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52830/" ] }
355,956
If I execute banner ABC I get: # ###### ##### # # # # # # # # # # ## # ###### ######## # # ## # # # # ## # ###### ##### and another program, figlet , has more elaborate fonts and mechanisms for joining adjacent characters, e.g.: _ ____ ____ / \ | __ ) / ___| / _ \ | _ \| | / ___ \| |_) | |___ /_/ \_\____/ \____| but what if I want to have each letter printed using just that letter, i.e.: A BBBBBB CCCCC A A B B C C A A B B CA A BBBBBB CAAAAAAA B B CA A B B C CA A BBBBBB CCCCC or better yet, something more compact, like: A BBBB CCCC A A B B C A A BBBB C AAAAA B B C A A BBBB CCCC ? What's the simplest way to make that happen?
There are four optional fonts for figlet which use the single-ASCII-character letters themselves to draw larger versions of these letter: Either here ( alphabet , letters and tanja ) here (same) or here ( letter ): alphabet b AA BBBB CCC b A A B B C aa bbb ccc AAAA BBBB C a a b b c A A B B C aaa bbb ccc A A BBBB CCC letter A BBBB CCC A BBBB CCC A A B B C C A A B B C C AAAAA BBBB C AAAAA BBBB C A A B B C C A A B B C C A A BBBB CCC A A BBBB CCC letters bb AAA BBBBB CCCCC aa aa bb cccc AAAAA BB B CC C aa aaa bbbbbb cc AA AA BBBBBB CC aa aaa bb bb cc AAAAAAA BB BB CC C aaa aa bbbbbb ccccc AA AA BBBBBB CCCCC tanja b) A)aa B)bbbb C)ccc b) A) aa B) bb C) cc a)AAAA b)BBBB c)CCCC A) aa B)bbbb C) a)AAA b) BB c) A)aaaaaa B) bb C) a) A b) BB c) A) aa B) bb C) cc a)AAAA b)BBBB c)CCCC A) aa B)bbbbb C)ccc And maybe doh $ figlet -f doh abcABC bbbbbbbb b::::::b b::::::b b::::::b b:::::b aaaaaaaaaaaaa b:::::bbbbbbbbb cccccccccccccccc a::::::::::::a b::::::::::::::bb cc:::::::::::::::c aaaaaaaaa:::::a b::::::::::::::::b c:::::::::::::::::c a::::a b:::::bbbbb:::::::bc:::::::cccccc:::::c aaaaaaa:::::a b:::::b b::::::bc::::::c ccccccc aa::::::::::::a b:::::b b:::::bc:::::c a::::aaaa::::::a b:::::b b:::::bc:::::c a::::a a:::::a b:::::b b:::::bc::::::c ccccccca::::a a:::::a b:::::bbbbbb::::::bc:::::::cccccc:::::ca:::::aaaa::::::a b::::::::::::::::b c:::::::::::::::::c a::::::::::aa:::ab:::::::::::::::b cc:::::::::::::::c aaaaaaaaaa aaaabbbbbbbbbbbbbbbb cccccccccccccccc AAA BBBBBBBBBBBBBBBBB CCCCCCCCCCCCC A:::A B::::::::::::::::B CCC::::::::::::C A:::::A B::::::BBBBBB:::::B CC:::::::::::::::C A:::::::A BB:::::B B:::::B C:::::CCCCCCCC::::C A:::::::::A B::::B B:::::B C:::::C CCCCCC A:::::A:::::A B::::B B:::::BC:::::C A:::::A A:::::A B::::BBBBBB:::::B C:::::C A:::::A A:::::A B:::::::::::::BB C:::::C A:::::A A:::::A B::::BBBBBB:::::B C:::::C A:::::AAAAAAAAA:::::A B::::B B:::::BC:::::C A:::::::::::::::::::::A B::::B B:::::BC:::::C A:::::AAAAAAAAAAAAA:::::A B::::B B:::::B C:::::C CCCCCC A:::::A A:::::A BB:::::BBBBBB::::::B C:::::CCCCCCCC::::C A:::::A A:::::A B:::::::::::::::::B CC:::::::::::::::C A:::::A A:::::A B::::::::::::::::B CCC::::::::::::CAAAAAAA AAAAAAABBBBBBBBBBBBBBBBB CCCCCCCCCCCCC
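If your figlet package doesn't ship these fonts, you can drop the downloaded .flf files in a directory and point figlet at it explicitly (a sketch; the directory is a placeholder, and -I2 should print the default font directory on most builds):

figlet -I2                         # print figlet's default font directory
figlet -d ~/figlet-fonts -f letters "ABC"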
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/355956", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34868/" ] }
355,965
Is there a way to check which line number of a bash script is being executed "right now"? Using bash -x script.sh looks promising; however, I need to get the current line number.
Combine xtrace with PS4 inside the script:

$ cat test.sh
#!/usr/bin/env bash
set -x
PS4='+${LINENO}: '

sleep 1m
sleep 1d
$ timeout 5 ./test.sh
+3: PS4='+${LINENO}: '
+5: sleep 1m

or in the parent shell :

$ cat test.sh
sleep 1m
sleep 1d
$ export PS4='+${LINENO}: '
$ timeout 5 bash -x ./test.sh
+1: sleep 1m
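If you'd rather not enable full xtrace, a DEBUG trap inside the script is another sketch of the same idea (bash-specific):

trap 'echo "running line $LINENO: $BASH_COMMAND"' DEBUG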
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/355965", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/224371/" ] }
356,084
I am running Firefox 52.0.2 on Arch Linux and although Japanese characters display fine elsewhere on the system (e.g. in Nautilus), in Firefox they are unreadable, because all that is shown for them is this: This is probably because I do not have a font installed that allows Firefox to display Hiragana, Katakana and Kanji. But the problem is that I am not entirely sure how to get this for Firefox. I tried installing the Japanese dictionary in Firefox, but that didn't seem to make any difference. So what do I have to do and install to get them to display properly? Because in this form it is obviously very difficult to read and write them.
It should be enough to install the great Noto font bundles:

sudo pacman -S noto-fonts-cjk noto-fonts-emoji noto-fonts

Then restart Firefox and you should be able to see them. Personally, I also installed the following from the AUR :

yaourt -S ttf-freefont ttf-ms-fonts ttf-linux-libertine ttf-dejavu ttf-inconsolata ttf-ubuntu-font-family

I doubt those will help for Japanese, but they do provide a respectable variety of fonts for your system.
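To verify the fonts actually landed before restarting Firefox, you can ask fontconfig (a quick check):

fc-list | grep -i 'noto.*cjk'

If that prints nothing, run fc-cache -f and check again.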
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/356084", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
356,106
Here is a shell script which takes a domain and its parameters and finds the HTTP status code. It runs much faster because the requests are backgrounded, but it misses a lot of requests.

while IFS= read -r url <&3; do
    while IFS= read -r uri <&4; do
        urlstatus=$(curl -o /dev/null --insecure --silent --head --write-out '%{http_code}' "${url}""${uri}" --max-time 5 ) &&
            echo "$url $urlstatus $uri" >> urlstatus.txt &
    done 4<uri.txt
done 3<url.txt

If I run it without backgrounding it processes all the requests, but the speed is very low. Is there a way to keep the speed while not missing any requests?
You are experiencing the problem of appending to a file in parallel. The easy answer is: don't. Here is how you can do it using GNU Parallel:

doit() {
    url="$1"
    uri="$2"
    urlstatus=$(curl -o /dev/null --insecure --silent --head --write-out '%{http_code}' "${url}""${uri}" --max-time 5 ) &&
        echo "$url $urlstatus $uri"
}
export -f doit
parallel -j200 doit :::: url.txt uri.txt >> urlstatus.txt

GNU Parallel defaults to serializing the output, so you will not get output from one job that is mixed with output from another. GNU Parallel makes it easy to get the input included in the output using --tag . So unless the output format is fixed, I would do:

parallel --tag -j200 curl -o /dev/null --insecure --silent --head --write-out '%{http_code}' {1}{2} --max-time 5 :::: url.txt uri.txt >> urlstatus.txt

It will give the same output - just formatted differently. Instead of:

url urlstatus uri

you get:

url uri urlstatus
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/356106", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/224329/" ] }
356,113
System: Linux Mint 18.1 64-bit Cinnamon.

Objective: To define Bash aliases to launch various CLI and GUI text editors from the gnome-terminal emulator, opening a file in root mode.

Progress: For example, the following aliases seem to work as expected. For CLI, in this example I used Nano (official website):

alias sunano='sudo nano'

For GUI, in this example I used Xed (Wikipedia article):

alias suxed='sudo xed'

They both open a file as root.

Problem: I have an issue with gksudo in conjunction with sublime-text:

alias susubl='gksudo /opt/sublime_text/sublime_text'

Sometimes it works; most of the time it just does not do anything. How do I debug such a thing with inconsistent behavior? It does not output anything - no error message or similar.

Question: gksudo has been deprecated in Debian and is also no longer included in Ubuntu 18.04 Bionic, so let me re-formulate this question to a still valid one: How do I properly edit system files (as root) in a GUI (and CLI) on Linux? "Properly" in this context I define as safely, in case, for instance, a power loss occurs during the file edit; another example could be a lost SSH connection, etc.
You shouldn't run an editor as root unless absolutely necessary; you should use sudoedit or sudo -e or sudo --edit , or your desktop environment's administrative features.

sudoedit

Once sudoedit is set up appropriately, you can do

SUDO_EDITOR="/opt/sublime_text/sublime_text -w" sudoedit yourfile

sudoedit will check that you're allowed to do this, make a copy of the file that you can edit without changing ownership manually, start your editor, and then, when the editor exits, copy the file back if it has been changed. I'd suggest a function rather than an alias:

function susubl {
    export SUDO_EDITOR="/opt/sublime_text/sublime_text -w"
    sudoedit "$@"
}

although as Jeff Schaller pointed out, you can use env to put this in an alias and avoid changing your shell's environment:

alias susubl='env SUDO_EDITOR="/opt/sublime_text/sublime_text -w" sudoedit'

Take note that you don't need to use the $SUDO_EDITOR environment variable if $VISUAL or $EDITOR are already good enough. The -w option ensures that the Sublime Text invocation waits until the files are closed before returning and letting sudoedit copy the files back.

Desktop environments (GNOME)

In GNOME (and perhaps other desktop environments), you can use any GIO/GVFS-capable editor with the admin:// scheme; for example

gedit admin:///etc/shells

This will prompt for the appropriate authentication using PolKit, and then open the file for editing if the authentication was successful.
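Once the function (or alias) is defined, usage is simply, for example:

susubl /etc/hosts

Because sudoedit works on a temporary copy, a power loss or dropped SSH session mid-edit leaves the real file untouched; it is only replaced after you save and close the editor.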
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/356113", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126755/" ] }
356,172
I am using my Nokia C2-01, which does not accept MP3s. I am thinking of converting my MP3s to AAC because it should maintain the sound quality well. Another option is to convert MP3 to M4A, but I think that is not so good because M4A is mostly a container. There is a lot of discussion about the reverse (converting from AAC/M4A to MP3) but it is not relevant here. I did not find anything relevant in apt-get for aac. [Michael] I can do it for one file:

ffmpeg -i test.mp3 test.aac

but for many files the following does not work; the command is trying to overwrite some .mp3 files for some reason:

ffmpeg -i *.mp3 *.aac

Output for a single file with ffmpeg: the command ffmpeg -i test.mp3 test.aac takes a lot of time (50 seconds for a 9 MB file) and a lot of space (9 < 27 MB); its output follows. The increase in size is significant; I think a less space-consuming format could be better.

ffmpeg version 3.2.2-1~bpo8+1 Copyright (c) 2000-2016 the FFmpeg developers
  built with gcc 4.9.2 (Debian 4.9.2-10)
  configuration: --prefix=/usr --extra-version='1~bpo8+1' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --disable-libebur128 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libiec61883 --enable-libopencv --enable-frei0r --enable-libx264 --enable-chromaprint --enable-shared
  libavutil      55. 34.100 / 55. 34.100
  libavcodec     57. 64.101 / 57. 64.101
  libavformat    57. 56.100 / 57. 56.100
  libavdevice    57.  1.100 / 57.  1.100
  libavfilter     6. 65.100 /  6. 65.100
  libavresample   3.  1.  0 /  3.  1.  0
  libswscale      4.  2.100 /  4.  2.100
  libswresample   2.  3.100 /  2.  3.100
  libpostproc    54.  1.100 / 54.  1.100
[mp3 @ 0x5619e57e29a0] Estimating duration from bitrate, this may be inaccurate
Input #0, mp3, from 'test.mp3':
  Duration: 00:49:26.67, start: 0.000000, bitrate: 24 kb/s
    Stream #0:0: Audio: mp2, 22050 Hz, mono, s16p, 24 kb/s
Output #0, adts, to 'test.aac':
  Metadata:
    encoder         : Lavf57.56.100
    Stream #0:0: Audio: aac (LC), 22050 Hz, mono, fltp, 69 kb/s
    Metadata:
      encoder         : Lavc57.64.101 aac
Stream mapping:
  Stream #0:0 -> #0:0 (mp2 (native) -> aac (native))
Press [q] to stop, [?] for help
size=   25740kB time=00:49:26.67 bitrate=  71.1kbits/s speed=53.5x
video:0kB audio:25303kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.725856%
[aac @ 0x5619e57f44c0] Qavg: 375.695
You need to use a loop in the shell to loop over all the MP3 files, as ffmpeg typically only wants one output file per run. In bash, it'd look like:

for f in *.mp3; do
    ffmpeg -i "$f" "${f%.mp3}.aac"
done

Note that for sound quality reasons, you probably want to give ffmpeg some options. The ffmpeg AAC Encoding Guide has details, but as a quick example that middle line might become:

ffmpeg -i "$f" -c:a libfdk_aac -vbr 3 "${f%.mp3}.aac"

(PS: It's somewhat surprising your phone doesn't support MP3; support is very common and it's listed on the spec sheet for your phone.) don_chrissti offers an alternative using GNU Parallel (which should be quicker on multi-core processors, as it will run multiple encodes simultaneously):

parallel ffmpeg -i {} -c:a libfdk_aac -vbr 3 {.}.aac ::: *.mp3

Please note there is a moreutils version of parallel as well, which has completely different syntax (and won't work in the above).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/356172", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16920/" ] }
356,221
I'm trying to set several environment variables with the results from command substitution. I want to run the commands in parallel with & and wait . What I've got currently looks something like

export foo=`somecommand bar` &
export fizz=`somecommand baz` &
export rick=`somecommand morty` &
wait

But apparently when using & variable assignments don't stick. So after the wait , all those variables are unassigned. How can I assign these variables in parallel? UPDATE: Here's what I ended up using based off the accepted answer

declare -a data
declare -a output
declare -a processes

var_names=(
  foo
  fizz
  rick
)

for name in "${var_names[@]}"
do
  processes+=("./get_me_a_value_for $name")
done

index=0
for process in "${processes[@]}"; do
  output+=("$(mktemp)")
  ${process} > ${output[$index]} &
  index=$((index+1))
done
wait

index=0
for out in "${output[@]}"; do
  val="$(<"${out}")"
  rm -f "${out}"
  export ${var_names[index]}="$val"
  index=$((index+1))
done

unset data
unset output
unset processes
After some ruminations, I came up with an ugly workaround:

#!/bin/bash

proc1=$(mktemp)
proc2=$(mktemp)
proc3=$(mktemp)

/path/to/longprocess1 > "$proc1" &
pid1=$!
/path/to/longprocess2 > "$proc2" &
pid2=$!
/path/to/longprocess3 > "$proc3" &
pid3=$!

wait "$pid1" "$pid2" "$pid3"
export var1="$(<"$proc1")"
export var2="$(<"$proc2")"
export var3="$(<"$proc3")"
rm -f "$proc1" "$proc2" "$proc3"

As requested in a comment, here is how to make this more extensible for an arbitrarily large list:

#!/bin/bash

declare -a pids
declare -a data
declare -a output
declare -a processes

# Generate the list of processes for demonstrative purposes
processes+=("/path/to/longprocess1")
processes+=("/path/to/longprocess2")
processes+=("/path/to/longprocess3")

index=0
for process in "${processes[@]}"; do
    output+=("$(mktemp)")
    $process > "${output[$index]}" &
    pids+=("$!")
    index=$((index+1))
done

wait "${pids[@]}"

index=0
for process in "${processes[@]}"; do
    data+=("$(<"${output[index]}")")
    rm -f "${output[index]}"
    index=$((index+1))
done
export data

The resultant output will be in the data array.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/356221", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/119058/" ] }
356,256
This works:

crypttab:

sda2_crypt UUID=6bbba323-ddad-479d-863e-4bd939b46f96 none luks,swap
sda3_crypt UUID=3087cec6-dcc9-44ee-8a08-5555bb2ca566 none luks

fstab:

/dev/mapper/sda3_crypt / ext4 errors=remount-ro 0 1
/dev/mapper/sda2_crypt none swap sw 0 0

But when I try to change it to this and run update-initramfs -u -k all , it gives me this error:

cryptsetup: WARNING: failed to determine cipher modules to load for part_root_crypt

crypttab:

part_swap_crypt UUID=6bbba323-ddad-479d-863e-4bd939b46f96 none luks,swap
part_root_crypt UUID=3087cec6-dcc9-44ee-8a08-5555bb2ca566 none luks

fstab:

/dev/mapper/part_root_crypt / ext4 errors=remount-ro 0 1
/dev/mapper/part_swap_crypt none swap sw 0 0

I wanted to change this because when I installed my operating system this disk was sda, but afterwards I added more disks and now it's sdb, so I'd like to change its name to something disk-independent. What am I missing here?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/356256", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68563/" ] }
356,297
I have a file containing line-separated text: GCAACACGGTGGGAGCACGTCAACAAGGAGTAATTCTTCAAGACCGTTCCAAAAACAGCATGCAAGAGCGGTCGAGCCTAGTCCATCAGCAAATGCCGTTTCCAGCAATGCAAAGAGAACGGGAAGGTATCAGTTCACCGGTGACTGCCATTACTGTGGACAAAAAGGGCACATGAAGAGAGACTGTGACAAGCTAAAGGCAGATGTAGC From this, I want to extract characters 10 to 80, so: TGGGAGCACGTCAACAAGGAGTAATTCTTCAAGACCGTTCCAAAAACAGCATGCAAGAGCGGTCGAGCCT I have found how to count the characters in a file: wc -m file and how to get a number of characters per line: awk '{print substr($0,2,6)}' file but I cannot find a way to get the characters 10 to 80. Newlines do not count as characters. Any ideas? Yes, this is DNA, from a full genome. I have extracted this bit of DNA from a fasta file containing different scaffolds (10 and 11 in this case) using awk '/scaffold_10\>/{p=1;next} /scaffold_11/{p=0;exit} p' Ultimately, I would like to have a simple command to get characters 100 to 800 (or something like that) from that specified scaffold. EDIT: Question continues here: use gff2fasta instead of a bash script to get parts of DNA sequences out of a full genome
I wonder how the line feeds in the file should be handled. Do they count as characters or not? If we should just take from byte 10 and print 71 bytes (A, C, T, G and linefeed) then Sato Katsura's solution is the fastest (here assuming GNU dd or compatible for status=none ; replace with 2> /dev/null with other implementations, though that would also hide error messages, if any):

dd if=file bs=1 count=71 skip=9 status=none

If the line feeds should be skipped then filter them out with tr -d '\n' :

tr -d '\n' < file | dd bs=1 count=70 skip=9 status=none

If the Fasta header should be skipped it is:

grep -v '^[;>]' file | tr -d '\n' | dd bs=1 count=70 skip=9 status=none

grep -v '^[;>]' file means: skip all lines that start with ; or > .
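If you prefer character ranges over dd's byte counting, cut does the same job once the newlines are gone (an equivalent sketch, matching the count=70 skip=9 line above):

tr -d '\n' < file | cut -c10-79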
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/356297", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140431/" ] }
356,298
I was using rsync to copy files from my laptop to an external HDD (probably NTFS-formatted, since I can use it under another Windows machine). Somewhere in the middle the connection got interrupted and the drive was mounted again by itself. Not knowing about version control, I tried to delete the whole newly copied directory - the only problem is I cannot. Someone on the internet posted a solution saying chattr -i [filename], but it gives the output "input/output error, cannot stat [filename]", with the name of a video file coming up that should have been deleted. How may I overcome this problem?
I wonder how the line feed in the file should be handled. Does that count as a character or not? If we just should take from byte 10 and print 71 bytes (A,C,T,G and linefeed) then Sato Katsura solution is the fastest (here assuming GNU dd or compatible for status=none , replace with 2> /dev/null (though that would also hide error messages if any) with other implementations): dd if=file bs=1 count=71 skip=9 status=none If the line feed should be skipped then filter them out with tr -d '\n' : tr -d '\n' < file | dd bs=1 count=70 skip=9 status=none If the Fasta-header should be skipped it is: grep -v '^[;>]' file | tr -d '\n' | dd bs=1 count=70 skip=9 status=none grep -v '^[;>]' file means skip all lines that start with ; or > .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/356298", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135744/" ] }
356,312
In an XML configuration file I need to add a line, in such a way that the last closing tag is not broken. Is it possible to do it with sed? The number of lines in the whole file can change from one server to another... Edit: Some examples of files I need to edit:

<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <!-- encoders are assigned the type ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="debug">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>

Another example:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property name="DEV_HOME" value="c:/logs" />
  <appender name="FILE-AUDIT" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${DEV_HOME}/debug.log</file>
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
      <Pattern>
        %d{yyyy-MM-dd HH:mm:ss} - %msg%n
      </Pattern>
    </encoder>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <!-- rollover daily -->
      <fileNamePattern>${DEV_HOME}/archived/debug.%d{yyyy-MM-dd}.%i.log
      </fileNamePattern>
      <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
        <maxFileSize>10MB</maxFileSize>
      </timeBasedFileNamingAndTriggeringPolicy>
    </rollingPolicy>
  </appender>
  <logger name="com.mkyong.web" level="debug" additivity="false">
    <appender-ref ref="FILE-AUDIT" />
  </logger>
  <root level="error">
    <appender-ref ref="FILE-AUDIT" />
  </root>
  <logger name="com.mkyong.ext" level="debug" additivity="false">
    <appender-ref ref="FILE-AUDIT" />
  </logger>
  <logger name="com.mkyong.other" level="info" additivity="false">
    <appender-ref ref="FILE-AUDIT" />
  </logger>
  <logger name="com.mkyong.commons" level="debug" additivity="false">
    <appender-ref ref="FILE-AUDIT" />
  </logger>
</configuration>
To insert a line before the last ( $ ) one:

$ cat test
one
two
three
four
five
$ sed '$i<hello>!' test
one
two
three
four
<hello>!
five

That's for GNU sed (and beware leading spaces or tabs are stripped). Portably (or with GNU sed , if you want to preserve the leading spaces or tabs in the inserted line), you'd need:

sed '$i\
<hello>!' test
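Applied to the first logback example from the question, that could look like this (GNU sed; the inserted property line is purely illustrative):

sed -i '$i\    <property name="LOG_DIR" value="/var/log" />' logback.xml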
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/356312", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91490/" ] }
356,385
I have a folder named 'sample' and it has 3 files in it. I want to write a shell script which will read these files inside the sample folder and post them to an HTTP site using curl. I have written the following for listing the files inside the folder:

for dir in sample/*; do
    echo $dir;
done

But it gives me the following output:

sample/log
sample/clk
sample/demo

It prepends the parent folder name. I want the output as follows (without the parent folder name):

log
clk
demo

How do I do this?
Use basename to strip the leading path off of the files:

for file in sample/*; do
    echo "$(basename "$file")"
done

Though why not:

( cd sample; ls )
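Folding this back into the stated goal of posting each file with curl, a sketch (the URL and form field are placeholders):

for file in sample/*; do
    curl -F "upload=@$file" "http://example.com/receive/$(basename "$file")"
done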
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/356385", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/224290/" ] }
356,386
Is there a maximum to bash file name expansion (globbing) and if so, what is it? See globbing on tldp.org. Let's say I want to run a command against a subset of files:

grep -e bar foo*
rm -f bar*

Is there a limit to how many files bash will expand to, and if so what is it? I am not looking for alternative ways to perform those operations (e.g. by using find ).
There is no limit (other than available memory) to the number of files that may be expanded by a bash glob. However, when those files are passed as arguments to a command that is executed (as opposed to a shell builtin or function), then you may run into a limit of the execve() system call on some systems. On most systems, that system call has a limit on the cumulative size of the arguments and environment passed to it, and on Linux also a separate limit on the size of a single argument. For more details, see:

- What defines the maximum size for a command single argument?
- CP: max source files number arguments for copy utility

To work around that limit, you can use (assuming GNU xargs or compatible):

printf '%s\0' foo* | xargs -r0 rm -f

Above, since printf is built-in (in bash and most Bourne-like shells), we don't hit the execve() limit. And xargs will split the list of arguments into as many rm invocations as needed to avoid the execve() limitation. With zsh :

autoload zargs
zargs foo* -- rm -f

With ksh93 :

command -x rm -f foo*
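You can query the cumulative limit on your system with getconf:

getconf ARG_MAX    # total bytes allowed for arguments + environment

On Linux, single arguments are additionally capped at 128 KiB (MAX_ARG_STRLEN).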
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/356386", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43548/" ] }
356,408
Please explain this:

#!/bin/bash
# This is scripta.sh
./scriptb.sh &
pid=$!
echo $pid started
sleep 3
while true
do
    kill -SIGINT $pid
    echo scripta.sh $$
    sleep 3
done

-

#!/bin/bash
# This is scriptb.sh
trap "echo Ouch;" SIGINT
while true
do
    echo scriptb.sh $$
    sleep 1
done

When I run ./scripta.sh the trap doesn't print. If I switch from SIGINT to any other signal (I tried SIGTERM, SIGUSR1) the trap prints "Ouch" as expected. How can this happen?
If I switch from SIGINT to any other signal (I tried SIGTERM, SIGUSR1) the trap prints "Ouch" as expected. Apparently you didn't try SIGQUIT; you will probably find that it behaves the same as SIGINT. The problem is job control. In the early days of Unix, whenever the shell put a process or pipeline into the background, it set those processes to ignore SIGINT and SIGQUIT, so they would not be terminated if the user subsequently typed Ctrl + C (interrupt) or Ctrl + \ (quit) to a foreground task. When job control came along, it brought process groups along with it, and so now all the shell needs to do is put the background job into a new process group; as long as that is not the current terminal process group, the processes will not see signals coming from the keyboard ( Ctrl + C , Ctrl + \ and Ctrl + Z (SIGTSTP)). The shell can leave background processes with the default signal disposition. In fact, it probably has to, for the processes to be killable by Ctrl + C when they are brought into the foreground. But non-interactive shells don't use job control. It makes sense that a non-interactive shell would fall back to the old behavior of ignoring SIGINT and SIGQUIT for background processes, for the historical reason - to allow the background processes to continue to run, even if keyboard-type signals are sent to them. And shell scripts run in non-interactive shells. And, if you look at the last paragraph under the trap command in bash(1) , you'll see: Signals ignored upon entry to the shell cannot be trapped or reset. So, if you run ./scriptb.sh & from your interactive shell's command prompt, its signal dispositions are left alone (even though it's being put into the background), and the trap command works as expected. But, if you run ./scripta.sh (with or without & ), it runs the script in a non-interactive shell. And when that non-interactive shell runs ./scriptb.sh & , it sets the scriptb process to ignore interrupt and quit. And therefore the trap command in scriptb.sh silently fails.
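You can observe the ignored-signal mask directly (a sketch; SigIgn in /proc is a hex bitmask, and with SIGINT (2) and SIGQUIT (3) ignored its low hex digit should be 6):

bash -c 'sleep 100 & grep SigIgn /proc/$!/status'   # non-interactive shell

Run the same two commands at an interactive prompt with job control and the mask should differ.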
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/356408", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/224744/" ] }
356,437
I am using GNU sed 4.2.2 and after searching cannot find out the reason sed behaves oddly in some situations: I have a directory with the following:

foofile.txt
barfile.txt
bazfile.txt
config/

Case 1

sed -i 's/foo/bar/g' *.txt

This works as I expect. It replaces all the "foo"s with "bar"s in the three regular files.

Case 2

sed -i 's/foo/bar/g' *
sed: couldn't edit config: not a regular file

It replaces "foo" with "bar" in barfile.txt and bazfile.txt , but not in foofile.txt . I assume it goes through the list of files expanded from the * alphabetically, and when it hits config/ it errors and exits. Is there a way to have sed ignore errors and continue processing files?

Case 3

for file in $(find . -maxdepth 1 -type f); do sed -i 's/foo/bar/g' <"$file"; done
sed: no input files
sed: no input files
sed: no input files

Could someone please explain why sed does this? Why does it say there's no input file when it's being given one? I know that I can use the following, but I'm asking why sed acts this way, not how do I solve this one use case.

find . -maxdepth 1 -type f -exec sed -i 's/foo/bar/g' {} \;
It's normal behavior. In both cases sed exits with error code 4 ... per info sed :

4      An I/O error, or a serious processing error during runtime;
       GNU 'sed' aborted immediately.

and in both cases the messages are self-explanatory. Not sure what's unclear, but for the record: the first time it errors out because it cannot edit a directory, and the second time it complains because it cannot edit stdin in-place; it needs a file (i.e. remove that redirect before $file ). The proper way to do this with find is, as you noted, via -exec ... With globs, you'll have to use a loop and test whether the input is a regular file before running sed . Or, if you're a zsh user, you can simply do:

sed -i 's/foo/bar/g' *(.)
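The glob-based loop alluded to above would look like this (skipping anything that is not a regular file):

for f in *; do
    [ -f "$f" ] && sed -i 's/foo/bar/g' "$f"
done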
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/356437", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/162650/" ] }
356,439
I was trying to edit my sources.list in order to add local mirror information. I am not comfortable with command line editors, so I tried using sudo mousepad /etc/apt/sources.list . I got the following error report:

No protocol specified
(mousepad:4942): Mousepad-ERROR **: Cannot open display:

I tried several other editors such as gedit, kwrite etc. but I get similar error reports:

No protocol specified
** (gedit:4957): WARNING **: Could not open X display
No protocol specified
Unable to init server: Could not connect: Connection refused
(gedit:4957): Gtk-WARNING **: cannot open display: :0

I am on a local 64-bit system running Debian Jessie.
You shouldn’t run an editor as root to edit system files, you should use sudoedit (especially since you have sudo set up already). That will make a copy of the file that you can edit, open it in the editor of your choice, wait for you to finish editing it, and if you make changes to it, copy it back over the system file. In a little more detail, you’d run something like

SUDO_EDITOR="gedit -w" sudoedit /etc/apt/sources.list

This will:

1. check that you’re allowed to edit the file (according to the sudo configuration in /etc/sudoers; yours should be OK already);
2. copy /etc/apt/sources.list to a temporary file and make it editable for you;
3. start gedit with the temporary file;
4. wait for you to close the file (this is why we need the -w option);
5. check whether you made changes to the temporary file, and if so, copy it over the original file.

You can set SUDO_EDITOR up permanently in your shell’s startup files (e.g. ~/.bashrc). If it’s not defined, sudoedit will also check VISUAL and EDITOR. You can specify any editor you like, as long as it’s capable of waiting for an editing session to finish.
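A minimal sketch of making that choice permanent (assuming bash; the editor is just an example):

# in ~/.bashrc
export SUDO_EDITOR="gedit -w"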
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/356439", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/224776/" ] }
356,444
I read another answer that describes how to use AWK to view the last line of output:

$ seq 42 | awk 'END { print }'
42

So it seems like when the END block is run the last line is loaded in $0. This surprised me because the first line isn't loaded into the BEGIN block:

$ seq 42 | awk 'BEGIN { print }'
#=> blank

Is this behavior documented anywhere? (I searched through the man page but didn't find anything)
The BEGIN block is run before any input is processed, so $0 hasn’t been initialised yet. The END block doesn’t do anything to $0, which keeps its last value. In your AWK script, that’s just the last line read, because AWK reads all its input line by line, does its usual field-splitting processing (assigning $0 and so on), but never finds a matching block; but for example

seq 42 | awk '{ $0 = "21" } END { print }'

outputs 21, not 42, so it’s not the case that “when the END block is run the last line is loaded in $0”. This isn’t documented in the gawk(1) manpage, but it is documented in mawk(1) (for that implementation of AWK obviously):

Similarly, on entry to the END actions, $0, the fields and NF have their value unaltered from the last record.

The GNU AWK manual does mention this behaviour:

In fact, all of BWK awk, mawk, and gawk preserve the value of $0 for use in END rules.

“BWK awk” is Brian Kernighan’s awk, the “one true awk”; it implemented this behaviour in 2005, as documented in its FIXES file:

Apr 24, 2005: modified lib.c so that values of $0 et al are preserved in the END block, apparently as required by posix. thanks to havard eidnes for the report and code.

That change is visible in the “one true awk” history. The latest release of BWK awk behaves in the same way as GNU AWK:

$ echo three fields here | ./awk '{ $0 = "one" } END { print $0 " " NF }'
one 1
$ echo three fields here | ./awk 'END { $0 = "one"; print $0 " " NF }'
one 1
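A quick way to see both halves of this at once; a sketch that should behave the same in gawk, mawk and BWK awk:

# BEGIN runs before any record is read, so $0 is still empty;
# END runs after the last record, so $0 still holds that record.
seq 3 | awk 'BEGIN { print "[" $0 "]" } END { print "[" $0 "]" }'
# prints [] and then [3]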
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/356444", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/173557/" ] }
356,502
I have a text file containing birthdays:

1/23 Horace
3/1 Taraneh
7/14 Valerian
11/31 Carmen

I want to display the birthdays from today's date. For example if today is 4/7 (April 7th):

7/14 Valerian
11/31 Carmen
1/23 Horace
3/1 Taraneh

How to do this in a bash script? I have found how to split a text file based on a pattern (I could then concatenate the splits in the opposite order) but here the trick is that today's date might be absent, or might be the birthday of several people. Note: all dates are valid dates.
Maybe something like:

date +'%m/%d 0000' |
  sort -nt/ -k1 -k2 - birthdays.txt |
  awk '$2 == "0000" {past_today = 1; next}
       past_today {print; next}
       {next_year = next_year $0 RS}
       END {printf "%s", next_year}'

That is, insert a 04/07 0000 line (date +%-m/%-d would output 4/7 with some date implementations but is not portable, and 04/07 works just as well) before sorting by date, and then have awk move the lines that are before that one to the end. sort ... - birthdays.txt sorts both its stdin (represented by -, here a pipe that is fed by date) and the content of birthdays.txt. We set the key separator to / with -t/, -k1 specifies a sort key that runs from start to end of the line (in essence, -k1 specifies the full line as a sort key), and -k2 a sort key that starts from the first character after the first / to the end of the line, but with -n, those are interpreted as numbers, so only the initial sequence of digits matters. (The above would work with any Bourne-like shell (including bash), no need to install bash just for that.)
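To sanity-check the pipeline without waiting for a particular day, you can pin the marker line by hand; a hypothetical dry run with the file from the question:

# Pretend today is April 7th by generating the marker line ourselves.
printf '4/7 0000\n' |
  sort -nt/ -k1 -k2 - birthdays.txt |
  awk '$2 == "0000" {past_today = 1; next}
       past_today {print; next}
       {next_year = next_year $0 RS}
       END {printf "%s", next_year}'
# 7/14 Valerian
# 11/31 Carmen
# 1/23 Horace
# 3/1 Taraneh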
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/356502", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2305/" ] }
356,540
I've got a dataset that looks like this:

>TRINITY_DN37_c0_g1_i1 len=333 path=[361:0-43 362:44-332] [-1, 361, 362, -2]
GCCGCCATCATGGATGCCAGCCGTGTGCAGCCCATCAAGCTTGCCAGAGTCACCAAGGTT
>TRINITY_DN37_c0_g2_i1 len=356 path=[359:0-66 360:67-355] [-1, 359, 360, -2]
ACGTGACCCCCTTTCTGTCTCAAGCCGCCATCATGGATGCCAGTCGTGTGCAGCCCATCA
>TRINITY_DN15_c1_g1_i1 len=394 path=[372:0-393] [-1, 372, -2]
GCACTTACCATGCATGGAAGGCAAATGCCATCGGAAGGTCTGCAAAGACTGTTAGGGAGT

I would need to replace the string 'len=XXX', which is in the same position across thousands of lines, with a series of numbers in order to tag uniquely each sequence. Ideally I was thinking of something like:

>TRINITY_DN37_c0_g1_i1 1 path=[361:0-43 362:44-332] [-1, 361, 362, -2]
GCCGCCATCATGGATGCCAGCCGTGTGCAGCCCATCAAGCTTGCCAGAGTCACCAAGGTT
>TRINITY_DN37_c0_g2_i1 2 path=[359:0-66 360:67-355] [-1, 359, 360, -2]
ACGTGACCCCCTTTCTGTCTCAAGCCGCCATCATGGATGCCAGTCGTGTGCAGCCCATCA
>TRINITY_DN15_c1_g1_i1 3 path=[372:0-393] [-1, 372, -2]
GCACTTACCATGCATGGAAGGCAAATGCCATCGGAAGGTCTGCAAAGACTGTTAGGGAGT

I am using OSX. Any ideas?
$ cat ip.txt
>TRINITY_DN37_c0_g1_i1 len=333 path=[361:0-43 362:44-332] [-1, 361, 362, -2]
GCCGCCATCATGGATGCCAGCCGTGTGCAGCCCATCAAGCTTGCCAGAGTCACCAAGGTT
>TRINITY_DN37_c0_g2_i1 len=356 path=[359:0-66 360:67-355] [-1, 359, 360, -2]
ACGTGACCCCCTTTCTGTCTCAAGCCGCCATCATGGATGCCAGTCGTGTGCAGCCCATCA
>TRINITY_DN15_c1_g1_i1 len=394 path=[372:0-393] [-1, 372, -2]
GCACTTACCATGCATGGAAGGCAAATGCCATCGGAAGGTCTGCAAAGACTGTTAGGGAGT

$ awk '/len=/{sub(/len=[0-9]+/,++c)} 1' ip.txt
>TRINITY_DN37_c0_g1_i1 1 path=[361:0-43 362:44-332] [-1, 361, 362, -2]
GCCGCCATCATGGATGCCAGCCGTGTGCAGCCCATCAAGCTTGCCAGAGTCACCAAGGTT
>TRINITY_DN37_c0_g2_i1 2 path=[359:0-66 360:67-355] [-1, 359, 360, -2]
ACGTGACCCCCTTTCTGTCTCAAGCCGCCATCATGGATGCCAGTCGTGTGCAGCCCATCA
>TRINITY_DN15_c1_g1_i1 3 path=[372:0-393] [-1, 372, -2]
GCACTTACCATGCATGGAAGGCAAATGCCATCGGAAGGTCTGCAAAGACTGTTAGGGAGT

/len=/: lines matching this pattern
sub(/len=[0-9]+/,++c): replace first occurrence of the len=[0-9]+ pattern on a matched line with the incremented value of c (default value is 0)

Or with perl:

perl -i -pe 's/len=\d+/++$c/e' ip.txt

The -i option is for in-place editing.
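Note that awk alone only prints the result; to persist the change, a minimal sketch (the output file name is arbitrary):

# Write the renumbered copy, then replace the original on success.
awk '/len=/{sub(/len=[0-9]+/,++c)} 1' ip.txt > ip.renumbered.txt &&
  mv ip.renumbered.txt ip.txt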
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/356540", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/224847/" ] }
356,563
I have the following directory structure:

test/
test/1/
test/foo2bar/
test/3/

I want to compress directory "test" excluding everything which is in subdirectories (depth not predefined), which include strings "1" or "2" in them. In bash shell, I want to use find and feed its output to tar. I first test find:

find test/ -not -path "*1*" -not -path "*2*"

Output:

test/
test/3

Great. So I combine it with tar:

find test/ -not -path "*1*" -not -path "*2*" | tar -czvf test.tar.gz --files-from -

Output:

test/
test/3/
test/1/
test/foo2bar/
test/3/

Indeed, both "test/1" and "test/foo2bar" are present in the archive. Why were these arguments passed to tar, if they were not supposed to be present in find output?
To expand on what @cuonglm said, tar by default operates recursively. If you pass it a directory name, it will archive the contents of that directory. You could modify your find command to return only the names of files, not directories...

find test/ -type f -not -path "*1*" -not -path "*2*" | tar -czvf test.tar.gz --files-from -

You could instead use the --no-recursion flag to tar:

find test/ -not -path "*1*" -not -path "*2*" | tar -czvf test.tar.gz --no-recursion --files-from -

Which results in:

test/
test/3/

The --no-recursion flag is specific to GNU tar. If you're using something else, consult the appropriate man page to see if there is a similar feature available. Note that your find command will exclude files that contain 1 or 2 in the path as well as directories.
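If the tree can contain file names with embedded newlines, the name list itself becomes ambiguous; a sketch of a more robust variant (assuming GNU find and GNU tar, which support null-delimited lists):

# -print0 and --null pass names NUL-delimited, so any legal file
# name survives the trip through the pipe intact.
find test/ -not -path "*1*" -not -path "*2*" -print0 |
  tar --null --no-recursion -czvf test.tar.gz --files-from -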
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/356563", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/224873/" ] }
356,569
When I'm trying to connect to the x11vnc server started on Ubuntu 16.10 with

x11vnc

the "Screen Sharing" app on OS X 10.11.6 just hangs. How can I fix this?
If you want to connect to x11vnc server using "Screen Sharing" app on OS X, you need to tweak the x11vnc starting command:

x11vnc -display :0 -noxrecord -noxfixes -noxdamage -forever -passwd 123456

- You can't use -ncache
- You have to use -passwd

[source]
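From the Mac side you can then point Screen Sharing at the server directly; a quick sketch (the host address is hypothetical):

# Opens Screen Sharing against the standard VNC port (5900 = display :0);
# it will prompt for the -passwd value set above.
open vnc://192.168.1.10:5900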
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/356569", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67785/" ] }
356,576
I am quite new to bash scripting and so any help would be much appreciated. Below is what I want to achieve. I have two text files. I want to delete all of the lines in the first file where they match any of my strings in the second file before a comma. e.g.

File 1:

this_is_a_test.txt,11dsdsdsdsd
this_is_a_test24.txt,545467ddd
this_is_a_test22,121244442

File 2:

this_is_a_test.txt
this_is_a_test24.txt
this_is_a_test22

Desired Output: Blank
Since each string from file 2 must match only the part of a file 1 line before the comma, treat the comma as a field separator and do an anti-join: read file 2 into a set, then print only those lines of file 1 whose first field is not in that set. With your sample data every line of file 1 matches, so the output is blank, as desired.
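A minimal sketch with awk (the file names are assumed):

# First pass (NR==FNR) stores each line of file2.txt as a key in array a;
# second pass prints only those lines of file1.txt whose first
# comma-separated field is not one of those keys.
awk -F, 'NR==FNR { a[$0]; next } !($1 in a)' file2.txt file1.txt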
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/356576", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/224879/" ] }
356,652
Is git diff related to diff ? Is git diff implemented based on diff ? Is the command line syntax of git diff similar to that of diff ? Does learning one help using the other much? are their output files following the same format? Can they be both used by git patch and patch ? (Is there git patch ? How is it related to patch ?) Thanks.
The file format is interoperable. Git uses the best format, diff -u. It also extends it to represent additional types of changes. The equivalent to patch is git apply. By default it applies the changes to the working tree only; with the --index option it also stages them in the index. I remember git apply being stricter than patch, although the reference documentation doesn't seem to make an explicit comparison. It does mention several tests / errors which can be enabled or disabled. The reference documentation also suggests that it could be used as "a replacement for GNU patch" - even outside of a git repository, if you use a certain option.
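A small sketch of the round trip between the two toolsets (the patch file name is arbitrary):

git diff > changes.patch     # unified diff, the same format diff -u emits
git apply changes.patch      # git's analogue of patch
patch -p1 < changes.patch    # GNU patch applies it too; -p1 strips the a/ b/ prefixes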
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/356652", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
356,673
I have two folders with hundreds of video files that have duplicate names (such as vid1, vid2, etc.). I just want to put all these files in the same folder and I don't care about them being renamed. When I drag a couple of files over it gives me the option to "Keep both", but when I try to drag this large number of files it no longer gives me that option. I tried using the mv command in terminal, but it seems to either replace or skip rather than "keep both". What's the easiest way to do this?
GNU mv can do the equivalent of "Keep both" from the command line: its --backup option renames an existing destination file out of the way instead of overwriting it. With --backup=numbered, a clashing file such as vid1 is kept as vid1.~1~ next to the newly moved one, so nothing is replaced or skipped.
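A minimal sketch (the folder names are assumed; requires GNU coreutils mv, the default on most Linux systems):

# Move everything from dir2 into dir1; when a name already exists in
# dir1, the old file is first renamed with a numbered ~N~ suffix,
# so both versions are kept.
mv --backup=numbered dir2/* dir1/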
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/356673", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123443/" ] }
356,686
The echo in coreutils seems to be ubiquitous, but not every system will have it in the same place (usually /bin/echo ). What's the safest way to invoke this echo without knowing where it is? I'm comfortable with the command failing if the coreutils echo binary doesn't exist on the system -- that's better than echo'ing something different than I want. Note: The motivation here is to find the echo binary, not to find a set of arguments where every shell's echo builtin is consistent. There doesn't seem to be a way to safely print just a hyphen via the echo builtin, for example, without knowing if you're in zsh or bash .
Note that coreutils is a software bundle developed by the GNU project to provide a set of Unix basic utilities to GNU systems. You'll only find coreutils echo out of the box on GNU systems (Debian, Trisquel, Cygwin, Fedora, CentOS...). On other systems, you'll find a different implementation (generally with different behaviour, as echo is one of the least portable applications). FreeBSD will have FreeBSD echo, most Linux-based systems will have busybox echo, AIX will have AIX echo... Some systems will even have more than one (like /bin/echo and /usr/ucb/echo on Solaris, the latter being part of a package that is now optional in later versions of Solaris, like the GNU utilities package from which you'd get /usr/gnu/bin/echo), all with different CLIs. GNU coreutils has been ported to most Unix-like (and even non-Unix-like, such as MS Windows) systems, so you would be able to compile coreutils' echo on most systems, but that's probably not what you're looking for. Also note that you'll find incompatibilities between versions of coreutils echo (for instance it used not to recognise \x41 sequences with -e) and that its behaviour can be affected by the environment (POSIXLY_CORRECT variable). Now, to run the echo from the file system (found by a look-up of $PATH), like for every other builtin, the typical way is with env:

env echo this is not the builtin echo

In zsh (when not emulating other shells), you can also do:

command echo ...

without having to execute an extra env command. But I hope the text above makes it clear that it's not going to help with regards to portability. For portability and reliability, use printf instead.
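For the hyphen example from the question, printf sidesteps echo entirely and is safe in any POSIX shell:

# Prints a single hyphen followed by a newline; '-' here is an
# ordinary argument to the format string, never an option.
printf '%s\n' -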
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/356686", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123247/" ] }
356,688
When I echo $PATH I get this: Users/myusername/.node_modules_global/bin:/Users/mac/.node_modules_global/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/mac/Library/Android/sdk/platform-tools:/platform-tools . I want to remove some paths from this, but when I open the file using the command vim /etc/paths , I get the following results: /usr/local/bin/usr/bin/bin/usr/sbin/sbin Is the file /etc/paths different from the $PATH variable?
/etc/paths is part of what's used to set up $PATH for shell processes. When you open a new Terminal window, it starts bash , which runs several startup scripts: /etc/profile AND ~/.bash_profile OR (if that doesn't exist) ~/.bash_login OR (if that doesn't exist either) ~/.profile . These scripts set up the shell environment, including $PATH . One of the things /etc/profile does is run /usr/libexec/path_helper , which reads /etc/paths and any files in /etc/paths.d , and adds their contents to $PATH . But this is just a starting point; your own startup script (if any exist) can add to $PATH , edit it, replace it completely, etc. It looks to me like your startup script (and/or things it runs) is adding a number of entries to the basic set it gets from /etc/paths . "Users/myusername/.node_modules_global/bin:/Users/mac/.node_modules_global/bin:" is added to the beginning of $PATH (meaning those directories will be searched first), and ":/Users/mac/Library/Android/sdk/platform-tools:/platform-tools" is added at the end. If you want to know exactly what's adding them, you need to look at your startup script. BTW, this process for setting up $PATH only applies to bash "login" shells. Anything run by a bash shell will inherit $PATH from it, so probably have essentially the same thing. bash non-login shells follow a somewhat different setup process. Other shells, and things not started from a shell at all (e.g. cron jobs) may have completely different $PATHs .
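You can see exactly what path_helper contributes by running it yourself; a quick check (macOS):

# Prints the PATH (and MANPATH) assignments built from /etc/paths
# and /etc/paths.d, in Bourne-shell syntax.
/usr/libexec/path_helper -s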
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/356688", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/224951/" ] }
356,699
I am using software called Gnome Sound Recorder to record some audio. However, it gives me no options to do anything with the recordings aside from deleting it. I have looked high and low for the file, even using the whereis command and poking around in the corresponding directories, but have found nothing. Any help would be greatly appreciated--I recorded a half hour long recording and don't want to lose it!
n8te commented that the files are in the subdirectory Recordings of your home directory. My answer covers how to find the files if the application doesn't give you a clue.

While an application has the file open, you can use lsof to locate it. Note that this only works while the file is open at the operating system level, which may not always be the case while the application displays the file. For example a text or image editor typically opens the file to read or save it, but closes it immediately after each load or save operation. But I would expect a sound recorder to write progressively to the output file, and for that it would keep the file open as long as it's recording. To find what files an application has open, first install lsof. It's available as a package on most distributions. Open a terminal; all my instructions use the command line. You'll need to determine the process ID of the application. You can run the command ps xf (that's on Linux; other Unix variants have different options for the ps command; as a last resort you can use ps -e to list everything). Try

pgrep sound

or

ps x | grep -i sound

to locate all the running programs whose name contains “sound”. Alternatively, run xprop | grep _NET_WM_PID and click on the program window. Once you've determined the process ID, for example 1234, run

lsof -p1234

Another approach is to look for recently modified files. You can use the find command for that. For example, to look for files modified in the last 5 minutes:

find ~ -type f -mmin -5

~ means your home directory. A saved file would normally be in your home directory because that's the only location where an application is guaranteed to be able to write, except for temporary files that can be wiped out as soon as the application exits. -type f restricts to regular files (we don't need to see directories here) and -mmin -5 means “less than 5 minutes ago”. There's also -mtime which counts in days instead of minutes. If you're looking for a file that's been moved rather than created or modified, use -cmin instead of -mmin; the ctime is the time at which anything was last done on the file except for reading it (but including changing permissions, moving, etc.). You can also look for files by name, e.g.

find ~ -name '*blendervid*' -type f

looks for files whose name contains blendervid (and you can add something like -mmin -5 to further restrict matches to recent files). If you know part of the name of a file and the file was created a while ago, you can use the locate command:

locate blendervid

locate is a lot faster than find because it uses a pre-built index. But it can only find files that existed when the index was built. Most distributions arrange for the index to be rebuilt every night, or soon after boot (via anacron) if the system isn't always on.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/356699", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135355/" ] }