210,325 | In school we have been assigned homework in which we are supposed to print ASCII art into a terminal window. The input is data in the format [x_coordinate, y_coordinate, char_ascii_value] (there is no data for coordinates where no character should be printed). I don't have any trouble actually doing it, but I guess I am simply too lazy to write a for loop that prints an empty space every time there is no data for a character, then moves to the next line in the terminal, and so on. So I was thinking that there must be an easier way! Since we are allowed to work only with commands which are in POSIX, is there any command that allows you to move the cursor to a specific position in the terminal? I ran into the command named tput, and tput cup does exactly what I need, but I am not quite sure if tput cup is in POSIX. P.S. Please don't take this as some kind of cheating. I am just trying to find a way to make my life easier instead of brainlessly writing code. | As mikeserv explains, POSIX doesn't specify tput cup. POSIX does specify tput, but only minimally. That said, tput cup is widely supported! The standardised way of positioning the cursor is using ANSI escape sequences. To position the cursor you'd use something like printf "\33[%d;%dH%s" "$Y" "$X" "$CHAR" which will print $CHAR at line $Y and column $X. A more complete solution would be printf "\0337\33[%d;%dH%s\0338" "$Y" "$X" "$CHAR" which additionally saves the cursor position beforehand (ESC 7) and restores it afterwards (ESC 8). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/210325",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119823/"
]
} |
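A minimal sketch of the full drawing loop, assuming the input triples have already been massaged into plain "X Y ASCII_VALUE" lines in a file (art.txt is a hypothetical name) and that the terminal understands ANSI escapes:

printf '\33[2J'                          # clear the screen first (ANSI "erase display")
while read -r x y code; do
    printf '\33[%d;%dH' "$y" "$x"        # move the cursor to line $y, column $x
    printf "\\$(printf '%03o' "$code")"  # emit the character for this ASCII value via its octal code
done < art.txt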
210,331 | I installed KDE onto a minimal Debian image using apt-get install kde-plasma-desktop --no-install-recommends but the desktop environment that launches when I run startkde is KDE 4. How can I update this to the latest KDE 5 version, which was my original intention? | Debian doesn't support KDE5 yet; try a different distro. Update: KDE5 is now available in Stretch (Testing). You could upgrade from Jessie to Stretch, or if you know what you're doing you could use pinning to get just the packages you want (though you'll probably end up upgrading quite a bit more than you'd expect). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/210331",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111049/"
]
} |
210,368 | After adding alias rm='rm -i' to my ~/.bashrc file (because, when I removed a file, it wasn't asking for confirmation), file names are surrounded with "â" signs as in the example below:
rm: cannot remove âfile1.txtâ: No such file or directory
List of aliases:
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias l.='ls -d .* --color=auto'
alias ll='ls -l --color=auto'
alias ls='ls --color=auto'
alias rm='rm -i'
alias vi='vim'
alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'
Note: I am ssh'ing to my CentOS machine with PuTTY from my Windows machine, so this is definitely a character encoding issue. Using an Ubuntu guest in my VM, everything is fine. Smart quotes are showing as they need to. | These â signs are UTF-8 quotes that your current terminal is unable to display properly, being configured in ISO-8859-1 or similar. You can get a proper display by setting a matching locale, or plain quotes with the POSIX one:
$ rm file.txt
rm: cannot remove â file.txtâ : No such file or directory
$ LC_ALL=en_US.UTF-8 rm file.txt
rm: cannot remove â file.txtâ : No such file or directory
$ LC_ALL=C rm file.txt
rm: cannot remove 'file.txt' : No such file or directory
$ rm foo 2>&1 | od -c
0000000   r   m   :       c   a   n   n   o   t       r   e   m   o   v
0000020   e       342 200 230   f   o   o   342 200 231   :       N   o
0000040   s   u   c   h       f   i   l   e       o   r       d   i   r
0000060   e   c   t   o   r   y  \n
0000067
$ LC_ALL=C rm foo 2>&1 | od -c
0000000   r   m   :       c   a   n   n   o   t       r   e   m   o   v
0000020   e       '   f   o   o   '   :       N   o       s   u   c   h
0000040   f   i   l   e       o   r       d   i   r   e   c   t   o
0000060   r   y  \n
0000063 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/210368",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106176/"
]
} |
210,378 | When you run
sudo rm -rf --no-preserve-root /
your system will delete every file, one by one. However, this also includes running processes and the operating system itself, eventually deleting rm. How is Linux able to continue to run if program files are being deleted and the operating system is missing critical files? Additionally, how does rm continue to run if it's deleted? | Despite its name, rm doesn't remove the file. It actually unlinks it -- removes the directory entry referencing the file. If there are still hard links to that file, the data is kept intact. When a program is executed, the kernel keeps a kind of hard link to it internally (they are all treated as the same inode object), so the data will be kept until the last process closes the unlinked file. Note how the unlink system call is described: If that name was the last link to a file and no processes have the file open, the file is deleted and the space it was using is made available for reuse. If the name was the last link to a file but any processes still have the file open, the file will remain in existence until the last file descriptor referring to it is closed. For example:
# cp /bin/sleep ./sleep
# ln ./sleep ./sleep2
# ./sleep 1000 &
[1] 24399
# rm ./sleep
At this point, the data is still accessible through the hardlink, and the inode is still known to the kernel as (task_struct)->mm->exe_file :
# ls -lh ./sleep2
-rwxr-xr-x 1 myaut users 31K Jun 17 23:10 ./sleep2
# ls -l /proc/24399/exe
lrwxrwxrwx 1 myaut users 0 Jun 17 23:11 /proc/24399/exe -> /tmp/test/sleep (deleted)
Even after deletion of the second hardlink, the data is kept (BTW, if you pull the plug and your system loses power at this moment, you will get FS space leakage):
# rm ./sleep2
# ls -l /proc/24399/exe
/proc/24399/exe -> /tmp/test/sleep (deleted)
Now I kill the process, and only at this moment will the disk (or tmpfs) space be deallocated:
# kill 24399 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/210378",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55908/"
]
} |
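A useful side effect of the /proc bookkeeping shown above, sketched here as an aside (the PID comes from the answer's example; the target filename is hypothetical): as long as some process still holds the unlinked file, its contents can be copied back out through the magic symlinks under /proc before the process exits.

# ls -l /proc/24399/fd                   # open descriptors; deleted files are marked "(deleted)"
# cp /proc/24399/exe ./sleep-recovered   # copy the unlinked executable's data into a new file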
210,416 | I had just asked this question where I was unable to boot after installing a new RAID1 array. I was able to get to the terminal, but once I sorted that out I realized that my problem isn't so much an fstab boot problem as an mdadm auto assemble problem. I have three RAID1 arrays on my system, with /dev/md1 mounted at / and /dev/md0 mounted as swap, and these currently run without problems. I did not create these arrays. I created a new RAID1 array, /dev/md2, which I formatted to ext4 using this guide, and in doing so I created a new partition (the only one) as md2p1 (the guide also created a similarly named partition, although fdisk never explicitly asked for a name). Upon creating this new array I was able to mount manually using
mount -t ext4 /dev/md2p1 /srv/Waveforms
and this worked fine. I was able to access the directory and added about 700 GB worth of data to it. After doing this, I get
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdc1[0] sdd1[1]
      1953423552 blocks [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1]
      961136576 blocks [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1]
      15624128 blocks [2/2] [UU]
unused devices: <none>
, so the computer clearly recognizes the array. I then used
sudo mdadm --detail --scan >> /etc/mdadm/mdadm.conf
and the file now contains
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=afa7ccee:df4dfa79:a84dbc05:35401226
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=a0c526cc:6de93504:c8b94301:85474c49
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=1ac720e1:192b2c38:f6e0357b:f3e0074f
# This file was auto-generated on Thu, 10 Mar 2011 00:57:55 -0700
# by mkconf $Id$
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90 UUID=afa7ccee:df4dfa79:a84dbc05:35401226
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=a0c526cc:6de93504:c8b94301:85474c49
ARRAY /dev/md2 level=raid1 num-devices=2 metadata=0.90 UUID=1ac720e1:192b2c38:f6e0357b:f3e0074f
Two things to note here. The original file had metadata=00.90, but I modified this to be metadata=0.90 as this solved a minor issue where the metadata was not recognized (a quick google search will explain this). The second thing to note is that auto is set to yes, meaning that the system should automatically assemble all the arrays on boot. This must be the case, as the fact that I am able to boot at all must mean that /dev/md1 has been assembled. Anyways, now the trouble. Upon reboot, my machine hangs and tells me
fsck from util-linux-ng 2.17.2
/dev/md1: clean, 3680768/60071936 files, 208210802/240284144 blocks
My fstab currently reads
# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0
# / was on /dev/md1 during installation
UUID=1d3cb392-f522-485b-8516-a7791fc23c4d / ext4 errors=remount-ro 0 1
# swap was on /dev/md0 during installation
UUID=6eb8e6f2-3166-4f77-883c-26268d636b0b none swap sw 0 0
/dev/md2p1 /srv/Waveforms ext4 defaults,noauto 0 0
with blkid
/dev/sda1: UUID="afa7ccee-df4d-fa79-a84d-bc0535401226" TYPE="linux_raid_member"
/dev/sda2: UUID="a0c526cc-6de9-3504-c8b9-430185474c49" TYPE="linux_raid_member"
/dev/sdb1: UUID="afa7ccee-df4d-fa79-a84d-bc0535401226" TYPE="linux_raid_member"
/dev/sdb2: UUID="a0c526cc-6de9-3504-c8b9-430185474c49" TYPE="linux_raid_member"
/dev/sdc1: UUID="1ac720e1-192b-2c38-f6e0-357bf3e0074f" TYPE="linux_raid_member"
/dev/sdd1: UUID="1ac720e1-192b-2c38-f6e0-357bf3e0074f" TYPE="linux_raid_member"
/dev/md0: UUID="6eb8e6f2-3166-4f77-883c-26268d636b0b" TYPE="swap"
/dev/md1: UUID="1d3cb392-f522-485b-8516-a7791fc23c4d" TYPE="ext4"
/dev/md2p1: UUID="867ee91e-527e-435b-b6bc-2f6d89d2d8c6" TYPE="ext4"
I had previously used UUID=867ee91e-527e-435b-b6bc-2f6d89d2d8c6 in lieu of /dev/md2p1, but that gave me no results. I have also tried the options defaults, defaults+noatime,errors=remount-ro with this md2p1, but none worked. I am able to boot by modifying fstab to exclude my new md2p1 line. After boot with this configuration, I get
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
      15624128 blocks [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1]
      961136576 blocks [2/2] [UU]
So the system has not assembled md2. I can then run
sudo mdadm --assemble --scan
[sudo] password for zach:
mdadm: /dev/md2 has been started with 2 drives.
whence
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdc1[0] sdd1[1]
      1953423552 blocks [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1]
      15624128 blocks [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1]
      961136576 blocks [2/2] [UU]
and I can now manually mount as above. So the problem seems to be that the RAID1 array is not assembling on boot, and thus is not recognized by fstab, and thus I cannot boot at all except in a recovery mode. I've found this post, but I don't think it applies to me, as the answer seems to be to set auto to yes, which then automatically assembles the arrays on boot. My configuration is already set to do this, so I'm at a loss. There is an answer in that post that does seem applicable, but I don't understand what his solution was. This is the post by Deplicator, which says: After a reboot I could never see /dev/md0. Running the mdadm --detail --scan again (without putting the result in a file) I would see
ARRAY /dev/md/ubuntu:0 metadata=1.2 name=ubuntu:0 UUID=a8a570c6:96f61865:05abe131:5c2e2f7e
and manually mounting /dev/md/ubuntu:0 would work. In the end, that was what I put in the fstab file too. What was put into fstab? The problem seems to be that I am not assembling md2 on boot, and thus I will hang every time fstab tries to mount the md2p1 partition. This may in fact be related to md2 being partitioned while the others are not, but I fail to see why this should be the case. Edit: Just in case
uname -a
Linux ccldas2 2.6.32-74-server #142-Ubuntu SMP Tue Apr 28 10:12:19 UTC 2015 x86_64 GNU/Linux | Two issues spring to mind:
1. You've got duplicate array definitions in mdadm.conf. Replace (or comment out) the block of three ARRAY lines following "# definitions of existing MD arrays" so that each array is declared only by your most recent scan.
2. A typical scenario for RAID arrays that fail to build on boot is that either they have not been updated in the initramfs or they are not set to run at boot time. A really quick scan through the guide you referenced doesn't appear to mention these steps, but I could be wrong. On Debian systems the commands are:
dpkg-reconfigure mdadm    # Choose "all" disks to start at boot
update-initramfs -u       # Updates the existing initramfs | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/210416",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118923/"
]
} |
210,425 | I have many big folders with thousands of files in them and I want to use touch to set their modification times to be "original time"+3 hours. I got this script from a similar thread on Super User:
#!/bin/sh
for i in all/*; do
    touch -r "$i" -d '+3 hour' "$i"
done
so I'm guessing what I need is to make it work in any directory instead of a fixed one (so I won't need to edit the script every time I want to run it somewhere different) and for it to be able to find and edit files recursively. I have little experience using Linux and this is my first time setting up a bash script, though I do know a thing or two about programming (mainly in C). Thank you all very much for the help :) | Use find -exec for recursive touch, with command line args for the dirs to process:
#!/bin/sh
for i in "$@"; do
    find "$i" -type f -exec touch -r {} -d '+3 hour' {} \;
done
You can run it like this:
./script.sh /path/to/dir1 /path/to/dir2 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/210425",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119877/"
]
} |
210,448 | I have a VM running CentOS 7 that I have not used for a long time. Today I launched it and tried to update the CentOS system to the latest version using yum update, but I got a lot of errors:
Loaded plugins: fastestmirror, langpacks
http//bay.uchicago.edu/centos/7.0.1406/os/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http//mirror.cs.pitt.edu/centos/7.0.1406/os/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http//mirror.anl.gov/pub/centos/7.0.1406/os/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 403 - Forbidden
Trying other mirror.
http//mirror.pac-12.org/7.0.1406/os/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http//centos.expedientevirtual.com/7.0.1406/os/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
(Many other similar errors are omitted ...)
Trying other mirror.
Loading mirror speeds from cached hostfile
 * base: bay.uchicago.edu
 * epel: csc.mcs.sdsmt.edu
 * extras: mirror.ancl.hawaii.edu
 * nux-dextop: li.nux.ro
 * updates: centos-distro.cavecreek.net
No packages marked for update
I deleted the colon after http in the above error messages to avoid warnings. I think these errors might come from the CentOS version I am using: 7.0.1406 -- since the current latest version is a new one, say, 7.0.1588 or something, the corresponding path does not exist and hence the HTTP error 404. But how do I have my current CentOS automatically adjust the path name to the latest version and be able to update from the correct URL? Thanks. | Run the following command to clean the metadata:
yum clean all
This will clean all yum caches, including cached mirrors of your yum repositories. On the next run it will get a new list of mirrors. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/210448",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/98928/"
]
} |
210,514 | On my host I enter the following, which returns a bunch of information which isn't particularly easy to read at the CLI:
echo $ENV_VAR | base64 --decode
Is there a way to format it? This is a sample output from the command. {"something": [{"path": "something", "host": "something.internal", "scheme": "solr", "port": 8080, "ip": "123.4.567.89"}], "second_database": [{"username": "user", "password": "", "ip": "123.4.567.89", "host": "second_database.internal", "query": {"is_master": true}, "path": "main", "scheme": "mysql", "port": 3306}], "redis": [{"ip": "123.4.567.89", "host": "redis", "scheme": "redis", "port": 6379}], "database": [{"username": "user", "password": "", "ip": "123.4.567.89", "host": "database.internal", "query": {"is_master": true}, "path": "main", "scheme": "mysql", "port": 3306}]} It's probably worth pointing out that my host, like many, offers a read-only file system. |
cat file.json | json_pp    # perl utility
cat file.json | jq .
jq packs much more than just pretty-printing abilities. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/210514",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17366/"
]
} |
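To go one step beyond pretty-printing, jq can also pull individual values out of the decoded structure; a sketch against the sample JSON shown in the question (the key names are taken from that sample):

echo $ENV_VAR | base64 --decode | jq '.database[0].host'    # "database.internal"
echo $ENV_VAR | base64 --decode | jq -r '.redis[0].port'    # 6379, -r prints raw output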
210,528 | Here are commands on a random file from pastebin:
wget -qO - http://pastebin.com/0cSPs9LR | wc -l
350
wget -qO - http://pastebin.com/0cSPs9LR | sort -u | wc -l
287
wget -qO - http://pastebin.com/0cSPs9LR | sort | uniq | wc -l
287
wget -qO - http://pastebin.com/0cSPs9LR | sort | uniq -u | wc -l
258
The man pages are not clear on what the -u flag is doing. Any advice? | uniq with -u skips any lines that have duplicates. Thus:
$ printf "%s\n" 1 1 2 3 | uniq
1
2
3
$ printf "%s\n" 1 1 2 3 | uniq -u
2
3
Usually, uniq prints lines at most once (assuming sorted input). This option actually prints lines which are truly unique (having not appeared again). | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/210528",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119938/"
]
} |
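For completeness, sort | uniq -c shows the per-line counts, which makes the -u behaviour easy to see; a small sketch using the same input as the answer:

$ printf "%s\n" 1 1 2 3 | sort | uniq -c
      2 1
      1 2
      1 3
# The lines with a count of 1 are exactly what "uniq -u" prints.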
210,534 | I'm trying to make sense of the sudo documentation on the Debian Wiki. On it, it uses the two examples below. However I don't understand the difference between them. Why has the group sudo got (ALL:ALL) as compared to the (ALL) option for root? What does each part of the command do?
# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL
# Default rule for root.
root ALL=(ALL) ALL
| Explanation for %sudo ALL=(ALL:ALL) ALL :
%sudo - the group (named sudo) allowed to use sudo.
1st ALL means sudo is allowed from any terminal, or from any host (on any machine).
(ALL:ALL) indicates the command can be run as (User:Group).
Last ALL means all commands can be executed.
Explanation for root ALL=(ALL) ALL :
root - the user (root) allowed to do everything on any machine as any user.
Explanation for (ALL:ALL) (Run as (User:Group)):
1st "ALL" indicates that the user (in the case of root) or group members (in the case of %admin) can run commands as all users.
2nd "ALL" indicates that the user (i.e. root) or group members (i.e. of %admin) can run commands as all groups.
If only (ALL) is used then it doesn't allow running as another group, whereas (ALL:ALL) says run as all users and all groups. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/210534",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119849/"
]
} |
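As a sketch of how the (User:Group) fields are used in practice, a rule can also be narrowed instead of granting ALL (the group name and command path below are illustrative, not from the original post):

# Members of group "backup" may run only rsync, as user "archiver", without a password prompt
%backup ALL=(archiver) NOPASSWD: /usr/bin/rsync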
210,602 | I would like to determine if the user's locale uses UTF-8 encoding. This seems a little bit ugly:
[[ $LANG =~ UTF-8$ ]] && echo "Uses UTF-8 encoding.."
Is there a more general/portable way? | From Wikipedia: On POSIX platforms, locale identifiers are defined similarly to the BCP 47 definition of language tags, but the locale variant modifier is defined differently, and the character encoding is included as a part of the identifier. It is defined in this format: [language[_territory][.codeset][@modifier]]. (For example, Australian English using the UTF-8 encoding is en_AU.UTF-8.) However, if the codeset suffix is missing in the locale identifier, for example as in en_AG (see this question), then the codeset is defined by a default setting for that locale, which could very well be UTF-8. As a result, the current encoding cannot be determined by looking at the LANG environment variable. Further, the locale command only shows the current values of the environment variables, so it seems that that command cannot be used to determine the codeset either. However, there is a Perl module, I18N::Langinfo (see also this question), that seems to be a solution:
perl -MI18N::Langinfo=langinfo,CODESET -E 'say "Uses UTF-8 encoding .." if langinfo(CODESET()) eq "UTF-8"'
This Perl module is a wrapper for the C library function nl_langinfo . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/210602",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45537/"
]
} |
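If depending on Perl is undesirable, the locale utility itself exposes the same nl_langinfo(CODESET) value, which suggests a shell-only check (a sketch, not from the original answer):

[ "$(locale charmap)" = "UTF-8" ] && echo "Uses UTF-8 encoding.."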
210,615 | To solve this problem I always have to use scp or rsync to copy the file onto my local computer, open the file, and simply copy the contents of the text file into my local clipboard. I was just wondering if there is a more clever way to do this without the need to copy the file. | Of course you have to read the file, but you could
</dev/null ssh USER@REMOTE "cat file" | xclip -i
though that still means opening an ssh connection and copying the contents of the file. But at least you don't see anything of it anymore ;) And if you are connecting from an OS X computer you use pbcopy instead:
</dev/null ssh USER@REMOTE "cat file" | pbcopy
PS: Instead of </dev/null you can use ssh -n but I don't like expressing things in terms of software options where I can use the system to get the same. PPS: The </dev/null pattern for ssh is extremely useful for loops:
printf %s\\n '-l user host1' '-l user host2' | while read c
do
    </dev/null ssh $c "ip address; hostname; id"
done | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/210615",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91570/"
]
} |
210,620 | The goal: to be able to get an "infobox" to open in a terminal after some time; alarm clock style (on a Debian derived linux box). However:
> at now + 3 min
dialog --infobox "Time to attend to matters\!" 6 33
produces no output, and a system email that says "Error opening terminal: unknown". So we prefix the dialog with some environmental variable stuff which did the trick in the past, so that the command after "at" now looks like this:
TERM=linux DISPLAY=":0.0" dialog --infobox "Seek ye the truth\!" 6 33
Now the only thing produced is a system email filled with escape sequences, which I'll guess is the output of dialog itself? How can one get dialog to play well with "at"? (thankee!) | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/210620",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/98896/"
]
} |
210,628 | I was recently running tail -f on a log file of a server that was running, trying to diagnose a bug, when I accidentally bumped the keyboard and typed some characters. They got mixed in with the output of the log, with no way to tell which was which. I have had similarly annoying things happen to me countless times, and I'm sure it has happened to many other people here. So my question is this: why does the shell (or terminal, or whatever is doing it) ambiguously mix keyboard input with command output? I am not asking for a practical solution to an immediate problem. I can maybe figure out some way to make the shell run stty -echo when a command is run and stty echo when it finishes. But I want to know the rationale behind designing the terminal like this. Is there some practical purpose? Or is it something done only for compatibility reasons, or something that wasn't given much thought at all? | People usually want to see what they're typing (unless it's a password) :-) The terminal accepts input at any time, and buffers it until an application reads it. More than that, when the tty is in cooked mode, the kernel buffers whole lines at a time and provides some rudimentary line editing functionality that allows you to kill the entire buffered line (default bindings Ctrl-u and backspace). During the time that the line is being entered and edited and until you press Enter, applications reading from the terminal read nothing at all. The tty functionality in the kernel does not and cannot know if and when an application like tail is planning to produce output on the terminal, so it would not be able to somehow... cancel (?) line editing during such times and only during such times. Anyway, being able to prepare the next line for the shell while something else is still busy running on the terminal and the shell is not yet ready to read that command is a feature, not a bug, so I wouldn't advocate removing it. Maybe not so useful for tail (which will never terminate on its own), but pre-typing the next command during a long-running cp or make (for example), and even editing that command with Ctrl-h and Ctrl-u, all before the shell gets ahold of it, is a common thing to do. Timothy Martin wrote in a comment: It is worth mentioning that less +F somefile provides similar functionality to tail -f somefile except that (accidentally) typed keystrokes will not echo to the screen. Yeah, but less not only prevents those characters from being echoed, but it also eats them, so they are not available to the next application that wants to read them! Finally, there is one more reason: In historical times (before my time!) terminals with local echo were common. That is, the terminal (usually in hardware) would echo the characters you typed locally while also sending them down the serial line. This is useful for giving the user quick feedback even if there was lots of latency over the connection to the UNIX system (think 300 baud modem dialing up a terminal server with auto-telnet to a slow UNIX system over a token ring network — or whatever). If you have a terminal with local echo, then you want stty -echo at all times on the UNIX server to which you are connected. The result is approximately the same as a terminal without local echo (the common kind today) and stty echo enabled. So from that point of view, stty echo 's job is to echo characters immediately as soon as they are received, regardless of what software is running, in emulation of what would happen on a terminal with local echo.
(By the way, if you have a terminal with local echo, you can't hide your password.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/210628",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6378/"
]
} |
210,638 | I am trying to put these two Cron Jobs:
0 3 * * * ! sudo -u asterisk /var/lib/asterisk/bin/module_admin --repos extended,standard,unsupported upgradeall
30 3 * * * ! sudo -u asterisk /var/lib/asterisk/bin/module_admin reload
into a repository so that I can run a wget www.website.com cronjob (zip or text). How would I save those so that I can inject them into crontab, and how? Sorry if this is very simple, but I am very new and other web resources haven't been any help | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/210638",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120018/"
]
} |
210,653 | I am attempting to concatenate file names to use in an ftp mdelete command. Each file name needs to be separated by a space in order for the mdelete command to work. The $i variable is in a loop and I am attempting to assign the file name located in $i to $FILESTODELETE in addition to the file names already in $FILESTODELETE
for i in `ls`
do
    $FILESTODELETE = "$FILESTODELETE $i "
    .....
END......
mdelete $FILESTODELETE | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/210653",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119996/"
]
} |
210,694 | printf "-Xdebug" gives:
bash: printf: -X: invalid option
printf: usage: printf [-v var] format [arguments]
echo -n "-Xdebug" works, but according to this question here it isn't portable: "There are multiple versions of the echo command, with different behaviors. Apparently the shell used for your script uses a version that doesn't recognize -n." How can I have a string be printed to screen uninterpreted, as is? | Add a format string:
printf '%s' '-Xdebug'
Or use -- to signal the end of option processing:
printf -- '-Xdebug' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/210694",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19506/"
]
} |
210,741 | socat TCP-LISTEN:22,fork TCP:192.168.0.15:5900 How can I tell to socat , that port 22 is only trusted from the remote IP address 8.8.8.8, and it should not accept connections from other IP addresses? This is on a Linux server. | You can add the range option to the socat listening address: socat TCP-LISTEN:22,fork,range=8.8.8.8/32 TCP:192.168.0.15:5900 Or you can add the tcpwrap=vnc_forward option and define global rules for that vnc_forward service as per hosts_access(5) . That won't stop the connections from reaching socat , but socat will ignore them (with a warning) if they don't come from 8.8.8.8. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/210741",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
210,763 | I have an if statement in a script. It looks like this:
if [ "$a" != "0" -a "$b" != "100" ]; then
#some commands here
If I'm not mistaken, the line above will work if both conditions are true. Now, how can I test that either one or both conditions are true before executing some commands? | The standard (POSIX sh and utilities) canonical legible ways would be:
string comparison:
if [ "$a" != 0 ] || [ "$b" != 100 ]; then...
decimal integer comparison (0100 is 100; whether leading blanks are ignored or not depends on the implementation though):
if [ "$a" -ne 0 ] || [ "$b" -ne 100 ]; then...
integer comparison (0x64, 0144 are 100 (POSIX mode has to be enabled for some shells for octals); depending on the shell 100.0, 1e2, 50+50, ($RANDOM 0.003% of the time)... will be as well):
if [ "$((a != 0 || b != 100))" -ne 0 ]; then...
However, if the content of the variables causes that arithmetic expansion to fail with a syntax error, that will cause the shell to abort, so you may want to run that in a subshell to account for that:
if ([ "$((a != 0 || b != 100))" -ne 0 ]); then
You probably shouldn't use that form anyway if the content of the variables is not under your control, as that would be an arbitrary command execution vulnerability in many shells (bash, ksh, zsh) (for instance with values of $a like x[$(reboot)]). Which one you'll choose depends on what the content of the variables may be and what you want to allow them to be. If you know they contain decimal integer numbers in their canonical form, all 3 will be equivalent. In any case, avoid the -a / -o test operators, which are deprecated and unreliable in the general case (not here if you have control on the content of the variables though). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/210763",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119587/"
]
} |
210,791 | I saw a script where the condition in an if statement used a $ and I don't understand why?
if $( ssh user@host " test -e file " ); then
    echo "File is there"
else
    echo "We don't that file on that host"
fi
| $(...) is a command substitution. The shell runs the enclosed command, and the expression is replaced by the command's standard output. Generally, this would produce an error if the replacement text did not name a command that the shell could then run. However, test produces no output, so the result is an empty string that the shell "skips". For example, consider what happens if you run
if $( ssh user@host " echo foo " ); then
    echo "File is there"
else
    echo "We don't that file on that host"
fi
The given code is correctly written without the unnecessary command substitution; the only thing the if statement needs is the exit status of the command:
if ssh user@host "test -e file"; then
    echo "File is there"
else
    echo "We don't that file on that host"
fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/210791",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120120/"
]
} |
210,879 | I have a btrfs filesystem of about 7G in a 10G image file img.btrfs (I shrank it using btrfs fi resize -3G /mnt). How can I find the total size (end byte offset) of the filesystem, so that I can shrink the image size? I.e. find out $SIZE for
truncate -s $SIZE img.btrfs
A mechanism that applies to any other filesystem inside an image file would be a plus. NOTE: one thing that does work is:
INITIAL=$(stat -c %s img.btrfs)
mount img.btrfs /mnt
btrfs fi resize -$NBYTES /mnt
umount /mnt
truncate -s $((INITIAL - NBYTES + 1024*1024)) img.btrfs
mount img.btrfs /mnt
btrfs fi resize max /mnt
i.e. shrink the btrfs, shrink the image by a little bit less (leaving a 1M overhead), then grow the btrfs to the maximum afforded by the shrunk image. | Annoyingly, btrfs filesystem show returns an approximate value if the size isn't a multiple of 1MB. It also requires a loop device; btrfs filesystem show img.btrfs doesn't work (as of Debian jessie). I can't find another btrfs subcommand that would help. But file img.btrfs helpfully returns the desired size.
$ truncate -s 16684k /tmp/img.btrfs
$ /sbin/mkfs.btrfs /tmp/img.btrfs
SMALL VOLUME: forcing mixed metadata/data groups
Btrfs v3.17
See http://btrfs.wiki.kernel.org for more information.
Turning ON incompat feature 'mixed-bg': mixed data and metadata block groups
Turning ON incompat feature 'extref': increased hardlink limit per file to 65536
Created a data/metadata chunk of size 1703936
failed to open /dev/btrfs-control skipping device registration: Permission denied
fs created label (null) on /tmp/img.btrfs nodesize 4096 leafsize 4096 sectorsize 4096 size 16.29MiB
$ truncate -s 32m /tmp/img.btrfs
$ file /tmp/img.btrfs
/tmp/img.btrfs: BTRFS Filesystem sectorsize 4096, nodesize 4096, leafsize 4096, UUID=61297945-d399-4fdc-ba9f-750ef9f9dfdb, 28672/17084416 bytes used, 1 devices
It directly reads the 8-byte little-endian value at offset 0x10070. If you don't want to parse the output of file, you can extract it. The following POSIX snippet does the job¹:
size_hex=$(cat /tmp/img.btrfs | dd ibs=8 skip=8206 count=1 2>/dev/null |
           od -tx8 -An | tr abcdef ABCDEF | tr -dc 0-9ABCDEF)
[ ${#size_hex} -eq 16 ] &&
{ echo "ibase=16; $size_hex"; } | bc
or in Perl:
</tmp/btrfs.img perl -e 'seek(STDIN, 0x10070, 0) or sysread(STDIN, $_, 0x10070) == 0x10070 or die "seek"; sysread(STDIN, $_, 8) == 8 or die "read"; print unpack("Q<", $_), "\n"'
file works for some other filesystem types, but that doesn't help much for scripts because the output isn't standardized. I can't think of a generic utility with a standard interface for all common filesystems, maybe some virtualization or forensics tool.
¹ Exercise: why is this a useful use of cat? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/210879",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117741/"
]
} |
210,930 | Previously I have been using this handy script with oh-my-zsh to set the tab color whenever I ssh into a machine:
# iTerm2 window/tab color commands
# http://code.google.com/p/iterm2/wiki/ProprietaryEscapeCodes
tab-color() {
    echo -ne "\033]6;1;bg;red;brightness;$1\a"
    echo -ne "\033]6;1;bg;green;brightness;$2\a"
    echo -ne "\033]6;1;bg;blue;brightness;$3\a"
}
tab-reset() {
    echo -ne "\033]6;1;bg;*;default\a"
    trap - INT EXIT
}
# Change the color of the tab when using SSH
# reset the color after the connection closes
color-ssh() {
    if [[ -n "$ITERM_SESSION_ID" ]]; then
        trap "tab-reset" INT EXIT
        if [[ "$*" =~ "production|ec2-.*compute-1" ]]; then
            tab-color 255 0 0
        else
            tab-color 144 181 80 #0 255 0
        fi
    fi
    ssh $*
}
compdef _ssh color-ssh=ssh
alias ssh=color-ssh
However, today I have discovered that the autocomplete is now broken! My ssh no longer autocompletes if I run this script. How do I diagnose what is going on? Edit: Disabling oh-my-zsh and sourcing the file leads to the error: command not found: compdef . | Ok, I've found the solution: deleting all zcompdump files solved the problem:
rm ~/.zcompdump* | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/210930",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120193/"
]
} |
210,939 | Are the commands in /etc/rc.local run by su by default? Do I need to specify sudo before each command, or will they be run by su regardless? | su is not a user; it's a program to run subsequent commands/programs under the alternate identity of another user than the one executing the command. It is very similar to sudo in that regard. Unless another user is specified, both commands will default to running the command under the alternate identity of the root user, the superuser/administrator. The main difference between su and sudo is that su requires you to know the password of that alternate user, whereas sudo will prompt for the password of the user running the sudo command and requires setup so that the user is allowed to run the requested commands/programs. (When root runs either su or sudo no password is required.) Like any init script, the /etc/rc.local script is executed by the root user, and you do not need to prepend either su or sudo to the commands/programs that need to run as root. You may still need to use su or sudo in your init scripts if those commands need to be executed not as root but as another user/service-account:
su - oracle /do/something/as/oracle/user | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/210939",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120482/"
]
} |
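A short sketch of an rc.local along those lines (the daemon paths and the oracle account below are illustrative, not from the original post):

#!/bin/sh
# rc.local runs as root at boot, so no su/sudo is needed for root-level commands
/usr/local/bin/start-daemon
# Drop privileges for a service that must not run as root; background it so boot can continue
su - oracle -c '/opt/oracle/bin/start-listener' &
exit 0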
210,948 | I tried to use the ls command and got an error: bash: /bin/ls: cannot execute binary file What can I use instead of this command? | You can use the echo or find commands instead of ls : echo * or: find -printf "%M\t%u\t%g\t%p\n" | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/210948",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
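If echo happens to be unavailable as well, printf is another shell builtin that can stand in for a broken /bin/ls; a sketch:

printf '%s\n' *      # one visible entry per line
printf '%s\n' .* *   # include hidden entries too (note ".*" also matches . and ..)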
210,982 | Question: How do I launch a program while ensuring that its network access is bound via a specific network interface? Case: I want to access two distinct machines with the same IP (192.168.1.1), but accessible via two different network interfaces (eth1 and eth2). Example:
net-bind -D eth1 -exec {Program 192.168.1.1}
net-bind -D eth2 -exec {Program 192.168.1.1}
The above is an approximation of what I'd like, inspired by the hardware binding done via primusrun and optirun. Challenge: As suggested in a related thread, the interfaces used are not chosen by the program, but rather by the kernel (hence the pre-binding syntax in the above example). I've found some related solutions, which are unsatisfactory. They are based on binding network interfaces via user-specific network blacklisting; i.e., running the process as a user which can only access a single specific network interface. | For Linux, this has already been answered on Superuser - How to use different network interfaces for different processes?. The most popular answer uses an LD_PRELOAD trick to change the network binding for a program, but modern kernels support a much more flexible feature called 'network namespaces' which is exposed through the ip program. This answer shows how to use this. From my own experiments I have done the following (as root):
# Add a new namespace called test_ns
ip netns add test_ns
# Set test_ns to use eth0; after this point eth0 is not usable by programs
# outside the namespace
ip link set eth0 netns test_ns
# Bring up eth0 inside test_ns
ip netns exec test_ns ip link set eth0 up
# Use dhcp to get an ipv4 address for eth0
ip netns exec test_ns dhclient eth0
# Ping google from inside the namespace
ip netns exec test_ns ping www.google.co.uk
It is also possible to manage network namespaces to some extent with the unshare and nsenter commands. This also allows you to create separate spaces for PIDs, users and mount points. For some more information see: "Reliable way to jail child processes using nsenter" and "Namespaces in operation". | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/210982",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108221/"
]
} |
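When the experiment is over, the namespace can be torn down and the interface handed back; a sketch assuming the same names as above (PID 1 is used here to identify the initial namespace):

# Move eth0 back to the namespace of PID 1 (the initial one)
ip netns exec test_ns ip link set eth0 netns 1
# Remove the now-empty namespace
ip netns delete test_ns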
211,005 | I am using Debian and I want to make an icon on the launcher that I can click on to open my Teamspeak server. Currently, I have to go to the terminal and type the following commands:
cd /usr/bin/teamspeak3-server_linux-amd64
./ts3server_minimal_runscript.sh
This launches my Teamspeak 3 server, leaving the terminal open, which is time consuming, and it is annoying to have a terminal open solely for this purpose. In Ubuntu I just made a .desktop file and dragged the icon onto my launcher, which is miles better. Not sure how to do that on Debian though, can someone advise? | Create a desktop file for the Teamspeak 3 server, place it in the /usr/share/applications directory, and run sudo update-desktop-database . How to create the desktop file: open any text editor of your choice, add the lines below, and save the file with any name you want, like teamspeak_3_server.desktop .
[Desktop Entry]
Type=Application
Exec=/usr/bin/teamspeak3-server_linux-amd64/ts3server_minimal_runscript.sh
Icon=/path/to/teamspeak3/icon
Name=Teamspeak 3 server
GenericName=Teamspeak
Categories=Network;
Change the Icon path if you want a fancy application icon. I suggest creating a symlink for ts3server_minimal_runscript.sh to avoid the long line, and changing the Exec= line of the desktop file accordingly:
sudo ln -s /usr/bin/teamspeak3-server_linux-amd64/ts3server_minimal_runscript.sh /usr/bin/ts3server | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/211005",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119849/"
]
} |
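On Debian, the desktop-file-utils package ships a validator that catches syntax mistakes which would otherwise make the launcher entry silently fail to appear; a quick check:

desktop-file-validate /usr/share/applications/teamspeak_3_server.desktop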
211,060 | Why is this shell script printing inputs twice? I expected the script to ignore the inputs after 5. Script:
#! /bin/bash
echo "Enter 5 words : "
read a b c d e
printf "> %s %s %s %s %s <" $a $b $c $d $e
Output:
user@linux:~$ pico ifs2.sh
user@linux:~$ ./ifs2.sh
Enter 5 words :
1 2 3 4 5
> 1 2 3 4 5 <user@linux:~$ ./ifs2.sh
Enter 5 words :
1 2 3 4 5 6
> 1 2 3 4 5 <> 6 <user@linux:~$ ./ifs2.sh
Enter 5 words :
1 2 3 4 5 6 7 8 9 0
> 1 2 3 4 5 <> 6 7 8 9 0 <user@linux:~$
And the following script works no matter what $IFS is set to. Why?
#! /bin/bash
old="$IFS"
IFS=":"
echo "IFS = $IFS"
echo "Enter 5 words : "
read a b c d e
printf "> %s %s %s %s %s <" $a $b $c $d $e
IFS="$old"
Output:
user@linux:~$ ./ifs2.sh
IFS = :
Enter 5 words :
1 2 3 4 5
> 1 2 3 4 5 <user@linux:~$ ./ifs2.sh
IFS = :
Enter 5 words :
1 2 3 4 5
> 1 2 3 4 5 <user@linux:~$ ./ifs2.sh
IFS = :
Enter 5 words :
1:2:3:4:5
> 1 2 3 4 5 <user@linux:~$ | You have three problems: With read, if there are fewer variable names than fields in the input, the last var will be bound to all the remaining fields on the line, with delimiters. That means that $e gets 5 6 in your first unexpected example. Because all of $a .. $e are unquoted, their values undergo field splitting. If $e holds " 5 6 " then it expands into two arguments to the command. printf consumes all its arguments, using as many arguments at once as there are % substitutions, repeatedly. This is buried in the documentation as: "The format operand shall be reused as often as necessary to satisfy the argument operands. Any extra c or s conversion specifiers shall be evaluated as if a null string argument were supplied; other extra conversion specifications shall be evaluated as if a zero argument were supplied." In other words, if there are unused arguments it starts over again and processes them from the beginning too, including the whole format string. This is useful when you want to format an entire array, say:
printf '%b ' "${array[@]}"
Your printf command gets one argument from each of $a .. $d, and then however many are left over from $e. When $e is " 5 6 ", printf has two goes around, the second just getting 6 to format. When it's 5 6 7 8 9 10 it has the full range of substitutions for the second printing. You can avoid all of these by adding an extra dummy field to read, and quoting your parameter substitutions (which is always a good idea):
read a b c d e dummy
printf "> %s %s %s %s %s <" "$a" "$b" "$c" "$d" "$e"
This will give:
Enter 5 words :
1 2 3 4 5 6 7 8 9 10
> 1 2 3 4 5 <
dummy gets all the extra fields, and printf only gets the five arguments you expected. Your second edited-in question has a similar answer: only a gets a value when IFS doesn't have a space. That means $b .. $e expand to nothing, so printf only gets a single argument. Your spaces from the format string are printed, with nothing substituted in between them ("as if a null string argument were supplied"). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/211060",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
211,118 | Is there a way to execute a bash command when I click on text in a browser? The web page is on computer A, the browser is on computer B: I want to execute the code on computer B, e.g. hsetroot to change the wallpaper when clicking an image. | I solved it using a nodeJS server (not clean/final code, but it's working). Computer A (server):
function change_wallpaper(image){
    var objReq = new XMLHttpRequest();
    objReq.open("GET", "http://localhost:8888" + "?image=" + image, false);
    objReq.send(null);
}
<img src="./img/1.jpeg" onclick="change_wallpaper(this.src);" />
<img src="./img/2.jpeg" onclick="change_wallpaper(this.src);" />
Computer B (client): file called server.js and executed with nodejs server.js
var http = require("http");
var sys = require('sys');
var exec = require('child_process').exec;
var url = require("url");
function onRequest(request, response) {
    var params = url.parse(request.url, true).query;
    function puts(error, stdout, stderr) { sys.puts(stdout); }
    exec("/usr/bin/feh --bg-center " + params.image, puts);
    response.writeHead(200, {'Content-Type': 'text/plain'});
    response.end('Wallpaper');
}
http.createServer(onRequest).listen(8888); | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/211118",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119603/"
]
} |
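One caution about the sketch above: exec() splices the query string straight into a shell command line, so shell metacharacters in ?image= would run on computer B. The exposure is easy to demonstrate from a shell (a hypothetical, harmless payload; a fix would be to switch exec to execFile, which passes arguments as an array):

# Everything after the semicolon runs in the server's shell
curl 'http://localhost:8888/?image=/tmp/x.jpg;id'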
211,173 | So I want to add 10 seconds to a time. The command to do that came from here. To illustrate:
STARTIME="$(date +"%T")"
ENDTIME="$STARTIME today + 10 seconds"
CALL="$(echo date -d $ENDTIME +'%H:%M:%S')"
The problem that I have with this code is that if I echo the $CALL variable, it gives:
date -d 12:51:19 today + 10 seconds +%H:%M:%S
The correct version of this string would look like:
date -d "12:48:03 today + 10 seconds" +'%H:%M:%S'
But if I wrap the variable name in quotes, like so:
STARTIME="$(date +"%T")"
ENDTIME="$STARTIME today + 10 seconds"
CALL="$(echo date -d '$ENDTIME' +'%H:%M:%S')"
...it's interpreted as a string literal, and if you echo it, it gives:
date -d $ENDTIME +%H:%M:%S
So what I need to do is call the variable such that its value is swapped into the function and wrapped with double-quotes ("), but avoid the name of the variable being read as a literal string. I'm extremely confused with this, I miss Python! | Just for completeness, you don't need all those (") nor the final $(echo ...). Here's the simplified version of your assignments that produces the same effect:
STARTIME=$(date +"%T")
ENDTIME="$STARTIME today + 10 seconds"
CALL="date -d '$ENDTIME' +'%H:%M:%S'"
Note how you don't need to quote when doing var=$(...) but you usually do with var="many words":
a=$(echo 'a b'); echo "$a"       # result: a b
Inside (") a (') has no special significance, and vice-versa, eg:
a="that's nice"; echo "$a"       # result: that's nice
a='that "is nice'; echo "$a"     # result: that "is nice | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/211173",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58199/"
]
} |
211,246 | In zsh, when I enter which git it shows: git: aliased to noglob git How do I find out which git binary it actually invokes? (eg: /usr/bin/git vs ~/bin/git ). Basically I want to bypass the aliases when I use which . | For zsh, which is shorthand for whence -c, and supports other whence options. In particular: -p Do a path search for name even if it is an alias, reserved word, shell function or builtin. So:
$ which git
git: aliased to noglob git
$ which -p git
/usr/bin/git | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/211246",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34671/"
]
} |
211,291 | As I was exploring Fedora 22 (I'm currently using Fedora 20), I came to learn that Yum has been replaced by DNF. Basically what I want to know is the difference between those two Fedora package managers and the reason for this change. | Compared to Yum, DNF offers:
- Better dependency management
- Support for extensions other than Python
- A documented API
- Lower memory usage
- Less automatic synchronization of metadata with repositories, a process that users often complain "happens too often and takes too much time."
See Will DNF Replace Yum? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/211291",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120342/"
]
} |
211,292 | I would like to know what is contained in the "/" directory and is it capable of utilizing 50GB. On my system all the 50GB allocated to "/" had been utilized. I want to know what to delete if I want to utilize the space in an efficient way? I can't see any big files in "/" . | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/211292",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111726/"
]
} |
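A standard first step for tracking down what fills "/" - stay on one filesystem and sort by size (GNU du/sort flags assumed):

    du -xh --max-depth=1 / | sort -h    # then drill into the biggest directory and repeat

Where packaged, ncdu offers the same view interactively.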
211,309 | I have a large file to parse and reformat, preferably with sed (under bash). The file contains repetitive sequences starting with PATTERN_START and ending with PATTERN_END . These sequences are intermixed with other text that I have to keep unchanged. In the sequences, there are several records (numbered from 1 to n , where n can be from 1 to 12). A record is a group of lines beginning with a line of the form Record i , where i is an integer between 1 and n , and ends with another such line ( Record (i+1) ) or a PATTERN_END line. The length of a record can be from 1 line to 30 lines. Here's a generic representation of an input file: unrelated data (possibly many lines) ⎤PATTERN_START |Record 1 ⎤ | data for Record 1 ⎤ (up to 30 lines) | | (many repetitions) ︙ ⎦ | (up to 12 records) |Record 2 | | data for Record 2 ⎦ |PATTERN_END ⎦ unrelated data (possibly many lines) So, I would like, ONLY for the records located between PATTERN_START and PATTERN_END , to have all the data lines of each record gathered on the Record line. Can anybody help? Hereunder is a sample of the file that I have to parse, and the kind of result I would like to have: Input BlablaBlablaPATTERN_OTHERRecord 1 <- record not between PATTERN_START and PATTERN_END tags => do not touch itDataDataPATTERN_ENDBlablaPATTERN_STARTRecord 1 <- record between PATTERN_START and PATTERN_END tags => to put in one lineDataDataDataRecord 2 <- record between PATTERN_START and PATTERN_END tags => to put in one lineDataDataRecord 3 <- record between PATTERN_START and PATTERN_END tags => to put in one lineDataDataDataDataPATTERN_ENDBlablaBlablaBlablaBlablaPATTERN_STARTRecord 1 <- record between PATTERN_START and PATTERN_END tags => to put in one lineDataDataDataPATTERN_ENDBlablaBlablaPATTERN_OTHERRecord 1 <- record not between PATTERN_START and PATTERN_END tags => do not touch itDataDataRecord 2 <- record not between PATTERN_START and PATTERN_END tags => do not touch itDataPATTERN_ENDBlablaBlablaPATTERN_STARTRecord 1 <- record between PATTERN_START and PATTERN_END tags => to put in one lineDataRecord 2 <- record between PATTERN_START and PATTERN_END tags => to put in one lineDataDataDataPATTERN_ENDBlablaBlabla Output BlablaBlablaPATTERN_OTHERRecord 1 <- was not between PATTERN_START and PATTERN_END tags => not modifiedDataDataPATTERN_ENDBlablaPATTERN_STARTRecord 1 Data Data Data <- record data grouped in one lineRecord 2 Data Data <- record data grouped in one lineRecord 3 Data Data Data Data <- record data grouped in one linePATTERN_ENDBlablaBlablaBlablaBlablaPATTERN_STARTRecord 1 Data Data Data <- record data grouped in one linePATTERN_ENDBlablaBlablaPATTERN_OTHERRecord 1 <- was not between PATTERN_START and PATTERN_END tags => not modifiedDataDataRecord 2 <- was not between PATTERN_START and PATTERN_END tags => not modifiedDataPATTERN_ENDBlablaBlablaPATTERN_STARTRecord 1 Data <- record data grouped in one lineRecord 2 Data Data Data <- record data grouped in one linePATTERN_ENDBlablaBlabla | I think this is what you want, using GNU sed: sed -n '/^PATTERN_START/,/^PATTERN_END/{ //!{H;/^Record/!{x;s/\n\([^\n]*\)$/ \1/;x}}; /^PATTERN_START/{h};/^PATTERN_END/{x;p;x;p};d };p' file Explanation sed -n #Non printing'/^PATTERN_START/,/^PATTERN_END/{#If the line falls between these two patterns execute the next block //!{ #If the line does not match the previous pattern (so the start and end lines are skipped), then execute the next block H; #append the line to the hold buffer, so this appends all lines between #`/^PATTERN_START/` and `/^PATTERN_END/`, not including those. /^Record/!{ #If the line does not begin with Record then execute the next block x;s/\n\([^\n]*\)$/ \1/;x #Swap current line with pattern buffer holding all our other lines #up to now. Then remove the last newline. As this is only executed when #Record is not matched it just removes the newline from the start #of `data`. #The line is then switched back into the hold buffer. } #End of not record block }; #End of previous pattern match block /^PATTERN_START/{h}; #If line begins with `PATTERN_START` then the hold buffer is overwritten #with this line removing all the previous matched lines. /^PATTERN_END/{x;p;x;p} #If line begins with `PATTERN_END` then swap in our saved lines, print them, #then swap back in the PATTERN_END line and print that as well. ;d #Delete all the lines within the range, as we print them explicitly in the #Pattern end block above };p' file # Print everything that's not in the range; `file` is the name of the input file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/211309",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120426/"
]
} |
211,395 | I have a problem with for loop in bash. For example:I have an array ("etc" "bin" "var") .And I iterate on this array. But in the loop I would like append some value to the array. E.g. array=("etc" "bin" "var")for i in "${array[@]}"doecho $idone This displays etc bin var (of course on separate lines).And if I append after do like that: array=("etc" "bin" "var")for i in "${array[@]}"doarray+=("sbin")echo $idone I want: etc bin var sbin (of course on separate lines). This is not working. How can I do it? | It will append "sbin" 3 times as it should, but it won't iterate over the newly added "sbin"s in the same loop. After the 2nd example: echo "${array[@]}"#=> etc bin var sbin sbin sbin | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/211395",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120481/"
]
} |
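If the goal is for the loop to also visit elements appended during iteration, a counter-based loop re-checks the (growing) array length on every pass - a sketch:

    array=("etc" "bin" "var")
    i=0
    while [ "$i" -lt "${#array[@]}" ]; do
        echo "${array[i]}"
        [ "$i" -eq 0 ] && array+=("sbin")   # append once, mid-loop
        i=$((i + 1))
    done

This prints etc, bin, var, sbin on separate lines, because the length test sees the new element.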
211,402 | Earlier today when I typed sudo yum history I would get a list of operations with a heading like so: ID | Login user | Date and time | Action(s) | Altered------------------------------------------------------------------------------- Now when I type it I get a slightly different heading: ID | Command line | Date and time | Action(s) | Altered------------------------------------------------------------------------------- Notice that I used to have a column 'Login user' but now that column is replaced by 'Command line' Why did it change, and is there a way to switch between the two different outputs, or better yet show both columns together? yum 3.4.3 on CentOS 3.10.0-229 (x86_64) | It will append "sbin" 3 times as it should, but it won't iterate over the newly added "sbin"s in the same loop. After the 2nd example: echo "${array[@]}"#=> etc bin var sbin sbin sbin | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/211402",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38705/"
]
} |
211,465 | I installed Webmin, and then set up the firewall like this: INPUTSSH port ALLOWEDWebmin port ALLOWEDHTTP port (80) ALLOWEDDROP EVERYTHING ELSEFORWARDINGno rulesOUTPUTno rules If I remove DROP EVERYTHING ELSE from INPUT, everything works. However, when that rule is added, apt-get doesn't work, and I can't ping or traceroute anything. Even with DROP EVERYTHING ELSE enabled, Webmin, HTTP and SSH still work. Which ports should I unblock to get apt-get working and allow connecting to other domains from within the server? Thanks | Make sure you also accept connections originated from inside. With iptables: iptables -A INPUT -m state --state ESTABLISHED -j ACCEPT With Webmin, allow Connection states EQUALS Existing Connection | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/211465",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55885/"
]
} |
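A slightly fuller sketch of the stateful-firewall idea from the answer above, using the newer conntrack match (-m state --state still works on older systems):

    iptables -A INPUT -i lo -j ACCEPT                                      # loopback traffic
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

RELATED additionally admits packets tied to existing connections, such as ICMP errors, which is also what lets ping and traceroute replies back in.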
211,471 | By default, in DEVISH (at least for me) the hitch file used is mhitch.py ... I can change the hitch using the -h option for the shfile command, but I need to permanently set it to something. | Make sure you accept also connection originated from inside. With iptables: iptables -A INPUT -m state --state ESTABLISHED -j ACCEPT With Webmin, allow Connection states EQUALS Existing Connection | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/211471",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120526/"
]
} |
211,481 | I'm new to Linux. I have been practicing a few commands. My question is about when I'm creating different files using a different umask. For example: umask 222 , as I understand it, is the same as 777 - 222 = 555, so when I create a new file (call it "newfile" ), then newfile 's permissions should be r-xr-xr-x (or am I wrong?) Whatever: "newfile" was created with r--r--r-- permissions. My umask value in /etc/profile is: if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then umask 002else umask 022fi My uid is 1002. Note: Just for the record, I've already read all the umask questions and documentation from man and I can't understand it yet. | Most programs create files without the execute bits set ( 0666 == -rw-rw-rw- ). Execute bits are pretty much only set by the compiler, during installation of an executable, or manually by the user. Then the umask is applied, to determine the actual permissions. create 0666 rw-rw-rw-umask 0222 r-xr-xr-xeffective 0444 r--r--r-- Note that it's not actually a subtraction, but a bitwise AND of the complement. So it takes 0777 - 0222 = 0555 , and does OCTAL BINARY HUMAN-READABLE 0666 0110110110 -rw-rw-rw-& 0555 0101101101 -r-xr-xr-x 0444 0100100100 -r--r--r-- See also Can not explain ACL behavior | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/211481",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119650/"
]
} |
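A quick way to see the AND-with-complement rule from the answer above in action (safe to run in a scratch directory; directories start from 0777, which is why they keep the x bits):

    umask 222
    touch newfile; ls -l  newfile   # -r--r--r--  (0666 & ~0222 = 0444)
    mkdir newdir;  ls -ld newdir    # dr-xr-xr-x  (0777 & ~0222 = 0555)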
211,519 | Is there a way I can do something like run myscript.sh in fish ? I am using Arch Linux, and have installed the fish shell together with oh-my-fish. Can someone tell me which file I must edit to add my custom shell startup commands? In zsh it was the ~/.zshrc file. What is it in the fish shell? I have a problem: if I put my stuff in bashrc it is not loaded by fish. If I enter bash commands in the fish file ~/.config/fish/config.fish , it throws errors. Is there a way to get fish to load an "sh" file so that I can put all my bash things in that file? | bash and fish have incompatible syntax, so they cannot share startup files. You can put startup commands in ~/.config/fish/config.fish . However this is usually unnecessary! For creating functions or aliases, you can use autoloading functions . For setting variables, including env vars, you can use universal variables . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/211519",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120559/"
]
} |
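A minimal sketch of what the startup file can look like - fish syntax, not POSIX sh:

    # ~/.config/fish/config.fish
    set -gx EDITOR vim    # exported environment variable
    alias ll 'ls -l'

Existing bash scripts don't need translating to be run - invoke them explicitly from the fish prompt, e.g. bash ~/myscript.sh (myscript.sh being a stand-in name). Third-party plugins such as bass can translate bash environment changes into fish, but the explicit bash invocation covers the common case.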
211,550 | I implemented my own Serial-ATA Host-Bus-Adapter (HBA) in VHDL and programmed it onto a FPGA. An FPGA is a chip which can be programmed with any digital circuit. It's also equipped with serial transceivers to generate high speed signals for SATA or PCIe. This SATA controller supports SATA 6 Gb/s line rates and uses ATA-8 DMA-IN/OUT commands to transfer data in up to 32 MiB chunks to and from the device. The design is proven to work at maximum speed (e.g. Samsung SSD 840 Pro -> over 550 MiB/s). After some tests with several SSD and HDD devices, I bought a new Seagate 6 TB Archive HDD ( ST6000AS0002 ). This HDD reaches up to 190 MiB/s read performance, but only 30 to 40 MiB/s write performance! So I dug deeper and measured the transmitted frames (yes that's possible with an FPGA design). As far as I can tell, the Seagate HDD is ready to receive the first 32 MiB of a transfer in one piece. This transfer happens at maximum line speed of 580 MiB/s. After that, the HDD stalls the remaining bytes for over 800 ms! Then the HDD is ready to receive the next 32 MiB and stalls again for 800 ms. All in all an 1 GiB transfer needs over 30 seconds, which equals to circa 35 MiB/s. I assume that this HDD has a 32 MiB write cache, which is flushed in between the burst cycles. Data transfers with less than 32 MiB don't show this behavior. My controller uses DMA-IN and DMA-OUT command to transfer data. I'm not using the QUEUED-DMA-IN and QUEUED-DMA-OUT command, which are used by NCQ capable AHCI controllers. Implementing AHCI and NCQ on an FPGA platform is very complex and not needed by my application layer. I would like to reproduce this scenario on my Linux PC, but the Linux AHCI driver has NCQ enabled by default. I need to disable NCQ, so I found this website describing how to disable NCQ , but it doesn't work. The Linux PC still reaches 190 MiB/s write performance. > dd if=/dev/zero of=/dev/sdb bs=32M count=321073741824 bytes (1.1 GB) copied, 5.46148 s, 197 MB/s I think there is a fault in the article from above: Reducing the NCQ queue depth to 1 does not disable NCQ. It just allows the OS the use only one queue. It can still use QUEUED-DMA-** commands for the transfer. I need to really disable NCQ so the driver issues DMA-IN/OUT commands to the device. So here are my questions: How can I disable NCQ? If NCQ queue depth = 1, is Linux's AHCI driver using QUEUED-DMA-** or DMA-** commands? How can I check if NCQ is disable, because changing /sys/block/sdX/device/queue_depth is not reported in dmesg ? | Thanks to @frostschutz, I could measure the write performance in Linux without NCQ feature. The kernel boot parameter libata.force=noncq disabled NCQ completely. Regarding my Seagate 6TB write performance problem, there was no change in speed. Linux still reaches 180 MiB/s. But then I had another idea: The Linux driver does not use transfers of 32 MiB chunks. The kernel buffer is much smaller, especially if NCQ with 32 queues is enabled (32 queues * 32 MiB => 1 GiB AHCI buffer). So I tested my SATA controller with 256 KiB transfers and voilà, it's possible to reach 185 MiB/s. So I guess the Seagate ST6000AS0002 firmware is not capable of handling big ATA burst transfers. The ATA standard allows up to 65.536 logical blocks, which equals 32 MiB. SMR - Shingled Magnetic Recording Another possibility for the bad write performance could be the shingled magnetic recording technique , which is used by Seagate in these archive devices. Obviously, I triggered a rare effect with my FPGA implementation. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/211550",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116202/"
]
} |
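To probe the chunk-size effect from the answer above on a stock Linux box, one hedged sketch is to bypass the page cache with O_DIRECT writes and compare rates (destructive - sdX is a placeholder for a disposable test device):

    dd if=/dev/zero of=/dev/sdX bs=256K count=4096 oflag=direct   # ~1 GiB in 256 KiB chunks
    dd if=/dev/zero of=/dev/sdX bs=32M  count=32   oflag=direct   # ~1 GiB in 32 MiB chunks

A large gap between the two would hint that big bursts stall the drive's firmware.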
211,577 | Is there a way to uppercase/lowercase only one character in some string? Input Example: syslog_apr_24_30syslog_mar_01_17 Desired output: syslog_Apr_24_30syslog_Mar_01_17 Please note the uppercase beginning of the month. I have tried awk but I'm not good enough to get it working. | You can use \u in GNU sed to uppercase a letter: sed -e 's/_\(.\)/_\u\1/' input Perl does the same: perl -pe 's/_(.)/_\u$1/' input \l does the opposite. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/211577",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84688/"
]
} |
211,632 | Given: main_east_librarymain_west_roofmain_north_roofminor_south_roof How can I use sed (specifically, not awk , tr , etc.) to create: main_east_Librarymain_west_Roofmain_north_Roofminor_south_Roof Something like: $ echo "main_west_librarymain_west_roofmain_north_roofminor_south_roof" | sed 's_\3_upcase(\3)_' Though that gives: sed: -e expression #1, char 16: Invalid back reference | With GNU sed : sed -E 's/[[:alpha:]]+/\u&/3' Would capitalise the third sequence of letters from each line. To capitalise every third sequence of letters in each line: sed -E 's/(([[:alpha:]]+[^[:alpha:]]+){2})([[:alpha:]]+)/\1\u\3/g' To capitalise every third sequence of letters in the whole input , with GNU awk : awk -v RS='[^[:alpha:]]+' -v ORS= ' NR % 3 == 0 {$0=toupper(substr($0,1,1)) substr($0,2)} {print $0 RT}' Or with perl : perl -Mopen=locale -pe 's/\p{alpha}+/++$n % 3 == 0 ? "\u$&" : "$&"/ge' While the [[:alpha:]] character class can be a bit random on some systems (for instance on GNU systems, that includes many numerals with the exclusion of the Arabic ones (0123456789)), Perl's \p{...} is based on Unicode character properties. So those \p{alpha} will include letters in all alphabets and also non-letter alphabetical characters. It will not include combining diacritics though which means that words like Stéphane would be considered as two separate words. So you may want instead: perl -Mopen=locale -pe 's/[\p{alpha}\p{mark}]+/++$n % 3 == 0 ? "\u$&" : "$&"/ge' Though that may end up including too many. Also note that contrary to GNU sed , Perl's \u will correctly transform words like fiddle (where fi is one ligature character) to Fiddle (2 characters F and i ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/211632",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
211,647 | I see the following ephemeral port range on my Linux box. sysctl net.ipv4.ip_local_port_rangenet.ipv4.ip_local_port_range = 32768 61000 I want to extend the port range to start from around 16000. A quick question here being: how safe is it to change the range with respect to other applications? Will other applications be affected by this change? I understand that an application is affected only if it is using the port(s) in the specified port range. But in general, how are these kinds of issues dealt with? | Changing the ephemeral port range might cause problems if you are using Mesos . Mesos advertises the resources of a host out to various Mesos Frameworks which then can choose to use the advertised resources. The advertised resources include CPU, memory, ports, etc. The default set of ports that Mesos advertises is 31000-32000 . This avoids a clash with the default Linux ephemeral port range of 32768-61000 . Notably, Mesos doesn't know whether a port is used by some other process, it just tracks the assignment of ports to the entities it orchestrates ( Mesos Tasks & Mesos Executors ). So if you change the ephemeral port range such that it overlaps with the Mesos port range, it's likely that some arbitrary process will use an ephemeral port that is actually one of those "Mesos ports". This could lead to Mesos offering that port to a Mesos Framework , which would encounter seemingly random failures of its Mesos Executors and/or Mesos Tasks as they will be unable to bind to that port. If you need to increase your ephemeral port range and also need to run Mesos, then you can modify the advertised ports through a mesos-slave (soon to be renamed to mesos-agent ) configuration parameter of --resources . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/211647",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118741/"
]
} |
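The change itself, should it still be wanted after weighing the above (the reserved-ports knob assumes kernel 2.6.35 or newer):

    sysctl -w net.ipv4.ip_local_port_range="16000 61000"
    echo 'net.ipv4.ip_local_port_range = 16000 61000' >> /etc/sysctl.conf      # persist across reboots
    sysctl -w net.ipv4.ip_local_reserved_ports="31000-32000"                   # e.g. carve out the Mesos range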
211,653 | Is there a way to find out the pane index of a particular pane in Tmux? I know I can run something like: tmux display-message -p "#{pane_index}" but that only works on the active pane. I want it to work for whatever pane it's run in. Normally of course it's hard to run a script in a pane that's not the active pane, but you can if you use the :set-window-option synchronize-panes to sync input between all panes. How would I use this? In my job I need to connect to multiple identical servers in a load balancer at the same time, which I do with Tmux panes. I normally turn on the synchronize panes feature to allow me to have whatever I type identically sent to each pane at the same time. This works great. The thing I find is that I'd like to connect to the servers and do something unique to each pane sometimes, using the same "pane index" each time. For example, I'd run a command like so: ssh NODE_$(get_pane_number) which, when synchronized and run in each pane, would run the following commands in a window with 4 panes: ssh NODE_0 in pane 0 ssh NODE_1 in pane 1 ssh NODE_2 in pane 2 ssh NODE_3 in pane 3 I could of course script this, but that would only work well before I started synchronizing inputs. There are times when I'd like to do this after I've started synchronizing inputs as well. | tmux (since v1.5) provides TMUX_PANE in the environment of the process it launches for a pane; each new pane gets a server-unique value. So, assuming that TMUX_PANE is available in your environment, then this should do what I think you want: tmux display -pt "${TMUX_PANE:?}" '#{pane_index}' The ${…:?} syntax in a Bourne-like shell prevents the expansion of missing or empty parameters. In this case, an empty expansion would fall back to the default of using “the currently active pane”, which is usually—but not always—the same as “this pane” (they will likely differ if the command’s tty is not the one that tmux started; e.g. because of using script or expect , et cetera). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/211653",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22734/"
]
} |
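Wrapped into the helper the question sketched (pane numbering starts at the pane-base-index option, 0 by default):

    get_pane_number() { tmux display -pt "${TMUX_PANE:?}" '#{pane_index}'; }
    ssh "NODE_$(get_pane_number)"

With synchronize-panes on, each pane expands the command substitution locally, so pane 0 connects to NODE_0, pane 1 to NODE_1, and so on.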
211,713 | I am not finding my .bash_login and .bash_profile root@linux:~# locate .bash*/etc/bash.bashrc/etc/skel/.bashrc/etc/skel/.bashrc.original/home/noroot/.bashrc/home/noroot/.bashrc.original/root/.bash_history/root/.bashrc/usr/share/base-files/dot.bashrc/usr/share/doc/adduser/examples/adduser.local.conf.examples/bash.bashrc/usr/share/doc/adduser/examples/adduser.local.conf.examples/skel/dot.bashrc/usr/share/kali-defaults/.bashrcroot@linux:~# Is there always only one .bashrc and .bash_profile file for every user? And, is .bashrc and .bash_profile always found in the /home/"user name" directory? | The only ones that bash looks at by default are in the user's home directory, yes. There is also typically a single source for them in Linux -- /etc/skel. The user's home directory does not need to be under /home, though. I see you've edited your question to ask where your .bash_login and .bash_profile files are. Based on the # prompt, I'm going to assume you're running this as root. In that case, your files are /root/.bash_history/root/.bashrc See my original answer above regarding a user's home directory -- it's not always /home; in this case, root's home directory is /root . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/211713",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
211,715 | I am trying to convert hex to decimal to ascii and store it in a variable.I am using the following code. HEX=30DEC=`printf "%d\n" 0x${HEX}`echo "$DEC"ASC=`printf \\$(printf '%03o' $DEC)`echo "$ASC" I am getting the following error syntax error : `(' unexpected I am using Solaris 10, and ksh . I do not want to use a function for ascii and call it to store the value. I want to be able to do it without using a function. | The only ones that bash looks at by default are in the user's home directory, yes. There is also typically a single source for them in Linux -- /etc/skel. The user's home directory does not need to be under /home, though. I see you've edited your question to ask where your .bash_login and .bash_profile files are. Based on the # prompt, I'm going to assume you're running this as root. In that case, your files are /root/.bash_history/root/.bashrc See my original answer above regarding a user's home directory -- it's not always /home; in this case, root's home directory is /root . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/211715",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115560/"
]
} |
211,804 | I want to know if my server establishes a connection to a remote server or if the remote server tries to reach my server. I tried to read the output of lsof and obtain this information: lsof -i TCP:25 USER FD TYPE DEVICE SIZE/OFF NODE NAMEmaster 2657 root 12u IPv4 8086 0t0 TCP *:smtp (LISTEN)smtpd 12950 postfix 6u IPv4 8086 0t0 TCP *:smtp (LISTEN)smtpd 12950 postfix 9u IPv4 35762406 0t0 TCP hostname:smtp->spe.cif.ic.IP:55277 (ESTABLISHED)smtp 13007 postfix 13u IPv4 35762309 0t0 TCP hostname:34434->fake.VVVVV.fr:smtp (ESTABLISHED)smtpd 14188 postfix 6u IPv4 8086 0t0 TCP *:smtp (LISTEN)smtpd 14188 postfix 9u IPv4 35748921 0t0 TCP hostname:smtp->XX.XX.XX.XX:55912 (ESTABLISHED)smtpd 14897 postfix 6u IPv4 8086 0t0 TCP *:smtp (LISTEN) I'd like to know if this information means that my server tries to connect to spe.cif.ic.IP or if it's the other way around. Is the sign -> relevant, or should I use a different command? | I think the clue is in the port numbers; take these two entries smtpd 12950 postfix 9u IPv4 35762406 0t0 TCP hostname:smtp->spe.cif.ic.IP:55277 (ESTABLISHED)smtp 13007 postfix 13u IPv4 35762309 0t0 TCP hostname:34434->fake.VVVVV.fr:smtp (ESTABLISHED) smtpd has received a connection on port smtp(25) from a high port number, whilst smtp connects to remote port smtp(25) and has a local high port number. So -> means connected to | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/211804",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53092/"
]
} |
211,817 | How do I copy the contents of a file in UNIX without displaying the file contents? I don't want to use cat or vi to see the contents. I want to copy them to the clipboard so that I can paste them back into my Windows notepad. I can't copy the file from that server to another due to access restrictions. | X11 If using X11 (the most common GUI on traditional Unix or Linux based systems), to copy the content of a file to the X11 CLIPBOARD selection without displaying it, you can use the xclip or xsel utility. xclip -sel c < file Or: xsel -b < file to store the content of file as the CLIPBOARD X11 selection. To store the output of a command: mycommand | xclip -sel cmycommand | xsel -b Note that it should be stored using a UTF-8 encoding or otherwise pasting won't work properly. If the file is encoded using another character set, you should convert to UTF-8 first, like: <file iconv -f latin1 -t utf8 | xclip -sel c for a file encoded in latin1 / iso8859-1 . xsel doesn't work with binary data (it doesn't accept null bytes), but xclip does. To store it as a CUT_BUFFER (those are still queried by some applications like xterm when nothing claims the CLIPBOARD or PRIMARY X selections and don't need to have a process running to serve it like for selections), though you probably won't want or need to use that nowadays: xprop -root -format CUT_BUFFER0 8s -set CUT_BUFFER0 "$(cat file)" (removes the trailing newline characters from file ). GNU screen GNU screen has the readbuf command to slurp the content of a file into its own copy-paste buffer (which you paste with ^A] ). So: screen -X readbuf file Apple OS/X Though Apple OS/X can use X11, it doesn't by default unless you run an X11 application. You would be able to use xclip or xsel there as OS/X should synchronise the X11 CLIPBOARD selection with OS/X pasteboard buffers, but it would be a bit of a waste to start the X11 server just for that. On OS/X, you can use the pbcopy command to store arbitrary content into pasteboard buffers: pbcopy < file (the file's character encoding is expected to be the locale's one). To store the output of a command: mycommand | pbcopy Shells Most shells have their own copy-paste buffers. In emacs mode, cut and copy operations store the copied/cut text onto a stack which you yank/paste with Ctrl-Y , and cycle through with Alt+Y zsh CUTBUFFER/killring In zsh , the stack is stored in the $killring array and the top of the stack in the $CUTBUFFER variable though those variables are only available from Zsh Line Editor (zle) widgets and a few specialised widgets are the preferred way to manipulate those. Because those are only available via the ZLE, doing it with commands is a bit convoluted: zmodload zsh/mapfilezle-line-init() { if [ -n "$FILE_TO_COPY" ]; then zle copy-region-as-kill $mapfile[$FILE_TO_COPY] unset FILE_TO_COPY fi}zle -N zle-line-initfile-copy() FILE_TO_COPY=$1:A The zle-line-init special widget is executed once at the start of each new command prompt. What that means is that the file will only be copied at the next prompt. For instance, if you do: file-copy file; sleep 2 The file will only be copied after those 2 seconds. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/211817",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77967/"
]
} |
211,834 | I want to use sed to change a slash into a backslash and a slash, i.e. / -> \/ . But it does not work. Here a small example: #!/bin/bashTEST=/etc/halloecho $TESTecho $TEST | sed "s/hallo/bello/g"echo $TEST | sed "s/\//\\\//g" The output of the first three lines is as assumed. But the last one does not work. Why? How to correct the last part? | Use single quotes for the expression you used: sed 's/\//\\\//g' In double quotes, \ has a special meaning, so you have to backslash it: sed "s/\//\\\\\//g" But it's cleaner to change the delimiter: sed 's=/=\\/=g'sed "s=/=\\\/=g" | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/211834",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116283/"
]
} |
211,885 | While I was listening to music I changed the virtual console using Ctrl + Alt + F2 , and suddenly the music was paused. Returning back to the console that runs the X server Ctrl + Alt + F7 or logging in to the user in the console that I'm currently using ( Ctrl + Alt + F2 ), starts playing the music again. Linux (I am using Ubuntu) is a multi-user operating system. As far as I know, it has 7 virtual consoles that users can log on to and work simultaneously. So why is this happening? What would happen if my system was updating while switching the consoles? | Linux has multiple virtual consoles. Ctrl + Alt + F n switches between these consoles. When you switch from console 7 to console 2, the input and output peripherals are re-routed from console 7 to console 2. When console 7 is inactive, it has no access to the input/output peripherals: the display isn't shown on the screen, the applications don't receive keyboard input, etc. For historical reasons, sound input and output uses completely different channels from input devices such as keyboard and mouse and from video displays. Console devices (the abstraction in the operating system) cover keyboard and video but not sound. The most common basic implementation of sound on a Unix system is independent from that system, and permission to use the sound peripherals is granted based on group membership rather than on ownership of the console. This is in fact a design deficiency. Ubuntu has set things up so that the session logged into the console, and only them, has access to the audio device. If you switch consoles, you lose access to the audio device, unless you also log into that other console. This is what really should have been done from the start, but wasn't because the designers of console interfaces weren't thinking about sound. When you switch to another console, your programs keep running, because the CPU is not associated with a console but with a machine: anyone with an account on the machine is allowed to use CPU time. The same goes for other resources such as memory and files (subject to permissions). It's only interactions with the user that are governed by console ownership. Your sound stops playing when you switch to a console where you aren't logged in because your programs lose the privilege to access the sound output device. I believe that Ubuntu implements access control via Polkit , but I don't know exactly how this works. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/211885",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110279/"
]
} |
211,890 | File #1: I have foofooYou have foobarshe/he has foo File #2: barfoobarbarfoo Final: I have foofoobarYou have foobarfoobarshe/he has foobarfoo | With POSIX paste : paste -d'\0' file1 file2 > new_file With paste from GNU coreutils , you can use -d '' . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/211890",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68382/"
]
} |
211,901 | According to CentOS website , they use the source code from Red Hat, but I am not clear about what version of Red Hat source code is being used to build each version of CentOS. Is there a numeric equivalent? For example, is CentOS 6.5 equivalent/based on Red Hat Enterprise Linux 6.5? | CentOS 6.5 is based on RHEL 6.5; prior to CentOS 7, CentOS versions exactly match RHEL versions. The pattern changed with CentOS 7, which uses something like a build number: CentOS 7 (1406) is based on RHEL 7.0, CentOS 7 (1503) is based on RHEL 7.1, etc. You'll find all the details on the CentOS wiki (look for the "Archived Versions" section). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/211901",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120800/"
]
} |
211,907 | I am developing a web page using Apache Web Server on my PC. When I want to open it in the browser I use the IP, for example: http://192.168.1.6/proyect My host is part of a local network (a laptop and mobile device). In both of them, I can open up the website using the url mentioned above. I would like to know if it is possible to use a "domain" instead of an IP in the URL? For example, http://dev-pc/proyect . My first attempt was to know whether a name exists: $ hostnamectl Static hostname: localhost.localdomain Icon name: computer-desktop Chassis: desktop Machine ID: d388b100e4b34a17a685369e53045669 Boot ID: ee82c1e45d35433785b57040944928f3 Operating System: Fedora 20 (Heisenbug) CPE OS Name: cpe:/o:fedoraproject:fedora:20 Kernel: Linux 3.19.8-100.fc20.x86_64 Architecture: x86_64 Then, I tested it accessing this URL: http://localhost.localdomain/proyect And it works, but on other devices the page is not found. Looks like localhost.localdomain is just recognized by my PC. Thank you in advance. | CentOS 6.5 is based on RHEL 6.5; prior to CentOS 7, CentOS versions exactly match RHEL versions. The pattern changed with CentOS 7, which uses something like a build number: CentOS 7 (1406) is based on RHEL 7.0, CentOS 7 (1503) is based on RHEL 7.1, etc. You'll find all the details on the CentOS wiki (look for the "Archived Versions" section). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/211907",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118880/"
]
} |
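A common low-tech way to reach a development box by name on a small LAN, without running a DNS server, is a hosts entry on each client device - a sketch using the name and address from the question:

    # Linux/macOS: /etc/hosts   (Windows: C:\Windows\System32\drivers\etc\hosts)
    192.168.1.6   dev-pc

After that, http://dev-pc/proyect resolves from those clients; mobile devices without an editable hosts file generally need a small DNS service on the LAN, such as dnsmasq, instead.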
211,976 | I have a script I'd always like to run with 'x' instances in parallel. The code looks like this: for A in do for B in do (script1.sh $A $B;script2.sh $A $B) & done #Bdone #A The scripts themselves run DB queries, so it would benefit from parallel running. Problem is 1) 'wait' doesn't work (because it waits until all background jobs are finished before starting new ones, even if I include a thread counter), and that wastes lots of time. 2) I couldn't figure out how to get parallel to do that. I only found examples where the same script gets run multiple times, but not with different parameters. 3) the alternative solution would be: for A in do for B in do while threadcount>X do sleep 60 done (script1.sh $A $B;script2.sh $A $B) & done #Bdone #A But I didn't really figure out how to get the thread count reliably. Some hints in the right direction are very much welcome. I'd love to use parallel, but the thing just doesn't work as the documentation tells me. I do parallel echo ::: A B C ::: D E F (from the doc) and it tells me parallel: Input is read from the terminal. Only experts do this on purpose. Press CTRL-D to exit. and that is just the simplest example from the man pages. | Using GNU Parallel it looks like this: parallel script1.sh {}';' script2.sh {} ::: a b c ::: d e f It will spawn one job per CPU. GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to. It can often replace a for loop. If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU. GNU Parallel instead spawns a new process when one finishes - keeping the CPUs active and thus saving time. Installation If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this: (wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash For other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README Learn more See more examples: http://www.gnu.org/software/parallel/man.html Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1 Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/211976",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120838/"
]
} |
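One addition to the answer above: the question asked for exactly 'x' concurrent instances, and GNU Parallel's -j flag caps concurrency; {1} and {2} pick out values from the two input sources:

    parallel -j 4 'script1.sh {1} {2}; script2.sh {1} {2}' ::: a b c ::: d e f

This runs every (A,B) combination, at most 4 at a time, starting a new pair as soon as one finishes.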
212,009 | Say I had a block of text in the ~/.bashrc : #~/.bashrc# ...some commands...# aliasesalias suicide='sudo rm -rf /'# end aliases# other commands I wish to replace that block of text with some other text contained between two markers in another file: stuff-to-place-in-bashrc.txt # stuff-to-place-in-bashrc.txt# ...stuff...# aliasesalias ldir='ls * -d'alias ithinklifeisworthliving='echo all good'# end aliases# ...more stuff... I've tried sed -ne 's/# aliases\s+\(.*\)\s+# end aliases/\1/' stuff-to-place-in-bashrc.txt But I'm really swinging in the dark here. Could someone help me out? How do I extract the text from the stuff-to-place-in-bashrc.txt ? How do I replace the other section in ~/.bashrc with the extracted text from question 1? Edit Thanks for the updates guys, those wondering why I would want something like this: allows for cherry-picked updates to script files without overwriting user-made additions. useful for shared and frequently updated standard operating environments (like my last job at Tyro that coded with XP). Go crazy, guys. | Ed is the standard editor: you can develop and test ed scripts interactively, then let them do their work. For small files like .bashrc or any code text, ed is performant because it reads the whole file in one go and works with the buffer. For the same reason you should not use ed for big files like log files. But with an ed script, your job is done in no time: ed ~/.bashrc<<EOF/^# aliases+,/^# end aliases/-1d-r !sed -n '/^# aliases/,/^# end aliases/p' stuff-to-place-in-bashrc.txt|grep -v '^#'wqEOF I use similar scripts to automatically tune configuration files, such as .asoundrc for different environments I take my laptop to. The best document about ed comes as a simple manual page from the very cool PLAN9 system. I translated it to an ed.ps PostScript document. If you are interested in PLAN9 you should check 9front and http://cat-v.org/ as the original Bell Labs version is still maintained but has a very simple filesystem. A final word about editor wars, emacs, vim and the like: acme rules! | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/212009",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63678/"
]
} |
212,020 | I first learned how to code in a C language, and using if ( code ) then code fi makes it really hard for me to read my code (don't ask why). Is there a substitute for then ? Something like this: if ( code ) { code } fi | There's no substitute, but you can use both if you really need the braces: if true ; then { something} fi That uses an unnecessary explicitly-braced command group as the body of the then block. Any commands in the braces will just be executed in the current shell when the then runs, exactly as though the braces weren't there. I wouldn't really recommend this style in general or for code that someone else will need to read, but it does give you the braces you want, and it'll do the trick if you're using an editor that requires braced blocks or something like that. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/212020",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119717/"
]
} |
212,059 | I'm trying to write a systemd service which should expose the options start|stop|status|restart . This is the current script: [Unit]Description=Daemon to start ark serverAfter=network.target[Service]ExecStart=/etc/init.d/arkdaemon startExecStop=/etc/init.d/arkdaemon stopType=forking[Install]WantedBy=multi-user.target I can't find any way to specify a custom status command. I think there is a way, but how? | Systemd supports custom status messages, but here are some prerequisites that must be met: the type of service should be notify your service must update systemd with your current service status either via /run/systemd/notify socket or by calling systemd-notify As a reference you can check Apache HTTPD on Fedora (maybe same in other distros, don't know): systemctl status httpd.service● httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2017-10-06 15:21:07 CEST; 18h ago Docs: man:httpd.service(8) Process: 14424 ExecReload=/usr/sbin/httpd $OPTIONS -k graceful (code=exited, status=0/SUCCESS) Main PID: 4105 (httpd) Status: "Total requests: 8; Idle/Busy workers 100/0;Requests/sec: 0.000118; Bytes served/sec: 0 B/sec" You can see that Apache is reporting status as Total requests: 8; Idle/Busy workers 100/0 So when I attached strace to pid 4105, we can see that it is periodically sending status updates to systemd : sudo strace -f -p 4105wait4(-1, 0x7ffcfab4a25c, WNOHANG|WSTOPPED, NULL) = 0select(0, NULL, NULL, NULL, {tv_sec=1, tv_usec=0}) = 0 (Timeout)socket(AF_UNIX, SOCK_DGRAM|SOCK_CLOEXEC, 0) = 8getsockopt(8, SOL_SOCKET, SO_SNDBUF, [212992], [4]) = 0setsockopt(8, SOL_SOCKET, SO_SNDBUFFORCE, [8388608], 4) = 0sendmsg(8, {msg_name={sa_family=AF_UNIX, sun_path="/run/systemd/notify"}, msg_namelen=21, msg_iov=[{iov_base="READY=1\nSTATUS=Total requests: 8"..., iov_len=110}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 110close(8) = 0wait4(-1, 0x7ffcfab4a25c, WNOHANG|WSTOPPED, NULL) = 0 You can see that it is sending READY=1\nSTATUS=Total requests: 8... into socket /run/systemd/notify Recommended reading man systemd-notify or official documentation . Example : Service startup in Systemd | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/212059",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120882/"
]
} |
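A hedged sketch of the two prerequisites above, wrapped around the question's init script. Type=notify expects the notifier to be inside the service's cgroup, hence NotifyAccess=all; shell-based notifiers are known to be slightly racy, so treat this as a starting point. count_players is a hypothetical helper, and the unit/script paths are placeholders:

    # /etc/systemd/system/arkdaemon.service (fragment)
    [Service]
    Type=notify
    NotifyAccess=all
    ExecStart=/usr/local/bin/ark-wrapper

    #!/bin/sh
    # /usr/local/bin/ark-wrapper
    /etc/init.d/arkdaemon start
    systemd-notify --ready --status="ark server up"
    while sleep 60; do
        systemd-notify --status="players: $(count_players)"   # count_players is hypothetical
    done

With this in place, systemctl status shows the latest Status: line, just as in the Apache example above.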
212,063 | I would like to test if a list of integer values (e.g stored in a file, one per line) is strictly increasing, using a bash script. Is there any simple/concise way to achieve that? | Check if the file's contents remain the same after sorting numerically and filtering duplicated lines: cmp file <(sort -n file | uniq) At least GNU sort can do this check directly: sort -c -u -n file (The POSIX sort documentation mentions this too so it should be supported everywhere.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/212063",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101052/"
]
} |
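An equivalent explicit check, one value per line assumed - it exits non-zero at the first value that fails to increase strictly:

    awk 'NR > 1 && $1 <= prev { exit 1 } { prev = $1 }' file && echo "strictly increasing"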
212,086 | I'm trying to make a sed command so that numbers longer than 3 digits get converted to hex. I.e. a string like 124 3275 7535 should result in 124 0xccb 0x1d6f . Here's what I currently have: sed 's@\([0-9]\{4,\}\)@sh -c "printf 0x%x \1"@ge' But when the string doesn't match, it attempts to run the unchanged string as an external command, so for the example string above I get sh: 1: 124: not found How can I achieve what I'm trying to do (preferably still using sed )? | Although it's not "with sed" as per your question title, if you switch from sed to perl you could use an equivalent expression such as perl -p -e 's/\b\d{4,}\b/sprintf "%#x", $&/ge' which should allow you to preserve other expressions in your chain more-or-less as is. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/212086",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27672/"
]
} |
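Checking the one-liner above against the question's own sample gives the expected result (3275 = 0xccb, 7535 = 0x1d6f):

    $ echo '124 3275 7535' | perl -pe 's/\b\d{4,}\b/sprintf "%#x", $&/ge'
    124 0xccb 0x1d6f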
212,098 | I have 4 identical boxes, I logged on to the first one and did sudo -s , followed by ssh-keygen -t rsa which generated the keys and placed them in /root/.ssh/ I then typed ssh-copy-id -i /root/.ssh/id_rsa.pub user@machine which then asked for my password and worked fine. When I went to the next box, I got the following message: mktemp: failed to create file via template ‘/home/user/.ssh/ssh-copy-id_id.XXXXXXXXXX’: No such file or directorymktemp failed and I seem to be pretty stuck, but also very confused. Can anyone help and explain what has happened/why this worked on one machine? | I didn't realise ssh-copy-id is a script and I took a look at it. I was using Ubuntu as root (via sudo -s ) after logging in as a non root user, so home was still set as /home/user So, mktemp doesn't create subfolders, and is hard coded to create a temp file as ~/.ssh/tempfile - I just created .ssh in /home/user and it worked fine. I had previously used SSH on the first machine, so this folder already existed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/212098",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1886/"
]
} |
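The one-line fix implied by the answer above, creating the directory with the permissions sshd expects:

    install -d -m 700 ~/.ssh    # or: mkdir -p ~/.ssh && chmod 700 ~/.ssh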
212,127 | How can I override the file exists: warning from zsh? > echo > newfile.txt> echo > newfile.txt zsh: file exists: newfile.txt In these cases I prefer my shell to not complain and simply overwrite the file, like bash. Likewise, how to override the following: $ ls >> /tmp/testfile.txt zsh: no such file or directory: /tmp/testfile.txt | Does your setopt output mention noclobber ? If so, that's it, just setopt clobber The documentation for the option is at http://zsh.sourceforge.net/Doc/Release/Options.html#index-file-clobbering_002c-allowing | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/212127",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38047/"
]
} |
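If you'd rather keep noclobber on and override it per command, zsh (like bash) accepts >| to force truncation, and zsh additionally has >>| for the append case from the question:

    echo hi >| newfile.txt      # overwrite despite noclobber
    ls >>| /tmp/testfile.txt    # create-and-append despite noclobber (zsh-specific)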
212,140 | Elementary's default file manager, Files, has single-click to open enabled by default. For those who prefer double-click to open, how can this setting be disabled? | For Freya, apparently installing Elementary Tweaks and changing the setting there works: apt-get install elementary-tweaks Then, access the Settings Menu and click on the Tweaks Icon. You can then toggle Single Click on/off as you like. Unfortunately, this didn't work for me, and the setting immediately toggled back on as soon as I exited the settings menu. I had better luck with the following command: gsettings set org.pantheon.files.preferences single-click false | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/212140",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92444/"
]
} |
212,147 | I am trying to count the number of lines after a problematic row in a csv file. I am aware I can use the grep -a # syntax to output # number of lines after a match has been found. I'm only interested in the actual number of lines. I realize I could set the number to MAX_INT, pipe it into a file and do some more processing. I'm looking for a succinct one-liner to just tell me the count. Any suggestions? | { grep -m1 match; grep -c ''; } <file That will work w/ GNU grep and an lseek() able infile. The first grep will stop at 1 -m atch, and the second will -c ount every line remaining in input. Without GNU grep : { sed '/match/q'; grep -c ''; } <file Of course, w/ grep you can use any/all of its other options besides, and stopping at one match is not at all necessary. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/212147",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104353/"
]
} |
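A single-pass awk equivalent, for comparison - it counts only the lines after the first match and prints 0 when nothing matches:

    awk '!f && /match/ { f = 1; next } f { n++ } END { print n + 0 }' file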
212,148 | I am looking to turn a 8Gb USB into a bootable drive with a minimal OS. All I want is an OS that will allow me to use vim, nothing else (no internet/any other services). I basically want to turn a USB into a digital typewriter. I was able to make a bootable semi persistent Ubuntu USB using syslinux and uNetBootin on my mac. However, Ubuntu seems too bloated. What would be a good OS for my needs? Also, could I make a persistent bootable USB with it? Thanks :) | { grep -m1 match; grep -c ''; } <file That will work w/ GNU grep and an lseek() able infile. The first grep will stop at 1 -m atch, and the second will -c ount every line remaining in input. Without GNU grep : { sed '/match/q'; grep -c ''; } <file Of course, w/ grep you can use any/all of its other options besides, and stopping at one match is not at all necessary. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/212148",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120926/"
]
} |
212,176 | I have a text file with two (2) only possible characters (and maybe new lines \n ). Example: ABBBAAAABBBBBABBABBBABBB (Size 24 bytes ) How can I convert this to a binary file, meaning a bit representation, with each one of the two possible values being assigned to 0 or 1 ? Resulting binary file ( 0=A , 1=B ): 011100001111101101110111 # 24 bits - not 24 ASCII characters Resulting file in Hex: 70FB77 # 3 bytes - not 6 ASCII characters I would be mostly interested in a command-line solution (maybe dd , xxd , od , tr , printf , bc ). Also, regarding the inverse: how to get back the original? | Another perl: perl -pe 'BEGIN { binmode \*STDOUT } chomp; tr/AB/\0\1/; $_ = pack "B*", $_' Proof: $ echo ABBBAAAABBBBBABBABBBABBB | \ perl -pe 'BEGIN { binmode \*STDOUT } chomp; tr/AB/\0\1/; $_ = pack "B*", $_' | \ od -tx10000000 70 fb 770000003 The above reads input one line at a time. It's up to you to make sure the lines are exactly what they are supposed to be. Edit: The reverse operation: #!/usr/bin/env perlbinmode \*STDIN;while ( defined ( $_ = getc ) ) { $_ = unpack "B*"; tr/01/AB/; print; print "\n" if ( not ++$cnt % 3 );}print "\n" if ( $cnt % 3 ); This reads a byte of input at a time. Edit 2: Simpler reverse operation: perl -pe 'BEGIN { $/ = \3; $\ = "\n"; binmode \*STDIN } $_ = unpack "B*"; tr/01/AB/' The above reads 3 bytes at a time from STDIN (but receiving EOF in the middle of a sequence is not a fatal problem). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/212176",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57559/"
]
} |
212,183 | I need to check a variable's existence in an if statement. Something to the effect of: if [ -v $somevar ]then echo "Variable somevar exists!"else echo "Variable somevar does not exist!" And the closest question to that was this , which doesn't actually answer my question. | In modern bash (version 4.2 and above): [[ -v name_of_var ]] From help test : -v VAR, True if the shell variable VAR is set | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/212183",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
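On bash older than 4.2 (or in plain POSIX sh) the same test can be written with the ${var+x} expansion, which distinguishes "unset" from "set but empty":

    if [ -n "${somevar+x}" ]; then
        echo "Variable somevar exists!"
    else
        echo "Variable somevar does not exist!"
    fi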
212,194 | Doug McIlroy, the inventor of Unix pipes and one of the founders of the Unix tradition, had this to say at the time [McIlroy78]: (ii) Expect the output of every program to become the input to another, as yet unknown, program. Don't clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don't insist on interactive input. Stringently columnar data sounds good to me, so I probably do not understand what he meant. What does it mean and why is it bad? | I assume he meant aligned columns, not columnar data in general. That's how I would understand the stringently anyway. For example: Bad: 1 200 3100 3 400 Good: 1 200 3100 3 400 In other words, make files that are easy for computers to read, not for humans. Adding spaces to align things makes them pretty and easier for you and me to understand but can confuse programs that need to parse them. For example, if I were to use cut to get the second field of each of the above examples, it would fail on the first: $ cut -d' ' -f 2 bad$ cut -d' ' -f 2 good 2003 Because of the extra spaces, the 2nd field of the bad file is a space. However, it works as expected in the good file. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/212194",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26674/"
]
} |
212,207 | Can tarring a bunch of files together improve compression with the standard tools, e.g. gzip, bzip2, xz? I've long thought this to be the case but never tested it out. If we have 2 copies of the same 20Mb file of random bytes tarred together, a clever compression program that realizes this could compress the entire tarball down to almost 20Mb. I just tried this experiment using gzip, bzip2, and xz to compress 1) a file of random bytes, 2) a tarball of two copies of that file, and 3) a cat of two copies of that file. In all cases the compression did not reduce the file size. This is expected for case 1 but for cases 2 and 3 the optimal result is that a 40Mb file can be shrunk to nearly 20Mb. That's a difficult insight for a compression program to see, especially because the redundancy is distant, so I wouldn't expect a perfect result but I still figured there would be some compression. Test: dd if=/dev/urandom of=random1.txt bs=1M count=20cp random1.txt random2.txtcat random1.txt random2.txt > random_cat.txttar -cf randoms.tar random1.txt random2.txtgzip -k random* &bzip2 -k random* &xz -k random* &waitdu -sh random* Result: 20+0 records in20+0 records out20971520 bytes (21 MB) copied, 1.40937 s, 14.9 MB/s[1] Done gzip -k random*[2]- Done bzip2 -k random*[3]+ Done xz -k random*20M random1.txt21M random1.txt.bz221M random1.txt.gz21M random1.txt.xz20M random2.txt21M random2.txt.bz221M random2.txt.gz21M random2.txt.xz40M random_cat.txt41M random_cat.txt.bz241M random_cat.txt.gz41M random_cat.txt.xz41M randoms.tar41M randoms.tar.bz241M randoms.tar.gz41M randoms.tar.xz Is this generally what I should expect? Is there a way to improve compression here? | You're up against the "block size" of the compressor. Most compression programs break the input into blocks and compress each block. It appears the bzip block size only goes up to 900K, so it won't see any pattern that takes longer than 900K bytes to repeat. http://www.bzip.org/1.0.3/html/memory-management.html gzip appears to use 32K blocks. With xz you're in luck though! From the man page: Preset DictSize CompCPU CompMem DecMem -0 256 KiB 0 3 MiB 1 MiB -1 1 MiB 1 9 MiB 2 MiB -2 2 MiB 2 17 MiB 3 MiB -3 4 MiB 3 32 MiB 5 MiB -4 4 MiB 4 48 MiB 5 MiB -5 8 MiB 5 94 MiB 9 MiB -6 8 MiB 6 94 MiB 9 MiB -7 16 MiB 6 186 MiB 17 MiB -8 32 MiB 6 370 MiB 33 MiB -9 64 MiB 6 674 MiB 65 MiB so "xz -8" will find up to 32MB patterns, and "xz -9" up to 64MB patterns. But beware of how much RAM it requires to perform the compression (and the decompression)... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/212207",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67136/"
]
} |
212,215 | I have an odd error that I have been unable to find anything on. I wanted to change the user comment with the following command. $ sudo usermod -c "New Comment" user This will work while logged onto a server but I want to automate it across 20+ servers. Usually I am able to use a list and loop through the servers and run a command but in this case I get an error. $ for i in `cat servlist` ; do echo $i ; ssh $i sudo usermod -c "New Comment" user ; done serv1Usage: usermod [options] LOGINOptions:lists usermod optionsserv2Usage: usermod [options] LOGINOptions:lists usermod options... When I run this loop it throws back an error like I am using the command incorrectly but it will run just fine on a single server. Looking through the ssh man pages I did try -t and -t -t flags but those did not work. I have successfully used perl -p -i -e within a similar loop to edit files. Does anyone know a reason I am unable to loop this? | SSH executes the remote command in a shell. It passes a string to the remote shell, not a list of arguments. The arguments that you pass to the ssh command are concatenated with spaces in between. The arguments to ssh are sudo , usermod , -c , New Comment and user , so the remote shell sees the command sudo usermod -c New Comment user usermod parses Comment as the name of the user and user as a spurious extra parameter. You need to pass the quotes to the remote shell so that the comment is treated as a string. The simplest way is to put the whole remote command in single quotes. If you need a single quote in that command, use '\'' . ssh "$i" 'sudo usermod -c "Jack O'\''Brian" user' Instead of calling ssh in a loop and ignoring errors, use a tool designed to run commands on multiple servers such as pssh, mussh, clusterssh, etc. See Automatically run commands over SSH on many servers | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/212215",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41643/"
]
} |
212,329 | How can I hide a password in shell scripts? There are a number of scripts that are accessing database. If we open the script others also aware the username and password. So if anyone knows how to hide please let me know. I have one way: place the password in a file and make the file as hidden and no one going to access the file (change the permissions and use the file in script while going to accessing database). | First , as several people have already said, keeping the credentials separate from the script is essential. (In addition to increased security, it also means that you can re-use the same script for several systems with different credentials.) Second , you should consider not only the security of the credentials but also the impact if/when those credentials are compromised. You shouldn't have just one password for all access to the database, you should have different credentials with different levels of access. You could, for instance, have one DB user that has the ability to perform a search in the database - that user should have read-only access. Another user may have permission to insert new records, but not to delete them. A third one may have permission to delete records. In addition to restricting the permissions for each account, you should also have restriction on where each account can be used from. For instance, the account used by your web server should not be allowed to connect from any other IP address than that of the webserver. An account with full root permissions to the database should be very restricted indeed in terms of where it may connect from and should never be used other than interactively. Also, consider using stored procedures in the database to restrict exactly what can be done by each account. These restrictions need to be implemented on the DB-server side of the system so that even if the client-side is compromised, the restrictions cannot be altered from it. (And, obviously, the DB server needs to be protected with firewalls etc in addition to the DB configuration...) In the case of a DB account that is only permitted limited read-only access, and only from a particular IP address, you might not need any further credentials than that, depending on the sensitivity of the data and the security of the host the script is being run from. One example may be a search form on your web site, which can be run with a user that is only allowed to use a stored procedure which extracts only the information that will be presented on the web page. In this case, adding a password does not really confer any extra security, since that information is already meant to be public, and the user can't access any other data that would be more sensitive. Also, make sure that the connection to the database is made using TLS, or anybody listening on the network can get your credentials. Third , consider what kind of credentials to use. Passwords are just one form, and not the most secure. You could instead use some form of public/private key pair, or AD/PAM or the like. Fourth , consider the conditions under which the script will be run: If it is run interactively, then you should enter the password, or the password to the private key, or the private key, or be logged in with a valid Kerberos ticket, when you run it - in other words, the script should get its credentials directly from you at the time that you run it, instead of reading them from some file. If it is run from a webserver, consider setting up the credentials at the time when you start the webserver. 
A good example here is SSL certificates - they have a public certificate and a private key, and the private key has a password. You may store the private key on the web server, but you still need to enter its password when you start Apache. You could also have the credentials on some kind of hardware, such as a physical card or an HSM, that can be removed or locked once the server is started. (Of course, the downside to this method is that the server can't restart on its own if something happens. I would prefer this to the risk of having my system compromised, but your mileage may vary...) If the script is being run from cron, this is the hard part. You don't want to have the credentials lying around anywhere on your system where someone can access them - but you do want to have them lying around so that your script can access them, right? Well, not quite right. Consider exactly what the script is doing. What permissions does it need on the database? Can it be restricted so that it doesn't matter if the wrong person connects with those permissions? Can you instead run the script directly on the DB server that nobody else has access to, instead of from the server that does have other users? If, for some reason that I can't think of, you absolutely must have the script running on an insecure server and it must be able to do something dangerous/destructive... now is a good time to re-think your architecture. Fifth , if you value the security of your database, you should not be running these scripts on servers that other people have access to. If someone is logged in on your system, then they will have the possibility to get at your credentials. For instance, in the case of a web server with an SSL certificate, there is at least a theoretical possibility of someone being able to gain root and access the httpd process's memory area and extract the credentials. There has been at least one exploit in recent times where this could be done over SSL, not even requiring the attacker to be logged in. Also, consider using SELinux or AppArmor or whatever is available for your system to restrict which users can do what. They will make it possible for you to disallow users to even try to connect to the database, even if they do manage to gain access to the credentials. If all this sounds like overkill to you , and you can't afford or don't have the time to do it - then, in my (arrogant and elitist) opinion, you should not be storing anything important or sensitive in your database. And if you're not storing anything important or sensitive, then where you store your credentials is also not important - in which case, why use a password at all? Lastly , if you absolutely cannot avoid storing some kind of credentials, you could have the credentials read-only and owned by root, and root could grant ownership on an exceedingly temporary basis when requested to do so by a script (because your script should not be run as root unless absolutely necessary, and connecting to a database does not make it necessary). But it's still not a good idea.
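A minimal illustration of the "different credentials with different levels of access" idea above, in standard MySQL syntax (the user names, addresses, and procedure are hypothetical):

-- read-only searching, allowed only from the web server's address
CREATE USER 'websearch'@'192.0.2.10' IDENTIFIED BY 'a-strong-password';
GRANT EXECUTE ON PROCEDURE appdb.search_public TO 'websearch'@'192.0.2.10';

-- a separate account that may insert records but never delete them
CREATE USER 'ingest'@'192.0.2.11' IDENTIFIED BY 'another-password';
GRANT INSERT ON appdb.records TO 'ingest'@'192.0.2.11';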
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/212329",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120858/"
]
} |
212,355 | There is no /var/log/cron and no /var/log/cron.log on my Debian 7 system. Where is the log file for crontab?

$ ls /var/log/cron*
ls: cannot access /var/log/cron*: No such file or directory | I think on Debian cron writes its logs to /var/log/syslog . If your system uses rsyslog or syslogd, you can get a dedicated cron log by uncommenting the following line in /etc/rsyslog.conf or /etc/syslog.conf and then restarting the service:

# cron.* /var/log/cron.log

If your system uses systemd, you can query the journal instead:

journalctl _COMM=cron

or

journalctl _COMM=cron --since="date" --until="date"

For the date format, see journalctl .
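A sketch of the rsyslog route on Debian 7 (the exact commented-out line may differ slightly between versions, so check the file first):

sudo sed -i 's|^#\s*cron\.\*|cron.*|' /etc/rsyslog.conf
sudo service rsyslog restart   # Debian 7 uses sysvinit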
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/212355",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102745/"
]
} |
212,356 | The problem: I run Linux on an arcade machine with a monitor that accepts a 15kHz signal. I can't see anything until the X server starts and a modeline gets the GPU to output the correct signal. My solution: Start something like xterm so I can have a terminal on the machine's display. This approach has problems; the main one is that if I start the X server as root, then xterm is logged in as the root user. I don't want to have a fully fledged window manager, I just use xinit to start the server. Is there a good solution to this? Should I use su to start xterm as a different user? | Yes — su (or sudo -u) is the usual way to handle this. Keep starting the X server as root so it can program the 15kHz modeline, but drop privileges for the clients: xinit runs whatever client command you give it, so hand the session over to the unprivileged user there. The su'd client also needs permission to talk to the root-owned display; the simplest way to grant that is xhost's server-interpreted local-user form, run once the server is up.
"source": [
"https://unix.stackexchange.com/questions/212356",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41358/"
]
} |
212,360 | I'm using URxvt 9.20 on debian jessie and I'm looking for a way to copy & paste text like I was used to with the gnome-terminal ( Ctrl + Insert for copying, Shift + Insert for pasting). It works within different urxvt consoles, but it does not work between e.g. urxvt and iceweasel. I tried according to the manual on archlinux , but it won't work (even though I actually don't want to use Shift + Ctrl + C / V it was worth a try). .Xresources:

! ******************
! urxvt config
! ******************
! Disable Perl extension
! If you do not use the Perl extension features, you can improve the security
! and speed by disabling Perl extensions completely.
URxvt.perl-ext:
URxvt.perl-ext-common:
! Font spacing
! By default the distance between characters can feel too wide. It's controlled
! by this entry:
! URxvt.letterSpace: -1
! -- Fonts -- !
URxvt.font:xft:Monospace:pixelsize=13
URxvt.boldfont:xft:Monospace-Bold:pixelsize=13
!URxvt*font: -xos4-terminus-medium-*-*-*-14-*-*-*-*-*-iso8859-15,xft:terminus:pixelsize:12
!URxvt*boldFont: -xos4-terminus-bold-*-*-*-14-*-*-*-*-*-iso8859-15,xft:terminus:bold:pixelsize:12
!URxvt*italicFont: xft:Bitstream Vera Sans Mono:italic:autohint=true:pixelsize=12
!URxvt*boldItalicFont: xft:Bitstream Vera Sans Mono:bold:italic:autohint=true:pixelsize=12
! Disable scrollbar
!URxvt*scrollBar: false
! Scrollbar style - rxvt (default), plain (most compact), next, or xterm
URxvt.scrollstyle: plain
! Background color
!URxvt*background: black
URxvt*background: #1B1B1B
! Font color
!URxvt*foreground: white
URxvt*foreground: #00FF00
! Other colors
URxvt*color0: black
!URxvt*color1: red3
URxvt*color1: #CD0000
URxvt*color2: green3
!URxvt*color3: yellow3
URxvt*color3: #C4A000
URxvt*color4: blue2
!URxvt*color4: #3465A4
URxvt*color5: magenta3
URxvt*color6: cyan3
URxvt*color7: gray90
URxvt*color8: grey50
URxvt*color9: red
URxvt*color10: green
URxvt*color11: yellow
!URxvt*color12: blue
URxvt*color12: #3465A4
URxvt*color13: magenta
URxvt*color14: cyan
URxvt*color15: white
! ******************
! /urxvt config
! ****************** | Unfortunately, the X window system has several different copy-paste mechanisms . Rxvt, like most old-school X applications, uses the primary selection. Generally, when you select something with the mouse, it's automatically copied to the primary selection, and when you middle-click to paste, that pastes the primary selection. Ctrl + C and Ctrl + V (or other key bindings) in applications using modern GUI toolkits, such as Gnome-terminal and Firefox, copy/paste from the clipboard. There are tools to facilitate working with the selections. In particular, if you just want to have a single selection that's copied to whether you select with the mouse or press Ctrl + C , you can run autocutsel (start it from your .xinitrc or from your desktop environment's startup programs), which detects when something is copied to one of the selections and automatically copies it to the other.
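For the autocutsel route, a typical setup is two instances, one per selection — e.g. in ~/.xinitrc before the window manager starts:

autocutsel -fork                      # keep CLIPBOARD in sync
autocutsel -selection PRIMARY -fork   # keep PRIMARY in sync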
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/212360",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121057/"
]
} |
212,372 | I am working on a text written in Italian and Chinese and I need to extract only Chinese characters using AWK . How can I do this? I tried: [The range of Chinese Unicode chars is 4E00 thru 9FFF (344 270 200 thru 351 277 277) so the test should be >"\343" and <"\352" (to avoid picking up any 4 char UTF-8 codes)]:

{
f=0;
for ( i=1; i<=length; i++)
    if(substr($0, i, 1)>"\343" &&
       substr($0, i, 1)<"\352")
        f = 1
print $f
}

But there is an error, or more than one, and I can't find it. | Your problem is that by filtering on raw bytes in a UTF-8 character stream, you're eating part of a unicode sequence in a UTF-8 file, resulting in an invalid byte sequence. That can't work. Instead, you need to use a tool that understands UTF-8, and apply a filter on the unicode data, rather than the raw bytes. Since I don't know which implementation of awk you're using, it's impossible for me to tell whether it supports unicode. However, I know that perl is fully unicode-safe, so the following perl one-liner should work: perl -CS -p -e 's/[^\s\p{Han}]//g' The \s is for whitespace, which I'm assuming you'll want to see. The \p{Han} bit tells perl that we want to match characters that are declared in Unicode as being used in the Han script (i.e., chinese characters). I don't know if you need any punctuation characters that aren't included in that range; if you do, you may need to add that as well. We then negate the range with the ^ at the start, and finally encode it in a global substitute command ( s///g ) where we tell perl to replace instances of the part after the first slash (our negated range, or, "everything not in this range") with the part after the second and before the third (i.e., nothing). If you don't need to include several ranges, you can drop the [^] construction, and switch to using \P rather than \p , which does the same match inversion. What's left is the character ranges we entered -- unicode characters in the Han script, plus whitespace. For more information, see perldoc perlre for an explanation on how perl deals with regular expressions, and perldoc perluniprops for a list of possible unicode properties (the bits you can place inside a \p{} or \P{} construct).
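Run end-to-end it looks like this (the file names are placeholders):

perl -CS -p -e 's/[^\s\p{Han}]//g' mixed.txt > chinese_only.txt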
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/212372",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121071/"
]
} |
212,373 | I want to run some post-installation commands. The installer offers only a reboot option. Is it possible to do that? | There's a console provided during installation on the second VT (and the third); you can access it by pressing Alt F2 (or Alt F3 for the third one). The installer is on the first VT ( Alt F1 ) and the detailed installer logs are on the fourth. You'll also find a "shell" option in the main installer menu; this will open a shell in the first VT, which you need to exit to return to the installer. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/212373",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120901/"
]
} |
212,417 | Terminator is my choice of terminal in debian . I seem to have broken it while playing with its profile preferences. I can't open it normally like other applications; however, after becoming root I can access it through gnome-terminal. When it opens, the following warning pops up: An error occurred while loading or saving configuration information for terminator. Some of your configuration settings may not work properly. Details : No D-BUS daemon running | Have you tried purging the package and then reinstalling it?

apt-get purge terminator

Then delete the configuration files located in your home directory:

rm -rfvI /home/your_user_name/.config/terminator

This should remove all the config files. Now reinstall:

apt-get install terminator
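Since the dialog blames a missing D-Bus daemon, it may also be worth launching the program under its own session bus before resorting to a reinstall (dbus-launch is a standard tool; whether it cures this particular setup is an assumption):

dbus-launch terminator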
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/212417",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121096/"
]
} |
212,438 | I'm wondering how to stop all units that are grouped together by the same target. My setup is as follows. I have several unit config files that read:

[Unit]
...
[Service]
...
[Install]
WantedBy=mycustom.target

When I run

# systemctl start mycustom.target

those units that "are wanted by" mycustom.target start correctly. Now, I would also like to be able to stop all units that are wanted by mycustom.target . I tried:

# systemctl stop mycustom.target

This doesn't do anything though. Is there a way to make this work without having to stop all units that are (explicitly) wanted by the same target? | Use the PartOf= directive. Configures dependencies similar to Requires=, but limited to stopping and restarting of units. When systemd stops or restarts the units listed here, the action is propagated to this unit. Note that this is a one-way dependency — changes to this unit do not affect the listed units.

PartOf=mycustom.target
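Concretely, each unit file from the question gains one line in its [Unit] section (a sketch):

[Unit]
...
PartOf=mycustom.target

[Service]
...

[Install]
WantedBy=mycustom.target

With that in place, systemctl stop mycustom.target propagates the stop to every member unit.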
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/212438",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67525/"
]
} |
212,493 | I am trying to install tomcat 8 on centos 7. I am using the terminal as root. When I type in source ~/.bashrc , the terminal gives back the following error: -bash: unalias: ls: not found . How can I resolve this error so that the source ~/.bashrc command can succeed? Note that I recently added the line unalias ls as the last line of ~/.bashrc as per @Cyrus' solution to this other problem related to color aliasing in the same CentOS installation. | I can't reproduce this but I assume the problem is because you have already unaliased ls once, so you can't unalias it again. However, I'm also pretty sure that the source command worked perfectly. Did you check? Chances are that it was sourced correctly and you can just ignore the error message. More to the point, why are you running source ~/.bashrc ? That file should be read when you start a new interactive non-login shell anyway. If it's to reload it because you made a change, then you're fine, your change was loaded. Ignore the error.
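If the message itself bothers you, guard the unalias so re-sourcing stays quiet:

# in ~/.bashrc: only unalias ls if the alias actually exists
alias ls >/dev/null 2>&1 && unalias ls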
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/212493",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92670/"
]
} |
212,519 | I want to list all the filesystems in a single comma-delimited list, in dictionary order. I'm using this command:

cat /proc/filesystems | sed 's/nodev//' | sed 's/,$//'

My output looks like this:

sysfs
rootfs
ramfs
bdev
proc
cgroup
cpuset
tmpfs
devtmpfs
debugfs
securityfs
sockfs
pipefs
anon_inodefs
devpts
ext3
ext2
ext4
hugetlbfs
vfat
ecryptfs
fuseblk
fuse
fusectl
pstore
mqueue
binfmt_misc
vboxsf

How can I change this to a single line output with commas separating each filesystem? I figured part of it out by using xargs :

cat /proc/filesystems | sed 's/nodev//' | xargs | sed -e 's/ /,/g'

Now I want to make the output formatted in dictionary order. | Sort the lines before joining them. Each line of /proc/filesystems is either nodev followed by a tab and the name, or just a tab and the name, so taking the last field with awk handles both cases; sort then puts the names in dictionary order, and paste joins the lines with commas in a single step.
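A compact way to do both the sorting and the joining (a sketch using standard tools):

awk '{print $NF}' /proc/filesystems | sort | paste -sd, -

Alternatively, keep your own xargs pipeline and just slot sort in before the xargs stage.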
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/212519",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121164/"
]
} |
212,621 | I have a Debian 8 Jessie server that I need to connect to my home network and I'm using an OpenVPN server on a pfSense 2.2 box at home. I have done this fine under older Debian versions, so I'm guessing I'm missing something new with how systemd controls the service... I have everything I need in /etc/openvpn/ , with a reasonable simple setup: clientdev tunproto udpremote home.dynamic-domain.com 1194resolv-retry infinitenobinduser nobodygroup nobodypersist-tunpersist-keyca /etc/openvpn/ca.crtcert /etc/openvpn/hostname.crtkey /etc/openvpn/hostname.keytls-auth /etc/openvpn/tls.key 1cipher "AES-256-CBC"comp-lzoverb 3 and the relevant certs/keys are present and correct. Bringing up the config manually works great: ~# openvpn --config /etc/openvpn/servervpn.confSat Jun 27 13:26:08 2015 OpenVPN 2.3.4 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [EPOLL] [PKCS11] [MH] [IPv6] built on Dec 1 2014Sat Jun 27 13:26:08 2015 library versions: OpenSSL 1.0.1k 8 Jan 2015, LZO 2.08 Sat Jun 27 13:26:08 2015 WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info.Sat Jun 27 13:26:08 2015 Control Channel Authentication: using '/etc/openvpn/servervpn/tls.key' as a OpenVPN static key fileSat Jun 27 13:26:08 2015 Outgoing Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authenticationSat Jun 27 13:26:08 2015 Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authenticationSat Jun 27 13:26:08 2015 Socket Buffers: R=[212992->131072] S=[212992->131072]Sat Jun 27 13:26:08 2015 NOTE: UID/GID downgrade will be delayed because of --client, --pull, or --up-delaySat Jun 27 13:26:08 2015 UDPv4 link local: [undef]Sat Jun 27 13:26:08 2015 UDPv4 link remote: [AF_INET]x.x.x.x:1194Sat Jun 27 13:26:08 2015 TLS: Initial packet from [AF_INET]x.x.x.x:1194, sid=531d85a9 2201aab6Sat Jun 27 13:26:08 2015 VERIFY OK: depth=1, xxxxxxxxSat Jun 27 13:26:08 2015 VERIFY OK: depth=0, xxxxxxxxSat Jun 27 13:26:13 2015 Data Channel Encrypt: Cipher 'AES-256-CBC' initialized with 256 bit keySat Jun 27 13:26:13 2015 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authenticationSat Jun 27 13:26:13 2015 Data Channel Decrypt: Cipher 'AES-256-CBC' initialized with 256 bit keySat Jun 27 13:26:13 2015 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authenticationSat Jun 27 13:26:13 2015 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 2048 bit RSASat Jun 27 13:26:13 2015 [hm-py-router-01] Peer Connection Initiated with [AF_INET]188.78.154.7:11193Sat Jun 27 13:26:15 2015 SENT CONTROL [hm-py-router-01]: 'PUSH_REQUEST' (status=1)Sat Jun 27 13:26:15 2015 PUSH: Received control message: 'PUSH_REPLY,route 192.168.10.0 255.255.255.0,topology net30,ping 5,ping-restart 60,ifconfig 192.168.11.6 192.168.11.5'Sat Jun 27 13:26:15 2015 OPTIONS IMPORT: timers and/or timeouts modifiedSat Jun 27 13:26:15 2015 OPTIONS IMPORT: --ifconfig/up options modifiedSat Jun 27 13:26:15 2015 OPTIONS IMPORT: route options modifiedSat Jun 27 13:26:15 2015 ROUTE_GATEWAY 176.126.240.1/255.255.248.0 IFACE=eth0 HWADDR=00:16:3c:89:81:e0Sat Jun 27 13:26:15 2015 TUN/TAP device tun0 openedSat Jun 27 13:26:15 2015 TUN/TAP TX queue length set to 100Sat Jun 27 13:26:15 2015 do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0Sat Jun 27 13:26:15 2015 /sbin/ip link set dev tun0 up mtu 1500Sat Jun 27 13:26:15 2015 /sbin/ip addr add dev tun0 local 192.168.11.6 peer 192.168.11.5Sat Jun 27 13:26:15 2015 /sbin/ip route add 
192.168.10.0/24 via 192.168.11.5
Sat Jun 27 13:26:15 2015 GID set to nogroup
Sat Jun 27 13:26:15 2015 UID set to nobody
Sat Jun 27 13:26:15 2015 Initialization Sequence Completed
^C
Sat Jun 27 13:28:17 2015 event_wait : Interrupted system call (code=4)
Sat Jun 27 13:28:17 2015 /sbin/ip route del 192.168.11.1/32
RTNETLINK answers: Operation not permitted
Sat Jun 27 13:28:17 2015 ERROR: Linux route delete command failed: external program exited with error status: 2
Sat Jun 27 13:28:17 2015 /sbin/ip route del 192.168.51.0/24
RTNETLINK answers: Operation not permitted
Sat Jun 27 13:28:17 2015 ERROR: Linux route delete command failed: external program exited with error status: 2
Sat Jun 27 13:28:17 2015 Closing TUN/TAP interface
Sat Jun 27 13:28:17 2015 /sbin/ip addr del dev tun0 local 192.168.11.6 peer 192.168.11.5
RTNETLINK answers: Operation not permitted
Sat Jun 27 13:28:17 2015 Linux ip addr del failed: external program exited with error status: 2
Sat Jun 27 13:28:17 2015 SIGINT[hard,] received, process exiting

Unfortunately, starting openvpn as a service doesn't seem to bring the tunnel up, or do much of anything I can see...

~# systemctl start openvpn.service
~# systemctl status openvpn.service
● openvpn.service - OpenVPN service
   Loaded: loaded (/lib/systemd/system/openvpn.service; enabled)
   Active: active (exited) since Sat 2015-06-27 13:29:12 EDT; 4min 3s ago
  Process: 13873 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 13873 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/openvpn.service

The tunnel just never seems to come up... So I try the 'old' way too:

~# /etc/init.d/openvpn start
[ ok ] Starting openvpn (via systemctl): openvpn.service.
~# /etc/init.d/openvpn status
● openvpn.service - OpenVPN service
   Loaded: loaded (/lib/systemd/system/openvpn.service; enabled)
   Active: active (exited) since Sat 2015-06-27 13:09:12 EDT; 8min ago
  Process: 13873 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 13873 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/openvpn.service

But it seems the SysV init script just calls systemctl anyway. I've looked through the Debian wiki page for OpenVPN: when running as a service it should parse any *.conf file in /etc/openvpn and bring up the interfaces unless explicitly listed in /etc/default/openvpn . Not sure of my next step. | As I have said before : I'm guessing I'm missing something new with how systemd controls the service. Yes, and it is explained in the commentary at the top of /lib/systemd/system/openvpn.service . You, as the other questioner did, are calling a System 5 rc script directly. Do not call System 5 rc scripts directly, especially on a system where System 5 rc isn't used , such as Debian version 8. OpenVPN is a templatized service under systemd — be that with Fedora, Ubuntu, or Debian Linux. The services are named openvpn@<config>.service . So you should be starting your /etc/openvpn/servervpn.conf instance with

systemctl start [email protected]

Further reading https://unix.stackexchange.com/a/206490/5132 http://fedoraproject.org/wiki/Openvpn
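To have the instance come up at boot as well, enable the same templated unit:

systemctl start [email protected]
systemctl enable [email protected]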
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/212621",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119462/"
]
} |
212,628 | In my syslog I had: thermal thermal_zone0: critical temperature reached(102 C),shutting down I lost data due to this. I would much rather that the system suspended to RAM, or lowered the clock frequency. How can I do that? I imagine the process responsible for monitoring the temperature is calling a shutdown script. If I can change that to run the suspend-to-RAM, then both me and the laptop should be happy. So the question is partly: which process is responsible for doing this shutdown? And how do I configure it?

$ uname -a
Linux aspire 3.16.0-31-lowlatency #43~14.04.1-Ubuntu SMP PREEMPT Tue Mar 10 20:41:36 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux | From drivers/thermal/thermal_core.c :

if (trip_type == THERMAL_TRIP_CRITICAL) {
    dev_emerg(&tz->device,
              "critical temperature reached(%d C),shutting down\n",
              tz->temperature / 1000);
    orderly_poweroff(true);
}

So it seems it is not calling a script to handle the situation.
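Since the critical trip is hard-coded in the kernel, a userspace workaround has to act before it fires. A rough sketch of a polling watchdog (the sysfs path is the standard thermal interface; the 95 °C threshold is an arbitrary margin below your 102 °C trip):

#!/bin/sh
# suspend to RAM before the kernel's critical trip point is reached
while sleep 10; do
    t=$(cat /sys/class/thermal/thermal_zone0/temp)   # millidegrees Celsius
    if [ "$t" -ge 95000 ]; then
        pm-suspend   # or: systemctl suspend on systemd machines
    fi
done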
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/212628",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2972/"
]
} |
212,645 | I found a list of projects to do, and one of them was something that generates amounts of change. I made this code:

getamt() {
echo "Enter amount of money."
read amount
echo "OK."
}

change() {
amount=$(echo "$amount*100" | bc)
quarter=$(echo "($amount-25)" | bc)
dime=$(echo "($amount-10)" | bc)
nickel=$(echo "($amount-5)" | bc)
penny=$(echo "($amount-1)" | bc )
quarter=${quarter%???}
dime=${dime%???}
nickel=${nickel%???}
penny=${penny%???}
amount=${amount%???}
qNum=0
dNum=0
nNum=0
pNum=0
}

getchange() {
while [ $quarter -ge 0 ]
do
qNum=$(( qNum+1 ))
amount=$(( $amount-25 ))
done
while [ $dime -ge 0 ]
do
dNum=$(( dNum+1 ))
amount=$(( $amount-10 ))
done
while [ $nickel -ge 0 ]
do
nNum=$(( nNum+1 ))
amount=$(( $amount-5 ))
done
while [ $penny -ge 0 ]
do
pNum=$(( nNum+1 ))
amount=$(( $amount-1 ))
done
}

display() {
echo "Your change is:"
echo "$qNum quarters"
echo "$dNum dimes"
echo "$nNum nickels"
echo "$pNum pennies"
}

getamt
change
getchange
display

I know it's probably a bad way of doing what I need to do, but it's getting stuck. I think I may have used the while loop wrong, but I don't know. My objective in using the while loops was to check whether it's possible to add another coin of that type, so each loop checks if the value is above zero. | Your code's most obvious issue is that all of your while loops check a variable (e.g. $quarter ) that is never changed inside the loop, so the loop condition can never become false and the loop repeats endlessly. Let's look at one of the loops:

while [ $quarter -ge 0 ]
do
qNum=$(( qNum+1 ))
amount=$(( $amount-25 ))
done

If $quarter >= 0, the control flow enters the loop, $qNum is incremented and $amount is decremented, but $quarter remains unchanged, so you're in for another loop iteration. Fixing your code works best by restructuring it: Instead of relying on global variables like amount that are set as side effects of functions, rewrite your functions to accept parameters and output their results to stdout (where possible). Results to stdout : Your function getamt() could echo $amount instead of relying on amount being available (and unchanged) for processing later on in the script. Whatever calls getamt can then capture this output into a variable with amount=$(getamt) . Unfortunately that doesn't work as well when a function needs to return multiple values — in that case, you could have the function print its return values separated by newlines or a character that you know won't appear in the values. You could even go for an output format like quarter=3 dime=1 nickel=4 and evaluate that output to set local variables with the function's return values: eval "$(yourfunction)"; echo $quarter Parameters: Your function change() could take the amount of change it should compute as a parameter (i.e. you would call change 2.50 ) instead of reading it from a global variable. You can access parameters given to your function (or to your script, depending on the context) via their indices: $1 for the first parameter, $2 for the second one, etc. You can avoid a few calls to bc by just cutting off the decimal places once and only using bash arithmetic evaluation after that. Your current substitution ${quarter%???} also removes any last three characters, which is going to yield unwanted results if your users ever decide to enter a value with more (or fewer) than two decimal places. Use something like ${quarter%%.*} to remove everything after (and including) the first . . Use comments (started with a # and continued until the end of the line), e.g.

amount=${amount%%.*} # remove decimal places

Most of your code will seem obvious to you right now, but it might not be obvious to anyone else looking at it, and it also won't be obvious to you anymore when you have to look at it again in a few months. To be honest, I'm not entirely sure how your script is supposed to calculate the number of coins to return at the moment. The most common approach to calculating change would be a greedy algorithm that starts at the highest available coin value, dispenses as many coins of that value as "fit" into the change amount 1 , subtracts the total value of those coins from the change amount, then continues with the next (smaller) coin value, and so on, until the change amount reaches 0 (i.e. enough coins have been dispensed to make up the total change amount). 1 To compute this number of coins, you can either look at modulo operations or just subtract the current coin value from the change amount in a loop until the change amount is smaller than the coin value (i.e. you would return too much change if you dispensed another coin of the current value).
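A compact sketch of that greedy loop in plain sh (the amount is kept in cents so no bc is needed):

#!/bin/sh
# greedy change-making: largest coin first
amount=287                       # e.g. $2.87, already converted to cents
for coin in 25 10 5 1; do
    n=$(( amount / coin ))       # how many of this coin fit
    amount=$(( amount % coin ))  # remainder for the smaller coins
    echo "$n coin(s) of ${coin}c"
done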
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/212645",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119717/"
]
} |
212,688 | It's a serious question. I test some awk scripts and I need files with a newline in their names. Is it possible to add a newline into a filename with mv ? I know I can do this with touch :

touch "foo
bar"

With touch I added the newline character by copy and paste. But I can't write foo Return bar in my shell. How can I rename a file to have a newline in the filename? Edit 2015/06/28, 07:08 pm: To add a newline in zsh I can use Alt + Return | It is a bad idea (to have strange characters in file names) but you could do

mv somefile.txt "foo
bar"

(you could also have done mv somefile.txt "$(printf "foo\nbar")" or mv somefile.txt foo$'\n'bar , etc... details are specific to your shell. I'm using zsh ) Read more about globbing , e.g. glob(7) . Details could be shell-specific. But understand that /bin/mv is given (by your shell), via execve(2) , an expanded array of arguments: argument expansion and globbing is the responsibility of the invoking shell. And you could even code a tiny C program to do the same:

#include <stdio.h>
#include <stdlib.h>

int main() {
    if (rename("somefile.txt", "foo\nbar")) {
        perror("rename somefile.txt");
        exit(EXIT_FAILURE);
    };
    return 0;
}

Save the above program in foo.c , compile it with gcc -Wall foo.c -o foo then run ./foo Likewise, you could code a similar script in Perl, Ruby, Python, Ocaml, etc.... But that is a bad idea. Avoid newlines in filenames (it will confuse the user, and it could break many scripts). Actually, I even recommend using only unaccented letters, digits, and +-/._% characters (with / being the directory separator) in file paths. "Hidden" files (starting with . ) should be used with caution and parsimony. I believe using any kind of space in a file name is a mistake. Use an underscore instead (e.g. foo/bar_bee1.txt ) or a minus (e.g. foo/bar-bee1.txt ) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/212688",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107084/"
]
} |
212,703 | I'm seeing an error every time I do the command below. Why?

$ crontab -l
no crontab for server

where server is the user account. This issue comes about because the script in crontab doesn't work, so I've tried to break down the problem. This is what I have put in using crontab -e :

crontab -e
@reboot /usr/bin/teamspeak3-server_linux-amd64/ts3server_minimal_runscript.sh

I press ctrl + o and save it, reboot and find the script doesn't run (even though the script itself does work if I double click it from the GUI). If I do a crontab -l after the reboot, I find I get the same error as above. Even before the reboot, if I open crontab -e just after saving this command inside the file, the line of code isn't there. | That's probably because this user does not have a crontab yet. You can create a crontab for this user by calling: crontab -e | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/212703",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119849/"
]
} |
212,754 | Is there a way to run a Linux binary in macOS? I tried to run a binary but it said it isn't executable. | These answers are half correct, because virtualization is a choice, but there is another. May I present...

History

First there was UNIX, circa 1972. Then the timeline split: In 1977, for $90, Bob Fabry and others compiled/built the first versions of BSD, short for Berkeley Software Distribution. In 1991, Linus Torvalds posted in a newsgroup about software he had built using tools from Richard Stallman, who started GNU in 1983, and Linus's UNIX was born. Apple acquired NeXT Software in 1996, after Steve Jobs had been forced out of Apple in 1985, and used the software and people there to build OS X. OS X is the Darwin OS + the NeXT Desktop Environment. And now we're back to Bullet #1, as Darwin is a closed source fork of BSD. For the Open Source Project, see PureDarwin . For the Official Apple Developer Page, see Apple Open Source .

How We Can Use This

BSDs traditionally use the Ports system for package management. The most widely used of these are the FreeBSD Ports . Ports are packages installed directly from source. Since the same Linux applications come from the same sources , you can run a Linux application if its port exists. Don't use these Ports on a Mac, though, because... Since all of Apple's GUIs are written using the Cocoa API - WikiEntry , bundled with XCode - OS X for Developers , the Ports can be tuned to take advantage of this: Install XCode for your version of OS X. Bundled inside XCode is Apple's version of the GCC compiler, and all the other associated tools. To update the tools, see this post on StackOverflow . The tools are OS version dependent, i.e. they are not backwards compatible, to my knowledge (in short, don't install XCode for 10.8 on 10.6, etc.) Having installed XCode you now have a compiler, and can head on over to the MacPorts page and browse for the port you need, after installing the MacPorts .pkg installer.

Caveat

The ports system doesn't necessarily do dependency checking, unless the port was well written. I lightly touched on the problem in what does elibc_FreeBSD mean in gentoo portage overlays?
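Once MacPorts is installed, typical usage looks like this (the package is just an example):

sudo port selfupdate          # refresh the ports tree
sudo port install coreutils   # build and install a port from source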
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/212754",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121320/"
]
} |
212,805 | Today, while writing a shell script, a question came to mind: since cd /target_dir and cd /target_dir/ both work, should I add a slash at the end of my path variables in a shell script? Such as LOG_PATH=/data/nginx/logs versus LOG_PATH=/data/nginx/logs/ . I did a rough search on Google but didn't find any discussion about this — maybe it's too basic? For now, it's really hard for me to decide which style to pick, but I prefer the LOG_PATH=/target_dir/ style a bit more, because when I'm doing autocompletion with bash, it gives me the result with the trailing slash. What's your opinion about this, and why? | According to POSIX, the definition of a pathname is: A string that is used to identify a file. It has optional beginning < slash > characters, followed by zero or more filenames separated by < slash > characters. A pathname can optionally contain one or more trailing < slash > characters. Multiple successive < slash > characters are considered to be the same as one < slash > , except for the case of exactly two leading < slash > characters. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/212805",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74226/"
]
} |
212,834 | My problem (in a script with #!/bin/sh ) is as follows: I try to checksum all files in a directory for archival purposes. The checksum (in my case sha1) file with all filenames should reside in the same directory. Let's say we have a directory ~/test with files f1 and f2 :

mkdir ~/test
cd ~/test
echo "hello" > f1
echo "world" > f2

Now calculating the checksums with find -maxdepth 1 -type f -printf '%P\n' | xargs shasum does exactly what I want: it lists all files of the current directory only and calculates the sha1 sums (maxdepth may be changed later). The output on STDOUT is:

f572d396fae9206628714fb2ce00f72e94f2258f  f1
9591818c07e900db7e1e0bc4b884c945e6a61b24  f2

Unfortunately, when trying to save this to a file with find -maxdepth 1 -type f -printf '%P\n' | xargs shasum > sums.sha1 the resulting file contains a checksum for itself:

da39a3ee5e6b4b0d3255bfef95601890afd80709  sums.sha1
f572d396fae9206628714fb2ce00f72e94f2258f  f1
9591818c07e900db7e1e0bc4b884c945e6a61b24  f2

and therefore fails at a later shasum --check , because of the obvious problem of additional file modification when saving the last sum. I looked around and by using the -p flag for xargs , I found out that it somehow creates the output file before even executing the find command, therefore the additional file is found and will be checksummed... I know that as a workaround I could save the checksum to another location (temp directory via mktemp ) or exclude it in find specifically, but I'd like to understand why it behaves the way it does - which is in my eyes not that useful; for example, if the first command would check whether the output file is already on disk, it would never get the correct answer... | You can prevent the file from reaching xargs using:

find . -maxdepth 1 -type f ! -name sums.sha1 -printf '%P\n' | xargs -r shasum -- > sums.sha1

To prevent problems with filenames that have blanks or newlines or quotes or backslashes, I would however use:

find . -maxdepth 1 -type f ! -name sums.sha1 -printf '%P\0' | xargs -r0 shasum -- > sums.sha1

instead. The -- is to avoid problems with file names that start with - . It will however not help for a file called - . Had you used -print0 instead of -printf '%P\0' , you wouldn't have needed the -- and would not have had a problem with the - file. | {
"source": [
"https://unix.stackexchange.com/questions/212834",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121391/"
]
} |
212,842 | Like this way in Windows: mklink /D Virtual_Folder_Here Real_Folder_Here . I was unable to make an Alias work for Apache virtual sites in Windows, so I made a virtual directory instead. Is this possible in Linux too? | Yes — a symbolic link is the direct equivalent of mklink /D , created with ln -s (note that the argument order is reversed: the real target comes first, the link name second). For Apache to serve content through the link, the directory also needs Options FollowSymLinks (or SymLinksIfOwnerMatch ) enabled in its configuration.
"source": [
"https://unix.stackexchange.com/questions/212842",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2880/"
]
} |
212,848 | My understanding is that a pipe in e.g. command1 | command2 sends the output of command1 to command2 . However, I would have expected this to work: echo "tmp.pdf" | evince But it does not. Where is the output of echo "tmp.pdf" being sent? | A pipe sends its output to the program that has it open for reading. In a shell pipeline, that's the program on the right-hand side of the pipe symbol, i.e. evince in your example. You're sending the file name tmp.pdf to evince on its standard input. However, evince doesn't care about its standard input. Like every program that acts on a file, it expects the file name to be passed as a command line argument; if you don't pass a file name on the command line, it offers to open a file. Command line arguments are not the same thing as standard input. Humans have different input organs that input different things (e.g. you can't eat through your nose), and similarly programs have different ways of receiving information that serve different purposes. Evince can read a file (not a file name) on standard input: evince /dev/stdin <"tmp.pdf" . (This may not work on all Unix variants.) The file name /dev/stdin means "whatever file you already have open on your standard input". Programs intended for the command line typically read their standard input when they aren't given a file name, but GUI programs usually don't. Evince can only open a regular file this way, not data from a pipe (e.g. cat tmp.pdf | evince /dev/stdin doesn't work), because it needs to be able to seek back and forth in the file when navigating between pages.
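If what you actually want is to turn the text on standard input into a command-line argument, xargs is the standard bridge:

echo "tmp.pdf" | xargs evince   # runs: evince tmp.pdf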
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/212848",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90937/"
]
} |
212,872 | I want to see the last N commands in my history . I thought history | tail -n 5 would do it, but I noticed that a multiline command counts for as many lines as it has.

$ echo "hello
how are you"
$ history | tail -2
how are you"
 1051  history | tail -2

So my question is: do I have to parse the output of the command to accomplish this? | I found it! history [n] — an argument of n lists only the last n lines.

$ echo "hello
how are you"
$ history 2
 1060  echo "hello
how are you"
 1061  history 2
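The POSIX-specified equivalent is fc with a negative offset:

fc -l -5   # list the last five commands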
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/212872",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40596/"
]
} |
212,894 | I've encountered both http_proxy and HTTP_PROXY . Are both forms equivalent? Does one of them take precedence over the other? | There is no central authority who assigns an official meaning to environment variables before applications can use them. POSIX defines the meaning of some variables ( PATH , TERM , …) and lists several more in a non-normative way as being in common use, all of them in uppercase. http_proxy and friends aren't among them. Unlike basically all conventional environment variables used by many applications, http_proxy , https_proxy , ftp_proxy and no_proxy are commonly lowercase. I don't recall any program that only understands them in uppercase; I can't even find one that tries them in uppercase. Many programs use the lowercase variant only, including lynx, wget, curl, perl LWP, perl WWW::Search, python urllib/urllib2, etc. So for these variables, the right form is the lowercase one. The lowercase name dates back at least to CERN libwww 2.15 in March 1994 (thanks to Stéphane Chazelas for locating this). I don't know what motivated the choice of lowercase, which would have been unusual even then. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/212894",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79271/"
]
} |
212,897 | There is something terribly wrong with my current Debian install. Most programs like firefox, nslookup, dig etc. are ignoring entries in the /etc/hosts file; I actually use this file for ad-blocking. An example line in /etc/hosts :

127.0.0.1 www.winaproduct.com

When I do dig +short www.winaproduct.com it returns the real IP address of the server, not 127.0.0.1 . Opening www.winaproduct.com in firefox shows the respective website, which is not what I expect. But there is no problem with ping , busybox nslookup , busybox ping , resolveip etc. So, what is the problem? And how do I fix it? I think the problem is with the DNS resolving library. A temporary fix-up: set up dnsmasq and change the nameserver to 127.0.0.1 in /etc/resolv.conf . Update: the problem magically solved itself after installing libnss3 , as a dependency of google-chrome. The default /etc/nsswitch.conf looks like

hosts: files dns

How do I tell nslookup , dig etc. to ask the /etc/hosts file first instead of asking the DNS directly? And why do busybox nslookup , wget , resolveip etc. work differently than nslookup , dig etc.? | nslookup , dig , and host are tools for querying DNS name servers. If your configuration is not provided by a name server (like the information given in /etc/hosts ) those tools will not show it, because they directly ask the name server. If you want to check that the "usual" resolution is working (i.e. the way specified in /etc/nsswitch.conf ) you can use getent :

getent hosts www.winaproduct.com | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/212897",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118389/"
]
} |
212,922 | I have a fairly recently installed CentOS 6.6 system. I have disabled IPv6 as best I can: IPV6INIT=no in ifcfg-[eth0|lo] ; net.ipv6.conf.all.disable_ipv6 = 1 in /etc/sysctl.conf ; net.ipv6.conf.default.disable_ipv6 = 1 in /etc/sysctl.conf . However, when I do DNS lookups, most notably for yum updates, I keep getting offered IPv6 addresses, which of course are not reachable; I presume from this that my system is asking for AAAA records instead of A records when doing a name lookup.

Downloading Packages:
http://centos.mirror.iweb.ca/6.6/os/x86_64/Packages/ConsoleKit-0.4.1-3.el6.x86_64.rpm: [Errno 14] PYCURL ERROR 7 - "Failed to connect to 2607:f748:10:12:0:ce17:705:1: Network is unreachable"
Trying other mirror.

How do I make my system only ask for IPv4 addresses? | For yum specifically, the simplest fix is to force IPv4 resolution by adding ip_resolve=4 to the [main] section of /etc/yum.conf (the option is documented in yum.conf(5) ). If you want the preference system-wide, glibc's resolver can be told to sort IPv4 addresses first by adding (or uncommenting) the line precedence ::ffff:0:0/96 100 in /etc/gai.conf — AAAA records are still returned, but applications that use getaddrinfo() will try the IPv4 addresses first.
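The yum change is a one-line addition (a sketch):

# /etc/yum.conf
[main]
ip_resolve=4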
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/212922",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1397/"
]
} |
212,925 | This works to replace tom with sam in a file:

sed 's/tom/sam/g' file_1 > file_2

But this does not:

sed 's/*****/sam/g' file_1 > file_2

I want to replace the special characters ***** with the word sam . I have tried with a backslash ( \* ) but get errors. | You need to escape each special character with a backslash \ in front of it, e.g.:

sed 's/\*/t/g' test.txt > test2.txt
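Applied to the question's example, each of the five asterisks is escaped:

sed 's/\*\*\*\*\*/sam/g' file_1 > file_2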
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/212925",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121423/"
]
} |
212,950 | I'm trying to hide the "output" of a gnupg command, but it seems that it is always printed. The command is:

echo "thisprogramwørks" | gpg -q --status-fd 1 --no-use-agent --sign --local-user D30BDF86 --passphrase-fd 0 --output /dev/null

It is a command to verify the password of pgp keys, and by using it like this:

a=$(echo "thisprogramwørks" | gpg -q --status-fd 1 --no-use-agent --sign --local-user D30BDF86 --passphrase-fd 0 --output /dev/null)

I recover the output:

echo $a
[GNUPG:] USERID_HINT F02346C1EA445B6A p7zrecover (7zrecover craking pgp test) <a@a>
[GNUPG:] NEED_PASSPHRASE F02346C1EA445B6A F02346C1EA445B6A 1 0
[GNUPG:] GOOD_PASSPHRASE
[GNUPG:] BEGIN_SIGNING
[GNUPG:] SIG_CREATED S 1 8 00 1435612254 8AE04850C3DA5939088BE2C8F02346C1EA445B6A

The problem is that when I use the command, the console prints:

You need a passphrase to unlock the secret key for
user: "test (test) <a@a>"
1024-bit RSA key, ID EA445B6A, created 2015-06-29

I've been trying to use command redirects like &>/dev/null and similar, but the passphrase text is always printed. Is it possible to hide this text? | The "problem" is that gpg writes directly to the TTY instead of STDOUT or STDERR. That means it cannot be redirected. You can use the --batch option as daniel suggested, or, as a more general approach, use the script tool, which fakes a TTY. Any output is then sent to STDOUT, so you can redirect it to /dev/null :

script -c 'echo "thisprogramwørks" | gpg -q --status-fd 1 --no-use-agent --sign --local-user D30BDF86 --passphrase-fd 0 --output /dev/null' > /dev/null

The output is also written to a file, so you can still get and analyze it. See man script ( link ) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/212950",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
213,027 | I've reinstalled a Linux server from CentOS 6 to 7. The server has 3 drives - a system SSD drive (it hosts everything except /home ) and two 4TB HDD drives that host /home . Everything uses LVM. The two 4TB drives are mirrored (using the raid option within LVM itself), and they are completely filled with the /home partition. The problem is that although the 4TB disks are recognized fine, and LVM sees the volume without problems, it does not activate it automatically. Everything else is activated automatically. I can activate it manually, and it works. I have an image of the old system drive in /home. That contains LVM volumes too. If I mount it with kpartx , and LVM picks those up and activates them. But I can see no difference between those volumes and the inactive ones. The root filesystem is LVM too, and that activates just fine. I see a peculiar thing though: executing lvchange -aay tells me that I need to specify which drives I want to activate. It doesn't do it automatically either. If I specify lvchange -ay lv_home - that works. I cannot find anything that could be responsible for this behavior. Added: I noticed that the old system (which used init) had vgchange -aay --sysinit in its startup scripts. The new one uses systemd, and I don't see the vgchange call in its scripts. But I also don't know where to put it. Added 2: Starting to figure out systemd. I found where the scripts are located and started understanding how they are called. Also found that I could see the executed scripts with systemctl -al . This shows me that after starting lvmetad it calls pvscan for each known udev block device. However at that point there is just one registered udev block device, and that is one of the recognized lvm volumes. The hard drives are there too, but under different paths and much longer names. The recognized block device is something like 8:3 , while the hard drives are like /device/something/ . I'm not at the server anymore, so I cannot write it precisely (will fix this later). I think that it has something to do with udev and device detection/mapping. I will continue in the evening and will study udev then. If all else fails, I found the script that calls pvscan and checked that I can modify it to scan all the devices all the time. That fixes the problem, but it looks like a rather ugly hack, so I'll try to figure out the real root cause. Added 3 : OK, I still don't know why this happens, but at least I've made a fairly passable workaround. I made another systemd service that calls the pvscan once, right after starting lvmetad . The other call for the specific device is still there, and I think it's actually udev that calls it (that's the only place where I found reference to it). Why it doesn't call it for the other hard drives - I have no idea. | I did it! I did it! I fixed it properly (I think). Here's the story: After some time the server turned out to be faulty and had to be scrapped. I kept disks and got everything else new. Then I reinstalled CentOS again on the SSD and then I attached the HDDs. LVM worked nicely, the disks were recognized, the configuration kept. But the same problem came up again - after a reboot, the volume was inactive. However this time I chanced to notice something else - the bootloader passes the following parameters to the kernel: crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet Hmm, wait a minute, those look FAMILIAR ! 
Quick google query, and there we are : rd.lvm.lv= only activate the logical volumes with the given name. rd.lvm.lv can be specified multiple times on the kernel command line. Well now. THAT explains it! So, the resolution was (gathered from several more google queries):

1. Modify /etc/default/grub to include the additional volume in the parameters: crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rd.lvm.lv=vg_home/lv_home rhgb quiet
2. Reconfigure grub with grub2-mkconfig -o /boot/grub2/grub.cfg
3. Reconfigure the initramfs with mkinitrd -f -v /boot/initramfs-3.10.0-327.18.2.el7.x86_64.img 3.10.0-327.18.2.el7.x86_64 . Note: your values may vary. Use uname -r to get that kernel version. Or just read up on mkinitrd . (Frankly, I don't know why this step is needed, but apparently it is - I tried without it and it didn't work.)
4. Finally, reinstall grub: grub2-install /dev/sda
5. Reboot, naturally.

TA-DA! The volume is active on reboot. Add it to fstab and enjoy! :) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/213027",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18179/"
]
} |
213,054 | I am on Ubuntu 12.04, and the ip utility does not have the ip netns identify <pid> option. I tried installing a new iproute , but the identify option still doesn't seem to work! If I were to write a script (or code) to list all processes in a network namespace, or, given a PID, show which network namespace it belongs to, how should I proceed? (I need info on a handful of processes, to check if they are in the right netns ) | You could do something like:

netns=myns
find -L /proc/[1-9]*/task/*/ns/net -samefile /run/netns/"$netns" | cut -d/ -f5

Or with zsh :

print -l /proc/[1-9]*/task/*/ns/net(e:'[ $REPLY -ef /run/netns/$netns ]'::h:h:t)

It checks the inode of the file which the /proc/*/task/*/ns/net symlink points to against those of the files bind-mounted by ip netns add in /run/netns . That's basically what ip netns identify or ip netns pid in newer versions of iproute2 do. That works with the 3.13 kernel as from the linux-image-generic-lts-trusty package on Ubuntu 12.04, but not with the 3.2 kernel from the first release of 12.04 where /proc/*/ns/* are not symlinks and each net file there from every process and task gets a different inode which can't help determine namespace membership. Support for that was added by that commit in 2011, which means you need kernel 3.8 or newer. With older kernels, you could try and run a program listening on an ABSTRACT socket in the namespace, and then try to enter the namespace of every process to see if you can connect to that socket there, like:

sudo ip netns exec "$netns" socat abstract-listen:test-ns,fork /dev/null &
ps -eopid= | while read p; do
    nsenter -n"/proc/$p/ns/net" socat -u abstract:test-ns - 2> /dev/null &&
        echo "$p"
done | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/213054",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47055/"
]
} |
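Going the other direction — from a PID to a namespace name — the same inode comparison works; a rough sketch for 3.8+ kernels (the script name and error message are my own additions for illustration):
#!/bin/bash
# Sketch: poor man's "ip netns identify <pid>".
# Compares /proc/<pid>/ns/net with the files that "ip netns add"
# bind-mounts under /run/netns; [ a -ef b ] is true when both
# resolve to the same device and inode.
pid=$1
for ns in /run/netns/*; do
    if [ "/proc/$pid/ns/net" -ef "$ns" ]; then
        basename "$ns"   # print the matching namespace name
        exit 0
    fi
done
echo "PID $pid is not in any named network namespace" >&2
exit 1
Usage would be along the lines of sudo ./netns-of.sh 1234.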
213,059 | I'm new to Munin and Nginx. I've installed and configured Munin and created a Nginx server block. I can see the index page generated by munin, listing the different nodes. But when I click on a host to see the graphs, the only thing I get is an HTML page without CSS and without graphs. More precisely, there's the same HTML code in the webpage, the CSS and even in favicon.ico. And no graphs are loaded (I have no 404 for example). I followed this tutorial . Here is my Nginx server block: server { listen 80; server_name munin.armagnac.[masked].com; location ^~ /cgi-bin/munin-cgi-graph/ { access_log off; fastcgi_split_path_info ^(/cgi-bin/munin-cgi-graph)(.*); fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_pass unix:/var/run/munin/fcgi-graph.sock; include fastcgi_params; } location /static/ { alias /etc/munin/static/; } location / { fastcgi_split_path_info ^(/munin)(.*); fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_pass unix:/var/run/munin/fcgi-html.sock; include fastcgi_params; }} I have no errors and nothing in the logs. As said above, a node page is almost blank: There's no CSS because any other resource is just the same HTML page: Again, there's nothing in the logs and the HTML and Graphs CGIs are working fine. But I don't know where the configuration problem is, e.g. on the Nginx side or in the Munin side. OS: Ubuntu Server 15.04 | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/213059",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92806/"
]
} |
213,074 | I set up automatic ssh login without typing a password to a server by: cd ~/.ssh; ssh-keygen; ssh-copy-id -i ~/.ssh/id_rsa.pub tim@server1 It works on the server. Later I did the same on a different server. ssh-copy-id -i ~/.ssh/id_rsa.pub tim@server2 Immediately I ssh tim@server2 , but it still requires my password. Did I do something incorrectly? What are some possible reasons that I didn't set up successfully on the second server? (note that the second server runs Kerberos and the Andrew File System) $ ssh -v tim@server2OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014debug1: Reading configuration data /etc/ssh/ssh_configdebug1: /etc/ssh/ssh_config line 19: Applying options for *debug1: Connecting to server2 [...] port 22.debug1: Connection established.debug1: identity file /home/tim/.ssh/id_rsa type 1debug1: identity file /home/tim/.ssh/id_rsa-cert type -1debug1: identity file /home/tim/.ssh/id_dsa type -1debug1: identity file /home/tim/.ssh/id_dsa-cert type -1debug1: identity file /home/tim/.ssh/id_ecdsa type -1debug1: identity file /home/tim/.ssh/id_ecdsa-cert type -1debug1: identity file /home/tim/.ssh/id_ed25519 type -1debug1: identity file /home/tim/.ssh/id_ed25519-cert type -1debug1: Enabling compatibility mode for protocol 2.0debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3debug1: match: OpenSSH_5.3 pat OpenSSH_5* compat 0x0c000000debug1: SSH2_MSG_KEXINIT sentdebug1: SSH2_MSG_KEXINIT receiveddebug1: kex: server->client aes128-ctr hmac-md5 nonedebug1: kex: client->server aes128-ctr hmac-md5 nonedebug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<3072<8192) sentdebug1: expecting SSH2_MSG_KEX_DH_GEX_GROUPdebug1: SSH2_MSG_KEX_DH_GEX_INIT sentdebug1: expecting SSH2_MSG_KEX_DH_GEX_REPLYdebug1: Server host key: RSA xxxdebug1: Host 'server2' is known and matches the RSA host key.debug1: Found key in /home/tim/.ssh/known_hosts:70debug1: ssh_rsa_verify: signature correctdebug1: SSH2_MSG_NEWKEYS sentdebug1: expecting SSH2_MSG_NEWKEYSdebug1: SSH2_MSG_NEWKEYS receiveddebug1: Roaming not allowed by serverdebug1: SSH2_MSG_SERVICE_REQUEST sentdebug1: SSH2_MSG_SERVICE_ACCEPT receiveddebug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password,keyboard-interactivedebug1: Next authentication method: gssapi-keyexdebug1: No valid Key exchange contextdebug1: Next authentication method: gssapi-with-micdebug1: Unspecified GSS failure. Minor code may provide more informationNo Kerberos credentials availabledebug1: Unspecified GSS failure. Minor code may provide more informationNo Kerberos credentials availabledebug1: Unspecified GSS failure. Minor code may provide more informationdebug1: Unspecified GSS failure. Minor code may provide more informationNo Kerberos credentials availabledebug1: Next authentication method: publickeydebug1: Offering RSA public key: /home/tim/.ssh/id_rsadebug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password,keyboard-interactivedebug1: Trying private key: /home/tim/.ssh/id_dsadebug1: Trying private key: /home/tim/.ssh/id_ecdsadebug1: Trying private key: /home/tim/.ssh/id_ed25519debug1: Next authentication method: keyboard-interactivePassword: I tried Anthon's method of using DSA keys, but it still asks me for my password.
$ cd ~/.ssh$ ssh-keygen -t dsa$ ssh-copy-id -i ~/.ssh/id_dsa.pub tim@server2$ ssh -v tim@server2OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014debug1: Reading configuration data /etc/ssh/ssh_configdebug1: /etc/ssh/ssh_config line 19: Applying options for *debug1: Connecting to server2 [...] port 22.debug1: Connection established.debug1: identity file /home/tim/.ssh/id_rsa type 1debug1: identity file /home/tim/.ssh/id_rsa-cert type -1debug1: identity file /home/tim/.ssh/id_dsa type 2debug1: identity file /home/tim/.ssh/id_dsa-cert type -1debug1: identity file /home/tim/.ssh/id_ecdsa type -1debug1: identity file /home/tim/.ssh/id_ecdsa-cert type -1debug1: identity file /home/tim/.ssh/id_ed25519 type -1debug1: identity file /home/tim/.ssh/id_ed25519-cert type -1debug1: Enabling compatibility mode for protocol 2.0debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3debug1: match: OpenSSH_5.3 pat OpenSSH_5* compat 0x0c000000debug1: SSH2_MSG_KEXINIT sentdebug1: SSH2_MSG_KEXINIT receiveddebug1: kex: server->client aes128-ctr hmac-md5 nonedebug1: kex: client->server aes128-ctr hmac-md5 nonedebug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<3072<8192) sentdebug1: expecting SSH2_MSG_KEX_DH_GEX_GROUPdebug1: SSH2_MSG_KEX_DH_GEX_INIT sentdebug1: expecting SSH2_MSG_KEX_DH_GEX_REPLYdebug1: Server host key: RSA ...debug1: Host 'server2' is known and matches the RSA host key.debug1: Found key in /home/tim/.ssh/known_hosts:70debug1: ssh_rsa_verify: signature correctdebug1: SSH2_MSG_NEWKEYS sentdebug1: expecting SSH2_MSG_NEWKEYSdebug1: SSH2_MSG_NEWKEYS receiveddebug1: Roaming not allowed by serverdebug1: SSH2_MSG_SERVICE_REQUEST sentdebug1: SSH2_MSG_SERVICE_ACCEPT receiveddebug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password,keyboard-interactivedebug1: Next authentication method: gssapi-keyexdebug1: No valid Key exchange contextdebug1: Next authentication method: gssapi-with-micdebug1: Unspecified GSS failure. Minor code may provide more informationNo Kerberos credentials availabledebug1: Unspecified GSS failure. Minor code may provide more informationNo Kerberos credentials availabledebug1: Unspecified GSS failure. Minor code may provide more informationdebug1: Unspecified GSS failure. Minor code may provide more informationNo Kerberos credentials availabledebug1: Next authentication method: publickeydebug1: Offering DSA public key: /home/tim/.ssh/id_dsadebug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password,keyboard-interactivedebug1: Offering RSA public key: /home/tim/.ssh/id_rsadebug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password,keyboard-interactivedebug1: Trying private key: /home/tim/.ssh/id_ecdsadebug1: Trying private key: /home/tim/.ssh/id_ed25519debug1: Next authentication method: keyboard-interactivePassword: | You mention that the second server is using the Andrew File System (AFS). I haven't worked with that, but from what I understand it, AFS is a Kerberos-secured filesystem which requires a kerberos ticket in order to work. That means you need to be logged to your site's Kerberos realm in order to be able to access your home directory. If you log on with password, server2 is likely set up so that it logs you on to your Kerberos realm through PAM. If you're using SSH keys, however, then server2 won't get the information it requires to do that, and you won't be able to access your home directory. 
Luckily, from the ssh -v output in your question, we can infer that your server has GSSAPI authentication enabled. This should allow you to perform a passwordless logon, provided you have a valid Kerberos ticket for your realm. Do the following:
Log on to server2 , and run the klist program. This will return something along the following lines:
Ticket cache: FILE:/tmp/krb5cc_2000
Default principal: [email protected]
Valid starting Expires Service principal
28-05-15 15:01:31 29-05-15 01:01:31 krbtgt/[email protected]
 renew until 29-05-15 15:01:28
28-05-15 15:02:04 29-05-15 01:01:31 IMAP/[email protected]
 renew until 29-05-15 15:01:28
Look for the line which starts with Default principal: . It tells you what your Kerberos principal is (in the above example, it's [email protected] ). Write this down. Note that it's not an email address, and that it is case-sensitive; i.e., the principal ends with EXAMPLE.ORG , not example.org .
On your client machine, run kinit with the name of your principal (i.e., in the above example, that would be kinit [email protected] ). If all goes well, when you run klist again now, you will see that you have a ticket cache on your local machine.
If you now run ssh -K server2 , you should be able to log on, and the system should not ask for a password.
Please note that due to how Kerberos works, a ticket cache has a limited validity. It is not possible to ask for a ticket cache with validity longer than what the realm administrator configured (which is usually something like 10 hours or so). Once your ticket has expired, you will need to run kinit again, and enter your password once more. (A small helper sketch follows this record.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/213074",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
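Once GSSAPI logins work, the kinit-then-ssh dance can be wrapped in a small helper; a sketch, where the principal is the example one from the klist output above — substitute your own:
#!/bin/bash
# Sketch: obtain a Kerberos ticket if the cache is missing/expired,
# then log in with GSSAPI authentication (ssh -K).
principal="[email protected]"   # assumption: example principal from above
host="server2"
# "klist -s" prints nothing and exits non-zero when no valid ticket exists
if ! klist -s; then
    kinit "$principal"   # prompts for the Kerberos password once
fi
exec ssh -K "$host"
As the answer notes, the ticket expires after whatever lifetime the realm administrator configured, so kinit will be re-run from time to time.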
213,089 | With the 15.04 release of Kubuntu, I switched from Gnome (Ubuntu) to KDE/Plasma. I did a clean install, while keeping my home directory. Now, libreoffice (mostly Calc) seems to be associated with every unknown file-type. Instead of manually fixing the associations for every file I encounter, I'd rather understand what went wrong and fix it by looking at the corresponding config file that KDE/Plasma uses. Which config-file is it? | | {
"source": [
"https://unix.stackexchange.com/questions/213089",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121530/"
]
} |