source_id | question | response | metadata
---|---|---|---|
217,840 | A service on a linux server is only able to do full backups, where each backup is a .tar archive (no compression). Many contents of the archive do not change from day to day. Each .tar file size is about 3GB (slowly increasing from day to day). I want to transfer the backups to another server, which archives them. The transfer is done through the internet. A requirement is that the backups are not altered (the result is again a list of .tar files, whose md5 sum is still identical to the original files on the server). I'm currently using rsync to transfer the files, which works great, but all files are transferred with their full size. As far as I know rsync does some kind of deduplication on transfers, but only on a per-file level (right?). Is there any way to transfer a few similar files through an SSH connection without retransmitting identical chunks of the files (so some kind of deduplication), that does not require write access on the server (no unpacking of the tar files) is tolerant to connection losses (does not leave temp files on abortions and detects incorrectly transmitted files) is able to resume the transfer after connection losses (does not retransmit all files if the connection aborts) does not require any additional tools on the server (besides the standard unix toolchain including rsync) still uses a client-initiated SSH connection for the transfer | The problem is sudo -s without any argument will open an interactive shell for root. If you just want to run a single command using sudo -s , you can simply do: sudo -s command For example : $ sudo -s whoamiroot Or you can use here strings : $ sudo -s <<<'whoami'root If you have multiple commands you can use a here doc : $ sudo -s <<'EOF'> whoami> hostname> EOFrootsomething-- | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/217840",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9266/"
]
} |
217,902 | I was trying to make a very simple bash script to list all of the multiples of five between 375 and 3500 (375, 380, 385...). One thing that I tried that didn't work is: for i in {375..3500}do echo $i (($i += 5))done I gave up after a while and wrote this in BASIC in about 15 seconds: 10 count = 37520 print count30 count = count+540 if count < 3500 then goto 20 How can I do what my BASIC program does in a bash script? | Alternatively a traditional C-style for loop can be used: for ((i=375; i<=3500; i+=5)); do echo $idone This is perhaps less clear than using seq, but it doesn't spawn any subprocesses. Since I'm familiar with C, I don't have any difficulty understanding this, but YMMV. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/217902",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124698/"
]
} |
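The answer above mentions seq as the clearer alternative; for reference, a minimal sketch of that variant (assuming GNU seq, whose three-argument form is seq FIRST INCREMENT LAST):

```sh
# Print every multiple of five from 375 to 3500, one per line.
for i in $(seq 375 5 3500); do
    echo "$i"
done
```

Since seq already prints one number per line, `seq 375 5 3500` on its own produces the same output without the loop.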
217,905 | I've looked around a bit for an answer to this question but I don't seem to find it (which is weird). My question is, is there any simple way to restart the BASH session from within the terminal on Mac? I just want the same behaviour as if I closed the terminal application and started it again (all variables reset, .bash_profile sourced etc). I know how to source .bash_profile , but that's not what I want. One of the reasons I want to do this is because a plugin for my BASH prompt has code that prevents colors from being loaded multiple times. Therefore, sourcing .bash_profile doesn't reload the color variables and I have to restart the terminal application to get the changes in effect. | exec bash should replace the current shell process with (a new instance of) bash. EDIT: Seems from the answers below that Catalina replaces bash with zsh. You can run exec zsh in that case, or alternatively exec "$SHELL" which should always use your default shell. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/217905",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124423/"
]
} |
217,922 | I'm trying to copy only folders with .mp3 files in them to another device. And only the .mp3s in them .The structure looks like this: Band/Year - Album/*.flacBand/Year - Album/*.txtBand/Year - Album/*.mp3Band/Year - Album/coversBand/Year - Album/covers/*.jpg I've already read a shitton of manuals and forums but can't resolve my problem. No matter what I do, no matter how I'm trying to formulate what I want to include in copying, since it's excluded - it's gone even --exclude=* --include=* won't copy anything. I tested it on another directory, even without any, dammit. EDIT: List of what I tried to no avail: rsync -rnv --exclude=** --include=*/ --include=*/*/ --include=*/*/*/ --include=*/*/*.mp3 "./Be'lakor" /media/moorindal/WALKMAN/MUSICrsync -rnv --exclude=** --include=*/ --include=*/*/ --include=*/*/*/ --include=**.mp3 "./Be'lakor" /media/moorindal/WALKMAN/MUSICrsync -rnv --exclude=** --include=*/ --include=*/* --include=**.mp3 "./Be'lakor" /media/moorindal/WALKMAN/MUSICrsync -rnv --exclude=*/*/* --include=**.mp3 "./Be'lakor" /media/moorindal/WALKMAN/MUSICrsync -rnv --exclude=*/*/*.* --include=**.mp3 "./Be'lakor" /media/moorindal/WALKMAN/MUSICrsync -rnv --exclude='*/*/*.*' --include='*/*/*.mp3' "./Be'lakor" /media/moorindal/WALKMAN/MUSIC | First, why --exclude=* --include=* doesn't work: because the first matching rule applies. * matches everything, so everything is excluded, and rsync doesn't even try the second rule for any file. See Rsync filter: copying one pattern only for a guide to rsync filters. To include .mp3 files, you need rsync -a -m --include='*.mp3' --include='*/' --exclude='*' i.e. include MP3, include all directories (if a directory isn't included then none of the files inside it are included) and exclude everything else. The -m option (a.k.a. --prune-empty-dirs ) makes rsync skip directories that don't contain any files to be copied. But that won't include other files in the same directory as the .mp3 file. For that, you need some help from the shell or other tools. In zsh, you can match .mp3 files in subdirectories with **/*.mp3 , and then use a history modifier as a glob qualifier to transform the result into the list of directories containing .mp3 files. rsync -a **/*.mp3(:h) /destination If you have too many directories (or more precisely, if the combined length of their names is too large), this may break the command line limit. Removing duplicates might help: typeset -aU directoriesdirectories=(**/*.mp3(:h))rsync -a $directories /destination This doesn't eliminate the risk that the command is too long, it merely reduces it. To eliminate the risk, use zargs to run rsync in batches. autoload -U zargstypeset -aU directoriesdirectories=(**/*.mp3(:h))do_rsync () { rsync -a $@ /destination; }zargs --max-chars=$(($(get_conf ARG_MAX) - 500)) -- $directories -- do_rsync | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/217922",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124711/"
]
} |
217,932 | I have a script that runs a command via zsh -c . However, when zsh runs, it doesn't appear to load ~/.zshrc . I understand a login shell flag exists , but even zsh -lc <command> doesn't seem to work. How can I get functions, aliases, and variables defined in my ~/.zshrc to populate when running it with zsh -c ? | zsh does not read .zshrc in a non-interactive shell, but it allows you to invoke an interactive shell to run a script : $ zsh -ic 'type f'f is a shell function or you can always source .zshrc manually: $ zsh -c '. ~/.zshrc; type f'f is a shell function | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/217932",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89283/"
]
} |
217,936 | I have a bunch of binaries and I know that inside these binaries there are strings I want to find. I want to do a: grep -lir "the string I am looking for" and get a list of all binaries inside a particular directory that contain that string, but grep -lir is apparently not working with these files. Is there a command that can do this kind of search from the terminal? | With GNU grep , you can use the -a option to make it treat binary files as text files: grep -ali -- string file If your grep version does not support -a , you can use ack instead. With ack 1.x, you need to include the -a option; with ack 2.x, you don't, since it includes non-text files by default when searching (it only ignores non-text files when you did not specify any files). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/217936",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45335/"
]
} |
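Another common approach when grep -a is unavailable: extract the printable strings first and search those. A hedged sketch (the directory path and search text are placeholders):

```sh
# List binaries under a directory whose embedded printable strings
# contain the text; 'strings' extracts printable runs from each file.
for f in /some/dir/*; do
    if strings -- "$f" | grep -qi 'the string I am looking for'; then
        printf '%s\n' "$f"
    fi
done
```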
217,939 | I'd like to know whether there are any Systemd equivalents for *BSD distributions, basically something that can handle dependencies between services (service A requires B to be started, so until B is ready don't start A) and has a sane service file format (like a configuration file that tells it what to start and when, instead of an initscript). After using it on Linux I can't even think of going back to a legacy initscripts-based distribution, and yet I'd like to try a BSD (I need a very minimal system for a router & access point). | With GNU grep , you can use the -a option to make it treat binary files as text files: grep -ali -- string file If your grep version does not support -a , you can use ack instead. With ack 1.x, you need to include the -a option; with ack 2.x, you don't, since it includes non-text files by default when searching (it only ignores non-text files when you did not specify any files). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/217939",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124721/"
]
} |
217,948 | l and v in exec calls denote whether the arguments are provided via list or array (vector). I read somewhere that p denotes the user's path and e denotes the environment, but I did not understand what that means. | Check this Wikipedia link on Exec function and this link on Starting a process with the exec() calls e – An array of pointers to environment variables is explicitly passed to the new process image. The "e" suffix versions pass an environment to the program. An environment is just that—a kind of "context" for the program to operate in. For example, you may have a spelling checker that has a dictionary of words. Instead of specifying the dictionary's location every time on the command line, you could provide it in the environment: l – Command-line arguments are passed individually (a list) to the function. For example, if I want to invoke the ls command with the arguments -t, -r, and -l (meaning "sort the output by time, in reverse order, and show me the long version of the output"), I could specify it as either. p – Uses the PATH environment variable to find the file named in the path argument to be executed. The "p" suffix versions will search the directories in your PATH environment variable to find the executable. You've probably noticed that all the examples have a hard-coded location for the executable: /bin/ls and /usr/bin/spellcheck. What about other executables? Unless you want to first find out the exact path for that particular program, it would be best to have the user tell your program all the places to search for executables. The standard PATH environment variable does just that. v – Command-line arguments are passed to the function as an array (vector) of pointers. The argument list is specified via a pointer to an argument vector. As mentioned in the other answer, this link on Unix System Calls is also equally awesome for further reading. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/217948",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77724/"
]
} |
217,959 | I have deployed a wildcard certificate (Comodo PlatinumSSL) for *.example.com on Apache/Ubuntu 14.04. Everything works if the client visits https://www.example.com but https://example.com throws up this in Firefox: example.com uses an invalid security certificate. The certificate is only valid for *.example.com (Error code: ssl_error_bad_cert_domain) Extracts from the vhost file: <IfModule mod_ssl.c> <VirtualHost *:443> SSLEngine on ServerName example.com ServerAlias www.example.com *.example.com DocumentRoot /var/www/html SSLCertificateFile /etc/ssl/localcerts/example_com.cer SSLCertificateKeyFile /etc/ssl/localcerts/example_com.key SSLCertificateChainFile /etc/ssl/localcerts/example_com_interm.cer </VirtualHost></IfModule> How do I get both https://www.example.com and https://example.com to work without warnings? | A wildcard matches a single left-most label. That is *.example.com matches www.example.com but not example.com or sub.foo.example.com . This means you either need to get a certificate which includes *.example.com and example.com as subject alternative names or if you just need www and the naked domain name then you can can get a cheaper certificate which only includes www.example.com and example.com . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/217959",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124520/"
]
} |
217,988 | I am attempting to write a filter using something like sed or awk to do the following: If a given pattern does not exist in the input, copy the entire input to the output If the pattern exists in the input, copy only the lines after the first occurrence to the output This happens to be for a "git clean" filter, but that's probably not important. The important aspect is this needs to be implemented as a filter, because the input is provided on stdin. I know how to use sed to delete lines up to a pattern, eg. 1,/pattern/d but that deletes the entire input if /pattern/ is not matched anywhere. I can imagine writing a whole shell script that creates a temporary file, does a grep -q or something, and then decides how to process the input. I'd prefer to do this without messing around creating a temporary file, if possible. This needs to be efficient because git might call it frequently. | If your files are not too large to fit in memory, you could use perl to slurp the file: perl -0777pe 's/.*?PAT[^\n]*\n?//s' file Just change PAT to whatever pattern you're after. For example, given these two input files and the pattern 5 : $ cat file123451112131415$ cat file1 foobar$ perl -0777pe 's/.*?5[^\n]*\n?//s' file1112131415$ perl -0777pe 's/.*?10[^\n]*\n?//s' file1foobar Explanation -pe : read the input file line by line, apply the script given by -e to each line and print. -0777 : slurp the entire file into memory. s/.*?PAT[^\n]*\n?//s : remove everything until the 1st occurrence of PAT and until the end of the line. For larger files, I don't see any way to avoid reading the file twice. Something like: awk -vpat=5 '{ if(NR==FNR){ if($0~pat && !a){a++; next} if(a){print} } else{ if(!a){print} else{exit} } }' file1 file1 Explanation awk -vpat=5 : run awk and set the variable pat to 5 . if(NR==FNR){} : if this is the 1st file. if($0~pat && !a){a++; next} : if this line matches the value of pat and a is not defined, increment a by one and skip to the next line. if(a){print} : if a is defined (if this file matches the pattern), print the line. else{ } : if this is not the 1st file (so it's the second pass). if(!a){print} if a is not defined, we want the whole file, so print every line. else{exit} : if a is defined, we've already printed in the 1st pass so there's no need to reprocess the file. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/217988",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1232/"
]
} |
218,034 | I have a Debian 7 VPS setup. I just enabled SSH Key authentication and disabled password authentication but the disabling did not work. When I attempt to SSH into my VPS, it prompts me for my SSH Key password which then works fine, BUT if I hit cancel, it will give me "Agent admitted failure to sign" Error and then it prompts me for the current user's account password, I enter it in and it logs me in with my account password, even though it's disabled... Does anyone have any idea why it allows me to log in with password access? Thank you I am connecting with a 4096 bit key. Here is my sshd_config: Port 22# Use these options to restrict which interfaces/protocols sshd will bind to#ListenAddress ::#ListenAddress 0.0.0.0Protocol 2# HostKeys for protocol version 2HostKey /etc/ssh/ssh_host_rsa_keyHostKey /etc/ssh/ssh_host_dsa_keyHostKey /etc/ssh/ssh_host_ecdsa_key#Privilege Separation is turned on for securityUsePrivilegeSeparation yes# Lifetime and size of ephemeral version 1 server keyKeyRegenerationInterval 3600ServerKeyBits 768# LoggingSyslogFacility AUTHLogLevel INFO# Authentication:LoginGraceTime 120PermitRootLogin noStrictModes yesRSAAuthentication yesPubkeyAuthentication yes#AuthorizedKeysFile %h/.ssh/authorized_keys# Don't read the user's ~/.rhosts and ~/.shosts filesIgnoreRhosts yes# For this to work you will also need host keys in /etc/ssh_known_hostsRhostsRSAAuthentication no# similar for protocol version 2HostbasedAuthentication no# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication#IgnoreUserKnownHosts yes# To enable empty passwords, change to yes (NOT RECOMMENDED)PermitEmptyPasswords no# Change to yes to enable challenge-response passwords (beware issues with# some PAM modules and threads)ChallengeResponseAuthentication no# Change to no to disable tunnelled clear text passwords#PasswordAuthentication no# Kerberos options#KerberosAuthentication no#KerberosGetAFSToken no#KerberosOrLocalPasswd yes#KerberosTicketCleanup yes# GSSAPI options#GSSAPIAuthentication no#GSSAPICleanupCredentials yesX11Forwarding yesX11DisplayOffset 10PrintMotd noPrintLastLog yesTCPKeepAlive yes#UseLogin no#GSSAPIAuthentication no#GSSAPICleanupCredentials yesX11Forwarding yesX11DisplayOffset 10PrintMotd noPrintLastLog yesTCPKeepAlive yes#UseLogin no#MaxStartups 10:30:60#Banner /etc/issue.net# Allow client to pass locale environment variablesAcceptEnv LANG LC_*Subsystem sftp /usr/lib/openssh/sftp-server# Set this to 'yes' to enable PAM authentication, account processing,# and session processing. If this is enabled, PAM authentication will# be allowed through the ChallengeResponseAuthentication and# PasswordAuthentication. Depending on your PAM configuration,# PAM authentication via ChallengeResponseAuthentication may bypass# the setting of "PermitRootLogin without-password".# If you just want the PAM account and session checks to run without# PAM authentication, then enable this but set PasswordAuthentication# and ChallengeResponseAuthentication to 'no'.UsePAM yes | You only disabled ChallengeResponseAuthentication . Lines starting with # are comments and won't be interpreted as configuration, they are for humans to read. To disable all possibilities of logging in with a password you have to set PasswordAuthentication no AND ChallengeResponseAuthentication no There is a possible path through pam_unix to log in with a password. This will be disabled with the latter. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/218034",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124777/"
]
} |
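To confirm what sshd will actually enforce after the edit, OpenSSH's sshd can dump its effective configuration with the -T extended test mode; a short sketch:

```sh
# Show the effective values after config parsing:
sudo sshd -T | grep -Ei 'passwordauthentication|challengeresponseauthentication|usepam'
# Apply the change on Debian 7 (sysvinit):
sudo service ssh reload
```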
218,058 | We're moving websites from one server configuration to a new configuration and the websites will live in different paths than previously. We're planning to diligently go through and replace old paths with new paths, but in case we miss any, is there some way to monitor for any processes trying to access the old paths and also know what UID the process was owned by? | You can use this little systemtap script : #!/usr/bin/stapfunction proc:string() { return sprintf("PID(%d) UID(%d) PROC(%s)", pid(), uid(), execname()) }probe syscall.open.return, syscall.stat.return, syscall.open64.return ?, syscall.stat64.return ? { filename = user_string($filename) if ($return < 0) { printf("failed %s on %s by %s\n", pn(), proc(), filename) }} It will hook the syscalls open and stat (you can copy/paste the code, maybe I forgot some other syscalls) at the return. As syscalls are the only way to communicate with the kernel, you won't miss anything. This script will produce this kind of output : failed syscall.stat.return on PID(4203) UID(1000) PROC(bash) by /tmp/roflfailed syscall.stat.return on PID(4203) UID(1000) PROC(bash) by /tmp/hihi among the pros of using systemtap, we have : less intrusive for the process system-wide (not only the monitored process) but you can reduce its selection directly in the script less resource hungry (it only displays failed actions, not everything to be grepped afterwards) you can improve the script to get the details about the calling program (eg, its backtrace, time of call, etc...). It depends on your application. And for the cons : not standard, you have to install it (but standard enough to be available on most distributions). On Redhat & variants: sudo yum install systemtap you need to have the debuginfos to build the module. On Redhat & variants : sudo debuginfo-install kernel Some useful links : The tapset (included functions) index , and a beginners guide Good luck for your migration ! | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/218058",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74995/"
]
} |
218,060 | The modern recommendation is to use: $(...) instead of the older: `...` mainly for readability and to avoid the need for escaping when nesting. Which shells do not support the $(...) form, and are any of them still in use today? | The original Bourne shell, csh or tcsh all do not support $() and require `` for command substitution. The Bourne shell is no longer used as the default shell (/bin/sh) on many UNIX based operating systems. However, these shells are still supplied by the OS vendors as they are still used for historical reasons. Both AIX and HP-UX have /bin/sh as a POSIX compliant shell. Bourne shell is still available on AIX as 'bsh' (but deprecated in 1995 with AIX 4.1) and HP-UX 11.11 '/usr/old/bin/sh' (Bourne was removed from 11.23). I'm not sure when the switch was made, but it was many years ago. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/218060",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33967/"
]
} |
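The nesting difference mentioned in the question is easy to demonstrate; a small sketch:

```sh
# $(...) nests without any escaping:
echo "$(basename "$(dirname /usr/local/bin)")"    # prints: local
# The same nesting with backquotes needs escaped inner backquotes:
echo `basename \`dirname /usr/local/bin\``        # prints: local
```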
218,074 | I wanted to find out how many cores my system has, so I searched the same question in Google. I got some commands such as the lscpu command.When I tried this command, it gave me the following result: $ lscpuArchitecture: x86_64CPU op-mode(s): 32-bit, 64-bitByte Order: Little EndianCPU(s): 4On-line CPU(s) list: 0-3Thread(s) per core: 1Core(s) per socket: 4Socket(s): 1NUMA node(s): 1Vendor ID: GenuineIntelCPU family: 6Model: 23Stepping: 10CPU MHz: 1998.000BogoMIPS: 5302.48Virtualization: VT-xL1d cache: 32KL1i cache: 32KL2 cache: 2048KNUMA node0 CPU(s): 0-3 In particular, this output shows: CPU(s): 4 Core(s) per socket: 4 CPU family: 6 Which of those indicates cores of a Linux system? Is there any other command to tell the number of cores, or am I assuming it is completely wrong? | You have to look at sockets and cores per socket. In this case you have 1 physical CPU (socket) which has 4 cores (cores per socket). | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/218074",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124316/"
]
} |
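Two quick cross-checks for the same information; both count logical CPUs, which on this machine equals cores since Thread(s) per core is 1:

```sh
# Logical processors available:
nproc
# Count processor entries in /proc/cpuinfo:
grep -c '^processor' /proc/cpuinfo
```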
218,076 | There are different sources and different practices are suggested. I found the following proposals how often one should run fstrim. run weekly by cron run daily by cron run at each boot What is the optimal option and why? Ubuntu 14.04 uses the first option by default. | TRIM does at least three things: minimize write amplification prevent long-term performance degradation irrecoverably delete your data Now it depends where your priorities are. For 1), you should not be using fstrim at all, but make use of the discard option of your filesystem. Only if everything is trimmed instantly will the SSD stop copying no longer needed bits of data around. In practice though, it has been shown that preventing write amplification is not that important since SSD are fine with lots of writes. For 2), using fstrim weekly or even monthly is completely fine. There is no need to use instant discard, or to trim daily - that would be a short-term measure, but this is about keeping the SSD happy in the long-term. But it also depends on your usage: if your filesystem is always full and sees lots of writes, you might need to trim more regularly than if you tend to have lots of free space and not that much writes in your filesystems. For 3), you should not be using any kind of trim at all. Basically if you expect to be human, making errors, having accidents - like you just deleted your photo collection, whoops - recovery tools like photorec won't work after TRIM because with TRIM everything is gone forever. From a pure data recovery point of view, SSD is a huge headache. There's too much trim happening in Linux, even without asking you ( mkfs implies trim, lvremove / lvresize /... might if issue_discards , some partitioners might be having ideas, ...). Suddenly previously reversible actions become irreversible, all for the sake of getting a few more points in some filesystem benchmark... If you decide on fstrim you should know where the cron job is located so you can disable it when you have an accident, that way you get a compromise between 2) and 3). In general with SSD you should make sure you have good backups, they are even more important than with HDD since you have lesser chance of recovery on SSD. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/218076",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61003/"
]
} |
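For reference, a weekly cron-driven trim like the Ubuntu 14.04 default can be approximated with a small script; a minimal sketch (the mount points are assumptions, list your own SSD-backed filesystems):

```sh
#!/bin/sh
# /etc/cron.weekly/fstrim -- discard unused blocks on mounted filesystems.
# -v reports how many bytes were trimmed per filesystem.
for mnt in / /home; do
    fstrim -v "$mnt"
done
```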
218,084 | I am using different distros via VirtualBox. I stumbled on Arch Linux as a natural platform to do that. However, I am having the following issue: On my non-root account, post-installation... Attempting to ping -c 3 www.google.com results in "ping:unknown host www.google.com". Attempting to ping -c 3 8.8.8.8 results in "Network is unreachable". Attempting to sudo pacman -S alsa-utils results in "error: failed retrieving file '' from : Could not resolve host: " for all files. I am running a Windows 7 64-bit host and VirtualBox 4.3.28. I have a motherboard with an Intel ethernet NIC (this is the only one connected to my router and the only host OS-enabled adapter), a third-party ethernet NIC, and a WiFi adapter. Network settings in VirtualBox are defaults. Internet works for the host, all other VMs, and for the Arch Linux (2015.07.01) live installation (ping and downloads worked pre-installation). Here are the exact actions and commands I executed during installation (ignoring my notes). Edit: Pastie redacted the important line (46) xD; it reads " systemctl enable [email protected] ". These steps were taken from the Arch Linux Beginners' Guide and Lifehacker. Original thread | I found that eth0 was not the name of my interface. systemctl enable [email protected] solved the problem. Thank you very much. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/218084",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124804/"
]
} |
218,093 | I have file with n lines. (Each line refers to a “question”, andtherefore they are labeled Q.1 , Q.2 , Q.3 , ..., Q. n .) Each line (question) has a “Marks” attribute,which has the value 2, 3, 4, 5, or 6. There are n ⁄ 5 lines with each value. For example: A 10-line file (i.e., n =10) might look like amol@mypc:~$ cat questions.txtQ.1 2 MarksQ.2 5 MarksQ.3 4 MarksQ.4 3 MarksQ.5 6 MarksQ.6 4 MarksQ.7 3 MarksQ.8 2 MarksQ.9 6 MarksQ.10 5 Marks I know I can split this into five homogeneous (i.e., all the same) fileswith something like amol@mypc:~$ grep " 2 Marks" questions.txt > questions2Marks.txtamol@mypc:~$ grep " 3 Marks" questions.txt > questions3Marks.txtamol@mypc:~$ grep " 4 Marks" questions.txt > questions4Marks.txtamol@mypc:~$ grep " 5 Marks" questions.txt > questions5Marks.txtamol@mypc:~$ grep " 6 Marks" questions.txt > questions6Marks.txt Each of the resulting files will have n ⁄ 5 lines. I want to do the inverse operation –i.e., produce a transpose of the above result. I want to split my questions.txt fileinto n ⁄ 5 files: questions1.txt , questions2.txt , questions3.txt , ..., questions M .txt (using M to represent n ⁄ 5 ) where each fileis five lines long and is heterogeneous (i.e., all different). questions1.txt should contain the first line in questions.txt with 2 Marks , the first line in questions.txt with 3 Marks , the first line in questions.txt with 4 Marks , the first line in questions.txt with 5 Marks , and the first line in questions.txt with 6 Marks , in that order. questions2.txt should contain the second line of each, etc. So, for n =10, M obviously is 2. I would want my example questions.txt from above broken down into these two files: amol@mypc:~$ cat questions1.txt Q.1 2 MarksQ.4 3 MarksQ.3 4 MarksQ.2 5 MarksQ.5 6 Marksamol@mypc:~$ cat questions2.txt Q.8 2 MarksQ.7 3 MarksQ.6 4 MarksQ.10 5 MarksQ.9 6 Marks How can I achieve that using *nix tools(sed, awk, perl, shell script, etc...)? | sort -n -k2 -k1.3 file | awk '{$2!=a?x=1:x++} {print > "file"x; a=$2}' First , we need to sort the file correctly. -n sorts the file numerically, -k2 sorts according to the second field (the marks 2-6), -k1.3 then sorts within this order the first field starting from the 3rd character numerically (irgnoring the leading Q. ). Now awk splits the output between ascending files (file1, file2, file3, filen....). The output looks like this, file1 : $ cat file1Q.1 2 MarksQ.4 3 MarksQ.3 4 MarksQ.2 5 MarksQ.5 6 Marks And file2 : $ cat file2Q.8 2 MarksQ.7 3 MarksQ.6 4 MarksQ.10 5 MarksQ.9 6 Marks | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/218093",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89599/"
]
} |
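Because every output file holds exactly five lines, the same result can be had by sorting once and cutting the stream into fixed-size chunks; a sketch assuming GNU split (for -d and --additional-suffix):

```sh
# Produces questions00.txt, questions01.txt, ... each with 5 sorted lines.
sort -n -k2 -k1.3 questions.txt |
    split -l 5 -d --additional-suffix=.txt - questions
```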
218,122 | I have a problem with a bash script on raspberry pi: x='gpio -g read 22'if [ $x -ge 1 ]thengpio -g write 23 1fi The error is integer expression expected . Why? | That's because you are checking whether the string gpio -g read 22 is greater than 1. Since gpio -g read 22 is not a number, you get that error. You don't explain what you are trying to do but I'm guessing you want to compare the output of the gpio command. To do that, you need to enclose the command in $() or backticks ( `` ): x=$(gpio -g read 22)if [ "$x" -ge 1 ]then gpio -g write 23 1fi Or, more simply: [ "$(gpio -g read 22)" -ge 1 ] && gpio -g write 23 1 The assignment foo='command' doesn't run command . The variable foo takes the value of the string command and not its output. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/218122",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124824/"
]
} |
218,125 | I am trying to import a mysqldump file into my mysql database, but my (putty) connection to the server keeps timing out halfway through. I tried to use nohup . . . . & but this doesn't appear to work. My Command is nohup sudo mysql -hlocalhost -P3306 -uxxxxxx -pxxxxxx < /var/lib/mysql/backups/dump.sql & Which I believe should run it in the background, and persist after my connection to the server dies.However, when I then type jobs to see if it is running, it says: [1] Stopped nohup sudo mysql -hlocalhost -P3306 -uxxxxx -pxxxxx < /var/lib/mysql/backups/dump.sql Am I doing something wrong, or have I completely misunderstood the purpose of nohup . . . . & | sudo probably asks for a password and nohup disconnects the process from the controlling terminal. Use that instead: sudo sh -c 'nohup mysql -hlocalhost -P3306 -uxxxxxx -pxxxxxx < /var/lib/mysql/backups/dump.sql &' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/218125",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102428/"
]
} |
218,163 | How to install Cuda Toolkit 7.0 or 8 on Debian 8? I know that Debian 8 comes with the option to download and install CUDA Toolkit 6.0 using apt-get install nvidia-cuda-toolkit , but how do you do this for CUDA toolkit version 7.0 or 8? I tried installing using the Ubuntu installers, as described below: sudo wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1404/x86_64/cuda-repo-ubuntu1404_7.0-28_amd64.debdpkg -i cuda-repo-ubuntu1404_7.0-28_amd64.debsudo apt-get updatesudo apt-get install -y cuda However it did not work and the following message was returned: Reading package lists... DoneBuilding dependency tree Reading state information... DoneSome packages could not be installed. This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: cuda : Depends: cuda-7-0 (= 7.0-28) but it is not going to be installedE: Unable to correct problems, you have held broken packages. | The following instructions are valid for CUDA 7.0, 7.5, and several previous (and probably later) versions. As far as Debian distributions, they're valid for Jessie and Stretch and probably other versions. They assume an amd64 (x86_64) architecture, but you can easily adapt them for x86 (x86_32). Installation prerequisites g++ - You should use the newest GCC version supported by your version of CUDA. For CUDA 7.x this would be version 4.9.3, last of the 4.x line; for CUDA 8.0, GCC 5.x versions are supported. If your distribution uses GCC 5.x by default, use that, otherwise GCC 5.4.0 should do. Earlier versions are usable but I wouldn't recommend them, if only for the better modern-C++ feature support for host-side code. gcc - comes with g++. I even think CMake might default to having nvcc invoke gcc rather than g++ in some cases with a -x switch (but not sure about this). libGLU - Mesa OpenGL libraries (+ development files?) libXi - X Window System Xinput extension libraries (+ development files?) libXmu - X Window System "miscellaneous utilities" library (+ development files?) Linux kernel - headers for the kernel version you're running. If you want a list of specific packages - well, that depends on exactly which distribution you're using. But you can try the following (for CUDA 7.x): sudo apt-get install gcc g++ gcc-4.9 g++-4.9 libxi libxi6 libxi-dev libglu1-mesa libglu1-mesa-dev libxmu6 libxmu6-dev linux-headers-amd64 linux-source And you might add some -dbg versions of those packages for debugging symbols. I'm pretty sure this covers it all - but I might have missed something I just had installed already. Also, CUDA can work with clang , at least experimentally, but I haven't tried that. Installing the CUDA kernel driver Go to NVIDIA's CUDA Downloads page . Choose Linux > x86_64 > Ubuntu , and then whatever latest version they have (at the time of writing: Ubuntu 15.04). Choose the .run file option. Download the .run file (currently this one ). Make sure not to put it in /tmp . Make the .run file executable: chmod a+x cuda_7.5.18_linux.run . Become root. Execute the .run file: Pretend to accept their silly shrink-wrap license; say "yes" to installing just the NVIDIA kernel driver, and say "no" to everything else. 
The installation should tell you it expects to have installed the NVIDIA kernel driver, but that you should reboot before continuing/retrying the toolkit installation. So... Having apparently succeeded, reboot. Installing CUDA itself Be root. Locate and execute cuda_7.5.18_linux.run This time around, say No to installing the driver, but Yes to installing everything else, and accept the default paths (or change them, whatever works for you). The installer is likely to now fail . That is a good thing assuming it's the kind of failure we expect: It should tell you your compiler version is not supported - CUDA 7.0 or 7.5 supports up to gcc 4.9 and you have some 5.x version by default. Now, if you get a message about missing libraries , that means my instructions above regarding prerequisites somehow failed, and you should comment here so I can fix them. Assuming you got the "good failure", proceed to: Re-invoke the .run file, this time with the --override option. Make the same choices as in step 11. CUDA should now be installed, by default under /usr/local/cuda (that's a symlink). But we're not done! Directing NVIDIA's nvcc compiler to use the right g++ version NVIDIA's CUDA compiler actually calls g++ as part of the linking process and/or to compile actual C++ rather than .cu files. I think. Anyway, it defaults to running whatever's in your path as g++ ; but if you place another g++ under /usr/local/cuda/bin , it will use that first! So... Execute symlink /usr/bin/g++-4.9 /usr/local/cuda/bin/g++ (and for good measure, maybe also symlink /usr/bin/gcc-4.9 /usr/local/cuda/bin/gcc . That's it. Trying out the installation cd /root/NVIDIA_CUDA-7.5_Samples/0_Simple/vectorAdd make The build should conclude successfully, and when you do ./vectorAdd you should get the following output: root@mymachine:~/NVIDIA_CUDA-7.5_Samples/0_Simple/vectorAdd# ./vectorAdd[Vector addition of 50000 elements]Copy input data from the host memory to the CUDA deviceCUDA kernel launch with 196 blocks of 256 threadsCopy output data from the CUDA device to the host memoryTest PASSEDDone Notes You don't need to install the NVIDIA GDK (GPU Development Kit), but it doesn't hurt and it might be useful for some. Install it to the root directory of your system; it's pretty safe and there's an uninstaller afterwards: /usr/bin/uninstall_gdk.pl . In CUDA 8 it's already integrated into CUDA itself IIANM. Do not install additional packages with names like nvidia-... or cuda... ; they might not hurt but they'll certainly not help. Before doing any of these things, you might want to make sure your GPU is recognized at all, using lspci | grep -i nvidia . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/218163",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124679/"
]
} |
218,169 | To launch a root shell on machines where the root account is disabled, you can run one of: sudo -i : run an interactive login shell (reads /root/.bashrc and /root/.profile ) sudo -s : run a non-login interactive shell (reads /root/.bashrc ) In the Ubuntu world, I very often see sudo su suggested as a way to get a root shell. Why run two separate commands when one will do? As far as I can tell, sudo -i is equivalent to sudo su - and sudo -s is the same as sudo su . The only differences seem to be (comparing sudo -i on the left and sudo su - on the right): And comparing sudo -s (left) and sudo su (right): The main differences (ignoring the SUDO_foo variables and LS_COLORS ) seem to be the XDG_foo system variables in the sudo su versions. Are there any cases where that difference warrants using the rather inelegant sudo su ? Can I safely tell people (as I often have) that there's never any point in running sudo su or am I missing something? | As you stated in your question, the main difference is the environment. sudo su - vs. sudo -i In case of sudo su - it is a login shell, so /etc/profile , .profile and .bashrc are executed and you will find yourself in root's home directory with root's environment. sudo -i is nearly the same as sudo su - The -i (simulate initial login) option runs the shell specified by the password database entry of the target user as a login shell. This means that login-specific resource files such as .profile , .bashrc or .login will be read and executed by the shell. sudo su vs. sudo -s sudo su calls sudo with the command su . Bash is called as interactive non-login shell. So bash only executes .bashrc . You can see that after switching to root you are still in the same directory: user@host:~$ sudo suroot@host:/home/user# sudo -s reads the $SHELL variable and executes the content. If $SHELL contains /bin/bash it invokes sudo /bin/bash , which means that /bin/bash is started as non-login shell, so all the dot-files are not executed, but bash itself reads . bashrc of the calling user. Your environment stays the same. Your home will not be root's home. So you are root, but in the environment of the calling user. Conclusion The -i flag was added to sudo in 2004 , to provide a similar function to sudo su - , so sudo su - was the template for sudo -i and meant to work like it. I think it doesn't really matter which you use, unless the environment isn't important. Addition A basic point that must be mentioned here is that sudo was designed to run only one single command with higher privileges and then drop those privileges to the original ones. It was never meant to really switch the user and leave open a root shell. Over the time, sudo was expanded with such mechanisms, because people were annoyed about why to use sudo in front of every command. So the meaning of sudo was abused. sudo was meant to encourage the user to minimize the use of root privileges. What we have now, is sudo becomes more and more popular. It is integrated in nearly every well known linux distribution. The original tool to switch to another user account is su . For an old school *nix veteran such thing like sudo might seem needless. It adds complexity and behaves more likely to the mechanisms we know from Microsofts os-family, and thus is in contrary to the philosophy of simplicity of *nix systems. I'm not really a veteran, but also in my opinion sudo was always a thorn in my side, from the time is was introduced and I always worked around the usage of sudo , if it was possible. 
I am most reluctant to use sudo . On all my systems, the root account is enabled. But things change, maybe the time will come, when su will be deprecated and sudo replaces su completely. Therefore I think, it will be the best to use sudo 's internal mechanisms ( -s , -i ) instead of relying on an old tool such as su . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/218169",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22222/"
]
} |
218,172 | I have a bunch of directories in which there are 3 .csv files with different names. For example, in my directories aa bb cc dd there are 3 files in each: aa: EA_sing_aa.csv EA_ska_aa.csv EA_tat_aa.csvbb: EA_sing_bb.csv EA_ska_bb.csv EA_tat_bb.csvcc: EA_sing_cc.csv EA_ska_cc.csv EA_tat_cc.csvdd: EA_sing_dd.csv EA_ska_dd.csv EA_tat_dd.csv I want to add the name of each file to a new column as row names to each files and then combine all EA_sing*.csv files together and combine all EA_ska*.csv files together and also combine all EA_tat*.csv files together!my out put will be just 3 files: 1) EA_sing.csv ##the first column for the rows from EA_sing_aa.csv file will be aa and for the rows from EA_sing_bb.csv will be bb and for the rows from EA_sing_cc.csv will be cc..... ## 2) EA_ska.csv3) EA-tat.csv How can I do this in *nix?Thanks | As you stated in your question, the main difference is the environment. sudo su - vs. sudo -i In case of sudo su - it is a login shell, so /etc/profile , .profile and .bashrc are executed and you will find yourself in root's home directory with root's environment. sudo -i is nearly the same as sudo su - The -i (simulate initial login) option runs the shell specified by the password database entry of the target user as a login shell. This means that login-specific resource files such as .profile , .bashrc or .login will be read and executed by the shell. sudo su vs. sudo -s sudo su calls sudo with the command su . Bash is called as interactive non-login shell. So bash only executes .bashrc . You can see that after switching to root you are still in the same directory: user@host:~$ sudo suroot@host:/home/user# sudo -s reads the $SHELL variable and executes the content. If $SHELL contains /bin/bash it invokes sudo /bin/bash , which means that /bin/bash is started as non-login shell, so all the dot-files are not executed, but bash itself reads . bashrc of the calling user. Your environment stays the same. Your home will not be root's home. So you are root, but in the environment of the calling user. Conclusion The -i flag was added to sudo in 2004 , to provide a similar function to sudo su - , so sudo su - was the template for sudo -i and meant to work like it. I think it doesn't really matter which you use, unless the environment isn't important. Addition A basic point that must be mentioned here is that sudo was designed to run only one single command with higher privileges and then drop those privileges to the original ones. It was never meant to really switch the user and leave open a root shell. Over the time, sudo was expanded with such mechanisms, because people were annoyed about why to use sudo in front of every command. So the meaning of sudo was abused. sudo was meant to encourage the user to minimize the use of root privileges. What we have now, is sudo becomes more and more popular. It is integrated in nearly every well known linux distribution. The original tool to switch to another user account is su . For an old school *nix veteran such thing like sudo might seem needless. It adds complexity and behaves more likely to the mechanisms we know from Microsofts os-family, and thus is in contrary to the philosophy of simplicity of *nix systems. I'm not really a veteran, but also in my opinion sudo was always a thorn in my side, from the time is was introduced and I always worked around the usage of sudo , if it was possible. I am most reluctant to use sudo . On all my systems, the root account is enabled. 
But things change, maybe the time will come, when su will be deprecated and sudo replaces su completely. Therefore I think, it will be the best to use sudo 's internal mechanisms ( -s , -i ) instead of relying on an old tool such as su . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/218172",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124858/"
]
} |
218,174 | I have several VMs and right now my command-line prompt looks like -bash-3.2$ ; identical on every VM, because it doesn't contain the host name. I always need to check which VM I'm on using hostname before I do any operation. How can I add the host name to the shell prompt? ENV:CentOS/ssh | Just change the value of the PS1 shell variable: PS1="\h$ " where \h is replaced with the hostname. Add that to /etc/bash.bashrc to make it permanent. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/218174",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/97013/"
]
} |
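Note that on CentOS the system-wide file is /etc/bashrc rather than Debian's /etc/bash.bashrc; per-user, ~/.bashrc works on both. A slightly fuller prompt using the other common escapes, as a sketch:

```sh
# \u = user, \h = short hostname, \w = current working directory
PS1='\u@\h:\w\$ '    # e.g.  root@vm01:/etc$
```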
218,270 | I've found some special parameters in Bash that start with $ (the dollar sign). For example, when I wanted to know the exit status, I used $? . For getting the process ID, there's $$ . What are the special Bash (shell) parameters and their usage? | Referring to 3.4.2 Special Parameters from the Bash Reference Manual . Special Parameters: * ( $* ) Expands to the positional parameters, starting from one. When the expansion is not within double quotes, each positional parameter expands to a separate word. In contexts where it is performed, those words are subject to further word splitting and pathname expansion. When the expansion occurs within double quotes, it expands to a single word with the value of each parameter separated by the first character of the IFS special variable. That is, "$*" is equivalent to "$1c$2c…" , where c is the first character of the value of the IFS variable. If IFS is unset, the parameters are separated by spaces. If IFS is null, the parameters are joined without intervening separators. @ ( $@ ) Expands to the positional parameters, starting from one. When the expansion occurs within double quotes, each parameter expands to a separate word. That is, "$@" is equivalent to "$1" "$2" … . If the double-quoted expansion occurs within a word, the expansion of the first parameter is joined with the beginning part of the original word, and the expansion of the last parameter is joined with the last part of the original word. When there are no positional parameters, "$@" and $@ expand to nothing (i.e., they are removed). # ( $# ) Expands to the number of positional parameters in decimal. ? ( $? ) Expands to the exit status of the most recently executed foreground pipeline. - ( $- , a hyphen.) Expands to the current option flags as specified upon invocation, by the set builtin command, or those set by the shell itself (such as the -i option). $ ( $$ ) Expands to the process ID of the shell. In a () subshell, it expands to the process ID of the invoking shell, not the subshell. ! ( $! ) Expands to the process ID of the job most recently placed into the background, whether executed as an asynchronous command or using the bg builtin (see Job Control Builtins ). 0 ( $0 ) Expands to the name of the shell or shell script. This is set at shell initialization. If Bash is invoked with a file of commands (see Shell Scripts ), $0 is set to the name of that file. If Bash is started with the -c option (see Invoking Bash ), then $0 is set to the first argument after the string to be executed, if one is present. Otherwise, it is set to the filename used to invoke Bash, as given by argument zero. This can also be printed from the man page of Bash: $ man bash | awk '/Special Parameters$/','/Shell Variables$/' The above are the same as the special parameters defined in POSIX . In addition, there are the positional parameters $1 , $2 , ... that contain the command line arguments to the shell or the current function ( 3.4.1 Positional Parameters ). They are also a POSIX feature. Older versions of Bash also listed $_ as a special parameter, but it's now listed among other variables set by the shell ( 5.2 Bash Variables ). $_ is not POSIX and other shells may not support it. _ ( $_ , an underscore.) At shell startup, set to the absolute pathname used to invoke the shell or shell script being executed as passed in the environment or argument list. Subsequently, expands to the last argument to the previous command, after expansion. 
Also set to the full pathname used to invoke each command executed and placed in the environment exported to that command. When checking mail, this parameter holds the name of the mail file. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/218270",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66803/"
]
} |
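A tiny script that exercises the most common special parameters; a sketch (save it and run with a couple of arguments):

```sh
#!/bin/bash
# Usage: ./params.sh foo bar
echo "script name (\$0): $0"
echo "arg count   (\$#): $#"
echo "all args    (\$@): $@"
echo "shell PID   (\$\$): $$"
false
echo "last status (\$?): $?"   # 1, the exit status of 'false'
sleep 1 &
echo "last bg PID (\$!): $!"
```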
218,272 | Currently I invoke the following: $ ssh [email protected] my_cmd This is slow and not easy to automate safely. I would like to establish an ssh connection once and have some script that will forward my commands to host.com and print the output. Is that possible ? Adding my machine to authorized_keys is not an option for me and it wouldn't solve the slowness issue. | The feature is called ControlMaster which does multiplexing over one existing channel. It causes ssh to do all of the key exchange and login work only once; thus, later commands will go through much faster. You activate it using these three lines in your .ssh/config : Host host.com ControlMaster auto ControlPath ~/.ssh/master-%C # for openssh < 6.7 you need to use this one: # ControlPath ~/.ssh/master-%r@%h-%p ControlPersist 5m You can adjust it to your needs; one alternative is that you could open one master connection that stays open during your other commands; then you would not need ControlPersist . There are many ways to tweak this feature, but make sure you store your ControlPath socket in a safe place, not readable by other users, otherwise it could be misused. More info can be found in the ssh_config(5) manual page. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/218272",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124921/"
]
} |
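With that configuration in place, ssh's -O control commands manage the shared master explicitly; a short sketch (user@host.com is a placeholder):

```sh
ssh user@host.com my_cmd      # first call sets up the master (slow)
ssh user@host.com other_cmd   # reuses the channel (fast)
ssh -O check user@host.com    # is a master connection running?
ssh -O exit  user@host.com    # close it before the 5m ControlPersist expires
```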
218,460 | I convinced myself to learn Lisp. I'd like to know of a way to install a Lisp interpreter on Debian (Jessie). I was reading about clisp but it is not in the Debian repositories. | SBCL is included in Debian too, and it's a really popular Common Lisp implementation. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/218460",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72530/"
]
} |
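A minimal sketch of getting started on Jessie:

```sh
# Install SBCL from the Debian repositories:
sudo apt-get install sbcl
# Run a single expression non-interactively:
sbcl --non-interactive --eval '(format t "hello, lisp~%")'
# Or just run 'sbcl' for the interactive REPL.
```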
218,472 | I know the seq command generates a sequence of integers, one per line, but I would like to ask two questions: Is it possible to write the numbers of the sequence on the same line? Is it possible to create a string made of the sequence of numbers separated by a white space? | GNU seq takes a separator ( -s ) option : $ seq -s ' ' 1 51 2 3 4 5$ var="$(seq -s ' ' 1 5)"$ echo "$var"1 2 3 4 5 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/218472",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125055/"
]
} |
218,503 | I'd like to log standard output and standard error separately in .xprofile using logger . In Bash I think that would look something like this: exec 1> >(logger --priority user.notice --tag $(basename $0)) \ 2> >(logger --priority user.error --tag $(basename $0)) How would I do that in a POSIX /bin/sh compatible manner? | There's no POSIX equivalent. You can only perform a redirection with exec , not a fork. A pipe requires a fork, and the shell waits for the child to finish. One solution is to put all your code in a function. all_my_code () { …}{ all_my_code | logger --priority user.notice --tag "$(basename "$0")"; } 2>&1 | logger --priority user.error --tag "$(basename "$0")" (This also logs any error from the stdout instance of logger to the stderr instance. You can avoid this with more file descriptor shuffling.) If you want the parent shell to exit even if the logger processes are still running, put & at the end of the logger invocations. { all_my_code | logger --priority user.notice --tag "$(basename "$0")" & } 2>&1 | logger --priority user.error --tag "$(basename "$0")" & Alternatively, you can use named pipes. pipe_dir=$(mktemp -d)mkfifo "$pipe_dir/out" "$pipe_dir/err"<"$pipe_dir/out" logger --priority user.notice --tag "$(basename "$0")" &<"$pipe_dir/err" logger --priority user.error --tag "$(basename "$0")" &exec >"$pipe_dir/out" 2>"$pipe_dir/err" …rm -r "$pipe_dir" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/218503",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3645/"
]
} |
218,511 | I have the following Z-shell script: compiler=clang++standard=-std=c++11warnings="-Wall -Wextra -Wpedantic"${compiler} ${warnings} ${standard} -o ${1} ${1}.cpp This does not work as the ${warnings} variable appears to be seen as "-Wall -Wextra -Wpedantic" - that is one long warning with spaces, versus three separate warnings. I did some searching and was able to get it to work with eval: eval "${compiler} ${warnings} ${standard} -o ${1} ${1}.cpp" However, I am confused as to why it is necessary, and also is there some other way to correct this issue. EDIT: In addition to doing it as shown in Peter O. 's answer, I found that you can do: setopt shwordsplit to get the Z-shell to behave like other Bourne shell derivatives. As they say in their FAQ : In most Bourne-shell derivatives, multiple-word variables such as var="foo bar" are split into words when passed to a command or used in a for foo in $var loop. By default, zsh does not have that behaviour: the variable remains intact. (This is not a bug! See below.) The option SH_WORD_SPLIT exists to provide compatibility. | Set your Warning options as an array. "${warnings[@]}" generates 3 individual words warnings=(-Wall -Wextra -Wpedantic)"${compiler}" "${warnings[@]}" "${standard}" -o "${1}" "${1}.cpp" Or, if you find it more legible, you can create the array without -W 's, and then add -W 's via how you present the array on the command line. warnings=( all extra pedantic )"${compiler}" "${warnings[@]/#/-W}" "${standard}" -o "${1}" "${1}.cpp" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/218511",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21205/"
]
} |
218,514 | Can I tell xxd to not print any line breaks in its output and have my dump as one continuous line? [user@localhost] : ~ $ echo -n "this is a long line and xxd is going to take multiple lines to print it" | xxd -p746869732069732061206c6f6e67206c696e6520616e642078786420697320676f696e6720746f2074616b65206d756c7469706c65206c696e657320746f207072696e74206974 | What you need is the -c option. # echo -n "this is a long line and xxd will print it as one line" | xxd -p -c 1000000746869732069732061206c6f6e67206c696e6520616e64207878642077696c6c207072696e74206974206173206f6e65206c696e65 Here is some info from the documentation : -c cols | -cols cols format octets per line. Default 16 (-i: 12, -ps: 30, -b: 6). Max 256. The documentation says that the max value for the "c" parameter is 256, but I tried greater values and it worked. Check it out: # xxd -c 1000000 -p -l 1000000 /dev/urandom | wc -c2000001 Here I dump one million bytes from /dev/urandom and I get a string of 2 million + 1 characters. Each byte from /dev/urandom is represented by 2 characters and the additional byte is the final newline. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/218514",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89986/"
]
} |
218,557 | I am facing some issue with creating soft links. Following is the original file. $ ls -l /etc/init.d/jboss-rwxr-xr-x 1 askar admin 4972 Mar 11 2014 /etc/init.d/jboss Link creation is failing with a permission issue for the owner of the file: ln -sv jboss /etc/init.d/jboss1ln: creating symbolic link `/etc/init.d/jboss1': Permission denied$ iduid=689(askar) gid=500(admin) groups=500(admin) So, I created the link with sudo privileges: $ sudo ln -sv jboss /etc/init.d/jboss1`/etc/init.d/jboss1' -> `jboss'$ ls -l /etc/init.d/jboss1 lrwxrwxrwx 1 root root 11 Jul 27 17:24 /etc/init.d/jboss1 -> jboss Next I tried to change the ownership of the soft link to the original user. $ sudo chown askar.admin /etc/init.d/jboss1$ ls -l /etc/init.d/jboss1lrwxrwxrwx 1 root root 11 Jul 27 17:24 /etc/init.d/jboss1 -> jboss But the permission of the soft link is not getting changed. What am I missing here to change the permission of the link? | On a Linux system, when changing the ownership of a symbolic link using chown , by default it changes the target of the symbolic link (ie, whatever the symbolic link is pointing to ). If you'd like to change ownership of the link itself, you need to use the -h option to chown : -h, --no-dereference affect each symbolic link instead of any referenced file (useful only on systems that can change the ownership of a symlink) For example: $ touch test$ ls -l test*-rw-r--r-- 1 mj mj 0 Jul 27 08:47 test$ sudo ln -s test test1$ ls -l test*-rw-r--r-- 1 mj mj 0 Jul 27 08:47 testlrwxrwxrwx 1 root root 4 Jul 27 08:47 test1 -> test$ sudo chown root:root test1$ ls -l test*-rw-r--r-- 1 root root 0 Jul 27 08:47 testlrwxrwxrwx 1 root root 4 Jul 27 08:47 test1 -> test Note that the target of the link is now owned by root. $ sudo chown mj:mj test1$ ls -l test*-rw-r--r-- 1 mj mj 0 Jul 27 08:47 testlrwxrwxrwx 1 root root 4 Jul 27 08:47 test1 -> test And again, the link test1 is still owned by root, even though test has changed. $ sudo chown -h mj:mj test1$ ls -l test*-rw-r--r-- 1 mj mj 0 Jul 27 08:47 testlrwxrwxrwx 1 mj mj 4 Jul 27 08:47 test1 -> test And finally we change the ownership of the link using the -h option. | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/218557",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37444/"
]
} |
218,595 | In the ~ directory of the root user on my debian wheezy server regularly appears file named dead.letter with (currently) the following content: orion : Jul 25 10:17:31 : root : unable to resolve host orionorion : Jul 26 02:17:18 : root : unable to resolve host orionorion : Jul 26 21:17:19 : root : unable to resolve host orion orion is the hostname of the server (and can normally be resolved since I have various services/programs using this hostname without problems). After some searching I figured that there is a cron job running hourly, i.e. 17 * * * * root cd / && run-parts --report /etc/cron.hourly which could explain why those errors only appear 17 minutes after the full hour. The only script in /etc/cron.hourly is fake-hwclock with the following content: #!/bin/sh## Simple cron script - save the current clock periodically in case of# a power failure or other crashif (command -v fake-hwclock >/dev/null 2>&1) ; then fake-hwclock savefi Can this produce those mysterious dead.letter ? And why seems fake-hwclock save try to resolve the hostname? Edit: Some more information. Input of /etc/hosts : 127.0.0.1 localhost::1 localhost ip6-localhost ip6-loopbackfe00::0 ip6-localnetff00::0 ip6-mcastprefixff02::1 ip6-allnodesff02::2 ip6-allrouters | Change the following line in /etc/hosts 127.0.0.1 localhost to 127.0.0.1 localhost orion Your MTA was unable to resolve the domain name of your machine. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/218595",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93225/"
]
} |
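A quick way to verify the fix above, assuming the hostname really is orion (adjust the name to your machine):

hostname              # should print: orion
getent hosts orion    # should now resolve via /etc/hosts
sudo true             # should no longer warn "unable to resolve host orion"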
218,668 | I always use either rsync or scp in order to copy files from/to a remote machine. Recently, I discovered in the manual of scp ( man scp ) the flag -C -C Compression enable. Passes the -C flag to ssh(1) to enable compression. Before I discovered this flag, I used to zip first and then scp . Is it as efficient to just use -C as zipping and unzipping? When does using one or the other make the transfer faster? | It's never really going to make any big difference, but zipping the file before copying it ought to be a little less efficient, since using a container format such as zip that can encapsulate multiple files (like tar ) is unnecessary, and it is not possible to stream zip input and output (so you need a temporary file). Using gzip instead of zip, on the other hand, ought to be exactly the same, since it's what ssh -C does under the hood... except that gzipping yourself is more work than just using ssh -C . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/218668",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/114428/"
]
} |
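For illustration, the two approaches side by side (file name and host are placeholders):

# Per-connection compression, as recommended above:
scp -C bigfile user@host:/tmp/
# Roughly what it does under the hood, spelled out with gzip and ssh;
# the stream is compressed on the fly, with no temporary file:
gzip -c bigfile | ssh user@host 'gzip -d > /tmp/bigfile'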
218,673 | How is install different from a simple copy, cp or dd ? I just compiled a little utility and want to add it to /usr/sbin so it becomes available via my PATH variable. Why use one vs the other? | To "install" a binary compiled from source the best-practice would be to put it under the directory: /usr/local/bin On some systems that path is already in your PATH variable, if not you can add it by adapting the PATH variable in one of your profile configuration files ~/.bashrc ~/.profile PATH=${PATH}:/usr/local/bin dd is a low level copy tool that is mostly used to copy exactly sized blocks of the source which could be for example a file or a device. cp is the common command to copy files and directories also recursively with the option -r and by preserving the permissions with the option -p . install is mostly similar to cp but provides additionally the option to set the destination file properties directly without having to use chmod separately. cp your files to /usr/local/bin and adapt the PATH variable if needed. That's what I would do. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/218673",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32951/"
]
} |
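A small sketch of the difference in practice (myutil is a hypothetical binary name):

# install copies the file and sets mode and ownership in one step:
sudo install -m 755 -o root -g root myutil /usr/local/bin/myutil
# which is roughly equivalent to this three-command sequence:
sudo cp myutil /usr/local/bin/myutil
sudo chown root:root /usr/local/bin/myutil
sudo chmod 755 /usr/local/bin/myutil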
218,715 | To my understanding, the script below runs if the user is not root by comparing $EUID and 0 . Then, it uses [[ -t 1 ]] to decide if the script is running in a terminal or not. If it is, it will use sudo to prompt the user for a password. Otherwise, it will envoke gksudo to do so. if (($EUID != 0)); then if [[ -t 1 ]]; then sudo "$0" "$@" else exec 1>output_file && rm output_file gksu "$0 $@" fi exitfi What is [[ -t 1 ]] comparing/evaluating? | The test [[ -t 1 ]] returns true if File descriptor 1 (STDOUT) is opened on the terminal, otherwise false. From help test in bash : -t FD True if FD is opened on a terminal. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/218715",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
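A minimal demonstration of the test (check.sh is a hypothetical name):

#!/bin/bash
if [[ -t 1 ]]; then
    echo "stdout is a terminal"      # e.g. when run as: ./check.sh
else
    echo "stdout is not a terminal"  # e.g. when run as: ./check.sh | cat
fi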
218,721 | When I run the ls command I get files like opencv.sh~ in my output which are not visible if I check the home directory. Basically, it also lists hidden files without supplying any other parameter. How can I prevent this? My dist is Ubuntu 14.04 | Files ending in ~ are not hidden files; they are backup copies left behind by editors such as gedit or Emacs. Hidden files are those whose names start with a dot, and ls omits only those by default, which is why your file manager (which typically hides backup files as well) shows fewer entries than ls does. With GNU ls you can hide them too: ls -B (long form --ignore-backups ) does not list entries ending with ~. Alternatively, disable the creation of backup files in your editor, or remove them with rm -- *~ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/218721",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125208/"
]
} |
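A short demonstration of the behaviour described above:

$ touch opencv.sh opencv.sh~
$ ls
opencv.sh  opencv.sh~
$ ls -B          # GNU ls; long form: --ignore-backups
opencv.sh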
218,747 | Here is what I have in ls -al /etc/nginx : total 52drwxr-xr-x. 4 root root 4096 Jul 28 04:16 .drwxr-xr-x. 78 root root 8192 Jul 28 03:37 ..drwxr-xr-x. 2 root root 26 Jul 28 03:55 conf.ddrwxr-xr-x. 2 root root 6 May 10 09:21 default.d-rw-r--r--. 1 root root 1034 May 10 09:21 fastcgi.conf-rw-r--r--. 1 root root 964 May 10 09:21 fastcgi_params-rw-r--r--. 1 root root 2837 May 10 09:21 koi-utf-rw-r--r--. 1 root root 2223 May 10 09:21 koi-win-rw-r--r--. 1 root root 3957 May 10 09:21 mime.types-rw-r--r--. 1 root root 1033 Jul 28 03:43 nginx.conf-rw-r--r--. 1 root root 596 May 10 09:21 scgi_params-rw-r--r--. 1 root root 623 May 10 09:21 uwsgi_params-rw-r--r--. 1 root root 3610 May 10 09:21 win-utf This is what I see /var/log/nginx/error.log after sudo service nginx start : [emerg] 20360#0: open() "/etc/nginx/conf.d/foo.conf" failed(13: Permission denied) in /etc/nginx/nginx.conf:33 This is what I have in ls -al /etc/nginx/conf.d/ : $ ls -al /etc/nginx/conf.d/total 8drwxr-xr-x. 2 root root 26 Jul 28 03:55 .drwxr-xr-x. 4 root root 4096 Jul 28 04:16 ..-rw-r--r--. 1 root root 230 Jul 28 03:50 foo.conf What's wrong? | When you are getting permission denied errors on file access etc. for unknown reason, it might be related to SELinux. Especially when you see a period following permissions like drwxr-xr-x. shown by ls -l for the file/dir in question, they could be mislabeled (you can see it by ls -Z ) and cause the problem. You should first check current SELinux mode by running getenforce . If it says Enforcing , then temporarily set the mode to Permissive by running setenforce 0 , and see if your application works afterwards. Please consult your distribution's guide on SELinux for permanent fix, including setting the SELinux mode on start up, relabeling files or directories, updating policies, etc. Here's Howto for CentOS . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/218747",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14028/"
]
} |
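If the permissive test confirms SELinux is the culprit, a common permanent fix is to restore the default file labels rather than leaving SELinux permissive. A sketch, assuming the mislabeled files are the ones under /etc/nginx:

sudo restorecon -Rv /etc/nginx   # reapply the default SELinux contexts
sudo setenforce 1                # back to Enforcing
sudo service nginx restart
ls -Z /etc/nginx/conf.d          # verify the labels look sane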
218,753 | I have a while loop which reads data from a file with city names: city_name: COIMBATOREMADURAIPONDICHERRYSALEMTRIPURTIRUCHITIRUNELVELITUTUCORINVELLORE and using an awk command within the while loop and trying to read the variable defined in while loop body again in the awk statement however, this is not working and giving me errors. I understand that the awk requires the variables to be defined for its body separately to enable awk to understand the variable usage and could read its value. My while loop is like the following: while read citydo awk -F, '{ if ( $1 == "ACTIVE" ) && ( $2 == "$city" ) print $1 }' siteDBName >> count SUM=`awk '{ sum += $1 } END { print sum }' count`done < city_name where the siteDBName file contains the multiple columns with data like the following: siteDBName: ACTIVE,COIMBATORE,MGT500,1,5,7....INACTIVE,MADURAI,GT500,5,6,7...ACTIVE,SALEM,TR600,6,4,6...ACTIVE,COIMBATORE,GT500,4,5,6....... Here, i tried using the awk -v c=$city along with rest of the statements but this too gave me errors. How can i use the variable used and initialized in the while loop inside the awk statement inside this while loop? | You have two basic choices: i) use -v to pass the variable to awk or ii) close the ' around the awk script, use the shell variable and continue the ' again. Use -v while read citydo awk -v city="$city" -F, '{ if ( $1 == "ACTIVE" && $2 == city ){print $1} }' siteDBName >> countdone < city_name Close the quote while read citydo awk -F, '{ if ( $1 == "ACTIVE" && $2 == "'$city'" ){print $1} }' siteDBName >> countdone < city_name I removed the SUM= line since it wasn't doing anything useful given that $1 is ACTIVE so adding it makes no sense. See my alternate approach below for a way to do this correctly. Also, note that with this approach, you need to read the siteDBName file multiple times. A more efficient version would be: $ awk -F, '{ if(NR==FNR){cities[$1]++;} else if($1=="ACTIVE" && $2 in cities ){sum++} } END{print sum,"active cities"}' city_name siteDBName 3 active cities | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/218753",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29327/"
]
} |
218,783 | I want to monitor running python processes with VSZ, RSS %MEM, %CPU etc. One of my priorities is a list refreshing every X seconds. I managed to come to the point of obtaining a refreshing list of processes using ps and watch ps ax | grep python | awk '{print $1}' | xargs watch -n 15 ps u -p That command simply find all the processes which includes python in its command line in ps and pass the pid values to watch . ps u -p 9221 10186 11640 12347 14076 14263 14317 19029 22099 24278 26161 32469 It is all fine, but that command evaluates the pid list only once and keep watching those pid s. What I need is executing ps ax | grep python command every X seconds and get a fresh list of running processes. That Way, I can see which process has started and which one had finished executing. | You can watch any command so give this a try watch "ps aux | grep python" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/218783",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116937/"
]
} |
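An alternative that re-evaluates the process list on every refresh, using pgrep from procps-ng (flag availability may vary with the version):

# List python processes, matched by process name, at each refresh:
watch -n 15 'pgrep -a python'
# Or keep the original ps columns; -d, joins the PIDs with commas for ps -p:
watch -n 15 'ps u -p "$(pgrep -d, python)"'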
218,796 | I'm using a Raspberry Pi B, with Raspbian.After upgrading to Jessie, watchdog daemon doesn't start at boot anymore. Starting it manually using "sudo service watchdog start" does work.I tried: purging and reinstalling watchdog update-rc.d watchdog defaults && update-rc.d watchdog enable systemctl enable watchdog produces this error: The unit files have no [Install] section. They are not meant to be enabled using systemctl. I checked syslog with systemd verbosity on debug, no results. Other than the watchdog device nothing is mentioned. systemctl list-units | grep -i watchdog is emtpy (unless I started it manually) My default runlevel is 5 and the priority of watchdog in /etc/rc5.d/ is also 5. What else can I try? | Open /lib/systemd/system/watchdog.service and add [Install]WantedBy=multi-user.target Systemd needs the [Install]-Section for a Unit to know how it should enable/disable the Unit. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/218796",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105732/"
]
} |
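After editing the unit file, systemd has to re-read it before the unit can be enabled, roughly:

sudo systemctl daemon-reload     # pick up the edited unit file
sudo systemctl enable watchdog
sudo systemctl start watchdog
systemctl status watchdog        # verify it is enabled and running

Note that /lib/systemd/system belongs to the package, so a package upgrade may overwrite the edit.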
218,815 | I usually connect to remote linux servers from a specific windows server (W1). On the Windows side, I use putty and on the linux side, I start tmux . Occasionally, I have to use a different windows server (W2) and connect to the same tmux sessions. Problem: If I had set a size for the putty windows on W1, then I can not exceed this size on W2. When I maximise the putty window, the extra space is unusable, filled with ~ characters. Is there a way to "force" resize on W2, even if that means W1 will show only partial output ? Or a way to make W1 get disconnected from tmux session ? | With tmux list-client , you can list all clients connected to tmux sessions. For instance: $ tmux list-client/dev/pts/6: 0 [25x80 xterm] (utf8)/dev/pts/8: 0 [25x80 xterm] (utf8) From this point, you can choose to detach a specified client, or all clients of a specified session. Say I want to detach everyone connected to session 0: $ tmux detach-client -s 0 Then, you can attach the session so the size will be yours. Actually, all that can be done with tmux attach -d (the -d option force all other clients to detach). | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/218815",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54246/"
]
} |
218,816 | I want to copy and rename multiple c source files in a directory. I can copy like this: $ cp *.c $OTHERDIR But I want to give a prefix to all the file names: file.c --> old#file.c How can I do this in 1 step? | a for loop: for f in *.c; do cp -- "$f" "$OTHERDIR/old#$f"; done I often add the -v option to cp to allow me to watch the progress. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/218816",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119219/"
]
} |
218,819 | Using bash or mysql , how can I add 30 days to a date in the user expiry table? For example, I have a DB named USERS and a table expiry like this: USERNAME EXPIRATIONJOHN 2015-09-26 I want the command to take whatever value is in the EXPIRATION column for a specified USERNAME and add 30 days to it. So the result would be: USERNAME EXPIRATIONJOHN 2015-10-26 | Using GNU date : $ date -d '2015-09-26 +30 days' '+%Y-%m-%d'2015-10-26 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/218819",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119608/"
]
} |
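Since the question mentions MySQL as an option too, the update can also be done entirely inside the database. A sketch from the shell, assuming the database, table and column names given in the question:

mysql -u root -p USERS -e "
  UPDATE expiry
  SET    EXPIRATION = DATE_ADD(EXPIRATION, INTERVAL 30 DAY)
  WHERE  USERNAME = 'JOHN';"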
218,836 | Suddenly something was messed up with my partitions, or just one partition. I have a default Ubuntu installation, on a Kingston SSD, with the root file system encrypted with LUKS, (using AES I think). Now I'm trying to mount the partition from a live cd, but without luck. I am so afraid of doing some additional harm that can not be undone. So I would like to make an exact copy of the drive. That means all partiton tables, whatever kind of metadata for the LUKS partition, and well any other kind of metadata that I don't know of. I guess I want all the empty blocks too, to feel absolutely safe. I know about dd if=/dev/sda of=/dev/sdb , but I don't know if it will include all the data described. Perhaps I need to specify block size with -b , but I don't understand how that works and why it is necessary (if it is). And I also don't know how to find the block size of the partition. Please tell me if it does copy all data, and if not, if there is another way. | Yes it does, even the blocks that would not (officially) contain data and also all information regarding partitions, UUIDs, etc.. E.g. recovery of data (i.e. after deleting files) from the dd-copied drive would be possible. You may want to read this regarding the noerror and sync options. Block size ( bs= ) doesn't affect the result unless there are read errors, but you should set it to "1M" (or at least "4k") or it will take longer for no good reason. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/218836",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89717/"
]
} |
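Putting the answer's advice together into one command (status=progress requires GNU coreutils 8.24 or newer; drop it on older systems):

# Double-check the device names first: everything on /dev/sdb is overwritten.
sudo dd if=/dev/sda of=/dev/sdb bs=1M conv=noerror,sync status=progress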
218,911 | On my Kubuntu 14.4 (which has python 2.7.6 as standard) my python is broken after I tried to install python 2.7.10 after building from source from python.org with the help of How to install the latest Python version on Debian separately or upgrade? . I am not able to repair it with the standard commands I suspect that my dpkg is somehow confused/broken regarding the python installation. I would like to fix dpkg in this aspect. I suspect that this has something to do with the file /var/lib/dpkg/status and /var/lib/dpkg/available and /var/lib/dpkg/info/* particularily the first. I think I have to reset dpkg somehow, but I am really no expert. The reason why I think this is: $ apt-cache policy pythonpython: Installed: 2.7.10-1 Candidate: 2.7.10-1 Version table:*** 2.7.10-1 0 100 /var/lib/dpkg/status2.7.5-5ubuntu3 0 500 http://de.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages$ /usr/bin/python2.7Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2Type "help", "copyright", "credits" or "license" for more information.>>> exit() The Reason I tried to install python 2.7.10 is because I needed it for another program (because of issues with ssl / openssl of python 2.7.6), but now I just want to get my system repaired - just let it be python 2.7.6. The Full Technical I started trying to solve this by asking on ubuntu https://askubuntu.com/questions/648424/muon-is-gone-after-change-of-python-issues-after-python-2-7-10-installation-on but I did not get any answer there. Maybe it was the wrong crowd. I have tried quite a bit since then and have an idea what's the problem, but don't know the steps to accomplish this. It started with me not being able to install muon with sudo apg-get install muon : $ sudo apt-get install muonReading package lists... DoneBuilding dependency tree Reading state information... DoneSome packages could not be installed. This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: muon : Depends: apt-xapian-index but it is not going to be installedE: Unable to correct problems, you have held broken packages. The typical advice (e.g. from https://askubuntu.com/questions/118749/package-system-is-broken-how-to-fix-it ) does not help: sudo apt-get autoremovesudo apt-get cleansudo apt-get autocleansudo apt-get updatesudo apt-get upgrade -fsudo apt-get -f install muon or sudo apt-get -f install or sudo dpkg --configure -a sudo apt-get update && sudo apt-get dist-upgradesudo apt-get install muon or sudo apt-get -o dpkg::options::="--force-confnew" -o dpkg::options::="--force-confmiss" --reinstall install muon did not help. So I tried $ sudo apt-get install apt-xapian-indexReading package lists... DoneBuilding dependency tree Reading state information... DoneSome packages could not be installed. 
This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: apt-xapian-index : Depends: python-xapian (>= 1.0.2) but it is not going to be installed Depends: python-apt (>= 0.7.93.2) but it is not going to be installed Depends: python-debian (>= 0.1.14) but it is not going to be installed Depends: python:any (>= 2.7.1-0ubuntu2)E: Unable to correct problems, you have held broken packages. and found out the issue is with other programs as well like $ sudo apt-get install meld Reading package lists... DoneBuilding dependency tree Reading state information... DoneSome packages could not be installed. This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: meld : Depends: python:any (>= 2.7.1-0ubuntu2) Depends: python-gtk2 (>= 2.14) but it is not going to be installed Depends: python-glade2 (>= 2.14) but it is not going to be installed Depends: python-gobject-2 (>= 2.16) but it is not going to be installed Recommends: python-gnome2 but it is not going to be installed Recommends: python-gconf but it is not going to be installed Recommends: python-gtksourceview2 (>= 2.4) but it is not going to be installedE: Unable to correct problems, you have held broken packages. So I tried (without luck) $ sudo update-alternatives --config pythonupdate-alternatives: error: no alternatives for python The following did not help either: sudo dpkg -P python2.7sudo apt-get install python2.7sudo dpkg -P python-minimalsudo apt-get autoremove && sudo apt-get clean sudo apt-get update && sudo apt-get -f install I am getting $ apt-cache policy pythonpython: Installed: 2.7.10-1 Candidate: 2.7.10-1 Version table:*** 2.7.10-1 0 100 /var/lib/dpkg/status2.7.5-5ubuntu3 0 500 http://de.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages Trying to reinstall python does not work $ sudo apt-get -o dpkg::options::="--force-confnew" -o dpkg::options::="--force-confmiss" --reinstall install pythonReading package lists... DoneBuilding dependency tree Reading state information... DoneReinstallation of python is not possible, it cannot be downloaded.0 upgraded, 0 newly installed, 0 to remove and 16 not upgraded. or $ sudo apt-get -o dpkg::options::="--force-confnew" -o dpkg::options::="--force-confmiss" --reinstall install python2Reading package lists... DoneBuilding dependency tree Reading state information... DoneE: Unable to locate package python2 and trying to build an uninstaller does not work either: ~/Python-2.7.10$ sudo make uninstall make: *** No rule to make target `uninstall'. Stop. 
So I started to suspect that I have to get dpkg fixed somehow, because $ apt-cache policy pythonpython: Installed: 2.7.10-1 Candidate: 2.7.10-1 Version table:*** 2.7.10-1 0 100 /var/lib/dpkg/status2.7.5-5ubuntu3 0 500 http://de.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages$ /usr/bin/python2.7Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2Type "help", "copyright", "credits" or "license" for more information.>>> exit() More information (Appendix) $ dpkg -l python* | grep -v ^unGewünscht=Unbekannt/Installieren/R=Entfernen/P=Vollständig Löschen/Halten| Status=Nicht/Installiert/Config/U=Entpackt/halb konFiguriert/ Halb installiert/Trigger erWartet/Trigger anhängig|/ Fehler?=(kein)/R=Neuinstallation notwendig (Status, Fehler: GROSS=schlecht)||/ Name Version Architektur Beschreibung+++-===========================================-=======================================-============-=====================================================================================================================================================================================================================ii python 2.7.10-1 amd64 Python 2.7.10ii python-apt-common 0.9.3.5ubuntu1 all Python interface to libapt-pkg (locales)ii python-chardet-whl 2.2.1-2~ubuntu1 all universal character encoding detectorii python-colorama-whl 0.2.5-0.1ubuntu2 all Cross-platform colored terminal text in Python - Wheelsii python-cups 1.9.66-0ubuntu2 amd64 Python bindings for CUPSrc python-cupshelpers 1.4.3+20140219-0ubuntu2.6 all Python modules for printer configuration with CUPSii python-dbus-dev 1.2.0-2build2 all main loop integration development files for python-dbusii python-distlib-whl 0.1.8-1ubuntu1 all low-level components of python distutils2/packagingrc python-gobject-2 2.28.6-12build1 amd64 deprecated static Python bindings for the GObject libraryii python-html5lib-whl 0.999-3~ubuntu1 all HTML parser/tokenizer based on the WHATWG HTML5 specificationii python-ldb 1:1.1.16-1 amd64 Python bindings for LDBii python-minimal 2.7.5-5ubuntu3 amd64 minimal subset of the Python language (default version)ii python-ntdb 1.0-2ubuntu1 amd64 Python bindings for NTDBii python-pam 0.4.2-13.1ubuntu3 amd64 Python interface to the PAM libraryii python-pip-whl 1.5.4-1ubuntu3 all alternative Python package installerii python-renderpm 3.0-1build1 amd64 python low level render interfaceii python-reportlab-accel 3.0-1build1 amd64 C coded extension accelerator for the ReportLab Toolkitii python-requests-whl 2.2.1-1ubuntu0.3 all elegant and simple HTTP library for Python, built for human beingsii python-setuptools-whl 3.3-1ubuntu2 all Python Distutils Enhancements (wheel package)ii python-six-whl 1.5.2-1ubuntu1 all Python 2 and 3 compatibility library (universal wheel) rc python-support 1.0.15 all automated rebuilding support for Python modules ii python-talloc 2.1.0-1 amd64 hierarchical pool based memory allocator - Python bindings ii python-tdb 1.2.12-1 amd64 Python bindings for TDB ii python-twisted-bin 13.2.0-1ubuntu1 amd64 Event-based framework for internet applications rc python-twisted-core 13.2.0-1ubuntu1 all Event-based framework for internet applications rc python-ubuntu-sso-client 13.10-0ubuntu6 all Ubuntu Single Sign-On client - Python library ii python-urllib3-whl 1.7.1-1ubuntu3 all HTTP library with thread-safe connection pooling ii python2.7 2.7.6-8ubuntu0.2 amd64 Interactive high-level object-oriented language (version 2.7) ii python2.7-minimal 2.7.6-8ubuntu0.2 amd64 Minimal subset of the Python language 
(version 2.7) ii python3 3.4.0-0ubuntu2 amd64 interactive high-level object-oriented language (default python3 version) ii python3-apport 2.14.1-0ubuntu3.11 all Python 3 library for Apport crash report handling ii python3-apt 0.9.3.5ubuntu1 amd64 Python 3 interface to libapt-pkg ii python3-aptdaemon 1.1.1-1ubuntu5.2 all Python 3 module for the server and client of aptdaemon ii python3-chardet 2.2.1-2~ubuntu1 all universal character encoding detector for Python3 ii python3-colorama 0.2.5-0.1ubuntu2 all Cross-platform colored terminal text in Python - Python 3.x ii python3-commandnotfound 0.3ubuntu12 all Python 3 bindings for command-not-found. ii python3-dbus 1.2.0-2build2 amd64 simple interprocess messaging system (Python 3 interface) ii python3-dbus.mainloop.qt 4.10.4+dfsg-1ubuntu1 amd64 D-Bus Support for PyQt4 with Python 3ii python3-debian 0.1.21+nmu2ubuntu2 all Python 3 modules to work with Debian-related data formatsii python3-defer 1.0.6-2build1 all Small framework for asynchronous programming (Python 3)ii python3-dev 3.4.0-0ubuntu2 amd64 header files and a static library for Python (default)ii python3-distlib 0.1.8-1ubuntu1 all low-level components of python distutils2/packagingii python3-distupgrade 1:0.220.7 all manage release upgradesii python3-gdbm:amd64 3.4.0-0ubuntu1 amd64 GNU dbm database support for Python 3.xii python3-gi 3.12.0-1ubuntu1 amd64 Python 3 bindings for gobject-introspection librariesii python3-html5lib 0.999-3~ubuntu1 all HTML parser/tokenizer based on the WHATWG HTML5 specification (Python 3)ii python3-minimal 3.4.0-0ubuntu2 amd64 minimal subset of the Python language (default python3 version)ii python3-pip 1.5.4-1ubuntu3 all alternative Python package installer - Python 3 version of the packageii python3-pkg-resources 3.3-1ubuntu2 all Package Discovery and Resource Access using pkg_resourcesii python3-problem-report 2.14.1-0ubuntu3.11 all Python 3 library to handle problem reportsii python3-pycurl 7.19.3-0ubuntu3 amd64 Python 3 bindings to libcurlii python3-pykde4 4:4.13.3-0ubuntu0.1 amd64 Python 3 bindings for the KDE Development Platformii python3-pyqt4 4.10.4+dfsg-1ubuntu1 amd64 Python3 bindings for Qt4ii python3-requests 2.2.1-1ubuntu0.3 all elegant and simple HTTP library for Python3, built for human beingsii python3-setuptools 3.3-1ubuntu2 all Python3 Distutils Enhancementsii python3-sip 4.15.5-1build1 amd64 Python 3/C++ bindings generator runtime libraryii python3-six 1.5.2-1ubuntu1 all Python 2 and 3 compatibility library (Python 3 interface)ii python3-software-properties 0.92.37.3 all manage the repositories that you install software fromii python3-uno 1:4.2.8-0ubuntu2 amd64 Python-UNO bridgeii python3-update-manager 1:0.196.13 all python 3.x module for update-managerii python3-urllib3 1.7.1-1ubuntu3 all HTTP library with thread-safe connection pooling for Python3ii python3-wheel 0.24.0-1~ubuntu1 all built-package format for Pythonii python3-xkit 0.5.0ubuntu2 all library for the manipulation of xorg.conf files (Python 3)ii python3.4 3.4.0-2ubuntu1.1 amd64 Interactive high-level object-oriented language (version 3.4)ii python3.4-dev 3.4.0-2ubuntu1.1 amd64 Header files and a static library for Python (v3.4)ii python3.4-minimal 3.4.0-2ubuntu1.1 amd64 Minimal subset of the Python language (version 3.4) $ lsb_release -aNo LSB modules are available.Distributor ID: UbuntuDescription: Ubuntu 14.04.2 LTSRelease: 14.04Codename: trusty $ grep -P '^[ \t]*[^#[ \t]+' /etc/apt/sources.list /etc/apt/sources.list.d/*.list/etc/apt/sources.list:deb 
http://de.archive.ubuntu.com/ubuntu/ trusty main restricted/etc/apt/sources.list:deb-src http://de.archive.ubuntu.com/ubuntu/ trusty main restricted/etc/apt/sources.list:deb http://de.archive.ubuntu.com/ubuntu/ trusty-updates main restricted/etc/apt/sources.list:deb-src http://de.archive.ubuntu.com/ubuntu/ trusty-updates main restricted/etc/apt/sources.list:deb http://de.archive.ubuntu.com/ubuntu/ trusty universe/etc/apt/sources.list:deb-src http://de.archive.ubuntu.com/ubuntu/ trusty universe/etc/apt/sources.list:deb http://de.archive.ubuntu.com/ubuntu/ trusty-updates universe/etc/apt/sources.list:deb-src http://de.archive.ubuntu.com/ubuntu/ trusty-updates universe/etc/apt/sources.list:deb http://de.archive.ubuntu.com/ubuntu/ trusty multiverse/etc/apt/sources.list:deb-src http://de.archive.ubuntu.com/ubuntu/ trusty multiverse/etc/apt/sources.list:deb http://de.archive.ubuntu.com/ubuntu/ trusty-updates multiverse/etc/apt/sources.list:deb-src http://de.archive.ubuntu.com/ubuntu/ trusty-updates multiverse/etc/apt/sources.list:deb http://de.archive.ubuntu.com/ubuntu/ trusty-backports main restricted universe multiverse/etc/apt/sources.list:deb-src http://de.archive.ubuntu.com/ubuntu/ trusty-backports main restricted universe multiverse/etc/apt/sources.list:deb http://security.ubuntu.com/ubuntu trusty-security main restricted/etc/apt/sources.list:deb-src http://security.ubuntu.com/ubuntu trusty-security main restricted/etc/apt/sources.list:deb http://security.ubuntu.com/ubuntu trusty-security universe/etc/apt/sources.list:deb-src http://security.ubuntu.com/ubuntu trusty-security universe/etc/apt/sources.list:deb http://security.ubuntu.com/ubuntu trusty-security multiverse/etc/apt/sources.list:deb-src http://security.ubuntu.com/ubuntu trusty-security multiverse/etc/apt/sources.list:deb http://archive.canonical.com/ubuntu trusty partner/etc/apt/sources.list:deb http://extras.ubuntu.com/ubuntu trusty main/etc/apt/sources.list:deb http://cran.uni-muenster.de/bin/linux/ubuntu trusty//etc/apt/sources.list.d/fossfreedom-packagefixes-trusty.list:deb http://ppa.launchpad.net/fossfreedom/packagefixes/ubuntu trusty main/etc/apt/sources.list.d/jitsi.list:deb http://download.jitsi.org/deb unstable//etc/apt/sources.list.d/leviatan1-ppa-trusty.list:deb http://ppa.launchpad.net/leviatan1/ppa/ubuntu trusty main $ whereis pythonpython: /usr/bin/python /usr/bin/python3.4-config /usr/bin/python3.4 /usr/bin/python3.4m /usr/bin/python2.7 /usr/bin/python3.4m-config /etc/python /etc/python3.4 /etc/python2.7 /usr/lib/python3.4 /usr/lib/python2.7 /usr/bin/X11/python /usr/bin/X11/python3.4-config /usr/bin/X11/python3.4 /usr/bin/X11/python3.4m /usr/bin/X11/python2.7 /usr/bin/X11/python3.4m-config /usr/local/lib/python3.4 /usr/local/lib/python2.7 /usr/include/python3.4 /usr/include/python3.4m /usr/share/python /usr/share/man/man1/python.1.gz $ whereis python2.7python2: /usr/bin/python2.7 /usr/bin/python2 /etc/python2.7 /usr/lib/python2.7 /usr/bin/X11/python2.7 /usr/bin/X11/python2 /usr/local/lib/python2.7 /usr/share/man/man1/python2.1.gz | You have installed Python packages that are more recent than what your distribution provides. For example, you have python version 2.7.10-1 installed but your distribution only has version 2.7.5-5ubuntu3. APT doesn't downgrade packages unless explicitly told to do so. So for example if you try to install a package that depends on the exact version of Python, it won't work, because the python package can't be downgraded. 
Even apt-get --reinstall install python fails because APT won't downgrade Python to 2.7.5. In order to repair your system, you need to allow APT to perform downgrades. To do that, define APT preferences . Create a file /etc/apt/preferences.d/allow-downgrade containing Package: *Pin: release o=UbuntuPin-Priority: 1001 The files in /etc/apt/preferences.d (plus /etc/apt/preferences ) contain priority declarations that override the default selection when multiple versions of a package are available, which is “prefer the latest version from the target distribution”. Giving a package a priority over 1000 causes it to be preferred even if it's an older version that a package with a lower priority. Installed packages have priority 500 so the package from Ubuntu wins. For more information see: man apt_preferences I think once you've set these priorities you can run apt-get updateapt-get upgrade to downgrade all your packages to the version in Ubuntu (packages not in Ubuntu won't be removed). Also run apt-get -f install and don't try to install any other software until this completes successfully. Once everything is downgraded, remove the preferences file and run apt-get update again. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/218911",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122989/"
]
} |
218,996 | I have the following file: 6180,6180,0,1,,1,0,1,1,0,0,0,0,0,0,0,0,4326,4326,,0.440000,6553,6553,0,1,,1,0,1,1,0,0,0,0,1,0,1,0,4326,4326,,9.000000,1297,1297,0,0,,0,0,1,0,0,0,0,0,1,0,1,0,1707,1707,,7.000000,6598,6598,0,1,,1,0,1,1,0,0,0,1,0,0,0,0,1390,1390,,0.730000,4673,4673,0,1,,1,0,1,1,0,0,0,0,0,0,0,0,1707,1707,,0.000000, I need an awk command that print out the maximum value of $21 for $18. the desired output will look like: 6553,6553,0,1,,1,0,1,1,0,0,0,0,1,0,1,0,4326,4326,,9.000000,1297,1297,0,0,,0,0,1,0,0,0,0,0,1,0,1,0,1707,1707,,7.000000,6598,6598,0,1,,1,0,1,1,0,0,0,1,0,0,0,0,1390,1390,,0.730000, I got this result, but using the sort command, as below: sort -t, -k18,18n -k21,21nr | awk -F"," '!a[$18]++' while I am looking to do it with single awk command. Please advice, | I don't see why you would want to do it in a single awk command, what you have seems perfectly fine. Anyway, here's one way: $ awk -F, '(max[$18]<$21 || max[$18]==""){max[$18]=$21;line[$18]=$0} END{for(key in line){print line[key]}}' file6598,6598,0,1,,1,0,1,1,0,0,0,1,0,0,0,0,1390,1390,,0.730000,1297,1297,0,0,,0,0,1,0,0,0,0,0,1,0,1,0,1707,1707,,7.000000,6553,6553,0,1,,1,0,1,1,0,0,0,0,1,0,1,0,4326,4326,,9.000000, The idea is very simple. We have two arrays, max has $18 as a key and $21 as a value. For every line, if the saved value for $18 is smaller than $21 or if there is no value stored for $18 , then we store the current line ( $0 ) as the value for $18 in array line . Finally, in the END{} block, we print array line . Note that the script above treats $18 as a string. Therefore, 001 and 1 will be considered different strings. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/218996",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123325/"
]
} |
219,031 | I have 2000+ files in a folder, but there are few files missing from the folder. Name of the files are like GLDAS_NOAH025SUBP_3H.A2003 001.0000 .001.2015210044609.pss.grb GLDAS_NOAH025SUBP_3H.A2003 001.0600 .001.2015210044609.pss.grb GLDAS_NOAH025SUBP_3H.A2003 001.1200 .001.2015210044609.pss.grb GLDAS_NOAH025SUBP_3H.A2003 001.1800 .001.2015210044609.pss.grb GLDAS_NOAH025SUBP_3H.A2003 002.0000 .001.2015210044609.pss.grb GLDAS_NOAH025SUBP_3H.A2003 002.0600 .001.2015210044609.pss.grb GLDAS_NOAH025SUBP_3H.A2003 002.1200 .001.2015210044609.pss.grb GLDAS_NOAH025SUBP_3H.A2003 002.1800 .001.2015210044609.pss.grb GLDAS_NOAH025SUBP_3H.A2003 003.0000 .001.2015210044609.pss.grb GLDAS_NOAH025SUBP_3H.A2003 003.0600 .001.2015210044609.pss.grb GLDAS_NOAH025SUBP_3H.A2003 003.1200 .001.2015210044609.pss.grb GLDAS_NOAH025SUBP_3H.A2003 003.1800 .001.2015210044609.pss.grb 001 indicates day, while 0000 is the hour. How to find out which file is missing in the folder? I got few answer in google but could not figure out how to implement those. | With zsh or bash4 , you can use brace expansion for that: ls -d GLDAS_NOAH025SUBP_3H.A2003{001..006}.{0000,0600,1200,1800}.001.2015210044609.pss.grb >/dev/null Notice the brackets: {001..006} means expand to 001 , 002 , ... 006 {0000,0600,1200,1800} to every one of the above add 0000 , 0600 , 1200 and 1800 . >/dev/null is to avoid the standard output of ls -> we only want standard error Now if one file is not present, ls will show an error for that: ls: cannot access GLDAS_NOAH025SUBP_3H.A2003004.0000.001.2015210044609.pss.grb: No such file or directoryls: cannot access GLDAS_NOAH025SUBP_3H.A2003004.0600.001.2015210044609.pss.grb: No such file or directoryls: cannot access GLDAS_NOAH025SUBP_3H.A2003004.1200.001.2015210044609.pss.grb: No such file or directoryls: cannot access GLDAS_NOAH025SUBP_3H.A2003004.1800.001.2015210044609.pss.grb: No such file or directoryls: cannot access GLDAS_NOAH025SUBP_3H.A2003005.0000.001.2015210044609.pss.grb: No such file or directoryls: cannot access GLDAS_NOAH025SUBP_3H.A2003005.0600.001.2015210044609.pss.grb: No such file or directoryls: cannot access GLDAS_NOAH025SUBP_3H.A2003005.1200.001.2015210044609.pss.grb: No such file or directoryls: cannot access GLDAS_NOAH025SUBP_3H.A2003005.1800.001.2015210044609.pss.grb: No such file or directoryls: cannot access GLDAS_NOAH025SUBP_3H.A2003006.0000.001.2015210044609.pss.grb: No such file or directoryls: cannot access GLDAS_NOAH025SUBP_3H.A2003006.0600.001.2015210044609.pss.grb: No such file or directoryls: cannot access GLDAS_NOAH025SUBP_3H.A2003006.1200.001.2015210044609.pss.grb: No such file or directoryls: cannot access GLDAS_NOAH025SUBP_3H.A2003006.1800.001.2015210044609.pss.grb: No such file or directory With ksh93 , replace {001..006} with {1..6%.3d} . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219031",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125466/"
]
} |
219,038 | > cd /tmp> ln -s foo> ls -alhF /tmplrwxrwxrwx 1 user user 3 Jul 29 14:00 foo -> foo Is this a bug in ln or is there a use case for symlinking a file to itself? This is with coreutils 8.21-1ubuntu5.1 . | It's not a bug. The use case is for when you want to link a file to the same basename but in a different directory: cd /tmpln -s /etc/passwdls -l passwdlrwxrwxrwx 1 xxx xxx 11 Jul 29 09:10 passwd -> /etc/passwd It's true that when you do this with a filename that is in the same directory it creates a link to itself which does not do a whole lot of good! This works regardless of whether you use symlinks or hard links. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/219038",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96718/"
]
} |
219,059 | I have an Ubuntu version here that is started from USB as a Live version. I do not want to install it on the hard disk, because that would be too much for only testing a small thing on Ubuntu. So I started Ubuntu and installed the nvidia driver (from nvidia) for a GPU (Tesla C2050) with the following commands: sudo apt-add-repository ppa:xorg-edgers/ppa -ysudo apt-get updatesudo apt-get install nvidia-346 As Ubuntu is started as a Live version, in the beginning the nouveau driver was activated. I want to deactivate it (maybe through rmmod or something similar), so only the nvidia driver is activated and the GPU uses the nvidia driver. How is this possible? What can I do without rebooting the whole system (because all packages installed / removed / changed would be gone)? I have access to Ubuntu through SSH. I read that it might be helpful to type the command sudo update-initramfs -u but that command generated the output update-initramfs is disabled since running on read-only media | You need to unload the nouveau driver before you can load the nvidia driver. However, the nouveau driver is currently in use by the X-server, so it cannot be unloaded yet. You have to stop the X-server first (but don't just re-start it, as then it will use the nouveau driver again). So in short: stop the X-server: sudo service lightdm stop unload the nouveau driver: sudo rmmod nouveau load the nvidia driver: sudo modprobe nvidia start the X-server: sudo service lightdm start You might be out of luck if the framebuffer for the console is holding the nouveau driver as well. In this case I haven't found a way to unload the driver at all... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219059",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116283/"
]
} |
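From the SSH session the four steps can be chained, keeping the X-server down only briefly. A sketch; it assumes nothing else (such as the console framebuffer mentioned above) is holding the nouveau module:

sudo sh -c 'service lightdm stop && rmmod nouveau && modprobe nvidia && service lightdm start'
lsmod | grep -e nouveau -e nvidia   # check which driver is loaded now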
219,073 | I used OpenSuse for several years now. One of the things I utmost liked with this distribution was the way Plasma/KDE issues where handled: from time to time it may happens that the panel briefly disappear and a messagebox opens-up telling me that Plasma desktop has crashed and has been restarted, also proposing to send debugging data to development teams if I like to. Now for a few months I've switched to a Fedora-based distribution ( Qubes OS , based on Fedora 20). It seems that this distribution does not offer this behavior by default, since: I never saw this messagebox anymore, But I got my desktop completely freezed several times (screen and keyboard frozen, sound and mouse pointer OK), having to brutally shut-down my computer, crossing my finger that loosing all ongoing work will be the only side-effect of such a brutal shutdown. A dozen of years ago, when I was student, my University was also using Fedora for our hands-on exercices. At that time, facing similar freeze I found the solution to connect remotely through SSH and kill the desktop manager so it gets automatically restarted, unlocking the graphical environment. Sadly, due to the specific design of Qubes OS¹, this quick-and-dirty SSH solution will not work here. However, I guess that OpenSuse's messagebox tool may do a similar thing in a proper way: implement some kind of watch dog detecting when Plasma/KDE does not respond, then kill and restart it. So what I'm wondering is: is this tool a specific feature of OpenSUSE², or is there some package I should install or some configuration I should change to enable this behavior on my current installation? Desktop freezes are particularly frustrating, even-more when I know that the application themselves are most probably still working fine and that a simple restart of the Plasma process would just get everything back to normal... ¹: In Qubes OS the network connectivity is isolated in a Xen domain and KDE is in a networkless Dom0. For security reasons, Qubes OS is precisely designed to avoid reaching the user interface from the network... ²: By the past (if it's not still the case) OpenSUSE used to have an internally heavily modified KDE, allowing them to be the first distribution to propose a stable KDE4, so I fear that this tool is just a part of these sweets... | TL;DR: Here the problem was apparently caused by an issue (most probably some obscure race condition) between OpenGL and KWin. To workaround it, one must disable OpenGL and use XRender instead (in System configuration > Desktop effects > Advanced > Compositing type, select "XRender" instead of the default OpenGL). A few desktop effects will not be available anymore, but at least the system will be stable and not freeze anymore. Long story: The issue occurred every few weeks randomly, some times several times a day, some times two or three weeks with no issue, and was therefore quite difficult to analyze (BTW at some point I switched to another video card, switching from radeon to Intel i915 without any impact on the issue, therefore it is related neither to the graphic card nor its driver). I left a script running in the background and doing automatic checks every three minutes in an infinite loops so they could hopefully catch something when the desktop freeze. 
Indeed, the freeze can be programmatically detected through qdbus, and in particular this call fails if and only if the desktop is frozen: qdbus org.kde.Kwin /App org.freedesktop.DBus.Peer.Ping While normally it has no output and a return code of 0, when the desktop is frozen this command fails with a return code 2 and a "NoReply" error message. For information, I've also checked the status of org.kde.plasma-desktop, org.kde.kuiserver and org.kde.kded which all seem sane when a freeze occurs, therefore KWin seems the real culprit. I tried several ways to restore the desktop environment integrity with no luck. Trying to restart KWin cleanly using kquitapp kwin or kwin --replace didn't seem to have any noticeable effect. I tried to kill and rebuild the complete desktop environment as follow: kbuildsycoca4kquitapp plasma-desktopkquitapp kwinkquitapp kuiserversleep 2killall plasma-desktop kwin kuiserver; sleep 2killall -9 plasma-desktop kwin kuiserver; sleep 2kstart kuiserverkstart kwinkstart plasma-desktop The desktop unfreezed itself! ... but only for one frame: the screen (as can be seen when looking at the clock in the taskbar) is updated and freezes immediately again. Nevertheless, having found the culprit, I've found an old "high, critical but won't fix because too obscure" issue here . Same symptoms, same diagnostic steps, and finally this suggested workaround: use XRender instead of OpenGL. It has been several months now since I applied this change and I encountered no freeze since then, so I think this workaround to be correct for this issue. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219073",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53965/"
]
} |
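For reference, the background check described in the answer can be sketched as a small watchdog loop (the qdbus call is the one quoted above; the log path is hypothetical):

#!/bin/bash
# Ping KWin over D-Bus every 3 minutes; record the timestamps of failures.
while true; do
    if ! qdbus org.kde.Kwin /App org.freedesktop.DBus.Peer.Ping >/dev/null 2>&1; then
        echo "$(date): KWin did not answer the D-Bus ping (desktop frozen?)" >> "$HOME/kwin-freeze.log"
    fi
    sleep 180
done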
219,165 | I'm having some trouble with Ubuntu 14.04 initialization: it fails to mount an SSH folder and gives me the option of a manual recovery by pressing M, displaying a command line logged in as the root user for debugging the problem. My troubles start when I try to read the sshfs help text: it is bigger than the screen, so it is impossible to read the cut-off part. I managed to solve this by doing sshfs -h >> read; nano read but I'm wondering if there is an easier or more elegant/correct way of doing this job. PS: I'm not in the Ubuntu terminal emulator, so it's impossible to adjust the "scrolling" tab, since it doesn't exist. | People usually use a pager like less to read such long output: sshfs -h | less Inside less , type H to show help and Q to quit. Note that you might occasionally need 2>&1 to also see output sent to stderr. sshfs -h writes such output, so you'd better do it like this: sshfs -h 2>&1 | less Besides using a pager, on the Linux text console you can scroll the screen back/forward without a scroll bar by typing Shift + PgUp or Shift + PgDn . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/219165",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124990/"
]
} |
219,171 | I have a Windows 8.1 remote PC, to which I am connecting using RDP from Windows 7 and Linux clients. I noticed that the performance, e.g. when scrolling, is much better on Windows than on any Linux distribution. I am using rdesktop, Remmina and GNOME-RDP; everywhere the screen refresh is slow and choppy, like VNC. But RDP does not work like VNC , or does it? Why is it so, and what is the fastest RDP client for Linux? Maybe Remote Desktop Connection Client under Wine? | There are multiple versions of the RDP protocol: the original 4.0, which is a clone of ITU-T T.128; 5.0, which is still what rdesktop uses (and not even fully); and then 5.1, 5.2, 6.0, 6.1, 7.0, 7.1, 8.0 and 8.1. As you can imagine, each new version of RDP is better, not only by introducing new features, but also by further improving performance and the overall user experience. As I wrote above, rdesktop still implements only a subset of the RDP 5.0 protocol (the version used on Windows 2000). This version is less optimized than at least 6.0 (released with Windows Vista), which brought a huge performance improvement. Additionally, the whole X11 window system used on Linux is a group of userland applications, while Microsoft Windows processes graphic events (like screen scrolling) directly in its kernel. Screen (and application window) scrolling is an operation that requires copying a lot of memory contents from one place to another. This operation is much faster in the system kernel than in userland applications. And this also affects the performance of every RDP implementation. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/219171",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85261/"
]
} |
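As for a concrete client: FreeRDP implements considerably newer protocol revisions than rdesktop, so its xfreerdp client is usually the better-performing choice on Linux. A hypothetical invocation in FreeRDP 2.x syntax (host and user are placeholders; check xfreerdp /help for the options your build supports):

xfreerdp /v:remote-pc.example.com /u:myuser +clipboard /gfx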
219,181 | In the script below, I can't seem to make $var1 expand in the second statement. I've tried $var1 , ${var1} , echo $var1 and '$var1' . It is inside a few sets of quotes and parentheses which I guess is what is causing the problem. Any ideas? #!/bin/bash# Get the AutoScalingGroupName for the NameNode ASGvar1=$(aws cloudformation list-stack-resources --stack-name abc123 | jq '.StackResourceSummaries[] | select(.ResourceType=="AWS::AutoScaling::AutoScalingGroup")' | jq '.PhysicalResourceId' | tr -d '"' | grep nn); echo $var1var2=$(aws autoscaling describe-auto-scaling-instances | jq -r '.AutoScalingInstances[] | select(.AutoScalingGroupName == "$var1") | select(.AvailabilityZone == "us-east-1a") .InstanceId'); echo $var2 | Variables in single quotes are not expanded. Try this... var2=$(aws autoscaling describe-auto-scaling-instances | jq -r '.AutoScalingInstances[] | select(.AutoScalingGroupName == "'"$var1"'") | select(.AvailabilityZone == "us-east-1a") .InstanceId'); echo $var2 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219181",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86561/"
]
} |
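An alternative worth knowing: jq can import shell variables itself via --arg, which avoids the quote splicing entirely (and the breakage if $var1 ever contains quotes or spaces):

var2=$(aws autoscaling describe-auto-scaling-instances \
  | jq -r --arg asg "$var1" '.AutoScalingInstances[]
      | select(.AutoScalingGroupName == $asg)
      | select(.AvailabilityZone == "us-east-1a")
      | .InstanceId')
echo "$var2"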
219,231 | I have a computer with 8G RAM and a 128G SSD. I don't plan hibernating. What swap size would you recommend? Would you change any swappiness? In the nearest future I'll compile programs (or even kernels), run some virtual machines (leaving at least 5G free for the system), maybe occasionally play some game. | You should be fine with just 2 or 4 Gb of swap size, or none at all (since you don't plan hibernating). An often-quoted rule of thumb says that the swap partition should be twice the size of the RAM. This rule made sense on older systems to cope with the limited amount of RAM; nowadays your system, unless on heavy load, won't swap at all. It mostly depends whether you're going to do a memory-intensive use of your machine; if this is the case, you might want to increase the amount of RAM instead. Note that a SSD is subject to more wear and tear than a hard disk, and is limited by a number of rewrite cycles. This makes it not optimal to host a swap partition. Edit: Also see this question: Linux, SSD and swap | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/219231",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61003/"
]
} |
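On the swappiness part of the question, which the answer leaves open: lowering vm.swappiness makes the kernel prefer reclaiming page cache over swapping out memory, which also reduces writes to the SSD. A sketch (10 is a common choice for desktop/SSD setups):

cat /proc/sys/vm/swappiness                              # the default is 60
sudo sysctl vm.swappiness=10                             # apply until reboot
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf   # persist across reboots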
219,253 | In past years I have used debootstrap to install my desktop Debian (daily usage) and I'm planning to use it once again, but until now I just used debootstrap's default options; this time I'd like to install a minimal system. I did some searching but so far found nothing I didn't already know: most articles say "bare minimal" but then they just use the default options too. Instead I'd like to know if there are exclude options to trim it down and still get a working system. I plan to look into debootstrap itself, but thought to ask here first; maybe somebody has already done this and can save me some time. edit A minimal Debian is composed of the packages with priority required and important dpkg-query -f '${binary:Package} ${Priority}\n' -W \ | grep -w 'required\|important' The minbase option still installs some extra , optional , standard packages, but very few; possibly some of those and some important ones could be removed (or not installed at all; I think --exclude should work but haven't checked). debootstrap's shell sub-script for sid is /usr/share/debootstrap/scripts/sid , easy to (back up and) customize. After the installation a lot of disk space is taken up by the downloaded .deb files; apt-get clean and apt-get autoclean should free some. Some space is taken by locales, docs and man pages; dpkg-reconfigure locales and the package localepurge should help. | I use the option --variant=minbase which seems to be fairly minimal (about 150 MB). No text editor, but essential GNU tools, the package manager, and networking functionality with iproute2. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219253",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34446/"
]
} |
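A minimal invocation for reference, with the --include/--exclude knobs the question asks about (target directory and mirror are placeholders):

sudo debootstrap --arch=amd64 --variant=minbase \
    --include=vim-tiny,less \
    stable /mnt/target http://deb.debian.org/debian
# --exclude=pkg1,pkg2 works the same way for trimming further, though
# excluding required-priority packages can break the bootstrap.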
219,255 | I have a computer that should connect two networks: 192.168.0.x and 192.168.1.x 192.168.0.x is reachable through interface tun3 while 192.168.1.x is reachable through interface virbr1 . It seems that computers from 0.x can talk with computers from 1.x but not the other way around. It seems that arp packets coming from virbr1 are dropped. Where does this happen? Here is the ifconfig for both interfaces(tun3 and virbr1) on the host that should connect the two networks: root@pgrozav:/home/paul/data/work/server# ifconfig tun3 ; ifconfig virbr1tun3 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.1.0.1 P-t-P:10.1.0.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:942 errors:0 dropped:0 overruns:0 frame:0 TX packets:463 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:88986 (86.9 KiB) TX bytes:42452 (41.4 KiB)virbr1 Link encap:Ethernet HWaddr 52:54:00:78:23:3b inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:51616 errors:0 dropped:0 overruns:0 frame:0 TX packets:1198 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1469672 (1.4 MiB) TX bytes:155418 (151.7 KiB) Also, here's the IPTables rules: root@pgrozav:/home/paul/data/work/server# iptables -nvLChain INPUT (policy ACCEPT 4097K packets, 1544M bytes) pkts bytes target prot opt in out source destination 0 0 ACCEPT udp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:53 0 0 ACCEPT tcp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:53 0 0 ACCEPT udp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:67 0 0 ACCEPT tcp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:67 0 0 ACCEPT udp -- virbr1 * 0.0.0.0/0 0.0.0.0/0 udp dpt:53 0 0 ACCEPT tcp -- virbr1 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:53 0 0 ACCEPT udp -- virbr1 * 0.0.0.0/0 0.0.0.0/0 udp dpt:67 0 0 ACCEPT tcp -- virbr1 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:67Chain FORWARD (policy ACCEPT 481 packets, 40360 bytes) pkts bytes target prot opt in out source destination 0 0 ACCEPT all -- * virbr0 0.0.0.0/0 192.168.122.0/24 ctstate RELATED,ESTABLISHED 0 0 ACCEPT all -- virbr0 * 192.168.122.0/24 0.0.0.0/0 0 0 ACCEPT all -- virbr0 virbr0 0.0.0.0/0 0.0.0.0/0 0 0 REJECT all -- * virbr0 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable 0 0 REJECT all -- virbr0 * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable 393 42938 ACCEPT all -- * virbr1 0.0.0.0/0 192.168.1.0/24 ctstate RELATED,ESTABLISHED 397 35116 ACCEPT all -- virbr1 * 192.168.1.0/24 0.0.0.0/0 0 0 ACCEPT all -- virbr1 virbr1 0.0.0.0/0 0.0.0.0/0 0 0 REJECT all -- virbr1 * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachableChain OUTPUT (policy ACCEPT 3217K packets, 435M bytes) pkts bytes target prot opt in out source destination 0 0 ACCEPT udp -- * virbr0 0.0.0.0/0 0.0.0.0/0 udp dpt:68 0 0 ACCEPT udp -- * virbr1 0.0.0.0/0 0.0.0.0/0 udp dpt:68root@pgrozav:/home/paul/data/work/server# iptables -nvL -t natChain PREROUTING (policy ACCEPT 99697 packets, 15M bytes) pkts bytes target prot opt in out source destination Chain INPUT (policy ACCEPT 65648 packets, 13M bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 169K packets, 12M bytes) pkts bytes target prot opt in out source destination Chain POSTROUTING (policy ACCEPT 169K packets, 12M bytes) pkts bytes target prot opt in out source destination 69 5293 RETURN all -- * * 192.168.122.0/24 224.0.0.0/24 0 0 RETURN all -- * * 192.168.122.0/24 255.255.255.255 0 0 MASQUERADE tcp -- * * 192.168.122.0/24 
!192.168.122.0/24 masq ports: 1024-65535 0 0 MASQUERADE udp -- * * 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535 0 0 MASQUERADE all -- * * 192.168.122.0/24 !192.168.122.0/24 69 5293 RETURN all -- * * 192.168.1.0/24 224.0.0.0/24 0 0 RETURN all -- * * 192.168.1.0/24 255.255.255.255 5 300 MASQUERADE tcp -- * * 192.168.1.0/24 !192.168.1.0/24 masq ports: 1024-65535 12 766 MASQUERADE udp -- * * 192.168.1.0/24 !192.168.1.0/24 masq ports: 1024-65535 5 420 MASQUERADE all -- * * 192.168.1.0/24 !192.168.1.0/24 And the routing table(s): root@pgrozav:/home/paul/data/work/server# ip routedefault via 192.168.200.1 dev eth0 10.1.0.2 dev tun3 proto kernel scope link src 10.1.0.1 192.168.0.0/24 via 10.1.0.1 dev tun3 scope link 192.168.1.0/24 dev virbr1 proto kernel scope link src 192.168.1.1 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 192.168.200.0/24 dev eth0 proto kernel scope link src 192.168.200.70 root@pgrozav:/home/paul/data/work/server# ip route list table 200default via 10.1.0.1 dev tun3 192.168.1.0/24 via 10.1.0.1 dev tun3 Actually, I have a script that sets this up: remoteHost=develtunnelNumber=3tunnelPrefixName="tun"tunnelName="$tunnelPrefixName$tunnelNumber"tunnelLocalIP="10.1.0.1"tunnelRemoteIP="10.1.0.2"remoteNetworkToJoin="192.168.0.0"remoteNetworkToJoinNetmask="255.255.255.0"remoteNetworkToJoinInterfaceName="eth0"localNetworkToJoin="192.168.1.0"localNetworkToJoinNetmask="255.255.255.0"localNetworkToJoinInterfaceName="virbr1" ssh -f -NTC -w $tunnelNumber:$tunnelNumber $remoteHost ip link set $tunnelName up ssh $remoteHost ip link set $tunnelName up ip addr add $tunnelLocalIP/32 peer $tunnelRemoteIP dev $tunnelName ssh $remoteHost ip addr add $tunnelRemoteIP/32 peer $tunnelLocalIP dev $tunnelName route add -net $remoteNetworkToJoin gw $tunnelLocalIP netmask $remoteNetworkToJoinNetmask dev $tunnelName ip route add default via $tunnelLocalIP dev $tunnelName table 200 ip rule add from $localNetworkToJoin/24 table 200 ssh $remoteHost route add -net $localNetworkToJoin gw $tunnelRemoteIP netmask $localNetworkToJoinNetmask dev $tunnelName ssh $remoteHost iptables -A FORWARD -i $remoteNetworkToJoinInterfaceName -o $tunnelName -m state --state ESTABLISHED,RELATED -j ACCEPT ssh $remoteHost iptables -A FORWARD -s $tunnelLocalIP -o $remoteNetworkToJoinInterfaceName -j ACCEPT ssh $remoteHost iptables -t nat -A POSTROUTING -s $tunnelLocalIP -o $remoteNetworkToJoinInterfaceName -j MASQUERADE I am running KVM on this machine and virbr1 is connecting my machine to the virtual LAN where all the virtual machines are. I am trying to connect the local LAN (with the VMs - 1.x) to a remote network (0.x) | I use the option --variant=minbase which seems to be fairly minimal (about 150MB).No text editor, but essential GNU tools, package manager, and networking functionnalities with iproute2. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219255",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125612/"
]
} |
219,260 | I have a computer that I need to boot into, but the passwords seem to be bogus. Additionally I can't mount the drive for writing, and it is a MIPS processor, so I can't stick it in another machine to run it. Anyhow, the passwd file has some users that look like this, with a star after the user name. Does that mean a blank password, or what?
root:8sh9JBUR0VYeQ:0:0:Super-User,,,,,,,:/:/bin/ksh
sysadm:*:0:0:System V Administration:/usr/admin:/bin/sh
diag:*:0:996:Hardware Diagnostics:/usr/diags:/bin/csh
daemon:*:1:1:daemons:/:/dev/null
bin:*:2:2:System Tools Owner:/bin:/dev/null
uucp:*:3:5:UUCP Owner:/usr/lib/uucp:/bin/csh
sys:*:4:0:System Activity Owner:/var/adm:/bin/sh
adm:*:5:3:Accounting Files Owner:/var/adm:/bin/sh
lp:VvHUV8idZH1uM:9:9:Print Spooler Owner:/var/spool/lp:/bin/sh
nuucp::10:10:Remote UUCP User:/var/spool/uucppublic:/usr/lib/uucp/uucico
auditor:*:11:0:Audit Activity Owner:/auditor:/bin/sh
dbadmin:*:12:0:Security Database Owner:/dbadmin:/bin/sh
rfindd:*:66:1:Rfind Daemon and Fsdump:/var/rfindd:/bin/sh | You have to check man passwd : If the encrypted password is set to an asterisk (*), the user will be unable to login using login(1), but may still login using rlogin(1), run existing processes and initiate new ones through rsh(1), cron(8), at(1), or mail filters, etc. Trying to lock an account by simply changing the shell field yields the same result and additionally allows the use of su(1). Accounts with * in the password field usually have no usable password, i.e. they are disabled for login. This is different from an account whose password field is empty, which means no password is required at all and is nearly always a bad practice. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/219260",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41051/"
]
} |
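A quick way to see the related states on a modern Linux system (user name is a placeholder; the question's IRIX-era box may behave differently):
grep '^alice:' /etc/shadow   # "*" means no valid hash, so password login is disabled
passwd -l alice              # lock: prefixes the stored hash with "!"
passwd -u alice              # unlock again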
219,268 | Why does the following command not insert new lines in the generated file and what's the solution? $ echo "Line 1\r\nLine2" >> readme.txt$ cat readme.txt Line 1\r\nLine2 | echo An echo implementation which strictly conforms to the Single Unix Specification will add newlines if you do: echo 'line1\nline2' But that is not a reliable behavior. In fact, there really isn't any standard behavior which you can expect of echo . OPERANDS string A string to be written to standard output. If the first operand is -n , or if any of the operands contain a < \ backslash> character, the results are implementation-defined . On XSI-conformant systems, if the first operand is -n , it shall be treated as a string, not an option . The following character sequences shall be recognized on XSI-conformant systems within any of the arguments: \a - Write an <alert> . \b - Write a <backspace> . \c - Suppress the <newline> that otherwise follows the final argument in the output. All characters following the \c in the arguments shall be ignored. \f - Write a <form-feed> . \n - Write a <newline> . \r - Write a <carriage-return> . \t - Write a <tab> . \v - Write a <vertical-tab> . \\ - Write a <backslash> character. \0num - Write an 8-bit value that is the zero, one, two, or three-digit octal number num . And so there really isn't any general way to know how to write a newline with echo , except that you can generally rely on just doing echo to do so. A bash shell typically does not conform to the specification, and handles the -n and other options, but even that is uncertain. You can do: shopt -s xpg_echoecho hey\\nthere heythere And not even that is necessary if bash has been built with the build-time option... --enable-xpg-echo-default Make the echo builtin expand backslash-escaped characters by default, without requiring the -e option. This sets the default value of the xpg_echo shell option to on , which makes the Bash echo behave more like the version specified in the Single Unix Specification, version 3. See Bash Builtins , for a description of the escape sequences that echo recognizes. printf On the other hand, printf 's behavior is pretty tame in comparison. RATIONALE The printf utility was added to provide functionality that has historically been provided by echo . However, due to irreconcilable differences in the various versions of echo extant, the version has few special features, leaving those to this new printf utility, which is based on one in the Ninth Edition system. The EXTENDED DESCRIPTION section almost exactly matches the printf() function in the ISO C standard, although it is described in terms of the file format notation in XBD File Format Notation . It handles format strings which describe its arguments - which can be any number of things, but for strings are pretty much either %b yte strings or literal %s trings. Other than the %f ormats in the first argument, it behaves most like a %b yte string argument, except that it doesn't handle the \c escape. printf ^%b%s$ '\n' '\n' '\t' '\t' ^\n$^ \t$ See Why is printf better than echo ? for more. echo() printf You might write your own standards conformant echo like... echo(){ printf '%b ' "$@\n\c"} ...which should pretty much always do the right thing automatically. Actually, no... That prints a literal \n at the tail of the arguments if the last argument ends in an odd number of <backslashes> . But this doesn't: echo() case ${IFS- } in (\ *) printf %b\\n "$*";; (*) IFS=\ $IFS printf %b\\n "$*" IFS=${IFS#?} esac | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/219268",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122146/"
]
} |
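A small sketch of the portable idioms the answer recommends (printf behavior is specified by POSIX; echo's is not):
printf '%s\n' 'Line 1' 'Line2'   # one argument per line, no escape surprises
printf 'Line 1\r\nLine2\n'       # explicit CR/LF, as in the question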
219,309 | If i run this command: XZ_OPT=-9 tar --xz -cvf files/compressed/xz/archive.tar.xz -C files/original/ . Get this message: xz: (stdin): Cannot allocate memorytar: files/compressed/lzma//archive.lzma: Wrote only 4096 of 10240 bytestar: Error is not recoverable: exiting now What type of memory it is? Or how do i set it to make it work. EDIT: (aditional info) Total file size that I want to compress: 18.92M Gzip Bzip2 ZIP - works OK xz --info-memory : Total amount of physical memory (RAM): 595 MiB (623116288 B)Memory usage limit for compression: DisabledMemory usage limit for decompression: Disabled ulimit -a : core file size (blocks, -c) 0data seg size (kbytes, -d) unlimitedscheduling priority (-e) 0file size (blocks, -f) unlimitedpending signals (-i) 2312max locked memory (kbytes, -l) 64max memory size (kbytes, -m) unlimitedopen files (-n) 1024pipe size (512 bytes, -p) 8POSIX message queues (bytes, -q) 819200real-time priority (-r) 0stack size (kbytes, -s) 8192cpu time (seconds, -t) unlimitedmax user processes (-u) 2312virtual memory (kbytes, -v) unlimitedfile locks (-x) unlimited | In man xz , you'll find that -9 requires 674 MiB of memory for compression (and that it's only useful if you're compressing files bigger than 32 MiB). Try adding about this much swap to provide enough virtual memory for the operation (assuming you're using all your current memory for other purposes). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219309",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125607/"
]
} |
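A sketch of adding temporary swap so xz -9 can allocate its roughly 674 MiB (file path is arbitrary; needs root):
fallocate -l 1G /swapfile   # or: dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# ... run the backup ...
swapoff /swapfile && rm /swapfile
Alternatively, a lower preset such as XZ_OPT=-6 needs far less memory and compresses 19 MB of input nearly as well.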
219,314 | I have a bash script that generates a report on the progress of some long-running jobs on the machine. Basically, this parent script loops through a list of child scripts (calling them all with source ). The child scripts are expected to set a couple of specific variables, which the parent script will then make use of. Today, I discovered a bug where, a variable set by the first child script accidentally got used by the second child script, causing incorrect output. Is there a clean way to prevent these types of bugs from happening? Basically, when I source a child script, there are a couple of specific variables that I want to persist back to the parent script. My parent script resets these specific variables before it source s each new child script, so there are no issues with them. However, some child scripts may have additional arbitrary variables that it uses locally that should not persist back to the parent script. Obviously I could manually unset each of these at the end of the child script, but these seems prone to error if I forget one. Is there a more proper way of sourcing a script, and having only certain variables persist to the script that called source ? edit: For sake of clarity, here's a sort of dumbed down version of my parent script: echo "<html><body><h1>My jobs</h1>"FILES=~/confs/*.shfor f in $FILES; do # reset variables name="Unnamed job" secsSinceActive="Unknown" statusText="Unknown" # run the script that checks on the job source "$f" # print bit of report echo "<h3>$name</h3>" echo "<p>Last active: $secsSinceActive seconds ago</p>" echo "<p>Status: $statusText</p>"echo "</body></html>" And here's what one of the child scripts might look like: name="My awesome job"nowTime=`expr $(date +%s) `lastActiveTime=`expr $(date +%s -r ~/blah.log)`secsSinceActive=`expr $nowTime - $lastActiveTime`currentRaw=$(cat ~/blah.log | grep "Progress" | tail -n 1)if [ -z "$currentRaw" ]; then statusText="Not running"else statusText="In progress"fi The variables $name, $secsSinceActive, and $statusText need to persist back to the parent script, but all the other variables should disappear when the child script terminates. | Wrap the whole script you want to source into a function, add local before the declarations you want to only use in the function, and call the function at the end of the script. func () { local name="My awesome job" nowTime=`expr $(date +%s) ` lastActiveTime=`expr $(date +%s -r ~/blah.log)` local secsSinceActive=`expr $nowTime - $lastActiveTime` currentRaw=$(cat ~/blah.log | grep "Progress" | tail -n 1) if [ -z "$currentRaw" ]; then local statusText="Not running" else local statusText="In progress" fi}func | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/219314",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111223/"
]
} |
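Applied to the child script from the question, the pattern looks roughly like this; only the three variables the parent reads stay global, everything else dies with the function:
check_job() {
    local nowTime lastActiveTime currentRaw   # private to the function
    name="My awesome job"                     # visible to the parent
    nowTime=$(date +%s)
    lastActiveTime=$(date +%s -r ~/blah.log)
    secsSinceActive=$((nowTime - lastActiveTime))
    currentRaw=$(grep "Progress" ~/blah.log | tail -n 1)
    if [ -z "$currentRaw" ]; then
        statusText="Not running"
    else
        statusText="In progress"
    fi
}
check_job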
219,342 | I have a default Debian 8.1 installation. I do su , then man apt<TAB> but get nothing. I do see the manual when I write man apt-get in both modes: as user mas and as root. However, tab completion works in the user mode only. How can I enable tab completion after man in root? Why is this disabled by default? | Running su invokes bash in non-login mode. Bash then reads .bashrc to configure its environment. Running su - invokes bash as a login shell. In this mode /etc/profile is executed if it exists. Bash also searches for ~/.bash_profile , ~/.bash_login and ~/.profile , executing the first file it finds. Although not documented, it appears to execute ~/.bashrc when none of these exist. If you get different behavior, it is likely that you are using different files to initialize bash depending on how it is invoked. I tested which file was invoked by adding lines like echo .bashrc to the end of the existing files. This will display which configurations get invoked. There is more detail on this behavior in the INVOCATION section of the bash man page. Tab completion is available in bash but not in sh . root normally has sh as its shell as bash may not be available. Users typically have bash as their shell. Try running bash as root before trying tab-completion. This should enable tab completion. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219342",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16920/"
]
} |
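A minimal way to trace which init file each invocation reads, following the answer's marker technique (remove the markers afterwards):
echo 'echo reading /root/.bashrc' >> /root/.bashrc
echo 'echo reading /root/.profile' >> /root/.profile
su     # should print the .bashrc marker
su -   # should print the .profile marker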
219,351 | I recently installed kernel 3.14.27-100 on my Fedora 19 system and now I no longer get a Caps Lock LED when using virtual terminals. (It still works on X system ). Also, it works fine running kernel 3.9.5-301. There must be a keyboard configuration somewhere that needs to be changed?? Note: the Caps Lock feature itself works fine. | Running su invokes bash in non-login mode. Bash then reads .bashrc to configure its environment. Runing su - invokes bash as a login shell. In this mode /etc/profile is executed if it exists. Bash also searches for ~/.bash_profile , ~/.bash_login and ~/.profile executing the first file it finds. Although not documented, it appears to execute ~/.bashrc when none of these exist. If you get different behavior, it is likely that you are using different files to initialize bash depending on how it is invoked. I tested which file was invoked by adding lines like echo .bashrc to then end of the existing files. This will display which configurations get invoked. There is more detail on this behavior in the INVOCATION section of the bash man page. Tab completion is available in bash but not in sh . root normally has sh as its shell as bash may not be available. Users typically have bash as their shell. Try running bash as root before trying tab-completion. This should enable tab completion. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219351",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125660/"
]
} |
219,370 | After several years of happily using different terminal emulators like Konsole , Gnome-TERMINAL , and lately XFCE Terminal in their appropriate desktop environments, I decided to use good old xterm with its bitmap fonts . It works just fine, it supports Unicode, and the default fixed font family contains characters from nearly all languages, which is great. But I came across an important problem. The fonts are really small. Even the so called Huge size (which is 10x20 bitmap font) is very small for me, and unusable. My default setting for the XFCE environment is set to 120 dpi, but xpdyinfo reports 97x97 DPI $ xdpyinfo |grep resolution resolution: 97x97 dots per inch So I tried to change the DPI with xrandr , but It didn't help. $ xrandr --dpi 120 The result seems to be applied $ xdpyinfo | grep resolution resolution: 120x120 dots per inch but it does not change the resolution of xterm at all. I have even tried to use scaling, but it affected the whole X, rather than a single application: $ xrandr --output LVDS1 --scale 0.5x0.5 There are workarounds for Qt and Gtk , but what about Xlib -based applications like Xterm , Xcalc , Xman , Xfige , etc? Should we watch them fade away, as the display DPI goes up? Please Help if you know any workarounds. This is what I have done, which worked somehow, but I couldn't be able to use the original "fixed font family", so it may now work for some languages only. PS1: I have installed 100 DPI fonts for X, but I couldn't use them $ sudo apt-get install xfonts-100dpi PS2: Fontforge which also uses Xlib , uses a nice theme and normal font sizes. I don't know how it does that. PS3: I am testing otf2bdf and bdftopcf utiliites to create experimental PCF bitmap fonts for HIDPI from vector TTF/OTF fonts. PS4: After installing 100DPI fonts, this was good, although it lacks great language support of the default fixed font. $ xterm -font -Adobe-Courier-Bold-r-Normal-*-34-*-100-100-*-*-*-* I have used fontsel . It is really helpful. PS5: This is also useful. PS6: I could be able to create 120DPI bitmap font from Courier New with 20pt $ otf2bdf -p 20 -r 120 cour.ttf > cour.bdf$ bdftopcf cour.bdf | gzip - > cour.pcf.gz$ sudo cp cour.pcf.gz /usr/share/fonts/X11/misc/$ fc-cache$ xterm -font -*-*-*-*-*-*-*-*-120-120-*-*-*-* PS7: 75 DPI is hardcoded in the BDF font. Maybe changing it will help. PS8: vncdesk is a good tool to use to scale up a single window . | You have hinted the answer yourself by referencing https://en.wikipedia.org/wiki/Fixed_(typeface) This is the standard fixed bitmap font which has been expanded by Markus Kuhn to have a rather complete character set. The question is then how to scale a bitmap . What you have achieved so far is scaling a vector font and converting it to a bitmap (ttf → bdf → pcf) . That is a fine strategy but as you mention it lacks some language support. That seems strange as Courier New is one of the more unicode complete fonts but I digress! Maybe try using Mono which is a clone. I do however not understand why you are doing this as xterm does support truetype . Modify ~/.Xresources such as this (note that you'll need to reload it using xrdb as seen in another answer to this question): XTerm*renderFont: trueXTerm*faceName: VeraMonoXTerm*faceSize: 10 But back to the task: You want a larger bitmap font. The largest available bitmap available is: 10x20 -Misc-Fixed-Medium-R-Normal--20-200-75-75-C-100-ISO10646-1 Markus have been so nice that he supplies the source BDF files. 
If your distribution does not have the most recent updates (from April 2009) you can grab the package directly from him. The "-misc-fixed-*" font package: http://www.cl.cam.ac.uk/~mgk25/download/ucs-fonts.tar.gz Rather than converting back and forth between pcf and bdf you could/should stick to the source format. You can use a BDF font editor to resize the font. Do not expect any antialiasing or such trickery - but at least you can get a readable size. Or you can use bdfresize by Hiroto Kagotani (also found in some package systems). UPDATE: I do not know of a way to scale just one window (never had the need). You could track this Superuser question. When I have had the need I have scaled the entire environment. You can downgrade a 3200x1800 display to 1920x1080 using:
xrandr --dpi 141
xrandr --output eDP1 --scale 0.6x0.6
Other tricks for screen scaling in different window managers can be found here . They suggest using VNC: One approach is to run the application full screen and without decoration in its own VNC desktop. Then scale the viewer. With Vncdesk ( vncdesk-git from the AUR) you can set up a desktop per application, then start server and client with a simple command such as vncdesk 2 . x11vnc has an experimental option -appshare , which opens one viewer per application window. Perhaps something could be hacked up with that.
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/219370",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37799/"
]
} |
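A sketch of the bdfresize route mentioned above (assuming bdfresize's -f integer scale-factor flag; the font paths follow the answer's earlier otf2bdf example):
bdfresize -f 2 10x20.bdf > 20x40.bdf
bdftopcf 20x40.bdf | gzip > 20x40.pcf.gz
sudo cp 20x40.pcf.gz /usr/share/fonts/X11/misc/ && sudo mkfontdir /usr/share/fonts/X11/misc/
xset fp rehash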
219,395 | Let's say I have a dozen windows open in Byobu, I am currently on window 2 and I want to go see what's going on in window 9. How to quickly jump to another window, without having to press F4 seven times? The best would be some kind of shortcut like ALT + 9 or something similar. | All GNU Screen's key bindings work exactly the same in byobu. To select a window, simply press Ctrl + a , then the window number. Note that Ctrl + a conflicts with the GNU Emacs key bindings, so byobu will ask you to choose the behavior. In any case, you can use Ctrl + a , then a to go to the beginning of the line in Emacs key-binding mode. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219395",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2305/"
]
} |
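For windows numbered 10 and above, the standard screen bindings also work in byobu:
Ctrl+a '   # prompts for a window number or name, so two digits are fine
Ctrl+a "   # shows a selectable list of all windows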
219,438 | How do I remove the ^L character and the blank lines that come after it on a unix file? I have tried the below and have been able to remove the VT and spaces but am failing to remove the ^L character and the blank lines after it tr -s '\040\011\' '|' <$x>> modified.txt and: tr -d '\013' <modified1.txt>> $FILENAME | That's the caret notation for the form feed character . With the GNU implementation of sed , you can remove it using its octal value, \o14 : sed 's/\o14//g' file You can also use its escape code: sed 's/\f//g' file Such characters can be entered in the terminal by pressing Ctrl V and then the code for the character. In this case, Ctrl L . So, type this: sed 's/ Then, hit Ctrl V and then Ctrl L : sed 's/^L Now, complete the command: sed 's/^L//g' file Don't write ^L and don't paste it from the above, use the keyboard shortcut I gave. You could also remove it with tr : tr -d '\f' < file Or perl : perl -pe 's/\f//g' file To delete both the \f and any blank lines following it, you could do something like: perl -0pe 's/\f\s*/\n/s' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219438",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22163/"
]
} |
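To confirm the form feeds are really gone afterwards (the $'...' quoting is bash/ksh/zsh):
grep -c $'\f' file                                    # lines still containing a form feed
tr -d '\f' < file | cmp -s - file && echo "no form feeds left"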
219,496 | Imagine the following simple file structure within the /home/user/ directory:
/home/user
|--dir0
   |--dir1
      |--file1
My current directory is 'dir1' and I remove the directory from inside with the following command: rm -r ../dir1 After that (and not getting any errors on the terminal), the working directory is still the same, and when using the command pwd the output is:
user@ubuntu:~/dir0/dir1$ pwd
/home/user/dir0/dir1
user@ubuntu:~/dir0/dir1$
Why would the OS return that the working directory is 'dir1' if that was already removed from the filesystem? | I think the pwd you ran was a bash shell built-in. It just printed out the path it held in memory without looking up the file system.
$ type pwd
pwd is a shell builtin
$ /bin/pwd
/bin/pwd: couldn't find directory entry in '..' with matching i-node | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219496",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125765/"
]
} |
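The behavior is easy to reproduce with a throwaway directory:
mkdir -p /tmp/d0/d1 && cd /tmp/d0/d1
rm -r ../d1
pwd        # builtin: still prints /tmp/d0/d1 from memory
/bin/pwd   # external: fails, the directory entry is gone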
219,605 | I just got an asus dsl-ac68u modem/router and I noticed it has ssh access. I set this up and I can ssh in with root permissions, but it doesn't seem to have a package manager installed. Being used to Debian, just to test, I tried: # apt-get install vim but got the following response: -sh: apt-get: not found how can I figure out if there is a package manager installed? i thought about trying to find out the distro that is running, but I can't even figure that out: # cat /proc/versionLinux version 2.6.36.4brcmarm (sam@SW5-Server-50) (gcc version 4.5.3 (Buildroot 2012.02) ) #10 SMP PREEMPT Tue Jul 14 16:24:32 CST 2015# uname -aLinux (none) 2.6.36.4brcmarm #10 SMP PREEMPT Tue Jul 14 16:24:32 CST 2015 armv7l GNU/Linux# ls /etc/*elease*ls: /etc/*elease*: No such file or directory# ls /etc/*ersion*ls: /etc/*ersion*: No such file or directory It seems to be some customized version of Linux, and not any particular distro. how can I install apt on such a device? | Only within a chroot using debootstrap, if the architecture is supported. Don't mess up the real filesystem. I believe this approach has been popular on certain NAS devices, e.g. http://www.rooot.net/en/geek-stuff/synology/39-chroot-debian-synology-debootstrap.html The router will almost certainly not be designed to alter the filesystem (treated as ROM). Hence the lack of package manager. This means your chroot will have to be in tmpfs or a mounted usb device. tmpfs will obviously not survive reboots :). And won't be big enough to reliably run debian. You'll have to use a usb storage device. You may wish to participate in openwrt development for your device. http://wiki.openwrt.org/toh/asus/rt-ac68u https://forum.openwrt.org/viewtopic.php?id=51005 https://forum.openwrt.org/viewtopic.php?id=52378 Looking at the specs there's enough ram to have some fun with, and the processor looks good too so a Debian chroot on usb might just be an option. However remember that in this case you will be limited by the original kernel+modules, which may not be intended for your desired uses. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219605",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5451/"
]
} |
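A sketch of the two-stage debootstrap route for such a device (the architecture is a guess for this armv7l CPU, armel or armhf depending on FPU support; run the first stage on a Debian PC, the second stage on the router from a mounted USB drive):
# on a Debian machine:
debootstrap --foreign --arch=armel --variant=minbase stable /mnt/usb/debian http://deb.debian.org/debian
# then, on the router, with the USB drive mounted:
chroot /mnt/usb/debian /debootstrap/debootstrap --second-stage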
219,606 | Bash command to check number of words in a file that contain letter “a” | Suppose that we have this test file:
$ cat file
the cat in the hat
the quick brown dog
jack splat
With grep implementations that have adopted GNU's -o extension, we can retrieve all the words containing a :
$ grep -wo '[[:alnum:]]*a[[:alnum:]]*' file
cat
hat
jack
splat
We can count those words:
$ grep -wo '[[:alnum:]]*a[[:alnum:]]*' file | wc -l
4 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/219606",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123444/"
]
} |
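An equivalent count with plain POSIX awk, in case your grep lacks -o (counts each matching word occurrence, like the grep pipeline):
awk '{ for (i = 1; i <= NF; i++) if ($i ~ /a/) n++ } END { print n+0 }' file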
219,628 | I'm probably missing something really simple here, but when I say
echo 'The quick brown fox jumped over the lazy dog.' | \
awk '{ split($0, WORDS, " "); for ( WORD in WORDS ) { print $WORD; } }'
I get this in return:
quick
brown
fox
jumped
over
the
lazy
dog.
The
Why is the first word printed last?
$ awk --version
awk version 20070501 | Firstly, for (i in array) in awk yields the indices of the array, not the array elements. So you got the result as if you had accessed $1 . $2 ... $NF .
echo 'The quick brown fox jumped over the lazy dog.' | \
awk '{ split($0, WORDS, " "); for ( WORD in WORDS ) { print WORD; } }'
2
3
4
5
6
7
8
9
1
You can see you got array indexes when accessing the variable WORD . For your question, POSIX defines looping through an awk array as yielding the array indices in unspecified order : for (variable in array) which shall iterate, assigning each index of array to variable in an unspecified order . So it's up to the implementation to define how to traverse the array. A quick test on my system showed that gawk and mawk loop in increasing order:
for AWK in gawk mawk /usr/5bin/[on]awk /usr/5bin/posix/awk; do
    printf '==%s==\n' "$AWK"
    echo 'The quick brown fox jumped over the lazy dog.' | "$AWK" '{ split($0, WORDS, " ")
        for (WORD in WORDS) { print WORD; } }' | { sed 1q; tail -n1; }
done
==gawk==
1
9
==mawk==
1
9
==/usr/5bin/nawk==
2
1
==/usr/5bin/oawk==
2
1
==/usr/5bin/posix/awk==
2
1
(With GNU sed , you need sed -u 1q ) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219628",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48783/"
]
} |
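If you want the words in their original order, iterate the indices numerically instead of with for-in; split() returns the element count (portable POSIX awk):
echo 'The quick brown fox jumped over the lazy dog.' |
    awk '{ n = split($0, WORDS, " "); for (i = 1; i <= n; i++) print WORDS[i] }'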
219,632 | I'd like to monitor my network traffic of an specific interface to file. Then I would like to stop the interface if the traffic counts over 60mb total. Is there a possible way to do that? | dumpcap , the low-level traffic capture program of Wireshark , can be instructed to stop capturing after certain conditions with the option -a . You can stop capturing after writing 60MB. This isn't the same thing as measuring traffic, since it depends on the file encoding, but it should be close enough for most purposes (and anyway the exact traffic depends at which protocol level you measure it — Ethernet, IP, TCP, application, …). dumpcap -i eth0 -a filesize:61440 -w capture.dump | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219632",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63940/"
]
} |
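To also satisfy the "stop the interface" part of the question (the capture file size approximates the traffic, as the answer notes; needs root, interface name is the question's example):
dumpcap -i eth0 -a filesize:61440 -w /tmp/capture.pcapng && ip link set eth0 down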
219,688 | Whenever I copy any text via X11's copy/paste feature, I would like all formatting to be removed. I waste time daily dumping the text into a terminal, then copying again. Is this possible? | This should do it: xclip -selection c -o | xclip -selection c | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219688",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65344/"
]
} |
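Wrapped as a tiny script you can bind to a desktop hotkey (file name is arbitrary):
#!/bin/sh
# flatten the clipboard: read its plain-text rendering and store it back
xclip -selection clipboard -o | xclip -selection clipboard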
219,712 | I am trying to generate a hashed password as in /etc/shadow file, using bash script. The user is prompted for the password which is saved in a variable PSWD . I have extracted the Hash and the salt value from the /etc/shadow file and saved them in a variables HVAL and SVAL respectively. Now in order to generate a hashed password using Password and the Salt value, I need to use command given below ( Reference ): $ perl -e 'print crypt("password","\$6\$salt\$") . "\n"' In the above command, I must replace "password" with the $PSWD variable, "6" with $HVAL and "salt" with $SVAL variable. I've tried exporting above variables and replacing them in the perl command, as shown below, but it was totally messed up. perl -e 'print crypt("$ENV{"PSWD"}","\$$ENV{"HVAL"}\$$ENV{"SVAL"}\$") . "\n"' What will be the corrections? | The problem is with your double quotes. Here you don't need to quote those hash keys as they are simple identifiers . From perldoc perldata : In fact, a simple identifier within such curlies is forced to be a string, and likewise within a hash subscript. Neither need quoting. Our earlier example, $days{'Feb'} can be written as $days{Feb} and the quotes will be assumed automatically. But anything more complicated in the subscript will be interpreted as an expression. This means for example that $version{2.0}++ is equivalent to $version{2}++ , not to $version{'2.0'}++ . So: perl -le 'print crypt($ENV{PSWD},"\$$ENV{HVAL}\$$ENV{SVAL}\$")' If you're using it inside backticks, you'd need to double your backslashes as in: var=`perl -le 'print crypt($ENV{PSWD},"\\$$ENV{HVAL}\\$$ENV{SVAL}\$")'` Best is to use the $(...) form of command substitution instead: var=$(perl -le 'print crypt($ENV{PSWD},"\$$ENV{HVAL}\$$ENV{SVAL}\$")') | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219712",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48188/"
]
} |
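Putting the pieces together in a sketch (reads the password without echo; HVAL and SVAL are assumed to have been extracted from /etc/shadow as the question describes):
IFS= read -rsp 'Password: ' PSWD; echo
export PSWD HVAL SVAL
hash=$(perl -le 'print crypt($ENV{PSWD},"\$$ENV{HVAL}\$$ENV{SVAL}\$")')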
219,723 | I know that similar questions have been asked about permissions, but which form of compression or archiving keeps the permissions and file owners of each file and directory? I was considering the tar.gz format, but would this be right? I need to move 37GB of files and directories to another server and need everything to be exactly the same when uncompressed. | Both traditional archiving tools tar and cpio preserve ownership and Unix permissions (user/group/other) as well as timestamps (with cpio, be sure to pass -m when extracting). If you don't like their arcane syntax ¹, you can use their POSIX replacement pax ( pax -w -pe ). All of these output an uncompressed archive; pipe the archive into a tool like gzip or xz to compress it (GNU tar has options to do the compression). Users and groups are identified by their name; GNU tar has an option ( --numeric-owner ) to store and restore numeric IDs instead. None of these tools preserve modern features such as ACL, capabilities, security contexts or other extended attributes. Some versions of tar can store ACL. See What to use to backup files, preserving ACLs? With GNU tar, pass --acls both when creating and extracting the archive. A squashfs filesystem, as suggested by mikeserv , stores capabilities and extended attributes including SELinux context, but not ACL. (You need versions that aren't too antique .) If you have both ACL and security contexts, you can use a squashfs filesystem, and save the ACL by running getfacl -R at the root of the original filesystem and restore them after extracting the files with setfacl --restore . If you want to save absolutely everything including ACL, subsecond timestamps, extended attributes and filesystem-specific attributes, you can clone the filesystem. The downside of this approach is that you can't conveniently directly write a compressed copy. The ultimate way to clone the filesystem is to copy the block device; of course this is wasteful in that it copies the empty space. Alternatively, create a filesystem that's large enough to store all the files and use cp -a with the cp command from GNU coreutils to copy the files; GNU cp is pretty good at copying everything including non-traditional features such as extended attributes and ACLs. ¹ Though this one is really overblown. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219723",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125926/"
]
} |
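A concrete transfer along these lines (host name and paths are placeholders; -p preserves permissions on extraction, --numeric-owner avoids UID/GID remapping between hosts, and the receiving tar should run as root so ownership can be restored):
tar --numeric-owner -cpzf - -C /data . | ssh user@otherserver 'tar --numeric-owner -xpzf - -C /data'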
219,728 | In some applications entering capital letters works like hitting ESC key. Reproducing: Open LibreOffice document Select "Save as" (Unity dialog) Hit "Create folder" Enter a capital letter using shift key (e.g. Shift+A) As this point the creating of the new folder get canceled. (like ESC key was hit) This behavior is also present in many different programs and games. Analyzing the situation with xev (Hitting Shift+d) # xevKeyPress event, serial 37, synthetic NO, window 0x4c00001, root 0x259, subw 0x0, time 994702, (15,-13), root:(987,197), state 0x10, keycode 50 (keysym 0xffe1, Shift_L), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: FalseFocusOut event, serial 37, synthetic NO, window 0x4c00001, mode NotifyGrab, detail NotifyAncestorFocusIn event, serial 37, synthetic NO, window 0x4c00001, mode NotifyUngrab, detail NotifyAncestorKeymapNotify event, serial 37, synthetic NO, window 0x0, keys: 89 0 0 0 0 1 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 KeyPress event, serial 37, synthetic NO, window 0x4c00001, root 0x259, subw 0x0, time 994927, (15,-13), root:(987,197), state 0x11, keycode 40 (keysym 0x44, D), same_screen YES, XLookupString gives 1 bytes: (44) "D" XmbLookupString gives 1 bytes: (44) "D" XFilterEvent returns: FalseKeyRelease event, serial 37, synthetic NO, window 0x4c00001, root 0x259, subw 0x0, time 995062, (15,-13), root:(987,197), state 0x11, keycode 40 (keysym 0x44, D), same_screen YES, XLookupString gives 1 bytes: (44) "D" XFilterEvent returns: FalseKeyRelease event, serial 37, synthetic NO, window 0x4c00001, root 0x259, subw 0x0, time 995395, (15,-13), root:(987,197), state 0x11, keycode 50 (keysym 0xffe1, Shift_L), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False I think, that the FocusOut and FocusIn events look suspicious. Linux: Ubuntu 14.04 LTS 32bit, Unity How can I fix my system? Any ideas or further tests are welcome! Edit: The solution I used to configure language switch to LeftShift+RightShift. This worked for years, but become the problem at some point. Anyway configuring switch to any other key combination solved the problem. | FocusOut event, serial 37, synthetic NO, window 0x4c00001, mode NotifyGrab, detail NotifyAncestorFocusIn event, serial 37, synthetic NO, window 0x4c00001, mode NotifyUngrab, detail NotifyAncestor What happened when you pressed A with Shift held is a passive grab : there's an X client which has exclusive control over this key combination, and when the key combination is pressed, the event is routed only to that client, not to xev or anyone else. xev does report the client grabbing the key combination when it happens and ungrabbing it when it's over. In layman's terms, there's a program that's defined Shift + A as a global keybinding. It's probably a typo where you meant to bind Shift + Alt + key or Win + Shift + A something. In Manipulating X key and pointer grabs on the command line I asked how to find who the grabber is. The best way I found only reports active grab, so the key has to be down when the information is queried. Install xdotool if you don't already have it. Run sleep 1; xdotool key XF86LogGrabInfo . Within one second, press and hold Shift + A . Hold until xdotool has run. Look in the X server log for information about the grab. The typical location of the X server log is /var/log/Xorg.0.log (the 0 reflects the display number, i.e. the number in $DISPLAY : if $DISPLAY is :1 or :1.0 then look at /var/log/Xorg.1.log , etc.). 
Here's some sample output showing that the key I pressed was a key binding defined by sawfish: [2292688.331] Active grab 0x41602244 (core) on device 'Virtual core keyboard' (3):[2292688.331] client pid 6745 sawfish [2292688.331] at 2292687547 (from passive grab) (device thawed, state 3)[2292688.331] core event mask 0x3[2292688.331] passive grab type 2, detail 0x4e, activating key 78[2292688.331] owner-events false, kb 0 ptr 0, confine 0, cursor 0x0[2292688.331] (II) End list of active device grabs | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219728",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49672/"
]
} |
219,750 | I am trying to copy some files with spaces and $ , @ symbols in their file names in a bash script but the script fails to copy the files stating it cannot find the file. I can see that it is treating each space separated word in file name as another file name which is why it is failing. Following is my code: cp "$TRX_SOURCE_PATH/*TRX*" $DEST_PATH Error: cp: cannot stat `/pmautomation/PM/Report_Output/CFBLOCKTRUMNG/2015-08-01/*TRX*': No such file or directory If i do a ls i can see the file names: # ls -lrt /pmautomation/PM/Report_Output/CFBLOCKTRUMNG/2015-08-01/*TRX*-rw-r--r--. 1 root root 856064 Jul 27 11:54 /pmautomation/PM/Report_Output/CFBLOCKTRUMNG/2015-08-01/Bharti Blocked TRX Report [email protected]. 1 root root 856064 Jul 27 11:54 /pmautomation/PM/Report_Output/CFBLOCKTRUMNG/2015-08-01/Bharti Blocked TRX Report [email protected]. 1 root root 856064 Jul 27 11:54 /pmautomation/PM/Report_Output/CFBLOCKTRUMNG/2015-08-01/Bharti Blocked TRX Report [email protected]. 1 root root 1254400 Aug 1 04:43 /pmautomation/PM/Report_Output/CFBLOCKTRUMNG/2015-08-01/Bharti Blocked TRX Report [email protected]. 1 root root 2770944 Aug 1 04:48 /pmautomation/PM/Report_Output/CFBLOCKTRUMNG/2015-08-01/Bharti Blocked TRX Report [email protected]. 1 root root 1707008 Aug 1 04:57 /pmautomation/PM/Report_Output/CFBLOCKTRUMNG/2015-08-01/Bharti Blocked TRX Report [email protected]. 1 root root 1204736 Aug 1 09:42 /pmautomation/PM/Report_Output/CFBLOCKTRUMNG/2015-08-01/Bharti Blocked TRX Report [email protected]. 1 root root 1204736 Aug 1 09:44 /pmautomation/PM/Report_Output/CFBLOCKTRUMNG/2015-08-01/Bharti Blocked TRX Report [email protected]. 1 root root 3048448 Aug 1 10:24 /pmautomation/PM/Report_Output/CFBLOCKTRUMNG/2015-08-01/Bharti Blocked TRX Report [email protected]. 1 root root 1294336 Aug 1 10:40 /pmautomation/PM/Report_Output/CFBLOCKTRUMNG/2015-08-01/Bharti Blocked TRX Report [email protected]. 1 root root 1153536 Aug 1 10:45 /pmautomation/PM/Report_Output/CFBLOCKTRUMNG/2015-08-01/Bharti Blocked TRX Report [email protected]. 1 root root 1108992 Aug 1 11:20 /pmautomation/PM/Report_Output/CFBLOCKTRUMNG/2015-08-01/Bharti Blocked TRX Report [email protected]. 1 root root 1108992 Aug 1 11:33 /pmautomation/PM/Report_Output/CFBLOCKTRUMNG/2015-08-01/Bharti Blocked TRX Report [email protected]. 1 root root 1302016 Aug 1 11:48 /pmautomation/PM/Report_Output/CFBLOCKTRUMNG/2015-08-01/Bharti Blocked TRX Report [email protected]. 1 root root 1150976 Aug 1 11:57 /pmautomation/PM/Report_Output/CFBLOCKTRUMNG/2015-08-01/Bharti Blocked TRX Report [email protected] This directory has many files and i am interested in pulling only files with the following names: Bharti Blocked TRX Report [email protected] where the TN and datestamps changes. How do i fix this to make the cp command work in the bash script? EDIT: I read the other question with the script choking on whitespace and special characters and found that i can use double quotes for it. I have tried it but it won't work. Also, the script also fails for the following command: cp: cannot stat `/pmautomation/PM/StaticUpload/20150801/2G_SITEDB_*.csv': No such file or directory where These files do not have any spaces in them: ls -lrt /pmautomation/PM/StaticUpload/20150801/2G_SITEDB_*.csv-rw-r--r--. 1 root root 4850694 Aug 2 06:51 /pmautomation/PM/StaticUpload/20150801/2G_SITEDB_KL.csv-rw-r--r--. 1 root root 4743676 Aug 2 06:55 /pmautomation/PM/StaticUpload/20150801/2G_SITEDB_PB.csv-rw-r--r--. 
1 root root 2812108 Aug 2 07:05 /pmautomation/PM/StaticUpload/20150801/2G_SITEDB_AS.csv-rw-r--r--. 1 root root 1934089 Aug 2 07:15 /pmautomation/PM/StaticUpload/20150801/2G_SITEDB_CH.csv-rw-r--r--. 1 root root 2360597 Aug 2 07:30 /pmautomation/PM/StaticUpload/20150801/2G_SITEDB_JK.csv-rw-r--r--. 1 root root 1685844 Aug 2 07:35 /pmautomation/PM/StaticUpload/20150801/2G_SITEDB_NE.csv-rw-r--r--. 1 root root 8355408 Aug 2 07:47 /pmautomation/PM/StaticUpload/20150801/2G_SITEDB_TN.csv-rw-r--r--. 1 root root 8356293 Aug 2 07:51 /pmautomation/PM/StaticUpload/20150801/2G_SITEDB_UE.csv-rw-r--r--. 1 root root 3422073 Aug 2 11:04 /pmautomation/PM/StaticUpload/20150801/2G_SITEDB_DL.csv-rw-r--r--. 1 root root 6989514 Aug 2 17:34 /pmautomation/PM/StaticUpload/20150801/2G_SITEDB_RJ.csv-rw-r--r--. 1 root root 1276063 Aug 2 18:35 /pmautomation/PM/StaticUpload/20150801/2G_SITEDB_HP.csv-rw-r--r--. 1 root root 2585368 Aug 2 18:50 /pmautomation/PM/StaticUpload/20150801/2G_SITEDB_HR.csv-rw-r--r--. 1 root root 5975056 Aug 2 19:18 /pmautomation/PM/StaticUpload/20150801/2G_SITEDB_UW.csv-rw-r--r--. 1 root root 6558770 Aug 2 19:29 /pmautomation/PM/StaticUpload/20150801/2G_SITEDB_KK.csv-rw-r--r--. 1 root root 10222883 Aug 2 19:33 /pmautomation/PM/StaticUpload/20150801/2G_SITEDB_AP.csv | The glob must be left unquoted for it to be treated as a glob. The variables should be quoted: cp -- "$TRX_SOURCE_PATH"/*TRX* "$DEST_PATH" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219750",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29327/"
]
} |
219,786 | I want to remove the contents of a zfs datasets subdir. It's a large amount of data. For the pool "nas", the path is /nas/dataset/certainFolder $ du -h -d 1 certainFolder/1.2T certainFolder/ Rather than me have to wait for rm -rf certainFolder/ can't I just destroy the handle to that directory so its overwrite-able(even by the same dir name if I chose to recreate it) ?? So for e.g. not knowing much about zfs file system internals,specifically how it journals its files, I wonder if I was able to accessthat journal/map directly, for e.g., then remove the right entries, so that the dir would no longer display. That space dir holds has to be removed from some kind of audit as well. Is there an easy way to do this? Even if on an ext3 fs, or is that already what the recursive remove command has to do in the first place, i.e. pilfer through and edit journals? I'm just hoping to do something of the likes of kill thisDir to where it simply removes some kind of ID, and poof the directory no longer shows up in ls -la . The data is still there on the drive obviously, but the space will now be reused(overwritten), because ZFS is just that cool? I mean I think zfs is really that cool, how can we do it? Ideally? rubbing hands together :-) My specific use case (besides my love for zfs ) is management of a backup archive. The data is pushed to zfs via freefilesync (AWESOME PROG) on/from win boxes across SMB to the zfs pool. When removing rm -rf /nas/dataset/certainFolder through a putty term, it stalls, the term is obviously unusable for a long time now. I of course then have to open another terminal, to continue. Thats gets old, plus its no fun to monitor the rm -rf, it can take hours. Maybe I should set the command to just release the handle e.g. & , then print to std out, that might be nice. More realistically , recreate the data-set in a few seconds zfs destroy nas/dataset; zfs create -p -o compression=on nas/dataset after the thoughts from the response from @Gilles. | Tracking freed blocks is unavoidable in any decent file system and ZFS is no exception . There is however a simple way under ZFS to have a nearly instantaneous directory deletion by "deferring" the underlying cleanup. It is technically very similar to Gilles' suggestion but is inherently reliable without requiring extra code. If you create a snapshot of your file system before removing the directory, the directory removal will be very fast because nothing will need to be explored/freed under it, all being still referenced by the snapshot. You can then destroy the snapshot in the background so the space will be gradually recovered. d=yourPoolName/BackupRootDir/hostNameYourPc/somesubdirzfs snapshot ${d}@quickdelete && { rm -rf /${d}/certainFolder zfs destroy ${d}@quickdelete & } | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/219786",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111873/"
]
} |
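To watch the deferred space actually coming back while the background zfs destroy runs (the freeing pool property exists on pools with the async_destroy feature enabled):
zpool get freeing nas
zfs list -t snapshot -r nas/dataset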
219,804 | We can get the same result using the following two in bash , echo 'foo' | cat and cat <<< 'foo' My question is what are the difference between these two as far as the resources used are concerned and which one is better ? My thought is that while using pipe we are using an extra process echo and pipe while in here string only a file descriptor is being used with cat . | The pipe is a file opened in an in-kernel file-system and is not accessible as a regular file on-disk. It is automatically buffered only to a certain size and will eventually block when full. Unlike files sourced on block-devices, pipes behave very like character devices, and so generally do not support lseek() and data read from them cannot be read again as you might do with a regular file. The here-string is a regular file created in a mounted file-system. The shell creates the file and retains its descriptor while immediately removing its only file-system link (and so deleting it) before ever it writes/reads a byte to/from the file. The kernel will maintain the space required for the file until all processes release all descriptors for it. If the child reading from such a descriptor has the capability to do so, it can be rewound with lseek() and read again. In both cases the tokens <<< and | represent file-descriptors and not necessarily the files themselves. You can get a better idea of what's going on by doing stuff like: readlink /dev/fd/1 | cat ...or... ls -l <<<'' /dev/fd/* The most significant difference between the two files is that the here-string/doc is pretty much an all-at-once affair - the shell writes all data into it before offering the read descriptor up to the child. On the other hand, the shell opens the pipe on the appropriate descriptors and forks off children to manage those for the pipe - and so it is written/read concurrently at both ends. These distinctions, though, are only generally true. As far as I am aware (which isn't really all that far) this is true of pretty much every shell which handles the <<< here-string short-hand for << a here-document redirection with the single exception of yash . yash , busybox , dash , and other ash variants do tend to back here-documents with pipes, though, and so in those shells there really is very little difference between the two after all. Ok - two exceptions. Now that I'm thinking about it, ksh93 doesn't actually do a pipe at all for | , but rather handles the whole business w/ sockets - though it does do a deleted tmp file for <<<* as most others do. What's more, it only puts the separate sections of a pipeline in a subshell environment which is a sort of POSIX euphemism for at least it acts like a subshell , and so doesn't even do the forks. The fact is that @PSkocik's benchmark (which is very useful) results here can vary widely for many reasons, and most of these are implementation dependent. For the here-document setup the biggest factors will be the target ${TMPDIR} file-system type and current cache configuration/availability, and still moreso the amount of data to be written. For the pipe it will be the size of the shell process itself, because copies are made for the required forks. In this way bash is terrible at pipeline setup (to include $( command ) substitutions) - because it is big and very slow, but with ksh93 it makes hardly any difference at all. 
Here's another little shell snippet to demonstrate how a shell splits off subshells for a pipeline: pipe_who(){ echo "$$"; sh -c 'echo "$PPID"'; }pipe_whopipe_who | { pipe_who | cat /dev/fd/3 -; } 3<&0 32059 #bash's pid32059 #sh's ppid32059 #1st subshell's $$32111 #1st subshell sh's ppid32059 #2cd subshell's $$32114 #2cd subshell sh's ppid The difference between what a pipelined pipe_who() call reports and the report of one run in the current shell is due to a ( subshell's ) specified behavior of claiming the parent shell's pid in $$ when it is expanded. Though bash subshells definitely are separate processes, the $$ special shell parameter is not a reliable source of this information. Still, the subshell's child sh shell does not decline to accurately report its $PPID . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/219804",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125975/"
]
} |
219,814 | I have the following configuration to clean up temporary files (default for CentOS 7), which says that files in /tmp should be removed if they are more than 10 days old.
[root]# tail -n +10 /usr/lib/tmpfiles.d/tmp.conf | head -n 3
# Clear tmp directories separately, to make them easier to override
d /tmp 1777 root root 10d
d /var/tmp 1777 root root 30d
However, even after running systemd-tmpfiles --clean , when I look at the contents of /tmp , there are files in there that are more than 10 days old.
[root]# ls -dl /tmp/backup-inspection
drwxr-xr-x 8 root root 68 Aug 29 2014 /tmp/backup-inspection
The contents of the /tmp directory is huge:
[root]# du -h /tmp | tail -n 1
3.5G /tmp
Can anyone explain to me why the backup-inspection directory is not removed? It is nearly 1 year old. | I have run into the same problem recently and found this question, so I am sharing my experience. Actually systemd-tmpfiles has full support for recursive directory tree processing as you would expect (the other answer confused me enough to check the source code). The reason files were not deleted (in my case) was atime . systemd-tmpfiles checks ctime (except for directories), mtime and atime, and all three (or two) of them must be old enough for the file (or directory) to be deleted. Actually there may be other reasons, because systemd-tmpfiles has a lot of internal rules for skipping files. To find out why some files are not deleted, run systemd-tmpfiles as follows:
env SYSTEMD_LOG_LEVEL=debug systemd-tmpfiles --clean
It will probably dump a lot of output into your console. Note that if you try to redirect stdout to e.g. a file, the output disappears and is sent to the systemd journal (so that it can be obtained via e.g. journalctl ). In my case the output was also cut in the middle (or I just do not know how to use journalctl ), so my solution was to temporarily increase the history buffer in my terminal emulator. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219814",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125986/"
]
} |
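To inspect all three timestamps the cleaner looks at (GNU coreutils stat):
stat -c 'atime=%x  mtime=%y  ctime=%z  %n' /tmp/backup-inspection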
219,825 | How do I disable the system beep on the console in FreeBSD 10.1? The recommended commands don't work. The sysctl setting: # sysctl hw.syscons.bell=0hw.syscons.bell: 1 -> 0# sysctl -a | grep bellhw.syscons.bell: 0 Backspace still results in an ear splitting beep. Found another suggestion , to use kbdcontrol : # kbdcontrol -b off# Nope, still beeps. My system details: An old Gateway MD-78 series laptop (with Intel GM45 Express Chipset), without a hardware volume knob, and decidedly loud PC speaker volume. I'm running FreeBSD 10.1. # uname -aFreeBSD raktop 10.1-RELEASE FreeBSD 10.1-RELEASE #0 r274401: Tue Nov 11 21:02:49 UTC 2014 [email protected]:/usr/obj/usr/src/sys/GENERIC amd64 Update: I'm running vt a.k.a. newcons , and eventually found that I could turn the beep off with: kbdcontrol -b quiet.off which can be put into /etc/rc.conf , to make the change permanent, as: allscreens_kbdflags="-b quiet.off" | If you're running vt a.k.a. newcons , try: kbdcontrol -b quiet.off If that works, you can make it permanent in your /etc/rc.conf : allscreens_kbdflags="-b quiet.off" Background: After running kbdcontrol from an Xterm and seeing it print out an escape sequence, I realized that it is just trying to send a command to the terminal emulation in the console driver, and it might need to be sending something different depending on the console driver; then I looked for and found the answer specific to newcons : http://lists.freebsd.org/pipermail/freebsd-current/2014-April/049463.html | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/219825",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53365/"
]
} |
219,838 | I need a log file to be updated if it has been left unchanged for 55 minutes. For example, say it is now 19:00 and IOstatDisk2.log hasn't changed since 18:00:
solaris1a:/var/tmp ROOT # ls -ltr IOstatDisk2.log
-rw-r--r-- 1 root other 6 Aug 2 18:00 IOstatDisk2.log
So in this case I will append the line echo “new cycle - forced update after 55 min.” >> IOstatDisk2.log But if the last time stamp was less than 55 minutes ago then I will not append the line. | Test the file's modification age before appending. With GNU find this is a one-liner built around -mmin +55 , which matches only if the file was last modified more than 55 minutes ago; on a stock Solaris box, where find has no -mmin , create a reference file whose timestamp is 55 minutes in the past with touch -t and use find ... ! -newer instead. Run the check from cron every few minutes; see the sketch after this entry. | {
"source": [
"https://unix.stackexchange.com/questions/219838",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67059/"
]
} |
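A minimal sketch of the GNU-find variant described above (the log path and the cron cadence are assumptions):
log=/var/tmp/IOstatDisk2.log
if [ -n "$(find "$log" -mmin +55 2>/dev/null)" ]; then
    echo "new cycle - forced update after 55 min." >> "$log"
fi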
219,851 | I login as root and can not see the screen created by other others using screen -ls . I think the root user could have way to access those screen, but I can't find useful options of screen: Use: screen [-opts] [cmd [args]] or: screen -r [host.tty]Options:-4 Use IPv4.-6 Use IPv6.-a Force all capabilities into each window's termcap.-A -[r|R] Adapt all windows to the new display width & height.-c file Read configuration file instead of '.screenrc'.-d (-r) Detach the elsewhere running screen (and reattach here).-dmS name Start as daemon: Screen session in detached mode.-D (-r) Detach and logout remote (and reattach here).-D -RR Do whatever is needed to get a screen session.-e xy Change command characters.-f Flow control on, -fn = off, -fa = auto.-h lines Set the size of the scrollback history buffer.-i Interrupt output sooner when flow control is on.-l Login mode on (update /var/run/utmp), -ln = off.-list or -ls. Do nothing, just list our SockDir.-L Turn on output logging.-m ignore $STY variable, do create a new screen session.-O Choose optimal output rather than exact vt100 emulation.-p window Preselect the named window if it exists.-q Quiet startup. Exits with non-zero return code if unsuccessful.-r Reattach to a detached screen process.-R Reattach if possible, otherwise start a new session.-s shell Shell to execute rather than $SHELL.-S sockname Name this session <pid>.sockname instead of <pid>.<tty>.<host>.-t title Set title. (window's name).-T term Use term as $TERM for windows, rather than "screen".-U Tell screen to use UTF-8 encoding.-v Print "Screen version 4.00.03 (FAU) 23-Oct-06".-wipe Do nothing, just clean up SockDir.-x Attach to a not detached screen. (Multi display mode).-X Execute <cmd> as a screen command in the specified session. So which should I use? | Unless the screen session in question is created with multiuser on , you can't. Even if you set your SCREENDIR variable to point at the other user's socket directory, screen will just complain that you don't own the directory and quit when you try to use it. Of course, you can simply su to the other user and use screen in the normal way. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219851",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124489/"
]
} |
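For completeness on the multiuser route mentioned above, the session owner enables it from inside their screen session (Ctrl-a : opens the colon prompt):
:multiuser on
:acladd root
after which root can attach with screen -x otheruser/sessionname . Be aware that multiuser attach generally requires the screen binary to be setuid root, which many distributions disable by default.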
219,853 | TCP/IP and UDP captures can be made using tcpdump / dumpcap and produce a pcap/pcapng file which can be fed to Wireshark for further analysis. Does a similar tool exist for named Unix domain sockets? (A general solution that works for abstract sockets would be nice too though.) strace as-is is not sufficient, it is not straightforward to filter for Unix domain sockets I/O. A proxy using socat or the like is also not suitable as the goal is passive analysis for existing open programs. How can I obtain a packet capture that I can use in Wireshark for analysis? Example protocol applications are X11 (Xorg, my current application) and cURL/PHP (HTTP). I have seen a CONFIG_UNIX_DIAG option in the Linux kernel, is this of some use? | As of Linux kernel v4.2-rc5 it is not possible to capture directly using the interfaces that are in use by libpcap. libpcap uses the Linux-specific AF_PACKET (alias PF_PACKET ) domain which only allows you to capture data going through a " netdevice " (such as Ethernet interfaces). There is no kernel interface for capturing from AF_UNIX sockets. Standard Ethernet captures have an Ethernet header with source/destination, etc. Unix sockets have no such fake header and the link-layer header types registry does not list anything related to this. The basic entry points for data are unix_stream_recvmsg and unix_stream_sendmsg for SOCK_STREAM ( SOCK_DGRAM and SOCK_SEQPACKET have similarly named functions). Data is buffered in sk->sk_receive_queue , and in the unix_stream_sendmsg function there is no code that ultimately leads to calling the tpacket_rcv function for packet captures. See this analysis by osgx on SO for more details on the internals of packet capture in general. Back to the original question on AF_UNIX socket monitoring, if you are mainly interested in application data, you have some options: Passive (also works for already running processes): Use strace and capture on possible system calls that perform I/O. There are lots of them: read , pread64 , readv , preadv , recvmsg and many more... See @Stéphane Chazelas example for xterm . The disadvantage of this approach is that you first have to find your file descriptor, and you might still miss system calls. With strace you can use -e trace=file for most of them ( pread is only covered by -e trace=desc , but it is probably not used for Unix sockets by most programs). Break on/modify unix_stream_recvmsg , unix_stream_sendmsg (or unix_dgram_* or unix_seqpacket_* ) in the kernel and output the data somewhere. You can use SystemTap for setting such trace points; here is an example to monitor for outgoing messages. Requires kernel support and availability of debugging symbols . Active (only works for new processes): Use a proxy that also writes files. You could write a quick multiplexer yourself or hack something like this that also outputs a pcap (beware of the limitations, for example AF_UNIX can pass file descriptors, AF_INET cannot):

# fake TCP server connects to real Unix socket
socat TCP-LISTEN:6000,reuseaddr,fork UNIX-CONNECT:some.sock
# start packet capture on said port
tcpdump -i lo -f 'tcp port 6000'
# clients connect to this Unix socket
socat UNIX-LISTEN:fake.sock,fork TCP-CONNECT:127.0.0.1:6000

Use a dedicated application proxy. For X11, there is xscope ( git , manual ). The suggested CONFIG_UNIX_DIAG option is unfortunately also not helpful here, it can only be used to collect statistics, not acquire realtime data as they flow by (see linux/unix_diag.h ).
Unfortunately there are no perfect tracers at the moment for Unix domain sockets that produce pcaps (to my best knowledge). Ideally there would be a libpcap format that has a header containing the source/dest PID (when available) followed by optional additional data (credentials, file descriptors) and finally the data. Lacking that, the best that can be done is syscall tracing. Additional information (for the interested reader), here are some backtraces (acquired with GDB breaking on unix_stream_* and rbreak packet.c:. , Linux in QEMU and socat on mainline Linux 4.2-rc5): # echo foo | socat - UNIX-LISTEN:/foo &# echo bar | socat - UNIX-CONNECT:/foounix_stream_sendmsg at net/unix/af_unix.c:1638sock_sendmsg_nosec at net/socket.c:610sock_sendmsg at net/socket.c:620sock_write_iter at net/socket.c:819new_sync_write at fs/read_write.c:478__vfs_write at fs/read_write.c:491vfs_write at fs/read_write.c:538SYSC_write at fs/read_write.c:585SyS_write at fs/read_write.c:577entry_SYSCALL_64_fastpath at arch/x86/entry/entry_64.S:186unix_stream_recvmsg at net/unix/af_unix.c:2210sock_recvmsg_nosec at net/socket.c:712sock_recvmsg at net/socket.c:720sock_read_iter at net/socket.c:797new_sync_read at fs/read_write.c:422__vfs_read at fs/read_write.c:434vfs_read at fs/read_write.c:454SYSC_read at fs/read_write.c:569SyS_read at fs/read_write.c:562# tcpdump -i lo &# echo foo | socat - TCP-LISTEN:1337 &# echo bar | socat - TCP-CONNECT:127.0.0.1:1337tpacket_rcv at net/packet/af_packet.c:1962dev_queue_xmit_nit at net/core/dev.c:1862xmit_one at net/core/dev.c:2679dev_hard_start_xmit at net/core/dev.c:2699__dev_queue_xmit at net/core/dev.c:3104dev_queue_xmit_sk at net/core/dev.c:3138dev_queue_xmit at netdevice.h:2190neigh_hh_output at include/net/neighbour.h:467dst_neigh_output at include/net/dst.h:401ip_finish_output2 at net/ipv4/ip_output.c:210ip_finish_output at net/ipv4/ip_output.c:284ip_output at net/ipv4/ip_output.c:356dst_output_sk at include/net/dst.h:440ip_local_out_sk at net/ipv4/ip_output.c:119ip_local_out at include/net/ip.h:119ip_queue_xmit at net/ipv4/ip_output.c:454tcp_transmit_skb at net/ipv4/tcp_output.c:1039tcp_write_xmit at net/ipv4/tcp_output.c:2128__tcp_push_pending_frames at net/ipv4/tcp_output.c:2303tcp_push at net/ipv4/tcp.c:689tcp_sendmsg at net/ipv4/tcp.c:1276inet_sendmsg at net/ipv4/af_inet.c:733sock_sendmsg_nosec at net/socket.c:610sock_sendmsg at net/socket.c:620sock_write_iter at net/socket.c:819new_sync_write at fs/read_write.c:478__vfs_write at fs/read_write.c:491vfs_write at fs/read_write.c:538SYSC_write at fs/read_write.c:585SyS_write at fs/read_write.c:577entry_SYSCALL_64_fastpath at arch/x86/entry/entry_64.S:186tpacket_rcv at net/packet/af_packet.c:1962dev_queue_xmit_nit at net/core/dev.c:1862xmit_one at net/core/dev.c:2679dev_hard_start_xmit at net/core/dev.c:2699__dev_queue_xmit at net/core/dev.c:3104dev_queue_xmit_sk at net/core/dev.c:3138dev_queue_xmit at netdevice.h:2190neigh_hh_output at include/net/neighbour.h:467dst_neigh_output at include/net/dst.h:401ip_finish_output2 at net/ipv4/ip_output.c:210ip_finish_output at net/ipv4/ip_output.c:284ip_output at net/ipv4/ip_output.c:356dst_output_sk at include/net/dst.h:440ip_local_out_sk at net/ipv4/ip_output.c:119ip_local_out at include/net/ip.h:119ip_queue_xmit at net/ipv4/ip_output.c:454tcp_transmit_skb at net/ipv4/tcp_output.c:1039tcp_send_ack at net/ipv4/tcp_output.c:3375__tcp_ack_snd_check at net/ipv4/tcp_input.c:4901tcp_ack_snd_check at net/ipv4/tcp_input.c:4914tcp_rcv_state_process at net/ipv4/tcp_input.c:5937tcp_v4_do_rcv at 
net/ipv4/tcp_ipv4.c:1423tcp_v4_rcv at net/ipv4/tcp_ipv4.c:1633ip_local_deliver_finish at net/ipv4/ip_input.c:216ip_local_deliver at net/ipv4/ip_input.c:256dst_input at include/net/dst.h:450ip_rcv_finish at net/ipv4/ip_input.c:367ip_rcv at net/ipv4/ip_input.c:455__netif_receive_skb_core at net/core/dev.c:3892__netif_receive_skb at net/core/dev.c:3927process_backlog at net/core/dev.c:4504napi_poll at net/core/dev.c:4743net_rx_action at net/core/dev.c:4808__do_softirq at kernel/softirq.c:273do_softirq_own_stack at arch/x86/entry/entry_64.S:970 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219853",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8250/"
]
} |
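A practical first step for the strace option described above is locating which descriptors are Unix sockets at all; lsof can filter for them, and strace can then be limited to I/O syscalls, with a string size large enough to avoid truncating payloads (4096 here is an arbitrary choice):
lsof -U -a -p <pid>
strace -f -p <pid> -e trace=read,write,recvmsg,sendmsg -s 4096 -o unix-io.txt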
219,856 | My laptop is displaying a blank screen today, instead of the normal login window. I am using LUbuntu 14.04. I can get into the terminal and login, but how do I start lxde after that and how do I get the normal graphical login screen back? | From the text console you can start an LXDE session directly with startx (on Lubuntu the default session is LXDE; if startx lands you somewhere else, put exec startlxde into ~/.xinitrc first). To get the normal graphical login screen back, restart the display manager; Lubuntu 14.04 uses lightdm:

sudo service lightdm restart

If the greeter still comes up blank, look for the underlying error in /var/log/lightdm/ and ~/.xsession-errors ; a full disk or a broken graphics-driver update are common culprits, and sudo dpkg-reconfigure lightdm can repair a misconfigured display-manager selection. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219856",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4430/"
]
} |
219,857 | I'm having the following bash script:

URL=`grep -E -m 1 -o "<ExportCatalogResult>(.*)</ExportCatalogResult>" costa_export.xml| sed -e 's,.*<ExportCatalogResult>\([^<]*\)</ExportCatalogResult>.*,\1,g'` &&
echo $URL

echo -n $url > url.txt &&
wget $(cat url.txt | tr -d " \t\n\r") -O price.zip

The problem is that, when running the script, wget is always downloading an empty file:

http://training.******.net/WAWS_1_9/Catalog/price.zip //echo $URL
Resolving training.******.net (training.*******.net)... 194.**.***.90, 194.**.***.90
Connecting to training.******.net (training.*****.net)|194.**.***.90|:80... connected.
HTTP request sent, awaiting response... 204 Still exporting
Length: 0
Saving to: ‘price.zip’
[ <=> ] 0 --.-K/s in 0s

However, the url that is echoed is a valid one (running a wget with that url in the command line will download the zip package). Why is this happening? | The transfer log you pasted contains the explanation: the server answers with HTTP status 204 Still exporting and Length: 0. In other words, the catalog export has not finished on the server side at the moment the script runs, so there is no content yet and wget faithfully saves the zero bytes it received. By the time you retry the URL by hand, the export has completed and the server returns the real ZIP. The fix is to poll until the export is done instead of downloading immediately. A rough sketch, assuming curl is available (adjust the sleep interval and add a retry limit to taste):

while :; do
    code=$(curl -s -o price.zip -w '%{http_code}' "$URL")
    [ "$code" = 200 ] && break
    sleep 30
done

There is also a shell bug worth fixing: you assign URL (uppercase) but write $url (lowercase) into url.txt. Shell variables are case-sensitive, so url.txt is written empty (or keeps a stale value from an earlier run). Use $URL consistently, and quote it as "$URL" both when you write url.txt and when you pass it to wget. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219857",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8200/"
]
} |
219,859 | File:

Data inserted into table. Total count 13
No error occurred
Data inserted into table. Total count 45
No error occurred
Data inserted into table. Total count 14
No error occurred
Data inserted into table. Total count 90
No error occurred

Expected output file:

Data inserted into table. Total count 13
Data inserted into table. Total count 45
Data inserted into table. Total count 14
Data inserted into table. Total count 90

I want the output to look this way: every second line will be deleted but there will be no gap between lines. | With sed :

sed -e n\;d <file

With POSIX awk :

awk 'FNR%2' <file

If you have an older awk (like oawk ), you need:

oawk 'NR%2 == 1' <file

With ex :

$ ex file <<\EX
:g/$/+d
:wq!
EX

will edit the file in-place: g marks a global command, /$/ matches every line, +d deletes the line after each matched one, and :wq! saves all changes. This approach shares the same idea as the sed approach: starting from line 1, delete the line that follows each line that is kept. With perl :

perl -ne 'print if $. % 2' <file

and raku :

raku -ne '.say if $*IN.ins % 2' <file
raku -ne '.say if ++$ % 2' <file

Edit: Raku's IO::Handle.ins was removed in this commit . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/219859",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118311/"
]
} |
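If you have GNU sed, its first~step address extension expresses the same operation directly; these are equivalent ways to keep the odd lines or delete the even ones:
sed -n '1~2p' file
sed '0~2d' file
This syntax is not POSIX, so the n;d form above remains the portable choice.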
219,896 | I can run the command aws --version in a script and in the cli. But if I put this command into a crontab it does not work. Crontab: 50 12 * * * aws --version > ~/yolo.swag Error: /bin/sh: 1: aws: not found The aws command is in a bash script. And I get the same error message when I run the script in cron. How can I get the script to run the command fine ? | You need to specify the full path to the aws executable: 50 12 * * * /usr/local/bin/aws --version > ~/yolo.swag | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/219896",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107256/"
]
} |
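To find the path to hard-code in the crontab above, ask the shell with command -v aws . Alternatively, set a PATH for all cron jobs at the top of the crontab; cron honours plain variable assignments there:
PATH=/usr/local/bin:/usr/bin:/bin
50 12 * * * aws --version > ~/yolo.swag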
219,938 | How do I delete a set of trailing commas in bash: a,b,c,d,,,,1,2,3,,,, Desired Output: a,b,c,d1,2,3 Tried doing this: grep "5628" test.csv | sed 's/,*$//g' but it doesn't work. The file originally came from a Windows machine. | Re the command you have provided: grep "5628" test.csv | sed 's/,*$//g' This will output lines matching '5628' with all trailing commas removed. It will not update the file test.csv . However, you indicated the file came from a Windows machine, so the line endings are CR/NL instead of just NL. The result is that there is a hidden CR at the end of the line, and you need a command line this instead: grep "5628" test.csv | sed 's/,*\r*$//' Actually, you can simplify this to one command: sed -n '/5628/s/,*\r*$//p' test.csv | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/219938",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125512/"
]
} |
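To strip the trailing commas from the whole file in place rather than printing matches, GNU sed's -i option does it in one step; keeping a backup suffix until the result is verified is cheap insurance:
sed -i.bak 's/,*\r*$//' test.csv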
219,973 | I prefer regular expressions with -regex over the shell pattern syntax of -name . I also want to use the posix-egrep type, so I'd like to do something like alias find="find -regextype posix-egrep" but that is an error since the path has to come before the expression. However, the -regextype has to be first in the expressions (or before using -regex or -iregex ). I just want to alias or have a shell function find so that it works the same as usual, just ready to use posix-regex whenever I decide to use the -regex or -iregex option. How can I do this? | Since the find arguments are positional, a function is a better solution than an alias:

find() {
    command find "$1" -regextype posix-egrep "${@:2}"
}

Since you want to "overwrite" the original command, the function has to bypass itself when invoking the real find ; the command builtin does exactly that (spelling out the full path, e.g. /usr/bin/find , would work as well), otherwise the function would recurse into itself forever. By using a function instead of an alias we can use the positional argument variables ( $1 - $n ): "$1" passes your path through as the first argument, and "${@:2}" appends all but the first argument (everything after the path) using array slicing, so any other tests and options you add still land after -regextype posix-egrep . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/219973",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53613/"
]
} |
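With the wrapper above defined, an egrep-style pattern works without repeating the flag each time, for example matching C sources and headers (sample pattern; remember -regex matches against the whole path, hence the leading .* ):
find ~/src -regex '.*\.(c|h)'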
219,991 | I have parent folder and inside this folder I have 4 files ParentFolder File1.txt File2.txt File3.txt File4.txt I wanted to create subfolders inside the parent folder and carry the name of the files then move every file inside the folder that carry it is name like: ParentFolder File1 File1.txt File2 File2.txt File3 File3.txt File4 File4.txt How can I do that in batch or tsch script?I tried this script: #!/bin/bashin=path_to_my_parentFolderfor i in $(cat ${in}/all.txt); docd ${in}/${i} ls > files.txtfor ii in $(cat files.txt); domkdir ${ii}mv ${ii} ${in}/${i}/${ii} done done | You're overcomplicating this. I don't understand what you're trying to do with all.txt . To enumerate the files in a directory, don't call ls : that's more complex and doesn't work reliably anyway . Use a wildcard pattern . To strip the extension ( .txt ) at the end of the file name, use the suffix stripping feature of variable substitution . Always put double quotes around variable substitutions . cd ParentFolderfor x in ./*.txt; do mkdir "${x%.*}" && mv "$x" "${x%.*}"done | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/219991",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
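One edge case worth guarding in the loop above: if the directory contains no .txt files, the pattern expands to the literal ./*.txt and mkdir runs on that nonsense name. In bash, enabling nullglob beforehand makes the loop simply do nothing instead:
shopt -s nullglob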
220,001 | I want to create a script to run the following BASH command: mysqldump -u [username] -p [db_name] > [path to backup file] Which results in a backup file. When running this in BASH, it prompts for a password before continuing. How do I craft this in a BASH script so that the password is automatically entered? Can this be done securely? | The best kind of approach here is to do something like: mysqldump --defaults-extra-file=/path/to/auth.cnf ... Where auth.cnf looks like: [client]user=the-userpassword=the-password Then make sure the file is only readable by whomever is meant to run that script. The script itself can be world readable. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/220001",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45243/"
]
} |
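Locking down the credentials file is the part that makes the approach above safe; something along these lines (user name illustrative):
chmod 600 /path/to/auth.cnf
chown backupuser: /path/to/auth.cnf
The dump command itself can then live in a world-readable script or crontab without exposing the password.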
220,017 | Does the hash of a file change if the filename or path or timestamp or permissions change? $ echo some contents > testfile$ shasum testfile 3a2be7b07a1a19072bf54c95a8c4a3fe0cdb35d4 testfile | Not as far as I can tell after a simple test. $ echo some contents > testfile$ shasum testfile 3a2be7b07a1a19072bf54c95a8c4a3fe0cdb35d4 testfile$ mv testfile newfile$ shasum newfile 3a2be7b07a1a19072bf54c95a8c4a3fe0cdb35d4 newfile | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/220017",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32951/"
]
} |
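The same holds for permission and timestamp changes, since the digest is computed over the file's contents only. Continuing the test above:
$ chmod 600 newfile
$ touch -t 200001010000 newfile
$ shasum newfile
3a2be7b07a1a19072bf54c95a8c4a3fe0cdb35d4 newfile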
220,065 | We have a linux machine, in which the swap partition gets filled to the limit from time to time. There's still enough of RAM free, so there should be no risk of running out of memory. The usual course of action in case of a full swap is to execute a swapoff -a && swapon -a to completely clear the swap. My Questions: Is the full swap partition a problem which needs attention in the first place, or is it something "normal", which can just be ignored. Is turning the swap off and on again a good thing to do, or does it make the situation worse, because the kernel now gets busy clearing the swap space? What would be a better (or the best) way to react? | Clearing the swap is not necessary nor useful. Read linuxatemyram . The kernel has a quite efficient page cache . So RAM is used for useful data (e.g. recently accessed file segment chunks, or heap memory), and less useful data got swapped to the swap zone. Perhaps your swap zone might be too small. You could also swap to some file. See this . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/220065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126177/"
]
} |
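If the swap usage itself still bothers you, tune how eagerly the kernel swaps instead of flushing the swap area. The vm.swappiness sysctl (0-100, default 60 on most distributions) lowers the kernel's preference for swapping out anonymous pages:
sysctl vm.swappiness=10
Make it persistent in /etc/sysctl.conf. To see which processes own the swapped pages, check the per-process VmSwap field:
grep VmSwap /proc/*/status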
220,070 | I'm calling a URL using wget. The URL gives me a response, which is a message id. I want to write the logs to a log file, with the message id as well. Also, the log should be appended to each time. I'm trying to do it in my shell script. Is it possible to do this? If so, how can I do it? | Yes. wget writes its own progress and log messages to stderr, and the -a logfile option ( --append-output ) sends them to a file, appending on every run (unlike -o , which truncates the file first). To capture the response body, your message id, in the same log, have wget write the document to stdout with -O - and append that too. A sketch with placeholder path and URL:

wget -a /var/log/myjob.log -O - 'http://example.com/endpoint' | tee -a /var/log/myjob.log

If you only need the message id itself appended, plain shell redirection inside the script is enough:

wget -qO- 'http://example.com/endpoint' >> /var/log/myjob.log

Either way the file grows with each run rather than being overwritten. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/220070",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122825/"
]
} |
220,228 | I wonder whether for the average Linux user it is considered as bad -from a point of view of security or any other relevant viewpoints- to have no or almost no entropy left in /dev/random. Edit: I don't need to generate random numbers (I would use /dev/urandom for that and even for password generation and disk encryption). Just for the fun of it, I have a bash script that generates strings of random characters out of /dev/random and of course, after playing a bit with it, I am left without entropy in /dev/random and it blocks. On IRC I was told it's "bad" to do so, but I wasn't given any reason. Is it bad because the average Linux user automatically generates random things using /dev/random? If so, which program(s) is/are involved? I also understand that having no entropy left in /dev/random makes the generation of numbers deterministic. But again, is my computer (the average Linux user) in need of truly random numbers? Edit 2: I've just monitored the entropy level every second in /dev/random during about 3 minutes, where I launched my bash script that uses entropy to generate a string of random character around the beginning of the monitoring. I've made a plot. We can see that indeed, the entropy level oscillates somehow, so some program(s) on my computer are using /dev/random to generate stuff. Is there a way I can list all programs using the file /dev/random? We can also see that it takes less than a minute to yield "acceptable levels" of entropy once the entropy pool has been emptied. | Entropy is fed into /dev/random at a rather slow rate, so if you use any program that uses /dev/random , it's pretty common for the entropy to be low. Even if you believe in Linux's definition of entropy, low entropy isn't a security problem. /dev/random blocks until it's satisfied that it has enough entropy. With low entropy, you'll get applications sitting around waiting for you to wiggle the mouse, but not a loss of randomness. In fact Linux's definition of entropy is flawed: it's an extremely conservative definition which strives to achieve a theoretical level of randomness that's useless in practice. In fact, entropy does not wear out — once you have enough, you have enough. Unfortunately, Linux only has two interfaces to get random numbers: /dev/random , which blocks when it shouldn't, and /dev/urandom , which never blocks. Fortunately, in practice, /dev/urandom is almost always correct , because a system quickly gathers enough entropy, after which point /dev/urandom is ok forever (including uses such as generating cryptographic keys) . The only time when /dev/urandom is problematic is when a system doesn't have enough entropy yet, for example on the first boot of a fresh installation, after booting a live CD, or after cloning a virtual machine. In such situations, wait until /proc/sys/kernel/random/entropy_avail reaches 200 or so. After that, you can use /dev/urandom as much as you like. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/220228",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126296/"
]
} |
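On the side question of who is draining the pool: you can watch the level live with watch -n1 cat /proc/sys/kernel/random/entropy_avail , and list processes that currently hold the device open with
fuser -v /dev/random
lsof /dev/random
though a program that opens, reads and immediately closes the device will not show up in such a snapshot.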
220,229 | How can I set CPU affinity for a specific program (say gzip) so it always runs on a specific core or cores (core 1, for example)? I read about taskset, but can it be used before the program actually runs and creates a process? | Yes, this is exactly what taskset is for, and it does not need an existing process: it can launch the program already pinned. For example, taskset -c 1 gzip bigfile starts gzip restricted to core 1 (cores are numbered from 0). Lists and ranges work too, e.g. taskset -c 0,2 or taskset -c 1-3 . For a process that is already running, change its affinity with taskset -cp 1 <pid> (add -a to apply it to all of the process's threads). The affinity mask is inherited by child processes, so pinning a wrapper shell pins everything it spawns. If you want gzip to always run pinned without typing taskset each time, wrap it, e.g. with a shell alias alias gzip='taskset -c 1 gzip' or a small wrapper script placed earlier in $PATH . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/220229",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48025/"
]
} |
220,355 | I want to search all files inside a directory and its subdirectories for lines containing a certain string, but I want to exclude those results that contain a different certain string in the line immediately after it. For example, this:

foo1 searchString bar
foo1 excludeString bar
foo2 searchString bar
something else
foo3 searchString bar
foo3 excludeString bar
foo4 searchString bar

should return this:

foo2 searchString bar
foo3 searchString bar
foo4 searchString bar

I know that -A prints multiple lines, and that -v excludes results. But my current approach of grep -r -A 1 "searchString" | grep -v "excludeString" obviously can't work. Is there a way to tell the second grep that it should also remove the previous line if it finds a match? Or some other way how I might achieve this? Performance isn't my primary concern; It would be nice if the command is relatively easy to remember though. | You can use Perl-compatible regular expressions grep ( pcregrep ):

$ pcregrep -M '(searchString.*\n)(?!.*excludeString)' file
foo2 searchString bar
foo3 searchString bar
foo4 searchString bar

It matches searchString followed by any character ( . ) repeated zero or more times ( * ) and a newline ( \n ), and keeps the match only if the text that follows, i.e. the next line, does not match the pattern .*excludeString ; that is what the negative lookahead (?!...) asserts. The -M option is what allows the pattern to match across multiple lines. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/220355",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31533/"
]
} |
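Since the question asks about a whole directory tree: pcregrep can recurse on its own, so the same pattern applies directly to a directory:
pcregrep -rM '(searchString.*\n)(?!.*excludeString)' /path/to/dir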
220,362 | I'm in the process of installing postgresql onto a second server Previously I installed postgresql and then used the supplied script ./contrib/start-scripts/linux Placed into the correct dir # cp ./contrib/start-scripts/linux /etc/rc.d/init.d/postgresql92# chmod 755 /etc/rc.d/init.d/postgresql92 Which I could then execute as expected with # service postgresql92 start However the new machine is using Systemd and it looks like there is a completely different way to do this I don't want to hack at this and ruin something so I was wondering if anyone out there could point me in the right direction of how to achieve the same result | When installing from source, you will need to add a systemd unit file that works with the source install. For RHEL, Fedora my unit file looks like: /usr/lib/systemd/system/postgresql.service [Unit]Description=PostgreSQL database serverAfter=network.target[Service]Type=forkingUser=postgresGroup=postgres# Where to send early-startup messages from the server (before the logging# options of postgresql.conf take effect)# This is normally controlled by the global default set by systemd# StandardOutput=syslog# Disable OOM kill on the postmasterOOMScoreAdjust=-1000# ... but allow it still to be effective for child processes# (note that these settings are ignored by Postgres releases before 9.5)Environment=PG_OOM_ADJUST_FILE=/proc/self/oom_score_adjEnvironment=PG_OOM_ADJUST_VALUE=0# Maximum number of seconds pg_ctl will wait for postgres to start. Note that# PGSTARTTIMEOUT should be less than TimeoutSec value.Environment=PGSTARTTIMEOUT=270Environment=PGDATA=/usr/local/pgsql/dataExecStart=/usr/local/pgsql/bin/pg_ctl start -D ${PGDATA} -s -w -t ${PGSTARTTIMEOUT}ExecStop=/usr/local/pgsql/bin/pg_ctl stop -D ${PGDATA} -s -m fastExecReload=/usr/local/pgsql/bin/pg_ctl reload -D ${PGDATA} -s# Give a reasonable amount of time for the server to start up/shut down.# Ideally, the timeout for starting PostgreSQL server should be handled more# nicely by pg_ctl in ExecStart, so keep its timeout smaller than this value.TimeoutSec=300[Install]WantedBy=multi-user.target Then enable the service on startup and start the PostgreSQL service: $ sudo systemctl daemon-reload # load the updated service file from disk$ sudo systemctl enable postgresql$ sudo systemctl start postgresql | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/220362",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89568/"
]
} |
220,380 | I'm trying to use OpenConnect to connect to my company's Cisco VPN (AnyConnect) The connection seems to work just fine, what I'm not understanding is how to set up routing. I'm doing this from the command line. I use the default VPN script to connect like this: openconnect -u MyUserName --script path_to_vpnc_script myvpngateway.example.com I type in my password, and I'm connected fine, but my default route has changed to force all traffic down the VPN link, whereas I just want company traffic down the VPN link. Are there some variables that I need to be putting into the vpnc-script? It's not very clear how this is done. | This answer is as follows: Use the following bash wrapper script to call the vpnc-script. In the wrapper script, the routes to be used for the VPN connection can be specified via a ROUTES variable. #!/bin/bash## Routes that we want to be used by the VPN linkROUTES="162.73.0.0/16"# Helpers to create dotted-quad netmask strings.MASKS[1]="128.0.0.0"MASKS[2]="192.0.0.0"MASKS[3]="224.0.0.0"MASKS[4]="240.0.0.0"MASKS[5]="248.0.0.0"MASKS[6]="252.0.0.0"MASKS[7]="254.0.0.0"MASKS[8]="255.0.0.0"MASKS[9]="255.128.0.0"MASKS[10]="255.192.0.0"MASKS[11]="255.224.0.0"MASKS[12]="255.240.0.0"MASKS[13]="255.248.0.0"MASKS[14]="255.252.0.0"MASKS[15]="255.254.0.0"MASKS[16]="255.255.0.0"MASKS[17]="255.255.128.0"MASKS[18]="255.255.192.0"MASKS[19]="255.255.224.0"MASKS[20]="255.255.240.0"MASKS[21]="255.255.248.0"MASKS[22]="255.255.252.0"MASKS[23]="255.255.254.0"MASKS[24]="255.255.255.0"MASKS[25]="255.255.255.128"MASKS[26]="255.255.255.192"MASKS[27]="255.255.255.224"MASKS[28]="255.255.255.240"MASKS[29]="255.255.255.248"MASKS[30]="255.255.255.252"MASKS[31]="255.255.255.254"export CISCO_SPLIT_INC=0# Create environment variables that vpnc-script uses to configure networkfunction addroute(){ local ROUTE="$1" export CISCO_SPLIT_INC_${CISCO_SPLIT_INC}_ADDR=${ROUTE%%/*} export CISCO_SPLIT_INC_${CISCO_SPLIT_INC}_MASKLEN=${ROUTE##*/} export CISCO_SPLIT_INC_${CISCO_SPLIT_INC}_MASK=${MASKS[${ROUTE##*/}]} export CISCO_SPLIT_INC=$((${CISCO_SPLIT_INC}+1))}# Old function for generating NetworkManager 0.8 GConf keys function translateroute (){ local IPADDR="${1%%/*}" local MASKLEN="${1##*/}" local OCTET1="$(echo $IPADDR | cut -f1 -d.)" local OCTET2="$(echo $IPADDR | cut -f2 -d.)" local OCTET3="$(echo $IPADDR | cut -f3 -d.)" local OCTET4="$(echo $IPADDR | cut -f4 -d.)" local NUMADDR=$(($OCTET1*16581375 + $OCTET2*65536 + $OCTET3*256 + $OCTET4)) local NUMADDR=$(($OCTET4*16581375 + $OCTET3*65536 + $OCTET2*256 + $OCTET1)) if [ "$ROUTESKEY" = "" ]; then ROUTESKEY="$NUMADDR,$MASKLEN,0,0" else ROUTESKEY="$ROUTESKEY,$NUMADDR,$MASKLEN,0,0" fi}if [ "$reason" = "make-nm-config" ]; then echo "Put the following into the [ipv4] section in your NetworkManager config:" echo "method=auto" COUNT=1 for r in $ROUTES; do echo "routes${COUNT}=${r%%/*};${r##*/};0.0.0.0;0;" COUNT=$(($COUNT+1)) done exit 0fifor r in $ROUTES; do addroute $rdoneexec /etc/openconnect/vpnc-script Then connect as follows: openconnect -u myusername --script wrapper-script -b vpngateway.example.com | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/220380",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90367/"
]
} |
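In the wrapper above, ROUTES is a plain whitespace-separated list of CIDR prefixes (the addroute loop iterates over it), so routing several company networks down the tunnel is just (example prefixes):
ROUTES="162.73.0.0/16 10.20.0.0/16 192.168.100.0/24"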
220,447 | I have a Makefile target, in which I have to check the value of an environment variable. But, I don't get the exact syntax of it. Tried hard, but can't find it. Any help is appreciated. Environment variable name: TEST, its value: "TRUE"

test_target:
	ifeq ($(TEST),"TRUE")
	echo "Do something"
	endif

I get the following error:

/bin/sh: -c: line 0: syntax error near unexpected token `"TRUE","TRUE"'
/bin/sh: -c: line 0: `ifeq ("TRUE","TRUE")' | The ifeq directive is processed by make itself, not by the shell, so it must not be indented with a tab: inside a rule, a tab-indented line is part of the recipe and gets handed to /bin/sh , which is exactly why your error shows the shell choking on ifeq ("TRUE","TRUE") . Start the conditional at the beginning of the line (leading spaces are tolerated, tabs are not):

test_target:
ifeq ($(TEST),"TRUE")
	echo "Do something"
endif

The echo line is a real recipe line, so it keeps its tab. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/220447",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118741/"
]
} |
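If you would rather have the shell evaluate the test at run time instead of make at parse time, put it in the recipe and double the dollar signs so make passes $TEST through to the shell (note the leading tab; and if the variable's value literally contains the quote characters, as the error output in the question suggests, compare against '"TRUE"' instead):
test_target:
	@if [ "$$TEST" = "TRUE" ]; then echo "Do something"; fi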
220,501 | My /etc/hosts file looks like this:

# Your system has configured 'manage_etc_hosts' as True.
# As a result, if you wish for changes to this file to persist
# then you will need to either
# a.) make changes to the master file in /etc/cloud/templates/hosts.tmpl
# b.) change or remove the value of 'manage_etc_hosts' in
# /etc/cloud/cloud.cfg or cloud-config from user-data
127.0.1.1 ansible-server ansible-server
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
node1 0.0.0.0
node2 0.0.0.0

I have added the node1 and node2 and naturally the IP 0.0.0.0 is replaced by the IP of the node. I would assume this works perfectly fine, however it doesn't. I thought SSH simply ignores the hosts file:

root@ansible-server:~# ssh root@node1
ssh: Could not resolve hostname node1: Name or service not known
root@ansible-server:~# ssh root@node2
ssh: Could not resolve hostname node2: Name or service not known

However, I can't ping these servers by their name either:

root@ansible-server:~# ping node1
ping: unknown host node1
root@ansible-server:~# ping node2
ping: unknown host node2

It is pretty clear I'm doing something really stupid here... but what? Additional information: this server runs Ubuntu 14.04.2 LTS and is hosted on DigitalOcean. The server this is occurring on is an Ansible server. | The format of lines in /etc/hosts is address first and name(s) second:

0.0.0.0     node1
0.0.0.0     node2
192.168.1.1 myroutermaybe
8.8.8.8     googledns # in case DNS doesn't work for DNS???
127.0.0.1   localhost

or, where several names map to the same address:

0.0.0.0     node1 node2 node3 stitch626

ADDED, thanks to reminder by fpmurphy1: The first name (if more than one) is used as the canonical or "official" name for gethostbyaddr etc, so if you have a domain name assigned to this machine/address it is usually clearest and most useful to put the Fully Qualified Domain Name (FQDN) as the first name. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/220501",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78333/"
]
} |
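After editing the file, verify resolution through the normal NSS path rather than only pinging:
getent hosts node1
should echo the address and name back if the entry is well-formed. This assumes files is listed for hosts in /etc/nsswitch.conf, which is the default on Ubuntu.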