182,500
I'm working in a CentOS 6.6 Docker image. I thought I installed everything to get access to man pages, but apparently not...

$ yum install -y man man-pages man-pages-overrides
[...]
Complete!
$ which man
/usr/bin/man
$ man man
No manual entry for man

What am I missing?

Regarding questions in comments (thanks for your help everyone):

$ echo $MANPATH
# empty
$ man 1 man
No entry for man in section 1 of the manual
$ man 7 man
No entry for man in section 7 of the manual
$ ll /usr/share/man/man1/
total 8
drwxr-xr-x  2 root root 4096 Sep 23  2011 ./
drwxr-xr-x 61 root root 4096 Jan 31 01:55 ../
$ yum search man | grep db
ModemManager.x86_64 : Mobile broadband modem management service
hsqldb-manual.noarch : Manual for hsqldb
db4-utils.x86_64 : Command line tools for managing Berkeley DB (version 4)
foomatic-db-ppds.noarch : PPDs from printer manufacturers
ldb-tools.x86_64 : Tools to manage LDB files
$ rpm -q -l man | grep man.1
/usr/share/doc/man-1.6f
/usr/share/doc/man-1.6f/COPYING
/usr/share/doc/man-1.6f/README
/usr/share/man/bg/man1/man.1.gz
/usr/share/man/cs/man1/man.1.gz
/usr/share/man/da/man1/man.1.gz
/usr/share/man/de/man1/man.1.gz
/usr/share/man/el/man1/man.1.gz
/usr/share/man/en/man1/man.1.gz
/usr/share/man/es/man1/man.1.gz
/usr/share/man/fi/man1/man.1.gz
/usr/share/man/fr/man1/man.1.gz
/usr/share/man/hr/man1/man.1.gz
/usr/share/man/it/man1/man.1.gz
/usr/share/man/ja/man1/man.1.gz
/usr/share/man/ko/man1/man.1.gz
/usr/share/man/man1/man.1.gz
/usr/share/man/nl/man1/man.1.gz
/usr/share/man/pl/man1/man.1.gz
/usr/share/man/pt/man1/man.1.gz
/usr/share/man/ro/man1/man.1.gz
/usr/share/man/sl/man1/man.1.gz
See the comment re: removing tsflags=nodocs from /etc/yum.conf, put there as a purported consequence of the base Docker image build policy: https://groups.google.com/forum/#!topic/docker-user/fuW0e9xlqQE. I just tested this on a CentOS 6.7 container and it works.
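A minimal sketch of that fix, run inside the container (assuming tsflags=nodocs really is the offending line in /etc/yum.conf):

# drop the flag that tells yum to skip documentation
sed -i '/tsflags=nodocs/d' /etc/yum.conf
# reinstall so the man pages are actually laid down this time
yum reinstall -y man man-pages man-pages-overrides
man man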
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/182500", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101593/" ] }
182,532
When you try to find a previously used command in an interactive shell session by pressing ↑ (up arrow), you may get something like

$ ls    # 1st press of up arrow
$ ls    # 2nd press of up arrow
$ ls    # 3rd press of up arrow
$ ls    # 4th press of up arrow
$ ls    # 5th press of up arrow
$ ls    # 6th press of up arrow
$ make  # 7th press of up arrow
$ make  # 8th press of up arrow
$ make  # 9th press of up arrow
$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" # Bingo!

I would like it better if it were like this:

$ ls    # 1st press of up arrow
$ make  # 2nd press of up arrow
$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" # Bingo!

because the duplicated history is usually of no use. How can I get Bash to do this?
You can accomplish this by setting ignoredups in the HISTCONTROL variable:

HISTCONTROL="ignoredups"

Optionally export it,

export HISTCONTROL="ignoredups"

to make it an environment variable. From the bash(1) man page:

HISTCONTROL
    A colon-separated list of values controlling how commands are saved on the history list. If the list of values includes ignorespace, lines which begin with a space character are not saved in the history list. A value of ignoredups causes lines matching the previous history entry to not be saved. A value of ignoreboth is shorthand for ignorespace and ignoredups. A value of erasedups causes all previous lines matching the current line to be removed from the history list before that line is saved. Any value not in the above list is ignored. If HISTCONTROL is unset, or does not include a valid value, all lines read by the shell parser are saved on the history list, subject to the value of HISTIGNORE. The second and subsequent lines of a multi-line compound command are not tested, and are added to the history regardless of the value of HISTCONTROL.

This does exactly what the question asks for. If the user enters the commands ruby …, make, make, make, ls, ls, ls, ls, ls and ls (as separate, consecutive lines), then the history list will be ruby …, make, ls. Pressing ↑ (up arrow) three times will return to the ruby command.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/182532", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8932/" ] }
182,537
When trying to write the stdout from a Python script to a text file (python script.py > log), the text file is created when the command is started, but the actual content isn't written until the Python script finishes. For example:

script.py:

import time

for i in range(10):
    print('bla')
    time.sleep(5)

prints to stdout every 5 seconds when called with python script.py, but when I call python script.py > log, the size of the log file stays zero until the script finishes. Is it possible to directly write to the log file, such that you can follow the progress of the script (e.g. using tail)?

EDIT: It turns out that python -u script.py does the trick; I didn't know about the buffering of stdout.
This is happening because normally, when a process's STDOUT is redirected to something other than a terminal, the output is buffered into some OS-specific-sized buffer (perhaps 4k or 8k in many cases). Conversely, when outputting to a terminal, STDOUT will be line-buffered or not buffered at all, so you'll see output after each \n or for each character. You can generally change the STDOUT buffering with the stdbuf utility:

stdbuf -oL python script.py > log

Now if you tail -F log, you should see each line output immediately as it is generated. Alternatively, explicit flushing of the output stream after each print should achieve the same. It looks like sys.stdout.flush() should achieve this in Python. If you are using Python 3.3 or newer, the print function also has a flush keyword that does this: print('hello', flush=True).
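For reference, a quick sketch of the approaches mentioned above side by side (stdbuf is from GNU coreutils; -u and PYTHONUNBUFFERED are standard Python options):

# 1. force line buffering from outside the script
stdbuf -oL python script.py > log

# 2. let Python itself run unbuffered
python -u script.py > log

# 3. same thing via the environment
PYTHONUNBUFFERED=1 python script.py > log

# then follow the progress with
tail -f log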
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/182537", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101617/" ] }
182,573
Zip encryption has often had a bad reputation of being weak, but some would argue that a zip file encrypted using certain algorithms (such as AES), together with a strong password, is really safe (see: https://superuser.com/questions/145167/is-zips-encryption-really-bad). My question is: how strong is the encryption of a zip file in Linux Mint 17.1, when one compresses a file by right-clicking on it in Nemo and then selecting the context menu item "Compress..."? Does it use the same AES standard as recommended by the link above? Please assume a strong password using upper and lower case letters, numbers, symbols, 16+ characters and not a dictionary word.
File Roller (the GNOME application whose variant/fork/whatever-you-call-it you use) no longer simply depends on zip here: according to a somewhat outdated File Roller news page, p7zip has been used to create zip archives since version 2.23.4. It's also stated on 7-Zip's Wiki page:

7-Zip supports: The 256-bit AES cipher. Encryption can be enabled for both files and the 7z directory structure. When the directory structure is encrypted, users are required to supply a password to see the filenames contained within the archive. The WinZip-developed zip file AES encryption standard is also available in 7-Zip to encrypt ZIP archives with AES 256-bit, but it does not offer filename encryption as in 7z archives.

Checking a standard-encrypted zip file from File Roller on the terminal shows:

7z l -slt [myStrongFile.zip]
-> Method = AES-128 Deflate

where 7-Zip's own Deflate algorithm applies (which yields better compression, too), according to the Wiki.

If you want stronger encryption, you have two options:

1. Use the terminal and the stronger zip encryption option:

7z a -p -mem=AES256 -tzip [myStrongerFile.zip] [fileToEncrypt1] [fileToEncrypt2] ...

Checking the encrypted zip file on the terminal shows:

7z l -slt [myStrongerFile.zip]
-> Method = AES-256 Deflate

2. Use the 7z format and encryption with File Roller, which, in contrast to zip files, also supports encrypting the directory structure. Checking the encrypted 7z file on the terminal shows:

7z l -slt [myStrongerFile.7z]
-> Method = LZMA:3m 7zAES:19

which means AES-256.
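A hedged sketch of the second option done from the terminal instead of File Roller (-mhe=on enables 7z header/filename encryption; the file names are placeholders):

7z a -p -mhe=on myStrongerFile.7z fileToEncrypt1 fileToEncrypt2
7z l -slt myStrongerFile.7z   # Method should now report 7zAES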
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/182573", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65416/" ] }
182,602
I have an FFmpeg command to trim audio:

ffmpeg -ss 01:43:46 -t 00:00:44.30 -i input.mp3 output.mp3

The problem I have with this command is that option -t requires a duration (in seconds) from 01:43:46. I want to trim audio using start/stop times, e.g. between 01:43:46 and 00:01:45.02. Is this possible?
ffmpeg seems to have a new option -to in the documentation:

-to position (input/output)
    Stop writing the output or reading the input at position. position must be a time duration specification, see (ffmpeg-utils)the Time duration section in the ffmpeg-utils(1) manual. -to and -t are mutually exclusive and -t has priority.

Sample command with two time formats:

ffmpeg -i file.mkv -ss 20 -to 40 -c copy file-2.mkv
ffmpeg -i file.mkv -ss 00:00:20 -to 00:00:40 -c copy file-2.mkv

This should create a copy (file-2.mkv) of file.mkv from the 20 second mark to the 40 second mark.
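Applied to the audio trim from the question, a sketch, assuming the intended stop time is the 01:43:46 start plus the 44.30-second duration, i.e. 01:44:30.30:

ffmpeg -i input.mp3 -ss 01:43:46 -to 01:44:30.30 -c copy output.mp3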
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/182602", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77038/" ] }
182,626
I created a key for logging into a server (using ssh-keygen) with the name id_rsa, and so in my .ssh directory there are id_rsa.pub and id_rsa. The reason I used this name is that when I tried other names, they didn't work with my server (I couldn't log in for some reason). I set up a new server today (and generated the key on a different computer), but that key's name is also id_rsa. So how do I use this key on my MacBook Pro (OS X), which already has a key named id_rsa that is still in use (I can't get rid of it, as I need it to log into some other servers)?
Generally speaking SSH keys identify clients, not servers (well, at least for the keys in ~/.ssh). The recommended approach is to generate one key per client, as you've effectively done, and to add all the appropriate public keys to ~/.ssh/authorized_keys on the servers/accounts you need to access. So on your Macbook Pro, you wouldn't add the new server's key, you'd add your existing key (stored on the Macbook) to the new server, typically by using

ssh-copy-id <username>@<server>

If that doesn't work, cat ~/.ssh/id_rsa.pub on your Macbook and copy/paste that at the end of ~/.ssh/authorized_keys on the server. Each account you need to use on each server will end up with a ~/.ssh/authorized_keys looking something like

ssh-rsa AAAAuifi4poojaixahV8thaQu3eQueex0iequ7Eephua4sai8liwiezaic8othisieseepheexaa1zohdouk5ooxas0aoN9ouFa3ejuiK2odahy8Opaen0Niech4Vaegiphagh4EileiHuchoovusu3awahmo4hooShoocoshi3zohw4ieShaivoora7ruuy7igii3UkeeNg5oph6ohN4ciepaifee8ipas9Gei4cee1SohSoo2oCh5ieta5ohQu6eu5PhoomuxoowigaeH2ophau0xo5phoosh3mah7cheD3ioph1FeeZaudiMei4eighish3deixeiceangah5peeT8EeCheipaiLoonaaPhiej0toYe6== user1@host1
ssh-rsa AAAAsaengaitoh4eiteshijee8ohFichah1chaesh4Oeroh2Chae8aich2os1akoh4Waifee5dai3roethah9oojahnietaexo0ia0xiegheixaiwo8aeshui8uZ4chooCohtei8ooMieloo0pahghaeShooth3zae7eigoSe9arei0lohpeij4aeJ3sahfahviaNiejoozeu1zooth8meibooph5IeGuun1lothiur6aleaw8shuof6fah7ooboophoo8nae6aipieshahcae4ShochohZoh4gohX7aes7aes4bo1eiNaeng7Eeghoh6Ge3Maenoh0qui1eiphahWotahGai8ohYohchuubohp3va5dohs== user2@host1
ssh-rsa AAAA3Zohquoh8UavooveiF0aGho8tokaduih4eosai4feiCoophie7ekisuoNii0raizaighahfaik6aibeviojabee1Sheifo8mae0tiecei4Bai8gaiyahvo1eememofiesai0Teyooghah6iovi1zaibie3aePaFeishie0Pheitahka0FaisieVeuceekooSoopoox7Ahhaed2oi6Faeph1airaizee7Aeg8Aiya2oongaC9ing6iGheeg8chei1ogheighieghie1Apode3shibai5eit8oa5shahDaic0shishie0ies7Aijee5ohk1aetha1Quieyafu2oa0Ahwee3mu9tae4AebeiveeFiewohj== user1@host2

The lines will wrap in most editors, so it won't look quite like the above when viewed; but there is only one line per key. Each line takes the form

[options] key-type public-key comment

The important part in this is the middle section, which is the base64-encoded public key. Any user with a matching private key will be allowed on the server. The key-type is usually ssh-rsa nowadays, but you can expect to see other types become more popular in the future (such as ssh-ed25519). This depends on the options given when the key was generated. The comment is only there to help people identify the keys, so that once in a while someone can go through the list of authorized keys and make sensible decisions about whether to keep a key or not (disabling a key is as easy as commenting the line out with a # at the start of the line). Typically the comment is the username and hostname corresponding to the generated key (i.e. your username when you ran ssh-keygen and the hostname of the client computer). The optional options (there aren't any in the example above) allow you to control what the users are allowed to do on the server, and/or to constrain the keys (requiring them to be signed by a specific certificate authority, for example). For details, see the sshd manpage (search for "AUTHORIZED_KEYS FILE FORMAT").
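If you later do end up with several private keys on the client, a sketch of pinning a key per host in ~/.ssh/config (the host and user names here are placeholders):

Host newserver.example.com
    User coworkerguy
    IdentityFile ~/.ssh/id_rsa

ssh-copy-id also accepts -i to choose which public key to install on the server:

ssh-copy-id -i ~/.ssh/id_rsa.pub <username>@<server>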
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/182626", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91690/" ] }
182,631
There are two Linux kernel functions: get_ds() and get_fs(). According to this article, I know ds is short for data segment. However, I cannot guess what "fs" is short for. Any explanations?
The FS comes from the additional segment register named FS on the 386 architecture (end of second paragraph). My guess is that after DS for Data Segment and ES for Extra Segment, Intel just went for the next characters in the alphabet (FS, GS). You can see the 386 registers on the wiki page, in the graphic on the right side. From the linux kernel source on my Linux Mint system (arch/x86/include/asm/uaccess.h):

/*
 * The fs value determines whether argument validity checking should be
 * performed or not. If get_fs() == USER_DS, checking is performed, with
 * get_fs() == KERNEL_DS, checking is bypassed.
 *
 * For historical reasons, these macros are grossly misnamed.
 */

#define MAKE_MM_SEG(s)  ((mm_segment_t) { (s) })

#define KERNEL_DS       MAKE_MM_SEG(-1UL)
#define USER_DS         MAKE_MM_SEG(TASK_SIZE_MAX)

#define get_ds()        (KERNEL_DS)
#define get_fs()        (current_thread_info()->addr_limit)
#define set_fs(x)       (current_thread_info()->addr_limit = (x))
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/182631", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65170/" ] }
182,693
When I type vi into a terminal, the start screen it shows says vim, not vi. I'm sure I haven't installed vim yet and that this is actually vi, not vim: for example, the arrow keys print ABCD instead of moving the cursor.
While the original vi is still available, I do not think it is much used on current Linux or BSD distributions;[1] apparently it was dusted off in 2000 after having been mothballed a decade before that, and the last release was in 2005. There are various implementations of vi around; vi itself is really now a POSIX specification. These include nvi and elvis, but the most popular is probably vim. On systems that use vim, vi will simply be a softlink to it, and when invoked this way it should start in vi-compatible mode, so the system has something that conforms to the POSIX spec. However, that doesn't change the actual name of the program, which is vim, and that's what you see on the title screen.

1. Although it is available on Arch, at least. You might find it in other places too.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/182693", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92266/" ] }
182,696
I want to get the current CPUPower governor. When I type cpupower frequency-info I get a lot of information. I just want to get the governor, just like "ondemand" with no more information, to use its value in a program.
The current governor can be obtained as follows:

cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

Note that cpu* will give you the scaling governor of all your cores, not just e.g. cpu0. This solution might be system dependent, though; I'm not 100% sure it is portable.
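To capture just one value for use in a program, a sketch (assuming the first core is representative, which it is when all cores share a governor):

gov=$(cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor)
echo "$gov"   # e.g. ondemand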
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/182696", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83668/" ] }
182,732
I was looking up shebang and wondering why I would use it. I could execute a bash script using:

bash foo.sh

or

./foo.sh

(with a shebang line in foo.sh). What are the pros and cons of each, and which one should I use by default?
I would say yes, it's intrinsically better to use a shebang.

Pro: If you put all your scripts in your $PATH (maybe /usr/local/bin or ~/bin) and mark them as executable, then you can execute them by name without thinking about which interpreter you need to invoke (bash, Python, Ruby, Perl, etc.). If you place an executable file named foo with a shebang anywhere in your $PATH, you can simply type foo to execute it.

Con: You have to type #!/bin/bash at the top and chmod +x the file. This is near-zero cost for a very convenient return.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/182732", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101738/" ] }
182,793
I'm not very experienced with Linux and I made a big, big mistake. I ran the following command:

chown -R [ftpusername]:[ftpusername] /

I meant to run this:

chown -R [ftpusername]:[ftpusername] ./

See the problem? I tried to correct my mistake by changing the owner of all files to root:

chown -R root:root /

Now I'm getting permissions errors when trying to access my websites, but my biggest concern is that I want to make sure I haven't caused any security vulnerabilities here. Questions:

1. Was changing ownership of everything to root the right thing to do?
2. I think running chown caused some of the folder and file permissions to be changed. Is that normal?
3. Would this cause any security vulnerabilities?
Was changing ownership of everything to root the right thing to do?

No. It is, however, the quickest way I can think of to get the system to a normal state. There are plenty of processes which require some directories/files to be owned by their user. Examples include logs, caches, and working/home directories of processes like MySQL, LightDM, etc. Log files especially can create a lot of problems. There are also some applications which are setuid/setgid, and so need their owner/group to be something specific. Examples include /usr/bin/at, /usr/bin/crontab, etc.

I think running chown caused some of the folder and file permissions to be changed. Is that normal?

I doubt modes got changed. If they did, it most definitely is not normal.

Would this cause any security vulnerabilities?

Since you just set /usr/bin/crontab to be owned by root, you now have a setuid application that opens an editor. I doubt any vulnerabilities compare to that. Of course, this is a blatant vulnerability, so something more insidious might now pop up. Overall, I'd recommend simply re-installing the system - or hopefully you have full-disk backups.

Apparently, chown(3) is supposed to clear the setuid and setgid bits if the running process doesn't have the appropriate privileges. And man 2 chown for Linux says:

When the owner or group of an executable file are changed by an unprivileged user the S_ISUID and S_ISGID mode bits are cleared. POSIX does not specify whether this also should happen when root does the chown(); the Linux behavior depends on the kernel version. In case of a non-group-executable file (i.e., one for which the S_IXGRP bit is not set) the S_ISGID bit indicates mandatory locking, and is not cleared by a chown().

So, it seems the devs and the standards committees have provided safeguards.
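If you do keep the system rather than reinstalling, a hedged starting point is to audit what is now setuid/setgid and, on RPM-based systems, let the package database restore packaged owners and modes:

# list all setuid/setgid files on local filesystems for review
find / -xdev -type f \( -perm -4000 -o -perm -2000 \) -ls
# RPM can reset owners/groups and permissions to their packaged values
rpm --setugids -a
rpm --setperms -a

This only covers files that belong to packages; data directories (websites, databases, logs) still need their ownership fixed by hand.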
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/182793", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43886/" ] }
182,800
I have file1, file2, file3.

file1 contains 1
file2 contains 2
file3 contains 3

I use the command

cat file1 > file2 > file3

Results in:

file1 contains 1
file2 contains nothing
file3 contains 1

Why does anything along this line get destroyed? Basically, what am I not seeing behind the scenes? (Side note: using "append" >> is even weirder.)
Redirections in Bourne/POSIX-style shells such as bash, dash, ksh, etc. are processed in the order they appear, from left to right. > x opens and truncates file x, and sets the file descriptor that writes into x as standard output.

Your command:

cat file1 > file2 > file3

will:

1. Open and truncate file2
2. Set standard output to write to that file descriptor
3. Open and truncate file3
4. Set standard output to write to that file descriptor
5. Run cat file1

The end result is that standard output points into file3 at the time cat runs. Both file2 and file3 have their current contents erased, and file3 gets the output of cat (the contents of file1) written into it.

If you want to split output into multiple streams written into separate files, you can use tee:

cat file1 | tee file2 > file3

Other shells (notably zsh) behave differently, and your command would have the result you probably expected: both file2 and file3 would have the contents of file1. Note that cat isn't necessary here; < input redirection would do the job just as well.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/182800", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68239/" ] }
182,801
I've used many variants of Linux (mostly Debian derivatives) for over a decade now. One problem that I haven't seen solved satisfactorily is the issue of horizontal tearing, or Vsync not being properly implemented. I say this because I have used 5 different distros on 4 different computers with various monitors and Nvidia/AMD/ATI/Intel graphics cards; every time, there has been an issue with video tearing with even slight motion. This is a big problem, especially since even Windows XP doesn't have these issues on modern hardware. If anyone is going to use Linux for anything, why would they want constant defects to show up when doing anything non-CLI? I'm guessing that either few developers know about this problem or few care enough to fix it. I've tried just about every compositor out there, and usually the best they can do is minimize the issue but not eliminate it. Shouldn't it be as simple as synchronizing with the refresh rate of the monitor? Is there some politics among the OSS community that's preventing anyone from committing code that fixes this? Every time I've asked for help on this issue in the past, it either gets treated as an edge case (which I find difficult to believe it is, given the number of times I've replicated the problem) or I get potential solutions that at most minimize the tearing.
This is all due to the fact that the X server is outdated and ill-suited for today's graphics hardware; basically all the direct video card communication is done as an extension ("patch") over the ancient, bloated core. The X server provides no builtin means of synchronization between the user rendering a window and the screen displaying that window, so the content changes in the middle of rendering. This is one of the well-known issues of the X server (it has many; the entire model of what the server does is outdated - event handling in subwindows, metadata about windows, graphical primitives for direct drawing...). Widget toolkits mostly want to gloss over all this, but tearing is still a problem because there is no mechanism to handle it. Additional problems arise when you have multiple cards that require different drivers, and on top of all this, the OpenGL library has a hard-wired dependency on Xlib, so you can't really use it independently without going through X. Wayland, which is somewhat unenthusiastically trying to replace X, supports pedantic vsync synchronization in its core, and is advertised to have every frame exactly perfect. If you quickly google "wayland video tearing" you'll find more information on everything.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/182801", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79611/" ] }
182,802
I have a Bash script that looks like

#!/bin/bash
#
FECHA=`date +%j`
if [ $FECHA -eq 40 ]
then
    echo "Esta semana le toca preparar el café a Osvaldo" | mail -s 'Café' [email protected]
    exit
elif [ $FECHA -eq 47 ]
then
    echo "Esta semana le toca preparar el café a Berenice" | mail -s 'Café' [email protected]
    exit
elif [ $FECHA -eq 54 ]
then
    echo "Esta semana le toca preparar el café a Nizaá" | mail -s 'Café' [email protected]
    exit
fi

which will run, thanks to crontab, every Monday at 7 am. The actual Bash script has more lines, because there are more people involved. I think it works. But... is there a way to write this script with fewer lines? I was thinking: two variables, one for the person that will make the coffee and another for the date, and a way to relate those variables.
I think the case/esac construct fits well here.

#!/bin/bash

case "`date +%j`" in
    40) name=Osvaldo ;;
    47) name=Berenice ;;
    54) name=Nizaá ;;
    *) exit ;;
esac

echo "Esta semana le toca preparar el café a ${name}" \
    | mail -s 'Café' [email protected]

Note: if the same person needs to make coffee several times, you can aggregate tests with |:

case "`date +%j`" in
    12|23|40|49) name=Osvaldo ;;
    10|19|30|47) name=Berenice ;;
...
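For comparison, a sketch of the same schedule using a Bash 4 associative array; turno and doy are made-up names, and note that date +%j is zero-padded (e.g. 040), so the key has to be normalised:

#!/bin/bash
declare -A turno=( [40]=Osvaldo [47]=Berenice [54]=Nizaá )
doy=$((10#$(date +%j)))   # force base 10 so "040" becomes 40
name=${turno[$doy]}
[ -z "$name" ] && exit 0
echo "Esta semana le toca preparar el café a ${name}" \
    | mail -s 'Café' [email protected]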
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/182802", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38060/" ] }
182,807
I am using RHEL 6.4. I want to run my newt-based application from rc.local so that it runs as the only application. But it's not working. How can I make this application run before login to the system?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/182807", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101804/" ] }
182,869
I know that I can use either as the first line of scripts to invoke the desired shell. Would #!/bin/sh be recommended if compatibility with all unix systems is an absolute requirement? In my case the only OSes I care about are Ubuntu (Debian) and OS X. Given that, could I use #!/bin/bash and be assured it would work on both systems? Would this also make it easier to use scripts with more modern and clearer syntax for commands? Does using #!/bin/sh also relate to using POSIX?
For starters, if you can make the assumption that Bash is preinstalled (which, to my knowledge, is the case on all the systems you list), use the following hashbang to be compatible:

#!/usr/bin/env bash

This invokes whatever bash happens to be configured, no matter whether it's in /bin or /usr/local/bin. While across a wide range of systems (including AIX, Solaris, and several BSD flavors) bash has ended up in different locations, env has always ended up in /usr/bin/env. The trick, however, is not mine but from the author of the Bash Cookbook.

Anyway, yes, Bash would allow you to use some "modern" features that make your life easier. For example, the double brackets:

[[ -f "/etc/debian_version" ]] && echo "This is a Debian flavor"

whereas in traditional shell dialects you'd have to resort to:

test -f "/etc/debian_version" && echo "This is a Debian flavor"

But the best thing about the double brackets is that they allow regular expressions for matching. The Bash Hackers Wiki will give you many tricks in that direction.

You can also use quite convenient expressions like $((2**10)), or other arithmetic expressions, inline with the $((expression)) syntax. Using backticks for subshells is fine, albeit a bit outdated. But the nesting capabilities of $(command ...) invocations are way more convenient, as you won't have to escape many things at different subshell levels.

These are but a few things Bash gives you over the traditional common POSIX sh syntax. But if you want more power on the shell (not just in scripts), also have a look at zsh.
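A tiny illustration of that nesting point, with placeholder paths:

# backticks must be escaped when nested:
outer=`basename \`dirname /tmp/a/b\``
# $( ) nests without any escaping:
outer=$(basename $(dirname /tmp/a/b))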
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/182869", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10043/" ] }
182,882
(See Use #!/bin/sh or #!/bin/bash for Ubuntu-OSX compatibility and ease of use & POSIX.) If I want my scripts to use the bash shell, does using the .bash extension actually invoke bash, or does it depend on system config / the first shebang line? If both were in effect but different, which would have precedence? I'm not sure whether to end my scripts with .sh to just indicate "shell script" and then have the first line select the bash shell (e.g. #!/usr/bin/env bash), or whether to just end them with .bash (as well as the line 1 setting). I want bash to be invoked.
Does using the .bash extension actually invoke bash, or does it depend on system config / the first shebang line?

If you do not invoke an interpreter explicitly, then the interpreter is determined by the shebang used in the script. If you invoke an interpreter explicitly, then the interpreter doesn't care what extension you give your script. However, the extension exists to make it very obvious to others what kind of script it is.

[sreeraj@server ~]$ cat ./ext.py
#!/bin/bash
echo "Hi. I am a bash script"

See, the .py extension on the bash script does not make it a python script:

[sreeraj@server ~]$ python ./ext.py
  File "./ext.py", line 2
    echo "Hi. I am a bash script"
                                ^
SyntaxError: invalid syntax

It's always a bash script:

[sreeraj@server ~]$ ./ext.py
Hi. I am a bash script
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/182882", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10043/" ] }
182,885
We have a document which contains lines, and we have to find if [a|b|c] appears at least once in every line, no matter in which order. For example:

Input:
abcbcacabhhfdhdhfabjfdjdjffacjfdjdfjdffhfhfhfabcjdfjdjfkahfhfbkjfjdjffc

Desired output (the fourth line is absent since it only contains a and b but no c):
abcbcacabfhfhfhfabcjdfjdjfkahfhfbkjfjdjffc

We are using Linux.
Pipe it: grep a file | grep b | grep c
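If you'd rather avoid three processes, an equivalent single-pass alternative is awk, where every pattern must match the same line:

awk '/a/ && /b/ && /c/' file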
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/182885", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101848/" ] }
182,925
Whenever I open any software through the terminal I get the following errors, and eventually the software opens:

dconf-WARNING **: failed to commit changes to dconf: The connection is closed
(gedit:3609): dconf-WARNING **: failed to commit changes to dconf: The connection is closed
(gedit:3609): dconf-WARNING **: failed to commit changes to dconf: The connection is closed
Error creating proxy: The connection is closed (g-io-error-quark, 18)
Error creating proxy: The connection is closed (g-io-error-quark, 18)
Error creating proxy: The connection is closed (g-io-error-quark, 18)
Error creating proxy: The connection is closed (g-io-error-quark, 18)
Error creating proxy: The connection is closed (g-io-error-quark, 18)

What can the possible issue be?
I had the same problem. In my case I was running "sudo gedit" from a user account; when it tried to save dconf changes it realized that the user was not root, and thus it raised those errors. I solved it by running gedit from an actual root login shell:

sudo -i
gedit &

where sudo -i logs you in as root.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/182925", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101866/" ] }
182,927
I have been asked to set up a shared directory for a colleague on a server I manage. I created an account for him on that server, set up a Samba password with smbpasswd, created a directory and set it up in the smb.conf file, which I copy below:

[global]
workgroup = OURWORKGROUP
server string = Samba Server %v
netbios name = server_i_run
security = user
map to guest = bad user
name resolve order = bcast lmhosts host wins
dns proxy = no

[coworkerguy]
path = /samba/coworkerguy
valid users = coworkerguy
guest ok = no
writable = yes
browsable = yes

Now I have been asked to limit this space to 2 GB. I have looked online for ideas but I can't find anything recent; setting up disk quotas is apparently one of the most popular solutions. I admit I'm not that confident doing that, and furthermore it often comes up that I would have to reboot in single-user mode - unless I misunderstood something. That is not possible, as I can only ssh remotely to that server. Are there other techniques I could use? If not, could someone point me to an idiot-proof guide?
My solution is not the best, I know, but it works ;-).

EDIT: Please read my other answer as well; this answer is an evil hack!

Create a 2 GB file with dd, format the file (e.g. as ext4), mount it, add it to fstab and use that as the share.

$ dd if=/dev/zero of=filename bs=1024 count=2M
$ sudo mkfs.ext4 filename
$ cat /etc/fstab
/path/to/filename /mount/point ext4 defaults,users

Now you point the share to /mount/point (or wherever you chose to mount it), so

path = /samba/coworkerguy

becomes

path = /mount/point

In UNIX, everything is a file.
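One step the recipe glosses over: the image has to be mounted before Samba can serve it. A sketch, assuming the paths above (older systems may need the loop option spelled out):

sudo mkdir -p /mount/point
sudo mount -o loop /path/to/filename /mount/point
# or, once the fstab line is in place:
sudo mount -a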
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/182927", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82913/" ] }
182,931
I have got this command:

sed '/./=' abc.txt | sed '/./N; s/\n/, /' >> as.dat

The source file has 3 rows, like below:

a
b
c

When I use the above command it gives me a result like this:

1, a
2, b
3, c

but I would like to have the output of the command like this:

a, 1
b, 2
c, 3
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/182931", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101872/" ] }
182,959
$ sudo ufw enable
Firewall is active and enabled on system startup
$ sudo ufw status
Status: active

But after I restart the system and run sudo ufw status, I get the message:

Status: inactive

How can I solve this problem? By the way, my /etc/ufw/ufw.conf does have ENABLED=yes.
It would have been very useful if you had mentioned at least which distribution or which version of the package you are using. However, if you are running systemd, try this:

sudo systemctl enable ufw

This should enable autostart of the service at boot. Please be sure to enable and test the service configuration beforehand, so you don't fail at booting the system the next time.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/182959", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101883/" ] }
182,967
I have a system with two NICs on it. This machine, and a few accompanying devices, will be moved and attached to different LANs, or sometimes it'll be using dial-up.

eth0:
 - 10.x.x.x address space
 - no internet gateway
 - only a few devices

eth1 (when used):
 - 172.16.x.x or 192.168.x.x or other address spaces
 - access to the gateway from LAN to internet

ppp0 (when used):
 - internet access through dialup using KPPP

I'm using ifconfig to bring interfaces up or down (other than with ppp0, which is handled by KPPP). If I bring up eth1 first, it gets an address from its DHCP server and gets the gateway, and that is added to routing, so there's no trouble reaching the LAN and the internet. If I bring up eth0 first or second, it gets its address and sets the default gateway to within its address space (in the 10.x.x.x range). If I bring up eth0 first and eth1 second, the default gateway is still kept within the 10.x.x.x range. So no matter what I do, eth0 will override eth1 and "claim" the gateway in the routing. Is there some way to either prevent eth0 from claiming the gateway, or to make sure eth1 (if brought up 2nd) uses its gateway? Or can I somehow prioritize a ranking of which interface's gateway should be used over the others? I basically want to make sure eth1's default address space gateway is used if it's active, and if not, then ppp0's default gateway is used. I'd like to be able to prevent eth0 from ever having the default gateway.
I faced a similar problem on Raspbian (I suppose the solution below will be applicable to Debian as well). Raspberry Pi 3 has 2 NICs integrated: Wi-Fi and Ethernet. I use both of them, as wlan0 and eth0, respectively.

wlan0 is connected to my home Wi-Fi network, and internet access comes through this interface. It gets its settings via DHCP from my home router. eth0 is connected directly to my Windows PC and has a static IP assigned. No internet access via eth0 was available, since I didn't configure it on my Windows PC, as I mentioned above.

In Raspbian, the dhcpcd daemon is responsible for configuring network interfaces. In order to set a static IP for the eth0 interface, the following lines were added to the end of /etc/dhcpcd.conf:

interface eth0
static ip_address=192.168.2.2/24
static routers=192.168.2.1
static domain_name_servers=192.168.2.1

With these settings, dhcpcd created 2 default routes, and the route via eth0 had higher priority than the one via wlan0:

pi@raspberrypi:~ $ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         192.168.2.1     0.0.0.0         UG    202    0        0 eth0
default         192.168.1.254   0.0.0.0         UG    303    0        0 wlan0
192.168.1.0     *               255.255.255.0   U     303    0        0 wlan0
192.168.2.0     *               255.255.255.0   U     202    0        0 eth0

So I had no internet access, because the system tried to route it via eth0, which had no internet access.

To solve the problem, I used the nogateway option in /etc/dhcpcd.conf for the eth0 interface. The eth0-specific configuration then looked like this:

interface eth0
static ip_address=192.168.2.2/24
static routers=192.168.2.1
static domain_name_servers=192.168.2.1
nogateway

After saving this configuration and rebooting, there was no default route via eth0:

pi@raspberrypi:~ $ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         192.168.1.254   0.0.0.0         UG    303    0        0 wlan0
192.168.1.0     *               255.255.255.0   U     303    0        0 wlan0
192.168.2.0     *               255.255.255.0   U     202    0        0 eth0

Internet access appeared, and the problem was solved.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/182967", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101743/" ] }
182,971
This question is about reading and writing on a file descriptor. See the following example:

#!/bin/sh

file='somefile'

# open fd 3 rw
exec 3<> "$file"

# write something to fd 3
printf "%s\n%s\n" "foo" "bar" >&3

# reading from fd 3 works
cat "/proc/$$/fd/3"

# only works if the printf line is removed
cat <&3

exit 0

This script outputs:

foo
bar

Expected output:

foo
bar
foo
bar

Opening and writing to the file descriptor succeeds. So does reading via /proc/$$/fd/3. But this is not portable. cat <&3 doesn't output anything. However, it works when the file descriptor is not being written to (e.g. if you remove the printf line).

Why doesn't cat <&3 work, and how can one read the entire contents from a file descriptor portably (in POSIX shell)?
cat <&3 does exactly what it's supposed to do, namely read from the file until it reaches the end of the file. When you call it, the file position on the file descriptor is where you last left it, namely, at the end of the file. (There's a single file position, not separate ones for reading and for writing.) cat /proc/$$/fd/3 doesn't do the same thing as cat <&3 : it opens the same file on a different descriptor. Since each file descriptor has its own position, and the position is set to 0 when opening a file for reading, this command prints the whole file and doesn't affect the script. If you want to read back what you wrote, you need to either reopen the file or rewind the file descriptor (i.e. set its position to 0). There's no built-in way to do either in a POSIX shell nor in most sh implementations ( there is one in ksh93 ). There is only one utility that can seek: dd , but it can only seek forward. (There are other utilities that may skip forward but that doesn't help.) I think the only portable solution is to remember the file name and open it as many times as necessary. Note that if the file isn't a regular file, you might not be able to seek backwards anyway.
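Given that, a sketch of the "reopen" route in plain POSIX sh, reusing the remembered file name (this resets the read position to 0):

# close fd 3, then reopen it on the same file
exec 3>&-
exec 3<> "$file"
cat <&3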
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/182971", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12779/" ] }
183,063
I'm asking this question cautiously because I don't want to get this wrong. I have a program_name.rpm file saved locally on my server (CentOS 6.5). I have installed it previously just by navigating to it and using yum install program_name.rpm, which worked fine, but it didn't give me any option to specify where it is installed. Is it possible to install this rpm to /opt/some_directory instead of its default install location?
Too bad you accepted that rpm answer. That will lead to warnings from subsequent executions of yum, such as:

Warning: RPMDB altered outside of yum

Instead you should use yum localinstall, per section 13 of the Yum and RPM Tricks page of the CentOS wiki: https://wiki.centos.org/TipsAndTricks/YumAndRPM#head-3c061f4a180e5bc90b7f599c4e0aebdb2d5fc7f6

You can use the --installroot option to specify a different installation root.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/183063", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89568/" ] }
183,089
I have a 300-line file with a ^@ character between each pair of characters in the file. (I cannot post the entire contents for security reasons, so I am pasting only the first line.)

[mercury@app01 ftp_logs]$ cat cl.txt
2015-01-22 03:00:01; local;

Now, when I open the file in vi, I see the same contents as follows:

2^@0^@1^@5^@-^@0^@1^@-^@2^@2^@ ^@0^@3^@:^@0^@0^@:^@0^@1^@;^@ ^@l^@o^@c^@a^@l^@;^@

Since cat wasn't displaying the ^@ characters, I naturally thought grepping for a certain string would work on the cat output, but surprisingly it doesn't:

[mercury@app01 ftp_logs]$ cat cl.txt
2015-01-22 03:00:01; local;
[mercury@app01 ftp_logs]$ cat cl.txt | grep local
[mercury@app01 ftp_logs]$

After replacing the null bytes with sed, the file is readable in vi and grep returns the result:

[mercury@app01 ftp_logs]$ sed -i 's/\x0//g' cl.txt
[mercury@app01 ftp_logs]$ cat cl.txt | grep local
2015-01-22 03:00:01; local;
[mercury@app01 ftp_logs]$

Questions:

1) Why didn't grep work before replacing the null bytes, given that the null bytes weren't being displayed? Does it mean grep saw the ^@ characters even though they weren't displayed in the terminal?

2) This makes me wonder whether it is recommended to use cat -v or vi to read files on production servers, since cat seems to be good at hiding stuff.

3) The file in question is an auto-generated file from a Windows machine. Under what circumstances does the ^@ find its way into a file?
The format of the file is probably little-endian UTF-16. Some apps on Windows seem to default to this, and it causes a lot of portability problems.

vi represents ASCII NUL (numerically zero) bytes as '^@' (control-At). You can actually enter zero-valued bytes in vim with the control-shift-@ chord.

grep must see the ASCII NUL bytes as they are, rather than interpreting the file as UTF-16 and seeing the Unicode code points for '2' or '0' or whatever. I don't see an option in the GNU grep man page for making it deal with UTF-anything.

cat doesn't hide the ASCII NUL bytes; it passes them through. It's the terminal emulator that decides how to render them, and whatever terminal emulator you're using is ignoring them. If you use cat cl.txt | od -x, or better, cat cl.txt | xxd, you'll see the ASCII NUL bytes in the output of cat. If you see something like 'fffe' or 'feff' as the first two bytes of the file, those are the "byte order mark" promulgated by Microsoft against all common sense.

I'm not sure what to recommend to transliterate UTF-16 to ASCII or UTF-8; iconv maybe, but I've never used it.
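For the record, a hedged sketch of the iconv route, assuming the file really is UTF-16 with a byte order mark (check the first bytes with xxd first):

xxd cl.txt | head -1                            # fffe at the start suggests UTF-16LE
iconv -f UTF-16 -t UTF-8 cl.txt > cl.utf8.txt   # UTF-16 honours the BOM
grep local cl.utf8.txt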
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183089", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68296/" ] }
183,095
I have a small server with Ubuntu 10.04 on it; I am manipulating this server from another computer via ssh, and I tried to use nfs on it to share files. That mostly works, until one of the clients unmounts and I want to shut down nfs-kernel-server on the server. While the stopping seems proper:

$ sudo service nfs-kernel-server stop
 * Stopping NFS kernel daemon                           [ OK ]
 * Unexporting directories for NFS kernel daemon...     [ OK ]

... I do get something like this in the log:

Feb  5 11:50:17 user init: statd main process (3806) killed by KILL signal
Feb  5 11:50:17 user init: statd main process ended, respawning
Feb  5 11:50:17 user init: idmapd main process (3808) killed by KILL signal
Feb  5 11:50:17 user init: idmapd main process ended, respawning
Feb  5 11:50:17 user statd-pre-start: local-filesystems started
Feb  5 11:50:17 user sm-notify[3815]: Already notifying clients; Exiting!
Feb  5 11:50:17 user rpc.statd[3830]: Version 1.1.6 Starting
Feb  5 11:50:17 user rpc.statd[3830]: Flags:

... meaning that some nfs-related processes didn't care about me saying stop, and respawned again. If at this point I try to do sudo service nfs-kernel-server start (again via ssh), that command freezes, and in /var/log/syslog I get this:

Feb  5 11:43:55 user mountd[2045]: authenticated mount request from 192.168.0.2:1005 for /media/disk (/media/disk)
Feb  5 11:45:19 user mountd[2045]: Caught signal 15, un-registering and exiting.
Feb  5 11:45:19 user kernel: [27428.148368] nfsd: last server has exited, flushing export cache
Feb  5 11:45:19 user kernel: [27428.148431] BUG: Dentry d0bc8b28{i=1f6,n=} still in use (1) [unmount of vfat sdd8]
Feb  5 11:45:19 user kernel: [27428.148473] ------------[ cut here ]------------
Feb  5 11:45:19 user kernel: [27428.148481] kernel BUG at /build/buildd/linux-2.6.32/fs/dcache.c:670!
Feb  5 11:45:19 user kernel: [27428.148491] invalid opcode: 0000 [#1] SMP
Feb  5 11:45:19 user kernel: [27428.148501] last sysfs file: /sys/devices/system/cpu/cpu1/cpufreq/scaling_cur_freq
...
Feb  5 11:45:19 user kernel: [27428.148807] Call Trace:
Feb  5 11:45:19 user kernel: [27428.148824]  [<c024c780>] ? vfs_quota_off+0x0/0x20
Feb  5 11:45:19 user kernel: [27428.148838]  [<c021d4fc>] ? shrink_dcache_for_umount+0x3c/0x50
Feb  5 11:45:19 user kernel: [27428.148852]  [<c020d090>] ? generic_shutdown_super+0x20/0xe0
...
Feb  5 11:45:19 user kernel: [27428.149511] EIP: [<c021d4a9>] shrink_dcache_for_umount_subtree+0x249/0x260 SS:ESP 0068:ccc6de6c
Feb  5 11:45:19 user kernel: [27428.149631] ---[ end trace 6198103bb62887ac ]---
Feb  5 11:49:53 user init: idmapd main process (838) killed by TERM signal
Feb  5 11:49:53 user init: idmapd main process ended, respawning
Feb  5 11:49:53 user rpc.statd[769]: Caught signal 15, un-registering and exiting.
Feb  5 11:49:53 user init: statd main process ended, respawning
Feb  5 11:49:53 user statd-pre-start: local-filesystems started
Feb  5 11:49:53 user sm-notify[3790]: Already notifying clients; Exiting!
Feb  5 11:49:53 user rpc.statd[3806]: Version 1.1.6 Starting
Feb  5 11:49:53 user rpc.statd[3806]: Flags:

Now, the thing is this - after this bug happens, the server's ssh server is (for some reason) usually still "live", so I can log in via ssh again, and try to close processes (and realize it is impossible to kill /usr/sbin/rpc.nfsd 8, which is the one hanging).
BUT - if at this point, I try to issue a reboot via sudo shutdown -r now && exit from ssh, then this server PC will start the reboot process - but will not complete it; it will drop to a terminal, dump some error messages, and stay there :( The problem is - the server PC is in a really difficult to access location, and having to go there to do Alt+SysRq + REISUB to properly reboot (if the kernel reacts to that key combo; else it's hard powerdown) is really difficult. So my question is - is there some "hardcore reboot" command in Linux, that will more-less "guarantee" that the PC will reboot (and not just hang/freeze), even if it has encountered a kernel bug - and which I could issue via ssh ? Something that would be the equivalent of a hard powerdown (i.e. turning of the power by e.g. holding the power button for 10+ seconds) and hard powerup?
To ensure that the system will reboot no matter what, I always do this sequence:

# echo s > /proc/sysrq-trigger
# echo u > /proc/sysrq-trigger
# echo s > /proc/sysrq-trigger
# echo b > /proc/sysrq-trigger

This requests the kernel to do:

1. an emergency sync of the block devices
2. a remount of all filesystems read-only
3. again a sync
4. a forced immediate boot; you can also use o for poweroff.

See e.g. here for an explanation of this feature.
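Note that this only works if the kernel accepts SysRq requests; on distributions that disable or restrict the feature you may need to enable it first (1 enables all SysRq functions):

echo 1 > /proc/sys/kernel/sysrq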
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183095", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8069/" ] }
183,125
I have the following bash script. From what I understand, >> is used to append the output of a command to an existing file instead of overwriting it, but what is it doing in this case? This script is calling some exe files to convert from one format to another. There are many years for each file, so it loops through each file by looking at the filename. Also, when I run this script I get "ambiguous redirect".

#!/bin/bash

source $HOME/.bashrc

jobout=${1}
joberr=${2}

# Set some paths and prefixes
yr_bgn=2000
yr_end=2000
yr=${yr_bgn}
pth_data='/mnt/'
pth_rst='/mnt/'

while [ ${yr} -le ${yr_end} ]
do
    ./executable1 ${pth_data}file${yr}-${yr}.nc ${yr} ${pth_rst} 1>> ${jobout} 2>> ${joberr}
    ./executable2 ${pth_data}file${yr}-${yr}.nc ${yr} ${pth_rst} 1>> ${jobout} 2>> ${joberr}
    ./executable3 ${pth_data}file${yr}-${yr}.nc ${yr} ${pth_rst} 1>> ${jobout} 2>> ${joberr}
    let yr=${yr}+1
done
1>> and 2>> are redirections for specific file descriptors, in this case standard output (file descriptor 1) and standard error (file descriptor 2). So the script is redirecting all "standard" messages to ${jobout} and all error messages to ${joberr}. Using >> in both cases means all messages are appended to the respective files.

Note that ${jobout} and ${joberr} take their values from the two command-line parameters to the script (${1} and ${2}), so you need to specify the files you want to use to store the messages. If the parameters aren't given, the script will produce the "ambiguous redirect" error message you've seen; the script should really check whether the parameters have been provided and produce an appropriate error message otherwise, something like

if [ -z "$1" -o -z "$2" ]; then
    echo "Log files for standard and error messages must be specified"
    echo "${0} msgfile errfile"
    exit 1
fi

at the start of the script.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/183125", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102018/" ] }
183,178
I want to know the total amount of time that a series of processes would take on my computer, to decide whether I should run them there or on a more powerful computer. So, I am forecasting the running time of each command. The output looks like:

process1 00:03:34
process2 00:00:35
process3 00:12:34

How can I sum the second column to obtain a total running time? I could try piping each line through awk '{sum += $2 } END { print sum }', but this makes no sense as the values are not plain numbers.
#!/bin/sh

EPOCH='jan 1 1970'
sum=0
for i in 00:03:34 00:00:35 00:12:34
do
    sum="$(date -u -d "$EPOCH $i" +%s) + $sum"
done
echo $sum | bc

date -u -d "jan 1 1970" +%s gives 0, so date -u -d "jan 1 1970 00:03:34" +%s gives 214 secs.
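An alternative single-pass sketch with awk that consumes the two-column "process time" format from the question directly (times.txt is a placeholder file name):

awk '{ split($2, t, ":"); s += t[1]*3600 + t[2]*60 + t[3] }
     END { printf "%02d:%02d:%02d\n", s/3600, (s%3600)/60, s%60 }' times.txt

For the three sample lines this prints 00:16:43.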
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183178", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
183,183
In a small bash script I'm running, I am attempting to chown a new directory that is created. I've added:

sudo chown $USER:$USER /var/www/$sitename
sudo chmod 775 /var/www/$sitename

after the line where I mkdir (sudo mkdir /var/www/$sitename). For some reason the chown is not executing. I can execute it manually, but when written in the file it doesn't work. I have noticed that "chown" is not highlighted in the same color as "mkdir" and "chmod", but I can't figure out my problem. Why doesn't chown work here? Is it an issue with $USER:$USER?

EDIT: Here is the full script. How would I chown the file to whichever non-root user executed the script?

#!/bin/sh
#!/bin/bash
# New Site
cd /etc/apache2/sites-available/
echo "New site name (test.my):"
read sitename
echo "<VirtualHost *:80>
    ServerAdmin admin@$sitename
    ServerName $sitename
    ServerAlias $sitename
    DocumentRoot /var/www/$sitename
    <Directory />
        Options FollowSymLinks
        AllowOverride All
    </Directory>
    <Directory /var/www/$sitename>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        allow from all
    </Directory>
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>" > $sitename.conf
sudo mkdir /var/www/$sitename
sudo chown $USER:$USER /var/www/$sitename
echo USER is $USER
sudo chmod 775 /var/www/$sitename
sudo a2ensite $sitename.conf
sudo apachectl restart
echo "New site created"
If for some reason, $USER is not set, you can use the id command to obtain the identity of the real user. So the first time you use the $USER variable, you can use the shell expansion to supply a default value. Change the chown line in your script to: sudo chown "${USER:=$(/usr/bin/id -run)}:$USER" "/var/www/$sitename" If USER is empty or unset when this is run, bash will set the USER variable to the output of /usr/bin/id -run .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183183", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102059/" ] }
183,204
I have a bash script in which I have specified, via trap, that a function will be called for any (catchable) signal:

typeset -i sig=1
while (( sig < 65 )); do
    trap myfunc $sig
    let sig=sig+1
done

Is there any way my script can determine which signal has been caught?
trap "signum=${sig};myfunc" "$sig"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183204", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56157/" ] }
183,250
I'm wondering why I can't set ionice for cp. I run this command:

ionice -c2 -n0 cp

and I get:

cp: missing file argument
Try `cp --help' for more information

Why?
If I understand you correctly, you are trying to change the ionice level for all subsequent calls to cp . If so, you are misunderstanding the way ionice works (and nice , for that matter). The ionice command refers to a process , not to a binary. You can use it either to change the ionice level of a currently running process, by giving it the PID as an argument, or you can use it when you start a process. So, either you keep watch on the machine and manually set a new ionice level for the problematic processes, like this: ionice -c2 -n0 -p 12345 # replace 12345 with the PID you want to act nicer or you change the scripts you're working with so that they use ionice -c2 -n0 cp from to instead of just cp from to . There's still no guarantee that things will get better. IO is more complicated than you might think, and especially so if you're working with virtual machines.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183250", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102994/" ] }
183,262
I'm trying to use cURL to automate some processes that we usually do using a website. I was able to log in to the website using curl and the following command:

curl -k -v -i --user "[user]:[password]" -D cookiejar.txt https://link/to/home/page

However, when I'm trying to use the generated cookiejar.txt file for subsequent calls, I'm not getting past the authorization. The browser sends the following data to the server:

GET /[my other page] HTTP/1.1
Host: [my host]
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Cookie: JSESSIONID=[my session id]
Authorization: Basic [my encrypted string]
Connection: keep-alive

So I changed my second cURL call to something like this, to be sure that all these parameters are sent as well:

curl -i -X GET -k -v \
-b cookiejar.txt \
-H "Authorization: Basic [my encrypted string]" \
-H "Host: [my host]" \
-H "User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0" \
-H "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" \
-H "Content-Type: application/x-www-form-urlencoded" \
-H "Accept-Language: en-US,en;q=0.5" \
-H "Connection: Keep-Alive" \
https://[my other page]

Unfortunately this doesn't work. If I omit the Authorization header, I get a 401 error. If I include it in my cURL request, I get the login page (with the 200 OK response). There's no error in the console to give me at least a hint about what the problem is. I appreciate any ideas to help me get past this issue.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183262", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102113/" ] }
183,264
This is a very noobish question, I believe, and I've tried everything to avoid asking it here, but I didn't find an answer anywhere on the whole internet. Go on and downvote me, but please tell me how to keep a long-running bash script executing when it is aborted via CTRL+X or CTRL+C. I want to execute the command $ sh myscript.sh and free up my terminal with CTRL+X / CTRL+C; the script should continue to run for the next 10-20 min as required. I want to execute a php script like $ php /path/to/myphpscript.php and free up my terminal with CTRL+X / CTRL+C; the script should continue to run for the next 2 hours as required. How can I do that? A cronjob is not my solution.
Once you terminate a job with CTRL + C it is terminated, and you can't tell a dead job to continue and pick up where it was. The correct term is to run a job in the background, which you can do beforehand: ./script & You can use that in combination with nohup to make the process immune to hang-ups, so it will continue to run even if you log out from your bash session: nohup ./script & If the script is already running you can suspend a foreground job with CTRL + Z and instruct it to continue in the background with bg. There's much more in the chapter on job control from the bash manual.
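A short illustrative session of the suspend-and-background approach (output abridged; the job number may differ, and disown is bash-specific): $ sh myscript.sh
^Z
[1]+  Stopped                 sh myscript.sh
$ bg %1
$ disown %1    # optional: keep it running even after this shell exits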
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183264", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68350/" ] }
183,279
In my understanding, an awk array is something like a python dict. So I wrote the code below to explore it: awk '{my_dict[$1] = $2} END { print my_dict}' zen And I got: awk: can't read value of my_dict; it's an array name. As the first column isn't a number, how could I read the total content of the array or traverse it?
You can loop over the array's keys and extract the corresponding values: awk '{my_dict[$1] = $2} END { for (key in my_dict) { print my_dict[key] } }' zen To get output similar to that you'd get with a Python dictionary, you can print the key as well: awk '{my_dict[$1] = $2} END { for (key in my_dict) { print key ": " my_dict[key] } }' zen This works regardless of the key type.
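Note that the iteration order of for (key in array) is unspecified in awk, so if you want deterministic output you can pipe the result through sort: awk '{my_dict[$1] = $2} END { for (key in my_dict) print key ": " my_dict[key] }' zen | sort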
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/183279", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74226/" ] }
183,295
I have a load of tools that are kept in the /opt directory. The tools are organised like this: /opt/toolname/tool.sh. My question is, how can I add the tools in my /opt folder to my path, so I can run them from any directory in a terminal? I have managed to do this with some tools by creating symlinks in /usr/bin, but with over 200 tools it is a very tedious way of doing things. Is there a better way to do this?
The only correct way is to make links in /usr/bin or /usr/local/bin as you described, because those /opt/toolname folders normally contain many other files, not just executables, and putting them all on the search path would be grubby. In any case, adding /opt/*/ to the $PATH variable would not work. If you have a list of the full paths to those binaries, you could generate the links with a script.
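A minimal sketch of such a script, assuming every tool really lives at /opt/<name>/tool.sh as described in the question: #!/bin/sh
# Link each /opt/<name>/tool.sh to /usr/local/bin/<name>
for tool in /opt/*/tool.sh; do
    name=$(basename "$(dirname "$tool")")
    sudo ln -sf "$tool" "/usr/local/bin/$name"
done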
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/183295", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102134/" ] }
183,356
I know how to combine the result of different commands paste -t',' <(commanda) <(commandb) I know how to pipe the same input to different commands cat myfile | tee >(commanda) >(commandb) Now how to combine these commands? So that I can do cat myfile | tee >(commanda) >(commandb) | paste -t',' resulta resultb Say I have a file myfile: 1
2
3
4 I want to make a new file 1 4 2
2 3 4
3 2 6
4 1 8 I used cat myfile | tee >(tac) >(awk '{print $1*2}') | paste which gives me the result vertically, whereas I really want to paste them in horizontal order.
When you tee to multiple process substitutions, you're not guaranteed to get the output in any particular order, so you'd better stick with paste -d',' <(commanda < file) <(commandb < file) Assuming cat myfile stands for some expensive pipeline, I think you'll have to store the output, either in a file or a variable: output=$( some expensive pipeline )
paste -d',' <(commanda <<< "$output") <(commandb <<< "$output") Using your example: output=$( seq 4 )
paste -d' ' <(cat <<<"$output") <(tac <<<"$output") <(awk '$1*=2' <<<"$output") 1 4 2
2 3 4
3 2 6
4 1 8 Another thought: FIFOs, and a single pipeline mkfifo resulta resultb
seq 4 | tee >(tac > resulta) >(awk '$1*=2' > resultb) | paste -d ' ' - resulta resultb
rm resulta resultb 1 4 2
2 3 4
3 2 6
4 1 8
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183356", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102172/" ] }
183,361
I have a folder with several repositories inside. Is there any way I can run git branch or whatever git command inside each folder? $ ls
project1 project2 project3 project4 And I'd like to have some kind of output like the following $ command
project1 [master]
project2 [dev]
project3 [master]
project4 [master]
Try this. $1 should be the parent dir containing all of your repositories (or use "." for the current dir): #!/bin/bash
function git_branches()
{
    if [[ -z "$1" ]]; then
        echo "Usage: $FUNCNAME <dir>" >&2
        return 1
    fi
    if [[ ! -d "$1" ]]; then
        echo "Invalid dir specified: '${1}'"
        return 1
    fi
    # Subshell so we don't end up in a different dir than where we started.
    (
        cd "$1"
        for sub in *; do
            [[ -d "${sub}/.git" ]] || continue
            echo "$sub [$(cd "$sub"; git branch | grep '^\*' | cut -d' ' -f2)]"
        done
    )
} You can make this its own script (but replace $FUNCNAME with $0), or keep it inside a function and use it in your scripts.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183361", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102176/" ] }
183,370
I use tmux 1.8, so I have the built-in CTRL+b+z feature to zoom the active pane. The problem is that a zoomed pane looks the same as just one plain pane, so sometimes I forget whether the pane is zoomed. Is there a way to add an indication that I am currently in zoom mode? Also the same issue exists with horizontally split panes. It's hard to remember whether the border on the left corresponds to the active upper pane and vice versa. Can I make that more distinct? Maybe add horizontal borders, if that's possible?
At the same time as the zoom feature, the window flag with the same name Z was added, so this flag should appear in the status line next to the window title (you mention in a comment that you use some plugin/customization of tmux). In any case, you can query tmux using the list-panes command and the formats feature: tmux list-panes -F '#F' prints out all the window flags of the currently active pane. If Z is among the flags, the current pane is zoomed. Thus, the command tmux list-panes -F '#F' | grep -q Z will return 0 if the current pane is zoomed and return error 1 in case it isn't. This should allow you to add this indicator to your customized status line. From man tmux: FORMATS Certain commands accept the -F flag with a format argument. This is a string which controls the output format of the command. Replacement variables are enclosed in ‘#{’ and ‘}’, for example ‘#{session_name}’. The possible variables are listed in the table below, or the name of a tmux option may be used for an option's value. Some variables have a shorter alias such as ‘#S’, and ‘##’ is replaced by a single ‘#’. [...] Variable name    Alias    Replaced with
[...]
window_flags     #F       Window flags Looking at the source code (window.c, line 639f) shows that the complete list of flags is: #: window activity flag
!: window bell flag
~: window silence flag
*: current window flag
-: last window flag
Z: window zoomed flag
' ' (a space): no flags at all.
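One way to wire this into the status line, sketched under the assumption that your tmux supports #( ) shell commands in status strings; the helper script name and path are hypothetical: #!/bin/sh
# ~/bin/tmux-zoom-indicator.sh: print a marker when the active pane is zoomed
tmux list-panes -F '#F' | grep -q Z && echo '[ZOOMED]' and in ~/.tmux.conf: set -g status-right '#(~/bin/tmux-zoom-indicator.sh)'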
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/183370", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87396/" ] }
183,375
My screen is just too bright. How can I adjust the screen brightness? So far I tried the following: the "Brightness and lock" settings don't work; Fn + F6 or F7 doesn't work; this doesn't work, and this doesn't work either. My laptop is a Toshiba Satellite L745.
You can try the xrandr tool. First run xrandr --verbose and look for a line with a resolution, like LVDS1 connected 1024x600+0+0. The name of your display (LVDS1 in this example) is needed here. Now you are ready to set brightness: xrandr --output LVDS1 --brightness 0.4 xrandr sets software, not hardware brightness, so you can exceed both upper and lower limits: xrandr --output LVDS1 --brightness 1.7
xrandr --output LVDS1 --brightness -0.4   # negative value is also possible
xrandr --output LVDS1 --brightness 1
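If you want to step the brightness relative to the current value, a rough sketch (assumes the output is named LVDS1 and that bc is installed; xrandr --verbose reports the current Brightness value): #!/bin/sh
# Lower the software brightness by 0.1
cur=$(xrandr --verbose | awk '/Brightness/ {print $2; exit}')
new=$(echo "$cur - 0.1" | bc)
xrandr --output LVDS1 --brightness "$new"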
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183375", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102187/" ] }
183,434
I have a simple bash script that runs a series of checks ( ping , nslookup , etc) and then sends an email report with the output of that data. I'd like the email to include information on how long it took the entire script to run. Is there an easy way to collect that information?
I suggest taking a look at the bash variable SECONDS: SECONDS: Each time this parameter is referenced, the number of seconds since shell invocation is returned. If a value is assigned to SECONDS, the value returned upon subsequent references is the number of seconds since the assignment plus the value assigned. Thus you can simply print this variable at the end of the script. Alternatively, if your intention is to measure the time of only part of the program, then just set SECONDS=0 at the beginning of the measured block of commands, and at the end just use the value stored in this variable.
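A minimal sketch of how this could look in the reporting script (the subject and address are placeholders): #!/bin/bash
SECONDS=0
# ... run the ping / nslookup checks here ...
duration=$SECONDS
printf 'All checks finished in %d min %d s.\n' "$((duration / 60))" "$((duration % 60))" \
    | mail -s "Check report" "[email protected]"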
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/183434", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1822/" ] }
183,452
I received a zip file from a bank. I get the following error when I try to unzip it: unzip filename.zip
Archive: filename.zip
  skipping: SOME_STUFF.pdf    need PK compat. v6.1 (can do v4.6) The file command returns Zip archive data for this file. There are a fair number of threads containing this error message, but the only concrete suggestion they have is to use 7z x or 7za x from the p7zip-full package. These fail with the error: Unsupported Method
Sub items Errors: 1 I'm using Debian wheezy amd64. I don't see significant updates of the unzip or 7za packages in testing/unstable though. I'd appreciate suggestions of how to unzip this file, and more generally, what does the error message PK compat. v6.1 (can do v4.6) mean? For a widely used utility, zip does not have much documentation available about it. The README in the Debian sources points to http://www.info-zip.org/pub/infozip/ which lists a release dated 29th April 2009 for UnZip 6.0. Here is the version output for the unzip binary on my system: unzip -v
UnZip 6.00 of 20 April 2009, by Debian. Original by Info-ZIP.
Latest sources and executables are at ftp://ftp.info-zip.org/pub/infozip/ ;
see ftp://ftp.info-zip.org/pub/infozip/UnZip.html for other sites.
Compiled with gcc 4.7.2 for Unix (Linux ELF) on Feb 3 2015.
UnZip special compilation options:
  ACORN_FTYPE_NFS
  COPYRIGHT_CLEAN (PKZIP 0.9x unreducing method not supported)
  SET_DIR_ATTRIB
  SYMLINKS (symbolic links supported, if RTL and file system permit)
  TIMESTAMP
  UNIXBACKUP
  USE_EF_UT_TIME
  USE_UNSHRINK (PKZIP/Zip 1.x unshrinking method supported)
  USE_DEFLATE64 (PKZIP 4.x Deflate64(tm) supported)
  UNICODE_SUPPORT [wide-chars, char coding: UTF-8] (handle UTF-8 paths)
  LARGE_FILE_SUPPORT (large files over 2 GiB supported)
  ZIP64_SUPPORT (archives using Zip64 for large files supported)
  USE_BZIP2 (PKZIP 4.6+, using bzip2 lib version 1.0.6, 6-Sept-2010)
  VMS_TEXT_CONV
  WILD_STOP_AT_DIR
  [decryption, version 2.11 of 05 Jan 2007]
UnZip and ZipInfo environment options:
  UNZIP: [none]
  UNZIPOPT: [none]
  ZIPINFO: [none]
  ZIPINFOOPT: [none] dpkg reports the package version as 6.0-8+deb7u2. The output of zipinfo is: zipinfo filename.zip
Archive: filename.zip
Zip file size: 6880 bytes, number of entries: 1
-rw-a-- 6.4 fat 10132 Bx defN 15-Feb-06 16:24 SOME_STUFF.pdf
1 file, 10132 bytes uncompressed, 6568 bytes compressed: 35.2%
Origin of the error The PK in the error stands for Phil Katz, the inventor of the original PKZIP format. The zip utility has not kept up with the capabilities of the pkzip-derived commercial software, particularly the certificate storage that banks like to include in their ZIP files. Wikipedia gives an overview of the development of the format, but the Unix zip utilities don't implement the changes made after the year 2002. You might have to buy the PKWARE commercial version for Linux to uncompress this. The man page for zip has the following to say for itself and unzip: A companion program (unzip(1)) unpacks zip archives. The zip and unzip(1) programs can work with archives produced by PKZIP (supporting most PKZIP features up to PKZIP version 4.6), and PKZIP and PKUNZIP can work with archives produced by zip (with some exceptions, notably streamed archives, but recent changes in the zip file standard may facilitate better compatibility). zip version 3.0 is compatible with PKZIP 2.04 and also supports the Zip64 extensions of PKZIP 4.5 which allow archives as well as files to exceed the previous 2 GB limit (4 GB in some cases). zip also now supports bzip2 compression if the bzip2 library is included when zip is compiled. Note that PKUNZIP 1.10 cannot extract files produced by PKZIP 2.04 or zip 3.0. You must use PKUNZIP 2.04g or unzip 5.0p1 (or later versions) to extract them. Solution Although zip cannot do the job, there are other tools that can. You mention the 7zip utility and the Linux/Unix commandline version of 7-Zip that, among others, can read and write ZIP format. It claims that if 7-Zip cannot read a zip file, then in 99% of the cases the file is broken. 7-Zip utilities should be able to read your file, so either it is broken or else yours is in the 1% (for which I found no further details). 7-zip on Linux comes in various executables with different format support. The most basic (7zr) doesn't support ZIP; you should use at least 7za or the full-fledged 7z: 7za x filename.zip Different Linux versions package 7za / 7z in packages with different names. The easiest (as so often) is installing on Solus: sudo eopkg install p7zip On Debian-derived Linux versions, the package p7zip only installs the base 7z that doesn't support ZIP. This split-up has caused some problems, and installing p7zip-full doesn't do what it says; sometimes you also have to install p7zip-rar. On my Linux Mint system I needed to do: sudo apt-get install p7zip-full p7zip-rar On RedHat/CentOS you need to have the EPEL repository enabled. E.g. on CentOS 7 I needed to do: sudo yum install epel-release
sudo yum --enablerepo=epel install p7zip
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/183452", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4671/" ] }
183,457
Can I set up the less program to exit when the ESC key is pressed?
To bind Esc + Esc to quit with lesskey, do the following. Create a ~/.lesskey file with the line: \e\e quit Run lesskey. This will create a binary ~/.less file used by less. Use less as usual; Esc + Esc will now quit. If you no longer want your bindings, you can remove the ~/.less file. For more details, see man lesskey or lesskey.nro in the less package source (Debian: details of source package less in wheezy). SYNOPSIS
    lesskey [-o output] [--] [input] The input file is a text file which describes the key bindings. If the input file is "-", standard input is read. If no input file is specified, a standard filename is used as the name of the input file, which depends on the system being used: on Unix systems, $HOME/.lesskey is used; on MS-DOS systems, $HOME/_lesskey is used; and on OS/2 systems $HOME/lesskey.ini is used, or $INIT/lesskey.ini if $HOME is undefined.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183457", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102245/" ] }
183,465
Once a file is gzipped, is there a way of quickly querying it to say what the uncompressed file size is (without decompressing it), especially in cases where the uncompressed file is > 4GB in size. According to the RFC https://www.rfc-editor.org/rfc/rfc1952#page-5 you can query the last 4 bytes of the file, but if the uncompressed file was > 4GB then the value just represents the uncompressed value modulo 2^32 This value can also be retrieved by running gunzip -l foo.gz , however the "uncompressed" column just contains uncompressed value modulo 2^32 again, presumably as it's reading the footer as described above. I was just wondering if there is a way of getting the uncompressed file size without having to decompress it first, this would be especially useful in the case where gzipped files contain 50GB+ of data and would take a while to decompress using methods like gzcat foo.gz | wc -c EDIT: The 4GB limitation is openly acknowledged in the man page of the gzip utility included with OSX ( Apple gzip 242 ) BUGS According to RFC 1952, the recorded file size is stored in a 32-bit integer, therefore, it can not represent files larger than 4GB. This limitation also applies to -l option of gzip utility.
I believe the fastest way is to modify gzip so that testing in verbose mode outputs the number of bytes decompressed; on my system, with a 7761108684-byte file, I get % time gzip -tv test.gz
test.gz: OK (7761108684 bytes)
gzip -tv test.gz  44.19s user 0.79s system 100% cpu 44.919 total
% time zcat test.gz | wc -c
7761108684
zcat test.gz  45.51s user 1.54s system 100% cpu 46.987 total
wc -c  0.09s user 1.46s system 3% cpu 46.987 total To modify gzip (1.6, as available in Debian), the patch is as follows: --- a/gzip.c
+++ b/gzip.c
@@ -61,6 +61,7 @@
 #include <stdbool.h>
 #include <sys/stat.h>
 #include <errno.h>
+#include <inttypes.h>
 #include "closein.h"
 #include "tailor.h"
@@ -694,7 +695,7 @@
     if (verbose) {
         if (test) {
-            fprintf(stderr, " OK\n");
+            fprintf(stderr, " OK (%jd bytes)\n", (intmax_t) bytes_out);
         } else if (!decompress) {
             display_ratio(bytes_in-(bytes_out-header_bytes), bytes_in, stderr);
@@ -901,7 +902,7 @@
     /* Display statistics */
     if(verbose) {
         if (test) {
-            fprintf(stderr, " OK");
+            fprintf(stderr, " OK (%jd bytes)", (intmax_t) bytes_out);
         } else if (decompress) {
             display_ratio(bytes_out-(bytes_in-header_bytes), bytes_out,stderr);
         } else { A similar approach has been implemented in gzip, and will be included in the release following 1.11; gzip -l now decompresses the data to determine its size.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183465", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81852/" ] }
183,496
I am looking for a simple way to create a permanent alias for all users, so ~/.bashrc or ~/.bash_profile is not an option. Hasn't anybody created a program for this? I think it should be a very common need. If not, I can always create a custom Bash script, but I need to know if there is an equivalent of .bash_profile for all users. In my case, I am using Mac OS X v10.9 (Mavericks) and Ubuntu 12.04 (Precise Pangolin), but I would like a method that works on major Unix systems. UPDATE: I was wondering about a program which allows the users to manage a list of permanent aliases directly from the command line without having to edit files. It would have options for setting aliases for all users, target users, interactive/login shells, etc. UPDATE 2: Reply to the answer of @jimmij: $ su -m
Password:
# cat /etc/profile
alias test343="echo working"
# cat /etc/bash.bashrc
alias test727="echo working"
# test727
bash: test727: command not found
# test343
bash: test343: command not found
Please have a look at the bash manual: /etc/profile
    The systemwide initialization file, executed for interactive login shells
/etc/bash.bashrc
    The systemwide initialization file, executed for interactive, non-login shells
~/.bash_profile
    The personal initialization file, executed for interactive login shells
~/.bashrc
    The individual per-interactive-shell startup file
~/.bash_logout
    The individual login shell cleanup file, executed when a login shell exits So you need to put your aliases in /etc/profile or /etc/bash.bashrc in order to make them available for all users.
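For example, to make a hypothetical ll alias available to every bash user on a system whose interactive shells read /etc/bash.bashrc: echo "alias ll='ls -la'" | sudo tee -a /etc/bash.bashrc Login shells read /etc/profile instead, so you may want the same line in both files.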
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183496", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101930/" ] }
183,504
I would like to transfer files between two remote hosts using only the local shell, but it seems rsync doesn't support synchronisation if two remotes are specified, as follows: $ rsync -vuar host1:/var/www host2:/var/www
The source and destination cannot both be remote. What other workarounds/commands could I use to achieve similar results?
As you have discovered, you cannot use rsync with a remote source and a remote destination. Assuming the two servers can't talk directly to each other, it is possible to use ssh to tunnel via your local machine. Instead of rsync -vuar host1:/var/www host2:/var/www you can use this: ssh -R localhost:50000:host2:22 host1 'rsync -e "ssh -p 50000" -vuar /var/www localhost:/var/www' The first instance of /var/www applies to the source on host1; the localhost:/var/www corresponds to the destination on host2. In case you're curious, the -R option sets up a reverse channel from port 50000 on host1 that maps (via your local machine) to port 22 on host2. There is no direct connection from host1 to host2.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/183504", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21471/" ] }
183,514
I want to monitor only a process and its children processes on htop . Filtering on the name of the parent process lists only the parent process, not its children. How do I show the children processes too?
Under Linux, you can do: htop -p `pstree -p $PID | perl -ne 'push @t, /\((\d+)\)/g; END { print join ",", @t }'` where $PID is the root process. This works as follows: the list of wanted processes is obtained with pstree, using the -p option to list them with their PID. The output is piped to a Perl script that retrieves the PIDs using a regular expression (here, \((\d+)\)) and outputs them separated with commas. This list is provided as the argument of htop -p. For other OSes like Mac OS, you may need to adapt the regular expression that retrieves the PIDs. Note: it is unfortunately not possible to update the list with new children that are spawned later, because once htop has been executed, one cannot do anything else. This is a limitation of htop (current version: 2.0.2).
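If you use this often, you could wrap the pipeline above in a small shell function (the name is arbitrary): htop_tree() {
    htop -p "$(pstree -p "$1" | perl -ne 'push @t, /\((\d+)\)/g; END { print join ",", @t }')"
}
# usage: htop_tree 1234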
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/183514", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56755/" ] }
183,595
I have heard that if you don't make partitions in Linux, it is very hard to recover the data, and that if you make a few partitions it is easier to recover, for example if you make /part1 /part2 /part3. But now some of my friends tell me that /home/user/{all data here} makes no difference compared with making /part1 /part2 /part3 as far as recovery is concerned. Which one is true, and why?
To illustrate the question in a simple and efficient manner, consider two scenarios. You install your favourite linux distribution on the entire disk, i.e. without any partitions: suppose your system crashes because the operating system is unable to access some sectors and unable to boot. You lose some chunk of data due to bad sectors, and because of that you might be unable to access other chunks of data on your hard disk. The bottom line is that some bad sector is affecting your entire data, so recovery here is probably harder than if you were to use multiple partitions for different categories of data. You install your favourite linux distribution by partitioning the hard disk: if you partition your hard disk, say sda1 for boot, sda2 for root, sda3 for opt, sda4 for usr, sda5 for home and so on, then if some kind of crash or bad-sector problem occurs, there is a greater probability than in the previous scenario that you can save/recover your other partitions. It is also useful in cases like, for example, say that I have crashed my system (consider it an OS problem) and the system does not boot; I could reinstall my system without touching my home partition, so the home partition is isolated and safe. Other benefits are as follows: less time spent in file system checks; freedom to choose different file systems; protection of file systems; ease in repairing file systems by pinpointing the problematic file system. And of course there are benefits of Logical Volume Management (LVM): it starts with a single volume group and subsequently creates multiple logical volumes to hold the necessary file systems. I personally don't use LVM, so for more you can visit Wikipedia and Gentoo.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183595", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56476/" ] }
183,630
Is there a way to pass an array to a function as one of its parameters? Currently I have #!/bin/bash
highest_3 () {
    number_under_test=(${array[@]})
    max_of_3=0
    for ((i = 0; i<$((${#number_under_test[@]}-2)); i++ ))
    {
        test=$((number_under_test[i] + number_under_test[i+1] + number_under_test[i+2]))
        if [ $test -gt $max_of_3 ]; then
            max_of_3=$((number_under_test[i]+ number_under_test[i+1]+ number_under_test[i+2]))
            result=$((number_under_test[i]))$((number_under_test[i+1]))$((number_under_test[i+2]))
        fi
    }
}
array=(1 2 3 4 5 6 7 8 7 6 5 4 3 2 1)
highest_3
echo result=$result
array=(1 2 3 4 3 2 1)
highest_3
echo result=$result which works by just setting array and using array, but is there a way to pass in the array, e.g. (1 2 3 4 5 4 3 2 1), as an actual parameter rather than just setting a (presumably global) variable? Update: I'd like to be able to pass in other parameters beside this array.
You can always pass the array to the function and rebuild it as an array within the function: #!/usr/bin/env bash
foo () {
    ## Read the 1st parameter passed into the new array $_array
    _array=( "$1" )
    ## Do something with it.
    echo "Parameters passed were 1: ${_array[@]}, 2: $2 and 3: $3"
}
## define your array
array=(a 2 3 4 5 6 7 8 7 6 5 4 3 2 1)
## define two other variables
var1="foo"
var2="bar"
## Call your function
foo "$(echo ${array[@]})" $var1 $var2 The script above produces the following output: $ a.sh
Parameters passed were 1: a 2 3 4 5 6 7 8 7 6 5 4 3 2 1, 2: foo and 3: bar
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183630", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10043/" ] }
183,664
I'm calling a function and I want to pass up to 100 parameters onto another function. I do not want to pass on the first 3 params; I start with param4 being the first param for the other program. I am currently allowing for passing on up to 19 additional with $function_under_test "$4" "$5" "$6" "$7" "$8" "$9" "${10}" "${11}" "${12}" "${13}" "${14}" "${15}" "${16}" "${17}" "${18}" "${19}" but this is not very sustainable for larger sets of params. I tried declare -a pass_on_params
for ((a=2; a<$#; a++)); do
    pass_on_params+=(${@[a]})    # line 8
done
echo "->" $pass_on_params but I get do_test.sh: line 8: ${@[a]}: bad substitution Full code is: do_test () {
    function_under_test=$1
    line_number=$2
    expected="$3"
    param1="$4"
    declare -a pass_on_params
    for ((a=2; a<$#; a++)); do
        pass_on_params+=(${@[a]})
    done
    echo "ppppppppp" $pass_on_params
    $function_under_test "$4" "$5" "$6" "$7" "$8" "$9" "${10}" "${11}" "${12}" "${13}" "${14}" "${15}" "${16}" "${17}" "${18}" "${19}"
    if [ $result -eq $expected ]; then
        printf '.'
    else
        printf 'F'
        error_messages=$error_messages"Call to '$function_under_test $param1' failed: $result was not equal to $expected at line $line_number\n"
    fi
} Shell is bash.
"${@:4}" works for me in bash. You can also assign to another array and do indexing on it: foo=("$@")second_function "${foo[@]:4}"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183664", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10043/" ] }
183,665
I am writing a script in Korn Shell where, at one statement, I want something like getch() in C. I want my while loop to exit when it sees that I have pressed ESC on the keyboard. For example: while [[ getch() != 27 ]]
do
    print "Hello"
done In my script this getch() != 27 won't work. I want something there that works. Can anyone help?
Use read: x=''
while [[ "$x" != "A" ]]; do read -n1 x; done read -n 1 reads 1 character. This should work in bash, but you can check if it works in ksh.
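To match the ESC key from the question, a sketch that compares against the escape character (assumes $'...' quoting, which is available in bash and ksh93; verify the read options in your ksh): esc=$'\e'
x=''
while [[ "$x" != "$esc" ]]; do
    read -rn1 x
done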
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183665", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99546/" ] }
183,689
I just set up my new Pi2 with Raspbian. All works well; I installed avahi, so that I can reach the Pi via raspberrypi.local. However, the Pi does not find my MacBook, which is usually resolvable via mymacbook.local. For example, this is what I get when pinging: raspberrypi $ ping mymacbook.local
ping: unknown host mymacbook.local The other way around works fine. What do I need to do to make Raspbian search the .local domain? The Pi is connected via WiFi (wpa_supplicant), using DHCP.
What you are trying to do is to add multicast DNS to the name searching on Raspbian. Install the package libnss-mdns (i.e. sudo apt-get install libnss-mdns). This will pull in the Avahi packages to implement multicast DNS (which is used for name resolution for ".local" domains). After installation ensure that /etc/nsswitch.conf has the line: hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4 Edit: when going from mac --> raspi, to ensure that the Mac can log into your Raspberry Pi, install the package avahi-daemon and add a file /etc/avahi/services/ssh.service containing <?xml version="1.0" standalone='no'?><!--*-nxml-*-->
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">%h</name>
  <service>
    <type>_ssh._tcp</type>
    <port>22</port>
  </service>
</service-group> Note that the RaspberryPi ships with IPv6 turned off. If the other host does not implement IPv4 link-local addresses then you may need to turn on IPv6 on the RaspberryPi to have an IP protocol in common between the two machines. You can turn on IPv6 on the RasPi by deleting /etc/modprobe.d/ipv6.conf and rebooting.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183689", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32750/" ] }
183,690
There's a computer PC1 with a web application server that is connected to the internet using a private IP (172.16.x.x). I have an outside server (Linux, command line only) PC2 using a real IP, 192.211.y.y. PC1 can access PC2, but PC2 can't access PC1 (because of the private IP). I believe there's an application that could enable visiting PC1 through PC2, so that when people visit 192.211.y.y:12124 it would show the content of PC1:12123. I forgot the name of that application.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183690", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27996/" ] }
183,717
The man page of unshare says: UTS namespace setting hostname, domainname will not affect rest of the system (CLONE_NEWUTS flag) What does UTS stand for?
It means the process has a separate copy of the hostname and the (now mostly unused) NIS domain name, so it can set it to something else without affecting the rest of the system. The hostname is set via sethostname and is the nodename member of the struct returned by uname. The NIS domain name is set by setdomainname and is the domainname member of the struct returned by uname. UTS stands for UNIX Timesharing System. References: lwn.net - Namespaces in operation, part 1: namespaces overview
uts namespaces: Introduction
man uname(2)
Meaning of UTS in UTS_RELEASE
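A quick way to see the isolation in action, assuming a Linux system with util-linux's unshare (requires root; the outer hostname shown is illustrative): # unshare --uts sh -c 'hostname sandbox; hostname'
sandbox
# hostname    # back in the original namespace, unchanged
myhost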
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/183717", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65170/" ] }
183,745
As far as I know, [[ is an enhanced version of [, but I am confused when I see [[ as a keyword and [ being shown as a builtin. [root@server ~]# type [
[ is a shell builtin
[root@server ~]# type [[
[[ is a shell keyword TLDP says A builtin may be a synonym to a system command of the same name, but Bash reimplements it internally. For example, the Bash echo command is not the same as /bin/echo, although their behavior is almost identical. and A keyword is a reserved word, token or operator. Keywords have a special meaning to the shell, and indeed are the building blocks of the shell's syntax. As examples, for, while, do, and ! are keywords. Similar to a builtin, a keyword is hard-coded into Bash, but unlike a builtin, a keyword is not in itself a command, but a subunit of a command construct. [2] Shouldn't that make both [ and [[ a keyword? Is there anything that I am missing here? Also, this link re-affirms that both [ and [[ should belong to the same kind.
The difference between [ and [[ is quite fundamental. [ is a command. Its arguments are processed just the way any other command's arguments are processed. For example, consider: [ -z $name ] The shell will expand $name and perform both word splitting and filename generation on the result, just as it would for any other command. As an example, the following will fail: $ name="here and there"
$ [ -n $name ] && echo not empty
bash: [: too many arguments To have this work correctly, quotes are necessary: $ [ -n "$name" ] && echo not empty
not empty [[ is a shell keyword and its arguments are processed according to special rules. For example, consider: [[ -z $name ]] The shell will expand $name but, unlike any other command, it will perform neither word splitting nor filename generation on the result. For example, the following will succeed despite the spaces embedded in name: $ name="here and there"
$ [[ -n $name ]] && echo not empty
not empty Summary [ is a command and is subject to the same rules as all other commands that the shell executes. Because [[ is a keyword, not a command, however, the shell treats it specially and it operates under very different rules.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/183745", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68296/" ] }
183,755
I want to create a simple xml file using a bash script (not with DOM), but when I set a value in the line, echo prints the $lorem word and not the value: lorem=LOL
echo '<root><foo a="b">$lorem</foo><bar value="ipsum" /></root>' >> $MY_XML I also tried echo '<root><foo a="b">\$lorem</foo><bar value="ipsum" /></root>' >> $MY_XML
echo '<root><foo a="b">"$lorem"</foo><bar value="ipsum" /></root>' >> $MY_XML
echo '<root><foo a="b">\"$lorem\"</foo><bar value="ipsum" /></root>' >> $MY_XML but all these print exactly the line and not the value. Please advise how to print the value of $lorem, as in the following example: <root><foo a="b">LOL</foo><bar value="ipsum" /></root>
Print the line this way: echo '<root><foo a="b">'"$lorem"'</foo><bar value="ipsum" /></root>' >> "$MY_XML" This is needed because single quotes prevent the shell from expanding variables inside them.
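Equivalently, you can keep the whole string in double quotes and escape the literal double quotes instead; both forms let the shell expand $lorem: echo "<root><foo a=\"b\">$lorem</foo><bar value=\"ipsum\" /></root>" >> "$MY_XML"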
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183755", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67059/" ] }
183,763
After spamassassin was restarted by the daily cronjob this morning, it's flooding syslog with the following errors: Feb 9 09:24:26 mail spamd[8766]: spamd: got connection over /var/run/spamd.socket
Feb 9 09:24:26 mail spamd[8766]: spamd: setuid to Debian-exim succeeded
Feb 9 09:24:26 mail spamd[8766]: spamd: checking message <004c01d0444a$01d5a905$d690a59f@kiffyv> for Debian-exim:106
Feb 9 09:24:26 mail spamd[8766]: rules: failed to run T_SPF_HELO_PERMERROR test, skipping:
Feb 9 09:24:26 mail spamd[8766]: (Can't locate object method "check_for_spf_helo_permerror" via package "Mail: [...]:SpamAssassin::PerMsgStatus" at (eval 1169) line 19.
Feb 9 09:24:26 mail spamd[8766]: )
Feb 9 09:24:28 mail spamd[8766]: rules: failed to run T_SPF_TEMPERROR test, skipping:
Feb 9 09:24:28 mail spamd[8766]: (Can't locate object method "check_for_spf_temperror" via package "Mail: [...]:SpamAssassin::PerMsgStatus" at (eval 1169) line 614.
Feb 9 09:24:28 mail spamd[8766]: )
Feb 9 09:24:28 mail spamd[8766]: rules: failed to run T_SPF_PERMERROR test, skipping:
Feb 9 09:24:28 mail spamd[8766]: (Can't locate object method "check_for_spf_permerror" via package "Mail: [...]:SpamAssassin::PerMsgStatus" at (eval 1169) line 784.
Feb 9 09:24:28 mail spamd[8766]: )
Feb 9 09:24:28 mail spamd[8766]: rules: failed to run T_SPF_HELO_TEMPERROR test, skipping:
Feb 9 09:24:28 mail spamd[8766]: (Can't locate object method "check_for_spf_helo_temperror" via package "Mail: [...]:SpamAssassin::PerMsgStatus" at (eval 1169) line 1129.
Feb 9 09:24:28 mail spamd[8766]: )
Feb 9 09:24:29 mail spamd[8766]: spamd: identified spam (26.6/5.0) for Debian-exim:106 in 3.1 seconds, 821 bytes.
Feb 9 09:24:29 mail spamd[8766]: spamd: result: Y 26 - AXB_XMAILER_MIMEOLE_OL_024C2,BAYES_99,BAYES_999,DOS_OE_TO_MX,NAME_EMAIL_DIFF,RAZOR2_CF_RANGE_51_100,RAZOR2_CF_RANGE_E8_51_100,RAZOR2_CHECK,RCVD_IN_BRBL_LASTEXT,RCVD_IN_PSBL,RCV
Feb 9 09:24:30 mail spamd[8759]: prefork: child states: II I have already checked if there were any unattended upgrades. Also I checked Mail::SpamAssassin::PerMsgStatus via CPAN but it is already installed. OS is Ubuntu Server 12.04.5 LTS and there are no pending updates. How can I resolve this error?
It may be a tad easier to go to the update directory (something like /var/lib/spamassassin/3.003002/updates_spamassassin_org) and comment out every line containing T_SPF_PERMERROR or T_SPF_TEMPERROR, like: # header T_SPF_PERMERROR eval:check_for_spf_permerror() etc., instead of upgrading or cherry-picking upstream changes. If you use automatic updates you may want to go manual until they realize their problem (which seems not to be the case just yet).
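A rough one-liner for the commenting-out step; the version directory varies per installation, so adjust the path, GNU sed is assumed, and -i.bak keeps backups. Review the result before restarting spamd: cd /var/lib/spamassassin/3.003002/updates_spamassassin_org
sudo sed -i.bak -E '/T_SPF_(HELO_)?(PERM|TEMP)ERROR/ s/^/# /' *.cf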
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183763", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102403/" ] }
183,910
I'm running a dual-screen setup and have my trackpad disabled most of the time (which includes hiding the mouse pointer). When I reenable the trackpad (and display the mouse pointer again), I've lost track of where the pointer was before. I'm looking for a tool to highlight the current mouse position (e.g. by a circle). Ideally this would be a single command flashing the circle for a short period of time. I'm aware that xdotool can find the current position, yet there is no highlighting; also, key-mon doesn't provide this functionality. I've also read that cairo composition manager provides such functionality, yet I'm wondering if there is a smaller tool to achieve this. In case there is no such tool: What is the easiest way to display such a circle around the cursor using the data provided by xdotool getmouselocation? In case this is relevant: I don't use a desktop environment, just the xmonad window manager.
While I like Mikeserv's answer for cleverness, it has the downside that it will create a window which "steals" the focus and has to be clicked away. I also find it takes just slightly too long to start: about 0.2 to 0.3 seconds, which is just slightly too slow for a "smooth" experience. I finally got around to digging into XLib, and clobbered together a basic C program to do this. The visual effect is roughly similar to what Windows (XP) has (from memory). It's not very beautiful, but it works ;-) It doesn't "steal" focus, starts near-instantaneously, and you can click "through" it. You can compile it with cc find-cursor.c -o find-cursor -lX11 -lXext -lXfixes. There are some variables at the top you can tweak to change the size, speed, etc. I released this as a program at https://github.com/arp242/find-cursor. I recommend you use this version, as it has some improvements that the below script doesn't have (such as commandline arguments and the ability to click "through" the window). I've left the below as-is due to its simplicity. /*
 * https://github.com/arp242/find-cursor
 * Copyright © 2015 Martin Tournoij <[email protected]>
 * See below for full copyright
 */
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>

// Some variables you can play with :-)
int size = 220;
int step = 40;
int speed = 400;
int line_width = 2;
char color_name[] = "black";

int main(int argc, char* argv[]) {
    // Setup display and such
    char *display_name = getenv("DISPLAY");
    if (!display_name) {
        fprintf(stderr, "%s: cannot connect to X server '%s'\n", argv[0], display_name);
        exit(1);
    }
    Display *display = XOpenDisplay(display_name);
    int screen = DefaultScreen(display);

    // Get the mouse cursor position
    int win_x, win_y, root_x, root_y = 0;
    unsigned int mask = 0;
    Window child_win, root_win;
    XQueryPointer(display, XRootWindow(display, screen),
        &child_win, &root_win,
        &root_x, &root_y, &win_x, &win_y, &mask);

    // Create a window at the mouse position
    XSetWindowAttributes window_attr;
    window_attr.override_redirect = 1;
    Window window = XCreateWindow(display, XRootWindow(display, screen),
        root_x - size/2, root_y - size/2,   // x, y position
        size, size,                         // width, height
        0,                                  // border width
        DefaultDepth(display, screen),      // depth
        CopyFromParent,                     // class
        DefaultVisual(display, screen),     // visual
        CWOverrideRedirect,                 // valuemask
        &window_attr                        // attributes
    );
    XMapWindow(display, window);
    XStoreName(display, window, "find-cursor");

    XClassHint *class = XAllocClassHint();
    class->res_name = "find-cursor";
    class->res_class = "find-cursor";
    XSetClassHint(display, window, class);
    XFree(class);

    // Keep the window on top
    XEvent e;
    memset(&e, 0, sizeof(e));
    e.xclient.type = ClientMessage;
    e.xclient.message_type = XInternAtom(display, "_NET_WM_STATE", False);
    e.xclient.display = display;
    e.xclient.window = window;
    e.xclient.format = 32;
    e.xclient.data.l[0] = 1;
    e.xclient.data.l[1] = XInternAtom(display, "_NET_WM_STATE_STAYS_ON_TOP", False);
    XSendEvent(display, XRootWindow(display, screen), False, SubstructureRedirectMask, &e);

    XRaiseWindow(display, window);
    XFlush(display);

    // Prepare to draw on this window
    XGCValues values;
    values.graphics_exposures = False;
    unsigned long valuemask = 0;
    GC gc = XCreateGC(display, window, valuemask, &values);

    Colormap colormap = DefaultColormap(display, screen);
    XColor color;
    XAllocNamedColor(display, colormap, color_name, &color, &color);
    XSetForeground(display, gc, color.pixel);
    XSetLineAttributes(display, gc, line_width, LineSolid, CapButt, JoinBevel);

    // Draw the circles
    for (int i=1; i<=size; i+=step) {
        XDrawArc(display, window, gc,
            size/2 - i/2, size/2 - i/2,   // x, y position
            i, i,                         // Size
            0, 360 * 64);                 // Make it a full circle

        XSync(display, False);
        usleep(speed * 100);
    }

    XFreeGC(display, gc);
    XCloseDisplay(display);
}

/*
 * The MIT License (MIT)
 *
 * Copyright © 2015 Martin Tournoij
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to
 * deal in the Software without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * The software is provided "as is", without warranty of any kind, express or
 * implied, including but not limited to the warranties of merchantability,
 * fitness for a particular purpose and noninfringement. In no event shall the
 * authors or copyright holders be liable for any claim, damages or other
 * liability, whether in an action of contract, tort or otherwise, arising
 * from, out of or in connection with the software or the use or other dealings
 * in the software.
 */
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/183910", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102507/" ] }
183,918
I installed RHEL 7 using server with GUI with no added option on my local virtualbox, but I cannot get Firefox connected to the Internet. I checked /etc/resolv.conf contains nameserver setting I can ping other servers, such as 8.8.8.8 I open the firewall config GUI, I can see connected is at the left corner, I added http , https and 80 , 443 , 8080 and 8443 in public zone, and set interface to use public zone Firefox cannot get connected, and I can't curl either. Could some RHEL experts explain what I am missing?
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/183918", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102515/" ] }
183,944
I have a bash script and a php script that function in concert to trim audio/video files using start/stop times. PHP script: <?php
// Create datetime objects
$dt1 = new DateTime($argv[1]);
$dt2 = new DateTime($argv[2]);
// Convert difference to seconds
$dt3 = $dt2->format('U') - $dt1->format('U');
// echo $dt3."\n";
$h = (int)($dt3 / 3600);
$dt3 %= 3600;
$m = (int)($dt3 / 60);
$dt3 %= 60;
$s = $dt3;
// Dump as H:M:S
echo $h . ":" . $m . ":" . $s;
?> chopvideoaudio.sh script: #!/bin/bash
INFILE=$1
START=$2
STOP=$3
OUTFILE=$4
OFFSET=`php TimeDiff.php "$START" "$STOP"`
echo "Dissecting $INFILE starting from $START to $STOP (duration $OFFSET)"
ffmpeg -ss "$START" -t "$OFFSET" -i "$INFILE" "$OUTFILE" Usage: ./chopvideoaudio.sh [input.mp4] [startchop] [stopchop] [output.mp4] where [startchop] and [stopchop] are both absolute timestamps from the beginning of the track. Example command to run this script: ./chopvideoaudio.sh input.mp4 00:01:20 00:01:45 output.mp4 I want a YAD (yet another dialog) script that will open up a dialog box containing an input field to enter a custom file type (e.g. mp3, mp4, avi), then input fields for the two timestamps, in which I can enter two custom timestamps. After pressing OK the script will run and extract the section between the two timestamps. I would also be interested in a solution using Zenity, but I prefer YAD.
Here is a solution using yad, bash only (no php), with one dialog: #!/bin/bash
eval $(yad --width=400 --form --field=input:FL --field=start --field=end --field=output:SFL "" "00:00:00" "00:00:00" "" | awk -F'|' '{printf "INPUT=\"%s\"\nSTART=%s\nEND=%s\nOUTPUT=\"%s\"\n", $1, $2, $3, $4}')
[[ -z $INPUT || -z $START || -z $END || -z $OUTPUT ]] && exit 1
DIFF=$(($(date +%s --date="$END")-$(date +%s --date="$START")))
OFFSET=""$(($DIFF / 3600)):$(($DIFF / 60 % 60)):$(($DIFF % 60))
ffmpeg -ss "$START" -t "$OFFSET" -i "$INPUT" "$OUTPUT" Here is a screenshot of what it will look like. Please note that the text of the buttons will be automatically adapted to your chosen language. I'm a French speaker, obviously! The drawback of this one-dialog approach is that with yad, you cannot pre-select a file extension for the file input. If this is mandatory, here is a two-step/two-dialog solution: #!/bin/bash
INPUT=$(yad --width=600 --height=400 --file-selection --file-filter='*.mp3 *.mp4 *.avi')
eval $(yad --width=400 --form --field=start --field=end --field=output:SFL "00:00:00" "00:00:00" "${INPUT/%.*}-out.${INPUT##*.}" | awk -F'|' '{printf "START=%s\nEND=%s\nOUTPUT=\"%s\"\n", $1, $2, $3}')
[[ -z $START || -z $END || -z $OUTPUT ]] && exit 1
DIFF=$(($(date +%s --date="$END")-$(date +%s --date="$START")))
OFFSET=""$(($DIFF / 3600)):$(($DIFF / 60 % 60)):$(($DIFF % 60))
ffmpeg -ss "$START" -t "$OFFSET" -i "$INPUT" "$OUTPUT"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183944", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77038/" ] }
183,974
I have a bash script that emails me whenever a web server is not responding, and the script is run by cron every 5 minutes. However, if the website goes down for a few hours, I'd receive too many messages instead of just one. What's the best way to make it email only once? Should I use an environment variable and check it before sending the email/resetting it when the web server goes up again? Are there better ways to do this (without polluting the environment)? Am I doing something silly right now? I'm not confident in my shell scripting skills. #!/bin/sh
output=$(wget http://lon2315:8081 2>&1)
pattern="connected"
if [[ ! "$output" =~ "$pattern" ]]
then
    echo "$output" | mail -s "Website is down" "[email protected]"
fi
I don't think you can use environment variables, as they won't persist between script "runs". Alternatively, you could write to a temporary file in /tmp or somewhere in your home directory, then check it each time? For example, something like #!/bin/sh
output=$(wget http://lon2315:8081 2>&1)
pattern="connected"
tempfile='/tmp/my_website_is_down'
if [[ ! "$output" =~ "$pattern" ]]
then
    if ! [[ -f "$tempfile" ]]; then
        echo "$output" | mail -s "Website is down" "[email protected]"
        touch "$tempfile"
    fi
else
    [[ -f "$tempfile" ]] && rm "$tempfile"
fi
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183974", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80615/" ] }
183,994
I've never really got how chmod worked up until today. I followed a tutorial that explained a big deal to me. For example, I've read that you've got three different permission groups: owner (u)
group (g)
everyone (o) Based on these three groups, I now know that: If the file is owned by the user, the user permissions determine the access. If the group of the file is the same as the user's group, the group permissions determine the access. If the user is not the file owner, and is not in the group, then the other permission is used. I've also learned that you've got the following permissions: read (r)
write (w)
execute (x) I created a directory to test my newly acquired knowledge: mkdir test Then I did some tests: chmod u+rwx test/
# drwx------
chmod g+rx test/
# drwxr-x---
chmod u-x test/
# drw-r-x--- After fooling around for some time I think I finally got the hang of chmod and the way you set permissions using this command. But... I still have a few questions: What does the d at the start stand for? What's the name and use of the containing slot and what other values can it hold? How can I set and unset it? What is the value for this d? (As you only have 7 = 4+2+1.) Why do people sometimes use 0777 instead of 777 to set their permissions? But as I shouldn't be asking multiple questions, I'll try to ask it in one question. In UNIX based systems such as all Linux distributions, concerning the permissions, what does the first part (d) stand for and what's the use of this part of the permissions?
I'll answer your questions in three parts: file types, permissions, and use cases for the various forms of chmod.

File types

The first character in ls -l output represents the file type; d means it's a directory. It can't be set or unset; it depends on how the file was created. You can find the complete list of file types in the ls documentation; those you're likely to come across are:

- : "regular" file, created with any program which can write a file
b : block special file, typically disk or partition devices, can be created with mknod
c : character special file, can also be created with mknod (see /dev for examples)
d : directory, can be created with mkdir
l : symbolic link, can be created with ln -s
p : named pipe, can be created with mkfifo
s : socket, can be created with nc -U
D : door, created by some server processes on Solaris/openindiana.

Permissions

chmod 0777 is used to set all the permissions in one chmod execution, rather than combining changes with u+ etc. Each of the four digits is an octal value representing a set of permissions:

1. suid, sgid and "sticky" (see below)
2. user permissions
3. group permissions
4. "other" permissions

The octal value is calculated as the sum of the permissions:

"read" is 4
"write" is 2
"execute" is 1

For the first digit:

- suid is 4; binaries with this bit set run as their owner user (commonly root)
- sgid is 2; binaries with this bit set run as their owner group (this was used for games so high scores could be shared, but it's often a security risk when combined with vulnerabilities in the games), and files created in directories with this bit set belong to the directory's owner group by default (this is handy for creating shared folders)
- "sticky" (or "restricted deletion") is 1; files in directories with this bit set can only be deleted by their owner, the directory's owner, or root (see /tmp for a common example of this).

See the chmod manpage for details. Note that in all this I'm ignoring other security features which can alter users' permissions on files (SELinux, file ACLs...).

Special bits are handled differently depending on the type of file (regular file or directory) and the underlying system. (This is mentioned in the chmod manpage.) On the system I used to test this (with coreutils 8.23 on an ext4 filesystem, running Linux kernel 3.16.7-ckt2), the behaviour is as follows. For a file, the special bits are always cleared unless explicitly set, so chmod 0777 is equivalent to chmod 777, and both commands clear the special bits and give everyone full permissions on the file. For a directory, the special bits are never fully cleared using the four-digit numeric form, so in effect chmod 0777 is also equivalent to chmod 777, but it's misleading since some of the special bits will remain as-is. (A previous version of this answer got this wrong.) To clear special bits on directories you need to use u-s, g-s and/or o-t explicitly or specify a negative numeric value, so chmod -7000 will clear all the special bits on a directory.

In ls -l output, suid, sgid and "sticky" appear in place of the x entry: suid is s or S instead of the user's x, sgid is s or S instead of the group's x, and "sticky" is t or T instead of others' x. A lower-case letter indicates that both the special bit and the executable bit are set; an upper-case letter indicates that only the special bit is set.

The various forms of chmod

Because of the behaviour described above, using the full four digits in chmod can be confusing (at least it turns out I was confused).
It’s useful when you want to set special bits as well as permission bits; otherwise the bits are cleared if you’re manipulating a file, preserved if you’re manipulating a directory. So chmod 2750 ensures you’ll get at least sgid and exactly u=rwx,g=rx,o= ; but chmod 0750 won’t necessarily clear the special bits. Using numeric modes instead of text commands ( [ugo][=+-][rwxXst] ) is probably more a case of habit and the aim of the command. Once you’re used to using numeric modes, it’s often easier to just specify the full mode that way; and it’s useful to be able to think of permissions using numeric modes, since many other commands can use them ( install , mknod ...). Some text variants can come in handy: if you simply want to ensure a file can be executed by anyone, chmod a+x will do that, regardless of what the other permissions are. Likewise, +X adds the execute permission only if one of the execute permissions is already set or the file is a directory; this can be handy for restoring permissions globally without having to special-case files v. directories. Thus, chmod -R ug=rX,u+w,o= is equivalent to applying chmod -R 750 to all directories and executable files and chmod -R 640 to all other files.
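As a quick worked example of the octal arithmetic described above (the binary's path here is hypothetical):

# 4000 (suid) + 0700 (u=rwx) + 0050 (g=rx) + 0005 (o=rx) = 4755
chmod 4755 /usr/local/bin/myprog
ls -l /usr/local/bin/myprog
# -rwsr-xr-x 1 root root ... /usr/local/bin/myprog   <- lower-case 's': suid and execute both set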
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/183994", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102590/" ] }
183,998
I am using bash on Mac OS X. I have a shell file named myshell.sh and I use it to do a lot of things. I have an entrance.sh file which contains:

# content of entrance.sh
./myshell.sh arg1 arg2
./myshell.sh arg3 arg4

But when I run entrance.sh I get an error:

./entrance.sh: line 2: myshell.sh: command not found

I can run myshell.sh directly. What can I do?
./myshell.sh means the script myshell.sh is found in the current directory. If you run this script from somewhere else, it won't work. You could use full paths, but in this case, the only sensible solutions are:

1. Add the location of myshell.sh to your $PATH (in case myshell.sh really is something that is supposed to be called from everywhere). So, add PATH="$PATH":/dir/of/myshell at the beginning of the outer script.

2. Put myshell.sh somewhere that is accessible from everywhere (just like all other executables on the system). That would most likely be /usr/local/bin. Use this only if the script is universally useful.

3. If the scripts rely on local files in their directory (and may even break down and do damage if called from elsewhere), then you should either leave them in the current form (this actually prevents you from calling them from places you are not supposed to), or use cd inside the script to get to the proper location. Be careful to use absolute paths for cd in shell scripts: if something goes wrong, say a cd .. succeeds but cd-ing further in fails, you could escape out of your directory and wreak havoc all over the parent directories.

Mostly I'd recommend solution #1.
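If the two scripts do belong together in one directory, a common alternative (a sketch; adjust names to your layout) is to have the outer script resolve its own location and call its sibling from there:

#!/bin/bash
# entrance.sh -- run myshell.sh from the same directory as this script,
# no matter where entrance.sh itself is invoked from
script_dir=$(cd "$(dirname "$0")" && pwd)
"$script_dir/myshell.sh" arg1 arg2
"$script_dir/myshell.sh" arg3 arg4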
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/183998", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45317/" ] }
184,009
I'm writing a script which shows the git log for a directory when I cd into it. Such a log can be overwhelming, containing hundreds of lines. So far I have been limiting that to a hard-coded 20 lines ( ... | head -n 20 ), which is fine on the screen at work, but too much on the smaller MacBook screen at home. I would prefer the log to take up about half the (vertical) screen on either terminal. And "terminal" also changes: it's Gnome terminal at work, but iTerm2 at home. And I do not use screen or tmux. How do I find the number of vertical lines available in a terminal from command line?
Terminal parameters are stored in the $LINES and $COLUMNS variables. Additionally, you can use a terminal-querying program such as tput:

tput lines   # outputs the number of lines of the present terminal window
tput cols    # outputs the number of columns of the present terminal window
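For the use case in the question, a log that takes up about half the screen on whichever terminal you are on, the value can be fed straight into head (a sketch):

git log --oneline | head -n $(( $(tput lines) / 2 ))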
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/184009", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32775/" ] }
184,014
I would like to try lld from LLVM. The doc on apt can be found here, but I don't know which package contains the lld executable. It seems the purpose of lld is to remove the system dependency, but clang doesn't have lld built in. (Not yet?)

I'm using the following example to test whether lld is used. GNU ld places constraints on the order in which archive files appear, but lld seems to be more tolerant about this (if I understand it correctly), so this example should build successfully if lld is used. However, it fails on my box.

# one.c
extern int two();
int main(int argc, char *argv[])
{
    two();
    return 0;
}

# two.c
void two()
{
}

$ clang -c two.c; ar cr two.a two.o ; clang -c one.c ; clang two.a one.o
one.o: In function `main':
one.c:(.text+0x19): undefined reference to `two'
clang: error: linker command failed with exit code 1 (use -v to see invocation)

If we use -v:

$ clang -c two.c; ar cr two.a two.o ; clang -c one.c ; clang -v two.a one.o
Ubuntu clang version 3.4-1ubuntu3 (tags/RELEASE_34/final) (based on LLVM 3.4)
Target: x86_64-pc-linux-gnu
Thread model: posix
Found candidate GCC installation: /usr/bin/../lib/gcc/i686-linux-gnu/4.9
Found candidate GCC installation: /usr/bin/../lib/gcc/i686-linux-gnu/4.9.0
Found candidate GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8
Found candidate GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8.2
Found candidate GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/4.9
Found candidate GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/4.9.0
Found candidate GCC installation: /usr/lib/gcc/i686-linux-gnu/4.9
Found candidate GCC installation: /usr/lib/gcc/i686-linux-gnu/4.9.0
Found candidate GCC installation: /usr/lib/gcc/x86_64-linux-gnu/4.8
Found candidate GCC installation: /usr/lib/gcc/x86_64-linux-gnu/4.8.2
Found candidate GCC installation: /usr/lib/gcc/x86_64-linux-gnu/4.9
Found candidate GCC installation: /usr/lib/gcc/x86_64-linux-gnu/4.9.0
Selected GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8
 "/usr/bin/ld" -z relro --hash-style=gnu --build-id --eh-frame-hdr -m elf_x86_64 -dynamic-linker /lib64/ld-linux-x86-64.so.2 -o a.out /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/crt1.o /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/crti.o /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/crtbegin.o -L/usr/bin/../lib/gcc/x86_64-linux-gnu/4.8 -L/usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu -L/lib/x86_64-linux-gnu -L/lib/../lib64 -L/usr/lib/x86_64-linux-gnu -L/usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/../../.. -L/lib -L/usr/lib two.a one.o -lgcc --as-needed -lgcc_s --no-as-needed -lc -lgcc --as-needed -lgcc_s --no-as-needed /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/crtend.o /usr/bin/../lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/crtn.o
one.o: In function `main':
one.c:(.text+0x19): undefined reference to `two'
clang: error: linker command failed with exit code 1 (use -v to see invocation)

ENV

Ubuntu clang version 3.4-1ubuntu3 (tags/RELEASE_34/final) (based on LLVM 3.4)
Target: x86_64-pc-linux-gnu
Thread model: posix
Since January 2017, the LLVM apt repository includes lld, as do the snapshot packages available in Debian (starting with 4.0 in unstable, 5.0 in experimental). Since version 5, lld packages are available in Debian ( lld-5.0 in stretch-backports , lld-6.0 in stretch-backports and Debian 10, lld-7 in Debian 9 and 10, lld-8 in buster-backports , and later packages in releases currently in preparation). To install the upstream packages on Debian or Ubuntu, follow the instructions for your distribution . Back in February 2015 when this answer was originally written, the LLVM apt repository stated that it included LLVM, Clang, compiler-rt, polly and LLDB. lld wasn't included. Even the latest snapshot packages in Debian (which are maintained by the same team as the LLVM packages) didn't include lld.
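On a Debian or Ubuntu release that carries the packages, installation is then the usual apt step (a sketch; the package name tracks whichever lld version your release ships):

sudo apt-get install lld-7   # or lld-6.0, lld-8, ... depending on the release
ld.lld --version             # verify the linker is on the PATH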
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/184014", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57697/" ] }
184,031
If a user has loginShell=/sbin/nologin is it still possible to ssh user@machine [command] assuming that the user has proper ssh keys in its home directory that can be used to authenticate? My goal is to keep the user as a nologin, but still able to execute commands on a few other machines on the network (similar to its use through 'sudo -u'), and am wondering if this is a reasonable course.
Setting /sbin/nologin as the user's shell (or /bin/false or /bin/true , which are almost equivalent ) forbids the user from logging in to run any command whatsoever. SSH always invokes the user's login shell to run commands, so you need to set the login shell to one that is able to run some commands. There are several restricted shells that allow users to run only a few commands. For example rssh and scponly are both such shells that allow the user to run a few predefined commands (such as scp , sftp-server , rsync , …). See also Restrict user access in linux and Do you need a shell for SCP?
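If the goal is a single fixed task rather than a handful of commands, another common pattern is an SSH forced command in authorized_keys; the key then runs only that one command, whatever the client requests. Note that the account's login shell must still be a real shell for this to work (so not /sbin/nologin), and the script path below is a hypothetical example:

# in ~user/.ssh/authorized_keys on the target machine:
command="/usr/local/bin/backup.sh",no-pty,no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAA... user@client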
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/184031", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67807/" ] }
184,038
Here is a little script to retarget old, wrong symlinks that I want to make interactive:

#!/bin/bash
# retarget (broken) symbolic links interactively

echo -n "Enter the source directory where symlinks path should be retargeted > "
read response1
if [ -n "$response1" ]; then
    symlinksdirectory=$response1
fi

if [ -d $symlinksdirectory ]; then
    echo -n "Okay, source directory exists. Now enter 1) the symlinks OLD-WRONG target directory > "
    read response2
    if [ -n "$response2" ]; then
        oldtargetdir=$response2
    fi
    echo -n "$oldtargetdir - And 2) enter the symlinks CORRECT target directory > "
    read response3
    if [ -n "$response3" ]; then
        goodtargetdir=$response3
    fi
    echo -n "Now parsing symlinks in $symlinksdirectory to retarget them from $oldtargetdir to $goodtargetdir > "
    find $symlinksdirectory -type l | while read nullsymlink ; do
        wrongpath=$(readlink "$nullsymlink")
        right=$(echo "$wrongpath" | sed s'|$oldtargetdir|$goodtargetdir|')
        ln -fs "$right" "$nullsymlink"
    done
fi

It does not replace the symlinks' paths. My syntax must be bad, since it works fine when I replace the variables with literal paths in the sed command (end of the script):

right=$(echo "$wrongpath" | sed s'|/mnt/docs/dir2|/mnt/docs/dir1/dir2|')

How should I insert the variables properly?
Variable expansion does not happen inside single quotes. In your script, the body of the sed expression is single-quoted, so sed receives the literal strings $oldtargetdir and $goodtargetdir rather than their values, and nothing ever matches. Use double quotes so the shell expands the variables before sed sees the expression:

right=$(echo "$wrongpath" | sed "s|$oldtargetdir|$goodtargetdir|")

That is the only change needed for the substitution to work. One caveat: this assumes the directory paths contain no | character (the delimiter chosen here) and no characters special to sed.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/184038", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87958/" ] }
184,041
I only know a little about bash (or any Linux shell, but I just use bash). I'm used to using for %c in (*.*) do {something} in cmd on Windows, and I really feel inconvenienced without it. I tried wine cmd, but it wouldn't interpret Linux commands (which I need for the {something}) at all, which is not what I want. Any help?

In Windows, for %c in (*.*) do {something} runs {something} once per file (ignoring directories). {something} can refer to %c for the current iteration's file name; other modifiers allow the file's path to be extracted, or the file's name without its path, the file's extension, attributes, modification date/time, size... for /d %c in (*.*) ... will process directories instead of files. In both cases *.* is equivalent to * in Unix-style systems: it matches all non-hidden files or directories, even those with no . in their name.

P.S. I know that the cmd "for" has many usages which may not be covered by one command in bash, but I just want to know the equivalent of the command mentioned above.
for file in *; do
    echo "$file"
done

This usually does not match "hidden" files (names beginning with a dot), though. For matching the hidden files too, you need this first:

shopt -s dotglob

If you want to skip directories (and probably anything "strange") then you have to detect them in the loop; pattern matching (the *) doesn't care about the type of object:

for file in *; do
    test -f "$file" || continue
    echo "$file"
done

Symbolic links are a special case. They are considered if they link to a file, but the file may be in another directory. To ignore symlinks:

for file in *; do
    test -f "$file" || continue
    test -L "$file" && continue
    echo "$file"
done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/184041", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102611/" ] }
184,060
How do I insert bash variables into awk? (For example, I need to do it in some for loop: in the first iteration use awk to search a string by the first column, next by the second column, and so on, using a different pattern from bash variables each time.) Or is there a simple way to build the whole pattern as a string and pass it to awk as an argument?

Trying to insert bash variables into awk, I have some log:

$ cat > log
test1
test2
ahahahahah

I do:

$ cat log | awk '$1~/test1|test2/ {print }'
test1
test2

All OK. Now I need to get the bash variables into awk:

$ a=test1
$ b=test2

Then I try:

$ cat log | awk 'BEGIN{a;b} $1~/a|b/ {print }'
ahahahahah
$ cat log | awk -v a=test1 -v b=test2 '$1~/a|b/ {print }'
ahahahahah

And how do I build the whole pattern as a string and pass it to awk as an argument?

$ p='$1~/test1|test2/ {print }'
$ cat log | awk p
# get test1
# get test2
Your attempts fail because anything between /.../ is a regex literal: /a|b/ matches the letters "a" or "b" in the first field (which is why only ahahahahah matched), not the contents of your variables. Pass the values in with -v and build the regex dynamically; the right-hand side of ~ can be any string expression, so you can concatenate:

$ awk -v a="$a" -v b="$b" '$1 ~ a"|"b {print}' log
test1
test2

For the second part of the question, keeping the whole pattern-action text in a shell variable also works: just let the shell expand the variable where awk expects its program text:

$ p='$1~/test1|test2/ {print}'
$ awk "$p" log
test1
test2

One caveat: concatenating variables into a regex assumes the values contain no regex metacharacters. If you only ever need exact matches, $1 == a || $1 == b is safer.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/184060", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73552/" ] }
184,061
I am an infrequent user of UNIX; I'm sure this will be fairly trivial for any regular user, so I apologize for that. I have the following code:

for file in /home/sub1/samples/aaa*/*_novoalign.bam
do
    samtools depth -r chr9:218026635-21994999 *_novoalign.bam < $file > /home/sub2/sub3/${file}.out
done

I was hoping the output would be sent to a file in sub2/sub3/ with a name like the input folder's. It says 'no file or directory'. I would ideally like to send it there with the '_novoalign.bam' ending removed and a new ending, e.g. '_output.txt', added. Any tips?

P.S. I don't have permission to write to the directory in which the input file is found.
The error comes from the output path: inside the loop, $file expands to the full input path (e.g. /home/sub1/samples/aaa1/x_novoalign.bam), so /home/sub2/sub3/${file}.out names a directory tree that doesn't exist, hence 'no file or directory'. You also don't need both the literal *_novoalign.bam argument and the < $file redirection; just pass the file itself to samtools. Strip the directory and the suffix with basename:

for file in /home/sub1/samples/aaa*/*_novoalign.bam
do
    base=$(basename "$file" _novoalign.bam)
    samtools depth -r chr9:218026635-21994999 "$file" > "/home/sub2/sub3/${base}_output.txt"
done

basename "$file" _novoalign.bam removes both the leading directories and the trailing _novoalign.bam, leaving just the sample name to build the output file name from.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/184061", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102616/" ] }
184,110
I want to remove the vi text editor from Linux, but it does not show up as a package in aptitude. Is it possible to remove it? I have already removed vim by running:

sudo apt-get remove vim

I am using Linux Mint.
You can check where /usr/bin/vi leads:

update-alternatives --query vi

Usually it is a link to /usr/bin/vim.tiny. To find the owning package, you can try:

dpkg -S /usr/bin/vim.tiny

On my system I received:

vim-tiny: /usr/bin/vim.tiny

So there is an additional package, vim-tiny.
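Once you know the owning package, removing it works the same way as removing vim did (and takes /usr/bin/vi's target with it):

sudo apt-get remove vim-tiny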
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/184110", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/84962/" ] }
184,113
In Icedove > Preferences > Attachments, for 'JPEG image' one can select 'Image Viewer' or 'Use other ...'. It turns out that 'Image Viewer' is actually /usr/bin/eog on my system. I only know that because, after opening eog on the command line and clicking Help > About, I see "Image Viewer" ... "The GNOME image viewer". It gives you no clue as to what the actual binary is. So, when the program is opened via its 'Image Viewer' name in Icedove, how in heck would you figure out what the actual binary is? Is there some table somewhere, or some list of associations or something?

The above is just one example; this problem exists in all GUIs all the time. It's a sad example of Linux trying hard to be as stupid and unhelpful as Windows :-(
Those friendly names come from .desktop files. Every GUI application installs one under /usr/share/applications/ (or ~/.local/share/applications/ for per-user entries); the Name= field is what menus and chooser dialogs display ("Image Viewer"), and the Exec= field is the actual binary. So the association table you are looking for is that set of .desktop files, plus the mimeapps.list files that map MIME types to them.

To go from a MIME type to the binary:

$ xdg-mime query default image/jpeg
eog.desktop
$ grep '^Exec' /usr/share/applications/eog.desktop
Exec=eog %U

To go the other way, from a displayed name to its .desktop file:

$ grep -rl 'Name=Image Viewer' /usr/share/applications/
/usr/share/applications/eog.desktop
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/184113", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56145/" ] }
184,124
I have a bash script wherein I execute a line, sleep for some time, and then tail -f my log file to verify a certain pattern is seen. I press Ctrl+C to get out of tail -f and then move to the next line, until the bash script finishes execution. Here is what I have done thus far:

#!/bin/bash
# capture the hostname
host_name=`hostname -f`

# method that runs tail -f on log_file.log and looks for pattern and passes control to next line on 'ctrl+c'
echo "==================================================="
echo "On $host_name: running some command"
some command here
echo "On $host_name: sleeping for 5s"
sleep 5

# Look for: "pattern" in log_file.log
# trap 'continue' SIGINT
trap 'continue' SIGINT
echo "On $host_name: post update looking for pattern"
tail -f /var/log/hadoop/datanode.log | egrep -i -e "receiving.*src.*dest.*"

# some more sanity check
echo "On $host_name: checking uptime on process, tasktracker and hbase-regionserver processes...."
sudo supervisorctl status process

# in the end, enable the balancer
# echo balance_switch true | hbase shell

The script works, but I get the following error. What needs to change / what am I doing wrong?

./script.sh: line 1: continue: only meaningful in a `for', `while', or `until' loop
The continue keyword doesn't mean what you think it means. It means continue to the next iteration of a loop; it makes no sense outside of a loop. I think you're looking for:

trap ' ' INT

Since you don't want to do anything upon reception of the signal (beyond killing the foreground job), put no code in the trap. You need a non-empty string, because the empty string has a special meaning: it causes the signal to be ignored.
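In the context of the script in the question, that looks like:

trap ' ' INT
echo "On $host_name: post update looking for pattern"
tail -f /var/log/hadoop/datanode.log | egrep -i -e "receiving.*src.*dest.*"
trap - INT    # restore default Ctrl+C handling for the rest of the script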
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/184124", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102655/" ] }
184,134
I have a bunch of txt files; I'd like to output them lower-cased, only alphabetic, and one word per line. I can do it with several tr commands in a pipeline like this:

tr -d '[:punct:]' <doyle_sherlock_holmes.txt | tr '[:upper:]' '[:lower:]' | tr ' ' '\n'

Is it possible to do this in one scan? I could write a C program to do this, but I feel like there's a way to do it using tr, sed, awk or perl.
You can combine multiple translations (excepting complex cases involving overlapping locale-dependent sets), but you can't combine deletion with translation:

<doyle_sherlock_holmes.txt tr -d '[:punct:]' | tr '[:upper:] ' '[:lower:]\n'

Two calls to tr are likely to be faster than a single call to more complex tools, but this is very dependent on the input size, on the proportions of different characters, on the implementation of tr and competing tools, on the operating system, on the number of cores, etc.
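If a single pass with one process is the goal, awk can do all three steps at once (a sketch; unlike the tr pipeline, it leaves an empty line wherever two spaces were adjacent):

awk '{ gsub(/[[:punct:]]/, ""); gsub(/ /, "\n"); print tolower($0) }' doyle_sherlock_holmes.txt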
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/184134", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14306/" ] }
184,181
If I type:

cat > file.txt 2>&1

then file.txt is created with the content of cat's standard input. But if I do:

cat > file.txt 1>&2

then file.txt is created, but the file is empty. What is the difference between the above two commands?
Order is important. The shell processes file redirections in the order that it sees them. Consider:

cat >file.txt 2>&1

First, stdout is redirected to the file file.txt. Next, stderr is redirected to stdout, which is file.txt. Thus both stdout and stderr are sent to file.txt.

By contrast, consider:

cat >file.txt 1>&2

First, stdout is redirected to the file file.txt. (In doing this, file.txt is created as an empty file.) Next, stdout is redirected to stderr. Thus, stdout goes to stderr and nothing goes to file.txt and, therefore, file.txt remains an empty file.

Another Interesting Case

Consider:

cat 2>&1 >file.txt

First, stderr is redirected to stdout, which is still the terminal. Next, stdout is redirected to file.txt. This second redirection does not affect stderr: it is still sent to the terminal, not to file.txt.

Documentation

This behavior is documented in man bash:

Redirections are processed in the order they appear, from left to right.

An exception to this order, which is also documented in man bash, is pipelines. Consider:

command1 ... | command2

The standard output of command1 is sent by the pipe to the standard input of command2, and this connection is performed before any redirections specified by the first command.

An Example With Pipelines

Consider:

command1 >file.txt | command2

Because the redirection to file.txt occurs after the redirection to the pipeline, the stdout of command1 will go to file.txt. command2 will receive no input.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/184181", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102692/" ] }
184,209
Edit: There's a bug report for this, doing the equivalent of this answer.

I'm trying to script the copying of public keys to multiple machines. ssh-copy-id checks whether it can log in with the current configuration before copying, but unfortunately this includes any IdentityFile entries in ~/.ssh/config. I would like to completely ignore ~/.ssh/config; is there some way to do that, or to force ssh-copy-id to always add the key? This does not work:

ssh-add "$old_key"
ssh-copy-id -i "$new_key" -o "IdentityFile $new_key" "$login"

This is similar to, but distinct from, How can I make ssh ignore .ssh/config?
After checking the code of ssh-copy-id, it turns out this hack works:

SSH_OPTS='-F /dev/null' ssh-copy-id [...]

Would still be interested in a solution that only relies on documented features, though.
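The same trick can be used afterwards to verify that the freshly copied key works on its own, this time with plain ssh, where -F is documented:

ssh -F /dev/null -o IdentitiesOnly=yes -i "$new_key" "$login" true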
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/184209", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3645/" ] }
184,266
I am trying to write a bash script in a file that would, when run, start pinging a host until it becomes available; when the host becomes reachable, it runs a command and stops executing. I tried writing one, but the script continues pinging until the count ends. Plus, I need to put that process in the background, but if I run the script with the dollar ( $ ) sign it still runs in the foreground.

#!/bin/bash
ping -c30 -i3 192.168.137.163
if [ $? -eq 0 ]
then
    /root/scripts/test1.sh
    exit 0
else
    echo "fail"
fi
I would use this, a simple one-liner:

while ! ping -c1 HOSTNAME &>/dev/null; do echo "Ping Fail - `date`"; done ; echo "Host Found - `date`" ; /root/scripts/test1.sh

Replace HOSTNAME with the host you are trying to ping.

I missed the part about putting it in the background; put that line in a shell script like so:

#!/bin/bash
while ! ping -c1 $1 &>/dev/null
do
    echo "Ping Fail - `date`"
done
echo "Host Found - `date`"
/root/scripts/test1.sh

And to background it you would run it like so:

nohup ./networktest.sh HOSTNAME > /tmp/networktest.out 2>&1 &

Again, replace HOSTNAME with the host you are trying to ping. In this approach you are passing the hostname as an argument to the shell script.

Just as a general warning: if your host stays down, you will have this script continuously pinging in the background until you either kill it or the host is found. So I would keep that in mind when you run this, because you could end up eating system resources if you forget about it.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/184266", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89748/" ] }
184,338
Whenever I try to start an NFS mount I get:

Feb 12 00:02:19 martin-xps.lico.nl rpc.statd[23582]: Version 1.3.2 starting
Feb 12 00:02:19 martin-xps.lico.nl rpc.statd[23582]: Flags: TI-RPC
Feb 12 00:02:19 martin-xps.lico.nl rpc.statd[23582]: Running as root. chown /var/lib/nfs to choose different user
Feb 12 00:02:19 martin-xps.lico.nl rpc.statd[23582]: failed to create RPC listeners, exiting
Feb 12 00:02:19 martin-xps.lico.nl systemd[1]: rpc-statd.service: control process exited, code=exited status=1
Feb 12 00:02:19 martin-xps.lico.nl systemd[1]: Failed to start NFS status monitor for NFSv2/3 locking..
Feb 12 00:02:19 martin-xps.lico.nl systemd[1]: Unit rpc-statd.service entered failed state.
Feb 12 00:02:19 martin-xps.lico.nl systemd[1]: rpc-statd.service failed.
Feb 12 00:02:19 martin-xps.lico.nl rpc.statd[23584]: Version 1.3.2 starting
Feb 12 00:02:19 martin-xps.lico.nl rpc.statd[23584]: Flags: TI-RPC
Feb 12 00:02:19 martin-xps.lico.nl rpc.statd[23584]: Running as root. chown /var/lib/nfs to choose different user
Feb 12 00:02:19 martin-xps.lico.nl rpc.statd[23584]: failed to create RPC listeners, exiting

I tried to chown /var/lib/nfs to rpc, which just gives me the error minus the "Running as root" line:

Feb 12 00:05:09 martin-xps.lico.nl rpc.statd[23773]: Version 1.3.2 starting
Feb 12 00:05:09 martin-xps.lico.nl rpc.statd[23773]: Flags: TI-RPC
Feb 12 00:05:09 martin-xps.lico.nl rpc.statd[23773]: failed to create RPC listeners, exiting
Feb 12 00:05:09 martin-xps.lico.nl systemd[1]: rpc-statd.service: control process exited, code=exited status=1
Feb 12 00:05:09 martin-xps.lico.nl systemd[1]: Failed to start NFS status monitor for NFSv2/3 locking..
Feb 12 00:05:09 martin-xps.lico.nl systemd[1]: Unit rpc-statd.service entered failed state.
Feb 12 00:05:09 martin-xps.lico.nl systemd[1]: rpc-statd.service failed.
Feb 12 00:05:09 martin-xps.lico.nl rpc.statd[23775]: Version 1.3.2 starting
Feb 12 00:05:09 martin-xps.lico.nl rpc.statd[23775]: Flags: TI-RPC
Feb 12 00:05:09 martin-xps.lico.nl rpc.statd[23775]: failed to create RPC listeners, exiting

I have tried to reinstall nfs-utils:

$ pacman -R nfs-utils
$ rm -r /var/lib/nfs
$ pacman -S nfs-utils

It then re-creates the directory with the permissions of the root user. I'm not even sure if this error is even related to rpc.statd not starting. I also tried to run rpc.statd -F --no-notify in my shell, but that just exits with code 1. No error, no nothing. There's no verbose or debug flag documented in the manpage. I also tried to empty my /etc/exports, and my system is up to date (pacman -Syu). I didn't change anything; it just stopped working a few hours ago.

Note that using mount -o nolock /data works, so the rest of the NFS/RPC daemons seem to be fine.
Same problem here: rpc-statd failed since the last update (all my computers had the problem after the update). To solve the problem I just enabled and started rpcbind:

sudo systemctl enable rpcbind.service   # for the next reboot
sudo systemctl start rpcbind.service
sudo systemctl restart rpcbind.service
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/184338", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33645/" ] }
184,345
Here's the problem: I want to be able to discern if my terminal is capable of decent unicode or not, in order to use some characters or not, much like glances does, that sometimes uses colors and others underline. The motivation arises because in any kind of virtual terminal I get decent fonts, but I understand that the basic Linux console has a character set of 256 or 512 simultaneous symbols, so you cannot expect full font support. At first I thought that I could use $TERM or tty, but here's the catch: I'm using byobu too, so $TERM is always "screen.linux". The output of tty is also not very telling: /dev/pts/<some number> in both "real" and virtual terms. $BYOBU_TTY is no help either, because e.g. it may be /dev/tty1 and when the session is opened in Ctrl + Alt + F1 the characters don't show but when attaching to the same session from some X term, they show properly and still $BYOBU_TTY does not change. Besides, I'd like to be able to detect this without presuming byobu is there or not. Also, locale shows in all cases en_US.UTF-8 Yet somehow glances (to name a particular tool I see detecting this), even inside byobu, uses different output depending on the terminal I'm attaching to the byobu session. I'm having trouble with google because terminal and tty seem too common search terms. At most I arrive at solutions recommending $TERM or tty.
Well, first I guess I would point out that pretty much all terminals these days are "virtual" in the sense you talk about... even if the terminal is at the other end of a bona fide serial port. I mean, the days of VT-100s, Wyse terminals and other "physical", "real" terminals are pretty much gone!

That aside, let's say you want to detect what kind of Unicode support your terminal has. You can do this by writing test characters to it and seeing what happens. (You can make an effort to erase the test characters after you've written them, but the user may still see them briefly, or erasing them might not work properly in the first place.) The idea is to ask the terminal to tell you its cursor position, output a test character, ask the terminal again for its position, and compare the two positions to see how far the terminal's cursor moved.

To ask the terminal for its position, see here. Essentially:

echo -e "\033[6n"; read -d R foo; echo -en "\nCurrent position: "; echo $foo | cut -d \[ -f 2

Try outputting "é". This character takes 2 bytes in UTF-8 but displays in only one column on the screen. If you detect that outputting "é" causes the cursor to move by 2 positions, then the terminal has no UTF-8 support at all and has probably output some kind of garbage. If the cursor didn't move at all, then the terminal is probably ASCII-only. If it moved by 1 position, then congratulations, it can probably display French words.

Try outputting "あ". This character takes 3 bytes in UTF-8 but displays in two columns on the screen. If the cursor moves by 0 or 3, bad news, similar to above. If it moves by 1, then it looks like the terminal supports UTF-8 but doesn't know about wide characters (in fixed-width fonts). If it moves by 2 columns, all is good.

I'm sure there are other probe characters that you could emit which would lead to useful information. I am not aware of a tool that does this automatically.
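A rough automation of that probing idea, as a sketch (it assumes an ANSI/VT100-compatible terminal, bash, and that stdin is the terminal):

#!/bin/bash
probe() {
  local char=$1 reply1 reply2 col1 col2
  stty -echo                       # don't echo the terminal's reply
  printf '\r'                      # start from column 1
  printf '\033[6n'                 # ask for the cursor position...
  IFS=';' read -rsd R -a reply1    # ...reply looks like ESC[row;colR
  printf '%s' "$char"              # emit the test character
  printf '\033[6n'
  IFS=';' read -rsd R -a reply2
  stty echo
  printf '\r\033[K'                # erase the probe from the screen
  col1=${reply1[1]} col2=${reply2[1]}
  printf 'test char advanced the cursor by %d column(s)\n' "$((col2 - col1))"
}
probe 'é'    # expect 1 on a UTF-8 terminal, 2 on a non-UTF-8 one, 0 on ASCII-only
probe 'あ'   # expect 2 if wide characters are handled correctly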
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/184345", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88523/" ] }
184,359
I just installed OpenVPN on a remote CentOS 7 server using the instructions in this tutorial. The only change I made to the tutorial was to use @GarethTheRed's instructions for configuring firewalld instead of iptables, as described in Step 4 of the tutorial. The problem is that the tutorial ends with a line of client code that causes the terminal to fail to return a command prompt (see below). How can I successfully connect via OpenVPN to my remote CentOS 7 server from my local CentOS 7 devbox?

Here are the connection steps I have tried so far: At the end of Step 6 of the tutorial, I successfully used yum install openvpn on my devbox before typing sudo openvpn --config /path/to/client.ovpn. The problem is that sudo openvpn --config /path/to/client.ovpn results in the terminal locking up after printing Initialization Sequence Completed. The complete output is:

[root@localhost ~]# openvpn --config /etc/openvpn/client.ovpn
Wed Feb 11 16:46:06 2015 OpenVPN 2.3.6 x86_64-redhat-linux-gnu [SSL (OpenSSL)] [LZO] [EPOLL] [PKCS11] [MH] [IPv6] built on Dec 2 2014
Wed Feb 11 16:46:06 2015 library versions: OpenSSL 1.0.1e-fips 11 Feb 2013, LZO 2.06
Wed Feb 11 16:46:06 2015 WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info.
Wed Feb 11 16:46:06 2015 Socket Buffers: R=[212992->131072] S=[212992->131072]
Wed Feb 11 16:46:06 2015 UDPv4 link local: [undef]
Wed Feb 11 16:46:06 2015 UDPv4 link remote: [AF_INET]192.96.215.22:1194
Wed Feb 11 16:46:06 2015 TLS: Initial packet from [AF_INET]192.96.215.22:1194, sid=1f320288 ab1f20d3
Wed Feb 11 16:46:07 2015 VERIFY OK: depth=1, C=US, ST=CA, L=SomeTown, O=Fort-Funston, OU=MyOrganizationalUnit, CN=serverdomain.com, name=server, [email protected]
Wed Feb 11 16:46:07 2015 VERIFY OK: depth=0, C=US, ST=CA, L=SomeTown, O=Fort-Funston, OU=MyOrganizationalUnit, CN=server, name=server, [email protected]
Wed Feb 11 16:46:08 2015 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
Wed Feb 11 16:46:08 2015 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
Wed Feb 11 16:46:08 2015 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
Wed Feb 11 16:46:08 2015 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
Wed Feb 11 16:46:08 2015 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 2048 bit RSA
Wed Feb 11 16:46:08 2015 [server] Peer Connection Initiated with [AF_INET]192.96.215.22:1194
Wed Feb 11 16:46:10 2015 SENT CONTROL [server]: 'PUSH_REQUEST' (status=1)
Wed Feb 11 16:46:10 2015 PUSH: Received control message: 'PUSH_REPLY,redirect-gateway def1 bypass-dhcp,dhcp-option DNS 8.8.8.8,dhcp-option DNS 8.8.4.4,route 10.8.0.1,topology net30,ping 10,ping-restart 120,ifconfig 10.8.0.6 10.8.0.5'
Wed Feb 11 16:46:10 2015 OPTIONS IMPORT: timers and/or timeouts modified
Wed Feb 11 16:46:10 2015 OPTIONS IMPORT: --ifconfig/up options modified
Wed Feb 11 16:46:10 2015 OPTIONS IMPORT: route options modified
Wed Feb 11 16:46:10 2015 OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
Wed Feb 11 16:46:10 2015 ROUTE_GATEWAY 10.0.0.1/255.255.255.0 IFACE=p4p1 HWADDR=14:fe:b5:aa:57:60
Wed Feb 11 16:46:10 2015 TUN/TAP device tun0 opened
Wed Feb 11 16:46:10 2015 TUN/TAP TX queue length set to 100
Wed Feb 11 16:46:10 2015 do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0
Wed Feb 11 16:46:10 2015 /usr/sbin/ip link set dev tun0 up mtu 1500
Wed Feb 11 16:46:10 2015 /usr/sbin/ip addr add dev tun0 local 10.8.0.6 peer 10.8.0.5
Wed Feb 11 16:46:10 2015 /usr/sbin/ip route add 192.96.215.22/32 via 10.0.0.1
Wed Feb 11 16:46:10 2015 /usr/sbin/ip route add 0.0.0.0/1 via 10.8.0.5
Wed Feb 11 16:46:10 2015 /usr/sbin/ip route add 128.0.0.0/1 via 10.8.0.5
Wed Feb 11 16:46:10 2015 /usr/sbin/ip route add 10.8.0.1/32 via 10.8.0.5
Wed Feb 11 16:46:10 2015 Initialization Sequence Completed

At the end of this output, there is just a cursor, but no command prompt. Typing at the cursor or hitting return has no effect besides printing what you type on the terminal screen. I read this other posting which describes a similar error and states that the problem is in the DNS configuration, but I followed the tutorial's DNS config instructions exactly. The server also handles requests for mydomain.com served up by httpd. The domain registrar has been pointing requests for mydomain.com to the IP of the server since long before adding OpenVPN. Would this cause some kind of conflict? How can I get the connection to complete?
Try starting the client with the --daemon option:

openvpn --daemon

From openvpn's man page:

--daemon [progname]
    Become a daemon after all initialization functions are completed

To interact with openvpn once it is a daemon, add the --management option to the command. This allows you to interact with it using telnet as described here. Alternatively, open another terminal and just use that; this way, you can exit the running openvpn by pressing Ctrl+C in the original terminal.

If the client is a desktop system that uses Network Manager, then use the OpenVPN plugin to control it from there; no terminal needed.
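Putting both suggestions together, a daemonised client with a local management socket might look like this (the port number here is arbitrary):

openvpn --config /etc/openvpn/client.ovpn --daemon --management 127.0.0.1 7505

# then, from any terminal:
telnet 127.0.0.1 7505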
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/184359", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92670/" ] }
184,367
I have an Epson multifunction device connected to a Raspberry Pi running the latest Raspbian. As you can see in the output below, scanimage will only find my scanner if I sudo it, but sane-find-scanner finds it just fine without sudo. I've checked that the device permissions are properly set: saned is a member of the lp group, which is the group of the USB device. What gives?

richard@raspberrypi ~ $ scanimage > image.pnm
scanimage: no SANE devices found
richard@raspberrypi ~ $ sane-find-scanner
...
found USB scanner (vendor=0x04b8, product=0x0839) at libusb:001:004
found USB scanner (vendor=0x0424, product=0xec00) at libusb:001:003
...
richard@raspberrypi ~ $ sudo scanimage > image.pnm
richard@raspberrypi ~ $ sudo su -s /bin/bash - saned
X11 connection rejected because of wrong authentication.
No directory, logging in with HOME=/
saned@raspberrypi:/$ lsusb
Bus 001 Device 002: ID 0424:9512 Standard Microsystems Corp.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp.
Bus 001 Device 004: ID 04b8:0839 Seiko Epson Corp. CX8300/CX8400/DX8400
saned@raspberrypi:/$ ls -l /dev/bus/usb/001
total 0
crw-rw-r-T 1 root root 189, 0 Feb 12 02:23 001
crw-rw-r-T 1 root root 189, 1 Jan  1  1970 002
crw-rw-r-T 1 root root 189, 2 Jan  1  1970 003
crw-rw-r--+ 1 root lp   189, 3 Feb 12 02:24 004
saned@raspberrypi:/$ groups
saned lp scanner
Solved my problem, thanks to the tutorial found here: http://www.johndstech.com/2016/linux/raspberry-pi/geek-friday-setting-up-epson-scanning-on-raspberry-pi/

I had to create /etc/udev/rules.d/55-libsane.rules saying:

SYSFS{idVendor}=="04b8", MODE="0666", GROUP="scanner", ENV{libsane_matched}="yes"

and edit /etc/sane.d/epson2.conf to reflect the vendor and product ID:

usb <0x04b8> <0x0839>
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/184367", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102818/" ] }
184,379
I am trying to copy files from one server directly to another, bypassing my local computer. I did:

scp -r [email protected]:~/data/* [email protected]:~/data/
Password:
Host key verification failed.
lost connection

Is this even possible? How may I fix it?
Something I use fairly often when there is no connection possible between the two servers:

scp -3 user@server1:/path/to/file user@server2:/path/to/file

From the man page:

-3   Copies between two remote hosts are transferred through the local host. Without this option the data is copied directly between the two remote hosts. Note that this option disables the progress meter.

Assuming you have a good connection to both, it's not too slow.
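Applied to the transfer in the question, that becomes:

scp -3 -r [email protected]:~/data/* [email protected]:~/data/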
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/184379", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/71888/" ] }
184,393
I have this code snippet:

$CUSER=tim
$APPDIR=/var/www/testing
$APPVENV=/var/www/testing/ven

cat > $APPDIR/cronfile << EOF
PWD=$APPDIR/$CUSER
PATH=$APPVENV/bin:\$PATH
0 2 * * * testapp search newsite
EOF

crontab $APPDIR/cronfile

It seems to work, but I'm really confused about how I would try to run this manually. What does this expand to if I wanted to run it as a command from the shell? I tried something like this but it didn't work :(

cd /var/www/testing/ven
testapp search newsite
The crontab your script installs sets two environment variables and then runs testapp search newsite. The PATH line prepends the virtualenv's bin directory, which is how cron finds testapp; the PWD line merely sets a variable named PWD, it does not change the working directory (cron starts jobs in the user's home directory). So the manual equivalent is simply to call the binary out of the virtualenv:

/var/www/testing/ven/bin/testapp search newsite

or, equivalently, put the venv's bin directory on your PATH first:

PATH=/var/www/testing/ven/bin:$PATH testapp search newsite

Your attempt failed because cd /var/www/testing/ven only changes directory; the current directory is not on PATH, so testapp is still not found. If testapp cares about its working directory, add cd /var/www/testing/tim && in front. (As an aside, shell assignments should be written CUSER=tim, not $CUSER=tim; the $ is only for reading a variable.)
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/184393", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61825/" ] }
184,412
I wanted to forward emails which come to my institute mail to my Gmail, while keeping the originals in the institute account too. I tried editing my ~/.procmailrc file like below:

# Forward everything to me at gmail
:0:
! [email protected]

This was working fine, except the original mail is just bouncing to the Gmail account, not getting stored in the institute mail account. What should I do about this?
Use :0c for forwarding a copy of a message:

:0c
! [email protected]

The c flag documentation:

c   Generate a carbon copy of this mail. This only makes sense on delivering recipes. The only non-delivering recipe this flag has an effect on is a nesting block; in order to generate a carbon copy this will clone the running procmail process (lockfiles will not be inherited), whereby the clone will proceed as usual and the parent will jump across the block.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/184412", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102841/" ] }
184,413
I have a few directories inside a folder, like below:

teckapp@machineA:/opt/keeper$ ls -ltrh
total 8.0K
drwxr-xr-x 10 teckapp cloudmgr 4.0K Feb  9 10:22 keeper-3.4.6
drwxr-xr-x  3 teckapp cloudmgr 4.0K Feb 12 01:44 data

I have some other folders on other machines for which I need to change the permissions to the above, drwxr-xr-x. How can I change any folder's permissions to drwxr-xr-x? I know I need to use the chmod command for this, but what value should I use with chmod?
To apply those permissions to a directory:

chmod 755 directory_name

To apply to all directories inside the current directory:

chmod 755 */

If you want to modify all directories and subdirectories, you'll need to combine find with chmod:

find . -type d -exec chmod 755 {} +
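A common companion, if the tree also contains files that should get matching non-executable permissions (755 for directories, 644 for files):

find . -type d -exec chmod 755 {} +
find . -type f -exec chmod 644 {} +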
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/184413", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102842/" ] }
184,445
Just now, dpkg --help spit out three pages of output in my face. I was maybe interested in the first ten lines, which show the general usage and most common arguments. I'd like that, whenever I run a program (any program) with --help as the only argument, if the output is longer than $(tput lines), it would automatically get piped through less. Is it easily doable in bash?

Edit: In the end, the best solution for me was to switch to zsh. Either one of the following snippets in your ~/.zshrc will do the job; each one has its own tradeoffs:

# Modify the input line before it runs
function lessify() {
    if [[ "$BUFFER" =~ " --help$" ]] ; then
        BUFFER="$BUFFER | less -FX"
    fi
    zle accept-line
}
zle -N lessify_widget lessify
# Bind to the Enter key
bindkey '^M' lessify_widget

or

# Alias --help ; ignore rest of the line
alias -g -- --help="--help | less -FX ; true "

Also, in researching this question, I've probably wasted more time than this will ever save me. Don't regret it one bit.
In bash, you can do this with the debug features, although it's a pretty fragile solution and very dependent on your environment.

Enable extended debugging (see the manual for details):

shopt -s extdebug

Create a helprun function:

helprun() {
  if [ "$#" -eq 2 ] && [ "$2" = '--help' ]; then
    "$@" | less -F
    return 1
  fi
}

Then trap all commands with it:

trap 'helprun $BASH_COMMAND' DEBUG

This will run helprun <command> for every command, and if it is a --help command, pipe it through less, returning 1 so that the command isn't executed (thanks to extdebug). If it isn't, it just runs as normal. There are probably edge cases I haven't handled here...
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/184445", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79271/" ] }
184,447
This doubt arose from another question of mine, where giving localhost in my path works, but giving my system IP does not. 127.0.0.1 is mapped to localhost in my /etc/hosts. Do I need to map my IP to localhost? Would that change anything? Aren't they the same?
Some services are configured to only listen on the localhost IP address. An example would be a MySQL database - you want your PHP application running on the same server to connect to it, but don't want any external services or even hackers from the outside to connect. By configuring MySQL to only accept localhost addresses ( 127.0.0.1 for example) and not your server's real IP address ( 10.x.x.x for example) you are reducing the chance of being compromised. So, to answer your question - yes, they are different. localhost is given an ip address in the 127.0.0.0 network and given to a virtual loopback network device lo . This device is present on all systems, regardless of whether they have a physical network device fitted (WiFi or Ethernet, for example). A system that is not connected to any network will have this loopback device and hence a 127.0.0.0 address. The name localhost is simply a name that resolves to this IP address and is configured in /etc/hosts . Your real IP address (10.x.x.x for example) is allocated to a network device. This is usually a physical network device (WiFi or Ethernet) although advanced setups using tun or tap devices can use them too. Again, the name resolution (for example www.example.org to 10.0.1.1 ) can be configured in /etc/hosts or can be set up to use DNS.
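You can see the difference directly by listing listening sockets; the output below is illustrative (columns trimmed):

$ ss -tln
State    Recv-Q   Send-Q   Local Address:Port
LISTEN   0        80       127.0.0.1:3306     # MySQL: reachable from this machine only
LISTEN   0        128      0.0.0.0:22         # sshd: reachable on every address, incl. 10.x.x.x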
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/184447", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/95691/" ] }
184,486
I must admit that I like servers without passwords in some cases. A typical server is vulnerable to anyone who has physical access to it. So in some cases it is practical to lock it physically and from then on trust any physical access.

Basic concepts

In theory, when I physically reach such a server, I should be able to perform administration tasks without a password by simply typing root as the login, and I shouldn't be asked for a password. The same may apply to user accounts, but one would not really access them physically. Therefore no system passwords are needed for (occasional) local access.

When accessing the server remotely, either for administration or for a user account, I expect to always use an SSH private key. It is very easy to set up an SSH key for a just-created account, and thus no system passwords are needed for (regular) remote access.

# user=...
#
# useradd -m "$user"
# sudo -i -u "$user"
$ keyurl=...
$
$ mkdir -p .ssh
$ curl -o .ssh/authorized_keys "$keyurl"

The conclusion is that, in theory, we wouldn't need any system passwords for use cases like that. So the question is, how do we configure the system and user accounts to make it happen in a consistent and secure way?

Local access details

How do we ensure the root account can be accessed locally without a password? I don't think we can use passwd -d, as that would make root access too permissive and an unprivileged user could switch to root for free, which is wrong. We cannot use passwd -l as it prevents us from logging in. Note that local access is exclusively about access using the local keyboard. Therefore a valid solution must not allow any user switching (whether using su or sudo).

Remote access details

Until recently the above solution would work, but now SSH has started to check for locked user accounts. We cannot use passwd -d for the same reasons. We cannot use passwd -u as it just complains that it would lead to what passwd -d does. There's a workaround with a dummy password for this part:

user=...
echo -ne "$user:`pwgen 16`\n" | chpasswd

It might also be possible to turn off locked-account checking in SSH entirely, but it would be nicer to retain the support of locked accounts and just be able to unlock them.

Final notes

What I'm interested in is a solution that allows you to log in to the root account locally, and to all accounts including root remotely, without any passwords. On the other hand, a solution must not impact security except in explicitly described ways, especially not by allowing remote users to get access to the root account or other users' accounts. The solution should be sufficiently robust that it doesn't cause security issues indirectly. An accepted and awarded answer may or may not describe detailed configuration of individual tools, but must contain the key points to reach the stated goals. Note that this probably cannot be solved through conventional use of tools like passwd, ssh, su, sudo and the like.

More ideas after reading the first answers

Just an idea: the local root access could be achieved by starting root shells instead of login processes. But there's still the need to lock only password authentication, not public-key authentication.
Requirements for which I will offer solutions, as bullet points:

- Passwordless root console login
- Passwordless root remote login from pre-authorised users
- Passwordless remote login for specified accounts from pre-authorised users
- Passwordless remote login for any account from pre-authorised users

The following examples are based on Debian, since that's what I've got here for testing. However, I see no reason why the principles cannot be applied to any distribution (or indeed any PAM-based *ix derivative).

Passwordless root console login

I think the way I would approach this would be to leverage PAM and the /etc/securetty configuration file. As a pre-requisite, a "sufficiently secure" root password must be set. This is not required for console login but exists to make brute-force cracking attempts unrealistic. The account is otherwise a perfectly normal root account.

In /etc/pam.d/login I have the following standard set of lines for authentication (those beginning with the keyword auth):

auth       optional   pam_faildelay.so  delay=3000000
auth       [success=ok new_authtok_reqd=ok ignore=ignore user_unknown=bad default=die]  pam_securetty.so
auth       requisite  pam_nologin.so
@include common-auth
auth       optional   pam_group.so

The referenced common-auth include file contains the following relevant lines:

auth    [success=1 default=ignore]  pam_unix.so nullok_secure
auth    requisite   pam_deny.so
auth    required    pam_permit.so
auth    optional    pam_cap.so

The common-auth file instructs PAM to skip one rule (the deny) if a "UNIX login" succeeds. Typically this means a match in /etc/shadow.

The auth ... pam_securetty.so line is configured to prevent root logins except on tty devices specified in /etc/securetty. (This file already includes all the console devices.) By modifying this auth line slightly it is possible to define a rule that permits a root login without a password from a tty device specified in /etc/securetty. The success=ok parameter needs to be amended so that the ok is replaced by the number of auth lines to be skipped in the event of a successful match. In the situation shown here, that number is 3, which jumps down to the auth ... pam_permit.so line:

auth       [success=3 new_authtok_reqd=ok ignore=ignore user_unknown=bad default=die]  pam_securetty.so

Passwordless root remote login from pre-authorised users

This is a straightforward inclusion of ssh keys for those authorised users being added to root's authorized_keys file.

Passwordless remote login for specified accounts from pre-authorised users

This is also a straightforward inclusion of ssh keys for authorised users being added to the appropriate and corresponding user's .ssh/authorized_keys file. (The typical "remote user chris wants passwordless login to local user chris" scenario.)

Note that accounts can remain in the default locked state after creation (i.e. with just ! in the password field of /etc/shadow) but permit SSH key-based login. This requires root to place the key in the new user's .ssh/authorized_keys file. What is not so obvious is that this approach is only available when UsePAM Yes is set in /etc/ssh/sshd_config. PAM differentiates ! as "account locked for password but other access methods may be permitted" from !... as "account locked. Period." (If UsePAM No is set, then OpenSSH considers any presence of ! starting the password field to represent a locked account.)

Passwordless remote login for any account from pre-authorised users

It wasn't entirely clear to me whether you wanted this facility or not.
Namely, certain authorised users would be able to ssh login without a password to any and every local account. I cannot test this scenario, but I believe this can be achieved with OpenSSH 5.9 or newer, which permits multiple authorized_keys files to be defined in /etc/ssh/sshd_config. Edit the configuration file to include a second file called /etc/ssh/authorized_keys. Add your selected authorised users' public keys to this file, ensuring permissions are such that it is owned by root and writable only by root (0644).
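For concreteness, the sshd_config change would look something like this — a minimal sketch assuming OpenSSH 5.9 or newer, where the second path is the shared, root-owned file described above:

    # /etc/ssh/sshd_config (sketch, OpenSSH 5.9+)
    # Per-user keys are still honoured; the second, root-owned file holds
    # the pre-authorised users' public keys valid for every account.
    AuthorizedKeysFile .ssh/authorized_keys /etc/ssh/authorized_keys

Reload sshd afterwards so the new setting takes effect.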
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/184486", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/60296/" ] }
184,493
Am I correct to assume that when ; joins two commands on a line, Bash always waits until the first command has exited before executing the second command? And similarly, in a shell script containing two different commands on different lines, Bash always waits until the command on the first line has exited before executing the command on the second line? If this is the case, is there a way to execute two commands in one line or in a script, so that the second command doesn't wait until the first command has finished? Also, are commands on different lines in a shell script equivalent to commands on one line joined by ; or by && ?
You're correct: commands in scripts are executed sequentially by default. You can run a command in the background by suffixing it with & (a single ampersand). Commands on separate lines are equivalent to commands joined with ; by default. If you tell your shell to abort on non-zero exit codes ( set -e ), then the script will execute as though all the commands were joined with && .
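A minimal sketch of the difference, with placeholder command names:

    command1 &    # the & detaches command1; the script does not wait for it
    command2      # starts immediately, possibly while command1 still runs
    wait          # optional: block here until all background jobs finish

Without the &, the shell would block on command1 until it exits.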
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/184493", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85900/" ] }
184,502
Currently I have a script.sh file with the following content:

    #!/bin/bash
    wget -q http://exemple.com/page1.php;
    wget -q http://exemple.com/page2.php;
    wget -q http://exemple.com/page3.php;

I want to execute the commands one by one, each starting when the previous one finishes. Am I doing it in the right way? I've never worked with Linux before and tried to search for it, but found no solutions.
Yes, you're doing it the right way. Shell scripts will run each command sequentially, waiting for the first to finish before the next one starts. You can either join commands with ; or have them on separate lines:

    command1; command2

or

    command1
    command2

There is no need for ; if the commands are on separate lines. You can also choose to run the second command only if the first exited successfully. To do so, join them with && :

    command1 && command2

or

    command1 &&
    command2

For more information on the various control operators available to you, see here.
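Applied to the script from the question (same example URLs), the && form stops the chain at the first failed download:

    #!/bin/bash
    # each wget runs only if the previous one exited with status 0
    wget -q http://exemple.com/page1.php &&
    wget -q http://exemple.com/page2.php &&
    wget -q http://exemple.com/page3.php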
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/184502", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102903/" ] }
184,508
I am trying to build a script in which nvm and eventually node will get installed. I have installed nvm with cURL. I see the modifications in the .profile or .bashrc file (both work), and when typing nvm at the bash prompt it shows the available options, etc. So nvm works. Manually I can install node, but as soon as I put the nvm command in a shell script:

    nano test.sh

    #!/bin/bash
    nvm

and run it with:

    chmod 755 test.sh
    ./test.sh

I get:

    ./test.sh: line 2: nvm: command not found

If it can't find nvm, I don't even have to think of nvm ls-remote or nvm install ... I have Ubuntu 14.04 installed and Bash is my shell.
The nvm command is a shell function declared in ~/.nvm/nvm.sh. You may source any of the following scripts at the start of yours to make nvm() available:

    . ~/.nvm/nvm.sh
    . ~/.profile
    . ~/.bashrc
    . $(brew --prefix nvm)/nvm.sh   # if installed via Brew
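For example, a script along these lines should work — a sketch that assumes the default install location $HOME/.nvm; the node version is just an illustration:

    #!/bin/bash
    # load the nvm shell function into this non-interactive shell
    export NVM_DIR="$HOME/.nvm"
    [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"

    nvm install 0.12    # any version you need
    nvm use 0.12
    node --version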
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/184508", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102902/" ] }
184,519
NOTE: This question is the complement of this Q&A: How to "grep" for line length *not* in a given range?

I need to get only the lines from a text file (a wordlist, newline-separated) whose length is at least 3 characters but no more than 10.

Example:

INPUT:

    egyezményét
    megkíván
    ki
    alma
    kevesen
    meghatározó

OUTPUT:

    megkíván
    alma
    kevesen

Question: How can I do this in bash?
    grep -x '.\{3,10\}'

where

- -x (also --line-regexp with GNU grep) matches the pattern against the whole line
- . matches any single character
- \{3,10\} repeats the previous atom from 3 to 10 times (here: any character)
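For comparison, the same filter without grep — awk tests the line length directly (file is a placeholder name):

    awk 'length($0) >= 3 && length($0) <= 10' file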
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/184519", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
184,604
Can someone please elaborate on the difference between the various RX packets fields in ifconfig output? For example, let's say I run ifconfig and see the following:

    eth0      Link encap:Ethernet  HWaddr AA:BB:CC:DD:EE:FF
              inet addr:1.1.1.1  Bcast:1.1.1.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:202723544 errors:0 dropped:4959 overruns:0 frame:37
              TX packets:158354057 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:4261083782 (3.9 GiB)  TX bytes:1224803677 (1.1 GiB)
              Interrupt:83 Memory:f6bf0000-f6c00000

What is the difference between errors:, dropped:, overruns: and frame:?

My guess at this point (based on some vague googling) is that frame: specifically pertains to CRC failures when the NIC analyzes incoming frames, and that errors: is a broader generic category. Then again, if that were the case, I would expect both of those fields to show numbers.
That information is poorly documented. I will tell you what I understand from my experience.

frame counts only misaligned frames — that is, frames whose length is not divisible by 8. Such a length makes the frame invalid, and it is simply discarded. errors, meanwhile, counts CRC errors, too-short frames and too-long frames.

overruns counts the times there was a FIFO overrun, caused by the buffer filling faster than the kernel is able to empty it.

Finally, dropped counts things like unintended VLAN tags or receiving IPv6 frames when the interface is not configured for IPv6.
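As a side note, the iproute2 tool reports the same counters under slightly clearer headings (RX: bytes packets errors dropped overrun mcast), which can help cross-check ifconfig's numbers:

    $ ip -s link show eth0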
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/184604", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1822/" ] }
184,631
I am trying to create a while loop in bash that won't continue until the user enters the word "next". I can't seem to figure out how to use strings for the condition though.

    #Beginning of code
    echo -e "Please type next to continue."
    read word

    while [ "$word" -ne "next" ]
    do
        read word
    done
    #the rest of the code
Use != instead of -ne :

    echo -e "Please type next to continue."
    read word

    while [ "$word" != "next" ]
    do
        read word
    done

Check this for comparison operators: http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-11.html
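A small variation with the same behaviour: an until loop states the stop condition directly, and read -p replaces the separate echo:

    until [ "$word" = "next" ]
    do
        read -p "Please type next to continue: " word
    done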
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/184631", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93163/" ] }
184,651
I'm using scp to copy files from a remote server to a local one. What's really uncomfortable is that I need to type the file path precisely. I'm used to relying on autocompletion, because file names and folder structures can be long. I want to be able to see the names of files in each directory and autocomplete just like when browsing files locally. Now, I could do SSH separately, find the file names, and use them in the SCP command. But of course, that would be a huge waste of effort. Also, I could use a GUI, but I prefer to avoid that because a command line is more lightweight. Any way to use SCP without having to remember file names exactly?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/184651", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11086/" ] }
184,658
I have a file compressed as *.txz. After unpacking it I received a *.tar file. Is there any way to unpack it twice with one command? I mean, unpack a (*.tar).txz file with one command? For now I do it like this:

    xz -d file.txz
    tar xvf file.tar

But I wonder if there is a nicer way.
    xz -d < file.tar.xz | tar xvf -

That's the same as with any compressed archive. You should never have to create an uncompressed copy of the original file.

Some tar implementations, like recent versions of GNU tar, have built-in options to call xz by themselves. With GNU tar or bsdtar:

    tar Jxvf file.tar.xz

Though, if you've got a version that has -J, chances are it will detect xz files automatically, so:

    tar xvf file.tar.xz

will suffice. If your GNU or BSD tar is too old to support xz specifically, you may be able to use the --use-compress-program option:

    tar --use-compress-program=xz -xvf file.tar.xz

One of the advantages of having tar invoke the compressor utility is that it is able to report the compressor's failure in its exit status.

Note: if the tar.xz archive has been created with pixz, pixz may have added a tar index to it, which allows extracting files individually without having to uncompress the whole archive:

    pixz -x path/to/file/in/archive < file.tar.xz | tar xvf -
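As a side note, all of the forms above can also extract a single member instead of the whole tree — for example with GNU tar (the inner path is a placeholder):

    tar xvJf file.tar.xz path/inside/archive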
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/184658", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102994/" ] }
184,659
Can anyone tell me what is necessary to get elementaryOS installed in a VirtualBox on a Windows 7 host? I came across elementaryOS a few days back and I have been trying to install it in a VirtualBox. However, each time I have downloaded it and attempted to select the elementary ISO in the storage settings of my VirtualBox, my downloaded elementary ISO CD file is not visible and hence not selectable. The downloaded file is named in the following format: elementaryos-stable-amd64.20130810
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/184659", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16154/" ] }
184,698
I'm currently using thunderbird with gnupg to read encrypted emails.If I understand the swapping behavior correctly, the memory pages containing the decrypted emails might be swapped out and leave traces on the hard disk which may in theory later be recovered forensically. While it is certainly possible to just use an encrypted swapfile or disable swapping globally for the duration of using sensitive files, it impacts performance, might be forgotten and requires root privileges. Is it possible to mark certain files or programs as not to be swapped? Even without root access? Could one write an application which can be distributed to technically naive users and whose memory contents are never swapped to disk?
In the comments, I suggested you create a cgroup, set memory.swappiness to zero (to minimize swapping) and run your application inside of that. If you did that, your application probably wouldn't swap unless you were running so incredibly low on physical memory that swapping pages for programs in that cgroup was the only way to make enough physical memory available.

To do this on RHEL 6.5:

1. Ensure the libcgroup package is installed. This gives you access to userspace tools like cgcreate and cgexec.
2. Start and enable the cgconfig service so that changes to cgroup configuration are persistent between reboots. On RHEL this service should also mount the required filesystems underneath the /cgroup tree.
3. Create the cgroup with cgcreate -g memory:thunderbird
4. Set swappiness to zero in this group with cgset -r memory.swappiness=0 thunderbird
5. Use cgsnapshot -s > /etc/cgconfig.conf to save an updated persistent configuration for the cgconfig service (all changes up until now have been runtime changes). You'll probably want to save the default config file somewhere and give it a once-over before making it the persistent configuration.

You can now use cgexec to start the desired applications within the thunderbird cgroup:

    [root@xxx601 ~]# cgexec -g memory:thunderbird ls
    anaconda-ks.cfg  a.out  foreman.log  index.html  install.log  install.log.syslog
    node.pp  sleep  sleep.c  ssl-build  stack  test
    [root@xxx601 ~]#

I don't have thunderbird actually installed, otherwise I would have done that.

One alternative to cgexec would be to start thunderbird and add the PID to the tasks file for the application. For example:

    [root@xxx601 ~]# cat /cgroup/memory/thunderbird/tasks
    [root@xxx601 ~]# pidof httpd
    25926 10227 10226 10225 10163 10162 10161 10160 10159 10157 10156 10155 10152 10109
    [root@xxx601 ~]# echo 25926 > /cgroup/memory/thunderbird/tasks
    [root@xxx601 ~]# cat /cgroup/memory/thunderbird/tasks
    25926

Again, it bears mentioning that this doesn't technically prevent swapping but, short of modifying the application itself, it's probably your best bet. I've just now found memory.memsw.limit_in_bytes, which seems like it might be a more direct control for forcing there to be no swapping, but I haven't played around with it enough to really feel comfortable saying that it fixes your problem completely. That said, it might be something to look into after this.

The real answer would be to have the application mlock sensitive information to get around this sort of concern. I'm willing to bet an application like Thunderbird does do that, but I don't know enough about the internals to comment on it.
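For reference, a hand-written persistent entry in /etc/cgconfig.conf would look roughly like this — a sketch of the RHEL 6 libcgroup syntax; cgsnapshot generates an equivalent stanza:

    # /etc/cgconfig.conf (sketch)
    group thunderbird {
        memory {
            memory.swappiness = 0;
        }
    }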
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/184698", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/103010/" ] }
184,722
I'm developing on a CentOS6 server with Apache and PHP. When I make a change to a PHP file (and save) it appears that apache is not reading the changed file - it's still processing my old .php file. After 5-10 minutes it will start to use the new file. Can someone tell me how to force Apache to immediately pickup the changed .php files? UPDATE: I moved the files onto the apache server and the problem remains (this is not an NFS issue). So it seems that Apache is just not reading in the changed files for several minutes Confused...
Maybe I had the same problem as you, and it was because of the OPcache configuration in php.ini. So I either set the revalidate frequency to 0:

    opcache.revalidate_freq=0

or disabled OPcache entirely:

    opcache.enable=0

Remember to restart the Apache server afterwards.
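Spelled out as a php.ini fragment (a sketch — note that validate_timestamps must remain enabled for revalidate_freq to have any effect):

    ; development-friendly OPcache settings (sketch)
    opcache.enable=1
    opcache.validate_timestamps=1
    opcache.revalidate_freq=0

On CentOS 6 that would be followed by service httpd restart.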
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/184722", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23091/" ] }
184,726
I need to include the below python script inside a bash script. If the bash script ends successfully, I need to execute the script below:

    #!/usr/bin/python

    from smtplib import SMTP
    import datetime

    debuglevel = 0

    smtp = SMTP()
    smtp.set_debuglevel(debuglevel)
    smtp.connect('192.168.75.1', 25)
    smtp.login('my_mail', 'mail_passwd')

    from_addr = "My Name <[email protected]>"
    to_addr = "<[email protected]>"
    subj = "Process completed"
    date = datetime.datetime.now().strftime( "%d/%m/%Y %H:%M" )

    #print (date)

    message_text = "Hai..\n\nThe process completed."
    msg = "From: %s\nTo: %s\nSubject: %s\nDate: %s\n\n%s" % ( from_addr, to_addr, subj, date, message_text )

    smtp.sendmail(from_addr, to_addr, msg)
    smtp.quit()
You can use a heredoc if you want to keep the source of both the bash and python scripts together. For example, say the following are the contents of a file called pyinbash.sh:

    #!/bin/bash

    echo "Executing a bash statement"
    export bashvar=100

    cat << EOF > pyscript.py
    #!/usr/bin/python
    import subprocess
    print 'Hello python'
    subprocess.call(["echo","$bashvar"])
    EOF

    chmod 755 pyscript.py
    ./pyscript.py

Now running pyinbash.sh will yield:

    $ chmod 755 pyinbash.sh
    $ ./pyinbash.sh
    Executing a bash statement
    Hello python
    100
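One detail worth noting: the unquoted EOF above is exactly what lets bash expand $bashvar inside the heredoc. If the embedded Python needs literal $ characters instead, quote the delimiter — a small sketch:

    cat << 'EOF' > pyscript.py
    # with 'EOF' quoted, bash performs no expansion inside the heredoc
    print '$bashvar stays literal here'
    EOF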
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/184726", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102478/" ] }