source_id | question | response | metadata
---|---|---|---
146,283 | I want to install libpq-dev on my Vagrant machine. I install it with

$ apt-get install -y libpq-dev

During installation a prompt appears which asks if it's allowed to restart some services automatically. This prompt breaks my Vagrant provision. How can I disable this prompt? Text:

There are services installed on your system which need to be restarted when certain libraries, such as libpam, libc, and libssl, are upgraded. Since these restarts may cause interruptions of service for the system, you will normally be prompted on each upgrade for the list of services you wish to restart. You can choose this option to avoid being prompted; instead, all necessary restarts will be done for you automatically so you can avoid being asked questions on each library upgrade.

**EDIT** Thanks to Patrick's answer and this question I fixed it. Now my Vagrantfile contains:

sudo DEBIAN_FRONTEND=noninteractive apt-get install -y libpq-dev

| Set the environment variable DEBIAN_FRONTEND=noninteractive. For example:

export DEBIAN_FRONTEND=noninteractive
apt-get install -y libpq-dev

This will make apt-get select the default options. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/146283",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26992/"
]
} |
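The per-command form used in the Vagrantfile relies on a general shell mechanism: a `VAR=value command` prefix exports the variable to that one command only, so the rest of the provisioning script is unaffected. A minimal sketch of that mechanism — the variable name and commands below are stand-ins for the demo, not apt-specific:

```shell
# A VAR=value prefix scopes the variable to the single command that follows;
# the surrounding shell never sees it. This is why
#   sudo DEBIAN_FRONTEND=noninteractive apt-get install -y libpq-dev
# silences only that one apt-get run.
FRONTEND=noninteractive sh -c 'echo "child sees: $FRONTEND"'   # -> child sees: noninteractive
echo "parent sees: '$FRONTEND'"                                # -> parent sees: ''
```

By contrast, `export DEBIAN_FRONTEND=noninteractive` at the top of a script affects every later command in that script.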
146,296 | I know that in ~/.bashrc one must not put spaces around = signs in assignment:

$ tail -n2 ~/.bashrc
alias a="echo 'You hit a!'"
alias b = "echo 'You hit b!'"
$ a
You hit a!
$ b
b: command not found

I'm reviewing the MySQL config file /etc/my.cnf and I've found this:

tmpdir=/mnt/ramdisk
key_buffer_size = 1024M
innodb_buffer_pool_size = 512M
query_cache_size=16M

How might I verify that the spaces around the = signs are not a problem? Note that this question is not specific to the /etc/my.cnf file, but rather to *NIX config files in general. My first inclination is to RTFM but in fact man mysql makes no mention of the issue and if I need to go hunting online for each case, I'll never get anywhere. Is there any convention or easy way to check? As can be seen, multiple people have edited this file (different conventions for = signs) and I can neither force them all to use no spaces, nor can I go crazy checking everything that may have been configured and may or may not be correct. EDIT: My intention is to ensure that currently-configured files are done properly. When configuring files myself, I go with the convention of whatever the package maintainer put in there. | Bash will interpret a line that has text followed by a = as an assignment to a variable, but it will interpret a line that has text followed by a space as a command with an argument:

var=assignment

vs

command =argument

Bash scripts work on the principle that everything in the script is as if you have typed it into the command line. In configuration files that aren't interpreted by bash (or another shell), it will be determined by the parser that is used to read the configuration file. Some parsers will take spaces, some won't. It's up to the application in that case. Personally, I go with whatever convention the default configuration file has used. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146296",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9760/"
]
} |
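The distinction the answer draws can be seen directly in a shell session; `var2` below is a made-up name that is not a real command:

```shell
# '=' with no surrounding spaces is an assignment:
var=value
echo "$var"                      # -> value

# With a space before '=', the shell parses the line as a command named
# 'var2' with the arguments '=' and 'value', which fails:
var2 = value 2>/dev/null \
  || echo "var2 is treated as a command, not an assignment"
```

Config-file parsers like MySQL's have no such rule; whether they tolerate spaces around `=` is entirely up to each application's parser, as the answer says.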
146,299 | I feel like a kid in the principal's office explaining that the dog ate my homework the night before it was due, but I'm staring some crazy data loss bug in the face and I can't figure out how it happened. I would like to know how git could eat my repository whole! I've put git through the wringer many times and it's never blinked. I've used it to split a 20 Gig Subversion repo into 27 git repos and filter-branched the foo out of them to untangle the mess and it's never lost a byte on me. The reflog is always there to fall back on. This time the carpet is gone! From my perspective, all I did is run git pull and it nuked my entire local repository. I don't mean it "messed up the checked out version" or "the branch I was on" or anything like that. I mean the entire thing is gone. Here is a screen-shot of my terminal at the incident. Let me walk you through that. My command prompt includes data about the current git repo (using prezto's vcs_info implementation) so you can see when the git repo disappeared. The first command is normal enough:

» caleb » jaguar » ~/p/w/incil.info » ◼ zend ★ »
❯❯❯ git co master
Switched to branch 'master'
Your branch is up-to-date with 'origin/master'.

There you can see I was on the 'zend' branch, and checked out master. So far so good. You'll see in the prompt before my next command that it successfully switched branches:

» caleb » jaguar » ~/p/w/incil.info » ◼ master ★ »
❯❯❯ git pull
remote: Counting objects: 37, done.
remote: Compressing objects: 100% (37/37), done.
remote: Total 37 (delta 25), reused 0 (delta 0)
Unpacking objects: 100% (37/37), done.
From gitlab.alerque.com:ipk/incil.info
 + 7412a21...eca4d26 master        -> origin/master  (forced update)
   f03fa5d..c8ea00b  devel         -> origin/devel
 + 2af282c...009b8ec verse-spinner -> origin/verse-spinner  (forced update)
First, rewinding head to replay your work on top of it...
>>> elapsed time 11s

And just like that it's gone. The elapsed time marker outputs before the next prompt if more than 10 seconds have elapsed. Git did not give any output beyond the notice that it was rewinding to replay. No indication that it finished. The next prompt includes no data about what branch we are on or the state of git. Not noticing it had failed, I obliviously tried to run another git command only to be told I wasn't in a git repo. Note the PWD has not changed:

» caleb » jaguar » ~/p/w/incil.info »
❯❯❯ git fetch --all
fatal: Not a git repository (or any parent up to mount point /home)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).

After this a look around showed that I was in a completely empty directory. Nothing. No '.git' directory, nothing. Empty. My local git is at version 2.0.2. Here are a couple tidbits from my git config that might be relevant to making out what happened:

[branch]
    autosetuprebase = always
    rebase = preserve
[pull]
    rebase = true
[rebase]
    autosquash = true
    autostash = true
[alias]
    co = checkout

For example I have git pull set to always do a rebase instead of a merge, so that part of the output above is normal. I can recover the data. I don't think there were any git objects other than some unimportant stashes that hadn't been pushed to other repos, but I'd like to know what happened. I have checked for:

- Messages in dmesg or the systemd journal. Nothing even remotely relevant.
- There is no indication of drive or file system failure (LVM + LUKS + EXT4 all look normal). There is nothing in lost+found.
- I didn't run anything else. There is nothing in the history I'm not showing above, and no other terminals were used during this time. There are no rm commands floating around that might have executed in the wrong CWD, etc.
- Poking at another git repo in another directory shows no apparent abnormality executing git pulls.

What else should I be looking for here? | Yes, git ate my homework. All of it. I made a dd image of this disk after the incident and messed around with it later. Reconstructing the series of events from system logs, I deduce what happened was something like this:

- A system update command (pacman -Syu) had been issued days before this incident. An extended network outage meant that it was left re-trying to download packages.
- Frustrated at the lack of internet, I'd put the system to sleep and gone to bed.
- Days later the system was woken up and it started finding and downloading packages again.
- Package download finished sometime just before I happened to be messing around with this repository.
- The system glibc installation got updated after the git checkout and before the git pull.
- The git binary got replaced after the git pull started and before it finished.

And on the seventh day, git rested from all its labors. And deleted the world so everybody else had to rest too. I don't know exactly what race condition occurred that made this happen, but swapping out binaries in the middle of an operation is certainly not nice nor a testable / repeatable condition. Usually a copy of a running binary is stored in memory, but git is weird and something about the way it re-spawns versions of itself I'm sure led to this mess. Obviously it should have died rather than destroying everything, but that's what happened. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146299",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1925/"
]
} |
146,303 | In bash, autocompletion of paths has recently stopped working when issuing vim commands where the path is deeper than two directories (it continues to work as expected with other commands, such as ls and cd). For example, if I type ls .config/btsync/bt and then press TAB, it expands to ls .config/btsync/btsync.conf. If I type vim .config/bt and then press TAB, it expands to vim .config/btsync/. However, if I type vim .config/btsync/bt and then press TAB, nothing happens (I would expect it to expand to vim .config/btsync/btsync.conf, as in the ls example, above). I get the same issue when running as my own user and when running as su. I read this post which mentioned an issue with older versions of bash-completion but I'm running 2.1-5. UPDATE: After some additional testing, I've found that the root issue is that bash will only complete directory names, not filenames. UPDATE: It turns out that bash-completion was the overall cause. See my second comment on the accepted answer. Any suggestions as to the potential cause of this behaviour would be gratefully received! | I did some more research for you and here is what I found - the key to autocompletion is the bash command complete. You can print the rules for vim using:

complete -p vim

Likewise you can remove these specific rules with:

complete -r vim

This command resets it to defaults - the usual paths and file names completion without any extra logic. That's probably what you want. For more info check out help complete or man bash and look for the section about the complete command (it's a bash built-in, hence it's documented in the bash manpage). One last point - the above changes only affect the current bash session; if you want to remove the vim rules after every login, put complete -r vim in your ~/.bashrc. Hope that helps :) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/146303",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78540/"
]
} |
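A quick way to see `complete -p` and `complete -r` in action without touching any real vim rules is to register and remove a rule for a made-up command name (`mycmd` below is purely illustrative):

```shell
bash -c '
  complete -W "alpha beta" mycmd    # register a word-list completion rule
  complete -p mycmd                 # print the rule back
  complete -r mycmd                 # remove it again
  complete -p mycmd 2>/dev/null || echo "no completion rule left for mycmd"
'
```

Running `complete -r vim` in an interactive session works the same way: the vim-specific rule disappears and bash falls back to its default path-and-filename completion.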
146,307 | I'd like to know the easiest way of creating a bash script that will run an app in MacOSX (it's a UNIX based system so I assumed this was the right place), wait for the program to exit, and if the exit wasn't a crash, run it again. Otherwise leave the crash report window open and exit. I'd like this script to run forever until either the crash or manual termination. I've a vague idea but not sure. Here is what I have so far:

echo "Launching Autorun…"
if [ "$1" -eq "0" ]; then
    # Application exited successfully. Restarting.
    /path/to/application.app
else
    # An error occurred. Do nothing and exit.
    exit
fi

| This is what Bash while loops do:

while /path/to/application.app
do :
done

It will run the application, and if it terminates successfully run the body of the loop. : is the shell's no-op command (the loop has to have a body, so that's what we put there); after that it goes back to the top and runs the program again. If it fails, the loop stops running and the script exits. However, it looks like what you're running might be an ordinary Mac application (.app): there are a couple of issues that come up in that case. One is that you need to use the open command, rather than running the application directory directly: open /path/to/application.app. The other is that when you do, open will usually terminate immediately with a success, regardless of what the application goes on to do: that isn't absolutely universal, but most will. If yours does, you can use the -W option to force open to block until the application ends: open -W /path/to/application.app. Note that if the application was already running, this will wait until the existing execution terminates too. How much of an issue any of that is depends on what application you're running. If it doesn't play nicely, doing this from the shell may not be the best option. In that case you're probably better off going with AppleScript, which you can ask about on Ask Different. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146307",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78543/"
]
} |
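The restart loop can be tried out with a stand-in for the application; `run_app` below fakes a program that succeeds twice and then fails, so the loop's behaviour is visible without a real .app:

```shell
n=0
run_app() {               # stand-in for: open -W /path/to/application.app
  n=$((n + 1))
  echo "run $n"
  [ "$n" -lt 3 ]          # exit status 0 on runs 1-2, non-zero on run 3
}

while run_app
do :
done
echo "stopped after a failing exit on run $n"
```

Each successful exit sends control back to the loop condition for another run; the first non-zero exit status ends the loop, which is exactly the restart-until-crash behaviour the question asks for.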
146,342 | By way of example, I've a big text file with many email addresses; using bash I need to search/verify that an email exists (or not). Should I use (only) the "anchors"?

grep '^[email protected]' text_file

Or are there better ways? I need to create a bash script and I'd like it to be safe. | See the -F (fixed string, as opposed to regular expression) and -x (exact: match the whole line) options.

grep -Fx [email protected] text_file

would be the equivalent of:

grep '^user1@example\.com$' text_file

(remember that . is a regular expression operator that matches any character). Use the -q option if you only want to check if there's such a line:

grep -Fxq [email protected] text_file && echo yes, that address is in that file.

If the line to search and the file name are variable:

grep -Fxqe "$email" < "$file"

Or

grep -Fxq -- "$email" < "$file"

You don't want:

grep -Fxq "$email" "$file"

as that would cause problems if $email or $file started with - . If the file is sorted (in your current locale, preferably C), you can possibly speed things up by using comm instead of grep:

printf '%s\n' [email protected] | comm -12 - text_file

The advantage will become more obvious when you have several email addresses to check (for instance in another sorted file):

comm -12 text_file emails_to_check

would be faster than:

grep -Fxf emails_to_check text_file

| {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/146342",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40628/"
]
} |
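A small self-contained demonstration of what -F and -x buy you; the file path and addresses here are made up for the demo:

```shell
# Build a demo file of one address per line.
printf '%s\n' user1@example.com user10@example.com > /tmp/addresses.txt

# -F: literal string, -x: whole line, -q: quiet (exit status only).
grep -Fxq user1@example.com /tmp/addresses.txt && echo "exact match found"

# Without -x, 'user1@example' would still match user1@example.com as a
# substring; with -x the pattern must equal a complete line, so no line
# matches and the check correctly fails:
grep -Fxq user1@example /tmp/addresses.txt || echo "partial string rejected"
```

This is why the combination is safer than an unanchored regular-expression search in a verification script: neither substrings nor regex metacharacters like `.` can produce false positives.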
146,349 | Is there a way to move a rule in iptables and change its position? I'm aware I can use -I to insert a rule at a specific position, but I'd also like to keep the counters. | No, you cannot move a rule. However, you can set the counter for any rule you add/insert/replace (using the -c or --set-counters parameter). So you could check the current count, delete the rule and reinsert it with the old count value. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/146349",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64411/"
]
} |
146,359 | I have a master file with more than 50000 lines and I need to remove around 20000 lines from the actual file; for this I used the grep -vE command, which is showing an error as too many arguments. I'm using the sed command to remove those using a for loop. But it is taking so much time to remove, as 20000 sed commands are going to initialize.

for i in `cat 20000-words.txt`; do
    sed -i -e "/$i/ d" 50000-lines.txt
done

| Assuming that 20000-words.txt is already in the format of one word per line, do:

grep -vFf 20000-words.txt 50000-lines.txt >50000-filtered-lines.txt

The -f argument to grep tells it to read patterns from a file, one pattern per line, instead of taking them as command line arguments. The -F argument to grep tells it that the patterns should be used as literal strings rather than regular expressions. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146359",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78571/"
]
} |
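A miniature version of the same filtering job, with made-up file contents, shows the behaviour and one caveat worth knowing:

```shell
# Demo data: three lines to filter, one word to remove.
printf '%s\n' apple banana cherry > /tmp/lines.txt
printf '%s\n' banana              > /tmp/words.txt

# Remove every line containing any word from the list, in one grep run
# instead of one sed invocation per word:
grep -vFf /tmp/words.txt /tmp/lines.txt
```

Note that -f patterns still match as substrings: a list entry like `an` would also remove the `banana` line. Add -x (whole-line) or -w (whole-word) if only complete matches should count.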
146,362 | I've just used this guide to create a SSH tunnel to bypass my corporate proxy. I didn't use the usual "Putty", but in "MobaXterm" (a program similar to putty that also offers unix tools) I wrote, to create the tunnel:

ssh -D 7070 -p 22 [email protected] sleep 9999

Then I went to my browser (Firefox) and I told it to use SOCKS on the localhost port 7070. All is good and it's working, but I have no clue how the underlying technology actually works. I mean, on a general level I know how the process works, but I would like to go a little deeper than that. What I mean is, currently we can compare my knowledge to the one I have of cars: I know there is an engine, 4 wheels, etc, but I have no clue how the engine actually uses gasoline to move the car. The 2 answers I received so far only explain to me what I already know. I have a CCNP, work in a datacenter as network engineer, know how to program, know fair enough of the Linux environment. What I would like to have from you is a thorough (but less than a 20-pages-long-RFC) answer. The answer should include:

- A description of the main options tunnel-wise. A good example is the image in this answer: Understanding SSH tunnels
- Which type of tunnel is used when and for what (normal tunnels, reverse tunnels)
- A wireshark capture of the before/after the tunnel
- An image explaining the process (I am a visual guy after all)

I promise that I will give 100 points as soon as I can put a bounty on the question. I really need a good answer on this. (In case I forget about the bounty, please email me: ilcon7e[at]gmail[dot]com) EDIT: Improved the question. | Assuming that 20000-words.txt is already in the format of one word per line, do:

grep -vFf 20000-words.txt 50000-lines.txt >50000-filtered-lines.txt

The -f argument to grep tells it to read patterns from a file, one pattern per line, instead of taking them as command line arguments.
The -F argument to grep tells it that the patterns should be used as literal strings rather than regular expressions. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146362",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44361/"
]
} |
146,372 | I used the following command to create a list of URLs that I want to test:

echo -e localhost:8080/reports/{promos,promo-updates,scandown}/{130,139,142}{,-unburdened,-burdened}{,.pdf,.xls,.xlsx,.csv,.preload}"\n" >> urls.txt

Unfortunately the URLs appended to urls.txt each had a space before them (except for the first, of course). I understand why that happened, and I realize I could just strip it off by piping through a sed expression, but I'd like to know if there's a way to suppress it instead. (It may seem silly and pedantic, but it's no different than the preference so many people feel for not "abusing cats".) I tried double-quoting to suppress word-splitting, but that suppressed the brace expansion too, so that was a no-go. I tried changing IFS to an empty string, but it didn't work either:

IFS='' echo -e localhost:8080/reports/{promos,promo-updates,scandown}/{130,139,142}{,-unburdened,-burdened}{,.pdf,.xls,.xlsx,.csv,.preload}"\n" >> urls.txt

Nor did changing it to a newline:

IFS='\n' echo -e localhost:8080/reports/{promos,promo-updates,scandown}/{130,139,142}{,-unburdened,-burdened}{,.pdf,.xls,.xlsx,.csv,.preload}"\n" >> urls.txt

| Use the printf builtin:

$ printf %s\\n localhost:8080/reports/{promos,promo-updates,scandown}/{130,139,142}{,-unburdened,-burdened}{,.pdf,.xls,.xlsx,.csv,.preload}
localhost:8080/reports/promos/130
localhost:8080/reports/promos/130.pdf
localhost:8080/reports/promos/130.xls
localhost:8080/reports/promos/130.xlsx
localhost:8080/reports/promos/130.csv
localhost:8080/reports/promos/130.preload
localhost:8080/reports/promos/130-unburdened
localhost:8080/reports/promos/130-unburdened.pdf
localhost:8080/reports/promos/130-unburdened.xls
localhost:8080/reports/promos/130-unburdened.xlsx
localhost:8080/reports/promos/130-unburdened.csv
localhost:8080/reports/promos/130-unburdened.preload
....

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146372",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3358/"
]
} |
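The key property is that printf reapplies its format string to every remaining argument, so each brace-expanded word gets its own %s plus newline, with no joining space; echo, by contrast, joins its arguments with spaces into one line. A scaled-down, runnable sketch (the URL fragments are shortened stand-ins, and the command runs through bash because brace expansion is not in plain sh):

```shell
# printf repeats '%s\n' once per argument:
bash -c 'printf "%s\n" localhost:8080/reports/{promos,scandown}/{130,139}'
# -> localhost:8080/reports/promos/130
#    localhost:8080/reports/promos/139
#    localhost:8080/reports/scandown/130
#    localhost:8080/reports/scandown/139
```

Appending to a file then needs no sed cleanup: `printf '%s\n' ... >> urls.txt`.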
146,378 | I am having a long list of IP addresses, which are not in sequence. I need to find how many IP addresses are there before/after a particular IP address. How can I achieve this? | Number of lines before and after a match, including the match (i.e. you need to subtract 1 from the result if you want to exclude the match):

sed -n '0,/pattern/p' file | wc -l
sed -n '/pattern/,$p' file | wc -l

But this has nothing to do with IP addresses in particular. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146378",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48188/"
]
} |
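With a small demo file the two sed ranges are easy to check by hand; the file name and addresses are invented for the example, and note that the `0,/pattern/` address form is a GNU sed extension:

```shell
# Four addresses; the match is on line 3.
printf '%s\n' 10.0.0.1 10.0.0.2 10.0.0.3 10.0.0.4 > /tmp/ips.txt

before=$(sed -n '0,/10\.0\.0\.3/p' /tmp/ips.txt | wc -l)   # lines 1-3 -> 3
after=$(sed -n '/10\.0\.0\.3/,$p' /tmp/ips.txt | wc -l)    # lines 3-4 -> 2

# Subtract 1 from each to exclude the matching line itself:
echo "$((before - 1)) before, $((after - 1)) after"        # -> 2 before, 1 after
```

The dots in the address are escaped because sed's `/.../` addresses are regular expressions, where an unescaped `.` matches any character.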
146,402 | I am trying to upgrade apache 2.2.15 to 2.2.27. While running config.nice taken from apache2.2.15/build I am getting following error: checking whether the C compiler works... noconfigure: error: in `/home/vkuser/httpd-2.2.27/srclib/apr':configure: error: C compiler cannot create executables I have tried to search online but no luck. I have also tested out c compiler by running a small test.c script and it runs fine. There were few solution given online like installing 'kernel-devel' package but it did not resolve issue. How can I get this to work? Following is the config.log generated: This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. It was created by configure, which was generated by GNU Autoconf 2.67. Invocation command line was $ ./configure --prefix=/opt/myapp/apache2.2 --with-mpm=worker --enable-static-support --enable-ssl=static --enable-modules=most --disable-authndbd --disable-authn-dbm --disable-dbd --enable-static-logresolve --enable-static-rotatelogs --enable-proxy=static --enable-proxyconnect=static --enable-proxy-ftp=static --enable-proxy-http=static --enable-rewrite=static --enable-so=static --with-ssl=/opt/myapp/apache2.2/openssl --host=x86_32-unknown-linux-gnu host_alias=x86_32-unknown-linux-gnu CFLAGS=-m32 LDFLAGS=-m32 --with-included-apr ## --------- ## ## Platform. 
## ## --------- ## hostname = dmcpq-000 uname -m = x86_64 uname -r = 2.6.18-348.12.1.el5 uname -s = Linux uname -v = #1 SMP Mon Jul 1 17:54:12 EDT 2013 /usr/bin/uname -p = unknown /bin/uname -X = unknown /bin/arch = x86_64 /usr/bin/arch -k = unknown /usr/convex/getsysinfo = unknown /usr/bin/hostinfo = unknown /bin/machine = unknown /usr/bin/oslevel = unknown /bin/universe = unknown PATH: /opt/myapp/Entrust/GetAccess/Runtime/Apache22/bin PATH: /usr/kerberos/sbin PATH: /usr/kerberos/bin PATH: /usr/local/sbin PATH: /usr/local/bin PATH: /sbin PATH: /bin PATH: /usr/sbin PATH: /usr/bin PATH: /root/bin ## ----------- ## ## Core tests. ## ## ----------- ## configure:2793: checking for chosen layout configure:2795: result: Apache configure:3598: checking for working mkdir -p configure:3614: result: yes configure:3629: checking build system type configure:3643: result: x86_64-unknown-linux-gnu configure:3663: checking host system type configure:3676: result: x86_32-unknown-linux-gnu configure:3696: checking target system type configure:3709: result: x86_32-unknown-linux-gnu ## ---------------- ## ## Cache variables. ## ## ---------------- ## ac_cv_build=x86_64-unknown-linux-gnu ac_cv_env_CC_set= ac_cv_env_CC_value= ac_cv_env_CFLAGS_set=set ac_cv_env_CFLAGS_value=-m32 ac_cv_env_CPPFLAGS_set= ac_cv_env_CPPFLAGS_value= ac_cv_env_CPP_set= ac_cv_env_CPP_value= ac_cv_env_LDFLAGS_set=set ac_cv_env_LDFLAGS_value=-m32 ac_cv_env_LIBS_set= ac_cv_env_LIBS_value= ac_cv_env_build_alias_set= ac_cv_env_build_alias_value= ac_cv_env_host_alias_set=set ac_cv_env_host_alias_value=x86_32-unknown-linux-gnu ac_cv_env_target_alias_set= ac_cv_env_target_alias_value= ac_cv_host=x86_32-unknown-linux-gnu ac_cv_mkdir_p=yes ac_cv_target=x86_32-unknown-linux-gnu ## ----------------- ## ## Output variables. 
## ## ----------------- ## APACHECTL_ULIMIT='' APR_BINDIR='' APR_CONFIG='' APR_INCLUDEDIR='' APR_VERSION='' APU_BINDIR='' APU_CONFIG='' APU_INCLUDEDIR='' APU_VERSION='' AP_BUILD_SRCLIB_DIRS='' AP_CLEAN_SRCLIB_DIRS='' AP_LIBS='' AWK='' BUILTIN_LIBS='' CC='' CFLAGS='-m32' CORE_IMPLIB='' CORE_IMPLIB_FILE='' CPP='' CPPFLAGS='' CRYPT_LIBS='' CXX='' CXXFLAGS='' DEFS='' DSO_MODULES='' ECHO_C='' ECHO_N='-n' ECHO_T='' EGREP='' EXEEXT='' EXTRA_CFLAGS='' EXTRA_CPPFLAGS='' EXTRA_CXXFLAGS='' EXTRA_INCLUDES='' EXTRA_LDFLAGS='' EXTRA_LIBS='' GREP='' HTTPD_LDFLAGS='' HTTPD_VERSION='' INCLUDES='' INSTALL='' INSTALL_DSO='' INSTALL_PROG_FLAGS='' LDFLAGS='-m32' LIBOBJS='' LIBS='' LIBTOOL='' LN_S='' LTCFLAGS='' LTFLAGS='' LTLIBOBJS='' LT_LDFLAGS='' LYNX_PATH='' MKDEP='' MKINSTALLDIRS='' MK_IMPLIB='' MODULE_CLEANDIRS='' MODULE_DIRS='' MOD_ACTIONS_LDADD='' MOD_ALIAS_LDADD='' MOD_ASIS_LDADD='' MOD_AUTHNZ_LDAP_LDADD='' MOD_AUTHN_ALIAS_LDADD='' MOD_AUTHN_ANON_LDADD='' MOD_AUTHN_DBD_LDADD='' MOD_AUTHN_DBM_LDADD='' MOD_AUTHN_DEFAULT_LDADD='' MOD_AUTHN_FILE_LDADD='' MOD_AUTHZ_DBM_LDADD='' MOD_AUTHZ_DEFAULT_LDADD='' MOD_AUTHZ_GROUPFILE_LDADD='' MOD_AUTHZ_HOST_LDADD='' MOD_AUTHZ_OWNER_LDADD='' MOD_AUTHZ_USER_LDADD='' MOD_AUTH_BASIC_LDADD='' MOD_AUTH_DIGEST_LDADD='' MOD_AUTOINDEX_LDADD='' MOD_BUCKETEER_LDADD='' MOD_CACHE_LDADD='' MOD_CASE_FILTER_IN_LDADD='' MOD_CASE_FILTER_LDADD='' MOD_CERN_META_LDADD='' MOD_CGID_LDADD='' MOD_CGI_LDADD='' MOD_CHARSET_LITE_LDADD='' MOD_DAV_FS_LDADD='' MOD_DAV_LDADD='' MOD_DAV_LOCK_LDADD='' MOD_DBD_LDADD='' MOD_DEFLATE_LDADD='' MOD_DIR_LDADD='' MOD_DISK_CACHE_LDADD='' MOD_DUMPIO_LDADD='' MOD_ECHO_LDADD='' MOD_ENV_LDADD='' MOD_EXAMPLE_LDADD='' MOD_EXPIRES_LDADD='' MOD_EXT_FILTER_LDADD='' MOD_FILE_CACHE_LDADD='' MOD_FILTER_LDADD='' MOD_HEADERS_LDADD='' MOD_HTTP_LDADD='' MOD_IDENT_LDADD='' MOD_IMAGEMAP_LDADD='' MOD_INCLUDE_LDADD='' MOD_INFO_LDADD='' MOD_ISAPI_LDADD='' MOD_LDAP_LDADD='' MOD_LOGIO_LDADD='' MOD_LOG_CONFIG_LDADD='' MOD_LOG_FORENSIC_LDADD='' 
MOD_MEM_CACHE_LDADD='' MOD_MIME_LDADD='' MOD_MIME_MAGIC_LDADD='' MOD_NEGOTIATION_LDADD='' MOD_OPTIONAL_FN_EXPORT_LDADD='' MOD_OPTIONAL_FN_IMPORT_LDADD='' MOD_OPTIONAL_HOOK_EXPORT_LDADD='' MOD_OPTIONAL_HOOK_IMPORT_LDADD='' MOD_PROXY_AJP_LDADD='' MOD_PROXY_BALANCER_LDADD='' MOD_PROXY_CONNECT_LDADD='' MOD_PROXY_FTP_LDADD='' MOD_PROXY_HTTP_LDADD='' MOD_PROXY_LDADD='' MOD_PROXY_SCGI_LDADD='' MOD_REQTIMEOUT_LDADD='' MOD_REWRITE_LDADD='' MOD_SETENVIF_LDADD='' MOD_SO_LDADD='' MOD_SPELING_LDADD='' MOD_SSL_LDADD='' MOD_STATUS_LDADD='' MOD_SUBSTITUTE_LDADD='' MOD_SUEXEC_LDADD='' MOD_UNIQUE_ID_LDADD='' MOD_USERDIR_LDADD='' MOD_USERTRACK_LDADD='' MOD_VERSION_LDADD='' MOD_VHOST_ALIAS_LDADD='' MPM_LIB='' MPM_NAME='' MPM_SUBDIR_NAME='' NONPORTABLE_SUPPORT='' NOTEST_CFLAGS='' NOTEST_CPPFLAGS='' NOTEST_CXXFLAGS='' NOTEST_LDFLAGS='' NOTEST_LIBS='' OBJEXT='' OS='' OS_DIR='' OS_SPECIFIC_VARS='' PACKAGE_BUGREPORT='' PACKAGE_NAME='' PACKAGE_STRING='' PACKAGE_TARNAME='' PACKAGE_URL='' PACKAGE_VERSION='' PATH_SEPARATOR=':' PCRE_CONFIG='' PICFLAGS='' PILDFLAGS='' PKGCONFIG='' PORT='' POST_SHARED_CMDS='' PRE_SHARED_CMDS='' RANLIB='' RM='' RSYNC='' SHELL='/bin/sh' SHLIBPATH_VAR='' SHLTCFLAGS='' SH_LDFLAGS='' SH_LIBS='' SH_LIBTOOL='' SSLPORT='' SSL_LIBS='' UTIL_LDFLAGS='' ab_LTFLAGS='' abs_srcdir='' ac_ct_CC='' ap_make_delimiter='' ap_make_include='' bindir='${exec_prefix}/bin' build='x86_64-unknown-linux-gnu' build_alias='' build_cpu='x86_64' build_os='linux-gnu' build_vendor='unknown' cgidir='${datadir}/cgi-bin' checkgid_LTFLAGS='' datadir='${prefix}' datarootdir='${prefix}/share' docdir='${datarootdir}/doc/${PACKAGE}' dvidir='${docdir}' errordir='${datadir}/error' exec_prefix='${prefix}' exp_bindir='/opt/myapp/apache2.2/bin' exp_cgidir='/opt/myapp/apache2.2/cgi-bin' exp_datadir='/opt/myapp/apache2.2' exp_errordir='/opt/myapp/apache2.2/error' exp_exec_prefix='/opt/myapp/apache2.2' exp_htdocsdir='/opt/myapp/apache2.2/htdocs' exp_iconsdir='/opt/myapp/apache2.2/icons' 
exp_includedir='/opt/myapp/apache2.2/include' exp_installbuilddir='/opt/myapp/apache2.2/build' exp_libdir='/opt/myapp/apache2.2/lib' exp_libexecdir='/opt/myapp/apache2.2/modules' exp_localstatedir='/opt/myapp/apache2.2' exp_logfiledir='/opt/myapp/apache2.2/logs' exp_mandir='/opt/myapp/apache2.2/man' exp_manualdir='/opt/myapp/apache2.2/manual' exp_proxycachedir='/opt/myapp/apache2.2/proxy' exp_runtimedir='/opt/myapp/apache2.2/logs' exp_sbindir='/opt/myapp/apache2.2/bin' exp_sysconfdir='/opt/myapp/apache2.2/conf' host='x86_32-unknown-linux-gnu' host_alias='x86_32-unknown-linux-gnu' host_cpu='x86_32' host_os='linux-gnu' host_vendor='unknown' htcacheclean_LTFLAGS='' htdbm_LTFLAGS='' htdigest_LTFLAGS='' htdocsdir='${datadir}/htdocs' htmldir='${docdir}' htpasswd_LTFLAGS='' httxt2dbm_LTFLAGS='' iconsdir='${datadir}/icons' includedir='${prefix}/include' infodir='${datarootdir}/info' installbuilddir='${datadir}/build' libdir='${exec_prefix}/lib' libexecdir='${exec_prefix}/modules' localedir='${datarootdir}/locale' localstatedir='${prefix}' logfiledir='${localstatedir}/logs' logresolve_LTFLAGS='' mandir='${prefix}/man' manualdir='${datadir}/manual' nonssl_listen_stmt_1='' nonssl_listen_stmt_2='' oldincludedir='/usr/include' other_targets='' pdfdir='${docdir}' perlbin='' prefix='/opt/myapp/apache2.2' progname='' program_transform_name='s,x,x,' proxycachedir='${localstatedir}/proxy' psdir='${docdir}' rel_bindir='bin' rel_cgidir='cgi-bin' rel_datadir='' rel_errordir='error' rel_exec_prefix='' rel_htdocsdir='htdocs' rel_iconsdir='icons' rel_includedir='include' rel_installbuilddir='build' rel_libdir='lib' rel_libexecdir='modules' rel_localstatedir='' rel_logfiledir='logs' rel_mandir='man' rel_manualdir='manual' rel_proxycachedir='proxy' rel_runtimedir='logs' rel_sbindir='bin' rel_sysconfdir='conf' rotatelogs_LTFLAGS='' runtimedir='${localstatedir}/logs' sbindir='${exec_prefix}/bin' shared_build='' sharedstatedir='${prefix}/com' sysconfdir='${prefix}/conf' 
target='x86_32-unknown-linux-gnu'
target_alias=''
target_cpu='x86_32'
target_os='linux-gnu'
target_vendor='unknown'

configure: exit 1

| From the output you've given, you are trying to compile a 32-bit build of apache on a 64 bit system. This is from the input to configure here:

--host=x86_32-unknown-linux-gnu host_alias=x86_32-unknown-linux-gnu CFLAGS=-m32 LDFLAGS=-m32

Also see the output lines confirming this:

configure:3629: checking build system type
configure:3643: result: x86_64-unknown-linux-gnu
configure:3663: checking host system type
configure:3676: result: x86_32-unknown-linux-gnu
configure:3696: checking target system type
configure:3709: result: x86_32-unknown-linux-gnu

Here it is using a 64 bit build system but a 32 bit host/target. Further down we see:

ac_cv_env_CFLAGS_set=set
ac_cv_env_CFLAGS_value=-m32

This flag tells gcc to produce 32 bit objects. Your error that the C compiler cannot produce executables is likely caused by not having a 32 bit toolchain present.

Testing your ability to compile 32 bit objects: you can test this by compiling a small C example with the -m32 flag.

// Minimal C example
#include <stdio.h>

int main()
{
    printf("This works\n");
    return 0;
}

Compiling:

gcc -m32 -o m32test m32test.c

If this command fails, then you have a problem with your compiler being able to build 32 bit objects. The error messages emitted from the compiler may be helpful in remedying this.

Remedies:

- Build for a 64 bit target (by removing the configure options forcing a 32 bit build), or
- Install a 32 bit compiler toolchain | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/146402",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63194/"
]
} |
146,415 | In Postfix, I have specified my private key, my certificate, and the certificate of my CA:

smtpd_tls_CAfile = /etc/ssl/cacert.pem
smtpd_tls_key_file = /etc/ssl/server.key
smtpd_tls_cert_file = /etc/ssl/server.pem

In dovecot, there are only options to specify my key and my cert:

ssl_cert = </etc/ssl/server.pem
ssl_key = </etc/ssl/server.key

How do I specify the certificate of my CA? Update: The problem is, when I connect with a client to my port 993, I get a certificate error. Using openssl s_client -connect server:993 I get this error:

verify return:1
verify error:num=27:certificate not trusted
verify return:1
verify error:num=21:unable to verify the first certificate
verify return:1

I don't get this error when I connect to port 465 (Postfix): openssl s_client -connect server:465 | What you need is a chain certificate. You can create one like this:

cat /etc/ssl/server.pem /etc/ssl/cacert.pem > /etc/ssl/chain.pem

and then use the chain as the server certificate:

ssl_cert = </etc/ssl/chain.pem
ssl_key = </etc/ssl/server.key

Now when you connect with openssl s_client, you should get no errors (provided everything else is set up correctly). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/146415",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
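The chain-building step in the answer above can be sanity-checked locally with a throwaway CA and server certificate. Everything below (file names, subjects) is invented for the demo:

```shell
# Generate a throwaway CA and a server certificate signed by it
# (all names and subjects here are made up for the demo):
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout ca.key -out cacert.pem -subj "/CN=Demo CA"
openssl req -newkey rsa:2048 -nodes \
    -keyout server.key -out server.csr -subj "/CN=mail.example.com"
openssl x509 -req -in server.csr -CA cacert.pem -CAkey ca.key \
    -CAcreateserial -days 1 -out server.pem

# Build the chain exactly as in the answer: server cert first, CA cert after
cat server.pem cacert.pem > chain.pem

# The server cert should verify against the CA
openssl verify -CAfile cacert.pem server.pem
```

If the last command prints `server.pem: OK`, the same `chain.pem` should satisfy clients connecting to Dovecot on port 993.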
146,419 | So I have experimented and created an alias in .bashrc . However, when I test out the command I get: [rkahil@netmon3 ~]$ menu
-bash: menu: command not found
Here is what I have in the .bashrc file: # Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi
# User specific aliases and functions
alias menu='./menuScript.sh'
alias vi='vim'
The funny thing is when I created the alias vi , it worked. But menu does not. I have looked up previous posts on UnixStackExchange and attempted to follow other posts, but to no avail. Does anyone else have any suggestions? | You should try alias menu='bash ./menuScript.sh' . I am not currently on a Linux machine, so cannot test it myself, but it should work. When you call the alias, it doesn't know what to do with the path, so you must include the bash at the beginning. And resetting the terminal does help after making the change. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/146419",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72902/"
]
} |
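The underlying problem is that the alias uses a relative path, so ./menuScript.sh only resolves from the directory containing the script. A self-contained sketch of the more robust fix, using an absolute path (the script body and its temp-dir location are invented for the demo):

```shell
# Create a demo script somewhere fixed (a temp dir here, just for illustration)
dir=$(mktemp -d)
printf '#!/bin/sh\necho "menu works"\n' > "$dir/menuScript.sh"
chmod +x "$dir/menuScript.sh"

# Non-interactive bash only expands aliases with this option on;
# interactive shells (and dash) expand them by default.
shopt -s expand_aliases 2>/dev/null || true

# An absolute path makes the alias work from any current directory
alias menu="$dir/menuScript.sh"

cd /            # somewhere else entirely
menu            # still works
```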
146,441 | I need to copy and over-write a large amount of files, I've used the following command: # cp -Rf * ../ But then whenever a file with the same name exists on the destination folder I get this question: cp: overwrite `../ibdata1'? The Problem is that I have about 200 files which are going to be over-written and I don't think that pressing Y then Enter 200 times is the right way to do it. So, what is the right way to that? | You can do yes | cp -rf myxx , Or if you do it as root - your .bashrc or .profile has an alias of cp to cp -i, most modern systems do that to root profiles. You can temporarily bypass an alias and use the non-aliased version of a command by prefixing it with \, e.g. \cp whatever | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/146441",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36757/"
]
} |
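The two techniques from the answer, as a runnable sketch (the file names are invented for the demo):

```shell
mkdir -p src dst
echo new > src/file1
echo old > dst/file1

# 1) Auto-answer any "overwrite?" prompt with y:
yes | cp -rf src/file1 dst/file1

# 2) Or bypass a root-style `alias cp='cp -i'`, so no prompt appears at all:
\cp -rf src/file1 dst/file1

cat dst/file1    # now contains "new"
```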
146,459 | I have been trying for a while to view files, hidden by a mount on my device sporting Debian 6, to no avail, and being new to Linux, I am compelled to ask the question: How do you view files hidden by a mount on Debian 6? I have gone over the many duplicates I came across as I was drafting this question the first 1 or 10 times and the following answers did not help in my case: Answer to "Access to original contents of mount point" Answer to "Where do the files go if you mount a drive to a folder that already contains files?" Answer to "What happened to the old files if we mount a drive to the directory? [duplicate]" I also found this , but it was a little intimidating to try that with my limited knowledge of what I am even doing. I also asked Linux users around me, who all (both) say that it's impossible to see my files without umount ing. So just to make things clear, this is what I am working with: /tmp # mkdir FOO/tmp # cd FOO//tmp/FOO # touch abc/tmp/FOO # cd~ # mount /dev/sda1 /tmp/FOO/~ # ls /tmp/FOO/bbb~ # cd /tmp//tmp # mkdir BAR/tmp # cd~ # mount --bind / /tmp/BAR~ # cd /tmp/BAR//tmp/BAR # lsbin etc lib media proc sbin sys usrdev home linuxrc mnt root selinux tmp var/tmp/BAR # cd tmp//tmp/BAR/tmp # ls/tmp/BAR/tmp # @John1024: ~ # mount | grep /tmp//dev/sda1 on /tmp/FOO type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=cp932,iocharset=cp932,errors=remount-ro)/dev/root on /tmp/BAR type jffs2 (ro,relatime) What and to where should I mount --bind to see the files that were in /tmp/somefolder ? Could the functionality of the solutions linked above be related to what build of Debian I am using? Edit: For clarification, these are some of the commands I tried: ~ # mount --bind /tmp/somefolder /tmp/anotherfolder~ # mount --bind / /tmp/anotherfolder~ # mount --bind /dev/sda1 /tmp/anotherfolder | As I understand it, you want to see the files, if any, hidden by the mount /dev/sda1 /tmp/somefolder command. 
Assuming that /tmp is part of the / filesystem, run: mount --bind / /tmp/anotherfolder
ls /tmp/anotherfolder/tmp/somefolder
If /tmp is not part of / but is a separate filesystem, run: mount --bind /tmp /tmp/anotherfolder
ls /tmp/anotherfolder/somefolder | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146459",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72977/"
]
} |
146,463 | Every 6 months or so I cannot access the DNS on whatever router I'm using. usually have to add the nameserver by hand to the /etc/resolv.conf file. I've tried adding Google free DNS to the file and it changes nothing. What can I do to not have to manually change the file each time I go to different coffee shop? | You can add the following line to /etc/dhcp/dhclient.conf : prepend domain-name-servers <working DNS IP(s) here>; This adds the DNS IP address(es) you specify before that/those provided by the DHCP.If you would like to add it/them after the address(es) provided by the DHCP, just use append domain-name-servers <working DNS IP(s) here>; If, instead you would like to ignore the DNS address(es) provided by the DHCP altogether, use supersede domain-name-servers <working DNS IP(s) here>; | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/146463",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78632/"
]
} |
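Putting the answer's pieces together, a minimal /etc/dhcp/dhclient.conf sketch using Google's public resolvers (which the asker already mentioned trying); pick exactly one of the three directives:

```
# /etc/dhcp/dhclient.conf
# Try these before the DHCP-provided servers:
prepend domain-name-servers 8.8.8.8, 8.8.4.4;

# ...or try them after the DHCP-provided servers:
#append domain-name-servers 8.8.8.8, 8.8.4.4;

# ...or ignore the DHCP-provided servers entirely:
#supersede domain-name-servers 8.8.8.8, 8.8.4.4;
```

Because dhclient re-applies this configuration on every new lease, the setting survives moving between networks, unlike hand-editing /etc/resolv.conf.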
146,486 | Until now I used characters to draw images, shapes etc in a terminal. Is it possible to draw a single pixel? Let's say: foo 1 1 red This will draw a red pixel at the coordinate (1, 1) . Is there an existing application that will do this job? Currently running Ubuntu 14.04. | Terminals are character-cell displays and don't support drawing pixel graphics. Not even when running in X11; although it's certainly possible to draw individual pixels when talking directly to an X server, if your program is talking to a terminal it can only ask the terminal to display characters. To display graphics instead of text, you'll need to write a program that interacts directly with the X server. This is typically done through a UI toolkit library such as GTK , Qt , or wxWidgets . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/146486",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45370/"
]
} |
146,490 | I'm using tmpfs for my /tmp directory. How can I make the computer decide to swap out the files inside the /tmp before swapping out anything that is being used by applications? Basically the files inside /tmp should have a higher swappiness compared to the memory being used by processes. It seems this answer https://unix.stackexchange.com/a/90337/56970 makes a lot of sense, but you can't change swappiness for a single directory. I know about cgroups though, but I don't see any way of making tmp into a cgroup? | If all goes well, your kernel should decide to "do the right thing" all by itself. It uses a lot of fancy heuristics to decide what to swap out and what to keep when there is memory pressure. Those heuristics have been carefully built by really smart people with a lot of experience in memory management and are already good enough that they're pretty hard to improve upon. The kernel uses a combination of things like this to decide what to swap out: How recently the memory has been used. Whether the memory has been modified since it was mapped. So for example a shared library will be pushed out ahead of heap memory because the heap memory is dirty and needs to be written to swap, whereas the shared library mapped memory can be loaded again from the original file on disk in case it is needed again, so no need to write those pages to swap. Here you should realize that tmpfs memory is always dirty (unless it's a fresh page filled with zeros) because it is not backed by anything. Hints from mprotect() . Likely many more. Short answer: no, you can't directly override the kernel's decisions about how to manage memory. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146490",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56970/"
]
} |
146,501 | The problem that I encountered today (on an embedded device with Linux) is that some files (specifically, the Valgrind /usr/lib files) are behaving very strange and they are not copy-able with scp (error is: not a regular file ). So: $ ls -al /usr/lib/*valgrind*drwxr-xr-x 2 root root 4096 Sep 30 00:01 .drwxr-xr-x 24 root root 12288 Sep 30 00:00 ..-rwxr-xr-x 1 root root 1816444 Jun 6 2014 cachegrind-x86-linux-rwxr-xr-x 1 root root 1910732 Jun 6 2014 callgrind-x86-linux-rw-r--r-- 1 root root 28429 Jun 6 2014 default.supp-rwxr-xr-x 1 root root 1884080 Jun 6 2014 drd-x86-linux-rwxr-xr-x 1 root root 1770688 Jun 6 2014 exp-bbv-x86-linux-rwxr-xr-x 1 root root 1852668 Jun 6 2014 exp-ptrcheck-x86-linux-rwxr-xr-x 1 root root 1910200 Jun 6 2014 helgrind-x86-linux-rwxr-xr-x 1 root root 1778880 Jun 6 2014 lackey-x86-linux-rwxr-xr-x 1 root root 1792524 Jun 6 2014 massif-x86-linux-rwxr-xr-x 1 root root 1942904 Jun 6 2014 memcheck-x86-linux-rwxr-xr-x 1 root root 1766560 Jun 6 2014 none-x86-linux-rwxr-xr-x 1 root root 2620 Jun 6 2014 vgpreload_core-x86-linux.so-rwxr-xr-x 1 root root 65296 Jun 6 2014 vgpreload_drd-x86-linux.so-rwxr-xr-x 1 root root 17904 Jun 6 2014 vgpreload_exp-ptrcheck-x86-linux.so-rwxr-xr-x 1 root root 37908 Jun 6 2014 vgpreload_helgrind-x86-linux.so-rwxr-xr-x 1 root root 15128 Jun 6 2014 vgpreload_massif-x86-linux.so-rwxr-xr-x 1 root root 26652 Jun 6 2014 vgpreload_memcheck-x86-linux.so First of all, notice that they do not have "valgrind" string in their names. Then, they are shown as regular files (not pipes, not devices). But, very strangely, they are not found by simple ls : $ ls -al /usr/lib/memcheck-x86-linuxls: /usr/lib/memcheck-x86-linux: No such file or directory They don't even show in the regular listing: $ ls -al /usr/bin(snipped) What on Linux could cause that behaviour? More, there is no utility like "file" on the device, but I doubt that would help me much. 
Output of df : $ df
Filesystem 1024-blocks Used Available Use% Mounted on
/dev/root 980308 175548 754964 19% /
tmpfs 10240 112 10128 1% /dev
tmpfs 246728 0 246728 0% /tmp
tmpfs 2048 12 2036 1% /var/log
tmpfs 246728 128 246600 0% /dev/shm
tmpfs 246728 0 246728 0% /run
tmpfs 246728 20 246708 0% /var/run
So, / is a device. And: $ whoami
root
So, I am root. Just to complete the puzzle: $ ls -al /dev/root
ls: /dev/root: No such file or directory
So the device that is mapped does not really exist. | This output: $ ls -al /usr/lib/*valgrind*
drwxr-xr-x 2 root root 4096 Sep 30 00:01 .
drwxr-xr-x 24 root root 12288 Sep 30 00:00 ..
-rwxr-xr-x 1 root root 1816444 Jun 6 2014 cachegrind-x86-linux
indicates that there is a directory named /usr/lib/*valgrind* (most likely just /usr/lib/valgrind ) which you're listing. The biggest clue is that you're seeing directory entries for . and .. . That explains why ls -al /usr/lib/memcheck-x86-linux says the file doesn't exist - it's because the file is called /usr/lib/valgrind/memcheck-x86-linux . If you don't want to list directories and show them as one entry instead, add the -d flag to ls : $ ls -ald /usr/lib/*valgrind*
drwxr-xr-x 2 root root 4096 Sep 30 00:01 /usr/lib/valgrind
As for why scp is saying "not a regular file", since you didn't provide the scp command line or output I have to guess, but my guess is that that is the output your scp produces for an argument that isn't any kind of file at all, because it doesn't exist. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146501",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78654/"
]
} |
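The behaviour the answer describes is easy to reproduce with a throwaway directory (names invented for the demo):

```shell
mkdir -p lib/valgrind
touch lib/valgrind/memcheck-x86-linux

ls -al lib/*valgrind*     # glob matches the directory, so ls lists its CONTENTS
                          # (including the . and .. entries)
ls -ald lib/*valgrind*    # -d lists the matching entry itself: lib/valgrind
ls lib/memcheck-x86-linux || true   # fails: the file is really lib/valgrind/memcheck-x86-linux
```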
146,507 | I am running Linux Mint Debian Edition, and I am getting the following errors: Jul 25 10:23:39 mhost kernel: [ 36.775380] [drm] nouveau 0000:01:00.0: unknown i2c port 57
Jul 25 10:23:39 mhost kernel: [ 36.775406] [drm] nouveau 0000:01:00.0: unknown i2c port 49
Jul 25 10:23:39 mhost kernel: [ 37.095951] [drm] nouveau 0000:01:00.0: PFIFO: unknown status 0x40000000
Jul 25 10:23:57 mhost kernel: [ 54.815320] [drm] nouveau 0000:01:00.0: unknown i2c port 48
Otherwise, my system is fine. Everything seems to be working properly; the only problem is that I get these errors very frequently and have to clean the logs regularly (kern.log, syslog, and messages). I am not too interested in fixing the underlying issue (I do not like to mess with the graphics driver if I do not have to), but I would like to block the errors (unknown i2c port as well as unknown status). Here is some more information about my system: $ inxi -SGx
System: Host: mhost Kernel: 3.2.0-4-amd64 x86_64 (64 bit, gcc: 4.6.3) Desktop: Cinnamon 2.0.14 Distro: LinuxMint 1 debian
Graphics: Card: NVIDIA GF108 [GeForce GT 630] bus-ID: 01:00.0 X.Org: 1.14.3 drivers: nouveau (unloaded: fbdev,vesa) Resolution: [email protected] GLX Renderer: Gallium 0.4 on NVC1 GLX Version: 3.0 Mesa 9.2.2 Direct Rendering: Yes
So my questions are: Can I block a certain error in Linux? And more specifically, can I block/disable these errors? My main motivation for this is that my log files become really big really fast, which fills up my disk. One workaround would be to automatically clear the logs, but I do not want to put that much strain on my ssd. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146507",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31533/"
]
} |
146,519 | Is there a possibility to write the following script without the loop? IPv4_first=1.1.1.1
IPv4_second=2.2.2.2
IPv4_third=3.3.3.3
IPv4_all=()
for var in ${!IPv4_@}
do
    IPv4_all+=(${!var})
done
printf "'%s'\n" "${IPv4_all[@]}"
Something like: IPv4_all=${!${!IPv4_@}} | This might be the ugliest Bash code I've ever written, but... IPv4_first=1.1.1.1
IPv4_second=2.2.2.2
IPv4_third=3.3.3.3
names=(${!IPv4_@})
eval "IPv4_all=(${names[@]/#/$})"
printf "'%s'\n" "${IPv4_all[@]}"
Look Ma, no loop! ${names[@]/#/$} prepends $ to the start of every element of the array, by matching an empty string anchored to the start of each element. That gives an array of variable dereferences, which we can expand inside eval to get the variable references inside the array initialiser. These need to be two separate lines because you can't apply multiple parameter expansions at the same time. The output is: '1.1.1.1'
'2.2.2.2'
'3.3.3.3'
as expected. It's possible to replace the line with: IPv4_all=($(eval "echo ${names[@]/#/$}")) rather than eval ling the array assignment. I'm not sure whether that's any better or not. If your variable values might contain spaces or other IFS characters, you can change the eval: eval "IPv4_all=($(printf '"$%s" ' "${names[@]}"))" That properly double-quotes all the variable dereferences. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146519",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78673/"
]
} |
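For completeness, the space-safe variant from the end of the answer, runnable as-is in bash (the IPv4_* values are the question's own sample data):

```shell
IPv4_first=1.1.1.1
IPv4_second=2.2.2.2
IPv4_third=3.3.3.3

names=(${!IPv4_@})                                  # bash: all variables named IPv4_*
eval "IPv4_all=($(printf '"$%s" ' "${names[@]}"))"  # dereference each, safely quoted
printf "'%s'\n" "${IPv4_all[@]}"
```

This prints the three quoted addresses just as the loop version does, and still works if a value contains spaces.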
146,521 | I am trying to understand the Linux diff command on two files whose lines are just a permutation of each other, but I am not able to grok the output that it generates. Consider the three commands below: [myPrompt]$ cat file1
apples
oranges
[myPrompt]$ cat file2
oranges
apples
[myPrompt]$ diff file1 file2
1d0
< apples
2a2
> apples
Can someone explain the above cryptic output from diff? Why is there no mention of "oranges" at all in the output? What do 1d0 and 2a2 mean? I understand from this answer that: "<" means the line is missing in file2 and ">" means the line is missing in file1, BUT that doesn't explain why oranges is missing in the output. | To understand the report, remember that diff is prescriptive, describing what changes need to be made to the first file ( file1 ) to make it the same as the second file ( file2 ). Specifically, the d in 1d0 means delete and the a in 2a2 means add . Thus: 1d0 means line 1 must be deleted in file1 ( apples ). 0 in 1d0 means line 0 is where it would have appeared in the second file ( file2 ) had it not been deleted. That means when changing file2 to file1 (backwards), append line 1 of file1 after line 0 of file2 . 2a2 means append the second line ( apples ) from file2 to the now second line of file1 (after deleting the first line in file1 , oranges switched to line 1) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/146521",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28032/"
]
} |
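Reproducing the session makes the "prescriptive" reading concrete; each hunk is an instruction for turning file1 into file2:

```shell
printf 'apples\noranges\n' > file1
printf 'oranges\napples\n' > file2

diff file1 file2 || true   # diff exits non-zero when the files differ
# 1d0       -> delete line 1 of file1 ("apples"); it maps to (after) line 0 of file2
# < apples
# 2a2       -> after line 2 of file1, add line 2 of file2 ("apples")
# > apples
```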
146,530 | I have a file like this: ID A56DS /A56DS AGE 56 And I'd like to print the whole line only if the second column starts with a capital letter. Expected output: ID A56DS AGE 56 What I've tried so far: awk '$2 ~ /[A-Z]/ {print $0}' file Prints everything: capital letters are found within the second column. awk '$2 /[A-Z]/' file Gets a syntax error. | You must use regex ^ to denote start of string: $ awk '$2 ~ /^[[:upper:]]/' fileID A56DS AGE 56 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146530",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74555/"
]
} |
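Because the sample file in the question lost its line breaks, here is a small made-up file that shows why the anchored pattern matters: without ^ , any capital letter anywhere in the second field matches.

```shell
printf 'ID A56DS\nid /A56DS\n' > file

awk '$2 ~ /[[:upper:]]/' file    # matches BOTH lines: "/A56DS" contains capitals
awk '$2 ~ /^[[:upper:]]/' file   # only "ID A56DS": ^ anchors the match to the start of $2
```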
146,550 | Using the date program, how can I calculate the number of seconds since midnight? | To avoid race conditions, still assuming GNU date: eval "$(date +'today=%F now=%s')"
midnight=$(date -d "$today 0" +%s)
echo "$((now - midnight))"
With zsh , you can do it internally: zmodload zsh/datetime
now=$EPOCHSECONDS
strftime -s today %F $now
strftime -rs midnight %F $today
echo $((now - midnight))
Portably, in timezones where there's no daylight saving switch, you could do: eval "$(date +'h=%H m=%M s=%S')"
echo "$((${h#0} * 3600 + ${m#0} * 60 + ${s#0}))"
The ${X#0} is to strip one leading 0 which in some shells like bash , dash and posh cause problems with 08 and 09 (where the shell complains about it being an invalid octal number). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/146550",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73543/"
]
} |
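The portable arithmetic from the answer, with the intermediate value kept in a variable so it can be inspected (no GNU date required):

```shell
# Capture hour/minute/second in one date call to avoid racing across a boundary
eval "$(date +'h=%H m=%M s=%S')"

# ${x#0} strips one leading zero so 08/09 aren't parsed as octal
secs=$(( ${h#0} * 3600 + ${m#0} * 60 + ${s#0} ))
echo "$secs"    # seconds since local midnight
```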
146,551 | I am working on a Linux Mint 16 computer and since recently, every time I want to install something via apt-get install , the log message says that the packages couldn't be authenticated. I go ahead and try to install them without authetication and it turns out most of the packages are not found. At the end of the process, the console message suggests me to use apt-get update or --fix-missing . So that's what I do: sudo apt-get update and immediateley after I try again to install with sudo apt-get install nginx but I still get the same message error. What is the problem? Am I missing something? Note: I would have copy/pasted the logs but they are in Spanish so they would probably wouldn't have been of much help to most. UPDATE:I managed to get the logs in English thanks to @Flup. Here they are:For apt-get install : ricardo@toshi ~$ sudo apt-get install nginxReading package lists... DoneBuilding dependency tree Reading state information... DoneThe following packages were automatically installed and are no longer required: libnet-daemon-perl libplrpc-perlUse 'apt-get autoremove' to remove them.The following extra packages will be installed: nginx-common nginx-fullThe following NEW packages will be installed: nginx nginx-common nginx-full0 upgraded, 3 newly installed, 0 to remove and 17 not upgraded.Need to get 404 kB of archives.After this operation, 1246 kB of additional disk space will be used.Do you want to continue [Y/n]? YWARNING: The following packages cannot be authenticated! nginx-common nginx-full nginxInstall these packages without verification [y/N]? 
yErr http://archive.ubuntu.com/ubuntu/ raring-updates/universe nginx-common all 1.2.6-1ubuntu3.3 404 Not Found [IP: 91.189.88.153 80]Err http://security.ubuntu.com/ubuntu/ raring-security/universe nginx-common all 1.2.6-1ubuntu3.3 404 Not Found [IP: 91.189.92.200 80]Err http://security.ubuntu.com/ubuntu/ raring-security/universe nginx-full amd64 1.2.6-1ubuntu3.3 404 Not Found [IP: 91.189.92.200 80]Err http://security.ubuntu.com/ubuntu/ raring-security/universe nginx all 1.2.6-1ubuntu3.3 404 Not Found [IP: 91.189.92.200 80]Failed to fetch http://security.ubuntu.com/ubuntu/pool/universe/n/nginx/nginx-common_1.2.6-1ubuntu3.3_all.deb 404 Not Found [IP: 91.189.92.200 80]Failed to fetch http://security.ubuntu.com/ubuntu/pool/universe/n/nginx/nginx-full_1.2.6-1ubuntu3.3_amd64.deb 404 Not Found [IP: 91.189.92.200 80]Failed to fetch http://security.ubuntu.com/ubuntu/pool/universe/n/nginx/nginx_1.2.6-1ubuntu3.3_all.deb 404 Not Found [IP: 91.189.92.200 80]E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing? 
For apt-get update : ricardo@toshi ~$ sudo apt-get updateHit http://dl.google.com stable Release.gpgIgn http://es.archive.ubuntu.com raring Release.gpg Hit http://archive.canonical.com raring Release.gpg Hit http://dl.google.com stable Release.gpg Hit http://ppa.launchpad.net raring Release.gpg Ign http://es.archive.ubuntu.com raring Release.gpg Ign http://archive.ubuntu.com raring Release.gpg Hit http://dl.google.com stable Release Hit http://archive.canonical.com raring Release Ign http://es.archive.ubuntu.com raring Release Hit http://ppa.launchpad.net raring Release.gpg Ign http://archive.ubuntu.com raring-updates Release.gpg Hit http://dl.google.com stable Release Ign http://security.ubuntu.com raring-security Release.gpg Ign http://es.archive.ubuntu.com raring Release Hit http://archive.canonical.com raring/partner amd64 Packages Ign http://archive.ubuntu.com raring Release Hit http://ppa.launchpad.net raring Release Hit http://downloads-distro.mongodb.org dist Release.gpg Hit http://dl.google.com stable/main amd64 Packages Get:1 http://packages.linuxmint.com olivia Release.gpg [198 B] Ign http://archive.ubuntu.com raring-updates Release Hit http://archive.canonical.com raring/partner i386 Packages Hit http://dl.google.com stable/main i386 Packages Hit http://ppa.launchpad.net raring Release Ign http://security.ubuntu.com raring-security Release Ign http://archive.ubuntu.com raring/main amd64 Packages/DiffIndex Hit http://ppa.launchpad.net raring/main Sources Ign http://archive.ubuntu.com raring/restricted amd64 Packages/DiffIndex Get:2 http://packages.linuxmint.com olivia Release [18.5 kB] Hit http://ppa.launchpad.net raring/main amd64 Packages Ign http://archive.ubuntu.com raring/universe amd64 Packages/DiffIndex Ign http://security.ubuntu.com raring-security/main amd64 Packages/DiffIndex Hit http://dl.google.com stable/main amd64 Packages Hit http://downloads-distro.mongodb.org dist Release Ign http://archive.ubuntu.com raring/multiverse amd64 
Packages/DiffIndex Hit http://ppa.launchpad.net raring/main i386 Packages Hit http://dl.google.com stable/main i386 Packages Ign http://archive.ubuntu.com raring/main i386 Packages/DiffIndex Ign http://security.ubuntu.com raring-security/restricted amd64 Packages/DiffIndex Ign http://archive.ubuntu.com raring/restricted i386 Packages/DiffIndex Ign http://archive.ubuntu.com raring/universe i386 Packages/DiffIndex Ign http://security.ubuntu.com raring-security/universe amd64 Packages/DiffIndex Ign http://archive.ubuntu.com raring/multiverse i386 Packages/DiffIndex Hit http://toolbelt.heroku.com ./ Release.gpg Hit http://ppa.launchpad.net raring/main Sources Get:3 http://packages.linuxmint.com olivia/main amd64 Packages [23.5 kB] Hit http://downloads-distro.mongodb.org dist/10gen amd64 Packages Hit https://get.docker.io docker Release.gpg Hit http://ppa.launchpad.net raring/main amd64 Packages Ign http://security.ubuntu.com raring-security/multiverse amd64 Packages/DiffIndex Ign http://archive.canonical.com raring/partner Translation-en Hit http://ppa.launchpad.net raring/main i386 Packages Hit https://get.docker.io docker Release Ign http://archive.canonical.com raring/partner Translation-es Ign http://security.ubuntu.com raring-security/main i386 Packages/DiffIndex Hit http://toolbelt.heroku.com ./ Release Hit http://downloads-distro.mongodb.org dist/10gen i386 Packages Hit https://get.docker.io docker/main amd64 Packages Get:4 http://packages.linuxmint.com olivia/upstream amd64 Packages [9249 B] Ign http://security.ubuntu.com raring-security/restricted i386 Packages/DiffIndex Hit https://get.docker.io docker/main i386 Packages Get:5 http://packages.linuxmint.com olivia/import amd64 Packages [39.2 kB] Ign http://security.ubuntu.com raring-security/universe i386 Packages/DiffIndex Hit http://toolbelt.heroku.com ./ Packages Ign http://archive.ubuntu.com raring-updates/main amd64 Packages/DiffIndex Ign http://security.ubuntu.com raring-security/multiverse i386 
Packages/DiffIndex Ign http://archive.ubuntu.com raring-updates/restricted amd64 Packages/DiffIndex Ign http://dl.google.com stable/main Translation-en Ign http://archive.ubuntu.com raring-updates/universe amd64 Packages/DiffIndex Ign http://dl.google.com stable/main Translation-es Ign http://archive.ubuntu.com raring-updates/multiverse amd64 Packages/DiffIndex Ign http://dl.google.com stable/main Translation-en Ign http://archive.ubuntu.com raring-updates/main i386 Packages/DiffIndex Get:6 http://packages.linuxmint.com olivia/main i386 Packages [23.5 kB] Ign http://dl.google.com stable/main Translation-es Ign http://archive.ubuntu.com raring-updates/restricted i386 Packages/DiffIndex Ign http://archive.ubuntu.com raring-updates/universe i386 Packages/DiffIndex Ign http://archive.ubuntu.com raring-updates/multiverse i386 Packages/DiffIndex Get:7 http://packages.linuxmint.com olivia/upstream i386 Packages [9237 B] Ign http://ppa.launchpad.net raring/main Translation-en Get:8 http://packages.linuxmint.com olivia/import i386 Packages [40.1 kB] Ign http://ppa.launchpad.net raring/main Translation-es Ign http://ppa.launchpad.net raring/main Translation-en Ign http://ppa.launchpad.net raring/main Translation-es Ign http://toolbelt.heroku.com ./ Translation-en Ign http://toolbelt.heroku.com ./ Translation-es Ign http://downloads-distro.mongodb.org dist/10gen Translation-en Ign http://downloads-distro.mongodb.org dist/10gen Translation-es Err http://es.archive.ubuntu.com raring/main amd64 Packages 404 Not Found [IP: 91.189.92.201 80]Err http://es.archive.ubuntu.com raring/universe amd64 Packages 404 Not Found [IP: 91.189.92.201 80]Err http://es.archive.ubuntu.com raring/main i386 Packages 404 Not Found [IP: 91.189.92.201 80]Err http://es.archive.ubuntu.com raring/universe i386 Packages 404 Not Found [IP: 91.189.92.201 80]Ign http://es.archive.ubuntu.com raring/main Translation-en Ign http://es.archive.ubuntu.com raring/main Translation-es Ign http://es.archive.ubuntu.com 
raring/universe Translation-en Ign http://es.archive.ubuntu.com raring/universe Translation-es Err http://es.archive.ubuntu.com raring/main amd64 Packages 404 Not Found [IP: 91.189.92.201 80]Err http://es.archive.ubuntu.com raring/universe amd64 Packages 404 Not Found [IP: 91.189.92.201 80]Err http://es.archive.ubuntu.com raring/main i386 Packages 404 Not Found [IP: 91.189.92.201 80]Err http://es.archive.ubuntu.com raring/universe i386 Packages 404 Not Found [IP: 91.189.92.201 80]Ign http://es.archive.ubuntu.com raring/main Translation-en Ign http://es.archive.ubuntu.com raring/main Translation-es Ign http://es.archive.ubuntu.com raring/universe Translation-en Ign http://es.archive.ubuntu.com raring/universe Translation-es Ign http://packages.linuxmint.com olivia/import Translation-en Ign http://packages.linuxmint.com olivia/import Translation-es Ign https://get.docker.io docker/main Translation-en Ign http://packages.linuxmint.com olivia/main Translation-en Ign http://packages.linuxmint.com olivia/main Translation-es Ign http://packages.linuxmint.com olivia/upstream Translation-en Ign https://get.docker.io docker/main Translation-es Ign http://packages.linuxmint.com olivia/upstream Translation-es Ign http://archive.ubuntu.com raring/main Translation-en Ign http://archive.ubuntu.com raring/main Translation-es Ign http://archive.ubuntu.com raring/multiverse Translation-en Ign http://archive.ubuntu.com raring/multiverse Translation-es Ign http://archive.ubuntu.com raring/restricted Translation-en Ign http://archive.ubuntu.com raring/restricted Translation-es Ign http://archive.ubuntu.com raring/universe Translation-en Ign http://archive.ubuntu.com raring/universe Translation-es Ign http://archive.ubuntu.com raring-updates/main Translation-en Ign http://archive.ubuntu.com raring-updates/main Translation-es Ign http://archive.ubuntu.com raring-updates/multiverse Translation-enIgn http://archive.ubuntu.com raring-updates/multiverse Translation-esIgn 
http://archive.ubuntu.com raring-updates/restricted Translation-en
Ign http://archive.ubuntu.com raring-updates/restricted Translation-es
Ign http://security.ubuntu.com raring-security/main Translation-en
Ign http://archive.ubuntu.com raring-updates/universe Translation-en
Ign http://archive.ubuntu.com raring-updates/universe Translation-es
Ign http://security.ubuntu.com raring-security/main Translation-es
Err http://archive.ubuntu.com raring/main amd64 Packages 404 Not Found [IP: 91.189.92.200 80]
Err http://archive.ubuntu.com raring/restricted amd64 Packages 404 Not Found [IP: 91.189.92.200 80]
Ign http://security.ubuntu.com raring-security/multiverse Translation-en
Err http://archive.ubuntu.com raring/universe amd64 Packages 404 Not Found [IP: 91.189.92.200 80]
Ign http://security.ubuntu.com raring-security/multiverse Translation-es
Err http://archive.ubuntu.com raring/multiverse amd64 Packages 404 Not Found [IP: 91.189.92.200 80]
Err http://archive.ubuntu.com raring/main i386 Packages 404 Not Found [IP: 91.189.92.200 80]
Err http://archive.ubuntu.com raring/restricted i386 Packages 404 Not Found [IP: 91.189.92.200 80]
Ign http://security.ubuntu.com raring-security/restricted Translation-en
Err http://archive.ubuntu.com raring/universe i386 Packages 404 Not Found [IP: 91.189.92.200 80]
Err http://archive.ubuntu.com raring/multiverse i386 Packages 404 Not Found [IP: 91.189.92.200 80]
Ign http://security.ubuntu.com raring-security/restricted Translation-es
Err http://archive.ubuntu.com raring-updates/main amd64 Packages 404 Not Found [IP: 91.189.92.200 80]
Err http://archive.ubuntu.com raring-updates/restricted amd64 Packages 404 Not Found [IP: 91.189.92.200 80]
Ign http://security.ubuntu.com raring-security/universe Translation-en
Err http://archive.ubuntu.com raring-updates/universe amd64 Packages 404 Not Found [IP: 91.189.92.200 80]
Err http://archive.ubuntu.com raring-updates/multiverse amd64 Packages 404 Not Found [IP: 91.189.92.200 80]
Ign http://security.ubuntu.com raring-security/universe Translation-es
Err http://archive.ubuntu.com raring-updates/main i386 Packages 404 Not Found [IP: 91.189.92.200 80]
Err http://archive.ubuntu.com raring-updates/restricted i386 Packages 404 Not Found [IP: 91.189.92.200 80]
Err http://archive.ubuntu.com raring-updates/universe i386 Packages 404 Not Found [IP: 91.189.92.200 80]
Err http://security.ubuntu.com raring-security/main amd64 Packages 404 Not Found [IP: 91.189.91.15 80]
Err http://archive.ubuntu.com raring-updates/multiverse i386 Packages 404 Not Found [IP: 91.189.92.200 80]
Err http://security.ubuntu.com raring-security/restricted amd64 Packages 404 Not Found [IP: 91.189.91.15 80]
Err http://security.ubuntu.com raring-security/universe amd64 Packages 404 Not Found [IP: 91.189.91.15 80]
Err http://security.ubuntu.com raring-security/multiverse amd64 Packages 404 Not Found [IP: 91.189.91.15 80]
Err http://security.ubuntu.com raring-security/main i386 Packages 404 Not Found [IP: 91.189.91.15 80]
Err http://security.ubuntu.com raring-security/restricted i386 Packages 404 Not Found [IP: 91.189.91.15 80]
Err http://security.ubuntu.com raring-security/universe i386 Packages 404 Not Found [IP: 91.189.91.15 80]
Err http://security.ubuntu.com raring-security/multiverse i386 Packages 404 Not Found [IP: 91.189.91.15 80]
Fetched 163 kB in 15s (10.9 kB/s)
W: Failed to fetch http://es.archive.ubuntu.com/ubuntu/pool/universe/a/aufs-tools/aufs-tools_3.0+20120411-3ubuntu1_amd64.deb/dists/raring/main/binary-amd64/Packages 404 Not Found [IP: 91.189.92.201 80]
W: Failed to fetch http://es.archive.ubuntu.com/ubuntu/pool/universe/a/aufs-tools/aufs-tools_3.0+20120411-3ubuntu1_amd64.deb/dists/raring/universe/binary-amd64/Packages 404 Not Found [IP: 91.189.92.201 80]
W: Failed to fetch http://es.archive.ubuntu.com/ubuntu/pool/universe/a/aufs-tools/aufs-tools_3.0+20120411-3ubuntu1_amd64.deb/dists/raring/main/binary-i386/Packages 404 Not Found [IP: 91.189.92.201 80]
W: Failed to fetch http://es.archive.ubuntu.com/ubuntu/pool/universe/a/aufs-tools/aufs-tools_3.0+20120411-3ubuntu1_amd64.deb/dists/raring/universe/binary-i386/Packages 404 Not Found [IP: 91.189.92.201 80]
W: Failed to fetch http://es.archive.ubuntu.com/ubuntu/pool/main/n/nginx/nginx_1.4.6-1ubuntu3_all.deb/dists/raring/main/binary-amd64/Packages 404 Not Found [IP: 91.189.92.201 80]
W: Failed to fetch http://es.archive.ubuntu.com/ubuntu/pool/main/n/nginx/nginx_1.4.6-1ubuntu3_all.deb/dists/raring/universe/binary-amd64/Packages 404 Not Found [IP: 91.189.92.201 80]
W: Failed to fetch http://es.archive.ubuntu.com/ubuntu/pool/main/n/nginx/nginx_1.4.6-1ubuntu3_all.deb/dists/raring/main/binary-i386/Packages 404 Not Found [IP: 91.189.92.201 80]
W: Failed to fetch http://es.archive.ubuntu.com/ubuntu/pool/main/n/nginx/nginx_1.4.6-1ubuntu3_all.deb/dists/raring/universe/binary-i386/Packages 404 Not Found [IP: 91.189.92.201 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring/main/binary-amd64/Packages 404 Not Found [IP: 91.189.92.200 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring/restricted/binary-amd64/Packages 404 Not Found [IP: 91.189.92.200 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring/universe/binary-amd64/Packages 404 Not Found [IP: 91.189.92.200 80]
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/main/binary-amd64/Packages 404 Not Found [IP: 91.189.91.15 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring/multiverse/binary-amd64/Packages 404 Not Found [IP: 91.189.92.200 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring/main/binary-i386/Packages 404 Not Found [IP: 91.189.92.200 80]
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/restricted/binary-amd64/Packages 404 Not Found [IP: 91.189.91.15 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring/restricted/binary-i386/Packages 404 Not Found [IP: 91.189.92.200 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring/universe/binary-i386/Packages 404 Not Found [IP: 91.189.92.200 80]
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/universe/binary-amd64/Packages 404 Not Found [IP: 91.189.91.15 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring/multiverse/binary-i386/Packages 404 Not Found [IP: 91.189.92.200 80]
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/multiverse/binary-amd64/Packages 404 Not Found [IP: 91.189.91.15 80]
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/main/binary-i386/Packages 404 Not Found [IP: 91.189.91.15 80]
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/restricted/binary-i386/Packages 404 Not Found [IP: 91.189.91.15 80]
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/universe/binary-i386/Packages 404 Not Found [IP: 91.189.91.15 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring-updates/main/binary-amd64/Packages 404 Not Found [IP: 91.189.92.200 80]
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/multiverse/binary-i386/Packages 404 Not Found [IP: 91.189.91.15 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring-updates/restricted/binary-amd64/Packages 404 Not Found [IP: 91.189.92.200 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring-updates/universe/binary-amd64/Packages 404 Not Found [IP: 91.189.92.200 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring-updates/multiverse/binary-amd64/Packages 404 Not Found [IP: 91.189.92.200 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring-updates/main/binary-i386/Packages 404 Not Found [IP: 91.189.92.200 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring-updates/restricted/binary-i386/Packages 404 Not Found [IP: 91.189.92.200 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring-updates/universe/binary-i386/Packages 404 Not Found [IP: 91.189.92.200 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/raring-updates/multiverse/binary-i386/Packages 404 Not Found [IP: 91.189.92.200 80]
E: Some index files failed to download. They have been ignored, or old ones used instead. | The thing that helped me was: https://smyl.es/how-to-fix-ubuntudebian-apt-get-404-not-found-package-repository-errors-saucy-raring-quantal-oneiric-natty/ Basically updating the lists to use old-releases.ubuntu.com:

sudo sed -i -e 's/archive.ubuntu.com\|security.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
sudo sed -i -e 's/archive.ubuntu.com\|security.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list.d/official-package-repositories.list
sudo sed -i -e 's/archive.ubuntu.com\|security.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list.d/official-source-repositories.list

Edit: As Meisam Mulla said in the comments, if the urls in your /etc/apt/sources.list files are prefixed with something ( ca. for example) you'll need to remove the prefixes manually, as ca.old-releases.ubuntu.com isn't a valid address. Also, some of my error messages for the Googlers:

W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/main/source/Sources 404 Not Found [IP: 91.189.88.153 80]
Err http://archive.ubuntu.com raring/main Sources 404 Not Found [IP: 91.189.92.201 80]
Err http://archive.ubuntu.com raring/restricted Sources 404 Not Found [IP: 91.189.92.201 80] | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/146551",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54616/"
]
} |
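A safe way to sanity-check the substitution in the answer above before touching the real /etc/apt/sources.list is to run the same GNU sed expression on a sample line (the sample line here is made up for illustration):

```shell
# Preview the old-releases rewrite on one sample sources.list entry.
# GNU sed's alternation \| is assumed, exactly as in the answer above.
sample="deb http://archive.ubuntu.com/ubuntu raring main restricted"
echo "$sample" | sed -e 's/archive.ubuntu.com\|security.ubuntu.com/old-releases.ubuntu.com/g'
```

Only once the output looks right would you point the same expression at the real files with sudo sed -i .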
146,555 | Can scrypt be used as the hashing algorithm for LUKS? Can I tune its parameters? How can I do this? | No, LUKS 1 only supports PBKDF2 as the password-based key derivation function. PBKDF2 is built on a cryptographic hash function, and you can select the hash function with --hash , as well as the iteration count via --iter-time . All supported hash functions are equally secure for this use case; a higher iteration count makes the job proportionally harder for the attacker but also make normal mounting correspondingly slower. There is a registered issue for LUKS to support scrypt. This is a significant change because there is no field in the on-disk format to indicate which key stretching is in use. This has been discussed briefly on the dm-crypt mailing list. LUKS 2 supports Argon2 , which is a memory-hard password-based key derivation function (like scrypt) that is the new standard for password hashing . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146555",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
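As a sketch of the tuning knobs mentioned in the answer above (these commands are destructive, need root and a real target device, so /dev/sdX1 is a placeholder; the LUKS2 options require cryptsetup 2.x):

```shell
# LUKS1: PBKDF2 only; pick the hash and the per-unlock iteration time.
cryptsetup luksFormat --type luks1 --hash sha512 --iter-time 5000 /dev/sdX1

# LUKS2: memory-hard Argon2 with tunable memory (KiB) and parallelism.
cryptsetup luksFormat --type luks2 --pbkdf argon2id \
    --pbkdf-memory 1048576 --pbkdf-parallel 4 --iter-time 4000 /dev/sdX1
```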
146,556 | I'm trying to generate a menuconfig on my own with the help of lxdialog (source: lxdialog ). Unfortunately this is not as well documented as I'd wish. What I did: I included these files (see source) into a new/empty project and did nothing else. Since this seems to be the source of lxdialog, I tried a quick shot by simply importing it. I need a GUI that is based on C, since I need the return values in a C/C++ program - that's why I can't use the much easier to work with dialog . I don't want to spend too much time programming a new GUI, which is why I think lxdialog fits my needs. The error: I get lots of errors with the message Type XYZ could not be resolved . I refreshed the indexer multiple times and even excluded every .c file from the build, but without success. I already checked the whole library for the unresolved keywords, but the places these errors come from seem to be the very first occurrences of the keywords. Some of the messages:

[...]
Type 'chtype' could not be resolved   dialog.h
Type 'WINDOW' could not be resolved   dialog.h
[...]

My question is: obviously I'm doing something wrong. Have I included too few files into my project, or am I missing system-wide libraries? Does anyone know a tutorial for getting started with lxdialog, or at least how to solve these errors? | You are, most likely, abusing lxdialog. That is supposed to be used only together with all the other configuration stuff. If you really need a configuration tool to be used at compile time (i.e.: " make menuconfig && make all ") then you will find documentation to make kconfig work for you in the kernel/Documentation/kconfig directory (and other places, but you need to search for kconfig , not lxdialog !).
If you need a generic dialog interface instead (e.g.: as found in various Debian configurators) you would be much better off using plain dialog , available in all major distributions. You will find a lot of documentation for dialog (including at " http://invisible-island.net/dialog/dialog.html "). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146556",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56989/"
]
} |
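To illustrate the plain dialog route suggested above, a minimal menu could look like the following (interactive, so it needs a real terminal; the title and item labels are made up):

```shell
# dialog draws the menu on the terminal and prints the chosen tag on
# stderr, so the classic fd swap captures it into a variable.
choice=$(dialog --clear --title "Demo" \
    --menu "Pick one:" 12 40 3 \
    1 "First option" \
    2 "Second option" \
    3 "Third option" \
    3>&1 1>&2 2>&3)
echo "You chose tag: $choice"
```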
146,570 | How to create a menu in a shell script that will display 3 options, let the user move the highlight cursor with the arrow keys, and press enter to select one? | Here is a pure bash script solution in form of the select_option function, relying solely on ANSI escape sequences and the built-in read . Works on Bash 4.2.45 on OSX. The funky parts that might not work equally well in all environments from all I know are the get_cursor_row() , key_input() (to detect up/down keys) and the cursor_to() functions.

#!/usr/bin/env bash

# Renders a text based list of options that can be selected by the
# user using up, down and enter keys and returns the chosen option.
#
#   Arguments   : list of options, maximum of 256
#                 "opt1" "opt2" ...
#   Return value: selected index (0 for opt1, 1 for opt2 ...)
function select_option {

    # little helpers for terminal print control and key input
    ESC=$( printf "\033")
    cursor_blink_on()  { printf "$ESC[?25h"; }
    cursor_blink_off() { printf "$ESC[?25l"; }
    cursor_to()        { printf "$ESC[$1;${2:-1}H"; }
    print_option()     { printf "   $1 "; }
    print_selected()   { printf "  $ESC[7m $1 $ESC[27m"; }
    get_cursor_row()   { IFS=';' read -sdR -p $'\E[6n' ROW COL; echo ${ROW#*[}; }
    key_input()        { read -s -n3 key 2>/dev/null >&2
                         if [[ $key = $ESC[A ]]; then echo up;    fi
                         if [[ $key = $ESC[B ]]; then echo down;  fi
                         if [[ $key = ""     ]]; then echo enter; fi; }

    # initially print empty new lines (scroll down if at bottom of screen)
    for opt; do printf "\n"; done

    # determine current screen position for overwriting the options
    local lastrow=`get_cursor_row`
    local startrow=$(($lastrow - $#))

    # ensure cursor and input echoing back on upon a ctrl+c during read -s
    trap "cursor_blink_on; stty echo; printf '\n'; exit" 2
    cursor_blink_off

    local selected=0
    while true; do
        # print options by overwriting the last lines
        local idx=0
        for opt; do
            cursor_to $(($startrow + $idx))
            if [ $idx -eq $selected ]; then
                print_selected "$opt"
            else
                print_option "$opt"
            fi
            ((idx++))
        done

        # user key control
        case `key_input` in
            enter) break;;
            up)    ((selected--));
                   if [ $selected -lt 0 ]; then selected=$(($# - 1)); fi;;
            down)  ((selected++));
                   if [ $selected -ge $# ]; then selected=0; fi;;
        esac
    done

    # cursor position back to normal
    cursor_to $lastrow
    printf "\n"
    cursor_blink_on

    return $selected
}

Here is an example usage:

echo "Select one option using up/down keys and enter to confirm:"
echo

options=("one" "two" "three")

select_option "${options[@]}"
choice=$?

echo "Chosen index = $choice"
echo "       value = ${options[$choice]}"

Output looks like below, with the currently selected option highlighted using inverse ansi coloring (hard to convey here in markdown). This can be adapted in the print_selected() function if desired.

Select one option using up/down keys and enter to confirm:

 [one]
  two
  three

Update: Here is a little extension select_opt wrapping the above select_option function to make it easy to use in a case statement:

function select_opt {
    select_option "$@" 1>&2
    local result=$?
    echo $result
    return $result
}

Example usage with 3 literal options:

case `select_opt "Yes" "No" "Cancel"` in
    0) echo "selected Yes";;
    1) echo "selected No";;
    2) echo "selected Cancel";;
esac

You can also mix if there are some known entries (Yes and No in this case), and leverage the exit code $? for the wildcard case:

options=("Yes" "No" "${array[@]}") # join arrays to add some variable array

case `select_opt "${options[@]}"` in
    0) echo "selected Yes";;
    1) echo "selected No";;
    *) echo "selected ${options[$?]}";;
esac | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/146570",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78702/"
]
} |
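If arrow-key navigation is not a hard requirement, bash's built-in select gives a much shorter numbered menu (the user types the option's number instead of using the cursor keys):

```shell
#!/usr/bin/env bash
# Numbered menu with the `select` builtin: it prints the list and the
# PS3 prompt on stderr, then reads the chosen number from stdin.
PS3="Choose an option: "
select opt in one two three; do
    echo "You picked: $opt"
    break
done
```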
146,584 | It's simple enough to use cron to schedule a job to occur periodically. I'd like to have something occur less regularly -- say, run the job, then wait 2 to 12 hours before trying again. (Any reasonable type of randomness would work here.) Is there a good way to do this? | You could use the command 'at':

at now +4 hours -f commandfile

Or:

at now +$((($RANDOM % 10)+2)) hours -f commandfile | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146584",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50150/"
]
} |
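Note that the answer's $((($RANDOM % 10)+2)) yields 2–11 hours; for the full 2–12 hour range the question asks about, the modulus needs to be 11. A quick sketch of the arithmetic:

```shell
# Random whole number of hours in [2, 12]: RANDOM % 11 gives 0..10.
delay=$(( (RANDOM % 11) + 2 ))
echo "sleeping for $delay hours"
# which would then be handed to at, e.g.:
#   at now +$delay hours -f commandfile
```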
146,585 | Let's say, I have files that contain only one docker id: myid.id :

28fe2baadbe8da32ed0b99c69b11c01b2d141bc5b732b81e0960086de52fc891

I want to check if the content of my.id is exactly 64 characters long and contains only characters in the range [0-9] and [a-z] (maybe [a-f]). How can I do that? If the file contains a newline 0x0a , how can I include/exclude it in this check? | Try:

$ echo 28fe2baadbe8da32ed0b99c69b11c01b2d141bc5b732b81e0960086de52fc891 | awk '{sub(/\r/,"")} length == 64 && /^[[:xdigit:]]+$/'
28fe2baadbe8da32ed0b99c69b11c01b2d141bc5b732b81e0960086de52fc891

or use perl instead. Include newline:

perl -ne 'print if length == 64 and /^[[:xdigit:]]+$/'

Exclude newline:

perl -nle 'print if length == 64 and /^[[:xdigit:]]+$/' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146585",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72471/"
]
} |
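One caveat to the answer above: [[:xdigit:]] also matches the uppercase A–F. For the strictly lowercase [0-9a-f] range the question asks about, grep alone can do the check:

```shell
# Strict check: exactly 64 lowercase hex characters.
# Filling id with $(cat myid.id) would strip the trailing 0x0a automatically.
id="28fe2baadbe8da32ed0b99c69b11c01b2d141bc5b732b81e0960086de52fc891"
if printf '%s' "$id" | grep -Eq '^[0-9a-f]{64}$'; then
    echo valid
else
    echo invalid
fi
```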
146,599 | Is it possible to execute some bashrc file for all users in a group? If you can do that system-wide in /etc/profile then why not for a usergroup? | In /etc/profile , add:

if [ "$(id -ng)" = "the_cool_group" ]; then
    # do stuff for people in the_cool_group
fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146599",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78716/"
]
} |
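One caveat worth hedging: id -ng only reports the primary group, so users who have the_cool_group merely as a supplementary group would be skipped by the snippet above. Checking the full group list instead:

```shell
# Match the target group against all groups of the current user.
if id -nG | tr ' ' '\n' | grep -qx "the_cool_group"; then
    echo "doing stuff for people in the_cool_group"
fi
```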
146,609 | So I have just been introduced to Cathode for OSX and I am completely fascinated by how classic it looks with it's 1970s retro style. Does anyone know of any programs/packages I can download to get my terminal to sport that very look? If there are no packages available, is there are setting I can change on the Ubuntu 14.04 terminal? For those who are unaware of Cathode, I will post an image in reference to what I would like to have on my terminal: More information as to what Cathode does can be found on this link: Cathode Info | The software you are looking for is called "cool-old-term" and is available on github . It emulates the look of a CRT and is based around konsole (KDE's terminal) and requires QT 5.2 or newer. The readme has instructions for getting it working on Ubuntu 14.04 and Arch. The examples on the github page show a few other variations on the CRT look and the presentation of the text. This video shows off the features of this terminal and the many ways it can look. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/146609",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72902/"
]
} |
146,612 | Ok so I have been trying to upgrade MySQL 5.5 to 5.6. I have been given this my.cnf file (see below). I was able to uninstall the MySQL 5.5 rpms and install the 5.6 ones. When I reload the backed-up my.cnf file the mysql server does not want to start. Is there something wrong with this file? I have run mysql_install_db . However, I cannot get the server to start up with my my.cnf file. I have looked in the error log, but it hasn't been copying anything recent to it. The my.cnf works when it is blank. This is what my my.cnf file from MySQL 5.5 looks like (see below). I need this my.cnf file to work on the 5.6 version. Anyone have any insight?

# You can copy this to one of:
# - "/etc/mysql/my.cnf" to set global options,
# - "~/.my.cnf" to set user-specific options.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# For explanations see
# http://dev.mysql.com/doc/mysql/en/server-system-variables.html

# This will be passed to all mysql clients
# It has been reported that passwords should be enclosed with ticks/quotes
# escpecially if they contain "#" chars...
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.
[client]
port = 3306
socket = /u02/mysqldata1/mysqld_3306.sock
#
#
[mysqld_safe]
socket = /u02/mysqldata1/mysqld_3306.sock
nice = 0
#
#
[mysqld]
#performance_schema=1
user = mysql
socket = /u02/mysqldata1/mysqld_3306.sock
port = 3306
basedir = /usr
datadir = /u02/mysqldata1/data/3306
tmpdir = /u02/mysqldata/tmp
skip-external-locking
character_set_server = utf8
lower_case_table_names = 1
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
# bind-address = 0.0.0.0
max_allowed_packet = 64M
max_connections = 500
max_connect_errors = 100
max_heap_table_size = 256M
tmp_table_size = 256M
net_buffer_length = 8K
sort_buffer_size = 4M
read_buffer_size = 1M
read-rnd-buffer-size = 2M
table_cache = 1024
table-definition-cache = 512
thread_stack = 256K
thread_cache_size = 256
sql-mode = NO_AUTO_CREATE_USER
query_cache_size = 64M
query_cache_limit = 4M
#
# MyISAM Configuration
#
key_buffer_size = 512M
myisam_sort_buffer_size = 128M
ft_min_word_len = 3
myisam_recover_options = BACKUP,FORCE
myisam_repair_threads = 6
myisam_use_mmap = 1
#
# General Logging
#
log_error = /u02/mysqldata/mysqllog/3306/err/error.log
long_query_time = 1
slow_query_log = 1
slow_query_log_file = /u02/mysqldata/mysqllog/3306/slow/mysql_slow.log
general_log = 0
general_log_file = /u02/mysqldata/mysqllog/3306/general/mysql_general.log
#
# Binary Log and Replication Configuration
#
log_bin = /u02/mysqldata/mysqllog/3306/bin/mysql-bin.log
log-bin-index = /u02/mysqldata/mysqllog/3306/bin/mysql_bin.index
binlog-cache-size = 256K
binlog_format = ROW
#binlog_do_db = include_database_name
#binlog_ignore_db = include_database_name
relay-log = /u02/mysqldata/mysqllog/3306/relay/mysql_bin.log
relay-log-index = /u02/mysqldata/mysqllog/3306/relay/mysql_bin.index
expire-logs-days = 7
max_binlog_size = 100M
sync_binlog = 0
#
# Replication
#
server-id=129206
#log-slave-updates
#replicate-ignore-db = mysql
#replicate-ignore-table=
slave_transaction_retries = 20
#slave-skip-errors = 1062
#auto_increment_increment = 3
#auto_increment_offset = 3
#
# InnoDB Configuration
#
innodb_file_format = Barracuda
innodb_file_format_max = Barracuda
innodb_data_home_dir = /u02/mysqldata1/innodb/3306/data
innodb_log_group_home_dir = /u02/mysqldata1/innodb/3306/log
innodb_file_per_table
innodb_data_file_path = ibdata1:10M:autoextend
transaction-isolation = READ-COMMITTED
innodb_flush_log_at_trx_commit = 0
innodb_log_buffer_size = 8M
innodb_log_file_size = 512M
innodb_buffer_pool_size = 500M
innodb_additional_mem_pool_size = 128M
innodb_flush_method = O_DSYNC
innodb_thread_concurrency = 0
innodb_lock_wait_timeout = 60
innodb-open-files = 500
innodb-support-xa = 0
#
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem

[mysqldump]
quick
quote-names
max_allowed_packet = 128M

[mysql]
#no-auto-rehash # faster start of mysql but no tab completition

[isamchk]
key_buffer = 1G | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/146612",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77656/"
]
} |
146,615 | I'm trying to perform a one-liner using at . Basically, I want to send an SMS at some point in the future. Here's my command to send an SMS:

php -r 'include_once("/home/eamorr/open/open.ie/www/newsite/ajax/constants.php");sendCentralSMS("08574930418","hi");'

The above works great! I receive my sms in a couple of seconds. Now, how can I get at to run this command in future? I tried

php -r 'include_once("/home/eamorr/open/open.ie/www/newsite/ajax/constants.php");sendCentralSMS("08574930418","hi");' | at now + 2 minutes

But this sends my command immediately! I want to send the message in 2 minutes' time! | Because that's not how the at command works. at takes the command in via STDIN. What you're doing above is running the script and giving its output (if there is any) to at . This is the functional equivalent of what you're doing:

echo hey | at now + 1 minute

Since echo hey prints out just the word "hey", the word "hey" is all I'm giving at to execute one minute in the future. You probably want to echo the full php command to at instead of running it yourself. In my example:

echo "echo hey" | at now + 1 minute

EDIT: As @Gnouc pointed out, you also had a typo in your at spec. You have to say "now" so it knows what time you're adding 1 minute to. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/146615",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78730/"
]
} |
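The STDIN point above can be visualized without waiting on atd by substituting a stand-in function (fake_at is obviously hypothetical) that simply shows what at would receive:

```shell
# `at` executes whatever arrives on its STDIN. Compare what each
# pipeline actually delivers:
fake_at() { cat; }

echo hey | fake_at          # at would receive the *output*:  hey
echo "echo hey" | fake_at   # at would receive the *command*: echo hey
```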
146,620 | What is the difference between sync and async mount options from the end-user point of view? Does a file system mounted with one of these options work faster than one mounted with the other? Which option is the default, if neither of them is set? man mount says that the sync option may reduce the lifetime of flash memory, but that may be obsolete conventional wisdom. Anyway, this concerns me a bit, because my primary hard drive, where the / and /home partitions are placed, is an SSD. The Ubuntu installer (14.04) did not specify a sync or async option for the / partition, but set async for /home via the option defaults . Here is my /etc/fstab ; I added some additional lines (see comment), but did not change anything in the lines made by the installer:

# / was on /dev/sda2 during installation
UUID=7e4f7654-3143-4fe7-8ced-445b0dc5b742 /            ext4  errors=remount-ro  0  1
# /home was on /dev/sda3 during installation
UUID=d29541fc-adfa-4637-936e-b5b9dbb0ba67 /home        ext4  defaults           0  2
# swap was on /dev/sda4 during installation
UUID=f9b53b49-94bc-4d8c-918d-809c9cefe79f none         swap  sw                 0  0
# here goes part written by me:
# /mnt/storage
UUID=4e04381d-8d01-4282-a56f-358ea299326e /mnt/storage ext4  defaults           0  2
# Windows C: /dev/sda1
UUID=2EF64975F6493DF9                     /mnt/win_c   ntfs  auto,umask=0222,ro 0  0
# Windows D: /dev/sdb1
UUID=50C40C08C40BEED2                     /mnt/win_d   ntfs  auto,umask=0222,ro 0  0

So if my /dev/sda is an SSD, should I - for the sake of reducing wear - add the async option for the / and /home file systems? Should I set the sync or async option for the additional partitions that I defined in my /etc/fstab ? What is the recommended approach for SSD and HDD drives? | async is the opposite of sync , which is rarely used. async is the default, you don't need to specify that explicitly. The option sync means that all changes to the corresponding filesystem are immediately flushed to disk; the respective write operations are being waited for.
For mechanical drives that means a huge slowdown since the system has to move the disk heads to the right position; with sync the userland process has to wait for the operation to complete. In contrast, with async the system buffers the write operation and optimizes the actual writes; meanwhile, instead of being blocked, the process in userland continues to run. (If something goes wrong, then close() returns -1 with errno = EIO .) SSD: I don't know how fast the SSD memory is compared to RAM memory, but certainly it is not faster, so sync is likely to give a performance penalty, although not as bad as with mechanical disk drives. As for the lifetime, the wisdom is still valid, since writing to an SSD a lot "wears" it off. The worst scenario would be a process that makes a lot of changes to the same place; with sync each of them hits the SSD, while with async (the default) the SSD won't see most of them due to the kernel buffering. At the end of the day, don't bother with sync , it's most likely that you're fine with async . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/146620",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
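The wait-per-write cost described above can be felt directly with GNU dd by forcing synchronous writes via oflag=sync (timings vary by device; the paths are just scratch files):

```shell
# Buffered (async-style) writes vs. synchronous writes of the same data.
time dd if=/dev/zero of=/tmp/demo_async bs=4k count=256 status=none
time dd if=/dev/zero of=/tmp/demo_sync  bs=4k count=256 oflag=sync status=none
rm -f /tmp/demo_async /tmp/demo_sync
```

On a mechanical disk the second command is typically far slower, which is the same penalty a sync mount imposes on every write.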
146,628 | A friend of mine has a mdadm-raid5 with 9 disks which does not reassemble anymore. After having a look at the syslog I found that the disk sdi was kicked from the array: Jul 6 08:43:25 nasty kernel: [ 12.952194] md: bind<sdc>Jul 6 08:43:25 nasty kernel: [ 12.952577] md: bind<sdd>Jul 6 08:43:25 nasty kernel: [ 12.952683] md: bind<sde>Jul 6 08:43:25 nasty kernel: [ 12.952784] md: bind<sdf>Jul 6 08:43:25 nasty kernel: [ 12.952885] md: bind<sdg>Jul 6 08:43:25 nasty kernel: [ 12.952981] md: bind<sdh>Jul 6 08:43:25 nasty kernel: [ 12.953078] md: bind<sdi>Jul 6 08:43:25 nasty kernel: [ 12.953169] md: bind<sdj>Jul 6 08:43:25 nasty kernel: [ 12.953288] md: bind<sda>Jul 6 08:43:25 nasty kernel: [ 12.953308] md: kicking non-fresh sdi from array!Jul 6 08:43:25 nasty kernel: [ 12.953314] md: unbind<sdi>Jul 6 08:43:25 nasty kernel: [ 12.960603] md: export_rdev(sdi)Jul 6 08:43:25 nasty kernel: [ 12.969675] raid5: device sda operational as raid disk 0Jul 6 08:43:25 nasty kernel: [ 12.969679] raid5: device sdj operational as raid disk 8Jul 6 08:43:25 nasty kernel: [ 12.969682] raid5: device sdh operational as raid disk 6Jul 6 08:43:25 nasty kernel: [ 12.969684] raid5: device sdg operational as raid disk 5Jul 6 08:43:25 nasty kernel: [ 12.969687] raid5: device sdf operational as raid disk 4Jul 6 08:43:25 nasty kernel: [ 12.969689] raid5: device sde operational as raid disk 3Jul 6 08:43:25 nasty kernel: [ 12.969692] raid5: device sdd operational as raid disk 2Jul 6 08:43:25 nasty kernel: [ 12.969694] raid5: device sdc operational as raid disk 1Jul 6 08:43:25 nasty kernel: [ 12.970536] raid5: allocated 9542kB for md127Jul 6 08:43:25 nasty kernel: [ 12.973975] 0: w=1 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0Jul 6 08:43:25 nasty kernel: [ 12.973980] 8: w=2 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0Jul 6 08:43:25 nasty kernel: [ 12.973983] 6: w=3 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0Jul 6 08:43:25 nasty kernel: [ 12.973986] 5: w=4 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0Jul 6 08:43:25 nasty kernel: [ 
12.973989] 4: w=5 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0Jul 6 08:43:25 nasty kernel: [ 12.973992] 3: w=6 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0Jul 6 08:43:25 nasty kernel: [ 12.973996] 2: w=7 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0Jul 6 08:43:25 nasty kernel: [ 12.973999] 1: w=8 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0Jul 6 08:43:25 nasty kernel: [ 12.974002] raid5: raid level 5 set md127 active with 8 out of 9 devices, algorithm 2 Unfortunately this wasn't recognized and now another drive was kicked (sde): Jul 14 08:02:45 nasty kernel: [ 12.918556] md: bind<sdc>Jul 14 08:02:45 nasty kernel: [ 12.919043] md: bind<sdd>Jul 14 08:02:45 nasty kernel: [ 12.919158] md: bind<sde>Jul 14 08:02:45 nasty kernel: [ 12.919260] md: bind<sdf>Jul 14 08:02:45 nasty kernel: [ 12.919361] md: bind<sdg>Jul 14 08:02:45 nasty kernel: [ 12.919461] md: bind<sdh>Jul 14 08:02:45 nasty kernel: [ 12.919556] md: bind<sdi>Jul 14 08:02:45 nasty kernel: [ 12.919641] md: bind<sdj>Jul 14 08:02:45 nasty kernel: [ 12.919756] md: bind<sda>Jul 14 08:02:45 nasty kernel: [ 12.919775] md: kicking non-fresh sdi from array!Jul 14 08:02:45 nasty kernel: [ 12.919781] md: unbind<sdi>Jul 14 08:02:45 nasty kernel: [ 12.928177] md: export_rdev(sdi)Jul 14 08:02:45 nasty kernel: [ 12.928187] md: kicking non-fresh sde from array!Jul 14 08:02:45 nasty kernel: [ 12.928198] md: unbind<sde>Jul 14 08:02:45 nasty kernel: [ 12.936064] md: export_rdev(sde)Jul 14 08:02:45 nasty kernel: [ 12.943900] raid5: device sda operational as raid disk 0Jul 14 08:02:45 nasty kernel: [ 12.943904] raid5: device sdj operational as raid disk 8Jul 14 08:02:45 nasty kernel: [ 12.943907] raid5: device sdh operational as raid disk 6Jul 14 08:02:45 nasty kernel: [ 12.943909] raid5: device sdg operational as raid disk 5Jul 14 08:02:45 nasty kernel: [ 12.943911] raid5: device sdf operational as raid disk 4Jul 14 08:02:45 nasty kernel: [ 12.943914] raid5: device sdd operational as raid disk 2Jul 14 08:02:45 nasty kernel: [ 12.943916] raid5: device sdc operational as raid 
disk 1Jul 14 08:02:45 nasty kernel: [ 12.944776] raid5: allocated 9542kB for md127Jul 14 08:02:45 nasty kernel: [ 12.944861] 0: w=1 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0Jul 14 08:02:45 nasty kernel: [ 12.944864] 8: w=2 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0Jul 14 08:02:45 nasty kernel: [ 12.944867] 6: w=3 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0Jul 14 08:02:45 nasty kernel: [ 12.944871] 5: w=4 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0Jul 14 08:02:45 nasty kernel: [ 12.944874] 4: w=5 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0Jul 14 08:02:45 nasty kernel: [ 12.944877] 2: w=6 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0Jul 14 08:02:45 nasty kernel: [ 12.944879] 1: w=7 pa=0 pr=9 m=1 a=2 r=9 op1=0 op2=0Jul 14 08:02:45 nasty kernel: [ 12.944882] raid5: not enough operational devices for md127 (2/9 failed) And now the array does not start anymore.However it seems that every disk contains the raid metadata: /dev/sda: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : b8a04dbb:0b5dffda:601eb40d:d2dc37c9 Name : nasty:stuff (local to host nasty) Creation Time : Sun Mar 16 02:37:47 2014 Raid Level : raid5 Raid Devices : 9 Avail Dev Size : 7814035120 (3726.02 GiB 4000.79 GB) Array Size : 62512275456 (29808.18 GiB 32006.29 GB) Used Dev Size : 7814034432 (3726.02 GiB 4000.79 GB) Data Offset : 2048 sectors Super Offset : 8 sectors State : clean Device UUID : 8600bda9:18845be8:02187ecc:1bfad83a Update Time : Mon Jul 14 00:45:35 2014 Checksum : e38d46e8 - correct Events : 123132 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 0 Array State : AAA.AAA.A ('A' == active, '.' 
== missing)/dev/sdc: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : b8a04dbb:0b5dffda:601eb40d:d2dc37c9 Name : nasty:stuff (local to host nasty) Creation Time : Sun Mar 16 02:37:47 2014 Raid Level : raid5 Raid Devices : 9 Avail Dev Size : 7814035120 (3726.02 GiB 4000.79 GB) Array Size : 62512275456 (29808.18 GiB 32006.29 GB) Used Dev Size : 7814034432 (3726.02 GiB 4000.79 GB) Data Offset : 2048 sectors Super Offset : 8 sectors State : clean Device UUID : fe612c05:f7a45b0a:e28feafe:891b2bda Update Time : Mon Jul 14 00:45:35 2014 Checksum : 32bb628e - correct Events : 123132 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 1 Array State : AAA.AAA.A ('A' == active, '.' == missing)/dev/sdd: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : b8a04dbb:0b5dffda:601eb40d:d2dc37c9 Name : nasty:stuff (local to host nasty) Creation Time : Sun Mar 16 02:37:47 2014 Raid Level : raid5 Raid Devices : 9 Avail Dev Size : 7814035120 (3726.02 GiB 4000.79 GB) Array Size : 62512275456 (29808.18 GiB 32006.29 GB) Used Dev Size : 7814034432 (3726.02 GiB 4000.79 GB) Data Offset : 2048 sectors Super Offset : 8 sectors State : clean Device UUID : 1d14616c:d30cadc7:6d042bb3:0d7f6631 Update Time : Mon Jul 14 00:45:35 2014 Checksum : 62bd5499 - correct Events : 123132 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 2 Array State : AAA.AAA.A ('A' == active, '.' 
== missing)/dev/sde: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : b8a04dbb:0b5dffda:601eb40d:d2dc37c9 Name : nasty:stuff (local to host nasty) Creation Time : Sun Mar 16 02:37:47 2014 Raid Level : raid5 Raid Devices : 9 Avail Dev Size : 7814035120 (3726.02 GiB 4000.79 GB) Array Size : 62512275456 (29808.18 GiB 32006.29 GB) Used Dev Size : 7814034432 (3726.02 GiB 4000.79 GB) Data Offset : 2048 sectors Super Offset : 8 sectors State : active Device UUID : a2babca3:1283654a:ef8075b5:aaf5d209 Update Time : Mon Jul 14 00:45:07 2014 Checksum : f78d6456 - correct Events : 123123 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 3 Array State : AAAAAAA.A ('A' == active, '.' == missing)/dev/sdf: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : b8a04dbb:0b5dffda:601eb40d:d2dc37c9 Name : nasty:stuff (local to host nasty) Creation Time : Sun Mar 16 02:37:47 2014 Raid Level : raid5 Raid Devices : 9 Avail Dev Size : 7814035120 (3726.02 GiB 4000.79 GB) Array Size : 62512275456 (29808.18 GiB 32006.29 GB) Used Dev Size : 7814034432 (3726.02 GiB 4000.79 GB) Data Offset : 2048 sectors Super Offset : 8 sectors State : clean Device UUID : e67d566d:92aaafb4:24f5f16e:5ceb0db7 Update Time : Mon Jul 14 00:45:35 2014 Checksum : 9223b929 - correct Events : 123132 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 4 Array State : AAA.AAA.A ('A' == active, '.' 
== missing)/dev/sdg: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : b8a04dbb:0b5dffda:601eb40d:d2dc37c9 Name : nasty:stuff (local to host nasty) Creation Time : Sun Mar 16 02:37:47 2014 Raid Level : raid5 Raid Devices : 9 Avail Dev Size : 7814035120 (3726.02 GiB 4000.79 GB) Array Size : 62512275456 (29808.18 GiB 32006.29 GB) Used Dev Size : 7814034432 (3726.02 GiB 4000.79 GB) Data Offset : 2048 sectors Super Offset : 8 sectors State : clean Device UUID : 2cee1d71:16c27acc:43e80d02:1da74eeb Update Time : Mon Jul 14 00:45:35 2014 Checksum : 7512efd4 - correct Events : 123132 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 5 Array State : AAA.AAA.A ('A' == active, '.' == missing)/dev/sdh: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : b8a04dbb:0b5dffda:601eb40d:d2dc37c9 Name : nasty:stuff (local to host nasty) Creation Time : Sun Mar 16 02:37:47 2014 Raid Level : raid5 Raid Devices : 9 Avail Dev Size : 7814035120 (3726.02 GiB 4000.79 GB) Array Size : 62512275456 (29808.18 GiB 32006.29 GB) Used Dev Size : 7814034432 (3726.02 GiB 4000.79 GB) Data Offset : 2048 sectors Super Offset : 8 sectors State : clean Device UUID : c239f0ad:336cdb88:62c5ff46:c36ea5f8 Update Time : Mon Jul 14 00:45:35 2014 Checksum : c08e8a4d - correct Events : 123132 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 6 Array State : AAA.AAA.A ('A' == active, '.' 
== missing)/dev/sdi: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : b8a04dbb:0b5dffda:601eb40d:d2dc37c9 Name : nasty:stuff (local to host nasty) Creation Time : Sun Mar 16 02:37:47 2014 Raid Level : raid5 Raid Devices : 9 Avail Dev Size : 7814035120 (3726.02 GiB 4000.79 GB) Array Size : 62512275456 (29808.18 GiB 32006.29 GB) Used Dev Size : 7814034432 (3726.02 GiB 4000.79 GB) Data Offset : 2048 sectors Super Offset : 8 sectors State : active Device UUID : d06c58f8:370a0535:b7e51073:f121f58c Update Time : Mon Jul 14 00:45:07 2014 Checksum : 77844dcc - correct Events : 0 Layout : left-symmetric Chunk Size : 512K Device Role : spare Array State : AAAAAAA.A ('A' == active, '.' == missing)/dev/sdj: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : b8a04dbb:0b5dffda:601eb40d:d2dc37c9 Name : nasty:stuff (local to host nasty) Creation Time : Sun Mar 16 02:37:47 2014 Raid Level : raid5 Raid Devices : 9 Avail Dev Size : 7814035120 (3726.02 GiB 4000.79 GB) Array Size : 62512275456 (29808.18 GiB 32006.29 GB) Used Dev Size : 7814034432 (3726.02 GiB 4000.79 GB) Data Offset : 2048 sectors Super Offset : 8 sectors State : clean Device UUID : f2de262f:49d17fea:b9a475c1:b0cad0b7 Update Time : Mon Jul 14 00:45:35 2014 Checksum : dd0acfd9 - correct Events : 123132 Layout : left-symmetric Chunk Size : 512K Device Role : Active device 8 Array State : AAA.AAA.A ('A' == active, '.' == missing) But as you can see the two drives (sde, sdi) are in active state (but raid is stopped) and sdi is a spare.While sde has a slightly lower Events-count than most of the other drives (123123 instead of 123132) sdi has an Events-count of 0. So I think sde is almost up-to-date. But sdi not ... Now we read online that a hard power-off could cause these "kicking non-fresh"-messages. And indeed my friend caused a hard power-off one or two times. 
So we followed the instructions we found online and tried to re-add sde to the array: $ mdadm /dev/md127 --add /dev/sdemdadm: add new device failed for /dev/sde as 9: Invalid argument But that failed and now mdadm --examine /dev/sde shows an Events-count of 0 for sde too (+ it's a spare now like sdi): /dev/sde: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : b8a04dbb:0b5dffda:601eb40d:d2dc37c9 Name : nasty:stuff (local to host nasty) Creation Time : Sun Mar 16 02:37:47 2014 Raid Level : raid5 Raid Devices : 9 Avail Dev Size : 7814035120 (3726.02 GiB 4000.79 GB) Array Size : 62512275456 (29808.18 GiB 32006.29 GB) Used Dev Size : 7814034432 (3726.02 GiB 4000.79 GB) Data Offset : 2048 sectors Super Offset : 8 sectors State : clean Device UUID : 689e0030:142122ae:7ab37935:c80ab400 Update Time : Mon Jul 14 00:45:35 2014 Checksum : 5e6c4cf7 - correct Events : 0 Layout : left-symmetric Chunk Size : 512K Device Role : spare Array State : AAA.AAA.A ('A' == active, '.' == missing) We know that 2 failed drives usually means death for a RAID5. However, is there a way to add at least sde to the raid so that data can be saved? | OK, it looks like we now have access to the raid. At least the first files we checked looked good. So here is what we have done: The raid recovery article on the kernel.org wiki suggests two possible solutions for our problem: using --assemble --force (also mentioned by derobert) The article says: [...] If the event count differs by less than 50, then the information on the drive is probably still ok. [...] If the event count closely matches but not exactly, use "mdadm --assemble --force /dev/mdX " to force mdadm to assemble the array [...]. If the event count of a drive is way off [...] that drive [...] shouldn't be included in the assembly. In our case the drive sde had an event difference of 9. So there was a good chance that --force would work.
However after we executed the --add command the event count dropped to 0 and the drive was marked as spare. So we decided against using --force . recreate the array This solution is explicitly marked as dangerous because you can lose data if you do something wrong. However this seemed to be the only option we had. The idea is to create a new raid on the existing raid-devices (that is overwriting the device's superblocks) with the same configuration as the old raid and explicitly tell mdadm that the raid has already existed and should be assumed as clean. Since the event count difference was just 9 and the only problem was that we lost the superblock of sde there were good chances that writing new superblocks would get us access to our data... and it worked :-) Our solution Note: This solution was specially geared to our problem and may not work on your setup. You should take these notes to get an idea on how things can be done. But you need to research what's best in your case. Backup We already lost a superblock. So this time we saved the first and last gigabyte of each raid device ( sd[acdefghij] ) using dd before working on the raid. We did this for each raid device: # save the first gigabyte of sdadd if=/dev/sda of=bak_sda_start bs=4096 count=262144# determine the size of the devicefdisk -l /dev/sda# In this case the size was 4000787030016 byte.# To get the last gigabyte we need to skip everything except the last gigabyte.# So we need to skip: 4000787030016 byte - 1073741824 byte = 3999713288192 byte# Since we read blocks of 4096 byte we need to skip 3999713288192/4096=976492502 blocks.dd if=/dev/sda of=bak_sda_end bs=4096 skip=976492502 Gather information When recreating the raid it is important to use the same configuration as the old raid. This is especially important if you want to recreate the array on another machine using a different mdadm version.
In this case mdadm's default values may be different and could create superblocks that do not fit the existing raid (see the wiki article). In our case we use the same machine (and thus the same mdadm-version) to recreate the array. However the array was created by a 3rd party tool in the first place. So we didn't want to rely on default values here and had to gather some information about the existing raid. From the output of mdadm --examine /dev/sd[acdefghij] we get the following information about the raid (Note: sdb was the ssd containing the OS and was not part of the raid): Raid Level : raid5 Raid Devices : 9 Used Dev Size : 7814034432 (3726.02 GiB 4000.79 GB) Data Offset : 2048 sectors Super Offset : 8 sectors Layout : left-symmetric Chunk Size : 512K Device Role : Active device 0 The Used Dev Size is denominated in blocks of 512 byte. You can check this: 7814034432*512/1000000000 ~= 4000.79 But mdadm requires the size in Kibibytes: 7814034432*512/1024 = 3907017216 The Device Role is important. In the new raid each device must have the same role as before. In our case: device role------ ----sda 0sdc 1sdd 2sde 3sdf 4sdg 5sdh 6sdi sparesdj 8 Note: Drive letters (and thus the order) can change after reboot! We also need the layout and the chunk size in the next step. Recreate raid We can now use the information of the last step to recreate the array: mdadm --create --assume-clean --level=5 --raid-devices=9 --size=3907017216 \ --chunk=512 --layout=left-symmetric /dev/md127 /dev/sda /dev/sdc /dev/sdd \ /dev/sde /dev/sdf /dev/sdg /dev/sdh missing /dev/sdj It is important to pass the devices in the correct order! Moreover we did not add sdi as its event count was too low. So we set the 7th raid slot to missing . Thus the raid5 contains 8 of 9 devices and will be assembled in degraded mode. And because it lacks a spare device no rebuild will automatically start. Then we used --examine to check if the new superblocks fit to our old superblocks.
And it did :-) We were able to mount the filesystem and read the data. The next step is to backup the data and then add back sdi and start the rebuild. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146628",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65841/"
]
} |
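The --size=3907017216 passed to mdadm --create in the answer above comes from the Used Dev Size that --examine reports in 512-byte sectors; the conversion is easy to sanity-check in the shell before running the (destructive) create command:

```shell
blocks=7814034432              # "Used Dev Size" from mdadm --examine, in 512-byte sectors
bytes=$(( blocks * 512 ))
kib=$(( bytes / 1024 ))        # this is the value to pass to mdadm --create --size=...
echo "bytes=$bytes KiB=$kib"
```

This prints bytes=4000785629184 KiB=3907017216, matching the figure used in the answer.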
146,631 | At work, I have a desktop with a monitor and a few running tmux sessions. At home, I frequently ssh into that desktop and enter my running tmux sessions. When I ssh from home, I do not want to use X11, so I do not use the -X flag. When I go back to work (after ssh'ing from home) and use those tmux sessions on desktop, I can no longer do anything that would spawn a GUI. I can't open files in evince. When I try to use matplotlib, I get a : cannot connect to X server message. After ssh'ing and opening an existing tmux session from home, how do I later reattach the ability to open up GUI stuff on the desktop? The ssh'ing from home seems to make the tmux session forget that it can spawn GUI stuff. | All I need to do is set the DISPLAY environment variable to :0.0. I think the issue was that I am using the fish shell, and I need to use the -x flag with set when doing this: set -x DISPLAY :0.0 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146631",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78738/"
]
} |
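For bash (or any POSIX shell) inside the tmux session, the equivalent of the fish command in the answer above is a plain export; a minimal sketch:

```shell
export DISPLAY=:0.0            # point X clients at the local display again
echo "DISPLAY is now: $DISPLAY"
# Panes created later can be covered too, if a tmux server is running:
#   tmux set-environment DISPLAY :0.0
```

The export only affects the current pane's shell; the commented tmux line updates the session environment for newly created panes.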
146,633 | I'm trying to create a new user on a Centos 6 system. First, I do useradd kevin Then, I tried to run commands as that user su - kevin However, I get the following error messages -bash: /dev/null: Permission denied-bash: /dev/null: Permission denied-bash: /dev/null: Permission denied-bash: /dev/null: Permission denied-bash: /dev/null: Permission denied-bash: /dev/null: Permission denied[kevin@gazelle ~]$ And I can't do very much as that user. The permissions on /dev/null are as follows: -rwxr-xr-x 1 root root 9 Jul 25 17:07 null Roughly the same as they are on my Mac, crw-rw-rw- 1 root wheel 3, 2 Jul 25 14:08 null It's possible, but really unlikely, that I touched dev. As the root user, I tried adding kevin to the root group: usermod -a -G root kevin However, I am still getting /dev/null permission denied errors. Why can't the new user write to /dev/null ? What groups should the new user be a part of? Am I not impersonating the user correctly? Is there a beginner's guide to setting up users/permissions on Linux? | Someone evidently moved a regular file to /dev/null. Rebooting will recreate it, or do rm -f /dev/null; mknod -m 666 /dev/null c 1 3 As @Flow has noted in a comment, you must be root to do this. 1 and 3 here are the device major and minor number on Linux-based OSes (the 3rd device handled by the mem driver, see /proc/devices , cat /sys/devices/virtual/mem/null/dev , readlink /sys/dev/char/1:3 ). It varies with the OS. For instance, it's 2 , 2 on OpenBSD and AIX , and it may also not always be the same on a given OS. Some OSes may supply a makedev / MAKEDEV command to help recreate them. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/146633",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9519/"
]
} |
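A quick way to verify the repaired /dev/null behaves as expected (it should be a character device, writable by everyone):

```shell
ls -l /dev/null                          # should start with "c" (character device); major/minor 1, 3 on Linux
[ -c /dev/null ] && echo "character device: ok"
echo discarded > /dev/null && echo "writable: ok"
```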
146,657 | My plan was to dual boot Windows 8.1 and Debian. Currently I have Windows installed. My disk can only have 4 partitions. One is system reserved. I want to give both OS their own 64GB partitions, and have the 3rd partition serve as 'Media' drive shared by both OS (it will serve as a pseudo user home for pics, documents, etc) So that means all 4 partitions would be put to use. Debian MUST be installed on ONLY the partition I reserved for it. When I install Debian, how can I ensure that it ONLY installs on that partition? Because I know that it can use an extra one for swap. Also, doing this won't interfere with my Windows installation, will it? | Debian (or any other Linux distribution) will work without a swap partition - just don't create one during installation. You can use a swap file instead of a partition. The swap file can be prepared by the following example commands: sudo fallocate -l 512MB /swapfile sudo mkswap /swapfile The first one creates a 512 MB file, the second one formats it as a swap area. After creating the swap file, add it to /etc/fstab with a line like this: /swapfile none swap sw 0 0 That's it! After reboot your system will use /swapfile as a swap area. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146657",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78745/"
]
} |
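Two details worth adding to the recipe above: the swap file should not be world-readable (swapon warns about insecure permissions otherwise), and the file can be enabled immediately without rebooting. A sketch using a throwaway demo path instead of /swapfile:

```shell
fallocate -l 16M /tmp/swapfile-demo      # demo size/path; use the real size and /swapfile in practice
chmod 600 /tmp/swapfile-demo             # swapon warns about looser permissions
ls -l /tmp/swapfile-demo
# Then, as root, format and enable it right away:
#   mkswap /swapfile && swapon /swapfile && swapon --show
```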
146,671 | I know this has probably been asked before, but I couldn't find it with Google. Given Linux Kernel No configurations that change $HOME bash Will ~ == $HOME be true? | What's important to understand is that ~ expansion is a feature of the shell (of some shells), it's not a magic character than means your home directory wherever it's used. It is expanded (by the shell, which is an application used to interpret command lines), like $var is expanded to its value under some conditions when used in a shell command line before the command is executed. That feature first appeared in the C-shell in the late 1970s (the Bourne shell didn't have it, nor did its predecessor the Thompson shell), was later added to the Korn shell (a newer shell built upon the Bourne shell in the 80s). It was eventually standardized by POSIX and is now available in most shells including non-POSIX ones like fish . Because it's in such widespread use in shells, some non-shell applications also recognise it as meaning the home directory. That's the case of many applications in their configuration files or their own command line ( mutt , slrn , vim ...). bash specifically (which is the shell of the GNU project and widely used in many Linux-based operating systems), when invoked as sh , mostly follows the POSIX rules about ~ expansion, and in areas not specified by POSIX, behaves mostly like the Korn shell (of which it is a part clone). While $var is expanded in most places (except inside single quotes), ~ expansion, being an afterthought is only expanded in a few specific conditions. It is expanded when on its own argument in list contexts, in contexts where a string is expected. Here are a few examples of where it's expanded in bash : cmd arg ~ other arg var=~ var=x:~:x (required by POSIX, used for variables like PATH , MANPATH ...) for i in ~ [[ ~ = text ]] [[ text = ~ ]] (the expansion of ~ being taken as a pattern in AT&T ksh but not bash since 4.0). case ~ in ~) ... 
${var#~} (though not in some other shells) cmd foo=~ (though not when invoked as sh , and only when what's on the left of the = is shaped like an unquoted bash variable name) cmd ~/x (required by POSIX obviously) cmd ~:x (but not x:~:x or x-~-x ) a[~]=foo; echo "${a[~]} $((a[~]))" (not in some other shells) Here are a few examples where it's not expanded: echo "~" '~' echo ~@ ~~ (also note that ~u is meant to expand to the home directory of user u ). echo @~ (( HOME == ~ )) , $(( var + ~ )) with extglob : case $var in @(~|other))... (though case $var in ~|other) is OK). ./configure --prefix=~ (as --prefix is not a valid variable name) cmd "foo"=~ (in bash , because of the quotes). when invoked as sh : export "foo"=~ , env JAVA_HOME=~ cmd ... As to what it expands to: ~ alone expands to the content of the HOME variable, or when it is not set, to the home directory of the current user in the account database (as an extension since POSIX leaves that behaviour undefined). It should be noted that in ksh88 and bash versions prior to 4.0, tilde expansion underwent globbing (filename generation) in list contexts: $ bash -c 'echo "$HOME"'/home/***stephane***$ bash -c 'echo ~'/home/***stephane*** /home/stephane$ bash -c 'echo "~"'~ That should not be a problem in usual cases. Note that because it's expanded, the same warning applies as other forms of expansions. cd ~ Doesn't work if $HOME starts with - or contains .. components. So, even though it's very unlikely to ever make any difference, strictly speaking, one should write: cd -P -- ~ Or even: case ~ in (/*) cd -P ~;; (*) d=~; cd -P "./$d";;esac (to cover for values of $HOME like - , +2 ...) or simply: cd (as cd takes you to your home directory without any argument) Other shells have more advanced ~ expansions. For instance, in zsh , we have: ~4 , ~- , ~-2 (with completion) used to expand the directories in your directory stack (the places you've cd to before). dynamic named directories . 
You can define your own mechanism to decide how ~something is being expanded. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/146671",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78754/"
]
} |
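The quoting rules described above are easy to check interactively; a small sketch that overrides HOME only inside each demo command:

```shell
demo_home=/tmp/tilde-demo
unquoted=$(HOME=$demo_home bash -c 'echo ~')          # expanded: bare tilde in a list context
quoted=$(HOME=$demo_home bash -c 'echo "~"')          # not expanded: inside double quotes
assigned=$(HOME=$demo_home bash -c 'v=~; echo "$v"')  # expanded: tilde at the start of an assignment value
printf '%s\n' "$unquoted" "$quoted" "$assigned"
```

The first and third lines print /tmp/tilde-demo; the second prints a literal ~.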
146,690 | When using bash completion and a number of characters have been entered, tabbing ceases to work when the prefix you have typed is matched by more than one of the possibilities. Is there a way to cycle through the alternatives of the prefix you have entered? | Bind the Tab key to the menu-complete command instead of the default complete . Put the following line in your ~/.bashrc : bind '"\C-i": menu-complete' or the following line in your ~/.inputrc (this will apply to all programs that use the readline library, not just bash): "\C-i": menu-complete | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/146690",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26026/"
]
} |
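A slightly fuller ~/.inputrc sketch pairs the forward cycle with a reverse cycle on Shift-Tab (the \e[Z sequence most terminals send for it) and shows all matches before cycling; the command and variable names here are standard readline, but check the key sequence in your own terminal:

```
TAB: menu-complete
"\e[Z": menu-complete-backward
set show-all-if-ambiguous on
```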
146,699 | I've made several aliases for my convenience. But now I need to be helpful by sending a useful command and it is full of aliases. I've tried doing type u but all it returns is up && ap upgrade -y --show-progress && r && ap check && ap autoclean These are all aliases in u : alias a='alias'a ap='apt-get'a r='ap autoremove -y'a up='ap update' | Press Ctrl-Alt-e with a command using your aliases written (ready to run) and Bash will expand it . Ctrl-Alt-e is the default binding for the shell-expand-line readline command. Each time you push Ctrl-Alt-e Bash will expand one layer of aliases, so push it repeatedly until your command is expanded as far as you need. If your Meta key is not Alt , substitute it instead, or press Escape Ctrl-e . There is also an alias-expand-line function which is not bound by default, which only expands aliases. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/146699",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78459/"
]
} |
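To give the unbound alias-expand-line function mentioned above a key of its own, a line like the following can go in ~/.bashrc (the Ctrl-x a chord here is an arbitrary free key chosen for illustration, not a default):

```
bind '"\C-xa": alias-expand-line'
```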
146,710 | After having installed the MySQL database on openSUSE I realized that for all files in /usr/bin the owner was changed to the "mysql" user of the "mysql" group. Maybe this was a mistake of mine. The worst problem was with the /usr/bin/sudo command, which obviously did not work, but I've returned ownership to root (having logged in as root ) and it is OK now. Should I change the owner of all files in /usr/bin to root or may this cause some malfunctioning of other programs? Should they also have the "Set UID" option marked in the Privileges tab as sudo does? | Yes, all files under /usr should be owned by root, except that files under /usr/local may or may not be owned by root depending on site policies. It's normal for root to own files that only a system administrator is supposed to modify. There are a few files that absolutely need to be owned by root or else your system won't work properly. These are setuid root executables, which run as root no matter who invoked them. Common setuid root binaries include su and sudo (programs to run another program as a different user, after authentication), sudoedit (a companion to sudo to edit files rather than run arbitrary programs), and programs to modify user accounts ( passwd , chsh , chfn ). In addition, a number of programs need to run with additional group privileges, and need to be owned by the appropriate group (and by the root user) and have the setgid bit set. You can, and should, restore proper permissions from the package database. If you attempt to repair manually, you're bound to miss something and leave some hard-to-diagnose bugs lying around. Run the following commands: rpm -qa | xargs rpm --setugids --setperms | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146710",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78775/"
]
} |
146,732 | I am running DWM under Arch Linux in combination with the urxvt terminal. I have a urxvt daemon running but when I press the key combination for opening a terminal window, nothing happens. Is there an error log file for DWM? Any suggestions what I can do to find out why no terminal is opening? Thanks! Edit: My config.c file: /* appearance */static const char font[] = "-*-terminus-medium-r-normal-*- 12-*-*-*-*-*-*-*";static const char normbordercolor[] = "#000000";static const char normbgcolor[] = "#3f3f3f";static const char normfgcolor[] = "#dfaf8f";static const char selbordercolor[] = "#cc0000";static const char selbgcolor[] = "#2b2b2b";static const char selfgcolor[] = "#f0dfaf";static const unsigned int borderpx = 1; /* border pixel of windows */static const unsigned int snap = 0; /* snap pixel */static const Bool showbar = True; /* False means no bar */static const Bool topbar = True; /* False means bottom bar *//* tagging */static const char *tags[] = { "term", "work", "www", "mail"};static const Rule rules[] = {/* class instance title tags mask isfloating monitor */{ "Gimp", NULL, NULL, 0, True, -1 },{ "Firefox", NULL, NULL, 1 << 8, False, -1 },};/* layout(s) */static const float mfact = 0.55; /* factor of master area size [0.05..0.95] */static const int nmaster = 1; /* number of clients in master area */static const Bool resizehints = False; /* True means respect size hints in tiled resizals */static const Layout layouts[] = {/* symbol arrange function */{ "[]=", tile }, /* first entry is default */{ "><>", NULL }, /* no layout function means floating behavior */{ "[M]", monocle },};/* key definitions */#define MODKEY Mod4Mask#define TAGKEYS(KEY,TAG) \{ MODKEY, KEY, view, {.ui = 1 << TAG} }, \{ MODKEY|ControlMask, KEY, toggleview, {.ui = 1 << TAG} }, \{ MODKEY|ShiftMask, KEY, tag, {.ui = 1 << TAG} }, \{ MODKEY|ControlMask|ShiftMask, KEY, toggletag, {.ui = 1 << TAG} },/* helper for spawning shell commands in the pre dwm-5.0 fashion */#define 
SHCMD(cmd) { .v = (const char*[]){ "/bin/sh", "-c", cmd, NULL } }/* commands */static const char *dmenucmd[] = { "dmenu_run", "-fn", font, "-nb", normbgcolor, "-nf", normfgcolor, "-sb", selbgcolor, "-sf", selfgcolor, NULL };static const char *termcmd[] = { "urxvtc", NULL };static Key keys[] = {/* modifier key function argument */{ MODKEY, XK_p, spawn, {.v = dmenucmd } },{ MODKEY|ShiftMask, XK_Return, spawn, {.v = termcmd } },{ MODKEY, XK_b, togglebar, {0} },{ MODKEY, XK_j, focusstack, {.i = +1 } },{ MODKEY, XK_k, focusstack, {.i = -1 } },{ MODKEY, XK_i, incnmaster, {.i = +1 } },{ MODKEY, XK_d, incnmaster, {.i = -1 } },{ MODKEY, XK_h, setmfact, {.f = -0.05} },{ MODKEY, XK_l, setmfact, {.f = +0.05} },{ MODKEY, XK_Return, zoom, {0} },{ MODKEY, XK_Tab, view, {0} },{ MODKEY|ShiftMask, XK_c, killclient, {0} },{ MODKEY, XK_t, setlayout, {.v = &layouts[0]} },{ MODKEY, XK_f, setlayout, {.v = &layouts[1]} },{ MODKEY, XK_m, setlayout, {.v = &layouts[2]} },{ MODKEY, XK_space, setlayout, {0} },{ MODKEY|ShiftMask, XK_space, togglefloating, {0} },{ MODKEY, XK_0, view, {.ui = ~0 } },{ MODKEY|ShiftMask, XK_0, tag, {.ui = ~0 } },{ MODKEY, XK_comma, focusmon, {.i = -1 } },{ MODKEY, XK_period, focusmon, {.i = +1 } },{ MODKEY|ShiftMask, XK_comma, tagmon, {.i = -1 } },{ MODKEY|ShiftMask, XK_period, tagmon, {.i = +1 } },TAGKEYS( XK_1, 0)TAGKEYS( XK_2, 1)TAGKEYS( XK_3, 2)TAGKEYS( XK_4, 3)TAGKEYS( XK_5, 4)TAGKEYS( XK_6, 5)TAGKEYS( XK_7, 6)TAGKEYS( XK_8, 7)TAGKEYS( XK_9, 8){ MODKEY|ShiftMask, XK_q, quit, {0} },};/* button definitions *//* click can be ClkLtSymbol, ClkStatusText, ClkWinTitle, ClkClientWin, or ClkRootWin */static Button buttons[] = {/* click event mask button function argument */{ ClkLtSymbol, 0, Button1, setlayout, {0} },{ ClkLtSymbol, 0, Button3, setlayout, {.v = &layouts[2]} },{ ClkWinTitle, 0, Button2, zoom, {0} },{ ClkStatusText, 0, Button2, spawn, {.v = termcmd } },{ ClkClientWin, MODKEY, Button1, movemouse, {0} },{ ClkClientWin, MODKEY, Button2, togglefloating, {0} },{ 
ClkClientWin, MODKEY, Button3, resizemouse, {0} },{ ClkTagBar, 0, Button1, view, {0} },{ ClkTagBar, 0, Button3, toggleview, {0} },{ ClkTagBar, MODKEY, Button1, tag, {0} },{ ClkTagBar, MODKEY, Button3, toggletag, {0} },}; | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146732",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78703/"
]
} |
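One common cause with this exact setup: the bound command is urxvtc, which exits silently when no urxvtd daemon is reachable, so the keybinding appears dead. A quick check/start sketch (urxvtd flags: -q quiet, -f fork, -o open display):

```shell
if pgrep -x urxvtd >/dev/null; then
  echo "urxvtd is running - the urxvtc keybinding should work"
else
  echo "urxvtd is NOT running - start it (e.g. in ~/.xinitrc) with: urxvtd -q -f -o"
fi
```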
146,743 | I have written a script to recursively convert .jpg files to another size: echo $remkdir "$re"_tmpfor a in *.jpg ; do convert "$a" -resize $re "$re""_tmp/${a%.*} ["$re"].jpg" ; done I'd like to integrate multi-extension support: png, bmp, etc. better with: FILEFORMAT="jpg, JPG, png, PNG, bmp, BMP" any idea how to build it? PS: variable re is the new size 1024x768 (or 800x600, etc) | If I understand right, you want to process files with other extensions, instead of only jpg . So you can try: for a in *.{jpg,JPG,png,PNG,bmp,BMP}; do printf '%s\n' "$a" # do your stuff heredone {...} is a bash feature called brace expansion . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/146743",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40628/"
]
} |
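A POSIX-leaning variant of the same idea avoids the bash-only brace expansion and skips patterns that match nothing; the echo makes it a dry run (drop it to actually run convert), and the sample files here are made up for demonstration:

```shell
cd "$(mktemp -d)"                        # demo sandbox; in real use, run in your picture directory
touch a.jpg b.PNG note.txt               # hypothetical sample files for the dry run
re=1024x768
mkdir -p "${re}_tmp"
for a in *.jpg *.JPG *.png *.PNG *.bmp *.BMP; do
  [ -e "$a" ] || continue                # unmatched globs stay literal; skip them
  echo convert "$a" -resize "$re" "${re}_tmp/${a%.*} [$re].jpg"
done
```

Only a.jpg and b.PNG produce output lines; note.txt is never matched.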
146,749 | Let's say I have a string like this: title="2010-09-11 11:22:45Z" How can I grep the date itself and disregard the quotes/title/ Z ? The file can contain more strings like: randomstringtitle="2010-09-11 11:22:45Z"title="disregard me" So I only want to grep timestamps with a single grep command. | With GNU grep , you can do: $ echo 'title="2010-09-11 11:22:45Z"' | grep -oP 'title="\K[^"]+'2010-09-11 11:22:45Z | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/146749",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72471/"
]
} |
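The \K construct above needs GNU grep's -P (PCRE) mode. Where that is unavailable, a plain ERE matching the timestamp shape works too; note the hedge that it matches a date-like string anywhere in the input, not only after title=:

```shell
printf '%s\n' 'randomstring' 'title="2010-09-11 11:22:45Z"' 'title="disregard me"' |
  grep -oE '[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}Z'
# to also drop the trailing Z, append:  | sed 's/Z$//'
```

This prints 2010-09-11 11:22:45Z, the same result as the -P version.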
146,756 | I have a Bash script, which looks similar to this: #!/bin/bashecho "Doing some initial work....";/bin/start/main/server --nodaemon Now if the bash shell running the script receives a SIGTERM signal, it should also send a SIGTERM to the running server (which blocks, so no trap possible). Is that possible? | Try: #!/bin/bash _term() { echo "Caught SIGTERM signal!" kill -TERM "$child" 2>/dev/null}trap _term SIGTERMecho "Doing some initial work...";/bin/start/main/server --nodaemon &child=$! wait "$child" Normally, bash will ignore any signals while a child process is executing. Starting the server with & will background it into the shell's job control system, with $! holding the server's PID (to be used with wait and kill ). Calling wait will then wait for the job with the specified PID (the server) to finish, or for any signals to be fired . When the shell receives SIGTERM (or the server exits independently), the wait call will return (exiting with the server's exit code, or with the signal number + 128 in case a signal was received). Afterward, if the shell received SIGTERM, it will call the _term function specified as the SIGTERM trap handler before exiting (in which we do any cleanup and manually propagate the signal to the server process using kill ). | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/146756",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70752/"
]
} |
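The pattern above can be exercised end-to-end with sleep standing in for the blocking server; 143 is 128 + SIGTERM(15), the status the wrapper forwards. Timings here are demo choices:

```shell
bash -c '
  _term() { kill -TERM "$child" 2>/dev/null; wait "$child"; exit 143; }
  trap _term TERM
  sleep 100 &                  # stand-in for the blocking server
  child=$!
  wait "$child"
' &
pid=$!
sleep 1                        # give the wrapper time to install its trap
kill -TERM "$pid"
wait "$pid" && st=0 || st=$?
echo "wrapper exited with status $st"   # expect 143 (128 + 15)
```

Because the wrapper is blocked in wait rather than directly in the server, the TERM signal interrupts it immediately and the handler propagates the signal to the child.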
146,760 | For the purpose of testing, I'd like to count how many image files are inside a directory, separating each image file type by file extension (jpg="yes" ; this is because later it will be useful for another script that will execute an action on each file extension). Can I use something like the following for only JPEG files? jpg=""count=`ls -1 *.jpg 2>/dev/null | wc -l`if [ $count != 0 ]thenecho jpg files found: $count ; jpg="yes"fi Considering file extensions jpg, png, bmp, raw and others, should I use a while loop to do this? | My approach would be: List all files in the directory Extract their extension Sort the result Count the occurrences of each extension Sort of like this (last awk call is purely for formatting): ls -q -U | awk -F . '{print $NF}' | sort | uniq -c | awk '{print $2,$1}' (assuming GNU ls here for the -U option to skip sorting as an optimisation. It can be safely removed without affecting functionality if not supported). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/146760",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78795/"
]
} |
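A quick way to sanity-check the pipeline from the answer above; the scratch directory and sample file names are made up for the demo, and the GNU-only -U flag is dropped here for portability, as the answer allows:

```shell
# Build a scratch directory with a known mix of extensions.
dir=/tmp/extcount_demo
rm -rf "$dir" && mkdir -p "$dir"
touch "$dir/a.jpg" "$dir/b.jpg" "$dir/c.png" "$dir/d.bmp"

# Extension followed by its count, one line per extension.
counts=$(cd "$dir" && ls -q | awk -F . '{print $NF}' | sort | uniq -c | awk '{print $2,$1}')
echo "$counts"
```

Because ls writes one name per line when its output is a pipe, awk sees exactly one filename per record.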
146,773 | n=0
((n++)); echo "ret=$?;n=$n;"
((n++)); echo "ret=$?;n=$n;"
((n++)); echo "ret=$?;n=$n;"

From n=1 on, ((n++)) works correctly; only when n=0 does ((n++)) return an error, and I am using a trap '' ERR that is causing trouble with that. Is it some bug? | It's because the return value of (( expression )) is not used for error indication. From the bash manpage:

(( expression ))
The expression is evaluated according to the rules described below under ARITHMETIC EVALUATION. If the value of the expression is non-zero, the return status is 0; otherwise the return status is 1. This is exactly equivalent to let "expression".

So, in your case, because the value of the expression is zero (n++ evaluates to the old value of n, which is 0), the return status of (( ... )) is 1. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/146773",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30352/"
]
} |
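The behaviour above is easy to reproduce; since ((...)) is bash syntax, the probes below run through bash -c explicitly:

```shell
# n++ evaluates to the OLD value of n, so from n=0 the expression
# value is 0 and the arithmetic command returns status 1.
post0=$(bash -c 'n=0; ((n++)); echo $?')
post1=$(bash -c 'n=1; ((n++)); echo $?')
# Pre-increment evaluates to the NEW value, so it is safe from n=0.
pre0=$(bash -c 'n=0; ((++n)); echo $?')
echo "n=0 post: $post0, n=1 post: $post1, n=0 pre: $pre0"
```

With set -e or an ERR trap in play, writing ((++n)) or n=$((n+1)) avoids the spurious failure on the first increment.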
146,784 | I have had some rather bad experiences with GRUB2. I could say (and have said) some nasty things about its design and development process. I especially dislike its means of update: for whatever reason it must semi-automatically update several scripts - one indirectly via another in a chain - for every kernel update - or many other minor (and seemingly unrelated) configuration alterations. This is directly contrasted by previous experiences I had with LILO - to which I am seriously considering reverting - as I never had any problems with it, and its configuration was pretty simple. For one thing, as I remember it, I had only to update (or, rather, it only ever updated) a single, simply-managed configuration text-file per kernel-update. So how does LILO work on modern hardware with today's kernels? How does GRUB? How do other bootloaders? Do I have to fulfill any preconditions, or is it just about writing the configuration file and running the lilo command as I fondly remember it in the old days? Does the kernel package update (Debian/Ubuntu) update LILO as it does with GRUB2? | ELILO

Managing EFI Boot Loaders for Linux: Using ELILO

It's really difficult for me to decide which part of that to copy+paste because it's all really good, so I'll just ask you please to read it.

Rod Smith

Authored and maintains both gdisk and rEFInd. He's an accomplished technical writer, and if you've ever googled the topic of UEFI booting and wound up not reading something of his, it was likely because you skipped the top several results.

Linux UEFI boot

Basically, the Linux kernel can be directly executed by the firmware. In the link above he mentions the Linux kernel's EFI stub loader - this is what you should be using, in my opinion, as it allows the linux kernel to be called directly by the firmware itself.

Regardless of what you're doing, something is being executed by the firmware - and it sounds like that something is grub. If the firmware can directly load your os kernel, what good is a bootloader? UEFI firmware mounts a FAT-formatted GPT partition flagged esp by the partition table and executes a path there it has saved as a UEFI boot variable in an onboard flash memory module. So one thing you might do is put the linux kernel on that FAT partition and store its path in that boot variable. Suddenly the kernel is its own bootloader.

Bootloaders

On UEFI systems, bootloaders are redundant - ELILO included. The problem bootloaders were designed to solve was that BIOS systems only read in the first sector of the boot-flagged partition and execute it. It's a little difficult to do anything meaningful with a 512 byte kernel, so the common thing to do was write a tiny utility that could mount a filesystem where you kept the actual kernel and chainload it. In fact, the 512 bytes was often not enough even for the bootloaders. grub, for instance, actually chainloads itself before ever chainloading your kernel, because it wedges its second stage in the empty space between the boot sector and the first sector of your filesystem. It's kind of a dirty hack - but it worked.

Bootmanagers

For the sake of easy configuration though, some go-between can be useful. What Rod Smith's rEFInd does is launch as an EFI application - this is a relatively new concept. It is a program that is executed from disk by - and that returns to - the firmware. What rEFInd does is allow you to manage boot menus and then returns your boot selection to the firmware to execute. It comes with UEFI filesystem drivers - so, for instance, you can use the kernel's EFI-stub loader on a non-FAT partition (such as your current /boot). It is dead simple to manage - if such a thing is necessary at all - and it adds the simplicity of an executable system kernel to the convenience of a configurable bootmanager.
Atomic Indirection

The kernel doesn't need symlinks - it can mount --bind. If there's any path on your / where you should disallow symlinking, it is /boot. An orphaned symlink in /boot is not the kind of problem you should ever have to troubleshoot. Still, it is a common enough practice to set up elaborate indirections in /boot by several distributions - even if it is a horrible idea - in order to handle in-place kernel updates and/or multiple kernel configurations. This is a problem for EFI systems not configured to load filesystem drivers (such as are provided with the rEFInd package) because FAT is a fairly stupid filesystem overall, and it does not understand them. I don't personally use the UEFI filesystem drivers provided with rEFInd, though most distributions include a rEFInd package that can be installed via package manager and forgotten about, just using their own awful symlinked /boot config and rEFInd's packaged UEFI filesystem drivers.

My Config

I once wrote a set of instructions on it and posted it here, but it looks like:

% grep esp /etc/fstab &&
> ls /esp/EFI
LABEL=ESP /esp vfat defaults 0 1
/esp/EFI/arch_root /boot none bind,defaults 0 0
arch_root/ arch_sqsh/ arch_xbmc/ BOOT/ ipxe/

So I just put those two lines in my /etc/fstab pointing to a folder that I intend to contain the new linux installation's /boot and I'm almost done worrying about the whole thing. I also have to do:

cat /boot/refind_linux.conf
"Arch" "root=LABEL=data rootflags=subvol=arch_root,rw,ssd,compress-force=lzo,space_cache,relatime"

Apart from installing the refind-efi package via pacman for the first one, that is all that is required to set up as many separate installations/configurations as I desire. Note that the majority of that string above consists of btrfs-specific mount-options specified as kernel parameters. A more typical /boot/refind_linux.conf would probably look like:

"Menu Entry" "root=/dev/sda2"

And that's all it takes.
rodsbooks.com

If you still want ELILO then you can find installation instructions at the link above. If you want rEFInd you'll find links to it in the first paragraph there. Basically if you want to do any UEFI boot configuration, read rodsbooks.com first. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/146784",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
146,825 | When I press Ctrl + " (create a new pane) while in a pane which has the PWD /tmp, for example, the new pane starts as my home folder ~. I looked at https://unix.stackexchange.com/a/109255/72471 and it helped me with the same issue concerning windows. However, I couldn't fix the split-window issue by inserting

bind " split-window -c "#{pane_current_path}"

into my ~/.tmux.conf. I am using tmux 1.9a and therefore don't want a rather messy solution for older versions stated here (it doesn't work in my case, anyway):

bind '"' set default-path "" \; split-window -v \; set -u default-path

How can I tell tmux to set the default directory as the current path of a pane, when creating a new pane? | Try specifying v for vertical or h for horizontal. My .tmux.conf file has:

bind \ split-window -h -c '#{pane_current_path}' # Split panes horizontally
bind - split-window -v -c '#{pane_current_path}' # Split panes vertically

(I use \ and - as one-finger pane splitters.) New panes open for me using my current directory, wherever I am. It's certainly a key feature for me! One other critical thing with tmux (this was the issue in this case) is that you have to apply changes with:

tmux source-file ~/.tmux.conf

Note that closing terminals, even logging off and restarting, will NOT apply tmux changes – you have to actually use that command (or use Ctrl + B :source-file ~/.tmux.conf). You can see my full .tmux.conf file at https://github.com/durrantm/setups . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/146825",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72471/"
]
} |
146,833 | I'm attempting to limit a process to a given number of CPU cores. According to the taskset man page and this documentation, the following should work:

[fedora@dfarrell-opendaylight-cbench-devel ~]$ taskset -pc 0 <PID>
pid 24395's current affinity list: 0-3
pid 24395's new affinity list: 0

To put it simply - this doesn't work. Putting the process under load and watching top, it sits around 350% CPU usage (same as without taskset). It should max out at 100%. I can properly set affinity via taskset -c 0 <cmd to start process> at process spawn time. Using cpulimit -p <PID> -l 99 also kinda-works. In both cases, putting the process under the same load results in it maxing out at 100% CPU usage. What's going wrong here? | Update: Newer versions of taskset have a -a / --all-tasks option that "operates on all the tasks (threads) for a given pid" and should solve the behavior I show below.

I wrote a Python script that simply spins up some threads and burns CPU cycles. The idea is to test taskset against it, as it's quite simple.

#!/usr/bin/env python
import threading

def cycle_burner():
    while True:
        meh = 84908230489 % 323422

for i in range(3):
    thread = threading.Thread(target=cycle_burner)
    print "Starting a thread"
    thread.start()

Just running the Python script eats up about 150% CPU usage.

[~/cbench]$ ./burn_cycles.py
Starting a thread
Starting a thread
Starting a thread

Launching my Python script with taskset works as expected. Watching top shows the Python process pegged at 100% usage.

[~/cbench]$ taskset -c 0 ./burn_cycles.py
Starting a thread
Starting a thread
Starting a thread

Interestingly, launching the Python script and then immediately using taskset to set the just-started process' affinity caps the process at 100%. Note from the output that the Linux scheduler finished executing the Bash commands before spawning the Python threads.
So, the Python process was started, then it was set to run on CPU 0, then it spawned its threads, which inherited the proper affinity.

[~/cbench]$ ./burn_cycles.py &; taskset -pc 0 `pgrep python`
[1] 8561
pid 8561's current affinity list: 0-3
pid 8561's new affinity list: 0
Starting a thread
[~/cbench]$ Starting a thread
Starting a thread

That result contrasts with this method, which is exactly the same but allows the Python threads to spawn before setting the affinity of the Python process. This replicates the "taskset does nothing" results I described above.

[~/cbench]$ ./burn_cycles.py &
[1] 8996
[~/cbench]$ Starting a thread
Starting a thread
Starting a thread
[~/cbench]$ taskset -pc 0 `pgrep python`
pid 8996's current affinity list: 0-3
pid 8996's new affinity list: 0

What's going wrong here? Apparently threads spawned before the parent process' affinity is changed don't inherit the affinity of their parent. If someone could edit in a link to documentation that explains this, that would be helpful. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146833",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70555/"
]
} |
146,843 | When I cd a link, my current path is prefixed with the link's path, rather than the path of the directory the link links to. E.g.

~/dirlinks/maths$ ls -l logic
lrwxrwxrwx 1 tim tim 71 Jul 27 10:24 logic -> /windows-d/academic discipline/study objects/areas/formal systems/logic
~/dirlinks/maths$ cd logic
~/dirlinks/maths/logic$ pwd
/home/tim/dirlinks/maths/logic
~/dirlinks/maths/logic$ cd ..
~/dirlinks/maths$

I would like to have my current path changed to the path of the linked dir, so that I can work with the parent dirs of the linked dir as well. Besides running ls on the link to find out the linked dir, and then cd-ing into it, what are some simpler ways to accomplish that? For example, after cd-ing into a link, how do you change your current path to the path of the linked dir? | With a POSIX shell, you can use the -P option of the cd builtin:

cd -P <link>

With bash, from man bash:

The -P option says to use the physical directory structure instead of following symbolic links (see also the -P option to the set builtin command) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/146843",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
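A minimal reproduction of the logical vs. physical difference; the directory names under /tmp are invented for the demo:

```shell
# A real directory and a symlink pointing at it.
mkdir -p /tmp/cdp_demo/real/target
ln -sfn /tmp/cdp_demo/real/target /tmp/cdp_demo/link

logical=$(cd /tmp/cdp_demo/link && pwd)      # the shell keeps the symlink in $PWD
physical=$(cd -P /tmp/cdp_demo/link && pwd)  # -P resolves to the real directory
echo "logical:  $logical"
echo "physical: $physical"
```

After cd -P, cd .. moves relative to the real parent directory, which is exactly what the question asks for.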
146,861 | We have a file in Linux which contains one line per record, but a problem comes when the line contains some newline characters. In this case, a backslash is appended at the end of the line and the record is split into multiple lines. So below is my problem:

"abc def \
xyz pqr"

should be:

"abc def xyz pqr"

I tried sed -I 's/\\\n/ /g' <file_name>, which is not working. I also tried the tr command, but it replaces only one character, not the string. Can you please suggest a command to handle this issue? | You should be able to use

sed -e :a -e '/\\$/N; s/\\\n//; ta'

See Peter Krumins' Famous Sed One-Liners Explained, Part I, 39. Append a line to the next if it ends with a backslash "\". | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146861",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78856/"
]
} |
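Exercising the one-liner on a file shaped like the question's sample; the file path is invented for the demo:

```shell
# Two physical lines forming one logical record, plus a normal line.
printf '%s\n' 'abc def \' 'xyz pqr' 'next record' > /tmp/cont_demo.txt

# /\\$/ matches a trailing backslash; N pulls in the next line and
# s/\\\n// removes the backslash-newline pair; ta loops until done.
joined=$(sed -e :a -e '/\\$/N; s/\\\n//; ta' /tmp/cont_demo.txt)
echo "$joined"
```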
146,869 | Let's say I have a file that contains these lines:

02.03.14 14.50 14.50 0.00 Desc
02.03.14 17.00 0.00 17.00 Desc
01.03.14 2.82 1.68 calc Desc
02.03.14 1.04 0.00 1.04 Desc
06.03.14 6.00 0.00 6.00 Desc
08.03.14 11.76 2.98 calc Desc
10.03.14 3.27 0.00 3.27 Desc

I want to replace all column entries containing calc (only appearing in the 4th column) to contain the difference between the 3rd and 2nd column, so 4th=3rd-2nd. How can I do so? | Sed can't do arithmetic¹. Use awk instead.

awk '$4 == "calc" {sub(/calc( |\t)/, sprintf("%-6.2f", $3 - $2))} 1'

The 1 at the end means to print everything (after any preceding transformation). Instead of the text substitution with sub, you could assign to $4, but doing so replaces inter-column space (which can be any sequence of spaces and tabs) by a single space character. If your columns are tab-separated, you can use

awk 'BEGIN {OFS = "\t"} $4 == "calc" {$4 = sprintf("%.2f", $3 - $2)} 1'

¹ Yes, yes, technically it can since it's Turing-complete. But not in any sane way. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146869",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72471/"
]
} |
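Applying the $4-assignment variant to two of the sample rows; the temporary file is an assumption for the demo, and note that 1.68 - 2.82 is negative, as the 4th=3rd-2nd spec implies:

```shell
cat > /tmp/calc_demo.txt <<'EOF'
01.03.14 2.82 1.68 calc Desc
02.03.14 1.04 0.00 1.04 Desc
EOF

# Replace "calc" in column 4 with column3 - column2; "1" prints every line.
out=$(awk '$4 == "calc" {$4 = sprintf("%.2f", $3 - $2)} 1' /tmp/calc_demo.txt)
echo "$out"
```

Assigning to $4 rebuilds the record with single-space separators, which is harmless here because the input is already single-space separated.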
146,913 | cd - can switch between the current dir and the previous dir. It seems that I have seen - used as an argument to other commands before, though I don't remember if - means the same as with cd. I found that - doesn't work with ls. Is - used only with cd? | - is defined in the POSIX Utility Syntax Guidelines as standard input:

Guideline 13: For utilities that use operands to represent files to be opened for either reading or writing, the '-' operand should be used to mean only standard input (or standard output when it is clear from context that an output file is being specified) or a file named -.

You can see this definition for utilities which operate with files for reading or writing. cd does not belong to these utilities, so - in cd does not follow this guideline. Besides, POSIX also defines - as having its own meaning with cd:

- When a <hyphen> is used as the operand, this shall be equivalent to the command: cd "$OLDPWD" && pwd

which changes to the previous working directory and then writes its name. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/146913",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
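Two quick illustrations of the Guideline 13 meaning of - (outside of cd):

```shell
from_cat=$(echo hello | cat -)           # cat reads "-" as standard input
from_grep=$(printf 'a\nb\n' | grep b -)  # grep treats a "-" file operand the same way
echo "$from_cat / $from_grep"
```

This is why - works for file operands of utilities like cat, grep, tar and diff, but means something entirely different (the previous directory) to cd.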
146,920 | I'm using Linux Mint 13 MATE 32bit, and I'm trying to build the kernel (primarily for experience and for fun). For now, I'd like to build it with the same configuration as the precompiled kernel, so firstly I've installed precompiled kernel 3.16.0-031600rc6 from kernel.ubuntu.com and booted into it successfully. Then I've downloaded the 3.16.rc6 kernel from kernel.org, unpacked it, and configured it to use the config from the existing precompiled kernel:

$ make oldconfig

It didn't ask me anything, so the precompiled kernel contains all necessary information. Then I've built it (it took about 6 hours):

$ make

And then installed:

$ sudo make modules_install install

Then I've booted into my manually-compiled kernel, and it works, though the boot process is somewhat slower. But then I've found out that all the binaries (/boot/initrd.img-3.16.0-rc6 and all the *.ko modules in /lib/modules/3.16.0-rc6/kernel) are about 10 times larger than the precompiled versions! Say, initrd.img-3.16.0-rc6 is 160 658 665 bytes, but the precompiled initrd.img-3.16.0-031600rc6-generic is 16 819 611 bytes. Each *.ko module is similarly larger. Why is this? I haven't specified any special options for the build (I typed exactly the same commands as I mentioned above). How do I build it "correctly"? | Despite what file says, it turns out to be debugging symbols after all. A thread about this on the LKML led me to try:

make INSTALL_MOD_STRIP=1 modules_install

And lo and behold, a comparison from within the /lib/modules/x.x.x directory; before:

> ls -hs kernel/crypto/anubis.ko
112K kernel/crypto/anubis.ko

And after:

> ls -hs kernel/crypto/anubis.ko
16K kernel/crypto/anubis.ko

Moreover, the total size of the directory (using the same .config) as reported by du -h went from 185 MB to 13 MB. Keep in mind that beyond the use of disk space, this is not as significant as it may appear.
Debugging symbols are not loaded during normal runtime, so the actual size of each module in memory is probably identical regardless of the size of the .ko file. I think the only significant difference it will make is in the size of the initramfs file, and the only difference it will make there is in the time needed to uncompress the fs. I.e., if you use an uncompressed initramfs, it won't matter. strip --strip-all also works, and file reports them correctly as stripped either way. Why it says not stripped for the distro ones remains a mystery. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146920",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46011/"
]
} |
146,922 | I have a 1TB big file (disk-image from a damaged drive) and a 1.3MB small file (beginning of a disk-file). Using the contents of the small file, I want to overwrite portions of the big file. That is, I want to insert/overwrite the first 1.3MB of the 1TB-image using the small file. Using small temporary files for testing I was unable to overwrite parts of the files. Rather, dd overwrote the files completely. This is not what I want. Is dd able to do this? | If you use the conv=notrunc argument, you can replace just the first however many bytes. e.g.

dd conv=notrunc if=small.img of=large.img

root@debian:~/ddtest# dd if=/dev/zero of=file1.img bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 1.14556 s, 9.2 MB/s
root@debian:~/ddtest# dd if=/dev/urandom of=file2.img bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.207185 s, 5.1 MB/s
root@debian:~/ddtest# head file1.img
 << Blank space here as it's all Zeroes >>
root@debian:~/ddtest# dd conv=notrunc if=file2.img of=file1.img
2048+0 records in
2048+0 records out
1048576 bytes (1.0 MB) copied, 0.00468016 s, 224 MB/s
root@debian:~/ddtest# head file1.img
^�v�y�ے!� E�91���� << SNIP Random garbage >>
root@debian:~/ddtest# | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/146922",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11865/"
]
} |
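The same experiment scaled down and scripted; the file names and sizes under /tmp are arbitrary choices for the demo:

```shell
# A 100 KiB file of zeroes stands in for the big disk image.
dd if=/dev/zero of=/tmp/big_demo.img bs=1024 count=100 2>/dev/null
printf 'HEADER' > /tmp/small_demo.img

# conv=notrunc overwrites the first bytes and keeps the rest of the file.
dd conv=notrunc if=/tmp/small_demo.img of=/tmp/big_demo.img 2>/dev/null

size=$(wc -c < /tmp/big_demo.img)
head=$(head -c 6 /tmp/big_demo.img)
echo "size=$size head=$head"
```

Without conv=notrunc, dd would truncate the output file to the length of the input, which is exactly the unwanted behaviour described in the question.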
146,929 | From the Unix Power Tools, 3rd Edition, "Instead of Removing a File, Empty It" section:

If an active process has the file open (not uncommon for log files), removing the file and creating a new one will not affect the logging program; those messages will just keep going to the file that’s no longer linked. Emptying the file doesn’t break the association, and so it clears the file without affecting the logging program. (emphasis mine)

I don't understand why a program will continue to log to a deleted file. Is it because the file descriptor entry is not getting removed from the process table? | When you delete a file you really remove a link to the file (to the inode). If someone already has that file open, they get to keep the file descriptor they have. The file remains on disk, taking up space, and can be written to and read from if you have access to it. The unlink function is defined with this behaviour by POSIX:

When the file's link count becomes 0 and no process has the file open, the space occupied by the file shall be freed and the file shall no longer be accessible. If one or more processes have the file open when the last link is removed, the link shall be removed before unlink() returns, but the removal of the file contents shall be postponed until all references to the file are closed.

This piece of advice exists because of that behaviour. The daemon will have the file open, and won't notice that it has been deleted (unless it was monitoring it specifically, which is uncommon). It will keep blithely writing to the existing file descriptor it has: you'll keep taking up (more) space on disk, but you won't be able to see any of the messages it writes, so you're really in the worst of both worlds. If you truncate the file to zero length instead then the space is freed up immediately, and any new messages will be appended at the new end of the file where you can see them.
Eventually, when the daemon terminates or close s the file , the space will be freed up. Nobody new can open the file in the mean time (other than through system-specific reflective interfaces like Linux's /proc/x/fd/... ). It's also guaranteed that: If the link count of the file is 0, when all file descriptors associated with the file are closed, the space occupied by the file shall be freed and the file shall no longer be accessible. So you don't lose your disk space permanently, but you don't gain anything by deleting the file and you lose access to new messages. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/146929",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28032/"
]
} |
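The unlink semantics described above can be demonstrated directly; this sketch assumes Linux, since it reads the still-open descriptor back through procfs (the same /proc/x/fd interface the answer mentions), and the file path is invented for the demo:

```shell
f=/tmp/unlink_demo.txt
exec 3> "$f"                   # keep fd 3 open on the file
echo "first line" >&3
rm "$f"                        # remove the last link; the inode lives on
status=$([ -e "$f" ] && echo present || echo gone)
echo "still writable" >&3      # writes still land in the orphaned inode
content=$(cat /proc/$$/fd/3)   # Linux: reopen the inode through procfs
exec 3>&-                      # closing the fd finally frees the space
echo "$status"
echo "$content"
```

While fd 3 is open, du on the filesystem still counts the "deleted" data; only the final close releases it.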
146,938 | I'm on Ubuntu 14.04. For a few weeks now a small window keeps popping up randomly, grabbing my keyboard and asking me to enter the passphrase for my ssh key. The Window's title says "OpenSSH", and it states the path of the private key file it wants to unlock. Of course, I don't do it, because I don't know where this request is coming from. Sometimes, after I hit cancel, a warning window pops up, saying that something might be eavesdropping on my session, because the keyboard could not be grabbed. This sounds very suspicious to me. I can't remember doing anything with ssh that might cause this behavior. How would I go about finding out where these ssh key requests are coming from and how to stop them? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/146938",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78896/"
]
} |
146,942 | The following bash syntax verifies if param isn't empty:

[[ ! -z $param ]]

For example:

param=""
[[ ! -z $param ]] && echo "I am not zero"

No output, and that's fine. But when param is empty except for one (or more) space characters, the case is different:

param=" " # one space
[[ ! -z $param ]] && echo "I am not zero"

"I am not zero" is output. How can I change the test to consider variables that contain only space characters as empty? | First, note that the -z test is explicitly for:

the length of string is zero

That is, a string containing only spaces should not be true under -z, because it has a non-zero length. What you want is to remove the spaces from the variable using the pattern replacement parameter expansion:

[[ -z "${param// }" ]]

This expands the param variable and replaces all matches of the pattern (a single space) with nothing, so a string that has only spaces in it will be expanded to an empty string. The nitty-gritty of how that works is that ${var/pattern/string} replaces the first longest match of pattern with string. When pattern starts with / (as above) then it replaces all the matches. Because the replacement is empty, we can omit the final / and the string value:

${parameter/pattern/string}
The pattern is expanded to produce a pattern just as in filename expansion. Parameter is expanded and the longest match of pattern against its value is replaced with string. If pattern begins with ‘/’, all matches of pattern are replaced with string. Normally only the first match is replaced. ... If string is null, matches of pattern are deleted and the / following pattern may be omitted.

After all that, we end up with ${param// } to delete all spaces. Note that though present in ksh (where it originated), zsh and bash, that syntax is not POSIX and should not be used in sh scripts. | {
"score": 10,
"source": [
"https://unix.stackexchange.com/questions/146942",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67059/"
]
} |
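Probing the test against the three interesting cases; since ${var//pattern} is not POSIX sh, the probes run through bash -c explicitly:

```shell
# Classify a value as blank (empty or spaces only) or non-blank.
check() { bash -c '[[ -z "${1// }" ]] && echo blank || echo non-blank' _ "$1"; }

empty=$(check "")
spaces=$(check "   ")
word=$(check " x ")
echo "$empty $spaces $word"
```

For a strictly POSIX sh script, case "$param" in *[!\ ]*) ... ;; esac is the usual portable substitute.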
146,948 | $ echo "104_Fri" | sed 's/^\([0-9]+\)_\([A-Za-z]+\)$/\1;\2/'
104_Fri

I would like to match the digits at the beginning and the letters at the end - each as a group. Afterwards I want to output the first group, a semicolon and then the second group. I would expect this expression to yield:

104;Fri

Why does this not work? | You must escape the plus symbol +, too:

$ echo "104_Fri" | sed 's/^\([0-9]\+\)_\([A-Za-z]\+\)$/\1;\2/'
104;Fri | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146948",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60673/"
]
} |
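Both the escaped-BRE form from the answer and the equivalent ERE form in action; note that \+ inside a basic regular expression is a GNU sed extension, while -E switches to extended syntax where a bare + is a metacharacter:

```shell
bre=$(echo "104_Fri" | sed 's/^\([0-9]\+\)_\([A-Za-z]\+\)$/\1;\2/')   # GNU BRE: escaped +
ere=$(echo "104_Fri" | sed -E 's/^([0-9]+)_([A-Za-z]+)$/\1;\2/')      # ERE: bare +
echo "$bre / $ere"
```

In a BRE, an unescaped + is just a literal plus sign, which is why the original expression failed to match at all.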
146,955 | sed 's/[long1][long2]/[long3][long4]/' file.txt

I would like to split this command onto multiple lines - e.g. something like this:

sed 's/
[long1]
[long2]
/
[long3]
[long4]
/' file.txt

Using \ or separating strings didn't work. | sed 's'/\
'[long1]'\
'[long2]'\
'/'\
'[long3]'\
'[long4]'\
'/' file.txt

Splitting on several lines with backslash does work if new lines are not indented.

$ echo "a,b" | sed 's/\(.'\
> '\),\(.\)/\2-\1/'
b-a

Tested on Cygwin with GNU sed 4.2.2 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/146955",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60673/"
]
} |
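The continuation trick from the answer, reproduced in script form; the continuation line must start in column 0 (unindented), otherwise the leading whitespace becomes part of the sed script:

```shell
# 's/\(.' and '\),\(.\)/\2-\1/' concatenate across the backslash-newline
# into the single sed script: s/\(.\),\(.\)/\2-\1/
out=$(echo "a,b" | sed 's/\(.'\
'\),\(.\)/\2-\1/')
echo "$out"
```

The shell removes the backslash-newline pair, so adjacent quoted fragments glue together into one word, which is what makes splitting a long sed expression across lines possible.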
146,971 | Moved from Stack Overflow, where I realize it was off-topic since it was asking for sources - far as I can tell, the rules forbid that there but not here. I know that the kernel in Android is now mostly the Linux kernel with a few exceptions like wakelocks (as described by John Stultz .) But is it close enough to be compliant with the Linux Standard Base? (Or for that matter with POSIX and/or the Single Unix Specification?) I'm writing about this in an academic term paper, so as well as the answer itself it would be great to have a relatively reliable source I can cite for it: a peer-reviewed article or book would be ideal, but something from Google's developer docs or a person with established cred (Torvalds, Andrew Josey, etc.) would be fine. | The LSB , POSIX , and the Single UNIX Specification all significantly involve userland . Simply using a kernel that is also used as the basis of a "unix-like", "mostly POSIX compliant" operating system -- GNU/Linux -- is not sufficient to make Android such as well. There are, however, some *nix-ish elements, such as the shell , which is a "largely compatible" Korn shell implementation (on pre-4.0, it may actually be the ash shell, which is used on embedded GNU/Linux systems via busybox) and various POSIX-y command line utilities to go along with it. There is not the complete set most people would recognize from the "unix-like" world, however. is it close enough to be compliant with the Linux Standard Base? A centrepiece of the LSB is the filesystem hierarchy, and Android does not use this. LSB really adds stuff to POSIX, and since Android is not nearly that, it is even further from being LSB compliant. This is pretty explicitly not a goal for the platform, I believe. The linux kernel was used for its own properties, and not because it could be used as the core of a POSIX system; it was taken up by GNU originally for both reasons. 
To clarify this distinction regarding a user space oriented specification -- such as POSIX, Unix, or the LSB extensions -- consider some of the things POSIX has to say about the native C library. This is where we run into platform specific things such as networking and most system calls, such as read() -- read() isn't, in fact, standard C. It's a Unix thing, historically. POSIX does define these as interfaces but they are implemented in the userland C library , then everything else uses this library as its foundation. The C library on GNU/Linux is the GNU C Library, a completely separate work from the kernel. Although these two things work together as the core of the OS, none of the standards under discussion here say anything about how this must happen, and so in effect, they don't say anything about what the kernel is or must do . They say a lot of things about what the C library is and must do, meaning, if you wrote a C library to work with a given kernel -- any kernel , regardless of form or characteristics -- and that library provides a user land API that satisfies the POSIX spec, you have a POSIX compliant OS. LSB does, I think, have some things to say about /proc , which linux provides as a kernel interface. However, the fact that this (for example) is provided directly by the kernel does not mean that the LSB says it has to be -- it just says this should/could be available, and if so what the nature of the information is. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146971",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78916/"
]
} |
146,974 | I have a file having lines as: ram_reg_10/raja_reg_9/raghu_reg_8 abc_reg_4/bcd_reg_5 cad/pqr_reg_91 I want to convert string "_reg_number" into [number] at only last of every line in vi editor. output should be: ram_reg_10/raja_reg_9/raghu[8] abc_reg_4/bcd[5] cad/pqr[91] I tried: :%s?_reg_[0-9]$?\[[0-9]\]?g But it gives: ram_reg_10/raja_reg_9/raghu[[0-9]] abc_reg_4/bcd[[0-9]] cad/pqr_reg_91 how to do it? | Your substitute command has two problems. First, [0-9] in the replacement part of :s is just literal text: vim inserts the characters [0-9] rather than the digits that were matched, which is exactly what your output shows. Capture the digits in the pattern with \( ... \) and reference them in the replacement with \1 . Second, [0-9] matches exactly one digit, so pqr_reg_91 can never match at the end of the line; use \+ (one or more digits). Putting both together: :%s/_reg_\([0-9]\+\)$/[\1]/ The $ anchors the match to the end of each line, so only the final _reg_number is rewritten and the earlier ones are left alone. The trailing g flag is unnecessary, since at most one match can end at $ . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/146974",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68133/"
]
} |
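Because vim's default ("magic") patterns are BRE-like, the capture-group substitution that question 146,974 needs can be sanity-checked from the shell with sed. The sample lines below are copied from the question; the command itself is a suggestion, not the posted answer:

```shell
input='ram_reg_10/raja_reg_9/raghu_reg_8
abc_reg_4/bcd_reg_5
cad/pqr_reg_91'

# \(...\) captures the digits, \1 re-inserts them; $ anchors to line end,
# so only the final _reg_<number> of each line is rewritten
out=$(printf '%s\n' "$input" | sed 's/_reg_\([0-9][0-9]*\)$/[\1]/')
printf '%s\n' "$out"
```

Inside vim the equivalent is :%s/_reg_\([0-9]\+\)$/[\1]/ , where \+ is shorthand for [0-9][0-9]* ; no g flag is needed because at most one match can end at $ .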
146,995 | I have a Docker container running systemd . I want to pass environment variables to applications under it. When I start systemd from within Docker ( /sbin/init as command line), Docker exposes variables to systemd, but does not expose to child services . If I add systemd.setenv=... to the cmdline, the variables are passed. I am looking for a cleaner solution. How do I expose environment variables passed to /sbin/init to applications started by it? % docker run -v /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged -ti \ -e VAR1=1 motiejus/systemd_fedora20 \ init systemd.setenv=VAR2=2...Welcome to Fedora 20 (Heisenbug)!...[ OK ] Reached target Multi-User System.[root@740690365eb0 ~]# env | grep VARVAR2=2 I expect to see VAR1=1 while running my command. In other words, can systemd pass variables passed to it to children it starts? For Dockerfile, see github repository . | To answer the question asked (as it doesn't seem to be answered anywhere else) "How do I expose environment variables passed to /sbin/init to applications started by it?" requires some slightly irritating bash, and an extremely useful function of the linux /proc filesystem: # Import our environment variables from systemdfor e in $(tr "\000" "\n" < /proc/1/environ); do eval "export $e"done This reads /proc/1/environ, which is the environment given to PID 1, but is delimited by nulls. It uses 'tr' to replace the nulls with new lines, and then iterates over those lines and evals them with a prepended 'export', so they are visible to child processes. The not-exposing-environment-variables is yet another "feature" of systemd, and they don't consider it a bug. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/146995",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16703/"
]
} |
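For reference, /proc/1/environ (like every /proc/[pid]/environ) is a sequence of NAME=value records separated by NUL bytes. The sketch below exercises the same tr trick against a temporary stand-in file (PID 1's real environment usually isn't readable in a sandbox), and uses export "$e" instead of eval. Note that, like the answer's loop, the unquoted $(...) split breaks on values containing whitespace:

```shell
envfile=$(mktemp)
printf 'VAR1=1\000VAR2=2\000' > "$envfile"   # stands in for /proc/1/environ

# NUL-delimited records become one NAME=value per line
tr '\000' '\n' < "$envfile"

# import them into the current shell; export "NAME=value" avoids eval
for e in $(tr '\000' '\n' < "$envfile"); do
    export "$e"
done
echo "$VAR1 $VAR2"
rm -f "$envfile"
```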
147,005 | All other software packages are lower case, so why NetworkManager upper case? | From Red Hat Magazine: Introducing NetworkManager : Words with the creator NetworkManager creator and developer Dan Williams took time out of his hectically busy schedule to answer some questions. What's with those StudlyCaps, anyway? Well, coming from a Classic Mac OS background, in which everything was StudlyCaps, it is quite natural for me to use the Shift key, which many Linux programmers seem to run away from in fear. Which is quite silly, if you ask me. There's nothing to be afraid of. In any case, it also had to do with aesthetics. A daemon called network_manager just doesn't look good (using '_' instead of ' ' probably comes from the traditional Unix aversion to spaces in file names, which is also silly), and networkmanager is just pathetically hard to read, so it had to be NetworkManager. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147005",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13762/"
]
} |
147,023 | Using ssh, it is easy to print the contents of a file using ssh host 'cat file.txt' When ssh is disabled, and only SFTP is enabled, running the previous command gives the following error: This service allows sftp connections only. To work-around this issue, I could create a temporary file using scp or sshfs (as shown below), but that looks really ugly. What is the proper way to print the contents of a remote file when SSH is disabled? mkdir tmpdirsshfs host: tmpdircat tmpdir/file.txtfusermount -u tmpdir# This does not work! scp -v host:file.txt . shows# "Sink: This service allows sftp connections only."scp host:file.txt .cat file.txtrm file.txt | Curl can display the file the same way cat would. No need to delete the file since it simply displayed the output unless you tell it to do otherwise. curl -u username:password sftp://hostname/path/to/file.txt If you use public key authentication: curl -u username: --key ~/.ssh/id_rsa --pubkey sftp://hostname/path/to/file.txt If you use the default locations, then --key and --pubkey can be omitted: curl -u username: sftp://hostname/path/to/file.txt The user name can also be a part of the URL, so the final result looks very close to the ssh command: curl sftp://username@hostname/path/to/file.txt | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/147023",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27177/"
]
} |
147,024 | Is there an easy way to re-apply a previous command to a new command line entry? Say I typed in chmod u+r,g+x file.txt but forgot the sudo. Could I simply type sudo <some easy symbol> '? | You can do: sudo !! Another good one is alt . , to insert the last parameter of the previous command | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147024",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68285/"
]
} |
147,030 | I'm running a script that relies on network being up and a network share be mounted. The script runs on login (which happens automatically after boot). The problem is that by the time the script runs, I usually do not have an IP address yet (DHCP). At the moment I just sleep the script for 15s, but I don't like this approach at all, since I want to be able to tell the user if something is wrong. What my plan is, is loop while I don't have an IP address yet and continue when I do. Crucially, it has to time out after a while. What I came up with is to if [ ifconfig | grep "192.168.100" ]; but what happens is that grepconsumes the ]; and doesn't like it. Then bash also gets upset, because it can't find the ]; which grep ate. And then I haven't even implemented the time-out. Someone suggested keeping a variable, and sleeping for, say, a second in each iteration and increase this variable each time. Here is my complete (non working) script (I'm fairly new to bash scripting): x=0while [ ifconfig | grep "192.168.100." > /dev/null ]; do echo "no nework" if "$x" -gt 200; then #Time out here exit 1 x=$((x+1)) sleep .1 fidone#continue with rest of script... Any pointers in the right direction would be greatly appreciated! | Shell syntax You seem to be confused regarding conditionals in shell scripts. Every shell command has an exit status, which is an integer between 0 and 255, with 0 meaning success and any other value meaning failure. Statements like if and while that expect boolean operands inspect the exit status of the command and treat 0 (success) as true and any other value (failure) as false. For example, the grep command returns 0 if the pattern is found and 1 if the pattern is not found. So while ifconfig | grep "192.168.100." > /dev/null; do … repeats the loop as long as the pattern 192.168.100. is found in the output of ifconfig . Note that the pattern 192.168.100. matches strings like 192x168 1007 , because . 
in a regular expression matches any character; to search for a literal string, pass the option -F to grep . To invert the condition, put ! in front. while ! ifconfig | grep -F "192.168.100." > /dev/null; do … Further in the script, you want to compare the value of a variable to a number. You use the -gt operator, which is part of the syntax of conditional expressions understood by the test command. The test command returns 0 if the conditional expression is true and 1 if the conditional expression is false. if test "$x" -gt 200; then It is customary to use the alternate name [ for the test command. This name expects the command to end with the parameter ] . The two ways of writing this command are exactly equivalent. if [ "$x" -gt 200 ]; then Bash also offers a third way to write this command, with the special syntax [[ … ]] . This special syntax can support a few more operators than [ , because [ is an ordinary command subject to the usual parsing rules, while [[ … ]] is part of the shell syntax. Again, keep in mind that [ is for conditional expressions , which are a syntax with operators like -n , -gt , … [ doesn't mean “boolean value”: any command has a boolean value (exit status = 0?). Detecting that the network is up Your way of detecting that the network is up is not robust. In particular, note that your script will be triggered as soon as any network interface acquires an IP address within the specified range, and it's quite possible that DNS won't be up yet at that point, let alone any network shares mounted. Do you really need to run these commands when someone logs in? It's easier to make a command run automatically when the network is brought up. The way to do that depends on your distribution and whether you use NetworkManager. If you need to run these commands as part of the login scripts, then test for the resource that you really need, not for the presence of an IP address. 
For example, if you want to test whether /net/somenode/somedir is mounted, use while ! grep -q /net/somenode/somedir </proc/mounts; do sleep 1done If you have upstart or systemd… then you can use it. For example, with Upstart , mark your job as start on net-device-up eth0 (replace eth0 by the name of the interface that provides the desired network connectivity). With Systemd, see Cause a script to execute after networking has started? | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/147030",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36710/"
]
} |
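The answer's two points, use the command's exit status directly as the loop condition and poll for the real resource with a timeout, combine into a loop like this sketch, where a background job and a stand-in file simulate the share appearing in /proc/mounts:

```shell
tmp=$(mktemp -d)
: > "$tmp/mounts"                                   # stand-in for /proc/mounts
( sleep 0.3; echo '/net/somenode/somedir' >> "$tmp/mounts" ) &  # "mount" appears later

x=0 timed_out=0
while ! grep -q /net/somenode/somedir "$tmp/mounts"; do  # no [ ] around grep
    x=$((x + 1))
    if [ "$x" -gt 100 ]; then        # give up after roughly 10 seconds
        timed_out=1
        break
    fi
    sleep 0.1
done
wait
echo "timed_out=$timed_out after $x polls"
rm -rf "$tmp"
```

The condition is the grep pipeline itself; its exit status (0 when the pattern is found) is what while tests, which is the point the answer makes about exit statuses being the shell's booleans.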
147,036 | Maybe this is answered somewhere else, but I didn't see it. I am running Ubuntu 14.04. When I SSH into my machine, for example: ssh <user>@<machineip> notify-send "Hello" I don't see anything on the monitor where I am logged into the machine. If I prefix notify-send with DISPLAY=:0.0 or DISPLAY=:0 nothing different happens. I just never see any notification on the current session. Is there some trick/switch to getting this working? In case this isn't clear, allow me to reiterate: From Computer A, I SSH into Computer B. Within the SSH session, I wish to execute notify-send to run on Computer B. I expect a growl-type notification to appear on the monitor of Computer B. | I think you're confusing the various technologies and how they work. I wouldn't expect that the notification daemon from one system could send messages via SSH. Setting the $DISPLAY is how X11 sends the output from an application to another for displaying purposes, but notify-send is sending an actual message to the notification daemon. This message is sent using the libnotify library. excerpt libnotify is a library that sends desktop notifications to a notification daemon, as defined in the Desktop Notifications spec. These notifications can be used to inform the user about an event or display some form of information without getting in the user's way. Source: https://developer.gnome.org/libnotify/ Per app approach One method for joining the notify-send messages to your local system's notifier is to use an approach as outlined by this blog post titled: IRC notifications via SSH and libnotify . This approach would need to be customized for each type of notification that you'd want to tunnel back to your local notifier. Tunneling libnotify over SSH For a more general solution libnotify-over-ssh may be more what you're looking for. excerpt This is a client server perl script I wrote so that my server could essentially send libnotify messages to my local machine. 
I use this mainly with weechat but it has a feature to make it more general. When calling the client with the weechat tag the server checks the name of the current focused window. If it starts with weechat, notifications are suppressed; if not, notify-send is called. Displaying on the remote server If, on the other hand, you're simply trying to use notify-send to display messages on a remote server that you've used ssh to connect to, you'll likely need to follow one of the suggestions that was made in this Q&A titled: Using notify-send with cron . Even though several of the answers suggested that this was unnecessary, I had to do the following as others mentioned in comments on my Fedora 20 system using Cinnamon as my desktop to get things working. To get notify-send working I had to set this variable with the appropriate value from the remote system's desktop environment. $ export DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-wzrbFpDUZQ,guid=82e5bffe1f819506faecc77a53d3ba73 On my system I was able to make use of a file that's maintained for this exact purpose. $ ssh me@remote$ source ~/.dbus/session-bus/6a34f24f9c504e3f813bc094ed0b67af-0$ notify-send "hi" NOTE: The name of the DBUS file will change from session to session. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/147036",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22084/"
]
} |
147,044 | I'm quite new to Unix. Using Solaris 10 I faced the below issue. There is a large log file with size 9.5G. I tried to empty the file using the below command. # cat /dev/null file_log.txt By doing this I regained space on the file system but the size of the file still shows the same and is increasing. I figured a process is still running into the log file. Is there a way to correct the file size? Is this going to effect my file system? | Assuming you meant to say cat /dev/null > file_log.txt or cp /dev/null file_log.txt which has the same effect for that matter, the answer is that the process that has the file open for writing did so without O_APPEND , or it sets the offset into the file arbitrarily, in which case a sparse file is created. The manual page for write(2) explains that pretty clear: For a seekable file (i.e., one to which lseek(2) may be applied, for example, a regular file) writing takes place at the file offset, and the file offset is incremented by the number of bytes actually written. If the file was open(2)ed with O_APPEND, the file offset is first set to the end of the file before writing. The adjustment of the file offset and the write operation are performed as an atomic step. The said offset is a property of the according file descriptor of the writing process - if another process truncates the file or writes itself to the file, this will not have any effect on the offset. (Moreover, if the same process opens the file for writing without O_APPEND it will receive a different file descriptor for that and writing to the file through the new file descriptor will have the same effect.) Suppose that process P opens a file for writing without appending, yielding file descriptor fd . Then the effect on the file size (as stat() reports it) of truncating a file (e.g. by copying /dev/null to it) will be undone as soon as P writes to fd . 
Specifically, on write() to fd the system will move ("seek") to the offset associated with fd , filling the space from the current end of file (possibly to the beginning, if it was entirely truncated) up to the offset with zeros. However, if the file has grown larger in the mean time, writing to fd will overwrite the content of the file, beginning at the offset. A sparse file is a file that contains "holes", i.e. the system "knows" that there are large regions with zeroes, which are not really written to disk. This is why du and ls disagree - du looks at the actual disk usage, while ls uses simply stat() to extract the file size attribute. Remedy: restart the process. If possible, rewrite the part where the file is opened to use O_APPEND (or mode a when using fopen() ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147044",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78960/"
]
} |
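The offset behaviour described above is easy to reproduce from the shell, because exec 3> file opens a descriptor without O_APPEND while exec 3>> file opens one with it. This sketch replays the log-file scenario: truncation drops the size to zero, and the very next write through the old descriptor brings the size back by leaving a hole:

```shell
f=$(mktemp)

exec 3> "$f"                    # a "logger" opens the file WITHOUT O_APPEND
head -c 1000 /dev/zero >&3      # fd 3's private offset is now 1000

: > "$f"                        # the file is truncated behind its back
size1=$(wc -c < "$f")           # 0

printf '0123456789' >&3         # write lands at offset 1000 -> bytes 0..999 become a hole
size2=$(wc -c < "$f")           # 1010: the reported size jumps back
exec 3>&-

exec 3>> "$f"                   # same experiment WITH O_APPEND (>>)
: > "$f"                        # truncate again
printf '0123456789' >&3         # offset is reset to end-of-file before writing
size3=$(wc -c < "$f")           # 10: no hole recreated
exec 3>&-

rm -f "$f"
echo "$size1 $size2 $size3"
```

This matches the remedy given in the answer: reopening with O_APPEND (here, the >> redirection) makes truncation stick.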
147,059 | I have a bash script that moves files from a number of different locations to a folder named completed . I want to avoid overwriting previous files, so in the case when the name of a file (for example, Selection Of Recipes.zip ) I want to move is already in completed , add a nonce or other string to the filename to differentiate ( Selection of Recipes-???.zip , where ??? is a random string). Is this possible with just mv , or should I try creating another bash script with arguments that handles that aspect? Does anyone have a bash script that I can pattern my own against? | If you're using GNU mv you have the following option. $ mv -b source/* dest/. This switch tells mv to push any files that collide in the dest/. directory to a backed up version, typically adding a tilde ( ~ ) to the end of the file, prior to moving files into the directory. Example Say I have the following sample directories with files. $ mkdir source dest$ touch source/file{1..3} dest/file{1..5}$ tree.├── dest│ ├── file1│ ├── file2│ ├── file3│ ├── file4│ └── file5└── source ├── file1 ├── file2 └── file3 Now when we move files from source to dest : $ mv -b source/* dest/.$ tree .├── dest│ ├── file1│ ├── file1~│ ├── file2│ ├── file2~│ ├── file3│ ├── file3~│ ├── file4│ └── file5└── source2 directories, 8 files Controlling the extension Again with GNU's version of mv you can change the default behavior using the -S <string> switch. $ mv -b -S "string" source/* dest/. Example $ mv -b -S .old source/* dest/.$ tree .├── dest│ ├── file1│ ├── file1.old│ ├── file2│ ├── file2.old│ ├── file3│ ├── file3.old│ ├── file4│ └── file5└── source2 directories, 8 files | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147059",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78968/"
]
} |
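Since -b is GNU-specific, on systems with a plain POSIX mv the collision handling has to live in the script itself. Here is a sketch of the rename-on-collision idea from the question, with a timestamp-plus-PID standing in for the "random string" (the file names are invented for the demonstration):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/completed"
echo old > "$tmp/completed/Recipes.zip"    # already-archived file
echo new > "$tmp/Recipes.zip"              # incoming file with the same name

dest="$tmp/completed/Recipes.zip"
if [ -e "$dest" ]; then
    nonce="$(date +%s).$$"                 # cheap uniquifier; mktemp also works
    dest="${dest%.zip}-$nonce.zip"         # -> Recipes-<nonce>.zip
fi
mv "$tmp/Recipes.zip" "$dest"

kept=$(cat "$tmp/completed/Recipes.zip")   # original is untouched
count=$(ls "$tmp/completed" | wc -l)       # both files are present
echo "$kept $count"
rm -rf "$tmp"
```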
147,069 | When you run an executable, sometimes the OS will deny your permission to. For example running make install with the prefix being a system path will need sudo , while with the prefix being a non-system path will not be asked for sudo . How does the OS decide that running an executable would require more privilege than a user has, even before the program does something? Sometimes, running a program will not be denied permission, but the program will be able to do more things if it is run with sudo . For example, when running du on some system directory, only with sudo it will be able to access some directory. Why does the OS not deny permission of running such a program, or give a friendly notification that more privilege is preferred, before the program can run? Is it true that whenever sudo works, su will also work, and whenever su works, sudo will also work? or with su , a user can do more than with sudo ? How does the OS decide when sudo works, and when su is needed? | For the purposes you have described, the OS doesn't decide whether you need sudo to initially run the program. Instead, after the program starts running and then tries to do something that is not permitted by the current user (such as writing a file to /usr/bin to install a new command), the OS prevents the file access. The action to take on this condition is up to the program; make stops running but du will proceed to the next file/directory after printing a message. The su and sudo commands are two different ways of running a program with root privileges. They may differ in minor details such as the contents of the environment when starting the new program, depending on options used. The OS does not need to decide when one or the other might work. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/147069",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
147,098 | I am analyzing log file using vim and the format looks like this YYYY-MM-DD HH:MM:SS.USEC PID Name LogText Since Most of the times I don't care about date and time. I want to hide them and just focus on the Name and LogText columns (To save some screen estate). Since the first three columns always occupy the first 35 letters in a line. Is there a way to make vim not display first 35 letters of each line ? | You asked about how to hide the first letters, not to remove them, or scroll them out of sight - so here is how to actually hide them: Hide text in vim using conceal You can use matching , combined with syntax highlighting and the conceal feature to actually not show matched characters inside lines. To hide the first 25 chars of each line: :syn match Concealed '^.\{25\}' conceal:set conceallevel=2 To hide only the lines with the punctuation of a date instead: :syn match Concealed '^....-..-.. ..:..:..\..... ' conceal To unhide: :syn clear Concealed:set conceallevel=0 What looks like this normally: YYYY-MM-DD HH:MM:SS.USEC PID Name LogTextYYYY-MM-DD HH:MM:SS.USEC PID Name LogTextYYYY-MM-DD HH:MM:SS.USEC PID Name LogTextYYYY-MM-DD HH:MM:SS.USEC PID Name LogTextYYYY-MM-DD HH:MM:SS.USEC PID Name LogTextYYYY-MM-DD HH:MM:SS.USEC PID Name LogTextYYYY-MM-DD HH:MM:SS.USEC PID Name LogText will look like this after executing the first two commands: PID Name LogTextPID Name LogTextPID Name LogTextPID Name LogTextPID Name LogTextPID Name LogTextPID Name LogText See also - inside vim : help :syn-match help :syn-conceal help 'conceallevel' help 'concealcursor' (Let me know if it does not behave like that - there may be some more setting I'm not aware of or so - I'll get it to work.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147098",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78242/"
]
} |
147,105 | The question of why some commands rely on manpages whereas others rely on something like the --help flag for providing command usage reference is not new . There is usually a difference in scope between documentation for a command and a command usage synopsis . The latter is often a subset of the former. But even when most commands and utilities have manpages for instance, there exists differences in their formatting of the synopsis section which have very practical implications when trying to extract such information. In other cases one might find clues with the strings utility when a command has seemingly no documentation. I was interested with the commands I have on this QNX platform and discovered the use command 1 to display usage information. As explained in usemsg , the framework involves setting a standard usage record in the utilities source and once compiled this can be accessed with the use command and you can also wrap the native functionality etc. It is quite convenient as I could simply do use -d dir >>file on /base and /proc/boot to extract all the usage for all the commands on the system basically. So I then briefly looked at the source for GNU coreutils ls and FreeBSD ls to see if they did something like that and the former puts usage information in some usage named function (for --help I guess) while the latter doesn't seem to put it anywhere at all(?). Is this sort of solution( use ) typical of what you find with commercial Unix to present command usage reference interactively? Does POSIX/SUS recommend or suggest anything about presenting/implementing command usage reference in commands(as opposed to specifying notation for shell utilities )? 1. use command: usePrint a usage message (QNX Neutrino)Syntax:use [-aeis] [-d directory] [-f filelist] filesOptions:-a Extract all usage information from the load module in its source form, suitable for piping into usemsg. -d directory Recursively display information for all files under directory. 
-e Include only ELF files. -f filelist Read a list of files, one per line, from the specified filelist file, and display information for each. -i Display build properties about a load module. -s Display the version numbers of the source used in the executable. files One or more executable load modules or shell scripts that contain usage messages. | Commercial unices generally present usage information only in man pages. Having the command itself display usage information is not a traditional Unix feature (except for displaying the list of supported options, but without any explanation, on a usage error). POSIX and its relatives don't talk about anything like this. Having a --help option that displays a usage summary (typically a list of options, one per line, with a ~60 characters max description for each option) is a GNU standard . As far as I know, this convention was initiated by the GNU project, as part of the double-dash convention for multi-letter option names. There are other utilities, such as X11 utilities, that use multi-letter option names with a single dash and support -help ; I don't know which one came first. The use command is a QNX thing. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147105",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
147,149 | How can I replace a given character in a line matching a pattern with sed? For example: I'd like to match every line beginning with a letter, and replace the newline at the end with a tab. I'm trying to do so using: sed -e '/^[A-Z]/s/\n/\t/g' (the lines that I'm interested in also ALWAYS end with a letter, if this can help). Sample input NAME_A12,1NAME_B21,2 Sample output NAME_A 12,1NAME_B 21,2 | sed '/^[[:alpha:]]/{$!N;s/\n/ /;}' <<\DATANAME_A12,1NAME_B21,2DATA OUTPUT NAME_A 12,1NAME_B 21,2 That addresses lines beginning with a letter, pulls in the next if there is one, and substitutes a tab character for the newline. note that the s/\n/<tab>/ bit contains a literal tab character here, though some sed s might also support the \t escape in its place To handle a recursive situation you need to make it a little more robust, like this: sed '$!N;/^[[:alpha:]].*\n[^[:alpha:]]/s/\n/ /;P;D' <<\DATANAME_ANAME_B12,1 NAME_C21,2DATA OUTPUT NAME_ANAME_B 12,1NAME_C 21,2 That slides through a data set always one line ahead. If two ^[[:alpha:]] lines occur one after the other, it does not mistakenly replace the newline, as you can see. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147149",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74555/"
]
} |
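The answer's first one-liner can be exercised non-interactively. This run uses GNU sed, where \t is accepted in the replacement; with other seds, insert a literal tab instead, as the answer notes:

```shell
out=$(printf 'NAME_A\n12,1\nNAME_B\n21,2\n' |
      sed '/^[[:alpha:]]/{$!N;s/\n/\t/;}')   # pull in the next line, newline -> tab
printf '%s\n' "$out"
```

On each line starting with a letter, $!N appends the following line to the pattern space (unless it is the last line), and the substitution turns the embedded newline into a tab, joining the record name with its value.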
147,152 | I'd like to create a script that will do the following. Start at a given time during the day and end at another given time. So for example, I have a program I'd like to test, so my script would be set to start at say 10:00pm and continue to run until 9:00am. This follows on from my other question about running programs again and again. I have the following: #!/bin/bashtrap "echo manual abort; exit 1" 1 2 3 15;RUNS=0;while open -W /Path/to/Program.appdo RUNS=$((RUNS+1)); echo $RUNS > /temp/autotest_run_count.txt;doneexit 0 This script essentially runs my program (in Mac OSX) and catches any failures, otherwise it will re-run the program when it closes. I'd like to be able to run this like I mentioned above. Start at 10:00pm. Finish at 9:00am. Your advice is always useful. Thanks! Euden | Let the system scheduler handle the start time and make the loop itself watch the clock for the stop time. Schedule the script with cron ( crontab -e ) so that it launches at 22:00: 0 22 * * * /path/to/autotest.sh Then make the loop condition check the hour as well, so it only starts another run while the current time is inside the window: #!/bin/bash trap "echo manual abort; exit 1" 1 2 3 15; RUNS=0; while h=$(date +%H); [ "$h" -ge 22 ] || [ "$h" -lt 9 ]; do open -W /Path/to/Program.app || exit 1; RUNS=$((RUNS+1)); echo $RUNS > /temp/autotest_run_count.txt; done exit 0 A run that is already in progress at 09:00 finishes normally; the loop then simply declines to start the next one. Note that [ interprets a zero-padded hour such as 08 as a decimal number, so the comparison works all day. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147152",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78543/"
]
} |
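One way to bound the question's rerun loop to its 22:00-09:00 window is to gate each iteration on the current hour. The gate is deterministic enough to check in isolation (the function name is illustrative):

```shell
# succeeds (exit status 0) when hour $1 falls inside the 22:00-08:59 window
in_window() {
    [ "$1" -ge 22 ] || [ "$1" -lt 9 ]
}

late= early= midday=
in_window 23 && late=yes
in_window 3  && early=yes
in_window 12 || midday=no
echo "$late $early $midday"
```

In the real script the gate would be called as in_window "$(date +%H)" (test parses a zero-padded hour such as 08 as decimal), with cron starting the whole thing at 22:00, e.g. 0 22 * * * /path/to/script (path illustrative).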
147,181 | I want to remove all double quotes from my csv but not the fourth field (because the four fields represent PATH of file) Please advice how to implement this by sed or awk or perl one liner , etc What I know for now is to use simple sed command as: sed s"/\"//g" file.csv | sed 's/ //g' but this command no so elegant and also work on the fourth field ( fourth field should not be edit ) Remark - need also to delete empty spaces between quotes to near character Example ( file csv before ) "24 ","COsc ","LINUX","/VP/Ame/AR/Celts/COf"," fbsutamante ",fbu2012,"kkk","&^#$@J ",,,,,25,COsc,LINUX,"/VP/Ame/AR/Celts/COf","fbsutamante ",fbu2012,"iiii "," *****",,,,, Example ( file csv after after ) 24,COsc,LINUX,"/VP/Ame/AR HR/Ce lts/COf",fbsutamante,fbu2012,kkk,&^#$@J,,,,,25,COsc,LINUX,"/VP/Ame/AR HR/Ce lts/COf",fbsutamante,fbu2012,iiii,*****,,,,, | This can be a way: awk 'BEGIN{FS=OFS=","} # set input and output field separator as comma {for (i=5; i<=NF; i++) { # loop from 5th field gsub("\"","", $i); # remove " gsub(/^[ \t]+/,"", $i); # remove leading spaces gsub(/[ \t]+$/,"",$i)} # remove trailing spaces }1' file Removing leading and trailing is based on this answer by BMW: Remove Leading and trailing space in field in awk . Test $ awk 'BEGIN{FS=OFS=","} {for (i=5; i<=NF; i++) {gsub("\"","", $i); gsub(/^[ \t]+/,"", $i); gsub(/[ \t]+$/,"",$i)}}1' file24,COsc,LINUX,"/VP/Ame/AR/Celts/COf",fbsutamante,fbu2012,kkk,&^#$@J,,,,,25,COsc,LINUX,"/VP/Ame/AR/Celts/COf",fbsutamante,fbu2012,iiii,*****,,,,, If it also have to clean 1st to 3rd fields, just add if (i!=4) and loop through all the fields: $ awk 'BEGIN{FS=OFS=","} {for (i=1; i<=NF; i++) {if (i!=4) {gsub("\"","", $i); gsub(/^[ \t]+/,"", $i); gsub(/[ \t]+$/,"",$i)}}}1' a24,COsc,LINUX,"/VP/Ame/AR/Celts/COf",fbsutamante,fbu2012,kkk,&^#$@J,,,,,25,COsc,LINUX,"/VP/Ame/AR/Celts/COf",fbsutamante,fbu2012,iiii,*****,,,,, | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147181",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67059/"
]
} |
147,207 | Here is an example of a file path: /isf/GCM/VPfig/Aas/AR/ClCo el Doma Republic/VMN CRTro.txt What I want to get is the file basename: VMN CRTro.txt So I try the following: echo /isf/GCM/VPfig/Aas/AR/ClCo el Doma Republic/VMN CRTro.txt | sed s'/\// /g' | awk '{print $NF}' CRTro.txt <-- not as expected Or basename /isf/GCM/VPfig/Aas/AR/ClCo el Doma Republic/VMN CRTro.txt basename: extra operand `Doma' Try `basename --help' for more information. <-- basename can't handle the spaces What's the best way to get the basename of a file with spaces in it? | Just quote your path: basename "/isf/GCM/VPfig/Aas/AR/ClCo el Doma Republic/VMN CRTro.txt" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/147207",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67059/"
]
} |
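A footnote to the answer above: besides quoting the argument to basename, POSIX parameter expansion can strip the directory part without spawning a process. A small runnable sketch (the path is the example from the question):

```shell
path='/isf/GCM/VPfig/Aas/AR/ClCo el Doma Republic/VMN CRTro.txt'

# External command: the quotes protect the embedded spaces.
base_cmd=$(basename "$path")

# Pure shell: ${path##*/} removes the longest prefix matching "*/".
base_exp=${path##*/}

echo "$base_cmd"   # VMN CRTro.txt
echo "$base_exp"   # VMN CRTro.txt
```

The expansion form differs from basename on inputs with a trailing slash (basename strips it, the expansion yields an empty string), so prefer basename for untrusted input.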
147,261 | I tried to install mysql-server on my Vagrant Ubuntu 12.04 LTS virtual machine. When I did so, the setup auto-started. I can see this in the Vagrant output: While not mandatory, it is highly recommended that you set a password ││ for the MySQL administrative "root" user.││││ If this field is left blank, the password will not be changed.││││ New password for the MySQL "root" user After that the output text goes haywire — ± ├⎺ ⎼␊⎻┌▒␌␊ ┌␋␉⎽─┌␋├␊3-0 3.7.9-2┤␉┤┼├┤1 (┤⎽␋┼± ... — but is rather lengthy and full of green and red colors, so I believe the rest of the install is completing. But afterwards I can confirm that the package was not installed: sudo apt-get install --just-print mysql-server-5.5 ...The following NEW packages will be installed: mysql-server-5.5 How can I send the right answers through a shell script to configure the MySQL server? Or if I cannot, how can I stop the automatic launching of the configuration, or kill the setup once launched while still having the package installed? | You can set the MySQL root password in your bootstrap file by adding debconf-set-selections commands before running your apt-get install:
#!/usr/bin/env bash
debconf-set-selections <<< 'mysql-server mysql-server/root_password password MySuperPassword'
debconf-set-selections <<< 'mysql-server mysql-server/root_password_again password MySuperPassword'
apt-get update
apt-get install -y mysql-server
I presume this works on any Debian-based system. I use it every day; the box is built completely automatically. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/147261",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79059/"
]
} |
147,277 | I'm working on a bash script to restart multiple apache instances in different environments. It's working fine, except I'd like to add some logic to ensure only correct arguments are entered. I've tried the below statement in multiple ways, but I'm not having any luck. Any suggestions? if [[ $ENVT != qa || $ENVT != qa2 || $ENVT != stageqa || $ENVT != stage ]]; then usage fi When I use the stage environment, it evaluates the first test and invokes the function instead of checking the rest, as seen with set -x turned on: + ENVT=stage + ACTION=stop + USER=www + '[' 2 -ne 2 ']' + [[ stage != qa ]] + usage | Chaining != tests with || makes the condition always true: $ENVT cannot equal all four values at once, so at least one of the inequalities always holds (you would need && between them). A cleaner way is to not use multiple conditions at all; use case instead: case "$ENVT" in (qa|qa2|stageqa|stage) ;; (*) usage ;; esac | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147277",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79065/"
]
} |
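To make the case-based validation above concrete, here is a self-contained sketch; the function names mirror the question, and usage here is a stand-in:

```shell
usage() { echo "usage: $0 {qa|qa2|stageqa|stage} action" >&2; }

validate_envt() {
    case "$1" in
        qa|qa2|stageqa|stage) return 0 ;;
        *) usage; return 1 ;;
    esac
}

validate_envt stage && echo "stage accepted"
validate_envt prod 2>/dev/null || echo "prod rejected"
```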
147,284 | I do not understand this script. getopt_simple(){ echo "getopt_simple()" echo "Parameters are '$*'" until [ -z "$1" ] do echo "Processing parameter of: '$1'" if [ ${1:0:1} = '/' ] then tmp=${1:1} # Strip off leading '/' . . . parameter=${tmp%%=*} # Extract name. value=${tmp##*=} # Extract value. echo "Parameter: '$parameter', value: '$value'" eval $parameter=$value fi shift done} I need some help after if [ ${1:0:1} = '/' ] in the code written above, and my questions are: What is happening in the if statement? What does the ":" symbolise here? | There's just about one new syntax element per line, nice... I'll annotate each line with the relevant section from man bash - may be helpful as is, or in combination with another answer: From the argument $1 , cut out 1 char starting at 0 and check it's a / : if [ ${1:0:1} = '/' ] ${parameter:offset} ${parameter:offset:length} Substring Expansion. Expands to up to length characters of the value of parameter starting at the character specified by offset. If parameter is @, an indexed array subscripted by @ or *, or an associative array name, the results differ as described below. If length is omitted, expands to the substring of the value of parameter starting at the character specified by offset and extending to the end of the value. length and offset are arithmetic expressions (see ARITHMETIC EVALUATION below). If offset evaluates to a number less than zero, the value is used as an offset in characters from the end of the value of parameter. If length evaluates to a number less than zero, it is interpreted as an offset in characters from the end of the value of parameter rather than a number of characters, and the expansion is the characters between offset and that result. Note that a negative offset must be separated from the colon by at least one space to avoid being confused with the :- expansion. Leave char 0 out and get chars from 1 to the end from $1 : tmp=${1:1} # Strip off leading '/' . . .
See section above, first case. For arguments like --foo=bar , cut off text matching '=*' from the right, as much as possible to the left (think of handling --foo=bar=baz ): parameter=${tmp%%=*} # Extract name. ${parameter%word} ${parameter%%word} Remove matching suffix pattern. The word is expanded to produce a pattern just as in pathname expansion. If the pattern matches a trailing portion of the expanded value of parameter, then the result of the expansion is the expanded value of parameter with the shortest matching pattern (the ``%'' case) or the longest matching pattern (the ``%%'' case) deleted. If parameter is @ or *, the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with @ or *, the pattern removal operation is applied to each member of the array in turn, and the expansion is the resultant list. For arguments like --foo=bar , cut off text matching '*=' from the left, as much as possible to the right (think of handling --foo=bar=baz ): value=${tmp##*=} # Extract value. ${parameter#word} ${parameter##word} Remove matching prefix pattern. The word is expanded to produce a pattern just as in pathname expansion. If the pattern matches the beginning of the value of parameter, then the result of the expansion is the expanded value of parameter with the shortest matching pattern (the ``#'' case) or the longest matching pattern (the ``##'' case) deleted. If parameter is @ or *, the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with @ or *, the pattern removal operation is applied to each member of the array in turn, and the expansion is the resultant list.
(Note: the example case --foo=bar=baz is not supported as --foo and bar=baz , but as --foo and baz ) Source: section Parameter Expansion in man bash , man bash | less '+/Parameter Expansion' (or, shorter man bash | less '+/##' ) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/147284",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73029/"
]
} |
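The expansions quoted from the manual above can be checked interactively. A sketch using a hypothetical argument shaped like the /name=value parameters getopt_simple expects (note that ${arg:0:1} is a bashism, while the %% and ## removals are POSIX):

```shell
arg='/verbose=3'        # hypothetical /name=value parameter

first=${arg:0:1}        # substring: 1 char at offset 0   -> "/"
tmp=${arg:1}            # everything after the slash      -> "verbose=3"
parameter=${tmp%%=*}    # strip longest "=*" suffix       -> "verbose"
value=${tmp##*=}        # strip longest "*=" prefix       -> "3"

echo "$first $parameter $value"   # / verbose 3
```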
147,290 | I have a directory that I am trying to clean out that contains both files and subdirectories. What I am trying to do is simple: move all the files into another directory, but leave all the sub-directories as they are. I am thinking something like: mv [*_but_no_dirs] ./other_directory It seems like there should be a simple way to do this with wildcards * and regex... Anyone have ideas? | Regex aren't involved here. Wildcards in bash (like most other shells) only match files based on the file names, not based on the file type or other characteristics. There is one way to match by type: adding / at the end of the pattern makes it only match directories or symbolic links to directories. This way, you can move directories, then move what's left, and move directories back — cumbersome but it works. tmp=$(TMPDIR=.. mktemp -d)mv -- */ "$tmp"mv -- * other_directory/mv "$tmp"/* .rmdir "$tmp" (that approach should be avoided if the current directory is the mount point of a filesystem, as that would mean the moving of directories away and back would have to copy all the data in there twice). A standard way to match files by type is to call find . find . -name . -o -type d -prune -o -exec sh -c 'mv "$@" "$0"' other_directory/ {} + (also moves symlinks, whether they point to directories or not). In zsh, you can use glob qualifiers to match files by type. The . qualifier matches regular files; use ^/ to match all non-directories, or -^/ to also exclude symbolic links to directories. mv -- *(.) other_directory/ In any shell, you can simply loop. for x in *; do if [ ! -d "$x" ]; then mv -- "$x" other_directory/ fidone (does not move symlinks to directories). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/147290",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61235/"
]
} |
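The plain loop from the answer above can be exercised end to end in a scratch directory. This sketch builds a tiny tree, moves only the regular files, and leaves the subdirectory behind:

```shell
src=$(mktemp -d)
dst=$(mktemp -d)

touch "$src/a.txt" "$src/b.txt"
mkdir "$src/subdir"

for x in "$src"/*; do
    if [ ! -d "$x" ]; then
        mv -- "$x" "$dst"/
    fi
done

ls "$dst"   # a.txt  b.txt
ls "$src"   # subdir
```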
147,327 | This is the output of pacman . I installed nothing; I just wanted to update... :: Synchronizing package databases... core is up to date extra 1761.1 KiB 2.58M/s 00:01 [#######################################################################] 100% community 2.3 MiB 2.25M/s 00:01 [#######################################################################] 100% multilib 121.2 KiB 1236K/s 00:00 [#######################################################################] 100% :: Starting full system upgrade... :: Replace glamor-egl with extra/xorg-server? [Y/n] What could cause this? What option should I select? | That's not a conflict; it's a reflection of the fact that the new version of X (1.16) has hit the repos and, as the news makes clear , glamor-egl is deprecated. Follow pacman 's advice and select Y . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/147327",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33165/"
]
} |
147,332 | I'm using runit to manage my services and when a new version of nginx is installed, I'd like to restart nginx using sv restart nginx . Is there a way that I can monitor a package or set of packages for upgrades and trigger a script when they're upgraded? | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/147332",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5614/"
]
} |
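The question above was left without a matching answer here. On pacman 5 and later, one approach is an alpm PostTransaction hook that fires when the watched package is upgraded. The hook below is a sketch (the target package, sv path, and hook filename are assumptions), written into a scratch directory instead of the real /etc/pacman.d/hooks:

```shell
hookdir=$(mktemp -d)    # stand-in for /etc/pacman.d/hooks

cat > "$hookdir/nginx-restart.hook" <<'EOF'
[Trigger]
Operation = Upgrade
Type = Package
Target = nginx

[Action]
Description = Restarting nginx after upgrade
When = PostTransaction
Exec = /usr/bin/sv restart nginx
EOF

cat "$hookdir/nginx-restart.hook"
```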
147,342 | I'm working on a system whose primary startup system is runit. Unfortunately, runit requires that whatever application it is running be running in the foreground, like so: #!/bin/bash exec sshd -D Seeing as nginx doesn't offer a way to run it in the foreground, how can I have runit still manage nginx and be able to stop, start, and restart it using runit's sv commands? | You can use the daemon off option: exec /usr/sbin/nginx -c /etc/nginx/nginx.conf -g "daemon off;" From the nginx wiki : You can use daemon off safely in production mode with runit / daemontools; however, you can't do a graceful upgrade. master_process off should never be used in production. When you use runit to control nginx , it becomes the parent process of the nginx master process. But if you try to do an online upgrade, the nginx master process will fork and execute the new binary. A new master process is created, but because the old master process still exists (because it's controlled by runit ), the parent of the new master process will be the init process, because runit cannot control the new master process as it didn't start it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147342",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5614/"
]
} |
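In runit terms, the answer above becomes the service's run script. This sketch writes one into a scratch directory (in practice it would live at /etc/sv/nginx/run; the nginx paths are assumptions):

```shell
svdir=$(mktemp -d)      # stand-in for /etc/sv/nginx

cat > "$svdir/run" <<'EOF'
#!/bin/sh
exec 2>&1
exec /usr/sbin/nginx -c /etc/nginx/nginx.conf -g "daemon off;"
EOF
chmod +x "$svdir/run"

cat "$svdir/run"
```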
147,357 | If I cat /etc/shadow I can get the encrypted passwords of root and my user. These passwords are the same (I know, bad security) for each account, but in /etc/shadow they show up as being different encrypted strings. Why? Are different algorithms used for each? | No, the algorithm is the same; the difference comes from the salt. Each time a password is set, a random salt is generated, stored as part of the hash string in /etc/shadow , and mixed into the hash. So even a user with the same name, same password, and created at the same time will (with almost certain probability) end up with a different hash, simply because the two accounts got different salts. If you want to look at a quick example, here it may explain it better. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147357",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/75027/"
]
} |
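The effect of the salt can be reproduced with a few shell commands. This is not the exact crypt(3) scheme /etc/shadow uses, just a sketch of the principle: the same password combined with two random salts yields two different digests, while reusing a salt reproduces the same digest.

```shell
password=secret

# Draw two random 8-byte salts, hex-encoded.
salt1=$(head -c 8 /dev/urandom | od -An -tx1 | tr -d ' \n')
salt2=$(head -c 8 /dev/urandom | od -An -tx1 | tr -d ' \n')

hash1=$(printf '%s%s' "$salt1" "$password" | sha256sum | cut -d' ' -f1)
hash2=$(printf '%s%s' "$salt2" "$password" | sha256sum | cut -d' ' -f1)

# Shadow-style "salt$hash" strings: same password, different strings.
printf '%s$%s\n' "$salt1" "$hash1"
printf '%s$%s\n' "$salt2" "$hash2"
```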
147,364 | I have the following two files. The first file is : 3184 2014-07-28 04:15 global.Remote-Access 10.111.8.25 81.245.6.25 tcp 3268 3035 2014-07-28 04:16 global.Remote-Access 10.111.8.12 81.245.6.25 tcp 3268 The second file is: 1 Jul 28 04:12 2014-07-28 id967254(group3)[attribute1 attribute2] Tunneling: User with IP 10.111.8.12 10 connected 1 Jul 28 04:15 2014-07-28 id920767(group2)[attribute3 attribute4 .... attribute n] Tunneling: User with IP 10.111.8.25 connected 1 Jul 28 04:16 2014-07-28 ID926072(group3)[attribute1 attribute2] Tunneling:User with IP 10.111.8.12 connected If the source IP address in the file 1 is equal to file 2 , and if the time ( hh:mm ) and date ( yyyy-mm-dd ) in the file 1 are equal to file2, the third file will be as follows: 3184 04:15 2014-07-28 global.Remote-Access id920767(group2)[attribute3 attribute4 .... attribute n] 10.111.8.25 81.245.6.25 tcp 3268 3035 04:16 2014-07-28 global.Remote-Access ID926072(group3)[attribute1 attribute2] 10.111.8.12 81.245.6.25 tcp 3268 How can I realise this using awk ? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147364",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79115/"
]
} |
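The join question above was left without a matching answer here. A hedged awk sketch: the field positions and the id...(group)[attributes] shape are assumptions read off the sample lines, and file2 is keyed on date, hh:mm time, and IP before merging the matching file1 lines:

```shell
cd "$(mktemp -d)"

cat > file1 <<'EOF'
3184 2014-07-28 04:15 global.Remote-Access 10.111.8.25 81.245.6.25 tcp 3268
3035 2014-07-28 04:16 global.Remote-Access 10.111.8.12 81.245.6.25 tcp 3268
EOF

cat > file2 <<'EOF'
1 Jul 28 04:12 2014-07-28 id967254(group3)[attribute1 attribute2] Tunneling: User with IP 10.111.8.12 10 connected
1 Jul 28 04:15 2014-07-28 id920767(group2)[attribute3 attribute4] Tunneling: User with IP 10.111.8.25 connected
1 Jul 28 04:16 2014-07-28 ID926072(group3)[attribute1 attribute2] Tunneling: User with IP 10.111.8.12 connected
EOF

awk '
NR == FNR {                                        # first file read: file2
    time = $4; date = $5
    match($0, /[iI][dD][0-9]+\([^)]*\)\[[^]]*\]/)  # id...(group)[attrs] may contain spaces
    id = substr($0, RSTART, RLENGTH)
    match($0, /IP [0-9.]+/)                        # first address after "IP "
    ip = substr($0, RSTART + 3, RLENGTH - 3)
    seen[date, time, ip] = id
    next
}
($2, $3, $5) in seen {                             # file1: id date time name ip1 ip2 proto port
    print $1, $3, $2, $4, seen[$2, $3, $5], $5, $6, $7, $8
}
' file2 file1 > file3

cat file3
```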
147,377 | I untarred a corrupt tar file, and managed to end up with some directorythat I can not delete,If I try to delete it, it seems like it can not be found, but ls shows it's present, both with bash and with python I get similar behaviour, except right after I try to delete it with rm -rf , ls complains it can't find it, then it lists it (see below after rm -rf ). The find command shows the file is present, but still I can't think of a way to delete it. Here are my attempts: Here you see both ls and find agree we have a directory, rl]$ lsmikeaâ??cntrl]$ find -maxdepth 1 -type d -empty -print0 ./mikeaâcnt But I can't delete it: rl]$ find -maxdepth 1 -type d -empty -print0 | xargs -0 rm -f -v rm: cannot remove `./mikeaâ\302\201\302\204cnt': Is a directoryrl]$ lsmikeaâ??cnt I can cd to it though and it's empty: rl]$ cd mikeaâ^Á^Äcnt/mikeaâ^Á^Äcnt]$ lsmikeaâ^Á^Äcnt]$ pwd.../rl/mikeaâcntmikeaâ^Á^Äcnt]$ cd ../rl]$ lsmikeaâ??cnt see below that is not a simple file but a directory, plus ls behaves funny after the rm -rf it says it can't find the file then lists it straight after: rl]$ rm mikeaâ^Á^Äcnt/rm: cannot remove `mikeaâ\302\201\302\204cnt/': Is a directoryrl]$ rm -rf mikeaâ^Á^Äcnt/rl]$ lsls: cannot access mikeaâcnt: No such file or directorymikeaâ??cntrl]$ So this is the attempt with python, the file is found, but the name is notusable as a name that can be deleted: rl]$ python Python 2.6.6 (r266:84292, Jul 10 2013, 22:48:45) [GCC 4.4.7 20120313 (Red Hat 4.4.7-3)] on linux2Type "help", "copyright", "credits" or "license" for more information.>>> import os>>> import shutil>>> os.listdir('.')['mikea\xc3\xa2\xc2\x81\xc2\x84cnt']>>> shutil.rmtree(os.listdir('.')[0] )Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib64/python2.6/shutil.py", line 204, in rmtree onerror(os.listdir, path, sys.exc_info()) File "/usr/lib64/python2.6/shutil.py", line 202, in rmtree names = os.listdir(path)OSError: [Errno 2] No such file or directory: 
'mikea\xc3\xa2\xc2\x81\xc2\x84cnt' even when I use tab completion the name it picks up is no usable: rl]$ rm -rf mikeaâ^Á^Äcnt rl]$ lsls: cannot access mikeaâcnt: No such file or directorymikeaâ??cnt using the name that python shows with bash I get this: rl]$ rm -rf "mikea\xc3\xa2\xc2\x81\xc2\x84cnt"rl]$ lsls: cannot access mikeaâcnt: No such file or directorymikeaâ??cnt Is there anything I can do to get rid of this corrupt dir?The underlying filesystem (NFS) seems functional and no other problems are reported, and I have had no such problems until the corrupt tar file. EDIT:Here is using find 's own -exec option to call rm rl]$ find -maxdepth 1 -type d -empty -exec rm -f {} \;find: `./mikeaâ\302\201\302\204cnt': No such file or directoryrl]$ lsls: cannot access mikeaâcnt: No such file or directorymikeaâ??cntrl]$ but the file is still there, ( ls complains it can't find it, but then shows it anyway) 2nd EDIT: rl]$ find -maxdepth 1 -type d -empty -exec rm -rf {} \;find: `./mikeaâ\302\201\302\204cnt': No such file or directoryrl]$ lsls: cannot access mikeaâcnt: No such file or directorymikeaâ??cnt The behaviour is still unchanged, the file still present 3rd EDIT: rl]$ lsmikeaâ??cntrl]$ find -maxdepth 1 -type d -empty -exec rm -rf {} + rl]$ lsls: cannot access mikeaâcnt: No such file or directorymikeaâ??cnt There seems to be more to the name than mikeaâcnt from looking at the output of the python attempt mikea\xc3\xa2\xc2\x81\xc2\x84cnt , and this screenshot: 4th EDIT:This is the attempt with a wild card: rl]$ echo * mikeaâcntrl]$ echo mike* mikeaâcntrl]$ rm -rf mike*rl]$ lsls: cannot access mikeaâcnt: No such file or directorymikeaâ??cnt and my locale: rl]$ localeLANG=en_US.utf8LC_CTYPE="en_US.utf8"LC_NUMERIC="en_US.utf8"LC_TIME="en_US.utf8"LC_COLLATE="en_US.utf8"LC_MONETARY="en_US.utf8"LC_MESSAGES="en_US.utf8"LC_PAPER="en_US.utf8"LC_NAME="en_US.utf8"LC_ADDRESS="en_US.utf8"LC_TELEPHONE="en_US.utf8"LC_MEASUREMENT="en_US.utf8"LC_IDENTIFICATION="en_US.utf8"LC_ALL= 5th 
Edit: rl]$ ls -i ls: cannot access mikeaâcnt: No such file or directory? mikeaâ??cnt but also the behaviour has changed, now ls and cd do this: rl]$ lsls: cannot access mikeaâcnt: No such file or directorymikeaâ??cntrl]$ cd mikeaâ^Á^Äcnt mikeaâcnt: No such file or directory. This has happened after the attempts to delete, I'm thinking that it might be NFS issues as suggested in one of the answers here by vinc17. 6th EDIT:This is the output of lsof and ls -a rl]$ /usr/sbin/lsof mikeaâ^Á^Äcnt lsof: status error on mikeaâ\xc2\x81\xc2\x84cnt: No such file or directory above is wrong, here is the correct lsof invocation:(rl is the parent directory) rl]$ /usr/sbin/lsof | grep mike | grep rl tcsh 11926 mike cwd DIR 0,33 4096 19569249 /home/mike/mish/rllsof 14733 mike cwd DIR 0,33 4096 19569249 /home/mike/mish/rlgrep 14734 mike cwd DIR 0,33 4096 19569249 /home/mike/mish/rlgrep 14735 mike cwd DIR 0,33 4096 19569249 /home/mike/mish/rllsof 14736 mike cwd DIR 0,33 4096 19569249 /home/mike/mish/rlrl]$ rl]$ ls -als: cannot access mikeaâcnt: No such file or directory. .. mikeaâ??cnt 7th Edit:move won't work, (I tried it before all this, but I did not save the output), but it has the same problem as ls and rm with the file. 8th EDIT: this is using the hex chars as suggested: rl]$ ls --show-control-chars | xxd0000000: 6d69 6b65 61c3 a2c2 81c2 8463 6e74 0a mikea......cnt.rl]$ rmdir $'mikea\6d69\6b65\61c3\a2c2\81c2\8463\6e74\0acnt' rmdir: failed to remove `mikea\006d69\006b651c3\a2c2\\81c2\\8463\006e74': No such file or directoryrl]$ lsls: cannot access mikeaâcnt: No such file or directorymikeaâ??cntrl]$ 9th Edit:for the stat command: rl]$ stat mikeaâ^Á^Äcnt stat: cannot stat `mikeaâ\302\201\302\204cnt': No such file or directory rl]$ Its seems even more likely from all the output, there is a bug or other NFS misbehaviour as suggested in the comments. 
Edit 10: This is strace output in a gist since its so large,its the output or these two commands: strace -xx rmdir ./* | grep -e '-1 E'`strace -xx -e trace=file ls -li` https://gist.github.com/mikeatm/e07fa600747a4285e460 Edit 11:So before the above rmdir I noticed that I could cd into the directory, but after the rmdir I could not cd again, similar to yesterday. The . and .. files were present: rl]$ lsmikeaâ??cntrl]$ cd mikeaâ^Á^Äcnt/mikeaâ^Á^Äcnt]$ lsmikeaâ^Á^Äcnt]$ ls -a. ..mikeaâ^Á^Äcnt]$ cd ../ Final Edit:I saw a local admin over this and it was dealt with by logging on to the server itself and deleting from there. The explanation from them is that it could be a problem with character sets in the name being inappropriate. | The following excerpt from this essay potentially explains why that directory refuses to be deleted: NFSv4 requires that all filenames be exchanged using UTF-8 over the wire. The NFSv4 specification, RFC 3530, says that filenames should be UTF-8 encoded in section 1.4.3: “In a slight departure, file and directory names are encoded with UTF-8 to deal with the basics of internationalization.” The same text is also found in the newer NFS 4.1 RFC (RFC 5661) section 1.7.3. The current Linux NFS client simply passes filenames straight through, without any conversion from the current locale to and from UTF-8. Using non-UTF-8 filenames could be a real problem on a system using a remote NFSv4 system; any NFS server that follows the NFS specification is supposed to reject non-UTF-8 filenames. So if you want to ensure that your files can actually be stored from a Linux client to an NFS server, you must currently use UTF-8 filenames. In other words, although some people think that Linux doesn’t force a particular character encoding on filenames, in practice it already requires UTF-8 encoding for filenames in certain cases. UTF-8 is a longer-term approach. 
Systems have to support UTF-8 as well as the many older encodings, giving people time to switch to UTF-8. To use “UTF-8 everywhere”, all tools need to be updated to support UTF-8. Years ago, this was a big problem, but as of 2011 this is essentially a solved problem, and I think the trajectory is very clear for those few trailing systems. Not all byte sequences are legal UTF-8, and you don’t want to have to figure out how to display them. If the kernel enforces these restrictions, ensuring that only UTF-8 filenames are allowed, then there’s no problem... all the filenames will be legal UTF-8. Markus Kuhn’s utf8_check C function can quickly determine if a sequence is valid UTF-8. The filesystem should be requiring that filenames meet some standard, not because of some evil need to control people, but simply so that the names can always be displayed correctly at a later time. The lack of standards makes things harder for users, not easier. Yet the filesystem doesn’t force filenames to be UTF-8, so it can easily have garbage. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/147377",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79125/"
]
} |
147,384 | I have 2 partitions sda1 XP and sda2 CENTOS. I have reinstalled grub on /dev/sda but on rebooting, I experienced: error : 22 no such parition for Centos; while Windows boots charmingly on the other. fdisk -l gives sda1 as boot, is there a possibility that I can change it to sda2 on shell , since I am on rescue mode. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/147384",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39890/"
]
} |
147,420 | What is $() in Linux shell commands? For example: chmod 777 $(pwd) | It's very similar to the backticks ``. It's called command substitution ( posix specification ) and it invokes a subshell. The command inside the parentheses of $() or between the backticks ( `…` ) is executed in a subshell and the output is then placed in the original command. Unlike backticks, the $(...) form can be nested, so you can use command substitution inside another substitution. There are also differences in the escaping of characters within the substitution. I prefer the $(...) form. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/147420",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73778/"
]
} |
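A few runnable lines illustrating the substitution, including the nesting the answer mentions:

```shell
dir=$(pwd)              # the output of pwd becomes the value of dir
year=$(date +%Y)        # any command's stdout can be captured

# $(...) nests without the escaping gymnastics backticks need:
outer=$(echo "inner: $(echo nested)")
echo "$outer"           # inner: nested

# so `chmod 777 $(pwd)` runs pwd first, then chmod on its output
```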
147,426 | I get this error message when trying to install CentOS 7 from a USB device: /dev/root does not exist in CentOS 7 How can I solve this problem? | Use Win32 Disk Imager on Windows, or dd to write the ISO to the USB stick on Linux/OSX: dd if=CentOS-7.0-1406-x86_64-NetInstall.iso of=/dev/sdb bs=8M (GNU dd wants an uppercase M ; the BSD dd on OS X takes a lowercase m .) I've recently used the first and it booted fine after doing that. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147426",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79162/"
]
} |
147,434 | How can I find out what commands a package ran to install the software with apt-get install <package> ? For example, if I install a package that creates a user, how can I find out how it created that user? | You look at the post-installation script, which is actually run by dpkg. You can find these in /var/lib/dpkg/info . Such scripts contain the name of the binary package in question, and have the suffix .postinst . Note that there are also pre-installation scripts, which have the suffix .preinst , but I think that a package is much more likely to create a new user in a postinst script. Did you have a particular example in mind? An example is postgresql-common, which creates the postgres user. Here is an extract from the file /var/lib/dpkg/info/postgresql-common.postinst . # Make sure the administrative user exists if ! getent passwd postgres > /dev/null; then adduser --system --quiet --home /var/lib/postgresql --no-create-home \ --shell /bin/bash --group --gecos "PostgreSQL administrator" postgres fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147434",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78683/"
]
} |
147,443 | Is it possible for less output to set the tab width to a number X as it is for cat ? | Yes, it is possible with less -x or less --tabs , e.g. less -x4 will set the tabwidth to 4. You can configure defaults with the LESS environment variable, e.g. LESS="-x4" . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/147443",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72471/"
]
} |
147,446 | How do you use top with just showing the CMD name? I have used top with just showing the running process that I want; for example: $ top -p 19745 And if I want more than one PID, I would use: $ top -p 19745 -p 19746 -p 19747 I have Googled it, but they don't say how you can do it, even when I try looking at the help in top it still doesn't show you. Is there a way you can filter by the CMD name only? There are certain files that I am running through Apache2, and I want to monitor them only. afile1.abc afile2.abc afile3.abc afile4.abc Update I see this in the man top page: x: Command -- Command line or Program name Display the command line used to start a task or the name of the associated program. You toggle between command line and name with 'c', which is both a command-line option and an interactive command. When you've chosen to display command lines, processes without a command line (like kernel threads) will be shown with only the program name in parentheses, as in this example: ( mdrecoveryd ) Either form of display is subject to potential truncation if it's too long to fit in this field's current width. That width depends upon other fields selected, their order and the current screen width. Note: The 'Command' field/column is unique, in that it is not fixed-width. When displayed, this column will be allocated all remaining screen width (up to the maximum 512 characters) to provide for the potential growth of program names into command lines. Will that do anything for me? | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/147446",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20752/"
]
} |
147,454 | I want to write a command that gives me the newest file in a directory, recursively. But that's not my only limitation: the file has to be an mp3 or a jpg file (case-insensitive preferred). I only need the creation date of that newest file. If possible I want it formatted like this: 30-12-2014 (so: day-month-year). This is currently what I've got: find . -name '*.mp3' -or -name '*.JPG' -printf "%TD \n" | sort -rn | head -n 1 But it doesn't work well. I only get JPGs and the date isn't formatted. | Something like this should work: find . \( -iname "*.mp3" -o -iname "*.jpg" \) -printf '%TY%Tm%Td %TT %p\n' | sort -r This should (case-insensitively) find files ending with mp3 or jpg, print out the modification time, then sort in reverse order. It will show both file types when you run it effectively as two commands: ( find . -iname "*.mp3" -printf '%TY%Tm%Td %TT %p\n' ; find . -iname "*.jpg" -printf '%TY%Tm%Td %TT %p\n' ) | sort -r | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147454",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79132/"
]
} |
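To get only the date of the newest match in the exact 30-12-2014 format the question asks for, the same approach can sort on a machine-friendly key while printing a human-friendly one. A sketch, assuming GNU find (its -printf supports %T@ for the epoch timestamp):

```shell
# Sort on the epoch mtime (%T@), but display day-month-year (%Td-%Tm-%TY)
find . \( -iname '*.mp3' -o -iname '*.jpg' \) -printf '%T@ %Td-%Tm-%TY\n' \
    | sort -rn | head -n 1 | awk '{print $2}'
```

Strictly speaking this is the modification time, not a creation date; classic Unix filesystems do not record when a file was created.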
147,465 | I would like to suspend my Xubuntu (14.04) system from a keyboard shortcut without entering my superuser password ( sudo ). I'm looking for a command line that I can bind to a shortcut. So far, I have tried two solutions: Xfce command: xfce4-session-logout --suspend Problem: The system doesn't lock the session, so I don't need to enter my password on wake-up, but I want it to ask for one. Dbus: dbus-send --print-reply --system --dest=org.freedesktop.UPower /org/freedesktop/UPower org.freedesktop.UPower.Suspend Problem: After wake-up, the Internet connection is down and I have to reboot the system to get it back. Is there a third solution which 1. asks for the password during the wake-up process, and 2. doesn't break the Internet connection? In fact, the graphical default shortcut (from the menu) works fine; I just don't know which command line it calls. | I wrote a script. It seems to do what you ask for:
#!/usr/bin/env zsh
# Custom suspend
#
# (That 'zsh' up there can be switched to 'bash', or
# pretty much any shell - this doesn't do anything too fancy.)
#
# Dependencies are mostly xfce stuff:
#
# xbacklight
# xflock4
# xfce4-session-logout

# Set how dim we want the screen to go (percentage, out of 100)
dim=5

# Pack up your toys
previous_dimness=$(xbacklight -get)

# Turn down the lights
xbacklight -set $dim

# Lock the door (this requires a password to get back in)
xflock4

# And go to sleep
xfce4-session-logout --suspend

# When we wake up, turn the lights back on
xbacklight -set $previous_dimness
| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147465",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49275/"
]
} |
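On newer systemd-based systems there is also a shorter route than the script above. This is an addition of mine, and it assumes logind/polkit allows the active session to suspend without root (the default on most desktop installs):

```shell
# Lock the Xfce session first, then ask logind to suspend.
# No sudo is needed when polkit grants the suspend privilege
# to the active local session.
xflock4 && systemctl suspend
```

Because the screen is locked before suspending, waking up lands on the password prompt, which addresses the first requirement in the question.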
147,471 | I find the output of the shell command top to be a simple and familiar way to get a rough idea of the health of a machine. I'd like to serve top's output (or something very similar to it) from a tiny web server on a machine for crude monitoring purposes. Is there a way to get top to write its textual output exactly once, without formatting characters? I've tried this: (sleep 1; echo 'q') | top > output.txt This seems to be close to what I want, except that (1) there's no guarantee that I won't get more or less than one screenful of info and (2) I have to strip out all the terminal formatting characters. Or is there some other top-like command that lists both machine-wide and process-level memory/CPU usage/uptime info? (Ideally, I'd love a strategy that's portable to both Linux and Mac OS X, since our devs use Macs and our prod environment is Linux.) | On Linux, you can try this: top -bn1 > output.txt From man top: -b : Batch-mode operation Starts top in 'Batch' mode, which could be useful for sending output from top to other programs or to a file. In this mode, top will not accept input and runs until the iterations limit you've set with the '-n' command-line option or until killed. -n : Number-of-iterations limit, as: -n number Specifies the maximum number of iterations, or frames, top should produce before ending. On OS X, try: top -l 1 From the OS X top manpage: -l <samples> Use logging mode and display <samples> samples, even if standard output is a terminal. 0 is treated as infinity. Rather than redisplaying, output is periodically printed in raw form. Note that the first sample displayed will have an invalid %CPU displayed for each process, as it is calculated using the delta between samples. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/147471",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28969/"
]
} |
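To cover the Linux-plus-Mac portability the question asks about, both invocations can be wrapped in one small dispatch script. A sketch; it assumes only that uname prints Darwin on the Macs:

```shell
#!/bin/sh
# Write one plain-text top snapshot to stdout:
# batch mode on Linux, logging mode on macOS.
case "$(uname)" in
  Darwin) top -l 1 ;;
  *)      top -bn1 ;;
esac
```

Redirect its output (for example to output.txt) from whatever job feeds the monitoring page.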