source_id | question | response | metadata
---|---|---|---|
250,969 | I need to force-disconnect all sessions for a given user. Since this user might be root as well, I think it would be better to avoid killing the parent sshd process along the way. Is there a portable way to do that? | This isn't elegant but it would get the job done. ps ax | grep 'sshd: <insert username here>' | grep -v 'grep' | awk '{print $1}' | xargs kill | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/250969",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2917/"
]
} |
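A more compact variant of this approach, assuming a system with pkill from procps, matches the per-session sshd processes by their command line. The parent listener's command line is /usr/sbin/sshd rather than "sshd: user", so it survives (this is a sketch; "alice" is a hypothetical username):

```bash
# Kill the "sshd: alice ..." session processes; the parent listener's
# command line does not match this pattern, so it is left alone.
pkill -f '^sshd: alice'
```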
250,979 | Let's say I want to repeat the same string of environment variables before running various incantations of a command:
if [[ some_thing ]]; then
  TZ=GMT LC_ALL=C LONG_ENV_VAR=foo my_command
elif [[ some_other_thing ]]; then
  TZ=GMT LC_ALL=C LONG_ENV_VAR=foo my_command --with-arg
else
  TZ=GMT LC_ALL=C LONG_ENV_VAR=foo my_command --with-other-arg
fi
Is there a way to combine those? Some options: (1) Set them via export:
export TZ=GMT
export LC_ALL=C
export LONG_ENV_VAR=foo
if [[ ]] # ...
This works, but I would rather not have them continue to be set in the environment. (2) Attempt to create a variable variable! local cmd_args="TZ=GMT LC_ALL=C LONG_ENV_VAR=foo" Unfortunately, when I tried to run this via $cmd_args my_command I got "TZ=GMT: command not found". (3) Just list them all out every time. I also tried Googling for this, but "environment variable variable" isn't the easiest term to search for and I didn't get anywhere. Is there a fix for what I'm trying to do in #2, or am I stuck with some version of #1 and unsetting the vars afterwards? | I might use a subshell for this:
(
  export TZ=GMT LC_ALL=C LONG_ENV_VAR=foo
  if [[ some_thing ]]; then
    exec my_command …
  fi
)
That allows you to clearly set the variables once, have them present for anything you run inside the subshell, and also not have them present in the main shell's environment. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/250979",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9519/"
]
} |
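If the goal is a reusable bundle of variables that never touches the current shell's environment, one pattern not shown in the thread is to keep the assignments in an array and hand them to env(1) (a sketch; note env-run commands cannot be shell functions or builtins):

```bash
# The array holds NAME=value pairs, set once.
vars=(TZ=GMT LC_ALL=C LONG_ENV_VAR=foo)

if [[ some_thing ]]; then
    env "${vars[@]}" my_command
else
    env "${vars[@]}" my_command --with-arg
fi
```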
251,005 | I have a simple script with a lot of output:
#!/bin/bash
{
  apt-get update && apt-get upgrade
} 2>&1
Starting it with ./script.sh >/dev/null 2>&1 silences it. Can I silence the script from the inside? | You can add the redirection in your script (--EDIT-- after Jeff Schaller's comment):
#!/bin/bash
# case 1: if you want to hide all messages, even errors
{ apt-get update && apt-get upgrade; } > /dev/null 2>&1
#!/bin/bash
# case 2: if you want to hide all messages but errors
{ apt-get update && apt-get upgrade; } > /dev/null | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/251005",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78795/"
]
} |
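An alternative the answer doesn't mention is redirecting the script's own file descriptors once with exec, which silences everything that follows without wrapping commands in a group (a minimal sketch):

```bash
#!/bin/bash
# Silence all further output, including errors:
exec >/dev/null 2>&1
# ...or keep errors visible by redirecting only stdout:
# exec >/dev/null
apt-get update && apt-get upgrade
```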
251,006 | I am working on a project, in which the ultimate goal is to successfully wipe iOS from an iPhone and install TinyCore Linux in its place. I expected the iPhone to function like an external drive when connected to a computer running Linux (i.e. a mountable device or partition containing the phone's entire storage, including the OS), but instead, it mounts in the phone profile's media directory ( /var/mobile/Media ). How can I access the iPhone storage as a device or partition (or really just have access to the / directory) so that I can put a new OS, boot-loader, etc. onto it? | You can add the redirection in your script (--EDIT-- after Jeff Schaller's comment):
#!/bin/bash
# case 1: if you want to hide all messages, even errors
{ apt-get update && apt-get upgrade; } > /dev/null 2>&1
#!/bin/bash
# case 2: if you want to hide all messages but errors
{ apt-get update && apt-get upgrade; } > /dev/null | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/251006",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137327/"
]
} |
251,013 | I'm trying to match multiple alphanumeric values (this number could vary) from a string and save them to a bash capture group array. However, I'm only getting the first match:
mystring1='<link rel="self" href="/api/clouds/1/instances/1BBBBBB"/> dsf <link rel="self" href="/api/clouds/1/instances/2AAAAAAA"/>'
regex='/instances/([A-Z0-9]+)'
[[ $mystring1 =~ $regex ]]
echo ${BASH_REMATCH[1]}
1BBBBBB
echo ${BASH_REMATCH[2]}
As you can see, it matches the first value I'm looking for, but not the second. | It's a shame that you can't do global matching in bash. You can do this:
global_rematch() {
  local s=$1 regex=$2
  while [[ $s =~ $regex ]]; do
    echo "${BASH_REMATCH[1]}"
    s=${s#*"${BASH_REMATCH[1]}"}
  done
}
global_rematch "$mystring1" "$regex"
1BBBBBB
2AAAAAAA
This works by chopping the matched prefix off the string so the next part can be matched. It destroys the string, but in the function it's a local variable, so who cares. I would actually use that function to populate an array:
$ mapfile -t matches < <( global_rematch "$mystring1" "$regex" )
$ printf "%s\n" "${matches[@]}"
1BBBBBB
2AAAAAAA | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/251013",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143799/"
]
} |
251,054 | I would expect the xdg-open command to use the same application that opens when I double-click the file in the default file manager, but this is not always true. For example my DE is XFCE, my file manager is Thunar and my default picture viewer is Ristretto. However, xdg-open example.png opens the example PNG file in Pinta. Why? | xdg-open is a desktop-independent tool for configuring the default applications of a user. Many applications invoke the xdg-open command internally. Inside a desktop environment (like GNOME, KDE, or Xfce), xdg-open simply passes the arguments to that desktop environment's file-opener application (e.g. gvfs-open, kde-open, or exo-open), which means that the associations are left up to the desktop environment. When no desktop environment is detected (for example when one runs a standalone window manager like Openbox), xdg-open will use its own configuration files. (from the Arch Wiki) Specific to your question, you could try this to set the default application associated with PNG files: xdg-mime default <ristretto.desktop> image/png You need to find out the exact name of Ristretto's desktop file. Afterwards, you can check it with this: xdg-mime query default image/png | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/251054",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2119/"
]
} |
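To find the exact .desktop file name the answer refers to, listing the standard application directories is usually enough (a sketch; paths and the resulting file name, assumed here to be ristretto.desktop, may vary by distribution):

```bash
# Look for Ristretto's desktop entry, then register it and verify.
ls /usr/share/applications ~/.local/share/applications 2>/dev/null | grep -i ristretto
xdg-mime default ristretto.desktop image/png   # file name taken from the listing above
xdg-mime query default image/png
```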
251,057 | I have seen people connect two computers with an Ethernet cable, but the instructions I've seen were for Windows to Windows, Mac to Mac, or Windows to Mac. I never came across any for connecting Windows to Linux. Is it possible to connect a Windows system to a Linux system via Ethernet cable? | Yes; I have done this before, with Ubuntu-based distros connected to Windows Vista, and it should still work with Windows 10. This is called a direct Ethernet connection. There are a few steps to this: Windows, part 1: check the current IP. For example: Start, cmd to open a terminal, run ipconfig, and write down the current IP(s) to compare later. Both machines: attach the Ethernet cable to both machines so they are now physically connected to each other. Windows, part 2: get the new IP. Start, cmd to open a command prompt, run ipconfig; comparing with your previously copied IPs, see which new IP appears, and copy it down. For example it may resemble 169.254.123.101. Ubuntu: get to the network manager, for example by clicking the status bar network icon, then Edit Connections; choose the Wired type and create a new wired connection, naming it something you'll recognize such as direct-ether. Under IPv4, use these settings: Method: Manual (the default Automatic (DHCP) does not let you set an IP); address: 169.254.123.105. The point is to use the same IP except for the last segment, so both machines are on the same subnet: if one is a.b.c.101 then the other should be a.b.c.105, for example. Netmask: 255.255.0.0. Gateway: leave blank. At this point, on Lubuntu for example, there is some weirdness where the address values seem to "disappear" as you type them. Just keep typing, and when you Save, the values appear. Save. Now choose your new direct-ether network, for example by clicking it in the status bar. Test: you should now have, for example, Windows: 169.254.123.101 and Ubuntu: 169.254.123.105. Test the connectivity, for example using software that you can access by IP. On Windows I had Xampp Portable running, which runs an Apache web server. So to test whether Ubuntu could see that web server, I simply opened a browser to http://169.254.123.101 (the Windows machine's IP in this example) and could see Xampp Portable's default page, thus confirming the connectivity. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/251057",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/144567/"
]
} |
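On the Linux side, the same static addressing can be done from a shell instead of the network-manager GUI. A sketch, assuming the wired interface is eth0 and using the example addresses from the answer:

```bash
# Assign a link-local address on the same /16 as the Windows machine.
sudo ip addr add 169.254.123.105/16 dev eth0
sudo ip link set eth0 up
ping -c 3 169.254.123.101   # test reachability of the Windows side
```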
251,090 | An existing directory is needed as a mount point:
$ ls
$ sudo mount /dev/sdb2 ./datadisk
mount: mount point ./datadisk does not exist
$ mkdir datadisk
$ sudo mount /dev/sdb2 ./datadisk
$
I find it confusing since it overlays existing contents of the directory. There are two possible contents of the mount point directory which may get switched unexpectedly (for a user who is not performing the mount). Why doesn't mount happen into a newly created directory? This is the way graphical operating systems display removable media. It would be clear whether the directory is mounted (exists) or not mounted (does not exist). I am pretty sure there is a good reason but I haven't been able to discover it yet. | This is a case of an implementation detail that has leaked. In a UNIX system, every directory consists of a list of names mapped to inode numbers. An inode holds metadata which tells the system whether it is a file, directory, special device, named pipe, etc. If it is a file or directory it also tells the system where to find the file or directory contents on disk. Most inodes are files or directories. The -i option to ls will list inode numbers. Mounting a filesystem takes a directory inode and sets a flag on the kernel's in-memory copy to say "actually, when looking for the contents of this directory look at this other filesystem instead" (see slide 10 of this presentation). This is relatively easy as it's changing a single data item. Why doesn't it create a directory entry for you pointing at the new inode instead? There are two ways you could implement that, both of which have disadvantages. One is to physically write a new directory into the filesystem, but that fails if the filesystem is read-only! The other is to add to every directory listing process a list of "extra" things that aren't really there. This is fiddly and potentially incurs a small performance hit on every file operation. If you want dynamically-created mount points, the automount system can do this. Special non-disk filesystems can also create directories at will, e.g. proc, sys, devfs and so on. Edit: see also the answer to What happens when you 'mount over' an existing folder with contents? | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/251090",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54675/"
]
} |
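The single-flag explanation is easy to observe: the inode number reported through the mounted path changes to the mounted filesystem's root inode and changes back on unmount. A sketch (device and mount point assumed):

```bash
ls -id /mnt/usb               # inode of the placeholder directory
sudo mount /dev/sdb2 /mnt/usb
ls -id /mnt/usb               # now the root inode of sdb2 (often 2 on ext filesystems)
sudo umount /mnt/usb          # the original inode, and its contents, return
```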
251,103 | The general rule in shell scripting is that variables should always be quoted unless there is a compelling reason not to. For more details than you probably want to know, have a look at this great Q&A: Security implications of forgetting to quote a variable in bash/POSIX shells. Consider, however, a function like the following: run_this(){ $@; } Should $@ be quoted there or not? I played with it for a bit and couldn't find any case where the lack of quotes caused a problem. On the other hand, using the quotes makes it break when passing a command containing spaces as a quoted variable:
#!/usr/bin/sh
set -x
run_this(){ $@; }
run_that(){ "$@"; }
comm="ls -l"
run_this "$comm"
run_that "$comm"
Running the script above returns:
$ a.sh
+ comm='ls -l'
+ run_this 'ls -l'
+ ls -l
total 8
-rw-r--r-- 1 terdon users 0 Dec 22 12:58 da
-rw-r--r-- 1 terdon users 45 Dec 22 13:33 file
-rw-r--r-- 1 terdon users 43 Dec 22 12:38 file~
+ run_that 'ls -l'
+ 'ls -l'
/home/terdon/scripts/a.sh: line 7: ls -l: command not found
I can get around that if I use run_that $comm instead of run_that "$comm", but since the run_this (unquoted) function works with both, it seems like the safer bet. So, in the specific case of using $@ in a function whose job is to execute $@ as a command, should $@ be quoted? Please explain why it should/shouldn't be quoted and give an example of data that can break it. | The problem lies in how the command is passed to the function:
$ run_this ls -l Untitled\ Document.pdf
ls: cannot access Untitled: No such file or directory
ls: cannot access Document.pdf: No such file or directory
$ run_that ls -l Untitled\ Document.pdf
-rw------- 1 muru muru 33879 Dec 20 11:09 Untitled Document.pdf
"$@" should be used in the general case where your run_this function is prefixed to a normally written command. run_this leads to quoting hell:
$ run_this 'ls -l Untitled\ Document.pdf'
ls: cannot access Untitled\: No such file or directory
ls: cannot access Document.pdf: No such file or directory
$ run_this 'ls -l "Untitled\ Document.pdf"'
ls: cannot access "Untitled\: No such file or directory
ls: cannot access Document.pdf": No such file or directory
$ run_this 'ls -l Untitled Document.pdf'
ls: cannot access Untitled: No such file or directory
ls: cannot access Document.pdf: No such file or directory
$ run_this 'ls -l' 'Untitled Document.pdf'
ls: cannot access Untitled: No such file or directory
ls: cannot access Document.pdf: No such file or directory
I'm not sure how I should pass a filename with spaces to run_this. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/251103",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22222/"
]
} |
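The difference is easy to demonstrate with a helper that prints one argument per line (a minimal sketch):

```bash
show_args() { printf '<%s>\n' "$@"; }
run_this()  { $@; }     # unquoted: word splitting and globbing apply
run_that()  { "$@"; }   # quoted: arguments pass through unchanged

run_this show_args "a b" c   # prints <a> <b> <c>  ("a b" was split)
run_that show_args "a b" c   # prints <a b> <c>    (boundaries preserved)
```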
251,106 | I want to set up daily logrotate for my Tomcat server's catalina.out log file but it's not working: I haven't seen the rotated log files created. To troubleshoot, I ran logrotate -d /etc/logrotate.conf and got the following:
rotating pattern: /usr/local/tomcat/logs/catalina.out 5242880 bytes (7 rotations)
empty log files are rotated, old logs are removed
considering log /usr/local/tomcat/logs/catalina.out
  log needs rotating
rotating log /usr/local/tomcat/logs/catalina.out, log->rotateCount is 7
dateext suffix '-20151223'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
glob finding old rotated logs failed
copying /usr/local/tomcat/logs/catalina.out to /usr/local/tomcat/logs/catalina.out-20151223
truncating /usr/local/tomcat/logs/catalina.out
compressing log with: /bin/gzip
It seems like everything is working without any error. However, there are no results:
[root@gec logrotate.d]# ls -lrth /usr/local/tomcat/logs/cata*
-rw-r--r-- 1 root root 398 Dec 4 17:48 /usr/local/tomcat/logs/catalina.2015-12-04.log
-rw-r--r-- 1 root root 109M Dec 23 17:21 /usr/local/tomcat/logs/catalina.out
My /etc/logrotate.conf:
daily
rotate 7
# create new (empty) log files after rotating old ones
create
# use date as a suffix of the rotated file
dateext
# uncomment this if you want your log files compressed
#compress
# RPM packages drop log rotation information into this directory
include /etc/logrotate.d
My /etc/logrotate.d/tomcat:
/usr/local/tomcat/logs/catalina.out {
  copytruncate
  daily
  rotate 7
  compress
  missingok
  size 5M
}
What is wrong? Updates: interestingly, running logrotate -f /etc/logrotate.conf creates the rotation gzip files!
[root@gec logrotate.d]# ls -lrth /usr/local/tomcat/logs/cata*
-rw-r--r-- 1 root root 398 Dec 4 17:48 /usr/local/tomcat/logs/catalina.2015-12-04.log
-rw-r--r-- 1 root root 1.1M Dec 23 17:26 /usr/local/tomcat/logs/catalina.out-20151223.gz
-rw-r--r-- 1 root root 109K Dec 23 17:27 /usr/local/tomcat/logs/catalina.out
However, how do I know whether the daily cron job will work? | You are running logrotate -d /etc/logrotate.conf with the -d argument. The -d argument is debug mode, a kind of "dry run": it tells you whether logrotate would work, but it will not actually rotate the logs. logrotate -f worked because the -f argument tells logrotate to force the rotation. Quoting from the manual of logrotate: "-d, --debug Turns on debug mode and implies -v. In debug mode, no changes will be made to the logs or to the logrotate state file. -f, --force Tells logrotate to force the rotation, even if it doesn't think this is necessary. Sometimes this is useful after adding new entries to a logrotate config file, or if old log files have been removed by hand, as the new files will be created, and logging will continue correctly." Since logrotate -d /etc/logrotate.conf reported that the log would be rotated and compressed, it will surely rotate it when logrotate next goes through your configuration file. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/251106",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31359/"
]
} |
251,154 | I’m writing something that deals with file matches, and I need an inversion operation. I have a list of files (e.g. from find . -type f -print0 | sort -z >lst ), and a list of matches (e.g. from grep -z foo lst >matches – note that this is only an example; matches can be any arbitrary subset (including empty or full) of lst ), and now I want to invert this list. Background: I’m sorta implementing something like find(1) except on file lists (although the files do exist in the filesystem at the point of calling, the list may have been pre-filtered). If the list of files weren’t potentially so large, I could use find "${files[@]}" -maxdepth 0 -somecondition -print0 , but even moderate use of what I’m writing would go beyond the Linux or BSD argv size limit. If the lines were not NUL-separated, I could use comm -23 lst matches >inverted . If the matches were not NUL-separated, I could use grep -Fvxzf matches lst . But, from the generators I mentioned in the first paragraph, both are. Assume GNU tools are installed, so this needs not be portable beyond e.g. Debian, as I’m using find -print0 , sort -z and friends already (although some BSDs have it, so if it can be done more portably, I won’t complain). I’m trying to do code reuse here; plus, comm -23 is basically the perfect tool for this already except it doesn’t support changing the input line separator (yet), and comm is an underrated and not-enough-well-known tool anyway. If the Unix/Linux toolbox doesn’t offer anything sensible, I’m likely to reimplement a form of comm -23 (reduced to just this one use case) in shell, as the script already (for other reasons) requires a shell that happens to support read -d '' for NUL-delimited input, but that’s going to be slow (and effort… I posted this at the end of the workday in the hopes someone has got an idea for when I pick this up tomorrow or on the 28th). | If your comm supports non-text input (like GNU tools generally do), you can always swap NUL and nl (here with a shell supporting process substitution (have you got any plan for that in mksh btw?)): comm -23 <(tr '\0\n' '\n\0' < file1) <(tr '\0\n' '\n\0' < file2) | tr '\0\n' '\n\0' That's a common technique . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/251154",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43720/"
]
} |
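A note for later readers (an addition, not part of the original exchange): GNU coreutils 8.25 added a --zero-terminated (-z) option to comm itself, which makes the tr round-trip unnecessary:

```bash
# Requires GNU coreutils >= 8.25.
comm -z -23 lst matches > inverted
```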
251,159 | How can we concatenate results from stdout (or stderr) and a file into a final file. For example ls -a | grep text1 concatenate with file2.txt into a final result (not file2.txt ), without storing grep text1 to something intermediate such as grep text1 > file1.txt | ls -a | grep text1 | cat file2.txt - The - stands for standard input. Alternatively you may write ls -a | grep text1 | cat - file2.txt to have the output in different order. Yet another possibility using process substitution: cat <(ls -a | grep text1) file2.txt or in different order: cat file2.txt <(ls -a | grep text1) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/251159",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/144104/"
]
} |
251,163 | I'm installing "Red Hat Enterprise Linux 7.2 (Linux version 3.10.0-327.el7.x86_64 ([email protected]) (gcc version 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC) ) #1 SMP Thu Oct 29 17:29:29 EDT 2015)" I am trying to switch from LANG="en_US.UTF-8" to LANG="en_US" as we need to operate the OS in 8 bits ASCII mode. I have tried to change /etc/locale.conf and reboot. It doesn't work for gnome. For instance, when I try to launch a terminal session, I get this error: Dec 23 14:27:56 cmt22 gnome-session: Error constructing proxy for org.gnome.Terminal:/org/gnome/Terminal/Factory0: Error calling StartServiceByName for org.gnome.Terminal: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process /usr/libexec/gnome-terminal-server exited with status 8 Accordingly to gnome documentation , it says the locale is not defined but localectl list-locales shows it is defined. | ls -a | grep text1 | cat file2.txt - The - stands for standard input. Alternatively you may write ls -a | grep text1 | cat - file2.txt to have the output in different order. Yet another possibility using process substitution: cat <(ls -a | grep text1) file2.txt or in different order: cat file2.txt <(ls -a | grep text1) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/251163",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148636/"
]
} |
251,169 | So the command is: echo "abc 123" | sed "s/[0-9]*/h/g" and I'm getting the output hahbhch h. How am I getting this output? The output I expected is abc h, which I do get with this command: echo "abc 123" | sed "s/[0-9][0-9]*/h/g" Can someone explain this? | The * means zero-or-more matches, and it matches as soon as possible. If you run that command without the g flag (which means sed will stop after the first replacement), you will get as output habc 123 . This is because it starts reading from left to right, and because it couldn't match anything at the a , it simply matches the empty string at the beginning of the line and then stops there. Using the global ( g ) flag, it will keep trying to match the rest of the string, and because * matches the empty string when it can't match anything else, it will place an h every time it cannot match more numbers. Note that your second attempt is equivalent to sed "s/[0-9]\+/h/g" . Here + means one or more matches, meaning it won't match the empty string when it does not find a number to replace. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/251169",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148644/"
]
} |
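Side by side, with -E for extended regex so + needs no escaping (supported by GNU and BSD sed):

```bash
echo "abc 123" | sed 's/[0-9]*/h/g'      # hahbhch h  (empty matches before a, b, c and the space)
echo "abc 123" | sed -E 's/[0-9]+/h/g'   # abc h      (+ requires at least one digit)
```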
251,174 | Using Cygwin, I installed Environment Modules by downloading source code, running configure, make, and make install. Every time I run a module command, I get: init.c(718):WARN:165: Cannot set TCL variable '!::' I've traced this down to the fact that Cygwin has the following environment variable set: $ env | grep ::!::=::\ Does anyone know what this is, where it is set, why it might be necessary, or how to get rid of it? I might add that it's exceedingly difficult to Google, or even get to display correctly in Markdown. From the comments: $ unset '!::' -bash: unset: `!::': not a valid identifier | This is nothing to do with Unix or Linux. It's entirely Win32 and Cygwin. As first discussed in the Microsoft doco for Win32 and various Win32 programmers guides almost a quarter of a century ago, the Windows NT kernel doesn't have a notion of multiple drives each with their own individual working directories. This MS-DOS paradigm is emulated in Win32 using environment variables, not normally displayed by Win32 command interpreters' set commands (but fairly easily accessible programmatically), with names in the form = D : (where D is a drive letter). This pretense of multiple working directories, just like good old MS-DOS, is a shared fiction consulted and maintained by the Win32 API, Microsoft's command interpreter cmd , and the runtime libraries for various languages including some C and C++ compilers. When a Cygwin process starts up, it converts the Win32 environment block into a "more UNIX-y" form. It has a whole set of hardwired special conversion rules for various specific variables, such as PATH . It's not in the Cygwin doco, but it also likewise deals with the = D := D :\ path environment strings by converting the leading = into a ! . This yields environment strings, as Cygwin program execution sees them, of the form ! D := D :\ path . It reverses this conversion when it needs to generate a new Win32 environment for whatever reason, such as spawning a new process, turning the ! back into a = . To get Microsoft's command interpreter to display these environment variables, one simply runs set "" whereupon one will see output beginning something like =C:=C:\Users\Jim… Sometimes, an extra one of these environment variables crops up, with : as the drive letter. Running the same set command as above yields output beginning =::=::\=C:=C:\Users\Jim… After this has been made "more UNIX-y" by Cygwin, this is of course the very !::=::\ that you are seeing. Because these are a mechanism that is embedded within Win32 applications (within Microsoft's command interpreter most especially) and that is partly entangled in the Win32 API itself, it's not exactly trivial to prevent their existence. Further reading " CreateProcess() ". Microsoft Win32 Programmer's Reference: Functions, A–G . Microsoft Press. 1993. ISBN 9781556155178. p. 213. Jeffrey Richter (1995). Advanced Windows: The Developer's Guide to the Win32 API for Windows NT 3.5 and Windows 95 . Microsoft Press. ISBN 9781556156779. pp. 26–27. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/251174",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77505/"
]
} |
251,196 | I want to extract a value from a JSON file so that I can process it, and I tried grep '"USDEUR"' currs.json | cut -d ':' -f 2 but it returns 0.918695, (with a trailing comma). The JSON file looks like this: {"success":true,"terms":"https:\/\/currencylayer.com\/terms","privacy":"https:\/\/currencylayer.com\/privacy","timestamp":1449232988,"source":"USD","quotes":{ "USDEUR":0.918695, "USDGBP":0.660851, "USDPLN":3.95815}} So I want to know how to get rid of the comma so that I can process the value of USDEUR | Use a JSON parser to parse JSON, for example jq: $ jq '.quotes.USDEUR' file.json 0.918695 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/251196",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148060/"
]
} |
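A couple of related invocations for the follow-up processing the question mentions; -r emits the raw value without JSON quoting:

```bash
jq -r '.quotes.USDEUR' file.json                               # 0.918695
jq -r '.quotes | to_entries[] | "\(.key)=\(.value)"' file.json
# USDEUR=0.918695
# USDGBP=0.660851
# USDPLN=3.95815
```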
251,213 | I'm using bash (CentOS 5) and looking to generate this output (I assume I could use seq or echo together, maybe): 1|1,2|2,3|3,....31|31,32|32,33|33, I googled seq examples for over two hours and the closest I can come up with is: echo {1..31}..{1..31} | tr ' ' '\n' which almost gives me what I want, but messes up when I change the .. to | or even "|". A second number sequence I need is formatted the same way but for descending years, i.e.: 2015|2015,2014|2014,...1938|1938,1937|1937,1936|1936, I've already manually typed out these two lists, but I would love any input on how I could have done this from the command line for future needs and to learn ( seq or echo , I'm assuming). | The following should do it: seq 0 31 | awk '{ print $1"|"$1", " }' In the descending case: seq 31 -1 0 | awk '{ print $1"|"$1", " }' These use awk to duplicate the number on each line, separated by a pipe character. Or using pure bash (as suggested by DopeGhoti in a comment): for n in {0..31}; do printf "%d|%d,\n" $n $n; done for n in {31..0}; do printf "%d|%d,\n" $n $n; done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/251213",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81468/"
]
} |
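Since sed's & refers to the whole match, the same pairing can also be done without awk, e.g. for the descending-years case:

```bash
seq 2015 -1 1936 | sed 's/.*/&|&,/'
# 2015|2015,
# 2014|2014,
# ...
```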
251,276 | Now I am user "lawrence.li". I could see directory "lijunda" when I had the "read" privilege, but now I have no "read" privilege; why can I still see this directory? I am confused about the difference between "r" and "-" (no read privilege). Can anybody tell me why? Thank you very much. | Try ls -l /tmp/lijunda and all you will see is the names of the files within—you won't be able to open the files, or even see the file size, permissions, etc. about the files within that directory. This is because the directory itself only contains filenames and inode numbers—that's all. Read access to the filenames is controlled by the read permission. Access to the inodes pointed to by the directory is controlled by the execute permission—not the read permission. The inodes contain all the actual details about the file, such as filesize, owner, permissions, time last modified, and the physical location (on your physical hard disk) of the binary data which comprises the file's contents. To view the names of the files in the directory, you need read permission on the directory. You don't need execute or write permissions for this. To view the details of the files in the directory, i.e. to view the inode contents, you need execute permissions on the directory. Read permissions on the directory make no difference for viewing details of a file if you already know the file's name. To view the details of files that you don't already know the names of, you need read and execute permissions. And finally, to view the contents of a file, you need: read permissions on the file itself, execute permissions on the directory that contains the file*, and at least one of: read permissions on the directory containing the file OR the knowledge of the name of the file through some other means. See below for example.
$ whoami
vagrant
$ ls -l
total 12
drwxrwx--x 2 pete pete 4096 Dec 24 08:51 execute_only
drwxrwxr-x 2 pete pete 4096 Dec 24 08:52 read_and_execute
drwxrwxr-- 2 pete pete 4096 Dec 24 08:52 read_only
$ ls -l read_only/
ls: cannot access read_only/mysterious_file: Permission denied
total 0
-????????? ? ? ? ? ? mysterious_file
$ cat read_only/mysterious_file
cat: read_only/mysterious_file: Permission denied
$ ls -l execute_only/
ls: cannot open directory execute_only/: Permission denied
$ ls -l execute_only/unicorn_file
-rw-rw-r-- 1 pete pete 55 Dec 24 08:51 execute_only/unicorn_file
$ cat execute_only/unicorn_file
This file only exists for you if you know it's here ;)
$ ls -l read_and_execute/
total 4
-rw-rw-r-- 1 pete pete 83 Dec 24 08:52 jack_sparrow
$ cat read_and_execute/jack_sparrow
"After the reading, you will be executed."
"That's *Captain* Jack Sparrow to you!"
$
*You also need execute permissions on all the parent directories all the way up to root, by the way. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/251276",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148322/"
]
} |
251,291 | Why did the original Unix versions suddenly become open source/free? It seems odd that AT&T and Bell Labs would let something like an operating system become moldable and resellable with all of the funtionality that operating systems hold, especially with the small amount of them back then. I know that they allowed it to become open source, but I can't find out why they did. | From The Art of Unix Programming (emphasis added): After the [1974] paper, research labs and universities all over the world clamored for the chance to try out Unix themselves. Under a 1958 consent decree in settlement of an antitrust case, AT&T (the parent organization of Bell Labs) had been forbidden from entering the computer business. Unix could not, therefore, be turned into a product; indeed, under the terms of the consent decree, Bell Labs was required to license its nontelephone technology to anyone who asked. Ken Thompson quietly began answering requests by shipping out tapes and disk packs — each, according to legend, with a note signed “love, ken”. There is much more relevant information in that chapter; its title is "Origins and History of Unix, 1969-1995". Highly recommended reading (along with the rest of the book!) :) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/251291",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146386/"
]
} |
251,347 | So I came across this script to reverse an input line: #! /bin/bash input="${1}" reverse="" len=${#input} for(( i=${len}-1; i>=0; i-- )) do reverse="$reverse${input:${i}:1}" done echo "$reverse" Can someone explain what ${#input} and the for loop do? | Simplified, your script should be like this:
#!/bin/bash
input="${1}"
reverse=""
for (( i=0; i<${#input}; i++ ))
do
  reverse="${input:${i}:1}$reverse"
done
echo "$reverse"
Assuming you place the code above in a file named script.sh and that you allow it to be executed: chmod u+x script.sh . Then, this command will work:
$ ./script.sh 0123456789
9876543210
The value of ${#input} is the length of input (the count of characters). The loop goes character by character from start to end. To select each character, the script is using a bash tool called Substring Expansion. Quoting from the man bash (you could also access it by typing man bash ): ${parameter:offset:length} Substring Expansion. Expands to up to length characters of parameter starting at the character specified by offset. That means that each character at position i is selected in turn to re-create the string in the reverse variable. But you do not need any loop or fancy coding to do this. This simple line will do exactly the same:
$ echo "0123456789" | rev
9876543210 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/251347",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148644/"
]
} |
251,353 | I would like to write a Bash script that searches for PIDs matching a program's name (perhaps using ps ax | grep <PROGRAM> or something similar) and feeds them to pixelb's ps_mem script . ps_mem needs a list of PIDs separated by commas (no spaces) in order to evaluate RAM usage; unfortunately, the only way to search for processes by program name that I am aware of is ps ax | grep <PROGRAM> , which returns something like (taking the example of GitHub's Atom text editor):
7365 pts/2 S 0:00 /bin/bash /usr/bin/atom /home/fusion809/GitHub/fusion809.github.io
7367 pts/2 Sl 2:09 /usr/share/atom/atom --executed-from=/home/fusion809/GitHub/fusion809.github.io --pid=7354 /home/fusion809/GitHub/fusion809.github.io
7369 pts/2 S 0:00 /usr/share/atom/atom --type=zygote --no-sandbox
7404 pts/2 Sl 69:11 /usr/share/atom/atom --type=renderer --js-flags=--harmony --no-sandbox --lang=en-GB --node-integration=true --enable-delegated-renderer --num-raster-threads=2 --gpu-rasterization-msaa-sample-count=8 --content-image-texture-target=3553 --video-image-texture-target=3553 --disable-accelerated-video-decode --disable-webrtc-hw-encoding --disable-gpu-compositing --channel=7367.0.1287479693 --v8-natives-passed-by-fd --v8-snapshot-passed-by-fd
7469 pts/2 S 0:02 /usr/share/atom/atom --eval require('/usr/share/atom/resources/app.asar/src/compile-cache.js').setCacheDirectory('/home/fusion809/.atom/compile-cache'); require('/usr/share/atom/resources/app.asar/src/task-bootstrap.js');
10094 pts/2 Sl 0:31 /usr/share/atom/atom --type=renderer --js-flags=--harmony --no-sandbox --lang=en-GB --node-integration=true --enable-delegated-renderer --num-raster-threads=2 --gpu-rasterization-msaa-sample-count=8 --content-image-texture-target=3553 --video-image-texture-target=3553 --disable-accelerated-video-decode --disable-webrtc-hw-encoding --disable-gpu-compositing --channel=7367.1.769162379 --v8-natives-passed-by-fd --v8-snapshot-passed-by-fd
11799 pts/2 S 0:01 /usr/share/atom/atom --eval require('/usr/share/atom/resources/app.asar/src/compile-cache.js').setCacheDirectory('/home/fusion809/.atom/compile-cache'); require('/usr/share/atom/resources/app.asar/src/task-bootstrap.js');
18686 pts/2 Sl 0:02 /usr/share/atom/atom --eval require('/usr/share/atom/resources/app.asar/src/compile-cache.js').setCacheDirectory('/home/fusion809/.atom/compile-cache'); require('/usr/share/atom/resources/app.asar/src/task-bootstrap.js');
31761 pts/6 S+ 0:00 grep --colour=auto atom
which, as you can see, is far from the syntax that ps_mem accepts. Is there a way to extract PIDs from this output in a Bash script, or is there a way to otherwise get the PIDs for a specified program in a Bash script in a format that is acceptable to ps_mem ? | The pidof command returns the PIDs for a given process name. As I understand it, you want a list of PIDs separated by commas, corresponding to the processes with a given name. pidof gets you that, but as a list of PIDs separated by spaces. With the help of the tr command you can translate the spaces in pidof's output into another character, in this case the comma. You could do it like this: pidof <process_name> | tr ' ' ',' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/251353",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27613/"
]
} |
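With procps-ng installed, pgrep can emit the comma-separated list directly, and ps_mem appears to accept it via its -p option (a sketch, not from the original answer; "atom" stands in for the target program):

```bash
# -d, sets pgrep's output delimiter; -f matches against the full command line.
sudo ps_mem -p "$(pgrep -d, -f atom)"
```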
251,360 | I have a list of words such as: string1 string2 string3 .... string12312 How do I convert these words into a delimited form whose output could be used as a JS array, i.e. "String1", "String2", ..., "String12312" — in other words, how do I add quotation marks and commas? I understand this can be done in the shell, but I guess any other solution would be okay as long as the result can be converted into an array. | Here's one way: sed 's/^\|$/"/g' file | paste -d, -s "string1","string2","string3","....","string12312" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/251360",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148755/"
]
} |
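An equivalent spelling that may be easier to remember: quote each whole line with & (the entire match), then join serially (GNU paste shown):

```bash
sed 's/.*/"&"/' file | paste -sd, -
# "string1","string2","string3",...,"string12312"
```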
251,388 | I have run into a problem trying to write a Bash script. When grep outputs, it returns (usually) many lines. I would like to prefix and suffix a string to each of these output lines. I would also like to note that I'm piping ls into grep , like: ls | grep | With sed: ls | grep txt | sed 's/.*/prefix&suffix/' | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/251388",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146386/"
]
} |
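The same in awk, if you prefer explicit string concatenation over a regex replacement:

```bash
ls | grep txt | awk '{print "prefix" $0 "suffix"}'
```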
251,405 | I recently got into nautilus scripts, and for the one I'm writing I'd need to extract a substring from a filename. My problem is that I found tons of methods to extract a substring based on the position of a character, and not any on how to find a given character in my string and extract a substring from or up to this character. cut -f1 -d "delimiter" works, but cut only accepts a 1-char delimiter. Maybe awk or expr ? EDIT: I'm writing in bash and for example I expect a file with the name [email protected] to be renamed to simply Any Series S01 E01 VOSTFR.avi | With POSIX shells, using pattern stripping parameter expansion operators (initially from the Korn shell):
string=whateverDELIMrestDELIMmore
before_first_DELIM=${string%%DELIM*}
before_last_DELIM=${string%DELIM*}
after_first_DELIM=${string#*DELIM}
after_last_DELIM=${string##*DELIM} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/251405",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148149/"
]
} |
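Applied to a concrete (hypothetical) file name, stripping at the first versus the last dot:

```bash
name='Any.Series.S01E01.VOSTFR.avi'
echo "${name%.*}"    # Any.Series.S01E01.VOSTFR  (up to the last '.')
echo "${name##*.}"   # avi                       (after the last '.')
echo "${name%%.*}"   # Any                       (up to the first '.')
```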
251,456 | I am trying to set up Arch Linux, and after getting the most basic stuff set up I installed and ran i3 with:
pacman -S i3 dmenu xorg xorg-xinit
startx
It finally started, but I can't exit it. After pressing $mod+shift+E and confirming, it gives me the error: i3-sensible-terminal could not find a terminal emulator. Please install one. I also get this error from $mod+Enter. I'm confused because I can't exit back to the actual terminal in order to install a terminal emulator. Why does exiting i3 try to run a terminal emulator instead of closing dmenu/xorg and returning me to the actual(?) terminal? Can anyone provide some insight? | The i3 environment isn't usable in the case described because there's no way to get to a shell. This is a graphical environment (X also known as X11) running in one of Linux's virtual consoles. To switch to a text environment and get a shell, use control alt together with a function-key for the number of the virtual console that you want to switch to. Most X environments with Linux run in virtual console 7, some may be in virtual console 1. So the quickest advice is to choose 2 through 6. When you do this, you will get a login prompt. This is expected. You can be logged into the same machine several times. Once logged in, you can run pacman to add whatever packages are needed, such as xterm . Further reading: Keyboard shortcuts (Arch wiki) 7. Console switching , The Keyboard and Console HOWTO 9.2.2. A Note About Virtual Consoles (Red Hat) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/251456",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148811/"
]
} |
251,469 | This should probably be obvious to me, but I've been stuck for some time. I'm trying to write a very simple bash loop to take a list of servers, retrieve some specific info from them, and save the output to a file based on that host address on the starting machine. The code currently looks like:
#!/bin/bash
SERVER_LIST=/path/to/hosts
while read REMOTE_SERVER
do
  {
    ssh user@$REMOTE_SERVER 'show_stat_from_shell_command'
  } > "$REMOTE_SERVER"
done < $SERVER_LIST
The result from the above produces only a single output file for the first host in my list and then exits. To head off some of the more obvious solutions, Ansible etc. are not an option due to this being a very restricted environment. For the same reason using a multi-shell or tmux is also not an option (I can only log into one system at a time from my host). So, if someone could tell me exactly how I'm messing this up it would be appreciated! | Replace ssh user@$REMOTE_SERVER 'show_stat_from_shell_command' with ssh user@$REMOTE_SERVER 'show_stat_from_shell_command' </dev/null to prevent ssh from reading from stdin ( $SERVER_LIST ) too. Or use ssh 's option -n . -n : Redirects stdin from /dev/null (actually, prevents reading from stdin). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/251469",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
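The corrected loop in full, using -n and quoting the expansions (a sketch of the fixed script):

```bash
#!/bin/bash
SERVER_LIST=/path/to/hosts

while read -r REMOTE_SERVER; do
    # -n keeps ssh from consuming the rest of $SERVER_LIST on stdin.
    ssh -n "user@$REMOTE_SERVER" 'show_stat_from_shell_command' > "$REMOTE_SERVER"
done < "$SERVER_LIST"
```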
251,481 | I was looking at:
$ glxinfo | grep OpenGL
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Haswell Mobile
OpenGL core profile version string: 3.3 (Core Profile) Mesa 11.0.7
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 11.0.7
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.0 Mesa 11.0.7
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.00
OpenGL ES profile extensions:
From the above, this bit — OpenGL renderer string: Mesa DRI Intel(R) Haswell Mobile — seems to say that it is all using software rendering. How do I turn on hardware rendering if I want to? | You probably use hardware rendering already; check this:
$ glxinfo | fgrep direct
direct rendering: Yes
"Direct rendering" above is explained by Wikipedia as: The Direct Rendering Infrastructure (DRI) is a framework for allowing direct access to graphics hardware under the X Window System in a safe, efficient way. The main use of DRI is to provide hardware acceleration for the Mesa implementation of OpenGL. As pointed out by @Ruslan, Mesa contains a software renderer to use as a fallback when no graphics hardware accelerator is available. It's called Gallium in the OpenGL renderer string . But your output shows that the Intel renderer is being used, not the software one. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/251481",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50490/"
]
} |
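To see what the software fallback looks like by contrast, Mesa honors an environment variable that forces it, which is a handy way to confirm that a normal session is hardware-accelerated:

```bash
LIBGL_ALWAYS_SOFTWARE=1 glxinfo | grep "OpenGL renderer"
# OpenGL renderer string: llvmpipe (...)   <- software; compare with the Intel renderer above
```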
251,494 | Is there a way to list out all the files within a directory tree in a single list, sorted by modification time on Linux? ls -Rlt lists out files recursively, but they are grouped under different folders in the output and as a result, the output isn't sorted as a whole. Only the contents of each directory are sorted by time. | Yes, you can do this with GNU find . If your file names don't contain newlines, you can do: find -printf '%T@ %p\n' | sort -gk1,1 Explanation: The -printf option of find can print all sorts of information. In this case, we are using: %Tk (file's last modification time in the format specified by k, which is the same as for %A), @ (seconds since Jan. 1, 1970, 00:00 GMT, with fractional part), and %p (file's name). So, %T@ %p\n will print the file's modification time in seconds since the epoch ( %T@ ), a space, and then the file's name ( %p ). These are then passed to sort which is told to sort numerically ( -g , general numeric) on the first field only ( -k1,1 ). Note that this will return all files and directories. To restrict it to regular files only (no directories, device files, links etc.) add -type f to your find command. To get human readable dates, you can process the output with GNU date : find -printf '%T@ %p\t\n' | sort -gk1,1 | perl -lne 's/([^ ]*)//;chomp($i=`date -d \@$1`); print "$i $_"' Here, the perl command replaces the first string of non-space characters (the date) with itself as processed by GNU date . The above will fail for file names that contain newlines. To deal with newlines, use: find -printf '%p\t%T@\0' | sort -zt$'\t' -nk2 | tr '\0' '\n' That's the same thing except that find will output a \0 instead of \n at the end of each file name. GNU sort can deal with null-separated output so it is still able to sort correctly. The final tr command translates the \0 back to \n . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/251494",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148838/"
]
} |
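If a lexically sortable, human-readable timestamp is enough, GNU find's %T+ directive (date and time separated by +) avoids the epoch post-processing:

```bash
find . -type f -printf '%T+ %p\n' | sort
```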
251,625 | I need to execute a script as soon as my Raspberry Pi gets connected to the Internet. However, I was wondering if there was a better way than just pinging Google every minute or so. My problem is that my Internet connection drops 1-2 times during the day, so I need a way to log such events. It's just the ADSL dropping during the day; I was looking for some way to log when it occurs, even when I don't notice it. I think I'll set up a script as suggested. | You can make a check on: cat /sys/class/net/wlan0/carrier where wlan0 is my Internet interface. You can use whatever interface you are using, such as eth0, eth1 or wlan0, for Internet connectivity. If the output of that command is 1 then you are connected, otherwise not. So you may write a script like this:
#!/bin/bash
# Test for network connection
for interface in $(ls /sys/class/net/ | grep -v lo)
do
  if [[ $(cat /sys/class/net/$interface/carrier) = 1 ]]; then echo "online"; fi
done
You can also use the command: # hwdetect --show-net This script also works well:
#!/bin/bash
WGET="/usr/bin/wget"
$WGET -q --tries=20 --timeout=10 http://www.google.com -O /tmp/google.idx &> /dev/null
if [ ! -s /tmp/google.idx ]
then
  echo "Not Connected..!"
else
  echo "Connected..!"
fi | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/251625",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124512/"
]
} |
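For the logging requirement specifically, a minimal poller that writes to syslog only on state changes might look like this (a sketch; the target host, tag, and interval are arbitrary choices, not from the answer):

```bash
#!/bin/bash
last=""
while :; do
    if ping -c1 -W2 8.8.8.8 >/dev/null 2>&1; then state=up; else state=down; fi
    if [ "$state" != "$last" ]; then
        logger -t connwatch "internet link is $state"   # timestamped entry in syslog
        last=$state
    fi
    sleep 60
done
```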
251,666 | So I was reading the book The Linux Command Line , and it says that commands are of four types: (1) executable programs (2) shell builtins (3) shell functions (shell scripts) (4) aliases Then it says that in order to identify the type of a command, you can use the type command. However, I noticed that the type command fails to distinguish between a shell function (shell script) and an executable command. For example: type cp (will output: cp is /bin/cp) and type bzexe (will output: bzexe is /bin/bzexe). However, we all know that cp is an executable program and bzexe is a shell script. So my question is now: what command can we use to differentiate between those two? I do know about the file command, and it works fine. Is that the only solution? | A shell script is an executable program. That's why type says that it is one. A shell script is as much an executable command as a perl script, a python script, a native ELF executable, a cross-architecture executable being executed by Qemu through Linux's binfmt_misc mechanism, etc. Any executable file is an executable command, it doesn't matter what interpreter it uses. As you can tell from my list of examples, the line between “a script” and “not a script” is fuzzy: any executable file that begins with a shebang is a script, but there are executable files that are neither native code nor scripts. When you execute a program, what language it's written in is irrelevant. So it wouldn't make sense for type to tell you about it. The job of type is only to tell you what type of command it is from the point of view of the shell. A shell script is not the same thing as a function. A function runs inside the shell, and can modify the shell's environment. A shell script is a separate program; this separate program may happen to be written in the same language as the program you're running right now, but that's just a coincidence. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/251666",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100193/"
]
} |
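Two quick ways to see both classifications at once: the shell's view (via bash's type -t) and the file's contents (via the file command the question already mentions):

```bash
type -t cp                   # file   (how the shell will run it)
file "$(command -v cp)"      # ELF executable ...
file "$(command -v bzexe)"   # ... shell script text executable
```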
251,670 | I'm trying to install arch-linux for the first time. everything was OK till I tried to install "grub" boot-loader to a USB drive. I am working by the WIKI ARCH LINUX guide. both of this commands worked with no errors: # mkdir -p /mnt/usb ; mount /dev/sdc1 /mnt/usb# grub-install --target=i386-pc --recheck --debug --boot-directory=/mnt/usb/boot /dev/sdc but the next command return an error: failed to get canonical path of 'airootfs' : # grub-mkconfig -o /mnt/usb/boot/grub/grub.cfg can any one assist? (tried to arch-chroot /mnt /bin/bash on this one the command is not found). | Try adding --root-directory=/mnt to the grub-install command. It seems to be undocumented, but I saw it mentioned on some forum, and it worked for me. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/251670",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148917/"
]
} |
251,674 | I'm trying to match a string against a regular expression inside an if statement in bash. Code below:
var='big'
if [[ $var =~ ^b\S+[a-z]$ ]]; then
  echo $var
else
  echo 'none'
fi
The match should be a string that starts with 'b', followed by one or more non-whitespace characters, and ending with a letter a-z. I can match the start and end of the string but the \S is not working to match the non-whitespace characters. Thanks in advance for the help. | On non-GNU systems, what follows explains why \S fails: The \S is part of PCRE (Perl Compatible Regular Expressions). It is not part of the BRE (Basic Regular Expressions) or the ERE (Extended Regular Expressions) used in shells. The bash operator =~ inside the double bracket test [[ uses ERE. The only characters with special meaning in ERE (as opposed to any normal character) are .[\()*+?{|^$ . There is no special S . You need to construct the regex from more basic elements: regex='^b[^[:space:]]+[a-z]$' where the bracket expression [^[:space:]] is the equivalent of the \S PCRE expression: "The default \s characters are now HT (9), LF (10), VT (11), FF (12), CR (13), and space (32)." The test would be:
var='big' regex='^b[^[:space:]]+[a-z]$'
[[ $var =~ $regex ]] && echo "$var" || echo 'none'
However, the code above will match bißß for example, as the range [a-z] will include other characters than abcdefghijklmnopqrstuvwxyz if the selected locale is (UNICODE). To avoid such issues, use:
var='bißß' regex='^b[^[:space:]]+[a-z]$'
( LC_ALL=C; [[ $var =~ $regex ]] && echo "$var" || echo 'none')
Please be aware that the code will match characters only in the list abcdefghijklmnopqrstuvwxyz in the last character position, but will still match many others in the middle: e.g. bég . Still, this use of LC_ALL=C will affect the other regex range: [[:space:]] will match spaces only of the C locale. To solve all the issues, we need to keep each regex separate:
reg1=[[:space:]] reg2='^b.*[a-z]$' out=none
if [[ $var =~ $reg1 ]] ; then
  out=none
elif ( LC_ALL=C; [[ $var =~ $reg2 ]] ); then
  out="$var"
fi
printf '%6.8s\t|' "$out"
Which reads as: if the input (var) has no spaces (in the present locale), then check that it starts with a b and ends in a-z (in the C locale). Note that both tests are done on the positive ranges (as opposed to a "not"-range). The reason is that negating a couple of characters opens up a lot more possible matches. UNICODE v8 has 120,737 characters already assigned. If a range negates 17 characters, then it is accepting 120,720 other possible characters, which may include many non-printable control characters. It would be a good idea to limit the character range that the middle characters could have (yes, those will not be spaces, but may be anything else). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/251674",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80155/"
]
} |
251,691 | Why is this not possible? pv ${dest_file} | gzip -1 ( pv is a progress bar.) The error:
gzip: compressed data not written to a terminal. Use -f to force compression.
For help, type: gzip -h
0 B 0:00:00 [ 0 B/s] [> ] 0%
This works: pv ${file_in} | tar -Jxf - -C /outdir | What you are trying to achieve is to see the progress bar of the compression process. But it is not possible using pv . It shows only transfer progress, which you can achieve by something like this (anyway, it is the first link in Google): pv input_file | gzip > compressed_file The progress bar will run fast, and then it will wait for the compression, which is not observable anymore using pv . But you can do it the other way round and watch the output stream; here, however, you will not be able to see the actual progress, because pv does not know the actual size of the compressed file: gzip <input_file | pv > compressed_file The best I found so far is the one from commandlinefu, even with rate limiting and compression of directories:
D=directory
tar pcf - $D | pv -s $(du -sb $D | awk '{print $1}') --rate-limit 500k | gzip > target.tar.gz | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/251691",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83275/"
]
} |
251,728 | I want to sort files according to the number in the filename. Here are the files:
$ ls *.f
0.f 13.f 1.f 22.f 4.f abc.f
The sorting result:
$ ls *.f | sort -t. -k1n
0.f
abc.f # note this file!
1.f
4.f
13.f
22.f
What I had expected was:
$ ls *.f | sort -t. -k1n
abc.f
0.f
1.f
4.f
13.f
22.f
Why was abc.f shown just after 0.f and before 1.f ? Is it because 0 is not treated as a number by sort ? I searched the web and didn't find any reference. | The reason is that when using numeric sort, strings without numbers are treated as zero. GNU sort gets the behavior right, but makes no comment as to why. The man page on illumos for SunOS sort does provide an explanation: -n Restricts the sort key to an initial numeric string, consisting of optional blank characters, optional minus sign, and zero or more digits with an optional radix character and thousands separators (as defined in the current locale), which is sorted by arithmetic value. An empty digit string is treated as zero. Leading zeros and signs on zeros do not affect ordering. This behavior is also specified in SUSv4 and POSIX.1-2008 ( http://pubs.opengroup.org/onlinepubs/9699919799/utilities/sort.html ), using the same verbiage as the illumos man page. GNU sort also has -g , "general numeric sort", which sorts by floating point numbers instead of integers, where empty digit strings are sorted before zero. I'm not sure if this is a side effect or intentional. However, -g comes with a warning since it is significantly slower than -n . If you're sorting a large dataset or doing anything that users are waiting on you should avoid -g . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/251728",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47723/"
]
} |
251,770 | I want rsync to copy everything from "copy_to_home" directory (which is on a NTFS-formatted hard drive) to the user's home directory. I don't want it to delete anything, but it should replace files on the receiving side if they're not the same as the files on the sending side. This is what the command looks like: rsync --modify-window=1 -hh --progress -v -r copy_to_home/ ~/ My problem is that whenever I execute this command, rsync always seems to replace every single file in ~ despite the files having not been changed. The --update option wouldn't do that, but it doesn't replace modified files on the receiving side. | If Quora Feans' answer does not help, you can add the option -i or --itemize-changes to get rsync to explain why it is updating the files. It prints a string formed of chars YXcstpoguax where c , for example, means the checksum differs. You will probably find codes p permissions differ or o owner differs. These are usually fixed by using the -a option to preserve such attributes. If you still see diffs, try using --size-only to only have files updated if their size differs (ignoring the timestamp). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/251770",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7749/"
]
} |
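A safe way to investigate this is a dry run with itemization, so rsync reports what it would do without writing anything (a sketch; paths as in the question, and the sample output line is illustrative):
$ rsync -rin --modify-window=1 copy_to_home/ ~/ | head
>f..t...... some/file.txt
A t in that string means the timestamps differ, which is the usual culprit here: plain -r does not preserve modification times, so every run sees every file as changed. Adding -t (or the full -a ) typically makes the repeated transfers stop.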
251,784 | When I run echo (ls) in bash it returns:
-bash: syntax error near unexpected token `ls'
I understand that I should escape the parentheses or quote them to get plain text output. But the result still does not make sense to me if parentheses mean running the command sequence in a subshell environment. My environment: bash 4.3 (installed by homebrew), OS X El Capitan | It is essentially a generic syntax error, not specifically related to the ls token. bash uses a yacc parser, which calls a common yyerror() on any problem. Within the resulting error handling, it proceeds to try to pinpoint the error. The message is coming from this chunk (see source ):
/* If the line of input we're reading is not null, try to find the
   objectionable token. First, try to figure out what token the
   parser's complaining about by looking at current_token. */
if (current_token != 0 && EOF_Reached == 0 && (msg = error_token_from_token (current_token)))
  {
    if (ansic_shouldquote (msg))
      {
        p = ansic_quote (msg, 0, NULL);
        free (msg);
        msg = p;
      }
    parser_error (line_number, _("syntax error near unexpected token `%s'"), msg);
    free (msg);
    if (interactive == 0)
      print_offending_line ();
    last_command_exit_value = parse_and_execute_level ? EX_BADSYNTAX : EX_BADUSAGE;
    return;
  }
In other words, it's already confused by the '(' , and having looked ahead for context, is reporting the ls . A ( would be legal at the beginning of a command, but not embedded. Per the manual page:
Compound Commands
A compound command is one of the following:
(list) list is executed in a subshell environment (see COMMAND EXECUTION ENVIRONMENT below). Variable assignments and builtin commands that affect the shell's environment do not remain in effect after the command completes. The return status is the exit status of list.
Further reading: 3.2.4.3 Grouping Commands (Bash Reference Manual) 3.5.4 Command Substitution (Bash Reference Manual) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/251784",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149036/"
]
} |
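For contrast, here is what each construct does when used where the grammar allows it (a quick demo; assume a directory with files a and b):
$ (ls)            # subshell: runs ls in a child shell
a  b
$ echo "$(ls)"    # command substitution: captures ls output as a string
a
b
It is only echo (ls) , with ( in the middle of a simple command, that the parser rejects.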
251,893 | For example, I have a variable:
env_name="GOPATH"
Now I want to get the environment variable GOPATH as if I did this:
echo $GOPATH
How can I get $GOPATH via $env_name ? | Different shells have different syntax for achieving this. In bash , you use variable indirection :
printf '%s\n' "${!env_name}"
In ksh , you use nameref aka typeset -n :
nameref env_name=GOPATH
printf '%s\n' "$env_name"
In zsh , you use the P parameter expansion flag :
print -rl -- ${(P)env_name}
In other shells, you must use eval , which has many security implications if you're not sure the variable content is safe:
eval "echo \"\$$env_name\"" | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/251893",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109888/"
]
} |
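If eval is the only option, a common precaution is to check that the variable really holds a legal variable name first (a defensive sketch in plain POSIX sh):
case $env_name in
  '' | [!A-Za-z_]* | *[!A-Za-z0-9_]* )
    echo "invalid variable name: $env_name" >&2; exit 1 ;;
esac
eval "printf '%s\n' \"\$$env_name\""
Rejecting anything that is not letters, digits, and underscores closes the obvious command-injection hole.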
251,902 | I've set up an encrypted home directory for user piranha3:
root@raspberrypi:~# ecryptfs-verify -u piranha3 -h
INFO: [/home/piranha3/.ecryptfs] exists
INFO: [/home/piranha3/.ecryptfs/Private.sig] exists
INFO: [/home/piranha3/.ecryptfs/Private.sig] contains [2] signatures
INFO: [/home/piranha3/.ecryptfs/Private.mnt] exists
INFO: [/home/piranha3] is a directory
INFO: [/home/piran3/.ecryptfs/auto-mount] Automount is set
INFO: Mount point [/home/piranha3] is the user's home
INFO: Ownership [piranha3] of mount point [/home/piranha3] is correct
INFO: Configuration valid
But after piranha3 logs out, the directory is not unmounted:
root@raspberrypi:~# mount | grep ecryptfs
/home/.ecryptfs/piranha3/.Private on /home/piranha3 type ecryptfs (rw,nosuid,nodev,relatime,ecryptfs_fnek_sig=729061d7fa17b3a4,ecryptfs_sig=eb5ec4d9c13e2d74,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_unlink_sigs)
lsof output:
lsof: WARNING: can't stat() cifs file system /media/cifs Output information may be incomplete.
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs Output information may be incomplete.
System Information:
root@raspberrypi:~# dpkg -l ecryptfs-utils
Deseado=desconocido(U)/Instalar/eliminaR/Purgar/retener(H)
| Estado=No/Inst/ficheros-Conf/desempaqUetado/medio-conF/medio-inst(H)/espera-disparo(W)/pendienTe-disparo
|/ Err?=(ninguno)/requiere-Reinst (Estado,Err: mayúsc.=malo)
||/ Nombre Versión Arquitectura Descripción
+++-========================-=================-=================-======================================================
ii ecryptfs-utils 103-5 armhf ecryptfs cryptographic filesystem (utilities)
root@raspberrypi:~# uname -a
Linux raspberrypi 4.1.13-v7+ #826 SMP PREEMPT Fri Nov 13 20:19:03 GMT 2015 armv7l GNU/Linux
And finally about PAM:
root@raspberrypi:~# grep -r ecryptfs /etc/pam.d
/etc/pam.d/common-session:session optional pam_ecryptfs.so unwrap
/etc/pam.d/common-password:password optional pam_ecryptfs.so
/etc/pam.d/common-auth:auth optional pam_ecryptfs.so unwrap
/etc/pam.d/common-session-noninteractive:session optional pam_ecryptfs.so unwrap
Why is the home directory not unmounted? | Different shells have different syntax for achieving this. In bash , you use variable indirection :
printf '%s\n' "${!env_name}"
In ksh , you use nameref aka typeset -n :
nameref env_name=GOPATH
printf '%s\n' "$env_name"
In zsh , you use the P parameter expansion flag :
print -rl -- ${(P)env_name}
In other shells, you must use eval , which has many security implications if you're not sure the variable content is safe:
eval "echo \"\$$env_name\"" | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/251902",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47954/"
]
} |
251,969 | Someone sent me a ZIP file containing files with Hebrew names (and created on Windows, not sure with which tool). I use LXDE on Debian Stretch. The Gnome archive manager manages to unzip the file, but the Hebrew characters are garbled. I think I'm getting UTF-8 octets extended into Unicode characters, e.g. I have a file whose name has four characters and a .doc suffix, and the characters are: 0x008E 0x0087 0x008E 0x0085 . Using the command-line unzip utility is even worse - it refuses to decompress altogether, complaining about an "Invalid or incomplete multibyte or wide character". So, my questions are: Is there another decompression utility that will decompress my files with the correct names? Is there something wrong with the way the file was compressed, or is it just an incompatibility of ZIP implementations? Or even a misfeature/bug of the Linux ZIP utilities? What can I do to get the correct filenames after having decompressed using the garbled ones? | It sounds like the filenames are encoded in one of Windows' proprietary codepages ( CP862 , 1255 , etc). Is there another decompression utility that will decompress my files with the correct names? I'm not aware of a zip utility that supports these code pages natively. 7z has some understanding of encodings, but I believe it has to be an encoding your system knows about more generally (you pick it by setting the LANG environment variable) and Windows codepages likely aren't among those. unzip -UU should work from the command line to create files with the correct bytes in their names (by disabling all Unicode support). That is probably the effect you got from GNOME's tool already. The encoding won't be right either way, but we can fix that below. Is there something wrong with the way the file was compressed, or is it just an incompatibility of ZIP implementations? Or even a misfeature/bug of the Linux ZIP utilities? The file you've been given was not created portably. That's not necessarily wrong for an internal use where the encoding is fixed and known in advance, although the format specification says that names are supposed to be either UTF-8 or cp437 and yours are neither. Even between Windows machines, using different codepages doesn't work out well, but non-Windows machines have no concept of those code pages to begin with. Most tools UTF-8 encode their filenames (which still isn't always enough to avoid problems). What can I do to get the correct filenames after having decompressed using the garbled ones? If you can identify the encoding of the filenames, you can convert the bytes in the existing names into UTF-8 and move the existing files to the right name. The convmv tool essentially wraps up that process into a single command:
convmv -f cp862 -t utf8 -r .
will try to convert everything inside . from cp862 to UTF-8. Alternatively, you can use iconv and find to move everything to their correct names. Something like:
find -mindepth 1 -exec sh -c 'mv "$1" "$(echo "$1" | iconv -f cp862 -t utf8)"' sh {} \;
will find all the files underneath the current directory and try to convert the names into UTF-8. In either case, you can experiment with different encodings and try to find one that makes sense. After you've fixed the encoding for you, if you want to send these files back in the other direction it's possible you'll have the same problem on the other end. In that case, you can reverse the process before zipping the files up with -UU , since it's likely to be very hard to fix on the Windows end. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/251969",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34868/"
]
} |
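When the original codepage is a guess, it can help to preview a few candidates on one filename before renaming anything (a sketch; encoding names vary a little between iconv implementations, see iconv -l for the full list):
$ name=$(ls | head -n 1)
$ for enc in CP862 CP1255 ISO-8859-8; do
>   printf '%s: ' "$enc"
>   printf '%s\n' "$name" | iconv -f "$enc" -t UTF-8
> done
Whichever line shows readable Hebrew is the encoding to hand to convmv.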
251,979 | I'm trying to install Dropbox on Debian with xfce and every time it ends with this error
bn.BUILD_KEY: Dropbox
bn.VERSION: 3.12.6
bn.DROPBOXEXT_VERSION: failed
bn.is_frozen: True
pid: 11257
ppid: 5898
ppid exe: '/bin/bash'
uid: 1000
user_info: pwd.struct_passwd(pw_name='honzik', pw_passwd='x', pw_uid=1000, pw_gid=1000, pw_gecos='Jan Schramhauser,,,', pw_dir='/home/honzik', pw_shell='/bin/bash')
effective_user_info: pwd.struct_passwd(pw_name='honzik', pw_passwd='x', pw_uid=1000, pw_gid=1000, pw_gecos='Jan Schramhauser,,,', pw_dir='/home/honzik', pw_shell='/bin/bash')
euid: 1000
gid: 1000
egid: 1000
group_info: grp.struct_group(gr_name='honzik', gr_passwd='x', gr_gid=1000, gr_mem=[])
effective_group_info: grp.struct_group(gr_name='honzik', gr_passwd='x', gr_gid=1000, gr_mem=[])
LD_LIBRARY_PATH: None
cwd: '/home/honzik/.dropbox-dist' real_path='/home/honzik/.dropbox-dist' mode=040755 uid=1000 gid=1000 parent mode=040755 uid=1000 gid=1000
HOME: u'/home/honzik'
appdata: u'/home/honzik/.dropbox/instance1' real_path=u'/home/honzik/.dropbox/instance1' mode=040700 uid=1000 gid=1000 parent mode=040700 uid=1000 gid=1000
dropbox_path: u'/home/honzik/Dropbox' real_path=u'/home/honzik/Dropbox' mode=040777 uid=1000 gid=1000 parent mode=040755 uid=1000 gid=1000
sys_executable: '/home/honzik/.dropbox-dist/dropbox-lnx.x86_64-3.12.6/dropbox' real_path='/home/honzik/.dropbox-dist/dropbox-lnx.x86_64-3.12.6/dropbox' mode=0100755 uid=1000 gid=1000 parent mode=040755 uid=1000 gid=1000
trace.__file__: '/home/honzik/.dropbox-dist/dropbox-lnx.x86_64-3.12.6/library.zip/dropbox/client/ui/common/boot_error.pyc' real_path='/home/honzik/.dropbox-dist/dropbox-lnx.x86_64-3.12.6/library.zip/dropbox/client/ui/common/boot_error.pyc' not found parent not found
tempdir: '/tmp' real_path='/tmp' mode=041777 uid=0 gid=0 parent mode=040755 uid=0 gid=0
Traceback (most recent call last):
  File "dropbox/client/main.py", line 4065, in main_startup
  File "dropbox/client/main.py", line 1980, in run
  File "ui/common/uikit.py", line 383, in create_ui_kit
  File "dropbox/client/ui/qt/__init__.py", line 49, in <module>
  File "dropbox/client/ui/qt/setup_wizard.py", line 29, in <module>
  File "dropbox/client/ui/qt/xui.py", line 24, in <module>
  File "PyQt5/QtWebKit.py", line 14, in <module>
ImportError: libxslt.so.1: cannot open shared object file: No such file or directory
Earlier I used GNOME and Dropbox worked flawlessly. Does somebody know what is missing? I don't understand this error. I did it according to the instructions on the Dropbox website. | The error message the OP posted shows libxslt.so.1 is missing, as in "libxslt.so.1: cannot open shared object file". Using packages.debian.org or a system where this library is present shows the name of the package:
$ dpkg -S /usr/lib/x86_64-linux-gnu/libxslt.so.1
libxslt1.1:amd64: /usr/lib/x86_64-linux-gnu/libxslt.so.1
Knowing the name of the package is libxslt1.1, the command to install it is:
sudo apt-get install libxslt1.1
After installing the XSLT library, it should then be enough to try installing Dropbox again. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/251979",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149176/"
]
} |
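To catch every missing library in one pass rather than one ImportError at a time, running ldd on the binary from the error report is handy (a sketch; the path is the sys_executable shown above, and the output line is what a missing library typically looks like):
$ ldd ~/.dropbox-dist/dropbox-lnx.x86_64-3.12.6/dropbox | grep 'not found'
        libxslt.so.1 => not found
Each such line can then be mapped to a package with dpkg -S as above, or with apt-file search libxslt.so.1 when the library is not installed anywhere yet.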
251,996 | From the bash manual, on the $? variable :
$? Expands to the exit status of the most recently executed foreground pipeline.
I wonder why bash updates the $? variable on pressing Ctrl-C or Ctrl-Z :
$ echo $?
0
$ ^C
$ echo $?
130
$ sleep 10
^Z
[1]+ Stopped sleep 10
$ echo $?
148 | When you press Ctrl+C on the command line, nothing exits, but the handler for SIGINT ( sigint_sighandler() ) sets the exit status to 130 (128 + 2, as DopeGhoti's answer explains) anyway:
if (interrupt_immediately)
  {
    interrupt_immediately = 0;
    last_command_exit_value = 128 + sig;
    throw_to_top_level ();
  }
And in throw_to_top_level() :
if (interrupt_state)
  {
    if (last_command_exit_value < 128)
      last_command_exit_value = 128 + SIGINT;
    print_newline = 1;
    DELINTERRUPT;
  }
When you press Ctrl+C to kill a foreground process, the shell observes that the process has died and also sets the exit status $? to 128 plus the signal number. When you press Ctrl+Z to suspend a foreground process, the shell observes that something has happened to the process: it hasn't died, but the information is reported through the same system call ( wait and friends). Here as well, the shell sets the exit status $? to 128 plus the signal number, which is 148 (SIGTSTP = 20). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/251996",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/903/"
]
} |
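The 128+signal convention also works in reverse: bash's kill -l translates a number back into a signal name, which makes those statuses self-explanatory (a quick demo):
$ sleep 10
^C
$ kill -l $(($? - 128))
INT
$ sleep 10
^Z
[1]+  Stopped                 sleep 10
$ kill -l $(($? - 128))
TSTP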
252,011 | Here is my script:
if [[ "$(echo "$2" | sed 's/.two//g')" == "load" ]] && [[ "$1" == "Decrypt" ]] || [[ "$(echo "$2" | sed 's/.two//g')" == "load" ]] && [[ "$1" == "Encrypt" ]]
then
    key=aNXlye1tGbd0uP
else
    if [ -z "$key" ]
    then
        key="$2"
    fi
fi
It's supposed to look at the second argument, remove a potential .two , and then compare it to load ; if it is load , it should set key to aNXlye1tGbd0uP . However, this doesn't work. This is what it looks like when I run it:
pskey Decrypt load (some string)
Here is the output from bash -x :
++ echo load
++ sed s/.two//g
+ [[ load == \l\o\a\d ]]
+ [[ Decrypt == \D\e\c\r\y\p\t ]]
+ [[ Decrypt == \E\n\c\r\y\p\t ]]
+ '[' -z '' ']'
+ key=load
However, if I remove what's after [[ "$1" == "Decrypt" ]] , it works. What is wrong with that line? | If I understand you correctly, you are looking for something like this:
if [[ "$(echo "$2" | sed 's/.two//g')" == "load" && "$1" == "Decrypt" ]] || [[ "$(echo "$2" | sed 's/.two//g')" == "load" && "$1" == "Encrypt" ]]
then
    ...
fi
Note that you could also simplify the whole thing to:
if [[ "$(echo "$2" | sed 's/.two//g')" == "load" && "$1" =~ (De|En)crypt ]]; then ... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/252011",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79979/"
]
} |
252,016 | The second field in the Linux /etc/shadow file represents a password. However, what we have seen is that:
Some of the password fields may have a single exclamation <account>:!:.....
Some of the password fields may have a double exclamation <account>:!!:.....
Some of the password fields may have an asterisk sign <account>:*:.....
From some research on the internet and through this thread , I can understand that * means the password was never established, and ! means locked. Can someone explain what a double exclamation ( !! ) means, and how it is different from ( ! )? | Both "!" and "!!" being present in the password field mean an account is locked. As can be read in the following document, "!!" in an account entry in shadow means the account of a user has been created, but not yet given a password. Until being given an initial password by a sysadmin, it is locked by default. https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/4/html/System_Administration_Guide/s2-redhat-config-users-process.html | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/252016",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95872/"
]
} |
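The markers are easy to reproduce on a scratch account (a demo; the exact useradd default differs by distribution, with Red Hat-style systems writing !! and Debian-based ones a single !):
# useradd demo
# grep '^demo:' /etc/shadow | cut -d: -f1-2
demo:!!
# passwd -S demo
demo LK 2015-12-31 0 99999 7 -1 (Password locked.)
Once a password exists, passwd -l prepends a ! to the hash to lock the account and passwd -u strips it again, which is where the single-! form usually comes from on a previously active account.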
252,051 | I have some problems with git-lfs and I think that upgrading to the latest git can fix this problems. Current version of git in Debian is 2.1.4 , current stable version on official site is 2.6.4 . Can I only build from source or maybe can I add some external repository? | As of December 2015, Debian stretch/sid has git version 2.6.4 . If you don't want to upgrade your entire distribution, you can look into apt pinning to bring in only git and any necessary dependencies from stretch/sid. However, many Debian folks will tell you this sort of thing is a bad idea , so building from source or waiting/asking for a backport are the only officially recommended approaches. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/252051",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34318/"
]
} |
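If you go the pinning route, the rough shape is to add a stretch line to /etc/apt/sources.list and then keep stretch at low priority in /etc/apt/preferences (a sketch only, not a tested drop-in config):
Package: *
Pin: release n=jessie
Pin-Priority: 900

Package: *
Pin: release n=stretch
Pin-Priority: 100
After apt-get update , installing with apt-get -t stretch install git pulls git (plus whatever dependencies it really needs) from stretch while everything else stays on jessie.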
252,158 | I have started downloading my ISOs etc. directly to my fileserver using wget . After I close the ssh session, how can I check back on the download process? Scenario: I start the download, then shut down my computer. The next day I ssh into the server and want to see if the download is still active, complete or has been interrupted. | If you run wget and close the terminal or terminate your ssh session, it will terminate the wget process too. You need to run wget and keep it running even after the session is closed. For that purpose there are many tools.
wget -bqc http://path-to-url/linux.iso
You will see a PID on screen:
Continuing in background, pid 12345.
Where,
-b : Go to background immediately after startup. If no output file is specified via -o , output is redirected to wget-log.
-q : Turn off Wget's output, i.e. quiet mode (which also saves disk space).
-c : Resume a broken download, i.e. continue getting a partially-downloaded file. This is useful when you want to finish up a download started by a previous instance of Wget, or by another program.
The nohup command
You can also use the nohup command to execute commands after you exit from a shell prompt. The syntax is:
$ nohup wget -qc http://path-to-url/linux.iso &
## exit from shell or close the terminal ##
$ exit
The disown bash command
Another option is to use the disown command as follows:
$ wget -qc http://path-to-url/linux.iso &
[1] 10685
$ disown %1
$ ps
  PID TTY          TIME CMD
10685 pts/0    00:00:00 wget
10687 pts/0    00:00:00 bash
10708 pts/0    00:00:00 ps
$ logout
The screen command
You can also use the screen command for this purpose. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/252158",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18463/"
]
} |
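On the follow-up question of actually checking back later: a wget started with -b logs to wget-log in the directory it was launched from, so after reconnecting you can do (a quick sketch):
$ tail -f wget-log        # watch progress, or see the final 'saved' line
$ pgrep -a wget           # is the process still alive at all?
If you used -q as above there is little to tail, but pgrep still answers the alive-or-done question.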
252,165 | I'm trying to list all the hidden files in a directory, but not other directories, and I am trying to do this using only ls and grep.
ls -a | egrep "^\."
This is what I have so far, but the problem is that it also lists hidden directories, when I don't want that. Then, completely separately, I want to list the hidden directories. | To list only hidden files :
ls -ap | grep -v / | grep "^\."
Note that "files" here means everything that is not a directory. It's not "file" as in "everything in Linux is a file" ;)
To list only hidden directories :
ls -ap | grep "^\..*/$"
Comments:
ls -ap lists everything in the current directory, including hidden ones, and puts a / at the end of directories.
grep -v / inverts the results of grep / , so that no directory is included.
"^\..*/$" matches everything that starts with . and ends in / .
If you want to exclude the . and .. directories from the results of the second part, you can use the -A option instead of -a for ls , or if you like to work with regexes, you can use "^\.[^.]+/$" instead of "^\..*/$" . Have fun! | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/252165",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149300/"
]
} |
252,166 | I want to use the emacs from the Applications folder when I'm using a Mac, but I'm using the same .zshrc on Ubuntu.
alias emacs='/Applications/Emacs.app/Contents/MacOS/bin/emacsclient'
So I want to create this alias only when I'm using OS X . How can I get an OS name in .zshrc ? | I also share my Zsh startup between multiple operating systems. You could use a case statement for those commands which are system-specific:
case `uname` in
  Darwin)
    # commands for OS X go here
    ;;
  Linux)
    # commands for Linux go here
    ;;
  FreeBSD)
    # commands for FreeBSD go here
    ;;
esac
Alternatively you can split off system-specific startup into files called (say) .zshrc-Darwin , .zshrc-Linux , etc., and then source the required one near the end of your .zshrc :
source "${ZDOTDIR:-${HOME}}/.zshrc-`uname`" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/252166",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44001/"
]
} |
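A small variant that avoids forking uname on every shell startup is zsh's built-in $OSTYPE parameter; its values carry version suffixes (darwin15.0, linux-gnu, ...), so match with patterns (a sketch):
case $OSTYPE in
  darwin*) alias emacs='/Applications/Emacs.app/Contents/MacOS/bin/emacsclient' ;;
  linux*)  ;; # Linux-specific commands go here
esac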
252,224 | I am using the Nix package manager on NixOS . Suppose I want to install a package that provides a file libgtk-x11-2.0.so.0 . How do I find a package that provides this file, similar to other GNU/Linux distributions ? Currently I have to google the file and figure out which package it might belong to, and then find the corresponding package in the Nix repository, but I would like a more idiomatic method. | nix-index is what you need. Install it and build the index:
nix-env -iA nixos.nix-index
nix-index
Locate libgtk-x11-2.0.so.0 :
nix-locate -w libgtk-x11-2.0.so.0
Output:
(zed.out) 0 s /nix/store/bc4mngklj2j7hmm21jra4641x4pm9r8z-node-webkit-env/lib/libgtk-x11-2.0.so.0
(thrust.out) 0 s /nix/store/wzg0k4i2cy0qsm3hwxlywxxbga019hbq-env-thrust/lib/libgtk-x11-2.0.so.0
(nwjs_0_12.out) 0 s /nix/store/js6klvzjfi5q4djmwb0bqzfb4x0vzm6g-nwjs-env/lib/libgtk-x11-2.0.so.0
(node_webkit_0_11.out) 0 s /nix/store/30vm6a7bmc56ckl575rqassw60ccxjpg-node-webkit-env/lib/libgtk-x11-2.0.so.0
(mumble_overlay.out) 0 s /nix/store/wayx023w1nslqg2z0c5v4n0b4jxn5n06-gtk+-2.24.31/lib/libgtk-x11-2.0.so.0
gnome2.gtk.out 0 s /nix/store/3iqchhncghm5s458lzy99c3prfymrnp2-gtk+-2.24.31/lib/libgtk-x11-2.0.so.0
The last line says that package gtk+-2.24.31 with attribute path gnome2.gtk contains this file. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/252224",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11397/"
]
} |
252,229 | I currently use this to display the current time in my bash prompt:
PS1=\[\e[0;32m\]\t \W>\[\e[1;37m\]
20:42:23 ~>
Is it possible to display the elapsed time since the previous prompt? Such as:
00:00:00 ~> sleep 10
00:00:10 ~> sleep 20
00:00:20 ~>
This has nothing in common with Is it possible to change the PS1 periodically by a script in the background? | One way to do it would be to use the PROMPT_COMMAND feature of bash to execute code that modifies PS1. The function below is an updated version of my original submission; this one uses two fewer environment variables and prefixes them with "_PS1_" to try to avoid clobbering existing variables.
prompt_command() {
  _PS1_now=$(date +%s)
  PS1=$( printf "\[\e[0;32m\]%02d:%02d:%02d \W>\[\e[1;37m\] " \
    $(( ( _PS1_now - _PS1_lastcmd ) / 3600)) \
    $(( (( _PS1_now - _PS1_lastcmd ) % 3600) / 60 )) \
    $(( ( _PS1_now - _PS1_lastcmd ) % 60)) \
  )
  _PS1_lastcmd=$_PS1_now
}
PROMPT_COMMAND='prompt_command'
_PS1_lastcmd=$(date +%s)
Put that into your .bash_profile to get things started up. Note that you have to type pretty quickly to get the sleep parameter to match the prompt parameter -- the time really is the difference between prompts, including the time it takes you to type the command.
00:00:02 ~> sleep 5 ## here I typed really quickly
00:00:05 ~> sleep 3 ## here I took about 2 seconds to enter the command
00:00:10 ~> sleep 30 ## more slow typing
00:01:35 ~>
Late addition: Based on @Cyrus' now-deleted answer, here is a version that does not clutter the environment with extra variables:
PROMPT_COMMAND='
  _prompt(){
    PROMPT_COMMAND="${PROMPT_COMMAND%-*}-$SECONDS))\""
    printf -v PS1 "\[\e[0;32m\]%02d:%02d:%02d \W>\[\e[1;37m\] " \
      "$(($1/3600))" "$((($1%3600)/60))" "$(($1%60))"
  }; _prompt "$((SECONDS'"-$SECONDS))\""
Extra late addition: Starting in bash version 4.2 ( echo $BASH_VERSION ), you can avoid the external date calls with a new printf format string; replace the $(date +%s) pieces with $(printf '%(%s)T' -1) . Starting in version 4.3 , you can omit the -1 parameter to rely on the "no argument means now " behavior. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/252229",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149340/"
]
} |
252,267 | I just installed MariaDB on Kubuntu 15.10. I am able to log in with the root user via the plugin that authenticates the user from the operating system. (This is new to me, so I am learning about it rather than removing the plugin authentication as most tutorials seem to recommend.) Now I want to create a non-root user and grant all privileges to that user and allow the user to log into mysql (on localhost) without a password (using just the plugin). How would I do this? Do I need to give the user a password too? | Found the answer. The part I needed was "IDENTIFIED VIA unix_socket" as shown below:
MariaDB [(none)]> CREATE USER serg IDENTIFIED VIA unix_socket;
MariaDB [(none)]> GRANT ALL PRIVILEGES on mydatabase.* to 'serg'@'localhost';
MariaDB [(none)]> select user, host, password, plugin from mysql.user;
+--------------+-----------+----------+-------------+
| user         | host      | password | plugin      |
+--------------+-----------+----------+-------------+
| root         | localhost |          | unix_socket |
| root         | mitra     |          | unix_socket |
| root         | 127.0.0.1 |          | unix_socket |
| root         | ::1       |          | unix_socket |
| serg         | localhost |          | unix_socket |
+--------------+-----------+----------+-------------+
5 rows in set (0.00 sec)
MariaDB [(none)]> FLUSH PRIVILEGES;
Then in the shell:
sudo service mysql restart
To log in using user 'serg', do not use sudo. Just use mysql -u serg . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/252267",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15010/"
]
} |
252,282 | I just recently installed gitg and whenever I tried to make a commit, I encountered an error about missing author details. However, I am unable to change the author details, as no editing window pops up after I click on the author details tab... | Found the answer. The part I needed was "IDENTIFIED VIA unix_socket" as shown below:
MariaDB [(none)]> CREATE USER serg IDENTIFIED VIA unix_socket;
MariaDB [(none)]> GRANT ALL PRIVILEGES on mydatabase.* to 'serg'@'localhost';
MariaDB [(none)]> select user, host, password, plugin from mysql.user;
+--------------+-----------+----------+-------------+
| user         | host      | password | plugin      |
+--------------+-----------+----------+-------------+
| root         | localhost |          | unix_socket |
| root         | mitra     |          | unix_socket |
| root         | 127.0.0.1 |          | unix_socket |
| root         | ::1       |          | unix_socket |
| serg         | localhost |          | unix_socket |
+--------------+-----------+----------+-------------+
5 rows in set (0.00 sec)
MariaDB [(none)]> FLUSH PRIVILEGES;
Then in the shell:
sudo service mysql restart
To log in using user 'serg', do not use sudo. Just use mysql -u serg . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/252282",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145794/"
]
} |
252,286 | Emoticons seem to be specified using a format of U+xxxxx wherein each x is a hexadecimal digit. For example, U+1F615 is the official Unicode Consortium code for the "confused face". As I am often confused, I have a strong affinity for this symbol. The U+1F615 representation is confusing to me because I thought the only encodings possible for unicode characters required 8, 16, 24 or 32 bits, whereas 5 hex digits require 5x4=20 bits. I've discovered that this symbol seems to be represented by a completely different hex string in bash:
$ echo -n 😕 | hexdump
0000000 f0 9f 98 95
0000004
$ echo -e "\xf0\x9f\x98\x95"
😕
$ PS1=$'\xf0\x9f\x98\x95 >'
😕 >
I would have expected U+1F615 to convert to something like \x00 \x01 \xF6 \x15 . I don't see the relationship between these 2 encodings. When I look up a symbol in the official Unicode Consortium list , I would like to be able to use that code directly without having to manually convert it in this tedious fashion, i.e. finding the symbol on some web page, copying it to the clipboard of the web browser, pasting it in bash to echo through a hexdump to discover the REAL code. Can I use this 20-bit code to determine what the 32-bit code is? Does a relationship exist between these 2 numbers? | UTF-8 is a variable length encoding of Unicode. It is designed to be a superset of ASCII. See Wikipedia for details of the encoding. \x00 \x01 \xF6 \x15 would be the UCS-4BE or UTF-32BE encoding. To get from the Unicode code point to the UTF-8 encoding, assuming the locale's charmap is UTF-8 (see the output of locale charmap ), it's just:
$ printf '\U1F615\n'
😕
$ echo -e '\U1F615'
😕
$ confused_face=$'\U1F615'
The latter will be in the next version of the POSIX standard . AFAIK, that syntax was introduced in 2000 by the stand-alone GNU printf utility (as opposed to the printf utility of the GNU shell), brought to the echo / printf / $'...' builtins first by zsh in 2003, ksh93 in 2004, and bash in 2010 (though not working properly there until 2014), but was obviously inspired by other languages. ksh93 also supports it as printf '\x1f615\n' and printf '\u{1f615}\n' . $'\uXXXX' and $'\UXXXXXXXX' are supported by zsh , bash , ksh93 , mksh and FreeBSD sh , GNU printf , GNU echo . Some require all the digits (as in \U0001F615 as opposed to \U1F615 ) though that's likely to change in future versions as POSIX will allow fewer digits. In any case, you need all the digits if the \UXXXXXXXX is to be followed by hexadecimal digits as in \U0001F615FOX , as \U1F615FOX would have been $'\U001F615F'OX . Some expand to the characters in the current locale's encoding at the time the string is parsed or at the time it is expanded, some only in UTF-8 regardless of the locale. If the character is not available in the current locale's encoding, the behaviour varies between shells. So, for best portability, best is to only use it in UTF-8 locales and use all the digits, and use it in $'...' :
printf '%s\n' $'\U0001F615'
Note that:
LC_ALL=C.UTF-8; printf '%s\n' $'\U0001F615'
or:
{
  LC_ALL=C.UTF-8 printf '%s\n' $'\U0001F615'
}
will not work with all shells (including bash ) because the $'\U0001F615' is parsed before LC_ALL is assigned. (Also note that there's no guarantee that a system will have a locale called C.UTF-8 .) You'd need:
LC_ALL=C.UTF-8; eval "confused_face=$'\U0001F615'"
or:
LC_ALL=C.UTF-8
printf '%s\n' $'\U0001F615'
(not within a compound command or function). For the reverse, to get from the UTF-8 encoding to the Unicode code-point, see this other question or that one .
$ unicode U+1F615
U+1F615 CONFUSED FACE
UTF-8: f0 9f 98 95  UTF-16BE: d83dde15  Decimal: &#128533;
😕
Category: So (Symbol, Other)
Bidi: ON (Other Neutrals)
$ perl -CA -le 'printf "%x\n", ord shift' 😕
1f615 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/252286",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72482/"
]
} |
252,314 | Usually, when applying package updates via yum update , rpm is 'intelligent' enough to respect my changes to configuration files under /etc . (It basically looks at the mtime, compares it and, depending on the outcome, replaces the file with the new version or just puts the new version beside it.) But with one of the last yum/yum-cron updates on CentOS 7, my custom yum-cron config files were replaced:
/etc/yum/yum-cron.conf
/etc/yum/yum-cron-hourly.conf
Now I am wondering why this happened exactly? I mean, the answer must be in the source package - but I can't find it there:
$ rpm -qi yum-cron | grep src
Source RPM : yum-3.4.3-132.el7.centos.0.1.src.rpm
$ yumdownloader --source yum-3.4.3-132.el7.centos.0.1
$ grep '%.*yum-cron.*\.conf' yum.spec
%config(noreplace) %{_sysconfdir}/yum/yum-cron.conf
%config(noreplace) %{_sysconfdir}/yum/yum-cron-hourly.conf
Looking at the spec file, in the yum-cron section, the config directive even has noreplace specified. On the other hand, the ownership of the config files seems to be shared among the yum and the yum-cron binary packages:
$ rpm -ql yum-cron | grep 'yum-cron.*\.conf'
/etc/yum/yum-cron-hourly.conf
/etc/yum/yum-cron.conf
$ rpm -ql yum | grep 'yum-cron.*\.conf'
/etc/yum/yum-cron-hourly.conf
/etc/yum/yum-cron.conf
How come? I mean, I only see the yum-cron config files mentioned in the cron-specific files section of the spec file ... See also the CentOS issue and the RHEL issue on this. | UTF-8 is a variable length encoding of Unicode. It is designed to be a superset of ASCII. See Wikipedia for details of the encoding. \x00 \x01 \xF6 \x15 would be the UCS-4BE or UTF-32BE encoding. To get from the Unicode code point to the UTF-8 encoding, assuming the locale's charmap is UTF-8 (see the output of locale charmap ), it's just:
$ printf '\U1F615\n'
$ echo -e '\U1F615'
$ confused_face=$'\U1F615'
The latter will be in the next version of the POSIX standard . AFAIK, that syntax was introduced in 2000 by the stand-alone GNU printf utility (as opposed to the printf utility of the GNU shell), brought to the echo / printf / $'...' builtins first by zsh in 2003, ksh93 in 2004, and bash in 2010 (though not working properly there until 2014), but was obviously inspired by other languages. ksh93 also supports it as printf '\x1f615\n' and printf '\u{1f615}\n' . $'\uXXXX' and $'\UXXXXXXXX' are supported by zsh , bash , ksh93 , mksh and FreeBSD sh , GNU printf , GNU echo . Some require all the digits (as in \U0001F615 as opposed to \U1F615 ) though that's likely to change in future versions as POSIX will allow fewer digits. In any case, you need all the digits if the \UXXXXXXXX is to be followed by hexadecimal digits as in \U0001F615FOX , as \U1F615FOX would have been $'\U001F615F'OX . Some expand to the characters in the current locale's encoding at the time the string is parsed or at the time it is expanded, some only in UTF-8 regardless of the locale. If the character is not available in the current locale's encoding, the behaviour varies between shells. So, for best portability, best is to only use it in UTF-8 locales and use all the digits, and use it in $'...' :
printf '%s\n' $'\U0001F615'
Note that:
LC_ALL=C.UTF-8; printf '%s\n' $'\U0001F615'
or:
{
  LC_ALL=C.UTF-8 printf '%s\n' $'\U0001F615'
}
will not work with all shells (including bash ) because the $'\U0001F615' is parsed before LC_ALL is assigned.
(Also note that there's no guarantee that a system will have a locale called C.UTF-8 .) You'd need:
LC_ALL=C.UTF-8; eval "confused_face=$'\U0001F615'"
or:
LC_ALL=C.UTF-8
printf '%s\n' $'\U0001F615'
(not within a compound command or function). For the reverse, to get from the UTF-8 encoding to the Unicode code-point, see this other question or that one .
$ unicode U+1F615
U+1F615 CONFUSED FACE
UTF-8: f0 9f 98 95  UTF-16BE: d83dde15  Decimal: &#128533;
😕
Category: So (Symbol, Other)
Bidi: ON (Other Neutrals)
$ perl -CA -le 'printf "%x\n", ord shift' 😕
1f615 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/252314",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1131/"
]
} |
252,342 | How do you iterate through a loop n times when n is specified by the user at the beginning? I have written a shell script and need to repeat a certain part of it n times (depending upon how many times the user wishes). My script so far looks like this:
echo "how many times would you like to print Hello World?"
read num
for i in {1.."$num"}
do
echo "Hello World"
done
If I change "num" to a number such as "5", the loop works; however, I need to be able to let the user specify the number of times to iterate through the loop. | You can use seq :
for i in $(seq 1 "$num")
or your shell may support C-style loops, e.g. in bash:
for ((i=0; i<$num; i++)) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/252342",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149444/"
]
} |
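Putting it together, a sketch of the corrected script with a little input validation (the brace form {1.."$num"} cannot work, because brace expansion happens before variable expansion):
#!/bin/bash
read -rp "how many times would you like to print Hello World? " num
case $num in
  '' | *[!0-9]* ) echo "please enter a whole number" >&2; exit 1 ;;
esac
for ((i = 0; i < num; i++)); do
  echo "Hello World"
done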
252,349 | I am familiar with the kill command , and most of the time we just use kill -9 to kill a process forcefully; there are many other signals that can be used with kill . But I wonder what the use cases of pkill and killall are, if there is already a kill command. Do pkill and killall use the kill command in their implementation? I mean, are they just wrappers over kill , or do they have their own implementation? I would also like to know how the pgrep command gets the process id from the process name. Do all these commands use the same underlying system calls? Is there any difference from a performance point of view? Which one is faster? | The kill command is a very simple wrapper to the kill system call , which knows only about process IDs (PIDs). pkill and killall are also wrappers to the kill system call (actually, to the libc library which directly invokes the system call), but can determine the PIDs for you, based on things like process name, owner of the process, session id, etc. How pkill and killall work can be seen using ltrace or strace on them. On Linux, they both read through the /proc filesystem, and for each pid (directory) found, traverse the path in a way to identify a process by its name or other attributes. How this is done is, technically speaking, kernel and system specific. In general, they read from /proc/<PID>/stat , which contains the command name as the 2nd field. pkill -f and pgrep -f examine the cmdline entry of each PID's proc directory instead. pkill and pgrep use the readproc library call, whereas killall does not. I couldn't say if there's a performance difference: you'll have to benchmark that on your own. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/252349",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148778/"
]
} |
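A quick feel for the name-matching differences (a demo; the PIDs are made up):
$ pgrep -l cron           # matches the short command name from /proc/<PID>/stat
615 cron
$ pgrep -af cron          # -f matches against the full /proc/<PID>/cmdline
615 /usr/sbin/cron -f
$ pkill -u someuser       # signal every process owned by someuser
Note that pkill with no signal argument sends SIGTERM, the same default as kill , so it is not inherently more forceful than a plain kill .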
252,350 | Passwordless SSH without a user directory? The folder .ssh should be stored in a user directory, as far as I understood.
ServerA: Linux without /home/users
ServerB: Linux with /home/users
client: Linux/mac etc...
Cases:
client password-less ssh to ServerB, no problem
client password-less ssh to ServerA, no problem
ServerA password-less ssh to ServerB, problem!
If there are no actual user directories on ServerA, how can the public key for each user exist without a user directory? Or are there other ways to safely ssh to ServerB from ServerA? | The kill command is a very simple wrapper to the kill system call , which knows only about process IDs (PIDs). pkill and killall are also wrappers to the kill system call (actually, to the libc library which directly invokes the system call), but can determine the PIDs for you, based on things like process name, owner of the process, session id, etc. How pkill and killall work can be seen using ltrace or strace on them. On Linux, they both read through the /proc filesystem, and for each pid (directory) found, traverse the path in a way to identify a process by its name or other attributes. How this is done is, technically speaking, kernel and system specific. In general, they read from /proc/<PID>/stat , which contains the command name as the 2nd field. pkill -f and pgrep -f examine the cmdline entry of each PID's proc directory instead. pkill and pgrep use the readproc library call, whereas killall does not. I couldn't say if there's a performance difference: you'll have to benchmark that on your own. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/252350",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148706/"
]
} |
252,368 | From my experience with modern programming and scripting languages, I believe most programmers are generally accustomed to referring to the first element of an array as index 0 (zero). I'm sure I've heard of languages other than zsh starting array indexing at 1 (one); that's okay, as it is equally convenient. However, as the previously released and widely used shell scripting languages ksh and bash both use 0, why would someone choose to alter this common convention? There do not seem to be any substantial advantages to using 1 as the first index; then, the only explanation I can think of regarding this somewhat "exclusive feature" of shells would be "they just did this to show off a bit more their cool shell". I don't know much of either zsh or its history, though, and there is a high chance my trivial theory about this does not make any sense. Is there an explanation for this? Or is it just out of personal taste? | Virtually all shell arrays (Bourne, csh, tcsh, fish, rc, es, yash) start at 1. ksh is the only exception that I know of (bash just copied ksh). Most interpreted languages at the time (early 90s): awk , tcl at least, and tools typically used from the shell ( cut -f1-3 , head -n 3 , sort -k1,3 , cal 1 2015 , comm -1 ) start at 1. sed , ed , vi number their lines from 1... zsh takes the best of the Bourne shell and csh. The Bourne shell array $@ starts at 1. zsh is consistent with its handling of $@ (like in Bourne) or $argv (like in csh). See how confusing it is in ksh where ${@:0:1} does not give you the first positional parameter, for instance. A shell is a user tool before being a programming language. It makes sense for most users to have the first element in $a[1] . It also means that the number of elements is the same as the last index (in zsh, like in most other shells except ksh, arrays are not sparse). a[1] for the first element is consistent with a[-1] for the last. So IMO the question should rather be: why did David Korn choose to make his arrays start at 0? About your: "However, as the previously released and widely used shell scripting languages ksh and bash both use 0" Note that while bash was released before zsh indeed (June 1989 compared to December 1990), array support was added in their respective 2.0 versions, but for zsh that was released in 1991, while for bash it was released much later, in 1996. The first Unix shell to introduce arrays (unless you want to consider the 1970s Thompson shell with its $1 .. $2 positional parameters) was csh in the late 70s, whose indexes start at one. And its code was freely available, while ksh was proprietary and often not included by default on Unices (sold separately at a hefty price) until the late 80s. While ksh93 code was released as open source circa 2000, ksh88's, to this day, never was (though it's not too difficult to find ksh86a and ksh88d source code on archive.org today if you're interested in archaeology). | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/252368",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93585/"
]
} |
252,430 | I have a directory structure like this:
Project/
|
+--Part1/
|  |
|  +--audio.mp3
|
+--Part2/
|  |
|  +--audio.mp3
|
+--Part3/
|  |
|  +--audio.mp3
...
I want to end up with files called Part1.mp3, Part2.mp3, etc. Each folder only contains a single file, so there is no risk of clobbering files or dealing with multiple files with the same name. I feel like I could do this with some sort of find / xargs command coupled with cut and mv , but I can't figure out how to actually form the command. | These examples work in any POSIX shell and require no external programs. This stores the Part*.mp3 files at the same level as the Project directory:
(cd Project && for i in Part*/audio.mp3; do echo mv "$i" ../"${i%/*}".mp3; done)
This keeps the Part*.mp3 files in the Project directory:
for i in Project/Part*/audio.mp3; do echo mv "$i" ./"${i%/*}".mp3; done
These solutions use the shell's pattern matching parameter expansion to produce the new filename. (The echo makes these dry runs; remove it to actually rename.)
${parameter%word} Remove Smallest Suffix Pattern. The word is expanded to produce a pattern. The parameter expansion then results in parameter, with the smallest portion of the suffix matched by the pattern deleted. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/252430",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65884/"
]
} |
252,503 | Background: I am running an SSH server and have this user that I want to delete. I cannot delete this user because he is currently running a few processes that I need to kill first. This is the pipeline I am currently using to find out all the process ids of the user:
ps -u user | awk '{print $1;}'
The output looks like this:
PID
2121
2122
2124
2125
2369
2370
I want to pipe this to kill -9 to kill all processes so I can delete this stupid user, like this:
ps -u user | awk '{print $1;}' | sudo xargs kill -9
But this does not work because of the PID header:
kill: failed to parse argument: 'PID'
The question: I am thinking that there has to be a simple Unix command to remove the first line of input. I am aware that I can use tail for this, but I don't want to count how many lines the input contains to figure out exactly how many I want to display. I am looking for something like head or tail but inverted (instead of displaying only the first/last part of the stream, it displays everything but the start/end of the stream). Note: I managed to solve the issue I had by simply adding | grep [[:digit:]] after my awk command, but I am still looking for a way to delete the first line of a file, as I think it would be quite useful in other scenarios. | NOTE: if your system already has pgrep / pkill then you are re-inventing the wheel here. If your system doesn't have these utilities, then you should be able to format the output of ps to get the unencumbered PID list directly, e.g.
ps -u user -opid=
If you are already using awk , there is no need to pipe through an additional process in order to remove the first line (record): simply add a condition on the record number NR :
ps -u user | awk 'NR>1{print $1;}'
Since you mention head and tail , the formula you probably want in this case is
tail -n +2 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/252503",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128859/"
]
} |
252,507 | Many years ago I had an issue with Linux where processes would randomly go to sleep. Back then, I knew a trick with the /proc filesystem to trigger a wakeup of the process. I vaguely remember being able to do something like echo "R" > /proc/pid/stat but that doesn't appear to be the right command. There are lots of hits on the internet for "how do I wake a sleeping process?" and so many of the answers are "oh, just kill it!" I know there's another way, but my memory is failing me now. So far I've tried:
kill -SIGCONT <pid>
echo "R" > /proc/<pid>/status
echo "R" > /proc/<pid>/stat |
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/252507",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15189/"
]
} |
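The two states are easy to tell apart with a throwaway process (a demo; the PID is made up):
$ sleep 100 &
$ ps -o pid,stat,comm -p $!
  PID STAT COMMAND
 4242 S    sleep        # S: interruptible sleep, waiting on the timer
$ kill -STOP $!
$ ps -o pid,stat,comm -p $!
  PID STAT COMMAND
 4242 T    sleep        # T: stopped; this is the only state CONT fixes
$ kill -CONT $!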
252,516 | I wrote a backup script on my Debian 8 system which uses the tar command with "--exclude-from" to exclude some files/dirs. This works great, but today I would like to exclude some files sharing the same path pattern, like /home/www-data/sites/<some_string>log.txt , or directories like /home/www-data/sites/<one_or_two_directories>/vendor . I tried to append /home/www-data/sites/*log.txt to the file, but tar fails and outputs the following errors on stderr:
tar: /home/www-data/sites/*log.txt: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors
Did I miss something when trying to use * or ** ? I then read that in Unix, programs usually do not interpret wildcards themselves, which means that neither * nor ** is expanded by tar. As far as I know, my last resort here is to expand the list using bash and append it to the exclusion file (if it's not already there) prior to the tar call. Is there a cleaner way? EDIT Here is the script snippet ..
# ...
broot=$(dirname "${PWD}")
i="${PWD}/list.include"
x="${PWD}/list.exclude"
o="$broot/archive.tgz"
tar -zpcf $o -T $i -X $x
# ...
Here is the exclusion file ..
/etc/php5/fpm
/etc/nginx
/etc/mysql
/home/me/websites/*log.txt
/home/me/websites/**/vendor
The goal is to exclude all log files located inside the "websites" directory and all "vendor" directories that could be found within any sub-directory of "websites". |
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/252516",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145762/"
]
} |
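The slash-matching difference is easy to see against a throwaway tree (a sketch; -v just lists what gets archived, and the listing order may vary):
$ mkdir -p websites/sub && touch websites/a-log.txt websites/sub/log.txt
$ tar -cvf /dev/null --exclude='websites/*log.txt' websites
websites/
websites/sub/
$ tar -cvf /dev/null --no-wildcards-match-slash --exclude='websites/*log.txt' websites
websites/
websites/sub/
websites/sub/log.txt
In the first run * crossed the / and excluded both log files; with --no-wildcards-match-slash the nested one survives, which is exactly the distinction the exclusion file in the question depends on.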
252,517 | Can nmap list all hosts on the local network that have both SSH and HTTP open? To do so, I can run something like:
nmap 192.168.1.1-254 -p22,80 --open
However, this lists hosts that have ANY of the listed ports open, whereas I would like hosts that have ALL of the ports open. In addition, the output is quite verbose:
# nmap 192.168.1.1-254 -p22,80 --open
Starting Nmap 6.47 ( http://nmap.org ) at 2015-12-31 10:14 EST
Nmap scan report for Wireless_Broadband_Router.home (192.168.1.1)
Host is up (0.0016s latency).
Not shown: 1 closed port
PORT   STATE SERVICE
80/tcp open  http
Nmap scan report for new-host-2.home (192.168.1.16)
Host is up (0.013s latency).
PORT   STATE SERVICE
22/tcp open  ssh
80/tcp open  http
Nmap done: 254 IP addresses (7 hosts up) scanned in 3.78 seconds
What I'm looking for is output simply like:
192.168.1.16
as the above host is the only one with ALL the ports open. I certainly can post-process the output, but I don't want to rely on the output format of nmap; I'd rather have nmap do it, if there is a way. |
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/252517",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137535/"
]
} |
252,530 | Note: This applies to CentOS 7. If you are looking for a Debian answer, see this question . Those answers will not be duplicated here. After an install of CentOS 7, I can't access man pages :
# man ls
-bash: man: command not found
I tried to install it via yum:
# yum install man-pages
... ok
But again:
# man ls
-bash: man: command not found
Why? |
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/252530",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80244/"
]
} |
252,561 | I use Ubuntu 15.10 and I'm very new to Linux. After reading in Wikipedia what a symbolic link is in general , and after executing a symlink creation command in the Ubuntu bash terminal, I would like to better understand the structure of a symlink I have worked with several times when creating (and "destroying") Ubuntu learning environments. There is a short set of commands I ran each time when installing the phpMyAdmin (PMA) service. Without running it, the service just didn't work. From the information I gathered, the following syntax creates a symlink that connects Apache to a certain PMA file that contains configuration directives. This is the syntax I ran each time:
cd /etc/apache2/conf-enabled/
sudo ln -s /etc/phpmyadmin/apache.conf phpmyadmin.conf
service apache2 restart
I want to better understand what is actually being done here, for example: Why is the cd navigation even needed? Couldn't we specify which files we want to work on from the root (computer) folder and that's it? Why is the -s after the ln ? I navigated to both directories in the ln command but I couldn't find phpmyadmin.conf in either of them. So, how can the system know where it is (assuming there is no system-wide search for it)? |
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/252561",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
252,562 | The command $ wget -r http://www.comp.brad.ac.uk/research/GIP/tutorials/index.html only downloads index.html and robots.txt for me, even though there are links in it to further pages in the same directory. For example <A HREF="viewp.html">Viewpoint specification</A> Why does wget ignore that? | I tested this, and found the issue: wget respects robots.txt unless explicitly told not to. wget -r http://www.comp.brad.ac.uk/research/GIP/tutorials/index.html--2015-12-31 12:29:52-- http://www.comp.brad.ac.uk/research/GIP/tutorials/index.htmlResolving www.comp.brad.ac.uk (www.comp.brad.ac.uk)... 143.53.133.30Connecting to www.comp.brad.ac.uk (www.comp.brad.ac.uk)|143.53.133.30|:80... connected.HTTP request sent, awaiting response... 200 OKLength: 878 [text/html]Saving to: ‘www.comp.brad.ac.uk/research/GIP/tutorials/index.html’www.comp.brad.ac.uk/research/GI 100%[======================================================>] 878 --.-KB/s in 0s 2015-12-31 12:29:53 (31.9 MB/s) - ‘www.comp.brad.ac.uk/research/GIP/tutorials/index.html’ saved [878/878]Loading robots.txt; please ignore errors.--2015-12-31 12:29:53-- http://www.comp.brad.ac.uk/robots.txtReusing existing connection to www.comp.brad.ac.uk:80.HTTP request sent, awaiting response... 200 OKLength: 26 [text/plain]Saving to: ‘www.comp.brad.ac.uk/robots.txt’www.comp.brad.ac.uk/robots.txt 100%[======================================================>] 26 --.-KB/s in 0s 2015-12-31 12:29:53 (1.02 MB/s) - ‘www.comp.brad.ac.uk/robots.txt’ saved [26/26]FINISHED --2015-12-31 12:29:53-- As you can see, wget did what it was asked by you, perfectly. What does the robots.txt say in this case? cat robots.txtUser-agent: *Disallow: / So this site doesn't want robots downloading stuff, at least not ones that are reading and following the robots.txt, usually this means they don't want to be indexed in search engines. wget -r -erobots=off http://www.comp.brad.ac.uk/research/GIP/tutorials/index.html Now, if wget is simply too powerful for you to learn, that's fine too, but don't make the error of thinking the flaw is in wget. There's a risk to doing recursive downloads of a site however, so it's sometimes best to use limits to avoid grabbing the entire site: wget -r -erobots=off -l2 -np http://www.comp.brad.ac.uk/research/GIP/tutorials/index.html -l2 means 2 levels max. -l means: level. -np means don't go UP in the tree, just in, from the start page. -np means: no parent. It just depends on the target page, sometimes you want to specify exactly what to get and not get, for example, in this case, you are only getting the default of .html/.htm extensions, not graphics, pdfs, music/video extensions. The -A option lets you add extension types to grab. By the way, I checked and my wget, version 1.17, is from 2015. Not sure what version you are using. Python by the way I think was also created in the 90s, so by your reasoning, python is also junk from the 90s. I admit the wget --help is quite intense and feature rich, as is the wget man page, so it's understandable why someone would want to not read it, but there are tons of online tutorials that tell you how do most common wget actions. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/252562",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149575/"
]
} |
252,590 | I wonder how to log GPU load. I use Nvidia graphics cards with CUDA. Not a duplicate: I want to log. | You can use (tested with nvidia-smi 352.63 ): while true; do nvidia-smi --query-gpu=utilization.gpu --format=csv >> gpu_utilization.log; sleep 1; done. The output will be (if 3 GPUs are attached to the machine): utilization.gpu [%]96 %97 %92 %utilization.gpu [%]97 %98 %93 %utilization.gpu [%]87 %96 %89 %utilization.gpu [%]93 %91 %93 %utilization.gpu [%]95 %95 %93 % Theoretically, you could simply use nvidia-smi --query-gpu=utilization.gpu --format=csv --loop=1 --filename=gpu_utilization.csv, but it doesn't seem to work for me. (The flag -f or --filename logs the output to a specified file.) To log more information: while true; do nvidia-smi --query-gpu=utilization.gpu,utilization.memory,memory.total,memory.free,memory.used --format=csv >> gpu_utilization.log; sleep 1; done, which outputs: utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]98 %, 15 %, 12287 MiB, 10840 MiB, 1447 MiB98 %, 16 %, 12287 MiB, 10872 MiB, 1415 MiB92 %, 5 %, 12287 MiB, 11919 MiB, 368 MiButilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]90 %, 2 %, 12287 MiB, 11502 MiB, 785 MiB92 %, 4 %, 12287 MiB, 11180 MiB, 1107 MiB92 %, 6 %, 12287 MiB, 11919 MiB, 368 MiButilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]97 %, 15 %, 12287 MiB, 11705 MiB, 582 MiB94 %, 7 %, 12287 MiB, 11540 MiB, 747 MiB93 %, 5 %, 12287 MiB, 11920 MiB, 367 MiB | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/252590",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16704/"
]
} |
252,593 | I would like to know how to delete a USB flash drive via the terminal if possible so data can't be recovered. | TL/DR: Make sure you get the right device name, ensure it's not mounted, and do as many random overwrites as you can afford. You can follow it by an erase command designed for flash hardware, if you are on a recent enough distribution. In these checks, always use the drive (like /dev/sdh ) and not the partition name (which would be /dev/sdh1 ) # dmesg|grep sdXX[3600.000001] sd 6:0:0:0: [sdXX] 125106176 512-byte logical blocks: (64.0 GB/59.6 GiB)# blkid|grep sdXX/dev/sdXX1: PARTUUID="88a03bb2-ced8-4bb2-9883-0a51b4d460a8"# df|grep /dev/sdXX# shred -vzn8 /dev/sdXXshred: /dev/sdXX: pass 1/9 (random)...shred: /dev/sdXX: pass 1/9 (random)...46MiB/3.8GiB 1%...shred: /dev/sdXX: pass 9/9 (000000)...3.8GiB/3.8GiB 100%# blkdiscard -s /dev/sdXXblkdiscard: /dev/sdXX: BLKSECDISCARD ioctl failed: Operation not supported# blkdiscard /dev/sdXXblkdiscard: /dev/sdXX: BLKDISCARD ioctl failed: Operation not supported# In theory, overwriting with zero with dd is just fine. However, due to how the internals of a flash drive are built, if you use a single overwrite pass, there may be several layers of data hidden behind the actual blocks that are still storing leftover information. Typically a part of flash storage is faulty, and is marked so during manufacturing. There are also other bits that can go wrong (becoming unchangeable, unsettable, or unclearable), and these parts must be marked faulty as well during the drive's lifetime. This information is stored in a reserved space, on the same chips as your data. This is one of the several reasons a 4GB thumb drive does not show 2^32 bytes of capacity. Flash storage is also internally organised in larger blocks, sometimes much larger than the blocks of the filesystems working on the drive. A typical filesystem block size is 4KB, and the flash segments that can be erased in one go may range from 64KB to even several megabytes. These large blocks can only be erased in whole, which resets all of the block to a known state (all 1s or all 0s). Afterwards a data write can alter any of the bits (change the default 1s into 0s where needed, or change the default 0s into 1s), but only once . To change any of the bits back into the default, all of the segment needs to be erased again! So, when you want to change a 4KB block (the filesystem is asked to change a single character in the middle of a file), the flash controller would need to read and buffer all 64KB of the old data, erase all of it, and write back the new contents. This would be very slow; erasing segments is the slowest operation. Also, a segment can only be erased a limited number of times (tens of thousands is typical), so if you make too many changes to a single file, that can quickly deteriorate the drive. But this is not how it's done. Intelligent flash controllers simply write the 4KB of new data elsewhere, and make a note to redirect reads to this 4KB of data in the middle of the old block. They need some more space, which we can't see, to store this information about redirects. They also try to make sure that they go through all the accessible segments to store data; this is called wear levelling . This means that typically old data is still on the drive somewhere! If you just cleared all accessible blocks, all the hidden blocks still keep a quite recent version of the data. Whether this is accessible to an attacker you want your data to be protected from is a different question.
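(Side note: one quick way to check whether a drive advertises TRIM/discard support at all is lsblk's discard report: # lsblk -D /dev/sdXX This prints DISC-GRAN and DISC-MAX columns; non-zero values there suggest the device accepts discard requests. Treat this as a rough check only: it needs a reasonably recent util-linux, /dev/sdXX is a placeholder device name, and an all-zero report does not always prove the hardware is incapable.)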
If you have a recent enough distribution, and the USB drive is programmed to reveal that it is a flash drive, blkdiscard can use the underlying TRIM operation, which is the segment erase that we talked about above. It also has an additional flag to make sure that even the invisible hidden data is fully erased by the hardware: # blkdiscard -s /dev/myusbdevice -s, --secure Perform a secure discard. A secure discard is the same as a regular discard except that all copies of the discarded blocks that were possibly created by garbage collection must also be erased. This requires support from the device. It won't necessarily work, as I demonstrated above. If you get Operation not supported , either your kernel, your utilities, or the USB gateway chip (which allows the flash controller to look like a drive via USB) does not support passing the TRIM command. (The flash controller must still be able to erase segments on its own). If it is supported by the vendor of your drive, this is the safest way. Another, less safe way to make sure you're allowing less of the old data to linger around somewhere is to overwrite it several times, with random values, if possible. Why random, you ask? Just imagine if the USB drive were made too intelligent, and detected that you wanted to clear a sector, and just made a change in a bitmap that this sector is now free, and will need clearing later. This means it can speed up writes of zeros, so it makes for a pendrive that appears more efficient, right? Whether your drive is doing it is hard to tell. At the most extreme, the drive could just remember how much from the start you have cleared, and all it needs to store is about 4 bytes of information to do this, and not clear anything from the data you want to disappear. All so that it could look very fast. If you are overwriting the data with random, unpredictable values, these optimizations are impossible. So the drive has to make sure the data ends up stored inside the flash chips. But you still won't be able to rule out that some of the previously used sectors are still there with some old data of yours, because the drive just didn't consider it important to erase them yet, since they're not accessible normally. Only the actual TRIM command can guarantee that. To automate overwriting with random values, you may want to look into using shred , like: # shred -vzn8 /dev/myusbdrive The options used: -v for making it show the progress -z to zero it as a final phase -n8 is to do 8 random passes of overwrites If possible, use both blkdiscard and shred , if blkdiscard -s is supported by your drive, it's the optimal solution, but it can't hurt to do a shred beforehand to rule out firmware mistakes. Oh, and always double-triple-check the device that you are trying to clear! dmesg can help to see what was the most recently inserted device, and it's also worth checking the device name you intend to clear with ls -al, even down to the device node numbers, and the blkid output to see what partitions may be available that you DON'T want to clear. Never use these commands on an internal drive that you want to keep using - blkdiscard will only work on solid state drives, but it's not worth risking your data! There may be other ways to clear data securely as technology progresses. One other way mentioned is the ATA SECURITY ERASE command that can be issued via hdparm commands. In my experience, it is not really supported on flash drives.
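(For completeness, on hardware that does implement ATA SECURITY ERASE, the usual hdparm sequence looks roughly like this hedged sketch; /dev/sdX and the password somepass are placeholders, and you should first confirm that hdparm -I reports the security feature set and that the drive is "not frozen": # hdparm -I /dev/sdX # hdparm --user-master u --security-set-pass somepass /dev/sdX # hdparm --user-master u --security-erase somepass /dev/sdX Getting a step wrong, or running it on the wrong device, can wipe or even brick a drive, so only try it on a device whose data you intend to destroy.)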
It was designed for enterprise hard drives, and the feature is not always implemented in the lowest-cost storage devices. The TRIM / DISCARD operation is much newer than the SECURITY ERASE command, and was created in response to the characteristics of flash, so it has a much higher chance of being implemented, even in cheap USB drives, but it's still not ubiquitous. If you want to erase an SD/micro SD card in a USB dongle, and blkdiscard reports it is not supported, you may want to try a different dongle/card reader, and/or do it in a machine with a direct SD/MMC slot. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/252593",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148018/"
]
} |
252,602 | My problem here is that the parameter $0 gives the same result as ${0##*/} , and that happens after converting the x-shellscript to x-executable using the SHC program! OS: Debiab-8.2-jessieSHC version: 3.8.7cmd used: shc -f script.bash The compiled script.x resides in a extra bin path (not known by sudo). Note I've created a hello world program to print the parametr $0, and it always gives me the basename! My scriptFile contains: #!/bin/bash((!EUID)) || exec sudo "$0"# shellcode ... When I execute it, I get this: sudo: scriptName: command not found After checking out I found that the parameter $0 is the same as ${0##*/} or $(basename $0) inside an x-executable! How do I deal with that without putting an absolute path inside the script? Or is there something I should know when I'm compiling shell to x-executable using SHC? | Why SHC? First of all, why are you using SHC, given your "hobbyist" motivations? Here is an excerpt from their own description: Upon execution, the compiled binary will decrypt and execute the code with the shell -c option. Unfortunatelly, it will not give you any speed improvement as a real C program would. The compiled binary will still be dependent on the shell specified in the first line of the shell code (i.e. #!/bin/sh), thus shc does not create completely independent binaries. SHC's main purpose is to protect your shell scripts from modification or inspection. My opinion (I'll keep it sort of brief): Even if your motivation is the stated "main purpose" of preventing modifications, a mildly determined person can still recover (and, hence, modify) the original script! SHC essentially provides security by obscurity , which is an oft-derided strategy when used as a primary means of security. If this doesn't sound helpful, then I'd recommend ditching SHC and simply using shell scripts as the vast majority of others do. If you need real security for your shell scripts, I'd suggest asking a specific question about that, without SHC or "compilers". The specific $0 problem I downloaded SHC 3.8.9 from this page just to try this out. I was not able to reproduce the problem, on Ubuntu 14.04 LTS. #!/bin/bashecho "Hello, world. My name is \`$0'" Test run $ ./shc -f my_test.bash$ ~/path/to/my_test.bash.xHello, world. My name is `~/path/to/my_test.bash.x' So, clearly, our systems differ. I can try to update this answer if you post more details about your operating system, version of SHC, and specific shell script and shc commandline you used to "compile" it. Why do you need the path? Why does your script need to know its path? Are you storing files with the script? Does it "default" to operate on the directory it was run in if the user doesn't specify? Whether these things are good ideas or not is a matter of opinion, but knowing your motivations might be helpful in narrowing down this answer. How to fetch the current directory The pwd command fetches the current directory. In many cases, you can assemble the full path of your script (assuming it was run with an explicit relative path) with something like: realname="`pwd`/$0" ... which would result in a value like: /path/to/script/./yourscript.bash.x The extra ./ just means "current directory", so while it might be cosmetically unfortunate, it will not negatively affect the result. 
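(A slightly more defensive sketch, hedged because it still does not cover every invocation style: only prepend pwd when $0 is relative, so that invocations with an absolute path are handled too: case $0 in /*) realname=$0 ;; *) realname="$(pwd)/$0" ;; esac )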
If the script is simply in your $PATH , then you would need to use which instead of pwd , but we're already getting beyond the scope of the original question, here, so I'll conclude with a simple mention that determining pathnames can be a tricky process, especially if you need to do so in a secure manner, so that would best be left for another question with a specific problem statement along these lines. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/252602",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134906/"
]
} |
252,603 | I just upgraded my MOBO and CPU on my PC, and now whenever I try and boot my debian install it hangs at [ OK ] Started Update UTMP about System Runlevel Changes. Is there an way I can boot into my old setup with my new hardware without having to completely reinstall? Specs: CPU = AMD Phenom II 965 3.4 Ghz x4 -> AMD FX 8350 4.0 Ghz x8 MOBO = ASUS M4A87TD EVO -> ASUS M5A99FX PRO | There is probably a problem with your video driver. I resolved it by following (modified) instructions at http://ubuntuforums.org/showthread.php?t=2072420 : Press Alt + F2 to switch to a new console sudo apt-get purge xserver-xorg-video-intel then reboot sudo apt-get install xserver-xorg-video-intel nano /etc/X11/xorg.conf , remove any present code (if applicable) and enter the following: Section "Device" Identifier "Card0" Driver "intel" Option "AccelMethod" "sna" EndSection Save the file and reboot | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/252603",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133093/"
]
} |
252,610 | I have a raspberry Pi and I want it to display on my monitor. My monitor only supports DVI and the Pi supports only HDMI. I was wondering if there is a way I can connect my raspberry pi, via hdmi, to my pc and have it display on its screen. | There is probably a problem with your video driver. I resolved it by following (modified) instructions at http://ubuntuforums.org/showthread.php?t=2072420 : Press Alt + F2 to switch to a new console sudo apt-get purge xserver-xorg-video-intel then reboot sudo apt-get install xserver-xorg-video-intel nano /etc/X11/xorg.conf , remove any present code (if applicable) and enter the following: Section "Device" Identifier "Card0" Driver "intel" Option "AccelMethod" "sna" EndSection Save the file and reboot | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/252610",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149590/"
]
} |
252,625 | I would like to know how to create an NTFS partition on /dev/sdx. I could not figure out if I should use partition type 7, 86 or 87. What is the full list of commands to use? | create a partition using fdisk fdisk /dev/sdx Commands: to create the partition: n, p, [enter], [enter] to give a type to the partition: t, 7 (don't select 86 or 87, those are for volume sets) if you want to make it bootable: a to see the changes: p to write the changes: w create an NTFS filesystem on /dev/sdx1: mkfs.ntfs -f /dev/sdx1 (the -f argument makes the command run fast, skipping both the bad block check and the zeroing of the storage) mount it wherever you want mount /dev/sdx1 /mnt/myNtfsDevice | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/252625",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148018/"
]
} |
252,631 | I always wanted to try Unix but I can't seem to find an ISO file or somewhere to buy it. Is Unix published for the whole wide world to use or is it a special OS only for high-class servers, mainframes and supercomputers? Can I try it out? Does it come in distros? This is like the problem I had with Linux: I was going around the Internet, wondering how to install Linux and looking for a Linux ISO when I did not know it came in distros. | Unix was originally a product, first developed in AT&T's Bell Labs. But today, the word “Unix”, except in historical context, means a family of operating systems, not a single product (similarly to “Linux” meaning a family of distributions, not a single product). This family has a somewhat complex history (see also Evolution of Operating systems from Unix ). It's difficult to say when this product ended, because the original code was licensed to a number of vendors, some of which still maintain their product. I believe the last product released by AT&T was Unix Time-Sharing System 10 in 1989. However by that time most Unix systems were actually modified versions of the AT&T code maintained by other companies such as Sun ( SunOS , later renamed Solaris ), Hewlett-Packard ( HP-UX ), IBM ( AIX ), etc. These three are still extant today, and they're directly derived from the AT&T code (although after over 25 years there probably isn't much of the AT&T era code remaining). In addition to unix systems that are derived from the AT&T code, there are systems that don't contain any AT&T code but have a similar design and compatible user and programmer interfaces. The main families of such unix systems are BSD ( FreeBSD , OpenBSD , NetBSD , Darwin / macOS etc.), Linux (many distributions) and MINIX . Just to add to the confusion, UNIXⓇ is a trademark (actually, a family of trademarks: there are several versions ) which does not designate a particular product: any product can use it as long as it passes a series of conformance tests . This unusual situation is a consequence of a long legal battle . Basically, a product can lay claim to one of the UNIX trademarks if it complies with the Single UNIX Specification , which describes user and programmer interfaces of the operating system (but not administration interfaces). If you want to run a product that came directly from AT&T, you can run Unix V5, V6 or V7 on a PDP-11 simulator (the PDP-11 was a popular series of minicomputers from the early 1970s to the early 1990s). If you want to run a product based on code from AT&T, you can run OpenIndiana , which is based on the now-discontinued open source edition of Solaris ( OpenSolaris ). OpenIndiana is free software and runs on a PC. (It might not support as much hardware as Linux does though, but it can run in e.g. VirtualBox.) I believe that you can also download Oracle's Solaris for free for personal use, and it too can run on a PC. As far as I know, it isn't possible to run AIX or HP-UX on easily-available hardware or emulators. If you want to run a product which has the UNIX brand, you can go through the official list . It includes several versions of Solaris (including PC versions), several versions of macOS, and a few uncommon Linux distributions. In a twist of fate, none of the historical Unix products have the UNIX trademark, because they're too old and fail to meet some of the newer requirements for the UNIX brand.
If you want to run a product in the unix family of operating systems, Linux is one (or rather Linux is a subfamily, and each distribution is a Unix-like operating system). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/252631",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/144567/"
]
} |
252,632 | It seems to me that there's a lot of confusion (or at least there is on my part) about how different file systems are structured among different distributions of Linux/Unix. It would stand to reason then that instead of having different types of packages for each system, it would be more useful to have environment variables that point to the different directories in individual file system structures. For example: If I wanted to know the location of the "program files" directory on a windows system, I could use the environment variable %ProgramFiles% or %ProgramFiles(x86)% . Is there any such facility on Linux or Unix systems? | Unix was originally a product, first developed in AT&T's Bell Labs. But today, the word “Unix”, except in historical context, means a family of operating systems, not a single product (similarly to “Linux” meaning a family of distributions, not a single product). This family has a somewhat complex history (see also Evolution of Operating systems from Unix ). It's difficult to say when this product ended, because the original code was licensed to a number of vendors, some of which still maintain their product. I believe the last product released by AT&T was Unix Time-Sharing System 10 in 1989. However by that time most Unix systems were actually modified versions of the AT&T code maintained by other companies such as Sun ( SunOS , later renamed Solaris ), Hewlett-Packard ( HP-UX ), IBM ( AIX ), etc. These three are still extant today, and they're directly derived from the AT&T code (although after over 25 years there probably isn't much of the AT&T era code remaining). In addition to unix systems that are derived from the AT&T code, there are systems that don't contain any AT&T code but have a similar design and compatible user and programmer interfaces. The main families of such unix systems are BSD ( FreeBSD , OpenBSD , NetBSD , Darwin / macOS etc.), Linux (many distributions) and MINIX . Just to add to the confusion, UNIXⓇ is a trademark (actually, a family of trademarks: there are several versions ) which does not designate a particular product: any product can use it as long as it passes a series of conformance tests . This unusual situation is a consequence of a long legal battle . Basically, a product can lay claim to one of the UNIX trademarks if it complies with the Single UNIX Specification , which describes user and programmer interfaces of the operating system (but not administration interfaces). If you want to run a product that came directly from AT&T, you can run Unix V5, V6 or V7 on a PDP-11 simulator (the PDP-11 was a popular series of minicomputers from the early 1970s to the early 1990s). If you want to run a product based on code from AT&T, you can run OpenIndiana , which is based on the now-discontinued open source edition of Solaris ( OpenSolaris ). OpenIndiana is free software and runs on a PC. (It might not support as much hardware as Linux does though, but it can run in e.g. VirtualBox.) I believe that you can also download Oracle's Solaris for free for personal use, and it too can run on a PC. As far as I know, it isn't possible to run AIX or HP-UX on easily-available hardware or emulators. If you want to run a product which has the UNIX brand, you can go through the official list . It includes several versions of Solaris (including PC versions), several versions of macOS, and a few uncommon Linux distributions.
In a twist of fate, none of the historical Unix products have the UNIX trademark, because they're too old and fail to meet some of the newer requirements for the UNIX brand. If you want to run a product in the unix family of operating systems, Linux is one (or rather Linux is a subfamily, and each distribution is a Unix-like operating system). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/252632",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146479/"
]
} |
252,671 | I'd like to try PHP7.0 on Debian Jessie and am trying to install it from sid. However, php7.0 depends on php7.0-common which depends on php-common > 18 while php-common in sid is at 17. Does this mean it's simply impossible to install php7.0 from this distribution at the moment? Why is that? I know that it is possible to install from source as explained e.g. here , I'm just asking about the official packages. Note : the packages in sid have been fixed and it is now (Jan 6, 2016) possible to install from there. | You have unofficial repos with new versions. If you are using Debian, one of the best-known repositories for the most up-to-date web server software, with i386 and amd64 packages, is dotdeb. " Dotdeb is an extra repository providing up-to-date packages for your Debian servers" They have had PHP 7 since the 3rd of December (of 2015), and had a pre-packaged beta since November. To add the dotdeb repository (from here ): Edit /etc/apt/sources.list and add deb http://packages.dotdeb.org jessie all Fetch the repository key and install it. wget https://www.dotdeb.org/dotdeb.gpgsudo apt-key add dotdeb.gpg Then do sudo apt-get update And lastly: sudo apt-get install php7.0 To search for php 7 related packages: apt-cache search php | grep ^php7 In Ubuntu you already have PPAs for it too. It seems Debian backports do not yet have PHP 7.0. Search here in the near future. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/252671",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
252,684 | The Docker service is clearly running: $ systemctl status docker.service ● docker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled) Active: active (running) since Mon 2015-12-28 19:20:50 GMT; 3 days ago Docs: https://docs.docker.com Main PID: 1015 (docker) CGroup: /system.slice/docker.service └─1015 /usr/bin/docker daemon -H fd:// --exec-opt native.cgroupdriver=cgroupfs$ ps wuf -u root | grep $(which docker)root 1015 0.0 0.3 477048 12432 ? Ssl 2015 2:26 /usr/bin/docker daemon -H fd:// --exec-opt native.cgroupdriver=cgroupfs However, Docker itself refuses to talk to it: $ docker infoCannot connect to the Docker daemon. Is the docker daemon running on this host? I am running the default Docker configuration , that is, I haven't changed any /etc files relating to this service. What could be the problem here? | You need to add yourself to the docker group and activate the group (by logging out and in again or running newgrp docker ) to run docker commands. The error message is simply misleading. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/252684",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3645/"
]
} |
252,690 | I recently bought this WLAN adapter on Amazon. When I attempt to switch to monitor mode: ifconfig wlan1 downiwconfig wlan1 mode monitorifconfig wlan1 up I receive the following error: Error for wireless request "Set Mode" (8B06) : SET failed on device wlan1 ; Invalid argument. This adapter is listed as follows in lsusb : ID 0bda:8172 Realtek Semiconductor Corp. RTL8191SU 802.11n WLAN Adapter and wlan1 is listed when typing iwconfig , which means its drivers/firmware are correctly installed. I use Kali-linux Sana (2.0) with kernel 4.0.0-kali1-amd64 , but this should apply to all (Debian based) Linux distributions. How can I switch this device to "monitor mode"? | According to this, https://wikidevi.com/wiki/R8712u your chipset does not support monitor mode. Not all combinations of hardware/software support wifi monitor mode. Like any other functionality implemented in silicon/firmware, be it listening to the media or sending and listening to packets, monitor mode usually has to be implemented by the manufacturer for it to work. Beware that, as far as I remember, some implementations only allow passive monitoring, while others allow monitoring and sending/manufacturing "fake" packets. Think of monitor mode as a special/yet another service supported by the (firmware running on) hardware. When shopping around, that thought has to be taken into account, specifically for more esoteric uses. So it is advisable beforehand to use Google, and talk with other people, to get an idea of what they are already using successfully for specific purposes. I would also direct you to another thread where I talk about my (bad) experience with cheap Realtek devices. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/252690",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149430/"
]
} |
252,714 | I want to run a program without any internet access, e.g. unshare -n ping 127.0.0.1 . As an unprivileged user, it returns Operation not permitted ; as a privileged user, it returns the desired Network is unreachable . Is there any way to make it work for the unprivileged user, as well? | In later versions of util-linux, unshare gained the --map-root-user option. Quoting from unshare(1) version 2.26.2: -r, --map-root-user Run the program only after the current effective user and group IDs have been mapped to the superuser UID and GID in the newly created user namespace. This makes it possible to conveniently gain capabilities needed to manage various aspects of the newly created namespaces (such as configuring interfaces in the network namespace or mounting filesystems in the mount namespace) even when run unprivileged. As a mere convenience feature, it does not support more sophisticated use cases, such as mapping multiple ranges of UIDs and GIDs. This option implies --setgroups=deny. So, on newer systems, you can run: unshare -n -r ping 127.0.0.1 And this will yield the expected Network is unreachable . On Debian systems you might still get an Operation not permitted error; in that case you have to enable unprivileged user namespaces first by running: sudo sysctl -w kernel.unprivileged_userns_clone=1 Note: for a wider range of use cases, the more sophisticated bwrap --unshare-net may be considered, as described briefly in a different answer . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/252714",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149642/"
]
} |
252,744 | When using ss with the -p option, the user/pid/fd column jumps underneath the particular line. For instance, this is what I'm actually seeing: # ss -nulp4State Recv-Q Send-Q Local Address:Port Peer Address:Port UNCONN 0 0 *:20000 *:* users:(("perl",pid=9316,fd=6))UNCONN 0 0 *:10000 *:* users:(("perl",pid=9277,fd=6))UNCONN 0 0 192.168.100.10:53 *:* users:(("named",pid=95,fd=517),("named",pid=95,fd=516))UNCONN 0 0 127.0.0.1:53 *:* users:(("named",pid=95,fd=515),("named",pid=95,fd=514)) Preferred output formatting : # ss -nulp4State Recv-Q Send-Q Local Address:Port Peer Address:Port UNCONN 0 0 *:20000 *:* users:(("perl",pid=9316,fd=6))UNCONN 0 0 *:10000 *:* users:(("perl",pid=9277,fd=6))UNCONN 0 0 192.168.100.10:53 *:* users:(("named",pid=95,fd=517),("named",pid=95,fd=516))UNCONN 0 0 127.0.0.1:53 *:* users:(("named",pid=95,fd=515),("named",pid=95,fd=514)) To confirm that there are no line breaks I've tried this: # ss -nulp4 | cat -AState Recv-Q Send-Q Local Address:Port Peer Address:Port $UNCONN 0 0 *:20000 *:* users:(("perl",pid=9316,fd=6))$UNCONN 0 0 *:10000 *:* users:(("perl",pid=9277,fd=6))$UNCONN 0 0 192.168.100.10:53 *:* users:(("named",pid=95,fd=517),("named",pid=95,fd=516))$UNCONN 0 0 127.0.0.1:53 *:* users:(("named",pid=95,fd=515),("named",pid=95,fd=514))$ And indeed you can see that there were none, but now, strangely enough, the output format is the way I've wanted it to be. Could someone explain what's going on here? How can I achieve my preferred formatting? This is the only thing stopping me from migrating from netstat to ss . | As for the why: ss , part of the iproute2 utility collection in the Linux kernel, uses an ioctl() request to get the current width of the terminal. However, the entire width is used for the «other» fields and the process field gets squeezed onto the next line. You can view this by, for example (when having a limited width on the terminal): script ss.txtss -nlup4exit Then widen your terminal window and cat ss.txt . The reason why ss -nulp4 | cat -A «works» is because the utility recognizes if it writes to a tty or not : if (isatty(STDOUT_FILENO)) {} As you can see from the prior line in the source code, the default width is set to 80. Thus if your terminal is at say 130 columns and you do: ss -nulp4 | cat it recognizes the output is not to a tty (but to a pipe) and the other fields are crammed into 80 columns, whilst the process field is written after these 80 columns. But as your terminal is wider than 80 columns and has room for the process entry, it is displayed on one line. The same goes for, for example: ss -nulp4 > ss.txt As for how to «achieve my preferred formatting», one likely unsuitable way is to do something in the direction of (depending on the terminal): stty cols 100ss -nlup4 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/252744",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111103/"
]
} |
252,745 | I try to modify one column of my file, then print the result. awk -F"|" '{ if(NR!=1){$5 = $5+0.1} print $0}' myfile It does what I want, but when printing, only the first line keeps its field separator (the one I don't modify). So I could use print $1"|"$2"|"$3"|"$4"|"$5"|"... but isn't there a solution using $0 ? (For example if I don't know the number of columns.) I believe I could solve my problem easily with sed , but I am trying to learn awk for now. | @Sukminder has already given the simple answer; I have a couple of tidbits of style points and helpful syntax about your example code (like a code review). This started as a comment but it was getting long. OFS is the output field separator, as already mentioned. $0 is the default argument to print ; no need to specify it explicitly. And another style point: awk has what's called "patterns", which are like built-in conditional blocks. So you could also just use: awk -F'|' 'BEGIN {OFS = FS} NR != 1 {$5 += 0.1} {print}' myfile | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/252745",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148352/"
]
} |
252,747 | Let's say I exported a variable: foo=barexport foo Now, I'd like to un-export it. That's to say, if I do sh -c 'echo "$foo"' I shouldn't get bar . foo shouldn't show up in sh -c 's environment at all. sh -c is merely an example, an easy way to show the presence of a variable. The command could be anything - it may be something whose behaviour is affected simply by the presence of the variable in its environment. I can: unset the variable, and lose it Remove it using env for each command: env -u foo sh -c 'echo "$foo"' impractical if you want to continue using the current shell for a while. Ideally, I'd want to keep the value of the variable, but not have it show up at all in a child process, not even as an empty variable. I guess I could do: otherfoo="$foo"; unset foo; foo="$otherfoo"; unset otherfoo This risks stomping over otherfoo , if it already exists. Is that the only way? Are there any standard ways? | There's no standard way. You can avoid using a temporary variable by using a function. The following function takes care to keep unset variables unset and empty variables empty. It does not however support features found in some shells such as read-only or typed variables. unexport () { while [ "$#" -ne 0 ]; do eval "set -- \"\${$1}\" \"\${$1+set}\" \"\$@\"" if [ -n "$2" ]; then unset "$3" eval "$3=\$1" fi shift; shift; shift done}unexport foo bar In ksh, bash and zsh, you can unexport a variable with typeset +x foo . This preserves special properties such as types, so it's preferable to use it. I think that all shells that have a typeset builtin have typeset +x . case $(LC_ALL=C type typeset 2>&1) in typeset\ *\ builtin) unexport () { typeset +x -- "$@"; };; *) unexport () { … };; # code aboveesac | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/252747",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70524/"
]
} |
252,812 | I have Oracle Linux 6.7, an NFS server in Windows, and I am trying to mount a shared folder in Linux. The Windows NFS server has a shared mount: 192.168.1.10:/OracleBK In my Oracle Linux server, I created a folder, /orabackup , and the oracle user from the oinstall group is the owner of this folder: mkdir /orabackupchown -R oracle:oinstall /orabackupchmod -R 777 /orabackupmount -t nfs -o rw 192.168.1.10:/OracleBK /orabackup The corresponding /etc/fstab line is 192.168.1.10:/OracleBK /orabackup nfs defaults 0 0 The command used for mounting the folder is: mount /orabackup Now, the "orabackup" folder is mounted. However, the oracle user cannot read and write, and needs read and write permissions to this directory. The root user can read and write. What should be done to give full permissions to the oracle user? | NFS checks access permissions against user ids (UIDs). The UID of the user on your local machine needs to match the UID of the owner of the files you are trying to access on the server. I would suggest going to the server and looking at the file permissions. Which UID (find out with id username ) do they belong to and which permissions are set? And if you are the only one accessing the files on the server, you can make the server pretend that all requests come from the proper UID. For that, NFS has the option all_squash . It tells the server to map all requests to the anonymous user, specified by anonuid,anongid. Add these options: all_squash,anonuid=1026,anongid=100 to the export in /etc/exports . Be warned though, that this will make anyone mounting the export effectively the owner of those files. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/252812",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149683/"
]
} |
252,822 | What is the purpose of having both? Aren't they both used for mounting drives? | I recommend visiting the Filesystem Hierarchy Standard. /media is the mount point for removable media . In other words, it is where the system mounts removable media. This directory contains sub-directories used for mounting removable media such as CD-ROMs, floppy disks, etc. /mnt is for temporary mounting . In other words, it is where the user can mount things. This directory is generally used for mounting filesystems temporarily when needed. Ref: http://www.pathname.com/fhs/pub/fhs-2.3.html#MEDIAMOUNTPOINT http://www.pathname.com/fhs/pub/fhs-2.3.html#MNTMOUNTPOINTFORATEMPORARILYMOUNT | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/252822",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148018/"
]
} |
252,824 | What command can check that a directory contains the same files as another directory, that all the files are up to date, and copy any updated or new files to the first directory? It should not care about file permissions, copying restricted files without checking back. | You can use rsync for that. NAME rsync - a fast, versatile, remote (and local) file-copying tool Example: rsync -av "/path/to/source" "/path/to/destination" Note: "/path/to/source" is the path of the source directory and "/path/to/destination" is the path to the directory which contains the destination directory. For example, if you want to synchronize /media/users/disk1/dir (as source) and /media/disk2/dir (as destination), then you should run rsync -av "/media/users/disk1/dir" "/media/disk2/" If you want to delete extraneous files from the destination, you can use the --delete option as follows: rsync -av --delete "/path/to/source" "/path/to/destination" If you want to show the progress during the transfer, then use --progress as follows: rsync -avh --progress --delete "/path/to/source" "/path/to/destination" There is also --info=progress2 available, which outputs statistics based on the whole transfer. Note: For more information on rsync, see the manpage ( man rsync ) and the list of options . You can also use a GUI front-end. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/252824",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148018/"
]
} |
252,833 | I suspend to RAM. Sometimes when I resume, some of the keys do not work. Some keys are reacting, but ALT - TAB for one is not. What can I do to ask the auto detection to retry configuring the keyboard? | You can use rsync for that. NAME rsync - a fast, versatile, remote (and local) file-copying tool Example: rsync -av "/path/to/source" "/path/to/destination" Note: "/path/to/source" is the path of the source directory and "/path/to/destination" is the path to the directory which contains the destination directory. For example, if you want to synchronize /media/users/disk1/dir (as source) and /media/disk2/dir (as destination), then you should run rsync -av "/media/users/disk1/dir" "/media/disk2/" If you want to delete extraneous files from the destination, you can use the --delete option as follows: rsync -av --delete "/path/to/source" "/path/to/destination" If you want to show the progress during the transfer, then use --progress as follows: rsync -avh --progress --delete "/path/to/source" "/path/to/destination" There is also --info=progress2 available, which outputs statistics based on the whole transfer. Note: For more information on rsync, see the manpage ( man rsync ) and the list of options . You can also use a GUI front-end. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/252833",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2972/"
]
} |
252,901 | So I can run a process in Unix / Linux using POSIX, but is there some way I can store / redirect both the STDOUT and STDERR of the process to a file? The spawn.h header contains a declaration of posix_spawn_file_actions_adddup2 which looks relevant, but I'm not quite sure how to use it. The process spawn: posix_spawn(&processID, (char *)"myprocess", NULL, NULL, args, environ); The output storage: ...? | Here's a minimal example of modifying file descriptors of a spawned process, saved as foo.c : #include <stdio.h>#include <stdlib.h>#include <sys/stat.h>#include <fcntl.h>#include <spawn.h>int main(int argc, char* argv[], char *env[]){ int ret; pid_t child_pid; posix_spawn_file_actions_t child_fd_actions; if (ret = posix_spawn_file_actions_init (&child_fd_actions)) perror ("posix_spawn_file_actions_init"), exit(ret); if (ret = posix_spawn_file_actions_addopen (&child_fd_actions, 1, "/tmp/foo-log", O_WRONLY | O_CREAT | O_TRUNC, 0644)) perror ("posix_spawn_file_actions_addopen"), exit(ret); if (ret = posix_spawn_file_actions_adddup2 (&child_fd_actions, 1, 2)) perror ("posix_spawn_file_actions_adddup2"), exit(ret); if (ret = posix_spawnp (&child_pid, "date", &child_fd_actions, NULL, argv, env)) perror ("posix_spawn"), exit(ret);} What does it do? The third parameter of posix_spawn is a pointer of type posix_spawn_file_actions_t (the one you have given as NULL ). posix_spawn will open, close or duplicate file descriptors inherited from the calling process as specified by the posix_spawn_file_actions_t object. So we start with a posix_spawn_file_actions_t object ( child_fd_actions ), and initialize it with posix_spawn_file_actions_init() . Now, the posix_spawn_file_actions_{addopen,addclose,adddup2} functions can be used to open, close or duplicate file descriptors (after the open(3) , close(3) and dup2(3) functions) respectively. So we posix_spawn_file_actions_addopen a file at /tmp/foo-log to file descriptor 1 (aka stdout). Then we posix_spawn_file_actions_adddup2 fd 2 (aka stderr ) to fd 1. Note that nothing has been opened or duped yet . The last two functions simply changed the child_fd_actions object to note that these actions are to be taken. And finally we use posix_spawn with the child_fd_actions object. Testing it out: $ make foocc foo.c -o foo$ ./foo$ cat /tmp/foo-log Sun Jan 3 03:48:17 IST 2016$ ./foo +'%F %R' $ cat /tmp/foo-log2016-01-03 03:48$ ./foo -d 'foo' $ cat /tmp/foo-log./foo: invalid date ‘foo’ As you can see, both stdout and stderr of the spawned process went to /tmp/foo-log . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/252901",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17639/"
]
} |
252,949 | Is it possible to use TLSv1.3 in Apache2.4? As of October 2015, TLS 1.3 is a working draft, i.e. TLSv1.3 . | June 2019 Update It's here! Apache 2.4.37 ( released 22-October-2018 ) adds support for OpenSSL 1.1.1 and TLSv1.3 . Make sure you use at least 2.4.39 though due to security issues. March 2018 Update The TLS 1.3 draft is up to v26. There is general support in the main SSL libraries for varying versions of the draft. It doesn't look like Chrome and Firefox have shipped it "on" by default yet. Cloudflare have written about some issues with using TLS 1.3 across some TLS 1.2 devices when trials were done. Dec 2017 Update The TLS 1.3 draft is up to v22. Not much change in servers and clients, probably waiting for something closer to the formal release spec. June 2017 Update The mod_nss module can be used to enable TLS 1.3 on Apache 2.4. Most SSL implementations have varying features of TLS 1.3 implemented. Chrome and Firefox have shipped TLS 1.3 behind feature flags. Feb 2017 Update There are some TLS 1.3 implementations now that the spec is a bit more mature. BoringSSL and OpenSSL are working on 1.3 but it seems to be a WIP. No mod_ssl TLS 1.3 yet. Original There don't seem to be any OpenSSL implementations of the draft TLS 1.3 specification yet, which would be required for mod_ssl to support it. So I'm going to say no. Neither the OpenSSL nor BoringSSL projects mention TLS 1.3 much, other than people fixing bugs with forethought of what looks to be coming in TLS 1.3. There are only a couple of references to the 1.3 version in the tests for OpenSSL. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/252949",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149772/"
]
} |
252,977 | When I execute the following command in Ubuntu: curl -v --insecure -XGET 'https://user:pass@IP_ADDR:PORT/SOME_FILE.php' I get this output: * Hostname was NOT found in DNS cache* Trying IP_ADDR...* Connected to IP_ADDR (IP_ADDR) port PORT (#0)* successfully set certificate verify locations:* CAfile: none CApath: /etc/ssl/certs* SSLv3, TLS handshake, Client hello (1): And after several minutes I get this: * Unknown SSL protocol error in connection to IP_ADDR:PORT * Closing connection 0curl: (35) Unknown SSL protocol error in connection to IP_ADDR:PORT When I try the same thing in CentOS I still get stuck in Client Hello , but in the end I get this: curl: (28) Operation timed out after 0 milliseconds with 0 out of 0 bytes received Does anyone know what can cause it and how I can fix it? | We suffered the exact same issue and the cause was an MTU misconfiguration, but there are many other possible causes. The key was to sniff traffic on our edge router, where we saw ICMP messages to the server (GitHub.com) asking for fragmentation . This was messing up the connection, with retransmissions, duplicated ACKs and so on. The ICMP packet had a field, MTU of next hop , with a weird value, 1450. The usual value is 1500. We checked our router and one of the interfaces (an Ethernet tunnel) had this value as its MTU, so the router was taking the minimum MTU of all interfaces as the next hop MTU. As soon as we removed this interface (it was unused), the SSL handshake started to work again. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/252977",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149792/"
]
} |
252,980 | I'm trying to find my current logged in group without wanting to use newgrp to switch. | I figured I can use the following. id -g To get all the groups I belong to: id -G And to get the actual names, instead of the ids, just pass the flag -n . id -Gn This last command will yield the same result as executing groups | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/252980",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149687/"
]
} |
252,995 | How can mouse support be enabled in an Emacs terminal session started with emacs -nw ? Is there a keyboard shortcut or a flag to do this? If not how can it be done in terminal emulators? I use Guake. | Hit F10 to open the menu and use the arrow keys to navigate to “Options” → “Customize Emacs” → “All Settings Matching…”. Type mouse and Enter . If your Emacs version doesn't have a menu when running in a terminal then run M-x customize . (This means: press Alt + X , type customize and press Enter .) Navigate to the search box, type mouse and press Enter . Mouse support is called “Xterm Mouse mode”. You can find that in the manual . The manual also gives a way to turn it on (for the current session) — M-x xterm-mouse-mode . In the Customize interface, on the setting you want to change, press Enter on “Show Value”. A “Toggle” button appears, press Enter on it. Then press Enter on the “State” box and choose either 0 for “Set for Current Session” or “1” for “Save for Future Sessions”. (You can choose 0 for now and come back there and choose 1 later if you're happy with the setting.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/252995",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148018/"
]
} |
253,038 | Let's say I have 2 files each containing lines which start with a 'b' character and I only want to merge these lines in the same order they appear in the first file. First File (1.txt) b 12 32b 23 43b 23 63 Second File (2.txt) a 1322c 233g 23324s 24352bh vd2 3f4ga 2t42c 34536g h3443es 24h455bh 3434gggdfbv4a 423gwgc f24bvg 34g 45hs 4zth5bh 3456zh543 You can see that in the second file the lines which start with a 'b' character don't contain any more information, while in the first file I have lines only starting with a 'b' followed by some integer values. What I need now is something which gets the integers from the first file and puts them into the second file's 'b' lines the same way they appear in the first file. So the second file should in the end look like this: merged file (3.txt) a 1322 c 233 g 23324 s 24352 b 12 32 h vd2 3f4g a 2t42 c 34536 g h3443e s 24h455 b 23 43 h 3434gggdfbv4 a 423gwg c f24bv g 34g 45h s 4zth5 b 23 63 h 3456zh543 The join command seems to be able to do what I want but I can't find a way to tell it to only work on lines matching the leading 'b' character. I also thought about a loop walking through file 1 to get the line numbers matching the pattern '^b' and then using them to replace the lines matching pattern '^b' in file 2, but again I can't find a working solution. Does anyone have an idea to accomplish my task with a one-liner or a short bash script? | With GNU sed: sed -e '/^b/{R 1.txt' -e 'd}' 2.txt If you want to edit the file 2.txt "in place", add sed's option -i . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/253038",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142714/"
]
} |
253,074 | In directory /source I'd like to copy all files with a file name more than 15 characters to directory /dest . Is there a UNIX command to do this? EDIT : Although this question explains how to search for a filename of a certain length, my question also asks how to copy the file. | You can make a pattern with 16-or-more characters and copy those files. A simple (but not elegant) approach, using 16 ? characters, each matching any single character: for n in /source/????????????????*;do [ -f "$n" ] && cp "$n" /dest/; done After the 16th ? , use * to match any number of characters. The pattern might not really match anything, so a test -f to ensure it is a file is still needed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/253074",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87878/"
]
} |
253,081 | Is it possible with either find or ls to see, in a list of results, if any given entry is the nth result, what n is? So, for example, pretend ls -l returns: total 0-rw-rw---- 1 bigdog bigdog 0 Jan 3 17:13 a-rw-rw---- 1 bigdog bigdog 0 Jan 3 17:13 b-rw-rw---- 1 bigdog bigdog 0 Jan 3 17:13 c Is there a way to get ls to return 1,2,3 for files a,b,c respectively? Or, find . -type f returns: ./a./b./c Any way to get something like: 1 ./a2 ./b3 ./c I am aware of ls | wc -l and the like, but that is not what I'm looking for. | You can use the following command: ls | nl | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/253081",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121258/"
]
} |
253,147 | A version of java is installed on my Linux machine. When I try this: root@test$: javac -version it gives the result as: javac jdk1.7.0_80. Now my problem is I don't know where that (1.7.0_80) java folder is. I have a folder named " java-7-oracle " in usr/lib/jvm. I am suspecting that it would be the folder for the installed version of java. Now I have a java folder and I want to know which version of java it is. How? | I think you can track all this by checking where your java binaries are linked to. #which javac /usr/bin/javac #ls -ln /usr/bin/java lrwxrwxrwx. 1 0 0 22 Nov 27 04:54 /usr/bin/java -> /etc/alternatives/java #ls -ln /usr/bin/javac lrwxrwxrwx. 1 0 0 23 Nov 27 04:54 /usr/bin/javac -> /etc/alternatives/javac # ls -ln /usr/bin/javadoc lrwxrwxrwx. 1 0 0 25 Nov 27 04:54 /usr/bin/javadoc -> /etc/alternatives/javadoc and finally: #ls -ld /etc/alternatives/javalrwxrwxrwx. 1 root root 46 Nov 27 04:54 /etc/alternatives/java -> /usr/lib/jvm/jre-1.7.0-openjdk.x86_64/bin/java Therefore, my java installation is: /usr/lib/jvm/jre-1.7.0-openjdk.x86_64 I suppose you can track any binary like this. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/253147",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124316/"
]
} |
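On systems with GNU coreutils, the whole symlink chain can be resolved in one step, and any candidate directory can simply be asked for its own version; the java-7-oracle path below is the one from the question:

readlink -f "$(which javac)"
/usr/lib/jvm/java-7-oracle/bin/java -version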
253,163 | Right now I can't access an HTTPS site from the PhantomJS headless WebKit browser because of TLSv1.2. On my CentOS 5.8 machine I have the following OpenSSL version installed: OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008. I think I need to upgrade my OpenSSL library in order to support TLSv1.2. Am I right? If so, could you please show me an example of how it can be achieved? | Yes, you are right: TLSv1.2 support was only introduced in OpenSSL 1.0.1, so the 0.9.8e build shipped with CentOS 5 cannot speak TLSv1.2, and the distribution will never rebase it — it only backports security fixes to 0.9.8e, so no yum update will ever bring TLSv1.2. That leaves two realistic options: build a newer OpenSSL (1.0.1 or later) from source into its own prefix and build or point PhantomJS at that library, or — preferably — move to a newer distribution release, since CentOS 5 is close to end of life and a hand-built OpenSSL receives no security updates from the distribution. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/253163",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149916/"
]
} |
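A minimal sketch of the source-build option; the version number and the /opt/openssl prefix are placeholders to adjust:

curl -O https://www.openssl.org/source/openssl-1.0.2.tar.gz
tar xzf openssl-1.0.2.tar.gz && cd openssl-1.0.2
./config shared --prefix=/opt/openssl
make && make test && make install
/opt/openssl/bin/openssl version    # should now report 1.0.2

PhantomJS would then have to be built against, or pointed at, the libraries under /opt/openssl/lib.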
253,203 | My question is simple: how do I tell journald to re-read its configuration file without rebooting? I've made some changes to /etc/systemd/journald.conf and I'd like to see if they are correct and everything works as I expect. I do not want to reboot. | To control running services with systemd, use the systemctl utility. This utility is similar to the service utility provided by SysVinit and Upstart. Among others: systemctl status systemd-journald indicates whether the service is running and shows additional information if it is. systemctl start systemd-journald starts the service (systemd unit). systemctl stop systemd-journald stops the service. systemctl restart systemd-journald restarts the service. systemctl reload systemd-journald reloads the service's configuration if possible, but will not kill it (so no risk of a service interruption or of disrupting processing in progress, but the service may keep running with a stale configuration). systemctl force-reload systemd-journald reloads the service's configuration if possible, and if not restarts the service (so the service is guaranteed to use the current configuration, but this may interrupt something). systemctl daemon-reload reloads systemd's own configuration. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/253203",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2229/"
]
} |
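Applied to the question, a minimal sketch — restart journald so it picks up the edited /etc/systemd/journald.conf, then confirm it came back up:

systemctl restart systemd-journald
systemctl status systemd-journald

Restarting journald is safe in practice: its sockets are managed by systemd itself, so logging clients generally keep working across the restart.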
253,211 | Nautilus has the ability to assign a keyboard shortcut to open a terminal at the current folder. Is this possible in Nemo (I have version 2.6.7, running on Linux Mint 17.2)? | Yes, it's possible but not obvious how to do it: Create the file ~/.gnome2/accels/nemo: $ mkdir -p ~/.gnome2/accels $ touch ~/.gnome2/accels/nemo Then add the following line in that file (replace "F4" with whatever shortcut you want to use): (gtk_accel_path "<Actions>/DirViewActions/OpenInTerminal" "F4") This worked for me with Nemo 3.6.5 on Ubuntu 18.04. Source: https://forums.linuxmint.com/viewtopic.php?f=47&t=225682#p1197133 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/253211",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78809/"
]
} |
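The whole setup from the answer, condensed into a copy-pasteable sketch — F4 is just the example binding, and nemo -q quits Nemo so the file is re-read on its next launch (logging out and back in works too):

mkdir -p ~/.gnome2/accels
echo '(gtk_accel_path "<Actions>/DirViewActions/OpenInTerminal" "F4")' >> ~/.gnome2/accels/nemo
nemo -q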
253,233 | I have recursively transferred many files and folders with scp , using the command: scp -rp /source/folder [email protected]:/destination/folder After the transfer is completed, do I need to check whether all files got transferred without any corruption, or does scp take care of it (i.e. display some error message if any of the files is not correctly transferred)? | scp verifies that it copied all the data sent by the other party. The integrity of the transfer is guaranteed by the cryptographic channel protocol. So you don't need to verify the integrity after the transfer. That would be redundant, and very unlikely to catch any hardware error since the data you're comparing against would probably be read from the cache. Verifying data periodically can be useful, but verifying immediately after the transfer is pointless. You do however need to ensure that scp isn't telling you that something went wrong. There should be an error message, but the reliable indicator is that scp returns a nonzero exit code if something went wrong. More precisely, you know that the file was transmitted correctly if scp returns 0 (i.e. the success status code). Checking that the exit status is 0 is necessary when you run any command anyway. If scp returns an error status, or if it's killed by a signal, or if it never dies because the system crashes or loses power while it's running, then you have no guarantees. In particular, since scp copies the file directly to its final name, this means that you can end up with a partial file in case of a system crash. The part that was copied is guaranteed to be correct but the file may be truncated. For better reliability, use rsync instead of scp. Unless instructed otherwise, rsync writes to a temporary file, and moves it into place once it's finished. Thus, if rsync returns a success code, you know the file is present and a correct, complete copy; if rsync hasn't returned a success code then no new file will be present (unless there was an older version of the file, in which case that older version won't be modified). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/253233",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16704/"
]
} |
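A sketch of the rsync approach with the exit status checked explicitly; the host and paths are placeholders:

if rsync -a /source/folder/ user@remote.example.com:/destination/folder/; then
    echo "copy complete and verified"
else
    echo "transfer failed" >&2
fi

The trailing slashes matter to rsync: with them, the contents of the source directory land directly inside the destination directory rather than in a nested subdirectory.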
253,245 | What does make localmodconfig do and what should you set so that external hardware is supported? | From the kernel README: "make localmodconfig" Create a config based on current config and loaded modules (lsmod). Disables any module option that is not needed for the loaded modules. To create a localmodconfig for another machine, store the lsmod of that machine into a file and pass it in as a LSMOD parameter. target$ lsmod > /tmp/mylsmod target$ scp /tmp/mylsmod host:/tmp host$ make LSMOD=/tmp/mylsmod localmodconfig The above also works when cross compiling. "make localyesconfig" Similar to localmodconfig, except it will convert all module options to built in (=y) options. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/253245",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148304/"
]
} |
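For the external-hardware part of the question: localmodconfig can only keep modules that are currently loaded, so attach and use every external device (USB drives, printers, network adapters, ...) before taking the snapshot. A typical local run, mirroring the README steps:

lsmod > /tmp/mylsmod        # taken with all external hardware plugged in and active
make LSMOD=/tmp/mylsmod localmodconfig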
253,271 | In Ubuntu/gnome-terminal, if I run: $ stty -icrnl Then launch the GHC interactive environment (a Haskell console): $ ghci Then pressing Return does not submit the line; however, Enter does. However, with: $ stty icrnl Both Return and Enter submit the line. I don't really understand the behaviour; surely Return will be submitting the newline character in both cases? | The first step in understanding what's going on is to be aware that there are in fact two “newline” characters. There's carriage return (CR, Ctrl + M ) and line feed (LF, Ctrl + J ). On a teletype , CR moves the printer head to the beginning of the line while LF moves the paper down by one line. For user input, there's only one relevant concept, which is “the user has finished entering a line”, but unfortunately there's been some divergence: Unix systems, as well as the very popular C language, use line feed to represent line breaks; but terminals send a carriage return when the user presses the Return or Enter key. The icrnl setting tells the terminal driver in the kernel to convert the CR character to LF on input. This way, applications only need to worry about one newline character; the same newline character that ends lines in files also ends lines of user input on the terminal, so the application doesn't need to have a special case for that. By default, ghci, or rather the haskeline library that it uses, has a key binding for Ctrl + J , i.e. LF, to stop accumulating input and start processing it. It has no binding for Ctrl + M i.e. CR. So if the terminal isn't converting CR to LF, ghci doesn't know what to do with that character. Haskeline instructs the terminal to report keypad keys with escape sequences. It queries the terminal's terminfo settings to know what those escape sequences are ( kent entry in the terminfo database). (The terminfo database is also how it knows how to enable keypad escapes: it sends the smkx escape sequence, and it sends rmkx on exit to restore the default keypad character mode.) When you press the Enter key on the keypad in ghci, that sends the escape sequence \eOM , which haskeline recognizes as a binding to stop accumulating input and start processing it. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/253271",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27530/"
]
} |
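The difference is easy to watch directly; a small experiment (cat -v shows the carriage return as ^M):

stty -icrnl
cat -v       # type "hi", press Return: nothing is submitted; press Ctrl+J: cat echoes hi^M
stty icrnl   # restore the default CR-to-LF conversion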
253,276 | How would a system be set up to stripe data like RAID does, preferably without using RAID? Could LVM be used for this? | Yes, LVM can do this natively. When you create a logical volume you can tell LVM to stripe it across several physical volumes with the -i (--stripes) option, optionally setting the stripe size with -I (--stripesize). This gives you RAID 0-style striping handled entirely by the LVM layer, with no md device involved. Bear in mind that, exactly like RAID 0, striping offers no redundancy: losing any one of the underlying disks loses the whole logical volume, so it is only appropriate where throughput matters more than durability. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/253276",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148018/"
]
} |
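A minimal sketch of the LVM striping described above; vg0, the device names and the sizes are placeholders:

pvcreate /dev/sdb /dev/sdc
vgcreate vg0 /dev/sdb /dev/sdc
lvcreate --stripes 2 --stripesize 64 --size 100G --name striped vg0
lvs --segments vg0    # the segment type should show up as striped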