source_id (int64, 1 – 74.7M) | question (string, 0 – 40.2k chars) | response (string, 0 – 111k chars) | metadata (dict)
---|---|---|---|
147,488 | I'm trying to use the nc command on SuSE Linux Enterprise Desktop 11. I executed this line: nc -4ul 192.0.2.2 50000 But I got this: -bash: nc: command not found This is the first time that I have had this problem. I have used the command for other tests without difficulty. I'd appreciate any help to solve this. | It seems that you don't have netcat-openbsd installed on your machine. Try: zypper search netcat-openbsd Then: sudo zypper install netcat-openbsd Your package name may be different, so change it to whatever the zypper search command returns. This will install the netcat version implemented by OpenBSD. Note: zypper's netcat-openbsd package is the OpenBSD netcat implementation. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147488",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74301/"
]
} |
147,494 | I'm using Raspbian (a distribution made for the Raspberry Pi, which is based on Debian). I have some scripts that use i2c. Normally only root has read and write permissions for i2c. I'm using this command to add i2c r/w permissions for a normal user: # chmod a+rw /dev/i2c-* However after reboot, these devices have their default permissions. What is the best way to make my i2c available for r/w for a normal user permanently? Is there a more "elegant" way than adding my script to init.d that runs the command above after my Raspberry Pi boots? | You can do this using udev . Create a file in /etc/udev/rules.d with the suffix .rules , e.g. local.rules , and add a line like this to it: ACTION=="add", KERNEL=="i2c-[0-1]*", MODE="0666" MODE=0666 is rw for owner, group, world. Something you can do instead of, or together with that, is to specify a GID for the node, e.g.: GROUP="pi" If you use this instead of the MODE setting, the default, 0660 (rw for owner and group) will apply, but the group will be pi , so user pi will have rw permissions. You can also specify the OWNER the same way. Pay attention to the difference between == and = above. The former is to test if something is true, the latter sets it. Don't mix those up by forgetting a = in == . You have to reboot for this to take effect. "Writing udev rules" Reference | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147494",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16148/"
]
} |
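As a concrete illustration of the udev answer above, the rule and the commands to apply it could look like the sketch below. This assumes Raspbian with udevadm available; the file name local.rules matches the answer's example, and running the commands as root (or via sudo) is assumed.

```
# /etc/udev/rules.d/local.rules -- written with a heredoc so the file content is visible
sudo tee /etc/udev/rules.d/local.rules >/dev/null <<'EOF'
ACTION=="add", KERNEL=="i2c-[0-1]*", MODE="0666"
EOF

# Reload the rules and re-trigger events instead of rebooting (supported by most udev versions)
sudo udevadm control --reload-rules
sudo udevadm trigger
ls -l /dev/i2c-*    # the devices should now be readable and writable by everyone
```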
147,499 | At first I create a file and check its standard permissions and ACL entries: $ touch file; ls -l file; getfacl file-rw-r--r-- 1 user user 0 Jul 30 16:26 file# file: file# owner: user# group: useruser::rw-group::r--other::r-- Then I set the ACL mask on the file and again check its standard permissions and ACL entries: $ setfacl -m mask:rwx file$ ls -l file; getfacl file-rw-rwxr--+ 1 user user 0 Jul 30 16:26 file# file: file# owner: user# group: useruser::rw-group::r--mask::rwxother::r-- Note that along with the ACL mask, the standard group permission on the file also changed. What connection exists between the ACL mask and the standard group permission? What is the reason for coupling the ACL mask and file group permissions? What logic lies behind it? The distributions in question are Debian Linux 7.6 and CentOS 7. EDIT At this point I just wanted to share some findings of mine I came up with while researching the relations between standard file group permissions and the ACL mask. Here are the empirical observations I found: The ACL mask can be changed: by directly setting it with the setfacl -m m:<perms> command; by changing file group permissions with the chmod command (if an ACL mask is already present; it may not be present because it is optional if there are no named user or group ACL permissions on the file); by adding either a named user or group ACL entry (the mask will be automatically recalculated). The mask will enforce maximum access rights (if there are ACL entries with permissions present that exceed the ACL mask permissions) only if the mask is set directly by setfacl or by modification of the file group permission with chmod (not auto-calculated). Any changes to ACL entries will trigger the ACL mask automatic recalculation and effectively turn off the "enforcing mode". There are a couple of side effects implicitly affecting standard file group permissions when using ACLs: A named user or group ACL entry applied to a file can change the ACL mask (increase its permissions) and hence the effective file group permissions. For example if you, as a file owner, have "rw-r--r-- jim students" permissions set on it and you also grant rw permission to the user "jack", you'll also implicitly grant rw permissions to anyone from the "students" group. A stricter (fewer permissions) ACL mask can permanently remove corresponding standard file group permissions. E.g. if you have a file with rw standard file group permissions and you apply a read-only ACL mask to the file, its group permissions will decrease to read-only. Then if you remove all extended ACL entries (with the setfacl -b command), the group permissions will stay read-only. This applies only to a stricter ACL mask; a softer ACL mask (more permissions) doesn't permanently alter the original file group permission after it is removed. | It doesn't make sense for the Unix file permissions to disagree with the ACL entries, or vice versa. Accordingly, the manual page ( acl(5) ) says what you ask for: CORRESPONDENCE BETWEEN ACL ENTRIES AND FILE PERMISSION BITS The permissions defined by ACLs are a superset of the permissions specified by the file permission bits. There is a correspondence between the file owner, group, and other permissions and specific ACL entries: the owner permissions correspond to the permissions of the ACL_USER_OBJ entry. If the ACL has an ACL_MASK entry, the group permissions correspond to the permissions of the ACL_MASK entry. Otherwise, if the ACL has no ACL_MASK entry, the group permissions correspond to the permissions of the ACL_GROUP_OBJ entry.
The other permissions correspond to the permissions of the ACL_OTHER_OBJ entry. The file owner, group, and other permissions always match the permissions of the corresponding ACL entry. Modification of the file permission bits results in the modification of the associated ACL entries, and modification of these ACL entries results in the modification of the file permission bits. Addendum in response to the discussion: What is the reason for coupling the ACL mask and file group permissions? What logic lies behind it? A good explanation is here . In essence the mask is an [...] upper bound of the permissions that any entry in the group class will grant. This upper bound property ensures that POSIX.1 applications that are unaware of ACLs will not suddenly and unexpectedly start to grant additional permissions once ACLs are supported. In minimal ACLs, the group class permissions are identical to the owning group permissions. In extended ACLs, the group class may contain entries for additional users or groups. This results in a problem: some of these additional entries may contain permissions that are not contained in the owning group entry, so the owning group entry permissions may differ from the group class permissions. This problem is solved by virtue of the mask entry. With minimal ACLs, the group class permissions map to the owning group entry permissions. With extended ACLs, the group class permissions map to the mask entry permissions, whereas the owning group entry still defines the owning group permissions. The mapping of the group class permissions is no longer constant. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147499",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58428/"
]
} |
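The coupling the answer describes can be observed in a short session like the one below. It is only a sketch: the user name jack is hypothetical, and it assumes a filesystem mounted with ACL support plus the setfacl/getfacl tools.

```
touch demo
setfacl -m u:jack:rw demo   # a named-user entry is added and the mask is recalculated to rw-
getfacl demo                # shows user:jack:rw- and mask::rw-
ls -l demo                  # the "group" bits now display the mask (rw-), with a trailing +

chmod g-w demo              # with an extended ACL, chmod's group bits modify the mask...
getfacl demo                # ...so jack's entry is reported with "#effective:r--"
```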
147,560 | Consider the following bash script: #!/bin/bashecho "${1##*.}" This script prints the extension of a file if the file name is supplied as the first command line argument, something like following: $ ./script.sh ./file.pdf In the above case pdf is printed. Please explain how the expression ${1##*.} is able to extract the extension of the file. (I understand what $0, $1, $2, $# do in bash and also understand about regular expressions to some extent) | Bash parameter expansion supports several modifications it can do to the value while expanding a variable. One of them is ## , which removes the longest prefix of the value matching a pattern (patterns are not regular expressions here). In this case the pattern is *. . That matches any zero or more characters followed by a . . ${x##*.} means to remove all of the string up to the last . character, and leave everything after that dot. ${1##*.} means to do that expansion using the value of the first positional parameter, the one you'd usually access with $1 . The final result of echo "${1##*.}" is then to print out the part of the first argument of the script that comes after the last . , which is the filename extension. If the pattern doesn't match at all, the full value of the variable is expanded, just as if you hadn't used the ## . In this case, if the argument you gave didn't have a . in it at all then you'd just get it back out again. Bash also supports a single # to take the shortest matching prefix off, and the same thing with % to match the end of the string instead. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/147560",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77842/"
]
} |
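For comparison with the # and % operators mentioned at the end of the answer, a small sketch (the sample filename is arbitrary):

```
f=archive.tar.gz
echo "${f##*.}"   # gz          - remove the longest prefix matching *.  (the extension)
echo "${f#*.}"    # tar.gz      - remove the shortest prefix matching *.
echo "${f%%.*}"   # archive     - remove the longest suffix matching .*
echo "${f%.*}"    # archive.tar - remove the shortest suffix matching .*
```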
147,563 | I know I can use Up to iterate through previous commands. Running the last command simply involves Up + Enter . However, I was thinking of buying the Happy Hacking Keyboard as I spend a lot of time in vim . This keyboard has no arrow keys, and the only way I know how to get this kind of behaviour is by pressing Ctrl + R and beginning to repeat my previous command. Is there an easy way to emulate Up + Enter in an UNIX terminal without the arrow keys? | With csh or any shell implementing csh -like history substitution ( tcsh , bash , zsh ): !! Then Enter . Or alternatively : !-1 Then Enter . Or Ctrl + P , Enter Magic space Also, note that !! and !-1 will not auto-expand for you, until you execute them (when it might be too late). If using bash , you can put bind Space:magic-space into ~/.bashrc , then pressing Space after the command will auto-expand them inline, allowing you to inspect them before execution. This is particularly useful for history expansion from a command run a while ago, e.g. !echo will pull the last command run starting with echo . With magic space, you get to preview the command before it's run. That's the equivalent of doing bindkey ' ' magic-space in tcsh or zsh . | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/147563",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65634/"
]
} |
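A minimal sketch of the bash setup described above; the binding line goes in ~/.bashrc and the rest is typed at the prompt.

```
# ~/.bashrc
bind Space:magic-space     # pressing Space expands !!, !-1, !echo ... inline for review

# at the prompt, without arrow keys:
!!                         # re-run the previous command
sudo !!                    # a common use: repeat the last command with sudo
```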
147,579 | In many languages it is possible to assign the result of a case/switch statement to a variable, rather than repeating the variable assignment many times within the case statement. Is it possible to do something like this in the Bash shell? color_code=$(case "$COLOR" in (red) 1;; (yellow) 2;; (green) 3;; (blue) 4;;esac) (Or, as an aside, in any other shells?) | The variable=$(...) construct will take the standard output of whatever command is in $(...) and assign it to variable . Thus, to get variable assigned the way that you want, the values have to be sent to standard output. This is easily done with the echo command: color_code=$(case "$COLOR" in red) echo 1;; yellow) echo 2;; green) echo 3;; blue) echo 4;;esac) This will work on bash as well as all other POSIX shells. The Optional Left Parens According to the POSIX standard, the left parens in a case statement is optional and the following works as well: color_code=$(case "$COLOR" in (red) echo 1;; (yellow) echo 2;; (green) echo 3;; (blue) echo 4;;esac) As Gilles points out in the comments, not all shells accept both forms in combination with $(...) : for an impressively detailed table of compatibility, see "$( )" command substitution vs. embedded ")" . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147579",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3358/"
]
} |
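Unfolded onto several lines, the same POSIX-portable construct from the answer reads more easily; the default *) branch is an optional addition not present in the original.

```
color_code=$(
  case "$COLOR" in
    red)    echo 1 ;;
    yellow) echo 2 ;;
    green)  echo 3 ;;
    blue)   echo 4 ;;
    *)      echo 0 ;;   # fallback, added here only for illustration
  esac
)
echo "$COLOR -> $color_code"
```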
147,619 | I wanted to tail the last 100 lines of a file to the same file, but the command tail -n 100 file > file doesn't work, I assume because the stdout gets written to the file 'live', before everything was read from the original file. Is there some way to pipe the output to something , that then keeps it until all 100 lines are there, and then outputs it to the file? Or just another way to shorten the file in this way? | sponge from moreutils is good for this. It will: soak up standard input and write to a file You use it like this: tail -n 100 file | sponge file to get exactly the effect you want. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147619",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33929/"
]
} |
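If moreutils (and therefore sponge) is not installed, a temporary file gives a similar effect. This is a sketch only; note that mv replaces the file's inode, which can matter if another process keeps the log open.

```
tmp=$(mktemp) &&
  tail -n 100 file > "$tmp" &&
  mv "$tmp" file
```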
147,630 | I currently have Linux kernel 2.6, which works for my hardware. Now I want to compile Linux kernel 3.2 for the same hardware. Can I use the same .config of 2.6 directly for v3.2? Is there any documentation/guide about how to migrate the .config file from one kernel version to the other? | You could use make oldconfig . After you copy the 2.6 .config file, this make option will prompt you for options in the current kernel source that are not found in the file. However, you will have to deal with choosing options out of context, which can make it difficult to give the right answer. Further reading : What does “make oldconfig” do exactly - Linux kernel makefile Kernel/Upgrade - Gentoo Wiki | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147630",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79265/"
]
} |
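In practice the migration usually looks like the sketch below, run from the new source tree. The paths are illustrative, and the non-interactive make olddefconfig target only exists in newer kernel trees, so treat that line as optional.

```
cd /usr/src/linux-3.2                  # the 3.2 source tree (example path)
cp /boot/config-$(uname -r) .config    # or copy the old 2.6 .config from wherever you keep it
make oldconfig                         # asks only about options that are new since 2.6
# make olddefconfig                    # newer trees only: accept defaults without prompting
```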
147,669 | For some reason I cannot pass the bash variable $FOLDER as a python argument on the following code. The python script downloads some files from amazon s3. Bash script: #!/bin/bashFOLDER=$(./aws get $BUCKET"/location.txt")python /data/pythonscript.py $FOLDER# The output of the $FOLDER is a regenerated date eg. 2014/07/31/14-16-34 which is used as a path. Here is the python script: #!/usr/bin/pythonimport boto, sysfrom boto.s3.connection import S3Connectionaccess_key = 'accesskey'secret_key = 'secretkey'bucket_name = 'a name'folder_path = str(sys.argv[1]) if len(sys.argv) > 1 else ''print("Forwarded folder path " + folder_path)conn = S3Connection(access_key, secret_key)bucket = conn.get_bucket(bucket_name)print("Bucket Location:" + bucket.get_location())for key in bucket.list(prefix=folder_path, delimiter=''): if '.' in key.name: file_name = key.name[len(folder_path)+1:] print("Downloading file " + file_name) key.get_contents_to_filename('/data/temp/' + file_name) When I execute the bash script without changing the python /data/pythonscript.py $FOLDER line, I get the following output: Forwarded folder path 2014/07/31/14-16-34 Buckect Location: But when I change it to python /data/pythonscript.py 2014/07/31/14-16-34 , everything works: Forwarded folder path 2014/07/31/14-16-34Bucket Location: Downloading 2014/07/31/14-16-34/FacetedSearch.zip Downloading file FacetedSearch.zipDownloading 2014/07/31/14-16-34/Location.zipDownloading file Location.zipDownloading 2014/07/31/14-16-34/LocationPage.zipDownloading file LocationPage.zip | Perhaps the aws bash command is returning non-printable characters that you don't see with print() . Try removing them with tr : FOLDER=$(./aws get $BUCKET"/location.txt" | tr -cd "[:print:]") | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147669",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79287/"
]
} |
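To check whether the aws wrapper really emits stray characters, and to strip them as suggested, the bash side could look like this sketch (paths and variable names are the question's own):

```
FOLDER=$(./aws get "$BUCKET/location.txt")
printf '%q\n' "$FOLDER"      # bash-only: makes hidden \r, \n or escape characters visible
FOLDER=$(printf '%s' "$FOLDER" | tr -cd '[:print:]')
python /data/pythonscript.py "$FOLDER"
```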
147,728 | From man bash : CONDITIONAL EXPRESSIONS[...] -a file True if file exists.[...] -e file True if file exists. So what is the difference between [ -a $FILE ] and [ -e $FILE ] , if any? If there is no real difference, why do two flags for the same purpose exist? | In bash , in the context of a two-argument test command, -a file and -e file are the same. But they do differ, because -a is also a binary operator. -e unary is defined by POSIX, but -a unary isn't. POSIX only defines -a binary (See test POSIX). POSIX defines the three-argument test behaviour: 3 arguments: If $2 is a binary primary, perform the binary test of $1 and $3. If $1 is '!', negate the two-argument test of $2 and $3. If $1 is '(' and $3 is ')', perform the unary test of $2. On systems that do not support the XSI option, the results are unspecified if $1 is '(' and $3 is ')'. Otherwise, produce unspecified results. So -a also leads to a strange result: $ [ ! -a . ] && echo truetrue -a is considered a binary operator in the context of three arguments. See Bash FAQ question E1 . POSIX also mentions that -a was taken from the KornShell but was later changed to -e because it caused confusion between the -a binary and -a unary operators. The -e primary, possessing similar functionality to that provided by the C shell, was added because it provides the only way for a shell script to find out if a file exists without trying to open the file. Since implementations are allowed to add additional file types, a portable script cannot use: test -b foo -o -c foo -o -d foo -o -f foo -o -p foo to find out if foo is an existing file. On historical BSD systems, the existence of a file could be determined by: test -f foo -o -d foo but there was no easy way to determine that an existing file was a regular file. An early proposal used the KornShell -a primary (with the same meaning), but this was changed to -e because there were concerns about the high probability of humans confusing the -a primary with the -a binary operator. The -a binary is also marked as obsolescent, because it leads to ambiguous expressions when there are more than 4 arguments. For these >4-argument expressions, POSIX says the result is unspecified. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/147728",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72471/"
]
} |
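A short interactive illustration of the ambiguity discussed above (bash, or any shell with a POSIX test builtin):

```
[ -e /etc/passwd ] && echo exists   # portable existence test
[ -a /etc/passwd ] && echo exists   # also "works" in bash, but -a doubles as a binary operator

[ ! -a . ] && echo true    # prints "true": parsed as the binary test  "!" -a "."
[ ! -e . ] && echo true    # prints nothing: a genuinely negated existence test on "."
```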
147,766 | I'm about to start a big rsync between two servers on my LAN. Is it better for me to push the files from one server to the other or pull them (backwards)? There is not anything that would make one work, and the other not work -- I am just wondering if there is a reason (maybe speed) to do one over the other. Can anyone give me a good reason, or is there no reason to do one over the other? | The way rsync algorithm works can be found from here . The algorithm identifies parts of the source file which are identical to some part of the destination file, and only sends those parts which cannot be matched in this way. Effectively, the algorithm computes a set of differences without having both files on the same machine. The algorithm works best when the files are similar, but will also function correctly and reasonably efficiently when the files are quite different. So it would not make a difference whether you are uploading or downloading as the algorithm works on checksums of the source and destination files. So, any file can be the source/destination. I find some more useful information from here . Some of the excerpts are, RSync is a remote file (or data) synchronization protocol. It allows you to synchronize files between two computers. By synchronize, I mean make sure that both copies of the file is the same. If there are any differences, RSync detects these differences, and sends across the differences, so the client or server can update their copy of the file, to make the copies the same. RSync is capable of synchronizing files without sending the whole file across the network. In the implementation I've done, only data corresponding to about 2% of the total file size is exchanged, in addition to any new data in the file, of course. New data has to be sent across the wire, byte for byte. Because of the way RSync works, it can also be used as an incremental download / upload protocol, allowing you to upload or download a file over many sessions. If the current upload or download fails, you can just resume it later. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147766",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61235/"
]
} |
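Since the algorithm is symmetric, push and pull are just a matter of where you run the command; host and path names below are placeholders.

```
# push: run on the machine that has the data
rsync -avz /srv/data/ backup-host:/srv/data/

# pull: run on the machine that should receive the data
rsync -avz source-host:/srv/data/ /srv/data/
```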
147,774 | The sudoers line is %game_servers ALL= NOPASSWD:/usr/bin/renice which allows group members renice any process run by any user without pw, but I'd like to allow group members renice their own processes only, to negative value. I couldn't spot the answer from man sudoers , from where I got the idea to change ALL=(root:root) to ALL= , which proved to be bad idea ALL=() is syntax error. | The way rsync algorithm works can be found from here . The algorithm identifies parts of the source file which are identical to some part of the destination file, and only sends those parts which cannot be matched in this way. Effectively, the algorithm computes a set of differences without having both files on the same machine. The algorithm works best when the files are similar, but will also function correctly and reasonably efficiently when the files are quite different. So it would not make a difference whether you are uploading or downloading as the algorithm works on checksums of the source and destination files. So, any file can be the source/destination. I find some more useful information from here . Some of the excerpts are, RSync is a remote file (or data) synchronization protocol. It allows you to synchronize files between two computers. By synchronize, I mean make sure that both copies of the file is the same. If there are any differences, RSync detects these differences, and sends across the differences, so the client or server can update their copy of the file, to make the copies the same. RSync is capable of synchronizing files without sending the whole file across the network. In the implementation I've done, only data corresponding to about 2% of the total file size is exchanged, in addition to any new data in the file, of course. New data has to be sent across the wire, byte for byte. Because of the way RSync works, it can also be used as an incremental download / upload protocol, allowing you to upload or download a file over many sessions. If the current upload or download fails, you can just resume it later. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147774",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11433/"
]
} |
147,795 | I am now under a directory with a very long path. To visit it more quickly in the future, I would like to create a link to it. I tried ln -s . ~/mylink but ~/mylink actually links to ~ . So can I expand the current directory into the absolute pathname, and then give it to ln ? | A symlink actually stores the path you give literally, as a string¹. That means your link ~/mylink contains " . " (one character). When you access the link, that path is interpreted relative to where the link is, rather than where you were when you made the link. Instead, you can store the actual path you want in the link: ln -s "$(pwd)" ~/mylink using command substitution to put the output of pwd (the working directory name) into your command line. ln sees the full path and stores it into your symlink, which will then point to the right place. ¹ More or less. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/147795",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
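A compact sketch of the difference, using $PWD (equivalent here to the answer's $(pwd) command substitution); the long directory name is made up.

```
cd /very/long/path/to/project
ln -s .      ~/mylink-relative    # stores the literal string "." -> resolves next to ~
ln -s "$PWD" ~/mylink-absolute    # stores the expanded absolute path
readlink ~/mylink-relative        # .
readlink ~/mylink-absolute        # /very/long/path/to/project
```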
147,816 | I am trying to transpose the content of a file into another. Input file Test.txt : HLRSN = 3IMSI = 404212109727229KIVALUE = A24AD11812232B47688ADBF15CE05CA9K4SNO = 1CARDTYPE = SIMALG = COMP128_3HLRSN = 3IMSI = 404212109727230KIVALUE = A24AD11812232B47688ADBF15CE05CB8K4SNO = 1CARDTYPE = SIMALG = COMP128_3HLRSN = 3IMSI = 404212109727231KIVALUE = A24AD11812232B47688ADBF15CE05CD6K4SNO = 1CARDTYPE = SIMALG = COMP128_3 Output needed in another text file: 3,404212109727229,A24AD11812232B47688ADBF15CE05CA9,1,SIM,COMP128_33,404212109727230,A24AD11812232B47688ADBF15CE05CB8,1,SIM,COMP128_33,404212109727231,A24AD11812232B47688ADBF15CE05CD6,1,SIM,COMP128_3 | Simply: awk -v RS= -v OFS=, '{print $3,$6,$9,$12,$15,$18}' An empty record separator ( RS= ) enables the paragraph mode whereby records are separated by sequences of empty lines. Inside a record, the default field separator applies (fields are separated by blanks) so in each record, the fields we are interested in are the 3rd, 6th, 9th... We change the output field separator to a comma character ( OFS=, ) and print the fields we're interested in. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147816",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79374/"
]
} |
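Written out against a file, the answer's command is simply the line below; the field positions 3, 6, 9, ... assume each record contains exactly the six KEY = VALUE lines shown in the question, separated by blank lines.

```
awk -v RS= -v OFS=, '{ print $3, $6, $9, $12, $15, $18 }' Test.txt > output.txt
```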
147,838 | This is part of a bash find loop, and I wondered which is more correct syntax and why? filename="$(echo "$i" | cut -c5-)";filename=`echo "$i" | cut -c5-`; Both function for the purpose of getting the file name. | Simply: awk -v RS= -v OFS=, '{print $3,$6,$9,$12,$15,$18}' An empty record separator ( RS= ) enables the paragraph mode whereby records are separated by sequences of empty lines. Inside a record, the default field separator applies (records are separated by blanks) so in each record, the fields we are interested in are the 3rd, 6th, 9th... We change the output field separator to a comma character ( OFS=, ) and print the fields we're interested in. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147838",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79386/"
]
} |
147,857 | supervisord is running on a CentOS server. If I do ps -e -o %mem,%cpu,cmd | grep supervisord | awk '{memory+=$1;cpu+=$2} END {print memory,cpu}' I get 0 0 just because supervisord is just an initialization daemon. It runs four child processes on my server: # pgrep -P $(pgrep supervisord) | wc -l4 How can I find the summarized CPU and memory usage of these child processes in a one-line command? | The code from happyraul's answer , pgrep -P $(pgrep supervisord) | xargs ps -o %mem,%cpu,cmd -p | awk '{memory+=$1;cpu+=$2} END {print memory,cpu}' will get only one child layer. If you want to search for all processes that were derived from a main pid, use this code: ps -o pid,ppid,pgid,comm,%cpu,%mem -u {user name} | {grep PID_PRINCIPAL} The PID of the main process is the PGID of its child processes. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147857",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79405/"
]
} |
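Unfolded, the one-liner quoted in the answer for summing over the direct children reads:

```
pgrep -P "$(pgrep supervisord)" |
  xargs ps -o %mem,%cpu,cmd -p |
  awk '{ mem += $1; cpu += $2 } END { print mem, cpu }'
```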
147,859 | I can understand the rationale of hiding files and folders in the /home/user directory to prevent users from messing around with things. However, I do not see how the same rationale can be applied to files in the /etc , /boot and /var directories which is the domain of administrators. My question is why are some files and folders hidden from administrators? Example: /boot/.vmlinuz-3.11.1-200.fc20.x86_64.hmac/etc/.pwd.lock/etc/selinux/targeted/.policy.sha512/etc/.java/etc/.java/.systemPrefs/etc/skel/.bash_profile/root/.ssh/root/.config/var/cache/yum/x86_64/20/.gpgkeyschecked.yum/var/spool/at/.SEQ/var/lib/pear/.filemap | You've misinterpreted the primary rationale for "hidden files". It is not to prevent users from messing around with things. Although it may have this consequence for very new users until they learn what a "dot file" is ( dot file and dot directory are perhaps more appropriate and specific terms than "hidden"). All by itself it doesn't prevent you from messing around with things -- that's what permissions are for. It does perhaps help to indicate to new users that this is something they should not mess around with until they understand what it is for. You could thus think of the dot prefix as a sort of file suffix -- notice they usually don't have one of those, although they can. It indicates this file is not of interest for general browsing, which is why ls and file browsers usually will not display it. However, since it's a prefix instead of a suffix, there is the added bonus, when you do display them ( ls -a ) in lexicographical order, to see them all listed together. The normal purpose of a file like this is for use by an application (e.g. configuration). You don't have to use them directly or even be aware of them. So, this "hiding" isn't so much intended to literally hide the file from the user as it is to reduce clutter and provide some organization conceptually. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/147859",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24509/"
]
} |
147,863 | I have read https://lintian.debian.org/ but do not understand what it means in simple words. What are examples of Debian policy rules that can be violated and detected by Lintian? | Lintian is a quality assurance tool that runs automated checks on various aspects of a package's conformity to the Debian policy . If a package doesn't respect one of the rules, the issue is reported in the Lintian Reports database. It helps packagers get metrics to build better packages. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147863",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23987/"
]
} |
147,885 | Occasionally, I need to check resources on several machines throughout our data-centers for consolidation recommendations and the like. I prefer htop, mostly because of the interactive feel and the display. Is there a way to default some settings to my setup for htop? For example, one thing I'd like to always have shown is the average CPU load. important note: Setting this on specific boxes isn't feasible - I'm looking for maybe a way to set this dynamically every time I ssh into the box. Is this possible at all? | htop has a setup screen, accessed via F2 , that allows you to customize the top part of the display, including adding or removing a "Load average" field and setting its style (text, bar, etc.). These seem to be auto-saved in $HOME/.config/htop/htoprc , which warns: # Beware! This file is rewritten by htop when settings are changed in the interface.# The parser is also very primitive, and not human-friendly. I.e., edit that at your own risk. However, you should be able to transfer it from one system to another (version differences might occasionally cause a bit of an issue). You could also set up a configuration, quit, and then copy the file, so that you could maintain a set of different configurations by swapping/symlinking whichever one with htoprc . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/147885",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79419/"
]
} |
147,904 | I was reading doc and it is still unclear for me, whether the following is possible to accomplish: service defined in ~/.config/systemd/user/task.service that depends on system sleep.target ( ~/.config/systemd/user/sleep.target.wants/task.service ). Now I expect task.service to start when I run $ systemctl suspend , however task.service is not started. I'm running debian, with systemd version 208, systemd --user configured more or less as described on the ArchWiki . I wonder whether my scenario could be implemented with systemd at all, or are --system and --user completely isolated by design so that --user unit may not be a dependency of a --system unit. In case it is possible, what might be the problem in my case? | From systemd/User - Archwiki systemd --user runs as a separate process from the systemd --system process. User units can not reference or depend on system units. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/147904",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79430/"
]
} |
147,938 | I have a process running that writes standard output and standard error to a log file /var/log/dragonturtle.log . Is there anyway to rotate the log file, and have the process continuing to write to the new log file without killing the process? What happens currently (given the logrotate config below): Process writes to /var/log/dragonturtle.log Logrotate moves /var/log/dragonturtle.log to /var/log/dragonturtle.log.1 Process continues to write to /var/log/dragonturtle.log.1 What I would like to happen: Process writes to /var/log/dragonturtle.log Logrotate copies /var/log/dragonturtle.log to /var/log/dragonturtle.log.1 Logrotate truncates /var/log/dragonturtle.log Process continues to write to /var/log/dragonturtle.log /etc/logrotate.d/dragonturtle : /var/log/dragonturtle.log { daily missingok rotate 7 compress delaycompress notifempty create 644 dragonturtle dragonturtle} | The logrotate option that does what you describe is copytruncate . Simply add this option to your existing logrotate config. Here is the excerpt from the logrotate.conf manual: copytruncate Truncate the original log file in place after creating a copy, instead of moving the old log file and optionally creating a new one, It can be used when some program can not be told to close its logfile and thus might continue writing (appending) to the previous log file forever. Note that there is a very small time slice between copying the file and truncating it, so some log- ging data might be lost. When this option is used, the create option will have no effect, as the old log file stays in place. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/147938",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12184/"
]
} |
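With copytruncate added, the question's configuration becomes roughly the following; it is written via a heredoc only so the whole file is visible, and the create line is dropped because, as the quoted manual excerpt notes, it has no effect together with copytruncate.

```
sudo tee /etc/logrotate.d/dragonturtle >/dev/null <<'EOF'
/var/log/dragonturtle.log {
    daily
    missingok
    rotate 7
    compress
    delaycompress
    notifempty
    copytruncate
}
EOF
sudo logrotate -d /etc/logrotate.d/dragonturtle   # -d = debug/dry-run to check the result
```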
147,957 | In awk, I can clear an array with a loop, making it an empty array, which is equivalent to deleting it. for (key in array) delete array[key]; Is there a simpler way? Can I completely delete an array, so that the variable name can be re-used for a scalar? | The syntax delete array is not in current versions of POSIX, but it is supported by virtually all existing implementations (including the original awk, GNU, mawk, and BusyBox). It will be added in a future version of POSIX (see defect 0000544 ). An alternate way to clear all array elements, which is both portable and standard-compliant, and which is an expression rather than a statement, is to rely on split deleting all existing elements: split("", array) All of these, including delete array , leave the variable marked as being an array variable in the original awk, in GNU awk and in mawk (but not in BusyBox awk). As far as I know, once a variable has been used as an array, there is no way to use it as a scalar variable. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/147957",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/885/"
]
} |
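Both idioms from the answer, shown in a tiny self-contained awk program (element counting is done with a loop so it runs on old awks too):

```
awk 'BEGIN {
    a["x"] = 1; a["y"] = 2
    delete a                            # widely supported; being added to POSIX
    n = 0; for (k in a) n++; print n    # 0

    a["x"] = 1; a["y"] = 2
    split("", a)                        # the portable, standard-compliant alternative
    n = 0; for (k in a) n++; print n    # 0
}'
```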
147,964 | On Fedora/RHEL/CentOS there is a line in /etc/sysconfig/network-scripts/ifcfg-x files which defines a UUID : UUID=30fcd648-ad1e-4428-as6f-951e8e4d16df NICs have MAC addresses by themselves, so what is the purpose of pointing UUIDs to NICs when there is already an identification number (MAC), and also, unlike filesystem UUIDs, they can't be stored on the device itself? | Ethernet cards might have (supposedly) unique MAC addresses, but what about virtual interfaces like aliases (e.g. eth0:0 ), bridges or VPNs? They need an ID too, so a UUID would be a good fit. By the way, since the question is about NetworkManager and NetworkManager deals with connections, there are scenarios where you can have multiple connections for a device. For example you have a laptop with an Ethernet card which you use both at home and at work. At home you're using only IPv4 like most home users, but at work you're using only IPv6 because the company managed to migrate to it. So you have two different connections which need different IDs, so the MAC address of the Ethernet card can't be used by itself. Therefore a UUID is again a good fit for an ID. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/147964",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73778/"
]
} |
148,015 | I want to get into Linux penetration testing, but still want to have the benefits of anonymous surfing. Is it possible to set up a good and 'secure' (Yes I know, TOR!=security, blabla) merge of Kali and Tails , starting with Kali ? And how? | It is possible. Use a Kali Linux LIVE USB key in forensic mode with a script to route everything through Tor , and two scripts to wipe RAM on halt and secure-delete (srm or shred) files on halt. Here's a complete writeup, you should really check it out : http://homeofbannedhacker.blogspot.fr/2015/07/merging-kali-linux-with-tails-improving.html?m=1 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148015",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77532/"
]
} |
148,035 | I always thought that the only benefit of using dash instead of bash was that dash was smaller, and therefore many instances of dash would start faster at boot time. But I have done some research, and found some people migrating all their scripts to dash in the hope they would run faster, and I also found this in the article DashAsBinSh in the Ubuntu Wiki: The major reason to switch the default shell was efficiency . bash is an excellent full-featured shell appropriate for interactive use; indeed, it is still the default login shell. However, it is rather large and slow to start up and operate by comparison with dash. Nowadays I've been using lots of bash scripts for many things on my system, and my problem is that I have a particular script that I'm running continuously 24/7, that spawns around 200 children, which together heat my computer 10°C more than in normal usage. It is a rather large script with lots of bashisms, so porting them to POSIX or some other shell would be very time consuming (and POSIX doesn't really matter for personal use), but it would be worth if I could reduce some of this CPU usage. I know there are also other things to consider, like calling an external binary like sed for a simple bashism like ${foo/bar} , or grep instead of =~ . TL;DR is really bash slower to start up and operate in comparison with dash? Are there other Unix shells which are more efficient than bash? | SHELL SEQ: Probably a useful means of bench-marking a shell's performance is to do a lot of very small, simple evaluations repetitively. It is important, I think, not just to loop, but to loop over input , because a shell needs to read <&0 . I thought this would complement the tests @cuonglm already posted because it demonstrates a single shell process's performance once invoked, as opposed to his which demonstrates how quickly a shell process loads when invoked. In this way, between us, we cover both sides of the coin. Here's a function to facilitate the demo: sh_bench() ( #don't copy+paste comments o=-c sh=$(command -v "$1") ; shift #get shell $PATH; toss $1 [ -z "${sh##*busybox}" ] && o='ash -c' #cause its weird set -- "$sh" $o "'$(cat <&3)'" -- "$@" #$@ = invoke $shell time env - "$sh" $o "while echo; do echo; done|$*" #time (env - sh|sh) AC/DC) 3<<-\SCRIPT #Everything from here down is run by the different shells i="${2:-1}" l="${1:-100}" d="${3:- }"; set -- "\$((n=\$n\${n:++\$i}))\$d" #prep loop; prep eval set -- $1$1$1$1$1$1$1$1$1$1 #yupwhile read m #iterate on input do [ $(($i*50+${n:=-$i})) -gt "$(($l-$i))" ] || #eval ok? eval echo -n \""$1$1$1$1$1"\" #yay! [ $((n=$i+$n)) -gt "$(($l-$i))" ] && #end game? echo "$n" && exit #and EXIT echo -n "$n$d" #damn - maybe next time done #done #END SCRIPT #end heredoc It either increments a variable once per newline read or, as a slight-optimization, if it can, it increments 50 times per newline read. Every time the variable is incremented it is printed to stdout . It behaves a lot like a sort of seq cross nl . And just to make it very clear what it does - here's some truncated set -x; output after inserting it just before time in the function above: time env - /usr/bin/busybox ash -c ' while echo; do echo; done | /usr/bin/busybox ash -c '"'$( cat <&3 )'"' -- 20 5 busybox' So each shell is first called like: env - $shell -c "while echo; do echo; done |..." ...to generate the input that it will need to loop over when it reads in 3<<\SCRIPT - or when cat does, anyway. 
And on the other side of that |pipe it calls itself again like: "...| $shell -c '$(cat <<\SCRIPT)' -- $args" So aside from the initial call to env (because cat is actually called in the previous line) ; no other processes are invoked from the time it is called until it exits. At least, I hope that's true. Before the numbers... I should make some notes on portability. posh doesn't like $((n=n+1)) and insists on $((n=$n+1)) mksh doesn't have a printf builtin in most cases. Earlier tests had it lagging a great deal - it was invoking /usr/bin/printf for every run. Hence the echo -n above. maybe more as I remember it... Anyway, to the numbers: for sh in dash busybox posh ksh mksh zsh bashdo sh_bench $sh 20 5 $sh 2>/dev/null sh_bench $sh 500000 | wc -lecho ; done That'll get 'em all in one go... 0dash5dash10dash15dash20real 0m0.909suser 0m0.897ssys 0m0.070s5000010busybox5busybox10busybox15busybox20real 0m1.809suser 0m1.787ssys 0m0.107s5000010posh5posh10posh15posh20real 0m2.010suser 0m2.060ssys 0m0.067s5000010ksh5ksh10ksh15ksh20real 0m2.019suser 0m1.970ssys 0m0.047s5000010mksh5mksh10mksh15mksh20real 0m2.287suser 0m2.340ssys 0m0.073s5000010zsh5zsh10zsh15zsh20real 0m2.648suser 0m2.223ssys 0m0.423s5000010bash5bash10bash15bash20real 0m3.966suser 0m3.907ssys 0m0.213s500001 ARBITRARY = MAYBE OK? Still, this is a rather arbitrary test, but it does test reading input, arithmetic evaluation, and variable expansion. Maybe not comprehensive, but possibly near to there. EDIT by Teresa e Junior : @mikeserv and I have done many other tests (see our chat for details), and we found the results could be summarized like this: If you need speed, go definitely with dash , it is much faster than any other shell and about 4x faster than bash . While busybox 's shell can be much slower than dash , in some tests it could be faster, because it has many of its own userland utilities, like grep , sed , sort , etc., which don't have as many features as the commonly used GNU utilities, but can get the work done as much. If speed is not everything you care about, ksh (or ksh93 ) can be considered the best compromise between speed and features. It's speed compares to the smaller mksh , which is way faster than bash , and it has also some unique features, like floating point arithmetic . Although bash is famous for its simplicity, stability, and functionality, it was the slowest of all shells in the majority of our tests, and by a large margin. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/148035",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9491/"
]
} |
148,041 | Originally posted to AskUbuntu.com ... AskUbuntu has adopted a policy of closing questions about EOL (End Of Life) versions. There's a vocal contingent to remove them as well. To prevent possible loss of this popular question (342335 views to date), am placing a revised version here. --- docsalvager The "classic" system... Puppy Linux 5.2.8 (Lucid) based on Ubuntu 10.04 (Lucid Lynx) GRUB 2 boot loader GRUB 2 puts a number of *.mod files (kernel modules) in /boot/grub . Deleting these files (thinking they were misplaced sound files) resulted in failure on reboot and the prompt grub rescue> . How to recover in this situation? | This answer is for others out there that DocSalvager's answer doesn't work for. I followed DocSalvager's use of ls to find the correct hard drive partition. In my case it was (hd0,msdos5) . Then I executed the following commands to get back to the normal grub boot loader screen. grub rescue> set boot=(hd0,msdos5)grub rescue> set prefix=(hd0,msdos5)/boot/grubgrub rescue> insmod normal grub rescue> normal After booting into Ubuntu I repaired the grub boot loader with the following commands from the terminal. sudo grub-install /dev/sda Please reference this source for a visual walk through of this process. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/148041",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27437/"
]
} |
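The recovery steps from the answer, laid out as they are typed; (hd0,msdos5) is the answerer's partition, and yours comes from the ls output at the rescue prompt. The final update-grub line is a common extra step on Debian/Ubuntu-style systems, not part of the original answer.

```
# at the "grub rescue>" prompt
ls                                  # find the partition that holds /boot/grub
set boot=(hd0,msdos5)
set prefix=(hd0,msdos5)/boot/grub
insmod normal
normal

# after booting into the installed system
sudo grub-install /dev/sda
sudo update-grub
```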
148,043 | It is said that on Unix and Linux in general, you should avoid having spaces in the name of a file (ordinary file, dir, link, device file, ...). But I do that all the time. For a filename with a space inside: In Nautilus, the space character is shown as a space. In a Bash terminal, I either use \ to represent a space, or enclose the filename within a pair of double quotes. In some applications' files (Nautilus, not sure if the OS will also do so), the filename is written with the space replaced with %20 . Is a space really not allowed in a filename? How do you use or deal with a space in a filename correctly? | Spaces, and indeed every character except / and NUL, are allowed in filenames. The recommendation to not use spaces in filenames comes from the danger that they might be misinterpreted by software that poorly supports them. Arguably, such software is buggy. But also arguably, programming languages like shell scripting make it all too easy to write software that breaks when presented with filenames with spaces in them, and these bugs tend to slip through because shell scripts are not often tested by their developers using filenames with spaces in them. Spaces replaced with %20 are not often seen in filenames. That's mostly used for (web) URLs. Though it's true that %-encoding from URLs sometimes makes its way into filenames, often by accident. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/148043",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
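The practical upshot of the answer — how to handle names with spaces safely in a shell — can be sketched like this:

```
touch "a file with spaces.txt"      # quoting, or equivalently:
touch a\ file\ with\ spaces.txt     # backslash-escaping each space

for f in *.txt; do                  # globbing hands over whole names, spaces included
    printf 'got: %s\n' "$f"         # always quote "$f" when you use it
done

find . -name '*.txt' -print0 | xargs -0 ls -l   # NUL-delimited, space-safe pipelines
```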
148,062 | I have a raid array defined in /etc/mdadm.conf like this: ARRAY /dev/md0 devices=/dev/sdb6,/dev/sdc6ARRAY /dev/md1 devices=/dev/sdb7,/dev/sdc7 but when I try to mount them, I get this: # mount /dev/md0 /mnt/media/mount: special device /dev/md0 does not exist# mount /dev/md1 /mnt/datamount: special device /dev/md1 does not exist /proc/mdstat meanwhile says: # cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md125 : inactive dm-6[0](S) 238340224 blocksmd126 : inactive dm-5[0](S) 244139648 blocksmd127 : inactive dm-3[0](S) 390628416 blocksunused devices: <none> So I tried this: # mount /dev/md126 /mnt/datamount: /dev/md126: can't read superblock# mount /dev/md125 /mnt/mediamount: /dev/md125: can't read superblock The fs on the partions is ext3 and when I specify the fs with -t , I get mount: wrong fs type, bad option, bad superblock on /dev/md126, missing codepage or helper program, or other error (could this be the IDE device where you in fact use ide-scsi so that sr0 or sda or so is needed?) In some cases useful info is found in syslog - try dmesg | tail or so How can I get my raid arrays mounted? It's worked before. EDIT 1 # mdadm --detail --scanmdadm: cannot open /dev/md/127_0: No such file or directorymdadm: cannot open /dev/md/0_0: No such file or directorymdadm: cannot open /dev/md/1_0: No such file or directory EDIT 2 # dmsetup lsisw_cabciecjfi_Raid7 (252:6)isw_cabciecjfi_Raid6 (252:5)isw_cabciecjfi_Raid5 (252:4)isw_cabciecjfi_Raid3 (252:3)isw_cabciecjfi_Raid2 (252:2)isw_cabciecjfi_Raid1 (252:1)isw_cabciecjfi_Raid (252:0)# dmsetup tableisw_cabciecjfi_Raid7: 0 476680617 linear 252:0 1464854958isw_cabciecjfi_Raid6: 0 488279484 linear 252:0 976575411isw_cabciecjfi_Raid5: 0 11968362 linear 252:0 1941535638isw_cabciecjfi_Raid3: 0 781257015 linear 252:0 195318270isw_cabciecjfi_Raid2: 0 976928715 linear 252:0 976575285isw_cabciecjfi_Raid1: 0 195318207 linear 252:0 63isw_cabciecjfi_Raid: 0 1953519616 mirror core 2 131072 nosync 2 8:32 0 8:16 0 1 handle_errors EDIT 3 # file -s -L /dev/mapper/*/dev/mapper/control: ERROR: cannot read `/dev/mapper/control' (Invalid argument)/dev/mapper/isw_cabciecjfi_Raid: x86 boot sector/dev/mapper/isw_cabciecjfi_Raid1: Linux rev 1.0 ext4 filesystem data, UUID=a8d48d53-fd68-40d8-8dd5-3cecabad6e7a (needs journal recovery) (extents) (large files) (huge files)/dev/mapper/isw_cabciecjfi_Raid3: Linux rev 1.0 ext4 filesystem data, UUID=3cb24366-b9c8-4e68-ad7b-22449668f047 (extents) (large files) (huge files)/dev/mapper/isw_cabciecjfi_Raid5: Linux/i386 swap file (new style), version 1 (4K pages), size 1496044 pages, no label, UUID=f07e031f-368a-443e-a21c-77fa27adf795/dev/mapper/isw_cabciecjfi_Raid6: Linux rev 1.0 ext3 filesystem data, UUID=0f0b401a-f238-4b20-9b2a-79cba56dd9d0 (large files)/dev/mapper/isw_cabciecjfi_Raid7: Linux rev 1.0 ext3 filesystem data, UUID=b2d66029-eeb9-4e4a-952c-0a3bd0696159 (large files)# Also when I have one additional disk /dev/mapper/isw_cabciecjfi_Raid in my system - I tried to mount a partition but got: # mount /dev/mapper/isw_cabciecjfi_Raid6 /mnt/mediamount: unknown filesystem type 'linux_raid_member' I rebooted and confirmed that RAID is turned of in my BIOS . 
I tried to force a mount which seems to allow me to mount but the content of the partition is inaccessible sio it still doesn't work as expected:# mount -ft ext3 /dev/mapper/isw_cabciecjfi_Raid6 /mnt/media# ls -l /mnt/media/total 0# mount -ft ext3 /dev/mapper/isw_cabciecjfi_Raid /mnt/data# ls -l /mnt/datatotal 0 EDIT 4 After executing suggested commands, I only get: $ sudo mdadm --examine /dev/sd[bc]6 /dev/sd[bc]7mdadm: cannot open /dev/sd[bc]6: No such file or directorymdadm: cannot open /dev/sd[bc]7: No such file or directory EDIT 5 I got /dev/md127 mounted now but /dev/md0 and /dev/md1 are still not accessible: # mdadm --examine /dev/sd[bc]6 /dev/sd[bc]7mdadm: cannot open /dev/sd[bc]6: No such file or directorymdadm: cannot open /dev/sd[bc]7: No such file or directoryroot@regDesktopHome:~# mdadm --stop /dev/md12[567]mdadm: stopped /dev/md127root@regDesktopHome:~# mdadm --assemble --scanmdadm: /dev/md127 has been started with 1 drive (out of 2).root@regDesktopHome:~# cat /proc/mdstatPersonalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md127 : active raid1 dm-3[0] 390628416 blocks [2/1] [U_]md1 : inactive dm-6[0](S) 238340224 blocksmd0 : inactive dm-5[0](S) 244139648 blocksunused devices: <none>root@regDesktopHome:~# ls -l /dev/mappertotal 0crw------- 1 root root 10, 236 Aug 13 22:43 controlbrw-rw---- 1 root disk 252, 0 Aug 13 22:43 isw_cabciecjfi_Raidbrw------- 1 root root 252, 1 Aug 13 22:43 isw_cabciecjfi_Raid1brw------- 1 root root 252, 2 Aug 13 22:43 isw_cabciecjfi_Raid2brw------- 1 root root 252, 3 Aug 13 22:43 isw_cabciecjfi_Raid3brw------- 1 root root 252, 4 Aug 13 22:43 isw_cabciecjfi_Raid5brw------- 1 root root 252, 5 Aug 13 22:43 isw_cabciecjfi_Raid6brw------- 1 root root 252, 6 Aug 13 22:43 isw_cabciecjfi_Raid7root@regDesktopHome:~# mdadm --examinemdadm: No devices to examineroot@regDesktopHome:~# cat /proc/mdstatPersonalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md127 : active raid1 dm-3[0] 390628416 blocks [2/1] [U_]md1 : inactive dm-6[0](S) 238340224 blocksmd0 : inactive dm-5[0](S) 244139648 blocksunused devices: <none>root@regDesktopHome:~# mdadm --examine /dev/dm-[356]/dev/dm-3: Magic : a92b4efc Version : 0.90.00 UUID : 124cd4a5:2965955f:cd707cc0:bc3f8165 Creation Time : Tue Sep 1 18:50:36 2009 Raid Level : raid1 Used Dev Size : 390628416 (372.53 GiB 400.00 GB) Array Size : 390628416 (372.53 GiB 400.00 GB) Raid Devices : 2 Total Devices : 2Preferred Minor : 127 Update Time : Sat May 31 18:52:12 2014 State : active Active Devices : 2Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Checksum : 23fe942e - correct Events : 167 Number Major Minor RaidDevice Statethis 0 8 35 0 active sync 0 0 8 35 0 active sync 1 1 8 19 1 active sync/dev/dm-5: Magic : a92b4efc Version : 0.90.00 UUID : 91e560f1:4e51d8eb:cd707cc0:bc3f8165 Creation Time : Tue Sep 1 19:15:33 2009 Raid Level : raid1 Used Dev Size : 244139648 (232.83 GiB 250.00 GB) Array Size : 244139648 (232.83 GiB 250.00 GB) Raid Devices : 2 Total Devices : 2Preferred Minor : 0 Update Time : Fri May 9 21:48:44 2014 State : clean Active Devices : 2Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Checksum : bfad9d61 - correct Events : 75007 Number Major Minor RaidDevice Statethis 0 8 38 0 active sync 0 0 8 38 0 active sync 1 1 8 22 1 active sync/dev/dm-6: Magic : a92b4efc Version : 0.90.00 UUID : 0abe503f:401d8d09:cd707cc0:bc3f8165 Creation Time : Tue Sep 8 21:19:15 2009 Raid Level : raid1 Used Dev Size : 238340224 (227.30 GiB 244.06 GB) Array Size : 
238340224 (227.30 GiB 244.06 GB) Raid Devices : 2 Total Devices : 2Preferred Minor : 1 Update Time : Fri May 9 21:48:44 2014 State : clean Active Devices : 2Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Checksum : 2a7a125f - correct Events : 3973383 Number Major Minor RaidDevice Statethis 0 8 39 0 active sync 0 0 8 39 0 active sync 1 1 8 23 1 active syncroot@regDesktopHome:~# EDIT 6 I stopped them with mdadm --stop /dev/md[01] and confirmed that /proc/mdstat wouldn't show them anymore, then executed mdadm --asseble --scan and got # mdadm --assemble --scanmdadm: /dev/md0 has been started with 1 drives.mdadm: /dev/md1 has been started with 2 drives. but if I try to mount either of the arrays, I still get: root@regDesktopHome:~# mount /dev/md1 /mnt/datamount: wrong fs type, bad option, bad superblock on /dev/md1, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so In the meantime, I've figured out that my superblocks seem to be damaged (PS I have confirmed with tune2fs and fdisk that I'm dealing with an ext3 partition): root@regDesktopHome:~# e2fsck /dev/md1e2fsck 1.42.9 (4-Feb-2014)The filesystem size (according to the superblock) is 59585077 blocksThe physical size of the device is 59585056 blocksEither the superblock or the partition table is likely to be corrupt!Abort<y>? yesroot@regDesktopHome:~# e2fsck /dev/md0e2fsck 1.42.9 (4-Feb-2014)The filesystem size (according to the superblock) is 61034935 blocksThe physical size of the device is 61034912 blocksEither the superblock or the partition table is likely to be corrupt!Abort<y>? yes But both partitions have some super blocks backed up: root@regDesktopHome:~# mke2fs -n /dev/md0 mke2fs 1.42.9 (4-Feb-2014)Filesystem label= OS type: Linux Block size=4096 (log=2) Fragmentsize=4096 (log=2) Stride=0 blocks, Stripe width=0 blocks 15261696inodes, 61034912 blocks 3051745 blocks (5.00%) reserved for the superuser First data block=0 Maximum filesystem blocks=4294967296 1863block groups 32768 blocks per group, 32768 fragments per group 8192inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 265408, 4096000, 7962624, 11239424, 20480000, 23887872root@regDesktopHome:~# mke2fs -n /dev/md1 mke2fs 1.42.9 (4-Feb-2014)Filesystem label= OS type: Linux Block size=4096 (log=2) Fragmentsize=4096 (log=2) Stride=0 blocks, Stripe width=0 blocks 14901248inodes, 59585056 blocks 2979252 blocks (5.00%) reserved for the superuser First data block=0 Maximum filesystem blocks=4294967296 1819block groups 32768 blocks per group, 32768 fragments per group 8192inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872 What do you think, should I try to restore the backup on both arrays to 23887872 ? I think I could do that with e2fsck -b 23887872 /dev/md[01] do you recommend giving this a shot? I don't necessarily want to try around with something I don't exactly know abd that might destroy the data on my disks... man e2fsck doesn't necessarily say it's dangerous but there might be another, saver way to fix the superblock...? AS A LAST UPDATE TO THE COMMUNITY , I used resize2fs to get my superblocks back in order and my drives mounted again! ( resize2fs /dev/md0 & resize2fs /dev/md1 got my back up!)Long story but it finally worked out! And I learned a lot in terms of mdadm along the way! 
Thank you @IanMacintosh | Your arrays are not properly started. Remove them from your running config with this: mdadm --stop /dev/md12[567] Now try using the autoscan and assemble feature. mdadm --assemble --scan Assuming that works, save your config (assuming Debian derivative) with (and this will overwrite your config so we make a backup first): mv /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.old/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf You should be fixed for a reboot now, and it will auto assemble and start every time. If not, give the output of: mdadm --examine /dev/sd[bc]6 /dev/sd[bc]7 It'll be a bit long but shows everything you need to know about the arrays and the member disks of the arrays, their state, etc. Just as an aside, it normally works better if you don't create multiple raid arrays on a disk (ie, /dev/sd[bc]6 and /dev/sd[bc]7) separately. Rather, create only one array, and you can then create partitions on your array if you must. LVM is a much better way to partition your array most of the time. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/148062",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46433/"
]
} |
148,089 | We can use arithmetic operations inside a shell script function:
function mess
{
  if (( "$1" > 0 )) ; then
    total=$1
  else
    total=100
  fi
  tail -$total /var/adm/messages | more
}
I try to do these arithmetic operations on the function args:
#!/bin/bash
byte="Bytes"
kilo="KB"
mega="MB"
giga="GB"
function bytesToUnites() {
if (( "$1" < 1000 ))
then
  echo $1" "$byte
elif (( $1 < 1000000 ))
then
  let $1/=1000
  echo $1" "$kilo
fi
}
bytesToUnites 28888
But I get this error:
line 12: let: 28888/=1000: attempted assignment to non-variable (error token is "/=1000")
28888 KB
How can I fix this? | The problem is that you are attempting to perform parameter substitution by preceding the first positional parameter name with $ . You can accomplish what you want like so:
...
elif [ $1 -lt 1000000 ]
then
  arg="$1"
  let arg/=1000
  echo $arg" "$kilo
fi
As far as I can tell, you can't use the positional parameter directly by saying: let 1/=1000 because this would be a syntax error. Incidentally, from your error message, I can see that $1 was set to 28888 . You should note that Bash doesn't do floating point arithmetic. You will find arg set to 28 (the quotient of integer division of 28888 by 1000) instead of 28.888. Please see this wonderful Q&A on how to do floating point arithmetic in scripts. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148089",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79550/"
]
} |
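A minimal sketch of the fix described in the answer above, copying the positional parameter into a named variable before doing arithmetic on it (the unit strings are assumptions for illustration):
#!/bin/bash
bytesToUnits() {
  local n=$1                      # copy $1 into a variable we can assign to
  if (( n < 1000 )); then
    echo "$n Bytes"
  elif (( n < 1000000 )); then
    echo "$(( n / 1000 )) KB"     # integer division; bash has no floats
  else
    echo "$(( n / 1000000 )) MB"
  fi
}
bytesToUnits 28888                # prints "28 KB"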
148,105 | I installed CentOS 5.5 on my VMWare 8 recently and I am trying to add a new user on the system. I am unable to add the user unless I use su - option. I believe it has to do something with path not set properly. I updated the path and here is what it looks like /usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/bin:/usr/bin:/bin:/usr/X11R6/bin:/home/uone/bin:/sbin I believe the command is in /sbin dir which is already a part of path. Can anyone suggest me what else I might be missing? | Try adding /usr/sbin to your path. For example to add it to the end of the path you would do something like this: export PATH=$PATH:/the/file/path | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/148105",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73029/"
]
} |
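A hedged example of making the PATH change from the answer above permanent for one user; the startup file name varies by distribution and shell, ~/.bash_profile is an assumption here:
# append once to the login shell startup file
echo 'export PATH=$PATH:/usr/sbin:/sbin' >> ~/.bash_profile
# reload it in the current session
. ~/.bash_profile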
148,109 | Say I'm writing a .bashrc file to give me some useful information in my login terminals and I'm telling it to run the cal command (a nice one). How would I go about shifting the calendar produced to the right to match the formatting of the rest of my .bashrc "welcome message"? | cal | sed 's/^/ /' Explanation cal | : pipe the output of cal to… sed 's/^/ /' sed, which will look for the start of lines ^ , replacing with spaces. You can change the number of spaces here to match the required formatting. Edit To preserve the highlighting of the current day from cal , you need to tell it to output "color" (highlighting) to the pipe. From man cal --color [when] Colorize output. The when can be never, auto, or always. Never will turn off coloriz‐ ing in all situations. Auto is default, and it will make colorizing to be in use if output is done to terminal. Always will allow colors to be outputed when cal outputs to pipe, or is called from a script. N.B. there seems to be a typo in the manual; I needed a = for it to work. Hence, the final command is cal --color=always | sed 's/^/ /' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/148109",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65488/"
]
} |
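For the original .bashrc use case, a small sketch of a welcome block built on the command above (the four-space indent is an arbitrary choice):
# ~/.bashrc welcome message (only for interactive shells)
if [[ $- == *i* ]]; then
    echo "    Today's calendar:"
    cal --color=always | sed 's/^/    /'
fi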
148,114 | I have a tab delimited file with 10 columns and in one of the columns (with about 40 million rows), I would like to add a word before the existing entry in each row (same word in each row!) and a ; after the entry. e.g. two rows before 1 2 3 4 5 6 7 8 text still more text in this column 101 2 3 4 5 6 7 8 text2 still more text in this column 10 to 1 2 3 4 5 6 7 8 test=text; still more text in this column 101 2 3 4 5 6 7 8 test=text2; still more text in this column 10 At the end of the day it's the basic "concatenate" function in excel, but I can't use excel for such large files and also need to move to linux anyways. I looked into concatenate questions here on the forum but I only found topics dealing with merging two strings, eg. foo="Hello"foo="$foo World"echo $foo but not using variables. | cal | sed 's/^/ /' Explanation cal | : pipe the output of cal to… sed 's/^/ /' sed, which will look for the start of lines ^ , replacing with spaces. You can change the number of spaces here to match the required formatting. Edit To preserve the highlighting of the current day from cal , you need to tell it to output "color" (highlighting) to the pipe. From man cal --color [when] Colorize output. The when can be never, auto, or always. Never will turn off coloriz‐ ing in all situations. Auto is default, and it will make colorizing to be in use if output is done to terminal. Always will allow colors to be outputed when cal outputs to pipe, or is called from a script. N.B. there seems to be a typo in the manual; I needed a = for it to work. Hence, the final command is cal --color=always | sed 's/^/ /' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/148114",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79572/"
]
} |
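For the column-editing task described in the question above, a minimal awk sketch, assuming the text to wrap sits in field 9 of a tab-separated file (adjust the field number and the added word to your data):
awk 'BEGIN { FS = OFS = "\t" } { $9 = "test=" $9 ";"; print }' input.tsv > output.tsv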
148,133 | I had a problem running a script from crontab. After some research I understood the problem was because PATH parameter doesn't include /sbin. I looked what it does include in /etc/crontab: PATH=/sbin:/bin:/usr/sbin:/usr/bin As a test - simple cron job to print the PATH variable: * * * * * echo $PATH &> /root/TMP.log the output is: cat /root/TMP.log/usr/bin:/bin I don't understand this behaviour... How do I set the PATH variable..? Or better - how to add paths to it? | While they are similar, a user crontab (edited using crontab -e) is different from and keeps a separate path from the system crontab (edited by editing /etc/crontab). The system crontab has 7 fields, inserting a username before the command. The user crontab, on the other hand, has only 6 fields, going directly into the command immediately after the time fields. Likewise, the PATH in the system crontab normally includes the /sbin directories, whereas the PATH in the user crontab does not. If you want to set PATH for the user crontab, you need to define the PATH variable in the user crontab. A simple workaround for adding your regular PATH in shell commands in cron is to have the cronjob source your profile by running bash in a login shell. for example instead of * * * * * some command You can instead run * * * * * bash -lc some command That way if your profile sets the PATH or other environment variables to something special, it also gets included in your command. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/148133",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67047/"
]
} |
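A short sketch of the two options from the answer above, shown as user crontab lines; the paths and the script name are placeholders:
# Option 1: define PATH at the top of the user crontab
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
* * * * * /home/user/myscript.sh

# Option 2: run the job through a login shell so the profile's PATH applies
* * * * * bash -lc '/home/user/myscript.sh'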
148,144 | Installing Nginx on Scientific Linux according this documentation fails: [vagrant@localhost ~]$ sudo su -c 'rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm'Retrieving http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpmwarning: /var/tmp/rpm-tmp.gdSOR9: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEYPreparing... ########################################### [100%] 1:epel-release ########################################### [100%][vagrant@localhost ~]$ sudo yum install nginxLoaded plugins: securityError: Cannot retrieve repository metadata (repomd.xml) for repository: epel. Please verify its path and try again[vagrant@localhost ~]$ Version information [vagrant@localhost ~]$ uname -aLinux localhost.localdomain 2.6.32-431.el6.x86_64 #1 SMP Thu Nov 21 13:35:52 CST 2013 x86_64 x86_64 x86_64 GNU/Linux[vagrant@localhost ~]$ cat /etc/*{release,version}Scientific Linux release 6.5 (Carbon)Scientific Linux release 6.5 (Carbon)cat: /etc/*version: No such file or directory[vagrant@localhost ~]$ Note: sudo yum update -y was issued before starting the installation of nginx Installation of other packages disabled [vagrant@localhost ~]$ sudo yum install vim -yLoaded plugins: securityError: Cannot retrieve repository metadata (repomd.xml) for repository: epel. Please verify its path and try again[vagrant@localhost ~]$ URLGRABBER Debugger Log 2014-08-03 14:22:44,437 attempt 1/10: https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=x86_64INFO:urlgrabber:attempt 1/10: https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=x86_642014-08-03 14:22:44,438 opening local file "/var/cache/yum/x86_64/6.5/epel/metalink.xml.tmp" with mode wbINFO:urlgrabber:opening local file "/var/cache/yum/x86_64/6.5/epel/metalink.xml.tmp" with mode wb* About to connect() to mirrors.fedoraproject.org port 443 (#0)* Trying IP... * connected* Connected to mirrors.fedoraproject.org (IP) port 443 (#0)* Initializing NSS with certpath: sql:/etc/pki/nssdb* NSS error -8018* Closing connection #0* Problem with the SSL CA cert (path? access rights?)2014-08-03 14:22:50,071 exception: [Errno 14] PYCURL ERROR 77 - "Problem with the SSL CA cert (path? access rights?)"INFO:urlgrabber:exception: [Errno 14] PYCURL ERROR 77 - "Problem with the SSL CA cert (path? access rights?)"2014-08-03 14:22:50,072 retrycode (14) not in list [-1, 2, 4, 5, 6, 7], re-raisingINFO:urlgrabber:retrycode (14) not in list [-1, 2, 4, 5, 6, 7], re-raisingError: Cannot retrieve repository metadata (repomd.xml) for repository: epel. Please verify its path and try again Output yum update before and after attempt to install nginx [vagrant@localhost ~]$ sudo yum update -yLoaded plugins: securityError: Cannot retrieve repository metadata (repomd.xml) for repository: epel. Please verify its path and try again[vagrant@localhost ~]$ yum --disablerepo="epel" update [vagrant@localhost ~]$ sudo yum --disablerepo="epel" updateLoaded plugins: securitySetting up Update ProcessNo Packages marked for Update | If the following fails: yum check-update but: yum --disablerepo="epel" check-update works, then run: URLGRABBER_DEBUG=1 yum check-update 2> debug.log and check debug.log for: PYCURL ERROR 77 - "Problem with the SSL CA cert (path? 
access rights?)" If this message is found, then try:
yum --disablerepo="epel" reinstall ca-certificates
If that fails to resolve the issue, then you may need to update your ca-certificates:
yum --disablerepo="epel" update ca-certificates
If that fails to resolve the issue, then back up your current CA certificate bundle:
cp /etc/pki/tls/certs/ca-bundle.crt /root/
and run:
curl http://curl.haxx.se/ca/cacert.pem -o /etc/pki/tls/certs/ca-bundle.crt
Explanation
The log shows an error with your system's SSL certificates. The CA certificate bundle on your system might have somehow become corrupt, and the yum --disablerepo="epel" reinstall ca-certificates command above simply overwrites yours with a fresh version. This is unlikely to be the answer though, as all other repos are working - if there were major SSL issues, then all repos would fail. The curl command above replaces your system's CA certificate bundle with a newer version. The CA certificate bundle contains all the root CA certificates that your system trusts. In this instance the EPEL repo has new SSL certificates (signed by a new root CA) that your system doesn't trust. The CentOS repos continue to work with their slightly older certificates. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/148144",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65367/"
]
} |
148,181 | I know that I can reverse any sentence: echo "a,b,c,d" | rev but what if I want to reverse only the first part of a sentence, I have tried this: echo "a,b,c,d Access" | rev and I get this: sseccA d,c,b,a , and what I really want is: d,c,b,a Access How can I do this? | One way is to use read to break the line into the first word and the rest, then call rev on only the first word $ echo "a,b,c,d Access" | { read -r first rest; printf '%s %s\n' "$(rev <<< "$first")" "$rest"; }d,c,b,a Access | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148181",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79550/"
]
} |
148,187 | I am curious to know what trick you use to remember options for various commands? Looking up man pages all the time is time consuming and not so cool! | The trick is simple: You just don't. It's a waste of time and just not necessary. Memorizing command options isn't a particularly useful skill. It's much more important to understand how stuff works in general and to have a vague idea which tools exist in the first place and what you use them for. A very important skill here is to know how to find out about stuff you don't know yet. Man pages are time consuming? Not so. It's not like you have to read them - at least, not every time - there is a search function. So if I don't remember which cryptic option was the one for hdparm to disable idle timer on some WD disks, I do man hdparm and /idle3 and hey, it was -J . Looking stuff like that up is so quick I don't even remember doing it afterwards. Imagine someone actually memorizing all of the hdparm options. What a waste of time. It's fine if you just happen to remember options because you use them frequently. That happens automatically without even thinking about it. But actually consciously spending time on memorizing them... what's that supposed to be good for? A paper test? | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/148187",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73029/"
]
} |
148,197 | I have a weird problem where my laptop will wake when it's closed, generating a lot of heat and causing much frustration. Is there a way that I can tell if the laptop's lid is closed so that I can automatically suspend the computer (via a cron script) if it wakes itself while the lid is closed? Closing the lid does currently suspend the machine and opening it does wake it, so that works properly. It's a 2011 MacBook Pro running Ubuntu 12.04. | For my specific case, I can get the status of the lid with $ cat /proc/acpi/button/lid/LID0/statestate: open I can then just grep for open or closed to see if it's open or closed. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/148197",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5614/"
]
} |
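A hedged sketch of the cron-driven check the question asked about; the /proc path and the suspend command both vary by system, and pm-suspend is an assumption here:
#!/bin/bash
# suspend if the machine woke up while the lid is still closed
if grep -q closed /proc/acpi/button/lid/LID0/state; then
    pm-suspend     # or: systemctl suspend
fi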
148,224 | I'm on RHEL 6.5 and Apache 2.2.15. When I restart httpd , I can not start that httpd anymore. It shows the following in /var/log/httpd/error_log :
[Fri Aug 01 18:31:48 2014] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Fri Aug 01 18:32:35 2014] [notice] SELinux policy enabled; httpd running as context unconfined_u:system_r:httpd_t:s0
[Fri Aug 01 18:32:35 2014] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Fri Aug 01 18:42:46 2014] [notice] SELinux policy enabled; httpd running as context system_u:system_r:httpd_t:s0
[Fri Aug 01 18:42:46 2014] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Fri Aug 01 18:43:15 2014] [notice] SELinux policy enabled; httpd running as context unconfined_u:system_r:httpd_t:s0
[Fri Aug 01 18:43:15 2014] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Fri Aug 01 18:43:59 2014] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Fri Aug 01 18:44:12 2014] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Fri Aug 01 18:45:03 2014] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
Actually I have already disabled SELinux and rebooted. What should I do please? | Sorry, I have found the reason. It was entirely an SSL certificate problem, not the notices above, so those messages can be ignored. What I did was enable Apache's detailed logging, and that was the real move: it shows what is actually happening, namely a failure while loading the mod_ssl module when Apache starts. Then I realized it was because of ssl.conf (or the respective vhost file) holding the SSL certificate configuration. There I made 2 mistakes. First, I didn't give read permissions to the certificate-related files (.crt / .key / .csr). Worse, one of the files was also wrong. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/148224",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34722/"
]
} |
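Two quick checks that would have caught the mistakes described above; the file names are placeholders:
# readable by the httpd user?
ls -l /etc/pki/tls/certs/example.crt /etc/pki/tls/private/example.key

# do certificate and key actually belong together? the two digests must match (RSA key assumed)
openssl x509 -noout -modulus -in /etc/pki/tls/certs/example.crt | openssl md5
openssl rsa  -noout -modulus -in /etc/pki/tls/private/example.key | openssl md5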
148,232 | When I import pictures from my camera in Shotwell, it also imports the video clips. This is somewhat annoying, as I would like to store my videos in another folder. I've tried to write a bash command to do this, but have not had success. I need a command that meets the following requirements: Locate all files in a directory structure that do not have an extension of .jpg, .png, .gif, or .xcf (case insensitive). Move all of these files into a target directory, regardless of whether the file names or directory paths contain spaces or special characters. Any help would be appreciated! EDIT: I'm using the default shell in Ubuntu, meaning that some commands are aliased, etc. EDIT 2: I've attempted this myself (not the copy part, just the listing of files part). I turned on extglob and ran the following command: $ ls -R /path | awk ' /:$/&&f{s=$0;f=0} /:$/&&!f{sub(/:$/,"");s=$0;f=1;next} NF&&f{ print s"/"$0 }' This lists everything. I tried using grep on the end of it, but haven't the foggiest idea of how to get it to not match a pattern I give it. The extglob switch didn't help much with grep, even though it does help with other commands. | You can use find to find all files in a directory tree that match (or don't match) some particular tests, and then to do something with them. For this particular problem, you could use: find -type f ! \( -iname '*.png' -o -iname '*.gif' -o -iname '*.jpg' -o -iname '*.xcf' \) -exec echo mv {} /new/path \; This limits the search to regular files ( -type f ), and then to files whose names do not ( ! ) have the extension *.png in any casing ( -iname '*.png' ) or ( -o ) *.gif , and so on. All the extensions are grouped into a single condition between \( ... \) . For each matching file it runs a command ( -exec ) that moves the file, the name of which is inserted in place of the {} , into the directory /new/path . The \; tells find that the command is over. The name substitution happens inside the program-execution code, so spaces and other special characters don't matter. If you want to do this just inside Bash, you can use Bash's extended pattern matching features . These require that shopt extglob is on, and globstar too. In this case, use: mv **/!(*.[gG][iI][fF]|*.[pP][nN][gG]|*.[xX][cC][fF]|*.[jJ][pP][gG]) /new/path This matches all files in subdirectories ( ** ) that do not match *.gif , *.png , etc, in any combination of character cases, and moves them into the new path. The expansion is performed by the shell, so spaces and special characters don't matter again. The above assumes all files are in subdirectories. If not, you can repeat the part after **/ to include the current directory too. There are similar features in zsh and other shells, but you've indicated you're using Bash. (A further note: parsing ls is never a good idea - just don't try it.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/148232",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79632/"
]
} |
148,264 | I have a seedbox-account which I use for torrenting. I have set up a cron job that uses rsync to download all files from the seedbox to my 14.04.1 ubuntu server. The download folder is a samba share, as I have previously used windows to organize the content into folders before moving it to a more permanent place on my server. Before upgrading to 14.04, I used 12.04 and everything worked OK. The problem is that the files I get from the seedbox is owned by a user called "544" (this is the username generated by the seedbox, not some octal thing) and can not be deleted by my user on my server. After changing permissions on the download folder, I can delete them. Also, I have given all users permissions to read/write in the samba folder upon sharing it. So my question is this: Is it possible through some elegant command executed to make rsync change user/permissions on the files when downloading it? Or is it possible to make the folder function in a way that all new files will automatically change user when copied/moved to it? I guess I could put the chmod command in sudo cron to run every once in a while to change the owner of the files, but since sometimes the amount of data to be downloaded can be rather large, it is difficult to know when to trigger it in relation to the rsync-job to make sure all files are downloaded. | rsync only preserves the owner if you ask it to with -o — otherwise files will be owned by the user running the rsync command, just like when any other files are created. -a includes -o , however, so lots of common rsync command lines include it. man rsync includes a passage on this explicitly: For example: if you want to use -a (--archive) but don’t want -o(--owner), instead of converting -a into -rlptgD, you couldspecify -a --no-o (or -a --no-owner). So instead of rsync -a source dest you can use rsync -a --no-o source dest to have files be owned by the user running the command. On the other hand, you can usually only change the owner of a file if you're root, so this doesn't come up much. If it's necessary to run this job as root for some reason, and you want the files to be owned by your normal user rather than root, you can use the --chown option: rsync -a --chown=youruser:yourgroup ... There is also a --usermap option for more complicated mappings, if you do want to keep some users. It takes a comma-separated list of from:to pairs. As a final option, since you're apparently running rsync from a cron job, you can just append the owner/permissions-changing operation to the cron job with && chown ... . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148264",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62759/"
]
} |
148,273 | Consider the following range: 1,/pattern/ . If pattern matches on the first line, the range matches for the whole file: $ cat 1.sh#!/usr/bin/env bashset -euseq 1 4 | sed -rn '1,/'"$1"'/p'$ ./1.sh 11234$ ./1.sh 212 What would you do about it? UPD Here's what I did (just in case): re='/1/'seq 1 4 | sed -rn "1{$re{p;q}}; 1,${re}p" Or this way: seq 1 4 | sed -rn "1{/1/{p;q}}; 1,//p" | Yes, that's an annoying thing about sed (see the sed FAQ about that). Since you're using GNU sed ( -r is GNU specific), you can do: sed -En "0,/$1/p" (I prefer -E over -r as it's also supported by some other sed s like FreeBSDs and is consistent with grep and a few other tools (and is going to be in the next issue of the POSIX/Single UNIX Specification standards )). A better alternative (and portable) would be: sed "/$1/q" To tell sed to quit (and stop reading) after the first match. Note that awk doesn't have the issue so you can write: PATTERN=$1 awk 'NR==1, $0 ~ ENVIRON["PATTERN"]' (though like for sed , you'd rather write): PATTERN=$1 awk '1; $0 ~ ENVIRON["PATTERN"] {exit}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148273",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29867/"
]
} |
148,285 | My query is to extract the value between double quotes "" . Sample input is: 10.219.41.68 - - - [11/Jun/2014:10:23:04 -0400] Sec:0 MicSec:1797 "GET /balancer-manager HTTP/1.1" 200 28980 "-" "curl/7.15.5 (i386-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5" I have large log files, so values can be vary for each line, need to extract the value between first occurrence of double quotes… Expected output: GET /balancer-manager HTTP/1.1 Anyone have any idea then please suggest. | You can just use cut for this: $cut -d '"' -f2 < logfileGET /balancer-manager HTTP/1.1 -d '"' tells cut to use a double quote as its field delimiter. -f2 tells it to take the second field, which is between the first and second quotes - or the first quoted string, exactly what you want. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/148285",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79658/"
]
} |
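If more than the first quoted field is needed later, the same idea works in awk with the double quote as the field separator; a small sketch for a combined-log-style line like the sample:
awk -F'"' '{ print $2 }' logfile        # first quoted string (the request)
awk -F'"' '{ print $2, $6 }' logfile    # request and user agent in this log format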
148,306 | What is the difference between -r and -R in the zip command? Obviously, I have googled about it. Also, I have referred to this in quest of finding the difference, but didn't get clarification. Can anyone from the community help me with this? | From what is likely to be in your man pages :
-r
--recurse-paths
       Travel the directory structure recursively
-R
--recurse-patterns
       Travel the directory structure recursively starting at the current directory
Loosely speaking, zip -r is used when you want to zip files under a specific directory, and zip -R when you want to zip files under a specific directory and where those files match a pattern defined after the -R flag, as you can see in the examples provided in that page. Also, -R starts in the current directory by default. Examples:
zip -r foo foo1 foo2
First zips up foo1 and then foo2, going down each directory.
zip -R foo "*.c"
In this case, all the files matching *.c in the tree starting at the current directory are stored into a zip archive named foo.zip. Note that *.c will match file.c, a/file.c and a/b/.c. More than one pattern can be listed as separate arguments. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148306",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79677/"
]
} |
148,321 | Is it possible for awk to read the program and the input from the standard input? I would like to be able to pipe a file to the following function.
process_data () {
  awk -f - <<EOF
{print}
EOF
}
Note: the actual program is longer, it can't be passed as a command line argument, and I'd rather not use temporary files. Currently it doesn't output anything.
$ yes | head | process_data
$ | process_data() { awk -f /dev/fd/3 3<< \EOF
awk code here
EOF
}
Note that command line arguments can contain newline characters, and while there's a length limit, it's generally over a few hundred kilobytes.
awk '
  BEGIN {...}
  /.../ ...
  END {...}'
If the issue is about embedding single quote characters in the awk script, another approach is to store the code in a variable:
awk_code=$(cat << \EOF
{print "'quoted' " $0}
EOF
)
And do:
process_data() {
  awk "$awk_code"
} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148321",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53904/"
]
} |
148,333 | So recently, I was discussing strace with somebody, and they were asking what would happen if you straced a running process just as it was creating a network socket or something similar. Could this cause the program to crash in unexpected ways? From what I've read about ptrace, the syscall used by strace, it shouldn't be able to cause anything like that if you're just debugging a thread. The process gets stopped, every time a syscall is called, but it should later resume and be none the wiser. Signals get queued while it's not running, so I assume something similar happens with syscalls/sockets/listen. Can ptrace used in the context of strace cause any weird process crashes? | No , strace should not cause a program crash - Except in this somewhat unusual case: If it has a bug that depends on timing of execution , or runtime memory locations . It may trigger this kind of " heisenbug " - but extremely rarely, because this kind of bug is rare, and it needs to only trigger under strace or other instrumentation.And when you find a heisenbug, that's often a good thing. Regarding ptrace() - the syscall - that is just what strace does inside I think, so it's similar. One can just do more than strace can when using ptrace() directly. Your example would be just this kind of bug: In the example, strace would change the timing of the steps to create a network connection. If that causes a problem, it was a "problem waiting to happen" - the timing of execution changes constantly. With strace , just a little more. But any other application could have changed the timing more, like starting a program. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148333",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33895/"
]
} |
148,340 | I have 2 files with the following:
File1.txt
A 1
B 2
C 5
Z 3
File2.txt
A 4
B 7
C 10
D 11
What I would like to do is create something like
A 1 4
B 2 7
C 5 10
D - 11
Z 3 -
Is there a utility that does this? If not, how can this be done? Using a find and awk or something? | join -a1 -a2 -o 0,1.2,2.2 -e - file1.txt file2.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148340",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45551/"
]
} |
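One caveat worth noting about the one-liner above: join expects both inputs to be sorted on the join field. A hedged sketch that sorts first (process substitution is a bash/ksh/zsh feature):
join -a1 -a2 -o 0,1.2,2.2 -e - <(sort file1.txt) <(sort file2.txt)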
148,379 | What is a good command to delete spaces, hyphens, and underscores from all files in a directory, or selected files? I use the following command with Thunar Custom Actions to slugify filenames: for file in %N; do mv "$file" "$(echo "$file" | tr -s ' ' | tr ' A-Z' '-a-z' | tr -s '-' | tr -c '[:alnum:][:cntrl:].' '-')"; done But that command only replaces spaces with dashes/hyphens and lowercases capped characters. I've used the following command in terminal to delete spaces from thousands of filenames in a folder, and it worked pretty fast: rename "s/ //g" * Again, it only deletes spaces, and not hyphens/dashes and underscores as well. Ideally I don't want any spaces, hyphens/dashes, and underscores in my filenames. And it would be great if the command could be used with Thunar Custom Actions on selected files. | The version of rename that comes with the perl package supports regular expressions: rename "s/[-_ ]//g" * Alternatively, rename -i "s/[-_ ]//g" * The -i flag will make rename use interactive mode, prompting if the target already exists, instead of silently overwriting. Perl's rename is sometimes called prename . Perl's rename versus util-linux's rename On Debian-like systems, perl's rename seems to be the default and the above commands should just work. On some distributions, the rename utility from util-linux is the default. This utility is completely incompatible with Perl's rename . All: First, check to see if Perl's rename is available under the name prename . Debian: Perl's rename should be the default. It is also available as prename . The rename executable, though, is under the control of /etc/alternatives and thus could have been altered to something different. archlinux: Run pacman -S perl-rename and the command is available as perl-rename . For a more convenient name, create an alias. (Hat tip: ChiseledAbs) Mac OSX According to this answer , rename can be installed on OSX using homebrew via: brew install rename Direct Download: rename is also available from Perl Monks: wget 'http://www.perlmonks.org/?displaytype=displaycode;node_id=303814' -O rename | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/148379",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77038/"
]
} |
148,382 | Suppose I have a directory / , and it contains many directories /mydir , /hisdir , /herdir , and each of those need to have a similar structure. For each directory in / , there needs to be a directory doc and within it a file doc1.txt . One might naively assume they could execute mkdir */doctouch */doc/doc1.txt but they would be wrong, because wildcards don't work like that. Is there a way to do this without just making the structure once in an example then cp ing it to the others? And, if not, is there a way to do the above workaround without overwriting any existing files (suppose mydir already contains the structure with some data I want to keep)? EDIT: I'd also like to avoid using a script if possible. | With zsh : dirs=(*(/))mkdir -- $^dirs/doctouch -- $^dirs/doc/doc1.txt (/) is a globbing qualifier , / means to select only directories. $^array (reminiscent of rc 's ^ operator) is to turn on a brace-like type of expansion on the array, so $^array/doc is like {elt1,elt2,elt3}/doc (where elt1 , elt2 , elt3 are the elements of the array). One could also do: mkdir -- *(/e:REPLY+=/doc:)touch -- */doc(/e:REPLY+=/doc1.txt:) Where e is another globbing qualifier that executes some given code on the file to select. With rc / es / akanga : dirs = */mkdir -- $dirs^doctouch -- $dirs^doc/doc1.txt That's using the ^ operator which is like an enhanced concatenation operator. rc doesn't support globbing qualifiers (which is a zsh-only feature). */ expands to all the directories and symlinks to directories , with / appended. With tcsh : set dirs = */mkdir -- $dirs:gs:/:/doc::qtouch -- $dirs:gs:/:/doc/doc1.txt::q The :x are history modifiers that can also be applied to variable expansions. :gs is for global substitute. :q quotes the words to avoid problems with some characters. With zsh or bash : dirs=(*/)mkdir -- "${dirs[@]/%/doc}"touch -- "${dirs[@]/%/doc/doc1.txt}" ${var/pattern/replace} is the substitute operator in Korn-like shells. With ${array[@]/pattern/replace} , it's applied to each element of the array. % there means at the end . Various considerations: dirs=(*/) includes directories and symlinks to directories (and there's no way to exclude symlinks other than using [ -L "$file" ] in a loop), while dir=(*(/)) (zsh extension) only includes directories ( dir=(*(-/)) to include symlinks to directories without adding the trailing slash). They exclude hidden dirs. Each shell has specific option to include hidden files). If the current directory is writable by others, you potentially have security problems. As one could create a symlink there to cause you to create dirs or files where you would not want to. Even with solutions that don't consider symlinks, there's still a race condition as one may be able to replace a directory with a symlink in between the dirs=(*/) and the mkdir... . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148382",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53301/"
]
} |
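For shells without the zsh/rc features shown above, a plain loop is a reasonable fallback; a sketch that skips non-directories and does not overwrite existing files:
for d in */; do
  [ -d "$d" ] || continue
  mkdir -p -- "${d}doc"
  [ -e "${d}doc/doc1.txt" ] || touch -- "${d}doc/doc1.txt"
done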
148,454 | This problem has enough layers of complexity to make simple escaping characters difficult for me. I have a bash script which, for a large part, has an embedded awk script that reads a file delimited by semicolons. In this text file, one of the fields is a directory path to some JPEGs. I'd like to copy the JPEGs somewhere else, but the original directory path has spaces in it (and could potentially have other strings that could do damage). I can't refer to the directory in question with single quotes because this stops the awk interpreter, and double quotes turns it into a literal string in awk. I'm using gawk 4.1.1. Here's some awk code of what I'm trying to accomplish: imageDirectory = $1;cpJpegVariable="cp -r " imageDirectory " assets";#screws up if imageDirectory has spaces and other naughty characterssystem(cpJpegVariable); | You can use: awk '... cpJpegVariable="cp -r '\''" imageDirectory "'\'' assets"; ...' (note that ' doesn't need escaping for awk , but you need '\'' for the shell). So when awk expands variable cpJpegVariable , it looks like: cp -r 'file_contain space' assets With that, you can avoid problem with all special characters, except ' itself. If imageDirectory may contain single quote characters, you can escape them with gsub() . Example: awk '{gsub("'\''","'\''\"'\''\"",imageDirectory)}1' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148454",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79386/"
]
} |
148,458 | I have family pictures and movies in a folder /media/data/Selbstgemacht and I'd like to find the size of all pictures. In /media/data I use find Selbstgemacht -type f -iname '*.jpg' -exec du -ch '{}' + which returns 5,1GB. However, if I step down into the folder "Selbstgemacht" and use find . -type f -iname '*.jpg' -exec du -ch '{}' + it returns 7,0GB. I then compared the output of find to check if they find the same files: From parent folder find Selbstgemacht -type f -iname '*.jpg' -printf '%P\n' |sort > test1.txt From subfolder find . -type f -iname '*.jpg' -printf '%P\n' |sort > ../test2.txt The files are identical, so both find commands find exactly the same files, which leads me to thinking that the difference in the size du reports must be due to something else. What exactly is the cause here? System information: Debian stable find (GNU findutils) 4.4.2 D_TYPE O_NOFOLLOW(enabled) LEAF_OPTIMISATION, FTS(), CBO(level=0) du (GNU coreutils) 8.13 | find ... -exec cmd {} + will execute cmd as many times as necessary so as not to break the limit of the size of the arguments passed to a command. When using find . -exec du {} + , the size of the file list is smaller than when using find verylongdirname -exec du {} + . So it's likely the find verylongdirname will run more du commands than the find . one. The total you see in the end is the total for the last run on du , which does not include all the files (there will have been more totals earlier, you can pipe the command to grep 'total$' to confirm. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148458",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49464/"
]
} |
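To see the per-run totals the answer mentions, or to add the sizes up yourself independently of how find batches the arguments, a short sketch:
find Selbstgemacht -type f -iname '*.jpg' -exec du -ch {} + | grep 'total$'
# or sum the individual per-file sizes (du -k reports KiB in column 1):
find Selbstgemacht -type f -iname '*.jpg' -exec du -k {} + | awk '{ s += $1 } END { printf "%.1f GiB\n", s/1024/1024 }'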
148,479 | While writing a simple shell tool, I found a piece where I don't know how to get it to work. [ "$#" -ne 3 ] || echo "wrong number of arguments" && exit The above works as intended because it's hard to conceive conditions where echo could fail. But what if I replaced echo with a command that can fail, and still execute exit nevertheless? This won't work, because exit quits the shell spawned with ( ) and not the main one: [ "$#" -ne 3 ] && ( command ; exit ) This will exit always: [ "$#" -ne 3 ] && command ; exit I could use the verbose syntax: if [ "$#" -ne 3 ] ; then command exit fi but if I don't want to engage if and keep the syntax terse - how can I string conditional execution of commands, including exit like that? | You can group command in curly braces: [ "$#" -ne 3 ] || { command; exit; } { list; } causes lists command run in current shell context, not in subshell. Read more about bash Grouping commands | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/148479",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30534/"
]
} |
148,483 | I have some script using the flock executable. It works well. The problem is when this script calls another script and that creates a background process. In this situation the background process inherits the locked file handle; this is system behavior. I'm looking for any tool that works as a wrapper and closes all unneeded handles, especially file locks. In my idea only the main process should be protected against running twice. I know this is an untypical situation. Usually all children should finish before the file lock is released, but in this situation this does not work. Right now I use a workaround, a wrapper with the main code above, but I'd prefer to use some binary wrapper. code:
#!/bin/bash
for fd in $(ls /proc/$$/fd); do
  case "$fd" in
    0|1|2|255) ;;
    *) eval "exec $fd>&-" ;;
  esac
done
exec $1 $2 $3 $4 $5 $6 $7 $8 $9 | You can group command in curly braces: [ "$#" -ne 3 ] || { command; exit; } { list; } causes lists command run in current shell context, not in subshell. Read more about bash Grouping commands | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/148483",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36838/"
]
} |
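For the flock inheritance problem described in the question above, a common sketch is to take the lock on a numbered file descriptor and explicitly close that descriptor when spawning long-lived children; descriptor 9, the lock path, and the child script name are all placeholders:
#!/bin/bash
exec 9>/var/lock/myscript.lock
flock -n 9 || { echo "already running" >&2; exit 1; }

# background child started with the lock descriptor closed,
# so it cannot keep the lock held after the main script exits
long_running_child.sh 9>&- &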
148,497 | In bash , it's easy enough to set up customized completion of command arguments using the complete built-in. For example, for a hypothetical command with a synopsis of foo --a | --b | --c you could do complete -W '--a --b --c' foo You can also customize the completion you get when you press Tab at an empty prompt using complete -E , for example complete -E -W 'foo bar' . Then, pressing tab at the empty prompt would suggest only foo and bar . How do I customize command completion at a non -empty prompt? For example, if I write f , how do I customize the completion to make it complete to foo ? (The actual case I'd like is loc TAB → localc . And my brother, who prompted me to ask this, wants it with mplayer .) | Completion of the command (along with other things) is handled via bash readline completion . This operates at a slightly lower level than the usual "programmable completion" (which is invoked only when the command is identified, and the two special cases you identified above). Update: the new release of bash-5.0 (Jan 2019) adds complete -I for exactly this problem. The relevant readline commands are: complete (TAB) Attempt to perform completion on the text before point. Bash attempts completion treating the text as a variable (if the text begins with $), username (if the text begins with ~), hostname (if the text begins with @), or command (including aliases and functions) in turn. If none of these produces a match, filename completion is attempted.complete-command (M-!) Attempt completion on the text before point, treating it as a command name. Command completion attempts to match the text against aliases, reserved words, shell functions, shell builtins, and finally executable filenames, in that order. In a similar way to the more common complete -F , some of this can be handed over to a function by using bind -x . 
function _complete0 () {
  local -a _cmds
  local -A _seen
  local _path=$PATH _ii _xx _cc _cmd _short
  local _aa=( ${READLINE_LINE} )

  if [[ -f ~/.complete.d/"${_aa[0]}" && -x ~/.complete.d/"${_aa[0]}" ]]; then
    ## user-provided hook
    _cmds=( $( ~/.complete.d/"${_aa[0]}" ) )
  elif [[ -x ~/.complete.d/DEFAULT ]]; then
    _cmds=( $( ~/.complete.d/DEFAULT ) )
  else
    ## compgen -c for default "command" complete
    _cmds=( $(PATH=$_path compgen -o bashdefault -o default -c ${_aa[0]}) )
  fi

  ## remove duplicates, cache shortest name
  _short="${_cmds[0]}"
  _cc=${#_cmds[*]}   # NB removing indexes inside loop
  for (( _ii=0 ; _ii<$_cc ; _ii++ )); do
    _cmd=${_cmds[$_ii]}
    [[ -n "${_seen[$_cmd]}" ]] && unset _cmds[$_ii]
    _seen[$_cmd]+=1
    (( ${#_short} > ${#_cmd} )) && _short="$_cmd"
  done
  _cmds=( "${_cmds[@]}" )   ## recompute contiguous index

  ## find common prefix
  declare -a _prefix=()
  for (( _xx=0; _xx<${#_short}; _xx++ )); do
    _prev=${_cmds[0]}
    for (( _ii=0 ; _ii<${#_cmds[*]} ; _ii++ )); do
      _cmd=${_cmds[$_ii]}
      [[ "${_cmd:$_xx:1}" != "${_prev:$_xx:1}" ]] && break
      _prev=$_cmd
    done
    [[ $_ii -eq ${#_cmds[*]} ]] && _prefix[$_xx]="${_cmd:$_xx:1}"
  done
  printf -v _short "%s" "${_prefix[@]}"   # flatten

  ## emulate completion list of matches
  if [[ ${#_cmds[*]} -gt 1 ]]; then
    for (( _ii=0 ; _ii<${#_cmds[*]} ; _ii++ )); do
      _cmd=${_cmds[$_ii]}
      [[ -n "${_seen[$_cmds]}" ]] && printf "%-12s " "$_cmd"
    done | sort | fmt -w $((COLUMNS-8)) | column -tx
    # fill in shortest match (prefix)
    printf -v READLINE_LINE "%s" "$_short"
    READLINE_POINT=${#READLINE_LINE}
  fi

  ## exactly one match
  if [[ ${#_cmds[*]} -eq 1 ]]; then
    _aa[0]="${_cmds[0]}"
    printf -v READLINE_LINE "%s " "${_aa[@]}"
    READLINE_POINT=${#READLINE_LINE}
  else
    :   # nop
  fi
}
bind -x '"\C-i":_complete0'
This enables your own per-command or prefix string hooks in ~/.complete.d/ . E.g. if you create an executable ~/.complete.d/loc with:
#!/bin/bash
echo localc
This will do (roughly) what you expect. The function above goes to some lengths to emulate the normal bash command completion behaviour, though it is imperfect (particularly the dubious sort | fmt | column carry-on to display a list of matches). However, a non-trivial issue with this is that it can only use a function to replace the binding to the main complete function (invoked with TAB by default). This approach would work well with a different key-binding used for just custom command completion, but it simply does not implement the full completion logic after that (e.g. later words in the command line). Doing so would require parsing the command line, dealing with cursor position, and other tricky things that probably should not be considered in a shell script... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148497",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/977/"
]
} |
148,534 | I'm using the GNU date command to parse arbitrary natural language dates - for example to detect if a certain epoch time stamp is from last week, I can do: if [ "$timestamp" -lt "$(date +"%s" -d "last sunday")" ]; then ... Now when I'm trying to port my script to FreeBSD, how do I achieve the same functionality? man date didn't show any promise, unless I missed something obvious. | There are a number of commands on FreeBSD that use the same API as GNU date to input natural language dates from the user . I've just found one that can be tricked into converting that date into Unix epoch time: /usr/sbin/fifolog_reader -B 'last sunday' /dev/null 2>&1 | sed 's/^From[[:blank:]]*\([0-9]*\).*/\1/p' (note that at least on FreeBSD 9.1-RELEASE-p2 where I tried that on, it only seems to work reliably if you're in a UTC timezone and the date specifications it recognises are not necessarily the same as those recognised by GNU date). Note that some shells have that capability builtin. ksh93 : if (( timestamp < $(printf '%(%s)T' 'last sunday') )); then zsh : autoload calendar_scandatecalendar_scandate 'last sun'if (( timestamp < REPLY)); then... Or you could use perl and the Date::Manip if installed: last_sun=$(perl -MDate::Manip -e 'print UnixDate("last sunday","%s")')if [ "$timestamp" -lt "$last_sun" ]; then... Or: if perl -MDate::Manip -e 'exit 1 unless $ARGV[0] < UnixDate("last sunday","%s") ' "$timestamp"; then.... If the aim is to check file timestamps, then note that FreeBSD find supports: find file -prune -newermt 'last sunday' In this very case, if you want the time of the beginning of this week (weeks starting on Sunday), you can do: week_start=$(($(date '+%s - 86400*%w - 3600*%H - 60*%M - %S'))) That should work on both GNU and FreeBSD (or any system where %s is supported). In timezones with summer time, that will be off by an hour around the switch from/to summer time. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148534",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4323/"
]
} |
148,545 | I am running Ubuntu 10.04 and I use upstart for daemon management. My enterprise application is run as a daemon and must be run as root because of various privileges. E.g.: sudo start my-application-long-IDsudo stop my-application-long-IDetc I would like to introduce an alias to abbreviate these commands as something like: alias startapp='sudo start my-application-long-ID' and run it as startapp and that works but I would prefer to not have sudo in the alias. alias startapp='start my-application-long-ID' does not when run using sudo startapp , returning sudo: startapp: command not found . However, when I added the alias: alias sudo='sudo ' sudo startapp now works but I am still curious why sudo ignores aliases. | I see the below information from here . When using sudo, use alias expansion (otherwise sudo ignores your aliases) alias sudo='sudo ' The reason why it doesn't work is explained here . Bash only checks the first word of a command for an alias, any words after that are not checked. That means in a command like sudo ll, only the first word (sudo) is checked by bash for an alias, ll is ignored. We can tell bash to check the next word after the alias (i.e sudo) by adding a space to the end of the alias value. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/148545",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23944/"
]
} |
148,563 | Apparently you can rename file to ... . If I were insane, how would I rename file to .. or . ? Is such a filename even allowed? Backslash doesn't seem to disable dot's special meaning: $ mv test \.mv: `test' and `./test' are the same file | You can't rename a file to . or .. because all directories already contain entries for those two names. (Those entries point to directories, and you can't rename a file to a directory.) mv detects the case where the destination is an existing directory, and interprets it as a request to move the file into that directory (using its current name). Backslashes have nothing to do with this, because . is not a shell metacharacter. \. and . are the same to bash . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/148563",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53467/"
]
} |
148,572 | fgrep --help | fgrep "--help" returns just the whole fgrep --help, how do I return just the lines that have the literal "--help" in them? The quotes don't do anything, nor does \-\-help . | I believe you can use fgrep -- --help to achieve this. The man page mentions fgrep -e --help Quote from http://www.openbsd.org/cgi-bin/man.cgi?query=grep : -e pattern Specify a pattern used during the search of the input: an input line is selected if it matches any of the specified patterns. This option is most useful when multiple -e options are used to specify multiple patterns, or when a pattern begins with a dash (‘-’). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148572",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79435/"
]
} |
148,623 | There is a way to visualize multiple terminals at the same time without running a Xorg session ? I have a really low profile machine that could be great for some basic stuff but has an horrible support for the GPU in terms of drivers and computing power. | Check out tmux and/or screen . A comparison of the two programs which satisfy essentially the same needs can be found on the tmux FAQ . A very good blog post for getting started with tmux is at Hawk Host: TMUX the terminal multiplexer part 1 and part 2 . If you want to know more about tmux's versatility, there's a nice book/e-book that covers a lot of ground at a leisurely pace: tmux: Productive Mouse-Free Development by Brian P. Hogan. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/148623",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41194/"
]
} |
148,636 | I have been reading up on control operators, and I was wondering if there was a limit to how many commands you could line up with control operators, such as || , && and ; . In addition, is there a configuration file somewhere where this can be regulated? PS: I am not entirely sure how to tag this. | There isn't really; as long your computer's memory can handle the queue, the shell should do its best. According to POSIX : The shell shall read its input in terms of lines from a file, from a terminal in the case of an interactive shell, or from a string in the case of sh -c or system() . The input lines can be of unlimited length . These lines shall be parsed using two major modes: ordinary token recognition and processing of here-documents. Basically all of those || && strung together amount to a single input line for the shell's parser, because it has to parse tokens for each command list before then evaluating and executing the list's constituent simple commands. I once covered something like this here - and there are a lot of command examples there detailing how the parser works (or at least how I understand it works) . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148636",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72977/"
]
} |
148,652 | I'm trying to debug a code using GDB in a Fedora machine. It produces this message each time I run it. Missing separate debuginfos, use: debuginfo-install glibc-2.18-12.fc20.x86_64 libgcc-4.8.3-1.fc20.x86_64 libstdc++-4.8.3-1.fc20.x86_64 My questions: Should these packages be in GDB by default? What is the function of each of these packages? In real production environments should these packages be installed for GDB? Is it ok if I do not install these packages? What will be the effect? | No. gdb is packaged by a maintainer, glibc is packaged by another maintainer, gcc , libstdc and so on all all packaged by different maintainers. To package the debuginfo for these along with gdb would take considerable coordination. Each time one of the packages changed, the gdb maintainer would have to repackage and release. It would become quite cumbersome to manage. gdb can also debug other languages, for example java , which wouldn't need the debuginfo for the libraries listed. The debuginfo packages contain the source code and symbols stripped from the executable. They are only required during debugging, therefore are redundant during normal use. They do take up a fair amount of space, therefore are stripped during production releases. It depends. Most C code will use glibc etc. However, if you're debugging package X and don't need to delve into the internals of glibc you could manage without installing it. If you want to follow the code in gdb all the way to the low-level glibc , or if you think there's a bug in the library itself, then you'll need to install it. On the other hand, some C code might be statically linked and should have everything needed within it's own debuginfo package, or an application could be written in another language. Neither would need these installed. Yes. The effect of not installing these packages is that you will not be able to debug effectively into the routines provided by them. As in 3 above, it all depends on whether you need to debug at that level or not. Note: You'll find that many applications have been optimised (with the -O flag in the compiler) and don't debug that well with debuginfo. A workaround is to recompile without any optimisation. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148652",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79627/"
]
} |
148,670 | I have a list of URLs and I want to save each of their targets in a separate text file. Here's an example of the input file containing the URLs: ~$: head -3 url.txt http://www.uniprot.org/uniprot/P32234.txthttp://www.uniprot.org/uniprot/P05552.txt http://www.uniprot.org/uniprot/P07701.txt I'm currently using a Python custom function to accomplish this task. It works, but the main inconvenient are: user has to copy-paste URLs manually (there's no direct file input) and the output contains some 'b' characters at the beginning of each line (?binary). ~$: head -3 P32234.txtb' ID 128UP_DROME Reviewed; 368 AA.'b' AC P32234; Q9V648;'b' DT 01-OCT-1993, integrated into UniProtKB/Swiss-Prot. Here's the Python code: def html_to_txt(): import urllib.request url = str(input('Enter URL: ')) page = urllib.request.urlopen(url) with open(str(input('Enter filename: ')), "w") as f: for x in page: f.write(str(x).replace('\\n','\n')) s= 'Done' return s Is there a cleaner way of doing this using some Unix utilities? | Use -i option: wget -i ./url.txt From man wget : -i file --input-file=file Read URLs from a local or external file. If - is specified as file, URLs are read from the standard input. (Use ./- toread from a file literally named -.) If this function is used, no URLsneed be present on the command line. If there are URLs both on thecommand line and in an input file, those on the command lines will bethe first ones to be retrieved. If --force-html is not specified, thenfile should consist of a series of URLs, one per line. However, if you specify --force-html, the document will be regarded ashtml. In that case you may have problems with relative links, whichyou can solve either by adding "" to the documentsor by specifying --base=url on the command line. If the file is an external one, the document will be automaticallytreated as html if the Content-Type matches text/html. Furthermore,the file's location will be implicitly used as base href if none wasspecified. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148670",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74555/"
]
} |
148,714 | The most recent thing I remember is changing the soft and hard memlock ulimit to unlimited. Now I can't ssh into the machine. This is the ssh log. Authenticated to IP ([IP]:22).debug1: channel 0: new [client-session]debug2: channel 0: send opendebug1: Requesting [email protected]: Entering interactive session.debug2: callback startdebug2: fd 3 setting TCP_NODELAYdebug2: client_session2_setup: id 0debug2: channel 0: request pty-req confirm 1debug1: Sending environment.debug1: Sending env LC_CTYPE = debug2: channel 0: request env confirm 0debug2: channel 0: request shell confirm 1debug2: callback donedebug2: channel 0: open confirm rwindow 0 rmax 32768debug2: channel_input_status_confirm: type 99 id 0debug2: PTY allocation request accepted on channel 0debug2: channel 0: rcvd adjust 2097152debug2: channel_input_status_confirm: type 99 id 0debug2: shell request accepted on channel 0Last login: Wed Aug 6 07:18:07 2014 from IP-SOURCEdebug2: channel 0: rcvd eofdebug2: channel 0: output open -> draindebug2: channel 0: obuf emptydebug2: channel 0: close_writedebug2: channel 0: output drain -> closeddebug1: client_input_channel_req: channel 0 rtype exit-status reply 0debug1: client_input_channel_req: channel 0 rtype [email protected] reply 0debug2: channel 0: rcvd eowdebug2: channel 0: close_readdebug2: channel 0: input open -> closeddebug2: channel 0: rcvd closedebug2: channel 0: almost deaddebug2: channel 0: gc: notify userdebug2: channel 0: gc: user detacheddebug2: channel 0: send closedebug2: channel 0: is deaddebug2: channel 0: garbage collectingdebug1: channel 0: free: client-session, nchannels 1Connection to IP closed.Transferred: sent 4256, received 2504 bytes, in 0.4 secondsBytes per second: sent 9616.9, received 5658.0debug1: Exit status 254 I have tried the following unsuccessfully till now, before posting here: Trying for a norc noprofile login by ssh user@host 'bash --noprofile' Forcing a tty by ssh -t user@host Moved the bash_profile. Tried sshing by ssh user@host . Renaming the limits.conf file in hopes that it won't be read. Restarted ssh server. Run a command via knife as knife ssh "name:server" "come_command" ssh user@host 'ulimit -l 64' , ssh user@host 'ulimit -S -l 64' , ssh user@host 'ulimit -H -l 64' , ssh user@host 'exec ulimit -H -l 64' I am not sure if this way of running commands inline: ssh user@host "some_command" works, because I can't get a simple directory listing. I also tried rebooting by ssh user@host 'reboot' but don't think the command was executed. I restarted the machine from AWS also, but unsuccessful. Is it a lost cause trying to ssh? Is there any way I can ssh into the server? | Try to change UsePAM yes on UsePAM no in /etc/ssh/sshd_config (for CentOS) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148714",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3493/"
]
} |
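A rough sketch of the change suggested above, assuming a stock /etc/ssh/sshd_config on CentOS with the line "UsePAM yes" present (editing the file by hand works just as well):
sudo sed -i 's/^UsePAM yes/UsePAM no/' /etc/ssh/sshd_config
grep '^UsePAM' /etc/ssh/sshd_config    # verify the result
sudo service sshd restart              # or: systemctl restart sshd, depending on the init system
If the memlock change in /etc/security/limits.conf caused the failure in the first place, reverting that entry is worth trying as well.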
148,715 | I'm running some simulations on a Linux cluster but I have a problem with my jobs. I want to cancel some jobs (IDs: 750, 752, 753; name: gib) but I don't know how to cancel them. The enclosed file is my job screen. How do I kill a job by its job ID? | You can cancel a job listed by qstat using the qdel command: qdel *ID* so in your case: qdel 750 If it won't die, you can force kill using the -f option: qdel -f 750 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148715",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79925/"
]
} |
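Since the question lists three job IDs, a small loop over them is enough (750, 752 and 753 are taken from the question):
for job in 750 752 753; do qdel "$job"; done
# if a job refuses to die, force it:
for job in 750 752 753; do qdel -f "$job"; done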
148,735 | When running the following command in bash: netstat -i I am returned something which looks like this: Kernel Interface table iface MTU ... eno1 1500 lo 50000 I ran this command with the intention of finding what type of network adapters (hardware) I have (it should return a wireless adapter + more), but I am unable to tell what these mean. Is this the correct command for this? | eno1 is the onboard Ethernet (wired) adapter. lo is a loopback device. You can imagine it as a virtual network device that is on all systems, even if they aren't connected to any network. It has an IP address of 127.0.0.1 and can be used to access network services locally. For example, if you run a webserver on your machine and browse to it with Firefox or Chromium on the same machine, then it will go via this network device. There is no wi-fi adapter listed. lspci and lsusb may help you find them in the first place at which point you need to figure out why it isn't working. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148735",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79935/"
]
} |
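A hedged sketch of the follow-up steps the answer suggests; the grep patterns are only guesses at typical vendor strings, so inspect the full output if they match nothing:
lspci | grep -iE 'network|ethernet|wireless'   # PCI/PCIe adapters, including most internal wi-fi cards
lsusb | grep -iE 'wireless|wlan|802\.11'       # USB wi-fi dongles
ip link                                        # every interface the kernel has actually created
If the wireless card shows up in lspci or lsusb but not in ip link, the missing piece is usually the driver or its firmware.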
148,754 | I am trying to start a GUI application with an upstart script on CentOS. I have a test script located in the /etc/init/ folder: start on desktop-session-start stop on desktop-shutdown respawn script export DISPLAY=:0 sleep 5 exec /.1/Projects/UpstartTest/start.sh & end script The start.sh script runs the binary files for the GUI application. After rebooting my computer, when I typed: [root@mg-CentOS ~]# initctl status test test stop/waiting So my upstart job is not running. When I type initctl start test manually it works fine without any problem. How can I run this upstart script after user login (desktop started)? I am trying to find detailed upstart documentation for CentOS but there is none. | On CentOS 7 use gnome-session-properties to edit this in the GUI: This will add a .desktop file in ~/.config/autostart/ . You can also alternatively copy the .desktop file yourself. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/148754",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79645/"
]
} |
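If you prefer to copy the .desktop file yourself, a sketch along these lines should work; myapp.desktop, the Name value and the path to start.sh are placeholders, and the X-GNOME-Autostart-enabled key is simply what gnome-session-properties itself tends to write:
mkdir -p ~/.config/autostart
cat > ~/.config/autostart/myapp.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=My GUI application
Exec=/path/to/start.sh
X-GNOME-Autostart-enabled=true
EOF
The entry takes effect at the next login to the desktop session.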
148,794 | I am using virt-install (see below) to create a guest. All seems fine up to the point where it complains about auto-allocation of the SPICE TLS port. Here's what I am running and the full output: # sudo virt-install --name vmname --ram 1024 --os-type=linux --os-variant=ubuntutrusty --disk path=/data/vm/vmname_sda.qcow2,bus=virtio,size=10,sparse=false --noautoconsole --console pty,target_type=virtio --accelerate --hvm --network=network:default --graphics spice,port=20001,listen=127.0.0.1Starting install...Retrieving file MANIFEST... | 2.1 kB 00:00 ...Retrieving file MANIFEST... | 2.1 kB 00:00 ...Retrieving file linux... | 11 MB 00:00 ...Retrieving file initrd.gz... | 41 MB 00:00 ...ERROR unsupported configuration: Auto allocation of spice TLS port requested but spice TLS is disabled in qemu.confDomain installation does not appear to have been successful.If it was, you can restart your domain by running: virsh --connect qemu:///system start vmnameotherwise, please restart your installation. The error is: ERROR unsupported configuration: Auto allocation of spice TLS port requested but spice TLS is disabled in qemu.conf and indeed in my /etc/libvirt/qemu.conf I have: spice_tls = 0 (and intentionally so). So how can I create a KVM guest using the SPICE protocol for graphics, but with TLS disabled ? I doubt it is of relevance, but the reason I want to disable TLS is because I am tunneling the connection to SPICE via SSH already. No need for an extra layer of encryption. The host system is Ubuntu 14.04.1. Package versions are: virtinst: 0.600.4-3ubuntu2 qemu-kvm: 2.0.0+dfsg-2ubuntu1.2 (all up to date as far as apt-get is concerned) | Okay, I worked around it on my own. In the option: --graphics spice,port=20001,listen=127.0.0.1 remove the port parameter such that it becomes: --graphics spice,listen=127.0.0.1 You need to configure the <graphics /> element in the libvirt XML configuration file then. My invocation of virt-install gave me this: <graphics type='spice' autoport='yes' listen='127.0.0.1'> <listen type='address' address='127.0.0.1'/></graphics> There is one caveat. I finished the installation while SPICE was still connected to the default auto-connected port (5900 in my case). If you shut down the guest prior to finishing the installation the whole process initiated by virt-install will be interrupted. In order to change it one should shut down the guest and the edit the XML to something like the following, using virsh edit vmname (where vmname should be replaced with your name): <graphics type='spice' autoport='no' port='20001' listen='127.0.0.1'> <listen type='address' address='127.0.0.1'/></graphics> Possible workaround for "port in use" conflicts. Use any of the local net addresses other than 127.0.0.1 from 127.0.0.0/24, e.g. 127.0.0.2 etc to listen on. NOTE: If someone can come up with a better (i.e. actual) solution, I'll accept that other answer. This writeup is mostly for others that may run into the same issue. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148794",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5462/"
]
} |
148,819 | As I understand, the ls command calls getdents , which returns up to x number of directory entries. Are there any other system calls involved? If I run ls -l , are there any more system calls? I am trying to determine if ls -l is more expensive and hence slower than ls . | /bin/ls usually sorts the output. I'm not sure if your "efficient" question is just over system calls or the entire work that is done, but /bin/ls -f would probably do the least work. It only returns the filenames in directory order. No sorting, no additional inode lookups to get metadata (as ls -l would do). Also, if your default ls is colorizing, it may be doing the equivalent of ls -l anyway so that it can tell how to color the output. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/148819",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79988/"
]
} |
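One way to check the claim yourself is to count the system calls of both invocations; this is only a rough measurement, and the exact call names (stat, lstat, newfstatat, getxattr, ...) vary by platform and ls version:
strace -c ls > /dev/null      # syscall summary for plain ls (summary goes to stderr)
strace -c ls -l > /dev/null   # the -l run typically adds one or more stat-style calls per entry
Comparing the two summaries makes the extra per-file metadata lookups of ls -l visible, which is the main reason it is more expensive.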
148,890 | I need to disable SELinux but cannot restart the machine. I followed this link, where I got the command below: setenforce 0 But after running this command I checked with sestatus: SELinux status: enabled SELinuxfs mount: /selinux Current mode: permissive Mode from config file: disabled Policy version: 24 Policy from config file: targeted Is there any other option? | sestatus is showing the current mode as permissive . In permissive mode, SELinux will not block anything, but merely warns you. The line will show enforcing when it's actually blocking. I don't believe it's possible to completely disable SELinux without a reboot. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/148890",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64773/"
]
} |
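For completeness, a short sketch of the commands involved; nothing here disables SELinux without a reboot, it only shows the current state and the setting that will apply after one:
getenforce                            # Enforcing, Permissive or Disabled
sudo setenforce 0                     # switch to permissive until the next boot
grep '^SELINUX=' /etc/selinux/config  # the mode the system will boot into
Setting SELINUX=disabled in that file and rebooting is the usual way to disable it completely.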
148,912 | I have the following text: A Hello world B Hello world C Hello world I know that I can replace Hello by Hi using sed : sed 's/Hello/Hi/g' -i test but this replace each Hello with Hi : A Hi world B Hi world C Hi world what I really want is to replace only the Hello after B : A Hello world B Hi world C Hello world so I have tried this: sed 's/"B\nHello"/"B\nHi"/g' -i test but nothing happened, How can I do this? Note: There are some white-spaces on the beginning of each line of the file. | Something like: sed '/^[[:blank:]]*B$/{n;s/Hello/Hi/g;}' That assumes there are no consecutive B s (one B line followed by another B line). Otherwise, you could do: awk 'last ~ /^[[:blank:]]*B$/ {gsub("Hello", "Hi")}; {print; last=$0}' The sed equivalent would be: sed 'x;/^[[:blank:]]*B$/{ g;s/Hello/Hi/;b } g' To replace the second word after B , or to replace world with universe only if two lines above contained B : awk 'l2 ~ /B/ {gsub("world","universe")}; {print; l2=l1; l1=$0}' To generalise it to n lines above: awk -v n=12 'l[NR%n] ~ /B/ {gsub("foo", "bar")}; {print; l[NR%n]=$0}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148912",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45764/"
]
} |
148,922 | I use a tool to monitor if the web-page is up and running. The tool uses curl command internally to fetch the output. However, when a web-page takes longer time to respond, it results back with a TIMEOUT error. There is no way that I can increase the timeout from the tool. Is there any way to set/modify the timeout period for a response from a web-page? Is there any variable that can be modified? | You can use -m option: -m, --max-time <seconds> Maximum time in seconds that you allow the whole operation to take. This is useful for preventing your batch jobs from hang‐ ing for hours due to slow networks or links going down. See also the --connect-timeout option. If this option is used several times, the last one will be used. This includes time to connect, if you want to specify it separately, use --connect-timeout option. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/148922",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80040/"
]
} |
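A sketch of both timeout knobs in one call; the 10 and 60 second values are arbitrary examples:
curl --connect-timeout 10 --max-time 60 -s -o /dev/null -w '%{http_code}\n' https://example.com/
If the monitoring tool invokes the curl binary and does not pass -q, it may also pick up defaults from ~/.curlrc, so lines such as connect-timeout = 10 and max-time = 60 there can be a way to raise the limits without touching the tool; whether that applies depends on how the tool calls curl.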
148,929 | I am trying to copy a file that has colons and periods, e.g., with: scp "test.json-2014-08-07T11:17:58.662378" remote:tmp/scp test.json-2014-08-07T11\:17\:58.662378 remote:tmp/ and combinations with file: scp "file:///home/.../test.json-2014-08-07T11:17:58.662378" remote:tmp/ My guess is that scp tries to interprete parts of the file as a server and/or port number. How do I avoid that? If I rename the file to test.json then scp test.json remote:tmp/ works ok, but not even scp test*62378 remote:tmp/ works. | Use ./ before your filename: scp ./test.json-2014-08-07T11:17:58.662378 remote:tmp/ That make scp know it's a file. Without it, scp thinks it's a hostname because of the colon. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/148929",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79719/"
]
} |
148,956 | I have an Ubuntu 10.04 server setup remotely that I setup a while back. While I recorded the username and password, I seem to have been clever and changed the usual ssh port from 22 to... something else. How do I find out what that port might be? I do have access to the server via the hosting company's back door, so I can execute whatever Unix commands are needed - but I cannot log in using a normal putty shell on my machine. | First check on the config file which port is configured: $ sudo grep Port /etc/ssh/sshd_configPort 22 Then either restart ssh to make sure it loads the config you just saw or find out on which port ssh is running: $ sudo netstat -tpln | egrep '(Proto|ssh)'Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program nametcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 12586/sshd That's a normal ssh running on port 22. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/148956",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66854/"
]
} |
148,967 | we know that we can get the value of a variable using $ sign: x=3echo $x3 Is there other ways that we can use to get the value without using $ sign. I'm asking this because $ sign is a special character to sed , and I get sed: -e expression #1, char 31: unterminated s' command error when trying to do this: A=25 #lineNumberdestStr="192.168.1.3 192.168.1.4"sed ""$A"s/Allow from .*/Allow from $destStr/" -i test2 and it prints the string destStr (not the value) when I use the escape character \ : sed ""$A"s/Allow from .*/Allow from \$destStr/" -i test2 using echo instead of sed : 25s/Allow from .*/Allow from 192.168.1.3192.168.1.4/ -i test2 | As the final edits revealed, the problem is unrelated to the dollar sign, but is caused by the content of deststr , which is not 192.168.1.3 192.168.1.4 but rather two lines, one containing only 192.168.1.3 and the other containing only 192.168.1.4 , both lines bveing terminated with a newline character. That is, the actual command after variable replacement is: sed "25s/Allow from .*/Allow from 192.168.1.3192.168.1.4/" -i test2 Now sed interprets its command line by line, and thus the first command it tries to interpret is: 25s/Allow from .*/Allow from 192.168.1.3 which clearly is an unterminated s command, and thus reported by sed as such. Now the solution you found, using echo , works because echo $var calls echo with two arguments (because the whitespace is not quoted, it is interpreted as argument delimiter), the first one being 192.168.1.3 and the second one being 192.168.1.4 ; both are forms that are not interpreted further by the shell. Now echo just outputs its (non-option) arguments separated by a space, therefore you now get as command line: sed "25s/Allow from .*/Allow from 192.168.1.3 192.168.1.4/" -i test2 as intended. Note however that for command substitution, instead of backticks you should use $() whereever possible, since it's too easy to get backticks wrong. Therefore the follwing does what you want: sed "$A s/Allow from .*/Allow from $(echo $destStr)/" -i test2 Note that I also took advantage of the fact that sed allows a space between address and command, to simplify the quoting. In situations where such an extra space is not possible, you can also use the following syntax: sed "${A}s/Allow from .*/Allow from $(echo $destStr)/" -i test2 Also note that this relies on the fact that the non-space characters in destStr are interpreted neither by the shell, nor by sed if occurring in the replacement string. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148967",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45764/"
]
} |
148,973 | I have mounted a filesystem at a particular directory and I replace a file present in the filesystem.I now unmount the filesystem. Is there any possibility of Operating system accessing the replaced file present in the filesystem? | As the final edits revealed, the problem is unrelated to the dollar sign, but is caused by the content of deststr , which is not 192.168.1.3 192.168.1.4 but rather two lines, one containing only 192.168.1.3 and the other containing only 192.168.1.4 , both lines bveing terminated with a newline character. That is, the actual command after variable replacement is: sed "25s/Allow from .*/Allow from 192.168.1.3192.168.1.4/" -i test2 Now sed interprets its command line by line, and thus the first command it tries to interpret is: 25s/Allow from .*/Allow from 192.168.1.3 which clearly is an unterminated s command, and thus reported by sed as such. Now the solution you found, using echo , works because echo $var calls echo with two arguments (because the whitespace is not quoted, it is interpreted as argument delimiter), the first one being 192.168.1.3 and the second one being 192.168.1.4 ; both are forms that are not interpreted further by the shell. Now echo just outputs its (non-option) arguments separated by a space, therefore you now get as command line: sed "25s/Allow from .*/Allow from 192.168.1.3 192.168.1.4/" -i test2 as intended. Note however that for command substitution, instead of backticks you should use $() whereever possible, since it's too easy to get backticks wrong. Therefore the follwing does what you want: sed "$A s/Allow from .*/Allow from $(echo $destStr)/" -i test2 Note that I also took advantage of the fact that sed allows a space between address and command, to simplify the quoting. In situations where such an extra space is not possible, you can also use the following syntax: sed "${A}s/Allow from .*/Allow from $(echo $destStr)/" -i test2 Also note that this relies on the fact that the non-space characters in destStr are interpreted neither by the shell, nor by sed if occurring in the replacement string. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/148973",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80063/"
]
} |
148,975 | I am confused how md5sum --check is supposed to work: $ man md5sum-c, --check read MD5 sums from the FILEs and check them I have a file, I can pipe it to md5sum : $ cat file | md5sum44693b9ef883e231cd9f90f737acd58f - When I want to check the integrity of the file tomorrow, how can I check if themd5sum is still 44693b9ef883e231cd9f90f737acd58f ? Note cat file might be a stream. So I want to use the pipe as in my example, not md5sum file . | You do this: cat file | md5sum > sumfile And the next day you can do this: cat file | md5sum --check sumfile Which prints: -: OK if everything is alright. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/148975",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
148,985 | I have been asked to write a shell script To check if a URL is UP/Working or not for my Project. I tried to find some hints over the internet, but whatever I got is about checking whether the URL exists or not. I first tried with wget . wget -S --spider https://genesis-dev.webbank.ssmb.com:21589/gop-ui/app.jsp 2>&1 | awk '/^ /'if [ $? -ne 0 ]then echo "Server is UP"elseecho "Server is down"fi My next attempt was with curl . curl -ivs https://genesis-dev.webbank.ssmb.com:21589/opconsole-sit/opconsole.html#if [ $? -ne 0 ]then echo "Server is UP"elseecho "Server is down"fi But, both are checking existence of the URL not the response. | curl -Is http://www.yourURL.com | head -1 You can try this command to check any URL.Status code 200 OK means that the request has succeeded and the URL is reachable. You can also test URL availability and get the response code using telnet command telnet www.yourURL.com 80 80 is the port number. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/148985",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80080/"
]
} |
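Building on the curl idea above, a hedged sketch of a script that maps the HTTP status code onto an UP/down message; the URL is taken from the question and -k is only needed if the HTTPS certificate is self-signed:
#!/bin/sh
url='https://genesis-dev.webbank.ssmb.com:21589/gop-ui/app.jsp'
code=$(curl -k -s -o /dev/null -w '%{http_code}' --max-time 30 "$url")
if [ "$code" -ge 200 ] && [ "$code" -lt 400 ]; then
    echo "Server is UP (HTTP $code)"
else
    echo "Server is down (HTTP $code)"
fi
A completely unreachable server makes curl print 000 as the status code, which the test above also treats as down.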
148,990 | Ok, so I've been searching the web for solutions to this problem with no answers seeming to work for me. Hopefully someone can help me. I'm only trying to configure the OpenVPN Client. I'm running CrunchBang Linux 3.2.0-4-amd64 Debian 3.2.60-1+deb7u1 x86_64 GNU/Linux and I just switched over to using systemd . The changeover went smooth enough but now I can't get my OpenVPN client to come up using systemd I've tried following these configuration tutorials, but nothing works. http://fedoraproject.org/wiki/Openvpn http://d.stavrovski.net/blog/how-to-install-and-set-up-openvpn-in-debian-7-wheezy And looked at a bunch of other different guides. I can bring up the tunnel from the command line with openvpn /etc/openvpn/vpn.conf . So I know the config file is good, it was working with sysvinit just fine so I'm not surprised. I then attempt to just do a status with systemctl status [email protected] resulting in: $ sudo systemctl status [email protected] [email protected]: error (Reason: No such file or directory)Active: inactive (dead) I realized that I need to do some setup for services. I want to be prompted for a password so I followed this guide to create an [email protected] in /etc/systemd/system/ . But restarting the OpenVPN service still doesn't prompt for a password. $ sudo service openvpn restart[ ok ] Restarting openvpn (via systemctl): openvpn.service. The Fedora tutorials go through the steps of creating symbolic links, but don't create any of the .service files in the walk-throughs. What piece am I missing? Do I need to create an [email protected]? If so, where exactly do I place it? I feel like it shouldn't be this difficult, but I can't seem to find any solution that works for me. I'm happy to provide any more information that's needed. Solution -rw-r--r-- 1 root root 319 Aug 7 10:42 [email protected][Unit]Description=OpenVPN connection to %iAfter=network.target[Service]Type=forkingExecStart=/usr/sbin/openvpn --daemon ovpn-%i --status /run/openvpn/%i.status 10 --cd /etc/openvpn --config /etc/openvpn/%i.confExecReload=/bin/kill -HUP $MAINPIDWorkingDirectory=/etc/openvpn[Install][email protected] (END) Symlink: lrwxrwxrwx 1 root root 36 Aug 7 10:47 [email protected] -> /lib/systemd/system/[email protected] Prompt For Password Everything is working now, except for being prompted for a password to connect. I've attempted this solution . I tweaked the file from above just a bit, and added an Expect script like in the example. Working like a charm! My files are below. Modified lines from the above /lib/systemd/system/[email protected] ExecStart=/usr/sbin/openvpn --daemon ovpn-%i --status /run/openvpn/%i.status 10 --cd /etc/openvpn --management localhost 5559 --management-query-passwords --management-forget-disconnect --config /etc/openvpn/%i.confExecStartPost=/usr/bin/expect /lib/systemd/system/openvpn_pw.exp Expect script /lib/systemd/system/openvpn_pw.exp . Make sure to do the following: chmod +x on the script. Have telnet installed Code of the expect script: #!/usr/bin/expectset pass [exec /bin/systemd-ask-password "Please insert Private Key password: "]spawn telnet 127.0.0.1 5559expect "Enter Private Key Password:"send "password 'Private Key' $pass\r"expect "SUCCESS: 'Private Key' password entered, but not yet verified"send "exit\r"expect eof It should be noted that the above solution does log your password entered in plaintext in the following logs in /var/log/syslog and /var/log/daemon.log | I think the Debian OpenVPN setup with systemd is currently a tad bit broken. 
To get it to work on my machines I had to: Create /etc/systemd/system/openvpn@.service.d (the directory), and place in it a new file with this: [Unit] Requires=networking.service After=networking.service I called my file local-after-ifup.conf . It needs to end with .conf . (This is the bit that's currently a tad bit broken.) Create a file in /etc/tmpfiles.d (I called mine local-openvpn.conf ) with the contents: # Type Path Mode UID GID Age Argument d /run/openvpn 0755 root root - - This is Debian bug 741938 (fixed in 2.3.3-1). Create a symlink into multi-user.target.wants (easiest way is systemctl enable openvpn@CONF_NAME.service ) E.g., if you have /etc/openvpn/foo.conf , you'd use openvpn@foo.service . If you also have the SysV init script showing up in systemd, disable it. This is Debian bug 700888 (fixed in 2.3.3-1). NOTE: 2.3.3-1 or later is not yet in testing , though it is in unstable. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/148990",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79977/"
]
} |
149,017 | I tried tailing two files using the option: tail -0f file1.log -0f file2.log In Linux I see an error "tail : can process only one file at a time". In AIX I see the error as "Invalid options". This works fine when I use: tail -f file1 -f file 2 in Linux but not in AIX. I want to be able to tail multiple files using -0f or -f in AIX/Linux multitail is not recognized in either of these OS. | What about: tail -f file1 & tail -f file2 Or prefixing each line with the name of the file: tail -f file1 | sed 's/^/file1: /' &tail -f file2 | sed 's/^/file2: /' To follow all the files whose name match a pattern, you could implement the tail -f (which reads from the file every second continuously) with a zsh script like: #! /bin/zsh -zmodload zsh/statzmodload zsh/zselectzmodload zsh/systemset -o extendedglobtypeset -A trackedtypeset -F SECONDS=0pattern=${1?}; shiftdrain() { while sysread -s 65536 -i $1 -o 1; do continue done}for ((t = 1; ; t++)); do typeset -A still_there still_there=() for file in $^@/$~pattern(#q-.NoN); do stat -H stat -- $file || continue inode=$stat[device]:$stat[inode] if (($+tracked[$inode])) || { exec {fd}< $file && tracked[$inode]=$fd; } then still_there[$inode]= fi done for inode fd in ${(kv)tracked}; do drain $fd if ! (($+still_there[$inode])); then exec {fd}<&- unset "tracked[$inode]" fi done ((t <= SECONDS)) || zselect -t $((((t - SECONDS) * 100) | 0))done Then for instance, to follow all the text files in the current directory recursively: that-script '**/*.txt' . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/149017",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77967/"
]
} |
149,029 | There's an issue with Ubuntu that hasn't been fixed yet, where the PC freezes or gets really slow whenever it is copying to an USB stick (see Why is my PC freezing while I'm copying a file to a pendrive? , http://lwn.net/Articles/572911/ and https://askubuntu.com/q/508108/234374 ). A workaround is to execute the following commands as root (see here for an explanation) as root: echo $((16*1024*1024)) > /proc/sys/vm/dirty_background_bytesecho $((48*1024*1024)) > /proc/sys/vm/dirty_bytes How do I revert these changes? When I restart my PC, will it get rolled back to default values? | These are sysctl parameters. You can set them either by writing to /proc/sys/ CATEGORY / ENTRY or by calling the sysctl command with the argument CATEGORY . ENTRY = VALUE . These settings affect the running kernel, they are not persistent. If you want to make these settings persistent, you need to set them at boot time. On Ubuntu, create a file in the directory /etc/sysctl.d called becko-vm-dirty.conf containing # Shrink the disk buffers to a more reasonable size. See http://lwn.net/Articles/572911/vm.dirty_background_bytes = 16777216vm.dirty_bytes = 50331648 To revert the changes, write the old value back. There is no “restore defaults” command. Note that these parameters are a bit peculiar: there are also parameters called vm.dirty_ratio and vm.dirty_background_ratio , which control the same setting but express the size as a percentage of total memory instead of a number of bytes. For each of the two settings, whichever of ratio or bytes was set last takes precedence. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/149029",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56755/"
]
} |
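To revert by hand, you can inspect the current values and switch back to the ratio-based settings; 20 and 10 are commonly cited kernel defaults, but check your own kernel's documentation rather than trusting these exact numbers:
sysctl vm.dirty_bytes vm.dirty_background_bytes vm.dirty_ratio vm.dirty_background_ratio
sudo sysctl -w vm.dirty_ratio=20 vm.dirty_background_ratio=10
Because whichever of the bytes/ratio pair was set last wins, writing the ratio values effectively cancels the earlier dirty_bytes settings; a plain reboot achieves the same thing as long as nothing persists them in /etc/sysctl.d.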
149,033 | If a file tells the OS its file format, how does the OS choose which application to open it by default? In Windows, is the association stored in the registry table? How does Linux choose which application to open a file? I used to use Nautilus a lot, but now I change to terminal. Is it true that in terminal, we always have to explicitly specify which application to open a file? Does the settings of which application to open a file of a certain format by default belong to the file manager (e.g. Nautilus), and it is not an issue when we are living in terminals? | There may be different mechanisms to handle these default settings. However, other answers tend to focus on complete desktop environments, each of them with its own mechanism. Yet, these are not always installed on a system (I use OpenBox a lot), and in this case, tools such as xdg-open may be used. Quoting the Arch Wiki : xdg-open is a desktop-independent tool for configuring the default applications of a user. Many applications invoke the xdg-open command internally. At this moment, I am using Ubuntu (12.04) and xdg-open is available. However, when you use a complete desktop environment such as GNOME, xdg-open acts as a simple forwarder, and relays the file requests to your DE, which is then free to handle it as it wants (see other answers for GNOME and Nautilus, for instance). Inside a desktop environment (e.g. GNOME, KDE, or Xfce), xdg-open simply passes the arguments to that desktop environment's file-opener application (gvfs-open, kde-open, or exo-open, respectively), which means that the associations are left up to the desktop environment. ... which brings you back to the other answers in that case. Still, since this is Unix & Linux, and not Ask Ubuntu: When no desktop environment is detected (for example when one runs a standalone window manager, e.g. Openbox), xdg-open will use its own configuration files. All in all: |-- no desktop env. > handle directly.User Request > xdg-open > --| |-- desktop env. > pass information to the DE. If the first case, you'll need to configure xdg-open directly , using the xdg-mime command (which will also allow you to see which application is supposed to handle which file). In the second case... |-- GNOME? > gvfs-open handles the request. |Info. from xdg-open > --|-- KDE? > kde-open handles the request. | |-- XFCE? > exo-open handles the request. ... you'll need to configure the file-opener associated with your desktop environment. In some cases, configuration made through xdg-mime may be redirected to the proper configuration tool in your environment. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/149033",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
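A short sketch of the xdg-mime commands mentioned above, usable outside any desktop environment; report.pdf and evince.desktop are only example names, and the exact .desktop file names vary between distributions:
xdg-mime query filetype report.pdf                 # which MIME type the file is detected as
xdg-mime query default application/pdf             # which application currently handles that type
xdg-mime default evince.desktop application/pdf    # change the association
xdg-open report.pdf                                # test it
The per-user associations typically end up in ~/.config/mimeapps.list (older xdg-utils versions used a file under ~/.local/share/applications/), which you can also edit directly.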
149,045 | Does "shebang" mean "bang she"? Why not "hebang" as "bang he"? | Another interesting name derivation from here . Among UNIX shell (user interface) users, a shebang is a term for the "#!" characters that must begin the first line of a script. In musical notation, a "#" is called a sharp and an exclamation point - "!" - is sometimes referred to as a bang. Thus, shebang becomes a shortening of sharp-bang | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/149045",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
149,058 | I seem to be unable to use a character class for an awk regular expression, almost exactly as described here : user@host:~$ awk -W versionmawk 1.3.3 Nov 1996, Copyright (C) Michael D. Brennancompiled limits:max NF 32767sprintf buffer 2040user@host:~$ echo "host.company.com has address 192.168.22.82" |awk '/^[a-zA-Z0-9.-]+ has address/ { print $4 }'192.168.22.82user@host:~$ echo "host.company.com has address 192.168.22.82" |awk '/^[[:alnum:].-]+ has address/ { print $4 }'user@host:~$ Does anyone see why the second command fails to find the address field? | It's a bug in mawk 1.3.3 and was reported here . You can upgrade to mawk 1.3.4 or use patch to fix the bug. $ mawk -W versionmawk 1.3.4 20130219Copyright 2013, Thomas E. DickeyCopyright 1996, Michael D. Brennaninternal regexcompiled limits:max NF 32767sprintf buffer 2040$ echo "host.company.com has address 192.168.22.82" | mawk '/^[[:alnum:].-]+ has address/ { print $4 }'192.168.22.82 mawk uses extended regular expressions as with egrep , so it must support POSIX characters classes. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/149058",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64417/"
]
} |
149,066 | Can anyone please tell me what X Window System is and what it is used for? | You may be confused, and this is not your fault, because Linux can have 2 meanings. Linux is a kernel: This kernel is used in many systems, including android and the systems outlined in 2. Linux also often confusingly is used to refer to systems like Debian, Ubuntu, Redhat, CentOs, Suse, and many more. These systems are better described as Gnu+Linux, and in the desktop case X11+Gnu+Linux. X11 is the correct name for the system that you are asking about. X11 is an architecture independent, network transparent, policy free, windowing system. Not part of the OS The X11 server runs as a user process. Other processes also run, window manager (to decorate windows with frames and title bars, and do the moving and resizing), taskbar, and other. It has been said “MS-Windows is a windowing system with an OS stuck on as an after thought, and Unix/Linux is an OS with a windowing system stuck on as an after thought.” Architecture independent X11 is used on most Unixes: Gnu (including Gnu+Linux), Bsd, Solaris, Hp-Ux, etc. it can also be used on many other systems: Vms, MS-Windows, MacOs, AmigaDos, and may more. Network transparent You can open windows on other machines, if you have the correct authority. And can run application on a remote (possibly more powerful) machine, and display application locally. This is done on a per application, or per window basis, unlike VNC or remote desktop, that do it a desktop at a time. Policy free X11 has no policy as to what things look like or how things are done. This has allowed it to endure since about 1985, with some extensions such as video, shaped windows and 3d. All the changes of look and feel are done by changing or replacing window managers and other helper apps. You can change window manager without logging out. So it is possible to change from something from 1985 (pre Microsoft's windows) to 1995 (Win95), to 2001 (win XP), to 2014 (Win 7) to (no one in there right mind would run something like win 8), to something better — no reboot, not even a log out. (Though if you did have a 1985 version of X11 you would have to logout to upgrade to one that has 3D etc). see also http://en.wikipedia.org/wiki/X_Window_System | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/149066",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80113/"
]
} |
149,074 | I typed set -x in terminal. Now the terminal keeps printing the last command run on top of my output so the command ~]$echo "this is what I see" returns + echo 'this is what I see'this is what I see There is no man page for set , how do I turn set -x off? | Use set +x . More information: $ type setset is a special shell builtin Since set is a shell builtin, it is documented in the documentation of your shell. Beware that some systems have man pages for shell builtins, but these man pages are only correct if you're using the default shell. On Linux, you may have man pages that present POSIX commands, which will turn up for shell builtins because there's no man page of a standalone utility to shadow them; these man pages are correct for all Bourne-style shells (dash, bash, *ksh, and even zsh) but typically incomplete. See Reading and searching long man pages for tips on searching for a builtin in a long shell man page. In this case, the answer is the same for all Bourne-style shells. If set - LETTER turns on an option, set + LETTER turns it off. Thus, set +x turns off traces. The last trace, for the set +x command itself, is not completely avoidable. You can suppress it with { set +x; } 2>/dev/null , but in some shells there's still a trace for the redirection itself. You can avoid a trace for set +x by not running it and instead letting the (sub)shell exit: if it's ok to run the traced command(s) in a subshell, you can use (set -x; command to trace; other command to trace); command that is not traced . | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/149074",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79435/"
]
} |
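A tiny illustration of the trick described above; the file names are placeholders:
#!/bin/bash
set -x                       # start tracing
cp important.conf backup/    # this command is printed with a leading + before it runs
{ set +x; } 2>/dev/null      # stop tracing without printing the set +x line itself
echo "tracing is off again"  # no longer traced
Run it with bash script.sh and only the cp line appears as a trace in the output.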
149,082 | I am trying to ftp some recording files to a remote server for backup every night. I am very confused regarding shell script. My question / problem is : I want to move the whole folder/directory instead of file to the remote server. Here is current script: HOST='10.113.68.50'USER='sms'PASSWD='Abc123451'LOCALPATH='kmpy/unica/Campaign/partitions/partition1/CiktiDosyalari'FILE=*.smsDIR='SMS/'ftp -n $HOST <<EOFquote USER $USERquote PASS $PASSWDcd $DIRlcd $LOCALPATHput $FILEquitexit;EOF | you can use mput * instead of put to upload all of the files in the directory. Further you can screen files, for example: mput *.jpg will transfer all and only jpg files. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/149082",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80119/"
]
} |
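Adapting the here-document from the question, a sketch of the relevant change; prompt toggles the per-file confirmation that mput may otherwise ask for, and binary is worth adding for non-text files (plain FTP cannot create a whole directory in one step, so this uploads the files inside it):
ftp -n $HOST <<EOF
quote USER $USER
quote PASS $PASSWD
cd $DIR
lcd $LOCALPATH
prompt
binary
mput *.sms
quit
EOF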
149,163 | When you enter an invalid command at a bash prompt you get the message -bash: {command}: command not found What does the - at the very beginning signify? | It means that it is a login shell. From man bash : A login shell is one whose first character of argument zero is a -, or one started with the --login option. (In bash terminology, the "zeroth" argument is the command name which, in your case, was bash .) bash uses this as a signal to do login activities such as executing .bash_profile , etc. One way that the dash may be added automatically is if the shell is started with exec . From the Bash manual : exec [-cl] [-a name] [command [arguments]] [...] If the -l option is supplied, the shell places a dash at the beginning of the zeroth argument passed to command . Example Compare these two attempts to run the command nonexistent . First without -l : $ exec bash$ nonexistentbash: nonexistent: command not found And, second, with: $ exec -l bash$ nonexistent-bash: nonexistent: command not found | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/149163",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40269/"
]
} |
149,169 | I want to encrypt a bunch of strings using openssl. How do I pass plaintext in console to openssl (instead of specifying input file which has plaintext). openssl man page has only these two options related to input/output: -in <file> input file-out <file> output file Here is what I have tried so far: This works fine, openssl aes-256-cbc -a -K 00000000000000000000000000000000 -iv 00000000000000000000000000000000 -in plain.txt -out encrypted.txt If I omit the -out parameter I get encrypted string in console, openssl aes-256-cbc -a -K 00000000000000000000000000000000 -iv 00000000000000000000000000000000 -in plain.txt But If I omit both -in and -out, I get an error - unknown option 'Encrypt ME', openssl aes-256-cbc -a -K 00000000000000000000000000000000 -iv 00000000000000000000000000000000 "Encrypt ME" | Use this: user@host:~$ echo "my string to encrypt" | openssl aes-256-cbc -e -a -K 00000000000000000000000000000000 -iv 00000000000000000000000000000000a7svR6j/uAz4kY9jvWbJaUR/d5QdH5ua/vztLN7u/FE=user@host:~$ echo "a7svR6j/uAz4kY9jvWbJaUR/d5QdH5ua/vztLN7u/FE=" | openssl aes-256-cbc -d -a -K 00000000000000000000000000000000 -iv 00000000000000000000000000000000my string to encrypt Or you could use command substitution: user@host:~$ openssl aes-256-cbc -a -K 00000000000000000000000000000000 -iv \00000000000000000000000000000000 -in <(echo "my string to encrypt") -out encrypted.txt | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/149169",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10472/"
]
} |
149,184 | Having worked with Linux for years, and finding myself with some free time, I decided to revisit some basics. So I re-read the stuff about permissions (without checking source code), and its special cases for folders, and came up with a new (to me at least...) way of thinking about folder permissions (for a specific user/group/others): I imagine a folder as a table with two columns, like so: filename | inode foo | 111 bar | 222 The read permission means you can read (and list) the left column of the table, the write permission corresponds to adding and removing entries to the table, and the execute permission corresponds to being able to translate from file name to inode; i.e. you can access the contents of the folder. I did some experiments, and the results are all consistent with this "worldview" of mine, but one conclusion seems inescapable: that a folder with permissions d-w------- , is totally useless. Elaborating: you can't list its contents, you can't read any files you know exist inside (because you can't translate names into inodes), you can't remove or rename or add files, because again that would imply translation, and you can't even add hardlinks (because, I surmise, that would mean adding a name as well as an inode number, which means you would know both, which in turn, again surmising, violates the purpose of unsetting execution permission). And of course, if there are files inside one such folder, then you can't delete that folder either, because you can't delete its contents. So... I would like to ask two questions: Is this analogy of mine correct, or is it a big blunder? Irrespective of the previous answer, is there any situation where having a folder with permissions as described is appropriate? | Your understanding is pretty much correct. A better way to think of the execute permission is that it allows you to do things with a file or directory name in the directory (other than just reading the name itself). Most of those things involve translating the name to an inode, but it also includes creating new names and removing existing names. Write permission to the directory without execute is therefore pretty useless, since there's nothing you can actually write if you can't access the files within it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/149184",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80177/"
]
} |
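The write-only case is easy to reproduce as a regular user (root bypasses these checks), which also illustrates the reasoning above:
mkdir testdir && chmod 200 testdir   # d-w-------
ls testdir                           # fails: no read bit, the name column is hidden
touch testdir/file                   # fails: no execute bit, names cannot be resolved or created
chmod 300 testdir                    # d-wx------
touch testdir/file                   # now works: write plus execute allows adding entries
So write permission on a directory only becomes usable once execute is granted as well.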
149,187 | I have a file that contains a long list of integers written in ascii separated with newlines, like such: -1752-19345345592-452355 etc... I want to convert this file into a "binary" file containing the same integers, written as actual 4 byte integers. What command-line tool can I use to achieve this? | perl -pe '$_=pack"l",$_' < infile > outfile Uses the local endianness. Use l> instead of l for big-endian, and l< for little-endian. See perldoc -f pack for more info. Note that it's l as in lowercase L (for long integer ), not the 1 digit. $ printf '%s\n' 1234 -2 | perl -pe '$_=pack"l",$_'| od -vtd40000000 1234 -20000010$ printf '%s\n' 1234 -2 | perl -pe '$_=pack"l>",$_'| od -vtx10000000 00 00 04 d2 ff ff ff fe0000010 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/149187",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4819/"
]
} |
149,198 | Hey Arch linux users out there! I am new to arch linux( coming from ubuntu), and have just installed arch linux on my machine. After rebooting I wanted to install some packages using pacman -S , but I saw that my machine wasn't able to fetch the files from the servers, I interrupted the installation ustn <Ctrl-C> . So I tried to update my repositorys using pacman -Sy , this didn't work because pacman wasn't able to fetch the packages. So I tried to ping a few IPs, wich didn't work because network is unreachable After that I tried to set my eth0 up using ip link set eth0 up it returned me this error: cannot find device "eth0" the command ip link showed that lo (of course) and emp1s0 (what the heck is that?) are up. I tried to set emp1s0 down using ip link set emp1s0 down wich returned cannot find device "emp1s0" I also tried to load the kernel module tg3 manually and to start dhcpd manually, both didn't work. I actually do't know how to go on. | Have you enabled and started systemd-networkd.service ? First create a file /etc/systemd/network/mynet.network containing (if you use DHCP): [Match]Name=device_name[Network]DHCP=yes Then issue these commands: systemctl enable systemd-networkd.servicesystemctl start systemd-networkd.service If that doesn't work (try rebooting aterwards in case you've been trying many things since last boot), boot from the install cd again and see what modules are loaded and try to find differences between the running environment when booting from harddisk and from cd. The ArchLinux installation guide is a bit tricky to follow for first-time users. Some crucial information are not on the "first" page, such as network configuration and boot loader. In that respect, I find the gentoo handbook a lot more readable. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/149198",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80190/"
]
} |
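If the exact interface name is unclear (the question mentions both eth0 and what is probably enp1s0), you can check it first and match it loosely; the glob in Name= is just one option:
ip link                      # note the real name, e.g. enp1s0
Then, in /etc/systemd/network/mynet.network:
[Match]
Name=en*
[Network]
DHCP=yes
After enabling and starting systemd-networkd as above, ip addr should show a DHCP lease on that interface.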
149,203 | I have a shell script named 'teleport.sh' like this: if [ $1="1" ]; then shift mv "$@" ~/lab/Sunelif [ $1="2" ]; then shift mv "$@" ~/lab/Moonelif [ $1="3" ]; then shift mv "$@" ~/lab/Earthfi When I execute: sh teleport.sh 2 testfile This testfile is moved to the ~/lab/Sun directory, which confuses me a lot as I didn't pass 1 or '1' to that script. What's wrong here? | Using spaces fixes your problem. if [ "$1" = 1 ]; then shift mv "$@" ~/lab/Sunelif [ "$1" = 2 ]; then shift mv "$@" ~/lab/Moonelif [ "$1" = 3 ]; then shift mv "$@" ~/lab/Earthfi Though this is neater: #!/bin/bashaction=$1shiftfiles=("$@")case $action in 1) mv -- "${files[@]}" ~/lab/Sun ;; 2) mv -- "${files[@]}" ~/lab/Moon ;; 3) mv -- "${files[@]}" ~/lab/Earth ;;esac | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/149203",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74226/"
]
} |
149,209 | I have a config-file that I keep open in vim, but that sometimes gets changed on disk, without these changes being reflected on the terminal. Can I refresh the content on the screen without closing and re-opening the file? If so, how? | You can use the :edit command, without specifying a file name, to reloadthe current file. If you have made modifications to the file, you can use :edit! to force the reload of the current file (you will lose yourmodifications). The command :edit can be abbreviated by :e . The force-edit can thus be done by :e! | {
"score": 10,
"source": [
"https://unix.stackexchange.com/questions/149209",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77711/"
]
} |
149,223 | I want html number entities like ę and want to convert it to real character. I have emails mostly from linkedin that look like this: chciałabym zapytać, czy rozważa Pan takze udział w nowych projektach w Warszawie ? Obecnie poszukujemy specjalisty javascript/architekta z bardzo dobrą znajomością Angular.js do projektu, który dotyczy systemu, służącego do monitorowania i zarządzania flotą pojazdów. Zespół, do którego poszukujemy I'm using clawsmail, switching to html don't convert it to text, I've try to copy and use xclip -o -sel clip | html2text | less but it didn't convert the entities. Is there a way to have that text using command line tools? The only way I can think of is to use data:text/html,<PASTE THE EMAIL> and open it in a browser, but would prefer the command line. | With Free recode (formerly known as GNU recode ): recode html < file If you don't have recode or HTML::Entities and only need to decode &#x<hex>; entities, you could do it by hand with: perl -Mopen=locale -pe 's/&#x([\da-f]+);/chr hex $1/gie' | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/149223",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1806/"
]
} |
149,226 | I'm running a CentOS server (7.0) and I'd like to login via sshd as a user, not root. So I set PermitRootLogin no in the config file and su - after login. I've received lots of hacking activities and I decided to allow only one user to login via sshd. Since the username is not my real name or any common name, I think it would be good enough. Let's say it's 'hkbjhsqj'. I've tried both ways introduced on nixCraft: AllowUsers in sshd_config or pam_listfile.so in PAM . The only problem to me is that anyone else still has chances to type in passwords and that leaves records in /var/log/secure. I assume these actions consumes my server's resources to run password checking and other stuff. Let's say I try to login with the username 'admin': www$ ssh [email protected]@0.0.0.0's password:Permission denied, please try [email protected]'s password:Permission denied, please try [email protected]'s password:Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password). and in the secure log: Aug 8 08:28:40 www sshd[30497]: pam_unix(sshd:auth): check pass; user unknownAug 8 08:28:40 www sshd[30497]: pam_listfile(sshd:auth): Refused user admin for service sshdAug 8 08:28:43 www sshd[30497]: Failed password for invalid user admin from 192.168.0.1 port 52382 ssh2Aug 8 08:28:47 www sshd[30497]: pam_unix(sshd:auth): check pass; user unknownAug 8 08:28:47 www sshd[30497]: pam_listfile(sshd:auth): Refused user admin for service sshdAug 8 08:28:50 www sshd[30497]: Failed password for invalid user admin from 192.168.0.1 port 52382 ssh2Aug 8 08:28:52 www sshd[30497]: pam_unix(sshd:auth): check pass; user unknownAug 8 08:28:52 www sshd[30497]: pam_listfile(sshd:auth): Refused user admin for service sshdAug 8 08:28:55 www sshd[30497]: Failed password for invalid user admin from 192.168.0.1 port 52382 ssh2Aug 8 08:28:55 www sshd[30497]: Connection closed by 192.168.0.1 [preauth]Aug 8 08:28:55 www sshd[30497]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.0.1 While all this will not happen if I add the IP in /etc/hosts.deny: www$ ssh [email protected]_exchange_identification: Connection closed by remote host and in the secure log: Aug 8 08:35:11 www sshd[30629]: refused connect from 192.168.0.1 (192.168.0.1)Aug 8 08:35:30 www sshd[30638]: refused connect from 192.168.0.1 (192.168.0.1) So my question would be, is there a way I can refuse all irrelevant users' ssh requests from anywhere without password checking, like I put them in the hosts.deny list? But at the same time I do need allow all ssh requests with the username 'hkbjhsqj' from anywhere and check the password then. | I don't think it is possible to do what you are asking. If you could, someone could "brute force" to find valid usernames on your server. I am also pretty sure that the username and the password are sent simultaneously by the client, you could verify this by capturing packets using Wireshark on an unencrypted SSH connection. By "hacking activities" I assume you are talking about brute force attempts at passwords. There are many ways to protect yourself from this, I will explain the most common ways. Disable root login By denying root login with SSH the attacker has to know or guess a valid username. Most automated brute force attacks only try logging in as root. Blocking IPs on authentication failure Daemons like fail2ban and sshguard monitor your log files to detect login failures. You can configure these to block the IP address trying to log in after a number of failed login attempts. 
In your case this is what I would recommend. This reduces log spam and strain on your server, as all packets from this IP would be blocked before they reach the sshd daemon. You could, for example, set fail2ban to block IPs with 3 login failures in the last 5 minutes for 60 minutes. You will in the worst cases see three failed logins in your log every 60 minutes, assuming the attacker does not give up and move on. Public key authentication You can disable password authentication entirely and only allow clients with specific keys. This is often considered to be the most secure solution (assuming the client keeps his key safe and encrypted). To disable password authentication, add your public key to ~/.ssh/authorized_keys on the server and set PasswordAuthentication to no in sshd_config. There are numerous tutorials and tools to assist with this. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/149226",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80201/"
]
} |
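A hedged sketch of the key-only setup mentioned above, using the username from the question (your-server is a placeholder); run the first step from the client and keep a second session open until you have confirmed that key login works:
ssh-copy-id hkbjhsqj@your-server            # install your public key on the server
Then, on the server, in /etc/ssh/sshd_config:
PermitRootLogin no
AllowUsers hkbjhsqj
PasswordAuthentication no
sudo service sshd reload                    # or: systemctl reload sshd
With PasswordAuthentication no, password guesses against any username are rejected before a password is ever checked, which removes most of the log noise the question complains about.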
149,234 | I know this command awk '{for(x=1;$x;++x)print $x}' will print out all columns in a line. wouldn't this ++x change x to 2, and thus print $2 first?As I understood based on this: https://stackoverflow.com/questions/1812990/incrementing-in-c-when-to-use-x-or-x And what does the $x do in for(x=1;$x;++x) ? | No. The for(i=0;i<10;i++) is a classic programming construct (see Traditional for loops ) that is present in many languages. It can be broken down to: start-expression; end-condition; end-of-iteration-expression In other words, what I wrote above means "initialize i to 0 and, while i is less than 10, do something and then increment i by 1. Yes the syntax is confusing but that's just the way it is. The end-of-iteration-expression ( ++x in this case) is executed once at the end of each loop. It is equivalent to writing: while(i<10){print i; ++i} As for the $x , I believe that just checks that a field of that number exists and that its contents do not evaluate to false (as explained in Mathias's answer below ). $N will return true if the field number N exists and is not a type of false . For example: $ echo "a b c d" | awk '($4){print "yes"}'yes$ echo "a b c d" | awk '($14){print "yes"}' ## prints nothing, no $14$ echo "a b c 0" | awk '($4){print "yes"}' ## prints nothing, $4 is 0 As you can see above, the first command prints yes because there is a $4 . Since there is no $14 , the second prints nothing. So, to get back to your original example: awk '{for(x=1;$x;x++)print $x}' ___ __ ___ | | | | | |-----> increment x by 1 at the end of each loop. | |--------> run the loop as long as there is a field number x |------------> initialize x to 1 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/149234",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74226/"
]
} |
149,277 | A long while ago, I created a ~/.Xmodmap reversing the 4 and 5 to create "natural scrolling": pointer = 1 2 3 5 4 7 6 8 9 10 11 12 I source .Xmodmap in .xinitrc in the standard fashion ( xmodmap $HOME/.Xmodmap & ).This has worked for years without issues. I just recently installed an application called cockatrice . I have no other issues with the program, except that, for some reason, when I scroll inside the program, my scrolling direction is not "natural" (i.e., it is as if my .Xmodmap is not being obeyed by only this application). At first, I thought it was an issue with my Qt input module, but I realized that I have correctly declared QT_IM_MODULE to xim in my .xinitrc , and I've never had this issue with any other application. Is this an application-specific issue, or is this Qt-specific? What should I try to further troubleshoot this (or solve it)? Attempting to set this universally through xinput fails: $ xinput list ⎡ Virtual core pointer id=2 [master pointer (3)]⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]⎜ ↳ SynPS/2 Synaptics TouchPad id=12 [slave pointer (2)]⎣ Virtual core keyboard id=3 [master keyboard (2)]# unneeded information regarding my keyboard$ xinput set-prop 2 "Evdev Scrolling Distance" -1 -1 -1property 'Evdev Scrolling Distance' doesn't exist, you need to specify its type and format | It seems to be Qt specific (from trying in Qt Assistant).I think it is because Qt uses only the scrolling distance for its wheel events. Instead of using xmodmap here, you can set your scrolling distance to negative values. You can set it through a file in /etc/X11/xorg.conf.d/ , for a mouse managed by evdev : Section "InputClass" Identifier "Reverse Scrolling" MatchIsPointer "on" Option "VertScrollDelta" "-1" Option "HorizScrollDelta" "-1" Option "DialDelta" "-1"EndSection Or you can try with xinput first : xinput set-prop <your device id> "Evdev Scrolling Distance" -1 -1 -1 (To get the device id : xinput list ) The properties are listed with the actual device. Here xinput list-props 12 should list the properties of the touchpad. As it is a synaptics touchpad, from this man page the property should be : xinput set-prop <touchpad id> "Synaptics Scrolling Distance" -1 -1 (Only two values, vertical and horizontal edges.) For the rule in the configuration file, it should work with MatchIsTouchpad : Section "InputClass" Identifier "Natural Scrolling" MatchIsTouchpad "on" Option "VertScrollDelta" "-1" Option "HorizScrollDelta" "-1"EndSection | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/149277",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65525/"
]
} |
149,286 | I'm curious - is there a difference between ls -l and ls -lllllllllllllllllllllllllllll ? The output appears to be the same and I'm confused on why ls allows duplicate switches. Is this a standard practice among most commands? | Short answer : Because it's programmed to ignore multiple uses of a flag. Long answer: As you can see in the source code of ls , there is a part with the function getopt_long() and a huge switch case: 1648 int c = getopt_long (argc, argv,1649 "abcdfghiklmnopqrstuvw:xABCDFGHI:LNQRST:UXZ1",1650 long_options, &oi); ....1654 switch (c)1655 { ....1707 case 'l':1708 format = long_format;1709 break; ....1964 } The function getopt_long() reads all paramters given to the program. In case if -l the variable format is set. So when you type multiple -lllllllll that variable is set multiple times, but that does not change anything. Well, it changes one thing. This huge switch case statement must run through multiple times, because of multiple -l flags. ls needs longer to complete with multiple -l flags. But this time is not worth mentioning. =) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/149286",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1822/"
]
} |
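
A small illustration of the same idempotent-flag behaviour using the shell's getopts builtin. This is not ls's code, only a sketch showing how a repeated flag simply re-runs the same case branch and re-sets the same variable.

# The shell's getopts builtin behaves the same way: a repeated flag just
# re-runs the same case branch, so "-lllll" ends up equivalent to "-l".
demo() {
    long_format=0        # plays the role of ls's "format" variable
    OPTIND=1             # reset between calls
    while getopts "la" opt; do
        case "$opt" in
            l) long_format=1 ;;   # setting it five times changes nothing
            a) : ;;
        esac
    done
    echo "long_format=$long_format"
}

demo -l        # prints: long_format=1
demo -lllll    # prints: long_format=1 (just a few more loop iterations)
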
149,293 | I am trying to set up a VPN (using OpenVPN) such that all of the traffic, and only the traffic, to/from specific processes goes through the VPN; other processes should continue to use the physical device directly. It is my understanding that the way to do this in Linux is with network namespaces. If I use OpenVPN normally (i.e. funnelling all traffic from the client through the VPN), it works fine. Specifically, I start OpenVPN like this:

# openvpn --config destination.ovpn --auth-user-pass credentials.txt

(A redacted version of destination.ovpn is at the end of this question.) I'm stuck on the next step, writing scripts that restrict the tunnel device to namespaces. I have tried:

Putting the tunnel device directly in the namespace with

# ip netns add tns0
# ip link set dev tun0 netns tns0
# ip netns exec tns0 ( ... commands to bring up tun0 as usual ... )

These commands execute successfully, but traffic generated inside the namespace (e.g. with ip netns exec tns0 traceroute -n 8.8.8.8 ) falls into a black hole.

On the assumption that "you can [still] only assign virtual Ethernet (veth) interfaces to a network namespace" (which, if true, takes this year's award for most ridiculously unnecessary API restriction), creating a veth pair and a bridge, and putting one end of the veth pair in the namespace. This doesn't even get as far as dropping traffic on the floor: it won't let me put the tunnel into the bridge! [EDIT: This appears to be because only tap devices can be put into bridges. Unlike the inability to put arbitrary devices into a network namespace, that actually makes sense, since bridges are an Ethernet-layer concept; unfortunately, my VPN provider does not support OpenVPN in tap mode, so I need a workaround.]

# ip addr add dev tun0 local 0.0.0.0/0 scope link
# ip link set tun0 up
# ip link add name teo0 type veth peer name tei0
# ip link set teo0 up
# brctl addbr tbr0
# brctl addif tbr0 teo0
# brctl addif tbr0 tun0
can't add tun0 to bridge tbr0: Invalid argument

The scripts at the end of this question are for the veth approach. The scripts for the direct approach may be found in the edit history. Variables in the scripts that appear to be used without setting them first are set in the environment by the openvpn program -- yes, it's sloppy and uses lowercase names. Please offer specific advice on how to get this to work. I'm painfully aware that I'm programming by cargo cult here -- has anyone written comprehensive documentation for this stuff? I can't find any -- so a general code review of the scripts is also appreciated. In case it matters:

# uname -srvm
Linux 3.14.5-x86_64-linode42 #1 SMP Thu Jun 5 15:22:13 EDT 2014 x86_64
# openvpn --version | head -1
OpenVPN 2.3.2 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [EPOLL] [PKCS11] [eurephia] [MH] [IPv6] built on Mar 17 2014
# ip -V
ip utility, iproute2-ss140804
# brctl --version
bridge-utils, 1.5

The kernel was built by my virtual hosting provider ( Linode ) and, although compiled with CONFIG_MODULES=y , has no actual modules -- the only CONFIG_* variable set to m according to /proc/config.gz was CONFIG_XEN_TMEM , and I do not actually have that module (the kernel is stored outside my filesystem; /lib/modules is empty, and /proc/modules indicates that it was not magically loaded somehow). Excerpts from /proc/config.gz provided on request, but I don't want to paste the entire thing here.

netns-up.sh

#! /bin/sh

mask2cidr () {
    local nbits dec
    nbits=0
    for dec in $(echo $1 | sed 's/\./ /g') ; do
        case "$dec" in
            (255) nbits=$(($nbits + 8)) ;;
            (254) nbits=$(($nbits + 7)) ;;
            (252) nbits=$(($nbits + 6)) ;;
            (248) nbits=$(($nbits + 5)) ;;
            (240) nbits=$(($nbits + 4)) ;;
            (224) nbits=$(($nbits + 3)) ;;
            (192) nbits=$(($nbits + 2)) ;;
            (128) nbits=$(($nbits + 1)) ;;
            (0) ;;
            (*) echo "Error: $dec is not a valid netmask component" >&2
                exit 1 ;;
        esac
    done
    echo "$nbits"
}

mask2network () {
    local host mask h m result
    host="$1."
    mask="$2."
    result=""
    while [ -n "$host" ]; do
        h="${host%%.*}"
        m="${mask%%.*}"
        host="${host#*.}"
        mask="${mask#*.}"
        result="$result.$(($h & $m))"
    done
    echo "${result#.}"
}

maybe_config_dns () {
    local n option servers
    n=1
    servers=""
    while [ $n -lt 100 ]; do
        eval option="\$foreign_option_$n"
        [ -n "$option" ] || break
        case "$option" in
            (*DNS*)
                set -- $option
                servers="$servers
nameserver $3"
                ;;
            (*) ;;
        esac
        n=$(($n + 1))
    done
    if [ -n "$servers" ]; then
        cat > /etc/netns/$tun_netns/resolv.conf <<EOF
# name servers for $tun_netns
$servers
EOF
    fi
}

config_inside_netns () {
    local ifconfig_cidr ifconfig_network
    ifconfig_cidr=$(mask2cidr $ifconfig_netmask)
    ifconfig_network=$(mask2network $ifconfig_local $ifconfig_netmask)

    ip link set dev lo up
    ip addr add dev $tun_vethI \
        local $ifconfig_local/$ifconfig_cidr \
        broadcast $ifconfig_broadcast \
        scope link
    ip route add default via $route_vpn_gateway dev $tun_vethI
    ip link set dev $tun_vethI mtu $tun_mtu up
}

PATH=/sbin:/bin:/usr/sbin:/usr/bin
export PATH
set -ex

# For no good reason, we can't just put the tunnel device in the
# subsidiary namespace; we have to create a "virtual Ethernet"
# device pair, put one of its ends in the subsidiary namespace,
# and put the other end in a "bridge" with the tunnel device.

tun_tundv=$dev
tun_netns=tns${dev#tun}
tun_bridg=tbr${dev#tun}
tun_vethI=tei${dev#tun}
tun_vethO=teo${dev#tun}

case "$tun_netns" in
    (tns[0-9] | tns[0-9][0-9] | tns[0-9][0-9][0-9]) ;;
    (*) exit 1;;
esac

if [ $# -eq 1 ] && [ $1 = "INSIDE_NETNS" ]; then
    [ $(ip netns identify $$) = $tun_netns ] || exit 1
    config_inside_netns
else
    trap "rm -rf /etc/netns/$tun_netns ||:
          ip netns del $tun_netns ||:
          ip link del $tun_vethO ||:
          ip link set $tun_tundv down ||:
          brctl delbr $tun_bridg ||:
         " 0

    mkdir /etc/netns/$tun_netns
    maybe_config_dns

    ip addr add dev $tun_tundv local 0.0.0.0/0 scope link
    ip link set $tun_tundv mtu $tun_mtu up

    ip link add name $tun_vethO type veth peer name $tun_vethI
    ip link set $tun_vethO mtu $tun_mtu up

    brctl addbr $tun_bridg
    brctl setfd $tun_bridg 0
    #brctl sethello $tun_bridg 0
    brctl stp $tun_bridg off

    brctl addif $tun_bridg $tun_vethO
    brctl addif $tun_bridg $tun_tundv
    ip link set $tun_bridg up

    ip netns add $tun_netns
    ip link set dev $tun_vethI netns $tun_netns
    ip netns exec $tun_netns $0 INSIDE_NETNS
    trap "" 0
fi

netns-down.sh

#! /bin/sh

PATH=/sbin:/bin:/usr/sbin:/usr/bin
export PATH
set -ex

tun_netns=tns${dev#tun}
tun_bridg=tbr${dev#tun}
case "$tun_netns" in
    (tns[0-9] | tns[0-9][0-9] | tns[0-9][0-9][0-9]) ;;
    (*) exit 1;;
esac

[ -d /etc/netns/$tun_netns ] || exit 1

pids=$(ip netns pids $tun_netns)
if [ -n "$pids" ]; then
    kill $pids
    sleep 5
    pids=$(ip netns pids $tun_netns)
    if [ -n "$pids" ]; then
        kill -9 $pids
    fi
fi

# this automatically cleans up the routes and the veth device pair
ip netns delete "$tun_netns"
rm -rf /etc/netns/$tun_netns

# the bridge and the tunnel device must be torn down separately
ip link set $dev down
brctl delbr $tun_bridg

destination.ovpn

client
auth-user-pass
ping 5
dev tun
resolv-retry infinite
nobind
persist-key
persist-tun
ns-cert-type server
verb 3
route-metric 1
proto tcp
ping-exit 90
remote [REDACTED]
<ca>
[REDACTED]
</ca>
<cert>
[REDACTED]
</cert>
<key>
[REDACTED]
</key> | It turns out that you can put a tunnel interface into a network namespace. My entire problem was down to a mistake in bringing up the interface:

ip addr add dev $tun_tundv \
    local $ifconfig_local/$ifconfig_cidr \
    broadcast $ifconfig_broadcast \
    scope link

The problem is "scope link", which I misunderstood as only affecting routing. It causes the kernel to set the source address of all packets sent into the tunnel to 0.0.0.0 ; presumably the OpenVPN server would then discard them as invalid per RFC 1122; even if it didn't, the destination would obviously be unable to reply. Everything worked correctly in the absence of network namespaces because openvpn's built-in network configuration script did not make this mistake. And without "scope link", my original script works as well. (How did I discover this, you ask? By running strace on the openvpn process, set to hexdump everything it read from the tunnel descriptor, and then manually decoding the packet headers.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/149293",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21167/"
]
} |
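
A stripped-down sketch of the working configuration the answer above arrives at — the tun device moved straight into a namespace and brought up without "scope link". The interface name, namespace name, addresses and MTU here are placeholder assumptions for illustration, not values from the original setup.

#!/bin/sh
# Bring up an OpenVPN tun device inside a namespace -- note: no "scope link".
# tun0/tns0 and the 10.8.0.x addresses are placeholders only.
set -e

ip netns add tns0
ip link set dev tun0 netns tns0

ip netns exec tns0 ip link set dev lo up
ip netns exec tns0 ip addr add dev tun0 local 10.8.0.2/24 broadcast 10.8.0.255
ip netns exec tns0 ip link set dev tun0 mtu 1500 up
ip netns exec tns0 ip route add default via 10.8.0.1 dev tun0

# Only programs started like this use the tunnel:
#   ip netns exec tns0 traceroute -n 8.8.8.8
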
149,319 | In UNIX, when a parent process disappears, I thought that all child processes get init as their new parent. Is this not correct all the time? Are there any exceptions? | Three answers written in 2014, all saying that in Unices and in Linux the process is reparented to process #1 without exception. Three wrong answers. ☺ As the SUS says, quoted in one of the other answers here so I won't quote it again, the parent process of orphaned children is set to an implementation-defined process. Cristian Ciupitu is right to consult the Linux documentation to see what the implementation defines. But he is being misled by that documentation, which is inconsistent and not up-to-date. Two years before these three answers were written, and fast coming up to three years ago at the time of first writing this answer, the Linux kernel changed. The systemd developers added the ability for processes to set themselves up as "subreapers". From Linux 3.4 onwards, processes can issue the prctl() system call with the PR_SET_CHILD_SUBREAPER option, and as a result they, not process #1, will become the parent of any of their orphaned descendant processes. The man page for prctl() is up-to-date, but other man pages have not been brought up to date and made consistent. In version 10.2, FreeBSD gained the same ability, extending its existing procctl() system call with PROC_REAP_ACQUIRE and PROC_REAP_RELEASE options. It adopted this mechanism from DragonFly BSD, which gained it in version 4.2; originally named reapctl(), it was renamed during development to procctl(). So there are exceptions, and fairly prominent ones: On Linux, FreeBSD/PC-BSD, and DragonFly BSD, the parent process of orphaned children is set to the nearest ancestor process of the child that is marked as a subreaper, or process #1 if there is no ancestor subreaper process. Various daemon supervision utilities — including systemd (the one whose developers put this into the Linux kernel in the first place), upstart, and the nosh service-manager — already make use of this. If such a daemon supervisor is not process #1, and it spawns a service such as an interactive login session, and in that session one does the (quite wrongheaded) trick of attempting to "daemonize" by double-fork()ing, then one's process will end up as a child of the daemon supervisor, not of process #1. Expecting to be able to directly spawn daemons from within login sessions is a fundamental mistake, of course. But that's another answer. Further reading:

Jonathan Corbet (2012-03-28). 3.4 Merge window part 2. LWN.
"4. Various core changes". Linux 3.4. Kernel newbies. 2012.
Daemonizing and Upstart. Nekoconeko. 2014-11-12.
Lennart Poettering (2012-03-23). prctl: add PR_{SET,GET}_CHILD_SUBREAPER to allow simple process supervision. linux/kernel/git/torvalds/linux.git.
Matthew Dillon (2014). Add reapctl() system call for managing sub-processes (3) -> procctl(). dragonfly.git.
procctl(). DragonFly BSD Manual pages. § 2.
DragonFly BSD 4.2 Release Notes. 2015-07-29.
Konstantin Belousov (2014-12-01). Process reapers. freebsd-arch mailing list. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/149319",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79988/"
]
} |
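
A quick shell sketch for observing the reparenting described above. On a classic system the orphan's new PPID will be 1; under an ancestor that has marked itself a subreaper with prctl(PR_SET_CHILD_SUBREAPER, 1) — for example a systemd --user manager — it will be the subreaper's PID instead. The temporary file path is an arbitrary choice.

#!/bin/sh
# Orphan a process and see which process adopts it.
# Expected PPID: 1 on a classic system, or the PID of the nearest
# subreaper ancestor on Linux >= 3.4 / FreeBSD >= 10.2 / DragonFly >= 4.2.

pidfile=/tmp/orphan.$$

# The intermediate shell backgrounds a sleeper and exits at once,
# leaving the sleeper without a parent.
sh -c 'sleep 30 & echo $!' > "$pidfile"

sleep 1   # give the kernel a moment to reparent it
orphan=$(cat "$pidfile")
ps -o pid,ppid,comm -p "$orphan"

kill "$orphan" 2>/dev/null
rm -f "$pidfile"
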
149,342 | I am not talking about recovering deleted files, but overwritten files. Namely, by the following methods:

# move
mv new_file old_file
# copy
cp new_file old_file
# edit
vi existing_file
> D
> i new_content
> :x

Is it possible to retrieve anything if any of the above three actions is performed, assuming no special programs are installed on the Linux machine? | The answer is "Probably yes, but it depends on the filesystem type, and timing." None of those three examples will overwrite the physical data blocks of old_file or existing_file, except by chance.

mv new_file old_file . This will unlink old_file. If there are additional hard links to old_file, the blocks will remain unchanged in those remaining links. Otherwise, the blocks will generally (it depends on the filesystem type) be placed on a free list. Then, if the mv requires copying (as opposed to just moving directory entries), new blocks will be allocated as mv writes. These newly-allocated blocks may or may not be the same ones that were just freed. On filesystems like UFS , blocks are allocated, if possible, from the same cylinder group as the directory the file was created in. So there's a chance that unlinking a file from a directory and creating a file in that same directory will re-use (and overwrite) some of the same blocks that were just freed. This is why the standard advice to people who accidentally remove a file is to not write any new data to files in their directory tree (and preferably not to the entire filesystem) until someone can attempt file recovery.

cp new_file old_file will do the following (you can use strace to see the system calls):

open("old_file", O_WRONLY|O_TRUNC) = 4

The O_TRUNC flag will cause all the data blocks to be freed, just like mv did above. And as above, they will generally be added to a free list, and may or may not get reused by the subsequent writes done by the cp command.

vi existing_file . If vi is actually vim , the :x command does the following:

unlink("existing_file~")                  = -1 ENOENT (No such file or directory)
rename("existing_file", "existing_file~") = 0
open("existing_file", O_WRONLY|O_CREAT|O_TRUNC, 0664) = 3

So it doesn't even remove the old data; the data is preserved in a backup file. On FreeBSD, vi does open("existing_file",O_WRONLY|O_CREAT|O_TRUNC, 0664) , which will have the same semantics as cp , above.

You can recover some or all of the data without special programs; all you need is grep and dd , and access to the raw device. For small text files, the single grep command in the answer from @Steven D in the question you linked to is the easiest way:

grep -i -a -B100 -A100 'text in the deleted file' /dev/sda1

But for larger files that may be in multiple non-contiguous blocks, I do this:

grep -a -b "text in the deleted file" /dev/sda1
13813610612:this is some text in the deleted file

which will give you the offset in bytes of the matching line. Follow this with a series of dd commands, starting with

dd if=/dev/sda1 count=1 skip=$(expr 13813610612 / 512)

You'd also want to read some blocks before and after that block. On UFS, file blocks are usually 8KB and are usually allocated fairly contiguously, a single file's blocks being interleaved alternately with 8KB blocks from other files or free space. The tail of a file on UFS is up to 7 1KB fragments, which may or may not be contiguous. Of course, on file systems that compress or encrypt data, recovery might not be this straightforward. There are actually very few utilities in Unix that will overwrite an existing file's data blocks. One that comes to mind is dd conv=notrunc . Another is shred . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/149342",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24509/"
]
} |
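
A hedged sketch that automates the grep -b / dd recipe from the answer above: it converts the byte offset of a match into a sector number and dumps the surrounding region. The device path, search string, window size and output file are placeholder assumptions; run it against an unmounted or read-only filesystem.

#!/bin/sh
# Turn a "grep -b" byte offset into a dd dump of the surrounding sectors.
# dev, pattern, window and the output file are placeholders.
set -e

dev=/dev/sda1
pattern='text in the deleted file'
window=100   # sectors to keep on each side of the hit

offset=$(grep -a -b -o "$pattern" "$dev" | head -n 1 | cut -d: -f1)
if [ -z "$offset" ]; then
    echo "pattern not found on $dev" >&2
    exit 1
fi

sector=$((offset / 512))
start=$((sector - window))
[ "$start" -ge 0 ] || start=0

dd if="$dev" bs=512 skip="$start" count=$((2 * window)) of=recovered.bin
echo "wrote $((2 * window)) sectors starting at sector $start to recovered.bin"
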